Fields:
sentences: sequence
labels: sequence
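The two fields above declare the record schema: each record pairs a `sentences` string array with an index-aligned `labels` array. A minimal sketch for iterating over such a record, assuming the data is stored as JSON (the file name is hypothetical):

```python
import json

# Hypothetical file name; assumes each record is a JSON object with the
# two fields declared above, "sentences" and "labels", both sequences.
with open("record.json") as f:
    record = json.load(f)

# The arrays are index-aligned: labels[i] is the rhetorical tag of sentences[i].
assert len(record["sentences"]) == len(record["labels"])
for sentence, label in zip(record["sentences"], record["labels"]):
    print(f"[{label}] {sentence[:60]}")
```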
[ "The problem of designing NLP solvers for math word problems (MWP) has seen sustained research activity and steady gains in the test accuracy.", "Since existing solvers achieve high performance on the benchmark datasets for elementary level MWPs containing one-unknown arithmetic word problems, such problems are often considered solved with the bulk of research attention moving to more complex MWPs.", "In this paper, we restrict our attention to English MWPs taught in grades four and lower.", "We provide strong evidence that the existing MWP solvers rely on shallow heuristics to achieve high performance on the benchmark datasets.", "To this end, we show that MWP solvers that do not have access to the question asked in the MWP can still solve a large fraction of MWPs.", "Similarly, models that treat MWPs as bag-of-words can also achieve surprisingly high accuracy.", "Further, we introduce a challenge dataset, SVAMP, created by applying carefully chosen variations over examples sampled from existing datasets.", "The best accuracy achieved by state-of-the-art models is substantially lower on SVAMP, thus showing that much remains to be done even for the simplest of the MWPs.", "A Math Word Problem (MWP) consists of a short natural language narrative describing a state of the world and poses a question about some unknown quantities (see Table 1 for some examples).", "MWPs are taught in primary and higher schools.", "The MWP task is a type of semantic parsing task where given an MWP the goal is to generate an expression (more generally, equations), which can then be evaluated to get the answer.", "The task is challenging because a machine needs to extract relevant information from natural language text as well as perform mathematical reasoning to solve it.", "The complexity of MWPs can be measured along multiple axes, e.g., reasoning and linguistic PROBLEM : Text: Jack had 8 pens and Mary had 5 pens.", "Jack gave 3 pens to Mary.", "How many pens does Jack have now?", "Equation: 8 3 = 5 QUESTIONSENSITIVITYVARIATION : Text: Jack had 8 pens and Mary had 5 pens.", "Jack gave 3 pens to Mary.", "How many pens does Mary have now?", "Equation: 5 + 3 = 8 REASONINGABILITYVARIATION : Text: Jack had 8 pens and Mary had 5 pens.", "Mary gave 3 pens to Jack.", "How many pens does Jack have now?", "Equation: 8 + 3 = 11 STRUCTURALINVARIANCEVARIATION : Text: Jack gave 3 pens to Mary.", "If Jack had 8 pens and Mary had 5 pens initially, how many pens does Jack have", "now?Equation: 8 3 = 5 Table 1: Example of a Math Word Problem along with the types of variations that we make to create SVAMP.", "complexity and world and domain knowledge.", "A combined complexity measure is the grade level of an MWP, which is the grade in which similar MWPs are taught.", "Over the past few decades many approaches have been developed to solve MWPs with significant activity in the last decade (Zhang et al., 2020).", "MWPs come in many varieties.", "Among the simplest are the one-unknown arithmetic word problems where the output is a mathematical expression involving numbers and one or more arithmetic operators ( + , , , / ).", "Problems in Tables 1 and 6 are of this type.", "More complex MWPs may have systems of equations as output or involve other operators or may involve more advanced topics and specialized knowledge.", "Recently, researchers have started focusing on solving such MWPs, e.g. 
multiple-unknown linear word problems (Huang et al., 2016a), geometry (Sachan and Xing, 2017) and probability (Amini et al., 2019), believing that existing work can handle one-unknown arithmetic MWPs well (Qin et al., 2020).", "In this paper, we question the capabilities of the state-of-the-art (SOTA) methods to robustly solve even the simplest of MWPs suggesting that the above belief is not well-founded.", "In this paper, we provide concrete evidence to show that existing methods use shallow heuristics to solve a majority of word problems in the benchmark datasets.", "We find that existing models are able to achieve reasonably high accuracy on MWPs from which the question text has been removed leaving only the narrative describing the state of the world.", "This indicates that the models can rely on superficial patterns present in the narrative of the MWP and achieve high accuracy without even looking at the question.", "In addition, we show that a model without word-order information (i.e., the model treats the MWP as a bag-of-words) can also solve the majority of MWPs in benchmark datasets.", "The presence of these issues in existing benchmarks makes them unreliable for measuring the performance of models.", "Hence, we create a challenge set called SVAMP ( S imple V ariations on A rithmetic M ath word P roblems; pronounced swamp ) of one-unknown arithmetic word problems with grade level up to 4 by applying simple variations over word problems in an existing dataset (see Table 1 for some examples).", "SVAMP further highlights the brittle nature of existing models when trained on these benchmark datasets.", "On evaluating SOTA models on SVAMP, we find that they are not even able to solve half the problems in the dataset.", "This failure of SOTA models on SVAMP points to the extent to which they rely on simple heuristics in training data to make their prediction.", "We show that the majority of problems in benchmark datasets can be solved by shallow heuristics lacking word-order information or lacking question text.", "We create a challenge set called SVAMP 1 for more robust evaluation of methods developed to solve elementary level math word problems.", "1 The dataset and code are available https://github.com/arkilpatel/SVAMP", "2018), semantic parsing (Huang et al., 2017) and most recently deep learning (Wang et al., 2017; Xie and Sun, 2019; Zhang et al., 2020); see (Zhang et al., 2020) for an extensive survey.", "Many papers have pointed out various deficiencies with previous datasets and proposed new ones to address them.", "Koncel-Kedziorski et al. (2016) cu-rated the MAWPS dataset from previous datasets which along with Math23k (Wang et al., 2017) has been used as benchmark in recent works.", "Recently, ASDiv (Miao et al., 2020) has been proposed to provide more diverse problems with annotations for equation, problem type and grade level.", "HMWP (Qin et al., 2020) is another newly proposed dataset of Chinese MWPs that includes examples with muliple-unknown variables and requiring non-linear equations to solve them.", "Identifying artifacts in datasets has been done for the Natural Language Inference (NLI) task by McCoy et al. (2019), Poliak et al. (2018), and Gururangan et al. (2018).", "Rosenman et al. (2020) iden-tified shallow heuristics in a Relation Extraction dataset.", "Cai et al. 
(2017) showed that biases prevalent in the ROC stories cloze task allowed models to yield state-of-the-art results when trained only on the endings.", "To the best of our knowledge, this kind of analysis has not been done on any Math Word Problem dataset.", "Challenge Sets for NLP tasks have been proposed most notably for NLI and machine translation (Belinkov and Glass, 2019; Nie et al., 2020; Ribeiro et al., 2020).", "Gardner et al. (2020) suggested creating contrast sets by manually perturbing test instances in small yet meaningful ways that change the gold label.", "We believe that we are the first to introduce a challenge set targeted specifi-cally for robust evaluation of Math Word Problems.", "We denote a Math Word Problem P by a sequence of n tokens P = ( w 1 , . . . , w n ) where each token w i can be either a word from a natural language or a numerical value.", "The word problem P can be broken down into body B = ( w 1 , . . . , w k ) and question Q = ( w k +1 , . . . , w n ) .", "The goal is to map P to a valid mathematical expression EP composed of numbers from P and mathematical operators from the set { + , , /, } (e.g. 3 + 5 4 ).", "The metric used to evaluate models on the MWP task is Execution Accuracy, which is obtained from com-Model MAWPS ASDiv-A Seq2Seq (S) 79.7 55.5 Seq2Seq (R) 86.7 76.9 GTS (S) (Xie and Sun, 2019) 82.6 71.4 GTS (R) 88.5 81.2 Graph2Tree (S) (Zhang et al., 2020) 83.7 77.4 Graph2Tree (R) 88.7 82.2 Majority Template Baseline 2 17.7 21.2 Table 2: 5-fold cross-validation accuracies ( ) of baseline models on datasets.", "paring the predicted answer (calculated by evaluating EP ) with the annotated answer.", "In this work, we focus only on one-unknown arithmetic word problems.", "Many of the existing datasets are not suitable for our analysis as either they are in Chinese, e.g. Math23k (Wang et al., 2017) and HMWP (Qin et al., 2020), or have harder problem types, e.g. Dolphin18K (Huang et al., 2016b).", "We consider the widely used benchmark MAWPS (Koncel-Kedziorski et al., 2016) composed of 2373 MWPs and the arithmetic subset of ASDiv (Miao et al., 2020) called ASDiv-A which has 1218 MWPs mostly up to grade level 4 (MAWPS does not have grade level information).", "Both MAWPS and ASDiv-A are evaluated on 5-fold cross-validation based on pre-assigned splits.", "We consider three models in our experiments:", "(a) Seq2Seq consists of a Bidirectional LSTM Encoder to encode the input sequence and an LSTM decoder with attention (Luong et al., 2015) to generate the equation.", "(c) GTS (Xie and Sun, 2019) uses an LSTM Encoder to encode the input sequence and a tree-based Decoder to generate the equation.", "(d) Graph2Tree (Zhang et al., 2020) combines a Graph-based Encoder with a Tree-based Decoder.", "The performance of these models on both datasets is shown in Table 2. 
We either provide RoBERTa (Liu et al., 2019) pre-trained embeddings to the models or train them from scratch.", "Graph2Tree (Zhang et al., 2020) with RoBERTa embeddings achieves the state-of-the-art for both 2 Majority Template Baseline is the accuracy when the model always predicts the most frequent Equation Template .", "datasets.", "Note that our implementations achieve a higher score than the previously reported highest score of 78% on ASDiv-A (Miao et al., 2020) and 83.7% on MAWPS (Zhang et al., 2020).", "The implementation details are provided in Section A in the Appendix.", "As mentioned in Section 3.1, each MWP consists of a body B , which provides a short narrative on a state of the world and a question Q , which inquires about an unknown quantity about the state of the world.", "For each fold in the provided 5-fold split in MAWPS and ASDiv-A, we keep the train set unchanged while we remove the questions Q from the problems in the test set.", "Hence, each problem in the test set consists of only the body B without any question Q .", "We evaluate all three models with RoBERTa embeddings on these datasets.", "The results are provided in Table 3. The best performing model is able to achieve a 5-fold cross-validation accuracy of 64.4% on ASDiv-A and 77.7% on MAWPS.", "Loosely translated, this means that nearly 64% of the problems in ASDiv-A and 78% of the problems in MAWPS can be correctly answered without even looking at the question.", "This suggests the presence of patterns in the bodies of MWPs in these datasets that have a direct correlation with the output equation.", "Some recent works have also demonstrated similar evidence of bias in NLI datasets (Gururangan et al., 2018; Poliak et al., 2018).", "They observed that NLI models were able to predict the correct label for a large fraction of the standard NLI datasets based on only the hypothesis of the input and without the premise.", "Our results on question-removed examples of math word problems resembles their observations on NLI datasets and similarly indicates the presence of artifacts that help statistical MAWPS ASDiv-A Model Easy Hard Easy Hard Seq2Seq 86.8 86.7 91.3 56.1 GTS 92.6 71.7 91.6 65.3 Graph2Tree 93.4 71.0 92.8 63.3 Table 4: Results of baseline models on the Easy and Hard test sets.", "models predict the correct answer without complete information.", "Note that even though the two methods appear similar, there is an important distinction.", "In Gururangan et al. (2018), the model is trained and tested on hypothesis only examples and hence, the model is forced to find artifacts in the hypothesis during training .", "On the other hand, our setting is more natural since the model is trained in the standard way on examples with both the body and the question.", "Thus, the model is not explicitly forced to learn based on the body during training and our results not only show the presence of artifacts in the datasets but also suggest that the SOTA models exploit them.", "Following Gururangan et al. 
(2018), we attempt to understand the extent to which SOTA models rely on the presence of simple heuristics in the body to predict correctly.", "We partition the test set into two subsets for each model: problems that the model predicted correctly without the question are labeled Easy and the problems that the model could not answer correctly without the question are labeled Hard .", "Table 4 shows the performance of the models on their respective Hard and Easy sets.", "Note that their performance on the full set is already provided in Table 2. It can be seen clearly that although the models correctly answer many Hard problems, the bulk of their success is due to the Easy problems.", "This shows that the ability of SOTA methods to robustly solve word problems is overestimated and that they rely on simple heuristics in the body of the problems to make predictions.", "We construct a simple model based on the Seq2Seq architecture by removing the LSTM Encoder and replacing it with a Feed-Forward Network that maps the input embeddings to their hidden representations.", "The LSTM Decoder is provided with the average of these hidden representations as its initial hidden state.", "During decoding, an attention mechanism (Luong et al., 2015) assigns weights to individual hidden representations of the input Model MAWPS ASDiv-A FFN + LSTM Decoder (S) 75.1 46.3 FFN + LSTM Decoder (R) 77.9 51.2 Table 5: 5-fold cross-validation accuracies ( ) of the constrained model on the datasets.", "tokens.", "We use either RoBERTa embeddings (non-contextual; taken directly from Embedding Matrix) or train the model from scratch.", "Clearly, this model does not have access to word-order information.", "Table 5 shows the performance of this model on MAWPS and ASDiv-A.", "The constrained model with non-contextual RoBERTa embeddings is able to achieve a cross-validation accuracy of 51.2 on ASDiv-A and an astounding 77.9 on MAWPS.", "It is surprising to see that a model having no word-order information can solve a majority of word problems in these datasets.", "These results indicate that it is possible to get a good score on these datasets by simply associating the occurence of specific words in the problems to their corresponding equations.", "We illustrate this more clearly in the next section.", "To get a better understanding of how the constrained model is able to perform so well, we analyze the attention weights that it assigns to the hidden representations of the input tokens.", "As shown by Wiegreffe and Pinter (2019), analyzing the attention weights of our constrained model is a reliable way to explain its prediction since each hidden representation consists of information about only that token as opposed to the case of an RNN where each hidden representation may have information about the context i.e. 
its neighboring tokens.", "We train the contrained model (with RoBERTa embeddings) on the full ASDiv-A dataset and observe the attention weights it assigns to the words of the input problems.", "We found that the model usually attends to a single word to make its prediction, irrespective of the context.", "Table 6 shows some representative examples.", "In the first example, the model assigns an attention weight of 1 to the representation of the word every' and predicts the correct equation.", "However, when we make a subtle change to this problem such that the corresponding equation changes, the model keeps on attending over the word every' and predicts the same equa-Input Problem Predicted Equation Answer John delivered 3 letters at every house.", "tion, which is now incorrect.", "Similar observations can be made for the other two examples.", "Table 22 in the Appendix has more such examples.", "These examples represent only a few types of spurious correlations that we could find but there could be other types of correlations that might have been missed.", "Note that, we do not claim that every model trained on these datasets relies on the occurrence of specific words in the input problem for prediction the way our constrained model does.", "We are only asserting that it is possible to achieve a good score on these datasets even with such a brittle model, which clearly makes these datasets unreliable to robustly measure model performance.", "The efficacy of existing models on benchmark datasets has led to a shift in the focus of researchers towards more difficult MWPs.", "We claim that this efficacy on benchmarks is misleading and SOTA MWP solvers are unable to solve even elementary level one-unknown MWPs.", "To this end, we create a challenge set named SVAMP containing simple one-unknown arithmetic word problems of grade level up to 4. The examples in SVAMP test a model across different aspects of solving word problems.", "For instance, a model needs to be sensitive to questions and possess certain reasoning abilities to correctly solve the examples in our challenge set.", "SVAMP is similar to existing datasets of the same level in terms of scope and difficulty for humans, but is less susceptible to being solved by models relying on superficial patterns.", "Our work differs from adversarial data collection methods such as Adversarial NLI (Nie et al., 2020) in that these methods create examples depending on the failure of a particular model while we create examples without referring to any specific model.", "Inspired by the notion of Normative evaluation (Linzen, 2020), our goal is to create a dataset of simple problems that any system designed to solve MWPs should be expected to solve.", "We create new problems by applying certain variations to existing problems, similar to the work of Ribeiro et al. 
(2020).", "However, unlike their work, our variations do not check for linguistic capabilities.", "Rather, the choice of our variations is motivated by the experiments in Section 4 as well as certain simple capabilities that any MWP solver must possess.", "We create SVAMP by applying certain types of variations to a set of seed examples sampled from the ASDiv-A dataset.", "We select the seed examples from the recently proposed ASDiv-A dataset since it appears to be of higher quality and harder than the MAWPS dataset: We perform a simple experiment to test the coverage of each dataset by training a model on one dataset and testing it on the other one.", "For instance, when we train a Graph2Tree model on ASDiv-A, it achieves 82% accuracy on MAWPS.", "However, when trained on MAWPS and tested on ASDiv-A, the model achieved only 73% accuracy.", "Also recall Table 2 where most models performed better on MAWPS.", "Moreover, ASDiv has problems annotated according to types and grade levels which are useful for us.", "To select a subset of seed examples that suffi-ciently represent different types of problems in the ASDiv-A dataset, we first divide the examples into groups according to their annotated types.", "We dis-Group Examples in ASDiv-A Selected Seed Examples Addition 278 28 Subtraction 362 33 Multiplication 188 19 Division 176 20 Total 1004 100 Table 7: Distribution of selected seed examples across types.", "card types such as TVQ-Change , TVQ-Initial , Ceil-Division and Floor-Division that have less than 20 examples each.", "We also do not consider the Difference type since it requires the use of an additional modulus operator.", "For ease of creation, we discard the few examples that are more than 40 words long.", "To control the complexity of resulting variations, we only consider those problems as seed examples that can be solved by an expression with a single operator.", "Then, within each group, we cluster examples using K-Means over RoBERTa sentence embeddings of each example.", "From each cluster, the example closest to the cluster centroid is selected as a seed example.", "We selected a total of 100 seed examples in this manner.", "The distribution of seed examples according to different types of problems can be seen in Table 7. 5.1.1 Variations The variations that we make to each seed example can be broadly classified into three categories based on desirable properties of an ideal model: Question Sensitivity , Reasoning Ability and Structural Invariance .", "Examples of each type of variation are provided in Table 8. 1. Question Sensitivity.", "Variations in this category check if the model's answer depends on the question.", "In these variations, we change the question in the seed example while keeping the body same.", "The possible variations are as follows:", "(a) Same Object, Different Structure: The principal object (i.e. 
object whose quantity is unknown) in the question is kept the same while the structure of the question is changed.", "(b) Different Object, Same Structure: The principal object in the question is changed while the structure of question remains fixed.", "(c) Different Object, Different Structure: Both, the principal object in the question and the structure of the question, are changed.", "termine a change in reasoning arising from subtle changes in the problem text.", "The different possible variations are as follows:", "(a) Add relevant information: Extra relevant information is added to the example that affects the output equation.", "(b) Change information: The information provided in the example is changed.", "(c) Invert operation: The previously unknown quantity is now provided as information and the question instead asks about a previously known quantity which is now unknown.", "3. Structural Invariance.", "Variations in this category check whether a model remains invariant to superficial structural changes that do not alter the answer or the reasoning required to solve the example.", "The different possible variations are as follows:", "(a) Add irrelevant information: Extra irrelevant information is added to the problem text that is not required to solve the example.", "(b) Change order of objects: The order of objects appearing in the example is changed.", "(c) Change order of phrases: The order of number-containing phrases appearing in the example is changed.", "Since creating variations requires a high level of familiarity with the task, the construction of SVAMP is done in-house by the authors and colleagues, hereafter called the workers .", "The 100 seed examples (as shown in Table 7) are distributed among the workers .", "For each seed example, the worker needs to create new variations by applying the variation types discussed in Section 5.1.1.", "Importantly, a combination of different variations over the seed example can also be done.", "For each new example created, the worker needs to annotate it with the equation as well as the type of variation(s) used to create it.", "More details about the creation protocol can be found in Appendix B. We created a total of 1098 examples.", "However, since ASDiv-A does not have examples with equations of more than two operators, we discarded 98 examples from our set which had equations consisting of more than two operators.", "This is to ensure that our challenge set does not have any unfairly difficult examples.", "The final set of 1000 examples was provided to an external volunteer unfamiliar with the task to check the grammatical and logical correctness of each example.", "Our challenge set SVAMP consists of one-unknown arithmetic word problems which can be solved by expressions requiring no more than two operators.", "Table 9 shows some statistics of our dataset and of ASDiv-A and MAWPS.", "The Equation Template for each example is obtained by converting the corresponding equation into prefix form and masking out all numbers with a meta symbol.", "Observe that the number of distinct Equation Templates and the Average Number of Operators are similar for SVAMP and ASDiv-A and are considerably smaller than for MAWPS.", "This indicates that SVAMP does not contain unfairly difficult MWPs in terms of the arithmetic expression expected to be produced by a model.", "Previous works, including those introducing MAWPS and ASDiv, have tried to capture the notion of diversity in MWP datasets.", "Miao et al. 
(2020) introduced a metric called Corpus Lexicon Diversity (CLD) to measure lexical diversity.", "Their contention was that higher lexical diversity is correlated with the quality of a dataset.", "As can be seen from Table 9, SVAMP has a much lesser CLD than ASDiv-A.", "SVAMP is also less diverse in terms of problem types compared to ASDiv-a.", "Despite this we will show in the next section that SVAMP is in fact more challenging than ASDiv-A for current models.", "Thus, we believe that lexical diversity is not a reliable way to measure the quality of MWP datasets.", "Rather it could depend on other factors such as the diversity in MWP structure which preclude models exploiting shallow heuristics.", "We train the three considered models on a combination of MAWPS and ASDiv-A and test them on SVAMP.", "The scores of all three models with and without RoBERTa embeddings for various subsets of SVAMP can be seen in Table 10. Seq2Seq GTS Graph2Tree S R S R S R Full Set 24.2 40.3 30.8 41.0 36.5 43.8 One-Op 25.4 42.6 31.7 44.6 42.9 51.9 Two-Op 20.3 33.1 27.9 29.7 16.1 17.8 ADD 28.5 41.9 35.8 36.3 24.9 36.8 SUB 22.3 35.1 26.7 36.9 41.3 41.3 MUL 17.9 38.7 29.2 38.7 27.4 35.8 DIV 29.3 56.3 39.5 61.1 40.7 65.3 Table 10: Results of models on the SVAMP challenge set.", "The best performing Graph2Tree model is only able to achieve an accuracy of 43.8% on SVAMP.", "This indicates that the problems in SVAMP are indeed more challenging for the models than the problems in ASDiv-A and MAWPS despite being of the same scope and type and less diverse.", "Table 23 in the Appendix lists some simple examples from SVAMP on which the best performing model fails.", "These results lend further support to our claim that existing models cannot robustly solve elementary level word problems.", "Next, we remove the questions from the examples in SVAMP and evaluate them using the three models with RoBERTa embeddings trained on combined MAWPS and ASDiv-A.", "The scores can be seen in Table 11. 
The accuracy drops by half when compared to ASDiv-A and more than half compared to MAWPS suggesting that the problems in SVAMP are more sensitive to the information present in the question.", "We also evaluate the performance of the constrained model on SVAMP when trained on MAWPS and ASDiv-A.", "The best model achieves only 18.3% accuracy (see Table 12) which Model SVAMP w/o ques ASDiv-A w/o ques Seq2Seq 29.2 58.7 GTS 28.6 60.7 Graph2Tree 30.8 64.4 Table 11: Accuracies ( ) of models on SVAMP without questions.", "is marginally better than the majority template baseline.", "This shows that the problems in SVAMP are less vulnerable to being solved by models using simple patterns and that a model needs contextual information in order to solve them.", "We also explored using SVAMP for training by combining it with ASDiv-A and MAWPS.", "We performed 5-fold cross-validation over SVAMP where the model was trained on a combination of the three datasets and tested on unseen examples from SVAMP.", "To create the folds, we first divide the seed examples into five sets, with each type of example distributed nearly equally among the sets.", "A fold is obtained by combining all the examples in SVAMP that were created using the seed examples in a set.", "In this way, we get five different folds from the five sets.", "We found that the best model achieved about 65% accuracy.", "This indicates that even with additional training data existing models are still not close to the performance that was estimated based on prior benchmark datasets.", "To check the influence of different categories of variations in SVAMP, for each category, we measure the difference between the accuracy of the best model on the full dataset and its accuracy on a subset containing no example created from that category of variations.", "The results are shown in Table 13. Both the Question Sensitivity and Struc-Removed Category # RemovedExamples Change in Accuracy () Question Sensitivity 462 +13.7 Reasoning Ability 649 -3.3 Structural Invariance 467 +4.5 Table 13: Change in accuracies when categories are removed.", "tural Invariance categories of variations show an increase in accuracy when their examples are removed, thereby indicating that they make SVAMP more challenging.", "The decrease in accuracy for the Reasoning Ability category can be attributed in large part to the Invert Operation variation.", "This is not surprising because most of the examples created from Invert Operation are almost indistinguishable from examples in ASDiv-A, which the model has seen during training.", "The scores for each individual variation are provided in Table 14. We also check the break-up of performance of the best performing Graph2Tree model according to the number of numbers present in the text of the input problem.", "We trained the model on both ASDiv-A and MAWPS and tested on SVAMP and compare those results against the 5-fold cross-validation setting of ASDiv-A.", "The scores are provided in Table 15. 
While the model can solve many problems consisting of only two numbers in the input text (even in our challenge set), it performs very badly on problems having more than two numbers.", "This shows that current methods are incapable of properly associating numbers to their context.", "Also, the gap between the performance on ASDiv-A and SVAMP is high, indicating that the examples in SVAMP are more difficult for these models to solve than the examples in ASDiv-A even when considering the structurally same type of word problems.", "Going back to the original question, are existing NLP models able to solve elementary math word", "problems?", "This paper gives a negative answer.", "We have empirically shown that the benchmark English MWP datasets suffer from artifacts making them unreliable to gauge the performance of MWP solvers: we demonstrated that the majority of problems in the existing datasets can be solved by simple heuristics even without word-order information or the question text.", "The performance of the existing models in our proposed challenge dataset also highlights their limitations in solving simple elementary level word problems.", "We hope that our challenge set SVAMP, containing elementary level MWPs, will enable more robust evaluation of methods.", "We believe that methods proposed in the future that make genuine advances in solving the task rather than relying on simple heuristics will perform well on SVAMP despite being trained on other datasets such as ASDiv-A and MAWPS.", "In recent years, the focus of the community has shifted towards solving more difficult MWPs such as non-linear equations and word problems with multiple unknown variables.", "We demonstrated that the capability of existing models to solve simple one-unknown arithmetic word problems is overestimated.", "We believe that developing more robust methods for solving elementary MWPs remains a significant open problem.", "We thank the anonymous reviewers for their constructive comments.", "We would also like to thank our colleagues at Microsoft Research for providing valuable feedback.", "We are grateful to Mono-jit Choudhury for discussions about creating the dataset.", "We thank Kabir Ahuja for carrying out preliminary experiments that led to this work.", "We also thank Vageesh Chandramouli and Nalin Patel for their help in dataset construction." ]
[ "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "objective", "abstain", "result", "abstain", "objective", "abstain", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "method", "method", "other", "method", "other", "other", "other", "method", "other", "other", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other" ]
[ "Extracting informative arguments of events from news articles is a challenging problem in information extraction, which requires a global contextual understanding of each document.", "While recent work on document-level extraction has gone beyond single-sentence and increased the cross-sentence inference capability of end-to-end models, they are still restricted by certain input sequence length constraints and usually ignore the global context between events.", "To tackle this issue, we introduce a new global neural generation-based framework for document-level event argument extraction by constructing a document memory store to record the contextual event information and leveraging it to implicitly and explicitly help with decoding of arguments for later events.", "Empirical results show that our framework outperforms prior methods substantially and it is more robust to adversarially annotated examples with our constrained decoding design.", "1 1 Introduction An event is a specific occurrence involving participants (people, objects, etc.).", "Understanding events in the text is necessary for building machine reading systems, as well as for downstream tasks such as information retrieval, knowledge base population, and trend analysis of real-life world events (Sundheim, 1992).", "Event Extraction has long been studied as a local sentence-level task (Gr-ishman and Sundheim, 1996; Ji and Grishman, 2008b; Grishman, 2019; Lin et al., 2020).", "This has driven researchers to focus on developing approaches for sentence-level predicate-argument extraction.", "This is problematic when events and their arguments spread across multiple sentences in real-world cases, events are often written through-1 Our code and resources are available at https:// github.com/xinyadu/memory_docie for research purpose.", "[S3] After having a shootout with several [ policemen including Collin ] last Thursday, both [ Tamerlan Tsarnaev ] and his younger brother [ Dzhokhar ] were captured a day later.", "[S6] Two week ago, in Boston, authorities on Wednesday reopened [ Boylston Street ], the city thoroughfare where the explosion occurred near the finish line of the race.", "[S7] a memorial service for campus policeman Sean Collin, who authorities say the brothers shot to death three days after the bombings [S1] After having a shootout with police last Thursday, both [Tamerlan Tsarnaev]_Detainee and his younger brother [Dzhokhar]_Detainee was captured a day later.", "In Figure 1, the excerpt of a news article describes two events in the 3rd sentence (an arrest event triggered by captured) and the 6th sentence (an attack event triggered by explosion).", "S6 on its own contains little information about the ar-guments/participants of the explosion event, but together with the context of S3 and S7, we can find the informative arguments for the ATTACKER role.", "In this work, we focus on the informative argument extraction problem, which is more practical and requires much a broader view of cross-sentence context (Li et al., 2021).", "For example, although the brothers also refers to Tamerlan T. 
and Dzhokhar (and closer to the trigger word), it 2 In WIKIEVENTS (Li et al., 2021), nearly 40% of events have an argument outside the sentence containing the trigger.", "In recent years, there have been efforts focusing on event extraction beyond sentence boundaries with end-to-end learning (Ebner et al., 2020; Du, 2021; Li et al., 2021).", "Most of the work still focuses on modeling each event independently (Li et al., 2021) and ignores the global context partially because of the pretrained models' length limit and their lack of attention for distant context (Khan-delwal et al., 2018).", "Du et al. (2021) propose to model dependency between events directly via the design of generation output format, yet it is not able to handle longer documents with more events whereas in real-world news articles there are often more than fifteen inter-related events (Table 2).", "In addition, previous work often overlooks the consistency between extracted event structures across the long document.", "For example, if one person has been identified as a JAILER in an event, it's unlikely that the same person is an ATTACKER in another event in the document (Figure 1), according to world event knowledge (Sap et al., 2019; Yao et al., 2020).", "In this paper, to tackle these challenges and have more consistent/coherent extraction results, we propose a document-level memory-enhanced training and decoding framework (Figure", "2) for the problem.", "It can leverage relevant and necessary context beyond the length constraint of end-to-end models, by using the idea of a dynamic memory store.", "It helps the model leverage previously generated/extracted event information during both training (implicitly) and during test/decoding (explicitly).", "More specifically, during training, it retrieves the most similar event sequence in the memory store as additional input context to mode.", "Plus, it performs constrained decoding based on the memory store and our harvested global knowledge-based argument pairs from the ontology.", "We conduct extensive experiments and analysis on the WIKIEVENTS corpus and show that our framework significantly outperforms previous methods either based on neural sequence labeling or text generation.", "We also demonstrate that the framework achieves larger gains over baseline non memory-based models as the number of events grows in the document, and it is more robust to manually designed adversarial examples.", "In this work, we focus on the challenging problem of extracting informative arguments of events 3 from the document.", "Each event consists of (1) a trigger expression which is a continuous span in the document, it is of a type E which is predefined in an ontology; (2) and a set of arguments { arg 1 , arg 2 , ... 
} , each of them has a role predefined in the ontology, for event type E .", "In the annotation guideline/ontology, the template that describes the connections between arguments of the event type is also provided.", "For example, when E is Arrest , its corresponding arguments to be extracted should have roles: JAILER ( <arg1> ), DETAINEE ( <arg2> ), CRIME ( <arg3> ), PLACE ( <arg4> ).", "Its description template is: <arg1> arrested or jailed <arg2> for <arg3> crime at <arg4> place Given a long news document Doc = { ..., <Trg1> , ..., x i , ..., <Trg2> , ..., x n } with given event triggers, our goal is to extract all the informative argument spans to fill in the role of E 1 , E 2 , etc.", "For the example piece in Figure 1, E 1 is Arrest (triggered by <Trg1> captured) and E 2 is Attack-Detonate (<Trg2> is explosion).", "The ontology is constructed by the DARPA KAIROS project 4 for event annotation.", "It defines 67 event types in a three-level hierarchy, which is richer than the ACE05 ontology with only 33 event types for sentence-level extraction.", "In this section, we describe our memory-enhanced neural generation-based framework (Figure", "2) for extracting informative event arguments from the document.", "Our base model is based on a sequence-to-sequence pretrained language model for text generation.", "We first introduce how we leverage previously extracted events as additional context for training the text generation-based event extraction model to help the model automatically capture event dependency knowledge (Section 3.1).", "To explicitly help the model satisfy the global event knowledge-based constraints (e.g., it is improbable that one person would be JAILER in event A and then ATTACKER in event B), we propose a dynamic 3 Name entity mentions are recognized as more informative than nominal mentions.", "decoding process with world knowledge-based argument pair constraints (Section 3.2).", "Following Li et al. (2021), the main model of our framework is based on the pretrained encoder-decoder model BART (Lewis et al., 2020).", "The intuition behind using BART for the extraction task is that it is pre-trained as a denoising autoencoder reconstruct the original input sequence.", "This fits our objective of extracting argument spans from the input document because the extracted arguments' tokens are from the input sequence.", "The generation model takes (1) context: the concatenation of the piece of text x (of document D ) containing the current event trigger 5 and the event type's corresponding template in the ontology; (2) memory store m : of previously extracted events of the same document D , as input, and learns a distribution p ( y | x, m ) over possible outputs y .", "The ground truth sequence y is a sequence of a template where the placeholder <arg> s are filled by 5 Up to the maximum length limit of the pre-trained model.", "the gold-standard argument spans of the current event.", "6 p ( y | x, m ) = N (cid:89) i p ( y i | y 1: i 1 , x, m ) (1) To be more specific on building the dependency between events across the document, we use the most relevant event in the memory store m as additional context, instead of the entire memory store.", "To retrieve the most relevant event (i.e., a generated sequence) from the memory store m = { m 1 , m 2 , ... 
} , we use S-BERT (Reimers and Gurevych, 2019) for dense retrieval (i.e., retrieval with dense representations provided by NN).", "S-BERT is a modification of the BERT model (Devlin et al., 2019) that uses siamese and triplet network structures to obtain semantically meaningful embeddings for text sequences.", "We can compare the distance between two input sequences with cosine-similarity in an easier and faster way.", "Given a current input document piece x , we encode all of 6 The gold sequence for the 1st event in Figure 1 would be [policemen including Collin] arrested or jailed [Tamerlan T. and Dzhokhar] for <arg> crime at <arg> place 5266 the previously generated event sequences in the memory store and x .", "Then we calculate the similarity scores with vector space cosine-similarity and normalization: score ( m i | x ) = exp f ( x, m i ) (cid:80) m i m exp f ( x, m i ) f ( x, m i ) = Embed ( x ) T Embed ( m i ) Afterwards, we select the m i with the highest similarity score: m R = arg max i score ( m i | x ) To summarize, the input sequence for the memory-enhanced model consists of the retrieved generated event sequence ( m R ), the template for the current event type ( T ) provided by the ontol-ogy/dataset, and the context words from the document ( x 1 , ..., x n ): <S> m R 1 , m R 2 , ..., </S> <S> T 1 , T 2 , ... </S> x 1 , x 2 , ..., x n [EOS] During training time, the memory store consists of gold-standard event sequences while at test time, it contains real generated event sequences.", "The training objective is to minimize the negative log likelihood over all (( x, m R , T ) , y ) instances.", "Since we fix the parameters from S-BERT, the retrieval module's parameters are not updated during training.", "Thus the training time cost of our memory-based training is almost the same to the simple generation-based model.", "The constrained/dynamic decoding is an important stage in our framework.", "We first harvest a number of world knowledge-based event argument pairs that are probable/improbable of happening with the same entity being the argument.", "For example, (<Event Type: Arrest, Argument Role: JAILER > | <Event Type: Attack-Detonate, Argument Role: ATTACKER >) is an improbable pair.", "In the framework (Figure 2), they are called argument pairs.", "Then based on the argument pairs constraints, the dynamic decoding is conducted throughout the document if one entity is decoded in an event in the earlier part of the document, it should not be decoded later in another event if the results are incompatible with the improbable argument pairs.", "Harvesting Global Knowledge-based Argument Pairs from the Ontology We first run an algorithm to automatically harvest all candidate argument pairs (Algorithm 1).", "Basically, we First enumerate all possible event type pairs, and count how many times they co-occur in the training set (Line 26).", "Then we enumerate all possible argument types pairs that share the same entity type from the ontology (e.g., argument ORGANIZATION (ORG) and argument VICTIM (PER) don't have the same entity type), and count how many times both of the args are of the same entity in training docs (e.g., Dzhokhar are both DETAINEE and ATTACKER in two events in Figure", "1) (Line 711).", "Finally we add into the set of probable argument pairs, whose normalized score is above a threshold (99% of the candidate arguments with nonzero score); and the rest into the set of improba-5267 [policemen] including [Collin] arrested or jailed >Jailer: policeman, Colin Detainee: 
Tamerlan T., Dzhokhar enc previously generated event sequences <S> [Dzhokhar] and Knowledge constraint-based decoder Argument Pairs Dzhokhar and Colin policemen Tamerlan <input> Arg Type: Attacker Improbable Arg.", "After automatic harvesting, since there is noise in the dataset as well as cases not covered, we conduct a human curation process to mark certain improbable argument pairs as probable, based on world knowledge.", "Finally, we obtain 1,568 improbable argument pairs and 687 probable pairs.", "Dynamic Decoding Process During the decoding process, we keep an explicit data structure in the memory store, to record what entities have been decoded and what argument roles they are assigned to (Figure 3).", "During decoding the arguments of later events in the document, assuming we are at a time step t for generating the sequence for event E i , to generate token y t , we first determine the argument role ( A k ) it corresponds to.", "Then we search through the memory store if there are extracted entities e that have argument role A h , where < A k , A h > is an improbable argument pair.", "Then when decoding to token at time step t , we decrease the probability (after softmax) of generating/extracting tokens in entity e according to the improbable argument pair rule.", "Compared to decreasing the probability of extracting certain conflicting entities, we are more reserved in utilizing the probable argument pairs, only if the same entity has been assigned the argument role for more than 5 times in the document, we are increasing the probability of extracting the same entity (generat-ing the token of the entity) for the corresponding argument role (the most co-occurred).", "After the generation process for the current event, we add the newly generated event sequence (ex-tracted arguments) back into the memory store.", "We conduct evaluations on the newly released WIKIEVENTS dataset (Li et al., 2021).", "As compared to the ACE05 7 sentence-level extraction benchmark, WIKIEVENTS focuses on annotations for informative arguments and for multiple events in the document-level event extraction setting, and is the only benchmark dataset for this purpose to now.", "It contains real-world news articles annotated with the DARPA KAIROS ontology.", "As shown in the dataset paper, the distance between informative arguments and event trigger is 10 times larger than the distance between local/uninformative arguments (including pronouns) and event triggers.", "This demonstrates more needs for modeling long document context and event dependency and thus it requires a good benchmark for evaluating our proposed models.", "The statistics of the dataset are shown in Table 2.", "We use the same data split and preprocessing step as in the previous work.", "As for evaluation, we use the same criteria as in previous work.", "We consider an argument span to be correctly identified if its offsets match any of the gold/reference informative arguments of the current event (i.e., argument identification); and it is correctly classified if its semantic role also matches (i.e., argument classification) (Li et al., 2013).", "To judge whether the extracted argument and the gold-standard argument span match, since the exact match is too strict that some correct candidates are considered as spurious (e.g., the 22 policemen and 22 policemen do not match under the exact match standard).", "Following Huang and Riloff (2012); Li et al. 
(2021), we use head word match 7 http://www.itl.nist.gov/iad/mig/tests/ace/2005/ 5268 Argument Identification Argument Classification Models Head Match Coref Match Head Match Coref Match P R F1 P R F1 P R F1 P R F1 BERT-CRF (Shi and Lin, 2019) -52.71 -58.12 -43.29 -47.70 BART-Gen (Li et al., 2021) 58.62 55.64 57.09 62.84 59.64 61.19 54.02 51.27 52.61 57.47 54.55 55.97 Memory-based Training 61.07 56.18 58.52 66.21 60.91 63.45 55.93 51.45 53.60 60.47 55.64 57.95 w/ knowledge constrained decoding 62.45 56.55 59.35 67.67 61.27 64.31 57.23 51.82 54.39 61.85 56.00 58.78 Table 3: Performance (%) on the informative argument extraction task.", "F1 ( Head F1 ).", "We also report performance under a more lenient metric Coref F1 : the extracted argument span gets full credit if it is coreferential with the gold-standard arguments (Ji and Grishman, 2008a).", "The coreference links information between informative arguments across the document are given in the gold annotations.", "We compare our framework to a number of competitive baselines.", "(Shi and Lin, 2019) is a popular baseline for semantic role labeling (predicate-argument prediction).", "It performs sequence labeling based on automatically extracted features from BERT (Devlin et al., 2019) and uses Conditional Random Fields (Lafferty et al., 2001) for structured prediction ( BERT-CRF ).", "Li et al. (2021) propose to use conditional neural text generation model for the document-level argument extraction problem, it handles each event in isolation ( BART-Gen ).", "For our proposed memory-enhanced training with retrieved additional context, we denote it as Memory-based Training .", "We also present the argument pairs constrained decoding results separately to see both components' contributions.", "8 In Table 3, we present the main results for the document-level informative argument extraction.", "The score for argument identification is strictly higher than argument classification since it only requires span offset match.", "We observe that: The neural generation-based models (BART-Gen and our framework) are superior in this document-level informative argument extraction problem, as compared to the sequence labeling-based approaches.", "Plus, generation-based methods only require one pass as 8 All significance tests for F-1 are computed using the paired bootstrap procedure of 5k samples of generated sequences (Berg-Kirkpatrick et al., 2012).", "compared to span enumeration-based methods (Wadden et al., 2019; Du and Cardie, 2020).", "As compared to the raw BART-Gen, with our memory-based training leveraging previously closest extracted event information substantially helps increase precision (P) and F-1 scores, with smaller but notable improvement in recall especially under Coref Match.", "With additional argument pair constrained decoding, there is an additional significant improvement in precision and F-1 scores.", "This can be mainly attributed to two factors: (I) during constrained decoding, we relied more on improbable arg. pairs as a checklist to make sure that the same entity not generated for conflicting argument roles in the same document, and only utilize very few top proba-ble arg. 
pairs for promoting the decoding for frequently appearing entities; (II) If an entity has been decoded in previous event A by mistake then under the argument pair rule, it will not be decoded in event B even if it correct which might hurt the recall.", "Robustness to Adversarial Examples To test how the models react to specially designed adversarial examples, we select a quarter of documents from the original test set, and add one more adversarial event into each of them by adding a few 5269 new sentences.", "The additional event is designed to attract the model to make mistakes that are against our global knowledge-based argument pair rules.", "9 An excerpt for one example: Tandy, then 19, talks to his close friend, Stephen Silva, about ...", "Tandy and Silva both died as lifeguards together at the Harvard pool.", "Later a kid was killed by a Stephen Silva-lookalike guy.", "In this example, we know Stephen Silva died in the second event Life.Die triggered by died .", "Although it is also mentioned in the last sentence, Stephen Silva should not be extracted as the KILLER .", "In Table 4, we summarize the F-1 scores of argument classification models.", "Firstly we see on the adversarial examples, the performance scores all drop as compared to the normal setting (Table 3), proving it's harder to maintain robustness in this setting.", "Our best model with argument pair constrained decoding outperforms substantially both BART-Gen and our memory-based training model.", "The gap is larger than the general evaluation setting, which shows the advantage of explicitly enforcing the reasoning/constraint rules.", "Influence of Similarity-based Retrieval In Table 5, we first investigate what happens when our similarity-based retrieval module is removed we find that the F-1 scores substantially drop.", "There's also a drop of scores across metrics when we retrieve a random event from the memory store.", "It is interesting that the model gets slightly better performance with random memory than not using any retrieved/demonstration sequences.", "This corresponds to the findings in other domains of NLP on how demonstrations lead to performance gain when using pre-trained language models (especially in the few-shot learning setting).", "Document Length and # of Events In Figure 4, we examine how performances change as the document length and the number of events per document grow.", "First we observe that as the document length grows, challenges grow for both the baseline and 9 In our open-sourced repository, readers will be able to find our designed adversarial examples under the data folder.", "our framework (F-1 drops from over 70% to around 55%).", "While our framework maintains a larger advantage when document is longer than 250 words.", "As the number of events per document grows (from <=8 to around 25), our model's performance is not affected much (F-1 all over 60%).", "While the baseline system's F-1 score drops to around 50%.", "Qualitative Analysis We present a couple of representative examples (Table 6).", "In the first example, for the event triggered by wounds , it's hard to find the VICTIM argument Ahmad Khan Rahimi since it's explicitly mentioned far before the current sentence.", "But with retrieved additional context, both our framework variants are able to extract the full name correctly.", "In the second example, Cuba was mentioned in two sentences with two events (Impede event triggered by sidesteps and Arrest triggered by capture ).", "But it only participated in the first event.", 
"According to our argument pair constraints, it is improbable that one entity is both an IMPEDER and a JAILER, so our framework with constrained decoding conducts reasoning to avoid the wrong extraction.", "(Table 6: example input documents with extractions from the BART-Gen baseline and memory-enhanced training w/ constrained decoding.)", "Error Analysis and Remaining Challenges Table 7 categorizes the types of argument extraction errors made by our best model.", "The majority of errors are from missing arguments, and only around 7% of cases are caused by incorrectly-assigned argument roles (e.g., a PLACE argument is mistakenly labeled as a TARGET argument).", "Interestingly, from the distribution in Figure 5, we see that compared to the distance of gold-standard informative arguments from the trigger (avg. 80.41 words), the missing arguments are far away (avg. 136.39 words), showing the hardness of extracting distant arguments as compared to local ones.", "Finally, we examine the example predictions more deeply and categorize the reasons for errors into the following types: (1) The challenge of obtaining an accurate boundary for the argument span.", "In the example excerpt On Sunday, a suicide bombing in the southeastern province of [Logar] left eight ..., our model extracts southeastern province as the PLACE.", "Similarly, in ... were transported to [Kabul] city .., our model extracts city as the DESTINATION.", "In both cases the model gets no credit.", "To mitigate this problem, models should be able to identify certain noun phrase boundaries with external knowledge.", "Plus, improvements to data annotation and evaluation are also needed: the model should get partial credit when its span does not overlap with, but is related to, the gold argument.", "(2) Long-distance dependency and deeper context understanding.", "In news, most of the content is written by the author, while certain content is quoted from participants.", "Models usually do not distinguish between the two and do not account for the large difference in stance.", "In the excerpt Bill Richard, whose son, Martin, was the youngest person killed in the bombing, said Tsarnaev could have backed out ... Instead, Richard said, he chose hate. He chose destruction. He chose death. ..., the full name of the informative argument (D. Tsarnaev) was mentioned at the very beginning of the document.", "Although our model can leverage previously decoded events, it is not able to fully understand the speaker's point of view and misses the full KILLER argument span.", "Prior work has acquired event knowledge with heuristic-based rules or crowdsourcing-based methods.", "Sap et al. (2019) propose to use crowdsourcing for obtaining if-then relations between events.", "Bosselut et al. (2019) use generative language models to generate new event knowledge based on crowdsourced triples.", "Yao et al. (2020) propose a weakly-supervised approach to extract sub-event relation tuples from text.",
"In our work, we focus on harvesting knowledge-based event argument pair constraints from the predefined ontology together with training-data co-occurrence statistics.", "Plus, the work above on knowledge acquisition has not investigated explicitly encoding the knowledge/constraints to improve the performance of models on document-level event extraction related tasks.", "Document-level Event Extraction Event extraction has mainly been studied under the document-level setting (the template filling tasks from the MUC conferences (Grishman and Sundheim, 1996)) and the sentence-level setting (using the ACE data (Doddington et al., 2004) and BioNLP shared tasks (Kim et al., 2009)).", "In this paper, we focus on the document-level event argument extraction task, which is a less-explored and challenging topic (Du et al., 2021; Li et al., 2021).", "To support progress on the problem, Ebner et al. (2020) built the RAMS dataset, which contains annotations for cross-sentence arguments but only one event per document.", "Later, Li et al. (2021) built the benchmark WIKIEVENTS with complete event annotations for each document.", "Regarding methodology, neural text generation-based models have proved superior on this document-level task (Huang et al., 2021; Du et al., 2021; Li et al., 2021).", "But they are still limited by the maximum context length issue and mainly focus on modeling one event at a time.", "Yang and Mitchell (2016) proposed a joint extraction approach that models cross-event dependencies, but it is restricted to events co-occurring within a sentence and only does trigger typing.", "In our framework, utilizing the memory store can help better capture global context and avoid the document length constraint.", "Apart from event extraction, in the future it is worth investigating how to leverage the global memory idea for other document-level IE problems like (N-ary) relation extraction (Quirk and Poon, 2017; Yao et al., 2019).", "In this work, we examined the effect of global document-level memory on informative event argument extraction.", "In the new framework, we propose to leverage previously extracted events as additional context to help the model learn dependencies across events.", "At test time, we propose a dynamic decoding process to help the model satisfy global knowledge-based argument constraints.", "Experiments demonstrate that our approach achieves substantial improvements over prior methods and has a larger advantage as document length and the number of events increase.", "For future work, we plan to investigate how to extend our method to multi-document event extraction cases.", "We thank the anonymous reviewers for their helpful suggestions.", "This research is based upon work supported by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, and LORELEI Program No. HR0011-15-C-0115.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other" ]
[ "We consider neural language generation under a novel problem setting: generating the words of a sentence according to the order of their first appearance in its lexicalized PCFG parse tree, in a depth-first, left-to-right manner.", "Unlike previous tree-based language generation methods, our approach is both", "(i) top-down and", "(ii) explicitly generating syntactic structure at the same time.", "In addition, our method combines neural model with symbolic approach: word choice at each step is constrained by its predicted syntactic function.", "We applied our model to the task of dialog response generation, and found it significantly improves over sequence-to-sequence baseline, in terms of diversity and relevance.", "We also investigated the effect of lexicalization on language generation, and found that lexicalization schemes that give priority to content words have certain advantages over those focusing on dependency relations.", "Neural encoder-decoder architectures have shown promise and become very popular for natural language generation.", "Over the past few years, there has seen a surging interest in sequence-to-sequence learning for dialog response generation using neural encoder-decoder models (Vinyals and Le, 2015; Serban et al., 2017).", "Typically, an encoder encodes conversational context (source side) information into vector representations, and a decoder auto-regressively generates word tokens conditioned on the source vectors and previously generated words.", "Two problems arise with the standard left-to-right decoding mechanism.", "First, no future information is available at any step of the decoding process, while the study of linguistic dependency structure shows that certain words depend on the others that come right to them.", "Second, preceding words define the context for following words in left-to-right, auto-regressive language models, while linguistic theories may prefer other hierarchies (e.g., adjectives modifying nouns, adverbs modifying verbs).", "Psycho-linguistics studies also suggest that human may first generate the abstract representation of the things to say, and then linearize them into sentences (Dell et al., 1999).", "Therefore, it is appealing to consider language generation in alternative orders.", "This poses a greater challenge because a mechanism in extra to word generation is needed for deciding the position of each word.", "Some recent works adopt a syntax-free approach to address this problem.", "(Mehri and Sigal, 2018) proposed a middle-out decoder that starts from the middle of sentences and finishes the rest in forward and backward directions.", "(Mou et al., 2016) and (Li and Sun, 2018) start with one or two predicted keywords and generate the rest of sentences in a similar fashion.", "Others incorporate tree structures without syntactic relations and categories.", "(Zhou et al., 2018) canonicalizes the dependency structures of sentences into ternary trees, and generate only the words top-down.", "Yet another line of work aim to model the full syntactic trees.", "(Gu et al., 2018) generates phrase structures and part-of-speech tags along with words for machine translation.", "(Dyer et al., 2016) generates shift-reduce action sequences of context-free grammars in addition to words for language model and parsing.", "But words are still generated in left-to-right order in their approaches.", "In the domain of dialog, we believe language generation can benefit from alternative orders, for the same reasons argued earlier.", "On the other 
"In order to generate complete syntactic trees while being flexible about word generation order, the use of a lexicalized grammar becomes a natural choice.", "Recent years have seen work on language modeling and generation in alternative orders.", "Zhang et al. (2016) developed a top-down neural architecture for language modeling that alternates between four LSTM decoders according to given dependency relations.", "Ford et al. (2018) proposed a two-stage language model, of which the first stage is a language model that generates templates, and the second stage is a translation model that fills in the blanks.", "Word generation order varies with the choice of words that are generated at different stages.", "Language generation with tree structures has been explored more thoroughly for neural machine translation.", "Eriguchi et al. (2017) and Aharoni and Goldberg (2017) generate CFG trees in bracketed form.", "Wu et al. (2017) generate the sequence of transitions that forms dependency trees.", "More recent works have focused on explicitly generating tree structures (Wang et al., 2018; Gu et al., 2018).", "Regarding neural architectures for tree generation in the field of natural language processing, Dong and Lapata (2016) and Yin and Neubig (2017) use a single decoder with a parent-feeding mechanism to generate logical forms and programming code.", "Gu et al. (2018) applied the doubly-recurrent neural networks of Alvarez-Melis and Jaakkola (2016) with an attention mechanism to machine translation.", "Their model uses two decoders, of which one memorizes the ancestors and the other remembers the siblings.", "Wang et al. (2018) also use two decoders, but one for generating words and the other for generating syntax trees.", "In the domain of dialog response generation, the use of syntactic structures is under-studied.", "Mou et al. (2016) and Li and Sun (2018) considered starting with keywords and finishing the sentences in forward and backward directions.", "Their models in principle are not tree-structured.", "The closest work to ours, to our knowledge, is by Zhou et al. (2018).", "They proposed to convert dependency trees to ternary trees, but ignore the types of dependency relations.", "In other words, they modeled trees whose nodes and edges have no labels.", "The key difference between their approach and ours is that we generate syntax trees with labels, and word choices are also constrained by the labels.", "We first consider the following three requirements when generating an L-PCFG syntax tree: Deciding the structure of children.", "Several mechanisms have been proposed for deciding the structure of the children of each node in the context of tree generation.", "One of them decides tree topology by using a sequence model to generate children one by one and predict stopping tokens (Alvarez-Melis and Jaakkola, 2016).", "Then there is a simpler approach that treats each combination of the labels of the children as one token, and predicts such tokens when generating the parent node (Yin and Neubig, 2017).", "For language generation, we adopt the second approach and predict the combination of the labels of the children, i.e. the rules, for two reasons: (i) the space of grammar rules is generally sparse even though its dimensionality is exponential in the number of labels, and (ii) with sequential generation of labels, as in the first approach, it is hard to enforce that the labels of the children form a valid grammar rule.",
"Deciding the heir of a node.", "Recall the definition of a lexicalized PCFG: let W, N, R be the sets of lexicons, labels, and rules, where each rule has one of the following forms: X(h) -> h, or X(h) -> Y_1(h_1) ... Y_k(h_k) such that there exists i with h_i = h, where X, Y_1, ..., Y_k ∈ N and h, h_1, ..., h_k ∈ W.", "We do not restrict ourselves to Chomsky Normal Form, and rules can have any number of children.", "The i-th child in the second case is called the heir.", "The key difficulty in top-down generation of a lexicalized PCFG parse tree is deciding which child should be the heir.", "One way is to make an explicit decision to select the child by adding a switch variable, at the cost of increasing the complexity of the problem.", "Instead, we modify the second case above and simplify the problem by restricting the rules to be of the form X(h) -> Y_1(h_1) ... X(h) ... Y_k(h_k).", "(Figure 1: The first tree is the result of the Stanford parser.)", "In other words, the heir inherits both the lexicalization and the label of the parent (with the possible exception that the root node may produce children that are not labeled ROOT).", "When generating a parent node and its children, we restrict the choice of rules to those containing the label of the parent, so the heir can be inferred from the chosen rule by looking for the child that has the same label as its parent (in case there are multiple children with the same label, we choose the rightmost one; other heuristics are possible).", "Note that under such a restriction we end up with parse trees in which all labels are part-of-speech tags.",
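Under this restriction, the heir can be read directly off a chosen rule. A minimal sketch (the function and the example labels are ours, not the authors' code):

```python
def infer_heir_index(parent_label, rule_children):
    """Return the index of the heir in a chosen rule: the rightmost child
    whose label equals the parent's label, per the restriction above."""
    heir = None
    for i, label in enumerate(rule_children):
        if label == parent_label:
            heir = i  # keep overwriting so we end with the rightmost match
    if heir is None:
        raise ValueError("rule does not contain the parent's label")
    return heir

# e.g., for a VP-labeled parent, the middle child inherits label and lexicalization
assert infer_heir_index("VP", ("NP", "VP", "PP")) == 1
```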
"Sequentialization of a tree.", "Previous works adopt various construction orders for trees.", "Zhang et al. (2016), Zhou et al. (2018), Alvarez-Melis and Jaakkola (2016), and Gu et al. (2018) generate trees through level-order traversal (breadth-first, left-to-right), whereas Yin and Neubig (2017) and Wang et al. (2018) generate trees through pre-order traversal (depth-first, left-to-right).", "Kuncoro et al. (2018) also experimented with bottom-up and left-corner construction orders for language modeling.", "While finding the optimal order of generating trees is beyond the scope of this work, we follow Yin and Neubig (2017) and Wang et al. (2018) and generate lexicalized PCFG syntax trees through pre-order traversal.", "In this paper, we give the following graph-theoretic definition of L-PCFG syntax trees.", "Let W and N be the sets of lexicons and labels.", "Let R = ∪_{k>=1} N^k be the set of production rules.", "Then an L-PCFG syntax tree T is an ordered tree defined by the triple of vertices v ∈ V ⊆ W × N × R × N, edges e ∈ E ⊆ V × V, and a bijection f : V × N+ -> V such that (v, f(v, j)) ∈ E, where j ranges from 1 to the number of children of v.", "The fourth coordinate of v is the index of its heir; that is, (w, n, r, i) = f((w_0, n_0, r_0, i_0), i_0) implies w = w_0 and n = n_0.", "We say a node v = (w, n, r, i) is a leaf if r is unary: v is a leaf iff r ∈ N.", "The parent of v_k is denoted by v_{p(k)} = (w_{p(k)}, n_{p(k)}, r_{p(k)}, i_{p(k)}).", "Following previous work, we sequentialize L-PCFG parse trees and generate their content in an auto-regressive manner.", "When generating the k-th node, we predict the lexicalization w_k and the rule r_k.", "The pre-order history available when generating the k-th node is n_1 ... n_{k-1}, w_1 ... w_{k-1}, r_1 ... r_{k-1}, denoted by H_k.", "The label ROOT is given at the start of generation.", "The label of the k-th node is inferred from the production rule of its parent and the order of the k-th node among its siblings, and is used as input together with H_k.", "When a leaf node is reached, the program backtracks until it finds an ancestor that has an unfinished child, and proceeds to the first such child.", "We factor the joint probability of w_k and r_k into two components, a word model and a syntax model, as follows: P(w_k, r_k | n_k, H_k) = P(w_k | n_k, H_k) P(r_k | n_k, H_k).", "The details of both models are given in the following sections.", "We parse the responses in the training corpus using the lexicalized parser of Klein and Manning (2003) (which we call the Stanford parser for the rest of this paper).", "We then replace the label of each node with that of its heir in a bottom-up manner.", "Unary rules at non-leaf nodes are removed, as they become redundant given our definition of lexicalized PCFG.", "The Stanford parser lexicalizes PCFG phrase structures by looking for the most likely combination of phrase structure and dependency structure.", "While their approach is optimized for parsing, syntax trees lexicalized this way have a drawback for the purpose of generation.", "Empirically, their parser tends to lexicalize the first few nodes with auxiliary or common verbs (e.g. be, must), and in some cases prefers function words over content words (e.g. in prepositional phrases).", "We hypothesize that choosing content words over function words, or infrequent words over frequent words, as lexicalization heads will help make the generation more specific and meaningful.", "Hence, we consider two alternative lexicalization schemes: Content-based lexicalization.", "We rank words according to their part-of-speech in the sentence in the following order: nouns > verbs = adjectives > adverbs > everything else.", "If two words have the same rank, we give priority to the rightmost one.", "See Figure 1 for an example.", "Frequency-based lexicalization.", "We ignore part-of-speech information and rank all words by their frequencies.", "We regard less frequent words as more important.",
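The content-based ranking above is simple enough to sketch; the tag prefixes assume a Penn-Treebank-style tagset, which is our assumption rather than something the text specifies:

```python
def pos_rank(tag):
    """Coarse ranks per the scheme above (PTB-style tag prefixes assumed)."""
    if tag.startswith("NN"):
        return 4                      # nouns
    if tag.startswith("VB") or tag.startswith("JJ"):
        return 3                      # verbs and adjectives share a rank
    if tag.startswith("RB"):
        return 2                      # adverbs
    return 1                          # everything else

def content_head(tagged_words):
    """tagged_words: list of (word, pos); ties go to the rightmost word."""
    best = max(range(len(tagged_words)),
               key=lambda i: (pos_rank(tagged_words[i][1]), i))
    return tagged_words[best][0]

print(content_head([("a", "DT"), ("fun", "JJ"), ("day", "NN")]))  # -> day
```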
"To represent the state of a tree node by encoding its pre-order history H_k, we use 3 LSTMs to memorize the lexical and grammatical contents of H_k.", "Encoding lexicalization history.", "We use 2 LSTMs to encode the lexicalization history, i.e. w_1 ... w_{k-1}: a surface decoder, L_s, which takes the lexicalization of the leaves in the history as inputs; and an ancestor decoder, L_a, which is given the lexicalization of the ancestors of the current node.", "This is another form of doubly-recurrent neural network.", "Different from Alvarez-Melis and Jaakkola (2016), we chose to encode leaves instead of siblings.", "Denote the lexicalization of leaves and ancestors in H_k by {w_{l(k)_i}} and {w_{a(k)_i}}.", "We show that for an L-PCFG syntax tree, {w_{l(k)_i}} and {w_{a(k)_i}} sufficiently cover the lexical content of H_k. Proposition: For any index set I_k ⊆ {1, 2, ..., k-1}, if w_n ∈ {w_j}_{j ∈ I_k} for all n < k, then {l(k)_i} ∪ {a(k)_i} ⊆ I_k; i.e., {l(k)_i} and {a(k)_i} together form the minimal index set that covers w_1 ... w_{k-1}.", "Encoding syntactic history.", "We encode the previous labels and rules over the full history with a single LSTM, L_g.", "At step k, the input to L_g is the concatenation of the embeddings of n_k, the rule of the parent node r_{p(k)}, and the depth of the node d.", "The depths of nodes deeper than 10 are rounded down to 10.", "We adopt an attention mechanism in our architecture for response generation.", "We use a one-layer LSTM to encode the dialog history, which is the concatenation of the last few utterances.", "The initial hidden states of L_s, L_a, and L_g are computed from the last hidden state of the source encoder using 2 fully-connected layers with rectified linear activation.", "At time step t, the concatenation of the hidden states of L_s and L_a at step t-1 is used as the key for querying the source.", "The attention weights are the inner products of the key and the hidden states on the source side, normalized by the softmax function.", "The weighted sum of source hidden states results in the attention context, c(k).", "4.6 Decoding Denote the hidden states of L_s, L_a, L_g at node k as h_s(k), h_a(k), h_g(k).", "(Figure 2: Demonstration of inputs and outputs at node DT. The sequence of inputs to each encoder is shown in the graph. The inputs to L_g are a sequence of labels, rules of parents, and tree depths (only labels are shown). L_s and L_a are used for predicting the word for DT. L_s and L_g are used for predicting the rule for DT. RULE: DT indicates DT will be a leaf node since the number of symbols is 1. In this tree, words are generated in the order: daughter I have a.)", "Denote the softmax function by σ.", "E_w ∈ R^{|W| × d_w} and E_r ∈ R^{|P| × d_r} are the embedding matrices for words, labels, and rules.", "A_w ∈ R^{n_w × d_w} and A_r ∈ R^{n_p × d_r} are weight matrices (n_w, n_p are the dimensions of input neurons).", "We use weight tying (Press and Wolf, 2017) to limit the search space for parameters.", "Word prediction.", "To decode w_k, we use the hidden states of the surface decoder L_s and the ancestor decoder L_a.", "If v_k is an heir, then P(w_k | n_k, H_k) = 1 if w_k = w_{p(k)} and 0 otherwise; if not, the probability of w_k is given by: P(w_k | n_k, H_k) = σ(tanh([h_s(k); h_a(k); c(k)] A_w) E_w^T).", "At decoding time, we impose an additional constraint that w_k be a valid word for label n_k, to enforce grammaticality.", "This is estimated from the co-occurrences of w_k and n_k in the tagged training corpus.", "We only use those words whose frequency of co-occurrence with the given label is above a certain threshold.", "Given the definition of the L-PCFG syntax tree, we only consider rules that contain n_k at decoding time.", "There is one exception: at the ROOT node, only unary rules are considered, and they do not have to contain the label ROOT.", "Hence, we train our architecture by minimizing the negative log-likelihood of words and rules: log P(T) = (1 / |W(T)|) Σ_{k : v_k ≠ f(v_{p(k)}, i_{p(k)})} log P(w_k | n_k, H_k) + (1 / |T|) Σ_k log P(r_k | n_k, H_k), where |W(T)| is the number of non-heir nodes (or the number of words in the original sentence), and |T| is the number of nodes in T.", "Note that the log probability of each word in the sentence appears exactly once in the above equation.", "At test time, we conduct beam search and use the same equation to score each generation when selecting words and rules.",
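The decoding-time vocabulary constraint can be sketched as a mask over the word logits; the co-occurrence store and the threshold value below are illustrative assumptions:

```python
import torch

def masked_word_logits(logits, label, cooc_counts, threshold=3):
    """logits: (|W|,) unnormalized word scores for the current node.
    cooc_counts: dict mapping a label n_k to a LongTensor of per-word
    co-occurrence counts from the tagged training corpus. Words below the
    (assumed) threshold are disallowed for this label."""
    mask = cooc_counts[label] >= threshold
    return torch.log_softmax(logits.masked_fill(~mask, float("-inf")), dim=-1)
```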
"In our experiments, we use an unlexicalized PCFG as an additional baseline.", "We still replace the labels of each node with their heirs' labels in the parse tree returned by the Stanford parser, but words are generated only at leaf nodes.", "This baseline has syntactic structure while generating words from left to right.", "We use it as a test against top-down generation of words with syntax.", "All models are implemented using PyTorch.", "The hidden size of all LSTM encoders and decoders is 512.", "The size of the embeddings of words, labels, rules, and tree depth is 300.", "We trained our models using stochastic gradient descent (SGD) with momentum and exponential learning rate decay.", "Dropout is applied to the input and output layers of the LSTMs.", "We evaluate our model for dialog response generation on the Persona dataset (Zhang et al., 2018).", "Each person is given a list of persona descriptions in simple sentences, and they are required to converse according to the given persona.", "We use the last 3 utterances before each response as the source.", "We prepend the persona descriptions to the source.", "We use global attention over persona descriptions to compute context vectors.", "During pre-processing, we truncate all trailing punctuation.", "We measure how early each type of word appears under different generation orders: the standard left-to-right order, dependency-based lexicalization (as in the Stanford parser), and content-based lexicalization.", "The earlier a word appears, the less context there is for predicting it.", "As shown in Table 1, content-based lexicalization makes nouns and verbs appear much earlier, while delaying function words.", "To verify that frequency-based lexicalization makes infrequent words appear earlier, we show the average frequencies of the first five words.", "The first few words are more important, since they decide the context for generating the following words.", "In Table 2, the first two words of each parse tree under frequency-based lexicalization indeed have lower average frequencies.", "(Table 3: Perplexities. Seq2seq / Ours: Standard 3.682 / N/A; Dependency 4.015 / 3.964; Content 4.115 / 3.865; Frequency 4.088 / 3.827.)", "5.3 Perplexity We compare per-word likelihood under different generation orders and architectures.", "For the left-to-right sequence decoder, the non-standard generation orders are obtained by linearizing L-PCFG parse trees in pre-order traversal, and the words of heirs are not repeated in the linearization.", "Note that our word model generates words without using rules and labels as inputs to its networks.", "As can be seen from Table 3, alternative word generation orders all make it harder for a standard left-to-right sequence decoder to learn to predict the next word.", "On the other hand, using a doubly-recurrent architecture, specifically the surface decoder and the ancestor decoder, improves perplexity scores for top-down word generation over the left-to-right decoder.", "While our word model with top-down word generation orders has higher perplexity scores than the simple model with the standard generation order, we emphasize that perplexity is not an appropriate measure for generation tasks.", "Word Overlap Based Metrics.", "We use BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) scores as automatic evaluation metrics.", "While the reliability of such metrics has been criticized (Liu et al., 2016), there is also evidence that for task-oriented domains these metrics correlate with human judgment to a certain extent (Sharma et al., 2017).", "Word Embedding Based Metrics.",
"We evaluate the semantic similarity between generated responses and human responses/personas by the cosine distance of their sentence embeddings.", "We use the word averaging approach of Arora et al. (2016) to embed the responses, which has been demonstrated to be very good at capturing lexical-level semantic similarity.", "The normalizing singular vector is obtained from the responses in the training set.", "Novelty and diversity.", "We measure word overlap between generated responses and the responses in the training set using BLEU and ROUGE as a proxy for novelty.", "The training-set responses sharing the most words with the generated responses are used as references.", "For diversity, we count the number of distinct n-grams.", "In addition, we perform k-means clustering of the sentence embeddings of the responses into 10 clusters, and measure the average squared Euclidean distance between members of each cluster (Inertia).", "The larger the number, the harder it is to separate the embeddings into 10 clusters, and thus the greater the diversity.",
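A rough sketch of the two diversity measures just described, assuming sentence embeddings have already been computed with the word-averaging approach above; reading Inertia as k-means within-cluster sum of squares normalized by corpus size is our interpretation, not a detail the text pins down:

```python
import numpy as np
from sklearn.cluster import KMeans

def distinct_ngrams(responses, n):
    """Count distinct n-grams over whitespace-tokenized responses."""
    grams = set()
    for r in responses:
        toks = r.split()
        grams.update(zip(*(toks[i:] for i in range(n))))
    return len(grams)

def clustering_inertia(embeddings, k=10, seed=0):
    """Average within-cluster squared distance after k-means with 10 clusters;
    larger values mean the embeddings are harder to separate, i.e. more diverse."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(embeddings)
    return km.inertia_ / len(embeddings)

print(distinct_ngrams(["i like dogs", "i like cats"], 2))  # -> 3
```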
"Evaluation results are shown in Table 4.", "For metrics that are not measured using the ground-truth response as reference, we consider values closer to those of human responses to be better.", "We first look at measures of overall generation quality.", "We can see that modeling syntactic structures makes the models capable of generating longer responses.", "BLEU scores are positively correlated with lengths.", "While the syntactic models do better on BLEU, and slightly better on METEOR, than the sequence-to-sequence baseline, they are generally not on par with the baseline in terms of ROUGE-L, except for frequency-based lexicalization.", "Among the grammar models, lexicalized grammars outperformed the unlexicalized grammar.", "Relevance is measured using cosine similarities with the previous utterance and the persona.", "Syntactic models with lexicalized grammar beat the baseline in terms of relevance.", "Furthermore, content-based lexicalization is much more on topic with the last source utterance than dependency-based and frequency-based lexicalization.", "(Table 6: Ablation studies, columns L_g + L_s / L_g + L_s + L_a / L_g. Lengths: 11.04 / 7.95 / 8.46; Distinct uni-grams: 813 / 383 / 376; Distinct bi-grams: 5427 / 1763 / 1775; Distinct tri-grams: 10787 / 3061 / 3522; Inertia: 3756 / 2049 / 2346; BLEU on training: 0.5402 / 0.8056 / 0.6612; ROUGE on training: 0.6226 / 0.8543 / 0.7417.)", "Dependency-based lexicalization adheres to personas best among the three lexicalization schemes.", "All syntactic models generate more novel responses than the sequence-to-sequence baseline, as reflected in the last two rows of Table 4.", "This is consistent with the observation that the sequence-to-sequence model exhibits retrieval-like behaviour, selecting what is most common in the training corpus.", "Syntactic models also have larger vocabularies.", "As for the cluster analysis, the unlexicalized grammar model and the dependency-based lexicalized grammar model have better diversity than the sequence-to-sequence model; content-based and frequency-based lexicalization have slightly smaller inertia than the baseline.", "We present a few examples generated by the sequence-to-sequence baseline and the L-PCFG model.", "There is a clear difference in how the left-to-right decoder and the L-PCFG tree decoder handle conjunctions.", "Most of the time, the standard LSTM decoder combines sentences with periods, while the tree decoders learn to use conjunction words, or even clauses.", "We also performed an error analysis on the responses generated by L-PCFG, and in Table 7 we selected the most peculiar and representative ones.", "These examples are all syntactically fine, but they do not follow the conventions of the language or common sense.", "The first example contains the most common errors in the responses generated by L-PCFG: misuse of prepositions and determiners.", "It can be fixed by replacing as a with for.", "The errors in the other two examples have even less to do with syntax.", "The second one misuses the verb be, which is probably caused by the high frequency of that word in the corpus.", "The error in the third example is beyond the surface level.", "Note that phrases such as cooking as a dinner and be a dog never appear in the corpus.", "It is clear that the L-PCFG models are learning to compose words and phrases, unlike the standard LSTM decoder, which seems to only memorize word combinations.", "(Table 7: Some peculiar examples generated by L-PCFG models: i am doing well . just finished cooking as a dinner / i am sure it is nice . i am going to be a dog / i like to ride my horses on my bike.)", "We perform ablation studies on our architecture, with content-based lexicalization.", "Specifically, we consider two alternative ways of making the rule prediction.", "The first one takes only the hidden state of L_g for predicting rules, hence making the prediction of rules entirely independent of words.", "The second one takes the hidden states of both L_s and L_a, together with L_g, for predicting rules, in which case future lexical information is used for the construction of syntax trees.", "For rule prediction using the hidden states of both the surface decoder and the ancestor decoder, we noticed a significant drop in the diversity of generated responses.", "The model over-predicts what do you do for a living more than 50% of the time, and the lengths of responses tend to be shorter.", "For the other choice, in which rule prediction is independent of words, the results are closer to those of the original model, but there are still some decreases in length and lexical diversity.", "Upon manual inspection, we found that this model behaved more like the sequence-to-sequence model.", "There are fewer compound sentences, and more simple sentences joined by end punctuation marks.", "The word overlap with the training responses is also larger.", "David Alvarez-Melis and Tommi S. Jaakkola. 2016. Tree-structured decoding with doubly-recurrent neural networks.", "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. A simple but tough-to-beat baseline for sentence embeddings.", "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.", "Gary S. Dell, Franklin Chang, and Zenzi M. Griffin. 1999. Connectionist models of language production: Lexical access and grammatical encoding.", "Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 33-43.", "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209.", "Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 72-78.",
"Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, and George Dahl. 2018. The importance of generation order in language modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2942-2946.", "Dan Klein and Christopher D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Advances in Neural Information Processing Systems, pages 3-10.", "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out.", "We consider the problem of generating natural language in alternative orders and with syntactic tree structures, with the use of lexicalized grammar.", "By incorporating syntactic structures, our models are capable of generating longer sentences.", "By changing lexicalization schemes and making content words appear earlier in the generation process, our models are able to make word choices that are more relevant to the source.", "Furthermore, incorporating syntax facilitates response generation with richer vocabularies and more complex structures.", "On the other hand, as shown in our error analysis, there is still room for improvement on the discourse and pragmatics level.", "This material is based upon work partially supported by the National Science Foundation (Award #1722822).", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, and no official endorsement should be inferred." ]
[ "objective", "abstain", "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "other", "other" ]
[ "With the popularity of smartphones, we have witnessed the rapid proliferation of multimodal posts on various social media platforms.", "We observe that the multimodal sentiment expression has specific global characteristics, such as the interdependencies of objects or scenes within the image.", "However, most previous studies only considered the representation of a single image-text post and failed to capture the global co-occurrence characteristics of the dataset.", "In this paper, we propose Multi-channel Graph Neural Networks with Sentiment-awareness (MGNNS) for image-text sentiment detection.", "Specifically, we first encode different modalities to capture hidden representations.", "Then, we introduce multichannel graph neural networks to learn multimodal representations based on the global characteristics of the dataset.", "Finally, we implement multimodal in-depth fusion with the multi-head attention mechanism to predict the sentiment of image-text pairs.", "Extensive experiments conducted on three publicly available datasets demonstrate the effectiveness of our approach for multimodal sentiment detection.", "The tasks of extracting and analyzing sentiments embedded in data have attracted substantial attention from both academic and industrial communities (Zhang et al., 2018; Yue et al., 2018).", "With the increased use of smartphones and the bloom of social media such as Twitter, Tumblr and Weibo, users can post multimodal tweets (e.g., text, image, and video) about diverse events and topics to convey their feelings and emotions.", "Therefore, multimodal sentiment analysis has become a popular research topic in recent years (Kaur and Kautish, 2019; Soleymani et al., 2017).", "As shown in Fig. 1, sentiment is no longer expressed by a pure modality in the multimodal scenario but rather by the", "com-(a) We have a fun day on the beach!", "bined expressions of multiple modalities (e.g., text, image, etc.).", "In contrast to unimodal data, multimodal data consist of more information and make the user's expression more vivid and interesting.", "We focus on multimodal sentiment detection for image-text pairs in social media posts.", "The problem of image-text mismatch and flaws in social media data, such as informality, typos, and a lack of punctuation, pose a fundamental challenge for the effective representation of multimodal data for the sentiment detection task.", "To tackle this challenge, Xu et al. (2017; 2017) constructed different networks for multimodal sentiment analysis, such as a Hierarchical Semantic Attentional Network (HSAN) and a Multimodal Deep Semantic Network (MDSN).", "Xu et al. (2018) and Yang et al. (2020) proposed a Co-Memory network (Co-Mem) and a Multi-view Attentional Network (MVAN) models, respectively, introducing memory networks to realize the interaction between modalities.", "The above methods treat each image-text post in the dataset as a single instance, and feature dependencies across instances are neglected or modeled implicitly.", "In fact, social media posts have specific global co-occurring characteristics, i.e., co-occurring words, objects, or scenes, which tend to share similar sentiment orientations and emotions.", "For example, the co-occurrences of the words have a fun/nice day and of the bright scenes ocean/beach in the two images in Fig. 
"How to more effectively make use of feature co-occurrences across instances and capture the global characteristics of the data remains a great challenge.", "We propose a Multi-channel Graph Neural Networks model with Sentiment-awareness (MGNNS) for multimodal sentiment analysis that consists of three stages.", "(i) Feature extraction.", "For the text modality, we encode the text and obtain a text memory bank; for the image modality, we first extract objects and scenes and then capture the image's semantic features from a multi-view perspective.", "(ii) Feature representation.", "We employ a Graph Neural Network (GNN) for the text modality based on global shared matrices; i.e., one text graph based on word co-occurrence is built from the whole dataset.", "Specifically, we first connect word nodes within an appropriately small window in the text.", "After that, we update each node representation based on itself as well as its neighbor nodes.", "For the image modality, it is believed that different views of an image, such as the beach (Scene view) and person (Object view) in Fig. 1(a), can reflect a user's emotions (Xu and Mao, 2017).", "The existing literature usually models the relationship between the scenes and objects within an image, failing to capture the rich co-occurrence information from the perspective of the whole dataset.", "In contrast, we explicitly build two graphs for scenes and objects according to their co-occurrences in the datasets, and propose Graph Convolutional Network (GCN) models over the two graphs to represent the images.", "In general, to tackle the isolated feature problem, we build multiple graphs for different modalities, with each GNN acting as a channel, and propose a Multi-channel Graph Neural Networks (Multi-GNN) module to capture the in-depth global characteristics of the data.", "This multi-channel based method can provide complementary representations from different sources (George and Marcel, 2021; George et al., 2019; Islam et al., 2019).", "(iii) Feature fusion.", "Previous studies usually directly concatenate multimodal representations, without considering multimodal interactions (Wang et al., 2020a; Xu, 2017; Xu and Mao, 2017).", "In this stage, we realize the pairwise interaction of the text and image modalities from different channels through the Multimodal Multi-head Attention Interaction (MMAI) module and obtain the fused representation.", "Our main contributions are summarized as follows: We propose a novel MGNNS framework that models the global characteristics of the dataset to handle the multimodal sentiment detection task.", "To the best of our knowledge, we are the first to apply GNNs to the image-text multimodal sentiment detection task.", "We construct the MMAI module over different channels to realize in-depth multimodal interaction.", "We conduct extensive experiments on three publicly available datasets, and the results show that our model outperforms state-of-the-art methods.", "2.1 Multimodal Sentiment Analysis For convenience, multimodal polarity analysis and emotion analysis are unified under multimodal sentiment analysis.", "Traditional machine learning methods have been adopted to address the multimodal sentiment analysis task (Perez-Rosas et al., 2013; You et al., 2016).", "Recently, deep learning models have also achieved promising results for this task.", "For video datasets, Wang et al. (2020b) proposed a novel method, TransModality, to fuse multimodal features with end-to-end translation models; Zhang et al. (2020) leveraged semi-supervised variational autoencoders to mine more information from unlabeled data; and Hazarika et al. (2020) constructed a novel framework, MISA, which projects each modality into two distinct subspaces: modality-invariant and modality-specific.",
"There is a massive amount of image-text data on social platforms, and thus image-text multimodal sentiment analysis has attracted the attention of many researchers.", "Xu et al. constructed different networks for multimodal sentiment analysis: HSAN (2017), MDSN (2017), and Co-Mem (2018).", "Yang et al. (2020) built an image-text emotion dataset, named TumEmo, and further proposed MVAN for multimodal emotion analysis.", "Graph Neural Networks have achieved promising results for text classification, multi-label recognition, and multimodal tasks.", "For text classification, the Graph Neural Network (GNN) and its variants have developed rapidly, and their performance is better than that of traditional methods; examples include Text GCN (Yao et al., 2019), TensorGCN (Liu et al., 2020), and TextLevelGNN (Huang et al., 2019).", "The GCN has also been introduced into the multi-label image recognition task to model label dependencies (Chen et al., 2019).", "Recently, Graph Convolutional Networks have been applied to different multimodal tasks, such as Visual Dialog (Guo et al., 2020; Khademi, 2020), multimodal fake news detection (Wang et al., 2020a), and Visual Question Answering (VQA) (Hudson and Manning, 2019; Khademi, 2020).", "Jiang et al. (2020) applied a novel Knowledge-Bridge Graph Network (KBGN) to model the relations among cross-modal information in visual dialogue at fine granularity.", "Wang et al. (2020a) proposed a novel Knowledge-driven Multimodal Graph Convolutional Network (KMGCN) to model semantic representations for fake news detection.", "However, the KMGCN extracted visual words as visual information and did not make full use of the global information of the image.", "Khademi (2020) introduced a new neural network architecture, the Multimodal Neural Graph Memory Network (MN-GMN), for VQA; this model constructed a visual graph network based on bounding boxes, which produce overlapping parts that might provide redundant information.", "For the image-text datasets, we found that certain words often appear together in a text post, and different objects or scenes within an image have specific co-occurrences that indicate certain sentiments.", "We explicitly model these global characteristics of the dataset through a multi-channel GNN.", "Fig. 2 illustrates the overall architecture of our proposed MGNNS model for multimodal sentiment detection, which consists of three modules: the encoding module, the Multi-GNN module, and the multimodal interaction module.",
"We first encode the text and image inputs into hidden representations.", "Then, we introduce GNNs over different channels to learn multiple modal representations.", "In this paper, the channels are the Text-GNN (TG) module, the Image-GCN-Scene (IGS) module, and the Image-GCN-Object (IGO) module.", "Finally, we realize the in-depth interactions between different modalities by multimodal multi-head attention.", "The goal of our model is to identify which sentiment is expressed by an image-text post.", "Given a set of multimodal posts from social media, P = {(T_1, V_1), ..., (T_N, V_N)}, where T_i is the text modality, V_i is the corresponding visual information, and N represents the number of posts, we need to learn a model f : P -> L to classify each post (T_i, V_i) into the predefined categories L_i.", "For polarity classification, L_i ∈ {Positive, Neutral, Negative}; for emotion classification, L_i ∈ {Angry, Bored, Calm, Fear, Happy, Love, Sad}.", "For the text modality, we first encode the words with GloVe (Pennington et al., 2014) to obtain the embedding vectors, and then obtain the text memory bank M_t with a BiGRU (Cho et al., 2014): M_t = BiGRU(T), M_t ∈ R^{L_t × 2d_t}, (1) where T is the text sequence, L_t is the maximum length of a padded text sequence, and d_t is the dimension of the hidden units in the BiGRU.", "For the image modality, we extract image features from both the object and scene views to capture sufficient information.", "We believe that there are interdependencies between different objects or scenes in an image.", "To explicitly model these co-occurrences, we first extract objects O = {o_1, ..., o_{l_o}} with YOLOv3 (Farhadi and Redmon, 2018), and extract scenes S = {s_1, ..., s_{l_s}} with VGG-Place (Zhou et al., 2017).", "Finally, we obtain the object and scene memory banks with a pretrained ResNet (He et al., 2016).", "Thus, if an input image V has a 448 × 448 resolution and is split into 14 × 14 = 196 visual blocks of the same size, then each block is represented by a 2,048-dimensional vector.",
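A minimal sketch of this encoding step with assumed dimensions; the embedding table stands in for a GloVe lookup and the random feature map for a pretrained ResNet output:

```python
import torch
import torch.nn as nn

# Assumed sizes: vocab 10k, GloVe dim 300, BiGRU hidden d_t = 256, L_t = 50.
embed = nn.Embedding(10_000, 300)            # stand-in for a GloVe lookup
bigru = nn.GRU(300, 256, batch_first=True, bidirectional=True)

tokens = torch.randint(0, 10_000, (1, 50))   # one padded text sequence
M_t, _ = bigru(embed(tokens))                # (1, L_t, 2*d_t): text memory bank

feat_map = torch.randn(1, 2048, 14, 14)      # pretrained-ResNet output (assumed)
M_v = feat_map.flatten(2).transpose(1, 2)    # (1, 196, 2048): visual blocks
```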
"In this subsection, we present our proposed Multi-GNN module.", "As Fig. 2 shows, this module consists of the TG channel (middle), the IGO channel (right), and the IGS channel (left).", "Text GNN: As shown in the middle of Fig. 2, motivated by Huang et al. (2019), we learn the text representation through a text-level GNN.", "For a text with l_t words T = {w_1, ..., w_k, ..., w_{l_t}}, the k-th word w_k is initialized with a GloVe embedding r^t_k ∈ R^d, d = 300.", "We build the text graph based on the vocabulary of the training dataset, which is defined as follows: N^t = {w_k | k ∈ [1, l_t]}, E^t = {e^t_{k,j} | k ∈ [1, l_t], j ∈ [k - ws, k + ws]}, where N^t and E^t are the sets of nodes and edges of the text graph, respectively.", "The word representations in N^t and the edge weights in E^t are taken from global shared matrices built over the vocabulary and the edge set of the dataset, respectively.", "That is, the representations of the same nodes and the weights of the same edges are shared globally.", "e^t_{k,j} is initialized with point-wise mutual information (PMI) (Wang et al., 2020a) and is learned during training.", "ws is the hyperparameter sliding window size, which indicates how many adjacent nodes are connected to each word in the text graph.", "Then, we update the node representations based on their original representations and their neighboring nodes through the message passing mechanism (MPM) (Gilmer et al., 2017), which is defined as follows: A^t_k = max_{j ∈ N^{ws}_k} e^t_{kj} r^t_j, (5) r^t_k' = α r^t_k + (1 - α) A^t_k, (6) where A^t_k ∈ R^d is the information aggregated from the neighboring nodes from node k - ws to k + ws, and max is the reduction function; α is a trainable variable that indicates how much original information of the node should be kept, and r^t_k' ∈ R^d is the updated representation of node k.", "Image GCN: In this module, we explicitly model the interdependence within l_x scenes or objects through IGX, as shown on the left and right sides of Fig. 2, respectively.", "The graph of the image is defined as follows: N^x = {x_p | p ∈ [1, l_x]}, (8) where N^x ∈ R^{C_x} is the set of nodes of IGX; x or X ∈ {Object, Scene}, C_x = 80 when x = Object, and C_x = 365 when x = Scene.", "To build the edges of IGX, we first build the global shared co-occurrence matrix over the dataset: E^x = {e^x_{p,q} | p ∈ [1, l_x], q ∈ [1, l_x]}, (9) where E^x ∈ R^{C_x × C_x} is the co-occurrence matrix, and the edge weight e^x_{p,q} indicates how many times x_p and x_q co-occur in the dataset.", "Then, we calculate the conditional probability for node p as follows: P^x_{p,q} = e^x_{p,q} / N^x_p, when q ≠ p, (10) where N^x_p denotes the number of occurrences of x_p in the dataset.", "Note that P^x_{p,q} ≠ P^x_{q,p}.", "As mentioned by Chen et al. (2019), this simple correlation may suffer several drawbacks, so we binarize it: B^x_{p,q} = 1 if P^x_{p,q} >= τ and 0 otherwise, (11) where τ is the hyperparameter used to filter noisy edges.", "It is obvious that the role of the central node is different from that of the neighboring nodes, so we further calculate the weights of the edges: R^x_{p,q} = 1 if p = q, and λ B^x_{p,q} / Σ_{q=1}^{C_x} B^x_{p,q} if p ≠ q, (12) where R^x ∈ R^{C_x × C_x} is the weighted co-occurrence matrix, and the hyperparameter λ indicates the importance of the neighboring nodes.", "Finally, we feed the nodes N^x and edges R^x of the image into the graph convolutional network.", "As in Kipf and Welling (2016), every layer can be calculated as follows: H^x_{L+1} = h(R̂^x H^x_L W^x_L), (13) where H^x_L ∈ R^{C_x × d_x}, H^x_{L+1} ∈ R^{C_x × d_x'}, W^x_L ∈ R^{d_x × d_x'}, and R̂^x ∈ R^{C_x × C_x} is the normalized representation of R^x; h(·) is a non-linear operation.", "When L = 1, H^x_1 is the word-embedding vector of N^x.", "By stacking multiple GCN layers, we can explicitly learn and model the complex interdependence of the nodes.", "Then, we obtain the image representation with object or scene dependencies: I^x = MaxPooling(M^x)(H^x_{L+1})^T, I^x ∈ R^{C_x}. (14)",
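The edge construction of Eqs. (10)-(12) can be sketched in a few lines of NumPy; the exact placement of λ in the off-diagonal weights is our reading of Eq. (12), and the data here is a toy stand-in:

```python
import numpy as np

def build_edges(E, N, tau=0.4, lam=0.2):
    """E: (C, C) co-occurrence counts; N: (C,) per-node occurrence counts."""
    P = E / np.maximum(N[:, None], 1)                     # Eq. (10): P[p, q] ~ P(q | p)
    B = (P >= tau).astype(float)                          # Eq. (11): drop noisy edges
    np.fill_diagonal(B, 0.0)
    R = lam * B / np.maximum(B.sum(1, keepdims=True), 1)  # Eq. (12): neighbor weights
    np.fill_diagonal(R, 1.0)                              # Eq. (12): self-weight 1
    return R
```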
"However, this does not capture the relationship between the nodes and the sentiments.", "Therefore, we learn a sentiment-aware image representation through multi-head attention (Vaswani et al., 2017): Att(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, (15) EI^x = MH(Q, K, V) = Concat(head_1, ..., head_H) W^O, where head_h = Att(Q W^Q_h, K W^K_h, V W^V_h), (16) where MH(·) is multi-head attention; W^Q_h ∈ R^{d × d_k}, W^K_h ∈ R^{d_model × d_k}, W^V_h ∈ R^{d_model × d_v}, and W^O ∈ R^{H d_v × d}; and H = 5, d_model = 300, d_k = d_v = 60.", "Q ∈ R^{l_s × d} is a sentiment embedding matrix built from the label set, with l_s = 3 for polarity classification and l_s = 7 for emotion classification; K = V = I^x W^I, W^I ∈ R^{C_x × d_model}, K, V ∈ R^{d_model}.", "Motivated by the Transformer (Vaswani et al., 2017) prototype, we design a Multimodal Multi-head Attention Interaction (MMAI) module that can effectively learn the interactions between the text modality and the image modality over multiple channels, as shown in Fig. 3.", "We employ the MMAI to obtain the Text-guided Image-X representations and the Image-X-guided Text representations, X ∈ {Object, Scene}.", "For the Text-guided Image-X attention: O^{TgX}_{N+1} = LN(MH(Q = H^{TgX}_N, K = V = M^x) + H^{TgX}_N), (17) H^{TgX}_{N+1} = LN(FFN(O^{TgX}_{N+1}) + O^{TgX}_{N+1}), (18) where LN(·) is layer normalization and FFN(·) is a feed-forward network.", "When N = 1, H^{TgX}_1 = T', as in Eq. (7).", "For the Image-X-guided Text attention: O^{XgT}_{N+1} = LN(MH(Q = H^{XgT}_N, K = V = M^t) + H^{XgT}_N), (19) H^{XgT}_{N+1} = LN(FFN(O^{XgT}_{N+1}) + O^{XgT}_{N+1}), (20) where, when N = 1, H^{XgT}_1 = EI^x, as in Eq. (16).", "For MH, H = 4, d_model = 512, d_k = d_v = 128.", "The fused multimodal representation is as follows: R_m = [H^{TgO}_N ⊕ H^{TgS}_N ⊕ H^{OgT}_N ⊕ H^{SgT}_N], where ⊕ is the concatenation operation.", "L_m = softmax(w_s R_m + b_s), L_m ∈ R^{l_s}, (21) where w_s and b_s are the parameters of the fully-connected layer.",
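One MMAI layer (Eqs. 17-18) can be sketched with standard PyTorch modules; the feed-forward width and the projection of the image memory bank to d_model are assumptions, and the class name is ours:

```python
import torch
import torch.nn as nn

class MMAILayer(nn.Module):
    """One cross-modal attention block in the style of Eqs. (17)-(18)."""
    def __init__(self, d_model=512, n_heads=4, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, query, memory):
        o, _ = self.attn(query, memory, memory)  # MH(Q=query, K=V=memory)
        h = self.ln1(o + query)                  # Eq. (17)
        return self.ln2(self.ffn(h) + h)         # Eq. (18)

layer = MMAILayer()
text = torch.randn(2, 30, 512)    # e.g. H^{TgX}_N
image = torch.randn(2, 196, 512)  # projected image memory bank (projection assumed)
out = layer(text, image)          # text-guided Image-X representation
```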
"4.2 Experimental Setup.", "We adopt the cross-entropy loss function and the Adam optimizer.", "In the process of extracting objects and scenes, we keep the objects detected with probability greater than 0.5 and the top-5 scenes, respectively.", "The other parameters are listed in Table 2, where $* \in \{\text{Single}, \text{Multiple}\}$ indexes the MVSA datasets.

Table 2: Parameter settings of the different datasets.
Parameter | MVSA-* | TumEmo
Learning rate | 4e-5 | 5e-5
ws | 4 | 5
tau (Object) | 0.4 | 0.4
tau (Scene) | 0.3 | 0.5
lambda | 0.2 | 0.2
L_x | 2 | 2
N_TgX | 1 | 1
N_XgT | 1 | 1", "We use Accuracy (Acc) and F1-score (F1) as evaluation metrics.", "All models are implemented with PyTorch.", "We compare our model with multimodal sentiment models using the same modalities and with unimodal baseline models.", "Unimodal Baselines: For the text modality, CNN (Kim, 2014) and Bi-LSTM (Zhou et al., 2016) are well-known models for text classification tasks, and BiACNN (Lai et al., 2015) incorporates the CNN and BiLSTM models with an attention mechanism for text sentiment analysis.", "TGNN (Huang et al., 2019) is a text-level graph neural network for text classification.

Table 3: Experimental results (Acc and F1) on the three datasets; * represents a reproduced result.
Modality | Model | MVSA-Single Acc | MVSA-Single F1 | MVSA-Multiple Acc | MVSA-Multiple F1 | TumEmo Acc | TumEmo F1
Text | CNN | 0.6819 | 0.5590 | 0.6564 | 0.5766 | 0.6154 | 0.4774
Text | BiLSTM | 0.7012 | 0.6506 | 0.6790 | 0.6790 | 0.6188 | 0.5126
Text | BiACNN | 0.7036 | 0.6916 | 0.6847 | 0.6319 | 0.6212 | 0.5016
Text | TGNN | 0.7034 | 0.6594 | 0.6967 | 0.6180 | 0.6379 | 0.6362
Image | OSDA | 0.6675 | 0.6651 | 0.6662 | 0.6623 | 0.4770 | 0.3438
Image | SGN | 0.6620 | 0.6248 | 0.6765 | 0.5864 | 0.4353 | 0.4232
Image | OGN | 0.6659 | 0.6191 | 0.6743 | 0.6010 | 0.4564 | 0.4446
Image | DuIG | 0.6822 | 0.6538 | 0.6819 | 0.6081 | 0.4636 | 0.4561
Image-Text | HSAN | 0.6988 | 0.6690 | 0.6796 | 0.6776 | 0.6309 | 0.5398
Image-Text | MDSN | 0.6984 | 0.6963 | 0.6886 | 0.6811 | 0.6418 | 0.5692
Image-Text | Co-Mem | 0.7051 | 0.7001 | 0.6992 | 0.6983 | 0.6426 | 0.5909
Image-Text | MVAN | 0.7298 | 0.7139 | 0.7183 | 0.7038 | 0.6553 | 0.6543
Image-Text | MGNNS | 0.7377 | 0.7270 | 0.7249 | 0.6934 | 0.6672 | 0.6669",
"For the image modality, OSDA (Yang et al., 2020) is an image sentiment analysis model based on multiple views.", "Note that SGN, OGN, and DuIG are variants of our model that rely only on the image modality.", "SGN and OGN are image graph convolutional networks based on scenes and objects, respectively, for image sentiment analysis.", "DuIG is the image graph convolutional network with dual views, i.e., Object and Scene.", "Multimodal Baselines: HSAN (Xu, 2017) is a hierarchical semantic attentional network based on image captions for multimodal sentiment analysis.", "MDSN (Xu and Mao, 2017) is a deep semantic network with attention for multimodal sentiment analysis.", "Co-Mem (Xu et al., 2018) is a co-memory network for iteratively modeling the interactions between multiple modalities.", "MVAN (Yang et al., 2020) is a multi-view attentional network that utilizes a memory network for multimodal emotion analysis.", "This model achieves state-of-the-art performance on image-text multimodal sentiment classification tasks.", "The experimental results of the baseline methods and our model are shown in Table 3, where MGNNS denotes our model based on multi-channel graph neural networks.", "First, our model (MGNNS) is competitive with the other strong baseline models on the three datasets.", "Note that the data distribution of MVSA is extremely unbalanced.", "Thus, we reproduce the MVAN model with the Acc and Weighted-F1 metrics instead of the Micro-F1 metric used in the original paper, which is more realistic.", "Second, the multimodal sentiment analysis models perform better than most of the unimodal sentiment analysis models on all three datasets.", "Moreover, sentiment indicators are difficult to capture in images owing to their low information density, so sentiment analysis on the image modality alone achieves the worst results.", "Finally, the TGNN unimodal model outperforms the HSAN multimodal model, indicating that the GNN has excellent performance in sentiment analysis.", "We conduct ablation experiments on the MGNNS model to demonstrate the effectiveness of its different modules.", "Table 4 shows that the whole MGNNS model achieves the best performance among all variants.

Table 4: Ablation experiment results.
Datasets | Model | Acc | F1
MVSA-Single | w/o MGNN | 0.7010 | 0.6847
MVSA-Single | w/o MMAI | 0.7108 | 0.6879
MVSA-Single | +CoAtt | 0.7255 | 0.6986
MVSA-Single | w/o Scene | 0.7304 | 0.6988
MVSA-Single | w/o Object | 0.7034 | 0.6900
MVSA-Single | MGNNS | 0.7377 | 0.7270
MVSA-Multiple | w/o MGNN | 0.7019 | 0.6752
MVSA-Multiple | w/o MMAI | 0.7128 | 0.6792
MVSA-Multiple | +CoAtt | 0.7210 | 0.6849
MVSA-Multiple | w/o Scene | 0.7170 | 0.6797
MVSA-Multiple | w/o Object | 0.7110 | 0.6848
MVSA-Multiple | MGNNS | 0.7249 | 0.6934
TumEmo | w/o MGNN | 0.6553 | 0.6547
TumEmo | w/o MMAI | 0.6370 | 0.6347
TumEmo | +CoAtt | 0.6624 | 0.6606
TumEmo | w/o Scene | 0.6618 | 0.6593
TumEmo | w/o Object | 0.6592 | 0.6584
TumEmo | MGNNS | 0.6672 | 0.6669", "To show the contribution of the Multi-GNN module, we replace the Text-GNN with a CNN and the Image-GCN with a pretrained ResNet.", "The removal of the MMAI module (w/o MMAI) and of the Multi-GNN module (w/o MGNN) adversely affects the model results, which indicates that these modules are useful for multimodal sentiment analysis.", "By replacing the MMAI module with the CoAtt (Lu et al., 2016) module (+CoAtt), the model performance is found to be slightly worse than that of the full MGNNS model.", "This further illustrates the importance of multimodal interactions and the superiority of the MMAI module.", "When either the object view (w/o Object) or the scene view (w/o Scene) is removed, the performance of the model declines, which indicates that both views of the image are effective for multimodal sentiment analysis.",
"In the Multi-GNN module, we build multiple graphs for the different modalities based on the dataset.", "For different datasets, the graphs built by the unimodal models are different.", "However, can a graph captured from one dataset (e.g., MVSA-Single) have positive effects on other datasets (e.g., TumEmo)?", "In this subsection, we verify the transferability of the model through experiments.", "As Table 5 shows, the following conclusions can be drawn:", "(i) Regardless of the modality, text or image, the experimental results based on graphs transferred from other datasets are worse than those based on the graph constructed from a dataset's own data.", "This is mainly because each dataset has unique global characteristics, so the results based on transferred graphs are slightly worse.", "(ii) However, owing to the commonality of datasets when expressing the same emotions, the results of the transferred models are not uniformly worse.", "For example, for the image modality, the same scenes and objects can appear in different images across different datasets.", "Therefore, graphs from different datasets have some transferability and can be used for other datasets.", "(iii) Across datasets, the experimental results of X2Y-Text are worse than those of X2Y-Image.", "That is, the text graph has worse transferability.", "The reason may be that text graphs with varying node sets are created based on the vocabulary of each dataset.", "Two situations in a transferred text graph can seriously affect the results: fewer nodes lose information, while more nodes provide redundant information.", "(iv) When the gap between datasets is relatively wide, the transferability of text graphs is worse.", "For example, when transferring from the larger datasets to the smallest dataset (T2S-Text and M2S-Text), the experimental results drop by 2.45% and 2.69%, respectively; when transferring from the smaller datasets to the largest dataset (S2T-Text and M2T-Text), the experimental results show significant drops of 4.81% and 4.09%, respectively.", "Hyperparameter $ws$: To obtain adequate information from neighboring nodes in the TGNN, we conduct experiments under different settings of the hyperparameter $ws$ in Eq. 4; the related results are shown in Fig. 4.", "[Figure 4: Acc comparisons with different values of $ws$; panel (b) shows comparisons on TumEmo.]", "The best $ws$ varies among datasets since the average text length of TumEmo is longer than that of the other data.", "The TGNN cannot obtain sufficient information from neighboring nodes when $ws$ is too small, while larger values may degrade performance owing to the redundant information provided by neighboring nodes.", "Hyperparameter $\tau$: We vary the values of the hyperparameter $\tau$ in Eq. 11 for the binary co-occurrence matrix of the different views; the results are shown in Fig. 5.", "We find that the best value differs across views and datasets.", "For MVSA-$*$, a smaller value preserves more edges and thus captures more information, since the scene co-occurrence matrix is sparser than that of the object view.", "For TumEmo, with its large amount of data, preserving the top-5 scenes produces many noisy edges, so the value of $\tau$ for the scene view is greater than that for MVSA-$*$.",
"Hyperparameter $\lambda$: As Fig. 6 shows, the model achieves the best performance on all three datasets when $\lambda$ is 0.2.", "When $\lambda$ is smaller, the neighboring nodes do not receive enough attention; in contrast, when $\lambda$ is larger, the node's own information is not fully utilized.", "[Figure 6: Acc comparisons with different $\lambda$ values; panel (b) shows comparisons on TumEmo.]", "This paper proposes a novel model, MGNNS, that is built on the global characteristics of the dataset for multimodal sentiment detection tasks.", "As far as we know, this is the first application of graph neural networks to image-text multimodal sentiment analysis.", "The experimental results on publicly available datasets demonstrate that our proposed model is competitive with strong baseline models.", "In future work, we plan to construct a model that combines the advantages of GNNs and pretrained models such as BERT and VisualBERT.", "We also want to design a reasonable algorithm to characterize the quality of the objects and scenes selected from the image, and to further improve the representation ability of the model.", "The project is supported by the National Key R&D Program of China (2018YFB1004700) and by the National Natural Science Foundation of China (61772122, 61872074, U1811261)." ]
[ "abstain", "result", "abstain", "objective", "objective", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "result", "objective", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "other", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "result", "other" ]
[ "Multilingual pre-trained models could leverage the training data from a rich source language (such as English) to improve the performance on low resource languages.", "However, the transfer effectiveness on the multilingual Machine Reading Comprehension (MRC) task is substantially poorer than that for sentence classification tasks, mainly due to the requirement of MRC to detect the word level answer boundary.", "In this paper, we propose two auxiliary tasks to introduce additional phrase boundary supervision in the fine-tuning stage: (1) a mixed MRC task, which translates the question or passage to other languages and builds cross-lingual question-passage pairs; and (2) a language-agnostic knowledge masking task by leveraging knowledge phrases mined from the Web.", "Extensive experiments on two cross-lingual MRC datasets show the effectiveness of our proposed approach.", "Machine Reading Comprehension (MRC) plays a critical role in the assessment of how well a machine could understand natural language.", "Among various types of MRC tasks, the span extractive reading comprehension task (like SQuAD (Ra-jpurkar et al., 2016)) has been become very popular.", "Promising achievements have been made with neural network based approaches (Seo et al., 2017; Wang et al., 2017; Xiong et al., 2018; Yu et al., 2018; Hu et al., 2017), especially those built on pre-trained language models such as BERT (Devlin et al., 2018), due to the availability of large-scale annotated corpora (Hermann et al., 2015; Rajpurkar et al., 2016; Joshi et al., 2017).", "However, these large-scale annotated corpora are Work is done during internship at STCA NLP Group, Microsoft.", "mostly exclusive to English, while research about MRC on languages other than English (i.e. multilingual MRC) has been limited due to the absence of sufficient training data.", "To alleviate the scarcity of training data for multilingual MRC, the translation based data augmentation approaches were firstly proposed.", "For example, (question q , passage p , answer a ) in English SQuAD can be translated into ( q (cid:48) , p (cid:48) , a (cid:48) ) in other languages (Asai et al., 2018) to enrich the non-English MRC training data.", "However, these approaches are limited by the quality of the translators, especially for those low resource languages.", "Most recently, approaches based on multilingual/cross-lingual pre-trained models (Devlin et al., 2018; Lample and Conneau, 2019; Huang et al., 2019; Yang et al., 2019) have proved very effective on several cross-lingual NLU tasks.", "These approaches learn language-agnostic features and align language representations in vector space during multilingual pre-training process (Wang et al., 2019; Castellucci et al., 2019; Keung et al., 2019; [Question]: who were the kings of the southern kingdom [Passage]: In the southern kingdom there was only one dynasty, that of king David, except usurper Athaliah from the northern kingdom, who by marriage, [...] [Answer ground truth]: king David [Answer model predication:] David, except usurper Athaliah [Question]: What is the suggested initial does dosage of chlordiazepoxide [Passage]: If the drug is administered orally, the suggested initial dose is 50 to 100 mg, to be followed by repeated doses as needed until agitation is controlled up to 300 mg per day. [...] [Answer ground truth]: 50 to 100 mg [Answer model predication:] 100 mg Table 2: Bad answer boundary detection cases of multilingual MRC model. 
Jing et al., 2019; Cui et al., 2019).", "On top of these cross-lingual pre-trained models, zero-shot learning with English data only, or few-shot learning with an additional small set of non-English data derived from either translation or human annotation, can be conducted.", "Although these methods achieve significant improvements on sentence-level multilingual tasks (like the XNLI task (Conneau et al., 2018)), their effectiveness on phrase-level multilingual tasks is still limited.", "As shown in Table 1, MRC exhibits a bigger gap between non-English languages and English than sentence-level classification tasks do.", "To be specific, the EM metrics for non-English languages show a 20+ point gap with the counterpart for English on average.", "For extractive MRC, the EM metric is critical since it indicates the answer boundary detection capability, i.e., the accuracy of extracted answer spans.", "In Table 2, there are two multilingual MRC cases with wrong boundary detection.", "In real scenarios, such bad extractive answers have a negative impact on the user experience.", "One interesting finding from our case study is that the multilingual MRC model can often roughly locate the correct span but still fail to predict the precise boundary (e.g., missing or adding some words in the spans, as in the cases in Table 2).", "For example, an error analysis of XLM on MLQA (Lewis et al., 2019) showed that about 49% of errors come from answers that partially overlap with the golden span.", "Another finding is that a large fraction (about 70% according to MLQA) of the extractive spans are language-specific phrases (a kind of broad knowledge, such as entities or n-gram noun phrases).", "We call such phrases knowledge phrases in the rest of the paper, and leverage them as prior knowledge in our model.", "Motivated by the above observations, we propose two auxiliary tasks to enhance boundary detection for multilingual MRC, especially for low-resource languages.", "First, we design a cross-lingual MRC task with mixed-language (question, passage) pairs to better align the language representations.", "We then propose a knowledge phrase masking task as well as a language-agnostic method to generate per-language knowledge phrases from the Web.", "Extensive experiments on two multilingual MRC datasets show that our proposed tasks can substantially boost model performance on answer span boundary detection.", "The main contributions of our paper can be summarized as follows.", "We design two novel auxiliary tasks in multi-task fine-tuning to help improve the accuracy of answer span boundary detection for multilingual MRC models.", "We propose a language-agnostic method to mine language-specific knowledge phrases from search engines.", "This method is lightweight and easy to scale to any language.", "We conduct extensive experiments to prove the effectiveness of our proposed approach.", "In addition to an open benchmark dataset, we also create a new multilingual MRC dataset from a real scenario, together with fine-grained answer type labels for in-depth impact analysis.", "A straightforward approach leverages translation to translate training data in a rich-resource language into a low-resource language.", "Asai et al. (2018) proposed to use run-time machine translation for multilingual extractive reading comprehension.", "Cui et al. (2019) developed several back-translation methods for cross-lingual MRC.", "Singh et al.
(2019) introduced a translation-based data augmentation mechanism for question answering.", "However, these methods depend highly on the availability and quality of translation systems.", "Another approach to multilingual NLU extracts language-independent features to address multilingual NLU tasks.", "Some works (Keung et al., 2019; Jia and Liang, 2017; Chen et al., 2019) apply adversarial techniques to learn language-invariant features and achieve significant performance gains.", "More recently, there has been an increasing trend to design cross-lingual pre-trained models, such as multilingual BERT (Devlin et al., 2018), XLM (Lample and Conneau, 2019), and Unicoder (Huang et al., 2019), which show promising results owing to the capability of cross-lingual representations in a shared contextual space (Pires et al., 2019).", "In this paper, we propose two novel sub-tasks for fine-tuning cross-lingual models for MRC.", "Prior works (Yang and Mitchell, 2017; Mihaylov and Frank, 2018; Weissenborn et al., 2017; Sun et al., 2018) mostly focus on leveraging structured knowledge from knowledge bases (KBs) to enhance MRC models, following a retrieve-then-encode paradigm: relevant knowledge is retrieved from the KB first, and sequence modeling methods are then used to capture complex knowledge features.", "However, such a paradigm often suffers from the sparseness of knowledge graphs.", "Recently, some works fuse knowledge into pre-trained models to obtain knowledge-enhanced language representations.", "Zhang et al. (2019) use both large-scale textual corpora and knowledge graphs to train an enhanced language representation.", "Sun et al. (2019) construct unsupervised pre-training tasks with large-scale data and prior knowledge to help the model efficiently learn lexical, syntactic and semantic representations, significantly outperforming BERT on MRC.", "Most previous works on knowledge-based MRC are limited to English only.", "Meanwhile, the requirement of acquiring large-scale prior knowledge (such as entity linking or NER models) may be challenging to meet for non-English languages.", "In this work, we propose a lightweight, language-agnostic knowledge phrase mining approach and design a knowledge phrase masking task to boost model performance for multilingual MRC.", "In this section, we first introduce the overall training procedure, and then introduce the two new tasks, namely Mixed Machine Reading Comprehension (mixMRC) and Language-agnostic Knowledge Phrase Masking (LAKM).", "The overview of our training procedure is shown in Figure 1.",
"Our approach is built on top of popular multilingual pre-trained models (such as multilingual BERT and XLM).", "We concatenate the passage and (optionally) the question together with the special tokens [Start] and [Delim] as the input sequence of our model, and transform the word embeddings into contextually encoded token representations using a transformer.", "[Figure 2 example: English Query: what does the last name wood come from; English Passage: The last name Woods comes from both ([ the English and scottish ]).]", "Finally, this contextual representation is used for all three tasks introduced below.", "The first task, also our main task, is multilingual MRC, which aims to extract answer spans from the context passage according to the question.", "In this task, each language has its own data.", "However, only English has human-labeled training data; the other languages use training data machine-translated from English.", "During training, the MRC training data in all languages are used together for fine-tuning.", "In the following, we introduce our newly proposed tasks, which are jointly trained with our main task to boost multilingual MRC performance.", "We propose a task, named mixMRC, to detect answer boundaries even when the question and passage are in different languages, as shown in Figure 1(b).", "It is mainly motivated by the strategy of data augmentation (Singh et al., 2019).", "In detail, we utilize mixMRC to derive more accurate answer span boundaries from the constructed question-passage pairs.", "Obtaining these pairs consists of two steps: 1) translate the training data from English into non-English languages; 2) construct mixed-language training data for the mixMRC task.", "We show the entire data generation process in Figure 2.", "Step 1: Data Translation.", "When using a machine translation system to translate paragraphs and questions from English into non-English languages, the key challenge is how to preserve the answer span under translation.", "To solve this problem, we enclose the answer text of the source passage in a special token pair ([ and ]), similar to (Lee et al., 2018).", "After translation, we discard the training instances where the translation model does not map the answer into a span well.", "Some of the discarded data can still be recovered by finding the translated answer in the translated passage.", "The statistics of the translated data are shown in Table 3.",
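A compact sketch of the two-step pair construction is given below. The `translate` argument is a hypothetical machine translation helper and the marker-recovery logic is a simplified rendition of the procedure described above, under the assumption that the answer appears verbatim in the passage.

```python
import itertools
import random

def translate_with_markers(question, passage, answer, translate, lang):
    """Step 1 sketch: wrap the answer in '[' ']' so its span survives
    translation; 'translate' is a hypothetical MT helper."""
    marked = passage.replace(answer, f"[ {answer} ]", 1)
    q_t = translate(question, lang)
    p_t = translate(marked, lang)
    if "[" not in p_t or "]" not in p_t:
        return None  # discard: the answer did not map to a clean span
    start, end = p_t.index("["), p_t.index("]")
    answer_t = p_t[start + 1 : end].strip()
    return q_t, p_t.replace("[", "").replace("]", "").strip(), answer_t

def mix_languages(datasets):
    """Step 2 sketch: pair questions and passages from different languages.

    datasets: dict lang -> list of (question, passage, answer) triples that
    are translations of the same English examples (index-aligned).
    """
    mixed = []
    for k, l in itertools.permutations(list(datasets), 2):  # l != k
        for (q_k, _, _), (_, p_l, a_l) in zip(datasets[k], datasets[l]):
            mixed.append((q_k, p_l, a_l))
    random.shuffle(mixed)
    return mixed
```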
"Formally, given a monolingual dataset $D = \{(q_i, p_i, a_i)\}$, where $q_i$, $p_i$ and $a_i$ denote the query, passage and answer in language $i$ respectively, we apply a public translator to create a translated dataset $D' = \{(q_j, p_j, a_j)\}$, where $q_j$ is the translation of $q_i$ and $a_j$ is the answer span boundary in $p_j$.", "Step 2: Mix Languages.", "After translation, we create a mixed-language dataset $D'' = \{(q_k, p_l, a_l)\}$ where $l \neq k$.", "This encourages the MRC model to distinguish phrase boundaries through answer span selection while keeping the alignment of the underlying representations between the two languages.", "In this task, we use the same fine-tuning framework as in the monolingual MRC task.", "In this section, we first introduce the approach for mining knowledge phrases from the Web, and then introduce the masking task created with these phrases.", "Data Generation.", "In the following, we describe our data generation method for collecting large-scale phrase knowledge for different languages.", "The source data come from a search engine and consist of queries and the top N relevant documents.", "Consider the running example query {when is the myth of George Washington cutting down cherry tree made}.", "As shown in Figure 3, our mining pipeline consists of two main steps:", "1. Phrase Candidates Generation: This step targets high recall.", "We enumerate all the n-grams (n = 2, 3, 4) of the given query as phrase candidates, such as when is, the myth, George Washington, cherry tree, is the myth, etc.", "We further filter the candidates with a stop word list.", "A manual analysis (asking humans to identify all meaningful n-gram phrases in the given queries) shows that recall reaches 83%.", "2. Phrase Filtering: This step targets high precision by removing useless phrases.", "For each candidate, we count its frequency in the titles of the relevant documents and keep only the frequent candidates.", "For example, the phrases George Washington and cherry tree appear in every title.", "We call the surviving candidates knowledge phrases.", "Our empirical study suggests that a frequency threshold of 0.7 strikes a good balance between precision and recall, and we use this threshold in our approach.", "Following this approach, large amounts of meaningful phrases can be mined independent of language.", "After this, we further extract the passages containing the mined knowledge phrases from the documents (following a passage creation approach similar to that of Rajpurkar et al. (2016)); these passages are the input of LAKM.", "For a fair comparison, the number of passages in the different languages is equal, and the total amount of training data for LAKM is the same as that for mixMRC.", "The statistics of the knowledge phrases are given in Table 4.

Table 4: Statistics of the mined knowledge phrases.
 | en | fr | de | es
# P | 99.7k | 91.2k | 93.8k | 78.8k
# K-phrases | 229k | 102k | 102k | 101k", "Model Structure.", "Given a (passage, knowledge phrases) pair, denoted as $(X, Y)$, we formalize $X = (x_1, x_2, \dots, x_m)$ as a passage with $m$ tokens and $Y = (y_1, y_2, \dots, y_n)$ as a set of language-specific knowledge phrases generated as above, where $y_i = (x_j, x_{j+1}, \dots, x_{j+(l-1)})$ with $1 \leq j \leq m$, and $l$ is the number of tokens in $y_i$ ($1 \leq i \leq n$).",
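The two-step mining pipeline described above can be sketched in a few lines. The stop word list below is an illustrative subset and the helper names are ours; the 0.7 threshold mirrors the frequency threshold reported in the text.

```python
from collections import Counter

STOP_WORDS = {"the", "is", "of", "when", "a", "to", "in"}  # illustrative subset

def candidate_ngrams(query, ns=(2, 3, 4)):
    """Step 1: enumerate n-gram candidates, filtering stop-word-only grams."""
    toks = query.lower().split()
    cands = set()
    for n in ns:
        for i in range(len(toks) - n + 1):
            gram = toks[i : i + n]
            if not all(w in STOP_WORDS for w in gram):
                cands.add(" ".join(gram))
    return cands

def knowledge_phrases(query, titles, threshold=0.7):
    """Step 2: keep candidates that occur in at least `threshold` of the
    titles of the relevant documents returned for the query."""
    counts = Counter()
    for cand in candidate_ngrams(query):
        counts[cand] = sum(cand in t.lower() for t in titles)
    return {c for c, k in counts.items() if k / max(len(titles), 1) >= threshold}

titles = ["George Washington and the cherry tree myth",
          "The cherry tree: George Washington's famous story"]
print(knowledge_phrases(
    "when is the myth of George Washington cutting down cherry tree made",
    titles))
# e.g. {'george washington', 'cherry tree'}
```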
"The representations $h$ are easily obtained from the transformer.", "To inject language-specific knowledge into the multilingual MRC model, we use a masked language model as the fine-tuning objective.", "This task-specific loss includes a summation over the length of the sequence: $p_t = \mathrm{Softmax}(W h(x)_t + b)$, (1) $\mathcal{L}_{LAKM} = -\sum_{t=1}^{m} y_t^{T} \log p_t$, (2) where $p_t$ is the prediction for the $t$-th word, $m$ is the number of tokens in the input passage, $y_t$ is the (one-hot) target word, $W$ and $b$ are the output projections for the task-specific loss $\mathcal{L}_{LAKM}$, and $h(x)_t$ refers to the contextual embedding of the $t$-th word.", "In this section, we first describe the datasets and evaluation in Section 4.1, then introduce the baseline models in Section 4.2 and the experiment settings in Section 4.3; the experimental results are presented in Section 4.4.", "To verify the effectiveness of our approach, we conduct experiments on two multilingual datasets: one open benchmark called MLQA (Lewis et al., 2019), and the other a newly constructed multilingual QA dataset with multiple fine-grained answer types (MTQA).", "MLQA: a multilingual question answering benchmark (Lewis et al., 2019).", "MLQA contains QA instances in 7 languages.", "Due to resource limitations, we evaluate our models on three languages (English, German, Spanish) of the dataset.", "MTQA: To further evaluate our approach in a real scenario, as well as to conduct an in-depth analysis of the impact on different answer types (Section 5.3), we construct a new QnA dataset with fine-grained answer types.", "The construction process is as follows:", "1. Question-passage pairs come from the question answering system of a commercial search engine.", "Specifically, the questions are real user queries issued to the search engine, which are diverse and cover various answer types.", "For each question, a QA system is leveraged to rank the best passage from the top 10 URLs returned by the search engine.", "For each question, only the best passage is selected.", "2. To annotate the answer span in each passage, we leverage crowd-sourcing annotators.", "Annotators are asked to first select the best shortest span in the passage that answers the question, and then to assign an answer type according to the query and the answer span. (Only a single span is considered.)", "Each case is labeled by three annotators, and only the instances labeled with consensus (no fewer than two annotators agreeing on the result) are finally selected.", "An English example is given in Table 5; detailed statistics of the MTQA dataset are given in Table 6, and the distribution of answer types in our dataset is shown in Figure 4.

Table 5: An English example of MTQA.
[Question]: how many players in rugby-league team on field
[Passage]: A rugby league team consists of thirteen players on the field, with four substitutes on the bench, [...]
[subtype]: numeric
[Answers]: start: 41, end: 49, text: thirteen",
"We use the same evaluation metrics as the SQuAD dataset (Rajpurkar et al., 2016), i.e., F1 and Exact Match, to evaluate model performance.", "The Exact Match score measures the percentage of predictions that exactly match any one of the ground truths.", "The F1 score measures the answer overlap between predictions and the ground truth.", "We treat the predictions and ground truth as bags of words and compute their F1 score.", "For a given question, we select the maximum value of F1 over all of the ground truths, and then we average over all of the questions.", "We use the following two multilingual pre-trained models in our experiments:", "M-BERT: the multilingual version of BERT released by (Devlin et al., 2018), pre-trained with monolingual corpora in 104 languages.", "This model has proved very effective at zero-shot multilingual transfer between different languages (Pires et al., 2019).", "XLM: a cross-lingual language model (15 languages) (Lample and Conneau, 2019) pre-trained with both monolingual and cross-lingual data, as well as cross-lingual tasks, to enhance transfer among different languages.", "As the baseline, we directly fine-tune the pre-trained models using the MRC training data only.", "We use the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.999$.", "The learning rate is set to 3e-5 for the mixMRC, LAKM and multilingual MRC tasks.", "The pre-trained models are configured with their default settings.", "Each task is trained until the metric of the MRC task converges.", "mixMRC: We jointly train the mixMRC and multilingual MRC tasks using multi-task training at the batch level to extract the answer boundary in the given context.", "For both tasks, the max sequence length is 384.", "LAKM: The LAKM and multilingual MRC tasks are jointly trained using multi-task training.", "In terms of input, we randomly mask 15% of all WordPiece tokens in each sequence using a two-step approach.", "First, if the $i$-th token belongs to a knowledge phrase, we replace the $i$-th token with (1) the [MASK] token 80% of the time, (2) a random token 10% of the time, or (3) the unchanged $i$-th token 10% of the time.", "Second, if the proportion of knowledge-phrase tokens is less than 15%, we further randomly mask other WordPiece tokens until the total masked ratio reaches 15%.", "For LAKM, the max sequence length is set to 256.", "mixMRC + LAKM: We jointly train the mixMRC, LAKM and multilingual MRC tasks, take the gradients with respect to the multilingual MRC loss, the mixMRC loss and the LAKM loss, and apply the gradient updates sequentially at the batch level.", "During training, the max sequence length is 384 for the multilingual MRC model, 256 for LAKM and 384 for mixMRC.", "The overall experimental results are shown in Table 7.",
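The LAKM masking procedure (15% budget, knowledge-phrase tokens first, then BERT's 80/10/10 replacement scheme) can be sketched as follows; the toy vocabulary and the function signature are our own assumptions.

```python
import random

MASK, VOCAB = "[MASK]", ["dog", "paris", "runs", "blue"]  # toy vocabulary

def lakm_mask(tokens, phrase_positions, ratio=0.15):
    """Two-step LAKM masking sketch.

    tokens:           list of WordPiece tokens
    phrase_positions: indices covered by mined knowledge phrases
    """
    budget = max(1, int(round(len(tokens) * ratio)))
    # Step 1: prioritize positions inside knowledge phrases
    chosen = list(phrase_positions)[:budget]
    # Step 2: top up with random non-phrase positions to reach the budget
    rest = [i for i in range(len(tokens)) if i not in set(phrase_positions)]
    random.shuffle(rest)
    chosen += rest[: budget - len(chosen)]

    out, labels = list(tokens), {}
    for i in chosen:
        labels[i] = tokens[i]  # MLM target for the loss in Eqs. 1-2
        r = random.random()
        if r < 0.8:
            out[i] = MASK                   # 80%: [MASK]
        elif r < 0.9:
            out[i] = random.choice(VOCAB)   # 10%: random token
        # remaining 10%: leave the token unchanged
    return out, labels
```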
"Compared with the M-BERT and XLM baselines, both mixMRC and LAKM yield decent improvements in fr, es and de, and on-par performance in en, on both the MLQA and MTQA datasets.", "This demonstrates the effectiveness of our models.", "The combination of LAKM and mixMRC achieves the best results on both datasets.", "Taking M-BERT and the MLQA dataset as an example, mixMRC+LAKM obtains 1.7% and 4.7% EM improvements on es and de respectively, compared with the baseline.", "For the LAKM task, there are decent gains for all languages, including English.", "However, the gains are bigger on low-resource languages than on English.", "Taking XLM and the MLQA dataset as an example, LAKM obtains 1.8% and 3.2% EM improvements on es and de, while the improvement on en is about 0.5%.", "The intuition behind the en gains is that LAKM brings extra data with knowledge to en as well.", "For the mixMRC task, there is a slight regression on en along with decent gains on es, de and fr.", "Taking XLM and the MTQA dataset for illustration, mixMRC shows a 0.6% EM regression on en versus 1.4% and 0.5% EM gains on fr and de.", "This shows that mixMRC mainly improves the transfer capability from rich-resource to low-resource languages.", "In this section, we ablate important components of LAKM to explicitly demonstrate its effectiveness.", "To study the effectiveness of LAKM, we compare LAKM with Random N-gram Masking based on XLM and the MTQA dataset.", "LAKM and Random N-gram Masking refer to fine-tuning XLM with the language-specific knowledge masking strategy and a random n-gram masking strategy, respectively.", "(Random N-gram Masking does show gains on English SQuAD.)", "As shown in Table 8, without the language-agnostic knowledge masking strategy, the EM metric drops by 0.2%-0.87%, which demonstrates the necessity of LAKM.", "To illustrate the effectiveness of the auxiliary tasks, we consider an extreme scenario in which only English training data is available and there is no translated data.", "In this case we are unable to use the mixMRC task to derive more accurate answer span boundaries.", "We therefore leverage only LAKM to enhance answer boundary detection, and compare the performance of the M-BERT baseline with our model in Table 9.", "From the experimental results, zero-shot fine-tuning with LAKM is significantly better than the M-BERT baseline.", "On MTQA, our model obtains 2%, 3.3% and 3.8% EM improvements on English, French and German respectively.", "On MLQA, we obtain 1.6%, 1.4% and 1.2% EM improvements on English, Spanish and German.", "To gain insight into how the new tasks (LAKM/mixMRC) affect the multilingual MRC task, we further analyze model performance on the various answer types, as shown in Figure 5.", "The comparison with the baseline indicates that for most of the answer types (like color, description and money), both LAKM and mixMRC enhance answer boundary detection for the multilingual MRC task.", "One interesting finding is that for animal and full name, LAKM outperforms mixMRC by large margins of 9.1% and 14.3%, respectively.", "One possible explanation is that the knowledge phrases of LAKM cover some entity-related phrases such as animals and names, leading to the significant EM boost.", "For the numerical answer types (like money, numeric and length), the performance of mixMRC and LAKM is similar.", "The intuition behind this is that numerical answers may be easier to transfer between languages, since answers such as lengths are similar across different languages.",
"This paper proposes two auxiliary tasks (mixMRC and LAKM) for the multilingual MRC fine-tuning stage to enhance answer boundary detection, especially for low-resource languages.", "Extensive experiments on two multilingual MRC datasets demonstrate the effectiveness of our proposed approach.", "Meanwhile, we further analyze model performance on fine-grained answer types, which yields interesting insights." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result" ]
[ "We consider the task of linking social media accounts that belong to the same author in an automated fashion on the basis of the content and metadata of their corresponding document streams.", "We focus on learning an embedding that maps variable-sized samples of user activityranging from single posts to entire months of activityto a vector space, where samples by the same author map to nearby points.", "The approach does not require human-annotated data for training purposes, which allows us to leverage large amounts of social media content.", "The proposed model outperforms several competitive baselines under a novel evaluation framework modeled after established recognition benchmarks in other domains.", "Our method achieves high linking accuracy, even with small samples from accounts not seen at training time, a prerequisite for practical applications of the proposed linking framework.", "The scale and anonymity of social media pose systematic challenges for manual moderation efforts (Pennycook et al., 2020; Broniatowski et al., 2018).", "These challenges have motivated the development of automated methods to identify abusive content, such as Davidson et al. (2017), which considers automatically classifying hate speech, or Alvari et al. (2019), which deals with detecting violent extremism, both in the Twitter domain.", "However, automatic moderation remains a difficult problem.", "Indeed, existing methods based on hand-constructed resources such as keyword lists may fail to adapt to novel trends (Corbett-Davies and Goel, 2018) whereas automatic methods based on statistics of large corpora may exhibit harmful biases (Caliskan et al., 2017).", "Additionally, individual posts may fail to contain sufficient information to reliably identify them as harmful.", "This work considers account-level moderation.", "Specifically, we consider the problem of determining whether two document streams share the same author, based on samples from those streams rather than individual documents.", "This capability has numerous applications, such as detecting users attempting to circumvent account bans, identifying sockpuppet accounts, and detecting coordinated disinformation campaigns involving multiple authors controlling multiple accounts.", "As a motivating application, we consider the enforcement of account bans on anonymous platforms, such as Reddit.", "Given a new account, the problem is to automatically identify whether it matches any previously banned account, which amounts to making binary decisions about whether pairs of accounts share the same author.", "Variations of this problem have been studied before.", "For example, Schwartz et al. 
"(2013) learn a classifier to determine whether the author of a Twitter comment belongs to a small, closed set of authors.", "In contrast, we are interested in an open-world setting, requiring binary decisions about arbitrary pairs of accounts.", "This introduces a number of challenges.", "First, any individual comment may be too short to serve as the basis for linking accounts.", "Figure 1a illustrates this empirically using a variation of our model, where embeddings of individual comments from the same account fail to coalesce, making it difficult to assert that an account has the same author as another account.", "See Section 4 for further experimental details.", "Therefore, we focus on aggregating information across contiguous sequences of documents.", "Figure 1b illustrates the impact of aggregation using our full model, where aggregations of contiguous sequences of documents from the same account exhibit an approximate convergence behavior as the number of documents aggregated increases.", "In fact, the motivating application above requires linking accounts on the basis of samples of widely varying sizes.", "Indeed, banned accounts typically have many documents, all of which we would like to consider, while new accounts generally have few documents, which nevertheless must be linked to banned accounts as quickly as possible to mitigate abusive behavior.", "[Figure 1 captions: (a) Embeddings of single posts by a model trained on single posts. (b) Embeddings of samples by a model trained on variable-sized samples; for each author and each $1 \leq j \leq 16$, the $j$-th point shows the embedding of the author's first $j$ posts.]", "The second challenge is that of spurious associations.", "For example, over a short period of time, an author may discuss only a single, narrow topic.", "While a naive model based on word statistics alone might be sufficient to link such an account to another, this approach would fail to generalize over longer periods of time due to topic drift.", "Both our training procedure and evaluation framework have been designed to ensure that our model learns the appropriate invariances to identify same-authorship, rather than being correct for the wrong reasons (McCoy et al., 2019).", "Namely, samples from an account are drawn from different time periods in each training iteration (see Section 2.3), while the evaluation data consist of posts by accounts not seen at training time and postdate all the training data (see Section 4.2).", "Finally, the numbers of banned and new accounts may be quite large, requiring a still larger number of pairwise comparisons.", "For this reason, our proposed approach to account linking consists of embedding variable-sized samples from document streams into a metric space whereby samples likely to have been composed by the same author map to nearby points.", "Under this embedding, comparisons between document streams amount to pairwise distance calculations, so our approach is highly scalable and amenable to various optimizations, such as approximate nearest neighbor methods.", "Our primary contributions are the following: We provide a simple but effective data augmentation strategy which enables embedding variable-sized samples.", "In addition, we successfully train such an embedding on a large-scale dataset consisting of more than 300 million comments from 1 million distinct accounts using scalable losses.", "We propose a novel framework to assess account linking performance, focused on challenging conditions and minimizing the impact of incidental authorship features, such as
topic.", "In particular, we propose benchmark datasets as well as verification metrics tailored to our application.", "Our code, data splits, and scripts to reproduce our experiments are available at http://github.com/noa/naacl2021 .", "We treat a document stream as a sequence of timestamped actions $a_1, a_2, \dots, a_L$ where each $a_i$ is a structure containing the data comprising an action.", "The possible contents of $a_i$ are specific to the document stream, but include at least a timestamp $t_i$ such that $t_1 < t_2 < \dots < t_L$.", "In this work we focus on textual content published on social media platforms, although the approach easily extends to allow $a_1, a_2, \dots, a_L$ to contain other modalities such as images or video, which would be handled similarly.", "[Figure 2: Illustration of the proposed model architecture.]", "In addition, we also avail of certain categorical features contained in $a_1, a_2, \dots, a_L$, such as hashtags or the subreddit to which a comment was posted.", "A sample from a document stream is a contiguous subsequence of its actions.", "We introduce an embedding $f$ in Section 2.1 mapping a variable-sized sample to a point in a vector space, such that the Euclidean distance between the embeddings of two samples quantifies the likelihood that they belong to the same author.", "We define the embedding $f$ as follows.", "This embedding is illustrated in Figure 2.", "Consider a sample $a = (a_1, a_2, \dots, a_M)$ where each action $a_i$ consists of a subreddit feature $r_i$, a timestamp $t_i$, and text content $x_i$.", "We encode $t_i$ as the corresponding hour of the day and $r_i$ by lookup in the list of the 2048 most common subreddits, resulting in $t_i \in \{0, 1, \dots, 23\}$ and $r_i \in \{0, 1, \dots, 2048\}$, where $r_i = 2048$ when the subreddit is not among the top 2048.", "We encode the text feature using the SentencePiece unigram subword model (Kudo, 2018), resulting in $x_i \in \mathbb{Z}^{\ell}$, where the parameter $\ell$ is defined in Section 2.2.", "Note that the chosen vocabulary size impacts the amount of content that can be encoded with $\ell$ integers.", "Further details on the choice of text encoding are provided in Appendix C.",
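A minimal sketch of this per-action encoding is shown below; the toy subreddit list, padding id, and helper names are our own assumptions, and `sp_model` is assumed to be a trained SentencePiece processor.

```python
from datetime import datetime, timezone

TOP_SUBREDDITS = ["AskReddit", "news", "funny"]  # illustrative; real list has 2048
SUB_INDEX = {name: i for i, name in enumerate(TOP_SUBREDDITS)}

def encode_action(timestamp_utc, subreddit, text, sp_model, max_len=32):
    """Encode one action (t_i, r_i, x_i) as (hour, subreddit id, token ids)."""
    hour = datetime.fromtimestamp(timestamp_utc, tz=timezone.utc).hour  # 0..23
    sub_id = SUB_INDEX.get(subreddit, len(TOP_SUBREDDITS))  # OOV -> last index
    ids = sp_model.encode(text, out_type=int)[:max_len]     # truncate to l tokens
    ids += [0] * (max_len - len(ids))                       # pad (id 0 assumed)
    return hour, sub_id, ids
```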
"We replace each token of $x_i$ with a corresponding learned embedding in $\mathbb{R}^N$ for all $1 \leq i \leq M$, resulting in $M$ matrices in $\mathbb{R}^{\ell \times N}$.", "(In addition to $r_i$, $t_i$, $x_i$, platforms like Reddit contain further metadata, such as the thread title and the submission and parent comments, all of which might help distinguish users because users read them and chose to respond; incorporating these features would be interesting to explore in future work.)", "We apply one-dimensional convolutions of widths 2, 3, and 4 along the first axis of each, concatenate the convolved matrices along their second axes, max-pool along the first axis, and concatenate the results with learned embeddings of the corresponding subreddit features and one-hot encodings of the corresponding time features, resulting in $M$ vectors, which we aggregate using dot-product attention.", "The resulting sequence of vectors is projected to a single vector through max-pooling followed by two fully-connected layers with bias, resulting in the encoding $f(a) \in \mathbb{R}^D$ of the sample $a$.", "The variance in the lengths of documents poses computational challenges when aggregating large samples.", "Therefore we resort to truncating each document to a fixed number $\ell$ of tokens, padding any documents containing fewer than $\ell$ tokens.", "We take $\ell = 32$ after observing that Reddit posts have an average length of approximately 43 tokens (see Appendix D).", "We also experimented with a more complicated text sampling strategy, namely sampling contiguous segments of $\ell$ tokens from each post uniformly at random during training.", "While this approach leverages all available textual information by affording slightly different samples of each post in each iteration of training, we found it to yield similar results, and it also complicates the comparison to our primary baseline model, which uses the prefix of each post.", "[Figure 3: In contrast with choosing sample sizes uniformly at random, sampling according to a left-skewed Beta distribution results in more samples of sizes closer to the specified maximum.]",
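For concreteness, the following PyTorch sketch traces the aggregation pipeline just described (token convolutions, metadata concatenation, dot-product attention, and pooling). The specific layer sizes are our own placeholder choices, not the released architecture.

```python
import torch
import torch.nn as nn

class SampleEncoder(nn.Module):
    """Sketch of the sample embedding f; dimensions are illustrative."""

    def __init__(self, vocab=32000, n=256, n_subs=2049, d=512):
        super().__init__()
        self.tok = nn.Embedding(vocab, n)
        self.sub = nn.Embedding(n_subs, 32)
        self.convs = nn.ModuleList([nn.Conv1d(n, 128, w) for w in (2, 3, 4)])
        feat = 3 * 128 + 32 + 24  # conv features + subreddit + one-hot hour
        self.attn = nn.MultiheadAttention(feat, 1, batch_first=True)
        self.out = nn.Sequential(nn.Linear(feat, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, ids, sub_ids, hours):
        # ids: (M, l); sub_ids, hours: (M,) -- one sample of M documents
        e = self.tok(ids).transpose(1, 2)                   # (M, n, l)
        c = torch.cat([cv(e).max(dim=2).values for cv in self.convs], -1)
        h = torch.nn.functional.one_hot(hours, 24).float()
        v = torch.cat([c, self.sub(sub_ids), h], dim=-1).unsqueeze(0)
        v, _ = self.attn(v, v, v)                           # dot-product attention
        return self.out(v.max(dim=1).values.squeeze(0))     # f(a) in R^D
```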
"During training we randomly select document stream samples of sizes varying between $R = 1$ and $S = 16$, which we regard as hyperparameters of the model.", "To select a sample from the stream $a_1, a_2, \dots, a_L$ we first choose its length $M = R + \lceil x (S - R) \rceil$ where $x \sim \mathrm{Beta}(3, 1)$, and take the sample $a_i, a_{i+1}, \dots, a_{i+M-1}$ where $1 \leq i \leq L - M + 1$ is chosen uniformly at random.", "Selecting $M$ according to $\mathrm{Beta}(3, 1)$ provides an expected sample size closer to $S$ than to $R$, a tradeoff that allows the model to quickly learn features of a document stream by exposing it to larger samples most of the time, while still maintaining the flexibility to handle samples of varying sizes.", "Indeed, the latter is critical in the evaluation described in Section 4.3, which requires linking large samples to small samples.", "The density function of $\mathrm{Beta}(3, 1)$ is shown in Figure 3 together with that of the uniform distribution $\mathrm{Unif}(0, 1)$ for comparison.", "We explore the benefits of $\mathrm{Beta}(3, 1)$ and other related distributions in Section 4.5.", "Deep metric learning methods aim to embed observations into a low-dimensional space such that instances from the same class map to nearby points under a chosen metric, such as Euclidean distance.", "In our setting, we take the instances to be document stream samples and the classes to be the corresponding accounts, which serve as proxies for latent authorship.", "Therefore, training the mapping $f$ defined in Section 2.1 using metric learning affords an embedding under which samples by the same author map to nearby points.", "Recent work in deep metric learning has introduced a number of training objectives with state-of-the-art performance on computer vision tasks (Kim et al., 2020; Wang et al., 2019).", "Unfortunately, many of these objectives scale linearly with the number $K$ of classes considered, due to a costly linear projection onto $\mathbb{R}^K$.", "Note that because account names in effect provide labels for the corresponding document streams, we may use raw social media content to fit our model directly, availing of a virtually unlimited source of data.", "We stipulate that the ability to exploit larger amounts of data may be more important than per-example efficiency, and therefore consider the classical triplet loss (Schroff et al., 2015) in our experiments, whose complexity does not depend on $K$.", "In particular, we use semihard negative mining with a fixed margin penalty.",
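The sampling rule and the triplet objective can both be sketched compactly; the following is our own minimal rendition under the stated hyperparameters (R = 1, S = 16, Beta(3, 1), semihard mining), not the authors' training code.

```python
import torch
from torch.distributions import Beta

def sample_lengths(batch, r=1, s=16):
    """Draw per-example sample sizes M = R + ceil(x * (S - R)), x ~ Beta(3, 1)."""
    x = Beta(3.0, 1.0).sample((batch,))
    return (r + torch.ceil(x * (s - r))).long()

def semihard_triplet_loss(emb, labels, margin=0.2):
    """Minimal semihard triplet mining over a batch of sample embeddings.

    For each anchor-positive pair, pick the closest negative that is still
    farther away than the positive (the 'semihard' condition)."""
    d = torch.cdist(emb, emb)                      # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    losses = []
    for a in range(len(emb)):
        for p in torch.where(same[a])[0]:
            if p == a:
                continue
            neg_d = d[a][~same[a]]
            semihard = neg_d[neg_d > d[a, p]]      # farther than the positive
            if len(semihard):
                losses.append(torch.relu(d[a, p] - semihard.min() + margin))
    return torch.stack(losses).mean() if losses else emb.sum() * 0.0
```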
"We also consider the topk loss recently proposed by Lu et al. (2019), which optimizes precision-at-$k$ as follows.", "Given targets ranked by similarity to a query, topk arranges for as many matches as possible to be among the top $k$ ranked targets.", "It accomplishes this by penalizing only those targets that would need to move the smallest amount in order to maximize the number of matching targets among the top $k$.", "Like the triplet loss, topk also uses an additive margin penalty to separate classes.", "See Appendix A for further experimental details on both loss functions.", "The separate but related problem of closed-world author attribution has received considerable attention.", "For example, the PAN 2019 challenge (Daelemans et al., 2019) employed a closed-world setting with a small number of authors that are the same at training and test time.", "That task also considered longer documents, obviating the need to aggregate evidence of authorship across multiple documents.", "Generic text embedding methods such as the universal sentence encoder (Cer et al., 2018) and BERT (Devlin et al., 2019) are fit using auxiliary tasks, such as conditional language modeling.", "In the case of BERT, this is usually followed by supervised fine-tuning for a downstream task of interest.", "In this work, we are interested in learning representations that are immediately useful for our account linking task.", "However, because a large corpus of task-specific training data may be collected without human supervision, the benefits of generative pretraining are diminished in our setting.", "Indeed, the parameters of the text encoding are learned from a random initialization in all our experiments.", "Our approach is further distinguished from generic embedding methods by featuring a multi-document embedding, mapping a sequence of documents to a single vector, where each document may consist of both text and metadata.", "The most closely related prior work is the Invariant User Representation (IUR) proposed by Andrews and Bishop (2019), whose approach is broadly similar to ours but only considers samples of a fixed size.", "Our approach may be viewed as a generalization of that work in support of the account linking task.", "In addition, we use a simpler dot-product attention mechanism and introduce the use of scalable metric learning losses in Section 2.4, which enable us to train our model on an order of magnitude more data than previously considered.", "We validate these improvements in Section 4.2 using the ranking task proposed by Andrews and Bishop (2019).", "We also adapt IUR to serve as a baseline in our primary linking task in Section 4.3.", "We believe that our treatment of account linking as a pairwise recognition task between document stream samples and our proposed general-purpose evaluation protocols are both novel.", "However, in prior work, there have been several platform-specific approaches to account linking.", "For example, Silvestri et al.
"(2015) explore a heuristic approach to linking accounts across social media platforms.", "Separately, on platforms with rich social network information, graph-matching methods have been explored (Fan, 2012).", "Our focus is on content-based account linking, which is more general than the prior methods we are aware of.", "Some other related but distinct problems include detecting deceptive accounts (Van Der Walt and Eloff, 2018) and authorship classification of short messages (Ishihara, 2011).", "We conduct evaluations on the two primary tasks illustrated in Figure 4.", "First, our ranking evaluation, described in Section 4.2, is motivated by information retrieval needs.", "Although ranking is not the focus of this paper, it provides an assessment of the quality of the learned embedding in terms of similarity judgements and facilitates comparison with the baseline model IUR.", "In addition, we use the ranking evaluation to monitor training using development data disjoint from the test data used for the final evaluation.

Table 1: Ranking results for the proposed model (Prop) and the baseline (IUR), both trained and evaluated using various combinations of text content (T), publication time (P), and subreddit (S).
Model | Features | Loss | MRR | R@4 | R@8
Prop | TPS | Topk | 0.637 | 0.709 | 0.765
Prop | TPS | Triplet | 0.634 | 0.702 | 0.762
Prop | TP | Topk | 0.450 | 0.522 | 0.595
Prop | TP | Triplet | 0.452 | 0.520 | 0.591
Prop | T | Triplet | 0.372 | 0.439 | 0.512
IUR | TPS | Arcface | 0.520 | 0.590 | 0.650
IUR | T | Arcface | 0.200 | 0.240 | 0.290", "Second, we introduce an account linking evaluation framework in Section 4.3, inspired by similar evaluations used for speaker recognition (Doddington et al., 2000; Van Leeuwen and Brümmer, 2007).", "Both evaluations involve setting up two sets of samples as described below, the queries and the targets.", "For each query, there is exactly one target drawn from the same document stream.", "Roughly speaking, both evaluations involve matching targets with their corresponding queries.", "Reddit is currently one of the most popular social media platforms, where anonymous users interact primarily by posting comments to discussion threads.", "Together with its text content, each comment is labeled by its publication time and the subreddit to which it was posted, a categorical feature roughly indicating its topic.", "We construct a dataset consisting of 300 million Reddit posts from 1 million users published over an entire year to be used to train our proposed model.", "This Million User Dataset (MUD) consists of all posts by authors who published at least 100 and at most 1000 posts between July 2015 and June 2016, where the lower bound ensures a sufficiently long history from which to sample, and the upper bound is intended to reduce the impact of bot and spam accounts.", "We obtained the data by drawing from the existing Pushshift Reddit corpus (Baumgartner et al., 2020).", "Some further statistics of MUD are shown in Table 8.", "[Figure 4 captions: (a) Ranking evaluation: for each query, the targets are ranked by similarity to the query. (b) Linking evaluation: for each target, each query is determined to match the target or not.]", "As shown in Figure 4a, the ranking experiment consists of ranking the targets by similarity to each query.", "For compatibility, we mimic the experimental setup from Andrews and Bishop (2019), which proposes separate sets of queries and targets to be used for training and testing.", "We adopt the training split for validation and the testing split for evaluation, although we train our model on MUD (see Section 4.1).", "We select
hyperparameters based on dev split performance (see Appendix A).", "The test split consists of samples, each of size exactly 16, although we train the proposed model using samples from MUD of varying sizes as described in Section 2.3.", "Note that the posts comprising MUD precede those of both IUR splits in publication time, ensuring that our training data is disjoint from IUR's test data.", "Of the 111,396 authors contributing to the test split, 69,275 or 62% contribute to the IUR training split.", "In contrast, MUD has only 39,529 users in common with the test split, a significantly smaller overlap than IUR.", "In principle, the increase in novel users at test time puts the proposed model at a disadvantage because it places more importance on generalization to novel users.", "We report recall-at-k (R@k) and mean reciprocal rank (MRR), calculated exactly as in Andrews and Bishop (2019).", "MRR is the expected value of the reciprocal of the position of the correct target in the ranked list.", "R@k is the probability that the unique target composed by the same author as a given query appears in the top k ranked results.", "We limit ourselves to R@4 and R@8 as proxies for the first page of search results returned to a user issuing a query.", "The results of this evaluation calculated with the test split are shown in Table 1.", "Note that the full version of the proposed model significantly outperforms the previously published state-of-the-art.", "We conclude that although the angular margin loss used by Andrews and Bishop (2019) is considered state-of-the-art, the simpler triplet loss outperforms it, most likely because it admits the use of a considerably larger dataset.", "We remark that the models trained with topk performed only slightly better than those trained with triplet loss, an observation consistent with recent findings that when matching experimental conditions, the choice of ranking loss is less important than previously believed (Musgrave et al., 2020).", "In addition, Figure 5 shows the results of the evaluation performed after every hour of training.", "Note that after only six hours of training, the full model outperforms the baseline.", "Figure 5 also shows the learning curve for an ablation of our model that eliminates the subreddit feature.", "We observe that this ablated model performs almost as well as the full-featured baseline, which suggests that the proposed approach may be effective in domains where only text and timestamps are available.", "(A paired sign test of the differences in ranking between IUR and the proposed model is significant at the p < 10^-15 level.)", "While the ranking experiments in Section 4.2 were designed to measure the quality of the learned embedding, they do not directly measure task performance: moderation applications require decisions rather than rankings.", "To this end, we propose an account linking benchmark modeled after the problem of enforcing account bans, in which a fixed number of accounts are linked against novel accounts at test time.", "Compared to the ranking experiments, the key difference is that we introduce a distinguished subset of authors from which we have accumulated a significant number of previously published documents to serve as queries.", "The procedure is illustrated in Figure 4b.", "Because the subreddit feature serves as a proxy for topic, restricting to a single subreddit results in a more challenging problem by increasing the likelihood that the comments considered deal with similar topics.", "To this end, we repeat the following 
procedure for each of the five most popular subreddits.", "Each result of the experiment reported in Tables 2 and 9 is the average over the five subreddits of the corresponding results calculated using those subreddits individually.", "Given a specified subreddit, we first randomly select 100 distinguished accounts, each publishing at least 100 posts to that subreddit in November 2016.", "The queries in the experiment consist of the 100 most recently published posts to the subreddit by each of the distinguished accounts in November 2016.", "In addition, the distinguished accounts must have published at least 16 posts to the subreddit between December 2016 and May 2017 to serve as the corresponding targets, as described below.", "Next, we randomly select 4900 accounts distinct from the distinguished accounts, each publishing at least 16 posts to the subreddit between December 2016 and May 2017.", "The targets in the experiment consist of the 4 most recently published posts to the subreddit by each of the 5000 accounts.", "Performance metrics.", "For every query and target, each model considered returns a score, with smaller scores associated with a higher likelihood that the query and the target have the same author.", "For example, the proposed model returns the distance between their embeddings under the model.", "A decision rule to predict an author match is obtained by thresholding this score with respect to a chosen operating point.", "In production settings, one adjusts the operating point to obtain acceptable rates of false positives and false negatives.", "In our running application of ban enforcement, these types of errors correspond respectively with mistakenly banning an innocent user and failing to ban a new account of a banned user.", "Because the severity of these types of errors differs, we consider the detection cost function C_det = α C_- P_- + (1 - α) C_+ P_+ proposed by Van Leeuwen and Brümmer (2007), where P_- and P_+ are empirical probabilities of false negatives and false positives, C_- and C_+ are the costs of false negatives and false positives, and α is the a priori probability of a match.", "We take α = 0.05 and we set C_- = 1 and C_+ = 2, reflecting our presumption that banning an innocent account is more severe than failing to recognize a banned user.", "Our choices of C_- and C_+ are only meant to reflect the asymmetric nature of the problem, although in practice these costs would be highly platform-specific.", "We report the minimum value of C_det over all operating points (minDCF) and the value of P_+ at the operating point for which P_- = P_+, also known as the equal error rate (EER).", "Baseline models.", "We compare the proposed method with three baselines.", "First, we consider TF-IDF vector representations of the concatenated text content of a sample, which are compared using cosine similarity.", "Next, we consider universal sentence encodings (Cer et al., 2018), which are compared using angular distance.", "Table 2: Linking evaluation, averaged over 5 subreddits. Columns: Model, Training Length, EER, minDCF. TF-IDF: -, 0.341, 0.971. Universal: -, 0.363, 0.981. IUR: 16, 0.247, 0.999. TP: 1-8, 0.169, 0.848. TP: 1-16, 0.132, 0.792.", "We experimented with two versions of this baseline, namely embedding the concatenation of the text content of the documents in a sample, and averaging the embeddings of the individual documents.", "Since we found the concatenated version to perform better, we only report on this variation.", "Finally, we consider IUR (Andrews and Bishop, 2019).", "Because this model only embeds 
samples of size 16, we pad samples containing fewer than 16 posts.", "To handle samples containing more than 16 posts, we organize the sample into contiguous groups of at most 16 posts, apply the embedding to each group, and average the embeddings.", "Results.", "Table 2 compares the linking performance of the three baseline models along with two variations of the proposed model arising from varying the sizes of the training samples.", "Note that both variations of the proposed model outperform the baselines.", "A further variation on this experiment is reported in Table 9, in which the queries are drawn from the training dataset, better reflecting the context of the motivating example.", "Figure 6 shows the effect of the sizes of the targets on linking performance.", "Note that performance rapidly improves with larger samples, relative to the baselines.", "This trend is promising for our motivating application of ban enforcement, where it is desirable to recognize banned users as early as possible.", "Figure 7 shows receiver operating characteristic (ROC) curves, which plot false positive rates against true positive rates as the operating points vary.", "Our experiments in Section 4.3 show that linking samples from newly created accounts to those of distinguished authors is more successful when using as much historical data from the distinguished authors as possible.", "However, computational constraints typically inhibit embedding full account histories during training.", "Instead, in Section 4.3 we embed large samples of distinguished authors' histories using models trained on samples of sizes up to a maximum tractable length S.", "We take S = 16 in our experiments as described in Section 2.3.", "Here, we examine the ability of a model trained on samples of sizes at most S to generalize to samples of sizes greater than S.", "We also compare to a further baseline that averages single-post embeddings produced by a variation of the proposed model trained on single posts, which we denote by Avg.", "This is in contrast with the proposed model, which aggregates embeddings of multiple posts using an attention mechanism.", "Table 3 shows the ranking performance of a number of variations of the proposed model trained with triplet loss and all features (TPS) on samples of fixed or varying sizes as specified.", "These results demonstrate that a model trained on variable-sized samples appears to generalize well to much longer samples.", "We observe substantially better performance compared to the simple averaging baseline, and only a slight decrease in performance compared to the fixed length models as the evaluation sample size increases beyond the lengths seen at training time.", "As mentioned in Section 2.3, we use Beta(3, 1) to select sample sizes during training.", "We hypothesize that a negatively skewed distribution tends to improve training efficiency by supplying longer samples most of the time, while retaining the ability to handle shorter samples.", "To evaluate this claim, we investigate several distributions of varying degrees of negative skew.", "Table 4 shows the ranking performance of variations of the proposed model trained on samples of sizes varying between 1 and 16 posts and evaluated on samples of size 16.", "These models differ only in the distribution used to select sample sizes.", "Indeed, the negatively skewed distributions do improve ranking performance over the uniform distribution, although the choice of negatively skewed distribution appears to be mostly immaterial.", "This work motivates a number of interesting research questions.", "First, the 
proposed model makes use of publication times, but only avails itself of the hour of the day.", "It would be interesting to examine continuous-time variants of our encoder that incorporate relative time differences between actions when aggregating their embeddings, in light of the fact that patterns of user activity might be highly discriminative.", "For example, bots and spammers typically post at certain times of day and with particular frequencies.", "Separately, the proposed data augmentation methods we use to handle variable-sized samples may also be applicable in other settings, such as multi-document summarization (Liu and Lapata, 2019).", "Finally, the scores we use to determine author matches could be calibrated, providing confidence estimates associated with the account linking decisions.", "To our knowledge, this work is the first to demonstrate the feasibility of a general-purpose account linking framework at web scale.", "Indeed, Figure 6 shows that performance improves as the size of the target increases, suggesting a speed-accuracy trade-off that can be tuned for different application settings.", "Expanding on an idea above, if confidence estimates were available, they could be used to inform the necessary sample sizes to achieve an acceptable level of risk.", "Finally, we note that the generality of the proposed approach makes it potentially applicable to a wide range of applications, including source code attribution (Burrows et al., 2009; Yang et al., 2017; Kalgutkar et al., 2019), plagiarism detection (Potthast et al., 2010; Meuschke et al., 2018; Foltýnek et al., 2019), and authorship attribution in collaborative documents (Flöck and Acosta, 2014; Dauber et al., 2017)." ]
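The detection cost function in the record above reduces to a short computation over scored query-target pairs. The following is a minimal sketch, assuming scores where smaller values indicate a likelier author match; the function name and the brute-force sweep over thresholds are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def min_dcf_and_eer(scores, labels, alpha=0.05, c_fn=1.0, c_fp=2.0):
    """C_det = alpha*C_-*P_- + (1-alpha)*C_+*P_+ swept over operating points.

    scores: smaller = more likely same author; labels: 1 = same author.
    """
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    best_dcf, eer, best_gap = np.inf, 1.0, np.inf
    for t in np.unique(scores):                 # each candidate operating point
        match = scores <= t                     # thresholded decision rule
        p_fn = np.mean(~match[labels == 1])     # false negatives (missed bans)
        p_fp = np.mean(match[labels == 0])      # false positives (wrong bans)
        dcf = alpha * c_fn * p_fn + (1 - alpha) * c_fp * p_fp
        best_dcf = min(best_dcf, dcf)
        if abs(p_fn - p_fp) < best_gap:         # point where P_- is closest to P_+
            best_gap, eer = abs(p_fn - p_fp), p_fp
    return best_dcf, eer                        # (minDCF, EER)
```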
[ "method", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "objective", "method", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "objective", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other" ]
[ "Complex, compositional reading comprehension datasets require performing latent sequential decisions that are learned via supervision from the final answer.", "A large combinatorial space of possible decision paths that result in the same answer, compounded by the lack of intermediate supervision to help choose the right path, makes the learning particularly hard for this task.", "In this work, we study the benefits of collecting intermediate reasoning supervision along with the answer during data collection.", "We find that these intermediate annotations can provide two-fold benefits.", "First, we observe that for any collection budget, spending a fraction of it on intermediate annotations results in improved model performance, for two complex compositional datasets: DROP and Quoref.", "Second, these annotations encourage the model to learn the correct latent reasoning steps, helping combat some of the biases introduced during the data collection process.", "Recently many reading comprehension datasets requiring complex and compositional reasoning over text have been introduced, including HotpotQA (Yang et al., 2018), DROP (Dua et al., 2019), Quoref (Dasigi et al., 2019), and ROPES (Lin et al., 2019).", "However, models trained on these datasets (Hu et al., 2019; Andor et al., 2019) only have the final answer as supervision, leaving the model guessing at the correct latent reasoning.", "Figure 1 shows an example from DROP, which requires first locating various operands (i.e. relevant spans) in the text and then performing fil-ter and count operations over them to get the final answer 3.", "However, the correct answer can also be obtained by extracting the span 3 from the passage, or by adding or subtracting various num-bers in the passage.", "The lack of intermediate hints makes learning challenging and can lead the model Question: How many touchdown passes did Cutler throw in the second half?", ".....In the third quarter, the Vikes started to rally with running back Adrian Peterson's 1-yard touchdown run (with the extra point attempt blocked).", "The Bears increased their lead over the Vikings with Cutler's 3-yard TD pass to tight end Desmond Clark.", "The Vikings then closed out the quarter with quarterback Brett Favre firing a 6-yard TD pass to tight end Visanthe Shiancoe.", "An exciting ....", "with kicker Ryan Longwell's 41-yard field goal, along with Adrian Peterson's second 1-yard TD run.", "The Bears then responded with Cutler firing a 20-yard TD pass to wide receiver Earl Bennett.", "The Vikings then completed the remarkable comeback with Favre finding wide receiver Sidney Rice on a 6-yard TD pass on 4th-and-goal with 15 seconds left in regulation.", "The Bears then took a knee to force overtime....", "The Bears then won on Jay Cutler's game-winning 39-yard TD pass to wide receiver Devin Aromashodu.", "With the loss, not only did the Vikings fall to 11-4, they also surrendered homefield advantage to the Saints.", "complex reasoning.", "In this paper, we present three main contributions.", "First, we show that annotating relevant context spans, given a question, can provide an easy and low-cost way to learn better latent reasoning.", "To be precise, we show that under low budget constraints, collecting these annotations for up to 10% of the training data (2-5% of the total budget) can improve the performance by 4-5% in F1.", "We supervise the current state-of-the-art models for DROP and Quoref, by jointly predict the relevant spans and the final answer.", "Even though these 
models were not designed with these annotations in mind, we show that they can still be successfully used to improve model performance.", "Models that explicitly incorporate these annotations might see greater benefits.", "Our results suggest that future dataset collection efforts should set aside a fraction of budget for intermediate annotations, particularly as the reasoning required becomes more complex.", "Figure 2 (example from Quoref): Conroy tries to teach them about the outside world but comes into conflict both with the principal and Mr. Skeffington, the superintendent. He teaches them how to brush their teeth, who Babe Ruth is, and has the children listen to music, including Flight of the Bumblebee and Beethoven's Fifth Symphony. He explains that when Beethoven wrote the Fifth Symphony, he was writing about what death would sound like. He is also astounded they've never even heard of Halloween, and he decides to take them to Beaufort on the mainland to go trick-or-treating, which the superintendent has forbidden. He also must overcome parental fears of the river. As he leaves the island for the last time, the children come out to see him leave, all of them lined up on a rickety bridge. As he is about to leave by boat, one of the students then begins playing a record, which is the beginning movement of Beethoven's Fifth Symphony.", "Second, these annotations can help combat biases that are often introduced while collecting data (Gururangan et al., 2018; Geva et al., 2019).", "This can take the form of label bias (in DROP, 18% of questions have answers 1, 2, or 3) or annotator bias, where a small group of crowd workers creates a large dataset with common patterns.", "By providing intermediate reasoning steps explicitly, the annotations we collect help the model overcome some of these biases in the training data.", "Finally, the intermediate annotations collected in this work, including 8,500 annotations for DROP and 2,000 annotations for Quoref, will be useful for training further models on these tasks.", "We have made them available at https://github.com/dDua/Intermediate_Annotations.", "Intermediate annotations describe the right set of context spans that should be aggregated to answer a question.", "We demonstrate their impact on two datasets: DROP and Quoref.", "DROP often requires aggregating information from various events in the context (Figure 1).", "It can be challenging to identify the right set of events directly from an answer when the same answer can be derived from many possible event combinations.", "We annotate the entire event span including all the attributes associated with the specific event.", "Quoref requires understanding long chains of coreferential reasoning, as shown in Figure 2, which are often hard to disentangle, especially when the context refers to multiple entities.", "We specifically annotate the coreference chains which lead to the entity being queried.", "Collection process: We used Amazon Mechanical Turk to crowd-source the data collection.", "We randomly sample 8,500 and 2,000 QA pairs from the training set for DROP and Quoref, respectively.", "We showed a QA pair and its context to the workers and asked them to highlight essential spans in the context.", "In the case of DROP, crowd workers were asked to highlight complete events with all their corresponding arguments in each span.", "For Quoref, they were asked to highlight the coreference chains associated with the answer entity in the context.", "Cost of gathering intermediate annotations: Each HIT, 
containing ten questions, paid $1, and took approximately five minutes to complete.", "Overall, we spent $850 to collect 8,500 annotations for DROP and $200 to collect 2,000 annotations for Quoref.", "If these annotations are collected simultaneously with dataset creation, it may be feasible to collect them at a lower cost, as the time taken to read the context again will be avoided.", "In this section, we train multiple models for the DROP and Quoref datasets, and evaluate the benefits of intermediate annotations as compared to traditional QA pairs.", "In particular, we will focus on the cost vs. benefit trade-off of intermediate annotations, along with evaluating their ability to mitigate bias in the training data.", "We study the impact of annotations on DROP on two models at the top of the leaderboard: NABERT (https://github.com/raylin1000/drop_bert) and MTMSN (Hu et al., 2019).", "Both models employ a similar arithmetic block introduced in the baseline model (Dua et al., 2019) on top of contextual representations from BERT (Devlin et al., 2019).", "For Quoref, we use the baseline XLNet (Yang et al., 2019) model released with the dataset.", "We supervise these models with the annotations in a simple way, by jointly predicting the intermediate annotation and the final answer.", "We add two auxiliary loss terms to the marginal log-likelihood loss function.", "The first is a cross-entropy loss between the gold annotations (g) and predicted annotations, which are obtained by passing the final BERT representations through a linear layer to get a score per token p, then normalizing each token's score of being selected as an annotation with a sigmoid function.", "The second is an L1 loss on the sum of predicted annotations, encouraging the model to only select a subset of the passage.", "The hyperparameters λ1 and λ2 were used to balance the scale of both auxiliary loss terms with the marginal log-likelihood.", "To evaluate the cost-benefit trade-off, we fix the total collection budget and then vary the percentage of the budget that should go into collecting intermediate annotations.", "As shown in Figure 3, the model achieves better performance (+1.7% F1) when spending $7k, where 2% of the budget is used for collecting intermediate reasoning annotations, as compared to model performance when spending $10k for collecting only QA pairs.", "Overall, from Figure 3 we can see that allocating even 1% of the budget to intermediate annotations provides performance gains.", "However, we observe that allocating a large percentage of the budget to intermediate annotations at the expense of QA pairs reduces performance.", "In our experiments, we find that the sweet-spot percentages of the budget and training set that should be allocated to intermediate annotations are 2% and 10%, respectively.", "Unanticipated biases (Min et al., 2019; Manjunatha et al., 2019) are often introduced during dataset collection due to many reasons (e.g., domain-specific contexts, crowd-worker distributions, etc.).", "These dataset artifacts can be picked up by the model to achieve better performance without learning the right way to reason.", "We explore two examples of such dataset artifacts in DROP and Quoref.", "In DROP, around 40% of the passages are from NFL game summaries.", "The frequency of counting and arithmetic questions from this portion of the data resulted in the answers 1, 2, and 3 making up 18% of the entire training set.", "To study the effect of biased answer distribution on model performance, we sample 10k QA 
pairs with answers [0,9] from the training set randomly as a biased training set.", "Table 1: F1 performance and confusion loss (lower is better) of models in three settings: baseline with 10k (DROP) and 5k (Quoref) QA pairs, additional QA pairs worth $250 and $100 for DROP and Quoref respectively, and additional annotations worth $250 and $100 for DROP and Quoref respectively. Columns: Dataset, Baseline (F1% / Conf. loss), More QA pairs (F1% / Conf. loss), Annotations (F1% / Conf. loss). DROP: 24.6 / 101.5, 25.5 / 107.5, 28.1 / 94.5. Quoref: 61.8 / 103.0, 62.7 / 109.0, 64.3 / 97.0.", "We also sample QA pairs from the validation set uniformly for each answer in [0,9], thus ensuring that each answer has equal representation in the unbiased validation set.", "In Quoref, we found that around 65% of the answers are entity names present in the first sentence of the context.", "Similar to DROP, we create a biased training set with 5k QA pairs from the original training data, and an unbiased validation set with equal representation of answers from the first sentence and the rest of the context.", "We investigate the effects of spending a small additional budget, either by adding more QA pairs (from the biased data distribution) or by collecting intermediate annotations, on this bias.", "We use two metrics to measure the extent to which bias has been mitigated.", "The first is the original metric for the task, i.e. F1, which measures how accurate the model is on the unbiased evaluation.", "Further, we also want to evaluate the extent to which the errors made by the model are unbiased; in other words, how much the error is diffused over all possible answers, rather than only over the biased labels.", "We compute confusion loss (Machart and Ralaivola, 2012) as the metric for this, which measures error diffusion by computing the highest singular value of the unnormalized confusion matrix after setting the diagonal elements (i.e. 
true positives), to zero (Koço and Capponi, 2013) (lower confusion loss implies more diffusion).", "In an ideal scenario, all labels should have an equally likely probability of being a mis-prediction.", "Higher confusion loss implies that, if we consider the mis-classifications of a model, it has a tendency to over-predict a specific label, making it biased towards that specific class.", "Table 1 shows that along with higher improvements in F1 on providing annotations as compared to more QA pairs, we also see a reduction in the confusion loss with annotations, indicating bias mitigation.", "Further, we also find that for DROP, the false positive rate for the top-3 common labels fell from 47.7% (baseline) to 39.6% (with annotations), while the false positive rate for the bottom-7 increased from 30.4% (baseline) to 36.3% (with annotations), further demonstrating mitigation of bias.", "The confusion matrices are included in the Appendix.", "Figure 4 shows a DROP example where the model trained without annotations is not able to determine the right set of events being queried, returning an incorrect response.", "The model trained with annotations can understand the semantics behind the query terms first half and Cowboys, to arrive at the correct answer.", "The curves depicting quanti-", "Figure 4 (example from DROP): Question: How many times did the Cowboys score in the first half? Passage: Still searching for their first win, the Bengals flew to Texas Stadium for a Week 5 interconference duel with the Dallas Cowboys. In the first quarter, Cincinnati trailed early as Cowboys kicker Nick Folk got a 30-yard field goal, along with RB Felix Jones getting a 33-yard TD run. In the second quarter, Dallas increased its lead as QB Tony Romo completed a 4-yard TD pass to TE Jason Witten. The Bengals would end the half with kicker Shayne Graham getting a 41-yard and a 31-yard field goal. In the third quarter, Cincinnati tried to rally as QB Carson Palmer completed an 18-yard TD pass to WR T. J. Houshmandzadeh. In the fourth quarter, the Bengals got closer as Graham got a 40-yard field goal, yet the Cowboys answered with Romo completing a 57-yard TD pass to WR Terrell Owens. Cincinnati tried to come back as Palmer completed a 10-yard TD pass to Houshmandzadeh (with a failed 2-point conversion), but Dallas pulled away with Romo completing a 15-yard TD pass to WR Patrick Crayton.", "Similar to our work, Zaidan et al. (2007) studied the impact of providing explicit supervision via rationales, rather than generating them, for varying fractions of the training set in text classification.", "However, we study the benefits of such supervision for complex compositional reading comprehension datasets.", "In the field of computer vision, Donahue and Grauman (2011) collected similar annotations for visual recognition, where crowd workers highlighted relevant regions in images.", "Within reading comprehension, various works like HotpotQA (Yang et al., 2018) and CoQA (Reddy et al., 2019) have collected similar reasoning steps for the entire dataset.", "Our work shows that collecting intermediate annotations for a fraction of the dataset is cost-effective and helps alleviate dataset collection biases to a degree.", "Another line of work (Ning et al., 2019) explores the cost vs. benefit of collecting full vs. 
partial annotations for various structured prediction tasks.", "However, they do not focus on the intermediate reasoning required to learn the task.", "Our auxiliary training with intermediate annotations is inspired by extensive related work on training models using side information or domain knowledge beyond labels (Mann and McCallum, 2008; Chang et al., 2007; Ganchev et al., 2010; Rocktäschel et al., 2015).", "Especially relevant is work on supervising models using explanations (Ross et al., 2017), which, similar to our annotations, identify parts of the input that are important for prediction (Lei et al., 2016; Ribeiro et al., 2016).", "We show that intermediate annotations are a cost-effective way to not only boost model performance but also alleviate certain unanticipated biases introduced during the dataset collection.", "However, it may be unnecessary to collect these for the entire dataset, and there is a sweet spot that works best depending on the task.", "We proposed a simple semi-supervision technique to expose the model to these annotations.", "We believe that in the future they can be used more directly to yield better performance gains.", "We have also released these annotations for the research community at https://github.com/dDua/Intermediate_Annotations.", "This work was supported in part by the Allen Institute of Artificial Intelligence, in part by Amazon, and in part by the National Science Foundation (NSF) grant #CNS-1730158." ]
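The confusion loss used in the record above has a compact definition: the largest singular value of the unnormalized confusion matrix with its diagonal zeroed. Below is a minimal sketch of one plausible reading of that definition, not the authors' released code; the function name is an assumption.

```python
import numpy as np

def confusion_loss(y_true, y_pred, n_labels):
    """Largest singular value of the diagonal-zeroed confusion matrix.

    Lower values mean errors are diffused across labels rather than
    concentrated on a few over-predicted classes.
    """
    conf = np.zeros((n_labels, n_labels), dtype=float)
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1.0                 # unnormalized counts
    np.fill_diagonal(conf, 0.0)           # drop true positives, keep errors only
    return float(np.linalg.svd(conf, compute_uv=False)[0])
```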
[ "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "abstain", "other", "other", "method", "method", "result", "abstain", "objective", "method", "other", "other" ]
[ "Dialogue state tracking (DST) plays a key role in task-oriented dialogue systems to monitor the user's goal.", "In general, there are two strategies to track a dialogue state: predicting it from scratch and updating it from previous state.", "The scratch-based strategy obtains each slot value by inquiring all the dialogue history, and the previous-based strategy relies on the current turn dialogue to update the previous dialogue state.", "However, it is hard for the scratch-based strategy to correctly track short-dependency dialogue state because of noise; meanwhile, the previous-based strategy is not very useful for long-dependency dialogue state tracking.", "Obviously, it plays different roles for the context information of different granularity to track different kinds of dialogue states.", "Thus, in this paper, we will study and discuss how the context information of different granularity affects dialogue state tracking.", "First, we explore how greatly different granularities affect dialogue state tracking.", "Then, we further discuss how to combine multiple granularities for dialogue state tracking.", "Finally, we apply the findings about context granularity to few-shot learning scenario.", "Besides, we have publicly released all codes.", "Currently, task-oriented dialogue systems have attracted great attention in academia and industry (Chen et al., 2017), which aim to assist the user to complete certain tasks, such as buying products, booking a restaurant, etc.", "As a key component of task-oriented dialogue system, dialogue state tracking plays a important role in understanding the natural language given by the user and expressing it as a certain dialogue state (Rastogi et al., 2017, 2018; Goel et al., 2018).", "The dialogue Corresponding author U: i am looking for a swimming pool in the south part of town .", "state for each turn of a dialogue is typically presented as a series of slot value pairs that represent information about the user's goal up to the current turn.", "For example, in Figure 1, the dialogue state at turn 2 is { ( attraction type , cinema ), ( attraction area , south ) } .", "In general, there are two strategies to track a dialogue state: predicting it from scratch and updating it from previous state.", "The scratch-based strategy obtains each slot value in dialogue state by inquiring all the dialogue history (Xu and Hu, 2018; Lei et al., 2018; Goel et al., 2019; Ren et al., 2019; Wu et al., 2019; Shan et al., 2020; Zhang et al., 2020), the advantage of this strategy is to ensure the integrity of the dialogue information.", "The previous-based strategy relies on the current turn dialogue to update the previous dialogue state (Mrksic et al., 2017; Chao and Lane, 2019; Kim et al., 2020; Heck et al., 2020; Zhu et al., 2020), the main character of this strategy is to greatly improve the efficiency of dialogue state prediction and avoid the computational cost of encoding all dialogue history.", "However, both kinds of strategies above have great defects because of their own characters.", "For the scratch-based strategy, it is hard to correctly track short-dependency dialogue state because of the noise associated with encoding all dialogue history.", "For example, the dialogue history of turn 1 to 3 in Figure 1", "(a) does not contribute to the prediction of slot values in the restaurant domain.", "For the previous-based strategy, it is difficult to solve the problem of long-dependency dialogue state tracking because it utilizes only limited dialogue information 
from the current turn dialogue and the previous state.", "As in Figure 1 (b), the slot taxi departure cannot be predicted due to the absence of the corresponding dialogue history content.", "Obviously, context information of different granularity plays different roles in tracking different kinds of dialogue states.", "Intuitively, less context information is needed for short-dependency dialogue states, while more context information must be taken into account for long-dependency dialogue state tracking.", "For example, the dialogue state in Figure 1 (c) is tracked from turn 2, which utilizes context information of granularity 4 (turns 3 to 6), providing evidence for the prediction of all slots while bringing as little noise as possible.", "Thus, in this paper, we will study and discuss how the context information of different granularity affects dialogue state tracking.", "The contribution of this paper is that it is, to the best of our knowledge, the first detailed investigation of the impact of context granularity in dialogue state tracking, and it promotes the research on dialogue state tracking strategies.", "Our investigation mainly focuses on three points (the code is released at https://github.com/yangpuhai/Granularity-in-DST):", "How greatly do different granularities affect dialogue state tracking?", "How to combine multiple granularities for dialogue state tracking?", "Application of context information granularity in the few-shot learning scenario.", "The rest of the paper is organized as follows: the relevant definitions and formulas in the dialogue state tracking strategy are introduced in Section 2. Section 3 lists the detailed experimental settings.", "Section 4 presents the survey report and results, followed by conclusions in Section 5.", "To describe the dialogue state tracking strategy, let's introduce the formula definitions used in this paper:", "Dialogue Content: D = (T_1, T_2, ..., T_N) is defined as a dialogue of length N, where T_i = (S_i, U_i) is the dialogue content of the i-th turn, which includes the system utterance S_i and the user utterance U_i.", "Dialogue State: We define E = (B_0, B_1, B_2, ..., B_N) as all dialogue states up to the N-th turn of the dialogue, where B_i is the set of slot value pairs representing the information provided by the user up to the i-th turn.", "In particular, B_0 is the initial dialogue state, which is an empty set.", "Granularity: In dialogue state tracking, the number of dialogue turns spanning from a certain dialogue state B_m in the dialogue to the current dialogue state B_n is called the granularity, that is, G = |(T_{m+1}, ..., T_n)|.", "For example, the granularities of context information in (a), (b), and (c) in Figure 1 are 6, 1, and 4, respectively.", "Assuming that the dialogue state of the N-th turn is currently required to be inferred, dialogue state tracking under a certain granularity is as follows: B_N = tracker((T_{N-G+1}, ..., T_N), B_{N-G}), where G ∈ {1, 2, ..., N} is the granularity of the context information and tracker represents a dialogue state tracking model.", "Table 1: Data statistics of Sim-M, Sim-R, WOZ2.0, DSTC2 and MultiWOZ2.1. Columns: Dataset, # Domains, # Slots, Avg. turns, # Dialogues (train/dev/test), # Turns (train/dev/test). Sim-M: 1, 5, 5.14, 384/120/264, 1,973/627/1,364. Sim-R: 1, 9, 5.53, 1,116/349/775, 6,175/1,489/3,436. WOZ2.0: 1, 3, 4.23, 600/200/400, 2,536/830/1,646. DSTC2: 1, 3, 7.24, 1,612/506/1,117, 11,677/3,934/9,890. MultiWOZ2.1: 5, 30, 6.53, 8,420/1,000/999, 54,984/7,371/7,368.", "If G = 1, then B_N = tracker(T_N, B_{N-1}); this case corresponds to 
the strategy of updating from the previous state.", "Therefore, the previous-based strategy is a special case of dialogue state tracking where the context granularity is minimal.", "If G = N, then B_N = tracker((T_1, ..., T_N), B_0); this case corresponds to the strategy of predicting the state from scratch.", "Similarly, the scratch-based strategy is also a special case of dialogue state tracking, with context information of maximum granularity.", "Since the size of the maximum granularity N is different in different dialogues, 0 is used in this paper to refer to the maximum granularity N, -1 to refer to granularity N - 1, and so on.", "In order to investigate how the context information of different granularity affects dialogue state tracking, we analyze the performance of several different types of dialogue state tracking models on different datasets.", "For a clearer illustration, the detailed settings are introduced in this section.", "Our experiments were carried out on 5 datasets: Sim-M (Shah et al., 2018), Sim-R (Shah et al., 2018), WOZ2.0 (Wen et al., 2016), DSTC2 (Henderson et al., 2014) and MultiWOZ2.1 (Eric et al., 2019).", "The statistics for all datasets are shown in Table 1. Sim-M and Sim-R are multi-turn dialogue datasets in the movie and restaurant domains, respectively, which are specially designed to evaluate the scalability of dialogue state tracking models.", "A large number of unknown slot values are included in their test sets, so the generalization ability of the model can be reflected more accurately.", "The WOZ2.0 and DSTC2 datasets are both collected in the restaurant domain and have the same three slots: food, area, and price range.", "These two datasets provide automatic speech recognition (ASR) hypotheses of user utterances and can therefore be used to verify the robustness of the model against ASR errors.", "As in previous works, we use the manuscript user utterances for training and the top ASR hypothesis for testing.", "MultiWOZ2.1 is the corrected version of MultiWOZ (Budzianowski et al., 2018).", "Compared to the four datasets above, MultiWOZ2.1 is a more challenging and currently widely used benchmark for multi-turn multi-domain dialogue state tracking, consisting of 7 domains, over 30 slots, and over 4500 possible slot values.", "Following previous works (Wu et al., 2019; Kim et al., 2020; Heck et al., 2020; Zhu et al., 2020), we only use the 5 domains (restaurant, train, hotel, taxi, attraction) that contain a total of 30 slots.", "SpanPtr: This is the first model to extract slot values directly from the dialogue context without an ontology; it encodes the whole dialogue history with a bidirectional RNN and extracts the slot value for each slot by generating the start and end positions in the dialogue history (Xu and Hu, 2018).", "TRADE: This model is the first to consider knowledge transfer between domains in the multi-domain dialogue state tracking task.", "It represents a slot as the concatenation of the domain name and slot name, encodes all the dialogue history using a bidirectional RNN, and finally decodes each slot value using a pointer-generator network (Wu et al., 2019).", "BERTDST: This model decodes only the slot values of the slots mentioned in the current turn of dialogue, and then uses a rule-based update mechanism to update from the previous state to the current turn state.", "It uses BERT to encode the current turn of dialogue and extracts slot values from the dialogue as spans (Chao and Lane, 2019).", "SOMDST: This model takes the dialogue state as an explicit 
memory that can be selectively overwritten, and inputs it into BERT together with the current turn dialogue.", "It then decomposes the prediction of each slot value into operation prediction and slot generation (Kim et al., 2020).", "SUMBT: This model uses an ontology and is trained and evaluated on the dialogue session level instead of the dialogue turn level.", "BERT is used in the model to encode turn-level dialogues, and a unidirectional RNN is used to capture the session-level representation (Lee et al., 2019).", "Our deployments are based on the official implementation source code of SOMDST (https://github.com/clovaai/som-dst) and SUMBT (https://github.com/SKTBrain/SUMBT), and SpanPtr, TRADE and BERTDST are reproduced in this paper.", "BERT in all models uses pre-trained BERT (Devlin et al., 2019) (BERT-Base, Uncased), which has 12 hidden layers of 768 units and 12 self-attention heads, while the RNN uses GRU (Cho et al., 2014).", "We use Adam (Kingma and Ba, 2014) as the optimizer and use greedy decoding.", "We customize the training epochs for all models; training stopped early when the model's performance on the development set failed to improve for 15 consecutive epochs, and all results were averaged over three runs with different random seeds.", "The detailed setting of the hyperparameters is given in Appendix A. Since the length of the dialogue history is related to the granularity, the input length of the model needs to adapt to the granularity.", "Especially for the models with BERT as the encoder, in order to prevent the input from being truncated, we set the max sequence length to exceed almost all the inputs under different granularities.", "See Appendix A for details on the max sequence length settings.", "Following previous works (Xu and Hu, 2018; Wu et al., 2019; Kim et al., 2020; Heck et al., 2020), the joint accuracy (Joint acc) and slot accuracy (Slot acc) are used for evaluation.", "The joint accuracy is the accuracy that checks whether all the predicted slot values in each turn are exactly the same as the ground truth slot values.", "The slot accuracy is the average accuracy of slot value prediction over all turns.", "This section presents our detailed investigation of how the context information of different granularity affects dialogue state tracking, focusing on the impact of granularity on dialogue state tracking, the combination of multiple granularities, and the application of context granularity in the few-shot learning scenario.", "For simplicity, in all experimental results, the maximum granularity is expressed as 0, the maximum granularity minus 1 is expressed as -1, and so on.", "The first part of our investigation looks at the validity of the context granularity used by the various current dialogue state tracking models and tries to figure out how different granularities affect dialogue state tracking.", "The experimental results are shown in Table 3. 
It can be found that some dialogue state tracking models do not take the appropriate granularity, and their performance is greatly improved when they are trained with the context of appropriate granularity.", "Table 3: Joint accuracy and slot accuracy on WOZ2.0, DSTC2 and MultiWOZ2.1 when the same granularities are used in the training (TG) and inference (IG) phases. Columns: Model, TG, IG, WOZ2.0 (Joint acc / Slot acc), DSTC2 (Joint acc / Slot acc), MultiWOZ2.1 (Joint acc / Slot acc). SpanPtr: 0*/0*: 0.4455/0.7475, 0.6234/0.8461, 0.4415/0.9570; -1/-1: 0.5012/0.7786, 0.5829/0.8251, 0.3868/0.9495; -2/-2: 0.5881/0.8121, 0.4825/0.7728, 0.3726/0.9499; -3/-3: 0.6330/0.8350, 0.4737/0.7628, 0.3745/0.9507. TRADE: 0*/0*: 0.5808/0.8186, 0.6493/0.8590, 0.4420/0.9655; -1/-1: 0.5194/0.7833, 0.5013/0.7834, 0.3963/0.9613; -2/-2: 0.5680/0.8107, 0.4185/0.7488, 0.3528/0.9569; -3/-3: 0.5292/0.7886, 0.5171/0.7963, 0.3564/0.9552. BERTDST: 1*/1*: 0.8194/0.9307, 0.6395/0.8537, 0.4140/0.9584; 2/2: 0.8220/0.9318, 0.5830/0.8271, 0.4586/0.9636; 3/3: 0.8190/0.9318, 0.5614/0.8103, 0.4772/0.9646; 4/4: 0.8256/0.9344, 0.5666/0.8152, 0.4917/0.9659. SOMDST: 1*/1*: 0.8540/0.9471, 0.6975/0.8828, 0.5029/0.9715; 2/2: 0.8274/0.9341, 0.7022/0.8808, 0.5179/0.9730; 3/3: 0.8280/0.9356, 0.7121/0.8851, 0.5128/0.9720; 4/4: 0.8620/0.9491, 0.7176/0.8882, 0.5085/0.9718.", "For example, the joint accuracy of SpanPtr with granularity -3 on WOZ2.0 improved by 42%, while the joint accuracy of BERTDST with granularity 4 on MultiWOZ2.1 improved by 19%.", "These results suggest that there are significant differences in dialogue state tracking at different granularities; therefore, we should be careful to determine the granularity to be used according to the characteristics of the model and dataset.", "By observing the experimental comparison results on different models and datasets in Table 3, it can be found that: for different models, the model with generative decoding prefers larger granularity, because it requires more context information to effectively learn the vocabulary-based distribution.", "For example, TRADE and SOMDST both perform better with larger granularity.", "Meanwhile, the model with extractive decoding is more dependent on the characteristics of the dataset.", "Besides, in general, the model with generative decoding has obvious advantages over the model with extractive decoding.", "For different datasets, when the dataset involves multiple domains and there are a large number of long-dependency dialogue states, context information of larger granularity can be used to more effectively capture the long-dependency relationships in the data for dialogue state tracking, as in the MultiWOZ2.1 dataset.", "For simpler single-domain datasets, a large number of short dependencies determine the effectiveness of small granularity in dialogue state tracking.", "Table 4: Comparison of different baseline models on WOZ2.0, DSTC2 and MultiWOZ2.1 before and after applying multi-granularity combination. Columns: Model, TG, IG, WOZ2.0 (Joint acc / Slot acc), DSTC2 (Joint acc / Slot acc), MultiWOZ2.1 (Joint acc / Slot acc). SpanPtr: 0*/0*: 0.4455/0.7475, 0.6234/0.8461, 0.4415/0.9570; 0,-1/0: 0.4804/0.7428, 0.6078/0.8371, 0.4430/0.9565. TRADE: 0*/0*: 0.5808/0.8186, 0.6493/0.8590, 0.4420/0.9655; 0,-1/0: 0.6102/0.8357, 0.6030/0.8413, 0.4410/0.9655. BERTDST: 1*/1*: 0.8194/0.9307, 0.6395/0.8537, 0.4140/0.9584; 1,2/1: 0.8331/0.9368, 0.5824/0.8290, 0.4229/0.9602. SOMDST: 1*/1*: 0.8540/0.9471, 0.6975/0.8828, 0.5029/0.9715; 1,2/1: 0.8572/0.9479, 0.7077/0.8866, 0.5126/0.9723. SUMBT: 1*/1*: 0.9052/0.9665, 0.6571/0.8664, 0.4632/0.9655; 1,2/1: 0.9089/0.9677, 0.6739/0.8716, 0.4725/0.9663.", "However, when there are more 
turns of dialogue resulting in less information in each turn, a larger granularity may be required to provide enough information; for example, SpanPtr performs best on the DSTC2 dataset at the maximum granularity.", "As can be seen from the above analysis, different granularities have their own advantages in different dialogue situations, so it is natural to wonder whether multiple granularities can be combined to achieve better dialogue state tracking.", "Next, let's discuss the issue of multi-granularity combination.", "Following the above analysis, here we mainly discuss how to combine multiple granularities in dialogue state tracking, mainly focusing on three aspects: (1) the relationship between granularities, (2) the performance of multi-granularity combination, and (3) the limitations of multi-granularity combination.", "The relationship between granularities: First, we use different granularities in the training and inference phases of dialogue state tracking to figure out the relationship between different granularities, as shown in Figure 2. It can be seen that when we fix the granularity of context information in the inference phase, the dialogue state tracking model trained with another granularity still obtains generalization under this inference granularity.", "And even some models learned at other granularities, such as BERTDST in Figure 2 (b) and (f), can perform better.", "Meanwhile, it can also be found that as the granularity gap increases, the context information becomes more and more inconsistent, and eventually the ability of the model to generalize across granularities is gradually reduced.", "Through these phenomena, we can summarize as follows: the knowledge learned by the dialogue state tracking model from context information of different granularity is transferable, and a smaller gap between granularities brings a greater knowledge transfer effect.", "Performance of multi-granularity combination: Then, we use the knowledge transfer between context information of different granularity to improve the baselines.", "In the specific experiment, we add the most adjacent granularity to the training phase of the model, that is, the context under two granularities is used for training, while the inference phase remains unchanged, as shown in Table 4.", "It can be observed that in most cases, the performance of the baseline models is significantly enhanced, suggesting that adding context information of more granularities to the training phase of the model can indeed improve the generalization of the dialogue state tracking model.", "Of course, in some cases, multi-granularity combination results in a reduction in performance, such as SpanPtr, TRADE, and BERTDST on the DSTC2 dataset.", "The main reason for this phenomenon is likely the large deviation between the context information of different granularities in the multi-granularity combination, as can be seen from the large reduction of SpanPtr, TRADE, and BERTDST on the DSTC2 dataset with other granularities in Table 3. Limitations of multi-granularity combination: Given that multi-granularity combination can lead to improved generalization performance, is it better to have more context information of different granularities in the training phase?", "To answer this question, we gradually add more granularities to the training phase while keeping the inference granularity unchanged; the experimental results are shown in Figure 3. 
It can be found that there is an upper limit to the use of multi-granularity combination in the training phase.", "Generally, adding the granularity with the smallest gap brings the best effect; after that, as the number of granularities increases, the performance declines.", "Considering the knowledge transfer between granularities in multi-granularity combination, we explore the application of multi-granularity combination in the few-shot learning scenario.", "Figure 4 shows the joint accuracy of the model with different multi-granularity combinations and the percentage improvement relative to the baseline model on the WOZ2.0 dataset with different training data scales.", "Table 5: Joint accuracy of baseline models in few-shot learning before and after applying multi-granularity combination in the training phase. Columns: Model, TG, IG, Sim-M (10%), Sim-R (10%), WOZ2.0 (10%), DSTC2 (5%), MultiWOZ2.1 (5%). SpanPtr: 0*/0*: 0.1466, 0.5147, 0.1744, 0.4523, 0.2700; 0,-1/0: 0.1188, 0.5631, 0.2211, 0.4640, 0.2703; 0,-1,-2/0: 0.0985, 0.5752, 0.2165, 0.4839, 0.2765; 0,-1,-2,-3/0: 0.0872, 0.5805, 0.2313, 0.4873, 0.2762. TRADE: 0*/0*: 0.0780, 0.6531, 0.2047, 0.5290, 0.2531; 0,-1/0: 0.0880, 0.6512, 0.2153, 0.5108, 0.2400; 0,-1,-2/0: 0.0892, 0.6612, 0.2193, 0.5173, 0.2470; 0,-1,-2,-3/0: 0.0921, 0.6569, 0.2098, 0.5101, 0.2461. BERTDST: 1*/1*: 0.4814, 0.7066, 0.5800, 0.4697, 0.3414; 1,2/1: 0.6219, 0.7295, 0.5770, 0.5137, 0.3491; 1,2,3/1: 0.5926, 0.7376, 0.6138, 0.4712, 0.3450; 1,2,3,4/1: 0.6075, 0.7241, 0.6136, 0.4929, 0.3377. SOMDST: 1*/1*: 0.2708, 0.4700, 0.5140, 0.3967, 0.3596; 1,2/1: 0.2754, 0.5101, 0.5563, 0.5151, 0.3706; 1,2,3/1: 0.2549, 0.5166, 0.5662, 0.5307, 0.3613; 1,2,3,4/1: 0.2104, 0.5142, 0.5330, 0.5238, 0.3572. SUMBT: 1*/1*: 0.0982, 0.6526, 0.4581, 0.4689, 0.2964; 1,2/1: 0.0980, 0.6546, 0.4690, 0.5493, 0.3535; 1,2,3/1: 0.0980, 0.6390, 0.4848, 0.5265, 0.3696; 1,2,3,4/1: 0.0968, 0.6464, 0.4708, 0.5611, 0.3637.", "It can be found that under different scales of training data, multi-granularity combination can achieve better performance compared with single granularity in most cases.", "Moreover, it can be seen from (a), (d) and (e) that the advantages of multi-granularity combination gradually expand as the scale of the training dataset decreases.", "Therefore, the performance of multi-granularity combination in few-shot learning is worth exploring.", "We conduct detailed experiments on all 5 datasets in the paper to fully explore the potential of multi-granularity combination in few-shot learning, as shown in Table 5.", "It can be found that multi-granularity combination has a very significant effect in few-shot learning, and in some cases can even achieve a relative improvement of more than 10%, such as SpanPtr on Sim-R and WOZ2.0, BERTDST on Sim-M, and SOMDST on WOZ2.0 and DSTC2.", "Meanwhile, in few-shot learning, the upper limit of multi-granularity combination can be higher, and better performance can be achieved when more granularities are added in the training phase.", "The above experimental results of multi-granularity combination in few-shot learning show that there is indeed knowledge transfer between contexts of different granularity, and the model can obtain a more adequate modeling of the dialogue by learning from context dialogues of different granularity.", "In this paper, we analyze the defects of the two existing traditional dialogue state tracking strategies when dealing with context of different granularity and make a comprehensive study of how the context information of different granularity affects dialogue state tracking.", "Extensive experimental results and analysis show that: (1) 
"In future work, dynamic context granularity can be used in training and inference to further improve dialogue state tracking.", "This work may contribute to the development of conversational systems.", "In the narrow sense, this work focuses on dialogue state tracking in task-oriented dialogue systems, hoping to improve the ability of conversational AI to understand human natural language.", "If so, these improvements could have a positive impact on the research and application of conversational AI, which could help humans complete their goals more effectively through more intelligent communication.", "However, we never forget the other side of the coin.", "Replacing human agents with conversational AI may affect humanized communication and may lead to human-machine conflicts, which need to be considered more broadly in the field of conversational AI.", "We thank the anonymous reviewers for their insightful comments.", "This work was supported by the National Key R&D Plan (No. 2020AAA0106600) and the National Natural Science Foundation of China (Grant No. U19B2020 and No. 61772076)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "objective", "objective", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "other", "other" ]
[ "Definition generation, which aims to automatically generate dictionary definitions for words, has recently been proposed to assist the construction of dictionaries and help people understand unfamiliar texts.", "However, previous works hardly consider explicitly modeling the components of definitions, leading to under-specific generation results.", "In this paper, we propose ESD , namely E xplicit S emantic D ecomposition for definition generation, which explicitly decomposes meaning of words into semantic components, and models them with discrete latent variables for definition generation.", "Experimental results show that ESD achieves substantial improvements on WordNet and Oxford benchmarks over strong previous baselines.", "Dictionary definition, which provides explanatory sentences for word senses, plays an important role in natural language understanding for human.", "It is a common practice for human to consult a dictionary when encountering unfamiliar words (Fraser, 1999).", "However, it is often the case that we cannot find satisfying definitions for words that are rarely used or newly created.", "To assist dictionary compilation and help human readers understand unfamiliar texts, generating definitions automatically is of practical significance.", "Noraset et al. (2017) first propose definition modeling, which is the task of generating the dictionary definition for a given word with its embedding.", "Gadetsky et al. (2018) extend the work by incorporating word sense disambiguation to generate context-aware word definitions.Both methods adopt a variant of encoder-decoder architecture, Equal contribution Corresponding author Word captain Reference the person in charge of a ship Generated the person who is a member of a ship Table 1: An example of the definitions of word captain.", "where the word to be defined is mapped to a low-dimension semantic vector by an encoder, and the decoder is responsible for generating the definition given the semantic vector.", "Although the existing encoder-decoder architecture (Gadetsky et al., 2018; Ishiwatari et al., 2019; Washio et al., 2019) yields reasonable generation results, it relies heavily on the decoder to extract thorough semantic components of the word, leading to under-specific definition generation results, i.e. missing some semantic components.", "As illustrated in Table 1, to generate a precise definition of the word captain, one needs to know that captain refers to a person , captain is related to ship , and captain manages or is in charge of the ship , where person , ship , manage are three semantic components of word captain.", "However, due to the lack of explicitly modeling of these semantic components, the model misses the semantic component manage for the word captain.", "Linguists and lexicographers define a word by decomposing its meaning into its semantic components and expressing them in natural language sentences (Wierzbicka, 1996).", "Inspired by this, Yang et al. (2019) incorporate sememes (Bloom-field, 1949; Dong and Dong, 2003), i.e. 
minimum units of semantic meaning in human languages, into the task of generating definitions in Chinese.", "However, it is at least as time-consuming and expensive to label the components of words as it is to write definitions manually.", "In this paper, we propose to explicitly decompose the meaning of words into semantic components for definition generation.", "We introduce a group of discrete latent variables to model the underlying semantic components. Extending the established training techniques for discrete latent variables used in representation learning (Roy et al., 2018) and machine translation (van den Oord et al., 2017; Kaiser et al., 2018; Shu et al., 2019), we further propose two auxiliary losses to ensure that the introduced latent variables capture the word semantics.", "Experimental results show that our method achieves significant improvements over previous methods on two definition generation datasets.", "We also show that our model indeed learns meaningful and informative latent codes, and generates more precise and specific definitions.", "In this section, we introduce the background of the original definition modeling task and two extensions of it.", "Definition generation was first proposed by Noraset et al. (2017).", "The goal of the original task is to generate a natural language description $\mathcal{D} = d_{1:T}$ for a given word $w$.", "The authors view it as a conditional language modeling task: $p(\mathcal{D} \mid w) = \prod_{t=1}^{T} p(d_t \mid d_{<t}, w)$ (1).", "The main drawback of Noraset et al. (2017) is that they cannot handle words with multiple different meanings, such as spring and bank, whose meanings can only be disambiguated using their contexts.", "To tackle the polysemy problem in the definition generation task, Gadetsky et al. (2018) introduce the task of Context-aware Definition Generation (CDG), in which a usage example $C = c_{1:|C|}$ of the target word is given to help disambiguate the meaning of the word.", "For example, given the word bank and its context a bank account, the goal of the task is to generate a definition like an organization that provides financial services.", "However, if the input context is changed to He jumped into the river and swam to the opposite bank, then the appropriate definition would be the side of a river.", "They extend Eqn.", "1 to make use of the given context as follows: $p(\mathcal{D} \mid w, C) = \prod_{t=1}^{T} p(d_t \mid d_{<t}, w, C)$ (2). 2.3 Decomposed Semantics for Definition Modeling: Linguists consider the process of defining a word to be decomposing its meaning into constituent components and describing them in natural language sentences (Goddard and Wierzbicka, 1994; Wierzbicka, 1996).",
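The factorizations in Eqns. (1) and (2) score a definition token by token, each token conditioned on the prefix plus the word (and, for CDG, its context). A minimal sketch of this scoring loop; the stand-in probability function is our own placeholder, not any published model:

```python
import math

def log_p_definition(definition, word, context, token_logprob):
    """log p(D | w, C) = sum_t log p(d_t | d_<t, w, C), as in Eqn. (2)."""
    total = 0.0
    for t, token in enumerate(definition):
        total += token_logprob(definition[:t], word, context, token)
    return total

# Toy stand-in decoder: a uniform distribution over a tiny vocabulary.
VOCAB = ["the", "side", "of", "a", "river", "organization"]
uniform = lambda prefix, word, context, token: math.log(1.0 / len(VOCAB))

score = log_p_definition(["the", "side", "of", "a", "river"],
                         word="bank",
                         context="He swam to the opposite bank",
                         token_logprob=uniform)
print(round(score, 3))  # 5 tokens, each contributing log(1/6)
```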
"Previously, Yang et al. (2019) take sememes as one kind of such semantic components, and leverage the external sememe annotations of HowNet (Dong and Dong, 2003) to help definition generation.", "They formalize the task of definition generation given a word $w$ and its sememes $s$ as follows: $p(\mathcal{D} \mid w, s) = \prod_{t=1}^{T} p(d_t \mid d_{<t}, w, s)$ (3).", "Although their method is shown to generate definitions more accurately, it assumes that sememe annotations are available for each word, which can be unrealistic in real-world scenarios.", "In this section, we present ESD, namely Explicit Semantic Decomposition for context-aware definition generation.", "It is linguistically motivated that to define a word is to decompose its meaning into constituent components and describe them in natural language sentences (Goddard and Wierzbicka, 1994; Wierzbicka, 1996).", "We assume that there exists a set of discrete latent variables $z = z_{1:M}$ that model the semantic components of $w$, where $M$ is a hyperparameter denoting the number of decomposed components.", "Then the marginal likelihood of a definition $\mathcal{D}$ that we would like to maximize, given a target word $w$ and its context $C$, can be written as follows: $p(\mathcal{D} \mid w, C) = \sum_{z} p(z \mid w, C)\, p(\mathcal{D} \mid w, C, z)$.", "However, it is generally computationally intractable to sum over all the configurations of latent variables.", "In order to address this issue, we instead introduce an approximate posterior $q(z \mid w, C, \mathcal{D})$ and optimize the evidence lower bound (ELBO) of the log-likelihood $\log p(\mathcal{D} \mid w, C)$ for training: $\mathcal{J}_{\mathrm{ELBO}} = \mathbb{E}_{q(z \mid w, C, \mathcal{D})}[\log p(\mathcal{D} \mid z, w, C)] - \mathrm{KL}(q(z \mid w, C, \mathcal{D}) \,\|\, p(z \mid w, C)) \le \log p(\mathcal{D} \mid w, C)$ (4).", "In the training phase, both the posterior distribution $q(z \mid w, C, \mathcal{D})$ and the prior distribution $p(z \mid w, C)$ are computed, and $z$ is sampled from the posterior distribution.", "In the testing phase, due to the lack of $\mathcal{D}$, we only compute the prior distribution $p(z \mid w, C)$ and obtain $z$ by applying $\arg\max$ to it.", "Note that for simplicity of notation, we denote $q(z_i \mid w, C, \mathcal{D})$ and $p(z_i \mid w, C)$ as $q_i$ and $p_i$ in the following sections, respectively.", "As shown in Figure 1, ESD is composed of three modules: the encoder stack, a decoder, and a semantic components predictor.", "Before detailing each component of ESD, we give a brief overview of the architecture.", "Following the common practice of context-aware definition models (Gadetsky et al., 2018; Ishiwatari et al., 2019), we first encode the source word $w$ into its representation $r$ and the context $C = c_{1:|C|}$ into its contextual representation $H = h_{1:|C|}$.", "The semantic component predictor is responsible for predicting the semantic components $z = z_{1:M}$.", "Finally, the decoder generates the target definition from the semantic components $z$, the word representation $r$, and the context representation $H$.",
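A minimal PyTorch sketch of the ELBO in Eqn. (4) for the $M$ categorical latents, with the posterior conditioned on $[r; h_C; h_\mathcal{D}]$ and the prior on $[r; h_C]$ as in the approximator equations below; every module and tensor name is our own illustrative choice, not ESD's released code.

```python
import torch
import torch.nn.functional as F

M, K, H = 8, 256, 300      # latent variables, categories, hidden size (Sec. 4.1)

W_q = torch.nn.Linear(3 * H, M * K)   # posterior sees r, h_C, and h_D
W_p = torch.nn.Linear(2 * H, M * K)   # prior sees only r and h_C

def elbo(r, h_C, h_D, recon_loglik):
    q_logits = W_q(torch.cat([r, h_C, h_D], -1)).view(-1, M, K)
    p_logits = W_p(torch.cat([r, h_C], -1)).view(-1, M, K)
    q = F.softmax(q_logits, -1)
    # KL(q || p), summed over the M independent categorical latents.
    kl = (q * (F.log_softmax(q_logits, -1)
               - F.log_softmax(p_logits, -1))).sum(dim=(-1, -2))
    # recon_loglik stands in for E_q[log p(D | z, w, C)]; in practice it is
    # estimated with codes sampled from q (e.g. with a straight-through trick).
    return recon_loglik - kl

r, h_C, h_D = (torch.randn(2, H) for _ in range(3))
print(elbo(r, h_C, h_D, recon_loglik=torch.zeros(2)).shape)  # torch.Size([2])
```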
"Same as Ishiwatari et al. (2019), our encoder consists of two parts, namely a word encoder and a context encoder.", "Word Encoder: The word encoder is responsible for mapping the word $w$ to a low-dimensional vector $r$, and consists of a word embedding and a character-level encoder.", "The word embedding is initialized with large-scale pretrained word embeddings such as GloVe (Pennington et al., 2014) or FastText (Bojanowski et al., 2017), and is kept fixed during training.", "Previous works (Noraset et al., 2017; Ishiwatari et al., 2019) also show that morphological information can be helpful for definition generation.", "We employ a convolutional neural network (Krizhevsky et al., 2012) to encode the character sequence of the word.", "We concatenate the word embedding and the character encoding to get the word representation $r$.", "Context Encoder: We adopt a standard bidirectional LSTM network (Sundermeyer et al., 2012) to encode the context; it takes the word embedding sequence of the context $C = c_{1:|C|}$ and outputs a hidden state sequence $H = h_{1:|C|}$.", "Semantic Components Posterior Approximator: Exactly modeling the true posterior $q(z \mid w, C, \mathcal{D})$ is usually intractable.", "Therefore, we adopt an approximation method to simplify the posterior inference (Zhang et al., 2016). Following the spirit of VAE (Bowman et al., 2016), we use neural networks for better approximation in this paper.", "Specifically, we first compute the representation $H_\mathcal{D} = h'_{1:T}$ of the definition $\mathcal{D} = d_{1:T}$ with a bidirectional LSTM network.", "We then obtain the representations of the definition $\mathcal{D}$ and the context $C$ with a max-pooling operation: $h_\mathcal{D} = \text{max-pooling}(h'_{1:T})$ (5), $h_C = \text{max-pooling}(h_{1:|C|})$ (6).", "With these representations, as well as the word representation $r$, we compute the posterior approximation $q_i$ of $z_i$ as follows: $q_i = \mathrm{softmax}(W^q_i [r; h_C; h_\mathcal{D}] + b^q_i)$, where $W^q_i$ and $b^q_i$ are the parameters of the semantic components posterior approximator.", "Semantic Components Prior Model: Similar to the posterior, we model the prior $p_i$ of $z_i$ with a neural network over the representations $h_C$ (computed by Eqn. 6) and $r$: $p_i = \mathrm{softmax}(W^p_i [r; h_C] + b^p_i)$, where $W^p_i$ and $b^p_i$ are the parameters of the semantic components prior model.", "Given the word $w$, the context $C$, and the semantic component latent variables $z$, our decoder adopts an LSTM to model the probability of generating the definition $\mathcal{D}$.", "At each decoding time step, we first obtain the context vector $c_t$ as follows:", "$\alpha_{ti} = \frac{\exp(s_t^\top h_i)}{\sum_{j=1}^{|C|} \exp(s_t^\top h_j)}$, $c_t = \sum_{i=1}^{|C|} \alpha_{ti} h_i$.", "Moreover, it is intuitive that at different time steps the decoder is describing different semantic perspectives of the word, thus needing different semantic components (Yang et al., 2019).", "We embed each $z_i$ using a latent embedding matrix $E_i \in \mathbb{R}^{K \times D}$ and get $M$ semantic component vectors $\{E_1(z_1), E_2(z_2), \ldots, E_M(z_M)\}$.", "We then apply an attention mechanism over the semantic component vectors and obtain a semantic context vector $o_t$: $\beta_{ti} = \frac{\exp(s_t^\top E_i(z_i))}{\sum_{j=1}^{M} \exp(s_t^\top E_j(z_j))}$, $o_t = \sum_{i=1}^{M} \beta_{ti} E_i(z_i)$.", "Finally, we adopt a GRU-like (Cho et al., 2014) gate mechanism to allow the decoder to dynamically fuse information from the word representation $r$, the context vector $c_t$, and the semantic context vector $o_t$: $f_t = [r; c_t; o_t]$, $u_t = \sigma(W_u [f_t; s_t] + b_u)$, $v_t = \sigma(W_r [f_t; s_t] + b_r)$, $\tilde{s}_t = \tanh(W_s [(v_t \odot f_t); s_t] + b_s)$, $s'_t = (1 - u_t) \odot \tilde{s}_t + u_t \odot s_t$, where the $W$ and $b$ are weight matrices and bias terms, respectively.",
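A sketch of the GRU-like fusion gate above in PyTorch; the dimensions, and the decision to give $u_t$ the decoder-state size and $v_t$ the fused-feature size so the elementwise products type-check, are our own assumptions.

```python
import torch

H = 300                                   # decoder hidden size (assumed)
F_DIM = 3 * H                             # f_t = [r; c_t; o_t]

W_u = torch.nn.Linear(F_DIM + H, H)       # update gate u_t, same size as s_t
W_r = torch.nn.Linear(F_DIM + H, F_DIM)   # gate v_t, same size as f_t
W_s = torch.nn.Linear(F_DIM + H, H)       # candidate state s~_t

def fuse(r, c_t, o_t, s_t):
    """s'_t = (1 - u_t) * s~_t + u_t * s_t, with s~_t built from v_t * f_t."""
    f_t = torch.cat([r, c_t, o_t], dim=-1)
    fs = torch.cat([f_t, s_t], dim=-1)
    u_t = torch.sigmoid(W_u(fs))
    v_t = torch.sigmoid(W_r(fs))
    s_tilde = torch.tanh(W_s(torch.cat([v_t * f_t, s_t], dim=-1)))
    return (1 - u_t) * s_tilde + u_t * s_t

r, c_t, o_t, s_t = (torch.randn(1, H) for _ in range(4))
print(fuse(r, c_t, o_t, s_t).shape)       # torch.Size([1, 300])
```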
"The loss function in Eqn.", "4 serves as our primary training objective.", "Besides, since the latent variables are designed to model the semantic components, we propose two auxiliary losses to ensure that these latent variables learn informative codes and capture the decomposed semantics.", "Semantic Completeness Objective: In order to generate accurate definitions, the introduced latent variables must capture all perspectives of the word's semantics.", "For example, it is impossible to precisely define the word captain in the context The captain gave the order to abandon the ship without knowing that (1) a captain is a person, (2) a captain works on a ship, and (3) a captain is usually in charge of the ship.", "Therefore, an ideal $z$ should contain sufficient information for predicting the definition.", "We first propose to leverage the sememe annotations of HowNet (Dong and Dong, 2003) as an external signal to guide the learning of the latent variables.", "As mentioned in Section 2.3, sememes are also known to be helpful for definition generation (Yang et al., 2019).", "Previously, Xie et al. (2017) show that it is possible to predict the sememes of words from large-scale pretrained distributional representations.", "Suppose the set of sememes in HowNet is denoted by $\mathcal{S} = \{s_1, s_2, \ldots, s_n\}$, and each word $w$ in HowNet is annotated with a small subset of $\mathcal{S}$, denoted by $\mathcal{S}_w = \{s_i \mid s_i \in \mathcal{S}\}$.", "Inspired by Weng et al. (2017), we adopt a bag-of-words loss to ensure that $z$ is informative enough to be predictive of the sememe annotations $\mathcal{S}_w$: $\mathcal{L}^{(sem)}_{com} = -\sum_{s_i \in \mathcal{S}_w} \log p(s_i \mid z)$ (8). Our next motivation is that sememe annotation is still expensive, while definitions of words are off-the-shelf at training time.",
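A sketch of the sememe bag-of-words completeness loss in Eqn. (8) above, treating the sememe predictor as a single linear layer over the concatenated latent embeddings; the shapes, the predictor architecture, and the minus sign restored during reconstruction are our assumptions.

```python
import torch
import torch.nn.functional as F

M, D, N_SEMEMES = 8, 300, 2048            # latents, embedding dim, |S| (assumed)
predictor = torch.nn.Linear(M * D, N_SEMEMES)

def sememe_completeness_loss(latent_vectors, sememe_ids):
    """-sum_{s in S_w} log p(s | z): the latent codes must be able to
    recover the word's annotated sememes (Eqn. 8)."""
    z = latent_vectors.reshape(latent_vectors.size(0), -1)    # [B, M*D]
    log_probs = F.log_softmax(predictor(z), dim=-1)           # log p(s | z)
    return -log_probs.gather(1, sememe_ids).sum(-1).mean()

latents = torch.randn(2, M, D)                    # one vector per latent
sememes = torch.tensor([[3, 17, 42], [5, 9, 9]])  # padded sememe-id lists
print(sememe_completeness_loss(latents, sememes))
```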
"Inspired by Bao et al. (2019) and John et al. (2019), we also enforce the model to predict every word in the target definition $\mathcal{D} = d_{1:T}$ to ensure that $z$ is informative enough: $\mathcal{L}^{(def)}_{com} = -\sum_{i=1}^{T} \log p(d_i \mid z)$ (9). Semantic Diversity Objective: To achieve the goal of decomposing semantics, it is crucial that there are several different latent variables that separately model different semantic components.", "In order to prevent multiple latent variables from degenerating into one, we encourage the semantic vectors to be dissimilar from each other by introducing a disagreement loss: $\mathcal{L}_{div} = -\sum_{1 \le i < j \le M} \mathrm{dist}(E_i(z_i), E_j(z_j))$ (10), where $\mathrm{dist}(\cdot, \cdot)$ is a distance function between two distributions.", "Overall Objectives: Depending on the overall training loss used, there are two variants of ESD.", "The original loss of ESD is $\mathcal{L}_{base} = -\mathcal{J}_{\mathrm{ELBO}}$.", "The first variant of ESD (denoted ESD-def) adds the semantic completeness and semantic diversity objectives and is optimized with $\mathcal{L}_{ESD\text{-}def} = \mathcal{L}_{base} + \alpha \mathcal{L}^{(def)}_{com} + \beta \mathcal{L}_{div}$.", "Grounded in the annotated sememes, the second variant of ESD (denoted ESD-sem) is optimized with $\mathcal{L}_{ESD\text{-}sem} = \mathcal{L}_{base} + \alpha \mathcal{L}^{(sem)}_{com} + \beta \mathcal{L}_{div}$. 4 Experiments 4.1 Experimental Setting. Datasets: To demonstrate the effectiveness of our method, we conduct experiments on two datasets used in previous work (Ishiwatari et al., 2019): WordNet (https://wordnet.princeton.edu/) and Oxford (https://en.oxforddictionaries.com/).", "Each entry in the datasets is a triple of a word, a piece of its usage example, and its corresponding dictionary definition.", "Sememe Annotation Resources: Following previous work (Yang et al., 2019), we take HowNet as the sememe annotation resource, an ontology that contains sememe annotations for over 100,000 words.", "Each word in HowNet may have several senses, and each sense is annotated with several sememes explaining its meaning.", "Hyperparameters: We adopt a two-layer LSTM network as our context encoder and definition decoder.", "We set the hidden dimension to 300.", "Following Ishiwatari et al. (2019), we set the character-encoder CNN kernels to lengths 2, 3, 4, 5, 6 with sizes 10, 30, 40, 40, 40, respectively, and a stride of 1.", "The dimension of the final character-level encoding is 160.", "We set the number of latent variables $M$ and the number of categories $K$ to 8 and 256, respectively.", "Optimization: We adopt Adam (Kingma and Ba, 2014) to optimize our model.", "The learning rate is set to 0.001.", "The $\alpha$ and $\beta$ used in the overall objective are set to 1.0 and 0.1, respectively.", "All hyperparameters are chosen based on performance on the validation set and are used across all the experiments.", "Competitors: We compare our model with several baseline models:", "1. I-Attention (Gadetsky et al., 2018) uses the context to disambiguate the word embedding and cannot utilize context information at decoding time.", "2. LOG-CaD (Ishiwatari et al., 2019) is similar to our architecture, but without modeling the semantic components.", "3.
Pip-sem is an intuitive pipeline baseline of ours that consists of a sememe predictor and a definition generator.", "The sememe predictor is trained on HowNet and is responsible for annotating the words in the definition generation datasets.", "The definition generator then generates definitions given the word, the context, and the pseudo sememe annotations.", "Metrics: We adopt two automatic metrics that are often used in generation tasks: BLEU (Papineni et al., 2002) and Meteor (Denkowski and Lavie, 2014).", "BLEU considers exact matches between generation results and references and is the most common metric used to evaluate generation systems.", "Following previous work, we compute the sentence-level BLEU score.", "Table 2: BLEU and Meteor scores on the WordNet and Oxford datasets.
| Model | WordNet BLEU | WordNet METEOR | Oxford BLEU | Oxford METEOR |
|---|---|---|---|---|
| I-Attention (Gadetsky et al., 2018) | 23.77 | / | 17.25 | / |
| LOG-CaD (Ishiwatari et al., 2019) | 24.79 | / | 18.53 | / |
| *LOG-CaD | 24.70 | 8.66 | 18.24 | 8.43 |
| Pip-sem | 25.52 | 11.33 | 19.89 | 11.10 |
| ESD-def | 25.75 | 11.52 | 19.98 | 10.79 |
| ESD-sem | 26.48 | 12.45 | 20.86 | 11.86 |", "We also consider Meteor (Denkowski and Lavie, 2014), a metric that takes synonyms, stemming, and paraphrases into consideration while calculating the score.", "The Meteor score is said to favor word choice over word order, and recall over precision (Denkowski and Lavie, 2014).", "We use the recommended hyperparameters to compute Meteor scores.", "The results, as measured by the automatic evaluation metrics, i.e. BLEU and Meteor, are presented in Table 2.", "ESD significantly improves the quality of definition generation, by a large margin.", "On all the benchmark datasets, our ESD that incorporates sememes achieves the best generation performance in both BLEU and Meteor scores.", "It is worth noting that the improvement in the Meteor score is more significant than in the BLEU score, i.e. 3.79 vs. 1.78 on WordNet, and 3.43 vs. 2.62 on Oxford, indicating that our model is better at recalling semantically correct words, which is consistent with our motivation to address the under-specificity problem.", "Explicitly modeling decomposed semantics helps definition modeling.", "The models that generate definitions with explicitly decomposed semantics (Pip-sem, ESD-def and ESD-sem) achieve remarkable improvements over the competitors without decomposed component modeling (I-Attention and LOG-CaD).", "The comparison between ESD-def, I-Attention and LOG-CaD is fair because none of them has access to external sememe annotations during training and testing.", "Notably, ESD-sem also improves over Pip-sem by a large margin.", "This shows that the way our method leverages the sememe annotations, i.e. using them as external signals of word semantics, is more effective than simple annotate-then-generate pipeline methods.", "In order to further compare the proposed methods and the strongest previous method (i.e., the LOG-CaD model), we performed a human evaluation of the generated definitions.", "We randomly selected 100 samples from the test set of the Oxford dataset, and invited four people with at least CET-6 level English skills to rate the output definitions in terms of fluency and semantic completeness from 1 to 5 points.", "The averaged scores are presented in Table 3.", "As can be seen from the table, definitions generated by our methods are rated higher in terms of semantic completeness while achieving comparable fluency.", "Semantic completeness objective: We can see that the semantic completeness objective, i.e. $\mathcal{L}^{(\cdot)}_{com}$, leads to a substantial improvement in terms of Meteor score (Line 3 and Line 4 vs.
Line 1), which indicates that the gain obtained by our model does not come from trivially adopting the conditional VAE framework for the definition generation task.", "Semantic diversity objective: The experimental results show that although independently using the semantic diversity objective leads to no gains (Line 2 vs. Line 1), regularizing the model to learn diverse latent codes when using a semantic completeness objective can improve the generation performance.
Table 4: Ablation study on the development set of the Oxford dataset (a check mark indicates the loss is used).
| # | L_base | L_div | L^(def)_com | L^(sem)_com | Meteor |
|---|---|---|---|---|---|
| 1 | ✓ | | | | 8.99 |
| 2 | ✓ | ✓ | | | 9.15 |
| 3 | ✓ | | ✓ | | 11.09 |
| 4 | ✓ | | | ✓ | 11.88 |
| 5 | ✓ | ✓ | ✓ | | 11.56 |
| 6 | ✓ | ✓ | | ✓ | 12.43 |
| 7 | ✓ | ✓ | ✓ | ✓ | 12.87 |", "To gain more insight into the improvement provided by the proposed method, we perform several analyses in this section.", "To validate that explicit decomposition of word semantics is beneficial for definition generation, we compare the performance of several models with different numbers of latent variables, and plot the result in Figure", "2. Overall, setting multiple latent variables given the same number of categories achieves noticeable improvements over $M = 1$, i.e. an encoder-decoder model with a word prediction mechanism.", "However, it is not the case that we should adopt as many latent variables as possible.", "The reason is that a word generally has a limited number of semantic components (3-10 in HowNet), so adding ever more components brings no further benefit. [Figure 3: Comparison between LOG-CaD and ESD-def under different values of the Meteor parameter; x-axis: parameter value 0.1-0.9, y-axis: Meteor score; curves for LOG-CaD, ESD-def and ESD-sem.]", "It is interesting to see that when we set the number of components $M$ to 8, the optimal number of categories $K$ is 256.", "As the total number of semantic units we are modeling is $M \times K$, this approximately equals the number of sememes in HowNet.", "The goal of the definition generation task is to accelerate dictionary compilation or to help humans with unfamiliar text.", "In both application scenarios, it is more important to generate content words that describe the semantics of the given word than function words or phrases such as refer to and of or relating to.", "To understand which kind of word our model achieves the largest improvements on, we evaluate the Meteor scores of the baseline model and our model under different values of a Meteor hyperparameter that controls how much we prefer content words over function words.", "Figure 3 shows the results.", "We can see that as our preference for content words increases, the performance of both the baseline model and our model decreases, indicating that it is more difficult for current definition generation models to generate useful content words than function words.", "However, the gap between the baseline model and our model becomes larger as the parameter increases, which shows that the gain of our model comes mainly from content words rather than function words.", "Examples of learned latent codes: In Table 6, we show some examples of learned latent codes on the WordNet dataset.", "We can see that our model does learn informative codes, i.e.
words with similar meanings are assigned similar latent codes, and the codes of words with different meanings tend to differ.", "Examples of generated definitions: We also list several generation samples in Table 5.", "We can see that the definitions generated by our method are more semantically complete than those of previous works, and they indeed capture fine-grained semantic components that the baseline model ignores.", "For example, it is necessary to know that a militia has unprofessional military skills, which distinguishes the meaning of militia from that of army.", "The definition generated by the baseline model ignores this perspective.", "However, our model does describe the unprofessional nature of militia by generating not very skillful, thanks to its ability to model fine-grained semantic components.", "Definition Generation: Definition modeling was first proposed by Noraset et al. (2017).", "They take a word embedding as input and generate a definition of the word.", "An obvious drawback is that their model cannot handle polysemous words.", "Recently, several works (Ni and Wang, 2017; Gadetsky et al., 2018; Ishiwatari et al., 2019) consider the context-aware definition generation task, where the context is introduced to disambiguate the senses of words.", "They all adopt an encoder-decoder architecture and rely heavily on the decoder to extract the semantic components of the word, thus leading to under-specific definitions.", "In contrast, we introduce a group of discrete latent variables to model these semantic components explicitly.", "Semantics: It is recognized by linguists that human beings understand complex meaning by decomposing it into components that are latent in the meaning.", "Wierzbicka (1996) proposes that different languages share a set of atomic concepts that cannot be further decomposed, i.e. semantic primitives, and that all complex concepts can be semantically composed from semantic primitives.", "Dong and Dong (2003) introduce a similar idea.", "They call the atomic concepts sememes, and present a knowledge base, HowNet, in which the senses of words are annotated with sememes.", "HowNet has been shown to be helpful for many NLP tasks, such as word representation learning (Niu et al., 2017), relation extraction (Li et al., 2019), and aspect extraction (Luo et al., 2019).", "Previously, Yang et al.
(2019) propose to use sememe annotations as a direct input when generating definitions, which can suffer from the data sparsity problem.", "In this paper, we instead leverage HowNet as an external supervision signal for the latent variables during training, so that the knowledge is learned into the model itself.", "We proposed ESD, a context-aware definition generation model that explicitly models the decomposed semantics of words.", "Specifically, we model the decomposed semantics as discrete latent variables, and train with auxiliary losses to ensure that the model learns informative latent codes for definition modeling.", "As a result, ESD leads to significant improvements over strong previous baselines on two established definition datasets.", "Quantitative and qualitative analysis showed that our model can generate more meaningful, specific and accurate definitions.", "In future work, we plan to seek better ways to guide the learning of the latent variables, such as using the dynamic routing method (Sabour et al., 2017) to align the latent variables with sememes, and to learn more explainable latent codes.", "We would like to thank the anonymous reviewers for their insightful comments.", "Shujian Huang is the corresponding author.", "This work is supported by the National Science Foundation of China (No. U1836221, 61772261) and the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074)." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "method", "other", "other", "other" ]
[ "Abstract Meaning Representation (AMR) annotations are often assumed to closely mirror dependency syntax, but AMR explicitly does not require this, and the assumption has never been tested.", "To test it, we devise an expressive framework to align AMR graphs to dependency graphs, which we use to annotate 200 AMRs.", "Our annotation explains how 97% of AMR edges are evoked by words or syntax.", "Previously existing AMR alignment frameworks did not allow for mapping AMR onto syntax, and as a consequence they explained at most 23%.", "While we find that there are indeed many cases where AMR annotations closely mirror syntax, there are also pervasive differences.", "We use our annotations to test a baseline AMR-to-syntax aligner, find-ing that this task is more difficult than AMR-to-string alignment; and to pinpoint errors in an AMR parser.", "We make our data and code freely available for further research on AMR parsing and generation, and the relationship of AMR to syntax.", "Abstract Meaning Representation (AMR; Banarescu et al., 2013) is a popular framework for annotating whole sentence meaning.", "An AMR annotation is a directed, usually acyclic graph in which nodes represent entities and events, and edges represent relations between them, as on the right in figure", "1. 1 AMR annotations include no explicit mapping between elements of an AMR and the corresponding elements of the sentence that evoke them, and this presents a challenge to developers of machine learning systems that parse sentences to AMR or generate sentences from AMR, since they must 1 For clarity of presentation, we have constructed the sentences and AMRs shown in figuresexcept for figure 3, which is a simplified version of a sentence in the corpus.", "first infer this mapping in the training data (e.g. Flanigan et al., 2014; Wang et al., 2015; Artzi et al., 2015; Flanigan et al., 2016; Pourdamghani et al., 2016; Misra and Artzi, 2016; Damonte et al., 2017; Peng et al., 2017, inter alia).", "2 This AMR alignment problem was first formalized by Flanigan et al. (2014), who mapped AMR nodes or connected subgraphs to words or sequences of words under the assumption of a one-to-one mappingwe call this JAMR alignment .", "Pourdamghani et al. (2014) then re-formalized it so that any AMR node or edge can map to any word without a one-to-one assumptionwe call this ISI alignment .", "In ISI alignments, edges often align to syntactic function words: for example, :location aligns to in in figure", "1. 
"So edge alignments allow ISI to explain more of the AMR structure than JAMR, but in a limited way: only 23% of AMR edges are aligned in the ISI corpus.", "This may be because edges are often evoked by syntactic structure rather than words: for instance, the :ARG1 edge in figure 1 is evoked by the fact that cat is the subject of lies and not by any particular word.", "(Some recent neural AMR systems require minimal or no explicit alignments (Konstas et al., 2017; van Noord and Bos, 2017).", "But they implicitly learn them in the form of soft attention, and we believe that a clearer understanding of alignment will benefit modeling and error analysis even in these systems.)", "Although it seems sensible to assume that all of the nodes and edges of an AMR are evoked by the words and syntax of a sentence, the existing alignment schemes do not allow for expressing that relationship.", "We therefore propose a framework expressive enough to align AMR to syntax (§2) and use it to align a corpus of 200 AMRs to dependency parses.", "We analyse our corpus and show that the addition of syntactic alignments allows us to account for 97% of the AMR content.", "Syntactic-semantic mappings are often assumed by AMR parsing models (e.g. Wang et al., 2015; Artzi et al., 2015; Damonte et al., 2017), which is understandable since these mappings are well-studied in linguistic theory.", "But AMR explicitly avoids theoretical commitment to a syntax-semantics mapping: Banarescu et al. (2013) state that AMR is agnostic about how we might want to derive meanings from strings.", "If we are going to build such an assumption into our models, we should test it empirically, which we can do by analysing our corpus.", "We observe some pervasive structural differences between AMR and dependency syntax (§3), despite the fact that a majority of AMR edges map easily onto dependency edges.", "Since syntactic alignment can largely explain AMRs, we also develop a baseline rule-based aligner for it, and show that this new task is much more difficult than lexical alignment (§4).", "We also show how our data can be used to analyze errors made by an AMR parser (§5).", "We make our annotated data and aligner freely available for further research (https://github.com/ida-szubert/amr_ud). 2 Aligning AMR to dependency syntax: Our syntactic representation is dependency grammar, which represents the sentence as a rooted, directed graph where nodes are words and edges are grammatical relations between them (Kruijff, 2006).", "We use Universal Dependencies (UD), a cross-lingual dependency annotation scheme, as implemented in Stanford CoreNLP (Manning et al., 2014).", "Within the UD framework, we use enhanced dependencies (Schuster and Manning, 2016), in which dependents can have more than one head, resulting in dependency graphs (DGs).", "Our alignment guidelines generalize ideas present in the existing frameworks.", "We want to allow many-to-many alignments, which we motivate by the observation that some phenomena cause an AMR graph to have one structure expressing the same information as multiple DG structures, and vice versa.", "For instance, in figure 2 the AMR subgraph representing Cruella de Vil aligns to two subgraphs in the dependency graph because of pronominal coreference.", "In the other direction, in figure 3 the capabilities node aligns to both capable nodes in the AMR, which is a result of the AMR treating conjoined adjectival modifiers as a case of ellipsis.", "The alignments we propose hold between subgraphs of any size.", "By
aligning subgraphs we gain the expressiveness needed to point out correspondences between semantic and syntactic structure.", "If AMR and DG were very similar in how they represent information, such correspondences would probably hold between subgraphs consisting of a single edge, as in figure 1, where cat -nmod:poss-> my aligns with cat -poss-> i.", "However, AMR by design abstracts away from syntax, and it should not be assumed that all mappings will be so clean.", "For example, the same figure has lies -nmod:in-> sun -case-> in aligned with lies -location-> sun.", "Moreover, AMR represents the meaning of particular words or phrases with elaborate structures, with the result that the same information may be expressed by a single word and a complex AMR subgraph, as in figure 3, where the AMR represents general as person -ARG0-of-> have-org-role -ARG2-> general.", "An alignment is a link between subgraphs in an AMR and a DG which represent equivalent information.", "Given a sentence's DG and AMR, we define an alignment as a mapping between an AMR subgraph and a DG subgraph.", "Lexical alignments (§2.2) hold between pairs of nodes, and nodes from either graph may participate in multiple lexical alignments.", "Structural alignments (§2.3) hold between pairs of connected subgraphs where at least one of the subgraphs contains an edge.", "(We chose UD because it emphasises shallow and semantically motivated annotation, by virtue of which it can be expected to align relatively straightforwardly to a semantic annotation such as AMR.", "Aligning AMR with different versions of dependency grammar (e.g. Prague) or different syntactic frameworks (e.g. CCG, TAG) would be an interesting extension of our work.)", "In the following two sections we discuss the types of alignments that our framework allows.", "More detailed guidelines regarding how to align particular linguistic constructions can be found in appendix A.",
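To make the framework concrete, both graphs and a many-to-many alignment between their subgraphs can be prototyped directly. The sketch below uses networkx and the figure-1-style example; the AMR sense number and all structure are our own illustrative guesses, not the released data format.

```python
import networkx as nx

# Dependency graph (enhanced UD): words as nodes, relation labels on edges.
dg = nx.DiGraph()
dg.add_edge("lies", "cat", rel="nsubj")
dg.add_edge("cat", "my", rel="nmod:poss")
dg.add_edge("lies", "sun", rel="nmod:in")
dg.add_edge("sun", "in", rel="case")

# AMR graph: concepts as nodes, relation labels on edges (sense illustrative).
amr = nx.DiGraph()
amr.add_edge("lie-01", "cat", rel="ARG1")
amr.add_edge("cat", "i", rel="poss")
amr.add_edge("lie-01", "sun", rel="location")

# The two-DG-edges-to-one-AMR-edge case from the text:
# lies -nmod:in-> sun -case-> in  ~  lie-01 -location-> sun.
alignment = {"dg_edges": [("lies", "sun"), ("sun", "in")],
             "amr_edges": [("lie-01", "sun")]}

def connected(graph, edges):
    """Each side of an alignment must be a connected subgraph."""
    return nx.is_weakly_connected(graph.edge_subgraph(edges))

print(connected(dg, alignment["dg_edges"]),
      connected(amr, alignment["amr_edges"]))  # True True
```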
"2.2 Lexical alignments: A lexical alignment should hold between a word and an AMR concept if the latter is judged to express the lexical meaning of the former.", "Node labels usually reflect their lexically aligned word or its lemma, including derivational morphology (e.g. thirsty ~ thirst-01).", "Thus, string similarity is a useful heuristic for lexical alignment.", "Most AMR nodes align lexically to a single word.", "Cases of one-to-many alignments include coreference, when an entity is mentioned multiple times in the sentence, and multiword expressions such as verb-particle constructions (pay off ~ pay-off-02) and fixed grammatical expressions (instead of ~ instead-of-91).", "Occasionally an AMR node does not lexically align to any DG node.", "This is true for constants indicating sentence mood such as imperative, implicit uses of and to group list items, inferred concept nodes such as entity types, name in named entities, and -91 frames like have-org-role-91.", "(Exceptions include: pronouns with noun antecedents in the sentence; the - indicating negative polarity, which lexically aligns to no, not, and negative prefixes; modal auxiliaries, e.g., can ~ possible; normalized dates and values such as February 2 in a date-entity; and amr-unknown, which aligns to wh-words.)", "Most words are lexically aligned to a single AMR node, if they are aligned at all.", "A word may align to multiple AMR nodes if it is duplicated in the AMR due to ellipsis or distributive coordination (capabilities aligns to c2 / capable and c3 / capable in figure 3), or if it is morphologically decomposed in the AMR (evildoer aligns to evil and do-02 in figure 2).", "Many words are not lexically aligned to any AMR node, including punctuation tokens, articles, copulas, nonmodal auxiliaries, expletive subjects, infinitival to, complementizer that, and relative pronouns.", "Structural alignments primarily reflect compositional grammatical constructions, be they syntactic or morphological.", "Note that the structural alignments build upon the lexical ones.", "Structural alignments hold between two subgraphs, at least one of which is larger than a single node.", "If a subgraph includes any edges, it automatically includes the nodes adjacent to those edges.", "Structural alignments need not be disjoint: an edge can appear in two or more distinct alignments.", "Nodes and edges in both AMR and DG may be unaligned.", "We establish the following principles to guide the specification of alignments: Connectedness Principle.", "In an alignment d ~ a, d must be a connected subgraph of the DG, and a must be a connected subgraph of the AMR.", "Minimality Principle.", "If two alignments, d ~ a and d' ~ a', have no dependency or AMR edges in common, then their union d ∪ d' ~ a ∪ a' is redundant, even if it is valid.", "Individual alignments should be as small as possible; we believe compositionality is best captured by keeping structures minimal.", "Therefore, in figure 1 there is no alignment between subgraphs spanning My, cat, lies and i, cat, lie.", "Such subgraphs do express equivalent information, but the alignment between them decomposes neatly into smaller alignments, and we record only those.", "Subsumption Principle.", "This principle expresses the fact that our alignments are hierarchical.", "Structural alignments need to be consistent with lexical alignments: for subgraph a to be aligned to subgraph d, all nodes lexically aligned to nodes in a must be included in d, and vice versa.", "Moreover, structural alignments need to be consistent with other structural alignments.", "A structural alignment d ~ a is valid only if, for every connected AMR subgraph a' < a which is aligned to a DG subgraph, d' ~ a', we also have that d' is a subgraph of d, and vice versa for every d' < d.",
"Further, if a contains a node n which is not lexically aligned but which is part of a structurally aligned subgraph a'' such that d'' ~ a'', it needs to be the case that a'' ⊆ a and d'' ⊆ d, or a ⊆ a'' and d ⊆ d''.", "(And vice versa for nodes in d.)", "For example, conceal -nsubj:xsubj-> Cruella ~ conceal -ARG0-> person -name-> name -op1-> \"Cruella\" is not a valid alignment, because the AMR side contains the nodes person and name, which are not lexically aligned but which are both parts of a structural alignment marked in blue.", "Coordination Principle.", "If an alignment contains a dependency edge between two conjuncts, or between a conjunct and a coordinating conjunction, then it must also include all conjuncts and the conjunction.", "This preserves the integrity of coordinate structures in alignments.", "For example, in figure 2 there is no alignment glee -cc-> and ~ and -op1-> glee; only the larger structure, which includes the greed nodes, is aligned.", "Named Entity Principle.", "Any structural alignment containing an AMR name node or any of the strings under it must contain the full subgraph rooted in the name, plus the node above it specifying the entity type.", "This means that, for example, in figure 2 there is no alignment conceal -nsubj:xsubj-> Cruella ~ conceal -ARG0-> person -name-> name -op1-> \"Cruella\".", "Such an alignment would also be stopped by the Subsumption Principle, provided that the blue alignment of the whole name was present.", "The Named Entity Principle is superfluous, but is provided to explicitly describe the treatment of such constructions.", "The smallest structure which can participate in a structural alignment is a single node, provided that it is aligned to a subgraph containing at least one edge.", "A DG node may align to an AMR subgraph if the word is morphologically decomposed or otherwise analyzed in the AMR (e.g. in figure 2, evildoer ~ person -ARG0-of-> do-02 -ARG1-> thing -mod-> evil).", "Examples of DG structures whose meaning is expressed in a single AMR node include light verb constructions, phrasal verbs, and various other multiword expressions (e.g. in figure 2, makes -dobj-> attempt ~ attempt-01).", "Conceptually the simplest case of structural alignment is one edge to one edge, as in the blue and green alignments in figure", "1. For such an alignment to be possible, two requirements must be satisfied: the nodes which are endpoints of those edges need to be aligned one-to-one; and the AMR relation and the syntactic dependency must map cleanly in terms of the relationship they express.", "A one-edge-to-multiple-edges alignment arises when either of those requirements is not met.", "To see what happens in the absence of one-to-one endpoint alignments, let us look at the relation between confident and general in figure", "3. The DG general node is aligned to an AMR subgraph: general ~ person -ARG0-of-> have-org-role -ARG2-> general.", "All alignments which involve the general node on the DG side need to include its aligned subgraph on the AMR side.", "It necessarily follows that the AMR subgraphs in those alignments will contain more edges than the DG ones; in this case the yellow subgraph in the DG has 1 edge, and in the AMR 3 edges.", "As for the second requirement, it is possible for one graph to use multiple edges to express a relationship when the other graph needs only one.", "This is the case for lie -nmod:in-> sun -case-> in ~ lie -location-> sun in figure", "1. An example which combines both the node- and edge-related issues is marked in red in figure 2.",
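Continuing the networkx sketch above, the Subsumption Principle lends itself to a mechanical check: whenever one alignment's side nests inside another's, the opposite sides must nest the same way. The following simplified test (node-set containment rather than full subgraph containment) is our own loose illustration, not the annotation tooling:

```python
def nodes_of(alignment, side):
    return {n for edge in alignment[side + "_edges"] for n in edge}

def subsumption_ok(a1, a2):
    """If a1's AMR side nests inside a2's, a1's DG side must nest inside
    a2's DG side as well (and vice versa): a simplified containment test."""
    amr_nested = nodes_of(a1, "amr") <= nodes_of(a2, "amr")
    dg_nested = nodes_of(a1, "dg") <= nodes_of(a2, "dg")
    return amr_nested == dg_nested     # both nest together, or neither does

small = {"dg_edges": [("lies", "cat")], "amr_edges": [("lie-01", "cat")]}
big = {"dg_edges": [("lies", "cat"), ("cat", "my")],
       "amr_edges": [("lie-01", "cat"), ("cat", "i")]}
print(subsumption_ok(small, big))      # True: both sides of small nest in big
```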
"Finally, we also allow many-edges-to-many-edges alignments.", "This may seem counterintuitive considering the assumption that we want to capture mappings between relations expressed in the DG and the AMR, and that we want to align minimal subgraphs.", "There are cases where an alignment is actually capturing a single relation, but we need to treat a subgraph as an endpoint of the edge in both the DG and the AMR.", "For instance, consider in figure 2 the relationship that holds between Cruella de Vil and concealing, expressed syntactically as an nsubj:xsubj edge and semantically as an ARG0 edge.", "One of the entities involved in that relationship, Cruella, is represented by a 2-edge DG subgraph and a 4-edge AMR subgraph.", "Consequently, the alignment covering the DG and AMR edges that relate Cruella to concealing must link subgraphs consisting of 3 and 5 edges, respectively.", "A more difficult case of many-edges-to-many-edges alignment arises when relationships between nodes are expressed so differently in the DG and the AMR that, given an edge in one graph, it is not possible to find in the other graph a subgraph that would convey the same information without also including some other information.", "Coordination has this property: e.g. in figure 2 the conj:and dependency between glee and greed has no counterpart in the AMR.", "There is no edge between the AMR nodes aligned to those words, and the smallest AMR subgraph which contains them also contains and, which is itself lexically aligned.", "We cannot align glee -conj:and-> greed ~ {and -op1-> glee, and -op2-> greed} because of the rule that all lexically aligned nodes in one subgraph must be aligned to nodes in the other subgraph.", "Therefore we need to extend the DG side to {glee -cc-> and, glee -conj:and-> greed}.", "We annotated a corpus of 200 AMR-sentence pairs (3,813 aligned structures) using the guidelines of §2 and appendix A.",
6", "Data selection.", "To create the corpus we drew a total of 200 AMR-sentence pairs: 135 from the training split of the AMR Annotation Release 1.0 (Knight et al., 2014), 55 from the training split of The Little Prince Corpus v1.6, 7 and 10 sentences from the Adam part of the CHILDES Brown corpus (Brown, 1973), for which AMRs were produced by an experienced annotator.", "Seventy items were selected to illustrate particular linguistic phenomena.", "8 The remaining 130 were selected at random.", "6 We followed the precedent of previous AMR-to-sentence alignment corpora (see 4.2) in including 200 sentences in our gold standard, though ours was a different sample.", "8 Namely: relative clauses, reflexive and non-reflexive pronominal anaphora, subject and object control, raising, exceptional case marking, coordination, wh-questions, do-support questions, ellipsis, expletives, modal verbs, light verbs, comparison constructions, and quantification.", "Preprocessing.", "Dependency parses were obtained using Stanford CoreNLP neural network parser 9 (Chen and Manning, 2014) and manually corrected.", "The final parses conform to the enhanced UD guidelines, 10 except they lack enhancements for ellipsis.", "Inter-annotator agreement.", "The corpus was created by one annotator.", "To assess inter-annotator agreement, a second annotator deeply familiar with UD and AMR annotated a random sample of sentences accounting for 10% of alignments in the corpus.", "The overall inter-annotator F 1 -score was 88%, with 96% agreement on lexical alignments and 80% on structural alignments.", "We take this as an indication that our richly structured alignment framework as laid out in 2 is reasonably well-defined for annotators.", "To assess our attempt to explain as much of the AMR as possible, we computed the proportion of AMR nodes and edges that participate in at least one alignment.", "Overall, 99.3% of nodes and 97.2% of edges in AMRs are aligned.", "We found that 81.5% of AMR graphs have full coverage, 18.5% have at least one unaligned edge, and 7.5% have one unaligned node (none had more than one; all unaligned nodes express mood or discourse-related information: interrogative , and , and say ).", "We conclude that nearly all information in an AMR is evoked by lexical items or syntactic structure.", "We expected coverage of DG to be lower because punctuation and many function words are unaligned in our guidelines (2.2).", "Indeed, only 71.4% of words and 65.2% of dependency edges are aligned.", "The similarity of AMR to syntax in examples like figure 1 invites the assumption of a close mapping, which often seems to be made in AMR parsers (Wang et al., 2015; Artzi et al., 2015; Misra and Artzi, 2016; Damonte et al., 2017) and aligners (Chu and Kurohashi, 2016; Chen and Palmer,", "9 The corpus is annotated with UD v1; a release of the dataset converted to UD v2 is planned for the future. We used the pretrained dependency parsing model provided in CoreNLP with depparse.extradependencies set to MAXIMAL and used collapsed CCprocessed dependencies. 
10", "http://universaldependencies.org/u/overview/ enhanced-syntax.html", "configurations is max config.", "2017).", "11 Such an attitude reflects decades of work in the syntax-semantics interface (Partee, 2014) and the utility of dependency syntax for other forms of semantics (e.g., Oepen et al., 2014; Reddy et al., 2016; Stanovsky et al., 2016; White et al., 2016; Zhang et al., 2017; Hershcovich et al., 2017).", "However, this assumption has not been empirically tested, and as Bender et al. (2015) observe, it is an assumption not guaranteed by the AMR annotation style.", "Having aligned a corpus of AMR-DG pairs, we are in a position to provide empirical evidence.", "Are AMRs and dependency graphs structurally similar?", "We approach the question by analyzing the sizes of subgraphs used to align the two representations of the sentence.", "We define the size of a subgraph as the number of edges it contains.", "If a structure consists of a single node, we say its size is", "0. The configuration of an alignment is then the pair of sizes for its AMR and DG sides; for example, an alignment with 1 AMR edge and 2 DG edges has configuration 1:2.", "We call an alignment configuration simple if at least one of the subgraphs is a single edge, indicating that there is a single relation which the alignment captures.", "Complex configurations cover multiple relations.", "By principle of minimality we infer that some structural difference between the graphs prevented those relations from aligning individually.", "One measure of similarity between AMR and DG graphs is the configuration of the most complex subgraph alignment between them.", "Configuration a:b is higher than c:d if a + b > c + d .", "However, all configurations involving 0 are lower than those which do not.", "A maximum of 1:1 means the graphs have only node-to-node, node-to-edge, and edge-to-edge alignments, rendering the graphs isomorphic (ignoring edge directions and unaligned nodes).", "In 11 In particular, Chen and Palmer (2017) align dependency paths to AMR edges.", "However, their evaluation only considers node-to-node alignment, and their code and data are not available for comparison at the time of this writing.", "general, if the maximum alignment configuration is a simple one, the graphs could be made isomorphic by collapsing the larger side of the alignment (e.g., in figure 2, the AMR side of the alignment evildoer person ARG0-of do ARG1 thing mod evil could be collapsed into a node).", "In contrast, complex configurations imply serious structural dissimilarity, as in figure 3, where the cyan alignment has configuration 4:4.", "sentences are simple .", "Table 2 provides a detailed breakdown of alignment configurations in the corpus.", "Phenomena which often trigger complex configurations include coordination, named entities, semantically decomposed words, attachment of negation, and preposition-based concepts encoding location, time, and quantity.", "12 We observe, comparing tables 1 and 2, that while simple configurations are most frequent in the corpus, the majority of sentences have at least one alignment which is complex.", "It should not be assumed that AMR and DG representations of a sentence are, or could trivially be made to be, isomorphic.", "It is worth noting that our analysis suggests that DG and AMR could be made more similar by applying simple transformations targeting problematic constructions like coordination and named entities.", "We use our annotations to measure the accuracy of AMR aligners on specific 
"Our experiments evaluate the JAMR heuristic aligner (Flanigan et al., 2014), the ISI statistical aligner (Pourdamghani et al., 2014), and a heuristic rule-based aligner that we developed specifically for this task.", "Lexical alignment algorithm.", "AMR concepts are cognate with English words, so we align them by lexical similarity.", "This algorithm does not make use of the DG.", "Before alignment, we remove sense identifiers on AMR node labels, and lemmatize DG node labels.", "Then for every pair of nodes a from the AMR and d from the DG, we align them if any of the following conditions holds:", "1. The Levenshtein distance of a and d is 15% or less of the length of the longer word.", "(The threshold was determined empirically on a 10% sample from the dataset.)", "2. The label of a is the morphological negation of d (e.g. prudent ~ imprudent).", "3. The label of a is - (AMR's annotation of negation) and the parent of a aligns to d via rule 2.", "4. The label of a is - and d is one of no, none, not, or never.", "5. The label of a consists of multiple words, and the label of d matches any one of them under rule 1 (e.g. sit ~ sit-down, war-torn ~ war).", "6. Labels of a and d likely have the same morphological root.", "We determine this by segmenting each word with Morfessor (Grönroos et al., 2014) trained on Wiki data and applying rule 1 to the first morpheme of each word.", "Note that if a word type is repeated in a sentence, each repetition is aligned to the same AMR nodes under the above rules.", "Structural alignment algorithm.", "We align subgraphs using the procedure below, first from AMR to DG, then from DG to AMR.", "For clarity, the explanation refers to the first case.", "Local phase.", "For every AMR edge e_a whose endpoints are lexically aligned nodes a_1 (aligned to d_1) and a_2 (aligned to d_2), we attempt to align minimal and connected AMR and dependency subgraphs, a and d: 1. If there is a DG edge e_d whose endpoints are d_1 and d_2, then a <- e_a and d <- e_d.", "2. Otherwise, let d' be the shortest undirected path between d_1 and d_2.", "If all lexically aligned nodes in d' are aligned to a_1 or a_2, then a <- e_a and d <- d'.", "3. Otherwise, let a' be the smallest subgraph covering all AMR nodes that are lexically aligned to nodes in d'.", "If all the nodes in a' are aligned only to nodes in d', then a <- a' and d <- d'.", "4. Otherwise, the attempt is abandoned.", "5. Finally, if the top node of a has a parent node labeled with an entity-type concept, extend a to include the parent.", "(This step is performed only in the AMR-to-DG direction.)",
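The two most mechanical pieces above are rule 1's normalized Levenshtein test and the local phase's shortest-undirected-path fallback (step 2). A baseline illustration in plain Python plus networkx, not the released aligner:

```python
import networkx as nx

def lev(a, b):
    """Standard dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def rule1_match(amr_label, lemma, threshold=0.15):
    """Rule 1: edit distance <= 15% of the longer word's length."""
    return lev(amr_label, lemma) <= threshold * max(len(amr_label), len(lemma))

def local_phase_step2(dg, d1, d2, aligned_elsewhere):
    """Step 2: take the shortest undirected path between the two endpoints,
    and accept it only if its inner nodes are not lexically aligned to AMR
    nodes other than the edge's endpoints."""
    path = nx.shortest_path(dg.to_undirected(as_view=True), d1, d2)
    return path if not (set(path[1:-1]) & aligned_elsewhere) else None

print(rule1_match("thirst", "thirsty"))            # True: distance 1 <= 1.05
dg = nx.DiGraph([("lies", "sun"), ("sun", "in")])  # toy DG; endpoints arbitrary
print(local_phase_step2(dg, "lies", "in", set()))  # ['lies', 'sun', 'in']
```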
Finally, if the top node of a has a parent node labeled with an entity type concept, extend a to include the parent.", "(This step is performed only in the AMR-to-DG step.)", "Global phase.", "The local phase might produce alignments that violate the Subsumption Principle (2.3.1), so we filter them out heuristically.", "For every pair of structural alignments, d ↔ a and d′ ↔ a′, where a overlaps with a′, or d with d′, if the region of overlap is not itself an aligned subgraph, we prune both alignments.", "This could be order-dependent since the removal of one alignment could trigger the removal of others, but our aligner does not account for this.", "4.2 Experiments We evaluate JAMR, ISI, and our aligner on two distinct tasks.", "Lexical alignment.", "Lexical alignment involves aligning AMR nodes to words, a task all three systems can perform.", "We evaluate against three datasets: our own, the JAMR dataset (Flanigan et al., 2014), and the ISI dataset (Pourdamghani et al., 2014).", "Results (table 3) suggest that this task is already well-addressed, but also that there exist marked differences between how lexical alignment is defined in each dataset and that aligners are fine-tuned to their dataset.", "For our aligner, errors are due to faulty morphological analysis, duplicated words, and both accidental string similarity between AMR concepts and words and occasional lack of similarity between concepts and words that should be aligned.", "Structural alignment.", "An important goal of our experiments is to establish baselines for the structural alignment task.", "While we cannot evaluate the JAMR and ISI aligners directly on this task, we can use the lexical alignments they output in place of the first pass of our aligner.", "The only dataset for this task is our own.", "The results (table 4) evaluate accuracy of structural alignments only and do not count lexical alignments.", "The automatic alignments have lower coverage of AMRs than the gold alignments do: our best aligner leaves 13.3% of AMR nodes and 30.0% of AMR edges unaligned, compared to 0.07% and 2.8% in the gold standard.", "The aligner also leaves 39.2% of DG nodes and 47.7% of DG edges unaligned, compared to 28.6% and 34.8% in the gold standard.", "The relatively low F-score for the gold standard lexical alignments and DGs condition suggests that substantial improvements to our structural alignment algorithm are possible.", "The two most common reasons for low recall were missing one of the conjuncts in a coordinate structure and aligning structures that violate the principle of minimality.", "Our corpus gives alignments between AMRs and gold standard dependency parses.", "To see how much performance degrades when such parses are not available we also evaluate on automatic parses.", "We use the CoreNLP dependency parser with settings as described in Section 3.", "Both precision and recall are substantially worse when the aligner relies on automatic syntax.", "Our corpus of manually aligned AMRs can be used to identify linguistic constructions which cause problems for an AMR parser.", "We parsed the sentences from our corpus with the parser of Damonte et al.
(2017).", "19 We map the nodes of the resulting automatic AMRs to the gold AMRs using the smatch evaluation tool (Cai and Knight, 2013), and on the basis of this mapping identify those nodes and edges of the gold AMRs which are missing or mislabeled in the automatic AMRs.", "We then measured the number and rate of erroneous AMR fragments associated with each UD relation or construction (table 5).", "The largest proportion of recall errors were for fragments associated with the subject relation, prepositional phrases, and nominal compounds.", "Focusing on the subject relation, we can further say that 69% of the missing or mislabeled edges have the gold label ARG0 , 19% ARG1 , and the rest are distributed amongst domain , ARG2 , purpose and mod .", "Inspecting the errors we see that phenomena underlying them include pronominal coreference, sharing arguments between conjoined predicates, auxiliary verb constructions, and control and raising.", "20 Our corpus facilitates fine-grained error analysis of AMR parsers with respect to individual syntactic constructions.", "We release the code for the above analysis in order to encourage syntactically-informed comparison and improvement of systems.", "We have presented a new framework and corpus for aligning AMRs to dependency syntax.", "Our data and analysis show that the vast majority of the semantics in AMR graphs can be mapped to the lexical and syntactic structure of a sentence, though current alignment systems do not fully capture this correspondence.", "The syntaxsemantics 19 The overall smatch score of the parser on this dataset was 0.65.", "correspondences are often structurally divergent (non-isomorphic).", "Simple algorithms for lexical and structural alignment establish baselines for the new alignment task; we expect statistical models will be brought to bear on this task in future work.", "Our framework also facilitates syntactically-based analysis of AMR parsers.", "We release our data and code for the benefit of the research community.", "This work was supported in part by EU ERC Advanced Fellowship 249520 GRAMPLUS and EU ERC H2020 Advanced Fellowship GA 742137 SE-MANTAX.", "We thank Sameer Bansal, Marco Damonte, Lucia Donatelli, Federico Fancellu, Sharon Goldwater, Andreas Grivas, Yova Kementchedjhieva, Junyi Li, Joana Ribeiro, and the anonymous reviewers for helpful discussion of this work and comments on previous drafts of the paper." ]
[ "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method", "result", "objective", "result", "abstain", "abstain", "method", "other", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other" ]
[ "The field of natural language processing is experiencing a period of unprecedented growth, and with it a surge of published papers.", "This represents an opportunity for us to take stock of how we cite the work of other researchers, and whether this growth comes at the expense of forgetting about older literature.", "In this paper, we address this question through bibliographic analysis.", "We analyze the age of outgoing citations in papers published at selected ACL venues between 2010 and 2019, finding that there is indeed a tendency for recent papers to cite more recent work, but the rate at which papers older than 15 years are cited has remained relatively stable.", "Scientific progress benefits from researchers stand-ing on the shoulders of giants and one way for researchers to recognise those shoulders is by citing articles that influence and inform their work.", "The nature of citations in NLP publications has previously been analysed with regards to topic areas (Anderson et al., 2012; Gollapalli and Li, 2015; Mariani et al., 2019b), semantic relations (Gabor et al., 2016), gender issues (Vogel and Jurafsky, 2012; Schluter, 2018), the role of sharing software (Wieling et al., 2018), and citation and collaboration networks (Radev et al., 2016; Mariani et al., 2019a).", "Mohammad (2019) provides the most recent analysis of the ACL Anthology, looking at demographics, topic areas, and research impact via citation analysis.", "In this paper, we conduct a corpus analysis of papers published in recent ACL venues to determine whether the community is collectively forgetting about older papers as it experiences a period 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0 250 500 750 1000 1250 1500 1750 N u m b e r o f p u b li c a t i o n s Venue ACL EACL EMNLP NAACL CL TACL Figure 1: The distribution of the number of articles published between 20102019 in the ACL Anthology.", "of rapid growth (see Figure 1).", "The Association of Computational Linguistics (ACL) is one of the largest publishers of articles in natural language processing research: it maintains the open-access ACL Anthology 1 of articles that date back to the 1960s, offering a rich resource for studying NLP publications.", "While the aforementioned analyses have mainly focused on incoming citations, our work targets outgoing citations.", "We focus on the age of citations in the References section of articles published at ACL venues between 2010 and 2019 (Sec. 2), with a view to studying three questions:", "1. Do recently published papers have a tendency to cite more recently published papers, and less older literature?", "2. Are older papers being cited less frequently in 2019 than they were in 2010?", "3. Is there a difference between publication venues with regard to the age of citations?", "We find that the mean age of the papers cited does indeed decrease from 20102019, and that this de-1 https://www.aclweb.org/anthology/ crease is statistically significant, with a larger effect size in recent years (Sec. 3.1).", "We also find that there is no significant difference in the rate at which older papers are cited during this period (Sec. 3.2), and that there are marked differences between the citations in journal articles and conference proceedings (Sec. 
3.3).", "Our findings show that, at a time of rapid growth, an increasing proportion of citations are going to recently published papers, but that researchers still acknowledge that they are standing on the shoulders of their peers.", "The analysis in this paper is based on a subset of articles from the ACL Anthology.", "While corpora of NLP publications, including the ACL Anthology, already exist (Bird et al., 2008; Radev et al., 2009; Mariani et al., 2019a), none of them include publications newer than 2015.", "We compiled our own dataset because we are mostly interested in the papers published in recent years.", "The dataset is drawn from ACL venues: conference proceedings from meetings of the ACL, EACL (European Chapter of the ACL), NAACL (North American Chapter of the ACL), and EMNLP (Em-pirical Methods in NLP) as well as articles from the CL (Computational Linguistics) and TACL (Trans-actions of the ACL) journals.", "Anthology statistics Figure 1 shows the distribution of the articles in the corpus: the number of articles published in these venues steadily increases from 20102019.", "The CL and TACL journals publish articles at a steady rate; the ACL conference fluctuates in size, depending on whether it is co-located with NAACL; and the EACL conference nearly doubles in size each time it takes place.", "In terms of whether the field is rapidly growing, we note that there was a year-on-year increase of 42% between in 20172018 due to the increase in the number of papers published at NAACL and EMNLP, and a 34% increase between 20182019.", "Extracting citations To extract a list of references from an article, we first extract the text stream from the PDF file via pdftotext , 2 then feed it into ParsCit (Councill et al., 2008) to obtain the references.", "3 For each reference in this list, we 2 https://gitlab.freedesktop.org/ poppler/poppler 3 We note that the ParsCit maintainers recommend a newer iteration of the tool, Neural-ParsCit (Prasad et al., 2018), but we could not easily replicate the same pipeline with it.", "then extract and keep the parsed date, author, and title entries.", "For 1.4% of the input files, this pipeline fails to extract any references; spot-checking reveals that many of those are not regular papers (but, e.g., book reviews or front matter), some PDFs have no embedded text, and others silently fail to parse.", "Citation age For each publication in our dataset, we want to consider how recently each paper in its reference list was published.", "We calculate the age of a cited paper by subtracting its year of publication from that of the citing paper.", "We only keep citations in the age range [0 , 50] as values outside of this range typically appeared to be parsing errors.", "4 As only 0.95% of parsed reference dates fall outside of this range, the effect of excluding potentially valid citations is minimal.", "Identifying cited papers We use authors and titles of cited papers in order to identify which individual papers are being cited.", "We find that these entries are rather noisy in our ParsCit output; therefore, we use a heuristic based on fuzzy string matching to identify citations that are likely to refer to the same paper, despite differences in their author and/or title fields.", "5 Dataset 6 The resulting dataset covers 8,722 papers published within 20102019 with a total of 264,957 extracted citations; 7 for conference proceedings, we only include volumes that are marked as containing either full papers or short papers .", "8 3 Analysis 3.1 Are more 
recently published papers citing more recently published papers?", "Figure 2 shows the distribution of the age of cited articles with respect to the year in which the source article was published; Table 1 gives some complementary statistics.", "The mean age of a cited paper has steadily decreased since 2013, from 7.69 years to 5.53 years in 2019; the median has dropped from 6 to 3 years in the same period.", "Significance and effect size To determine if the distribution of citation ages significantly differs between years, we perform Mann-Whitney U tests with p < 0.005 and Bonferroni correction on each pair of years.", "We calculate rank-biserial correlation scores to determine the effect size of these differences and convert them into common language effect size (CLES; McGraw and Wong, 1992) for easier interpretability; if r is the rank-biserial correlation coefficient, CLES is defined as (r + 1)/2.", "Results are shown in Figure 3: numbers correspond to (rounded) CLES values and can be interpreted as the probability that a randomly drawn citation from the column year will be older than a randomly drawn citation from the row year.", "For example, if we were to randomly draw a citation from a paper published in 2012 and one from a paper published in 2019, the former citation has a 59% probability of being strictly older than the latter (row 2019, column 2012).", "Greyed-out cells were not statistically significantly different according to the Mann-Whitney U test.", "The CLES scores show that a randomly drawn citation from more recent years (e.g. 2017–2019) has a significantly lower probability of being older than a randomly drawn citation from earlier years (e.g. 2010–2014).", "This can be seen by inspecting the columns and rows in the bottom right of Figure 3.", "
3.2 Are older papers cited less frequently in more recently published papers?", "While the previous section showed a downwards trend for average citation age in more recent publications, this does not imply that older papers are cited less frequently in absolute terms.", "Indeed, as there are more publications available to cite from recent years, it seems natural that they would constitute a larger relative share of cited papers, but this does not necessarily need to come at the cost of citing older papers less frequently.", "Figure 4 visualizes the average number of citations per paper, broken down by the age of the citation.", "We observe that this number steadily increases between 2010 and 2019, showing that publications in 2019 do indeed cite more papers than publications in 2010, on average.", "We also see that this increase is mostly due to citations of papers between 0 and 3 years old, while papers that were published 15 or more years ago are still cited at approximately the same rate now as in 2010.", "Tracking citations to individual papers While the citation rate for old papers has not changed, the distribution of papers being cited may have.", "To investigate this, we now also consider the author and title fields of citations to track which papers are being cited.", "This way, we can analyze e.g. to what extent old papers cited in 2010 overlap with those cited in 2019.", "Figure 5 shows the average number of citations to papers published 15 or more years ago (corresponding to the bottom area of Fig. 4) and additionally indicates which share of these papers have already been cited in 2010.", "Figure 5: Average number of citations per paper with age 15 or older, distinguished by whether or not they have already been cited in 2010.", "We can see that in all the other years, more than half of these old citations are to papers that were not cited in 2010.", "Table 2 shows the most frequently cited old papers in 2019, additionally indicating in which year we can find the earliest citation to this paper in our dataset.", "Perhaps unsurprisingly, the most cited papers describe very broadly applicable resources or methods.", "Furthermore, two of these papers (introducing the bidirectional RNN and the LSTM, respectively) have only gathered citations from 2014 onwards, while another classic reinforcement learning paper was not cited before 2016.", "This suggests that in recent years, a substantial part of older citations is made up of deep learning papers that had not yet been (widely) cited in 2010.", "Ratio of papers to citations Figure 6 looks at the ratio of unique old papers being cited compared to the total number of citations.", "We observe that this ratio has steadily decreased since 2013, indicating that the stable number of citations goes to a continuously decreasing pool of papers.", "In other words, there is a reduction in the variety of older papers being cited.", "Journals invite submissions that are more substantial than conference papers; it is conceivable that this is reflected in the papers they cite.", "Figure 7 takes a closer look at citations 15 years or older by venue of publication.", "The four conference venues in our dataset behave very similarly, showing around 2–4 old citations on average.", "For CL papers, on the other hand, this figure is
considerably larger (up to 17 such citations on average in 2017).", "TACL papers also show a trend towards more older citations, but not as strong as for CL.", "Overall, there is a clear difference in the average number of older citations in journal articles compared to conference proceedings.", "We presented an analysis of citations in publications from major ACL venues between 2010 and 2019, focusing on the distribution of the age of cited papers.", "We found that recently published papers (0–3 years old) are cited significantly more often in publications from recent years (ca. 2015–2019), while papers published 15 or more years ago are being cited at a stable rate.", "There is also a marked difference between journal and conference publications in the distribution of citation age: journal articles feature more citations to older papers.", "These findings could be due to the increasing difficulty of keeping up with the literature, given that many more papers are being published now, in addition to the deluge of papers that appear on preprint servers.", "Some areas of NLP research also did not exist 15 years ago, e.g. social media analysis, potentially making it challenging to cite older related work.", "Finally, since several influential neural network papers were published in the 1990s (cf. Tab. 2), a mostly quantitative analysis is limited in its ability to determine, e.g., to what extent we still engage with older literature outside of this domain.", "A potential confound in our analysis is that some proceedings imposed a page limit for references; e.g., the ACL conference gave unlimited space for references in 2010, 2012, and from 2016 onwards, but imposed a page limit in 2011 and 2013–2015.", "We can still observe an increase in the average number of citations per paper during this latter period, so it seems unlikely that this had an effect.", "In addition, our analysis is limited to studying the age of the papers cited in the ACL Anthology: it does not make any claims about the complex network effects involving researchers from particular institutions, countries, or sub-fields, and it does not study other venues that also publish NLP papers.", "Future work includes a deeper qualitative analysis of which (type of) papers are being cited; a more fine-grained analysis of different research topics in NLP to determine whether changes are more prevalent within certain areas than others; or extending the analysis to a larger set of the papers in the ACL Anthology.", "We would like to thank the reviewers for their helpful comments and suggestions for further analyses.", "Marcel Bollmann was funded from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 845995." ]
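The two computations at the heart of the preceding analysis — citation age with the [0, 50] filter, and the conversion of a rank-biserial correlation into CLES — are easy to reproduce. A sketch with our own function names:

    def citation_ages(citing_year, cited_years, lo=0, hi=50):
        # Age = citing year minus cited year; ages outside [0, 50] are
        # dropped as likely reference-parsing errors (0.95% of dates).
        return [citing_year - y for y in cited_years
                if lo <= citing_year - y <= hi]

    def cles(rank_biserial_r):
        # Common-language effect size from the rank-biserial correlation
        # coefficient r: CLES = (r + 1) / 2.
        return (rank_biserial_r + 1) / 2

    print(citation_ages(2019, [2017, 1995, 119]))  # -> [2, 24]
    print(cles(0.18))                              # -> 0.59, i.e. the 59% example above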
[ "abstain", "method", "abstain", "result", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "objective", "objective", "objective", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "other", "other" ]
[ "We present a complete pipeline to extract characters in a novel and link them to their direct-speech utterances.", "Our model is divided into three independent components: extracting direct-speech, compiling a list of characters, and attributing those characters to their utterances.", "Although we find that existing systems can perform the first two tasks accurately, attributing characters to direct speech is a challenging problem due to the narrator's lack of explicit character mentions, and the frequent use of nominal and pronominal coreference when such explicit mentions are made.", "We adapt the progress made on Dialogue State Tracking to tackle a new problem: attributing speakers to dialogues.", "This is the first application of deep learning to speaker attribution, and it shows that is possible to overcome the need for the hand-crafted features and rules used in the past.", "Our full pipeline improves the performance of state-of-the-art models by a relative 50% in F1-score.", "Natural Language Processing has enabled a quantitative improvement in the humanities, by allowing for large-scale statistical measurements to be taken over hundreds of thousands of books compared to the order of tenths a human could analyse in a much longer time span.", "Some examples of large-scale literary analyses include studies on characters and their descriptions within the novel, mostly focused on gender differences (Underwood et al., 2018; Kraicer and Piper, 2019), and studies on character's relations by extracting social networks from novels (Labatut and Bost, 2019; Jayannavar et al., 2015).", "Most of these studies demand special attention to dialogues, being a major part of character expression and interaction with other characters.", "Dialogues play an instrumental role in plot developWork done during internship at Amazon ment, frequently encompassing focal plot moments, especially in fiction which is also the focus for this study.", "Here we aim to identify direct-speech utterances that form part of dialogues and associate them with the speaking characters.", "Such information is not only useful to enable large-scale socio-temporal studies but also crucial to many downstream challenging tasks like narrative understanding (Iyyer et al., 2016) and summarising (Ladhak et al., 2020).", "Further, the high-quality dialogue-character association is pertinent for generating engaging text-to-speech for novels with distinct voice profiles for characters.", "In the past, models that link direct speech to characters have been dominated by predefined rules (Muzny et al., 2017) or hand-crafted features (He et al., 2013).", "When evaluating these models the authors also presumed that a character list, together with the character's aliases, has been precompiled and that direct-speech text has been extracted.", "Although extensions to these models that extract speaking characters in a fully automated manner exist, it is unclear what impact does the automation of the aforementioned steps has on the final performance of the model.", "Moreover, the models have only been tested against a small dataset of three books from the same time period.", "These are the two questions that we aim to answer in this paper:", "i) how can we build flexible models that can generalise and improve with increasing dataset", "sizes?, and", "ii) what is the impact of errors propagating from each component of the pipeline, and thus where should we focus future efforts?", "To answer these questions, we focus on building deep learning 
models with the necessary inductive biases and flexibility to learn nuanced features when given a large enough dataset, as opposed to hand-crafted rules that need revisiting to generalise to different time periods, writing styles, genres, or even languages.", "Moreover, we present a separate evaluation of each component of the pipeline.", "Attributing speakers to direct speech is a common problem for two related domains: news and literature.", "However, previous work (O'Keefe et al., 2012) has shown that models do not generalise to both domains.", "Their best model obtained an accuracy of 92.4% and 84.1% on their two newswire datasets, whilst they only found a 53.3% accuracy when evaluating the same model on a literature dataset.", "We therefore focus on summarising progress in quote attribution for literary fiction.", "The early models targeting literary texts (Glass and Bangay, 2007) were based on the identification of speech verbs and their actors.", "However, the proportion of dialogues accompanied by a speech verb and an explicit mention of a character can be as low as 20% of the total quotes for some books.", "For this reason, subsequent work (O'Keefe et al., 2012; Elson and McKeown, 2010) shifted their focus to attributing speakers to dialogues where the character is not explicitly mentioned, incorporating rules to exploit the sequential nature of conversations.", "These models could not improve on the results of a simple Nearest Mention (NM) baseline that obtained a 53.3% accuracy on their test set.", "Finally, current state-of-the-art models (Muzny et al., 2017; He et al., 2013) demonstrated how the simple nearest mention baseline could be beaten through a combination of rules and learning.", "Both models present analysis in a limited setup:", "i) they report performance on a test set comprised of two books, Jane Austen's Emma and Anton Chekhov's The Steppe, and have therefore not been benchmarked on a wider range of styles or time periods, and", "ii) they assume the ideal circumstance of a pre-compiled list of characters, with character aliases and genders provided.", "We relax the second assumption when we evaluate our model, to estimate the end-to-end performance on a more diverse dataset of fifteen books.", "The task of speaker attribution is also closely related to other dialogue sequence problems.", "One such umbrella technique to solve these problems is Dialogue State Tracking or simply DST, where a system is tasked with estimating some conversation state variables, usually the user's goals and intents.", "We are the first to apply DST for the purpose of speaker attribution.", "Our proposed DST-based formulation requires modification to the utterance encoder with focus on non-dialogue context, and a state variable that can generalise to states not seen in the training set.", "We adapt a BERT-based DST model (Lai et al., 2020) to track the speaker for every single utterance instead of tracking the user's goals and intents.", "Our work follows a similar line of thought to Ren et al. (2018); Lai et al.
(2020), where the model is given a list of candidate intents (speakers in our case) embedded as inputs to the problem so that the model can generalise to unseen goals (speakers) during test time.", "The task of our model is to generate a score for each utterance against every candidate speaker.", "To recap, we present an end-to-end pipeline for speaker-to-dialogue attribution that leverages recent advances in large pretrained Language Models, casting the problem as Dialogue State Tracking.", "We empirically show that our model is capable of generalising to different styles more reliably as compared to prior hand-crafted feature-based systems.", "Further, we present this comparative study on literary texts ranging from the 1900s to the 2010s, which are more varied and diverse compared to past studies.", "Note that such datasets are usually effort-intensive to create and not publicly available due to lack of rights to redistribution, which makes the reported result very interesting for the wider community.", "Our annotations consist of two independent layers, one focusing on direct speech, and one focusing on clustering mentions that refer to the same character entity.", "Example 1: Excerpt from 2001: A Space Odyssey.", "Annotated direct speech is in bold, and the annotated attributed character entity inside a blue box.", "Poole was asleep, and Bowman was reading on the control deck, when Hal announced: Er Dave, I have a report for you. [HAL]", "What's up? [DAVID BOWMAN]", "We have another bad AE-35 unit. My fault predictor indicates failure within twenty-four hours. [HAL]", "For the first layer, the annotator selects the span of text representing a character's direct speech.", "It is usually found within quotation marks, but this is not a necessary or sufficient requirement.", "The annotator then attributes a character entity to the utterance.", "Example 1 presents a typical conversation with instances of coreference (Dave and Bowman both refer to David Bowman), and implicit attribution (third and fourth paragraphs) where no character is explicitly mentioned by the narrator.", "The second layer of annotations focuses on characters and their mentions.", "We follow Bamman et al. (2014) and distinguish character mentions (e.g. Dave, David, Bowman, Dr. Bowman) in the text and character entities (e.g.
DAVID BOWMAN), to which mentions refer.", "See the text in italics within Example 1 to find some of the annotated character mentions.", "Note that we don't include pronominal mentions.", "These mentions are then clustered per book into character entities by the annotators.", "We annotate a collection of 15 books sampled from the most popular titles from the time epochs 1881–2018.", "The annotation is carried out by 3 expert English native annotators, each reading the book in sequential order, over a BRAT-based annotation interface.", "In case the annotation from any single annotator is different, a master annotator goes through the cases and makes the correction, resulting in a dataset with a very high agreement (Cohen's Kappa greater than 0.9).", "Across the books, the number of annotated dialogues varies from 200 to 5000 and characters from less than 10 to 200.", "We refer to these books by IDs 1 through 15 and to the existing 3 books as E1 (Emma), E2 (Pride and Prejudice) and E3 (The Steppe).", "We divide the model into three main tasks: identifying quotes, extracting unique characters and their aliases, and attributing dialogues to the extracted characters.", "Our goal is to reduce the amount of hand-crafted rules (usually heavily biased to the small subset of documents of prior studies) where performance can be improved, and allow the model to learn nuanced features that allow it to generalise better when given a large enough dataset.", "We first introduce our direct speech identification component, which is purely rule-based due to the simplicity of the problem and since improving this aspect is not part of our core contribution.", "Afterward, we focus on identifying characters and compare NER and coreference resolution deep learning models to simple rule-based systems.", "Finally, we discuss the focal aspect of our contribution: a DST architectural adaptation to solve the speaker attribution of quotes.", "Direct speech in fiction is usually denoted with quotation marks, although there are exceptions such as Ali Smith's Summer, where speech marks are completely removed and dialogues blend in with the rest of the text, or Joyce's Ulysses, which introduces speech with dashes.", "Here we ignore such instances, which are not present in our dataset, and focus on the most common case where direct speech is marked by quotation marks.", "Further, we find that for English over 95% of the dialogues (as analysed over a large collection of popular books) follow the open-close quotation-pair convention.", "See Steinbach et al. (2011) for an in-depth review of the topic.", "In the case of extracting quotation marks, simple rules can achieve almost perfect performance.", "As in O'Keefe et al.
(2012), we use a regular expression to detect opening and closing quotation marks that denote the presence of direct speech.", "Although characters are central to most literary analyses, identifying them automatically from a novel remains an unsolved problem.", "We split the character identification task into:", "i) identifying mentions in the text that refer to characters, and", "ii) clustering those mentions into unique character entities.", "Similar to direct speech identification, we do not focus on improving the architecture for character identification.", "As both of these form input to our core DST module, we re-purpose the best of existing techniques.", "However, unlike previous studies, we do analyse and report the impact of these components on end-to-end performance to guide future research.", "Extracting entities from text is normally done over short documents, such as Wikipedia pages.", "But literature brings unique challenges to the field: novels tend to be long documents, demanding efficient algorithms, and requiring models to be able to link far-apart mentions.", "We present an evaluation of Named Entity Recognition (NER) to detect mentions, together with the effectiveness of coreference resolution to cluster character mentions into entities.", "We find that although NER achieves a similar performance to a simple rule-based system, coreference resolution's performance on clustering characters is significantly poorer than a simple rule-based character clustering technique, and future work should focus on this task.", "Figure 1: Diagram of our speaker attribution pipeline.", "We present a comparison between an out-of-domain NER model, trained on the CoNLL-2003 dataset (Tjong Kim Sang and De Meulder, 2003), and a simple rule-based baseline that focuses on identifying all the characters that speak explicitly.", "It finds the subject of the narrator's explicit dialogue attribution signals (defined by the 40 most frequent speech-denoting verbs such as said, answered, ...).", "In Table 1, we show a summary of the alias variations found in our dataset and their frequency.", "Table 1 (alias types, with frequency and an example of [mentions] → entity): Full Name Variation, 67%, [Harry, Potter, Mr.
Potter, Harry Potter] → HARRY POTTER; Diminutives, 7%, [Lizzy, Eliza] → ELIZABETH; Professional, 15.", "Since our core contribution is not to improve character clustering, and our dataset of character aliases is small, we do not develop a custom model for it and merely compare two different clustering techniques:", "i) an out-of-domain coreference resolution system, and", "ii) a simple set of rules that cluster characters according to their names.", "In both cases, we build a graph where nodes are character mentions, and edges connect every pair of compatible nodes.", "See the top right box on Clustering Mentions in Figure 1 for an example of such a graph.", "On one hand, in the case of coreference resolution, two nodes are compatible if they appear together in at least two coreferent clusters.", "Clusters of characters are formed by finding all disconnected subgraphs within the graph.", "On the other hand, although character aliases can be of any kind and might not be related to each other by name, most of the ones appearing in literature are variations of the character's full name (see Table 1).", "We build a rule-based algorithm that deems two names incompatible if:", "i) the first names of the characters are different (they do not match exactly) and the shorter first name is not exactly inside the longer one.", "This also takes care of a few diminutive forms (like Eliza for Elizabeth).", "We split the graph into disconnected subgraphs of compatible names, and find those nodes that are ambiguous, i.e., nodes whose first-neighbour connections contain more than one title, first name, or surname.", "Removing those nodes, we can form unambiguous clusters of characters that share the same title, name, and surname.", "As opposed to previous work (Bamman et al., 2014; Elson et al., 2010), where ambiguous names would be merged to the closest entity mentioned in the text, we allow ambiguous nodes to either form their own cluster, or be part of any of their first-neighbour nodes' clusters.", "We use the text to resolve this ambiguity by finding the most mentioned cluster among the possible clusters in the 20-paragraph vicinity of the ambiguous mention.", "In this way, we can retain characters such as Mrs. Bennet in Pride and Prejudice, without merging them into other members of the Bennet family, since they are prominent enough to be given their own cluster.", "In this section, we present an adaptation of Dialogue State Tracking to speaker attribution.", "In DST, it is a challenge to produce models that can work with dynamic ontologies and unseen slot values, such that the user can request information on any slot (movies, restaurants, ...) and use any value (the type of food, the price, ...) that has not necessarily been seen during training.", "In the same way, we can't simply use a general fixed tag set of characters beyond the level of an individual novel, since we want our model to generalise to unseen novels and unseen characters during test time.", "We will therefore embed the characters' mentions within the inputs of the model, as done in state-of-the-art DST (Lai et al., 2020).", "Below, we discuss in detail how we adapt DST to model speaker attribution in novels.", "Although our dataset is annotated at the level of word spans, the odds are high that disconnected spans in the same paragraph are attributed to the same speaker.", "We find that this rule is violated on less than 5% of the paragraphs that contain more than one disconnected span.",
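The name-compatibility rule described earlier in this record reduces to string checks over first names (with titles assumed to be stripped beforehand); a minimal sketch with our own function name:

    def first_names_compatible(name_a: str, name_b: str) -> bool:
        # Two names are deemed incompatible if their first names differ
        # and the shorter first name is not contained inside the longer
        # one; containment handles diminutives such as Eliza/Elizabeth.
        fa = name_a.split()[0].lower()
        fb = name_b.split()[0].lower()
        shorter, longer = sorted((fa, fb), key=len)
        return fa == fb or shorter in longer

    print(first_names_compatible("Eliza Bennet", "Elizabeth Bennet"))  # True
    print(first_names_compatible("Jane Bennet", "Elizabeth Bennet"))   # False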
"Therefore, as in He et al. (2013), our model will be trained on attributing speakers at the level of paragraphs.", "Regarding the model inputs, we split the text into conversations.", "Denoting paragraphs with no direct speech as narratives, we segment conversations by restricting the number of intervening narratives between direct speech utterances to one.", "If more than one narrative separates two direct speech utterances, the conversation is split into two different conversations.", "Given a set of n utterances that define a conversation, u = {u_0, u_1, ..., u_{n-1}}, a set of l mentions of candidate characters, m = {m_0, m_1, ..., m_{l-1}}, and a set of k candidate character entities linked to those mentions, c = {c_0, c_1, ..., c_{k-1}}, where k <= l, we wish to model the probability of each candidate character entity, c_i, being the speaker of a given utterance, u_j.", "This probability will be denoted as P(c_i | u_j, u).", "Let ε denote the embedding model that transforms word tokens into vectors in R^{D_ε}, where D_ε is the output dimension of the embedding model.", "In this work, we chose a DistilBERT model (Sanh et al., 2019) for ε.", "As in Figure 1, we generate an embedding for every utterance in the conversation, ε_{u_i} = DistilBERT(u_i)[CLS], (1) by selecting the embedding of the [CLS] token, whereas to encode the characters' mentions we take the mean of the embeddings of the tokens inside the mention.", "For a mention m_j = {m_j^0, m_j^1, ..., m_j^{t-1}} of length t, ε_{m_j} = (1/t) Σ_{T=0}^{t-1} DistilBERT(u_i)_{m_j^T}. (2)", "We denote the collection of all mention embeddings by M, a matrix of dimensions D_ε × D_l, where D_l is the number of mentions of candidate characters inside the conversation.", "In the next section, we explain the three components of the model that take these embeddings and produce the probability of a character speaking:", "i) the conversation embedding module, that takes an utterance as input and produces its context-aware representation,", "ii) a character extraction module, that given the contextual representation of an utterance and the embedding of the candidate characters' mentions, generates the logits for each candidate character entity and utterance, and", "iii) a sequential decoder component that learns the conversational turn patterns.", "Below, we define the architecture of each component in detail.", "The conversation embedding module consists of a Gated Recurrent Unit (Cho et al., 2014; Gers et al., 1999), GRU, that encodes the content of the conversation, h_i = GRU(ε_{u_i}, h_{i-1}), (3) where h_i is the GRU's hidden state of dimension D_ε, which is randomly initialised at the beginning of the sequence.", "This module processes the GRU's hidden states to extract the candidate characters' logits.", "We take the dot product of the utterance embedding, processed by a fully connected network, with the mention embedding matrix, M, to obtain the logits, l_i = FCN(h_i) · M, (4) where l_i has dimensions D_l.", "Doing this for the whole conversation results in L, of dimension D_n × D_l, denoted by Mention Logits in Figure 1.
We can now combine the logits of different mentions that belong to the same character entity by max-pooling over character entities to get the Entity Logits, E, with dimensions D_n × D_k.", "Implicitly, the model defined above assumes that labels are independent of each other, and that therefore the likelihood of a sequence of labels in a conversation, y, can be expressed as the product of utterance-wise likelihoods, P(y | u) = Π_{n=0}^{N} P(y_n | u). (5)", "However, characters speaking in a conversation follow certain turn-taking patterns that are common throughout literature, such as a two-party conversation in which the dialogue moves back and forth between two characters.", "We add a linear-chain Conditional Random Field (CRF) (Lafferty et al., 2001) to our model, to maximise the likelihood of a sequence of characters and relax the label independence assumption by allowing a target to depend on its immediate predecessor.", "A CRF models the sequential likelihood as a combination of element-wise predictions and a pairwise interaction term that models the probability of transitioning from label y_i to label y_j.", "In our particular implementation, the element-wise predictions are the Entity Logits, E, and the pairwise interactions will be learned parameters, P(y | u) = exp( Σ_{n=0}^{N} E_n(y_n) + Σ_{n=0}^{N-1} V_{y_n, y_{n+1}} ) / Z, (6) where Z represents the normalization factor, and V is generally a D_k × D_k matrix of learned weights known as the transition matrix.", "In our use case, there is no specific label ordering that can generalise to unseen novels, and we thus reduce the degrees of freedom of the D_k × D_k transition matrix to two: the value of the diagonal, and the value of the off-diagonal elements.", "The first one controls the probability that the same speaker continues speaking, whereas the second one varies the probability of a change in speaker.", "Note that this implies that we do not need to constrain the number of speaking characters.", "At inference, we find the most likely sequence of characters using the Viterbi algorithm (Viterbi, 1967).", "In this section, we present both the evaluation of each separate component and the final evaluation of the entire pipeline.", "To compare ground-truth direct speech with our extracted quotes, we define a True Positive as an exact match between our selected text and the ground truth.", "With this definition, the F1-score achieved by our quote identification module is 0.98 ± 0.01 when evaluated against our entire dataset.", "Table 2: Model, F1-Score, Precision, Recall.", "We find that common errors are the identification of quoted text that has a purpose other than direct
NER and our simple rule-based model show a similar performance, although overall the rule-based system works better.", "Finally, we evaluate character clustering on oracle mentions using the B 3 measures of precision, recall and F1-score (Amig et al., 2009).", "The results are reported in Table 3. The performance of coreference resolution is significantly worse than the simple naming rules we developed.", "By using name compatibility we achieve a high precision but a low recall, meaning that the clusters we create tend to contain elements of the same class, but are incomplete; a character might be split into several different clusters.", "This is because we only cluster characters from variations of their names, and therefore all other cases shown in Table 1 are considered as separate entities.", "As mentioned in Section 4.2.2, the coreference resolution model fails at linking two mentions to the same character that are far apart, and therefore produces a system with lower recall.", "We train and evaluate the speaker attribution task on oracle direct speech, mentions and character clusters.", "We compute precision, recall and F1-Score, all weighted averages, of the character entities attributed to each span of direct speech.", "The resulting evaluation is shown in Table 4. We show a comparison with a baseline model that selects the nearest mention to either left or right of the quote.", "To include a thorough evaluation despite the small size of our dataset, we have trained the model in a leave-one-out fashion for all books for which we annotated more than 1 , 000 paragraphs, together with the three publicly available books released by Muzny et al. (2017).", "In Table 4, we report average values and standard deviation over the 11 books.", "Moreover, we show an ablation study in Table 5, computed on only one train, validation and test split.", "Next to the overall F1-Score we show the performance on the model by type of signal where a sample is:", "i) explicit, if the character is mentioned on the same paragraph as the quote,", "ii) implicit, if there is no narrator context accompanying the quote.", "Note that not all quotes fall in either of these categories.", "Finally, the entire pipeline is evaluated as a clustering overlap problem through the B 3 clustering metric.", "A cluster is defined by the set of quotations attributed to the same character entity.", "If the quote has been incorrectly identified as a quote by the model, it forms part of a misidentified cluster.", "On the other hand, if we haven't identified one of the true quotes, we also label it as another kind of misidentification.", "In Figure 2, we show a full pipeline comparison of our model to the state-of-the-art model presented in Muzny et al. (2017) 2 .", "Our model improves over Muzny et al. (2017) by an average of 50% in F1-2 We ran their publicly available code on our dataset, the code can be found here https://nlp.stanford.edu/ ~muzny/quoteli.html 5826 F1-Score Accuracy Explicit Accuracy Implicit Distil-BERT (DB) 0 .", "We also show the effect of replacing the different components with Oracle data for a subsample of the dataset in Figure 3. 
We can see that different components play a different role by book, whereas improving the mention extraction stage can be of crucial importance for some books (14 and 15 part of text split; and 1 and 11 part of train split), character clustering has a larger effect on others.", "However, the dominant effect is still the Speaker attribution model.", "We have presented a speaker attribution pipeline for novels that does not rely on pre-compiled lists of characters and that performs consistently across different writing styles and time periods.", "Our main contribution has been to develop the first deep learning model for speaker attribution, based on previous Dialogue State Tracking approaches.", "Our deep learning model has the flexibility to learn nuanced features from data, compared to previous work that relied on rules or hand-crafted features.", "Training our model on a small dataset composed of 15 different novels, we find that it outperforms the model presented in Muzny et al. (2017) by an average of 50% F1-score.", "In the future, we hope to improve our model by: training it on a larger and more varied dataset, and training the model on speaker attribution together with the related task of coreference resolution.", "We have also presented an error analysis on the different components necessary to perform the end-to-end goal of attributing characters to their direct speech utterances in novels:", "i) direct speech identification,", "ii) character extraction, and", "iii) speaker attribution.", "We have shown the need to produce literature-domain specific models targeting character extraction in order to improve the accuracy of current systems." ]
[ "objective", "method", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "method", "method", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "objective", "abstain", "abstain", "abstain", "result" ]
[ "Policy gradient algorithms have found wide adoption in NLP, but have recently become subject to criticism, doubting their suitability for NMT.", "Choshen et al. (2020) identify multiple weaknesses and suspect that their success is determined by the shape of output distributions rather than the reward.", "In this paper, we revisit these claims and study them under a wider range of configurations.", "Our experiments on in-domain and cross-domain adaptation reveal the importance of exploration and reward scaling, and provide empirical counterevidence to these claims.", "In neural sequence-to-sequence learning, in particular Neural Machine Translation (NMT), Reinforcement Learning (RL) has gained attraction due to the suitability of Policy Gradient (PG) methods for the end-to-end training paradigm (Ranzato et al., 2016; Li et al., 2016; Yu et al., 2017; Li et al., 2018; Flachs et al., 2019; Sankar and Ravi, 2019).", "The idea is to let the model explore the output space beyond the reference output that is used for standard cross-entropy minimization, by reinforcing model outputs according to their quality, effectively increasing the likelihood of higher-quality samples.", "The classic exploration-exploitation dilemma from RL is addressed by sampling from a pretrained model's softmax distribution over output tokens, such that the model entropy steers exploration.", "For the application of NMT, it was firstly utilized to bridge the mismatch between the optimization for token-level likelihoods during training and the corpus-level held-out set evaluations with non-differentiable/decomposable metrics like BLEU (Ranzato et al., 2016; Edunov et al., 2018), and secondly to reduce exposure bias in autoregressive sequence generators (Ranzato et al., 2016; Wang and Sennrich, 2020).", "It has furthermore been identified as a promising tool to adapt pretrained models to new domains or user preferences by replacing reward functions with human feedback in human-in-the-loop learning (Sokolov et al., 2016; Nguyen et al., 2017).", "Recently, the effectiveness of these methods has been questioned: Choshen et al. (2020) identify multiple theoretical and empirical weaknesses, leading to the suspicion that performance gains with RL in NMT are not due to the reward signal.", "The most surprising result is that the replacement of a meaningful reward function (giving higher rewards to higher-quality translations) by a constant reward (reinforcing all model samples equally) yields similar improvements in BLEU.", "To explain this counter-intuitive result, Choshen et al. (2020) conclude that a phenomenon called the peakiness effect must be responsible for performance gains instead of the reward.", "This means that the most likely tokens in the beginning gain probability mass regardless of the rewards they receive during RL training.", "If this hypothesis was true, then the perspectives for using methods of RL for encoding real-world preferences into the model would be quite dire, as models would essentially be stuck with whatever they learned during supervised pretraining and not reflect the feedback they obtain later on.", "However, the analysis by Choshen et al. 
"However, the analysis by Choshen et al. (2020) missed a few crucial aspects of RL that have led to empirical success in previous works: First, variance reduction techniques such as the average reward baseline were already proposed with the original Policy Gradient by Williams (1992), and proved effective for NMT (Kreutzer et al., 2017; Nguyen et al., 2017).", "Second, the exploration-exploitation trade-off can be controlled by modifying the sampling function (Sharaf and Daumé III, 2017), which in turn influences the peakiness.", "We therefore revisit the previous findings with NMT experiments differentiating model behavior between in-domain and out-of-domain adaptation, controlling exploration, reducing variance, and isolating the effect of reward scaling.", "This allows us to establish a more holistic view of the previously identified weaknesses of RL.", "In fact, our experiments reveal that improvements in BLEU cannot solely be explained by increased peakiness, and that simple methods encouraging stronger exploration can successfully move previously lower-ranked tokens into higher ranks.", "We observe generally low empirical gains in in-domain adaptation, which might explain the surprising success of constant rewards in (Choshen et al., 2020).", "However, we find that rewards and their scaling do matter for domain adaptation.", "Furthermore, our results corroborate the auspicious findings of Wang and Sennrich (2020) that RL mitigates exposure bias.", "Our paper thus reinstates the potential of RL for model adaptation in NMT, and puts previous pessimistic findings into perspective.", "The code for our experiments is publicly available at https://github.com/samuki/reinforce-joey.", "2 RL for NMT: The objective of RL in NMT is to maximize the expected reward for the model's outputs with respect to the parameters \theta: \arg\max_{\theta} E_{p_{\theta}(y|x)}[\Delta(y, y')] (Eq. 1), where y' denotes a reference translation, y is the generated translation, and \Delta is a metric (e.g. BLEU (Papineni et al., 2002)) rewarding similarities to the reference.", "(Rewards may also be obtained without reference translations y', hence \Delta(y) can replace \Delta(y, y') in the following equations.)", "Applying the log derivative trick, the following gradient can be derived: \nabla_{\theta} = E_{p_{\theta}(y|x)}[\Delta(y, y') \nabla_{\theta} \log p_{\theta}(y|x)].", "However, computing the gradient requires the summation over all y \in V_{trg}^{m}, which is computationally infeasible for the large sequence lengths m and vocabulary sizes V_{trg} that are common in NMT.", "Therefore, Eq. 1 is usually approximated through Monte Carlo sampling (Williams, 1992), resulting in unbiased estimators of the full gradient.", "We draw one sample from the multinomial distribution defined by the model's softmax to approximate Eq. 1 (Ranzato et al., 2016; Kreutzer et al., 2017).", "The temperature \tau of the softmax distribution \exp(y_i / \tau) / \sum_j \exp(y_j / \tau) can be used to control the amount of exploration during learning.", "Setting 0 < \tau < 1 results in less diverse samples, while setting \tau > 1 increases the diversity and also the entropy of the distribution.",
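The sampling and update step just described can be summarized in a few lines. The following is a minimal sketch, not the paper's Joey NMT implementation: `model` is assumed to expose a step-wise decoder interface (`init_state`, `step`, `max_len`, and `eos_id` are hypothetical names), and `reward_fn` stands in for a sentence-level metric \Delta such as BLEU.

```python
import torch

def reinforce_step(model, src, reward_fn, tau=1.0):
    """One REINFORCE update: sample y ~ p_theta(.|x) with softmax temperature
    tau, then scale the log-likelihood gradient by the scalar reward
    Delta(y, y') from a single Monte Carlo sample."""
    state = model.init_state(src)            # hypothetical encoder call
    log_prob, tokens = 0.0, []
    for _ in range(model.max_len):
        logits, state = model.step(state, tokens)     # hypothetical decoder step
        dist = torch.distributions.Categorical(logits=logits / tau)
        tok = dist.sample()                  # tau > 1 -> more exploration
        log_prob = log_prob + dist.log_prob(tok)
        tokens.append(tok.item())
        if tokens[-1] == model.eos_id:
            break
    reward = reward_fn(tokens)               # scalar Delta(y, y'), e.g. BLEU
    loss = -reward * log_prob                # log-derivative trick (Eq. 1)
    loss.backward()
    return tokens, reward
```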
"Lowering the temperature (i.e. making the distribution peakier) may be used to make policies more deterministic towards the end of training (Sutton and Barto, 1998; Rose, 1998; Sokolov et al., 2017), while we aim to reduce peakiness by increasing the temperature.", "Variance reduction techniques were already suggested by Williams (1992) and found to improve generalization for NMT (Kreutzer et al., 2017).", "The simplest option is the baseline reward, which in practice is realized by subtracting a running average of historic rewards from the current reward in Eq. 2.", "It represents an expected reward, so that model outputs get more strongly reinforced or penalized if they diverge from it.", "In addition to reducing variance, subtracting a baseline reward also changes the scale of rewards (e.g. [0, 1] for BLEU becomes [-0.5, 0.5]), allowing updates towards or away from samples by switching the sign of the update term u_k (Eq. 2).", "The same range of rewards can be obtained by rescaling them, e.g., to (\Delta(y, y') - \Delta_{min}) / (\Delta_{max} - \Delta_{min}) - 0.5, with the minimum (\Delta_{min}) and maximum (\Delta_{max}) taken within each batch.", "Minimum Risk Training (MRT) (Shen et al., 2016) aims to minimize the empirical risk of the task loss over a larger set of n = |S|, n > 1 output samples S(x) \subseteq Y(x), weighting the cost of each sample by its renormalized model probability.", "This yields a biased objective due to the renormalization of model scores, but that has not hindered its empirical success (Shen et al., 2016; Edunov et al., 2018; Wieting et al., 2019; Wang and Sennrich, 2020).", "Interestingly, the resulting gradient update includes a renormalization of sampled rewards, yielding a similar effect to the baseline reward (Shen et al., 2016).", "It also allows for more exploration thanks to learning from multiple samples per input, but it is therefore less attractive for human-in-the-loop learning and efficient training.", "Exposure bias in NMT arises from the model only being exposed to the ground truth during training while receiving its own previous predictions during inference; it might thus be overly reliant on perfect context, which in turn lets errors accumulate rapidly over long sequences (Ranzato et al., 2016).", "Wang and Sennrich (2020) hypothesize that exposure bias increases the prevalence of hallucinations in domain adaptation and causes the beam search curse (Koehn and Knowles, 2017; Yang et al., 2018), which describes the problem that the model's performance worsens with larger beams.", "Wang and Sennrich (2020) find that MRT with multiple samples can mitigate this problem thanks to exposing the model to its own predictions during training.", "We will extend this finding to other PG variants with single samples.", "We implement PG and MRT (without enforcing gold tokens in S; n = 5) in Joey NMT (Kreutzer et al., 2019) for Transformers (Vaswani et al., 2017).", "We simulate rewards for training samples from IWSLT14 de-en with sacreBLEU (Post, 2018), and test on IWSLT14 held-out sets.", "We consider two different domains for pretraining, WMT15 and IWSLT14.", "This allows us to distinguish the effects of RL in in-domain learning vs. domain adaptation scenarios.", "RL experiments are repeated three times and we report mean and standard deviation.", "Remaining experimental details can be found in the Appendix.", "The goal is not to find the best model in a supervised domain adaptation setup (Fine-tuning in Table 2), but to investigate if/how scalar rewards expressing translation preferences can guide learning, mimicking a human-in-the-loop learning scenario.",
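A minimal sketch of the two reward transformations discussed above — the running-average baseline and per-batch min-max scaling — both of which introduce negative values and thereby allow updates away from sampled outputs. Function and class names are ours, not from the paper's codebase.

```python
class AverageBaseline:
    """Keeps a running average of historic rewards; subtracting it centers
    rewards so below-average samples receive negative reinforcement."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def __call__(self, reward: float) -> float:
        self.count += 1
        self.total += reward
        return reward - self.total / self.count

def minmax_scale(rewards):
    """Rescale a batch of rewards to [-0.5, 0.5], as described above."""
    lo, hi = min(rewards), max(rewards)
    if hi == lo:                      # degenerate batch: all rewards equal
        return [0.0 for _ in rewards]
    return [(r - lo) / (hi - lo) - 0.5 for r in rewards]
```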
"Choshen et al. (2020) suspect that PG improvements are due to an increase in peakiness.", "Increased peakiness is indicated by a disproportionate rise of p_top10 and p_mode, the average token probability of the 10 most likely tokens and of the mode, respectively.", "To test the influence of peakiness on performance, we deliberately increase and decrease the peakiness of the output distribution by adjusting the temperature parameter \tau.", "In Tables 1 and 2 we can see that all PG variants generally increase peakiness (p_top10 and p_mode), but that those with higher temperature \tau > 1 show a lower increase.", "Comparing the peakiness with the BLEU scores, we find that BLEU gains are not tied to increasing peakiness in in-domain and cross-domain adaptation experiments.", "This is exemplified by reward scaling (PG+scaled), which improves BLEU but does not lead to an increase in peakiness compared to PG.", "These results show that improvements in BLEU cannot just be explained by the peakiness effect, contradicting the hypothesis of Choshen et al. (2020).", "However, in cross-domain adaptation exploration plays a major role: since the model has lower entropy on the new data, reducing exploration (lower \tau) helps to improve translation quality.", "One disadvantage of high peakiness is that previously likely tokens accumulate even more probability mass during RL.", "Choshen et al. (2020) therefore fear that it might be close to impossible to move lower-ranking tokens to higher ranks with RL.", "We test this hypothesis under different exploration settings by counting the number of gold tokens in each rank of the output distribution.", "That number is divided by the total number of gold tokens to obtain the probability of gold tokens appearing in each rank.", "We then compare these probabilities before and after RL.", "Fig. 1 illustrates that training with an increased temperature pushes more gold tokens out of the lowest rank.", "The baseline reward has a beneficial effect towards that aim, since it also allows down-weighting samples.", "This shows that upwards mobility is feasible and not a principled problem for PG.", "Choshen et al. (2020) observe an increase in peakiness when all rewards are set to 1, and BLEU improvements even comparable to those obtained with BLEU rewards.",
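The peakiness statistics and the gold-token rank analysis above are straightforward to compute; this sketch shows one plausible way, assuming `probs` is a model's softmax distribution over the vocabulary at a decoding step (all names are ours).

```python
import numpy as np

def peakiness_stats(probs: np.ndarray) -> dict:
    """p_mode: probability of the most likely token;
    p_top10: average probability of the 10 most likely tokens."""
    top = np.sort(probs)[::-1]
    return {"p_mode": float(top[0]), "p_top10": float(top[:10].mean())}

def gold_rank_distribution(all_probs, gold_ids, num_ranks=100):
    """Probability of gold tokens appearing in each rank of the output
    distribution (compared before vs. after RL training)."""
    counts = np.zeros(num_ranks)
    for probs, gold in zip(all_probs, gold_ids):
        rank = int((probs > probs[gold]).sum())   # rank 0 = most likely token
        counts[min(rank, num_ranks - 1)] += 1
    return counts / counts.sum()
```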
(2020).", "Similarly, domain adaptation via self-training does not show improvements over the baseline, which confirms that gains do not come from being exposed to new inputs alone.", "While the effects in-domain are generally weak with a maximum gain of 0.5 BLEU over the baseline (with beam size k = 5 , Table 1), the results for domain adaptation (Table", "2) show a clear advantage of using informative rewards with up to +4.7 BLEU for PG and +6.7 BLEU for MRT (with beam size k = 5 ).", "We conclude that rewards do matter for PG for NMT.", "As described in Section 2.3, scaling the reward (PG+scaled), subtracting a baseline (PG+average bl), or normalizing it over multiple samples for MRT, introduces negative rewards, which enables updates away from sampled outputs.", "BLEU under domain shift (Table", "2) shows a signifi-cant improvement when allowing negative rewards.", "The scaled reward increases the score by almost 1 BLEU, the average reward baseline by almost 2 BLEU and MRT leads to a gain of about 4.5 BLEU over plain PG.", "The results show that improvements of RL over the baseline are higher with lower beam sizes, since RL reduces the need for exploration (through search) during inference thanks to the exploration during training.", "These findings are in line with (Bahdanau et al., 2017).", "For RL models, BLEU reductions caused by larger beams are weaker than for the baseline model in both settings, which confirms that PG methods are effective at mitigating the beam search problem, and according to Wang and Sennrich (2020) might also reduce hallucinations.", "Despite the promising empirical gains over a pretrained baseline, all above methods would fail if trained from scratch, as there are no non-zero-reward translation outputs sampled when starting from a random policy.", "Empirical improvements over a strong pretrained model vanish when there is little to learn from the new feedback, e.g. when Model p top 10 p mode BLEU ( k = 1 ) BLEU ( k = 5 ) BLEU ( k = 50 ) Pretraining (WMT15) 0 0 19.74 20.35 20.10 Self-training (IWSLT14) 46.87 99.15 19 .", "it is given on the same data which the model was already trained on, as we have shown above, relating to the failure cases in (Choshen et al., 2020).", "RL methods for MT can be effective at adapting a model to new custom preferences if these preferences can be reflected in an appropriate reward function, which we simulated with in-domain data.", "In Table 2, we observed this effect and gained several BLEU points without revealing reference translations to the model.", "Being exposed to new sources alone (without rewards) is not sufficient to obtain improvements, which we tested by self-training (Table 2).", "Ultimately, the potential to improve MT models with RL methods lies in situations where there are no reference translations but reward signals, and models can be pretrained on existing data.", "We provided empirical counter-evidence for some of the claimed weaknesses of RL in NMT by untying BLEU gains from peakiness, showcasing the upwards mobility of low-ranking tokens, and re-confirming the importance of reward functions.", "The affirmed gains of PG variants in adaptation scenarios and their responsiveness to reward functions, combined with exposure bias repair and avoidance of the beam curse, rekindle the potential to utilize them for adapting models to human preferences.", "We acknowledge the support by the state of Baden-Wrttemberg through bwHPC compute resources." ]
[ "abstain", "abstain", "method", "result", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "abstain", "other" ]
[ "Transformer-based language models pretrained on large amounts of text data have proven remarkably successful in learning generic transferable linguistic representations.", "Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.", "We explore two general ideas.", "The Generative Parsing idea jointly models the incremental parse and word sequence as part of the same sequence modeling task.", "The Structural Scaffold idea guides the language model's representation via additional structure loss that separately predicts the incremental constituency parse.", "We train the proposed models along with a vanilla Transformer language model baseline on a 14 million-token and a 46 million-token subset of the BLLIP dataset, and evaluate models' syntactic generalization performances on SG Test Suites and sized BLiMP.", "Experiment results across two benchmarks suggest converging evidence that generative structural supervisions can induce more robust and humanlike linguistic generalization in Transformer language models without the need for data intensive pre-training.", "Pre-trained Transformer architectures have led to huge progress in building more human-like language processing systems (Radford et al.; Devlin et al., 2019; Brown et al., 2020, among others).", "These models achieve impressive perplexity results on language modelling datasets, perform well on grammatical judgments (Warstadt et al., 2020), and provide useful linguistic representations that benefit a wide range of downstream tasks.", "Probing analyses also suggest that these models learn to implicitly encode syntactic information (Hewitt and Manning, 2019; Clark et al., 2019) that may support better linguistic generalization than recurrent neural network architectures (RNNs).", "However, the Transformer architecture (Vaswani et al., 2017) is an interesting subject of study beyond its success in transfer-learning settings.", "Transformer models lack the inductive biases of RNNs.", "Rather than maintaining vector-valued state and updating it in a recurrent manner, auto-regressive Transformer models encode all past decisions simultaneously at each inference step, thanks to a self-attention mechanism.", "The only notion of sequence order is also given by position embeddings summed to content embeddings in both input and auto-regressive signals.", "Previous works have shown the advantage of structural supervision in RNNs in learning to maintain syntactic states and non-local dependencies (Kuncoro et al., 2018; Wilcox et al., 2019; Futrell et al., 2019).", "It remains an open question whether Transformer language models can similarly benefit from generative structural supervision, and what form of structural supervision would more effectively induce human-like syntactic generalization.", "This work hypothesizes that the Transformer language model may benefit from explicit generative structural supervision to systematically generalize syntactic knowledge.", "Here we explore two ma-jor classes of structural guidance for Transformer language models based on joint modeling of language and constituency parses.", "The generative parsing as language modeling approach builds a Transformer-parameterized model to learn to predict actions that incrementally build constituency trees along with terminal words, following prior work on RNNs (Dyer et al., 2016; Choe and Charniak, 2016).", "The structural 
"The structural scaffolding approach follows the general idea of regularizing the hidden representation through a multi-task learning objective, with prior success in various NLP tasks (Zhang and Weiss, 2016; Swayamdipta et al., 2018).", "[Figure 1: a partial constituency parse of 'The birds sang' aligned with its incremental transition sequence <BOS> NT(S) NT(NP) The birds REDUCE NT(VP) sang NT(ADVP), and the word/action-chunk alignment w_0..w_3 / y_0:1..y_3:4 used by the scaffold.]", "We test these two approaches on two subsets of the BLLIP dataset (Charniak et al., 2000) and evaluate models' syntactic generalization performances on the SG Test Suites (Hu et al., 2020) and a sampled subset of the BLiMP Benchmark (Warstadt et al., 2020).", "We show evidence that generative structural supervision indeed induces more robust and human-like linguistic generalization in Transformer language models and explore the different trade-offs involved in the presented methods.", "Here we explore joint modelling of structures and words parametrized with Transformers by considering both a sentence W and its constituency parse Y and modeling the joint distribution P(W, Y).", "A language model can be described formally as a probability distribution over strings of a language w_1, ..., w_T, usually factored left-to-right as p(W) = \prod_{t=1}^{T} p(w_t | w_{<t}) (Eq. 1).", "There are many possible approaches that can combine both language modeling and syntax modeling tasks.", "As long as both tasks share some of the parameters, they can be considered a case of multi-task learning (Caruana, 1997).", "Of interest here are the models proposed in Recurrent Neural Network Grammars (RNNGs; Dyer et al., 2016) and parsing as language model (LSTM-LM; Choe and Charniak, 2016).", "Both approaches model the joint distribution of words W and constituency tree components Y as p(Y, W) = p(a_1, ..., a_R) = \prod_{t=1}^{R} p(a_t | a_{<t}) (Eq. 2), where the a_t are transitions of a state machine that generates both the sentence and the tree.", "These transitions are similar to the well-established transition sets used for transition-based parsing (Earley, 1970) but adapted to generate both text and parse simultaneously.", "For the remainder of this work, we will consider each a_t to be integer valued, indexing a dictionary of transitions.", "A transition a can be a word w or a transition action that generates a component of the constituency tree y.", "The actions include non-terminal symbols that open and label a new constituent with the label x, indicated as NT(x), or a REDUCE action closing the closest open constituent.", "An example of a partial parse tree and transitions can be found at the top of Figure 1.", "RNNG and LSTM-LM parametrize the same factorization in Equation 2 in different ways.", "RNNG utilizes stack-LSTMs, which allow it to dynamically create representations for partial tree components by composition.", "The LSTM-LM, however, uses a flat parametrization, treating the transitions as a sequence in a conventional language model learnt with an LSTM (Hochreiter and Schmidhuber, 1997).", "It should also be noted that the LSTM-LM is designed as a parser, while the RNNG is also used as a language model.", "In order to derive a language model from a joint model, it is necessary to marginalize over all possible parse trees: p(W) = \sum_{Y \in Y(W)} p(Y, W) (Eq. 3), which is an intractable problem since there is an exponentially large number of possible trees.", "The original RNNG work (Dyer et al., 2016) proposes an approximate solution based on importance sampling.", "In this work we use the word-synchronous beam search approximation introduced in Stern et al. (2017).",
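As an illustration of the NT(x)/word/REDUCE transition scheme just described, here is a small sketch that linearizes a constituency tree into the joint sequence a_1, ..., a_R; the tree encoding (nested Python tuples) is our own choice, not the paper's data format.

```python
def tree_to_actions(tree):
    """Linearize a constituency tree into NT(x)/word/REDUCE transitions.
    A tree is (label, children...); a leaf is a plain string (a word)."""
    if isinstance(tree, str):
        return [tree]                      # terminal: emit the word itself
    label, *children = tree
    actions = [f"NT({label})"]             # open and label a new constituent
    for child in children:
        actions += tree_to_actions(child)
    actions.append("REDUCE")               # close the closest open constituent
    return actions

# (S (NP The birds) (VP sang)) ->
# ['NT(S)', 'NT(NP)', 'The', 'birds', 'REDUCE', 'NT(VP)', 'sang', 'REDUCE', 'REDUCE']
print(tree_to_actions(("S", ("NP", "The", "birds"), ("VP", "sang"))))
```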
(2017).", "The marginalized likelihood language model in Equation 3 is desirable because it makes no statistical independence assumption between language and syntax and shares all parameters across both tasks, with the exception of action specific embeddings.", "Particularly relevant for this work is the fact that both word and non-word transitions are predicted as language model output indiscriminately and are available at each prediction step through its history a <t .", "In this work we propose to parametrize Eq 2 with a Transformer language model (Vaswani et al., 2017).", "This is equivalent to the flat parametrization of the LSTM-LM but using a Transformer language model instead.", "Unlike LSTM-LM, which is a parsing model, we derive from it a language model by marginalization as in the RNNG.", "A Transformer language model can be succinctly described as a neural network of vertically stacked layers where the m -th layer is given by h m<t = FF m O A m 1 ( h m 1 <t ) A m 2 ( h m 1 <t ) A m N ( h m 1 <t ) .", "Here h m 1 <t RH t is the output of the previous decoder layer for all previous predictions of the model at time step t and H is the size of the hidden vector.", "The input to the first layer i.e. h 0 <t are the embeddings of all previous transitions a <t concatenated with a start symbol.", "Each embedding is the sum of both a content embedding, dictionary vector that is being indexed, and a position embedding that encodes the absolute or relative position of each action in the sequence.", "FF m () is a feed-forward layer, A m 1 () AMN () are multiple self-attention heads and O RH H is a matrix multiplication performed on the concatenated output of the attention heads.", "Both the feed-forward and the projection of N attention heads through O are wrapped around with residual, dropout and layer normalization operations that are here removed for clarity.", "Each attention head comprises a simple inner product attention mechanism A mn ( h m 1 <t ) = V mn h m 1 <t softmax (cid:0) ( K mn h m 1 <t ) T Q mn h m 1 <t + M (cid:1) (5) where V mn , K mn , Q mn R H/N H are value, key and query projection matrices respectively and the softmax operation is normalized over columns to sum to one.", "The matrix M { , 0 } t t is used to prevent the model from attending to future states during training, enabling efficient parallelization.", "It is displayed here due to its relevance for the next section.", "Similarly to other models, to derive a distribution over all possible transitions, including words, nonterminal symbols and the REDUCE operation, we can use a softmax together with an inner product p ( a t | a <t ) = softmax( EW Y h m<t ) a t (6) where EW Y are the embeddings for the joint vocabulary of words, non-terminals and REDUCE transitions.", "Henceforth, we refer to this model as Parsing as Language Model , or PLM for short.", "Unlike LSTMs or the RNNG, the Transformer has direct access to all past decisions through self-attention and relies on position embeddings to encode word order.", "Thus, in principle, there is no structural bias for the model to favor past decisions that are close in time to inform current prediction.", "On one hand, this potential ability to use long distance information can enable a less local, more human like processing of language, but on the other hand, it can also result in an additional learning burden, especially if there is not sufficient learning data available.", "Also worth noting for the experiments proposed here is that the total number of parameters 
"As previously mentioned, unlike any of the other models, the RNNG is able to create partial tree representations by composition using stack-LSTMs.", "This changes the RNNG model structure dynamically as a function of the partial parse, a very desirable property for deriving syntax-aware representations.", "Moreover, the fact that Recurrent Neural Networks such as LSTMs summarize all information about previous time steps in two hidden vectors creates a bottleneck that forces the model to focus on the local state.", "This is a situation where a syntax-aware representation can provide additional value by enabling the local state to better encompass past structures.", "We conjecture that a similarly constrained local state might benefit Transformer models in learning linguistic regularities, especially in a limited training data scenario.", "In an attempt to capture a similar effect in the Transformer, we explore here the idea of masking some attention heads to reflect the parser state, as in the stack-Transformer (Astudillo et al., 2020).", "In the stack-Transformer, two attention heads are specialized to attend only to the contents of the buffer and the stack, respectively, for dependency and semantic parsing tasks.", "Here we choose to specialize two heads as well for each layer in Equation 4, as depicted in Fig. 2.", "One attention head attends to the contents of the last open constituent, whereas another head attends to all other past decisions not involving that constituent.", "The rest of the heads are left free as in the original Transformer architecture.", "To constrain the attention heads, we only need to alter the mask M in Equation 5 to depend on the head index n and the past actions, M_n(a_{<t}), which results in a negligible computation overhead.", "This hard masking makes the model structure change dynamically depending on the partial parse and forces some heads to focus on the local syntactic state.", "Nevertheless, unlike the RNNG, it does not create new representations of partial parses that can be composed in a recurrent manner at each time step, and some attention heads can still operate unrestricted.", "We hypothesize that a structure-aware attention mechanism may still help the model achieve better generalization.", "The symbolic representation induces a strong inductive bias towards how the model should use the structure that it generates on the fly.", "We henceforth refer to this model as PLM-mask.",
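To make the head-specific masking concrete, here is one plausible way the two specialized masks M_n(a_{<t}) could be derived from the action history: a stack of open-constituent positions splits each query's past into the span of the most recent open constituent and everything else. The exact bookkeeping in the paper's implementation may differ, so treat this purely as an illustration of the idea.

```python
def head_masks(actions):
    """Two 'may attend' masks per query step j: one covering the span of the
    most recent open constituent, one covering all earlier positions."""
    t = len(actions)
    stack = [0]                              # positions of open constituents
    nt_pos = []
    for i, a in enumerate(actions):
        if a.startswith("NT("):
            stack.append(i)
        elif a == "REDUCE" and len(stack) > 1:
            stack.pop()
        nt_pos.append(stack[-1])             # start of the open constituent at step i
    inside = [[nt_pos[j] <= i <= j for i in range(t)] for j in range(t)]
    rest = [[i < nt_pos[j] for i in range(t)] for j in range(t)]
    return inside, rest                      # union of the two = causal attention
```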
"Given the strong coupling between the tasks, the marginal likelihood Transformer language model of the previous section can be expected to be strongly influenced by the additional syntax prediction task.", "This comes, however, at a big cost.", "First, sequences combine words with non-terminal and REDUCE transitions, yielding longer sequences than those of a normal language model (R > T).", "Furthermore, the approximated marginalization is computationally intensive and also introduces an approximation error.", "One well-established regime that allows joint modeling of tasks at low complexity is that of the syntactic scaffold (Zhang and Weiss, 2016; Søgaard and Goldberg, 2016; Swayamdipta et al., 2018).", "Scaffolding adds an additional structure prediction task at one of the layers of the model, as a separate layer and only during training.", "This is a minimally intrusive change, since it just branches off some hidden vector of the network and computes an additional loss.", "It also has no influence on test runtime and avoids expensive steps such as marginalization.", "However, applying the idea of syntactic scaffolding to our present scenario poses one difficulty.", "If we use a standard language model predicting words w and predict the non-word symbols y separately, we face the problem that the two sequences have different lengths.", "To overcome this in a straightforward way, we predict the n-gram of non-word actions y_{t:t+n(t)} corresponding to the partial parse synchronous with step t when we predict word w_t.", "We use a secondary softmax layer for this action n-gram prediction.", "Here E_Y is the vocabulary of all transition n-grams (excluding words) found in the train corpus, plus a blank symbol.", "Note that since scaffolding operates only at train time, we do not need to worry about the generalization of these n-grams to test time.", "The models are thus trained to minimize the loss function -log p(Y, W), where p(Y, W) = \prod_{t=1}^{T} p(w_t | w_{<t}) \prod_{t=1}^{T} p(y_{t:t+n(t)} | w_{<t}) (Eq. 8).", "The scaffold can be set so that the synchronous non-word action n-grams y_{t:t+n(t)} are predicted either before (Figure 1c, left) or after (Figure 1c, right) producing w_t.", "We considered both variants in our experiments to empirically assess their impact on performance.", "We refer to this model as Transformer Language Model with Syntactic Scaffold, or ScLM in short, and to its two versions as ScLM-past and ScLM-next, for past- and next-n-gram prediction.",
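A sketch of the scaffold training loss of Eq. 8: a standard next-word cross-entropy plus a synchronous action n-gram cross-entropy branched off the same hidden states. This two-head module is our own minimal rendering of the idea, not the released implementation.

```python
import torch
import torch.nn as nn

class ScaffoldHeads(nn.Module):
    """Two softmax heads over shared decoder states: words and action n-grams."""
    def __init__(self, hidden, n_words, n_action_ngrams):
        super().__init__()
        self.word_head = nn.Linear(hidden, n_words)
        self.action_head = nn.Linear(hidden, n_action_ngrams)  # secondary softmax

    def loss(self, h, word_targets, action_targets):
        """h: (T, hidden) decoder states; targets: (T,) class indices.
        Total loss = word LM loss + synchronous action n-gram loss (Eq. 8)."""
        ce = nn.functional.cross_entropy
        return (ce(self.word_head(h), word_targets)
                + ce(self.action_head(h), action_targets))
```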
(2020).", "This makes the results here more comparable to the results reported in previous work, especially with RNNGs.", "We train the proposed models and the baseline vanilla Transformer language models on BLLIP-MD , a 14 million-token corpus, and BLLIP-LG , a 46 million-token corpus, both of which are auto-parsed using a state-of-the-art constituency parser (Kitaev and Klein, 2018).", "We used the parsed sentences to generate oracle parsing action sequence for PLM and PLM-mask.", "We collected a list of word-synchronous parsing 1 Code available at https://github.com/IBM/ transformers-struct-guidance action sequences from the train and development oracle of BLLIP-LG and use it to parametrize the action n -gram vocabulary of ScLMs trained on both BLLIP-MD and BLLIP-LG .", "There are 3756 action n -gram types from the corpora, including one padding token and one blank token.", "All models were trained with learning rate 10 5 , AdamW optimizer, and minibatch of size 5.", "We trained the models with multiple seeds within the capacity of our resources, in order to accommodate potential variance.", "In total, there are three seeds of LM, four of ScLM-past, four of ScLM-next, three of PLM, and three of PLM-mask for BLLIP-MD , and the same number of seeds of each model type for BLLIP-LG .", "Models were trained until convergence, as suggested by the loss of the development set during training.", "To assess whether a trained model systematically generalizes its syntactic knowledge, we employ targeted syntactic evaluation paradigm (Marvin and Linzen, 2018).", "Specifically, we measure models' performance on two held-out test datasets, a collection of syntactic generalization test suites from Hu et al. (2020) and BLiMP Benchmark from Warstadt et al. (2020).", "These two datasets cover a wide range of English syntactic phenomena.", "Tests from Hu et al. (2020), which we refer as SG Test Suites , consist of hand-designed test suites for evaluating fine-grained syntactic generalization in incremental processing of a linguistic input.", "The general method is to compare models' surprisals p ( continuation | prefix ) of grammatical and ungrammatical continuations given certain sentence prefixes.", "We report the accuracy averaged across SG test suites.", "BLiMP Benchmark features minimal pairs of a grammatical sentence W and an ungrammatical counterpart W .", "To evaluate a model on these minimal pairs, one simply compares the likelihood of W and W assigned by the model.", "As is implied by the evaluation methods, we need to marginalize out the structure variables for PLM or PLM-mask models in order to estimate the surprisal of a continuation, given a sentence prefix or the likelihood of a complete sentence.", "We follow similar setup as in Futrell et al. (2019); Wilcox et al. 
"Since the search process can be computationally intensive, the large number of items in the BLiMP benchmark poses a computational challenge.", "We therefore select the first 10% of the 1000 items in each of the 67 tests of the BLiMP Benchmark.", "We report the accuracy over these 100 items and refer to this down-sized BLiMP Benchmark as BLiMP-10%.", "We compare models' performance on the SG Test Suites and BLiMP-10% in Figure 3.", "Each bar shows a model's performance averaged across multiple seeds on a given benchmark, with each dot plotting the accuracy of a specific seed.", "Overall, syntactic generalization performance improves as the training data size increases from BLLIP-MD (14 million tokens) to BLLIP-LG (42 million tokens).", "Models with structural guidance achieve higher accuracy than the vanilla Transformer language model trained on the same set of raw text data without explicit structural information.", "We also include the results for the RNNGs taken from Hu et al. (2020).", "The RNNG lags behind all Transformer models by a large margin in average scores.", "We also notice that among the different forms of structural guidance, generative parsing as language modeling is the most effective in improving syntactic generalization performance over the baseline Transformer language models.", "We did not observe consistent benefits from adding the dynamic masking mechanism to PLM.", "While the scaffolding approach slightly improves vanilla Transformer language models, it still falls behind the best performance of the models trained with generative parsing.", "We hypothesize that our scaffold did not fully exploit the compositional structure in the local parses, since it models each action n-gram as a distinct type, while the generative parsing models only predict actions from a relatively small set of non-terminal actions, which might make it easier for PLM and PLM-mask to learn compositional generalization.", "We leave it for future work to design new scaffolds that can take advantage of the combinatorial nature of syntactic structure.", "For completeness, we also ran the pre-trained GPT-2 model on the syntactic suites.", "This yielded a score of 0.808 on the SG Test Suites and 0.827 on BLiMP-10% for the small version of pre-trained GPT-2.", "Among models trained on BLLIP-LG, the average accuracy score on the SG Test Suites is 0.723 for PLMs, 0.748 for PLM-masks, and 0.665 for LMs.", "A similar trend is observed on BLiMP-10%, where among models trained on BLLIP-LG the average accuracy is 0.751 for PLMs, 0.753 for PLM-masks, and 0.708 for LMs.", "The proposed PLM method is able to close the gap between GPT-2 small and the same model trained on BLLIP-LG by about half, while the improvement on BLiMP is more modest but still significant.", "It remains an open question whether scaling syntactic supervision to a larger dataset than BLLIP-LG would bring the generalization performance of PLM models closer to that of the pretrained GPT-2 model.", "We compare perplexity on the BLLIP held-out test set against syntactic generalization performance in Figure 4.",
"Perplexities of the PLM and PLM-mask models are computed by setting the parse tree equal to the gold parse in Equation 3 to approximate the likelihood.", "[Figure 4: Comparison between model perplexity on BLLIP test data and syntactic generalization performance on the SG Test Suites (top) and BLiMP-10% (bottom); axes: word-level perplexity vs. SG / BLiMP-10% accuracy, for LM, ScLM-past, ScLM-next, PLM, and PLM-mask trained on BLLIP-MD and BLLIP-LG.]", "Note that, unlike Hu et al. (2020), all our models use the same BPE vocabulary and word tokenization from GPT-2.", "The only exception are the additional parsing actions in the vocabulary y.", "From Figure 4, both perplexity and syntactic generalization performance improve with dataset size.", "However, for both training dataset sizes, we see that structural guidance can improve syntactic generalization.", "PLM models consistently perform better than vanilla models.", "While all models achieve very similar perplexity results after being trained on a specific dataset, their syntactic generalization performances differ dramatically.", "In addition to comparing models' aggregated performances, we also compare their generalization performances on the clustered subsets of tests in the SG Test Suites and BLiMP-10%.", "These subsets consist of several related tests that target a specific type of syntactic phenomenon, such as NPI licensing, subject-verb agreement, filler-gap dependencies, etc.", "We also include the results for the RNNGs taken from Hu et al. (2020).", "Results in Figure 5 show converging evidence that structural guidance in the form of generative parsing can robustly improve learning of subject-verb agreement and NPI licensing, and helps the model to better capture incremental processing phenomena such as garden-path effects, but seems to slightly hurt performance on gross syntactic state.", "While overall the RNNG shows a poor performance, this is mostly due to its very low scores on the licensing suites.", "Excluding these suites only, the RNNG shows a performance close to the PLM model, even clearly outperforming it on the gross syntactic state suites.", "In this category and in binding, the PLM variants seem inferior to all other models.", "Multitask learning (Caruana, 1997) has been applied to a variety of NLP tasks with traditional modeling approaches (Miller et al., 2000; Sutton and McCallum, 2005; Sutton et al., 2007) as well as more recent neural models (Collobert et al., 2011; Li et al., 2020a).", "A recurring theme has been the use of structure in the form of syntactic trees to benefit other NLP tasks.", "Among the early works exploring this direction, Punyakanok et al. (2008) showed that syntactic parses can benefit Semantic Role Labeling (SRL).", "Poon and Domingos (2009) extended this idea to induce first-order logic representations in an unsupervised fashion, by clustering the dependency structures.", "In both cases syntax forms part of a pipeline and is not strictly supervision for the end task.", "This trend continued with the rise of neural models.", "Collobert et al. (2011) improved deep convolutional neural network models for syntactic chunking with additional POS supervision.",
"Zhang and Weiss (2016) and Søgaard and Goldberg (2016) observe the benefits of POS supervision at different depths of a neural network model, with impact on dependency parsing, tagging and CCG supertagging performance.", "He et al. (2019) perform a syntax-based pruning of semantic roles, showing benefits in a multilingual setting.", "More recently, Sachan et al. (2020) incorporate a syntactic graph recurrent neural network into BERT models for better semantic role labeling.", "However, their method shows little or no benefit of syntax modeling for Named Entity Recognition and relation linking tasks.", "Neural machine translation (Chen et al., 2018) and text generation (Li et al., 2020a) have also been shown to benefit from syntactic modeling.", "In a recent work, Li et al. (2020b) use syntactic modeling in BERT-based transformers to achieve performance gains on several text classification benchmarks.", "[Figure 5: Model performance comparison by specific linguistic phenomena clustered in the SG Test Suites (top) and BLiMP-10% (bottom); per-cluster accuracy panels (Licensing, Long-Distance Dependencies, Agreement, Garden-Path Effects, Gross Syntactic State, Center Embedding, Anaphor Agreement, Argument Structure, Binding, Control/Raising, Determiner-Noun Agreement, Ellipsis, Filler Gap, Irregular Forms, Island Effects, NPI Licensing, Quantifiers, Subject-Verb Agreement) for RNNG, LM, ScLM-past, ScLM-next, PLM, and PLM-mask on BLLIP-MD and BLLIP-LG.]", "Other works have found that structural supervision in the form of intermediate fine-tuning (e.g., on CCG supertagging) is not helpful or even harmful (Pruksachatkun et al., 2020; Warstadt et al., 2019).", "The focus of our work is on gauging the impact of joint modeling on syntactic generalization performance.", "In this direction, the work of Swayamdipta et al. (2018) is close to the scaffolding version of our model.", "They predict multiple labels, extracted from syntactic information, as an auxiliary task and show positive effects on shallow semantic parsing and co-reference resolution.", "We use, however, a single feature, the constituency parsing n-gram, which is closer to prior work relying on part-of-speech information.", "In addition, we explore the impact of using preceding vs. succeeding structure as the feature, which, as shown, plays a role in the learning process.", "In terms of modeling objective and syntactic representations, our method is closest to the works of Choe and Charniak (2016) and Dyer et al. (2016) that jointly model syntax and language.",
"A more recent work from Peng et al. (2019) uses a Rational Neural Network language model that can derive binary unlabeled constituents from attention weights and can supervise the attention to attain a structural inductive bias.", "The proposed models show lower language modeling perplexity compared to their structure-agnostic counterparts.", "We also extend here the idea of syntax-aware language modeling to Transformer-based language models.", "Finally, our approach relates to other works that propose ways of incorporating structural information into Transformer-based models.", "This includes the use of dependency or tree structure for constraining self-attention patterns (Strubell et al., 2018; Wang et al., 2019; Zhang et al., 2020), guiding cross-attention (Chen et al., 2018; Astudillo et al., 2020), modelling syntactic distance (Du et al., 2020), using syntactic information to guide the computation flow in the model (Shen et al., 2021), or knowledge distillation (Kuncoro et al., 2020).", "Our structured masking in the parsing-as-language-modeling approach is close in spirit to the methods that modify the attention mechanism according to syntactic connections (Astudillo et al., 2020); this work, however, primarily aims to study the impact of structural guidance on syntactic generalization.", "Therefore, we resort to simpler methods of incorporating structure to minimize the impact of modeling intricacies.", "Our work explores two forms of syntactic supervision as structural guidance for Transformer language models.", "Experiments suggest that the generative parsing approach can effectively improve systematic generalization of learned syntactic knowledge in a small training data regime, while a naive syntactic scaffold approach does not improve the baseline to the same extent, despite its reduced computation cost at inference time.", "Future work may explore alternative structural guidance strategies that combine the best of both approaches.", "The authors would like to thank the anonymous reviewers for their helpful comments.", "This work was supported by the MIT-IBM Watson AI Lab." ]
[ "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "method", "objective", "abstain", "other", "other", "objective", "objective", "other", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other" ]
[ "Abstract Recent research has discovered that a shared bilingual word embedding space can be induced by projecting monolingual word embedding spaces from two languages using a self-learning paradigm without any bilingual supervision.", "However, it has also been shown that for distant language pairs such fully unsupervised self-learning methods are unstable and often get stuck in poor local optima due to reduced isomorphism between starting monolingual spaces.", "In this work, we propose a new robust framework for learning unsupervised multilingual word embeddings that mitigates the instability issues.", "We learn a shared multilingual embedding space for a variable number of languages by incrementally adding new languages one by one to the current multilingual space.", "Through the gradual language addition our method can leverage the interdependencies between the new language and all other languages in the current multilingual hub/space.", "We find that it is beneficial to project more distant languages later in the iterative process.", "Our fully unsupervised multilingual embedding spaces yield results that are on par with the state-of-the-art methods in the bilingual lexicon induction (BLI) task, and simultaneously obtain state-of-the-art scores on two downstream tasks: multilingual document classification and multilingual dependency parsing, outperforming even supervised baselines.", "This finding also accentuates the need to establish evaluation protocols for cross-lingual word embeddings beyond the omnipresent intrinsic BLI task in future work.", "The ubiquitous use and success of word embeddings in monolingual tasks inspired further research on inducing cross-lingual word embeddings for two or more languages in the same vector space.", "Embeddings of translations and words with similar meaning are geometrically close in the shared cross-lingual vector space .", "This property makes them effective features for cross-lingual NLP tasks such as cross-lingual document classification (Kle-mentiev et al., 2012), cross-lingual information retrieval (Vulic and Moens, 2015), bilingual lexicon induction (Mikolov et al., 2013b; Gouws et al., 2015; Heyman et al., 2017), and (unsupervised) machine translation (Artetxe et al., 2017b; Lample et al., 2018; Artetxe et al., 2018c).", "Most prior work has focused on methods for constructing bilingual word embeddings (BWEs), yielding word representations for exactly two languages.", "For problems such as multilingual document classification, however, it is highly-desirable to represent words in a multilingual space.", "A favourable property is that it enables fitting a single classifier on the union of training datasets in many languages, which results in", "1) knowledge transfer across languages that may lead to better classification performance, and", "2) a setup that is easier to maintain as it is no longer required to train many different monolingual or bilingual classifiers.", "Multilingual word embedding (MWE) methods typically generalize existing BWE methods by mapping multiple source language spaces to the space of one target language (Ammar et al., 2016), which is used as a pivot/hub language.", "This approach may lead to suboptimal solutions as it does not account for interdependencies between the source languages.", "Most BWE and MWE methods rely on cross-lingual supervision to some extent: e.g., bilingual lexicons (Mikolov et al., 2013a), parallel corpora (Gouws et al., 2015), or subject-aligned document pairs (Vulic and Moens, 2016).", 
"In such paradigms, modeling dependencies between all languages is impractical as it requires supervision for all language pair combinations.", "Recent research has shown that BWEs can also be learned without cross-lingual supervision and can even outperform supervised BWE variants on bilingual lexicon induction benchmarks (Con-neau et al., 2018; Artetxe et al., 2018a).", "Chen and Cardie (2018) took a first step towards learning multilingual spaces without supervision while incorporating dependencies between all languages but their approach extends the work of Conneau et al. (2018), which has known limitations concerning optimization stability with distant language pairs (Sgaard et al., 2018).", "In this work, we investigate robust methods to induce MWEs without any cross-lingual supervision.", "The robustness of our approach is illustrated in good performance for distant languages such as Finnish and Bulgarian.", "This paper makes the following contributions.", "First, based on a reformulation of the BWE method of Artetxe et al. (2018a), we propose two novel methods for inducing MWEs:", "1) the single hub space model (SHS) uses the classical idea of mapping source languages to a single hub language;", "2) the incremental hub space model (IHS) incorporates dependencies between all languages by incrementally expanding the multilingual space by one language in each step.", "IHS results in mappings that are more robust and coherent across languages.", "Both SHS and IHS only require monolingual data.", "Second, we evaluate our method on benchmarks for bilingual lexicon induction (BLI), multilingual document classification, and dependency parsing.", "We find that the IHS method is competitive with state-of-the-art BWE methods on the bilingual lexicon induction benchmarks, while yielding the highest scores on the multilingual document classification and dependency parsing benchmarks.", "Third, unlike the majority of prior work (Con-neau et al., 2018; Artetxe et al., 2018a; Chen and Cardie, 2018, inter alia ), we do not limit our evaluation to the intrinsic BLI task only.", "Consequently, we investigate if embedding reweighting, a recently proposed best practice for BWEs, is useful for extrinsic tasks such as document classification and dependency parsing in multilingual settings.", "Cross-lingual word embeddings have received a lot of attention in recent years.", "Most methods construct a space shared between two languages using cross-lingual supervision in the form of bilingual lexicons (Mikolov et al., 2013a; Artetxe et al., 2016; Smith et al., 2017), parallel corpora (Klementiev et al., 2012; Faruqui and Dyer, 2014; Gouws et al., 2015; Luong et al., 2015) or subject-aligned document pairs (Vulic and Moens, 2016).", "See Ruder et al. (2018) for a full overview of BWE model typology in relation to the required supervision.", "To enable knowledge transfer across an arbitrary number of languages, multilingual methods have been introduced.", "Huang et al. (2015), propose decomposing a matrix with multilingual co-occurrence counts weighted by probabilistic dictionaries.", "Ammar et al. 
"Ammar et al. (2016) compare this method to three other MWE models: MultiCluster, MultiCCA, and MultiSkip.", "MultiCluster uses bilingual dictionaries to cluster translations and then trains the monolingual Skip-gram model (SG) (Mikolov et al., 2013a) on a union of monolingual corpora in which words are replaced with their cluster id, such that words in the same cluster get the same representation.", "MultiCCA is the multilingual extension of the method of Faruqui and Dyer (2014): using canonical correlation analysis (CCA) and dictionaries with English as the target language, monolingual embeddings are projected to the English vector space.", "MultiSkip is a straightforward extension of the BiSkip method (Luong et al., 2015), which generalizes the monolingual SG objective to account for word alignments in parallel corpora.", "Similarly, Duong et al. (2017) extend CBOW to multiple languages.", "All these methods learn multilingual embeddings using bilingual dictionaries or parallel corpora: this limits their applicability for many languages.", "More recently, Conneau et al. (2018) and Artetxe et al. (2018a) showed that BWEs can be effectively induced without any cross-lingual supervision.", "The approaches are based on the assumption that monolingual embedding spaces are approximately isomorphic.", "(One of the necessary conditions for this assumption to hold is that the monolingual corpora on which the embeddings are trained are comparable (Søgaard et al., 2018).)", "Improving on earlier attempts (Cao et al., 2016; Zhang et al., 2017), Conneau et al. (2018) propose a two-step framework to map two monolingual spaces to a shared space.", "First, they use an adversarial objective to obtain an initial bilingual space in which a discriminator can no longer distinguish to which language a given word embedding belongs.", "They then fine-tune the initial solution.", "An important limitation is that the adversarial objective is prone to converging to degenerate solutions.", "Furthermore, Søgaard et al. (2018) empirically prove that the method typically fails for distant language pairs such as English-Finnish.", "Artetxe et al. (2018a) propose another framework with the same goal.", "Expanding on their earlier work (Artetxe et al., 2017a, 2018b), they use an unsupervised heuristic to obtain a noisy initial seed lexicon, which is used to obtain an initial bilingual space.", "This solution is iteratively improved, similarly to Artetxe et al. (2017a) and Conneau et al. (2018), while using value dropping regularization to escape early convergence to local minima.", "Their method is the starting point for this work, and it is discussed in detail in Section 3.", "The two unsupervised approaches are limited to finding mappings between a pair of languages.", "To the best of our knowledge, Chen and Cardie (2018) is the only unsupervised method that constructs a multilingual embedding space.", "Their method extends the adversarial pre-training and iterative refinement steps of Conneau et al. (2018) to the multilingual setting.", "Their work does not investigate the limitations of Conneau et al. (2018)'s method: less stable optimization and difficulties with mapping spaces with reduced isomorphism.",
(2018)'s method: less stable optimization and difficulties with mapping spaces with reduced isomorphism.", "Furthermore, their generalization turns the iterative refinement into a non-convex optimization problem.", "In contrast, the multilingual methods proposed in this work are applicable to distant language pairs and decompose every iteration of the refinement step into multiple convex optimization problems, making them very robust and widely applicable.", "We now summarize mapping-based approaches to learning BWEs, which serve as the backbone of our multilingual approach.", "These methods rely on a mapping procedure, that is, a way to transform two monolingual spaces such that translations and similar words obtain similar representations.", "Supervised approaches take translations from readily available seed training dictionaries.", "Unsupervised approaches construct a seed lexicon from scratch, and use an iterative procedure to refine the seed lexicon and the mapped bilingual space.", "Mapping Procedure.", "Various mapping procedures have been proposed in the literature (Mikolov et al., 2013b; Dinu et al., 2015; Lazaridou et al., 2015; Vulic and Korhonen, 2016).", "These methods can be seen as variants of a single framework (Artetxe et al., 2018b), summarized here.", "At its core, each mapping procedure learns the orthogonal transformations $W_x$ and $W_z$ for the monolingual embedding spaces $X$ and $Z$ that minimize the distance between embeddings of translations in the mapped spaces $XW_x$ and $ZW_z$.", "The orthogonality constraint ensures that the transformations preserve the monolingual constellation of embeddings.", "Formally, let $D$ be a matrix representing a bilingual dictionary s.t. $D_{ij} = 1$ if the $i$-th source word is translated by the $j$-th target word and $D_{ij} = 0$ otherwise; then $W_x$ and $W_z$ are found by solving the following optimization problem: $\arg\max_{W_x, W_z} \sum_i \sum_j D_{ij} \, (X_{i,:} W_x) \cdot (Z_{j,:} W_z) = \arg\max_{W_x, W_z} \mathrm{tr}(X W_x (D Z W_z)^\top)$ (1), subject to $W_x W_x^\top = I$, $W_z W_z^\top = I$, where $\mathrm{tr}(\cdot)$ denotes the trace operator.", "Eq. (1) has a closed-form solution based on the singular vectors of $X^\top D Z$: $W_x = U$, $W_z = V$ with $U S V^\top = \mathrm{SVD}(X^\top D Z)$.", "In addition to the transformation, there are several optional pre-processing (S1-S2) and post-processing (S3-S5) steps: S1.", "Normalization: apply length normalization (normalizing $X$ and $Z$ such that all embeddings have a unit Euclidean norm), or mean centering, or a combination of both; S2.", "Whitening: apply ZCA whitening (Bell and Sejnowski, 1997) on $X$ and $Z$, which transforms the monolingual embedding matrices such that each dimension/component has unit variance and such that the dimensions are uncorrelated (see Eq. (2) later).", "The intuition is that it is easier to align the vector spaces along directions of high variance; S3.", "Re-weighting the components according to the singular value matrix $S$ of $X^\top D Z$: this is an attempt to further align the embeddings in the multilingual space, as each singular value measures how well a dimension in the multilingual space correlates across languages for the given dictionary; S4.", "De-whitening, the inverse transformation of S2: after the mapping, it was shown to be important to restore the variance information in case whitening was applied (Artetxe et al., 2018c); S5.", "Dimensionality reduction: truncate the embedding vectors such that only components with the highest singular values are kept.", 
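The closed-form solution of Eq. (1) is a one-line computation once the dictionary matrix is built. Below is a minimal numpy sketch of this mapping procedure (a Procrustes-style solution); the function name and toy dimensions are our own illustration, not code from the paper.

```python
import numpy as np

def orthogonal_map(X, Z, D):
    """Closed-form solution of Eq. (1): orthogonal W_x, W_z maximizing
    tr(X W_x (D Z W_z)^T), obtained from the SVD of X^T D Z."""
    U, S, Vt = np.linalg.svd(X.T @ D @ Z)
    return U, Vt.T, S                       # W_x = U, W_z = V

# toy usage with a partial dictionary (first 50 words are translations)
rng = np.random.default_rng(0)
X, Z = rng.normal(size=(100, 300)), rng.normal(size=(120, 300))
D = np.zeros((100, 120))
D[np.arange(50), np.arange(50)] = 1.0
W_x, W_z, S = orthogonal_map(X, Z, D)
```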
"Refinement Procedure.", "The refinement aims at iteratively improving the seed dictionary and the bilingual space with Expectation Maximization (Dempster et al., 1977).", "In each iteration, the mapping procedure is executed using the dictionary from the previous iteration to obtain a new bilingual space, and then a new dictionary is induced using nearest neighbor retrieval in the cross-lingual similarity matrix M .", "This is repeated until the (unsuper-vised) training objective (cid:80) i (cid:80) j D ij (cid:0) ( X i, : W x ) ( Z j, : W z ) (cid:1) stops increasing.", "The matrix M is calculated using cross-domain similarity local scaling (CSLS; Conneau et al. (2018)), an extended variant of cosine similarity that avoids the hubness problem (Radovanovic et al., 2010; Dinu et al., 2015).", "2 In particular, the element m ij at row i and column j of M corresponds to the CSLS value between the crosslingual vectors x CLi and z CLj of the i th source word and the j th target word respectively: m ij = CSLS ( x CLi , z CLj ); CSLS ( x , z ) = 2 cos ( x , z ) r Zk ( x ) r Xk ( z ) .", "r Xk ( x ) and r Zk ( z ) calculate the average cosine similarity of a vector with its k nearest neighbors (measured by cosine) in the mapped spaces of X , Z , respectively.", "It is also beneficial to jointly infer dictionaries source-to-target and target-to-source (Artetxe et al., 2018a).", "3 The mapping is then learned from the concatenation of these two dictionaries.", "4 To avoid suboptimal local minima, Artetxe et al. (2018a) propose to randomly drop values from the matrix M with probability 1 p (further value dropping ).", "The value of p is exponentially increased as training progresses.", "p is initialized to a small value (e.g., 0 . 1 ).", "Whenever the objective stops improving for N patience refinement steps, p is multiplied with a given factor (e.g., 2 ) until p 1 after which all values in M are kept.", "Value dropping was shown to be crucial when constructing bilingual spaces between distant language pairs.", "We later analyze its impact on the proposed multilingual methods.", "Inducing a Seed Lexicon.", "Artetxe et al. 
"Inducing a Seed Lexicon.", "Artetxe et al. (2018a) obtain a seed lexicon based on the assumption that, for a translation pair $w^X_i, w^Z_j$, the monolingual similarity vectors $\sqrt{X_i X^\top}$ and $\sqrt{Z_j Z^\top}$ of translations $i, j$ are (approximately) equal up to a permutation (the square root in the formulas is empirically motivated).", "Therefore, seed translations for a source word $i$ are generated by finding the nearest neighbor based on similarity of monolingual similarities.", "This heuristic yields a very noisy seed lexicon, but it was shown to contain a sufficiently strong bilingual signal to bootstrap the refinement procedure.", "The seed lexicon is inferred symmetrically (i.e., by concatenating respective source-to-target and target-to-source seed lexicons) and the vocabularies are truncated to the $C_{seed}$ most frequent words.", 
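A compact numpy sketch of this similarity-of-similarities heuristic follows. The clipping of negative similarities before the square root and the plain dot-product retrieval are our own simplifications; the original method uses CSLS-style retrieval and symmetric inference.

```python
import numpy as np

def seed_lexicon(X, Z):
    """For each source word, pick the target word whose sorted monolingual
    similarity distribution is closest (equal up to a permutation under the
    isomorphism assumption, hence the sort)."""
    m = min(len(X), len(Z))                       # common truncated vocab size
    X, Z = X[:m], Z[:m]
    def profile(E):
        E = E / np.linalg.norm(E, axis=1, keepdims=True)
        M = np.sqrt(np.clip(E @ E.T, 0.0, None))  # sqrt: empirically motivated
        M = np.sort(M, axis=1)                    # invariance to permutations
        return M / np.linalg.norm(M, axis=1, keepdims=True)
    Sx, Sz = profile(X), profile(Z)
    return np.argmax(Sx @ Sz.T, axis=1)           # noisy source-to-target seeds
```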
"We now present two models for learning unsupervised multilingual word embedding spaces: the single hub space model (SHS) and the incremental hub space model (IHS).", "The methods generalize the bilingual framework described in §3, and rely on (a subset of) preprocessing and postprocessing steps S1-S5 in the multilingual setting.", "Single Hub Space (SHS).", "The SHS model defines one language as the hub language $L_0$ and projects the embedding spaces $Z_1, \ldots, Z_N$ of all other languages $L_1, \ldots, L_N$ (henceforth secondary languages) to the hub space $X$.", "Hence, we reduce the construction of a multilingual space of $N$ languages to the alignment of $N - 1$ vector spaces.", "Learning these projections is similar to the bilingual case: we use the unsupervised iterative refinement procedure and seed lexicon heuristic from §3.", "However, we require the orthogonal mapping to be asymmetric: the hub language space should either remain unchanged or be transformed with the same operation for each of the $N - 1$ language pairs.", "We therefore derive an asymmetric version of the mapping framework from §3 that yields the exact same solution as the original.", "Let $X$ be the embedding matrix of the hub language, $Z_1, \ldots, Z_N$ the embedding matrices of the secondary languages, and $D^{k,l}$ the dictionary between languages $L_k$ and $L_l$.", "We induce a multilingual space $X^m, Z^m_1, \ldots, Z^m_N$ in three main steps.", "First, the embeddings of each language are preprocessed by normalizing and whitening the embeddings, as described by Eqs. (2)-(6); normalization consists of subsequently performing length normalization and mean centering: $\mathrm{ZCAwhiten}(W) = W (W^\top W)^{-0.5}$ (2); $X' = \mathrm{normalize}(X)$ (3); $X'' = \mathrm{ZCAwhiten}(X')$ (4); $Z'_l = \mathrm{normalize}(Z_l)$ (5); $Z''_l = \mathrm{ZCAwhiten}(Z'_l)$ (6).", "After preprocessing, we rotate each secondary language $L_l$ to a bilingual space between the hub language space and its own embedding space, as described by Eqs. (7)-(10).", "The calculations are analogous to the bilingual mapping procedure: the left and right singular vectors $U_l$ and $V_l$ of $X''^\top D^{0,l} Z''_l$ are the rotation matrices that project the preprocessed matrices $X''$ and $Z''_l$ to their bilingual space; this is formulated in Eqs. (7)-(8).", "The bilingual projection of $Z''_l$ can be reweighted by multiplying it with a given power $q$ of the singular value matrix $S_l$ of $X''^\top D^{0,l} Z''_l$ (Eq. (9)).", "Intuitively, the reweighting operation makes the dimensions that correlate better across languages more important.", "Next, we restore the variance information of $Z'_l$ by performing a dewhitening operation: we project back to the monolingual space, multiply with the inverse of the whitening matrix, and then project back to the bilingual space (Eq. (10)): $U_l S_l V_l^\top = \mathrm{SVD}(X''^\top D^{0,l} Z''_l)$ (7); $Z_{l,bi(l)} = Z''_l V_l$ (8); $Z'_{l,bi(l)} = Z_{l,bi(l)} S_l^q$ (9); $Z''_{l,bi(l)} = Z'_{l,bi(l)} V_l^\top (Z'^\top_l Z'_l)^{0.5} V_l$ (10).", "Note that the projection matrices that map from the bilingual to the monolingual spaces are given by the inverses of $U_l$ and $V_l$; since the matrices are orthogonal, their inverses are equal to their transposes.", "Finally, we project $Z''_{l,bi(l)}$ to the space of the hub language (Eq. (11)).", "The multilingual space for the hub language is simply the monolingual embedding space after preprocessing (Eq. (12)).", "For the bilingual case this formulation is equivalent to the symmetric mapping introduced in §3: one can easily verify that the dot products between the mapped spaces simplify to the same formula.", 
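A condensed numpy sketch of Eqs. (2)-(10) follows; this is our own rendering, and the final projection to the hub space (Eq. (11)), whose body is not recoverable from this text, is omitted.

```python
import numpy as np

def _mat_pow(M, p):
    """Power of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    vals = np.clip(vals, 1e-12, None)          # guard against rank deficiency
    return vecs @ np.diag(vals ** p) @ vecs.T

def normalize(E):
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # length normalization
    return E - E.mean(axis=0)                          # mean centering

def shs_map_secondary(X, Z_l, D, q=0.5):
    """Map one secondary language toward the hub (sketch of Eqs. (2)-(10))."""
    Xp, Zp = normalize(X), normalize(Z_l)              # Eqs. (3), (5)
    Xw = Xp @ _mat_pow(Xp.T @ Xp, -0.5)                # Eq. (4): ZCA whitening
    Zw = Zp @ _mat_pow(Zp.T @ Zp, -0.5)                # Eq. (6)
    U, S, Vt = np.linalg.svd(Xw.T @ D @ Zw)            # Eq. (7)
    Z_bi = Zw @ Vt.T                                   # Eq. (8): rotation
    Z_bi = Z_bi @ np.diag(S ** q)                      # Eq. (9): reweighting
    return Z_bi @ Vt @ _mat_pow(Zp.T @ Zp, 0.5) @ Vt.T  # Eq. (10): de-whitening
```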
"Incremental Hub Space (IHS).", "SHS enables language interactions only indirectly, through the hub language.", "Ideally, a multilingual method should incorporate interdependencies between all languages.", "We hypothesize that, especially when mapping a language distant to the hub language, it is beneficial to incorporate the structural similarities with all other languages as a regularization mechanism to find a more robust mapping.", "We therefore propose the incremental hub space (IHS) model.", "It incrementally expands the multilingual space $X^m$ and takes into account all languages in the current multilingual space when adding a new language.", "First, we define a language order and initialize the space to the preprocessed embedding space of language $L_0$.", "Next, following the order, we gradually add new languages to the space: in each iteration we rotate the preprocessed embedding space $Z''_l$ of language $l$ to the multilingual space by maximizing the dot product between embeddings of the translations between language $l$ and all the languages in the multilingual space.", "The recipe to calculate the cross-lingual embedding $Z^m_l$ is similar to the SHS model: the preprocessing and postprocessing steps are the same, but the projection matrices are calculated with Eq. (14) instead of Eq. (7), and conform with the new objective from Eq. (13).", "After convergence, $Z^m_l$ is added to the multilingual space $X^m$: $X^m_l = Z^m_l$.", "$\arg\max_{W_{x_l}, W_{z_l}} \sum_{k=0}^{l-1} \mathrm{tr}\left( X^m_k W_{x_l} (D^{k,l} Z''_l W_{z_l})^\top \right)$ (13), subject to $W_{x_l} W_{x_l}^\top = I$, $W_{z_l} W_{z_l}^\top = I$; $U_l S_l V_l^\top = \mathrm{SVD}(C)$ (14), with $C = (X^m_0)^\top D^{0,l} Z''_l \,\|\, \ldots \,\|\, (X^m_{l-1})^\top D^{l-1,l} Z''_l$, where $\|$ denotes concatenation along the row axis.", "In supervised settings this approach would be impractical, as it requires bilingual dictionaries $D^{k,l}$ for all language pairs $k, l$, and not only with the hub language.", "However, within an unsupervised framework this constraint is lifted.", 
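The IHS projection reduces to a single SVD over stacked cross-covariances, as in the sketch below; the reweighting step mirrors the SHS recipe and is our own extrapolation here, and de-whitening is omitted for brevity.

```python
import numpy as np

def ihs_rotation(X_m_list, D_list, Z_l_white, q=0.5):
    """Rotate a new language into the multilingual space (Eqs. (13)-(14)):
    stack the dictionary-weighted cross-covariances with every language
    already in the space along the row axis, then take one SVD."""
    C = np.concatenate(
        [Xk.T @ Dk @ Z_l_white for Xk, Dk in zip(X_m_list, D_list)], axis=0)
    U, S, Vt = np.linalg.svd(C, full_matrices=False)   # Eq. (14)
    return Z_l_white @ Vt.T @ np.diag(S ** q)          # rotate (+ reweight)
```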
"Tasks and Datasets.", "The induced embeddings are evaluated in three tasks: bilingual lexicon induction (BLI), multilingual dependency parsing, and multilingual document classification.", "BLI is currently the most widely used method to evaluate bilingual embedding spaces.", "Although BLI performance is not the primary goal of our multilingual embedding spaces, it provides a fast means to address the following questions:", "1) Is the incremental construction of multilingual embedding spaces indeed an effective regularization method?", "Is it still necessary to perform value dropping in this case?", "(Value dropping significantly slows down training time and leads to non-deterministic outcomes; however, it has been shown to be crucial in the bilingual setting to obtain good results when mapping distant language pairs (Artetxe et al., 2018a).)", "2) Is the reweighting of embedding spaces also beneficial for BLI in multilingual settings?", "3) Does multilingual training improve bilingual lexicon induction performance?", "How do our multilingual models compare to each other and to the state-of-the-art unsupervised BLI methods?", "We report Precision@1 (P@1) BLI performance on two standard BLI datasets.", "1) DINUARTETXE is the extended version of Dinu et al. (2015)'s dataset, used by Artetxe et al. (2018a) (https://github.com/artetxem/vecmap/).", "It consists of bilingual dictionaries for English-German, English-Italian, English-Spanish and English-Finnish.", "Monolingual embeddings are provided, based on the CBOW model trained on the WaCKy corpora for English, Italian and German (Baroni et al., 2009), the monolingual WMT Common Crawl corpus for Finnish, and the WMT News Crawl for Spanish (Bojar et al., 2015).", "The test dictionary sizes are between 1,869 and 1,993 word pairs for each language pair.", "As our methods are unsupervised, we do not use the provided training dictionaries.", "2) EURMUSEWIKI is a dataset compiled from dictionaries for all combinations of the following European languages: English, German, Spanish, French, Italian, and Portuguese.", "The test set sizes range between 1,513 and 3,660 word pairs.", "We rely on publicly available monolingual fastText embeddings (Bojanowski et al., 2016) (https://fasttext.cc/docs/en/pretrained-vectors.html).", "All monolingual word embeddings are 300-dimensional and represent the 200k most frequent words, as in prior work (Dinu et al., 2015; Conneau et al., 2018).", "Multilingual dependency parsing and multilingual document classification tasks assess the embeddings w.r.t. their actual goal: enabling transfer learning across multiple languages.", "The word embeddings are used as feature vectors for classifiers in the respective downstream tasks.", "We address the following research questions:", "4) Is reweighting of embedding spaces also beneficial in downstream tasks?", "5) How do our methods compare against supervised multilingual embedding models?", "We rely on the evaluation platform of Ammar et al. (2016) for the downstream tasks (https://github.com/wammar/multilingual-embeddings-eval-portal): users submit their multilingual embeddings and obtain the final scores, which ensures that the classifiers we use are identical to the ones used in prior work (Ammar et al., 2016; Duong et al., 2017).", "REUTERSMLDC is a multilingual document classification dataset covering seven languages: English, German, French, Italian, Spanish, Danish, and Swedish.", "The final performance is reported as the average accuracy across all languages.", "The respective training and test sets consist of 7,000 and 13,058 documents.", "The dataset is well balanced in the number of documents per language (as the dataset is not publicly available, this information was provided by the first author of Ammar et al. (2016)).", "The architecture of the document classifier is the average perceptron used by Klementiev et al. (2012).", "MLPARSING is a multilingual dependency parsing dataset sampled from the Universal Dependencies 1.1 corpus (Agić et al., 2015).", "It contains 12 languages: English, German, French, Spanish, Italian, Bulgarian, Czech, Danish, Swedish, Greek, Finnish, and Hungarian.", "The respective training and test sets contain 6,748 and 1,200 sentences.", "The test set contains 100 sentences for each language, while for the training set the number of sentences for a language ranges between 98 and 6,694.", "The parser used is the stack-LSTM parser by Dyer et al. (2015).", "The parser is not allowed to use any part-of-speech and morphology features, and keeps the input word embeddings fixed to isolate the effect of the evaluated embeddings on the parsing performance (Ammar et al., 2016).", "The reported scores are UAS scores averaged across languages.", "For comparison with related work, we train 512-dimensional monolingual embeddings on the text collections used by Ammar et al. (2016) and Duong et al. (2017).", "The monolingual embeddings are again trained using fastText.", 
(2016).", "of each language are used ( C refinement = 20 , 000 ).", "When using value dropping, the keep probability p is initialized is 0.1, N patience is set to 50, and the stochastic multiplier is set to 2.", "Dictionaries are constructed symmetrically: from hub language(s) to the secondary language and from the secondary language to the hub language(s): during refinement each dictionary consists of 2 20 , 000 translation pairs.", "We use CSLS with k = 10 nearest neighbors following the setup of Conneau et al. (2018).", "Experiment 1: Value Dropping.", "In the first experiment, we investigate if the expensive value dropping procedure is a necessary condition for mapping between distant language pairs in our multilingual framework.", "Table 1 provides results on the DINUARTETXE dataset for SHS and IHS models.", "For IHS we process the languages in the following order: English, German, Italian, Spanish, Finnish.", "When using value dropping we report the average and best results across five runs.", "We observe that value dropping is crucial for SHS to succeed for English-Finnish.", "However, it is not necessary for IHS.", "This supports our hypothesis that mapping a language to a multilingual hub space serves a type of regularization that can substitute value dropping.", "As validated later, it is still important to avoid adding distant languages early with the incremental IHS procedure.", "13 , 14 Experiment 2: Comparative BLI Performance and Reweighting.", "In this experiment, we test if reweighting embedding spaces is beneficial for BLI in our multilingual setup, and also compare our 13 For instance, when using IHS with a language order that starts with English and Finnish, value dropping still prevents bad performance for Finnish.", "However, this is not a problem in practice as the language order can be easily predetermined according to various language similarity heuristics.", "14 We further validated the robustness of our approach on other distant language pairs but moved this experiment to the appendix due to space constraints.", "methods against state-of-the-art BLI methods.", "Table 2 and Table 3 show the results for SHS and IHS with reweighting coefficients q of 0, 0.5 and 1 on DINUARTETXE and EURMUSEWIKI , respectively.", "We also include the state-of-the-art results of Artetxe et al. (2018a) and Chen and Cardie (2018) for reference.", "The EURMUSEWIKI benchmark evaluates BLI performance on all language pair combinations of its six languages and does this in both directions (EN-DE, DE-EN, EN-ES, ... 
"Experiment 2: Comparative BLI Performance and Reweighting.", "In this experiment, we test if reweighting embedding spaces is beneficial for BLI in our multilingual setup, and also compare our methods against state-of-the-art BLI methods.", "Table 2 and Table 3 show the results for SHS and IHS with reweighting coefficients $q$ of 0, 0.5 and 1 on DINUARTETXE and EURMUSEWIKI, respectively.", "We also include the state-of-the-art results of Artetxe et al. (2018a) and Chen and Cardie (2018) for reference.", "The EURMUSEWIKI benchmark evaluates BLI performance on all language pair combinations of its six languages and does this in both directions (EN-DE, DE-EN, EN-ES, ..., IT-PT, PT-IT), yielding 28 P@1 scores per model.", "For clarity, we report the average P@1 scores per language as well as the global P@1 average.", "Following Experiment 1, all results for SHS are obtained using value dropping (again averaged across 5 different runs), while we do not use it with IHS.", "The SHS hub language is English, and the language orders for IHS are EN, DE, IT, ES, FI for DINUARTETXE, and EN, DE, ES, FR, IT, PT for EURMUSEWIKI.", "The scores reveal that reweighting the embedding spaces is indeed still beneficial for BLI when mapping to a multilingual space.", "Both SHS and IHS obtain best results with the reweighting coefficient $q = 0.5$.", "When comparing SHS and IHS, we see that for language pairs involving English (the SHS hub language) SHS obtains slightly better results, but for the other language pairs IHS outperforms SHS slightly.", "This is no surprise, as IHS by design incorporates dependencies between all languages when learning the projection matrices, though it is striking that mapping to a single hub language is still a strong BLI baseline.", "For both datasets IHS obtains BLI performance on par with the state of the art: on DINUARTETXE, SHS and IHS ($q = 0.5$) obtain scores similar to Artetxe et al. (2018a); on EURMUSEWIKI, IHS ($q = 0.5$) slightly outperforms Chen and Cardie (2018) for all languages except Spanish.", "Although optimizing BLI performance is not the main goal of this work, these results verify the soundness of our methods.", "Experiment 3: Language Order.", "Next, we investigate 1) the influence of the hub language choice for SHS, and 2) the impact of the language order for IHS.", "We run both SHS and IHS with reweighting 0.5 on DINUARTETXE.", "SHS is run with value dropping (results are again averaged over 5 runs), and for IHS we do not use value dropping.", "We find that the SHS model is sensitive to the hub language: the best average scores are obtained when using English (41.6%) or German (41.0%).", "With Italian, the average score drops to 40.4%, mainly due to worse performance on English and German.", "With Spanish, the average score further drops to 31.6%, as Spanish and Finnish completely fail to align even when using value dropping.", "With Finnish, EN-ES alignment becomes unstable, while P@1 for EN-DE and EN-IT drops by 5.5% and 3.7% compared to the case with English as the hub.", "These results indicate that the hub language has to be chosen carefully to avoid instability issues.", "For IHS, we evaluate all 120 order permutations on DINUARTETXE.", "The best performing order (EN-DE-ES-FI-IT) achieves an average accuracy of 41.96%; see the last row in Table 2.", "The full results, not reported here due to space constraints but available in Appendix A.2, confirm our hypothesis that distant languages (Finnish) should be mapped at the end: when using Finnish as one of the first two languages, performance drops significantly.", "EN-FI scores drop below 1% and the results for all other language pairs are also suboptimal.", "Experiment 4: Downstream Tasks.", "In this experiment, we analyze the effect of the induced multilingual embeddings used as input word embeddings on downstream model performance, and compare SHS and IHS to several supervised methods that use cross-lingual supervision.", "Table 4 reports the results for SHS and IHS with $q$ set to 0 and 0.5 on the REUTERSMLDC and MLPARSING benchmarks, along with the results from related work.", "Since the languages covered in MLPARSING are a superset of the languages in REUTERSMLDC, we use the same multilingual embedding space for both tasks.", "For SHS the hub language is English, and for IHS the language order is English, German, Spanish, Italian, French, Bulgarian, Czech, Danish, Finnish, Greek, Hungarian, and Swedish.", 
Swedish.", "We again use SHS with value dropping and IHS without it.", "The results in Table 4 are comparable: all methods were trained on the same text corpora (i.e., the collections of Ammar et al. (2016)), but our methods do not use parallel corpora nor bilingual dictionaries.", "A first interesting result is that, contrary to the BLI task, reweighting the embeddings is not beneficial for multilingual dependency parsing and document classification.", "This can be explained by the fact that the reweighted embedding spaces are no longer isomorphic to the original monolingual embedding spaces, hence important patterns in the embedding space could be distorted.", "Further, we notice that both SHS and IHS improve over the best reported results on the REUTERSMLDC and MLPARSING benchmarks.", "This result is surprising given that all the reported baselines require supervision to induce the multilingual embedding spaces.", "Further, we again find that the best results are obtained with IHS, most notably for dependency parsing for which the difference in UAS scores between 16 Since the languages covered in MLPARSING is a superset of the languages in REUTERSMLDC, we use the same multilingual embedding space for both tasks.", "Experiment 5: Time Complexity.", "In this experiment, we study how training time of SHS and IHS behaves as a function of the number of languages that are mapped.", "The singular value decompositions are more expensive for IHS as the matrices grow linearly with the number of languages (see Eq.", "(14)) whereas for SHS they are constant.", "On the other hand, IHS does not require the use of value dropping.", "Figure 1 plots training time for IHS and SHS conditioned on the number of languages.", "The estimates are based on training on a single Nvidia Titan Xp GPU.", "We find that SHS with value dropping and IHS without value dropping have similar training times, IHS being a bit more efficient when mapping 7 languages or less.", "Mapping 12 languages takes approximately four hours for both methods.", "We proposed two novel methods for learning multilingual word embeddings (MWEs) without any cross-lingual supervision.", "The better-performing incremental hub space model (IHS) is the first unsupervised MWE method that combines three desirable properties:", "1) It incorporates interdependencies between all targeted languages;", "2) It works for distant language pairs; and", "3) It is both deterministic and robust, that is, it does not produce degenerate solutions.", "Our evaluation on standard benchmarks has proven that the IHS method induces multilingual word embeddings that are competitive with the state of the art in bilingual lexicon induction.", "Moreover, we have shown that IHS outperforms even supervised models on downstream tasks of multilingual dependency parsing and document classification, and this anomaly requires further investigation in future work.", "Furthermore, we looked at the influence of reweighting the dimensions of the embedding spaces according to their cross-correlations with the hub language space(s) and found that, while it improves performance for the BLI task, it is harmful to downstream crosslingual transfer tasks.", "These empirical observations stress the requirement to include comprehensive evaluation protocols for cross-lingual word embedding models in future research.", "We thank the reviewers for their insightful comments.", "IV would like to thank Goran Glavas and Anders Sgaard for interesting discussions, and the ERC Consolidator Grant LEXICAL 
(no. 648909) for the support.", "GH, BV and MFM would like to thank Stijn Jaques, Evelyn Reynders and Evelien Verbaenen for the fruitful collaboration, and Flanders Innovation & Entrepreneurship (VLAIO) for funding TELLMI within the ITEA 3 project PAPUD." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "method", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "other", "method", "other", "other", "other", "other", "other", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other" ]
[ "Standard autoregressive language models perform only polynomial-time computation to compute the probability of the next symbol. While this is attractive, it means they cannot model distributions whose next-symbol probability is hard to compute.", "Indeed, they cannot even model them well enough to solve associated easy decision problems for which an engineer might want to consult a language model.", "These limitations apply no matter how much computation and data are used to train the model, unless the model is given access to oracle parameters that grow superpolynomially in sequence length.", "Thus, simply training larger autoregressive language models is not a panacea for NLP.", "Alternatives include energy-based models (which give up ecient sampling) and latent-variable autoregressive models (which give up ecient scoring of a given string).", "Both are powerful enough to escape the above limitations.", "Sequence modeling is a core NLP problem.", "Many sequence models are ecient at scoring strings : given a string x , its score ( x ) can be computed in ( poly (| x |)) .", "For example, an RNN (Mikolov et al., 2011) scores x in time (| x |) while a Transformer (Vaswani et al., 2017) does so in time (| x | 2 ) .", "The score may be an unnormalized probability, and can be used to rank candidate strings.", "Many sequence models also make it easy to compute marginal properties of .", "They support ecient sampling of strings x (which allows unbiased approximation of marginal expectations).", "And they support ecient computation of the normalizing constant = (cid:80) x ( x ) (or simply guarantee = 1) for any value of the model parameters.", "How about training?", "Briefly: If a sequence model can eciently compute ( x ) (and its derivatives Part of this work was done at Facebook AI.", "with respect to model parameters), then it is ecient to compute parameter updates for noise-contrastive estimation (Gutmann and Hyvrinen, 2010; Gutmann and Hyvrinen, 2012) or score-matching (Hyvrinen, 2005).", "If sampling x or computing (and its derivatives) is also ecient, then it is ecient to compute parameter updates for ordinary MLE training.", "Finally, popular sequence models are compact .", "Usually a fixed-size model is used to score strings x of all lengths.", "More generally, it might be reasonable to use an ( poly ( )) -sized parameter vector when x has length , at least if parameter vectors can be obtained (perhaps from an oracle) for all needed lengths.", "In this paper, we investigate what can and cannot be achieved with models that are compact in this sense.", "This setup allows us to discuss the asymptotic behavior of model families.", "to compute from a fixed parameter vector.", "These models satisfy all three of the desiderata above.", "By using flexible neural network architectures, standard autoregressive models have achieved stellar empirical results in many applications (Oord et al., 2016; Child et al., 2019; Zellers et al., 2019; Brown et al., 2020).", "However there are still tasks that they have not mastered: e.g. 
, it is reported that they struggle at deep logical structure, even when initialized to huge pretrained models (Wang et al., 2019a).", "We point out that, unfortunately, there are certain sequence distributions whose unnormalized string probabilities $\tilde{p}(x)$ are easy to compute individually, yet whose autoregressive factors $p(x_t \mid x_{<t})$ are NP-hard to compute or even approximate, or are even uncomputable.", "Thus, standard autoregressive models are misspecified for these distributions (cannot fit them).", "It does not help much to focus on strings of bounded length, or to enlarge the model: under the common complexity-theoretic assumption $\mathrm{NP} \not\subseteq \mathrm{P/poly}$, the parameter size $|\theta_n|$ must grow superpolynomially in $n$ to efficiently approximate the probabilities of all strings of length up to $n$.", "Indeed, one of our main findings is that there exist unweighted languages $L \in \mathrm{P}$ for which no standard autoregressive model has $L$ as its support, i.e., assigns weight $> 0$ to just the strings $x \in L$.", "This is downright depressing, considering the costs invested in training huge parametric autoregressive models (Bender et al., 2021).", "Since $L \in \mathrm{P}$, it is trivial to build an efficient scoring function $\tilde{p}(x)$ with fixed parameters that has $L$ as its support, just not an autoregressive one.", "The problem holds for all standard autoregressive models, regardless of how much computation and training data are used to learn the model parameters.", "That is, for an NP-hard problem, scoring a string $x$ under a standard autoregressive model $p(x)$ cannot be used to verify a witness.", "Nor can finding a witness be solved by prompting such a model with a description of a problem instance and sampling a continuation $x$ of that string.", "Such problems are abundant in NLP: for example, surface realization under Optimality Theory (Idsardi, 2006), decoding text from an AMR parse (Cai and Knight, 2013), phrase alignment between two sentences (DeNero and Klein, 2008), and in general inference for propositional logic (Cook, 1971), which underlies the NP-hardness of general natural language inference, as in Figure 1.", "In other words, our results imply that standard autoregressive models do not have the right structure to capture important linguistic regularities: e.g. 
, that observed sequences were in fact constructed to be phonologically optimal, expressive of a semantic form, or logically coherent!", "Our work is also relevant to autoregressive models of fixed-dimensional vectors, such as NADE (Uria et al., 2016).", "These can be extended to arbitrary-dimensional vectors by providing separate parameters for each $n$.", "Our constructions imply that for some distributions, $|\theta_n|$ must grow superpolynomially in $n$, even though this would not be necessary if the models were not autoregressive.", "In the remainder of this paper, we formalize our three desiderata for sequence models.", "We formalize compact autoregressive models and describe some limitations on their expressiveness.", "We then show that it can help to choose an alternative model family that relaxes any one of the three desiderata (Table 1).", "An unweighted language $L$ is a set of strings $x$ over a finite alphabet $\Sigma$.", "A weighted language $\tilde{p}$ is a function $\tilde{p} : \Sigma^* \to \mathbb{R}_{\ge 0}$.", "It may be regarded as specifying an unweighted language $L = \mathrm{support}(\tilde{p}) \triangleq \{x : \tilde{p}(x) \ne 0\}$ along with positive weights for the strings in $L$.", "We say that a weighted language $\tilde{p}$ is normalizable if its global normalizing constant $Z \triangleq \sum_x \tilde{p}(x)$ is finite and strictly positive.", "When $\tilde{p}$ is normalizable, $p(x) \triangleq \tilde{p}(x)/Z$ is a probability distribution over $L$.", "A distribution is any weighted language whose global normalizing constant is 1.", "Let $x \preceq x'$ mean that $x$ is a prefix of $x'$ (not necessarily a strict prefix).", "If $\tilde{p}$ is normalizable, then $Z(x) \triangleq \sum_{x' : x \preceq x'} \tilde{p}(x')$ is finite for any $x$, yielding a marginal prefix probability $Z(x)/Z$.", "If the prefix $x$ has positive prefix probability, then it admits a local conditional probability $p(a \mid x) \triangleq Z(xa)/Z(x)$ for each symbol $a \in \Sigma$, where the denominator is interpreted as a local normalizing constant.", "This is the conditional probability that if a random string starts with the prefix $x$, the next symbol is $a$.", "There is also a probability $p(\$ \mid x) \triangleq 1 - \sum_{a \in \Sigma} p(a \mid x) = \tilde{p}(x)/Z(x) \ge 0$ that the string ends immediately after $x$; the special symbol $\$$ represents end of string. 2.2 Computation for weighted languages We define a weighted language $\tilde{p}$ to be computable if it is defined by a Turing machine (also called $\tilde{p}$) that maps any $x \in \Sigma^*$ to $\tilde{p}(x) \in \mathbb{Q}_{\ge 0}$ in finite time.", "The Turing machine does not have to compute $Z$.", "While the computable weighted languages allow any computable function as $\tilde{p}$, most architectures for defining weighted languages (e.g. 
, RNNs or Transformers) do only a bounded or linear amount of work per input symbol.", "As a result, they compute $\tilde{p}(x)$ in time $O(\mathrm{poly}(|x|))$ (that is, $\tilde{p} \in \mathrm{FP}$).", "We refer to such weighted languages as efficiently computable (EC).", "This does not imply that the normalized version $p$ is efficiently computable, since finding the denominator $Z$ requires summing over all of $\Sigma^*$.", "If we tried to construct the same normalized distribution as in the previous paragraph using a standard autoregressive model, we would model it as a product of local conditional probabilities, $p(x) = \left( \prod_{t=1}^{|x|} p(x_t \mid x_{<t}) \right) \cdot p(\$ \mid x)$.", "Most such architectures again do only a bounded or linear amount of work per input symbol.", "Yet one suspects that this may not always be enough work to do the job: the local conditional probabilities of the original $\tilde{p}$ are expensive to compute (unless $\tilde{p}$ has some special structure making $Z(x)$ tractable).", "Indeed, the observation of this paper is that for some efficiently computable weighted languages $\tilde{p}$, the local conditional probabilities are expensive to compute or even to approximate well.", "More precisely, autoregressive models cannot fit the local conditional probabilities unless they are superpolynomial in either their runtime or in their number of parameters (where the parameters may be precomputed at training time).", "We now explain how to formalize these notions.", "In the machine learning approach to sequence modeling, we usually do not manually design the Turing machine behind $\tilde{p}$.", "Rather, we design a model $M$ with parameters $\theta$.", "$M$ is a Turing machine that reads $\theta$ and outputs a specialized Turing machine $q \triangleq M(\theta)$ that can score strings $x$ and hence defines a weighted language.", "Without loss of generality, we will express $\theta$ as a string in $\mathbb{B}^*$ (where $\mathbb{B} \triangleq \{0, 1\}$).", "Strings vary in length, and accurate modeling of longer strings may sometimes require more complex computations with more parameters.", "For example, when $\Sigma$ is a natural language alphabet, a recurrent neural network may require more hidden units to model sentences of the language rather than individual words, and even more units to model whole documents.", "To accommodate this, we allow an infinite sequence of parameter vectors, $\Theta = \{\theta_n \in \mathbb{B}^* \mid n \in \mathbb{N}\}$, which yields an infinite sequence of Turing machines $\{q_n \mid n \in \mathbb{N}\}$ via $q_n \triangleq M(\theta_n)$.", "We then define $\tilde{p}(x) \triangleq q_{|x|}(x)$, so a string of length $n$ is scored by the machine $q_n$.", "This is known as non-uniform computation.", "Of course, it is legal (and common) for all of the $\theta_n$ to be equal, or empty, but if desired, we can obtain more power by allowing the number of parameters to grow with $n$ if needed.", "We can now consider how rapidly the parametric and runtime complexity may grow.", "If $|\theta_n|$ is permitted to grow exponentially, then one can fit any weighted language, even an uncomputable one (see our remark on computability in Appendix A).", "Simply use $\theta_n$ to encode a trie with $O((|\Sigma| + 1)^n)$ nodes that maps $x \mapsto \tilde{p}(x)$ for any $x$ of length $\le n$, and design $M$ such that the Turing machine $q_n = M(\theta_n)$ has a (large) state transition table that mirrors the structure of this trie.", "The resulting collection of Turing machines $\{q_n \mid n \in \mathbb{N}\}$ can then compute $\tilde{p}(x)$ exactly for any $x$, with only linear runtime $O(|x|)$ (which is used to traverse the trie).", "Separately, if unbounded runtime is permitted for $M$, then one can exactly fit any computable weighted language $\tilde{p}$.", "Simply have $M$, when run on $\theta_n$, compute and return the large trie-structured $q_n$ that was mentioned above.", "In this case, $M$ need not even use the parameters $\theta_n$, except to determine $n$.", "Finally, if unbounded runtime is permitted for $q_n$, then again one can exactly fit any computable weighted language $\tilde{p}$.", "In this case, $M$ trivially returns $q_n = \tilde{p}$ for all $n$.", 
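To make the contrast concrete: scoring is one evaluation of the weighted language, but each local conditional probability is a ratio of prefix weights Z(xa)/Z(x), each of which marginalizes over all continuations. The toy sketch below is our own illustration, with an artificially truncated sum so that it terminates; in general this marginalization is the expensive part.

```python
from itertools import product

SIGMA = "ab"

def score(x):
    """A toy EC weighted language: scoring any string is O(1)."""
    return 0.5 ** (len(x) + 1)

def Z(prefix, max_len=12):
    """Prefix weight Z(prefix) = sum of score(x') over x' extending prefix.
    Truncated at max_len for illustration only."""
    total = 0.0
    for n in range(max_len - len(prefix) + 1):
        for suffix in product(SIGMA, repeat=n):
            total += score(prefix + "".join(suffix))
    return total

def local_conditional(a, prefix):
    """p(a | prefix) = Z(prefix + a) / Z(prefix)."""
    return Z(prefix + a) / Z(prefix)

print(local_conditional("a", "ab"))
```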
"However, if the parameters are compact in the sense that $|\theta_n|$ grows only as $O(\mathrm{poly}(n))$, and also $q_n = M(\theta_n)$ is constructed by $M$ in time $O(\mathrm{poly}(n))$, and $q_n$ scores any $x$ of length $\le n$ in time $O(\mathrm{poly}(n))$, then we say that the resulting weighted language is efficiently computable with compact parameters (ECCP).", "Since we require $M$ to run in polytime, it can only look at a polynomial-sized portion of $\theta_n$, so it is not strictly necessary to also require the parameters to be compact, but we nonetheless include this intuitive condition, without loss of generality.", "We refer to such a pair $(M, \Theta)$ with compact values for $\Theta$ as an ECCP model.", "Neural models of weighted languages are typically ECCP models.", "The construction and execution of the neural network may perform a polynomial amount of total computation to score the string $x$.", "This computation may involve parameters that were precomputed using any amount of effort (e.g., training on data) or even obtained from an oracle (they need not be computable).", "However, the exponentially many strings of length $n$ must share a polynomial-size parameter vector $\theta_n$, which prevents the solution given in the first bullet point above.", "In practice one takes $\theta_n = \theta$ for all $n$ and obtains $\theta$ by training.", "However, we do not consider whether such parameters are easy to estimate or even computable.", "We simply ask, for a given target language $\tilde{p}$, whether there exists a polynomially growing sequence of good parameter vectors for any parametric model $M$.", "When not, there can be no scheme for estimating arbitrarily long finite prefixes of such a sequence.", "So for any polynomial $f$, any training scheme that purports to return a trained model of size $f(n)$ that works well for strings of length $n$ must fail for large enough $n$, even if unlimited data, computation, and oracles are allowed at training time.", "The phrase 'efficiently computable with compact parameters' means that without access to those parameters, the ECCP weighted language may no longer be efficiently computable.", "Indeed, it need not be computable at all, if the parameter vectors store the outputs of some uncomputable function.", "Our definitions above of EC and ECCP weighted languages are weighted generalizations of the complexity classes P and P/poly, respectively (namely, the nonnegative functions in FP and FP/poly), and their supports are always unweighted languages in P and P/poly, respectively.", "An unweighted language $L$ is in P iff there is a deterministic Turing machine that decides in $O(\mathrm{poly}(|x|))$ time whether $x \in L$.", "And an unweighted language $L'$ is in P/poly iff there exist Turing machines $\{M_n : n \in \mathbb{N}\}$ such that $M_n$ decides in $O(\mathrm{poly}(n))$ time whether $x$ of length $n$ is in $L'$, where each $M_n$ can be constructed in $O(\mathrm{poly}(n))$ time as $M(\theta_n)$, for some Turing machine $M$ and some sequence of polynomially-sized advice strings $\Theta = \{\theta_n \mid n \in \mathbb{N}\}$ with $|\theta_n| \in O(\mathrm{poly}(n))$.", "Our presentation of P/poly is a variant of Arora and Barak (2009, §6), in which inputs $x$ of length $n$ are evaluated by a polytime function that is given an advice string as an auxiliary argument.", "This corresponds to a neural architecture that can consult trained parameters at runtime.", "We have replaced the standard call $M(\theta_n, x)$ with the curried expression $M(\theta_n)(x)$, which we still require to execute in polynomial total time.", "Here the intermediate result $M_n = M(\theta_n)$ corresponds to the trained runtime model for inputs of length $n$.", "Our Turing machines $M_n$ have size polynomial in $n$ (because they are constructed by $M$ in polynomial time).", "They correspond to the polynomial-sized boolean circuits that are used to evaluate inputs of length $n$ under the classical definition of P/poly (Ladner, 1975).", "We exposed these intermediate results only to observe in §2.3 and §4.3 that if we had allowed the $\theta_n$ to grow exponentially, they would have been able to encode the answers in tries.", "We define the language class NP/poly similarly to P/poly: the only difference is that the family $\{M_n : n \in \mathbb{N}\}$ consists of nondeterministic Turing machines $M_n$.", 
"Naturally, $\mathrm{P} \subseteq \mathrm{P/poly}$.", "But P/poly is larger than P: it contains all sparse languages, regardless of their hardness (even sparse undecidable languages), as well as many dense languages.", "The extra power of P/poly comes from its access to compact advice strings that do not have to be recursively enumerable, let alone efficient to find.", "This corresponds to statistical modeling, where the trained model has a computationally efficient architecture plus access to parameters that might have taken a long time to find.", "NP-complete decision problems have solutions that are efficient to validate but inefficient to find (assuming $\mathrm{P} \ne \mathrm{NP}$).", "One of the most well-known NP-complete problems is the boolean satisfiability problem (Sat) (Cook, 1971).", "Given a boolean formula $\phi$, Sat accepts $\phi$ iff $\phi$ can be satisfied by some value assignment.", "For example, the formula $(x_1 \lor \neg x_2 \lor x_3) \land (\neg x_1 \lor x_4)$ is in Sat, since there is a satisfying assignment $x_1 \cdots x_4 = 1101$.", "We denote the number of satisfying assignments to $\phi$ as $\#(\phi)$.", "It is widely believed that no NP-complete languages are in P/poly.", "Otherwise we would have all of $\mathrm{NP} \subseteq \mathrm{P/poly}$ and the polynomial hierarchy would collapse at the second level (Karp and Lipton, 1980).", "A capacity limitation of EC/ECCP weighted languages naturally follows from this belief (all omitted proofs are in Appendix A): Lemma 1. For any $L \in \mathrm{P}$, there exists an EC weighted language $\tilde{p}$ with support $L$.", "For any $L \in \mathrm{P/poly}$, there exists an ECCP language with support $L$.", "But for any NP-complete $L$, there exists no ECCP language with support $L$ (assuming $\mathrm{NP} \not\subseteq \mathrm{P/poly}$).", 
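As a reminder of why Sat is in NP: verifying a proposed witness is easy even though finding one is believed hard. A small Python check, using the example formula above with our own list-of-integers clause encoding:

```python
def satisfies(clauses, a):
    """Check a CNF formula against an assignment bitstring a.
    Each clause is a list of literals: +i for x_i, -i for (not x_i).
    Verification runs in time linear in the formula size."""
    def lit_true(lit):
        value = a[abs(lit) - 1] == "1"
        return value if lit > 0 else not value
    return all(any(lit_true(lit) for lit in clause) for clause in clauses)

# (x1 v ~x2 v x3) ^ (~x1 v x4) with assignment 1101
print(satisfies([[1, -2, 3], [-1, 4]], "1101"))  # True
```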
"In addition to not capturing the support of NP-complete languages, ECCP weighted languages cannot help solve other NP-hard problems, either.", "For example, many structured prediction problems in NLP can be formulated as $\operatorname{argmax}_{x' : x \preceq x'} \tilde{p}(x')$: we are given a prefix $x$ as input and look for its optimal continuation under $\tilde{p}$.", "But if this problem is NP-hard for a particular $\tilde{p}$, then it is not in P/poly (assuming $\mathrm{NP} \not\subseteq \mathrm{P/poly}$), so it cannot be accomplished by any polytime algorithm that queries an ECCP model.", "In this section we formally define autoregressive ECCP models, and prove that they have strictly less capacity than general ECCP models or even just EC models.", "Our proofs rely on the construction of an EC model where computing the local conditional probabilities $p(a \mid x)$ is NP-hard, so they cannot be computed with compact parameters, if $\mathrm{NP} \not\subseteq \mathrm{P/poly}$.", "Many parameter estimation techniques and inference methods specifically work with local conditional probabilities $p(a \mid x)$.", "Thus, it is common to use parametric models where such quantities can be computed in time $O(\mathrm{poly}(|x|))$ (given the parameters).", "These are the standard autoregressive models we discussed in §1.", "An autoregressive model architecture generally defines $\tilde{p}(x)$ as an efficiently computable (§2.2) product of local conditional probabilities; however, the parametrization usually ensures only that $\sum_a \tilde{p}(a \mid x) = 1$ for all prefixes $x$.", "Some parameter settings may give rise to inconsistent distributions where $Z \triangleq \sum_x \tilde{p}(x) < 1$ because the generative process terminates with probability $< 1$ (Chen et al., 2018).", "In this case, the factors $\tilde{p}(a \mid x)$ defined by the autoregressive model are not actually the conditional probabilities of the weighted language (as defined by §2.1).", "It is true that training with a likelihood objective does encourage finding a weighted language whose generative process always terminates (hence $Z = 1$), since this is the behavior observed in the training corpus (Chi and Geman, 1998; Chen et al., 2018; Welleck et al., 2020).", "Our definitions of ELN(CP) models require the actual conditional probabilities to be efficiently computable.", "Autoregressive models that do not sum to 1, whose normalized probabilities can be uncomputable, are not ruled out by our theorems that concern ELN(CP).", "We say that the resulting distributions are efficiently locally normalizable, or ELN.", "We may again generalize ELNs to allow the use of compact parameters.", "For any weighted language $\tilde{p}$, the Turing machine $M^q$ efficiently locally normalizes $\tilde{p}$ with compact parameters $\Theta^q = \{\theta^q_n \mid n \in \mathbb{N}\}$ if: the parameter size $|\theta^q_n|$ grows only as $O(\mathrm{poly}(n))$; $M^q(\theta^q_n)$ returns a Turing machine $q_n$ (similar to §2.3) in time $O(\mathrm{poly}(n))$; $\tilde{p}$ is normalizable (so $p$ exists); $q_n$ maps $(x, a) \mapsto p(a \mid x)$ for all $a \in \Sigma \cup \{\$\}$ and all prefixes $x$ with $|x| \le n$ and $Z(x) > 0$; and $q_n$ runs on those inputs in time $O(\mathrm{poly}(n))$.", "If there is $M^q$ that efficiently locally normalizes a weighted language $\tilde{p}$ with compact parameters $\Theta^q$, we say $\tilde{p}$ is efficiently locally normalizable with compact parameters, or ELNCP.", "Note that this is a property of the weighted language itself.", 
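The easy direction of this definition is that local conditional probabilities immediately give efficient scoring. A minimal sketch follows; the uniform `cond_prob` is a stand-in for any ELN model's conditional distribution, not a model from the paper.

```python
def eln_score(x, cond_prob):
    """p(x) = (prod_t p(x_t | x_<t)) * p($ | x): scoring is easy when the
    local conditional distributions are cheap to evaluate."""
    prob = 1.0
    for t in range(len(x)):
        prob *= cond_prob(x[:t])[x[t]]
    return prob * cond_prob(x)["$"]

def uniform_cond(prefix, alphabet="ab"):
    """Stand-in ELN model: uniform over the alphabet plus end-of-string."""
    outcomes = list(alphabet) + ["$"]
    return {o: 1.0 / len(outcomes) for o in outcomes}

print(eln_score("ab", uniform_cond))  # (1/3)**3
```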
"In this case, it is obvious that $\tilde{p}$ is ECCP: Lemma 2. An ELNCP model is also ECCP.", "Likewise, an ELN model is also EC.", "If we define ELNCP models analogously to ECCP models, Lemma 2 means that locally normalized models do not provide any extra power.", "Their distributions can always be captured by globally normalized models (of an appropriate architecture that we used in the proof).", "But we will see in Theorem 1 that the converse is likely not true: provided that $\mathrm{NP} \not\subseteq \mathrm{P/poly}$, there are efficiently computable weighted languages that cannot be efficiently locally normalized, even with the help of compact parameters.", "That is, they are EC (hence ECCP), yet they are not ELNCP (hence not ELN).", "We reduce Sat to computing certain local conditional probabilities of $p$ (as defined in §2.1).", "Each decision Sat($\phi$) (where $\phi$ ranges over formulas) corresponds to a particular local conditional probability, implying that there is no polytime scheme for computing all of these probabilities, even with polynomially sized advice strings (i.e., parameters).", "Without loss of generality, we consider only formulae $\phi$ such that the set of variables mentioned at least once in $\phi$ is $\{x_1, \ldots, x_k\}$ for some $k \in \mathbb{N}$; we use $|\phi|$ to denote the number of variables in $\phi$.", "We say that $a$ satisfies $\phi$ if $a \in \mathbb{B}^{|\phi|}$ and $(x_1 = a_1, \ldots, x_{|\phi|} = a_{|\phi|})$ is a satisfying assignment.", "Finally, let boldface $\bar{\phi} \in \mathbb{B}^*$ denote $\mathrm{enc}(\phi)$, where enc is a prefix-free encoding function.", "We can now define the unweighted language $L = \{\bar{\phi} a \mid \phi$ is a formula and $a \in \mathbb{B}^{|\phi|}$ and $a$ satisfies $\phi\}$ over alphabet $\mathbb{B}$, which contains each possible Sat problem concatenated to each of its solutions.", "For example, $L$ contains the string $\bar{\phi} a$ where $\bar{\phi} = \mathrm{enc}((x_1 \lor \neg x_2 \lor x_3) \land (\neg x_1 \lor x_4))$ and $a = 1101$.", "We now convert $L$ to a weighted language $\tilde{p}$, defined by $\tilde{p}(x) = \tilde{p}(\bar{\phi} a) = (\frac{1}{3})^{|x|+1}$ for $x \in L$ (otherwise $\tilde{p}(x) = 0$).", "$\tilde{p}$ is normalizable since $Z$ is both finite ($Z = \sum_{x \in \mathbb{B}^*} \tilde{p}(x) \le \sum_{x \in \mathbb{B}^*} (\frac{1}{3})^{|x|+1} = 1$) and positive ($Z > 0$ because the example string above has weight $> 0$).", "This weighted language $\tilde{p}$ is efficiently computable, and so is $p = \tilde{p}/Z$ (almost: $Z$ could be irrational, but at least it is computable to any desired precision; for any rational $c$, we can say $\tilde{p}/c$ is EC, via a Turing machine that stores $c$; further remarks on irrationality appear in Appendix A).", 
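A sketch of this construction's scorer, to emphasize that the weighted language itself is easy to evaluate. We substitute a human-readable string encoding for the paper's prefix-free binary enc; the "#" separator and the clause format are our own assumptions for illustration.

```python
def satisfies(clauses, a):
    """CNF check as in the earlier sketch: +i means x_i, -i means (not x_i)."""
    def lit_true(lit):
        value = a[abs(lit) - 1] == "1"
        return value if lit > 0 else not value
    return all(any(lit_true(lit) for lit in clause) for clause in clauses)

def p_tilde(x):
    """p~(x) = (1/3)^(|x|+1) if x encodes a formula followed by a satisfying
    assignment, else 0. Scoring is polynomial; it is the local conditional
    p(1 | enc(phi)), which would answer Sat, that is NP-hard."""
    try:
        phi, a = x.split("#")
        clauses = [[int(lit) for lit in c.split(",")] for c in phi.split("|")]
    except ValueError:
        return 0.0
    n_vars = max(abs(lit) for c in clauses for lit in c)
    if len(a) != n_vars or set(a) - set("01") or not satisfies(clauses, a):
        return 0.0
    return (1.0 / 3.0) ** (len(x) + 1)

print(p_tilde("1,-2,3|-1,4#1101"))  # positive: 1101 satisfies the formula
```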
Assuming NP (cid:42) P / poly , there exists an eciently computable normalizable weighted language that is not ELNCP.", "Proof sketch.", "Take to be the weighted language we defined earlier in this section.", "is clearly eciently computable.", "We will show that if it is ELNCP via ( q , q ) , then the NP-complete problem Sat is in P / poly, contradicting the assumption.", "We must give a method for using ( q , q ) to decide Sat in polytime and with compact parameters .", "Given , our method constructs a simple related formula (cid:48) such that (cid:48) has at least one satisfying assignment (so ( (cid:48) ) > 0 and thus ( 1 | (cid:48) ) is defined) (cid:48) has satisfying assignments with 1 = 1 ( i.e. , ( 1 | (cid:48) ) > 0) if and only if is satisfiable Our construction also provides a polynomial function such that | (cid:48) | is guaranteed to be (| |) .", "We now define by = q ( ) ( ) .", "When our Sat algorithm with compact parameters is given of length , it can use the polynomial-size advice string to ask ( q , q ) in polynomial time for ( 1 | (cid:48) ) .", "Sat ( ) returns true i that probability is > 0. 10 (cid:3) 3.3 ELNCP models cannot even capture all EC (or ECCP) supports or rankings We can strengthen Theorem 1 as follows: Theorem 2. Assuming NP (cid:42) P / poly , there exists an eciently computable normalizable weighted 9 Almost.", "This couldbeirrational,butatleastitiscomputable to any desired precision.", "For any rational , we can say = / is EC, via a Turing machine that stores .", "Further remarks on irrationality appear in Appendix A. 10 See also the remark on implications for seq2seq models following the proof in Appendix A. language where there is no ELNCP such that support ( ) = support ( ) .", "Proof.", "Observe that for any two weighted languages and with the same support, x , ( x ) > 0 ( x ) > 0 (where and return the prefix probabilities of and respectively).", "Thus, for any x with ( x ) > 0, ( 1 | x ) (cid:44) ( x 1 )/ ( x ) and ( 1 | x ) (cid:44) ( x 1 )/ ( x ) are well-defined and ( 1 | x ) > 0 ( 1 | x ) > 0. If is ELNCP, then all such probabilities ( 1 | x ) can be computed in polytime with compact parameters, so it is likewise ecient to determine whether ( 1 | x ) > 0. But this cannot be the case when is the weighted language used in the proof of Theorem 1, since that would suce to establish that Sat P / poly, following the proof of that theorem.", "(cid:3)", "To put this another way, there exists an unweighted language in P (namely support ( ) ) that is not the support of any ELNCP distribution.", "If they have dierent support, normalizable languages also dier in their ranking of strings: Lemma 3. Let , be normalizable weighted languages with support ( ) support ( ) .", "Then x 1 , x 2 such that ( x 1 ) < ( x 2 ) but ( x 1 ) ( x 2 ) .", "Therefore, no ELNCP captures the string ranking of from Theorem 2. And for some , any ELNCP misranks even string pairs of similar lengths: Theorem 3. 
Assuming NP ⊈ P/poly, there exists an efficiently computable normalizable weighted language p̃ such that no ELNCP q̃ with support(q̃) ⊇ support(p̃) has p̃(x_1) < p̃(x_2) ⇒ q̃(x_1) < q̃(x_2) for all x_1, x_2.", "Indeed, any such q̃ has a counterexample where p̃(x_1) = 0.", "Moreover, there is a polynomial f : ℕ → ℕ such that a counterexample exists for every x_1 such that p̃(x_1) = 0 and q̃(x_1) > 0, where the x_2 in this counterexample always satisfies |x_2| ≤ f(|x_1|).", "Theorem 3 is relevant if one wishes to train a model to rerank strings that are proposed by another method (e.g., beam search on q̃, or exact k-best decoding from a more tractable distribution).", "If the desired rankings are given by Theorem 3's p̃, any smoothed¹¹ ELNCP model q̃ will misrank some sets of candidate strings, even sets all of whose strings are close in length, by failing to rank an impossible string (x_1 with p̃(x_1) = 0) below a possible one (x_2 with p̃(x_2) > 0).", "(Footnote 11: Smoothing is used to avoid ever incorrectly predicting 0 (a false negative) by ensuring support(q̃) ⊇ support(p̃).", "E.g., autoregressive language models often define q(x̂ | x) using a softmax over Σ ∪ {$}, ensuring that q̃(x) > 0 for all x.)", "Theorem 2 implies that there exists p̃ whose local probabilities p(x̂ | x) are not approximated by any ELNCP q̃ to within any constant factor, since that would perfectly distinguish zeroes from non-zeroes and the resulting support sets would be equal.¹²", "However, this demonstration hinges on the difficulty of multiplicative approximation of zeroes, whereas real-world distributions may lack zeroes.", "Below we further show that it is hard even to approximate the non-zero local conditional probabilities (even with the additional help of randomness).", "Theorem 4. Assuming NP ⊈ P/poly, there exists an efficiently computable weighted language p̃ : Σ* → ℝ≥0 such that there is no (q, Θ), where Θ = {θ_n | n ∈ ℕ}, that satisfies all of the following properties (similar to §3.1): (1) the parameter size |θ_n| grows only as O(poly(n)); (2) q(θ_n) returns a probabilistic Turing machine M_n in time O(poly(n)); (3) there exists c ≥ 1 such that for each x̂ ∈ Σ ∪ {$} and x with |x| ≤ n and p(x̂ | x) > 0, the probabilistic computation M_n(x, x̂) has probability > 2/3 of approximating p(x̂ | x) to within a factor of c (that is, M_n(x, x̂)/p(x̂ | x) ∈ [1/c, c]); and (4) M_n runs on those inputs in time O(poly(n)).", "Moreover, the statement above remains true", "(a) when the approximation guarantee is only required to hold for prefixes x where {x′ ∈ support(p̃) : x ⪯ x′} is finite (so p(x̂ | x) is computable by brute force)", "(b) or, when support(p̃) = Σ*. 3.5 ELN models are unconditionally weak: Our above results rely on the NP-hardness of computing or approximating an EC distribution's autoregressive factors p(x_t | x_{<t}).", "In Appendix A, we show that these factors can even be uncomputable.", "In such cases, the distribution cannot be ELN (Theorem 5), though sometimes it is still ELNCP (Theorem 6).", "This result does not assume P ≠ NP or NP ⊈ P/poly.", "We now discuss alternative families of sequence distributions that trade away efficiency or compactness in exchange for greater capacity, as shown in Table 1.", "(Footnote 12: Dropping the normalization requirement on the approximated local probabilities (so that possibly Σ_{x̂} q(x̂ | x) ≠ 1) does not help.", "Otherwise, again, Sat could be solved in polynomial time (with the help of polysize advice strings) by using q(1 | φ′) to determine in the proof of Theorem 1 whether p(1 | φ′) > 0.)",
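To make footnote 12 and the underlying Sat reduction concrete, here is a minimal Python sketch (an illustration only, not the paper's construction: the CNF representation, the toy encoding length, and all names are assumptions). It computes a local conditional probability of the Sat-derived weighted language by brute-force enumeration of assignments, exactly the exponential-time computation that Theorem 1 says no polytime scheme with compact parameters can replace:

from itertools import product

def eval_formula(cnf, assignment):
    # cnf: list of clauses; each clause is a list of signed 1-based variable
    # indices, e.g. [[1, 2, -3], [-1, 4]] encodes (X1 v X2 v ~X3) ^ (~X1 v X4).
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in cnf)

def weight(cnf, num_vars, a):
    # p~(phi a) = (1/3)**(|x|+1) if a satisfies phi, else 0; the encoding
    # length below is a toy stand-in for |enc(phi)|.
    enc_len = sum(len(clause) for clause in cnf)
    return (1/3) ** (enc_len + num_vars + 1) if eval_formula(cnf, a) else 0.0

def local_prob_first_bit(cnf, num_vars):
    # p(1 | phi): probability that the first assignment bit is 1, given the
    # encoded formula as prefix. Brute force sums over all 2**num_vars
    # continuations, so the runtime is exponential in the formula size.
    total = with_one = 0.0
    for a in product([False, True], repeat=num_vars):
        w = weight(cnf, num_vars, a)
        total += w
        if a[0]:
            with_one += w
    # Undefined (None) when phi is unsatisfiable, i.e. prefix probability 0.
    return with_one / total if total > 0 else None

# The formula from footnote 8; p(1 | phi) > 0 iff some satisfying
# assignment sets X1 = 1, which is exactly a Sat-style question.
print(local_prob_first_bit([[1, 2, -3], [-1, 4]], 4))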
"4.1 Energy-based models (EBMs): Energy-based models (LeCun et al., 2006) of discrete sequences (Rosenfeld et al., 2001; Sandbank, 2008; Huang et al., 2018) traditionally refer to the EC models of §2.2.", "Only the unnormalized probabilities p̃(x) are required to be efficiently computable.", "Lemmas 1 and 2 showed that this model family contains all ELN languages and can achieve any support in P. Theorem 1 shows that it also contains languages that are not ELN or even ELNCP: intuitively, the reason is that the sums needed to compute the local normalizing constants (see §2.1) can be intractable.", "If we generalize energy-based sequence models to include all ECCP models, that is, if we allow nonuniform computation with compact parameters, then Lemmas 1 and 2 guarantee that they can capture all ELNCP languages and furthermore all languages in P/poly (though still not NP-complete languages).", "Experiments on different parameterizations.", "Maximum-likelihood parameter estimation (MLE) can be expensive in an EBM because the likelihood formula involves the expensive summation Z = Σ_{x∈Σ*} p̃(x).", "This forces us in practice to use alternative estimators that do not require computing normalized probabilities, such as noise-contrastive estimation (NCE) or score matching, which are less statistically efficient.", "In pilot experiments we found that both RNN- and Transformer-based EBMs trained with NCE achieved worse held-out perplexity than comparable locally normalized models trained with MLE.¹³", "Fortunately, it is possible to infuse a globally normalized architecture with the inductive bias of a locally normalized one, which empirically yields good results.", "Residual energy-based models (REBMs) (Bakhtin et al., 2021) are a simple hybrid architecture: p̂_θ(x) ∝ p̃_θ(x) ≜ p_0(x) · exp g_θ(x).", "This simply multiplies our previous weight exp g_θ(x) by a new factor p_0(x).", "The base model p_0 : Σ* → (0, 1] is a locally normalized neural sequence model (ELN model) that was pretrained on the same distribution.", "g_θ : Σ* → ℝ is a learnable function (with parameters θ) that is used to adjust p_0, yielding a weighted language p̃_θ with the same support.", "(Footnote 13: This might be due to a capacity limitation of the specific globally normalized architectures (i.e., no parameters work well), or excess capacity (i.e., too many parameters work well on the finite sample), or statistical inefficiency of the estimator (the NCE objective on the finite sample, with the noise distribution we chose, does not distinguish among parameters as well as MLE does), or an optimization difficulty caused by local optima in the NCE optimization landscape.)", "We implemented REBMs, again with NCE training, and evaluated them on two different neural architectures (GRU- and Transformer-based) and 3 datasets (WikiText (Merity et al., 2017), Yelp (Yelp), and RealNews (Zellers et al., 2019)).", "In each setting we tried, the REBM slightly but significantly improved the perplexity of the base model p_0 (p < 0.
05).¹⁴", "4.2 Latent-variable models: Autoregressive models have Z = 1 for any setting of the parameters (or at least any setting that guarantees consistency: see footnote 7).", "Clearly Z = 1 ensures that Z is both finite and tractable.", "Can we find a model family that retains this convenience (unlike EBMs), while still being expressive enough to have any non-empty language in P as support?", "Autoregressive latent-variable models form such a family.", "As in directed graphical models, the use of latent variables provides a natural way to model partial observations of an underlying stochastic sequence of events.", "We will model an observed sequence x of length n as a function of a latent string z of length O(poly(n)).", "As in EBMs, the probability p̃(x) can be computationally intractable, allowing these models to break the expressivity bottleneck of ordinary autoregressive models.", "However, the intractability no longer comes from exponentially many summands in the denominator Z, but rather from exponentially many summands in the numerator, namely the summation over all latent strings z that could have produced x.", "Notice that as a result, even unnormalized string weights are now hard to compute, although once computed they are already normalized.", "Formally, we define marginalized weighted languages.", "We say that p̃ is a marginalization of the weighted language p̃′ if it can be expressed as p̃(x) = Σ_{z : κ(z) = x} p̃′(z), where κ is some function (the marginalization operator).", "We say it is a light marginalization if |z| ∈ O(poly(|κ(z)|)) and κ runs in time O(poly(|z|)).¹⁵", "Typically κ(z) extracts a subsequence of z; it can be regarded as keeping the observed symbols while throwing away a polynomially bounded number of latent symbols.", "(Footnote 14: We independently conceived of and implemented the REBM idea proposed in Bakhtin et al. (2021).", "Details of neural architecture choice, model parameter sizes, training regimen, and evaluation (Appendices B-D) differ between our work and theirs, which also reported positive empirical results (on different datasets).", "We regard the two independent positive findings as a strong indication that the REBM design is effective.)", "(Footnote 15: WLOG, κ can be required to run in linear time O(|z|), as it does in our constructions below.)", "Light marginalizations of ELN distributions are a reasonable formalization of latent-variable autoregressive models.", "They are more powerful than ELN distributions, and even include some distributions that (by Lemma 1) are not even ELNCP or ECCP: Theorem 7. There exists a light marginalization p̃ of an ELN distribution, such that support(p̃) is an NP-complete language.", "Our proof of Theorem 7 relies on special structure of a certain NP-complete language (Sat) and does not evidently generalize to all languages in NP.", "However, light marginalizations of ELNCP distributions are more powerful still,¹⁶ and can have any language in NP or even NP/poly (§2.4) as support: Theorem 8.
The following statements are equivalent for any nonempty L ⊆ Σ*:", "(a) L ∈ NP/poly.", "(b) L is the support of a light marginalization of an ELNCP distribution.", "(c) L is the support of a light marginalization of an ECCP weighted language.", "Theorems 7 and 8 make use of unrestricted latent-variable autoregressive models.", "There exist more practical restricted families of such models that admit tractable computation of p̃(x) (Lafferty et al., 2001; Rastogi et al., 2016; Wu et al., 2018; Buys and Blunsom, 2018).", "Such models are EC (and indeed, typically ELN), but this limits their expressivity, by Theorem 1. Both Lin et al. (2019) and Buys and Blunsom (2018) observed that such models yield worse empirical results than models that do not have tractable exact inference methods.", "The tractability requirement is dropped in self-talk (blixt, 2020; Gontier et al., 2020; Shwartz et al., 2020), where a neural autoregressive language model generates an analysis of the prefix x via latent intermediate symbols before predicting the next output symbol.¹⁷", "We remark that for autoregressive models, the position of the latent variables is significant.", "Marginalizing out latent variables at the end of the string adds no power.", "More precisely, if an ELNCP distribution is over strings z of the form x # y, then its marginalization via κ(x # y) = x can be expressed more simply as an ELNCP language.", "(Footnote 16: The capacity established by Theorem 8 does not need the full power of marginalization.", "We could similarly define light maximizations of ELNCP distributions, p̃(x) = max_{z : κ(z) = x} p̃′(z).", "Replacing sum by max does not change the support.)", "(Footnote 17: Here the marginal distribution of the next observed symbol can require superpolynomial time to compute (if #P ≠ FP, which follows from NP ⊈ P/poly).", "Theorem 1 could likewise be evaded by other autoregressive approaches that invest superpolynomial computation in predicting the next symbol (Graves, 2016).", "Each autoregressive step might explicitly invoke lookahead or reasoning algorithms, just as feed-forward network layers can invoke optimizers or solvers (Amos and Kolter, 2017; Wang et al., 2019b).)", "Thus, by Theorem 2, marginalizations of such distributions cannot have arbitrary NP languages as support.", "Our proofs of Theorems 7 and 8 instead use latent strings of the form y # x, where all latent variables precede all observed ones (as in Kingma and Welling, 2014).", "(This simple design can always be used without loss of generality.)", "Trying to reorder those latent strings as x # y while preserving their weights would have yielded a non-ELNCP distribution p̃(x # y) (because if it were ELNCP, then p̃(x) would be ELNCP also, and we know from Lemma 1 that it cannot be, for any distribution whose support is an NP-complete language).", "How about lightly marginalizing ECCP languages instead of ELNCP ones?", "This cannot model any additional unweighted languages, by Theorem 8."
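As a concrete illustration of the light marginalization defined above, here is a minimal Python sketch (the toy latent strings, weights, and the choice of '#' as separator are assumptions for illustration; in real models the sum ranges over exponentially many latent strings z, which is where the intractability enters):

from collections import defaultdict

def kappa(z):
    # Marginalization operator: discard the latent prefix y and the separator,
    # keeping only the observed suffix x of a latent string z = y # x.
    return z.split("#", 1)[1]

def marginalize(p_latent):
    # p~(x) = sum over all z with kappa(z) = x of p~'(z).
    p_obs = defaultdict(float)
    for z, w in p_latent.items():
        p_obs[kappa(z)] += w
    return dict(p_obs)

# Toy latent distribution p~' over strings of the form y#x:
p_latent = {"00#ab": 0.25, "01#ab": 0.25, "10#b": 0.3, "11#ba": 0.2}
print(marginalize(p_latent))  # {'ab': 0.5, 'b': 0.3, 'ba': 0.2}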
"Lightly marginalized ECCP languages may, however, be able to model more probability distributions.", "One can easily construct a light marginalization p̃ of an ECCP distribution such that p̃(φ) ∝ #(φ), where #(φ) is the number of satisfying assignments of φ and the constant of proportionality depends only on k = |φ|.", "We conjecture that this is not possible with lightly marginalized ELNCP distributions.", "§2.3 noted that with exponential growth in stored parameters, it is possible to fit any weighted language up to length n, with local probabilities computed in only O(n) time by lookup.", "Of course this rapidly becomes impractical as n increases, even if the amount of training data increases accordingly.", "However, there has been some recent movement toward storage-heavy models.", "Such models are typically semiparametric: they use a parametric neural model, such as an autoregressive model, together with an external knowledge base of text strings or factoids that are not memorized in the layer weights.", "The neural model generates queries against the knowledge base and combines their results.", "Examples include kNN-LMs (Khandelwal et al., 2020) and semiparametric LMs (Yogatama et al., 2021).", "The knowledge base grows linearly with the training data rather than compressing the data into a smaller parameter vector.", "It is in fact a copy of the training data, indexed to allow fast lookup (Indyk and Motwani, 1998).", "(Preparing the index is much cheaper than neural network training.)", "Access to the large knowledge base may reduce the amount of computation needed to find the local conditional probabilities, much as in the trie construction of §2.3; a sketch of this kind of lookup appears below.", "Chen et al. (2018) show that it is hard to map RNN parameters to properties of the resulting autoregressive weighted language, such as consistency (Z = 1).", "We focus on cases where the RNN parameters are already known to be consistent, so the RNN efficiently maps a string x to its local conditional distribution p(· | x).", "Our point is that for some weighted languages, this is not possible (even allowing polynomially larger RNNs for longer strings), so consistent RNNs and their ilk cannot be used to describe such languages.", "In a Bayes network, which is really just an autoregressive model of fixed-length strings, approximate marginal inference is NP-hard (Roth, 1996).", "Assuming NP ⊈ P/poly and the grid-minor hypothesis, Chandrasekaran et al. (2008, Theorem 5.6) further showed that for any infinite sequence of graphs G_1, G_2, . . . where G_n has treewidth n, there is no sequence of algorithms A_1, A_2, . . . such that A_n performs approximate marginal inference in time O(poly(n)) on graphical models of structure G_n.", "This remarkable negative result says that in any graph sequence of unbounded treewidth, approximating the normalizing constant Z given arbitrary parameters is hard (not O(poly(n))), even with advice strings.", "Our negative result (Theorem 4) focuses on one particular infinite weighted language, showing that approximating local conditional probabilities given an arbitrary-length prefix is hard in the same way.", "(So this language cannot be captured by an RNN, even with advice strings.)"
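Returning to the semiparametric language models above, the following is a hedged sketch of kNN-LM-style lookup (Khandelwal et al., 2020): the next-token distribution interpolates a parametric model with a distribution induced by nearest neighbors in a datastore of (context vector, next token) pairs. The tiny datastore, squared-distance scoring, and all names are illustrative assumptions; the real systems query FAISS indexes over Transformer hidden states.

import math

def knn_next_token_probs(query_vec, datastore, lm_probs, k=2, lam=0.25):
    # datastore: list of (context_vector, next_token) pairs built from
    # training data; query_vec encodes the current prefix.
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    neighbors = sorted(datastore, key=lambda item: sq_dist(item[0], query_vec))[:k]
    # Softmax over negative distances gives the kNN distribution.
    weights = [math.exp(-sq_dist(vec, query_vec)) for vec, _ in neighbors]
    z = sum(weights)
    knn_probs = {}
    for (vec, tok), w in zip(neighbors, weights):
        knn_probs[tok] = knn_probs.get(tok, 0.0) + w / z
    # Interpolate: p(w | x) = lam * p_knn(w | x) + (1 - lam) * p_lm(w | x).
    vocab = set(lm_probs) | set(knn_probs)
    return {t: lam * knn_probs.get(t, 0.0) + (1 - lam) * lm_probs.get(t, 0.0)
            for t in vocab}

store = [((0.1, 0.9), "cat"), ((0.2, 0.8), "cat"), ((0.9, 0.1), "dog")]
print(knn_next_token_probs((0.15, 0.85), store, {"cat": 0.5, "dog": 0.5}))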
"6 Conclusion and future work: Autoregressive models are suited to those probability distributions whose prefix probabilities are efficiently computable.", "This efficiency is convenient for training and sampling.", "But unless we sacrifice it and allow runtime or parameter size to grow superpolynomially in input length, autoregressive models are less expressive than models whose prefix probabilities expensively marginalize over suffixes or latent variables.", "All model families we have discussed in this paper can be seen as making compromises between different desiderata (Table 1).", "Natural follow-up questions include 'Are there model families that win on all fronts?' and 'What are other modeling desiderata?'", "While some languages in P cannot be supports of ELNCPs, we do not know if the same can be said for most languages in P. This problem seems to be closely related to the average-case complexity of NP-complete languages, where most questions remain open (Levin, 1986; Bogdanov and Trevisan, 2006).", "We thank the anonymous reviewers for their comments.", "We also thank our colleagues at Johns Hopkins University, Facebook, and Carnegie Mellon University for their comments on earlier versions of the manuscript.", "This material is based upon work at Johns Hopkins University supported by the National Science Foundation under Grant No. 1718846.", "It does not represent the views of Microsoft (where Dr. Eisner is also a paid employee, in an arrangement that has been reviewed and approved by the Johns Hopkins University in accordance with its conflict of interest policies)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "result", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "method", "other", "other", "method", "other", "objective", "method", "method", "other", "abstain", "other", "other", "abstain", "method", "other", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "method", "other", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "method", "other", "abstain", "other", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other" ]
[ "Despite recent improvements in open-domain dialogue models, state-of-the-art models are trained and evaluated on short conversations with little context.", "In contrast, the long-term conversation setting has hardly been studied.", "In this work we collect and release a human-human dataset consisting of multiple chat sessions whereby the speaking partners learn about each other's interests and discuss the things they have learnt from past sessions.", "We show how existing models trained on existing datasets perform poorly in this long-term conversation setting in both automatic and human evaluations, and we study long-context models that can perform much better.", "In particular, we find retrieval-augmented methods and methods with an ability to summarize and recall previous conversations outperform the standard encoder-decoder architectures currently considered state-of-the-art.", "Improvements in the ability to train large neural language models, together with the availability of larger and higher quality dialogue datasets, are spurring the development of increasingly convincing open-domain dialogue models (McTear, 2020).", "Unfortunately, a major aspect missing from the current state of the art is that human conversations can take place over long time frames, whereas the currently used systems suffer in this setting.", "Commonly used training and evaluation resources while large in terms of number of training examples include only short conversations, typically between 2-15 turns, consisting of a single conversational session.", "Perhaps for that reason, the current state-of-the-art models such as Meena (Adiwar-dana et al., 2020) and BlenderBot (Roller et al., 2020) employ Transformers with token truncation lengths of only the 128 most recent tokens, and We use this term colloquially, see Agranoff et al. 
(1965) for evidence of goldfish long-term memory.)", "are clearly incapable of incorporating long-term conversational context.", "Consequently, it is unclear how well these models will perform on long or multi-session open-domain conversations.", "In contrast, a successfully deployed bot will engage in many conversations over a length of time, as capturing organic user interest will garner continual reengagement from returning users.", "Long-term open-domain communication gives the opportunity for the conversation to develop and even improve with time, as the model has more context and more understanding of that specific user's interests.", "However current models, due to context truncation, will never use this information.", "In this work we study methods for long-term open-domain conversation.", "As, to the best of our knowledge, no public domain task exists to study such methods, we collect and release¹ a new English dataset, entitled Multi-Session Chat (MSC), that consists of human-human crowdworker chats over 5 sessions, with each session consisting of up to 14 utterances, where the conversationalists reengage after a number of hours or days and continue chatting.", "Previous sessions are annotated with summaries of important personal points that may be useful in further conversations.", "When reengaging, conversationalists often address existing knowledge about their partner to continue the conversation in a way that focuses and deepens the discussions on their known shared interests, or explores new ones given what they already know.", "We study the performance of two long-context conversational architectures on this task:", "(i) retrieval-augmented generative models (Lewis et al., 2020b; Shuster et al., 2021); and", "(ii) a proposed read-write memory-based model that summarizes and stores conversation on the fly.", "We show that both techniques outperform conventional encoder-decoder Transformers, and that training", "(Footnote 1: Dataset, model weights and code for this entire project will be made available upon acceptance.)", "models on our new task gives long-term conversational abilities that existing state-of-the-art models lack, as shown in both automatic metrics and human evaluations.", "We provide extensive experiments and ablations that study the reasons behind these improvements.", "A relatively large and growing number of either natural or crowdsourced datasets have been collected and used in open-domain dialogue research.", "These datasets focus on the vast array of different skills required by a dialogue agent, but conversation lengths are typically short.", "Recent state-of-the-art open-domain dialogue agents have utilized Daily Dialogue (Li et al., 2017), PersonaChat (Zhang et al., 2018), Empathetic Dialogues (Rashkin et al., 2019), Wizard of Wikipedia (Dinan et al., 2019) and Pushshift.io Reddit (Baumgartner et al., 2020); see Huang et al. (2020) for a review of other datasets.", "The number of conversational turns in these datasets is in the range of 2-15 turns; we provide statistics of some of these datasets in Table 2. We note there also exist some other kinds of dialogue datasets, e.g.
from fantasy role-playing (Urbanek et al., 2019; Rameshkumar and Bailey, 2020) and TV shows as well (Poria et al., 2018).", "Crowdsourcing long conversations is difficult due to both the expense and the difficulty of employing crowdworkers for long lengths of time, since so-called Human Intelligence Tasks (HITs) are typically of a short duration, only a few minutes (Paolacci et al., 2010).", "While organic long conversations regularly transpire on the internet, e.g. on messaging platforms, these are proprietary, and privacy concerns make public release implausible.", "Several existing datasets explore the use of personal knowledge used as context to dialogue, which can be seen as a short, simple memory provided to the bot.", "In Mazaré et al. (2018) such personas were extracted from Reddit and used to train agents.", "In Zhang et al. (2018) personas were first crowdsourced, and speakers were asked to play those roles.", "Other works have considered encoding personas into vector-based weights (Li et al., 2016).", "In this work, we explore summarizing the long-term conversations that occur in order to store useful information about them.", "Summarization is a rich field where the vast majority of work focuses on summarizing documents (Kaikhah, 2004; Kryscinski et al., 2019; Cheng and Lapata, 2016), for example summarizing in order to predict other relevant information (West et al., 2019), while there is some work on dialogue as well (Goo and Chen, 2018; Gliwa et al., 2019; Pan et al., 2018).", "Standard Transformers have a fixed context length, which, due to the all-vs-all self-attention mechanism, becomes inefficient when it is too large.", "Consequently, many existing pre-trained models have short token truncation lengths, e.g. 128 tokens, as in BlenderBot (Roller et al., 2020) and Meena (Adiwardana et al., 2020), or 1024 tokens, as in BART (Lewis et al., 2020a).", "A number of approaches have been proposed to ameliorate this issue.", "Long-context Transformers consider ways to speed up the self-attention mechanism (Child et al., 2019; Kitaev et al., 2019; Beltagy et al., 2020) and retrieval-augmented methods consider ways to select the pertinent parts of the context to consider (Dinan et al., 2019; Lewis et al., 2020b; Shuster et al., 2021), which can also be related to earlier neural QA methods (Chen et al., 2017).", "To conduct research on long-term conversations, we require data to both train on and to evaluate models.", "We consider the natural case where two speakers chat online in a series of sessions, as is for example common on messaging platforms.", "Each chat session consists of 6-7 turns for each speaker.", "Then, after a certain amount of (simulated) time has transpired, typically hours or days, the speakers resume chatting, either continuing to talk about the previous subject, bringing up some other subject from their past shared history, or sparking up conversation on a new topic.", "We consider this multi-session long conversation setup, and name our dataset Multi-Session Chat (MSC).", "Data Collection: To build our publicly available dataset we employ crowdworkers.", "We provide screenshots of the task, and details of quality control via onboarding, crowdworker co-rating, and automatic evaluation procedures, in Appendix B.
Personas: Crowdworkers are asked to play a role, rather than speaking about their own personality, which helps mitigate privacy concerns and ensures diversity even if the same crowdworker conducts multiple conversations.", "In addition to the crowdworkers being specifically told to play the role, they are also told not to discuss aspects of their real profiles or indeed any personally identifiable information.", "The role is provided as a series of sentences describing characteristics, events and opinions of the character they are playing.", "We use the 1,155 personas crowdsourced from Zhang et al. (2018); validation and test use separate personas from the ones used in the training set.", "Session 1: For the first chat session we use the PERSONACHAT dataset (Zhang et al., 2018), which already involves short conversations where two speakers get to know each other for the first time.", "We note that these conversations rarely go beyond the superficial stage because speakers simply do not have enough turns to discuss any topic deeply.", "Sessions 2, 3, 4, . . . : For subsequent sessions, we first select a random amount of (simulated) time that has elapsed since the previous session, chosen to be either 1-7 hours or 1-7 days, as ideally speakers would reengage within that timeframe.", "We ask the crowdworkers to play the same roles that were played in the previous session, acting as if that amount of time has transpired.", "We note these crowdworkers may not be the same ones that played those characters in previous sessions, but will be playing the same roles: this makes the task tractable in a crowdworking framework where jobs are typically short, and matching pairs over a long duration would be infeasible.", "We instruct the workers to chitchat with another worker for 6 turns, 'as if you were catching up since last time you two spoke,' and that 'when you expand the topic, make sure it makes sense with the personal details already mentioned,' i.e.
emphasizing that not only must they play their role, but also pay attention to previous interactions with the other speaker.", "Session Lengths: We collect two lengths of training conversation: 4000 episodes with 3 sessions, and 1001 episodes with 4 sessions.", "For the validation and test data, the sessions extend up to 5 sessions, giving us a way to measure long-context session performance that extends beyond the training set distribution.", "Conversation Summaries (Extended Personas): We give crowdworkers access to all previous dialogues between the two conversational roles (for the role they are playing, and their partner's role).", "However, as the conversation gets longer, this becomes infeasible to read and digest within a limited amount of time.", "Therefore, between each session, including after session 1, we run a separate crowdworker task in which conversations are summarized into important points, which are much shorter than the full dialogues themselves.", "We then show previous dialogues, along with these summaries, as the primary reference for subsequent session dialogues.", "As these summaries were collected in order to store the important points pertinent to either one or the other speaker, they can also be seen to function as extensions of the original given personas.", "As the two speakers continue to converse, they create more depth to those characters.", "Dataset Examples: Two dataset examples, which consist of four sessions each, along with example summary annotations, are given in Appendix C (provided in the Appendix due to their length).", "Dataset Statistics: Statistics of the multi-session chat dataset are given in Table 1 and a comparison with other standard open-domain dialogue datasets is given in Table 2. We can see that the number of training utterances per episode is larger than other datasets (last column of Table 2).", "Our multi-session training chats that last 4 sessions have an average of 53 utterances in a full conversation (over all sessions), while our validation and test chats over 5 sessions have an average of 66 utterances.", "In contrast, other standard datasets are in the range of 2.6-14.7 utterances on average.", "This brings challenges in open-domain dialogue modeling due to the large context size, e.g. an average of 1614 tokens as tokenized by the BlenderBot BPE dictionary (Roller et al., 2020), where the Transformer used in that work has a truncation length of 128.", "Further information on the dataset, including analysis of its quality, is given in Appendix B. 4 Modeling Multi-Session Chat 4.1 Transformer Encoder-Decoders: The most straightforward approach for modeling dialogue using our new task is simply to use a large language model as is standard in open-domain dialogue, i.e.
an encoder-decoder Transformer as in the Meena (Adiwardana et al., 2020) and BlenderBot (Roller et al., 2020) systems.", "We consider using the BST 2.7B parameter model from BlenderBot as an initial pre-trained model, which we then fine-tune on the Multi-Session Chat task.", "Encoder Truncation As BST 2.7B has a truncation of 128 tokens in the encoder, we consider extending this to a larger input.", "To do this, we extend its available positional encodings from 128 to 512 or 1024 tokens as we fine-tune the whole network on the downstream task.", "We add new positional embeddings to be trained such that the existing ones (the first 128 most recent tokens) do not change from before.", "We then evaluate the impact of these choices in order to select the best model.", "A popular technique when dealing with a large collection of text, only some of which is relevant, is to use a retrieval-augmented Transformer.", "A retrieval system is used to search over a text collection, and select some of it to be included in the final encoding which is attended to by the Transformer decoder.", "RAG The RAG (Retrieval-Augmented Generation) approach (Lewis et al., 2020b) utilizes a neural-retriever-in-the-loop which is itself a second Transformer.", "Documents to be retrieved are stored in an approximate nearest-neighbor FAISS index (Johnson et al., 2019), and a DPR (Dense Passage Retrieval) (Karpukhin et al., 2020) Transformer bi-encoder model is used to score document-context pairs in order to rank them based on their match, where the base DPR model is pre-trained on QA data pairs.", "The DPR model is thus used to both retrieve from the FAISS index, and then score the top N candidates.", "The entire system is trained end-to-end so that retrieval is optimized to help improve generation.", "This setup was shown to work for dialogue in particular in Shuster et al. 
(2021).", "FiD and FiD-RAG: We also consider the Fusion-in-Decoder (FiD) (Izacard and Grave, 2020), another method that has been shown to perform well.", "In this approach, the pre-trained retriever is used directly: each of the top N documents returned is prepended to the context and encoded separately by the encoder, and finally all the results are concatenated.", "The decoder then attends to these encodings to produce a final response.", "We consider the pre-trained retriever to either be standard pre-trained DPR, or the RAG-trained retriever, called FiD-RAG (Shuster et al., 2021).", "Retriever and Documents: In this work the set of passages in the memory is not large enough to require a FAISS index, but it is large enough that retrieval may be useful.", "We thus store for every item in the memory the vector encoding by the DPR model (whereas in the FAISS approach this dense vector is approximated instead).", "Then given a dialogue context, we score each memory using the bi-encoder, and use the top N for generation.", "In our case, the memories consist of dialog utterances from the history of the conversation.", "We consider the chunk (document) size as a hyperparameter and try either encoding utterances as separate documents, or else whole sessions (or session summaries) as documents.", "The latter (whole sessions) worked better, and we report those in the final results.",
"Table 3: Comparison of different context truncation lengths and context types when training on MULTI-SESSION CHAT. Columns: Pre-Train Model, Truncation, Sessions 1-4, Session 1, Session 2, Session 3, Session 4, Trunc% (S4).
With no previous session context: BST 2.7B, 128: 9.23, 8.76, 9.45, 9.31, 9.40, 51% | BST 2.7B, 512: 9.06, 8.18, 9.42, 9.26, 9.36, 0% | BST 2.7B, 1024: 9.08, 8.20, 9.46, 9.29, 9.37, 0%
With previous session dialogue context: BST 2.7B, 128: 9.16, 8.75, 9.32, 9.22, 9.32, 100% | BST 2.7B, 512: 8.87, 8.15, 9.14, 9.04, 9.17, 100% | BST 2.7B, 1024: 8.89, 8.17, 9.18, 9.05, 9.16, 80%
With previous session summary context: BST 2.7B, 128: 9.09, 8.77, 9.24, 9.12, 9.24, 100% | BST 2.7B, 512: 8.79, 8.17, 8.69, 9.15, 9.22, 36% | BST 2.7B, 1024: 8.80, 8.18, 9.05, 8.91, 9.04, 0%",
"For N we try values 3, 5 and 6, and also choose the best for each method according to the validation set.", "The retrieval-augmentation models described in the previous section retrieve from the set of past dialogues.", "Simply storing historical text in the memory in its raw form is a simple approach that is often used elsewhere in the literature, e.g. in question answering or knowledge-grounded dialogue.", "However, those approaches have two potential drawbacks:", "(i) there is a lot of context to store, and hence retrieve from;", "(ii) no processing has been done on that content, so the reading, retrieving and combining operations required to generate an answer leave a lot of work for the model to do.", "We therefore propose instead a novel memory augmentation that first summarizes pertinent knowledge and only stores that, in an attempt to solve both problems; it has two components, listed below.", "1.
An encoder-decoder abstractive summarizer that takes as input the dialogue history, and outputs a summary of new pertinent information contained in the last dialogue turn, or no-summary if there is no new information found.", "When found, the summarized knowledge is added to the long-term memory.", "2. A memory-augmented generator that takes the dialogue context and access to the long-term memory, and then generates the next response.", "For (1) we can use the human annotated data from our newly collected MSC task to know what summaries to generate (see section 3 and Figure 1 in the Appendix).", "We thus train a supervised encoder-decoder model to produce summaries.", "For (2) we can use the same systems as presented in subsection 4.2 to both retrieve from the summarization memories, and to finally generate an appropriate response.", "That is, we store the summaries in documents and retrieve them using either RAG, FiD or FiD-RAG (a minimal sketch of this read-write loop appears below).", "Using session dialogue context: We compare different context types in Table 3, evaluating over sessions 1-4.", "We observe an improvement in perplexity when incorporating the dialogue history from previous chat sessions, compared to no session context, for all sessions after the first one, and for all context lengths, with larger context lengths giving better improvement.", "This shows that our human conversationalists do use previous sessions to make dialogue more salient in successive sessions, as this is reflected in the collected human-human dataset, and that our models are able to utilize this information well when training on this data.", "Using session summary context: We also show the performance of using gold session summary contexts, as annotated by crowdworkers, in Table 3. As the summaries include salient points, they are potentially more informative than dialogue context for a generative model.", "We find perplexities improve when using summaries compared to using dialogue context (or no context at all) over all sessions after the first one, and for all context lengths, although the improvements are not large.", "This shows that conversation summaries are potentially useful for dialogue generation in the long-context case.", "Comparing performance on session openings: Session openings in the MSC dataset look quite different to other dialogue datasets that do not have a session format.", "This is because they involve an opening message that is intended to reengage the other speaker after a period of time, using known information that has been exchanged between speakers.", "In Table 4 we compare models that use different context types on only these opening responses.", "In this case we find much more pronounced perplexity differences between no session context history, dialogue history or summary context history.", "For example, we see around 2 perplexity points difference between using or not using previous session context.", "We show examples of opening session generations in Appendix C."
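A minimal sketch of the two-component read-write memory described above may help fix ideas; the summarizer and generator below are placeholder callables standing in for the trained encoder-decoder models, and the word-overlap retrieval scorer is an assumption (the actual system scores memories with a DPR bi-encoder):

NO_SUMMARY = "no_summary"

class SummaryMemoryBot:
    def __init__(self, summarizer, generator, top_n=3):
        self.summarizer = summarizer  # dialogue history -> summary or NO_SUMMARY
        self.generator = generator    # (retrieved memories, context) -> response
        self.memory = []              # long-term store of summary sentences
        self.context = []             # current-session utterances
        self.top_n = top_n

    def observe(self, utterance):
        self.context.append(utterance)
        # (1) Write: summarize new pertinent information in the last turn.
        summary = self.summarizer(self.context)
        if summary != NO_SUMMARY:
            self.memory.append(summary)

    def respond(self):
        # (2) Read: retrieve top-N memories for the current context, generate.
        retrieved = self.retrieve(self.context, self.top_n)
        reply = self.generator(retrieved, self.context)
        self.context.append(reply)
        return reply

    def retrieve(self, context, n):
        # Placeholder scorer: crude word overlap instead of a DPR bi-encoder.
        words = set(" ".join(context).split())
        score = lambda m: len(set(m.split()) & words)
        return sorted(self.memory, key=score, reverse=True)[:n]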
"We observe that opening messages are categorically different from other conversation turns, typically involving a statement or question given knowledge of shared interests contained in the long-context.", "This explains why collection of our new dataset is so important for this goal, as reflected in perplexity improvements.", "That is, they indicate that our new task will likely help improve multi-session conversational engagement with users compared to existing training schemes.", "Comparing different context lengths: As shown in Table 3, changing the context length of a Transformer can impact the performance in our task.", "With no previous session context, improvements are minimal for sessions 2 onwards.", "However, using session dialogue or summary contexts, we do see improvements with larger lengths of 512 or 1024 tokens, compared to 128.", "The last column of Table 3 shows the percentage of responses where the input to the Transformer is truncated for session 4, for each truncation length.", "One can see that using summaries can be beneficial as they are shorter, and hence truncated less often.",
"Table 7: Test perplexity across sessions for our retrieval- and memory-augmented models (bottom two blocks) compared to several encoder-decoder baselines (top three rows). Columns: Model, Session 1, Session 2, Session 3, Session 4, Session 5, Session Openings.
BST 2.7B (Roller et al., 2020): 8.97, 9.98, 10.26, 10.40, 10.50, 12.92
MSC 2.7B (truncate 128): 8.87, 8.89, 9.10, 9.21, 9.27, 8.95
MSC 2.7B (truncate 1024): 8.25, 8.76, 8.93, 9.07, 9.16, 8.09
MSC 2.7B (RAG): 8.22, 8.78, 8.97, 9.11, 9.17, 8.10
MSC 2.7B (FiD): 8.22, 8.75, 8.92, 9.05, 9.11, 8.06
MSC 2.7B (FiD-RAG): 8.23, 8.75, 8.93, 9.04, 9.11, 8.03
SumMem-MSC 2.7B (truncate 1024): 8.25, 8.71, 8.89, 9.01, 9.09, 8.04
SumMem-MSC 2.7B (RAG): 8.24, 8.81, 9.00, 9.10, 9.17, 8.05
SumMem-MSC 2.7B (FiD): 8.20, 8.71, 8.89, 9.00, 9.07, 7.91
SumMem-MSC 2.7B (FiD-RAG): 8.22, 8.70, 8.89, 9.00, 9.07, 7.87",
"Summary context performance: We can ablate the summary model training data to understand its impact further, results of which are given in Table 4. We see that removing the time feature (indicating how long ago the previous session occurred) has only a minimal effect.", "Removing either the partner or self summary (and keeping the other one), on the other hand, has a larger effect in both cases, where keeping the self summary is slightly more important.", "Keeping both features is best.", "These differences, as before, are magnified when looking at session opening performance.", "Predicted summary models: We train models to predict dialogue summaries, and use predicted summaries of previous sessions as context (instead of the full dialogue history or the gold summary).", "The training data for predicting summaries consists of, for each turn, either a summarizing sentence or the no_summary label.", "As 42% of turns have the no_summary label, this can be overexpressed in the model at beam decoding time²; we therefore experiment with sampling this label only K% of the time during training, in Table 5. Example predictions (for the 5% sampling model) are shown in Figure 1. We find that subsampling gives better results and sparsity levels closer to the original human annotated data (e.g., with K = 25%).", "We compare predicted summaries with K = 5% sampling to other methods of modeling long-context in Table 4.
We observe results that are between using a standard dialogue history (predicted summaries are slightly better), and using gold summaries (predicted summaries are not as good).", "(Footnote 2: We use a beam size of 3 and minimum beam length 10 with no context blocking.)", "Varying the number of training sessions: We vary the amount of available training sessions from 1-4, with results reported in Table 6. We observe large gains when using more than one training session compared to only one (around 1.5 perplexity points), again justifying the construction of our MSC training data.", "The gains however decrease with the number of available sessions, e.g. going from 1-3 training sessions to 1-4 gives only a 0.03 perplexity gain averaged across sessions.", "The gain even on session 4 is not that large despite the 1-4 training data being in-distribution whereas 1-3 is not, in addition to 1-4 having more training data.", "Retrieval-augmentation model: A comparison of our retrieval-augmented methods is given in Table 7, training on MSC using the BST 2.7B model as pre-training, hence called MSC 2.7B (RAG), (FiD) or (FiD-RAG), depending on the augmentation method.", "These methods are compared to the existing BlenderBot model (BST 2.7B), or training with MSC with no augmentation (MSC 2.7B with different dialogue history context truncation lengths).", "We find that all three retrieval augmentation methods, when using whole sessions as the retrieval documents, can effectively use retrieval to extend the conversation history length; a sketch of the FiD encoding these methods use is given below.", "We see a large performance improvement over the existing BlenderBot model or a truncation of 128 of the MSC 2.7B model.", "Performance improvements over MSC 2.7B with a truncation length of 1024 are minimal, but the retrieval-augmented models are guaranteed to have a memory that essentially never forgets the conversation, no matter how long it gets, whereas the truncation model does not.", "Summary memory model variants: We next evaluate models in which the previous dialogue history is summarized before being stored in the model's long-term memory, called SumMem-MSC 2.7B.", "We use the RAG, FiD, or RAG-FiD methods to retrieve from that memory, or we compare to a fixed memory of 1024 tokens that is truncated, resulting in four different methods that we compare.", "Results are given in Table 7.
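For concreteness, here is a hedged sketch of the Fusion-in-Decoder encoding used by the FiD variants compared above: each retrieved document is prepended to the dialogue context and encoded separately, and the decoder cross-attends over the concatenated encodings. Tensor shapes and the encoder/decoder stand-ins are illustrative assumptions, not the released model's API.

import torch

def fid_forward(encoder, decoder, doc_token_ids, context_token_ids):
    # doc_token_ids: (n_docs, doc_len); context_token_ids: (ctx_len,).
    n_docs = doc_token_ids.size(0)
    ctx = context_token_ids.unsqueeze(0).expand(n_docs, -1)
    inputs = torch.cat([doc_token_ids, ctx], dim=1)   # (n_docs, doc_len + ctx_len)
    enc_out = encoder(inputs)                         # (n_docs, seq, hidden)
    # Fuse: flatten the per-document encodings into one long memory that the
    # decoder attends to when generating the response.
    fused = enc_out.reshape(1, -1, enc_out.size(-1))  # (1, n_docs * seq, hidden)
    return decoder(encoder_hidden_states=fused)

# Dummy stand-ins so the sketch runs end to end:
enc = lambda ids: torch.randn(ids.size(0), ids.size(1), 16)
dec = lambda encoder_hidden_states: encoder_hidden_states.mean(dim=1)
out = fid_forward(enc, dec, torch.zeros(3, 5, dtype=torch.long),
                  torch.zeros(7, dtype=torch.long))
print(out.shape)  # torch.Size([1, 16])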
While improvements are small, we see the same patterns as for the retrieval-augmented methods: SumMem-MSC 2.7B FiD-RAG is better than FiD, which is in turn better than RAG, with FiD and FiD-RAG better than truncation at session openings.", "Moreover, all SumMem-MSC models outperform their retrieval-augmented model counterparts MSC 2.7B (RAG/FiD/FiD-RAG).", "SumMem-MSC 2.7B (FiD-RAG) thus provides the best results out of all methods tested in this work.", "Further Detailed Automatic Metrics: Our analysis so far measured perplexity.", "We report more automatic metrics (F1 and BLEU) in Appendix A, which yield similar conclusions.", "We perform a human evaluation using crowdworkers.", "The conversations begin with two randomly chosen personas from the validation set, and one is assigned to the crowdworker who is asked to play that role.", "We select the conversation to be the 5th session that these two speakers will converse, and make available the summary of the previous 4 sessions.", "We ask the crowdworkers to have a natural conversation, where they will also evaluate their partner's responses for conversational attributes, in particular whether they reference knowledge of their own or the other speaker's persona (or topics they discussed) from previous sessions, from the current session, or neither.", "On each turn of the conversation the crowdworker is asked to check all attribute boxes that apply.", "A screenshot showing the UI can be found in Figure 6 in the Appendix.", "Each conversation consists of 15 messages (7 from the human, 8 from the bot).", "At the end of the conversation, an additional question collects an overall engagingness score (out of 5) for their speaking partner.", "The results are given in Table 8. We find that MSC-trained models outperform BlenderBot (BST 2.7B) in terms of both per-turn engaging responses and final ratings.", "Further, our summarization memory models (all three variants RAG, FiD and FiD-RAG) outperform encoder-decoders with different levels of truncation of the dialogue history (MSC 2.7B with truncate 128 and 1024).", "For example, SumMem-MSC 2.7B (RAG) achieves an engaging response rate of 62.1% and final rating of 3.65, compared to BlenderBot's 53.0% and 3.14 and MSC 2.7B (truncate 1024)'s 54.2% and 3.47.", "For all MSC models, while rates of referencing their own topics are not particularly increased, we do observe increased rates of referencing partner topics from previous sessions, with higher rates for the summarization memory models.", "For example, 33.8% for SumMem-MSC 2.7B (RAG) compared to BlenderBot's 14.5%.", "This is likely an important reason why human raters feel the summarization memory models are more engaging.", "We have shown that existing dialogue models, both in terms of training data and models trained, fail to conduct long-term conversations adequately.", "Our work investigates recent model architectures to ameliorate this issue, and collects a new crowdsourced task, Multi-Session Chat, to both train and evaluate these models.", "We show, in terms of both automatic metrics and human evaluations, that these long-context dialogue modeling approaches outperform the previous systems.", "Future work should investigate further improvements to architectures for the long-context dialogue setting.", "The dialogue models we use in this work utilize large language models, and therefore have similar concerns as in other work, in particular concerns about toxic language, bias and other issues during language generation (Bender et al.,
2021).", "For open-domain dialogue in particular, see Xu et al. (2020); Dinan et al. (2021) for reviews of the literature and evaluation of recent methods that try to mitigate these safety issues.", "Our work focuses on models with long-term memory and open-domain conversations wherein speakers may divulge personal interests.", "We remark that, during data collection, crowdworkers were specifically playing roles with given personality traits, not talking about themselves, and hence not identifying any personal information.", "During conversations with our trained models, the models will store information they learn from the exchange.", "In contrast to current standard language models, our models have the capability of storing this in the long-term.", "This information is stored in the memory of the model, private to the individual's conversation, and hence is not shared with anyone else." ]
[ "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "result", "objective", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "objective", "result", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain" ]
[ "Adversarial attacks are carried out to reveal the vulnerability of deep neural networks.", "Textual adversarial attacking is challenging because text is discrete and a small perturbation can bring significant change to the original input.", "Word-level attacking, which can be regarded as a combinatorial optimization problem, is a well-studied class of textual attack methods.", "However, existing word-level attack models are far from perfect, largely because unsuitable search space reduction methods and inefficient optimization algorithms are employed.", "In this paper, we propose a novel attack model, which incorporates the sememe-based word substitution method and particle swarm optimization-based search algorithm to solve the two problems separately.", "We conduct exhaustive experiments to evaluate our attack model by attacking BiLSTM and BERT on three benchmark datasets.", "Experimental results demonstrate that our model consistently achieves much higher attack success rates and crafts more high-quality adversarial examples as compared to baseline methods.", "Also, further experiments show our model has higher transferability and can bring more robustness enhancement to victim models by adversarial training.", "All the code and data of this paper can be obtained on https://github.com/ thunlp/SememePSO-Attack .", "Adversarial attacks use adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015), which are maliciously crafted by perturbing the original input, to fool the deep neural networks", "Indicates equal contribution.", "Yuan developed the method, designed and conducted most experiments; Fanchao formalized the task, designed some experiments and wrote the paper; Chenghao made the original research proposal, performed human evaluation and conducted some experiments.", "Work done during internship at Tsinghua University Corresponding author FondOf produce shows Original Input Substitute Words Sememes movie cinema picture love like this film I Search Space Adversarial Example film this like II this enjoy Figure 1: An example showing search space reduction with sememe-based word substitution and adversarial example search in word-level adversarial attacks.", "(DNNs).", "Extensive studies have demonstrated that DNNs are vulnerable to adversarial attacks, e.g., minor modification to highly poisonous phrases can easily deceive Google's toxic comment detection systems (Hosseini et al., 2017).", "From another perspective, adversarial attacks are also used to improve robustness and interpretability of DNNs (Wallace et al., 2019).", "In the field of natural language processing (NLP) which widely employs DNNs, practical systems such as spam filtering (Stringhini et al., 2010) and malware detection (Kolter and Maloof, 2006) have been broadly used, but at the same time the concerns about their security are growing.", "Therefore, the research on textual adversarial attacks becomes increasingly important.", "Textual adversarial attacking is challenging.", "Different from images, a truly imperceptible perturbation on text is almost impossible because of its discrete nature.", "Even a slightest character-level perturbation can either (1) change the meaning and, worse still, the true label of the original input, or (2) break its grammaticality and naturality.", "Unfortunately, the change of true label will make the adversarial attack invalid .", "For example, supposing an adversary changes she to he in an input sentence to attack a gender identification model, although the victim model 
alters its prediction result, this is not a valid attack.", "And the adversarial examples with broken grammaticality and naturality (i.e., poor quality) can be easily defended against (Pruthi et al., 2019).", "Various textual adversarial attack models have been proposed (Wang et al., 2019a), ranging from character-level flipping (Ebrahimi et al., 2018) to sentence-level paraphrasing (Iyyer et al., 2018).", "Among them, word-level attack models, mostly word substitution-based models, perform comparatively well on both attack efficiency and adversarial example quality (Wang et al., 2019b).", "Word-level adversarial attacking is actually a problem of combinatorial optimization (Wolsey and Nemhauser, 1999), as its goal is to craft adversarial examples which can successfully fool the victim model using a limited vocabulary.", "In this paper, as shown in Figure 1, we break this combinatorial optimization problem down into two steps: (1) reducing the search space and (2) searching for adversarial examples.", "The first step is aimed at excluding invalid or low-quality potential adversarial examples and retaining the valid ones with good grammaticality and naturality.", "The most common manner is to pick some candidate substitutes for each word in the original input and use their combinations as the reduced discrete search space.", "However, existing attack models either disregard this step (Papernot et al., 2016) or adopt unsatisfactory substitution methods that do not perform well in the trade-off between quality and quantity of the retained adversarial examples (Alzantot et al., 2018; Ren et al., 2019).", "The second step is supposed to find adversarial examples that can successfully fool the victim model in the reduced search space.", "Previous studies have explored diverse search algorithms including gradient descent (Papernot et al., 2016), genetic algorithm (Alzantot et al., 2018) and greedy algorithm (Ren et al., 2019).", "Some of them, like gradient descent, only work in the white-box setting where full knowledge of the victim model is required.", "In real situations, however, we usually have no access to the internal structures of victim models.", "As for the other black-box algorithms, they are not efficient and effective enough in searching for adversarial examples.", "To solve the problems, we propose a novel black-box word-level adversarial attack model, which reforms both of the two steps.", "In the first step, we design a word substitution method based on sememes, the minimum semantic units, which can retain more potential valid adversarial examples with high quality.", "In the second step, we present a search algorithm based on particle swarm optimization (Eberhart and Kennedy, 1995), which is very efficient and performs better in finding adversarial examples.", "We conduct exhaustive experiments to evaluate our model.", "Experimental results show that, compared with baseline models, our model not only achieves the highest attack success rate (e.g., 100% when attacking BiLSTM on IMDB) but also possesses the best adversarial example quality and comparable attack validity.", "We also conduct decomposition analyses to manifest the advantages of the two parts of our model separately.", "Finally, we demonstrate that our model has the highest transferability and can bring the most robustness improvement to victim models by adversarial training.", "In this section, we first briefly introduce sememes, and then we give an overview of the classical particle swarm
optimization algorithm.", "In linguistics, a sememe is defined as the minimum semantic unit of human languages (Bloomfield, 1926).", "The meaning of a word can be represented by the composition of its sememes.", "In the field of NLP, sememe knowledge bases are built to utilize sememes in practical applications, where sememes are generally regarded as semantic labels of words (as shown in Figure 1).", "HowNet (Dong and Dong, 2006) is the most wellknown one.", "It annotates over one hundred thousand English and Chinese words with a predefined sets of about 2,000 sememes.", "Its sememe annotations are sense-level, i.e., each sense of a (polysemous) word is annotated with sememes separately.", "With the help of HowNet, sememes have been successfully applied to many NLP tasks including word representation learning (Niu et al., 2017), sentiment analysis (Fu et al., 2013), semantic composition (Qi et al., 2019), sequence modeling (Qin et al., 2019), reverse dictionary (Zhang et al., 2019b), etc. 2.2 Particle Swarm Optimization Inspired by the social behaviors like bird flocking, particle swarm optimization (PSO) is a kind of metaheuristic population-based evolutionary computation paradigms (Eberhart and Kennedy, 1995).", "It has been proved effective in solving the optimization problems such as image classification (Omran et al., 2004), part-of-speech tagging (Silva et al., 2012) and text clustering (Cagnina et al., 2014).", "Empirical studies have proven it is more efficient than some other optimization algorithms like the genetic algorithm (Hassan et al., 2005).", "PSO exploits a population of interacting individuals to iteratively search for the optimal solution in the specific space.", "The population is called a swarm and the individuals are called particles .", "Each particle has a position in the search space and moves with an adaptable velocity .", "Formally, when searching in a D -dimensional continuous space S RD with a swarm containing N particles, the position and velocity of each particle can be represented by x n S and v n RD respectively, n { 1 , , N } .", "Next we describe the PSO algorithm step by step.", "(1) Initialize.", "At the very beginning, each particle is randomly initialized with a position x n in the search space and a velocity v n .", "Each dimension of the initial velocity v nd [ V max , V max ] , d { 1 , , D } .", "(2) Record.", "Each position in the search space corresponds to an optimization score.", "The position a particle has reached with the highest optimization score is recorded as its individual best position .", "The best position among the individual best positions of all the particles is recorded as the global best position .", "(3) Terminate.", "If current global best position has achieved the desired optimization score, the algorithm terminates and outputs the global best position as the search result.", "(4) Update.", "Otherwise, the velocity and position of each particle are updated according to its current position and individual best position together with the global best position.", "The updating formulae are v nd = v nd + c 1 r 1 ( p nd x nd ) + c 2 r 2 ( p gd x nd ) , x nd = x nd + v nd , (1) where is the inertia weight, p nd and p gd are the d th dimensions of the n -th particle's individual best position and the global best position respectively, c 1 and c 2 are acceleration coefficients which are positive constants and control how fast the particle moves towards its individual best position and the global best position, and r 1 and r 2 are 
random coefficients.", "After updating, the algorithm goes back to the Record step.", "In this section, we detail our word-level adversarial attack model.", "It incorporates two parts, namely the sememe-based word substitution method and PSO-based adversarial example search algorithm.", "The sememes of a word are supposed to accurately depict the meaning of the word (Dong and Dong, 2006).", "Therefore, the words with the same sememe annotations should have the same meanings, and they can serve as the substitutes for each other.", "Compared with other word substitution methods, mostly including word embedding-based (Sato et al., 2018), language model-based (Zhang et al., 2019a) and synonym-based methods (Samanta and Mehta, 2017; Ren et al., 2019), the sememe-based word substitution method can achieve a better trade-off between quality and quantity of substitute words.", "For one thing, although the word embedding and language model-based substitution methods can find as many substitute words as we want simply by relaxing the restrictions on embedding distance and language model prediction score, they inevitably introduce many inappropriate and low-quality substitutes, such as antonyms and semantically related but not similar words, into adversarial examples which might break the semantics, grammaticality and naturality of original input.", "In contrast, the sememe-based and, of course, the synonym-based substitution methods does not have this problem.", "For another, compared with the synonym-based method, the sememe-based method can find more substitute words and, in turn, retain more potential adversarial examples, because HowNet annotates sememes for all kinds of words.", "The synonym-based method, however, depends on thesauri like WordNet (Miller, 1995), which provide no synonyms for many words like proper nouns and the number of a word's synonyms is very limited.", "An empirical comparison of different word substitution methods is given in Section 4.6.", "In our sememe-based word substitution method, to preserve grammaticality, we only substitute content words 1 and restrict the substitutes to having the same part-of-speech tags as the original words.", "Considering polysemy, a word w can be substituted by another word w only if one of w 's senses has the same sememe annotations as one of w 's senses.", "When making substitutions, we conduct lemmatization to enable more substitutions and delemmatization to avoid introducing grammatical mistakes.", "Before presenting our algorithm, we first explain what the concepts in the original PSO algorithm correspond to in the adversarial example search problem.", "Different from original PSO, the search space of word-level adversarial example search is discrete.", "A position in the search space corresponds to a sentence (or an adversarial example), and each dimension of a position corresponds to a word.", "Formally, x n = w n 1 w nd w nD , w nd V ( w od ) , where D is the length (word number) of the original input, w od is the d -th word in the original input, and V ( w od ) is composed of w od and its substitutes.", "The optimization score of a position is the target label 's prediction probability given by the victim model, where the target label is the desired classification result for an adversarial attack.", "Taking a binary classification task as an example, if the true label of the original input is positive, the target label is negative, and vice versa.", "In addition, a particle's velocity now relates to the position change probability, 
i.e., v nd determines how probable w nd is substituted by another word.", "Next we describe our algorithm step by step.", "First, for the Initialize step, since we expect the adversarial examples to differ from the original input as little as possible, we do not make random initialization.", "Instead, we randomly substitute one word of the original input to determine the initial position of a particle.", "This operation is actually the mutation of genetic algorithm, which has also been employed in some studies on discrete PSO (Higashi and Iba, 2003).", "We repeat mutation N times to initialize the positions of N particles.", "Each dimension of each particle's velocity is randomly 1 Content words are the words that carry meanings and consist mostly of nouns, verbs, adjectives and adverbs.", "initialized between V max and V max .", "For the Record step, our algorithm keeps the same as the original PSO algorithm.", "For the Terminate step, the termination condition is the victim model predicts the target label for any of current adversarial examples.", "For the Update step, considering the discreteness of search space, we follow Kennedy and Eberhart (1997) to adapt the updating formula of velocity to v nd = v nd + (1 ) [ I ( p nd , x nd ) + I ( p gd , x nd )] , (2) where is still the inertia weight, and I ( a, b ) is defined as I ( a, b ) = (cid:40) 1 , a = b, 1 , a (cid:54) = b.", "(3) Following Shi and Eberhart (1998), we let the inertia weight decrease with the increase of numbers of iteration times, aiming to make the particles highly dynamic to explore more positions in the early stage and gather around the best positions quickly in the final stage.", "Specifically, = ( max min ) T t T + min , (4) where 0 < min < max < 1 , and T and t are the maximum and current numbers of iteration times.", "The updating of positions also needs to be adjusted to the discrete search space.", "Inspired by Kennedy and Eberhart (1997), instead of making addition, we adopt a probabilistic method to update the position of a particle to the best positions.", "We design two-step position updating.", "In the first step, a new movement probability P i is introduced, with which a particle determines whether it moves to its individual best position as a whole.", "Once a particle decides to move, the change of each dimension of its position depends on the same dimension of its velocity, specifically with the probability of sigmoid( v nd ) .", "No matter whether a particle has moved towards its individual best position or not, it would be processed in the second step.", "In the second step, each particle determines whether to move to the global best position with another movement probability P g .", "And the change of each position dimension also relies on sigmoid( v nd ) .", "P i and P g vary with iteration to enhance search efficiency by adjusting the balance between local and global search, i.e., encouraging particles to explore more Dataset Task #Class Avg.", "space around their individual best positions in the early stage and search for better position around the global best position in the final stage.", "Formally, P i = P max t T ( P max P min ) , P g = P min + t T ( P max P min ) , (5) where 0 < P min < P max < 1 .", "Besides, to enhance the search in unexplored space, we apply mutation to each particle after the update step.", "To avoid excessive modification, mutation is conducted with the probability P m ( x n ) = min (cid:18) 0 , 1 k E ( x n , x o ) D (cid:19) , (6) where k is a positive constant, x o 
represents the original input, and E measures the word-level edit distance (number of different words between two sentences).", "E ( x n , x o ) D is defined as the modification rate of an adversarial example.", "After mutation, the algorithm returns to the Record step.", "In this section, we conduct comprehensive experiments to evaluate our attack model on the tasks of sentiment analysis and natural language inference.", "For sentiment analysis, we choose two benchmark datasets including IMDB (Maas et al., 2011) and SST-2 (Socher et al., 2013).", "Both of them are binary sentiment classification datasets.", "But the average sentence length of SST-2 (17 words) is much shorter than that of IMDB (234 words), which renders attacks on SST-2 more challenging.", "For natural language inference (NLI), we use the popular Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015).", "Each instance in SNLI comprises a premise-hypothesis sentence pair and is labelled one of three relations including entailment, contradiction and neutral.", "As for victim models, we choose two widely used universal sentence encoding models, namely bidirectional LSTM (BiLSTM) with max pooling (Conneau et al., 2017) and BERTBASE (BERT) (De-vlin et al., 2019).", "For BiLSTM, its hidden states are 128-dimensional, and it uses 300-dimensional pre-trained GloVe (Pennington et al., 2014) word embeddings.", "Details of the datasets and the classification accuracy results of the victim models are listed in Table", "1. 4.2 Baseline Methods We select two recent open-source word-level adversarial attack models as the baselines, which are typical and involve different search space reduction methods (step 1) and search algorithms (step 2).", "The first baseline method (Alzantot et al., 2018) uses the combination of restrictions on word embedding distance and language model prediction score to reduce search space.", "As for search algorithm, it adopts genetic algorithm, another popular metaheuristic population-based evolutionary algorithm.", "We use Embedding/LM+Genetic to denote this baseline method.", "The second baseline (Ren et al., 2019) chooses synonyms from WordNet (Miller, 1995) as substitutes and designs a saliency-based greedy algorithm as the search algorithm.", "We call this method Synonym+Greedy .", "This baseline model is very similar to another attack model TextFooler (Jin et al., 2019), which has extra semantic similarity checking when searching adversarial examples.", "But we find the former performs better in almost all experiments, and thus we only select the former as a baseline for comparison.", "In addition, to conduct decomposition analyses of different methods in the two steps separately, we combine different search space reduction methods (Embedding/LM, Synonym and our sememe-based substitution method (Sememe)), and search algorithms (Genetic, Greedy and our PSO).", "For our PSO, V max is set to 1, max and min are set to 0.8 and 0.2, P max and P min are also set to 0.8 and 0.2, and k in Equation (6) is set to", "2. All these hyper-parameters have been tuned on the validation set.", "For the baselines, we use their recommended hyper-parameter settings.", "For the two population-based search algorithms Genetic and PSO, we set the maximum number of iteration times ( T in Section 3.2) to 20 and the population size ( N in Section 3.2) to 60, which are the same as Alzantot et al. 
(2018).", "To improve evaluation efficiency, we randomly sample 1 , 000 correctly classified instances from the test sets of the three datasets as the original input to be perturbed.", "For SNLI, only the hypotheses are perturbed.", "Following Alzantot et al. (2018), we restrict the length of the original input to 10 100 , exclude the out-of-vocabulary words from the substitute sets, and discard the adversarial examples with modification rates higher than 25%.", "We evaluate the performance of attack models including their attack success rates, attack validity and the quality of adversarial examples.", "The details of our evaluation metrics are listed in Table", "2. (1) The attack success rate is defined as the percentage of the attacks which craft an adversarial example to make the victim model predict the target label.", "(2) The attack validity is measured by the percentage of valid attacks to successful attacks, where the adversarial examples crafted by valid attacks have the same true labels as the original input.", "(3) For the quality of adversarial examples, we divide it into four parts including modification rate, grammaticality, fluency and naturality.", "Grammaticality is measured by the increase rate of grammatical error numbers of adversarial examples compared with the original input, where we use Language-Tool 2 to obtain the grammatical error number of a sentence.", "We utilize the language model perplexity (PPL) to measure the fluency with the help of GPT-2 (Radford et al., 2019).", "The naturality reflects whether an adversarial example is natural and indistinguishable from human-written text.", "We evaluate attack validity and adversarial example naturality only on SST-2 by human evaluation with the help of Amazon Mechanical Turk 3 .", "We randomly sample 200 adversarial examples, and ask the annotators to make a binary sentiment classification and give a naturality score (1, 2 or 3, higher better) for each adversarial example and original input.", "More annotation details are given in Appendix A. 4.5 Attack Performance Attack Success Rate The attack success rate results of all the models are listed in Table", "3. 
We observe that our attack model (Sememe+PSO) achieves the highest attack success rates on all the three datasets (especially the harder SST-2 and SNLI) and two victim models, proving the superiority of our model over baselines.", "It attacks BiLSTM/BERT on IMDB with a notably 100.00%/98.70% success rate, which clearly demonstrates the vulnerability of DNNs.", "By comparing three word substitution methods (search space reduction methods) and three search algorithms, we find Sememe and PSO consistently outperform their counterparts.", "Further decomposition analyses are given in a later section.", "Validity and Adversarial Example Quality We evaluate the attack validity and adversarial example quality of our model together with the two baseline methods (Embedding/LM+Genetic and Synonym+Greedy).", "The results of automatic and human evaluations are displayed in Table 4 and 5 respectively.", "4 Note that the human evaluations including attack validity and adversarial example naturality are conducted on SST-2 only.", "We find that in terms of automatic evaluations of adversarial example quality, including modification rate, grammaticality and fluency, our model consistently outperforms the two baselines on whichever victim model and dataset.", "As for attack validity and adver-2 https://www.languagetool.org 3 https://www.mturk.com 4 Automatic evaluation results of adversarial example quality of all the combination models are shown in Appendix B. Word Substitution Method Search Algorithm BiLSTM BERT IMDB SST-2 SNLI IMDB SST-2 SNLI Embedding/LM Genetic 86.90 67.70 44.40 87.50 66.20 44.30 Greedy 80.90 69.00 47.70 62.50 56.20 42.40 PSO 96.90 78.50 50.90 93.60 74.40 53.10 Synonym Genetic 95.50 73.00 51.40 92.90 78.40 56.00 Greedy 87.20 73.30 57.70 73.00 64.60 52.70 PSO 98.70 79.20 61.80 96.20 80.90 62.60 Sememe Genetic 96.90 78.50 50.90 93.60 74.40 53.10 Greedy 95.20 87.70 70.40 80.50 74.80 66.30 PSO 100.00 93.80 73.40 98.70 91.20 78.90 Table 3: The attack success rates (%) of different attack models.", "sarial example naturality, our Sememe+PSO model obtains a slightly higher overall performance than the two baselines.", "But its adversarial examples are still inferior to original human-authored input, especially in terms of validity (label consistency).", "We conduct Student's t -tests to further measure the difference between the human evaluation results of different models, where the statistical significance threshold of p -value is set to 0 .", "05 .", "We find that neither of the differences of attack validity and adversarial example naturality between different models are significant.", "In addition, the adversarial examples of any attack model have significantly worse label consistency (validity) than the original input, but possesses similar naturality.", "More details of statistical significance test are given in Appendix D. 
For Embedding/LM, relaxing the restrictions on embedding distance and language model prediction score can improve its attack success rate but sacrifices attack validity.", "To make a specific comparison, we adjust the hyper-parameters of Em-bedding/LM+Genetic 5 to increase its attack success rates to 96 .", "90% , 90 .", "30% , 58 .", "00% , 93 .", "50% , 83 .", "50% and 62 .", "90% respectively on attacking the two victim models on the three datasets (in the same order as Table 3).", "Nonetheless, its attack validity rates against BiLSTM and BERT on SST-2 dramatically fall to 59 .", "5% and 56 .", "5% .", "In contrast, ours are 70 .", "5% and 72 .", "0% , and their differences are significant according to the results of significance tests in Appendix D. 4.6 Decomposition Analyses In this section, we conduct detailed decomposition analyses of different word substitution methods (search space reduction methods) and different search algorithms, aiming to further demonstrate the advantages of our sememe-based word substitution method and PSO-based search algorithm.", "Word Substitution Method Table 6 lists the average number of substitutes provided by different word substitution methods on the three datasets.", "It shows Sememe can find much more substitutes than the other two counterparts, which explains the high attack success rates of the models incorporating Sememe.", "Besides, we give a real case from SST-2 in Table 7 which lists substitutes found by the three methods.", "We observe that Embed-ding/LM find many improper substitutes, Synonym cannot find any substitute because the original word pie has no synonyms in WordNet, and only Sememe finds many appropriate substitutes.", "Search Algorithm We compare the two population-based search algorithms Genetic and PSO by changing two important hyper-parameters, namely the maximum number of iteration times T and the population size N .", "The results of attack success rate are shown in Figure 2 and", "3. 
From the two figures, we find our PSO outperforms Genetic 2 3 4 5 10 20 30 40 60 100 Population Size 75% 80% 85% 90% 95% 100% A tt a c k Su cce ss R a t e Sememe+PSO Synonym+PSO Sememe+Genetic Synonym+Genetic Figure 3: Attack success rates of different models with population sizes.", "consistently, especially in the setting with severe restrictions on maximum number of iteration times and population size, which highlights the efficiency of PSO.", "The transferability of adversarial examples reflects whether an attack model can attack a DNN model without any access to it (Kurakin et al., 2016).", "It has been widely used as an important evaluation metric in adversarial attacks.", "We evaluate the transferability of adversarial examples by using BiLSTM to classify the adversarial examples crafted for attacking BERT, and vice versa.", "Table 8 shows the classification accuracy results of transferred adversarial examples.", "Note that lower accuracy signifies higher transferability.", "The lower the accuracy is, the higher the transferability is.", "We find compared with the two baselines, our Se-meme+PSO crafts adversarial examples with overall higher transferability.", "Adversarial training is proposed to improve the robustness of victim models by adding adversarial examples to the training set (Goodfellow et al., 2015).", "In this experiment, for each attack model, we craft 692 adversarial examples (10% of the original training set size) by using it to attack BiLSTM on the training set of SST-2.", "Then we add the adversarial examples to the training set and retrain a BiLSTM.", "We re-evaluate its robustness by calculating the attack success rates of different attack models.", "Table 9 lists the results of adversarial training.", "Note larger attack success rate decrease signifies greater robustness improvement.", "We find that adversarial training can improve the robustness of victim models indeed, and our Sememe+PSO model brings greater robustness improvement than the two baselines, even when the attack models are exactly themselves.", "6 From the perspective of attacking, our Sememe+PSO model is still more threatening than others even under the defense of adversarial training.", "We also manually select 692 valid adversarial examples generated by Sememe+PSO to conduct adversarial training, which leads to even greater robustness improvement (last column of Table 9).", "The results show that adversarial example validity has big influence on adversarial training effect.", "Existing textual adversarial attack models can be classified into three categories according to the perturbation", "perturbation levels of their adversarial examples.", "Sentence-level attacks include adding distracting sentences (Jia and Liang, 2017), paraphrasing (Iyyer et al., 2018; Ribeiro et al., 2018) and performing perturbations in the continuous latent semantic space (Zhao et al., 2018).", "Adversarial examples crafted by these methods usually have profoundly different forms from original input and their validity are not guaranteed.", "Character-level attacks are mainly random character manipulations including swap, substitution, deletion, insertion and repeating (Belinkov and Bisk, 2018; Gao et al., 2018; Hosseini et al., 2017).", "In addition, gradient-based character substitution methods have also been explored, with the help of one-hot character embeddings (Ebrahimi et al., 2018) or visual character embeddings (Eger et al., 2019).", "Although character-level attacks can achieve high success rates, they break 
the grammaticality and naturality of original input and can be easily defended (Pruthi et al., 2019).", "6 For instance, using Embedding/LM+Genetic in adversarial training to defend its attack declines the attack success rate by 2 .", "60% while using our Sememe+PSO model declines by 3 .", "53% .", "As for word-level attacks, following our two-step modeling, their adversarial example space reduction methods (step 1) involve using word embeddings (Sato et al., 2018) or language model (Zhang et al., 2019a) to filter words, selecting synonyms as substitutes (Samanta and Mehta, 2017; Ren et al., 2019; Jin et al., 2019), and their combinations (Alzantot et al., 2018; Glockner et al., 2018).", "The search algorithms (step 2) include gradient descent (Papernot et al., 2016; Sato et al., 2018; Gong et al., 2018), genetic algorithm (Alzantot et al., 2018), Metropolis-Hastings sampling (Zhang et al., 2019a), saliency-based greedy algorithm (Liang et al., 2018; Ren et al., 2019; Jin et al., 2019).", "In comparison, our model adopts new methods in both steps which are more powerful.", "In this paper, we propose a novel word-level attack model comprising the sememe-based word substitution method and particle swarm optimization-based search algorithm.", "We conduct extensive experiments to demonstrate the superiority of our model in terms of attack success rate, adversarial example quality, transferability and robustness improvement to victim models by adversarial training.", "In the future, we will try to increase the robustness gains of adversarial training and consider utilizing sememes in adversarial defense model.", "This work is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61732008, 61772302).", "We also thank the anonymous reviewers for their valuable comments and suggestions." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "result", "method", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "objective", "objective", "objective", "method", "other", "other" ]
[ "Unsupervised constituency parsing aims to learn a constituency parser from a training corpus without parse tree annotations.", "While many methods have been proposed to tackle the problem, including statistical and neural methods, their experimental results are often not directly comparable due to discrepancies in datasets, data preprocessing, lexicalization, and evaluation metrics.", "In this paper, we first examine experimental settings used in previous work and propose to standardize the settings for better comparability between methods.", "We then empirically compare several existing methods, including decade-old and newly proposed ones, under the standardized settings on English and Japanese, two languages with different branching tendencies.", "We find that recent models do not show a clear advantage over decade-old models in our experiments.", "We hope our work can provide new insights into existing methods and facilitate future empirical evaluation of unsupervised constituency parsing.", "Unsupervised constituency parsing, a task in the area of grammar induction, aims to learn a constituency parser from a training corpus without parse tree annotations.", "While research on unsupervised constituency parsing has a long history (Car-roll and Charniak, 1992; Pereira and Schabes, 1992; Stolcke and Omohundro, 1994), recently there is a resurgence of interest in this task and several approaches based on neural networks have been proposed that achieve impressive performance (Shen et al., 2018; Drozdov et al., 2019; Shen et al., 2019; Kim et al., 2019b,a; Jin et al., 2019).", "This work was supported by the National Natural Science Foundation of China (61976139).", "Kewei Tu is the corresponding author.", "With the recent development in research of unsupervised constituency parsing, however, the problem of lacking a unified experimental setting begins to emerge, which makes empirical comparison between different approaches difficult.", "First of all, although almost all previous approaches are evaluated on the Penn Treebank (Marcus and Marcinkiewicz, 1993), they differ in how they preprocess the training data, with respect to the sentence length limit, punctuation removal, vocabulary pruning, and so on.", "For example, non-neural methods such as Constituent Context Model (CCM) (Klein and Manning, 2002) are trained on short sentences, while modern neural based methods such as Parsing-Reading-Predict Network (PRPN) (Shen et al., 2018; Htut et al., 2018) do not impose any limit on sentence length.", "Furthermore, existing approaches also differ in their evaluation metrics, with respect to the methods of computing averages, counting trivial spans, and so on.", "The evaluation results of the same approach using different metrics can differ significantly in some cases.", "Unfortunately, we have seen more than one paper that directly compares approaches evaluated with different metrics.", "In this paper, we propose three standardized experimental settings with respect to data preprocessing, post-processing, evaluation metrics, and tuning.", "We then empirically compare five existing methods under the standardized settings, including two decade-old methods and three recently proposed neural methods.", "We run our experiments on English and Japanese, two languages with different branching tendencies.", "Interestingly, the overall experimental results show that the recent methods do not show a clear advantage over the decade-old methods.", "We hope our empirical comparison could provide new 
insights into the relative strength and weakness of existing methods and our standardized experimental settings could facilitate future evaluation of unsupervised constituency parsing.", "Our pre/post-processing and evaluation source code can be found at https://github.com/i-lijun/ UnsupConstParseEval .", "We choose to evaluate five models under our experimental setup: PRPN 1 (Shen et al., 2018), URNNG 2 (Kim et al., 2019b), CCM 3 (Klein and Manning, 2002), CCL 4 (Seginer, 2007), DIORA 5 (Drozdov et al., 2019).", "We use the open source implementation of each model, which we make sure can reproduce the results in the original papers.", "PRPN is a neural-based model designed for language modeling by leveraging latent syntactic structures.", "It calculates syntactic distances between words of a sentence which can be used to obtain an unlabeled parse tree.", "Note that as a constituency parser, PRPN is incomplete (Dyer et al., 2019).", "URNNG is an unsupervised version of the supervised neural parser RNNG (Dyer et al., 2016).", "It uses a chart parser to approximate the posterior of the original RNNG.", "DIORA is a recursive autoencoder using the inside-outside algorithm to compute scores and representations of spans in the input sentence.", "It is the only model in our comparison that uses external word embedding (in our experiments, we use ELMo (Peters et al., 2018) for English and fastText (Grave et al., 2018) for Japanese).", "CCM is a generative distributive model, the parameters of which are updated with the EM algorithm.", "It is the only model in our comparison that uses the gold Part-of-Speech tags as input.", "CCL is an incremental parser, which uses a representation for syntactic structures similar to dependency links.", "In addition to these models, we note that there are several other models that achieve good results on unsupervised constituency parsing, such as UML-DOP (Bod, 2006), UPParse (Ponvert et al., 2011), feature CCM (Golland et al., 2012), Depth-Bounded PCFG (Jin et al., 2018), and Compound PCFG (Kim et al., 2019a).", "However, because of 1 https://github.com/yikangshen/PRPN 2 https://github.com/harvardnlp/urnng 3 https://github.com/davidswelt/dmvccm 4 https://github.com/DrDub/cclparser 5 https://github.com/iesl/diora limited time and computational resource, as well as a lack of open source implementations for some of the models, we do not evaluate them in our experiments.", "We use two corpora in our evaluation: the English Penn Treebank (PTB) (Marcus and Marcinkiewicz, 1993) and the Japanese Keyaki Treebank (KTB) (Butler et al., 2012).", "We pick KTB in addition to PTB for the purpose of checking the generalizability of existing models on left-branching languages.", "For PTB, we follow the standard split, using section 02-21 for training, 22 for validation and 23 for testing.", "For KTB, we shuffle the corpus and use 80% of the sentences for training, 10% for validation and 10% for testing.", "Many previous approaches learn from training sentences of length 10 , but recent models based on language modeling often use a length limit of 40 or set no length limit at all.", "We experiment with both length 10 and length 40 .", "We do not impose any length limit on test sentences.", "Previous models also have different ways to deal with punctuation.", "Although Jones (1994) and Spitkovsky et al. 
(2011) point out that careful treatment of punctuation may be helpful in unsupervised parsing, many previous models choose to remove punctuation and some recent models treat punctuation as normal words.", "Only a few models such as CCL (Seginer, 2007) make special treatment of punctuation.", "We experiment with two settings for length 40, one with punctuation and one without.", "To reduce the vocabulary size, we replace all the numerals with a < num > token and words that appear only once with < unk > .", "The parses output by CCL do not contain punctuation even when it is trained with punctuation, so it cannot be evaluated properly using a test set with punctuation.", "In addition, although the right branching baseline is a very strong baseline when punctuation is removed, its evaluation score becomes very low if punctuation is included because of its treatment of trailing punctuation.", "So we extend the post-processing method used in (Drozdov et al., 2019) to either add back punctuation marks or modify their connections in a parse tree: for a trailing punctuation mark, we manually attach it to Train ptb len10 nopunct ptb len40 nopunct ptb len40 punct Metric micro macro evalb micro macro evalb micro macro evalb Evaluated on test sentences with length 10.", "the root of the constituency parse tree; for a punctuation mark inside the sentence, we attach it to the lowest common ancestor of its two adjacent words in the parse tree.", "Note that the above procedure will produce non-binary parse trees.", "The performance of a constituency parser is often evaluated with F1 scores.", "However, two ways of averaging F1 scores over multiple test sentences are available, i.e., micro average and macro average.", "In micro average, all the span predictions are aggregated together and then compared with the gold spans to get the precision and recall.", "In contrast, macro average is obtained by calculating the F1 score for each individual sentence and then take an average over all the sentences.", "We use both metrics in our experiments.", "Note that when computing F1 scores, we remove trivial spans, i.e., single-word spans and whole-sentence spans, and we calculate duplicate constituents only once.", "We additionally use the standard PARSEVAL metric computed by the Evalb program 6 .", "Although Evalb calculates the micro average F1 score, it differs from our micro average metric in that it will count the whole sentence spans and duplicated spans are calculated and not removed.", "To maintain the unsupervised nature of our experiments, we avoid the common practice of using gold parses of the validation set for hyperparameter tuning.", "CCM and CCL do not expose any hyperparameter for tuning.", "We tune PRPN and URNNG based on their perplexity on the validation set.", "DIORA does not provide a metric that can be used for tuning, so we do not tune it.", "We tune PRPN and URNNG with the same time budget of 5 days on a GPU cluster with TITAN V GPUs.", "We use Bayesian optimization 7 to automatically tune these models.", "We set the ranges of hyperparameter values around the default values provided in the original papers.", "We list the experimental results of all the models and the left/right-branching baselines for PTB and KTB in Table 1 and Table 2 respectively.", "Since all the models except CCL produce binary parse trees, we also show the score upper bound that a binary tree parser can achieve, which is computed by binarizing the gold trees and calculating their scores against the original gold 
trees.", "Note that our results can be very different from those reported in the original papers of these models because of different experimental setups.", "For example, the original CCM paper reports an F1 score of 71.9 on PTB, but we report 62.97.", "This is because the original CCM experiment uses the whole WSJ corpus (with length 10) for both training and test, which is very different from our setup.", "Also note that for the left and right branching baselines and the binary upper bound, the scores for length 10 no punct and length 40 no punct are the same, because these baselines do not require training and are evaluated on the same test sets.", "Overall Comparison There is no universal winner for all the settings but there is clear winners for specific settings.", "On PTB, it is surprising to see that each model is the winner of at least one setting.", "Right-branching is a very strong baseline and with post-processing it outperforms all the models in some settings of ptb len40 punct.", "On KTB, DIORA is the winner in most of the settings, while CCM has a strong performance on ktb len10 nopunct.", "Left-branching is a strong baseline especially when evaluated on sentences with length 10 .", "Although CCM and DIORA achieve the best overall performance, we note that they both utilize additional resources.", "CCM uses gold POS tags and DIORA uses pretrained word embedding.", "Our preliminary experiments on PTB show a signifi-cant drop in performance when we run CCM using words without gold POS tags, with the Evalb F1 score dropping from 70.14 to 57.29 when evaluated on length 10 under the ptb len10 nopunct setting.", "DIORA also performs worse when pretrained word embedding is replaced by randomly initialized embedding, with the average Evalb F1 score dropping from 49.39 to 42.63 when evaluated on all sentences under the ptb len40 nopunct setting.", "Overall, we do not see a clear advantage of more recent neural models over traditional models.", "There are two factors that should be taken into account though.", "First, neural models are significantly slower and therefore may not have been sufficiently tuned because of the fixed tuning time budget.", "Second, the training data may still be too small from the perspective of neural models.", "Finally, we also note that our post-processing method for adding back punctuation almost always improves the score in PTB, sometimes by a large margin (e.g., for CCM and RBranch).", "On KTB, however, it sometimes decreases the score.", "This may be caused by different annotation standards for punctuation in the two treebanks.", "Impact of Experimental Settings Different experimental settings lead to remarkable difference in the evaluation scores of the same model.", "Different evaluation metrics also produce very different scores.", "With the same output parses, they can sometimes differ more than 20 F1 points.", "Running Time Traditional models such as CCM and CCL are fast, taking only several minutes.", "On the other hand, neural models take hours or even days to train.", "Apart from training, the inference stage is also very fast for traditional models but slow for neural models.", "Considering their close F1 scores, we believe at least in the scenario of limited data and computational resources, traditional models are preferred to neural models.", "Comments on Individual Models We find that CCM when trained with length 10 sentences is very competitive.", "On PTB, it even outperforms all the other models that are trained on length 40 data with no 
punctuation.", "However, CCM cannot handle punctuation very well without post-processing.", "URNNG seems to degrade to mostly right-branching in many settings (thus having very low standard deviations).", "This is possibly due to two reasons:", "1) URNNG takes a lot of time to train and is therefore only lightly tuned because of the tuning time budget;", "2) in the original paper, URNNG is trained with punctuation but evaluated without punctuation, which is quite different from our settings.", "PRPN has a strong performance on PTB when trained with long sentences.", "However, we note that PRPN has a right-branching bias during inference (Dyer et al., 2019).", "If we switch its inference bias to left-branching, the performance drops significantly (for more than 10 points).", "Because of its right-branching bias, PRPN does not perform well on KTB.", "For the sentence length limit, we think one can set any limit on the training data, but should report evaluation results on both length 10 and all-length test data.", "For the evaluation metrics, since small details in implementing micro and macro average will lead to nontrivial differences, we suggest using PARSEVAL which has publicly available implementation.", "For models sensitive to random seeds, we recommend reporting means and standard deviations from multiple runs.", "We also recommend evaluation on treebanks of both left-branching and right-branching languages, such as PTB and KTB." ]
[ "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain", "objective", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "method" ]
[ "Zhou Zhao Zhejiang University zhaozhou@zju.edu.cn", "Abstract Non-autoregressive text to speech (NAR-TTS) models have attracted much attention from both academia and industry due to their fast generation speed.", "One limitation of NAR-TTS models is that they ignore the correlation in time and frequency domains while generating speech mel-spectrograms, and thus cause blurry and over-smoothed results.", "In this work, we revisit this over-smoothing problem from a novel perspective: the degree of over-smoothness is determined by the gap between the complexity of data distributions and the capability of modeling methods.", "Both simplifying data distributions and improving modeling methods can alleviate the problem.", "Accordingly, we first study methods reducing the complexity of data distributions.", "Then we conduct a comprehensive study on NAR-TTS models that use some advanced modeling methods.", "Based on these studies, we find that 1) methods that provide additional condition inputs reduce the complexity of data distributions to model, thus alleviating the over-smoothing problem and achieving better voice quality.", "2) Among advanced modeling methods, Laplacian mixture loss performs well at modeling multimodal distributions and enjoys its simplicity, while GAN and Glow achieve the best voice quality while suffering from increased training or model complexity.", "3) The two categories of methods can be combined to further alleviate the over-smoothness and improve the voice quality.", "4) Our experiments on the multi-speaker dataset lead to similar conclusions as above and providing more variance information can reduce the difficulty of modeling the target data distribution and alleviate the requirements for model capacity.", "Non-autoregressive text to speech (NAR-TTS) models (Ren et al., 2019, 2020; Peng et al., 2020; Vainer and Duek, 2020; ancucki, 2020; Kim", "et al., 2020; Miao et al., 2020) have shown much faster inference speed than their autoregressive counterparts (Wang et al., 2017; Shen et al., 2018; Ping et al., 2018), while achieving comparable or even better voice quality (Ren et al., 2019, 2020).", "The text-to-speech mapping can be formulated as a conditional distribution P ( y | x ) where x and y are the text and speech sequences, respectively.", "Text-to-speech mapping is a one-to-many mapping problem (Wang et al., 2017), since multiple possible speech sequences correspond to a text sequence due to speech variations such as pitch, duration and prosody.", "Furthermore, speech mel-spectrograms are strongly correlated in time and frequency dimensions (see Section 2.1 for detailed analyses).", "Therefore, P ( y | x ) is actually a dependent and multimodal distribution (Ling et al., 2013; Zen and Senior, 2014) 1 .", "Early non-autoregressive TTS models (Ren et al., 2019; Peng et al., 2020) use mean absolute error (MAE) or mean square error (MSE) as loss function to model speech mel-spectrograms, implicitly assuming that data points in mel-spectrograms are independent to each other and follow a unimodal distribution 2 .", "Consequently, the mel-spectrograms following dependent and multimodal distributions cannot be well modeled by the MAE or MSE loss, which presents great challenges in non-autoregressive TTS modeling and causes over-smoothed (blurred) predictions in mel-spectrograms (Vasquez and Lewis, 2019; Sheng and Pavlovskiy, 2019).", "In this work, we conduct a comprehensive study on the over-smoothing problem in TTS.", "We find that the over-smoothness is closely 
related to the mismatch between the complexity of data distri-1 Here \"dependent\" means that the different dimensions (in either temporal domain or frequency domain) of y are dependent to each other.", "2 MAE can be derived from the Laplace distribution and MSE from the Gaussian distribution, both of which are unimodal.", "butions ( e . g ., dependent and multimodal distributions are more complex than independent and unimodal distributions) and the power of modeling methods ( e . g ., simple losses such as MAE and MSE are less powerful than GAN or Glow-based methods).", "Both simplifying data distributions and enhancing modeling methods can alleviate the over-smoothing problem.", "From this perspective, we categorize recent methods combating over-smoothness into two classes: 1. Simplifying data distributions : The data distribution P ( y | x ) can be simplified by providing more conditional input information.", "We review two main methods: 1) Providing the previous mel-spectrogram frames y <t to predict current frame y t , i.e., factorizing the complex dependent distribution P ( y | x ) into a simpler conditional distribution (cid:81) t P ( y t | y <t , x ) , as used in autoregressive TTS models (Wang et al., 2017; Li et al., 2019).", "2) Providing more variance information v 3 such as pitch, duration, and energy to predict mel-spectrogram in parallel, i.e., modeling P ( y | x, v ) rather than P ( y | x ) , as done in some non-autoregressive TTS models (Ren et al., 2020; ancucki, 2020).", "2. Enhancing modeling methods : Generally speaking, the modeling method should be powerful enough to fit complex data distributions.", "We review methods based on different distribution assumptions, including Laplacian mixture loss, structural similarity index (SSIM) (Wang et al., 2004) loss, generative adversarial network (GAN) (Lee et al., 2020) and Glow (Kim et al., 2020).", "By studying those methods, we have the following findings.", "We hope that our studies and findings can inspire the community to design better models for TTS.", "By either autoregressive factorization or providing more variance information as input, complex distributions can be simplified to be less dependent and multimodal, which clearly alleviates the over-smoothing problem and improves the generated voice quality.", "Among them, providing more variance information such as FastSpeech 2 enjoys the advantages of fast generation due to its non-autoregressive nature.", "3 The term \"variance information\" is first mentioned in FastSpeech 2 (Ren et al., 2020), which refers some speechrelated conditional information", "loss significantly improves the quality of generated mel-spectrograms and enjoys the simplicity of modeling.", "GAN and Glow achieve the best quality under both subjective and objective evaluations (since they make no assumptions about output distributions), but at the cost of increased training or model complexity.", "To further analyze the effectiveness of combining the basic ideas of the above two categories, we enhance FastSpeech 2 (considering its fast inference speed and good quality in the first category) with Laplacian mixture loss, SSIM, GAN and Glow, respectively.", "We find that the enhanced FastSpeech 2 generates speech with even better quality and the over-smoothing problem is further alleviated, which shows that the methods in the two categories are complementary to each other.", "We also extend our experiments to multi-speaker TTS task and obtain similar conclusions as above.", "Besides, we find that 
Glow has poor modeling ability in multi-speaker scenarios due to limited model capacity and more complex target data distributions compared with the single-speaker scenario, while it can be greatly alleviated by simplifying data distributions (introducing more variance information).", "Text-to-speech mapping is a one-to-many mapping since multiple speech sequences can possibly correspond to a text sequence with different pitch, duration and prosody, making the mel-spectrograms distribution P ( y | x ) multimodal.", "And due to the continuous nature of the mel-spectrograms, adjacent data points are dependent to each other.", "In this section, we first empirically characterize the distribution of P ( y | x ) in TTS 4 through visualization (Section 2.1), and then provide a novel perspective to study the over-smoothing problem in TTS (Section 2.2).", "We first visualize the distribution of P ( y | x ) to see whether it is dependent and multimodal 5 .", "Denote 4 In this work, we mainly focus on text (phoneme) to mel-spectrogram mapping, and leave mel-spectrogram to waveform mapping and text to waveform mapping to future work.", "We approximate the mel-spectrogram distribution on LJSpeech dataset.", "The detailed data processing procedure is the same as that in Section 3.2.", "the data point in the t -th frame and the f -th frequency bin in the ground-truth mel-spectrogram as y ( t, f ) , where t [1 , T ] , f [1 , F ] , and T , F represent the total length and the number of frequency bins of mel-spectrograms respectively.", "Since different phonemes have different mel-spectrograms, we analyze the distribution of each phoneme separately.", "Specifically, for all the mel-spectrogram frames corresponding to each phoneme ph , we calculate three distributions: 1) marginal distribution P ( y ( t, f ) | x = ph ) ; 2) joint distribution between two different frequency bins P ( y ( t, f 1 ) , y ( t, f 2 ) | x = ph ) ; 3) joint distribution between two different time frames P ( y ( t 1 , f ) , y ( t 2 , f ) | x = ph ) 6 .", "For each distribution, we first compute the histograms and smooth into probability density functions with kernel density estimation (Dehnad, 1987) for better visualization.", "The marginal distributions P ( y ( t, f ) | x = ph ) for several different phonemes are shown in Figure 1. 
It can be seen that the shape of the marginal distribution of each data point y ( t, f ) in mel-sepctrogram is multimodal, especially for data points in high-frequency bins.", "The joint distribution P ( y ( t, f 1 ) , y ( t, f 2 ) | x = ph ) and P ( y ( t 1 , f ) , y ( t 2 , f ) | x = ph ) are shown in Figure 2a and Figure 2b, respectively.", "Obviously, those 6 We denote one of the frame index among all mel-spectrograms corresponding to the phoneme x as t 1 and set t 2 = t 1 + 1 .", "joint distributions are also multimodal and neighboring points in mel-spectrograms are strongly correlated.", "From these observations, we can see that the distribution P ( y | x ) of mel-sepctrograms is multimodal and dependent across time and frequency.", "The dependent and multimodal distribution of P ( y | x ) increases the difficulty of TTS modeling and causes over-smoothing problem if it is not correctly handled.", "We provide a novel perspective to depict this problem: the degree of over-smoothness is closely related to the gap between the complexity of data distributions and the power of modeling methods.", "A larger gap between the power of a modeling method and the complexity of a data distribution results in more severe over-smoothing problem.", "Consequently, simplifying data distributions and enhancing modeling methods can alleviate the over-smoothing problem.", "From this perspective, 8199 we list the methods to combat over-smoothness in two categories in Table 1. In the following sections, we first explore the effectiveness of simplifying data distributions (Sec-tion 3) and then that of enhancing modeling methods (Section 4).", "Finally, we combine the basic ideas of these two categories to improve an existing model (Section 5.1) and conduct further exploration on the multi-speaker dataset (Section 5.2).", "| Simplifying data distributions P ( y | x ) is usually achieved by providing more conditional input information in the TTS literature (Wang et al., 2017; Li et al., 2019; Ren et al., 2020) 7 .", "In this way, more conditional information in input can alleviate the one-to-many mapping issue, and thus the distribution becomes less multimodal and the correlation along time and frequency is reduced given more condition.", "There are mainly two methods to provide more conditional input information: 1) autoregressive factorization along the time (Wang et al., 2017; Li et al., 2019) or frequency axis; 2) providing more variance information to predict mel-spectrogram in parallel, as used in some non-autoregressive TTS models (Ren et al., 2020; ancucki, 2020).", "In this section, we first overview these two kinds of methods in detail and conduct the experiment analyses to measure their effectiveness in solving the over-smoothing problem.", "In this subsection, we overview the two kinds of methods to simplify data distributions, including autoregressive factorization and providing more variance information as input.", "Autoregressive Factorization The joint probability P ( y | x ) can be factorized according to the chain rule in two ways along time and frequency dimensions respectively: P ( y | x ) = (cid:81) Tt =1 P ( y t | y <t , x ) , where y <t is the proceeding frames before the t -th frame and T is the total frames.", "P ( y | x ) = (cid:81) Ff =1 P ( y f | y <f , x ) , where y <f is the proceeding frequency bins before the f -th 7 Ren et al. 
"More Variance Information: Another way to simplify the data distribution P(y|x) is to provide more variance information $v$, such as pitch, duration, and energy, converting P(y|x) into P(y|x,v), as used in previous works (Ren et al., 2020; Łańcucki, 2020).", "In this way, the distribution becomes less multimodal and the correlation along time and frequency is reduced.", "Experimental Settings: We conduct all experiments on the LJSpeech dataset (Ito, 2017). (The corresponding audio samples are available at https://revisittts.github.io/revisittts/ .)", "We use ParallelWaveGAN (PWG) (Yamamoto et al., 2020) as the vocoder to convert mel-spectrograms to waveforms.", "To evaluate the voice quality of the synthesized speech subjectively, we conduct MOS (Loizou, 2011) tests.", "To measure the degree of over-smoothness of mel-spectrograms objectively, we calculate the variation of the Laplacian (Pech-Pacheco et al., 2000) (Var_L) on the generated mel-spectrograms and compare it with that of the ground-truth mel-spectrograms.", "We use FastSpeech (Ren et al., 2019) trained with the MAE loss as the baseline model, as shown in Figure 3.", "For autoregressive modeling along time, we directly use TransformerTTS (Li et al., 2019).", "For autoregressive modeling along frequency, we modify the vanilla mel-spectrogram decoder in the baseline model to support autoregressive generation along frequency (which we call the FreqAR decoder) and feed $y_{<f}$ to the FreqAR decoder to model $P(y_f \mid y_{<f})$, as shown in Figure 4a.", "For the method providing more variance information, we use FastSpeech 2 (Ren et al., 2020), which adds pitch, duration, and energy information to the variance adaptor of the baseline model.", "We put more detailed model descriptions in Appendix A and experimental settings in Appendix B.", "(Table 2: Results of different methods for simplifying data distributions for TTS.)", "Results and Analyses: We conduct MOS evaluation and compute Var_L to compare methods including the baseline model (denoted as MAE), autoregressive modeling along frequency (denoted as FreqAR) and along time (denoted as TimeAR), and FastSpeech 2.", "The results are shown in Table 2.", "We also visualize the mel-spectrograms generated by all modeling methods in Figure 5.",
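A minimal sketch of the Var_L sharpness metric used in these comparisons (higher is sharper; over-smoothed outputs score low):

```python
import numpy as np
from scipy.ndimage import laplace

def variation_of_laplacian(mel):
    """Var_L: variance of the Laplacian filter response of a mel-spectrogram."""
    return laplace(mel.astype(np.float64)).var()

# Usage sketch: compare the generated output against the ground truth.
# gap = variation_of_laplacian(mel_generated) - variation_of_laplacian(mel_gt)
```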
"From the results, we have the following observations: 1) Autoregressive modeling along the frequency (FreqAR) and time (TimeAR) dimensions both outperform MAE in terms of MOS and Var_L, which shows that simplifying data distributions using frequency- or time-dimension factorization can ease the over-smoothing problem.", "However, the autoregressive models suffer from slow inference.", "2) FastSpeech 2 also greatly outperforms the baseline model, further indicating that simplifying data distributions by providing more variance information is another way to alleviate the over-smoothing problem.", "In conclusion, autoregressive modeling and providing more variance information can both simplify the complex distribution to be less dependent and multimodal and thus alleviate the over-smoothing problem.", "Besides, methods that provide more variance information, such as FastSpeech 2, also enjoy fast inference speed.", "Most previous non-autoregressive TTS models (Ren et al., 2019; Wang et al., 2019; Ren et al., 2020) use the mean absolute error (MAE) or mean square error (MSE) as the training loss.", "However, these losses fail to capture dependent and multimodal distributions.", "The MAE loss is derived from the Laplace distribution and MSE from the Gaussian distribution (Chai and Draxler, 2014), which means that minimizing MAE/MSE maximizes the data log-likelihood under a Laplace/Gaussian distribution.", "Both of these distributions are unimodal and thus encourage the model to predict a single mode for each data point.", "As a result, the model just learns an average of all modes, which leads to over-smoothed results.", "Another problem brought by MAE and MSE is that they are independent across time and frequency for the mel-spectrogram output, which ignores the correlation across the time and frequency axes of the mel-spectrogram.",
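The equivalence noted above can be written out explicitly; with the scale parameters held fixed, the per-point negative log-likelihoods reduce to the familiar losses (a standard derivation):

```latex
\begin{aligned}
-\log p_{\mathrm{Laplace}}(y \mid \mu, b)
    &= \frac{|y-\mu|}{b} + \log 2b
    &&\propto\; |y-\mu| \quad (\text{MAE}),\\
-\log p_{\mathrm{Gauss}}(y \mid \mu, \sigma^{2})
    &= \frac{(y-\mu)^{2}}{2\sigma^{2}} + \tfrac{1}{2}\log 2\pi\sigma^{2}
    &&\propto\; (y-\mu)^{2} \quad (\text{MSE}).
\end{aligned}
```

Both densities have a single peak at $\mu$, so the loss-minimizing prediction collapses to one central value per bin, which is exactly the mode-averaging behavior described above.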
"In this section, we first introduce several enhanced modeling methods to directly model the dependent and multimodal distribution P(y|x) (Section 4.1) and then conduct experiments to compare and analyze these methods (Section 4.2).", "We list the enhanced methods, including the SSIM loss, the Laplacian mixture loss, GAN, and the Glow-based method, together with their distribution assumptions, in Table 3.", "We put the details of each method in Appendix A.", "Structural Similarity Index (SSIM): SSIM (Wang et al., 2004) is one of the state-of-the-art perceptual metrics for measuring image quality; it can capture structural information and texture.", "The value of SSIM is between 0 and 1, where 1 indicates perfect perceptual quality relative to the ground truth.", "The model architecture for the SSIM loss follows the baseline model in Figure 3, and we directly replace the MAE loss with the SSIM loss.", "Laplacian Mixture (LM) Loss: The Laplacian mixture loss can model samples independently with a multimodal distribution. (We choose the Laplace distribution as the mixture component since the distribution of spectrogram magnitudes is Laplacian (Tits et al., 2019; Usman et al., 2018; Gazor and Zhang, 2003); we have also tried other mixture distributions, e.g., mixtures of logistic and Gaussian distributions, with similar findings.)", "As shown in Figure 4b, the basic architecture of the mel-spectrogram decoder follows the baseline model, and we modify the output layer of the baseline model to predict the multimodal distribution of each mel-spectrogram bin.", "Generative Adversarial Network (GAN): We introduce adversarial training to better model the dependent and multimodal distribution.", "Inspired by Wu and Luan (2020) and Binkowski et al. (2020), we adopt multiple random window discriminators.", "We use the LSGAN (Mao et al., 2017) loss to train the TTS model and the multi-window discriminators.", "Although GAN is well known to suffer from the mode collapse issue, in practice it performs very well in modeling multimodal distributions when well tuned (Mao et al., 2017).", "GAN can avoid the averaged frame prediction caused by the MAE loss's strong unimodal assumption and can generate high-quality and reasonable results when the data distribution is multimodal.", "Therefore, we regard GAN as a \"no distribution assumption\" method.", "Glow: Glow (Kingma and Dhariwal, 2018) is a kind of normalizing flow, which maps data into a known and simple prior (e.g., a spherical multivariate Gaussian distribution).", "As shown in Figure 4d, our Glow-based decoder models the distribution of mel-spectrograms conditioned on the encoder output hidden states $x$.", "The dataset, baseline model, and evaluation metrics are the same as in Section 3.2.", "We conduct MOS evaluation and compute Var_L to compare different modeling methods, including MAE (denoted as MAE), the Laplacian mixture loss (denoted as LM), the structural similarity index (denoted as SSIM), the generative adversarial network (denoted as GAN), and Glow (denoted as Glow).", "The results are shown in Table 4.", "We also visualize the mel-spectrograms generated by all modeling methods in Figure 6.",
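As a concrete sketch of the Laplacian mixture loss described above, assuming the modified output layer predicts, per mel-spectrogram bin, k mixture logits, k means, and k log-scales (this exact parameterization is an assumption):

```python
import torch
import torch.nn.functional as F

def laplace_mixture_nll(weight_logits, means, log_scales, target):
    """NLL of target under a k-component Laplace mixture; inputs end in dim k."""
    target = target.unsqueeze(-1)                      # broadcast over components
    scales = log_scales.exp()
    # Per-component Laplace log-density: -|y - mu| / b - log(2b)
    comp_logp = -(target - means).abs() / scales - torch.log(2 * scales)
    log_w = F.log_softmax(weight_logits, dim=-1)       # mixture weights sum to 1
    return -torch.logsumexp(log_w + comp_logp, dim=-1).mean()
```

Because the mixture can place mass on several modes, the model is no longer forced to predict a single averaged value per bin.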
"(Table 4: Results of different modeling methods for TTS.)", "From the results, we can find that: 1) LM and SSIM outperform MAE in terms of voice quality according to the MOS evaluation and the mel-spectrogram visualizations, which shows that even simply replacing the loss function with one that avoids strong unimodal and independence assumptions can significantly alleviate the over-smoothing problem and improve the generated mel-spectrograms.", "Among the simple loss functions, LM performs the best and generates sharper and clearer mel-spectrograms, as its Var_L is closest to that of GT, which demonstrates that the Laplacian mixture loss can model the multimodal distribution well.", "2) GAN and Glow show superior performance compared with the other modeling methods, indicating that modeling the mel-spectrogram with a distribution that is both dependent and multimodal can significantly ease the over-smoothing problem and improve the generated speech.", "From the visualizations, we can see that GAN and Glow generate formants with rich details in the middle/high-frequency region.", "However, the GAN-based and Glow-based methods suffer from training complexity and a large model footprint, respectively: GAN relies on the discriminator to fit the distribution, which causes unstable training and difficult hyper-parameter tuning, while Glow imposes strong architectural constraints and requires a large model footprint (about 2x the model parameters) to keep the bijection between the simple independent latent distribution (e.g., a spherical multivariate Gaussian distribution) and the complex and dependent data distribution.", "In this section, we first explore the advantages of combining methods from the two categories and then perform extended analyses on a multi-speaker dataset.", "After studying the two categories of methods to combat the over-smoothing problem, we have demonstrated that both simplifying data distributions and enhancing modeling methods can alleviate this problem and improve the voice quality of TTS.", "A natural thought is to combine methods from the two categories, which may integrate the advantages of both aspects to reduce the gap between the complexity of the data distribution and the power of the modeling method, resulting in better voice quality and further alleviating the over-smoothing problem.", "To demonstrate this idea, we choose FastSpeech 2 as the model from the first category, since it achieves better perceptual voice quality than the autoregressive models according to Table 2 and, importantly, enjoys fast and robust inference due to its non-autoregressive nature.", "Then we combine the two categories by applying the enhanced modeling methods to FastSpeech 2 and obtain the following systems: 1) FastSpeech 2 + SSIM, which replaces the MAE loss with the SSIM loss; 2) FastSpeech 2 + LM, which predicts a k-component mixture of Laplace distributions and uses the LM loss for training; 3) FastSpeech 2 + GAN, which adds the adversarial loss to FastSpeech 2; and 4) FastSpeech 2 + Glow, which replaces the mel-spectrogram decoder with Glow.", "We conduct subjective and objective evaluations to compare these combined systems with FastSpeech 2.", "The results are shown in Table 5.",
"We can see that combining FastSpeech 2 with more powerful modeling methods further alleviates the over-smoothing problem in terms of Var_L and improves the generated speech quality in terms of MOS, which demonstrates our idea that simplifying data distributions and enhancing modeling methods can be used together and are complementary to each other in further improving TTS performance.", "Among these methods, GAN performs best in alleviating the over-smoothing problem and Glow achieves the best perceptual voice quality, while both come at the cost of increased training or model complexity, as described in Section 4.2; compared with GAN and Glow, LM can generate mel-spectrograms with comparable clearness and quality while enjoying its simplicity.", "To demonstrate the generalization of our findings and provide more insights, we conduct experiments on the multi-speaker LibriTTS (Zen et al., 2019) dataset.", "We modify our models to support multiple speakers by adding speaker embeddings to the encoder outputs to indicate the speaker identity.", "We put more details of our multi-speaker TTS models and the dataset in Appendix C.", "We compare the following systems: 1) FastSpeech; 2) FastSpeech + GAN; 3) FastSpeech + Glow; 4) FastSpeech 2; 5) FastSpeech 2 + GAN; and 6) FastSpeech 2 + Glow.", "(Table 6: Results of different multi-speaker TTS models on the multi-speaker dataset.)", "The results are shown in Table 6.", "We can see that: 1) simplifying data distributions by providing more variance information and enhancing the modeling method can each alleviate the over-smoothing problem and improve the generated mel-spectrograms, and combining them achieves even better audio quality, which is consistent with the findings on the single-speaker dataset.", "2) FastSpeech + Glow leads to inferior performance compared with the baseline model (FastSpeech), because the multi-speaker dataset has more complex target data distributions and Glow requires a large model footprint to capture them, as described in Section 4.2.", "When provided with more variance information, FastSpeech 2 + Glow achieves much better performance, since the additional information reduces the difficulty of modeling the target data distribution and thus alleviates the requirements on model capacity.", "Non-autoregressive Text to Speech: Previous TTS systems such as Tacotron (Wang et al., 2017), Tacotron 2 (Shen et al., 2018), Deep Voice 3 (Ping et al., 2018), and TransformerTTS (Li et al., 2019) synthesize speech sequentially and thus suffer from slow inference speed.", "To address this shortcoming, various non-autoregressive TTS models have been proposed to synthesize spectrogram frames in parallel.", "FastSpeech (Ren et al., 2019) and ParaNet (Peng et al., 2020) are early non-autoregressive TTS models that both adopt a fully parallel model architecture and rely on an autoregressive teacher model to provide the alignment between phonemes and mel-spectrograms.", "FastSpeech introduces knowledge distillation for mel-spectrograms to simplify data distributions.", "FastSpeech 2 (Ren et al., 2020) and FastPitch (Łańcucki, 2020) introduce more variance information as input to further reduce the output uncertainty and ease the one-to-many mapping problem.", "However, they are trained with the MAE loss, which fits an independent and unimodal Laplace distribution and results in blurry and over-smoothed mel-spectrograms at inference.", "SpeedySpeech (Vainer and Dušek, 2020) uses a combination of MAE and structural similarity index (SSIM) losses to avoid blurry mel-spectrograms.", "Glow-TTS (Kim et al., 2020) and Flow-TTS (Miao et al., 2020) both use a flow-based decoder
 to apply invertible transforms between mel-spectrograms and noise data sampled from a simple distribution.", "Sheng and Pavlovskiy (2019) employ a cascaded Tacotron 2 and GAN pipeline to reduce the over-smoothness of synthesized speech.", "Multi-SpectroGAN (Lee et al., 2020) introduces a generative adversarial network (GAN) with a multi-scale discriminator and is trained with only the adversarial feedback by conditioning the hidden states with variance information (e.g., duration, pitch, and energy) fed to the discriminator.", "GAN-based and flow-based methods can model dependent and multimodal distributions well, but they suffer from training or model complexity.", "In this work, we conduct systematic studies of several modeling methods in both NAR-TTS and AR-TTS from a novel perspective.", "Handling Dependent and Multimodal Distributions: Dependent and multimodal distributions increase the uncertainty of model training and lead to blurry results, which has been observed in many generation tasks (Gu et al., 2018; Isola et al., 2017; Mathieu et al., 2016).", "There are some common ways to handle dependent and multimodal distributions: 1) using loss functions or modeling methods that can fit the distributions well; and 2) introducing some input variables or transforming the target data to simplify the data distributions.", "In neural machine translation, Gu et al. (2018) tackle this problem by introducing knowledge distillation to simplify target data distributions and by using fertilities extracted by an external aligner to directly model the nondeterminism of the translation process.", "In the image translation task, Isola et al. (2017) compare the images generated by their adversarial method with those generated using MAE and MSE losses and conclude that the adversarial loss can help avoid blurry results.", "In the video prediction task, Mathieu et al. (2016) propose a multi-scale architecture, an adversarial training method, and an image gradient difference loss function to deal with the inherently blurry predictions obtained from the MSE loss function.", "However, to the best of our knowledge, there has been no systematic analysis of multimodal and dependent distributions in the TTS task.", "In this paper, we conduct comprehensive analyses and studies of handling dependent and multimodal distributions in TTS from a novel perspective.", "In this paper, we revisited the over-smoothing problem in TTS from a novel perspective: the degree of over-smoothness is determined by the gap between the complexity of the data distribution and the capability of the modeling method.", "Under this perspective, we classified existing methods for combating over-smoothness into two categories, simplifying data distributions and enhancing modeling methods, and conducted comprehensive analyses and studies of these methods.", "For simplifying data distributions, we found that both AR factorization and providing more variance information as input (e.g., FastSpeech 2) can alleviate the over-smoothing problem, and FastSpeech 2 enjoys the advantage of fast generation over AR factorization.",
"For enhancing modeling methods, we found that the Laplacian mixture loss can improve generation quality while retaining simplicity, whereas GAN and Glow can achieve even better quality at the cost of increased training or model complexity.", "Based on the above findings, we further combined these two categories of methods and found that the over-smoothing problem is further alleviated and the generated speech quality is further improved, which shows that the two categories are complementary to each other.", "When performing our analyses on the multi-speaker dataset, we drew similar conclusions and found that providing more variance information can reduce the difficulty of modeling the target data distribution and alleviate the requirements on model capacity.", "We hope that our studies can inspire the community and industry to develop more powerful TTS models.", "Besides, we are the first to discuss the over-smoothing problem systematically in the speech domain.", "We hope our analysis methodology as well as our findings can be extended to other tasks and inspire other domains.", "This work was supported in part by the Zhejiang Natural Science Foundation under Grant LR19F020006 and the National Natural Science Foundation of China under Grants No. 61836002 and No. 62072397." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "abstain", "objective", "objective", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "objective", "objective", "method", "result", "result", "result", "result", "objective", "objective", "objective", "other" ]
[ "Indirect speech such as sarcasm achieves a constellation of discourse goals in human communication.", "While the indirectness of figurative language warrants speakers to achieve certain pragmatic goals, it is challenging for AI agents to comprehend such idiosyncrasies of human communication.", "Though sarcasm identification has been a well-explored topic in dialogue analysis, for conversational systems to truly grasp a conversation's innate meaning and generate appropriate responses, simply detecting sarcasm is not enough; it is vital to explain its underlying sarcastic connotation to capture its true essence.", "In this work, we study the discourse structure of sarcastic conversations and propose a novel task Sarcasm Explanation in Dialogue ( SED ) .", "Set in a multimodal and code-mixed setting, the task aims to generate natural language explanations of satirical conversations.", "To this end, we curate WITS , a new dataset to support our task.", "We propose MAF (Modality Aware Fusion), a multimodal context-aware attention and global information fusion module to capture multimodality and use it to benchmark WITS.", "The proposed attention module surpasses the traditional multimodal fusion baselines and reports the best performance on almost all metrics.", "Lastly, we carry out detailed analyses both quantitatively and qualitatively.", "The use of figurative language serves many communicative purposes and is a regular feature of both oral and written communication (Roberts and Kreuz, 1994).", "Predominantly used to induce humour, criticism, or mockery (Colston, 1997), paradoxical language is also used in concurrence with hyperbole to show surprise (Colston and Keller, 1998) as well as highlight the disparity between expectations and reality (Ivanko and Pexman, 2003).", "While the use and comprehension of sarcasm is a Equal contribution Figure 1: Sarcasm Explanation in Dialogues (SED).", "cognitively taxing process (Olkoniemi et al., 2016), psychological evidence advocate that it positively correlates with the receiver's theory of mind (ToM) (Wellman, 2014), i.e., the capability to interpret and understand another person's state of mind.", "Thus, for NLP systems to emulate such anthropomorphic intelligent behavior, they must not only be potent enough to identify sarcasm but also possess the ability to comprehend it in its entirety.", "To this end, moving forward from sarcasm identification, we propose the novel task of Sarcasm Explanation in Dialogue ( SED ).", "For dialogue agents, understanding sarcasm is even more crucial as there is a need to normalize its sarcastic undertone and deliver appropriate responses.", "Conversations interspersed with sarcastic statements often use contrastive language to convey the opposite of what is being said.", "In a real-world setting, understanding sarcasm goes beyond negat-5956 ing a dialogue's language and involves the acute comprehension of audio-visual cues.", "Additionally, due to the presence of essential temporal, contextual, and speaker-dependent information, sarcasm understanding in conversation manifests as a challenging problem.", "Consequently, many studies in the domain of dialogue systems have investigated sarcasm from textual, multimodal, and conversational standpoints (Ghosh et al., 2018; Castro et al., 2019; Oraby et al., 2017; Bedi et al., 2021).", "However, baring some exceptions (Mishra et al., 2019; Dubey et al., 2019; Chakrabarty et al., 2020), research on figurative language has focused predominantly on its identification 
 rather than its comprehension and normalization.", "This paper addresses this gap by attempting to generate natural language explanations of satirical dialogues.", "To illustrate the proposed problem statement, we show an example in Figure 1.", "It contains a dyadic conversation of four utterances $\langle u_1, u_2, u_3, u_4 \rangle$, where the last utterance ($u_4$) is a sarcastic remark.", "Note that in this example, although the opposite of what is being said is \"I don't have to think about it\", that is not what the speaker means; thus, it enforces our hypothesis that sarcasm explanation goes beyond simply negating the dialogue's language.", "The discourse is also accompanied by ancillary audio-visual markers of satire, such as an ironic intonation of the pitch, a blank face, or a roll of the eyes.", "Thus, conglomerating the conversation history, multimodal signals, and speaker information, SED aims to generate a coherent and cohesive natural language explanation associated with sarcastic dialogues.", "For the task at hand, we extend MASAC (Bedi et al., 2021), a sarcasm detection dataset for code-mixed conversations, by augmenting it with natural language explanations for each sarcastic dialogue.", "We name the dataset WITS (\"Why Is This Sarcastic\").", "The dataset is a compilation of sarcastic dialogues from a popular Indian TV show.", "Along with the textual transcripts of the conversations, the dataset also contains multimodal signals of audio and video.", "We experiment with unimodal as well as multimodal models to benchmark WITS.", "Text, being the driving force of the explanations, is given primary importance, and thus we compare a number of established text-based sequence-to-sequence systems on WITS.", "To incorporate multimodal information, we propose a unique fusion scheme, Multimodal Context-Aware Attention (MCA2).", "Inspired by Yang et al. (2019), this attention variant facilitates deep semantic interaction between the multimodal signals and the textual representations by conditioning the key and value vectors on audio-visual information and then performing dot-product attention with these modified vectors.",
"The resulting audio- and video-informed textual representations are then combined using the Global Information Fusion (GIF) mechanism.", "The gating mechanism of GIF allows for the selective inclusion of information relevant to the satirical language and also prevents multimodal noise from seeping into the model.", "We further propose the MAF (Modality Aware Fusion) module, in which the aforementioned mechanisms are introduced into generative pretrained language models (GPLMs) as adapter modules.", "Our fusion strategy outperforms the text-based baselines and the traditional multimodal fusion schemes in terms of multiple text-generation metrics.", "Finally, we conduct a comprehensive quantitative and qualitative analysis of the generated explanations.", "In a nutshell, our contributions are fourfold: We propose Sarcasm Explanation in Dialogue (SED), a novel task aimed at generating a natural language explanation for a given sarcastic dialogue, elucidating the intended irony.", "We extend an existing sarcastic dialogue dataset to curate WITS, a novel dataset containing human-annotated gold-standard explanations.", "We benchmark our dataset using the MAF-TAVB and MAF-TAVM variants of BART and mBART, respectively, which incorporate the audio-visual cues using a unique context-aware attention mechanism.", "We carry out extensive quantitative and qualitative analysis along with human evaluation to assess the quality of the generated explanations.", "Reproducibility: The source code and the dataset are available at https://github.com/LCS2-IIITD/MAF.git.", "Sarcasm and Text: Joshi et al. (2017) presented a well-compiled survey on computational sarcasm, in which the authors expanded on the relevant datasets, trends, and issues for automatic sarcasm identification.", "Early work in sarcasm detection dealt with standalone text inputs like tweets and reviews (Kreuz and Caucci, 2007; Tsur et al., 2010; Joshi et al., 2015; Peled and Reichart, 2017).", "These initial works mostly focused on the use of linguistic and lexical features to spot the markers of sarcasm (Kreuz and Caucci, 2007; Tsur et al., 2010).", "More recently, attention-based architectures have been proposed to harness the inter- and intra-sentence relationships in texts for efficient sarcasm identification (Tay et al., 2018; Xiong et al., 2019; Srivastava et al., 2020).", "Analysis of figurative language has also been extensively explored in the conversational AI setting.", "Ghosh et al. (2017) utilised attention-based RNNs to identify sarcasm in the presence of context.", "Two separate LSTMs with attention were trained for the two inputs (sentence and context), and their hidden representations were combined during prediction.", "The study of sarcasm identification has also expanded beyond the English language.", "Bharti et al. (2017) collected a Hindi corpus of 2000 sarcastic tweets and employed rule-based approaches to detect sarcasm.", "Swami et al. (2018) curated a dataset of 5000 satirical Hindi-English code-mixed tweets and used n-gram feature vectors with various ML models for sarcasm detection.",
"Other notable studies include work on the Arabic (Abu Farha and Magdy, 2020), Spanish (Ortega-Bueno et al., 2019), and Italian (Cignarella et al., 2018) languages.", "Sarcasm and Multimodality: In the conversational setting, MUStARD, a multimodal, multi-speaker dataset compiled by Castro et al. (2019), is considered the benchmark for multimodal sarcasm identification.", "Chauhan et al. (2020) leveraged the intrinsic interdependency between emotions and sarcasm and devised a multi-task framework for multimodal sarcasm detection.", "Currently, Hasan et al. (2021) perform the best on this dataset with their humour-knowledge-enriched transformer model.", "Recently, Bedi et al. (2021) proposed a code-mixed multi-party dialogue dataset, MASAC, for sarcasm and humor detection.", "In the bimodal setting, sarcasm identification on tweets containing images has also been well explored (Cai et al., 2019; Xu et al., 2020; Pan et al., 2020).", "Beyond Sarcasm Identification: While studies in computational sarcasm have predominantly focused on sarcasm identification, some forays have been made into other domains of figurative language analysis.", "Dubey et al. (2019) initiated the work of converting sarcastic utterances into their non-sarcastic interpretations using deep learning.", "In another direction, Mishra et al. (2019) devised a modular unsupervised technique for sarcasm generation by introducing context incongruity through fact removal and incongruous phrase insertion.", "Following this, Chakrabarty et al. (2020) proposed a retrieve-and-edit-based unsupervised framework for sarcasm generation.", "Their proposed model leverages valence reversal and semantic incongruity to generate sarcastic sentences from their non-sarcastic counterparts.", "In summary, much work has been done on sarcasm detection, but little, if any, effort has been put into explaining the irony behind sarcasm.", "This paper attempts to fill this gap by proposing a new problem definition and a supporting dataset.", "Situational comedies, or 'sitcoms', vividly depict human behaviour and mannerisms in everyday real-life settings.", "Consequently, the NLP research community has successfully used such data for sarcasm identification (Castro et al., 2019; Bedi et al., 2021).", "However, as there is no existing dataset tailored for the proposed task, we curate a new dataset named WITS, in which we augment the already existing MASAC dataset (Bedi et al., 2021) with explanations for our task.", "MASAC is a multimodal, multi-party, Hindi-English code-mixed dialogue dataset compiled from the popular Indian TV show 'Sarabhai v/s Sarabhai' (https://www.imdb.com/title/tt1518542/).", "We manually analyze the data and clean it for our task.", "While the original dataset contained 45 episodes of the TV series, we add 10 more episodes along with their transcriptions and audio-visual boundaries.", "Subsequently, we select the sarcastic utterances from this augmented dataset and manually define the utterances to be included in the dialogue context for each of them.", "Finally, we are left with 2240 sarcastic dialogues, with the number of contextual utterances ranging from 2 to 27.", "(Figure 2: histogram of the number of utterances per dialogue against the number of dialogues.)", "Each of these instances is manually annotated with a corresponding natural language explanation interpreting its sarcasm.",
"Each explanation contains four primary attributes, the source and target of the sarcasm, the action word for the sarcasm, and an optional description of the satire, as illustrated in Figure 1.", "In the explanation \"Indu implies that Maya is not looking good.\", 'Indu' is the sarcasm source, 'Maya' is the target, 'implies' is the action word, and 'is not looking good' forms the description part of the explanation.", "We collect explanations in code-mixed format to keep consistency with the dialogue language.", "We split the data into train/val/test sets in an 80:10:10 ratio for our experiments, resulting in 1792 dialogues in the train set and 224 dialogues each in the validation and test sets.", "The next section illustrates the annotation process in more detail.", "Table 1 and Figure 2 show detailed statistics of WITS.", "Each instance in WITS is associated with a corresponding video, audio, and textual transcript, such that the last utterance is sarcastic in nature.", "We first manually define the number of contextual utterances required to understand the sarcasm present in the last utterance of each dialogue.", "Further, we provide each of these sarcastic statements, along with their context, to the annotators, who are asked to generate an explanation for these instances based on the audio, video, and text cues.", "Two annotators were asked to annotate the entire dataset.", "The target explanation is selected by calculating the cosine similarity between the two annotators' explanations.", "If the cosine similarity is greater than 90%, then the shorter explanation is selected as the target explanation.", "Otherwise, a third annotator goes through the dialogue along with the explanations and resolves the conflict.", "The average cosine similarity after the first pass is 87.67%.",
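A minimal sketch of this merging rule, using TF-IDF vectors as the sentence representation (the paper does not specify the embedding, so this choice is an assumption):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_explanation(expl_a, expl_b, threshold=0.9):
    vecs = TfidfVectorizer().fit_transform([expl_a, expl_b])
    sim = cosine_similarity(vecs[0], vecs[1])[0, 0]
    if sim > threshold:                  # high agreement: keep the shorter one
        return min(expl_a, expl_b, key=len)
    return None                          # disagreement: route to a third annotator
```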
"All the final selected explanations contain the following attributes.", "Sarcasm source: the speaker in the dialogue who is being sarcastic.", "Sarcasm target: the person or thing towards whom the sarcasm is directed.", "Action word: the verb or action used to describe how the sarcasm takes place, e.g., mocks, insults, or taunts.", "Description: a description of the scene which helps in understanding the sarcasm.", "Figure 1 represents an example annotation from WITS with its attributes.", "In this section, we present our model and its nuances.", "The primary goal is to smoothly integrate multimodal knowledge into the BART architecture.", "To this end, we introduce Modality Aware Fusion (MAF), an adapter-based module that comprises the Multimodal Context-Aware Attention (MCA2) and Global Information Fusion (GIF) mechanisms.", "Given the textual input sarcastic dialogue along with the audio-video cues, the former aptly introduces multimodal information into the textual representations, while the latter conglomerates the audio-visual-information-infused textual representations.", "This adapter module can be readily incorporated at multiple layers of BART/mBART to facilitate various levels of multimodal interaction.", "Figure 3 illustrates our model architecture.", "The traditional dot-product-based cross-modal attention scheme leads to the direct interaction of textual representations with other modalities.", "Here, the text representations act as the query against the multimodal representations, which serve as the key and value.", "As each modality comes from a different embedding subspace, a direct fusion of multimodal information might not retain maximum contextual information and can also leak substantial noise into the final representations.", "Thus, based on the findings of Yang et al. (2019), we propose multimodal fusion through Context-Aware Attention.", "We first generate multimodal-information-conditioned key and value vectors and then perform the traditional scaled dot-product attention.", "We elaborate on the process below.", "Given the intermediate representation $H$ generated by the GPLM at a specific layer, we calculate the query, key, and value vectors $Q, K, V \in \mathbb{R}^{n \times d}$ as $Q = HW_Q$, $K = HW_K$, $V = HW_V$ (Equation 1), where $W_Q, W_K, W_V \in \mathbb{R}^{d \times d}$ are learnable parameters.", "Here, $n$ denotes the maximum sequence length of the text, and $d$ denotes the dimensionality of the vectors generated by the GPLM.", "Let $C \in \mathbb{R}^{n \times d_c}$ denote the vector obtained from the audio or visual representation.", "We generate multimodal-information-informed key and value vectors $\bar{K} = (1 - \lambda_k) K + \lambda_k (C U_k)$ and $\bar{V} = (1 - \lambda_v) V + \lambda_v (C U_v)$ (Equation 2), following Yang et al. (2019), where $U_k, U_v \in \mathbb{R}^{d_c \times d}$ are learnable matrices.", "To decide how much information to integrate from the multimodal source and how much to retain from the textual modality, we learn $\lambda_k, \lambda_v \in \mathbb{R}^{n \times 1}$ (Equation 3).", "Instead of treating $\lambda_k$ and $\lambda_v$ as hyperparameters, we let the model decide their values using a gating mechanism, $\lambda_k = \sigma(K W_{k1} + C U_k W_{k2})$ and $\lambda_v = \sigma(V W_{v1} + C U_v W_{v2})$ (Equation 3), where $W_{k1}, W_{k2}, W_{v1}, W_{v2} \in \mathbb{R}^{d \times 1}$ are trained along with the model.", "Finally, the multimodal-information-infused vectors $\bar{K}$ and $\bar{V}$ are used to compute the traditional scaled dot-product attention.", "For our case, we have two modalities: audio and video.", "Using the context-aware attention mechanism, we obtain the acoustic-information-infused and visual-information-infused vectors $H_a$ and $H_v$: $H_a = \mathrm{Softmax}\!\left(\frac{Q \bar{K}_a^{T}}{\sqrt{d_k}}\right) \bar{V}_a$ (4) and $H_v = \mathrm{Softmax}\!\left(\frac{Q \bar{K}_v^{T}}{\sqrt{d_k}}\right) \bar{V}_v$ (5).", "Global Information Fusion: To combine the information from the acoustic and visual modalities, we design the GIF block.", "We propose two gates, an acoustic gate $g_a$ and a visual gate $g_v$, to control the amount of information transmitted by each modality: $g_a = [H \oplus H_a] W_a + b_a$ (6) and $g_v = [H \oplus H_v] W_v + b_v$ (7), where $W_a, W_v \in \mathbb{R}^{2d \times d}$ and $b_a, b_v \in \mathbb{R}^{d \times 1}$ are trainable parameters and $\oplus$ denotes concatenation.", "The final multimodal-information-fused representation $\hat{H}$ is given by Equation 8.",
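A condensed PyTorch sketch of MCA2 and GIF for one modality, following Equations 1-7 above; the sigmoid non-linearity on the GIF gates and the residual form of Equation 8 are assumptions (the excerpt does not spell them out), and multi-head and batching details are omitted:

```python
import math
import torch
import torch.nn as nn

class ContextAwareAttention(nn.Module):
    """MCA2: condition K and V on a modality context C before attention."""
    def __init__(self, d, d_c):
        super().__init__()
        self.w_q = nn.Linear(d, d, bias=False)
        self.w_k = nn.Linear(d, d, bias=False)
        self.w_v = nn.Linear(d, d, bias=False)
        self.u_k = nn.Linear(d_c, d, bias=False)     # U_k
        self.u_v = nn.Linear(d_c, d, bias=False)     # U_v
        # A single 2d -> 1 map on the concat equals K @ W_k1 + (C U_k) @ W_k2.
        self.lam_k = nn.Linear(2 * d, 1)
        self.lam_v = nn.Linear(2 * d, 1)

    def forward(self, H, C):                         # H: (n, d), C: (n, d_c)
        Q, K, V = self.w_q(H), self.w_k(H), self.w_v(H)
        CK, CV = self.u_k(C), self.u_v(C)
        lk = torch.sigmoid(self.lam_k(torch.cat([K, CK], dim=-1)))   # Eq. 3
        lv = torch.sigmoid(self.lam_v(torch.cat([V, CV], dim=-1)))
        K_bar = (1 - lk) * K + lk * CK               # context-aware key (Eq. 2)
        V_bar = (1 - lv) * V + lv * CV               # context-aware value
        scores = Q @ K_bar.transpose(-2, -1) / math.sqrt(K.size(-1))
        return torch.softmax(scores, dim=-1) @ V_bar # Eqs. 4/5

class GlobalInformationFusion(nn.Module):
    """GIF: gate and merge the audio- and video-infused representations."""
    def __init__(self, d):
        super().__init__()
        self.w_a = nn.Linear(2 * d, d)               # W_a, b_a (Eq. 6)
        self.w_v = nn.Linear(2 * d, d)               # W_v, b_v (Eq. 7)

    def forward(self, H, H_a, H_v):
        g_a = torch.sigmoid(self.w_a(torch.cat([H, H_a], dim=-1)))
        g_v = torch.sigmoid(self.w_v(torch.cat([H, H_v], dim=-1)))
        return H + g_a * H_a + g_v * H_v             # assumed form of Eq. 8
```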
"In this section, we illustrate our feature extraction strategy and the comparative systems, followed by the results and their analysis.", "For a quantitative analysis of the generated explanations, we use the standard metrics for generative tasks: ROUGE-1/2/L (Lin, 2004), BLEU-1/2/3/4 (Papineni et al., 2002), and METEOR (Denkowski and Lavie, 2014).", "To capture semantic similarity, we use the multilingual version of BERTScore (Zhang et al., 2019).", "Audio: Acoustic representations for each instance are obtained using the openSMILE Python library (https://audeering.github.io/opensmile-python/).", "We use a window size of 25 ms and a window shift of 10 ms to get non-overlapping frames.", "Further, we employ the eGeMAPS model (Eyben et al., 2016) and extract 154-dimensional functional features, such as Mel-frequency cepstral coefficients (MFCCs) and loudness, for each frame of the instance.", "These features are then fed to a Transformer encoder (Vaswani et al., 2017) for further processing.", "Video: We use a pre-trained action recognition model, ResNext-101 (Hara et al., 2018), trained on the Kinetics dataset (Kay et al., 2017), which can recognise 101 different actions.", "We use a frame rate of 1.5, a resolution of 720 pixels, and a window length of 16 to extract the 2048-dimensional visual features.", "Similar to the audio feature extraction, we employ a Transformer encoder (Vaswani et al., 2017) to capture the sequential dialogue context in the representations.",
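For the acoustic side, a minimal sketch using the openSMILE Python package is shown below; the feature-set name and configuration are assumptions, as the exact library calls are not given in the text (functionals are normally computed per file, so the frame-level extraction described above may instead require the low-level-descriptor feature level):

```python
import opensmile

# eGeMAPS functional features, as referenced above (Eyben et al., 2016).
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("utterance.wav")   # pandas DataFrame of features
```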
"To get the best textual representations for the dialogues, we experiment with various sequence-to-sequence (seq2seq) architectures.", "RNN: We use the OpenNMT implementation of the RNN seq2seq architecture.", "Transformer (Vaswani et al., 2017): The standard Transformer encoder and decoder are used to generate explanations in this case.", "Pointer Generator Network (See et al., 2017): A seq2seq architecture that allows generating new words as well as copying words from the input text for producing accurate summaries.", "BART (Lewis et al., 2020): A denoising autoencoder model with a standard machine translation architecture, i.e., a bidirectional encoder and an auto-regressive left-to-right decoder.", "We use its base version.", "mBART (Liu et al., 2020): Following the same architecture and objective as BART, mBART is trained on large-scale monolingual corpora in different languages.", "Text-Based: As evident from Table 2, BART performs the best across all metrics for the textual modality, showing an improvement of almost 2-3% on the METEOR and ROUGE scores compared with the next best baseline.", "PGN, RNN, and Transformer demonstrate admissible performance considering that they have been trained from scratch.", "However, it is surprising to see mBART not performing better than BART, as it is trained on multilingual data.", "We elaborate more on this in Appendix A.1.", "Multimodality: Psychological and linguistic literature suggests that there exist distinct paralinguistic cues that aid in comprehending sarcasm and humour (Attardo et al., 2003; Tabacaru and Lemmens, 2014).", "Thus, we gradually merge the auditory and visual modalities using the MAF module and obtain MAF-TAVB and MAF-TAVM for BART and mBART, respectively.", "We observe that the inclusion of acoustic signals leads to noticeable gains of 2-3% across the ROUGE, BLEU, and METEOR scores.", "The rise in BERTScore also suggests that the multimodal variant generates slightly more coherent explanations.", "As ironic intonations such as mimicry, monotone, flat contour, extremes of pitch, long pauses, and exaggerated pitch (Rockwell, 2007) form a significant component of sarcasm understanding, we surmise that our model is, to some extent, able to spot such markers and identify the intended sarcasm behind them.", "We notice that visual information also contributes to our cause.", "Significant performance gains are observed for MAF-TVB and MAF-TVM, as all the metrics show a rise of about 3-4%.", "While MAF-TAB gives marginally better performance than MAF-TVB in terms of R1, RL, and B1, we see that MAF-TVB performs better in terms of the rest of the metrics.", "Often, sarcasm is depicted through gestural cues such as raised eyebrows, a straight face, or an eye roll (Attardo et al., 2003).", "Moreover, when satire is conveyed by mocking someone's looks or physical appearance, it becomes essential to incorporate information expressed through the visual medium.", "Thus, we can say that, to some extent, our model is able to capture these nuances of non-verbal cues and use them well to normalize the sarcasm in a dialogue.", "In summary, we conjecture that, whether independently or together, audio-visual signals bring essential information to the table for understanding sarcasm.", "Table 3 reports the ablation study.", "CONCAT 1 represents the case where we perform bimodal concatenation ((T ⊕ A), (T ⊕ V)) instead of the MCA2 mechanism, followed by the GIF module, whereas CONCAT 2 represents simple trimodal concatenation (T ⊕ A ⊕ V) of the acoustic, visual, and textual representations followed by a linear layer for dimensionality reduction.", "In comparison with MCA2, CONCAT 2 reports below-average performance, with a significant drop of more than 14% for MAF-TAVB and MAF-TAVM.", "This highlights the need for deftly crafted multimodal fusion mechanisms.", "CONCAT 1, on the other hand, gives good performance and is competitive with DPA and MAF-TAVB.", "We speculate that treating the audio and video modalities separately and then merging them to retain the complementary and differential features leads to this performance gain.", "Our proposed MAF outperforms DPA (traditional dot-product attention) with gains of 1-3%.", "This underlines that our unique multimodal fusion strategy is aptly able to capture the contextual information provided by the audio and video signals.", "Replacing the GIF module with simple addition, we observe a noticeable decline in performance across almost all metrics, by about 2-3%.", "This attests to the benefit of the GIF module over simple addition.", "We also experiment with fusing multimodal information using MAF before different layers of the BART encoder.", "The best performance was obtained when the fusion was done before the sixth layer of the architecture (c.f. Appendix A.2).",
"We evaluate the generated explanations based on their ability to correctly identify the source and target of a sarcastic comment in a conversation.", "We report these results for mBART, BART, MAF-TAB, MAF-TVB, and MAF-TAVB.", "BART performs better than mBART for source as well as target identification.", "We observe that the inclusion of audio (~10%) and video (~8%) information drastically improves the source identification capability of the model.", "The combination of both these non-verbal cues leads to a whopping improvement of more than 13% for the same.", "As a result, we infer that multimodal fusion enables the model to incorporate audio-visual peculiarities unique to each speaker, resulting in improved source identification.", "The performance for target identification, however, drops slightly upon the inclusion of multimodality.", "We encourage future work in this direction.", "Qualitative Analysis: We analyze the best-performing model, MAF-TAVB, and its corresponding unimodal model, BART, and present some examples in Table 4.", "In Table 4a, we show one instance where the explanations generated by both BART and MAF-TAVB are neither coherent nor compliant with the dialogue context and leave much scope for improvement.", "On the other hand, Table 4b illustrates an instance where the explanation generated by MAF-TAVB adheres to the topic of the dialogue, unlike the one generated by its unimodal counterpart.", "Table 4c depicts a dialogue where MAF-TAVB's explanation better captures the satire than BART's.", "We further dissect the models based on different modalities in Appendix A.3.", "We also manually inspect the generated results.", "Consequently, we perform a human evaluation on a sample of 30 instances from our test set with the help of 25 evaluators (the evaluators are experts in linguistics and NLP, with ages ranging from 20 to 28 years).", "We ask the evaluators to judge the generated explanations, given the transcripts of the sarcastic dialogues along with a small video clip with audio.", "Each evaluator watches the video clips and then rates the generated explanations on a scale of 0 to 5 based on the following factors.", "Coherence: Measures how well the explanations are organized and structured.", "Related to dialogue: Measures whether the generated explanation adheres to the topic of the dialogue.", "Related to sarcasm: Measures whether the explanation talks about something related to the sarcasm present in the dialogue.", "Table 6 presents the human evaluation analysis, with average scores in each of the aforementioned categories for the different models (the multimodal models are BART-based).", "Our scrutiny suggests that MAF-TAVB generates more syntactically coherent explanations compared with its textual and bimodal counterparts.", "Also, MAF-TAVB and MAF-TVB generate explanations that are more focused on the conversation's topic, as we see an increase of 0.55 points in the related-to-dialogue category.", "Thus, we re-establish that these models are able to incorporate information that is explicitly absent from the dialogue, such as scene descriptions, facial features, and the looks of the characters.", "Furthermore, we establish that MAF-TAVB is better able to grasp sarcasm and its normalization, as it shows about a 0.6-point improvement over BART in the related-to-sarcasm category.", "Lastly, as none of the metrics in Table 6 exhibit high scores (3.5+), we feel there is still much scope for improvement in terms of the generation performance and human evaluation.",
"The research community can further explore the task with our proposed dataset, WITS.", "In this work, we proposed the new task of Sarcasm Explanation in Dialogue (SED), which aims to generate a natural language explanation for sarcastic conversations.", "We curated WITS, a novel multimodal, multi-party, code-mixed dialogue dataset to support the SED task.", "We experimented with multiple textual and multimodal baselines, which give promising results on the task at hand.", "Furthermore, we designed a unique multimodal fusion scheme to merge the textual, acoustic, and visual features via the Multimodal Context-Aware Attention (MCA2) and Global Information Fusion (GIF) mechanisms.", "As hypothesized, the results show that acoustic and visual features support our task and thus lead to better explanations.", "We show extensive qualitative analysis of the explanations obtained from different models and highlight their advantages as well as pitfalls.", "We also perform a thorough human evaluation to compare the performance of the models with human understanding.", "Though the models augmented with the proposed fusion strategy perform better than the rest, the human evaluation suggests there is still room for improvement, which can be explored in future studies.", "The authors would like to acknowledge the support of the Ramanujan Fellowship (SERB, India), the Infosys Centre for AI (CAI) at IIIT-Delhi, and the ihub-Anubhuti-iiitd Foundation set up under the NM-ICPS scheme of the Department of Science and Technology, India." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "objective", "result", "method", "objective", "objective", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "objective", "result", "method", "abstain", "other" ]
[ "Reply suggestion models help users process emails and chats faster.", "Previous work only studies English reply suggestion.", "Instead, we present MRS , a multilingual reply suggestion dataset with ten languages.", "MRS can be used to compare two families of models:", "1) retrieval models that select the reply from a fixed set and", "2) generation models that produce the reply from scratch.", "Therefore, MRS complements existing cross-lingual generalization benchmarks that focus on classification and sequence labeling tasks.", "We build a generation model and a retrieval model as baselines for MRS .", "The two models have different strengths in the monolingual setting, and they require different strategies to generalize across languages.", "MRS is publicly available at https://github.com/zhangmozhi/mrs .", "Automated reply suggestion is a useful feature for email and chat applications.", "Given an input message, the system suggests several replies, and users may click on them to save typing time (Figure 1).", "This feature is available in many applications including Gmail, Outlook, LinkedIn, Facebook Messenger, Microsoft Teams, and Uber.", "Reply suggestion is related to but different from open-domain dialog systems or chatbots (Adiwar-dana et al., 2020; Huang et al., 2020).", "While both are conversational AI tasks (Gao et al., 2019), the goals are different: reply suggestion systems help the user quickly reply to a message, while chatbots aim to continue the conversation and focus more on multi-turn dialogues.", "Ideally, we want our model to generate replies in any language.", "However, reply suggestion models require large training sets, so previous work mostly Work mostly done as an intern at Microsoft Research.", "focuses on English (Kannan et al., 2016; Henderson et al., 2017; Deb et al., 2019).", "To investigate reply suggestion for other languages with possibly limited data, we build a multilingual dataset, dubbed MRS ( M ultilingual R eply S uggestion).", "From publicly available Reddit threads, we extract message-reply pairs, response sets, and machine-translated examples in ten languages (Table 1).", "One interesting aspect of the reply suggestion problem is that there are two modeling approaches.", "Some models follow the retrieval framework and select the reply from a predetermined response set (Henderson et al., 2017).", "Others follow the generation framework and generate the reply from scratch (Kannan et al., 2016).", "The two approaches have different advantages.", "Generation models are more powerful because they are not constrained by the response set.", "In comparison, retrieval models are easier to train and runs faster, and a curated response set guarantees the coherence and the safety of the model output.", "The two frameworks make reply suggestion an interesting task for studying cross-lingual generalization.", "Most cross-lingual generalization benchmarks use classification and sequence labeling tasks (Tjong Kim Sang, 2002; Nivre et al., 2016; Strassel and Tracey, 2016; Conneau et al., 2018; Schwenk and Li, 2018; Clark et al., 2020; Hu et al., 2020; Lewis et al., 2020b).", "In contrast, reply suggestion has two formulations that require different cross-lingual generalization strategies.", "While some recent work explores cross-lingual transfer Language Code Family Examples Tokens Response Set English EN West Germanic 48,750,948 1,700,066,696 36,997 Spanish ES Romance 2,325,877 195,424,517 45,152 German DE West Germanic 1,864,688 118,711,662 34,747 Portuguese PT 
"Table 1: Dataset statistics for MRS.
Language   | Code | Family         | Examples   | Tokens        | Response Set
English    | EN   | West Germanic  | 48,750,948 | 1,700,066,696 | 36,997
Spanish    | ES   | Romance        | 2,325,877  | 195,424,517   | 45,152
German     | DE   | West Germanic  | 1,864,688  | 118,711,662   | 34,747
Portuguese | PT   | Romance        | 1,822,594  | 114,642,809   | 45,225
French     | FR   | Romance        | 1,396,806  | 133,068,740   | 32,350
Japanese   | JA   | Japonic        | 727,668    | 46,124,966    | 38,817
Swedish    | SV   | North Germanic | 738,254    | 47,845,497    | 32,165
Italian    | IT   | Romance        | 736,296    | 58,715,043    | 31,855
Dutch      | NL   | West Germanic  | 638,634    | 43,847,547    | 32,293
Russian    | RU   | East Slavic    | 516,739    | 23,109,295    | 31,475", "These tasks include news title generation, text summarization, and question generation (Chi et al., 2020; Liang et al., 2020; Scialom et al., 2020).", "Reply suggestion is more challenging because the reply often does not overlap with the message (Figure 1), so the model needs to address different cross-lingual generalization challenges (Section 5.2).", "We build two baselines for MRS: a retrieval model and a generation model.", "We first compare the models in English, where we have abundant training data and human referees.", "We evaluate the models with both automatic metrics and human judgments.", "The two models have different strengths.", "The generation model has higher word-overlap scores and is favored by humans on average, but inference is slower, and the output is sometimes contradictory or repetitive (Holtzman et al., 2020).", "In contrast, the retrieval model is faster and always produces coherent replies, but the replies are sometimes too generic or irrelevant due to the fixed response set.", "Next, we test the models in other languages.", "We compare different training settings and investigate two cross-lingual generalization methods: initializing with pre-trained multilingual models (Wu and Dredze, 2019; Conneau et al., 2020; Liang et al., 2020) and training on machine-translated data (Banea et al., 2008).", "Interestingly, the two models prefer different methods: multilingual pre-training works better for the retrieval model, while the generation model prefers machine translation.", "In summary, we present MRS, a multilingual reply suggestion dataset.", "We use MRS to provide the first systematic comparison between generation and retrieval models for reply suggestion in both monolingual and multilingual settings.", "MRS is also a useful benchmark for future research in reply suggestion and cross-lingual generalization.", "The rest of the paper is organized as follows.", "Section 2 describes the data collection process for MRS.", "Section 3 introduces the task formulations, experiment settings, and evaluation metrics.", "Section 4 describes the baseline generation and retrieval models.", "Section 5 presents our experiment results.", "Section 6 discusses how MRS can help future research.", "To study reply suggestion in multiple languages, we build MRS, a dataset of message-reply pairs based on Reddit comments.", "The dataset is available at https://github.com/zhangmozhi/mrs .", "We download Reddit comments posted between January 2010 and December 2019 from the Pushshift Reddit dataset (Baumgartner et al., 2020).", "We extract message-reply pairs from each thread by considering a parent comment as the input message and the response to that comment as the reference reply.", "We remove comments starting with [removed] or [deleted], which are deleted messages.", "We also skip comments with a rating of less than one, since they are likely to contain inappropriate content.",
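A sketch of the pair-extraction step described above, assuming `comments` is an iterable of Pushshift comment dicts with `id`, `parent_id`, `body`, and `score` fields (field names follow the public Pushshift schema; applying the score filter to both sides is our reading):

```python
def extract_pairs(comments):
    by_id = {c["id"]: c for c in comments}
    pairs = []
    for reply in comments:
        parent_id = reply["parent_id"]
        if not parent_id.startswith("t1_"):   # "t1_" prefixes comments, not posts
            continue
        parent = by_id.get(parent_id[3:])
        if parent is None:
            continue
        texts = (parent["body"], reply["body"])
        if any(t in ("[removed]", "[deleted]") for t in texts):
            continue                          # deleted messages
        if reply["score"] < 1 or parent["score"] < 1:
            continue                          # likely inappropriate content
        pairs.append(texts)                   # (message, reply)
    return pairs
```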
"For the remaining examples, we use the highest-scoring label as the language.", "We only use English data from 2018 because English data is abundant on Reddit.", "Non-English examples are much scarcer, so we use data from the last ten years.", "We select the top ten languages with at least 100K examples.", "We create three splits for each language: 80% of examples for training, 10% for validation, and 10% for testing.", "Table 1 shows some dataset statistics.", "MRS is heavily biased towards English.", "We have more than 48 million English examples, but fewer than one million examples for half of the languages.", "This gap reflects a practical challenge for reply suggestion: we do not have enough data for most languages in the world.", "Nevertheless, we can use MRS to test models in different multilingual settings, including cross-lingual transfer learning, where we build non-English reply suggestion models from English data (Section 3.2).", "We build a response set of the 30K to 50K most frequent replies for each language, which are used in the retrieval model.", "We want the response set to cover generic responses, so we select replies that appear at least twenty times in the dataset.", "This simple criterion works well for English, but the set is too small for other languages.", "For non-English languages, we augment the response set by translating the English response set to other languages with Microsoft Translator.", "The non-English response set is sometimes smaller than the English set, because different English responses may have the same translation.", "Exchanges on Reddit are sometimes uncivil, inappropriate, or even abusive (Massanari, 2017; Mohan et al., 2017).", "We try to filter out toxic content, as it is not desirable for reply suggestion systems.", "We use two toxicity detection models.", "First, we use an in-house multilingual model.", "The model is initialized with multilingual BERT (Devlin et al., 2019, MBERT) and fine-tuned on a mixture of proprietary and public datasets with toxic and offensive language labels.", "The model outputs a score from zero to one, with a higher score corresponding to a higher level of toxicity.", "Second, we use Perspective API (https://www.perspectiveapi.com), a publicly available model.", "Perspective API has limited free access (one query per second), so we only use the API on the English validation, test, and response sets.", "For other languages, we rely on our in-house model.", "We filter out a message-reply pair if it has a score greater than 0.9 according to the in-house model, or greater than 0.5 according to Perspective API (Gehman et al., 2020).", "About one percent of examples are filtered.", "After filtering the data, we manually validate three hundred random examples and do not find any toxic examples, which suggests that our filtering method has high recall.", "While we hope the filtered dataset leads to better reply suggestion models, existing filtering methods are not perfect and can introduce other biases (Dixon et al., 2018; Sap et al., 2019; Hutchinson et al., 2020).", "Therefore, models trained on all MRS data may still have undesirable behavior.", "MRS is intended to be used as a benchmark for testing cross-lingual generalization of generation and retrieval models.", "The dataset should not be directly used in production systems.", "To use the dataset in practice, additional work is required to address other possible biases and toxic or inappropriate content that may exist in the data.", "After presenting the dataset, we explain how we use MRS to compare reply suggestion models.",
"We describe the two frameworks for reply suggestion, our experiment settings, and evaluation metrics.", "In reply suggestion, the input is a message $x$, and the output is one or more suggested replies $y$.", "In practice, reply suggestion systems can choose not to suggest any replies.", "This decision is usually made by a separate trigger model (Kannan et al., 2016).", "In this paper, we focus on reply generation, so we assume that the models always need to suggest a fixed number of replies.", "Reply suggestion can be formulated as either a retrieval problem or a generation problem.", "Retrieval Model. Given an input message $x$, the model computes a relevance score $\Phi_{xy}$ for each candidate reply $y \in \mathcal{Y}$.", "The model then selects the highest-scoring replies as suggestions; e.g., the top-1 reply is $\arg\max_{y \in \mathcal{Y}} \Phi_{xy}$.", "Generation Model.", "A generation model generates the reply $y$ from scratch.", "Generation models usually follow the sequence-to-sequence framework (Sutskever et al., 2014, SEQ2SEQ), which generates $y$ token by token.", "Given an input message $x = (x_1, x_2, \ldots, x_n)$ of $n$ tokens, a SEQ2SEQ model estimates the probability of a reply $y = (y_1, y_2, \ldots, y_m)$ of $m$ tokens as follows: $p(y \mid x) = \prod_{i=1}^{m} p(y_i \mid x, y_{<i})$ (1).", "The model computes the probability of the next token $p(y_i \mid x, y_{<i})$ based on the input $x$ and the first $(i-1)$ tokens of the output $y$.", "The model is trained to maximize the probability of reference replies in the training set.", "At test time, we find the top replies that approximately maximize (1) with beam search.", "The two models have different strengths.", "The generation model is more flexible, but the retrieval model is faster (Henderson et al., 2017), and the output can be controlled by curating the response set (Kannan et al., 2016).", "We compare a retrieval model and a generation model as baselines for MRS.", "To our knowledge, we are the first to systematically compare the two models in both monolingual and multilingual settings.", "We explain our training settings and metrics next.", "For each language in MRS, we train and compare models in four settings.", "Future work can experiment with other settings (discussed in Section 6).", "Monolingual.", "Here, we simply train and test models in a single language.", "This setting simulates the scenario where we have adequate training data for the target language.", "Previous reply suggestion models were only studied in the English monolingual setting.", "Zero-Shot.", "Next, we train models in a zero-shot cross-lingual setting.", "We train the model on the English training set and use the model on the test set for another language.", "This setting simulates the scenario where we want to build models for a low-resource language using our large English set.", "To generalize across languages, we initialize the models with pre-trained multilingual models (details in Section 4).", "These models work well in other tasks (Wu and Dredze, 2019; Liang et al., 2020).", "We test if they also work for reply suggestion, as different tasks often prefer different multilingual representations (Zhang et al., 2020b).", "Machine Translation (MT).", "Another strategy for cross-lingual generalization is to train on machine-translated data (Banea et al., 2008).", "We train models on nineteen million English training examples machine-translated to the target language with Microsoft Translator.", "We compare against the zero-shot setting to compare the two cross-lingual generalization strategies.",
"Multilingual.", "Finally, we build a multilingual model by jointly training on the five languages with the most training data: English, Spanish, German, Portuguese, and French.", "We oversample non-English training data to have the same number of training examples across all languages (Johnson et al., 2017).", "We make two comparisons:", "1) for the five training languages, we compare against the monolingual setting to test whether fitting multiple languages in a single model hurts performance; and", "2) for other languages, we compare against the zero-shot setting to check if adding more training languages helps cross-lingual generalization.", "The goal of reply suggestion is to save user typing time, so the ideal metrics are click-through rate (CTR), how often the user chooses a suggested reply, and time reduction, how much time is saved by clicking the suggestion instead of typing.", "However, these metrics require deploying the model to test on real users, which was not feasible at full scale at the time of writing this paper.", "Instead, we focus on automated offline metrics that can guide research and model development before deploying production systems.", "Specifically, we evaluate models using a test set of message-reply pairs.", "To identify a good metric, we compare several metrics in a pilot study by deploying an English system.", "We collect millions of user interactions and measure Pearson's correlation between CTR and automated offline metrics.", "The next paragraph lists the metrics.", "Based on the study, we recommend the weighted ROUGE F1 ensemble (ROUGE in tables), which has the highest correlation with CTR.", "For the retrieval model, we follow previous work and consider mean reciprocal rank (Kannan et al., 2016, MRR) and precision at one (Henderson et al., 2017).", "These metrics test if the model can retrieve the reference response from a random set of responses.", "Alternatively, we compute MRR and precision on a subset of examples where the reference reply is in the response set, so that we can directly measure the rank of the reference response in the response set.", "This set also allows us to compute MRR for individual responses, so we can compute macro-MRR, the average MRR over each response in the set.", "Higher macro-MRR can indicate diversity but has a worse correlation than computing MRR over the entire test set.", "For the generation model, we consider model perplexity (Adiwardana et al., 2020).", "Finally, we consider two word overlap scores, BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), which can be used for both retrieval and generation models.", "Our pilot study shows that ROUGE has the best correlation.", "However, individual ROUGE F1 scores (ROUGE-1/2/3) are sensitive to small changes in sequence lengths (more so because our responses are generally short).", "Therefore, we use a weighted average of the three scores: ROUGE-1/6 + ROUGE-2/3 + ROUGE-3/2.", "This weighted score leads to the highest correlation with CTR.", "Intuitively, the weights balance the differences in the average magnitude of each metric and thus reduce variance on short responses.", "Popular reply suggestion systems (such as Gmail and Outlook) suggest three replies for each message, while the user only selects one.", "To simulate this setting, we predict three replies for each message.", "For the retrieval model, we use the three highest-scoring replies from the response set.", "For the generation model, we use the top three results from beam search.",
"Out of the three replies, we only use the reply with the highest ROUGE compared to the reference reply when computing the final metrics; i.e., the model only has to provide one correct reply to receive a full score.", "We compare models primarily with ROUGE, since this metric has the best correlation in the pilot study.", "Nevertheless, word overlap scores have known limitations (Liu et al., 2016), as there are different ways to reply to a message.", "We encourage future research to investigate other metrics to understand different aspects of the model.", "As examples, we also report two diversity scores: the proportion of distinct unigrams (Dist-1) and bigrams (Dist-2) in the generated replies (Li et al., 2016).", "While ROUGE measures the relevance of the replies, higher diversity can also increase CTR (Deb et al., 2019).", "We can improve the diversity of the three replies with diversity-promoting decoding (Li et al., 2016; Vijayakumar et al., 2018; Zhang et al., 2018) or latent variable models (Deb et al., 2019), but we leave this direction to future work.", "For our English monolingual experiments, we also complement automatic metrics with human judgments (Human in Figure 2).", "For each example, we display the input message and sets of three suggested replies from both the generation and retrieval models to three human annotators (crowd workers).", "We then ask the annotators to select the set with more responses that they would prefer to send as a reply.", "We leave evaluations for other languages to future work due to resource limitations.", "This section introduces the two baseline models: a retrieval model and a generation model.", "For the retrieval model, we use the architecture from Henderson et al. (2017), except that we replace the feedforward network encoders with Transformers (Vaswani et al., 2017).", "Given an input message $x$ and a candidate reply $y$, two Transformer encoders $\Phi_x$ and $\Phi_y$ map the message and the reply to two vectors $\Phi_x(x)$ and $\Phi_y(y)$.", "The relevance score $\Phi_{xy}$ between the message $x$ and the reply $y$ is the dot product of the two vectors: $\Phi_{xy} = \Phi_x(x)^{\top} \Phi_y(y)$.", "Henderson et al. (2017) also add a language model score to encourage more frequent replies.", "We do not use the language model score, for simplicity.", "We train the model with the symmetric loss from Deb et al. (2019).", "Suppose the batch size is $n$.", "For a batch of training messages $\{x_i\}_{i=1}^{n}$ and corresponding replies $\{y_i\}_{i=1}^{n}$, we maximize: $\sum_{i=1}^{n} \frac{e^{\Phi_{x_i y_i}}}{\sum_{j=1}^{n} \left( e^{\Phi_{x_i y_j}} + e^{\Phi_{x_j y_i}} \right) - e^{\Phi_{x_i y_i}}}$.", "In a regular softmax loss, the denominator only sums over one variable.", "Figure 2: Generation vs. retrieval model on English (Generation: ROUGE .0543, Dist-1 .0341, Dist-2 .1608, Human .484; Retrieval: ROUGE .0331, Dist-1 .0194, Dist-2 .0480, Human .320).", "The denominator in the symmetric loss sums over both variables to encourage bidirectional compatibility: the message should be predictive of the reply, and the reply should be predictive of the message.", "This encourages the model to select responses specific to the message, similar to the Maximum Mutual Information objective from Li et al. (2016).",
(2016).", "The two encoders x and y are initialized with MBERT (Devlin et al., 2019), a Transformer with 110 million parameters pre-trained on multilingual corpora.", "Initializing with MBERT allows the model to generalize across languages (Wu and Dredze, 2019).", "In Appendix A, we experiment with another pre-trained multilingual Transformer, XLM-R (Con-neau et al., 2020).", "We use the base version with 270 million parameters.", "For the generation model, we follow the SEQ 2 SEQ architecture (Section 3.1).", "We use a Transformer encoder to read the input x , and another Transformer decoder to estimate p ( y i | x , y <i ) in (1).", "We cannot initialize the generation model with MBERT or XLM-R , because the model also has a decoder.", "Instead, we use UnicoderXDAE (Liang et al., 2020), a pre-trained multilingual SEQ 2 SEQ model, which can generalize across languages in extractive generation tasks such as news title generation and question generation.", "We test if Unicoder-XDAE also generalizes in the more challenging reply suggestion task.", "There are other generation models we can use, which we discuss as future work in Section 6.", "We train the retrieval model using Adam optimizer (Kingma and Ba, 2015) with 1e-6 learning rate, default , and 256 batch size.", "For monolingual and zero-shot settings, we use twenty epochs for English and fifty epochs for other languages.", "We use ten epochs for MT and multilingual settings.", "The first 1% training steps are warmup steps.", "During training, we freeze the embedding layers and the bottom two Transformer layers of both en-Monolingual Zero-Shot MT Multilingual ROUGE Dist-1 Dist-2 ROUGE Dist-1 Dist-2 ROUGE Dist-1 Dist-2 ROUGE Dist-1 Dist-2 EN .0331 .0194 .0480 .0331 .0194 .0480 --.0265 .0158 .0376 ES .0187 .0157 .0353 .0156 .0113 .0271 .0139 .0164 .0350 .0181 .0151 .0333 DE .0215 .0134 .0298 .0178 .0098 .0240 .0141 .0152 .0333 .0190 .0140 .0314 PT .0509 .0158 .0393 .0115 .0121 .0323 .0110 .0184 .0449 .0460 .0161 .0401 FR .0216 .0191 .0468 .0168 .0133 .0343 .0166 .0196 .0461 .0212 .0169 .0411 JA .0311 .0220 .0540 .0213 .0236 .0250 .0153 .1031 .0444 .0144 .0677 .0286 IT .0200 .0357 .0768 .0172 .0246 .0576 .0150 .0378 .0811 .0171 .0278 .0614 SV .0188 .0287 .0658 .0168 .0203 .0506 .0176 .0302 .0677 .0169 .0224 .0518 NL .0184 .0316 .0766 .0167 .0199 .0533 .0169 .0297 .0710 .0170 .0221 .0551 RU .0142 .0486 .0946 .0138 .0298 .0604 .0130 .0431 .0804 .0246 .0405 .0761 Table 3: Results for retrieval model initialized with MBERT (Devlin et al., 2019).", "coders, which preserves multilingual knowledge from the pre-trained model and improves cross-lingual transfer learning (Wu and Dredze, 2019).", "All hyperparameters are manually tuned on the English validation set.", "We use almost the same hyperparameters as Liang et al. 
"Specifically, we use the Adam optimizer with a 1e-5 initial learning rate, default $\beta$ parameters, and a 1024 batch size.", "For the monolingual and zero-shot settings, we use four epochs for English and 5000 steps for other languages (equivalent to two to nine epochs depending on the language).", "We use one epoch for the MT setting and 40,000 steps for the multilingual setting.", "The first 20% of training steps are warmup steps.", "We freeze the embedding layer during training for faster training.", "All models are trained with eight Tesla V100 GPUs.", "It takes about an hour to train the generation model for 1000 steps (covering about one million examples).", "For the retrieval model, an epoch on the English training set (about 48 million examples) takes about seven hours.", "We experiment with the two baselines from Section 4 on MRS.", "We first compare the models in English, where we have enough training data and human referees.", "We then build models for other languages and compare the training settings listed in Section 3.2.", "Figure 2 compares the generation and retrieval models in the English monolingual setting.", "The generation model not only has a higher relevance (ROUGE) score but can also generate more diverse replies (higher Dist scores).", "For English, we also ask three human referees to compare the model outputs on a subset of 500 test examples.", "Again, the referees prefer the generation model more often than the retrieval model (Figure 2).", "We look at some generated responses to understand the models qualitatively.", "In the top two examples in Table 2, the generation model produces replies highly specific to the input message.", "In contrast, the retrieval model fails to find a relevant reply, because the response set does not cover these topics.", "This explains why the generation model has much higher ROUGE and distinct n-gram scores than the retrieval model.", "However, the expressiveness comes at the cost of a lack of control over the generated replies.", "The generation model sometimes produces incoherent replies that are repetitive and/or contradictory, as shown in the bottom two examples of Table 2.", "For the retrieval model, we can easily avoid these problems by curating the fixed response set.", "Table 4: Results for the generation model.", "These degenerative behaviors are observed in other text generation tasks and can be mitigated by changing training and decoding objectives (Holtzman et al., 2020; Welleck et al., 2020).", "We leave these directions for future research.", "After comparing English models, we experiment on other languages using the settings from Section 3.2.", "Retrieval Model.", "Table 3 shows results for the retrieval model when initialized with MBERT.", "The retrieval model can generalize fairly well across languages, as the ROUGE in the zero-shot setting is often close to the monolingual setting.", "This result confirms that initializing with MBERT is an effective strategy for cross-lingual generalization.",
"Training on MT data is usually worse than training in the zero-shot setting.", "This is possibly because the MT system may create artifacts that do not appear in organic data (Artetxe et al., 2020).", "For the multilingual model, the training-language ROUGE scores are lower than with monolingual training (gray cells in Table 3).", "However, multilingual training sometimes leads to better ROUGE on unseen languages compared to transferring from only English (zero-shot).", "Previous work observes similar results on other tasks, where multilingual training hurts training languages but helps generalization to unseen languages (Johnson et al., 2017; Conneau et al., 2020; Wang et al., 2020).", "Finally, Appendix A shows similar results when initializing with XLM-R (Conneau et al., 2020).", "Generation Model.", "Table 4 shows results for the generation model.", "In the monolingual setting, the generation model has higher scores than the retrieval model on most languages, consistent with the English result (Figure 2).", "However, unlike the retrieval model, the generation model fails to generalize across languages in the zero-shot setting, despite using Unicoder-XDAE for initialization.", "We do not show zero-shot results in Table 4, because the ROUGE scores are close to zero for non-English languages.", "After training on English data, the model always produces English replies, regardless of the input language; i.e., the generation model forgets multilingual knowledge acquired during pre-training (Kirkpatrick et al., 2017).", "This result is surprising because Unicoder-XDAE works in the zero-shot setting for other generation tasks (Liang et al., 2020), which suggests that reply suggestion poses unique challenges for cross-lingual transfer learning.", "Interestingly, the multilingual model can generalize to unseen languages; perhaps training on multiple languages regularizes the model to produce replies in the input language.", "Overall, the best method to generalize the generation model across languages is to use machine-translated data.", "MRS opens up opportunities for future research.", "Our experiments use four training settings (Section 3.2), but there are many other settings to explore.", "For example, we can use other combinations of training languages, which may work better for some target languages (Ammar et al., 2016; Cotterell and Heigold, 2017; Ahmad et al., 2019; Lin et al., 2019; Zhang et al., 2020a).", "We are also interested in training on both organic data and MT data; i.e., mixing the zero-shot and MT settings.", "We can also compare other models on MRS.", "For the English monolingual setting, we can initialize the generation model with state-of-the-art language models (Radford et al., 2019; Brown et al., 2020; Zhang et al., 2020c).", "For cross-lingual settings, we can initialize the generation model with several recent pre-trained multilingual SEQ2SEQ models (Chi et al., 2020, 2021; Liu et al., 2020; Tran et al., 2020; Lewis et al., 2020a; Xue et al., 2020).", "For retrieval models, we can experiment with other multilingual encoders that use different pre-training tasks (Artetxe and Schwenk, 2019; Chidambaram et al., 2019; Reimers and Gurevych, 2020; Feng et al., 2020).", "Another idea is to combine the two models.", "Given an input message, we first use a generation model to create a set of candidate replies.", "We then use a retrieval model to compute relevance scores and rerank these candidates.",
"Reranking the output of a generation model helps other natural language processing tasks (Shen et al., 2004; Collins and Koo, 2005; Ge and Mooney, 2006), and previous work uses a similar idea for chatbots (Qiu et al., 2017).", "Our experiment shows that reply suggestion poses unique challenges for cross-lingual generalization, especially for the generation model.", "Future work can study methods to improve cross-lingual generalization methods.", "Some examples include applying adversarial learning (Chen et al., 2018, 2019; Huang et al., 2019), using adapters (Pfeiffer et al., 2020), adaptive transfer (Xia et al., 2021), mixing pre-training and fine-tuning (Phang et al., 2020), and bringing a human in the loop (Yuan et al., 2020).", "We present MRS , a multilingual dataset for reply suggestion.", "We compare a generation and a retrieval baseline on MRS .", "The two models have different strengths in the English monolingual setting and require different strategies to transfer across languages.", "MRS provides a benchmark for future research in both reply suggestion and cross-lingual transfer learning.", "Data Collection.", "No human annotators are involved while creating MRS .", "The examples and response sets of MRS come from publicly available Reddit dumps from Pushshift, which are used in more than a hundred peer-reviewed publications (Baumgartner et al., 2020).", "Privacy.", "Examples in MRS do not have the user-name and are from publicly available data.", "Therefore, we do not anticipate any privacy issues.", "In the pilot study (Section 3.3), we measure the correlation of user CTR with different evaluation metrics.", "To protect user privacy, we only collect aggregated statistics ( CTR ) and use no other information.", "Potential Biased and Toxic Content.", "Despite our best effort to filter toxic contents (Section 2.2), the dataset may not be perfectly cleansed and may have other biases that are typical in open forums (Massanari, 2017; Mohan et al., 2017).", "Users should be aware of these issues.", "We will continue to improve the quality of the dataset.", "Intended Use of MRS .", "Because of the possible biases and inappropriateness in the data, MRS should not be directly used to build production systems (as mentioned in Section 2.2).", "The main use of MRS is to test cross-lingual generalization for text retrieval and generation models, and researchers should be aware of possible ethical issues of Reddit data before using MRS .", "We appreciate the feedback from anonymous reviewers.", "MZ is supported in part by the Office of the Director of National Intelligence ( ODNI ), Intelligence Advanced Research Projects Activity ( IARPA ), via the BETTER Program contract #2019-19051600005.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the of-ficial policies, either expressed or implied, of ODNI , IARPA , or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "objective", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction .", "Despite its preliminary effectiveness, the span prediction model's architectural bias has not been fully understood.", "In this paper, we first investigate the strengths and weaknesses when the span prediction model is used for named entity recognition compared with the sequence labeling framework and how to further improve it, which motivates us to make complementary advantages of systems based on different paradigms.", "We then reveal that span prediction, simultaneously, can serve as a system combiner to re-recognize named entities from different systems' outputs.", "We experimentally implement 154 systems on 11 datasets, covering three languages, comprehensive results show the effectiveness of span prediction models that both serve as base NER systems and system combiners.", "We make all code and datasets available: https:// github.com/neulab/spanner , as well as an online system demo: http://spanner.", "sh .", "Our model also has been deployed into the EXPLAINABOARD (Liu et al., 2021) platform, which allows users to flexibly perform the system combination of top-scoring systems in an interactive way: http://explainaboard.", "nlpedia.ai/leaderboard/task-ner/ .", "The rapid evolution of neural architectures (Kalch-brenner et al., 2014a; Kim, 2014; Hochreiter and Schmidhuber, 1997) and large pre-trained models (Devlin et al., 2019; Lewis et al., 2020) not only drive the state-of-the-art performance of many NLP tasks (Devlin et al., 2019; Liu and Lapata, 2019) to a new level but also change the way", "how researchers formulate the task.", "For example, recent years have seen frequent paradigm shifts for the task of named entity recognition (NER) from token-level tagging , which conceptualize NER as a sequence labeling (SEQLAB ) task (Chiu and Nichols, 2015; Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016; Akbik et al., 2018; Peters et al., 2018; Devlin et al., 2018; Xia et al., 2019; Luo et al., 2020; Lin et al., 2020; Fu et al., 2021), to span-level prediction (SPANNER ) (Li et al., 2020; Mengge et al., 2020; Jiang et al., 2020; Ouchi et al., 2020; Yu et al., 2020), which regards NER either as question answering (Li et al., 2020; Mengge et al., 2020), span classification (Jiang et al., 2020; Ouchi et al., 2020; Yamada et al., 2020), and dependency parsing tasks (Yu et al., 2020).", "However, despite the success of span prediction-based systems, as a relatively newly-explored framework, the understanding of its architectural bias has not been fully understood so far.", "For example, what are the complementary advantages compared with SEQLAB frameworks and how to make full use of them?", "Motivated by this, in this paper, we make two scientific contributions.", "We first investigate what strengths and weaknesses are when NER is conceptualized as a span prediction task .", "To achieve this goal, we perform a fine-grained evaluation of SPANNER systems against SEQLAB systems and find there are clear complementary advantages between these two frameworks.", "For example, SEQLAB -based models are better at dealing with those entities that are long and with low label consistency.", "By contrast, SPANNER systems do better in sentences with more Out-of-Vocabulary (OOV) words and entities with medium length (3.3).", "Secondly, we reveal the unique advantage brought by the architectural bias of the span prediction framework : it can not only be used as a 
"In other words, the span prediction model plays two roles, as shown in Fig. 1:", "(i) as a base NER system; and", "(ii) as a system combiner of multiple base systems.", "We claim that, compared with traditional ensemble learning for the NER task, SPANNER combiners are advantageous in the following aspects: 1. Most existing NER combiners rely on heavy feature engineering and external knowledge (Florian et al., 2003; Wu et al., 2003; Saha and Ekbal, 2013).", "Instead, the SPANNER models we propose for system combination are trained in an end-to-end fashion.", "2. Combining complementarities of different paradigms: most previous works perform NER system combination solely within the sequence labeling framework.", "It is still an understudied topic how systems from different frameworks can help each other.", "3. No extra training overhead and flexibility of use: existing ensemble learning algorithms are expensive, as they usually need to collect training samples by k-fold cross-validation for the system combiner (Speck and Ngomo, 2014), reducing their practicality.", "4. Connecting two separated training processes: previously, the optimization of base NER systems and the ensemble learning for the combiner were two independent processes.", "Our work builds their connection, with the same set of parameters shared over these two processes.", "Experimentally, we first implement 154 systems on 11 datasets, on which we comprehensively evaluate the effectiveness of our proposed span prediction-based system combiner.", "Empirical results show its superior performance against several typical ensemble learning algorithms.", "We further present an application that benefits from the practicality of our proposed methods.", "Specifically, we developed an online demo system based on our proposed method and integrated it into the NER Leaderboard, which makes it convenient for researchers to find the complementarities among different combinations of systems and to search for a new state-of-the-art system.", "NER is frequently formulated as a sequence labeling (SEQLAB) problem (Chiu and Nichols, 2015; Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016), where $X = \{x_1, x_2, \ldots, x_T\}$ is an input sequence and $Y = \{y_1, y_2, \ldots, y_T\}$ is the output label (e.g., B-PER, I-LOC, O) sequence.",
"The goal of this task is to accurately predict entities by assigning an output label $y_t$ to each token $x_t$.", "We take the F1-score (computed with the conlleval script: http://www.clips.uantwerpen.be/conll2000/chunking/conlleval.txt) as the evaluation metric for the NER task.", "To make a comprehensive evaluation, in this paper, we use multiple NER datasets that cover different", "domains and languages.", "CoNLL-2003 (https://www.clips.uantwerpen.be/conll2003/ner/) (Sang and De Meulder, 2003) covers two different languages: English and German.", "Here, we only consider the English (EN) dataset, collected from the Reuters Corpus.", "CoNLL-2002 (https://www.clips.uantwerpen.be/conll2002/ner/) (Sang, 2002) contains annotated corpora in Dutch (NL), collected from De Morgen news, and Spanish (ES), collected from the Spanish EFE News Agency.", "We evaluate both languages.", "OntoNotes 5.0 (https://catalog.ldc.upenn.edu/LDC2013T19) (Weischedel et al., 2013) is a large corpus consisting of three different languages: English, Chinese, and Arabic, involving six genres: newswire (NW), broadcast news (BN), broadcast conversation (BC), magazine (MZ), web data (WB), and telephone conversation (TC).", "Following previous works (Durrett and Klein, 2014; Ghaddar and Langlais, 2018), we utilize different domains in English to test the robustness of the proposed models.", "WNUT-2016 (http://noisy-text.github.io/2016/ner-shared-task.html) and WNUT-2017 (http://noisy-text.github.io/2017/emerging-rare-entities.html) (Strauss et al., 2016; Derczynski et al., 2017) are social media data from Twitter, which were released as shared tasks at WNUT-2016 (W16) and WNUT-2017 (W17).", "Although this is not the first work that formulates NER as a span prediction problem (Jiang et al., 2020; Ouchi et al., 2020; Yu et al., 2020; Li et al., 2020; Mengge et al., 2020), we contribute by (1) exploring how different design choices influence the performance of SPANNER and (2) interpreting the complementary strengths between SEQLAB and SPANNER with different design choices.", "In what follows, we first detail span prediction-based NER systems with the vanilla configuration and the proposed advanced featurization.", "Overall, the span prediction-based framework for NER consists of three major modules: a token representation layer, a span representation layer, and a span prediction layer.", "Given a sentence $X = \{x_1, \ldots, x_n\}$ with $n$ tokens, the token representations $h_i$ are computed as $h_1, \ldots, h_n = \text{BILSTM}(\text{EMB}(x_1), \ldots, \text{EMB}(x_n))$,", "where EMB(·) is the pre-trained embeddings, such as the non-contextualized embeddings GloVe (Pennington et al., 2014) or the contextualized pre-trained embeddings of BERT (Devlin et al., 2018).", "BILSTM is the bidirectional LSTM (Hochreiter and Schmidhuber, 1997).", "First, we enumerate all the possible $m$ spans $S = \{s_1, \ldots, s_i, \ldots, s_m\}$ for sentence $X = \{x_1, \ldots, x_n\}$ and then re-assign a label $y \in \mathcal{Y}$ to each span $s$.", "For example, for the sentence London(1) is(2) beautiful(3), the possible spans' (start, end) indices are $\{(1,1), (2,2), (3,3), (1,2), (2,3), (1,3)\}$, and the labels of these spans are all O, except that (1,1) (London) is LOC.", "We use $b_i$ and $e_i$ to denote the start- and end-index of the span $s_i$, respectively, with $1 \le b_i \le e_i \le n$.", "Then each span can be represented as $s_i = \{x_{b_i}, x_{b_i+1}, \ldots, x_{e_i}\}$.", "The vectorial representation of each span is calculated based on the following parts: Boundary embedding: the span representation is calculated as the concatenation of the start- and end-token representations, $z_i^{b} = [h_{b_i}; h_{e_i}]$. Span length embedding: we additionally featurize each span representation by introducing its length embedding $z_i^{l}$, which is obtained from a learnable look-up table.",
"where score(·) is a function that measures the compatibility between a specified label and a span:", "Heuristic Decoding: regarding the flat NER task without nested entities, we present a heuristic decoding method to avoid the prediction of overlapped spans.", "Specifically, for those overlapped spans, we keep the span with the highest prediction probability and drop the others.", "Setup: To explore how different mechanisms influence the performance of span prediction models, we design four specific model variants: (i) generic SPANNER: only using the boundary embedding; (ii) boundary embedding + span length embedding; (iii) boundary embedding + heuristic decoding; (iv) heuristic decoding + (ii).", "Results: As shown in Tab.", "1, we can observe that: (i) heuristic decoding is an effective method that can boost the generic model's performance on all the datasets.", "(ii) The span length feature works most of the time.", "Performance improves on 10 of the 11 datasets compared with the generic model.", "(iii) By combining the two mechanisms together, significant improvements are achieved on all datasets.", "The holistic results in Tab.", "1 make it hard for us to interpret the relative advantages of NER systems with different structural biases.", "To address this problem, we follow the interpretable evaluation idea (Fu et al., 2020a,c) that proposes to break down the holistic performance into different buckets from different perspectives and use a performance heatmap to illustrate the relative advantages between two systems, i.e., system-pair diagnosis.", "Setup: As a comparison, we replicate five top-scoring SEQLAB-based NER systems: sq1: 92.41, sq2: 92.01, sq3: 92.46, sq4: 92.11, sq5: 91.99.", "Notably, to make a fair comparison, all five SEQLABs have performance close to the above SPANNERs.", "Although we detail the configurations of these systems later (to reduce content redundancy) in 5.1, Tab.", "3, this does not influence our analysis in this section.", "Regarding interpretable evaluation, we choose the CoNLL-2003 (EN) dataset as a case study and break down the holistic performance into four groups based on different attributes.", "Specifically, given an entity $e$ that belongs to a sentence $S$, the following attribute feature functions can be defined: $\text{eLen} = \text{len}(e)$: entity length; $\text{sLen} = \text{len}(S)$: sentence length; $\text{oDen} = |\text{OOVs}| / \text{len}(S)$: density of OOV words; $\text{eCon} = |\{\epsilon \mid \text{label}(\epsilon) = \text{label}(e), \epsilon \in \mathcal{E}\}| / |\mathcal{E}|$: entity label consistency, where len(·) counts the number of words, label(e) gets the label for span $e$, and $\mathcal{E}$ denotes all spans in the training set.", "$|\text{OOVs}|$ is the number of OOV words in the sentence.", "We additionally use a training-set-dependent attribute: entity label consistency (eCon), which measures how consistently a particular entity is labeled with a particular label.", "For example, if an entity with the label LOC has a higher eCon, it means that the entity is frequently labeled as LOC in the training set.", "Based on the attribute values of entities, we partition test entities into four buckets: extra-small (XS), small (S), large (L), and extra-large (XL).", "(We show detailed bucket intervals in the appendix.)", "For each bucket, we calculate a bucket-wise F1.", "Analysis: As shown in Tab.",
"2, the green area indicates SEQLAB performs better, while the red area implies the span model is better.", "We observe that: (1) The generic SPANNER shows clear complementary advantages with SEQLAB-based systems.", "Specifically, almost all SEQLAB-based models outperform the generic SPANNER when (i) entities are long and have lower label consistency, and (ii) sentences are long and have fewer OOV words.", "By contrast, SPANNER is better at dealing with entities located in sentences with more OOV words and with entities of medium length.", "(2) By introducing heuristic decoding and span length features, SPANNERs do slightly better on long sentences and long entities, but are still underperforming on entities with lower label consistency.", "The complementary advantages presented by SEQLABs and SPANNERs motivate us to search for an effective framework to utilize them.", "The development of ensemble learning for NER systems, so far, lags behind the architectural evolution of the NER task.", "Based on our evidence from 3.3, we propose a new ensemble learning framework for NER systems.", "SPANNER as System Combiner: The basic idea is that each span prediction NER (SPANNER) system itself can also be conceptualized as a system combiner to re-recognize named entities from different systems' outputs.", "Specifically, Fig. 2 illustrates the general workflow.", "Here, SPANNER plays two roles: (1) as a base model to identify potential named entities; (2) as a meta-model (combiner) to calculate the score for each potential named entity.", "We are given a test span $s$ and the set of predicted labels $\hat{L}$ from the $m$ base systems ($|\hat{L}| = m$).", "Let $\mathcal{L}$ be the NER label set, where $|\mathcal{L}| = c$ and $c$ is the number of pre-defined NER classes (i.e., LOC, ORG, PER, MISC, O in CoNLL-2003 (EN)).", "For each $l \in \mathcal{L}$ we define $P(s, l)$ as the combined probability that span $s$ can be assigned label $l$, which is calculated as: $P(s, l) = \sum_{\hat{l} \in \hat{L},\, \hat{l} = l} \text{score}(s, l)$ (5), where score(·) is defined as in Eq. 4.", "Intuitively, Fig.",
"2 gives an example of how SPANNER re-recognizes the entity New York based on the outputs from four base systems.", "As a base system, SPANNER predicts this span as LOC, and the label will be considered as one input of the combiner model.", "The prediction labels of the other three base models are LOC, ORG, and O, respectively.", "Then, as a combiner, SPANNER calculates the score for each predicted label.", "We sum the weights (scores) of the same label that are predicted by the base models and select the label that achieves the maximum score as the output of the combiner.", "To make a thorough evaluation of SPANNER as a system combiner, as illustrated in Tab.", "3, we first implement 10 SEQLAB-based systems that cover rich combinations of popular neural components (we view BERT as the subword-sensitive representation because we get the representation of each subword).", "To be fair to other system combination methods, we also include two SPANNERs as base systems.", "To reduce the uncertainty, we run experiments with multiple trials and also perform the significance test with the Wilcoxon Signed-Rank Test (Wilcoxon et al., 1970) at p < 0.05.", "We keep the testing result from the model with the best performance on the development set, terminating training when the performance on the development set has not improved for 20 epochs.", "We extensively explore six system combination methods as competitors, which involve supervised and unsupervised fashions.", "Weighted voting based on overall F1-score (VOF1): The taggers are combined according to their overall F1-scores on the", "testing set.", "Weighted voting based on class F1-score (VCF1): Also weighted voting, but the weights are the categories' F1-scores.", "Stacking (a.k.a. Stacked Generalization) is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy (Ting and Witten, 1997).", "To make a comprehensive evaluation, we investigated three popular methods that are supervised, thereby requiring additional training samples.", "Specifically, these are: Support Vector Machines (SVM) (Hearst et al., 1998), a supervised machine learning algorithm that can train quickly over large datasets.", "Therefore, the ensemble classifier is usually an SVM.", "Random Forest (RF) (Breiman, 2001) is a common ensemble classifier that randomly selects a subset of training samples and variables to make multiple decision trees.", "Extreme Gradient Boosting (XGB) (Chen and Guestrin, 2016) is also an ensemble machine learning algorithm.", "It is based on decision trees and gradient boosting (Friedman et al., 2000).", "Notably, compared to our model, these methods are computationally expensive since they require external training samples for the system combiner (see Table 4 for the combined results of SPANNER and the six baseline combiners on CoNLL-2003 (EN) and OntoNotes5.0-BN over the combination cases),",
"which is achieved by", "(i) collecting training data by performing five-fold cross-validation (Wu et al., 2003; Florian et al., 2003) on the original training samples of each dataset, and", "(ii) training a system combiner based on the collected samples.", "Setup: Most previous works on system combination only consider one combination case where all base systems are put together.", "In this setting, we aim to explore more fine-grained combination cases.", "Specifically, we first sort systems based on their performance in descending order to get a list $m$.", "We refer to $m[i:k]$ as one combination case, dubbed a combined interval, which represents systems whose ranks are between $i$ and $k$.", "In practice, we consider 23 combination cases, shown in Tab.", "4. To examine whether the SPANNER is significantly better than the other baseline methods, we conduct the significance test with the Wilcoxon Signed-Rank Test (Wilcoxon et al., 1970) at p < 0.05.",
"Results: Tab.", "4 shows the results of our SPANNER against six baseline combiner methods on CoNLL-2003 and OntoNotes5.0-BN under a nuanced view.", "We can observe that: (1) Overall, our proposed SPANNER outperforms all other competitors significantly (p-value < 0.05) on most of the combination cases, including the one (all) that most previous works have explored.", "(2) As more base systems are introduced in descending order, the combined performance improves gradually.", "The combination performance decreases as the best single systems are removed, which holds for all the combiners.", "(3) The best performance is always achieved on a combination case with more models, instead of one with a small number of top-scoring base models.", "This suggests that introducing more base models with diverse structures provides richer complementary information.", "Setup: To also explore the effectiveness of SPANNER on the other datasets, we calculate the average performance of each system combination method", "over the 23 combination cases.", "Results: Tab.", "5 shows the results, and we can observe that, compared with the three voting combiners, SPANNER achieves the best average combination performance with the lowest standard deviation, which holds for seven of the nine testing datasets with statistical significance at p < 0.05.", "Specifically, the performance gap between SPANNER and the other combiners is larger on datasets from the web domain (WB) and Twitter (W16, W17).", "Setup: The above experiments have shown the superior performance of SPANNER on system combination.", "To further investigate where the gains of SPANNER come from, similar to 3.3, we perform a fine-grained evaluation on the CoNLL-2003 dataset using one combination case to interpret how SPANNER outperforms (i) the base systems and (ii) the other baseline combiners.", "The combination case contains the base systems sq0-sq5 together with sp1 and sp2 (model details are given in Tab. 3).", "Results: As shown in Tab.", "6, we find: (1) SPANNER vs. base systems: the improvements over all base systems largely come from entities with low label consistency (eCon: XS, S).", "Particularly, base systems with SEQLAB benefit a lot on short entities, while base systems with SPANNER gain mainly on long entities.", "(2) SPANNER vs.",
Other combiners : as a system combiner, the improvement of SPANNER against other baselines mainly comes from low label consistency ( eCon : XS, S).", "By contrast, traditional combiners surpass SPANNER when dealing with long sentences ( sLen : XL).", "NER as Different Tasks Although NER is commonly formulated as a sequence labeling task (Chiu and Nichols, 2015; Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016; Akbik et al., 2018; Peters et al., 2018; Devlin et al., 2018; Xia et al., 2019; Akbik et al., 2019; Luo et al., 2020; Lin et al., 2020), recently other new forms of frameworks have been explored and have shown impressive results.", "For example, (Jiang et al., 2020; Ouchi et al., 2020; Yu et al., 2020) shift NER from token-level tagging to span-level prediction task while (Li et al., 2020; Mengge et al., 2020) conceptualize it as reading comprehension task.", "In this work we aim to interpret the complementarity between sequence labeling and span prediction.", "System Combination Traditionally, system combination was used to improve the performance of statistical MT systems (Gonzalez-Rubio et al., 2011; Watanabe and Sumita, 2011; Duh et al., 2011; Mizumoto and Matsumoto, 2016).", "Some recent work (Zhou et al., 2017; Huang et al., 2020) also extended this method to neural MT where the meta-model and base systems are all neural models.", "There is a handful of works about system combination for NER.", "(Wu et al., 2003; Florian et al., 2003) investigated stacking and voting methods for combining strong classifiers.", "Ekbal and Saha (2011); Zhang et al. (2014) proposes a weighted voting approach based on differential evolution.", "These works commonly require training samples and rely heavily on feature engineering.", "Co-evolution of NLP Systems and their combiners Systems for NLP tasks (e.g., NER model) and their combiners (e.g., ensemble learning for NER) are developing in two parallel directions.", "This paper builds the connection between them and proposes a model that can be utilized as both a base NER system and a system combiner.", "Our work opens up a direction toward making the algorithms of NLP models and system combination co-evolved.", "The unified idea can be applied to other NLP tasks, and some traditional methods like reranking in syntactic parsing can be re-visited .", "For example, we can formulate constituency parsing (Jiang et al., 2020) as well as its re-ranking (Collins and Koo, 2005; Huang, 2008) as a span prediction (Stern et al., 2017) problem, which is be unified and parameterized with the same form.", "CombinaBoard It has become a trend to use a Leaderboard (e.g., paperwithcode 10 ) to track current progress in a particular field, especially with the rapid emergence of a plethora of models.", "Leaderboard makes us pay more attention to and even obsess over the state-of-the-art systems (Ethayarajh and Jurafsky, 2020).", "We argue that Leaderboard with an effective system combination (dubbed COMBINABOARD ) feature would allow researchers to quickly find the complementarities among different systems.", "As a result, the value of a worse-ranked model still could be observed through its combined results.", "In this paper, we make the first step towards this by releasing a preliminary COMBINABOARD for the NER task http://spanner.sh .", "Our model also has been deployed into the EXPLAINABOARD (Liu et al., 2021) platform, which allows users to flexibly perform system combination of top-scoring systems in an interactive way: http://explainaboard.", 
"nlpedia.ai/leaderboard/task-ner/ 10 https://paperswithcode.com/ Acknowledgements We thank Professor Graham Neubig and anonymous reviewers for valuable feedback and helpful suggestions.", "This work was supported by the Air Force Research Laboratory under agreement number FA8750-19-2-0200.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government." ]
[ "abstain", "abstain", "objective", "result", "result", "other", "abstain", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "method", "objective", "abstain", "objective", "objective", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "abstain" ]
[ "Answer retrieval is to find the most aligned answer from a large set of candidates given a question.", "Learning vector representations of questions/answers is the key factor.", "Question-answer alignment and question/answer semantics are two important signals for learning the representations.", "Existing methods learned semantic representations with dual encoders or dual variational auto-encoders.", "The semantic information was learned from language models or question-to-question (answer-to-answer) generative processes.", "However, the alignment and semantics were too separate to capture the aligned semantics between question and answer.", "In this work, we propose to cross variational auto-encoders by generating questions with aligned answers and generating answers with aligned questions.", "Experiments show that our method outperforms the state-of-the-art answer retrieval method on SQuAD.", "Answer retrieval is to find the most aligned answer from a large set of candidates given a question (Ahmad et al., 2019; Abbasiyantaeb and Momtazi, 2020).", "It has been paid increasing attention by the NLP and information retrieval community (Yoon et al., 2019; Chang et al., 2020).", "Sentence-level answer retrieval approaches rely on learning vector representations (i.e., embeddings) of questions and answers from pairs of question-answer texts.", "The question-answer alignment and question/answer semantics are expected to be preserved in the representations.", "In other words, the question/answer embeddings must reflect their semantics in the texts of being aligned as pairs.", "One popular scheme Dual-Encoders (also known as Siamese network (Triantafillou et al., 2017; Das et al., 2016)) has two separate encoders to generate question and answer embeddings and Table 1: The answer at the bottom of this table was aligned to 17 different questions at the sentence level.", "Question (1): What three stadiums did the NFL decide between for the game?", "Question (2): What three cities did the NFL consider for the game of Super Bowl 50?", "down Super Bowl 50's location to?", "Answer: The league eventually narrowed the bids to three sites: New Orleans Mercedes-Benz Superdome, Miami Sun Life Stadium, and the San Francisco Bay Area's Levi's Stadium.", "a predictor to match two embedding vectors (Cer et al., 2018; Yang et al., 2019).", "Unfortunately, it has been shown difficult to train deep encoders with the weak signal of matching prediction (Bow-man et al., 2015).", "Then there has been growing interests in developing deep generative models such as variational auto-encoders (VAEs) and generative adversial networks (GANs) for learning text embeddings (Xu et al., 2017; Xie and Ma, 2019).", "As shown in Figure", "1(b), the scheme of Dual-VAEs has two VAEs, one for question and the other for answer (Shen et al., 2018).", "It used the tasks of generating reasonable question and answer texts from latent spaces for preserving semantics into the latent representations.", "Although Dual-VAEs was trained jointly on question-to-question and answer-to-answer recon-struction, the question and answer embeddings can only preserve isolated semantics of themselves.", "In the model, the Q-A alignment and Q/A semantics were too separate to capture the aligned semantics (as we mentioned at the end of the first paragraph) between question and answer.", "Learning the alignment with the weak Q-A matching signal, though now based on generatable embeddings, can lead to confusing results, when (1) dif!", "ferent questions have 
similar answers and (2) similar questions have different answers.", "Table 1 shows an example from SQuAD: 17 different questions share the same sentence-level answer.", "Our idea is that if the aligned semantics were preserved, the embedding of a question would be able to generate its answer, and the embedding of an answer would be able to generate the corresponding question.", "In this work, we propose to cross variational auto-encoders, as shown in Figure 1(c), by reconstructing answers from question embeddings and reconstructing questions from answer embeddings.", "Note that, compared with Dual-VAEs, the encoders do not change, but the decoders work across the question and answer semantics.", "Experiments show that our method improves MRR and R@1 over the state-of-the-art method by 1.06% and 2.44% on SQuAD, respectively.", "On a subset of the data where any answer has at least 10 different aligned questions, our method improves MRR and R@1 by 1.46% and 3.65%, respectively.", "Answer retrieval (AR) is the task of finding, for a given question, the most similar answer among multiple candidate answers (Abbasiyantaeb and Momtazi, 2020).", "Another popular task on the SQuAD dataset is machine reading comprehension (MRC), which asks the machine to answer questions based on one given context (Liu et al., 2019).", "In this section, we review existing work related to answer retrieval and variational autoencoders.", "Answer Retrieval.", "It has been widely studied with information retrieval techniques and has received increasing attention in recent years with deep neural network approaches.", "Recent works have proposed different deep neural models for text-based QA which compare two segments of text and produce a similarity score.", "Document-level retrieval (Chen et al., 2017; Wu et al., 2018; Seo et al., 2018, 2019) has been studied on many public datasets, including SQuAD (Rajpurkar et al., 2016), MsMarco (Nguyen et al., 2016), and NQ (Kwiatkowski et al., 2019).", "ReQA investigated sentence-level retrieval and provided strong baselines over a reproducible construction of a retrieval evaluation set from the SQuAD data (Ahmad et al., 2019).", "We also focus on sentence-level answer retrieval.", "Variational Autoencoders.", "A VAE consists of an encoder and a generator network, which encode a data example to a latent representation and generate samples from the latent space, respectively (Kingma and Welling, 2013).", "Recent advances in neural variational inference have produced deep latent-variable models for natural language processing tasks (Bowman et al., 2016; Kingma et al., 2016; Hu et al., 2017a,b; Miao et al., 2016).", "The general idea is to map the sentence into a continuous latent variable, or code, via an inference network (encoder), and then use the generative network (decoder) to reconstruct the input sentence conditioned on samples from the latent code (via its posterior distribution).", "Recent work in cross-modal generation adopted cross-alignment VAEs to jointly learn representative features from multiple modalities (Liu et al., 2017; Shen et al., 2017; Schonfeld et al., 2019).", "DeConv-LVM (Shen et al., 2018) and VAR-Siamese (Deudon, 2018) are most relevant to us; both adopt Dual-VAEs models (see Figure 1(b)) for the two-text-sequence matching task.", "In our work, we propose Cross-VAEs for question-answer alignment to enhance QA matching performance.", "Problem Definition.", "Suppose we
have a question set $Q$ and an answer set $A$.", "Each question and each answer consist of a single sentence.", "Each question $q \in Q$ and answer $a \in A$ can be represented as $(q, a, y)$, where $y$ is a binary variable indicating whether $q$ and $a$ are aligned.", "Therefore, the sentence-level retrieval task can be considered a matching problem.", "Given a question $q$ and a list of answer candidates $C(q) \subseteq A$, our goal is to predict $p(y \mid q, a)$ for each input question $q$ with each answer candidate $a \in C(q)$.", "Learning cross-domain constructions under the generative assumption is essentially learning the conditional distributions $p(q \mid z_a)$ and $p(a \mid z_q)$, where the two continuous latent variables $z_q, z_a \in \mathbb{R}^{d_z}$ are independently sampled from $p(z_q)$ and $p(z_a)$: $p(q \mid a) = \mathbb{E}_{z_a \sim p(z_a \mid a)}[p(q \mid z_a)]$ and $p(a \mid q) = \mathbb{E}_{z_q \sim p(z_q \mid q)}[p(a \mid z_q)]$.", "The question-answer pair matching can then be represented as the conditional distribution $p(y \mid z_q, z_a)$ over the latent variables $z_q$ and $z_a$.", "Objectives.", "We denote by $E_q$ and $E_a$ the question and answer encoders that infer the latent variables $z_q$ and $z_a$ from a given question-answer pair $(q, a, y)$, and by $D_q$ and $D_a$ two different decoders that generate the corresponding question and answer $q$ and $a$ from the latent variables $z_a$ and $z_q$.", "Then, we have the cross-construction objective function: $\mathcal{L}_{cross}(\theta_E, \theta_D) = y\,\mathbb{E}_{q \sim Q}[\log p_{\theta_D}(q \mid a, E(a))] + y\,\mathbb{E}_{a \sim A}[\log p_{\theta_D}(a \mid q, E(q))]$.", "The variational autoencoder (Kingma and Welling, 2013) imposes a KL-divergence regularizer to align both posteriors $p_{\theta_E}(z_q \mid q)$ and $p_{\theta_E}(z_a \mid a)$ with the priors: $\mathcal{L}_{KL}(\theta_E) = y\,\mathbb{E}_{q \sim Q}[D_{KL}(p_{\theta_E}(z_q \mid q) \,\|\, p(z_q))] + y\,\mathbb{E}_{a \sim A}[D_{KL}(p_{\theta_E}(z_a \mid a) \,\|\, p(z_a))]$, (5) where $\theta_E, \theta_D$ are the parameters to be optimized.", "Besides, we have the question-answer matching loss from $f(y \mid q, a)$: $\mathcal{L}_{matching}(\theta_f) = [\,y \log p_{\theta_f}(y \mid z_q, z_a) + (1-y)\log(1 - p_{\theta_f}(y \mid z_q, z_a))\,]$, (6) where $f$ is a matching function and $\theta_f$ are its parameters to be optimized.", "Finally, in order to allow the model to balance between maximizing the variational evidence lower bound (ELBO) and minimizing the question-answer matching loss, a joint training objective is given by: $\mathcal{J} = \alpha\,\mathcal{L}_{cross} - \beta\,\mathcal{L}_{KL} + \gamma\,\mathcal{L}_{matching}$, (7) where $\alpha$, $\beta$ and $\gamma$ are introduced as hyperparameters to control the importance of each task.", "Dual Encoders.", "We use Gated Recurrent Units (GRUs) as encoders to learn contextual word embeddings (Cho et al., 2014).", "Question and answer embeddings are reduced by a weighted sum through multi-hop self-attention (Lin et al., 2017) over the GRU states and then fed into two linear transformations to obtain the mean and standard deviation of $\mathcal{N}(z_q; \mu_q, \mathrm{diag}(\sigma_q^2))$ and $\mathcal{N}(z_a; \mu_a, \mathrm{diag}(\sigma_a^2))$.", "Dual Decoders.", "We adopt another Gated Recurrent Unit (GRU) for generating the token sequences conditioned on the latent variables $z_q$ and $z_a$.", "Question-Answer Matching.", "We adopt cosine similarity with $l_2$ normalization to measure the matching probability of a question-answer pair.", "Our experiments were conducted on SQuAD 1.1 (Rajpurkar et al., 2016).", "It has over 100,000 questions composed to be answerable by text from Wikipedia documents.", "Each question has one corresponding answer sentence extracted from the Wikipedia document.", "Table 2: Performance of answer retrieval on SQuAD.", "Since the test set is not publicly available, we partition the dataset into 79,554 (training) / 7,801 (dev) / 10,539 (test) examples.", "InferSent (Conneau et al., 2017).",
al., 2017) .", "It is not explicitly designed for answer retrieval, but it produces results on semantic tasks without requiring additional fine tuning.", "USE-QA (Yang et al., 2019) .", "It is based on Universal Sentence Encoder (Cer et al., 2018), but trained with multilingual QA retrieval and two other tasks: translation ranking and natural language inference.", "The training corpus contains over a billion question answer pairs from popular online forums and QA websites (e.g, Reddit).", "QA-Lite.", "Like USE-QA, this model is also trained over online forum data based on transformer.", "The main differences are reduction in width and depth of model layers, and sub-word vocabulary size.", "BERTQA (Devlin et al., 2019) .", "BERTQA first concatenates the question and answer into a text sequence [[ CLS ] , Q, [ SEP ] , A, [ SEP ]] , then passes through a 12-layers BERT and takes the [ CLS ] vector as input to a binary classifier.", "SenBERT (Reimers and Gurevych, 2019) .", "It consists of twin structured BERT-like encoders to represent question and answer sentence, and then applies a similarity measure at the top layer.", "Implementation details.", "We initialize each word with a 768-dim BERT token embedding vector.", "If a word is not in the vocabulary, we use the average vector of its sub-word embedding vectors in the vocabulary.", "The number of hidden units in GRU encoder are all set as 768.", "All decoders are multi-layer perceptions (MLP) with one 768 units hidden layer.", "The latent embedding size is 512.", "The model is trained for 100 epochs by SGD using Adam optimizer (Kingma and Ba, 2014).", "For the KL-divergence, we use an KL cost annealing scheme (Bowman et al., 2016), which serves the purpose of letting the VAE learn useful representations before they are smoothed out.", "We increase the weight of the KL-divergence by a rate of 2 /epochs per epoch until it reaches 1.", "We set learning rate as 1e-5, and implemented on Pytorch.", "Competitive Methods.", "We compare our proposed method cross variational autoencoder (Cross-VAEs) with dual-encoder model and dual variational autoencoder (Dual-VAEs).", "For fair comparisons, we all use GRU as encoder and decoder, and keep all other hyperparameters the same.", "Evaluation Metrics.", "The models are evaluated on retrieving and ranking answers to questions using three metrics, mean reciprocal rank (MRR) and recall at K (R@K).", "R@K is the percentage of correct answers in topK out of all the relevant answers.", "MRR represents the average of the reciprocal ranks of results for a set of queries.", "Comparing performance with baselines.", "As shown in Table 2, two BERT based models do not perform well, which indicates fune tuning BERT may not be a good choice for answer retrieval task due to unrelated pre-training tasks (e.g, masked language model).", "In contrast, using BERT token embedding can perform better in our retrieval task.", "Our proposed method outperforms all baseline methods.", "Comparing with USE-QA, our method improves MRR and R@1 by +1.06% and +2.44% on SQuAD, respectively.", "In addition, Dual variational autoencoder (Dual-VAEs) does not make much improvement on question answering retrieval task because it can only preserve isolated semantics of themselves.", "Our 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Questions Answer", "proposed crossing variational autoencoder (Cross-VAEs) could outperform dual-encoder model and dual variational autoencoder model, which improves MRR and R@1 by +1.23%/+0.81% and +0.90%/+0.59%, 
respectively.", "Analyzing performance on sub-dataset.", "We extract a subset of SQuAD, in which any answer has at least eight different questions.", "As shown in Table 3, our proposed cross variational autoencoder (Cross-VAEs) could outperform baseline methods on the subset.", "Our method improves MRR and R@1 by +1.46% and +3.65% over USE-QA.", "Cross-VAEs significantly improve the performance when an answer has multiple aligned questions.", "Additionally, SSE of our method is smaller than that of USE-QA.", "Therefore, the questions of the same answer are closer in the latent space.", "Figures", "2(a) and", "2(b) visualize embeddings of 14 questions of the same answer.", "We observe that crossing variational autoencoders (CrossVAE) can better capture the aligned semantics between questions and answers, making latent representations of questions and answers more prominent.", "Figure", "2(c) demonstrates two of example questions and corresponding answers produced by USE-QA and CrossVAEs.", "We observe that CrossVAEs can better distinguish similar answers even though they all share several same words with the question.", "Given a candidate question, answer retrieval aims to find the most similar answer text between candidate answer texts.", "In this paper, We proposed to cross variational autoencoders by generating questions with aligned answers and generating answers with aligned questions.", "Experiments show that our method improves MRR and R@1 over the best baseline by 1.06% and 2.44% on SQuAD.", "We thank Drs.", "Nicholas Fuller, Sinem Guven, and Ruchi Mahindru for their constructive comments and suggestions.", "This project was partially supported by National Science Foundation (NSF) IIS-1849816 and Notre Dame Global Gateway Faculty Research Award." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "result", "result", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "objective", "result", "other", "other", "other" ]
[ "NLP practitioners often want to take existing trained models and apply them to data from new domains.", "While fine-tuning or few-shot learning can be used to adapt a base model, there is no single recipe for making these techniques work; moreover, one may not have access to the original model weights if it is deployed as a black box.", "We study how to improve a black box model's performance on a new domain by leveraging explanations of the model's behavior.", "Our approach first extracts a set of features combining human intuition about the task with model attributions generated by black box interpretation techniques, then uses a simple calibrator, in the form of a classifier, to predict whether the base model was correct or not.", "We experiment with our method on two tasks, extractive question answering and natural language inference, covering adaptation from several pairs of domains with limited target-domain data.", "The experimental results across all the domain pairs show that explanations are useful for calibrating these models, boosting accuracy when predictions do not have to be returned on every example.", "We further show that the calibration model transfers to some extent between tasks.", "1 1 Introduction With recent breakthroughs in pre-training, NLP models are showing increasingly promising performance on real-world tasks, leading to their deployment at scale for translation, sentiment analysis, and question answering.", "These models are sometimes used as black boxes, especially if they are only available as a service through APIs 2 or if end users do not have the resources to fine-tune the 1 Code available: https://github.com/xiye17/ InterpCalib 2 Google Translate, the Perspective API https:// perspectiveapi.com/ , and MonkeyLearn https:// monkeylearn.com/monkeylearn-api/ being three examples.", "models themselves.", "This black-box nature poses a challenge when users try to deploy models on a new domain that diverges from the training domain, usually resulting in performance degradation.", "We investigate the task of domain adaptation of black box models: given a black box model and a small number of examples from a new domain, how can we improve the model's generalization performance on the new domain?", "In this setting, note that we are not able to update the model parameters, 6199 which makes transfer and few-shot learning techniques inapplicable.", "However, we can still make the model more effective in practice by learning a calibrator , or a separate model to make a binary decision of whether the black box model is likely to be correct or not on a given instance.", "While not fully addressing the domain adaptation problem, calibrating the model can make it more useful in practice, as we can recognize when it is likely to make mistakes (Guo et al., 2017; Kamath et al., 2020; Desai and Durrett, 2020) and modify our deployment strategy accordingly.", "This paper explores how explanations can help address this task.", "We leverage black box feature attribution techniques (Ribeiro et al., 2016; Lundberg and Lee, 2017) to identify key input features the model is leveraging, even without access to model internal representations.", "As shown in Figure 1, we perform calibration by connecting model interpretations with hand-crafted heuristics to extract a set of features describing the reasoning of the model.", "For the question answering setting depicted in the figure, answers turn out to be more reliable when the tokens of a particular set of tags (e.g., proper nouns) 
in the question are strongly considered.", "We extract a set of features describing the attribution values of different tags.", "Using a small number of examples in the target domain, we can train a simple calibrator for the black box model.", "Our approach is closely related to the recent line of work on model behavior and explanations.", "Chandrasekaran et al. (2018) and Hase and Bansal (2020) show that explanations can help users predict model decisions in some ways, and Ye et al. (2021) show how these explanations can be semi-automatically connected to model behavior based on manually crafted heuristics.", "Our approach goes further by using a model to learn these heuristics, instead of handcrafting them or having a human inspect the explanations.", "We test whether our method can improve model generalization performance on two tasks: extractive question answering (QA) and natural language inference (NLI).", "We construct generalization settings for 5 pairs of source and target domains across the two tasks.", "Compared to existing baselines (Kamath et al., 2020) and our own ablations, we find explanations are indeed helpful for this task, successfully improving calibrator performance among all pairs.", "We even find settings where explanation-based calibrators outperform fine-tuning the model on target domain data, which assumes glass-box access to the model's parameters.", "Our analysis further demonstrates generalization of the calibrator models themselves: our calibrator trained on one domain can transfer to another new domain in some cases.", "Moreover, our calibrator can also substantially improve model performance in the selective QA setting.", "Let $x = x_1, x_2, \ldots, x_n$ be a set of input tokens and $\hat{y} = f(x)$ be a prediction from our black box model under consideration.", "Our task in calibration is to assess whether the model prediction on $x$ matches its ground truth $y$.", "We represent this with the variable $t$, i.e., $t = \mathbb{1}\{f(x) = y\}$.", "We explore various calibrator models to perform this task, with our main focus being on calibrator models that leverage explanations in the form of feature attribution.", "Specifically, an explanation for the input $x$ assigns an attribution score $\alpha_i$ to each input token $x_i$, which represents the importance of that token.", "Next, we extract features $u(x, \alpha)$ depending on the input and the explanation, and use the features to learn a calibrator $c: u(x, \alpha) \to t$ for predicting whether a prediction is valid.", "We compare against baselines that do not use explanations in order to answer the core question posed by our paper's title.", "Our evaluation focuses on binary calibration, or classifying whether a model's initial prediction is correct.", "Following recent work in this setting (Kamath et al., 2020), we particularly focus on domain transfer settings where models make frequent mistakes.", "A good calibrator can identify instances where the model has likely made a mistake, so we can return a null response to the user instead of an incorrect one.", "In the remainder of this section, we'll first introduce how we generate the explanations and then how to extract the features $u$ for the input $x$.", "We follow Kamath et al.
(2020) in treating calibration as a binary classification task.", "Devising a good classifier is connected to the goal of accurate estimation of posterior probabilities that calibration has more historically referred to (Guo et al., 2017), but our evaluation focuses on binary accuracy rather than real-valued probabilities.", "We use LIME and SHAP (Lundberg and Lee, 2017) for generating explanations for models instead of other techniques that require access to the model details (e.g., integrated gradients (Sundararajan et al., 2017)).", "The rest of this work only relies on LIME and SHAP to map an input sequence $x$ and a model prediction $\hat{y}$ to a set of importance weights $\alpha$.", "We will briefly summarize the unified framework shared by both methods, and refer readers to the respective papers for additional details.", "LIME and SHAP generate local explanations by approximating the model's predictions on a set of perturbations around the base data point $x$.", "In this setting, a perturbation $\tilde{x}$ with respect to $x$ is a simplified input where some of the input tokens are absent (replaced with a <mask> token).", "Let $z = z_1, z_2, \ldots, z_n$ be a binary vector with each $z_i$ indicating whether $x_i$ is present (using value 1) or absent (using value 0), and $h_x(z)$ be the function that maps $z$ back to the simplified input $\tilde{x}$.", "Both methods seek to learn a local linear classifier $g$ on $z$ which matches the prediction of the original model $f$ by minimizing: $g(z) = \alpha_0 + \sum_{i=1}^{n} \alpha_i z_i$, with $g = \arg\min_g \sum_{z \in Z} \pi_x(z)\,[f(h_x(z)) - g(z)]^2 + \Omega(g)$, where $\pi_x$ is a local kernel assigning a weight to each perturbation $z$, and $\Omega$ is the L2 regularizer over the model complexity.", "The learned feature weight $\alpha_i$ for each $z_i$ then represents the additive attribution (Lundberg and Lee, 2017) of the individual token $x_i$.", "LIME and SHAP differ in the choice of the local kernel $\pi_x$.", "Please refer to the supplementary materials for details of the kernel.", "Armed with these explanations, we now wish to connect the explanations to the reasoning we expect from the task: if the model is behaving as we expect, it may be better calibrated.", "A human might look at the attributions of some important features and decide whether the model is trustworthy in a similar fashion (Doshi-Velez and Kim, 2017).", "Past work has explored such a technique to compare explanation techniques (Ye et al., 2021), or run studies with human users on this task (Chandrasekaran et al., 2018; Hase and Bansal, 2020).", "Our method automates this process by learning what properties of explanations are important.", "We first assign each token $x_i$ one or more human-understandable properties $V(x_i) = \{v_j\}_{j=1}^{m_i}$.", "Each property $v_j \in \mathcal{V}$ is an element of the property space, which includes indicators like POS tags and is used to describe an aspect of $x_i$ whose importance might correlate with the model's robustness.", "We conjoin these properties with aspects of the explanation to render our calibration judgment.", "Figure 1 shows examples of properties such as whether a token is a proper noun (NNP).", "We now construct the feature set for the prediction made on $x$.", "For every property $v \in \mathcal{V}$, we extract a single feature $F(v, x, \alpha)$ by aggregating the attributions of the tokens associated with $v$: $F(v, x, \alpha) = \sum_{i=1}^{n} \sum_{v' \in V(x_i)} \mathbb{1}\{v' = v\}\,\alpha_i$, where $\mathbb{1}$ is the indicator function and $\alpha_i$ is the attribution value.", "An individual feature represents the total attribution with respect to property $v$ when the model is making the prediction for $x$.", "The
complete feature set $u$ for $x$, given as $u = \{F(v, x, \alpha)\}_{v \in \mathcal{V}}$, summarizes the model's rationales from the perspective of the properties in $\mathcal{V}$.", "We use the following properties for calibrating QA and NLI models.", "Segments of the Input (QA and NLI): In both of our tasks, an input sequence can naturally be decomposed into two parts, namely a question and a context (QA) or a premise and a hypothesis (NLI).", "We assign each token the corresponding segment name, which yields features like Attributions to Question.", "POS Tags (QA and NLI): We use tags from the English Penn Treebank (Marcus et al., 1993) to implement a group of properties.", "We hypothesize that tokens with some specific tags should be more important, like proper nouns in the questions of the QA tasks.", "If a model fails to consider proper nouns of a QA pair, it is more likely to make incorrect predictions.", "Overlapping Words (NLI): Word overlap between a premise and a hypothesis strongly affects neural models' predictions (McCoy et al., 2019).", "We assign each token the Overlapping property if it appears in both the premise and the hypothesis, or Non-Overlapping otherwise.", "We also build compound properties by taking the Cartesian product of two or more groups.", "For example, we conjoin Segment and Pos-Tags, which yields higher-level features like Attributions to NNP in Question.", "Such a feature aggregates the attributions of tokens that are tagged with NNP and are also required to be in the question (marked with orange in Figure 1).", "We train the calibrator on a small number of samples in our target domain.", "Each sample is labeled by comparing the prediction of the original model against the ground truth.", "Using our feature set $\{F(v, x, \alpha)\}$, we learn a random forest classifier, shown to be effective for a similar data-limited setting in Kamath et al. (2020), to predict $t$ (whether the prediction is correct).", "This classifier returns a score, which overrides the model's original confidence score with respect to that prediction.", "In Section 4, we discuss several baselines for our approach.", "As we vary the features used by the model, all the other details of the classifier and setup remain the same.", "Our task setup involves transferring from a source domain/task A to a target domain/task B.", "Figure 2 shows the data conditions we operate in.", "Our primary experiments focus on using our features to either calibrate or selectively answer in the black box setting (right side of Figure 2).", "In this setting, we have a black box model trained on a source domain A and a small amount of data from the target domain B.", "Our task is to train a calibrator using data from domain B to identify instances where the model potentially fails on the large unseen test data in domain B.",
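The feature construction F(v, x, α) defined above is a grouped sum of attributions. Below is a minimal sketch, assuming the per-token attribution scores and property sets (segment, segment conjoined with POS tag, etc.) have already been computed; the example tokens, scores, and property names are hypothetical.

```python
from collections import defaultdict

def extract_calibration_features(tokens, attributions, properties):
    """Aggregate token attributions by property, as in F(v, x, alpha).

    tokens:       list of input tokens
    attributions: list of per-token attribution scores alpha_i
    properties:   list of sets, the properties V(x_i) assigned to each
                  token (e.g., {"question", "question+NNP"})
    Returns a dict mapping each property v to its total attribution.
    """
    assert len(tokens) == len(attributions) == len(properties)
    features = defaultdict(float)
    for alpha_i, props in zip(attributions, properties):
        for v in props:
            features[v] += alpha_i
    return dict(features)

tokens = ["What", "stadium", "hosted", "the", "game", "?"]
attributions = [0.05, 0.40, 0.20, 0.01, 0.15, 0.02]
properties = [{"question", "question+WP"}, {"question", "question+NN"},
              {"question", "question+VBD"}, {"question", "question+DT"},
              {"question", "question+NN"}, {"question", "question+."}]
print(extract_calibration_features(tokens, attributions, properties))
```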
"We contrast this black box setting with glass box settings (left side of Figure 2), where we directly have access to the model parameters and can fine-tune on domain B or train on B from scratch.", "English Question Answering We experiment with domain transfer from SQUAD (Rajpurkar et al., 2016) to three different settings: SQUAD-ADV (Jia and Liang, 2017), HOTPOTQA (Yang et al., 2018), and TRIVIAQA (Joshi et al., 2017).", "SQUAD-ADV is an adversarial setting that constructs adversarial QA examples from SQUAD by appending a distractor sentence to the end of each example's context.", "The added sentence contains a spurious answer and usually has high surface overlap with the question so as to fool the model.", "We use the ADDSENT setting from Jia and Liang (2017).", "Similar to SQUAD, HOTPOTQA also contains passages extracted from Wikipedia, but HOTPOTQA asks questions requiring multiple reasoning steps, although not all questions do (Chen and Durrett, 2019).", "TRIVIAQA is collected from Web snippets, which present a different distribution of questions and passages than SQUAD.", "For HOTPOTQA and TRIVIAQA, we directly use the pre-processed versions of the datasets from the MRQA Shared Task (Fisch et al., 2019).", "English NLI For the task of NLI, we transfer a model trained on MNLI (Williams et al., 2018) to MRPC (Dolan and Brockett, 2005) and QNLI (Wang et al., 2019), similar to the settings in Ma et al. (2019).", "QNLI contains question and context sentence pairs from SQUAD, and the task is to verify whether a sentence contains the answer to the paired question.", "MRPC is a paraphrase detection dataset presenting a binary classification task to decide whether two sentences are paraphrases of one another.", "Note that generalization from MNLI to QNLI or MRPC not only introduces a shift in the distribution of the input text, but also in the nature of the task itself, since QNLI and MRPC aren't strictly NLI tasks despite sharing some similarity.", "Both are binary classification tasks rather than three-way.", "Baselines We compare our calibrator against existing baselines as well as our own ablations.", "KAMATH (Kamath et al., 2020) (for QA only) is a baseline initially proposed to distinguish out-of-distribution data points from in-domain data points in the selective QA setting (see Section 5), but it can also be applied in our settings.", "It trains a random forest classifier to learn whether a model's prediction is correct based on several heuristic features, including the probabilities of the top 5 predictions, the length of the context, and the length of the predicted answer.", "Since we are calibrating black box models, we do not use the dropout-based features of Kamath et al.
(2020).", "CLSPROBCAL (for NLI only) uses more detailed information than MAXPROB : it uses the predicted probability for Entailment , Contradiction , and Neutral as the features for training a calibrator instead of only using the maximum probability.", "BOWPROP adds a set of heuristic property features on top of the KAMATH method.", "These are the same as the features used by the full model excluding the explanations .", "We use this baseline to give a baseline for using general shape features on the inputs not paired with explanations.", "Implementation of Our Method We refer our explanation-based calibration method using explanations produced by LIME and SHAP as LIMECAL and SHAPCAL respectively.", "We note that these methods also take advantages of the bag-of-word features in BOWPROP .", "For QA, the property space is the union of low-level Segment and Segment Pos-Tags .", "For NLI, we use the union of Segment and Segment Pos-Tags Overlapping Words to label the tokens.", "Setup We train a ROBERTA (Liu et al., 2019) QA model on SQUAD as the base model, which achieves 85.5 exact match and 92.2 F1 score.", "For the experiments on HOTPOTQA and TRIVIAQA, we split the dev set, sample 500 examples for training, and leave the rest for testing.", "4 For experiments on SQUAD-ADV , we remove the unmodified data points in the ADD-SENT setting and also use 500 examples for training.", "For the experiments across all pairs, we randomly generate the splits, test the methods 20 times, and average the results to alleviate the influence of randomness.", "Metrics In addition to calibration accuracy ( ACC ) that measures the accuracy of the calibrator, we also use the area under coverage-F1 curve ( AUC ) to evaluate the calibration performance for QA tasks in particular.", "The coverage-F1 curve (Figure 3) plots the average F1 score of the model achieved when the model only chooses to answer varying fractions (coverage) of the examples ranked by the calibrator-produced confidence.", "A better calibrator should assign higher scores to the questions that the models are sure of, thus resulting in higher area under the curve; note that AUC of 100 is impossible since the F1 is always bounded by the base model when every question is answered.", "We additionally report the average scores when answering the top 25%, 50%, and 75% questions, for a more intuitive comparison of the performance.", "Results Table 1 summarizes the results for QA.", "First, we show that explanations are helpful for calibrating black box QA models out-of-domain.", "Our method using LIME substantially improves the calibration AUC compared to KAMATH by 7.1, 2.1 and 1.4 on SQUAD-ADV , TRIVIAQA, and HOTPOTQA, respectively.", "In particular, LIMECAL achieves an average F1 score of 92.3 at a coverage of 25% on SQUAD-ADV , close to the performance of the base model on original SQUAD examples.", "Our explanation-based approach is effective at identifying the examples that are robust with respect to the adversarial attacks.", "PROP performs on par with or only slightly better than KAMATH .", "These results show that connecting explanations with annotations is a path towards building better calibrators.", "Finally, we compare the performance of our methods based on different explanation techniques.", "LIMECAL slightly outperforms SHAPCAL in all three settings.", "As discussed in Section 2.1, SHAP assigns high instance weights to those perturbations with few activated features.", "While such a choice of the kernel is effective in tasks involving 
tabular data (Lundberg and Lee, 2017), it might not be appropriate for the task of QA, where such perturbations may not yield meaningful examples.", "Setup Our base NLI model is a ROBERTA classification model trained on MNLI, which achieves 87.7% accuracy on the development set.", "We collapse contradiction and neutral into non-entailment when evaluating on QNLI and MRPC.", "We continue using random forests as the calibrator model.", "We evaluate the generalization performance on the development sets of QNLI and MRPC.", "Similar to the settings in QA, we use 500 examples to train the calibrator and test on the rest, for each of 20 random trials.", "Metrics Because QNLI and MRPC are binary classification tasks, predicting whether a model is correct (our calibration setting) is equivalent to the original prediction task.", "We can therefore measure calibrator performance with standard classification accuracy and AUC.", "Results We show the results on the NLI tasks in Table 2.", "The base MNLI model utterly fails when transferring to QNLI and MRPC, achieving accuracies of 49% and 57%, respectively, whereas the majority class is 50% (QNLI) and 65% (MRPC).", "With heuristic annotations, BOWPROP is able to solve 74% of the QNLI instances and 72% of the MRPC instances.", "Our heuristic itself is thus strong for QNLI compared to MAXPROB.", "LIMECAL is still the best in both settings, moderately improving accuracy by 1% and 2% over BOWPROP using explanations.", "The results on the NLI tasks suggest our method can still learn useful signals indicating model reliability even if the underlying tasks are very different.", "We now ask whether a calibrator itself generalizes across transfer settings.", "Is the knowledge of a calibrator learned on some initial domain transfer setting, e.g., SQuAD to TRIVIAQA, generalizable to another transfer setting, e.g., SQuAD to HOTPOTQA?", "This would enable us to take our basic QA model and a calibrator and apply that pair of models in a new domain without doing any new training or adaptation.", "We explore this hypothesis on QA.", "For comparison, we also give the performance of a ROBERTA model first finetuned on SQUAD and then finetuned on domain A (ADAPT, Figure 2).", "ADAPT requires access to the model architecture and is an unfair comparison for the other approaches.", "We show the results in Table 3.", "None of the approaches generalize between SQUAD-ADV and the other domains (whether trained or tested on SQUAD-ADV), which is unsurprising given the synthetic and very specific nature of SQUAD-ADV.", "Between TRIVIAQA and HOTPOTQA, both the LIMECAL and KAMATH calibrators trained on one domain can generalize to the other, even though BOWPROP is not effective.", "(We also tested the hypothesis on the NLI-paraphrase transfer, but did not see evidence of transferability there, possibly because those tasks fundamentally differ.)", "Furthermore, our LIMECAL exhibits a stronger capability of generalization compared to KAMATH.", "We then compare LIMECAL against ADAPT.", "ADAPT does not always work well, as has also been discussed in Kamath et al.
(2020) and Talmor and Berant (2019).", "ADAPT leads to a huge performance drop when trained on HOTPOTQA and tested on TRIVIAQA, whereas LIMECAL is the best in this setting.", "From TRIVIAQA to HOTPOTQA, ADAPT works well, but LIMECAL is almost as effective.", "Overall, our calibrator shows strong transferability across the two realistic QA tasks.", "We believe this can be attributed to the features used in the explanation-based calibrator.", "Although the task is different, the calibrator can rely on some common rules to decide the reliability of a prediction.", "Impacts of Training Data Size Calibrating a model for a new domain becomes cumbersome if large amounts of annotated data are necessary.", "We experiment with varying the amount of training data the calibrator is exposed to, with results shown in Table 4.", "Our explanation-based calibrator is still the best in every setting with as few as 100 examples.", "With 100 examples, KAMATH and BOWPROP perform worse than the MAXPROB baseline on TRIVIAQA and HOTPOTQA, indicating that more data is needed to learn to use their features.", "Throughout this work, we have assumed a black box model that cannot be fine-tuned on a new domain.", "In this section, we compare calibration-based approaches with glass-box methods that require access to the model architectures and parameters.", "We evaluate two glass-box methods in two different settings (Figure 2): (1) finetuning a base ROBERTA model (FINETUNEROBERTA), which needs access to the model's architecture but not its parameters; and (2) finetuning a base QA/NLI model, which requires both the model architecture and its parameters.", "All these models are finetuned with 500 examples, the same as LIMECAL.", "We also give the performance of a model trained with the full in-domain training data for the different tasks as a reference (INDOMAIN QA/NLI).", "We present the model performance (measured with Exact Match and F1 for QA, and Acc for NLI) and the calibration results in Table 5.", "Note that there are no calibrators for the glass box methods, so we only report AUC scores for calibration performance.", "On the QA tasks, the limited training data is not sufficient for successfully finetuning a ROBERTA model.", "Consequently, FINETUNEROBERTA does not achieve credible performance.", "Finetuning a base QA model greatly improves the performance, surpassing LIMECAL on SQUAD-ADV and HOTPOTQA.", "However, we still find that on TRIVIAQA, LIMECAL slightly outperforms ADAPT.", "This is a surprising result, and shows that explanation-based calibrators can still be beneficial in some scenarios, even if we have full access to the model.", "On the NLI tasks, which are substantially easier than QA, finetuning either a ROBERTA model or a base NLI model can reach an accuracy of roughly 80%.", "Our explanation-based approach largely lags the glass-box methods, likely because the base NLI model utterly fails on QNLI (50.5% accuracy) and MRPC (55.0% accuracy) and does not grant much support for the two tasks.", "Nonetheless, the results on NLI still support our main hypothesis: explanations can be useful for calibration.", "Our results so far have shown that a calibrator can use explanations to help make binary judgments of correctness for a model running in a new domain.", "We now test our model on the selective QA setting from Kamath et al.
(2020) (Figure 2).", "This experiment allows us to more directly compare with prior work and see performance in a setting where in-domain (ID) and out-of-domain (OOD) examples are mixed together.", "Given a QA model trained on source domain data, the goal of selective QA is to train a calibrator on a mixture of ID source data and known OOD data, and to test whether the calibrator works well on a mixture of ID data and an unknown OOD distribution.", "Table 6: Area under the coverage-F1 curve in the Selective QA setting; rows give the known OOD domain and method, columns the unknown OOD domain (SQ-ADV / TRIVIA / HOTPOT).", "Known SQ-ADV: MAXPROB 85.0 / 88.7 / 87.5; KAMATH 88.8 / 89.5 / 88.9; BOWPROP 91.5 / 90.6 / 89.0; LIMECAL 94.5 / 91.7 / 91.9.", "Known TRIVIA: MAXPROB 85.0 / 88.7 / 87.6; KAMATH 85.6 / 91.9 / 88.7; BOWPROP 85.3 / 92.1 / 89.9; LIMECAL 90.9 / 92.5 / 92.1.", "Known HOTPOT: MAXPROB 85.0 / 88.7 / 87.6; KAMATH 86.1 / 91.4 / 89.4; BOWPROP 85.1 / 91.8 / 91.6; LIMECAL 91.7 / 92.3 / 92.5.", "We follow a similar experimental setup to Kamath et al. (2020).", "The detailed setting is included in the supplementary material.", "Results As shown in Table 6, the trends are similar to the main QA results.", "Our explanation-based approach, LIMECAL, is consistently the best among all settings.", "We point out that our approach outperforms KAMATH especially in settings that involve SQUAD-ADV as the known or unknown OOD distribution.", "This can be attributed to the similarity between SQUAD and SQUAD-ADV, which cannot be well distinguished with the features used in KAMATH (context length, answer length, etc.).", "The strong performance of our explanation-based approach in the selective QA setting further verifies our assumption: explanations can be useful and effective for calibrating black box models.", "Our approach is inspired by recent work on the simulation test (Doshi-Velez and Kim, 2017), i.e., whether humans can simulate a model's prediction on an input example based on the explanations.", "Simulation tests have been carried out in various tasks (Ribeiro et al., 2018; Nguyen, 2018; Chandrasekaran et al., 2018; Hase and Bansal, 2020) and give positive results in some tasks (Hase and Bansal, 2020).", "Our approach tries to mimic the process that humans would use to judge a model's prediction by combining heuristics with attributions, instead of having humans actually do the task.", "Estimating a model's reliability has also been studied for tasks like machine translation (Bojar et al., 2017), question answering (Kamath et al., 2020; Zhang et al., 2021), constituency parsing (Charniak and Johnson, 2005; Fossum and Knight, 2009) and semantic parsing (Yin and Neubig, 2019).", "The work of Rajani and Mooney (2018) in VQA is most relevant to ours; they also use heuristic features, but we further conjoin heuristics with model attributions.", "Our meta-feature set is derived from the presence of certain properties, which is similar to the concepts used in concept-based explanations (Ghorbani et al., 2019; Mu and Andreas, 2020), but we focus on using them for estimating model performance rather than explaining a prediction.", "Our work addresses the problem of calibration (Guo et al., 2017; Desai and Durrett, 2020), which is frequently framed in terms of models' output probabilities.", "Past work has attempted to tackle this problem using temperature scaling (Guo et al., 2017) or label smoothing (Pereyra et al., 2017), which adjust the confidence scores for all predictions.", "In contrast, we approach this issue by applying a classifier leveraging instance-specific explanations.", "Past work on generalizing to out-of-domain distributions in NLP largely focuses on using unlabeled data from the target domain and requires finetuning a model (Ma et al., 2019; Ramponi and Plank, 2020; Guo et al., 2020),
whereas we improve the OOD performance of strictly black-box models.", "Limitations Despite showing promising results in improving model generalization performance, our attribution-based approach does suffer from an intensive computation cost.", "Using either LIME or SHAP to generate attributions requires running inference on a fair number of perturbations when the input size is large (see the Appendix for details), which limits our method's applicability.", "But this doesn't undermine the main contribution of this paper, answering the question in the title, and our approach is still applicable as-is in scenarios where we pay for access to the model but not per query.", "Conclusion We have explored whether model attributions can be useful for calibrating black box models.", "The answer is yes.", "By connecting attributions with human heuristics, we improve model generalization performance on new domains and tasks.", "Besides, our calibrator exhibits promising generalization performance in some settings (cross-domain generalization and selective QA).", "Thanks to the anonymous reviewers for their helpful feedback.", "This work was partially supported by NSF Grant IIS-1814522, a gift from Salesforce Inc, and a gift from Amazon." ]
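Putting the pieces together, the calibration pipeline described in the paper above amounts to fitting a random forest on the explanation-based features and ranking test examples by its confidence. The sketch below uses random stand-in data; the feature dimensionality, forest size, and the discrete approximation of the coverage-F1 AUC are assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_calibrator(features, correct):
    """features: (n, d) explanation-based feature matrix u(x, alpha);
    correct: binary labels t indicating whether the base model was right."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features, correct)
    return clf

def coverage_f1_auc(confidences, f1_scores):
    """Discrete approximation of the area under the coverage-F1 curve:
    answer the top-k examples ranked by confidence, compute the average
    F1 at each coverage level k/n, then average over all levels."""
    order = np.argsort(-np.asarray(confidences))
    sorted_f1 = np.asarray(f1_scores, dtype=float)[order]
    running_avg = np.cumsum(sorted_f1) / np.arange(1, len(sorted_f1) + 1)
    return running_avg.mean()

# Hypothetical usage on 500 target-domain examples:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))     # stand-in for the feature set u(x, alpha)
t = rng.integers(0, 2, size=500)   # stand-in correctness labels
calibrator = train_calibrator(X, t)
confidence = calibrator.predict_proba(X)[:, 1]
print(coverage_f1_auc(confidence, t))
```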
[ "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "abstain", "other", "abstain", "abstain", "objective", "method", "abstain", "method", "objective", "method", "result", "abstain", "method", "method", "abstain", "abstain", "method", "result", "method", "result", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "other", "method", "method", "abstain", "other", "objective", "abstain", "result", "result", "objective", "objective", "abstain", "objective", "abstain", "other", "other" ]
[ "Recent work has identified properties of pretrained self-attention models that mirror those of dependency parse structures.", "In particular, some self-attention heads correspond well to individual dependency types.", "Inspired by these developments, we propose a new competitive mechanism that encourages these attention heads to model different dependency relations.", "We introduce a new model, the Unsupervised Dependency Graph Network (UDGN), that can induce dependency structures from raw corpora and the masked language modeling task.", "Experiment results show that UDGN achieves very strong unsupervised dependency parsing performance without gold POS tags and any other external information.", "The competitive gated heads show a strong correlation with human-annotated dependency types.", "Furthermore, the UDGN can also achieve competitive performance on masked language modeling and sentence textual similarity tasks 1 .", "The goal of unsupervised dependency parsing is to induce dependency grammar from corpora that don't have annotated parse trees (Marecek, 2016).", "Although the task is difficult, one advantage of unsupervised methods is that they can leverage vast amount of unannotated raw corpus (Han et al., 2020).", "Thus, adapting the task into a pretraining framework is increasingly tempting.", "The induced dependency trees can also help solve other NLP problems such as unsupervised discourse parsing (Nishida and Nakayama, 2020), aspect-based sentiment analysis (Dai et al., 2021) and intent discovery (Liu et al., 2021).", "Furthermore, the task can also be used as a probe to verify cognitive theories for human language acquistition (Yang et al., 2020; 1 Our code is publicly available at https://github.", "Pretrained Language Models (PLMs) have become the foundation of modern natural language processing in the last few years (Bommasani et al., 2021).", "They dominate the most if not all NLP tasks.", "But Recent works show that, beyond pretraining big models on large-scale corpora, deep learning methods can improve performance by increasing models' awareness of syntactic information (Kuncoro et al., 2020).", "These methods either include known structural information as input to the model (Sundararaman et al., 2019; Bai et al., 2021), or incorporate structural prediction tasks into the training process (Wang et al., 2019a).", "However, these attempts require access to large datasets with supervised parses, which may be complicated and expensive.", "Recent work also identified properties of pretrained self-attention models that mirror those of dependency parse structures (Htut et al., 2019; Hewitt 4767 and Manning, 2019; Jawahar et al., 2019).", "StructFormer (Shen et al., 2020) shows that a transformer-based model can induce a good dependency structure.", "The belief that linguistic structure may be embedded in these models is of interest to the community.", "Furthermore, Dai et al. (2021) shows that the induced trees from finetuned RoBERTa outperform parser-provided trees on aspect-based sentiment analysis tasks.", "This result brings interest to study task-specific structures.", "From this perspective, the unsupervised acquisition of dependency structure from raw data or downstream tasks appears important and feasible.", "Traditionally, dependency grammars consider the dependency types (a.k.a. syntactic functions) as primitive and then derive the dependency graph (Debusmann, 2000).", "Every head-dependent dependency bears a syntactic function (Mel'cuk et al., 1988).", "Htut et al. 
(2019) show that some attention heads in BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) track individual dependency types.", "In other words, these heads model different syntactic functions.", "Inspired by this observation and by syntactic functions, we introduce competitive gated heads to model different syntactic functions and the process of selecting the proper syntactic function for each edge.", "These heads include two key components: a set of gated heads that model different information propagation processes between tokens; and a competitive controller that selects the most suitable gated head for each pair of tokens.", "Building on these components, we propose a novel architecture, the Unsupervised Dependency Graph Network (UDGN).", "As shown in Figure 1, the UDGN is composed of two networks: a parser that computes the dependency head distribution $p_i$ for each word $w_i$ in the input sentence and then converts it to a matrix of edge probabilities $m_{ij}$ that approximates an undirected dependency graph; and a Dependency Graph Network (DGN) that uses the edge probabilities $\{m_{ij}\}$ and competitive gated heads to propagate information between words to compute a contextualized embedding $h_i$ for each word $w_i$.", "While training with masked language modeling or other objectives, the gradient can flow through the DGN to the parser network through its dependence on $m_{ij}$.", "As a result, UDGN can induce a dependency grammar while relying solely on the masked language modeling objective.", "In the experiment section, we first train the UDGN with masked language modeling, then evaluate it on unsupervised dependency parsing.", "Our experimental results show that UDGN can: 1) achieve very strong unsupervised parsing results among models that don't have access to extra annotations (including POS tags); 2) learn attention heads that are strongly correlated with human-annotated dependency types; 3) achieve competitive performance on language modeling tasks.", "We also finetune the pretrained UDGN on Semantic Textual Similarity (STS) tasks.", "Our experiments show that UDGN outperforms a Transformer baseline trained on the same corpus.", "Unsupervised Dependency Parsing Unsupervised dependency parsing is a long-standing task in computational linguistics.", "The Dependency Model with Valence (DMV; Klein and Manning 2004) is the basis of several unsupervised dependency parsing methods (Daumé III, 2009; Gillenwater et al., 2010).", "Jiang et al. (2016) update the method using neural networks to predict grammar rule probabilities.", "While previous methods mostly require additional Part-of-Speech (POS) information, Spitkovsky et al. (2011) tackled the issue by performing clustering based on word context information and then assigning the cluster ID to each word as its tag.", "He et al.
(2018) incorporate an invertible neural network into DMV model to jointly model dependency grammar and word embeddings.", "Recently, NL-PCFG (Zhu et al., 2020) and NBL-PCFG (Yang et al., 2021) combined neural network and L-PCFG to achieve good performance in a joint unsupervised dependency and constituency parsing setting.", "StructFormer (Shen et al., 2020) proposes a joint constituency and dependency parser and uses the dependency distribution to regularize the self-attention heads in the transformer model.", "This joint parser-language model framework can induce grammar from masked language modeling tasks.", "The UDGN's architecture is similar to StructFormer, both models include a parser and masked language model.", "Our model, however, has three major differences: 1) it uses competitive gated heads to improve models performance on grammar induction; 2) it uses a neural head selective parser that can produce both projective and nonprojective dependency trees, whereas the distance parser in StructFormer can only produce projective 4768 trees; 3) it uses a simplified method to generate an undirected dependency mask.", "Transformers, Graph Neural Networks and Dependency Graphs In many Transformer-based models, attention masks are often used to limit the input tokens that a particular timestep can attend over.", "In Yang et al. (2019), for example, a mask derived from the permutation of inputs is used to induce a factorization over the tokens so that the resulting model is a valid probabilistic model.", "This attention mask can be viewed as an adjacency matrix over a graph whose nodes are the input tokens.", "From this perspective, Transformers are a form of Graph Neural Network (Scarselli et al., 2008) specifically, a Graph Attention Network (GAT; Velickovic et al. 2017), as it attends over the features of its neighbors.", "Several works have made this connection, and integrated dependency structures into transformers (Ahmad et al., 2021; Wang et al., 2019b; Tang et al., 2020).", "Results from Omote et al. (2019) and Deguchi et al. (2019) suggest that embedding these structures can improve translation models.", "However, these dependency parses may not always be present to be used as input to the model.", "Strubell et al. (2018) trains the self-attention to attend the syntactic governor (head) of a particular token, resulting in a model that does not require dependency structure as input during inference time.", "We take a further step in our work and attempt to learn these structures in an unsupervised fashion from the MLM objective.", "Differentiable Structured Prediction While the head selection is a good approximation of a tree structure, there are methods to obtain a relaxed adjacency matrix as the output of the parser.", "Previous work have used such methods for predicting structure.", "Koo et al. (2007) proposed using the Kirchoff matrix tree theorem for dependency parsing.", "They explain how the marginals of the edge potentials are computed, and these marginals have properties similar to a tree adjacency matrix (sum over the marginals are equal to N 1 for example, where N is the length of the sentence).", "Eisner (2016) describes how backpropagation can be used to compute marginals of some structured prediction algorithm.", "We also tried using the Kirchhoff method to normalize our dependency distributions in Appendix A.3.", "Corro and Titov (2018) uses similar notions but relaxes projective trees using Gumbel-softmax.", "Kim et al. 
(2017) proposed a structured form of attention and show that they are useful for certain sequence-to-sequence tasks.", "Mensch and Blondel (2018) gives a general theoretical treatment for these types of relaxations, while Paulus et al. (2020) gives a practical treatment of possible applications for these methods.", "As shown in Figure 2, the parser computes a dependency head distribution for each token and then converts it to a soft dependency mask m ij .", "The DGN takes m ij and the sentence as input and uses a competitive mechanism to propagate information between tokens.", "We use a simplified version of the Dependency Neural Selection parser (DENSE ; Zhang et al. 2016) that only predicts unlabelled dependency relations.", "The parser takes the sentence s = w 1 w 2 ...w T as input, and, for each token w i , it produces a distribution p i over all tokens in the sentence, resulting in a T T weight matrix.", "The parser first maps the sequence of tokens w 1 w 2 ...w T into a sequence of embeddings [ x 1 , x 2 , ..., x T ] .", "Then the word embeddings are fed into a stack of a bidirectional LSTM (BiLSTM): h i = BiLSTM( x i ) (1) where h i is the output of the BiLSTM at i -th timestep.", "Linear transforms are applied to the output of the BiLSTM to extract head and dependent information.", "where p ij is the probability that w i depends on w j , D is the dimension of hidden states.", "During the inference for parsing, the Chu-Liu/Edmonds' algorithm (Chu and Liu, 1965b) is used to extract the most likely directed dependency graph from the matrix p .", "Given the dependency probabilities, StructFormer (Shen et al., 2020) uses a weighted sum of matrix p and p (cid:62) to produce a mask for self-attention layers in the transformer.", "We found that simply using the adjacency matrix of the undirected dependency graph provides better parsing results and perplexities.", "However, simply using the sum of the matrix and its transpose to create a symmetric weight matrix does not ensure that the attention mask has values < 1.", "When p ij = 1 and p ji = 1 , for instance, the mask violates the constraints of a dependency mask.", "Thus, we treat p ij and p ji as parameters for independent Bernoulli variables, and we compute the probability that either w i depends on w j or w j depends on w i .", "To better induce and model the dependency relations, we propose a new Dependency Graph Network (DGN).", "One DGN layer includes several gated heads and a competitive controller.", "A gated head can process and propagate information from one node to another.", "Different heads can learn to process and propagate different types of information.", "The competitive controller is designed to select the correct head to propagate information between a specific pair of nodes.", "We take inspiration from the linguistic theory that dependencies are associated with different syntactic functions.", "These functions can appear as labels, e.g. 
ATTR (attribute), COMP-P (complement of preposition), and COMP-TO (complement of to).", "However, DGN learns these functions from training tasks, which in our experiments is the masked language model objective.", "Since these objectives tend to be statistical in nature, these functions may not be correlated with ground truth labels given by human experts.", "Inside each layer, the input vector h l 1 i is first projected into N groups of vectors, where N is the number of heads.", "Each group contains four different vectors, namely, query q , key k , value v and gate g : q ik k ik v ik g ik = W head k h l 1 i + b head k (7) Gated Head To model the information propagation from node j to node i , we proposed a gated head: c ijk = ( v jk ) (cid:12) sigmoid( g ik ) (8) where is a non-linear activation function, and gates sigmoid( g ) allows the i -th token to filter the extracted information.", "We also found that the gate effectively improves the model's ability to induce 4770 Figure 3: Competitive Gated Heads.", "latent dependency structures that are coherent to human-annotated trees.", "The activation function can be chosen from a wide variety of functions, including the identity function, tanh, ReLU, and ELU, etc.", "In our experiment, we found that tanh function provides the best overall performance.", "This is probably due to two reasons:", "a) tanh function provides a bounded output (between -1 and 1), and", "b) gates and head weights are more effective while controlling a bounded value.", "Competitive Controller Lamb et al. (2021) proposed the idea of using a competition method to encourage heads to specialize over training iterations to learn different functions.", "This idea is coherent with our intuition different heads should model different dependency relations.", "In UDGN, a competitive controller is designed to select a head for each pair of nodes ( i, j ) .", "However discrete assignment is hard to optimize, we replace it with a soft relaxation: e ijk = q ik k jk D (9) a ijk = softmax k ( e ijk ) (10) where a ijk is the probability that the k -th head is assigned to propagate information from the j th token to the i -th token.", "To obtain the actual head weights, we multiply the probability of edge existence with the probability of choosing a specific attention head: a ijk = a ijk m ij (11) where a ijk is the weight from the node j to the node i for k -th attention head.", "Relative Position Bias Transformer models use positional encoding to represent the absolute position for each token.", "In DGN, we only model whether the token is before or after the current token.", "The motivating intuition is the association of different heads with different directions.", "In equation 10, we can introduce a relative position bias: a ijk = softmax k ( e ijk + b lrk ) (12) b lr k = (cid:40) b lk , i > j b rk , i < j (13) where b lk and b rk are trainable parameters.", "The relative position bias allows the attention head k to prioritize forward or backward directions.", "A mere forward and backward differentiation may seem weak compared to other parameterizations of positional encoding (Vaswani et al., 2017; Shaw et al., 2018), but in conjunction with the dependency constraints, this method is a more effective way to model the relative position in a tree structure.", "As shown in Table 4, the relative position bias achieves stronger masked language modeling and parsing performance than positional encoding.", "At the end, a matrix multiplication is used to aggregate information from different 
positions.", "Then, the output o from different heads are concatenated, and then projected back to the hidden state space with a linear layer.", "where h li is the output of the l -th gated self attention layers.", "The shared hidden state space can be seen as the shared global workspace (Goyal et al., 2021) for different independent mechanisms (heads).", "Language Modeling tasks evaluate the model's general ability to model different semantic and syntactic phenomena (e.g., words co-occurrence, verb-subject agreement, etc.).", "The performance of MLM is evaluated by measuring perplexity on masked words.", "We perform experiments on two corpora: the Penn TreeBank (PTB) and Brown Laboratory for Linguistic Information Processing (BLLIP).", "In this experiment, we randomly replace each token with a mask token <mask> , such that the model is required to predict the original token.", "But we never replace <unk> token.", "PTB The Penn Treebank (Marcus et al., 1993) is a standard dataset for language modeling (Mikolov et al., 2012) and unsupervised constituency parsing (Shen et al., 2018; Kim et al., 2019).", "It contains 1M words (2499 stories) from Wall Street Journal.", "Following the setting proposed in Shen et al. (2020), we preprocess the Penn Treebank dataset by removing all punctuations, lower case all letters, and replaces low frequency tokens (< 5) with <unk> .", "The preprocessing results in a vocabulary size of 10798 (including <unk> , <pad> and <mask> ).", "BLLIP The Brown Laboratory for Linguistic Information Processing dataset is a large corpus, parsed in the same style as the PTB dataset.", "It contains 24 million sentences from Wall Street Journal.", "We perform experiments on four subsets of BLLIP: BLLIP-XS (40k sentences, 1M tokens), BLLIP-SM (200K sentences, 5M tokens), BLLIP-MD (600K sentences, 14M tokens), and BLLIP-LG (2M sentences, 42M tokens).", "Following the same setting proposed in Hu et al. 
(2020) for sentence selection, each subset is a superset of smaller subsets.", "Models trained on different subsets are tested on a shared held-out test set (20k sentences, 500k tokens).", "We use a shared vocabulary for all splits to make the mask language modeling and parsing results comparable.", "Like the PTB dataset, we preprocess the BLLIP dataset by removing all Methods DDA UDA DMV (Klein and Manning, 2004) 35.8 E-DMV (Headden III et al., 2009) 38.2 UR-A E-DMV (Tu and Honavar, 2012) 46.1 Neural E-DMV (Jiang et al., 2016) 42.7 Gaussian DMV (He et al., 2018) 43.1 INP (He et al., 2018) 47.9 NL-PCFGs (Zhu et al., 2020) 40.5 55.9 NBL-PCFGs (Yang et al., 2021) 39.1 56.1 StructFormer (Shen et al., 2020) 46.2 61.6 UDGN 49.9 61.8 Table 2: Dependency Parsing Results on WSJ test set without gold POS tags.", "punctuations and lower case letters.", "The shared vocabulary is obtained by counting word frequencies on the BLLIP-LG dataset and selecting the words that appear more than 27 times.", "The resulting vocabulary size is 30232 (including <unk> , <pad> and <mask> ), and covers more than 98% tokens in BLLIP-LG split.", "The mask rate is 30% when training on both corpora.", "In Section A.4, we further explore the relationship between mask rate and parsing results.", "Other hyperparameters are tuned separately for each model and dataset.", "Details are further described in Section A.1.", "Table 1 shows The masked language model results.", "UDGN outperforms the baselines on smaller datasets (PTB, BLLIP-SM), but underperforms against baselines trained on large datasets (BLLIP-MD, BLLIP-LG).", "However, in Section 4.5, we find that the UDGN pretrained on BLLIP-LG dataset can achieve stronger performance when finetuned on a downstream task.", "This result may suggest that our model learns more generic contextual embeddings.", "Following previous research (Shen et al., 2020), we use the model trained on the PTB training set (section 0-20, no punctuations) and test its parsing accuracy on section 23 of the PTB corpus.", "Punctuations are ignored during the evaluation.", "We convert the human-annotated constituency trees in the PTB test set (Marcus et al., 1993) to dependency trees with Stanford CoreNLP (Manning et al., 2014) and use the Directed Dependency Accuracy (DDA) as our metric.", "To derive valid trees from the attention 4772 Models prep pobj det compound nsubj amod dobj aux UDGN 0.65(0.12) 0.60(0.11) 0.68(0.15) 0.42(0.04) 0.50(0.06) 0.39(0.07) 0.39(0.07) 0.62(0.10) StructFormer 0.39(0.05) 0.38(0.07) 0.57(0.03) 0.33(0.01) 0.25(0.06) 0.26(0.01) 0.22(0.05) 0.23(0.04) Transformer 0.43(0.00) 0.46(0.03) 0.46(0.12) 0.30(0.01) 0.39(0.15) 0.26(0.02) 0.28(0.01) 0.30(0.10) Table 3: The pearson correlation coefficients between most frequent dependency types and their most correlated head.", "mask, we use the Chu-Liu (Chu and Liu, 1965b) (or Edmonds' (Edmonds, 1967)) algorithm to obtain the maximum directed spanning tree.", "Table 2 shows that our model outperforms baseline models.", "This result suggests that, given our minimum inductive bias (a token must attach to another, but the graph is not necessarily a tree), predicting missing tokens implicitly learns a good graph that correlates well with human-annotated dependency trees.", "In other words, some of the dependency relations proposed by linguists may correspond with efficient ways of propagating information through the sentence.", "Parsing examples of our model can be found in Appendix A.5.", "In this section, we test the correlation between heads 
and dependency types.", "We consider each dependency edge i j ( i depends on j ) in the ground truth structure as a data point.", "Given all the edges, we can obtain three sets of quantities: head probabilities A k = { a kji } and type values Y l = { y lij } .", "a kij is a real value between 0 and 1, represents the probability that heads k is used to model the information propagation from the child i to the parent j .", "Details about this value can be found at Equation 12.", "y lij is a binary value, represents whether the label l is assigned to edge i j .", "We can then compute Pearson Correlation Coeffi-cient (PCC) for every pair of A k and Y l across all ground truth edges { i j } : A k ,Y l = cov( A k , Y l ) A k Y l (16) where cov( ) is the covariance function, is the standard deviation of the respective variable.", "Hence, A k ,Y l measures the correlation between head k and dependency type l .", "A k ,Y l > 0 means that the model tends to use head k for propagating information from child to parent for dependency edges of the type l .", "Here, we only consider the Figure 4: Relationship between the parsing performance and the number of heads in each layer.", "information propagation from child to parent even though information can propagate in both directions in masked language models.", "In Appendix A.2, we also computed the PCC for the parent to child direction.", "Table 3 shows the PCC between the most frequent dependency types and their most correlated heads.", "We can observe that all three models have heads that are positively correlated to human-annotated dependency types.", "This result is coherent with the observation of Htut et al. (2019).", "Meanwhile, the UDGN achieves a significantly better correlation than the StructFormer and the Transformer.", "This confirms our intuition that competitive gated heads can better induce dependency types.", "Figure 4 shows the relation between the number of heads in each UDGN layer and the model's unsupervised parsing performance.", "Table 4 shows the model's performance when individual components are removed.", "We can observe that the number of heads has the most significant influence on unsupervised parsing performance.", "While this is only one head, the model fails to learn any meaningful structure.", "Then the parsing performance increase as the number of heads increase.", "And we observe 4773 Model MLM Argmax Chu-Liu PPL DDA UDA DDA UDA UDGN 59.3(0.5) 52.7(0.9) 58.3(0.7) 49.9(1.6) 61.8(0.9) Gates 69.5(1.9) 31.5(2.2) 40.7(0.3) 26.1(2.1) 48.9(0.5) Competition 73.6(3.1) 44.7(1.9) 54.4(1.9) 40.4(1.6) 56.6(2.1) relative pos bias 62.1(1.0) 51.6(1.6) 59.8(0.8) 47.4(2.6) 62.1(1.1) Table 4: The performance of UDGN after removing different components.", "marginal improvement after the number of heads reaching 8.", "The second most significant parsing performance decrease is caused by removing the gating mechanism.", "This change forces each head to always extract the same information from a given key node h j , regardless of the query node h i .", "This has a similar effect as the previous change, reducing the diversity of different functions that can be modeled by heads.", "These two observations may suggest that the diversity of information propagation function (multiple heads) is essential to induce a meaningful structure.", "The competitive controller also has an important influence on parsing performance.", "Its noncompetitive version is the sigmoid controller used in StructFormer.", "If we replace it with the noncompetitive 
"If we replace it with the noncompetitive controller, the DDA decreases to 44.7, which is similar to the result of StructFormer (46.2).", "Another interesting observation is that removing the relative position bias has the least influence on parsing and language modeling.", "This may suggest that the dependency structure already encodes certain positional information.", "More ablation experiment results can be found in Appendix A.3.", "In this experiment, the goal was to determine whether a better representation of semantics can be encoded if the model is constrained by structure.", "We pretrain a UDGN model on the BLLIP-LG dataset, and then finetune it on the STS-B (Cer et al., 2017) dataset.", "For a controlled experiment, we compare the results we attain with the previously mentioned Transformer model.", "We then evaluate the resulting classifier on the STS 2012-2016 tasks (Agirre et al., 2012, 2013, 2014, 2015, 2016), the SICK-Relatedness dataset (Marelli et al., 2014), and STS-B (Cer et al., 2017).", "We then report the Spearman correlation score for each dataset (the 'all' setting in Gao et al. 2021).", "We find that the UDGN model performs better overall than the Transformer model.", "While these are not state-of-the-art results for these tasks, our comparison aimed to examine the benefit of the UDGN model over the Transformer architecture.", "It is also interesting to notice that if the parameters of the parser are frozen during finetuning, the model gets worse performance.", "This result suggests that finetuning on STS forces pretrained language models to learn more task-oriented trees.", "Dai et al. (2021) observed similar results with finetuned RoBERTa on Aspect-Based Sentiment Analysis tasks.", "In this paper, we proposed the Unsupervised Dependency Graph Network (UDGN), a novel architecture to induce and accommodate dependency graphs in a transformer-like framework.", "The model is inspired by linguistic theories.", "Experimental results show that UDGN achieves state-of-the-art dependency grammar induction performance.", "The competitive gated heads show a strong correlation with human-annotated dependency types.", "We hope these interesting observations will build new connections between classic linguistic theories and modern neural network models.", "Another interesting future research direction is exploring how the newly proposed components can help large-scale pretrained language models." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "objective", "result", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "result", "result", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain" ]
[ "Most NLP datasets are not annotated with protected attributes such as gender, making it dif-ficult to measure classification bias using standard measures of fairness (e.g., equal opportu-nity).", "However, manually annotating a large dataset with a protected attribute is slow and expensive.", "Instead of annotating all the examples, can we annotate a subset of them and use that sample to estimate the bias?", "While it is possible to do so, the smaller this annotated sample is, the less certain we are that the estimate is close to the true bias.", "In this work, we propose using Bernstein bounds to represent this uncertainty about the bias estimate as a confidence interval.", "We provide empirical evidence that a 95% confidence interval derived this way consistently bounds the true bias.", "In quantifying this uncertainty, our method, which we call Bernstein-bounded unfairness , helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence to make either claim.", "Our findings suggest that the datasets currently used to measure specific biases are too small to conclusively identify bias except in the most egregious cases.", "For example, consider a coreference resolution system that is 5% more accurate on gender-stereotypical sentences to claim it is biased with 95% confidence, we need a bias-specific dataset that is 3.8 times larger than WinoBias, the largest available.", "NLP models have drawn criticism for capturing common social biases with respect to gender and race (Manzini et al., 2019; Garg et al., 2018; Ethayarajh, 2019).", "These biases can be quantified by applying some metric to an embedding space (Boluk-basi et al., 2016), but it is unclear how bias in text embeddings affects decisions made by downstream classifiers.", "This is because bias is not propagated deterministically: it is possible for minimally biased embeddings to be fed into a classifier that makes maximally biased predictions (and vice-versa).", "Moreover, recent work has found that WEAT (Caliskan et al., 2017), the most popular test of embedding bias, can be easily manipulated to claim that bias is present or absent (Ethayarajh et al., 2019a,b).", "Unlike measuring embedding bias, measuring classification bias is difficult: most NLP datasets are not annotated with protected attributes, precluding the use of standard fairness measures such as equal opportunity (Hardt et al., 2016).", "However, manually annotating a large dataset with a protected attribute is slow and expensive.", "In response to this problem, some have created small datasets annotated with a single protected attribute typically gender that is used to estimate bias on tasks such as co-reference resolution (Zhao et al., 2018a; Kiritchenko and Mohammad, 2018; Rudinger et al., 2018).", "This can be done by creating new data or annotating a subset of an existing dataset with the protected attribute.", "Intuitively, the less data we annotate, the less certain we are that our sample bias is close to the true bias (i.e., what we would get by annotating the entire population).", "We propose using Bernstein bounds to express our uncertainty about the sample bias as a confidence interval.", "First, we show that for standard fairness measures such as equal opportunity and equalized odds (Hardt et al., 2016), we can define a cost function such that the fairness measure is equal to the difference in expected cost incurred by the protected and unprotected groups.", "We treat the contribution of each annotated example to 
the bias as a random variable.", "Using Bernstein's inequality, we can thus estimate the probability that the true bias is within a constant t of our sample bias.", "Working backwards, we then derive a confidence interval for the true bias.", "Treating the genres of examples in MNLI (Williams et al., 2018) as the protected groups and the rate of annotator disagreement as the cost, we offer empirical evidence that our 95% confidence interval consistently bounds the true bias.", "In quantifying the uncertainty around bias estimates, Bernstein-bounded unfairness helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence to make either claim.", "For example, even when the sample bias is positive, it is possible that the true bias between groups is zero.", "Conversely, a sample bias of zero does not ensure the absence of bias at the population level.", "Moreover, our findings suggest that most bias-specific datasets in NLP are too small to conclusively identify bias except in the most egregious cases.", "For example, consider a co-reference resolution system that is 5% more accurate on gender-stereotypical sentences.", "For us to claim that this system is gender-biased with 95% confidence, we would need a bias-specific dataset that is 3.8 times larger than WinoBias (Zhao et al., 2018a), the largest such dataset currently available.", "Not only does the NLP community need more bias-specific datasets, but it also needs datasets that are much larger than the ones it currently has.", "In this section, we present the core idea of our paper: Bernstein-bounded unfairness (BBU).", "In practice, we estimate the bias which we call the groupwise disparity using a small sample of annotated data.", "Given that this estimate deviates from the true bias (i.e., at the population level), BBU helps us express our uncertainty about the bias estimate using a confidence interval.", "Definition 2.1.", "Let c : ( y , y ) [ 0 , C ] denote the cost of predicting y when the true label is y , where C R + is the maximum cost that can be incurred.", "Definition 2.2.", "Let f : x { 1 , 0 , + 1 } denote an annotation function that maps an example to the protected group A ( + 1), the unprotected group B ( 1), or neither (0).", "The groupwise disparity ( f ; c ) between groups A and B is the difference in expected cost incurred by each group: ( f ; c ) = E a [ c ( y a , y a )] E b [ c ( y b , y b )] Definition 2.3.", "The amortized disparity of x i is an estimate of the groupwise disparity based solely on x i .", "The expectation over all amortized disparities is the groupwise disparity: ( f ; c ) = E x [ ( x , f ; c )] .", "In practice, given n i.i.d. examples X , we can take a Monte Carlo estimate of ( f ; c ) by partitioning X into the protected and unprotected groups using f and then calculating the difference in mean cost.", "An equivalent way of framing this is that we have n random variables ( x 1 , f ; c ) ,..., ( x n , f ; c ) and we are taking their mean to estimate ( f ; c ) .", "Because examples X are i.i.d., so are the random variables.", "This means that we can use Bernstein's inequality to calculate the probability that the sample mean deviates from the true groupwise disparity by some constant t > 0. 
Where [ m , m ] bounds each random variable ( x i , f ; c ) and 2 = 1 n Var [ i ] denotes their variance, by Bernstein's inequality: Pr [ | | > t ] = Pr [ | E [ ] | > t ] 2exp (cid:32) nt 2 2 2 + 23 tm (cid:33) (1) Since the interval [ m , m ] is defined by the frequency of protected and unprotected examples (2.3), if we want it to strictly bound the random variable, it should be [ NC , NC ] , where N is the population size and we assume that there is at least one protected example.", "However, if this were the interval, (1) could be criticized for being too loose a bound and effectively useless.", "Therefore we assume that the proportion of the population that is protected and unprotected is bounded and that the lower bounds on these proportions are known.", "Definition 2.4.", "Let A , B denote the lower bounds of the proportion of the population that is protected and unprotected respectively.", "Let = min ( A , B ) .", "Note that the protected group does not necessarily have to be the smaller of the two groups in this setup.", "We set to be the lesser of A and B to reflect this: if the unprotected group is smaller than the protected group, then [ m , m ] will be bounded in [ C / B , C / B ] .", "Proposition 2.5.", "Under (2.4), [ m , m ] [ C , C ] for any random variable.", "Using this interval, (1) can be rewritten as: Pr [ | | > t ] 2exp (cid:32) nt 2 2 2 + 2 C 3 t (cid:33) (2) Proposition 2.6.", "For a given confidence [ 0 , 1 ) that the true groupwise disparity falls in the interval [ t , + t ] , we can derive t R + as follows: t = B + (cid:113) B 2 8 n 2 log (cid:2) 12 ( 1 ) (cid:3) 2 n where B = 2 C 3 log (cid:20) 1 2 ( 1 ) (cid:21) (3) This can be derived by rearranging (2) after setting both sides to be equal and then applying the quadratic formula to find the solution to t .", "Note that the width of the confidence interval grows as:", "(a) the desired confidence increases;", "(b) the sample size n decreases;", "(c) decreases.", "To our knowledge, Bernstein bounds are the tightest that can be applied here, as they consider the variance of the random variables.", "We also validated empirically that they are a better candidate than Hoeffding bounds, another common choice.", "Standard Fairness Measures How can we use Bernstein-bounded unfairness to derive confidence intervals when the bias metric is demographic parity, equal opportunity, or equalized odds?", "Demographic parity requires that the success rates be equal across all groups.", "In this case, the cost would be c ( y , y ) = ( 1 y ) , since the rate of predicting a positive outcome ( y = 1 ) must be the same.", "There are no constraints on the annotation function f .", "Equal opportunity requires that the true positive rates be equal across groups (Hardt et al., 2016).", "The cost would still be ( 1 y ) but the annotation function would be g ( x ) = f ( x ) y ( x ) .", "To use terminology from Hardt et al. 
(2016), including y ( x ) means that we annotate qualified examples (i.e., y ( x ) = 1) but not unqualified ones (i.e., y ( x ) = 0).", "Equalized odds requires that both true and false positive rates be equal across groups (Hardt et al., 2016).", "The annotation function would be the same as for equal opportunity but the cost would have to account for differences in false positive rates as well.", "This could be done by letting c be the zero-one loss.", "It is thus possible to define the cost and annotation functions such that the groupwise disparity is equivalent to the bias defined by a common fairness measure.", "Because of our framing of the problem, we treat the cost as something to be minimized.", "For example, for equal opportunity, the groupwise disparity was defined as the difference in false negative rates.", "However, we could set c ( y , y ) = y for equal opportunity as well, such that the groupwise disparity is the difference in true positive rates.", "Both perspectives are equivalent, but one may be more intuitive depending on the use case.", "We begin by providing empirical evidence that a 95% BBU confidence interval consistently bounds the true bias (i.e., population-level groupwise dis-parity).", "We conduct our experiments on the MNLI dev set (Williams et al., 2018), used for testing natural language inference.", "We treat the genres of examples in MNLI as the protected groups.", "Since the genre annotations are given, we calculate the true bias as the difference in annotator disagreement rates for in-genre versus out-genre examples, effectively treating the human annotators as the classifier whose bias we want to measure.", "We then use BBU and check whether the true bias falls within the 95% confidence interval when we estimate the bias using a subset of the data.", "The experiments on MNLI do not measure an important social bias.", "Rather, they are meant to be a proof-of-concept.", "We treat the MNLI genres as protected groups because the protected attribute the genre is clearly annotated.", "We use MNLI over smaller datasets annotated with attributes such as gender because this setup where the cost is the rate of annotator disagreement does not require any model training, making our results easy to replicate.", "Moreover, this use case illustrates that our conception of bias need not be restricted to social biases it can be the difference in cost incurred by any arbitrarily defined groups.", "Lastly, we examine how large a bias-specific dataset needs to be in order to conclude that a given classifier is biased.", "Specifically, we consider a co-reference resolution system that is more accurate on sentences containing stereotypical gender roles.", "Fixing the confidence level at = 0 .", "95, we show that as the magnitude of the sample bias decreases, we need a larger bias-specific dataset (i.e., larger n ) in order to make a bias claim with 95% confidence.", "roughly 2000 per genre.", "Since the genre annotation is known, we treat it as the protected attribute.", "We define the cost for a given example as the proportion of human annotators whose annotation differs from the gold label.", "The true bias for each genre (i.e., the groupwise disparity across all data) is the difference in mean cost incurred by the in-genre and out-genre examples.", "These statistics are in Table 1. 
The annotation function for each genre just samples some in-genre and out-genre examples to be the protected and unprotected groups respectively.", "In this setup, the ratio of in-genre to out-genre examples is controlled by (2.4).", "We then use this sample to calculate a 95% confidence interval [ t , + t ] .", "If in Table 1 falls within [ t , + t ] , then the BBU confidence interval correctly bounds the true bias for that genre.", "Gender Bias For our second experiment, we consider a hypothetical co-reference resolution system M that is more accurate when the input sentence is gender-stereotypical.", "For example, M might assume that doctor' is always replaced with a male pronoun and nurse' with a female pronoun.", "The existence of such systems motivated the cre-ation of bias-specific datasets such as WinoBias and WinoGender for co-reference resolution (Zhao et al., 2018b; Rudinger et al., 2018).", "We define the cost for a given example as the zero-one loss (i.e., 1 [ y (cid:54) = y ] ) so that the true bias corresponds to the difference in accuracy between gender-stereotypical and non-gender-stereotypical sentences.", "The former is our protected group.", "Say = 0 .", "05 that is, M is 5 percentage points more accurate on gender-stereotypical sentences.", "How large must n be for us to claim with 95% confidence that M is gender-biased (i.e., for 0 (cid:54) [ t , + t ] )?", "On the MNLI data, even when as few as 100 examples are sampled and used to estimate the bias, a 95% BBU confidence interval bounds the true bias 100% of the time.", "This outcome is the average across all MNLI genres after averaging the results across 20 runs.", "As seen in Figure 1, 95% BBU bounds also grow tighter as the annotated sample size n increases and the frequency of the protected group increases from 0.1 to 0.5.", "Based on the derivation of the interval width in (3), both of these trends are expected.", "In our gender bias experiment, we want to know how large n needs to be such that given = 0 .", "05, Figure 2: The bias estimate of a co-reference resolution system M is calculated on a sample of annotated data.", "we can say with 95% confidence that the coreference resolution system M is gender-biased.", "In other words, we want to find the smallest n such that 0 (cid:54) [ t , + t ] .", "Since > 0, we can set t and work backwards from (2): n > ( 2 2 + 2 C 3 ) (cid:0) log (cid:2) 12 ( 1 ) (cid:3)(cid:1) 2 (4) In our hypothetical scenario, the maximum cost C = 1, the bias estimate = 0 .", "05, and = 0 .", "95.", "We assume that = 0 .", "5, since bias-specific datasets often have equally many protected and unprotected examples.", "We also assume that the variance is maximal (i.e., 2 = ( C / ) 2 ).", "With these inputs, n > 11903: in other words, we would need a bias-specific dataset with at least 11903 examples to claim with 95% confidence that the system M is biased.", "This is 3 .", "8 times larger than the size of WinoBias (Zhao et al., 2018a), the largest such dataset currently available.", "In Figure 2, we plot the amount of data needed against the magnitude of sample bias .", "Note that with WinoBias, which has 3160 examples, we could only make a bias claim with 95% confidence if the bias estimate = 0 .", "0975 or higher (i.e., if the system M were 9.75 percentage points more accurate on the gender-stereotypical examples in WinoBias).", "It is possible to claim the existence of bias in a particular direction without knowing what the true bias is.", "For example, consider the = 0 .", "5 error bars 
in Figure 1 (right): the 95% confidence interval for the bias faced by the government' genre in MNLI falls in the range (0.0, 0.12).", "This means that we are 95% confident that government' examples in MNLI face more annotator disagreement than other genres, even if we do not know precisely how much more that is.", "However, as shown in section 3.3, datasets currently used to estimate classification bias in NLP such as WinoBias (Zhao et al., 2018b) and WinoGender (Rudinger et al., 2018) are too small to conclusively identify bias except in the most egregious cases.", "There are two possible remedies to this.", "For one, even though we applied what we thought was the tightest applicable bound, it may be possible to derive a tighter confidence interval for .", "If so, one could use smaller datasets to make bias claims with a high degree of confidence.", "However, even in this optimistic scenario, current datasets would probably remain insufficient for detecting small magnitudes of bias.", "The more straightforward remedy would be to create larger bias-specific datasets.", "Even MNLI, for example, is orders of magnitude larger than WinoBias, suggesting that creating large bias-specific datasets is well within the realm of possibility.", "We first showed that many standard measures of fairness (e.g., equal opportunity) can be expressed as the difference in expected cost incurred by protected and unprotected groups.", "Given that most bias estimates are made using small samples, we proposed Bernstein-bounded unfairness (BBU) for quantifying the uncertainty about a bias estimate using a confidence interval.", "Using MNLI, we provided empirical evidence that 95% BBU confidence intervals consistently bound the true population-level bias.", "In quantifying this uncertainty, BBU helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence to make either claim.", "Although datasets currently used to estimate classification bias (e.g., WinoBias) are undoubtedly a step in the right direction, our findings suggest that they need to be much larger in order to be a useful diagnostic.", "Many thanks to Aidan Perreault, Dallas Card, and Tengyu Ma for providing detailed feedback.", "We thank Nelson Liu for helpful discussion." ]
[ "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "other", "other" ]
[ "False claims that have been previously fact-checked can still spread on social media.", "To mitigate their continual spread, detecting previously fact-checked claims is indispensable.", "Given a claim, existing works retrieve fact-checking articles (FC-articles) for detection and focus on reranking candidate articles in the typical two-stage retrieval framework.", "However, their performance may be limited as they ignore the following characteristics of FC-articles: (1) claims are often quoted to describe the checked events, providing lexical information besides semantics; and (2) sentence templates to introduce or debunk claims are common across articles, providing pattern information.", "In this paper, we propose a novel reranker, MTM (Memory-enhanced Transformers for Matching), to rank FC-articles using key sentences selected using event (lexical and semantic) and pattern information.", "For event information, we propose to finetune the Transformer with regression of ROUGE .", "For pattern information, we generate pattern vectors as a memory bank to match with the parts containing patterns.", "By fusing event and pattern information, we select key sentences to represent an article and then predict if the article fact-checks the given claim using the claim, key sentences, and patterns.", "Experiments on two real-world datasets show that MTM outperforms existing methods.", "Human evaluation proves that MTM can capture key sentences for explanations.", "The code and the dataset are at https://github.com/ ICTMCG/MTM .", "Social media posts with false claims have led to real-world threats on many aspects such as politics (Fisher et al., 2016), social order (Wang and Li, 2011), and personal health (Chen, 2020).", "To tackle this issue, over 300 fact-checking projects have been launched, such as Snopes 1 and Jiaozhen 2 (Duke Reporters' Lab, 2020).", "Meanwhile, automatic systems have been developed for detecting suspicious claims on social media (Zhou et al., 2015; Popat et al., 2018a).", "This is however not the end.", "A considerable amount of false claims continually spread, even though they are already proved false.", "According to a recent report (Xinhua Net, 2019), around 12% of false claims published on Chinese social media, are actually old, as they have been debunked previously.", "Hence, detecting previously fact-checked claims is an important 1 https://www.snopes.com 2 https://fact.qq.com/ task.", "According to the seminal work by Shaar et al. (2020), the task is tackled by a two-stage information retrieval approach.", "Its typical workflow is illustrated in Figure", "1(a).", "Given a claim as a query, in the first stage a basic searcher (e.g., BM25 Robertson and Zaragoza, 2009) searches for candidate articles from a collection of fact-checking articles (FC-articles).", "In the second stage, a more powerful model (e.g., BERT, Devlin et al., 2019) reranks the candidates to provide evidence for manual or automatic detection.", "Existing works focus on the reranking stage: Vo and Lee (2020) model the interactions between a claim and the whole candidate articles, while Shaar et al. 
(2020) extract several semantically similar sentences from FC-articles as a proxy.", "Nevertheless, these methods treat FC-articles as general documents and ignore characteristics of FC-articles.", "Figure", "1(b) shows three sentences from candidate articles for the given claim.", "Among them, S1 is more friendly to semantic matching than S2 and S3 because the whole S1 focuses on describing its topic and does not contain tokens irrelevant to the given claim, e.g., has spread over years in S2.", "Thus, a semantic-based model does not require to have strong filtering capability.", "If we use only general methods on this task, the relevant S2 and S3 may be neglected while irrelevant S1 is focused.", "To let the model focus on key sentences (i.e., sentences as a good proxy of article-level relevance) like S2 and S3, we need to consider two characteristics of FC-articles besides semantics: C1 .", "Claims are often quoted to describe the checked events (e.g., the underlined text in S2); C2 .", "Event-irrelevant patterns to introduce or debunk claims are common in FC-articles (e.g., bold texts in S2 and S3).", "Based on the observations, we propose a novel reranker, MTM (Memory-enhanced Transformers for Matching).", "The reranker identifies key sentences per article using claimand pattern-sentence relevance, and then integrates information from the claim, key sentences, and patterns for article-level relevance prediction.", "In particular, regarding C1 , we propose ROUGE -guided Transformer (ROT) to score claim-sentence relevance literally and semantically.", "As for C2 , we obtain the pattern vectors by clustering the difference of sentence and claim vectors for scoring pattern-sentence relevance and store them in the Pattern Memory Bank (PMB).", "The joint use of ROT and PMB allows us to identify key sentences that reflect the two characteristics of FC-articles.", "Subsequently, fine-grained interactions among claims and key sentences are modeled by the multi-layer Transformer and aggregated with patterns to obtain an article-level feature representation.", "The article feature is fed into a Multi-layer Perceptron (MLP) to predict the claim-article relevance.", "To validate the effectiveness of our method, we built the first Chinese dataset for this task with 11,934 claims collected from Chinese Weibo 3 and 27,505 fact-checking articles from multiple sources.", "39,178 claim-article pairs are annotated as relevant.", "Experiments on the English dataset and the newly built Chinese dataset show that MTM outperforms existing methods.", "Further human evaluation and case studies prove that MTM finds key sentences as explanations.", "Our main contributions are as follows: We propose a novel reranker MTM for fact-checked claim detection, which can better identify key sentences in fact-checking articles by exploiting their characteristics.", "We design ROUGE -guided Transformer to combine lexical and semantic information and propose a memory mechanism to capture and exploit common patterns in fact-checking articles.", "Experiments on two real-world datasets show that MTM outperforms existing methods.", "Further human evaluation and case studies prove that our model finds key sentences as good explanations.", "We built the first Chinese dataset for fact-checked claim detection with fact-checking articles from diverse sources.", "To defend against false information, researchers are mainly devoted to two threads: (1) Automatic fact-checking methods mainly retrieve relevant factual information from 
designated sources and judge the claim's veracity.", "Thorne et al. (2018) use Wikipedia as a fact tank and build a shared task for automatic fact-checking, while Popat et al. (2018b) and Wang et al. (2018) retrieve webpages as evidence and use their stances on claims for veracity prediction.", "(2) Fake news detection methods often use non-factual signals, such as styles (Przy-byla, 2020; Qi et al., 2019), emotions (Ajao 3 https://weibo.com -guided Transformer(ROT) E m b e dd i n g L a y e r O n e l a y e r T r a n s f o r m e r Pattern Memory Bank (PMB) Claim-SentenceScores Claim-Sentence Vectors Residual Embeddings Pattern-SentenceScores Nearest PatternVectors Multi-layerTransformer TotalScores MLP Relevant Irrelevant Article Relevance Prediction Key Sentence Identification Aggregate Claim-SentencePairs 1 2 3 4 5 0.3 0.8 0.9 0.6 0.9 0.7 0.5 0.9 0.8 SelectbyIndex SelectbyIndex Index Score Vector in EmbeddingSpace Vector Component WeightedSum Figure 2: Architecture of MTM .", "et al., 2019; Zhang et al., 2021), source credibility (Nguyen et al., 2020), user response (Shu et al., 2019) and diffusion network (Liu and Wu, 2018; Rosenfeld et al., 2020).", "However, these methods mainly aim at newly emerged claims and do not address those claims that have been fact-checked but continually spread.", "Our work is in a new thread, detecting previously fact-checked claims .", "Vo and Lee (2020) models interaction between claims and FC-articles by combining GloVe (Pennington et al., 2014) and ELMo embeddings (Peters et al., 2018).", "Shaar et al. (2020) train a RankSVM with scores from BM25 and Sentence-BERT for relevance prediction.", "These methods ignore the characteristics of FC-articles, which limits the ranking performance and explainability.", "Given a claim q and a candidate set of k 1 FC-articles D obtained by a standard full-text retrieval model (BM25), we aim to rerank FC-articles truly relevant w.r.t. q at the top by modeling fine-grained relevance between q and each article d D .", "This is accomplished by Memory-enhanced Transformers for Matching ( MTM ), which conceptually has two steps, (1) Key Sentence Identification and (2) Article Relevance Prediction, see Figure 2. For an article of l sentences, let S = { s 1 , ... , s l } be its sentence set.", "In Step (1), for each sentence, we derive claim-sentence relevance score from ROUGE guided Transformer (ROT) and pattern-sentence relevance score from Pattern Memory Bank (PMB).", "The scores indicate how similar the sentence is to the claim and pattern vectors, i.e., how possible to be a key sentence.", "Top k 2 sentences are selected for more complicated interactions and aggregation with the claim and pattern vectors in Step (2).", "The aggregated vector is used for the final prediction.", "We detail the components and then summarize the training procedure below.", "ROT (left top of Figure. 
2) is used to evaluate the relevance between q and each sentence s in {S i } k 1 i =1 , both lexically and semantically.", "Inspired by (Gao et al., 2020), we choose to inject the ability to consider lexical relevance into the semantic model.", "As the BERT is proved to capture and evaluate semantic relevance (Zhang et al., 2020), we use a one-layer Transformer initialized with the first block of pretrained BERT to obtain the initial semantic representation of q and s : z q,s = Transformer ([ CLS ] q [ SEP ] s ) (1) where [ CLS ] and [ SEP ] are preserved tokens and z q,s is the output representation.", "To force ROT to consider the lexical relevance, we finetune the pretrained Transformer with the guidance of ROUGE (Lin, 2004), a widely-used metric to evaluate the lexical similarity of two segments in summarization and translation tasks.", "The intuition is that lexical relevance can be characterized by token overlapping, which ROUGE exactly measures.", "We minimize the mean square error between the prediction and the precision and recall of ROUGE-2 between q and s ( R 2 R 2 ) to optimize the ROT: R ( q, s ) = MLP (cid:0) z q,s ([ CLS ]) (cid:1) (2) LR = (cid:107) R ( q, s ) R 2 ( q, s ) (cid:107) 22 + R (cid:107) (cid:107) 22 (3) where the first term is the regression loss and the second is to constraint the change of parameters as the ability to capture semantic relevance should be maintained.", "R is a control factor and represents the change of parameters.", "The Pattern Memory Bank (PMB) is to generate, store, and update the vectors which represent the common patterns in FC-articles.", "The vectors in PMB will be used to evaluate pattern-sentence relevance (see Section 3.1.3).", "Here we detail how to formulate, initialize, and update these patterns below.", "Formulation.", "Intuitively, one can summarize the templates, like ...has been debunked by..., and explicitly do exact matching, but the templates are costly to obtain and hard to integrate into neural models.", "Instead, we implicitly represent the common patterns using vectors derived from embeddings of our model, ROT.", "Inspired by (Wu et al., 2018), we use a memory bank M to store K common patterns (as vectors), i.e., M = { m i } Ki =1 .", "Initialization.", "We first represent each q in the training set and s in the corresponding articles by averaging its token embeddings (from the embedding layer of ROT).", "Considering that a pattern vector should be event-irrelevant , we heuristically remove the event-related part in s as possible by calculating the residual embeddings r s,q , i.e., subtracting q from s .", "We rule out the residual embeddings that do not satisfy t low < (cid:107) r s,q (cid:107) 2 < t high , because they are unlikely to contain good pattern information: (cid:107) r s,q (cid:107) 2 t low indicates q and s are highly similar and thus leave little pattern information, while Residual Embedding for the Rightly -predicted sample Residual Embedding for the Wrongly -predicted sample Weighted Sum of Pattern Vector Push away Draw closer Figure 3: Illustration for Memory Vector Update.", "(cid:107) r s,q (cid:107) 2 t high indicates s may not align with q in terms of the event, so the corresponding r s,q is of little sense.", "Finally, we aggregate the valid residual embeddings into K clusters using K-means and obtain the initial memory bank M : M = Kmeans (cid:0) { r valids,q } (cid:1) = { m 1 , ... 
where {r^{valid}_{s,q}} is the set of valid residual embeddings.", "Update.", "As the initial K vectors may not accurately represent common patterns, we update the memory bank according to feedback from the results during training: if the model predicts correctly, the key sentence, say s, should be used to update its nearest pattern vector m.", "To maintain stability, we use an epoch-wise update instead of an iteration-wise update.", "Take updating m as an example.", "After an epoch, we extract all n key sentences whose nearest pattern vector is m and their n corresponding claims, denoted as a tuple set (S, Q)_m.", "Then (S, Q)_m is separated into two subsets, R_m and W_m, which contain n_r and n_w sentence-claim tuples from the rightly and wrongly predicted samples, respectively.", "The core of our update mechanism (Figure 3) is to draw m closer to the residual embeddings in R_m and push it away from those in W_m.", "We denote the i-th residual embedding from the two subsets as r_i^{R_m} and r_i^{W_m}, respectively.", "To determine the update direction, we calculate a weighted sum of residual embeddings according to the predicted matching scores.", "For (s, q), suppose MTM outputs ŷ_{s,q} ∈ [0, 1] as the predicted matching score of q and d (whose key sentence is s); the weight of r_{s,q} is |ŷ_{s,q} − 0.5| (denoted as w_{s,q}).", "Weighted residual embeddings are respectively summed and normalized as the components of the direction vector (Eq. 5): u_r^m = Norm(Σ_{i=1}^{n_r} w_i^{R_m} r_i^{R_m}), u_w^m = Norm(Σ_{i=1}^{n_w} w_i^{W_m} r_i^{W_m}) (5), where u_r^m and u_w^m are the aggregated residual embeddings and Norm denotes L2 normalization.", "The two components are combined into the update direction u^m = w̄_r u_r^m − w̄_w u_w^m (6), where w̄_r and w̄_w are the normalized sums of the corresponding weights used in Eq. 5 (w̄_r + w̄_w = 1).", "The pattern vector m is updated with: m_new = m_old + η_m ||m_old||_2 · u^m / ||u^m||_2 (7), where m_old and m_new are the memory vector m before and after updating; the constant η_m and ||m_old||_2 jointly control the step size (a code sketch of this update follows this sentence list).", "Whether a sentence is selected as a key sentence is determined by combining the claim- and pattern-sentence relevance scores.", "The former is calculated from the distance between q and s trained with ROT (Eq. 8) and the latter uses the distance between the nearest pattern vector in the PMB and the residual embedding (Eq. 9).",
9).", "The scores are scaled to [0 , 1] .", "For each sentence s in d , the relevance score with q is calculated by Eq.", "10: scr Q ( q, s ) = Scale( (cid:107) r s,q (cid:107) 2 ) (8) scr P ( q, s ) = Scale( (cid:107) m u r s,q (cid:107) 2 ) (9) scr ( q, s ) = Q scr Q ( q, s ) + P scr P ( q, s ) (10) where Scale( x )=1 x min max min and max and min are the maximum and minimum distance of s in d , respectively.", "u = arg min i (cid:107) m i r s,q (cid:107) 2 , and Q and P are hyperparameters whose sum is 1 .", "Finally, sentences with topk 2 scores, denoted as K = { s keyi ( q, d ) } k 2 i =1 , are selected as the key sentences in d for the claim q .", "Sentence representation.", "We model more complicated interactions between the claim and the key sentences by feeding each z q,s key (de-rived from ROT) into a multi-layer Transformer ( MultiTransformer ): z (cid:48) q,s key = MultiTransformer( z q,s key ) (11) Following (Reimers and Gurevych, 2019), we respectively compute the mean of all output token vectors of q and s in z (cid:48) q,s key to obtain the fixed sized sentence vectors q (cid:48) R dim and s key (cid:48) R dim , where dim is the dimension of a token in Transformers.", "Weighted memory-aware aggregation.", "For final prediction, we use a score-weighted memory-aware aggregation.", "To make the predictor aware of the pattern information, we append the corresponding nearest pattern vectors to the claim and key sentence vectors: v i = [ q (cid:48) , s key (cid:48) i ( q, d ) , m j ] (12) where i =1 , ... , k 2 .", "j =arg min k (cid:13)(cid:13)(cid:13) m k r s keyi ,q (cid:13)(cid:13)(cid:13) 2 .", "Intuitively, a sentence with higher score should be attended more.", "Thus, the concatenated vectors (Eq. 12) are weighted by the relevance scores from Eq.", "10 (normalized across the topk 2 sentences).", "The weighted aggregating vector is fed into a MLP which outputs the probability that d fact-checks q : scr (cid:48) ( q, s keyi ) = Normalize (cid:0) scr ( q, s keyi ) (cid:1) (13) y q,d = MLP (cid:16) k 2 (cid:88) i =1 scr (cid:48) ( q, s keyi ) v i (cid:17) (14) where y q,d [0 , 1] .", "If y q,d > 0 .", "5 , the model predicts that d fact-checks q , otherwise does not.", "The loss function is cross entropy: LM = CrossEntropy( y q,d , y q,d ) (15) where y q,d { 0 , 1 } is the ground truth label.", "y q,d = 1 if d fact-checks q and 0 otherwise.", "The predicted values are used to rank all k 1 candidate articles retrieved in the first stage.", "We summarize the training procedure of MTM in Algorithm 1, including the pretraining of ROT, the initialization of PMB, the training of ARP, and the epoch-wise update of PMB.", "In this section, we mainly answer the following experimental questions: EQ1: Can MTM improve the ranking performance of FC-articles given a claim?", "EQ2: How effective are the components of MTM , including ROUGE -guided Transformer, Pattern Memory Bank, and weighted memory-aware aggregation in Article Relevance Prediction?", "EQ3: To what extent can MTM identify key sentences in the articles, especially in the longer ones?", "We conducted the experiments on two real-world datasets.", "Table 1 shows the statistics of the two datasets.", "The details are as follows: Twitter Dataset The Twitter 4 dataset is originated from (Vo and Lee, 2019) and processed by Vo and Lee (2020).", "The dataset pairs the claims (tweets) with the corresponding FC-articles from Snopes.", "For tweets with images, it appends the OCR results to the tweets.", "We remove the manually normalized 
"We remove the manually normalized claims in Snopes' FC-articles to adapt to more general scenarios.", "The data split is the same as that in (Vo and Lee, 2020).", "Weibo Dataset We built the first Chinese dataset for the task of detecting previously fact-checked claims in this article.", "[Table 1: Statistics of the Twitter and the Weibo dataset.]", "The claims are collected from Weibo and the FC-articles are from multiple fact-checking sources including Jiaozhen, Zhuoyaoji, etc.", "We recruited annotators to match claims and FC-articles based on basic search results.", "Appendix A introduces the details.", "BERT-based rankers from general IR tasks BERT (Devlin et al., 2019): A method of pretraining language representations with a family of pretrained models, which has been used in general document reranking to predict relevance (Nogueira and Cho, 2019; Akkalyoncu Yilmaz et al., 2019).", "DuoBERT (Nogueira et al., 2019): A popular BERT-based reranker for multi-stage document ranking.", "Its input is a query and a pair of documents.", "The pairwise scores are aggregated for final document ranking.", "Our first baseline, BERT (trained with query-article pairs), provides the inputs for DuoBERT.", "BERT(Transfer): As no sentence-level labels are provided in most document retrieval datasets, Yang et al. (2019) finetune BERT with short text matching data and then apply it to score the relevance between the query and each sentence in documents.", "The three highest scores are combined with the BM25 score for document-level prediction.", "Rankers from related works of our task Sentence-BERT: Shaar et al. (2020) use pretrained Sentence-BERT models to calculate the cosine similarity between each sentence and the given claim.", "Then the top similarity scores are fed into a neural network to predict document relevance.", "RankSVM: A pairwise RankSVM model for reranking using the scores from BM25 and Sentence-BERT (mentioned above), which achieves the best results in (Shaar et al., 2020).", "CTM (Vo and Lee, 2020): This method leverages GloVe and ELMo to jointly represent the claims and the FC-articles for predicting the relevance scores.", "Its multi-modal version is not included as MTM focuses on key textual information.", "Evaluation Metrics.", "As this is a binary retrieval task, we follow Shaar et al. (2020) and report Mean Reciprocal Rank (MRR), Mean Average Precision@k (MAP@k, k = 1, 3, 5) and HIT@k (k = 3, 5).", "See equations in Appendix B.", "Implementation Details.", "In MTM, the ROT and ARP components have one and eleven Transformer layers, respectively.", "The initial parameters are obtained from pretrained BERT models (bert-base-chinese for Weibo and bert-base-uncased for Twitter).", "Other parameters are randomly initialized.", "The dimension of the claim and sentence representations in ARP and of the pattern vectors is 768.", "The number of clusters in the PMB, K, is 20.", "Following (Shaar et al., 2020) and (Vo and Lee, 2020), we use k_1 = 50 candidates retrieved by BM25.", "k_2 = 3 (Weibo, hereafter W) / 5 (Twitter, hereafter T) key sentences are selected.",
"We use Adam (Kingma and Ba, 2015) for optimization with ε = 10^{-6}, β_1 = 0.9, β_2 = 0.999.", "The learning rates are 5 × 10^{-6} (W) and 1 × 10^{-4} (T).", "The batch size is 512 for pretraining ROT and 64 for the main task.", "According to the quantiles on the training sets, we set t_low = 0.252 (W) / 0.190 (T) and t_high = 0.295 (W) / 0.227 (T).", "The following hyperparameters are selected according to the best validation performance: λ_R = 0.01 (W) / 0.05 (T), λ_Q = 0.6, λ_P = 0.4, and η_m = 0.3.", "The maximum number of epochs is 5.", "All experiments were conducted on NVIDIA V100 GPUs with PyTorch (Paszke et al., 2019).", "The implementation details of the baselines are in Appendix C.", "4.4 Performance Comparison To answer EQ1, we compared the performance of the baselines and our method on the two datasets, as shown in Table 2. We see that: (1) MTM outperforms all compared methods on the two datasets (the only exception is MAP@1 on Twitter), which indicates that it can effectively find related FC-articles and provide evidence for determining if a claim was previously fact-checked.", "(2) For all methods, the performance on Weibo is worse than that on Twitter because the Weibo dataset contains more claim-sentence pairs (from multiple sources) than Twitter and is more challenging.", "Despite this, MTM's improvement is significant.", "(3) BERT(Transfer), Sentence-BERT and RankSVM use transferred sentence-level knowledge from other pretext tasks but did not outperform the document-level BERT.", "This is because FC-articles have their own characteristics, which may not be covered by transferred knowledge.", "In contrast, our observed characteristics help MTM achieve good performance.", "Moreover, MTM is also efficient compared to BERT(Transfer), which also uses 12-layer BERT and selects sentences: our model uses only one layer for all sentences (the other 11 layers are applied only to key sentences), while all sentences are fed through all 12 layers in BERT(Transfer).", "To answer EQ2, we evaluated three ablation groups of MTM's variants (AG1-AG3) to investigate the effectiveness of the model design.", "Table 3 shows the performance of the variants and MTM.", "AG1: With vs. Without ROUGE.", "This variant removes the guidance of ROUGE (MTM w/o ROUGE guidance) to check the effectiveness of ROUGE-guided finetuning.", "The variant performs worse on Weibo, but MAP@1 slightly increases on Twitter.", "This is probably because there is more lexical overlap between claims and FC-articles in the Weibo dataset, while most of the FC-articles in the Twitter dataset choose to summarize the claims they fact-check.", "AG2: Cluster-based Initialization vs. Random Initialization vs. Without Update vs. Without PMB.", "The first variant (MTM w/ rand mem init) uses random initialization and the second (MTM w/o mem update) uses pattern vectors without updating.", "The last one (MTM w/o PMB) removes the PMB.", "We see that the variants all perform worse than MTM on MRR, of which w/ rand mem init performs the worst.", "This indicates that cluster-based initialization provides a good start and facilitates the following updates, while random initialization may harm further learning.",
"AG3: Score-weighted Pooling vs. Average Pooling, and With vs. Without Pattern Vector.", "The first variant, MTM w/ avg. pool, replaces the score-weighted pooling with average pooling.", "The comparison in terms of MRR and MAP shows the effectiveness of using relevance scores as weights.", "The second, MTM w/o pattern aggr., does not append the pattern vector to the claim and sentence vectors before aggregation.", "It yields worse results, indicating that the patterns should be taken into consideration for the final prediction.", "(We do not run MTM without sentence selection due to its high computational overhead, which makes it infeasible for training and inference.)", "To probe what the PMB summarizes and memorizes, we selected and analyzed the key sentences corresponding to the residual embeddings around pattern vectors.", "Figure 4 shows example sentences in which highly frequent words are in boldface.", "These examples indicate that the pattern vectors do cluster key sentences with common patterns like ...spread in WeChat Moments.", "The quality of the selected sentences cannot be automatically evaluated due to the lack of sentence-level labels.", "To answer EQ3, we conducted a human evaluation.", "We randomly sampled 370 claim-article pairs whose articles had over 20 sentences from the Weibo dataset.", "Then we showed each claim and the top three sentences selected from the corresponding FC-article by MTM.", "[Figure 5 example claim: Is this to make the so-called artificial eggs?]", "Three annotators were asked to check whether an auto-selected sentence helped match the given query and the source article (i.e., whether it is a key sentence).", "Figure 5 shows that (a) MTM hit at least one key sentence in 83.0% of the articles, and (b) 73.0% of the sentences at Rank 1 are key sentences, followed by 65.1% at Rank 2 and 56.8% at Rank 3. This shows that MTM can find the key sentences in long FC-articles and provide helpful explanations.", "We also show the positional distribution in Figure 5(c), where key sentences are scattered throughout the articles.", "Using MTM to find key sentences can save fact-checkers the time spent scanning these long articles to determine whether a given claim was fact-checked.", "Additionally, we exhibit two cases from the evaluation set in Figure 6.", "These cases show that MTM found the key sentences that correspond to the characteristics described in Section 1.", "Please refer to Appendix D for further case analysis.", "We propose MTM to select, from fact-checking articles, key sentences that introduce or debunk claims.", "These auto-selected sentences are exploited in an end-to-end network for estimating the relevance of the fact-checking articles w.r.t. a given claim.", "Experiments on the public Twitter dataset and the private Weibo dataset show that MTM outperforms the state of the art.", "Moreover, human evaluation and case studies demonstrate that the selected sentences provide helpful explanations of the results.", "The authors thank Guang Yang, Tianyun Yang, Peng Qi and the anonymous reviewers for their insightful comments.", "Also, we thank Rundong Li, Qiong Nan, and the other annotators for their efforts.", "This work was supported by the National Key Research and Development Program of China (2017YFC0820604), the National Natural Science Foundation of China (U1703261), and the Fundamental Research Funds for the Central Universities and the Research Funds of Renmin University of China (No. 18XNLG19).",
"The corresponding authors are Juan Cao and Xirong Li.", "Our work involves two scenarios that need the ability to detect previously fact-checked claims: (1) For social media platforms, our method can check whether a newly published post contains false claims that have already been debunked.", "The platform may help users be aware of the text's veracity by providing the key sentences selected from fact-checking articles along with their links.", "(2) For manual or automatic fact-checking systems, it can serve as a filter to avoid redundant fact-checking work.", "When functioning well, it can assist platforms, users, and fact-checkers in maintaining a more credible cyberspace.", "But in failure cases, some well-disguised claims may escape.", "This method relies on the fact-checking article databases it uses.", "Thus, their authority and credibility need to be carefully considered in practice.", "We did our best to make the new Weibo dataset, built for academic purposes, reliable.", "Appendix A introduces more details." ]
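The epoch-wise pattern-vector update above (Eqs. 5-7) is compact enough to sketch in code. Below is a minimal NumPy sketch under our reading of those equations; the function name `update_pattern_vector`, the reconstructed combination rule for the update direction (Eq. 6), and the default eta_m = 0.3 (the paper's tuned value) are illustrative assumptions rather than the authors' released implementation.

```python
import numpy as np

def update_pattern_vector(m_old, right_residuals, right_scores,
                          wrong_residuals, wrong_scores, eta_m=0.3):
    """Epoch-wise update of one pattern vector m (Eqs. 5-7, sketch).

    right_residuals / wrong_residuals: (n_r, dim) and (n_w, dim) residual
        embeddings r_{s,q} from rightly / wrongly predicted samples whose
        nearest pattern vector is m.
    right_scores / wrong_scores: the corresponding predicted matching
        scores y_hat in [0, 1]; each sample is weighted by |y_hat - 0.5|.
    """
    w_r = np.abs(np.asarray(right_scores, dtype=float) - 0.5)
    w_w = np.abs(np.asarray(wrong_scores, dtype=float) - 0.5)

    def normed_weighted_sum(residuals, weights):
        # weighted sum of residuals, L2-normalized (components of Eq. 5)
        if len(residuals) == 0:
            return np.zeros_like(m_old)
        u = (weights[:, None] * np.asarray(residuals, dtype=float)).sum(axis=0)
        norm = np.linalg.norm(u)
        return u / norm if norm > 0 else u

    u_r = normed_weighted_sum(right_residuals, w_r)  # draw m closer to these
    u_w = normed_weighted_sum(wrong_residuals, w_w)  # push m away from these

    # normalized weight masses, w_bar_r + w_bar_w = 1
    total = w_r.sum() + w_w.sum()
    w_bar_r = w_r.sum() / total if total > 0 else 0.5
    w_bar_w = 1.0 - w_bar_r

    # assumed combination rule (reconstructed Eq. 6)
    u_m = w_bar_r * u_r - w_bar_w * u_w
    norm_u = np.linalg.norm(u_m)
    if norm_u == 0:
        return m_old
    # step size jointly controlled by eta_m and ||m_old||_2 (Eq. 7)
    return m_old + eta_m * np.linalg.norm(m_old) * u_m / norm_u
```

In practice this would be called once per pattern vector at the end of each training epoch, after regrouping the key sentences by their nearest pattern vector and by whether the model predicted them correctly.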
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "objective", "result", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Text-to-SQL parsers map natural language questions to programs that are executable over tables to generate answers, and are typically evaluated on large-scale datasets like SPIDER (Yu et al., 2018).", "We argue that existing benchmarks fail to capture a certain out-of-domain generalization problem that is of significant practical importance: matching domain specific phrases to composite operations over columns.", "To study this problem, we propose a synthetic dataset and a re-purposed train/test split of the SQUALL dataset (Shi et al., 2020) as new benchmarks to quantify domain generalization over column operations.", "Our results indicate that existing state-of-the-art parsers struggle in these benchmarks.", "We propose to address this problem by incorporating prior domain knowledge by preprocessing table schemas, and design a method that consists of two components: schema expansion and schema pruning .", "This method can be easily applied to multiple existing base parsers, and we show that it significantly outperforms baseline parsers on this domain generalization problem, boosting the underlying parsers' overall performance by up to 13.8% relative accuracy gain (5.1% absolute) on the new SQUALL data split.", "Text-to-SQL parsing is the task of translating natural language questions over provided tables to SQL queries which can be executed to produce answers.", "In recent years, with the availability of large-scale datasets (e.g., Zhong et al., 2017; Yu et al., 2018), neural semantic parsers have witnessed significant success on this task.", "However, recent work (Suhr et al., 2020; Lee et al., 2021) has suggested that these state-of-the-art parsers are far from successful in terms of out-of-domain generalization in real scenarios, where users may ask questions related to potentially very large tables with the goal of improving their productivity (e.g., while they are viewing or editing a large Excel spreadsheet).", "In such scenarios, it is common to encounter tables specific to new domains that were not encountered while training a parser.", "Perhaps the most challenging aspect of domain generalization is that models need to understand domain-specific phrases that they have not seen before, and translate them into logical form segments that involve references to table elements (e.g., column names or aggregation operations over columns).", "We argue that two kinds of abstract operations, shown in Figure 1, are particularly challenging for new domains: 1. Column Matching: The task of mapping natural language phrases to the most relevant columns (e.g., mapping Income to the \"Wages\" column).", "This can be challenging because some mappings may be implicit or may require domain knowledge.", "2. Column Operations: The task of mapping natural language phrases to composite expressions over table columns.", "For example, in Figure 1, we need to map income to just \"Wages\" for one table, and to \"Salary\" + \"Stock\" for another table.", "Similarly, consider the complex \"Term\" column in Figure 2, in which two subfields 1 and 2 represent the term start (e.g., 1926) and term end (e.g., 1927), respectively.", "Some questions may ask about the term duration while others may ask about the term start .", "Each of these questions requires mapping the corresponding phrase to an expression that refers to this column (e.g., \"Term\". 2 \"Term\". 1 for the former and \"Term\". 
"While recent approaches rely on pre-trained language models (e.g., Yin et al., 2020; Deng et al., 2021) for addressing the column matching challenge, column operations remain relatively unexplored due to the lack of evaluation benchmarks.", "To this end, we first propose two new benchmarks: a synthetic dataset and a train/test repartitioning of the SQUALL dataset (Shi et al., 2020), both capable of quantifying out-of-domain generalization on column operations.", "We then show that existing neural parsers underperform on both benchmarks because they require an impractically large amount of in-domain training data (which is not available in our setting) to effectively memorize mappings from natural language phrases to program fragments.", "Finally, we propose a new method for making any existing text-to-SQL parser aware of prior information that may be available about the domains of interest.", "Specifically, we propose two new components: schema expansion and schema pruning.", "Schema expansion uses heuristics to expand columns into sets of derived columns based solely on their types (all schemas are assumed to be typed, which tends to be true for both relational databases and Excel spreadsheets in practice; Excel uses a built-in type inference mechanism).", "Relying on generic types makes this method applicable to new domains, as long as they make use of similar underlying types.", "This process allows us to transform complex program fragments (e.g., \"Term\".2 − \"Term\".1) into simpler ones (e.g., \"Term Duration\") that are better aligned with the natural language questions, thus making the underlying parser's job easier.", "While schema expansion may result in a large number of unnecessary expanded columns, schema pruning then examines both the input question and the available columns (original and expanded) and prunes the set of columns that the final parser is exposed to.", "Our experiments show that schema expansion and schema pruning can boost the underlying parsers' performance by up to 13.8% relative accuracy (5.1% absolute) on the new SQUALL data split.", "Furthermore, they also boost performance over the original SQUALL data splits by up to 4.2% relative (1.9% absolute).", "One of our main goals in this paper is to draw attention to the difficult problem of domain generalization by providing a new evaluation benchmark, as well as an initial direction for solving this problem.", "Our evaluation benchmarks along with code for reproducing our experiments are available at https://aka.ms/text-to-sql-schema-expansion-generalization.", "Task.", "Semantic parsing has been widely studied in the context of multiple other tasks like instruction following (Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013), code generation (Oda et al., 2015; Iyer et al., 2018), knowledge graph question answering (Berant et al., 2013; Yih et al., 2015), etc.", "We focus on using tables as the context in which semantic parsing is performed, where the goal is to translate pairs of natural language questions and tables to executable SQL queries, also known as text-to-SQL parsing (Androutsopoulos et al., 1995; Minock et al., 2008).", "Note that, while we focus on questions in the English language, there exists prior work on multilingual semantic parsing as well (Jie and Lu, 2014; Sherborne et al., 2020) and the contributions of our work also apply there.",
"Formally, our goal is to map a pair (q, T), where q is a natural language question and T is a table, to an executable program that, when executed against table T, will produce the answer to question q.", "We focus on the fully-supervised setting where the target executable program is provided as supervision for training our parser.", "Out-of-Domain Generalization.", "Generalization in machine learning is often defined as the ability to do well on a test set after learning from a training set, where all examples in both sets are drawn independently from the same distribution (i.i.d. generalization).", "However, as Gu et al. (2021) argue, in real-world applications such as semantic parsing, the test data may involve new compositional structures (compositional generalization), or new domains (domain generalization) that are not encountered during training.", "Existing work in compositional generalization for semantic parsing has focused on using synthetic datasets (e.g., Keysers et al., 2020; Lake and Baroni, 2018), or repartitioning existing text-to-SQL datasets into new train and test splits (e.g., Finegan-Dollak et al., 2018).", "Both approaches have generally shown that compositional generalization remains an important challenge (e.g., Shaw et al., 2021).", "We focus on the arguably even more challenging domain generalization problem, also known as domain adaptation, where entire domains may never be encountered during training or may only be encountered a small number of times (Motiian et al., 2017).", "Even though this problem has been studied extensively in the context of classification (Daumé III and Marcu, 2006), machine translation (Daumé III and Jagarlamudi, 2011), and question answering (Talmor and Berant, 2019), it remains underexplored for semantic parsing.", "To be applicable in real scenarios, semantic parsers must be able to generalize to new domains since collecting domain-specific labeled data is often prohibitively expensive.", "Recent approaches have focused on data synthesis (Yin et al., 2021), meta-learning (Wang et al., 2021), relation-aware schema encoding (Wang et al., 2020), and encoder pretraining (Yin et al., 2020; Herzig et al., 2020; Yu et al., 2020; Deng et al., 2021).", "In this paper, we home in on one aspect of domain generalization that we shall broadly refer to as column operations, which was introduced in Section 1 and illustrated in Figure 1.", "Evaluation Benchmarks.", "Text-to-SQL parsing became popular after the introduction of large-scale datasets and evaluation benchmarks.", "Zhong et al. (2017) first introduced WIKISQL, which contains Wikipedia tables paired with questions and annotated with SQL queries, albeit the queries are generated from a limited set of templates.", "SPIDER was introduced by Yu et al. (2018) the following year.", "It contains more complex questions and SQL queries and focuses on generalizing to previously unseen database schemas, but the dataset has the artifact, stemming from its annotation design, that the referenced columns are often mentioned verbatim in the natural language questions.", "Deng et al. (2021) attempt to address this limitation by repartitioning SPIDER to produce a more realistic benchmark, and Lee et al. (2021) propose a challenging test set from Kaggle for evaluating parsers trained on the SPIDER dataset.", "However, as Suhr et al.
(2020) point out, SPIDER also uses a simplified setting which excludes examples that involve multiple columns (e.g., adding two columns together), as well as ones that require background knowledge.", "These benchmarks are thus limited in their usefulness for evaluating parsers in real-world settings where they may encounter complex questions that require mapping specific phrases to expressions over table columns, rather than to a single column.", "Furthermore, while both WIKISQL and SPIDER assume simple tables with only String- or Number-valued columns, in practice we may encounter tables where the columns themselves may have structured types (e.g., TimeSpan).", "For example, consider the table shown on the top left of Figure 2. In this case, the \"Term\" column is of type TimeSpan and consists of two Numbers that represent the beginning and the end of the timespan.", "In this case, users may ask questions that require constructing expressions to access nested elements from the \"Term\" column (e.g., How long was Pier's term?).", "Recently, Shi et al. (2020) introduced SQUALL, a dataset that annotates WIKITABLEQUESTIONS (Pasupat and Liang, 2015) with SQL queries and refined column types like Date, Score, (T1, T2), and List[T].", "However, SQUALL distributes tables evenly between the train and test splits, thus not allowing us to evaluate the kind of out-of-domain generalization we are interested in.", "Therefore, as we will show in the following section, we aim to address this limitation by repartitioning SQUALL into new train and test splits.", "Neural Text-to-SQL Parsers.", "Neural encoder-decoder models have recently gained popularity for text-to-SQL parsing (e.g., Xu et al., 2017).", "We focus on two models that represent the current state-of-the-art for SQUALL and SPIDER, respectively: SEQ2SEQ of Shi et al. (2020) and SMBOP of Rubin and Berant (2021).",
"Both models concatenate the question with a textual representation of the table schema, separated by a special [SEP] token, and feed the combined sequence to a pre-trained instance of the BERT (Devlin et al., 2019) language model.", "The activations of the last layer represent the encoded representations of the question and the table schema.", "SEQ2SEQ then uses an autoregressive decoder, which represents programs as token sequences; at each decoding step it: (1) predicts the next token type (i.e., whether the next token is a SQL keyword, a column name, or a literal value), and (2) predicts the token conditioned on its type.", "SMBOP, on the other hand, uses bottom-up decoding, which represents programs as abstract syntax trees and constructs these trees in a bottom-up fashion (i.e., it starts by predicting the leaf nodes and then recursively composes generated sub-trees into new trees and ranks them, in a way that resembles beam search), until it reaches the tree root.", "We refer the reader to the aforementioned papers for details.", "Our goal is to design an evaluation benchmark that has the following out-of-domain generalization properties:", "(i) the training data involves a different set of domains from the test data,", "(ii) the questions and tables that appear in the train and test data are non-overlapping, not only in terms of the domains they belong to, but also in terms of the program fragments that they contain, and", "(iii) to simulate the more challenging setting that is often encountered in real applications, the test data is biased to contain more examples that involve both nested column access operations, like getting the start of a \"Term\" in Figure 2, as well as composite column expressions, like getting the duration of a \"Term\".", "To this end, we propose a new synthetic dataset and a repartitioning of the SQUALL dataset into new train/test splits.", "We consider three fictional domains inspired by common uses of tables: finance, sports, and science.", "We explain our synthetic dataset generation process through a running example as follows: 1. For each domain, we declare a set of formulas that relate different quantities (e.g., \"Income\" = \"Salary\" + \"Stock\").", "The primitives used in these formulas define the set of available columns.", "2. For each column we declare a set of noun phrases that can be used to refer to it (e.g., wages for \"Income\" and base salary for \"Salary\").", "We also define a SQL query template that shall be used for all programs: SELECT <column> FROM t WHERE \"Year\" = <year>, and a question template What was <column> in <year>?", "Note that the \"Year\" column is special and is included in all examples of this synthetic dataset.", "3. We sample a formula and a variable from that formula (e.g., \"Income\" from the formula \"Income\" = \"Salary\" + \"Stock\").", "We then generate a question asking for this variable, randomly replacing the variable with a noun phrase from the corresponding set, and randomly generate a year value (e.g., use wages to replace income and generate the question What was [wages] in [2011]?).", "4. To generate the target program, we randomly drop a variable from the formula sampled in step 3. If the asked value corresponds to this variable, we transform its reference in the SQL query so that it is expressed as a function of the columns that are kept (e.g., \"Salary\" + \"Stock\"), otherwise we use the column name (e.g., \"Income\").", "5.
To generate a table schema we first add a \"Year\" column and two of the columns that were not sampled from the formula (e.g., \"Salary\" and \"Stock\" ).", "We then sample k other columns and add them to schema ( k = 15 in our experiments) as distractor columns.", "Note that we do not generate full tables for this synthetic dataset since we do not evaluate on table cell selections.", "We construct benchmark datasets by first generating 1,000 examples per domain and then iterating over the domains and keeping the data generated for the current domain as our test data, while using the data of the remaining two domains for training.", "This results in three datasets, each with 2,000 train examples and 1,000 test examples.", "More details on the declarations for our domains can be found in Appendix A.1.", "Aside from the synthetic dataset we also propose to repartition SQUALL into new train and test data splits, with a focus on the aforementioned out-of-domain generalization properties.", "The original splits for SQUALL were produced by uniformly sampling 20% of the tables to produce the test set and using the remaining 80% as the train set.", "This process was repeated 5 times and the evaluation metric results were averaged over the results obtained for each repetition.", "This resulted in similar tables being included in both the train and test sets (e.g., tables referring to two different basketball matches, but having identical schemas), and few examples in the test set required column operations.", "In order to avoid this issue, we propose the following algorithm for automatically constructing data splits focused on out-of-domain generalization on column operations: 1. Collect the table schemas used across all examples in the train and dev splits of the dataset (there are about 1,600 schemas; note that the test set is not annotated with SQL queries).", "2. Construct a graph by treating each schema as a node, and adding an edge for each pair of schemas that share more than 33% of their columns.", "3. Find all connected components of the graph.", "Each defines a cluster of table schemas.", "4. Each table has a set of SQL queries associated with it: one for each example that uses this table.", "For each query we check if it is a SELECT of a single column or if it is a SELECT that involves column operations such as field accessors or arithmetic operations.", "We associate each cluster with the number of queries that involve such column operations.", "5. 
Sort the clusters based on this number, in decreasing order, and then use the first 20% as the test set and the remaining as the train set.", "Note that adding a cluster to the train/test set is equivalent to adding all examples that use tables included in this cluster.", "[Table 1: Statistics for our repartitioned version of SQUALL, including the categories that we use in our empirical analysis and which are presented in Section 3.2 (train / test examples): All 8,956 / 2,320; w/ Score Accessors 91 / 86; w/ Score Expressions 47 / 53; w/ Date Accessors 81 / 173; w/ Date Expressions 18 / 95.]", "This step will result in disproportionately more column operations being used in our test set than in our train set, which means that the model will need to learn to generalize well in this setting to do well on this dataset.", "In the following sections we pay special attention to four data subcategories that are representative of the out-of-domain generalization setting for SQUALL: Score Expressions: Represents SQL queries that include expressions over columns of type Score (e.g., a query selecting the score difference for a basketball game).", "Score Accessors: Represents SQL queries that include field accessors for columns of type Score (e.g., a column with the results of a basketball game, like 89-72, and a query that requires accessing the first element of this score; i.e., 89).", "Date Expressions: Similar to Score Expressions, except using the Date and TimeSpan types (e.g., a query asking for the duration of a presidency term).", "Date Accessors: Similar to Score Accessors, except using the Date and TimeSpan types (e.g., a query asking for the start of a term).", "In this section we propose a simple approach for tackling this specific out-of-domain generalization problem that ought to serve as evidence that it is a real problem and that it is solvable, as well as a reference point for evaluating future approaches.", "Our approach consists of two new components that can be used in combination with any existing text-to-SQL parser: schema expansion and schema pruning.", "These components interact with the parser by preprocessing the table that is fed to it as input.",
"This is illustrated in Figure 2. As discussed in Section 1, there are two kinds of challenges related to out-of-domain generalization in text-to-SQL parsing, column matching and column operations, with the latter being more challenging.", "The goal of schema expansion is to reduce column operation challenges to column matching by adding synthetic columns to the table schema.", "These synthetic columns correspond to expressions or accessors over existing columns (e.g., a column that represents the sum of two columns).", "Rather than learning (or memorizing) the ways in which different types of columns can be composed together, we propose to inject prior knowledge as to what kind of symbolic operations are possible based solely on the column types in a schema.", "This reduces column operations to column matching by effectively bringing the target programs closer to their surface form in the natural language question.", "For example, \"Income\" can now map to a synthetic column that corresponds to the sum of \"Salary\" and \"Stock\" instead of having the parser produce the sum expression directly.", "Since our expansion is based on column types, we argue that it is reasonable to assume that all schemas are typed and that our expansion could be applied to any new domain.", "It is also worth noting that even though our templates may not cover all cases, when applying our method to new domains, developers can declare a few templates of interest and apply schema expansion on these templates to create parser-friendly schemas.", "(An interesting future project idea would be to automatically expand schemas using pre-trained language models.)", "This would be more cost-effective compared to collecting large amounts of in-domain training data for training the parser.", "Naturally, having a component that expands the table schema means that we may end up with large schemas that the parser has to deal with, which will often involve a lot of irrelevant columns (partially because the schema expansion component does not peek at the question).", "This can result in increased latency, which is not desirable in real-world systems.", "To this end, we introduce a schema pruning component that looks at both the expanded table schema and the question and decides which columns to prune before invoking the parser.", "It can be argued that this pruning is as hard as parsing itself, but there is evidence from other areas that it can indeed be helpful (e.g., vocabulary selection; Chen et al., 2019; Xu et al., 2021).", "As we shall show, schema pruning can actually provide an additional boost in accuracy, depending on the architecture of the underlying parser.", "A domain developer first declares a set of templates that specify the ways in which different column types can interact (e.g., a template specifies that given a typed TimeSpan column that contains two subfields, 1 and 2, the expression TimeSpan.2 − TimeSpan.1 can be constructed, representing a duration), and the names for each such interaction (e.g., \"Duration\").", "(Having a name that accurately reflects the meaning of the column operation's result is desirable, as semantic parsers are sensitive to column names for column matching.)", "The schema expansion component receives as input this set of templates along with the table schema and returns an expanded schema that includes additional columns generated by using all applicable templates (see the expansion sketch after this sentence list).",
"For our SQUALL experiments, we declared the templates shown in Table 2. Although these templates are somewhat tailored to this dataset, our main goal is to show that there is considerable room for improvement in this challenging generalization scenario, and that even a simple approach with minimal manual effort can result in significant gains.", "We propose a simple schema pruning approach that is inspired by vocabulary selection methods in machine translation.", "Let us denote the input question by q and the input column names after expansion by c_1, ..., c_M.", "We concatenate the question and the column names as [CLS] q [SEP] c_1 [SEP] ... [SEP] c_M [SEP] and feed the resulting sequence to a BERT encoder (Devlin et al., 2019).", "We then define the embedding of each column, c_i, as the final-layer representation of the last token of that column's name.", "Finally, we define the probability that a column should be kept as p_i = Softmax(MLP(c_i)).", "We train this model based on whether each column is used in the corresponding SQL program.", "At inference time, we need to choose a threshold on the predicted probabilities for deciding whether to prune a column or not.", "We assume a transductive setting and choose this threshold such that the ratio of pruned columns over the test set equals the ratio of pruned columns over the train set plus a constant hyperparameter, to account for the fact that accuracy will likely be lower on the test set than on the train set.", "Note that assuming a transductive setting is fine because in a real-world system we could be tuning this threshold based on the last t requests made to the model.", "While this is not equivalent, assuming a large enough t, we should be able to adapt this threshold using the same approach.", "Negative Column Sampling.", "As is evident from Figure 2, we also introduce a negative column sampling component.", "This is because we train our pruning model on the same data that we use to train the underlying parser (aside from the modified table schemas) and thus the pruning model can become good at pruning all irrelevant columns over this dataset.", "This will result in the underlying parser being unable to handle situations where irrelevant columns are mistakenly left unpruned by the pruning model.", "To this end, during training we introduce some irrelevant columns to improve the robustness of the underlying parser.", "We found that making sure to always include at least 3 columns in the resulting schemas was sufficient and equivalent to randomly sampling 1 or 2 additional columns for each training example, and so that is what we did in our experiments.", "We performed experiments on the two proposed benchmarks (as well as the existing version of the SQUALL benchmark), using the two current state-of-the-art parser architectures presented in Section 2 in combination with our proposed schema expansion and pruning components.", "As described in Section 3, our synthetic benchmark consists of three domains: finance, sports, and science.", "We repeat our experiments once for each domain.", "For each repetition we test on one of the domains, while training on the other two.", "For SQUALL, we present results on our repartitioned split from Section 3.2.",
"For both datasets, we also include results for three i.i.d. splits.", "In each experiment, we compare four different configurations for the parsers: 1. Base: The underlying parser, which can be either SEQ2SEQ or SMBOP.", "2. Base + P: Base while also using schema pruning.", "3. Base + E: Base while also using schema expansion.", "4. Base + P + E: Base while also using both schema expansion and schema pruning.", "We repeat each experiment three times using different random seeds and report mean exact match accuracy (i.e., the fraction of examples where the predicted SQL queries exactly match the gold queries), and the standard error of this mean.", "Note that for SQUALL, researchers often also report execution accuracy, which measures the fraction of examples for which executing the predicted SQL queries results in the correct answer to the input question.", "However, we found that for 7% of the examples that are representative of out-of-domain generalization, executing the gold SQL queries does not yield the correct answer (e.g., in cases where the correct answer is a sub-string of a cell value).", "Therefore we chose to only report exact match accuracy in our experiments.", "Synthetic Benchmark Results.", "Our results for this benchmark are presented in the top part of Table 3. A first observation is that performance on the i.i.d. split for the baseline parsers is significantly better than on the domain-based splits.", "Interestingly, our expansion and pruning components still provide a significant boost over baseline performance in this setting (up to 43.7% absolute accuracy / 83.2% relative).", "However, the baseline parsers are practically unusable in the domain-based splits.", "In this case, our approach provides a very significant accuracy gain, rendering them useful (up to 55.0% absolute / 327.4% relative).", "SQUALL Benchmark Results.", "Our results for this benchmark are presented in the bottom part of Table 3. Similar to the synthetic benchmark, we observe that both parsers perform reasonably well on the i.i.d. split, but significantly underperform on our repartitioned benchmark.", "This is consistent with earlier observations by Suhr et al. (2020) and Lee et al. (2021).",
(2021).", "Furthermore, we observe that our expansion component helps boost the accuracy of both parsers significantly (up to 5.1% absolute / 13.8% relative) and the pruning component provides some small further improvements on top of that.", "However, we notice that the pruning component is not as helpful for SMBOP as it is for SEQ 2S EQ , which we provide detailed analysis in 5.4.", "Drilling down a bit further, we observe that most gains are due to the data categories we defined in 3.2.", "Perhaps most importantly, we get a 47.0% absolute accuracy gain (1,468.8% relative) for SMBOP on the Date Expressions category alone.", "This can be largely attributed to our schema expansion component, where by incorporating prior domain knowledge we are effectively reducing the original column operations problem to a column matching problem, which is significantly easier.", "As a result, we get significant improvements on both Expression data categories.", "We do not observe the same for Accessor categories, which we address in the following section.", "From Table 3, schema expansion does not seem to help much for Accessor expressions (i.e., Base + P performs as well as or slightly better than Base + E + P on those categories).", "In order to further understand the contribution of schema expansion, we conducted an ablation study where we compare the proposed Base + P + E with three more approaches: (1) E Expressions : the schema expansion component only uses Expression templates, (2) E Accessors : the schema expansion component only uses Accessor templates, (3) P Oracle : the schema pruning model is replaced with an oracle model that always only keeps the columns that are used in the gold SQL queries (so the parser only has to fig-ure out how to use them, rather than also figuring out which ones to use).", "Note that (3) will be discussed in the following section.", "We present the results for this ablation study in Table 4. 
"We present the results for this ablation study in Table 4. We observe that expanding Expressions but not Accessors boosts performance on the Expressions categories, and similarly for Accessors.", "More importantly, though, we see that using either one alone performs worse than using both types of expansion, indicating that they both provide value and that they work well together.", "It is evident from Table 3 that schema pruning is useful both on its own (i.e., Base + P) and on top of schema expansion (i.e., Base + E + P).", "For SMBOP, we observe that Base + P is more or less on par with Base.", "Though this may seem inconsistent with the SEQ2SEQ results at first, it is not actually surprising because SMBOP keeps the most relevant columns in the beam during bottom-up decoding, and thus it is implicitly already using a schema pruning component.", "Furthermore, we observe that schema pruning is especially useful on top of schema expansion for the column operation data categories (Expressions and Accessors).", "This is because in the corresponding examples we end up with a significantly larger number of expanded columns that are labeled as negatives when training the pruning model.", "[Figure 3 title: Schema Pruning Effect on Accuracy.]", "Schema pruning then filters most of these irrelevant columns before training the underlying parser, resulting in a more robust training procedure.", "Finally, in Table 4 we observe that P Oracle performs really well, indicating that investing in a good schema pruning model would be meaningful for improving generalization performance.", "Schema Pruning Decision Threshold.", "As discussed in Section 4, the proposed schema pruning component requires setting a decision threshold hyperparameter.", "We already described the way we do this in Section 4, but it is also worth analyzing the impact of this decision on the overall parser accuracy.", "This is because, intuitively, we expect that too aggressive pruning will likely cause cascading errors, while too conservative pruning would not be very effective and would end up being equivalent to not using any pruning at all.", "To this end, we conducted a study of how the parser accuracy varies as a function of the schema pruning model hyperparameter discussed in Section 4.", "We performed this experiment using the SEQ2SEQ model, which is more affected by the pruning component, over our repartitioned SQUALL benchmark.", "The results are shown in Figure 3. It is evident that aggressive pruning has a more significant negative impact on accuracy than conservative pruning.", "The proposed method is of course not without limitations, and in this section we would like to draw attention to some of them.", "While schema expansion does help significantly when tackling out-of-domain generalization on column operations, there are a lot of cases that it cannot directly handle as currently designed.",
"For example, consider the question-table pair shown in Figure 4. In this case the original table contains a \"Score\" column and a \"Result\" column.", "The interpretation of these columns is very domain-specific: in this case, \"Score\" refers to the score in a game right before the player of that row scored a goal, while \"Result\" refers to the final score of the game.", "Our schema expansion component cannot help with resolving distinctions of this kind.", "Arguably, one might say this is a challenge inherently related to column matching, but putting details aside, our approach coupled with the proposed benchmarks does help show that column operations pose a significant challenge for existing text-to-SQL parsers, and this paper provides a reference point that future work can build upon.", "Also, note that while constructing expansion templates requires some effort and may initially seem like a limitation of our approach, we have shown that this effort can be small relative to the amount of training data that would need to be annotated otherwise.", "In this paper, we introduced and focused on column operations, an important challenge related to out-of-domain generalization for text-to-SQL parsing.", "We proposed two new evaluation benchmarks, one based on a new synthetic dataset and one based on a repartitioning of the SQUALL dataset, and showed that current state-of-the-art parsers significantly underperform when it comes to this form of generalization.", "We then proposed a simple way to incorporate prior domain knowledge into the parser via a new component called schema expansion that allows us to reduce the column operations challenge to column matching, an arguably easier challenge.", "We also introduced a schema pruning component allowing us to scale schema expansion, and showed that when paired together, these two components can boost the performance of existing text-to-SQL parsers by a significant amount (up to 13.8% relative accuracy gain / 5.1% absolute in our experiments).", "Through column expansion, we created a new table schema that is more friendly to downstream parsers.", "Our work uses heuristics-based schema expansion and works well when limited to columns that have specified types (e.g., scores or timespans), but our synthetic experiments suggest much larger potential on this problem.", "We hope this work can motivate future research on creating a parser-friendly table ontology.", "Future work could explore learning approaches that use models to automatically expand any table schema, for example, by constructing appropriate prompts that ask pre-trained language models to tackle it (Brown et al., 2020; Petroni et al., 2019).", "We thank the anonymous reviewers for their helpful comments, Jason Eisner for his detailed feedback and suggestions on an early draft of the paper, and Jacob Andreas, Ziyu Yao, Yuchen Zhang, and Sam Thomson for helpful discussions." ]
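To make the type-driven schema expansion concrete (the expansion sketch referenced above), here is a minimal Python sketch. The `Column` dataclass and the specific Accessor/Expression templates are our own illustrative stand-ins for the templates in the paper's Table 2, which is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Column:
    name: str  # name shown to the parser
    type: str  # e.g., "Number", "Score", "TimeSpan"
    expr: str  # SQL expression this (possibly synthetic) column stands for

def expand_schema(columns):
    """Type-driven schema expansion: for each typed column, emit derived
    columns via templates keyed only on the type, so that composite
    column operations reduce to plain column matching."""
    expanded = list(columns)
    for c in columns:
        if c.type == "TimeSpan":
            # Accessor templates: expose the nested subfields
            expanded.append(Column(f"{c.name} start", "Number", f'"{c.name}".1'))
            expanded.append(Column(f"{c.name} end", "Number", f'"{c.name}".2'))
            # Expression template: duration = end - start
            expanded.append(Column(f"{c.name} duration", "Number",
                                   f'"{c.name}".2 - "{c.name}".1'))
        elif c.type == "Score":
            expanded.append(Column(f"{c.name} first", "Number", f'"{c.name}".1'))
            expanded.append(Column(f"{c.name} second", "Number", f'"{c.name}".2'))
            expanded.append(Column(f"{c.name} difference", "Number",
                                   f'"{c.name}".2 - "{c.name}".1'))
    return expanded

# Example: a "Term" TimeSpan column yields synthetic columns
# "Term start" ("Term".1), "Term end" ("Term".2), and
# "Term duration" ("Term".2 - "Term".1), so a question about a term's
# duration can be answered by matching a single column name.
```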
[ "abstain", "result", "abstain", "result", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "objective", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "objective", "other", "other", "method", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "objective", "result", "objective", "method", "objective", "abstain", "other" ]
[ "Opinion summarization is the task of automatically generating summaries that encapsulate information from multiple user reviews.", "We present Semantic Autoencoder (SemAE) to perform extractive opinion summarization in an unsupervised manner.", "SemAE uses dictionary learning to implicitly capture semantic information from the review and learns a latent representation of each sentence over semantic units.", "A semantic unit is supposed to capture an abstract semantic concept.", "Our extractive summarization algorithm leverages the representations to identify representative opinions among hundreds of reviews.", "SemAE is also able to perform controllable summarization to generate aspect-specific summaries.", "We report strong performance on SPACE and AMAZON datasets, and perform experiments to investigate the functioning of our model.", "Our code is publicly available at https://github.com/brcsomnath/SemAE.", "Opinion summarization is the task of automatically generating digests for an entity (e.g. a product, a hotel, a service, etc.), from user opinions in online forums.", "Automatic opinion summaries enable faster comparison, search, and better consumer feedback understanding (Hu and Liu, 2004; Pang, 2008; Medhat et al., 2014).", "Although there has been significant progress towards summarization (Rush et al., 2015; Nallapati et al., 2016; Cheng and Lapata, 2016; See et al., 2017; Narayan et al., 2018; Liu et al., 2018), existing approaches rely on human-annotated reference summaries, which are scarce for opinion summarization.", "For opinion summarization, human annotators need to read hundreds of reviews per entity across different sources for writing a summary, which may not be feasible.", "This lack of labeled training data has prompted a series of works to leverage unsupervised or weakly-supervised techniques for opinion summarization (Mei et al., 2007; Titov and McDonald, 2008; Angelidis and Lapata, 2018a; Angelidis et al., 2021).", "Recent works in this direction have focused on performing opinion summarization in an abstractive setting (Coavoux et al., 2019; Isonuma et al., 2019; Brainskas et al., 2020; Amplayo et al., 2021b; Iso et al., 2021; Wang and Wan, 2021).", "Abstractive models are able to produce fluent summaries using novel phrases.", "However, they suffer from problems common in text generation like hallucination (Rohrbach et al., 2018), text degeneration (Holtzman et al., 2020), and topic drift (Sun et al., 2020).", "Also, these approaches have been evaluated on small scales (10 reviews per entity or fewer), which does not reveal their utility in the real world where there are hundreds of reviews per entity.", "To overcome these issues, another thread of works focuses on extractive opinion summarization, which creates summaries by selecting review sentences to reflect the popular opinions corresponding to an entity.", "A recently proposed extractive summarization approach is Quantized Transformer (QT) (Angelidis et al., 2021), which leverages vector quantization (van den Oord et al., 2017) for assigning texts to a latent representation that is supposed to capture a semantic sense.", "However, a text phrase can encapsulate multiple semantic senses, making this representation learning approach restrictive.", "Building on the framework introduced by QT, we introduce an unsupervised extractive model, Semantic Autoencoder (SemAE), which learns a representation of text over latent semantic units using dictionary learning (Dumitrescu and Irofti, 2018).", "Similar to QT, 
SemAE leverages Transformer (Vaswani et al., 2017) for sentence reconstruction to simultaneously learn latent semantic units and sentence representations.", "However, while QT assigns texts to a latent representation (codebook), SemAE models text as a combination of semantics and forms a distribution over latent units (dictionary).", "This allows sentence representations to capture fine-grained and diverse semantics.", "Unlike QT, which relies on the identification of aspect-specific head representations, we achieve controllable summarization by utilizing information-theoretic measures (such as relevance and redundancy) on sentence representations.", "Our sentence selection algorithm is more flexible and allows a broader spectrum of controllable summarization.", "We experimentally show strong performance on two opinion summarization datasets.", "Our main contributions are: We present Semantic Autoencoder (SemAE), which learns representations of sentences over latent semantic units.", "We introduce novel inference algorithms for general and controllable summarization utilizing information-theoretic measures.", "We show that SemAE outperforms previous methods using automatic and human evaluations.", "We perform analysis to understand how the learnt representations align with human semantics.", "Unsupervised opinion summarization can be conducted either abstractively or extractively.", "Abstractive approaches aim to summarize the opinion text using novel phrases.", "Traditional statistical approaches create abstractive summaries using graphical paths (Ganesan et al., 2010) or hand-written templates (Di Fabbrizio et al., 2014).", "Recent neural approaches leverage the encoder-decoder architecture to aggregate information from multiple reviews and generate summaries accordingly (Chu and Liu, 2019; Bražinskas et al., 2020; Iso et al., 2021; Wang and Wan, 2021).", "In contrast to abstractive approaches, extractive approaches rank and select a subset of salient sentences from reviews to form a concise summary (Kim et al., 2011).", "Saliency computation has been explored using traditional frequency-based approaches (Nenkova and Vanderwende, 2005), similarity with the centroid in the representation space (Radev et al., 2004), and lexical similarity with all sentences in a graph-based representation (Erkan and Radev, 2004).", "Weakly supervised approaches (Angelidis and Lapata, 2018a; Zhao and Chaturvedi, 2020) extract opinions based on their aspect specificity and the nature of their sentiment polarity.", "Our approach is most closely related to QT, discussed in Section 1. It is also similar to neural topic model-based approaches (Iyyer et al., 2016; He et al., 2017; Angelidis and Lapata, 2018a) that use a variant of dictionary learning (Elad and Aharon, 2006; Olshausen and Field, 1997) to represent text as a combination of specific semantics (e.g., aspects, relationships, etc.).", "In contrast to these models, where text from the same topics is trained to have similar representations using a max-margin loss, SemAE uses an autoencoder setup to capture diverse latent semantics.", "We follow the task setup of Angelidis et al. (2021), where given a set of entities (e.g., hotels), a review set $R_e = \{r_1, r_2, \dots\}$ is provided for each entity $e$, and each review $r_i$ is a sequence of sentences $\{s_1, s_2, \dots\}$.", "The review set $R_e$ covers a range of aspects $A = \{a_1, a_2, \dots\}$ relating to the domain (e.g., 
service, location for hotels).", "We denote $S_e$ as the set of sentences from all reviews for an entity $e$.", "SemAE is evaluated on two types of extractive opinion summarization introduced by Angelidis et al. (2021):", "(a) general summarization, which involves selecting a subset of sentences $O_e \subseteq S_e$ such that it best represents the reviews in $R_e$, and", "(b) aspect summarization, where the generated summary $O_e^{(a)} \subseteq S_e$ focuses on a specific aspect $a \in A$.", "The intuition behind Semantic Autoencoder is that instead of representing text as a single latent semantic unit, we represent text as a distribution over latent semantic units using dictionary learning.", "Learning semantic representations over a common dictionary makes them structurally aligned, enabling comparison of sentences using information-theoretic measures.", "Semantic Autoencoder consists of three stages:", "(i) sentence encoding: an input sentence $s$ is converted into a multi-head representation ($H$ heads) $\{s_h\}_{h=1}^{H}$ using a Transformer encoder;", "(ii) reconstruction: a latent representation of each head vector $s_h$ is formed over elements of the dictionary $D \in \mathbb{R}^{K \times d}$ to produce reconstructed representations $z = \{z_h\}_{h=1}^{H}$; and", "(iii) sentence decoding: a Transformer-based decoder takes as input the reconstructed representations $z$ to produce the output sentence $\hat{s}$.", "SemAE is trained on the sentence reconstruction task.", "The overall workflow of SemAE is shown in Figure 1. (Figure 1: An example workflow of SemAE.)", "We follow the setup of QT (Angelidis et al., 2021) for sentence encoding.", "Each sentence $s$ starts with a special token [SNT], which is fed to a Transformer-based encoder.", "We only consider the final-layer representation of the [SNT] token, $s_{snt} \in \mathbb{R}^d$.", "The sentence representation $s_{snt}$ is split into $H$ contiguous vectors $\{s'_h\}_{h=1}^{H}$, where $s'_h \in \mathbb{R}^{d/H}$.", "A multi-head representation is formed by passing $s'_h$ through a layer-normalization layer: $s_h = \mathrm{LN}(s'_h W^T + b)$ (1), where $W \in \mathbb{R}^{d \times d/H}$ and $b \in \mathbb{R}^d$ are trainable parameters and $s_h \in \mathbb{R}^d$ is the $h$-th head representation.", "For each $s_h$, we obtain a latent representation $\alpha_h$ over the dictionary $D$ by reconstructing the encoded sentence representation $s_h$ as shown below: $z_h = \alpha_h D$, $\alpha_h = \mathrm{softmax}(s_h D^T)$ (2), where the reconstructed vector $z_h \in \mathbb{R}^d$ and the latent representation $\alpha_h \in \mathbb{R}^K$.", "We hypothesize that the dictionary $D$ captures the representations of latent semantic units, and $\alpha_h$ captures the degree to which the text encapsulates a certain semantic.", "The vectors formed, $z = \{z_h\}_{h=1}^{H}$, are forwarded to the decoder for sentence reconstruction.", "The dictionary $D$ and $s_h$ are updated simultaneously using backpropagation.", "For summarization (Section 5), different from QT, we consider $\alpha_h$ (not $z_h$) as the sentence representation.", "We employ a Transformer-based decoder that takes as input the reconstructed representations $z = \{z_h\}_{h=1}^{H}$.", "The $\mathrm{MultiHead}(z, z, t)$ attention module in the decoder takes $z$ as key and value, and the target tokens $t$ as the query.", "The reconstructed sentence is generated from the decoder as $\hat{s} = \mathrm{Decoder}(z, t)$.", "As our goal is sentence reconstruction, we set the target tokens to be the same as the input sentence $s$.", "Prior work (Angelidis et al., 2021) has also used a similar Transformer-based decoder for sentence reconstruction, but it attends directly over quantized head vectors formed using 
codebook elements.", "A sentence can capture only a small number of semantic senses.", "We ensure this by enforcing sparsity constraints on the representations $\alpha_h$, so that $z_h$ is a combination of only a few semantic units.", "The encoder, reconstructor, and decoder are trained together to minimize the loss function: $\mathcal{L} = \mathcal{L}_{CE}(s, \hat{s}) + \lambda_1 \sum_h |\alpha_h| + \lambda_2 \sum_h \mathcal{H}(\alpha_h)$ (3), where $\mathcal{L}_{CE}$ is the reconstruction cross-entropy loss of the decoder, and to ensure sparsity of $\alpha_h$ we penalize its L1-norm ($|\alpha_h|$) and its entropy $\mathcal{H}(\alpha_h)$.", "We leverage the latent representations $\alpha_h$ generated by SemAE to perform opinion summarization.", "5.1 General Summarization: To obtain the general summary of an entity, we first compute a mean representation of all the review sentences in $S_e$, which represents the aggregate distribution over semantic units.", "Thereafter, the general summary is obtained as the collection of sentences that resemble the mean distribution.", "Mathematically, every sentence $s$ is associated with a representation over dictionary elements $\alpha_s = [\alpha_1, \dots, \alpha_H]$, where $\alpha_s \in \mathbb{R}^{H \times K}$.", "We form the mean representation of the review sentences $S_e$ for an entity over dictionary elements as: $\bar{\mu} = \frac{1}{|S_e|} \sum_{s \in S_e} \alpha_s$ (4), where $\alpha_s$ is the representation for sentence $s \in S_e$.", "For general summarization, we compute a relevance score $R(\cdot)$ for each sentence $s$ based on its similarity with the mean representation $\bar{\mu}$: $R(\alpha_s) = \phi(\bar{\mu}, \alpha_s) = -\sum_h \mathrm{KL}(\bar{\mu}^h, \alpha_s^h)$ (5). (Footnote 1: We experimented with different variations of the sentence selection scheme using $\alpha_h$ in Appendix A.4.)", "Here $\alpha_s^h$ is the latent representation of sentence $s$ for the $h$-th head.", "$\phi(x, y)$ denotes the similarity between two representations $x$ and $y$.", "It is implemented as the negation of the sum of KL-divergences between head representations.", "We also experimented with other divergence metrics and observed similar summarization performance (Appendix A.3).", "We rank sentences in descending order of $R(\cdot)$ and select the top $N$ (a constant hyperparameter, $N < |S_e|$) sentences as the summary $O_e$ (shown in Figure 2).", "The extracted summary is a concatenation of the text from the $N$ selected input sentences (Input $(s)$ in Figure 1).", "However, modeling relevance only using $\phi(\bar{\mu}, \cdot)$ results in the selection of similar sentences.", "We overcome this by designing variations of our system that have additional information-theoretic constraints.", "(a) Redundancy: We introduce diversity in the generated summary by penalizing sentences that have a high similarity value with already selected sentences.", "This is achieved by adding a redundancy term to the relevance score: $R(\alpha_s, O_e) = \phi(\bar{\mu}, \alpha_s) - \max_{s' \in O_e} \phi(\alpha_{s'}, \alpha_s)$ (6), where $O_e$ is the set of sentences selected so far for the summary.", "The selection routine proceeds in a greedy fashion by choosing $s_0 = \arg\max_{s \in S_e} \phi(\bar{\mu}, \alpha_s)$ when $O_e = \emptyset$.", "(b) Aspect-awareness: Another drawback of sentence selection using $\phi(\bar{\mu}, \cdot)$ is that the summary frequently switches context among different aspects (example shown in Table 7).", "To mitigate this issue, we identify the aspect of a review sentence using occurrences of aspect-denoting keywords provided in the dataset (Section 5.2).", "We then cluster the sentences into aspect-specific buckets $\{S_e^{(a_1)}, S_e^{(a_2)}, \dots
\}$ and rank sentences within each bucket.", "We ignore sentences that are not part of any bucket.", "We select sentences using two different strategies: We iterate over the sentence buckets $\{S_e^{(a_i)}\}$ and select the first $m$ sentences ranked according to $R(\alpha_s)$ from each bucket.", "We prevent the selection of similar sentences from a bucket by introducing the redundancy term.", "We iterate over individual buckets and select the first $m$ sentences ranked according to their relevance $R(\alpha_s, O_e^{(a)})$ (Equation 6).", "SemAE can perform aspect summarization without needing additional training.", "For this, we require a small set of keywords to identify sentences that talk about an aspect.", "For example, the food aspect is captured using keywords such as breakfast, buffet, etc.", "For a given aspect $a$, let the keyword set be $Q_a = \{w_1, w_2, \dots\}$.", "We use $Q_a$ to identify a set of sentences $S_e^{(a)}$ belonging to aspect $a$ for each entity $e$, from a held-out dev set $S_{dev}$.", "Similar to general summarization, we proceed by computing the mean representation of the sentences $S_e^{(a)}$ belonging to the aspect $a$: $\bar{\mu}^{(a)} = \frac{1}{|S_e^{(a)}|} \sum_{s \in S_e^{(a)}} \alpha_s$ (7). We then select the sentences most similar to the mean representation as the summary.", "(a) Informativeness: Sentences selected for aspect summarization should talk about the aspect, not general information.", "We model informativeness (Peyrard, 2019) by ensuring that a selected sentence representation $\alpha_s$ resembles the aspect mean $\bar{\mu}^{(a)}$ but diverges from the overall representation mean $\bar{\mu}$, for a given entity $e$.", "For an aspect $a$, we iterate over the sentences in $S_e^{(a)}$ and compute the relevance score for a sentence $s$ as follows: $R_a(\alpha_s) = \phi(\bar{\mu}^{(a)}, \alpha_s) - \phi(\bar{\mu}, \alpha_s)$ (8). We rank sentences $s \in S_e$ according to their aspect-specific relevance score $R_a(\cdot)$ and select the first $N$ sentences as the aspect summary $O_e^{(a)}$. (Footnote 2: We experimented with incorporating the informativeness term in general summarization as well but did not find it useful; see Appendix A.3 for more details.)", "6 Experimental Setup: In this section, we discuss the experimental setup, results, and analysis.", "The dataset statistics are reported in Table 1. Test sets of both datasets contain three human-written general summaries per entity.", "The SPACE corpus was created in a two-step process of sentence selection and then summarization of the selected sentences by annotators (further details in Appendix A.2).", "The SPACE dataset also provides human-written summaries for six different aspects of hotels: building, cleanliness, food, location, rooms, and service.", "We build on the implementation framework introduced by Angelidis et al. 
(2021) for our experiments.", "We used a 3-layer Transformer with 4 attention heads as the encoder and decoder.", "The input and hidden dimensions are 320.", "The encoder and decoder for SemAE were trained for 4 warm-up epochs before the dictionary-learning-based reconstruction component was introduced.", "We split the encoded vector into $H = 8$ head representations.", "We have $K = 1024$ dictionary elements, each with dimension $d = 320$.", "The dictionary elements are initialized using $k$-means clustering of review sentence representations.", "All hyperparameters were tuned on the development set (see Appendix A.1 for more details).", "We report ROUGE F-scores that compare the overlap of the generated text with the gold summaries.", "For the SPACE dataset, we measure how well general summaries cover different aspects by computing the mean ROUGE-L score with the gold aspect summaries (denoted by $\mathrm{RL_{ASP}}$).", "We also compute a perplexity (PPL) score to evaluate the readability of summaries.", "Perplexity is computed using the cross-entropy loss from a BERT-base model.", "We measure the aspect coverage of a system by computing the average number of distinct aspects $N_{ASP}$ in the generated summaries.", "Lastly, to evaluate repetition in summaries, we compute the percentage of distinct $n$-grams ($n = 2$).", "Following prior work (Angelidis et al., 2021), we compare SemAE with three types of systems:", "(a) Best Review systems: We report the performance of the Centroid method, where reviews are encoded using BERT or SentiNeuron (Radford et al., 2017), and the review most similar to the mean representation is selected.", "(b) Abstractive systems: We report the performance of the Opinosis (Ganesan et al., 2010) (a graph-based approach), MeanSum (Chu and Liu, 2019), CopyCat (Bražinskas et al., 2020), and AceSum (Amplayo et al., 2021a) summarization models.", "(c) Extractive systems: We report the performance of LexRank (Erkan and Radev, 2004), where sentences were encoded using BERT, SentiNeuron, or tf-idf vectors.", "We also report the performance achieved by selecting review sentences randomly.", "General Summarization: We present the results of general summarization on the SPACE dataset in Table 2. SemAE and its variants show strong improvements over the previous state of the art, QT, and other baselines across all ROUGE metrics.", "They also outperform abstractive systems (like CopyCat and MeanSum) by a large margin, which shows that SemAE can effectively select relevant sentences from a large pool of reviews.", "All variants of SemAE outperform other models on the $\mathrm{RL_{ASP}}$ metric, showing that general summaries from SemAE cover aspects better than the baselines.", "We compiled some baseline results from Angelidis et al. (2021).", "We further evaluate the quality of the summaries for all variations of SemAE, along with our strongest baseline QT, using other automatic metrics in Table 3. 
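To make the reconstruction step and training objective of the SemAE record above (Equations 1-3) concrete, here is a minimal PyTorch sketch. The shapes follow the stated hyperparameters (H = 8, K = 1024, d = 320), but the sparsity weights lambda1/lambda2 are assumed values, and the encoder and decoder are stubbed out; this is not the released implementation.

```python
import torch
import torch.nn.functional as F

H, K, d = 8, 1024, 320                     # heads, dictionary size, hidden dim
lambda1, lambda2 = 1e-3, 1e-3              # sparsity weights (assumed values)

D = torch.randn(K, d, requires_grad=True)  # dictionary of latent semantic units
s = torch.randn(H, d)                      # head representations s_h from Eq. 1

alpha = F.softmax(s @ D.t(), dim=-1)       # Eq. 2: alpha_h = softmax(s_h D^T)
z = alpha @ D                              # Eq. 2: z_h = alpha_h D, fed to the decoder

# Eq. 3: decoder cross-entropy plus L1 and entropy penalties on alpha.
reconstruction_ce = torch.tensor(0.0)      # stand-in for the decoder's CE loss
l1 = alpha.abs().sum()
entropy = -(alpha * (alpha + 1e-12).log()).sum()
loss = reconstruction_ce + lambda1 * l1 + lambda2 * entropy
loss.backward()                            # updates D (and, in full training, the encoder)
```

At inference time, the record notes that alpha (not z) serves as the sentence representation for summarization.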
The first row in Table 3 reports the performance of QT, which achieves the highest distinct $n$-gram score but has a poor perplexity score.", "This shows that QT generates summaries with diverse text, but they are not coherent.", "SemAE achieves the best perplexity score (second row in Table 3) but produces less diverse text (lowest distinct $n$-gram score).", "The third row in Table 3 reports the performance of SemAE with the redundancy term.", "Comparing rows 2 and 3 of Table 3, we observe that the summaries from SemAE (w/ redundancy) have more distinct $n$-grams (less repetition), while falling behind in perplexity and aspect coverage.", "Table 2 (SPACE, general summarization; R1 / R2 / RL / RL-ASP): Best Review: Centroid-SENTI 27.36 / 5.81 / 15.15 / 8.77; Centroid-BERT 31.33 / 5.78 / 16.54 / 9.35; Oracle-SENTI 32.14 / 7.52 / 17.43 / 9.29; Oracle-BERT 33.21 / 8.33 / 18.02 / 9.67. Abstractive: Opinosis (Ganesan et al.) 28.76 / 4.57 / 15.96 / 11.68; MeanSum (Chu and Liu) 34.95 / 7.49 / 19.92 / 14.52; CopyCat (Bražinskas et al.) 36.66 / 8.87 / 20.90 / 14.15; AceSum (Amplayo et al.) 40.37 / 11.51 / 23.23 / -. Extractive: Random 26.24 / 3.58 / 14.72 / 11.53; LexRank-TF-IDF 29.85 / 5.87 / 17.56 / 11.84; LexRank-SENTI 30.56 / 4.75 / 17.19 / 12.11; LexRank-BERT 31.41 / 5.05 / 18.12 / 13.29; AceSum-EXT (Amplayo et al.) 35.50 / 7.82 / 20.09 / -; QT (Angelidis et al.) 38.66 / 10.22 / 21.90 / 14.26; SemAE 42.48 / 13.48 / 26.40 / 15.23; SemAE (w/ redun.) [row truncated in the source].", "Performance results for aspect-aware variants of SemAE are reported in the last two rows of Table 3. We observe that iteratively covering aspects reduces repetition (increase in distinct-n score).", "As expected, the mean aspect coverage ($\mathbb{E}[N_{ASP}]$) improves in aspect-aware SemAE variants.", "However, a slight drop in aspect coverage is observed when the redundancy term is introduced (last row in Table 3).", "We also observe an increase in perplexity for aspect-aware variants, which may be caused by multiple changes in aspect context.", "Overall, SemAE (w/ aspect + redundancy) is able to produce diverse text with a high aspect coverage and a decent perplexity score, appearing to be the best-performing model.", "On the AMAZON dataset, extractive systems achieve similar performance, with SemAE achieving the best performance among all extractive summarization systems.", "SemAE falls short of only abstractive summarization systems, which have the advantage of generating novel phrases not present in the input reviews.", "Also, while SemAE beats most baselines on the AMAZON dataset, the performance gain is not as large as on the SPACE dataset.", "We believe this is because the number of reviews per entity in AMAZON (8) is much lower compared to SPACE (100).", "As SemAE is dependent on the mean representation $\bar{\mu}$, having more reviews helps in capturing the popular opinion distribution accurately. (Footnote 4: We observed a drop in performance when the number of reviews per entity in the SPACE dataset was reduced; experimental details in Section 6.6.)", "For practical purposes, opinion summarization systems are useful when there are hundreds or more reviews per entity.", "A larger improvement on SPACE shows the efficacy of SemAE in the real world.", "Aspect Summarization: For aspect summarization, we compare against four unsupervised systems on the SPACE dataset: MeanSum, CopyCat, LexRank, and QT.", "For the general summarizers MeanSum, CopyCat, and LexRank, sentence embeddings retrieved from BERT (Devlin et al., 2019) were clustered using $k$-means and each cluster $S_e^{(a)}$ was assigned an aspect $a$ based on the frequency of aspect-denoting keywords in the cluster's sentences.", "The models then produced summaries for each aspect $a$ given the input set $S_e^{(a)}$.", "All models, including SemAE, use the same aspect-denoting keywords.
", "Table 5 (SPACE, aspect summarization; columns: Building, Cleanliness, Food, Location, Rooms, Service, R1, R2, RL): MeanSum (Chu and Liu) 13.25, 19.24, 13.01, 18.41, 17.81, 20.40, 23.24, 3.72, 17.02; CopyCat (Bražinskas et al.) 17.10, 15.90, 14.53, 20.31, 17.30, 20.05, 24.95, 4.82, 17.53; LexRank-BERT (Erkan and Radev) 14.73, 25.10, 17.56, 23.28, 18.24, 26.01, 27.72, 7.54, 20.82; QT (Angelidis et al.) 16.45, 25.12, 17.79, 23.63, 21.61, 26.07, 28.95, 8.34, 21.77; SemAE 20.04, 23.72, 23.57, 25.33, 25.29, 26.90, 31.24, 10.43, 24.14; SemAE w/o informativeness 18.38, 24.08, 19.03, 23.32, 23.89, 25.05, 27.85, 8.61, 22.29.", "Evaluation results on SPACE are reported in Table 5. SemAE outperforms the state-of-the-art QT in all aspects except cleanliness, where the performance is comparable.", "We observe that adding the informativeness term (the $-\phi(\bar{\mu}, \alpha_s)$ term in Equation 8) helps improve the specificity of the aspect summaries, thereby boosting performance.", "SemAE also shows significant gains in terms of average ROUGE-1/2 and ROUGE-L across different aspects.", "Human Evaluation: We performed human evaluations for the general and aspect summaries.", "We evaluated general summaries from QT, the best-performing variant SemAE (w/ aspect + redundancy), and the gold summary.", "Summaries were judged by 3 human annotators on three criteria: informativeness, coherence, and non-redundancy.", "The judges were presented summaries in a pairwise manner and asked to select which one was better/worse/similar.", "The scores (-100 to +100) were computed using Best-Worst Scaling (Louviere et al., 2015).", "The first half of Table 6 reports the evaluation results, where we observe that SemAE (w/ aspect + redundancy) outperforms our strongest baseline, QT, for all criteria (statistical significance information provided in the caption of Table 6).", "However, summaries generated from both systems", "(Figure 3: Visualization of UMAP projections of dictionary elements.)", "We also evaluated aspect summaries generated by SemAE and QT in a similar manner.", "Aspect summaries were judged based on two criteria: aspect informativeness (usefulness of opinions for a specific aspect, consistent with the reference) and aspect specificity (how specific the summary is for an aspect without considering other factors).", "The bottom half of Table 6 reports the results for aspect summaries.", "We observe that both QT and SemAE produce aspect-specific summaries.", "However, SemAE shows a statistically significant improvement over QT in aspect informativeness.", "Latent Dictionary Interpretation.", "In this section, we investigate the semantic meanings learnt by individual dictionary elements $D_k$.", "We visualized the UMAP projection (McInnes et al., 2018) of dictionary element representations (shown in Figure 3).", "For different runs of SemAE, we found that the dictionary representations converged into clusters as shown in Figure 3 (elements are color-coded according to their cluster identities as assigned by the $k$-means algorithm with $k = 12$).", "We hypothesize that the clusters should capture certain semantic meanings.", "We explore this hypothesis by identifying sentences sharing similar representations with the cluster mean representations $\{\mu_1, \dots
, \mu_K\}$ for each cluster.", "For each head $h$ in the encoder (Section 4.1), we compute the cosine similarity of sentences with cluster means.", "Table 8 shows some examples of sentences having the highest similarity with a cluster mean $\mu_k$ for a head representation $h$.", "We observe that in most cases sentences closest to a cluster mean share a similar semantic meaning.", "For hotel reviews, we observe that sentences often talk about a specific aspect like service, food, and rooms, as shown for $(h, k)$ configurations (3, 5), (2, 8), and (5, 8) in Table 8. The clusters sometimes capture certain coarse semantics like the presence of a word or phrase (e.g., config. (0, 10) in Table 8).", "It can also capture high-level semantics like the experience of a customer (e.g., config. (6, 0)).", "It was interesting to observe that a single cluster can capture different semantics for distinct heads (cluster 8 in configurations (2, 8) and (5, 8)).", "Qualitative Examples.", "Table 7 shows summaries generated by SemAE and its variants for the SPACE dataset.", "While the summary generated by SemAE talks about location, staff & service multiple times (shown as highlighted text), the summary from SemAE (w/ redundancy) does not have that repetition.", "Also, the summary generated by SemAE switches context frequently.", "For example, the aspect of the first three sentences changes from service to location to service.", "We observe that compared to SemAE, both aspect-aware SemAE variants generate summaries without abrupt context switches.", "The summary generated by SemAE (w/ aspect) covers aspects like service, hotel, food, and rooms sequentially, but sentences referring to an aspect are quite similar.", "SemAE (w/ aspect + redundancy) overcomes this shortcoming and introduces diversity among the aspect-specific sentences.", "Training Data Efficiency.", "We analyze the performance of SemAE, QT, and CopyCat for general summarization (ROUGE-1) on SPACE for varying training data fractions in Table 9. 
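As a companion to the selection procedure in Section 5.1 of this record (Equations 4-6), here is a small self-contained NumPy sketch of greedy selection with the redundancy penalty. The toy Dirichlet data and all names are illustrative assumptions, not the released code.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    return np.sum(p * np.log((p + eps) / (q + eps)))

def phi(x, y):
    """Eq. 5 similarity: negated sum of head-wise KL divergences."""
    return -sum(kl(xh, yh) for xh, yh in zip(x, y))

def select_summary(alphas, n):
    """Greedy selection with the redundancy penalty of Eq. 6."""
    mu = np.mean(alphas, axis=0)                # Eq. 4: mean representation
    selected = []
    while len(selected) < n:
        best, best_score = None, -np.inf
        for i, a in enumerate(alphas):
            if i in selected:
                continue
            score = phi(mu, a)                  # Eq. 5 relevance
            if selected:                        # Eq. 6 redundancy term
                score -= max(phi(alphas[j], a) for j in selected)
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

H, K = 8, 1024
alphas = np.random.dirichlet(np.ones(K), size=(50, H))  # 50 toy sentences
print(select_summary(alphas, n=3))              # indices of the extracted summary
```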
We observe that both QT and SemAE perform well with low training data.", "However, SemAE outperforms QT in all low-resource settings.", "SemAE (with 10% data) yields significant ROUGE-1 improvements over QT (with access to 100% data).", "Impact of number of reviews.", "We investigate whether SemAE's performance gain on SPACE is due to the larger number of reviews available (reviews per entity: AMAZON 8, SPACE 100).", "Specifically, we perform ablation experiments by reducing the number of reviews per entity in the SPACE dataset.", "We remove user reviews with low relevance scores (the relevance score of a review is the average $R(\cdot)$ of its sentences).", "Table 10 reports the performance of SemAE with different numbers of reviews per entity in the test set.", "We observe a gradual decline in ROUGE-1 score when the number of reviews per entity is reduced, which shows that having more reviews per entity helps extractive summarization.", "Additional Controllable Summarization.", "We showcase that SemAE can perform different forms of controllable summarization.", "Specifically, we perform sentiment-based summarization using a small number (10) of seed sentences belonging to the positive, negative, and neutral sentiment classes.", "Seed sentences were annotated using the rule-based system VADER (Hutto and Gilbert, 2014).", "An example of sentiment-based summarization is shown in Table 11.", "We observe SemAE is able to generate summaries aligning with the seed sentiments.", "We also perform multi-aspect summarization using SemAE, by controlling the aspect of the selected sentences.", "Table 12 showcases an example of multi-aspect summarization.", "An interesting observation is that SemAE is able to select sentences that have multiple aspects (shown in blue) rather than independent sentences from different aspects.", "These experiments show that SemAE is able to capture and leverage granular semantics for summarization.", "In Appendix A.5, we perform additional analyses to investigate head-wise behavior, the efficacy of sparsity constraints, and dictionary evolution, and to qualitatively compare SemAE with baselines (QT and CopyCat).", "We proposed a novel opinion summarization approach using Semantic Autoencoder, which encodes text as a representation over latent semantic units.", "We perform extractive summarization by selecting sentences using information-theoretic measures over representations obtained from SemAE.", "Our experiments reveal that dictionary element representations from SemAE form clusters, which capture distinct semantics.", "Our model provides fine-grained control to users to model surface-level text attributes (like redundancy, informativeness, etc.) in the representation space.", "SemAE outperforms existing extractive opinion summarization methods on the SPACE and AMAZON datasets.", "Finally, SemAE representations can be leveraged to explore different forms of control on summary generation (e.g., 
multi-aspect summarization) using our inference framework.", "Future work can focus on better representation learning systems to handle use cases with noisy or sparse textual data.", "This work was supported in part by NSF grants IIS2112635 and IIS2047232.", "We also thank the anonymous reviewers for their thoughtful and constructive comments.", "We do not foresee any ethical concerns from the technology presented in this work.", "We used publicly available datasets and did not annotate any data manually.", "The datasets used contain reviews in English.", "Human evaluations for summarization were performed on the Amazon Mechanical Turk (AMT) platform.", "Human judges were compensated at a wage rate of $15 per hour." ]
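The aspect-summarization scoring of Equations 7-8 in this record can be sketched similarly. The KL-based similarity mirrors Equation 5 above, while the keyword bucketing, toy inputs, and all names are assumptions for illustration only.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    return np.sum(p * np.log((p + eps) / (q + eps)))

def phi(x, y):
    """Eq. 5 similarity: negated sum of head-wise KL divergences."""
    return -sum(kl(xh, yh) for xh, yh in zip(x, y))

def summarize_aspect(sentences, alphas, keywords, n=3):
    """Eqs. 7-8: bucket by aspect keywords, score by aspect relevance."""
    idx = [i for i, s in enumerate(sentences)
           if any(w in s.lower() for w in keywords)]
    if not idx:
        return []
    mu_aspect = np.mean([alphas[i] for i in idx], axis=0)   # Eq. 7: aspect mean
    mu_all = np.mean(alphas, axis=0)                        # overall mean
    # Eq. 8: reward aspect-mean similarity, penalize overall-mean similarity.
    relevance = {i: phi(mu_aspect, alphas[i]) - phi(mu_all, alphas[i]) for i in idx}
    ranked = sorted(idx, key=lambda i: -relevance[i])
    return [sentences[i] for i in ranked[:n]]

H, K = 8, 64
sentences = ["the breakfast buffet was great", "nice pool", "buffet options galore"]
alphas = np.random.dirichlet(np.ones(K), size=(len(sentences), H))
print(summarize_aspect(sentences, alphas, keywords=["breakfast", "buffet"], n=2))
```

The informativeness subtraction is what keeps generic, entity-wide praise out of the aspect summary, per the design choice described in the record.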
[ "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "objective", "objective", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "method", "abstain", "objective", "abstain", "other", "other", "method", "method", "abstain", "abstain", "abstain" ]
[ "Complex question answering often requires finding a reasoning chain that consists of multiple evidence pieces.", "Current approaches incorporate the strengths of structured knowledge and unstructured text, assuming text corpora is semi-structured.", "Building on dense retrieval methods, we propose a new multi-step retrieval approach (BEAMDR) that iteratively forms an evidence chain through beam search in dense representations.", "When evaluated on multi-hop question answering, BEAMDR is competitive to state-of-the-art systems, without using any semi-structured information.", "Through query composition in dense space, BEAMDR captures the implicit relationships between evidence in the reasoning chain.", "The code is available at https://github.com/ henryzhao5852/BeamDR .", "Answering complex questions requires combining knowledge pieces through multiple steps into an evidence chain (Ralph Hefferline Columbia University in Figure 1).", "When the available knowledge sources are graphs or databases, constructing chains can use the sources' inherent structure.", "However, when the information needs to be pulled from unstructured text (which often has better coverage), standard information retrieval ( IR ) approaches only go one hop: from a query to a single passage.", "Recent approaches (Dhingra et al., 2020; Zhao et al., 2020a,b; Asai et al., 2020, inter alia ) try to achieve the best of both worlds: use the unstructured text of Wikipedia with its structured hyperlinks.", "While they show promise on benchmarks, it's difficult to extend them beyond aca-demic testbeds because real-world datasets often lack this structure.", "For example, medical records lack links between reports.", "Dense retrieval (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020, inter alia ) provides a Question P1 P2 P1 QQ u e r y C o m p o s i t i o n First Step Second Step Question: Ralph Hefferline was a psychology professor at a university that is located in what city?", "We evaluate Beam D ense R etrieval (BEAMDR) on HOTPOTQA (Yang et al., 2018), a multihop question answering benchmark.", "promising path to overcome this limitation.", "It encodes the query and evidence (passage) into dense vectors and matches them in the embedding space.", "In addition to its efficiencythanks to maximum inner-product search ( MIPS )Xiong et al. 
promising path to overcome this limitation.", "(Figure 1: Query composition across two retrieval steps for the question \"Ralph Hefferline was a psychology professor at a university that is located in what city?\")", "We evaluate Beam Dense Retrieval (BEAMDR) on HOTPOTQA (Yang et al., 2018), a multi-hop question answering benchmark.", "It encodes the query and evidence (passage) into dense vectors and matches them in the embedding space.", "In addition to its efficiency (thanks to maximum inner-product search, MIPS), Xiong et al. (2021a) show that dense retrieval rivals BERT (Devlin et al., 2019)-based (sparse) retrieve-then-rerank IR pipelines on single-step retrieval.", "Unlike traditional term-based retrieval, fully learnable dense encodings provide flexibility for different tasks.", "This paper investigates a natural question: can we build a retrieval system to find an evidence chain on unstructured text corpora?", "We propose a new multi-step dense retrieval method to model the implicit relationships between evidence pieces.", "We use beam search (Section 2) in the dense space to find and cache the most relevant candidate chains and iteratively compose the query by appending the retrieval history.", "We improve the retrieval by encouraging the representation to discriminate hard negative evidence chains from the correct chains, which are refreshed by the model.", "When retrieving evidence chains directly from the corpus (full retrieval), BEAMDR is competitive with state-of-the-art cascade reranking systems that use Wikipedia links.", "Combined with standard reranking and answer span extraction modules, the gain from full retrieval propagates to finding answers (Section 3).", "By iteratively composing the query representation, BEAMDR captures the hidden semantic relationships in the evidence (Section 4).", "Unlike classic retrieval techniques, dense retrieval methods match distributed text representations (Bengio et al., 2013) rather than sparse vectors (Salton, 1968).", "With encoders (e.g., BERT) to embed query $q$ and passage $p$ into dense vectors $E_Q(q)$ and $E_P(p)$, the relevance score $f$ is computed by a similarity function $\mathrm{sim}(\cdot)$ (e.g., dot product) over the two vector representations: $f(q, p) = \mathrm{sim}(E_Q(q), E_P(p))$.", "After encoding passage vectors offline, we can efficiently retrieve passages through approximate nearest-neighbor search over the maximum inner product with the query, i.e., MIPS (Shrivastava and Li, 2014; Johnson et al., 2017).", "We focus on finding an evidence chain from an unstructured text corpus for a given question, often the hardest part of complex question answering.", "We formulate it as a multi-step retrieval problem.", "Formally, given a question $q$ and a corpus $C$, the task is to form an ordered evidence chain $p_1 \dots p_n$ from $C$, with each evidence piece a passage.", "We focus on the supervised setting, where the labeled evidence set is given during training (but not during testing).", "Finding an evidence chain from the corpus is challenging because:", "1) passages that do not share enough words are hard to retrieve (e.g., in Figure 1, the evidence Columbia University);", "2) if you miss one evidence piece, you may err on all that come after.", "We first introduce scoring a single evidence chain, then finding the top $k$ chains with beam search, and finally training BEAMDR.", "Evidence Chain Scoring: The score $S_n$ of an evidence chain $p_1, \dots, p_n$ is the product of the (normalized) relevance scores of the individual evidence pieces.", "At each retrieval step $t$, to incorporate the information from both the question and the retrieval history, we compose a new query $q_t$ by appending the tokens of the retrieved chain $p_1, \dots, p_{t-1}$ to the query $q$ ($q_t = [q; p_1; \dots
; p_{t-1}]$), we use MIPS to find a relevant evidence piece $p_t$ from the corpus and update the evidence chain score $S_t$ by multiplying in the current step $t$'s relevance score: $S_t = f(q_t, p_t) \cdot S_{t-1}$.", "Beam Search in Dense Space: Since enumerating all evidence chains is computationally infeasible, we instead maintain an evidence cache.", "In the structured search literature this is called a beam: the $k$ best-scoring candidate chains we have found thus far.", "We select evidence chains with beam search in dense space.", "At step $t$, we enumerate each candidate chain $j$ in the beam, $p_{j,1} \dots p_{j,t-1}$, score the top $k$ chains, and update the beam.", "After $n$ steps, the $k$ highest-scoring evidence chains of length $n$ are finally retrieved.", "Training BEAMDR: The goal of training is to learn embedding functions that differentiate positive (relevant) and negative evidence chains.", "Since the evidence pieces are unordered, we use heuristics to infer the order of evidence chains.", "A negative chain has at least one evidence piece that is not in the gold evidence set.", "For each step $t$, the input is the query $q$, a positive chain $P^+_t = p^+_1, \dots, p^+_t$, and $m$ sampled negative chains $P^-_{j,t} = p^-_1, \dots, p^-_t$.", "We update the negative log-likelihood (NLL) loss: $L(q, P^+, P^-_1, \dots, P^-_m) = -\sum_t \log \frac{e^{f([q; P^+_{t-1}], p^+_t)}}{e^{f([q; P^+_{t-1}], p^+_t)} + \sum_{j=1}^{m} e^{f([q; P^-_{j,t-1}], p^-_{j,t})}}$ (2).", "Rather than using local in-batch or term-matching negative samples, like Guu et al. (2020) we select negatives from the whole corpus, which can be more effective for single-step retrieval (Xiong et al., 2021a).", "In multi-step retrieval, we select negative evidence chains from the corpus.", "Beam search on the training data finds the top $k$ highest-scoring negative chains for each retrieval step.", "Since the model parameters are dynamically updated, we asynchronously refresh the negative chains with the up-to-date model checkpoint (Guu et al., 2020; Xiong et al., 2021a).", "Our experiments are on the HOTPOTQA fullwiki setting (Yang et al., 2018), the multi-hop question answering benchmark.", "We mainly evaluate retrieval that extracts evidence chains (passages) from the corpus; we further add a downstream evaluation on whether it finds the right answer.", "Metrics: Following Asai et al. (2020), we report four metrics on retrieval: answer recall (AR), whether the answer span is in the retrieved passages; passage recall (PR), whether at least one gold passage is in the retrieved passages; Passage Exact Match (P EM), whether both gold passages are included in the retrieved passages; and Exact Match (EM), whether both gold passages are included in the top two retrieved passages (top one chain).", "We report exact match (EM) and F1 on answer spans.", "Implementation: We use a BERT-base encoder for retrieval and report both BERT-base and BERT-large for span extraction.", "We warm up BEAMDR with TF-IDF negative chains.", "The retrieval is evaluated on ten passage chains (each chain has two passages).", "To compare with existing retrieve-then-rerank cascade systems, we train a standard BERT passage reranker (Nogueira and Cho, 2019), and evaluate on ten chains reranked from the top 100 retrieval outputs.", "We train BEAMDR on six 2080Ti GPUs, three for training, three for refreshing negative chains.", "We do not search hyperparameters and use suggested ones from Xiong et al. 
(2021a).", "Baselines: We compare BEAMDR with TF-IDF; Semantic Retrieval (Nie et al., 2019, SR), which uses a cascade BERT pipeline; and the Graph recurrent retriever (Asai et al., 2020, GRR), our main baseline, which iteratively retrieves passages following the Wikipedia hyperlink structure and is state-of-the-art on the leaderboard.", "We also compare against a contemporaneous model, multi-hop dense retrieval (Xiong et al., 2021b, MDR).", "Results: Robust Evidence Retrieval without Document Links: Table 1 presents retrieval results.", "On full retrieval, BEAMDR is competitive with GRR, the state-of-the-art reranker using Wikipedia hyperlinks.", "BEAMDR also has better retrieval than the contemporaneous MDR.", "Although both approaches build on dense retrieval, MDR is close to BEAMDR with TF-IDF negatives.", "We instead refresh negative chains with intermediate representations, which help the model better discover evidence chains.", "Our ablation study (Greedy search) indicates the importance of maintaining the beam during inference.", "With the help of cross-attention between the question and the passage, using BERT to rerank BEAMDR's retrieved chains outperforms all baselines.", "Varying the Beam Size: Figure 2 plots the Passage EM with different beam sizes.", "While initially increasing the beam size improves Passage Exact Match, the marginal improvement decreases after a beam size of forty.", "Baselines: We compare BEAMDR with TXH (Zhao et al., 2020b), GRR (Asai et al., 2020), and the contemporaneous MDR (Xiong et al., 2021b).", "We use released code from GRR (Asai et al., 2020) following its settings on BERT-base and BERT-large.", "We use four 2080Ti GPUs.", "Results: Using the same implementation but on our reranked chains, BEAMDR outperforms GRR (Table 2), suggesting gains from retrieval could propagate to answer span extraction.", "Table 2 (HOTPOTQA dev and test set answer exact match (EM) and F1 results; retriever/reader, dev EM/F1, test EM/F1): BERT-base reader: TXH/TXH 54.0/66.2, 51.6/64.1; GRR/GRR 52.7/65.8, -; BEAMDR/GRR 54.9/68.0, -. BERT-large-wwm reader: GRR/GRR 60.5/73.3, 60.0/73.0; BEAMDR/GRR 61.3/74.1, 60.4/73.2; MDR/MDR 61.5/74.7, -. ELECTRA-large reader: MDR/MDR 63.4/76.2, 62.3/75.3.", "BEAMDR is competitive with MDR but slightly lower; we speculate different reader implementations might be the cause.", "In this section, we explore how BEAMDR constructs evidence chains.", "Figure 3 shows query and passage representations with t-SNE (Maaten and Hinton, 2008).", "Unsurprisingly, in the dense space, the first-hop query (question) is close to its retrieved passages but far from second-hop passages (with some negative passages in between).", "After composing the question and first-hop passages, the second-hop queries indeed land closer to the second-hop passages.", "Our quantitative analysis (Table 3) further shows BEAMDR has little overlap between retrieved passages in the two hops.", "BEAMDR mimics multi-step reasoning by hopping in the learned representation space.", "To study model behaviors under different hops, we use heuristics to infer the order of evidence passages (Footnote 1).", "In Table 3, BEAMDR slightly wins on first-hop passages; with the help of hyperlinks, GRR outperforms BEAMDR on second-hop retrieval.", "Only 21.9% of the top-10 BEAMDR chains are connected by links.", "BEAMDR wins after using links to filter candidates.", "To understand the strengths and weaknesses of BEAMDR compared with GRR, we manually analyze 100 bridge questions from the HOTPOTQA development set.", "BEAMDR predicts fifty of them correctly 
and GRR predicts the other fifty correctly (Tables 4 and 5).", "Strengths of BEAMDR.", "Compared to GRR, the largest gain of BEAMDR is in identifying question-entity passages.", "As there is often little context overlap besides the entity surface form, a term-based approach (the TF-IDF used by GRR) falters.", "Some of the GRR errors also come from using reverse links to find second-hop passages (i.e., the second-hop passage links to the first-hop passage).", "(Footnote 1: We label the passage that contains the answer as the second-hop passage and the other one as the first-hop passage.", "If both passages include the answer, the passage whose title is mentioned in the question is the first-hop passage.)", "Weaknesses of BEAMDR.", "Like Karpukhin et al. (2020), many of BEAMDR's errors could be avoided by simple term matching.", "For example, matching \"What screenwriter with credits for Evolution co-wrote a film starring Nicolas Cage and Tea Leoni?\" to the context \"The Family Man is a 2000 American film written by David Diamond and David Weissman, and starring Nicolas Cage and Tea Leoni.\"", "Extracting multiple pieces of evidence automatically has applications from solving crossword puzzles (Littman et al., 2002), graph database construction (De Melo and Weikum, 2009), and understanding relationships (Chang et al., 2009; Iyyer et al., 2016) to question answering (Ferrucci et al., 2010), which is the focus of this work.", "Given a complex question, researchers have investigated multi-step retrieval techniques to find an evidence chain.", "Knowledge graph question answering approaches (Talmor and Berant, 2018; Zhang et al., 2018, inter alia) directly search the evidence chain from the knowledge graph, but falter when KG coverage is sparse.", "With the release of large-scale datasets (Yang et al., 2018), recent systems (Nie et al., 2019; Zhao et al., 2020b; Asai et al., 2020; Dhingra et al., 2020, inter alia) use Wikipedia abstracts (the first paragraph of a Wikipedia page) as the corpus to retrieve the evidence chain.", "Dhingra et al. (2020) treat Wikipedia as a knowledge graph, where each entity is identified by its textual span mentions, while other approaches (Nie et al., 2019; Zhao et al., 2020b) directly retrieve passages.", "They first adopt single-step retrieval to select the first-hop passages (or entity mentions), then find the next-hop candidates directly from Wikipedia links and rerank them.", "Like BEAMDR, Asai et al. (2020) use beam search to find the chains but still rely on a graph neural network over Wikipedia links.", "BEAMDR retrieves evidence chains through dense representations without relying on the corpus's semi-structure.", "Qi et al. 
(2019, 2020) iteratively generate the query from the question and retrieved history, and use traditional sparse IR systems to select the passage, which complements BEAMDR's approach.", "We introduce a simple yet effective multi-step dense retrieval method, BEAMDR.", "By conducting beam search and globally refreshing negative chains during training, BEAMDR finds reasoning chains in dense space.", "BEAMDR is competitive with more complex state-of-the-art systems despite not using semi-structured information.", "While BEAMDR can uncover relationships embedded within a single question, future work should investigate how to use these connections to resolve ambiguity in the question (Elgohary et al., 2019; Min et al., 2020), resolve entity mentions (Guha et al., 2015), connect concepts across modalities (Lei et al., 2018), or connect related questions to each other (Elgohary et al., 2018).", "We thank the anonymous reviewers and meta-reviewer for their suggestions and comments.", "Zhao is supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the BETTER Program contract 2019-19051600005.", "Boyd-Graber is supported by NSF Grant IIS-1822494.", "Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors." ]
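The beam search of the BEAMDR record above (Section 2) can be summarized in a short sketch. It assumes precomputed, unit-normalized passage embeddings and treats the query encoder as a black box: the hash-based encode_query below is a toy stand-in, not the paper's trained BERT encoder. Scores follow S_t = f(q_t, p_t) * S_{t-1} with softmax-normalized relevance.

```python
import numpy as np

def encode_query(question, chain, dim=8):
    """Stand-in for E_Q([q; p_1; ...; p_{t-1}]): a deterministic toy encoder."""
    rng = np.random.default_rng(abs(hash((question, tuple(chain)))) % 2**32)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def beam_search(question, passage_embs, n_steps=2, beam_size=4):
    beams = [([], 1.0)]                          # (chain of passage ids, score S_t)
    for _ in range(n_steps):
        candidates = []
        for chain, score in beams:
            q_t = encode_query(question, chain)          # query composition
            sims = passage_embs @ q_t                    # MIPS over the corpus
            probs = np.exp(sims) / np.exp(sims).sum()    # normalized relevance
            for pid in np.argsort(-sims)[:beam_size]:
                if pid not in chain:
                    candidates.append((chain + [int(pid)], score * probs[pid]))
        beams = sorted(candidates, key=lambda c: -c[1])[:beam_size]   # keep top k
    return beams

corpus = np.random.randn(100, 8)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)
print(beam_search("who founded columbia university?", corpus)[0])
```

The cache of the k best chains plays the role of the beam; in the real system the relevance scores come from MIPS over a full corpus index rather than a dense matrix product.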
[ "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other" ]
[ "Abstract Storyline generation aims to extract events described on news articles under a certain topic and reveal how those events evolve over time.", "Most existing approaches first train supervised models to extract events from news articles published in different time periods and then link relevant events into coherent stories.", "They are domain dependent and cannot deal with unseen event types.", "To tackle this problem, approaches based on probabilistic graphic models jointly model the generations of events and storylines without annotated data.", "However, the parameter inference procedure is too complex and models often require long time to converge.", "In this paper, we propose a novel neural network based approach to extract structured representations and evolution patterns of storylines without using annotated data.", "In this model, title and main body of a news article are assumed to share the similar storyline distribution.", "Moreover, similar documents described in neighboring time periods are assumed to share similar storyline distributions.", "Based on these assumptions, structured representations and evolution patterns of storylines can be extracted.", "The proposed model has been evaluated on three news corpora and the experimental results show that it outperforms state-of-the-art approaches accuracy and efficiency.", "With the development of the internet, massive information about current events is generated and propagated continuously on online news media sites.", "It is difficult for the public to digest such large volumes of information effectively.", "Storyline generation, aiming at summarizing the development of certain related events, has been intensively studied recently (Diao and Jiang, 2014).", "In general, storyline can be considered as an event cluster where event-related news articles are ordered and clustered depending on both content and temporal similarity.", "Different ways of calculating content and temporal similarity can be used to cluster related events (Yan et al., 2011; Huang and Huang, 2013).", "Bayesian nonparametric models could also be used to tackle this problem by describing the storyline generating process using probabilistic graphical models (Li and Cardie, 2014; Diao and Jiang, 2014).", "Nevertheless, most existing approaches extract events independently and link relevant events in a post-processing step.", "More recently, Zhou et al. (2016) proposed a non-parametric generative model to extract storylines which is combined with Chinese Restaurant Processes (CRPs) to determine the number of storylines automatically.", "However, the parameter inference procedure is too complex and the model requires long time to converge.", "This makes it impractical to be deployed in real-world applications.", "Recently, deep learning techniques have been successfully applied to various natural language processing tasks.", "Several approaches (Mikolov et al., 2013; Le and Mikolov, 2014) such as word2vec have been proved efficient in representing rich syntactic and semantic information in text.", "Therefore, it would be interesting to combine the advantage of both probabilistic graphical model and deep neural networks.", "There have been some efforts in exploring this in recent years.", "For example, Yang et al. (2015) proposed a gaussian mixture neural topic model incorporating both the ordering of words and the semantic meaning of sentences into a topic model.", "Cao et al. 
(2015) explained topic models from the perspective of neural networks and proposed a neural topic model where the representations of words and documents are combined into a unified framework.", "However, to the best of our knowledge, there has been no attempt to extract structured representations of storylines from text using neural network-based approaches.", "In this paper, we propose a novel neural model for storyline generation without the use of any annotated data.", "Specifically, we assume that the storyline distributions of a document's title and its main body are similar.", "A pairwise ranking approach is used to optimize the model.", "We also assume that similar documents described in neighboring time periods should share similar storyline distributions.", "Hence, the model learned in the previous time period can be used to guide the learning of the model in the current period.", "Based on these two assumptions, relevant events can be extracted and linked.", "Furthermore, storyline filtering based on confidence scores is performed.", "This makes it possible to generate new storylines.", "We propose a novel neural network-based model to extract structured representations and evolution patterns of storylines.", "To the best of our knowledge, it is the first attempt to perform storyline generation based on neural networks without any annotated data.", "The proposed approach has been evaluated on three corpora and a significant improvement in F-measure is achieved when compared to state-of-the-art approaches.", "Moreover, the proposed approach only requires a fraction of the training time in comparison with the second-best approach.", "Considering a storyline as a hidden topic, storyline extraction can be cast as a topic detection and tracking (TDT) problem.", "One popular way to deal with TDT is through topic models.", "However, traditional topic models such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003) do not detect the dynamics of topics over time.", "Griffiths and Steyvers (2004) clustered texts using LDA and then mapped the topics into corresponding time periods.", "Blei and Lafferty (2006) developed a dynamic topic model which captures the evolution of topics in a sequentially organized corpus of documents by using Gaussian time series on the natural parameters of the multinomial topics and logistic normal topic proportion models.", "Unlike early work that relied on Markov assumptions or discretization of time, Wang and McCallum (2006) proposed a topic-over-time (TOT) model where each topic is associated with a continuous distribution over timestamps.", "For each document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp.", "As a storyline might include more than one topic, Kawamae (2011) made an improvement over TOT and proposed a trend analysis model which generates storylines based on the model trained in the previous time period.", "Ahmed and Xing (2008) employed Recurrent Chinese Restaurant Processes (RCRPs) to cluster texts from discrete time slices, while the number of clusters can grow automatically with the data at each epoch.", "Following this, many approaches were proposed for storyline extraction by combining RCRP with LDA (Ahmed et al., 2011a,b; Ahmed and Xing, 2013).", "Considering dependencies among clusters in different time periods, a distance-dependent CRP model was proposed by Blei and Frazier (2011), which defines a weight function to quantify the dependency between different clusters.", 
"Huang et al. (2015) proposed a Dynamic Chinese Restaurant Process (DCRP) model which considers the birth, survival and death of a storyline.", "Recently, there have been increasing interests in exploring neural network based approaches for topic detection from text.", "These approaches can be divided into two categories, solely based on neural networks and a combination of topic models and neural networks.", "For the first category, topic distributions of documents are modeled by a hidden layer in neural networks.", "For example, Hinton and Salakhutdinov (2009) proposed a two layer probabilistic graphical model which is a generalization of the restricted Boltzmann machine, called a Replicate Softmax.", "It can be used to automatically extract low-dimensional latent semantic representations from a large unstructured collection of documents.", "Larochelle and Lauly (2012) proposed a neural autoregressive topic model to compute the hidden units of the network efficiently.", "There are also many approaches trying to combine neural networks with topic models.", "For example, Yang et al. (2015) presented a Gaussian mixture neural topic model which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling.", "To make the neural network based model more interpretable, Cao et 1728 al. (2015) explained topic models from the perspective of neural networks and proposed a neural topic model where the representation of words and documents are combined into a unified framework.", "Tian et al. (2016) proposed a sentence level recurrent topic model assuming the generation of each word within a sentence is dependent on both the topic of the sentence and the the historical context of its preceding words in the sentence.", "Wan et al. (2012) introduced a hybrid model which combines a neural networks with a latent topic models.", "The neural network provides a low dimensional embedding for the input data while the subsequent distribution is captured by the topic model.", "However, most of the aforementioned models are solely for topic detection.", "They do not consider evolutionary topic clustering for storyline generation.", "To model the generation of a storyline in consecutive time periods from a stream of documents, we propose a neural network based approach, called Neural Storyline Extraction Model (NSEM), as shown in Figure", "1. In this model, we have the following assumptions: Assumption 1: for a document, the storyline distribution of its title and main body should be similar.", "In general, for any given document, its title and main body should discuss the same storyline.", "Although title may exist metaphor and metonymy to catch the reader's eye ball, the key entities and words will not change such as name, location and so on.", "Therefore, it is reasonable to assume that the title h and its main body d of a document share a similar storyline distribution.", "The storyline distributions of title and main body are denoted as p ( s h ) and p ( s d ) .", "Hence, p ( s h ) and p ( s d ) should be similar.", "Based on this assumption, documents at time period t can be clustered into several storylines in such a way.", "Let h pos denotes the correct title to the main body d (positive example), and h neg denotes an irrelevant title (negative exam-ple), the similarity of the storyline distribution derived from the main body d and that obtained from the correct title h pos should be far more greater than that obtained from irrelevant titles h neg , i.e. 
sim(p(s_d), p(s_h_pos)) ≫ sim(p(s_d), p(s_h_neg)).", "Different similarity metrics can be used to measure the similarity between two distributions.", "Assumption 2: similar documents in neighboring time periods should share similar storyline distributions.", "It is assumed that similar documents in neighboring time periods tend to share the same storyline.", "For example, a document with the title Indian Election 2014: What are minorities to do? and another document in the next time period with the title The efficiency of Indian elections is time tested should belong to the same storyline India election.", "Based on this assumption, events extracted in different time periods can be linked into storylines.", "As the main body contains more information than the title, we only use the storyline distribution of the main body, p(s_d), in order to simplify the model structure.", "The information learned in the previous time period is used to supervise the learning in the current time period.", "Based on the above two assumptions, the proposed NSEM, as shown in Figure 1, contains the following four layers: (1) the Input layer, shown at the bottom left of Figure 1, takes d, h_pos and h_neg as input and transforms these texts into vectors; (2) the Main body-Storyline layer and the Title-Storyline layer, both designed to generate storyline distributions; (3) the Similarity layer, which calculates the similarity between the storyline distribution of the main body and that of the title.", "In the top part of Figure 1, the model learned in the previous time period is used to guide the storyline distribution learning in the current time period.", "We explain the structure and function of each layer of NSEM in more detail below: Input Layer (d, h): the input layer represents the main body d and the title h with distributed embeddings d and h.", "Let the subscript pos denote the relevant title h_pos (positive example) and the subscript neg denote an irrelevant title h_neg (negative example).", "For news articles, we pay particular attention to the key elements of events, namely location l, person p, organization o and keywords w.", "Thus an event is described by a quadruple (l, p, o, w).", "We extract these elements from the main body and concatenate their word embeddings as the feature vector d = [l, p, o, w].", "We obtain the title feature h in the same way.", "We first identify named entities and treat those named entities with multi-word expressions (e.g., Donald Trump) as single tokens.", "Then we train word2vec (Mikolov et al., 2013) to represent each entity with a 100-dimensional embedding vector.", "We also filter out less important keywords and entities based on criteria such as TF-IDF.", "[Figure 1: The framework of NSEM, showing the input layer (l, p, o, w), the Main body-Storyline and Title-Storyline layers producing p_t(s_d), p_t(s_h_pos) and p_t(s_h_neg), the similarity layer computing g_sim(d, h_pos) and g_sim(d, h_neg), and the model at time t-1 guiding learning at time t.]", 
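To make the input-layer description above concrete, the following is a minimal sketch of how the 400-dimensional feature d = [l, p, o, w] could be assembled. It assumes a trained gensim word2vec model with 100-dimensional vectors; the helper names (average_embeddings, build_event_vector) and the count-weighted averaging details are ours, not taken from the paper.

```python
# A hedged sketch of the NSEM input layer, assuming a gensim word2vec
# model with 100-dimensional vectors trained on the news corpus.
import numpy as np
from gensim.models import Word2Vec

EMB_DIM = 100  # per-element embedding size; 4 elements -> 400-d feature


def average_embeddings(tokens_with_counts, wv):
    """Weighted average of embeddings by occurrence count; missing -> zeros."""
    vecs, weights = [], []
    for token, count in tokens_with_counts:
        if token in wv:
            vecs.append(wv[token])
            weights.append(count)
    if not vecs:  # a missing event element is set to null (a zero vector)
        return np.zeros(EMB_DIM, dtype=np.float32)
    weights = np.asarray(weights, dtype=np.float32)
    return (np.asarray(vecs) * weights[:, None]).sum(axis=0) / weights.sum()


def build_event_vector(location, person, organization, keywords, wv):
    """Concatenate the (l, p, o, w) element embeddings into d = [l, p, o, w]."""
    parts = [average_embeddings(element, wv)
             for element in (location, person, organization, keywords)]
    return np.concatenate(parts)  # shape: (400,)

# Usage (illustrative): wv = Word2Vec(corpus, vector_size=EMB_DIM).wv
# d = build_event_vector([("Egypt", 3)], [("Sisi", 2), ("Morsi", 1)],
#                        [], [("election", 5), ("presidential", 2)], wv)
```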
"For a document containing more than one entity of the same event element type (for example, a document might contain mentions of several different locations), we calculate the weighted sum of all location embeddings according to their occurrence counts.", "If a certain event element is missing from a document, we set it to null.", "After concatenating the four key event elements, each document or title is represented by a 400-dimensional embedding vector.", "Main body-Storyline Layer (p(s_d) ∈ R^{1×S}): this layer represents the storyline distribution p(s_d) of the main body d.", "Suppose there are a total of S storylines; the storyline distribution p(s_d) is an S-dimensional vector, denoted as p(s_d) = {p(s_d = 1), ..., p(s_d = S)}.", "It can be formulated as: p(s_d) = f(d W_1 + b_1) (1), where W_1 ∈ R^{K×S} denotes the weight matrix, b_1 denotes the bias, K = 400 is the dimension of the document representation, and f denotes the activation function.", "Here we use the softmax function.", "The probability of the main body d belonging to storyline i can be written as: p(s_d = i) = exp(d W_1i + b_1i) / Σ_{j=1..S} exp(d W_1j + b_1j) (2). Title-Storyline Layer (p(s_h) ∈ R^{1×S}): this layer represents the storyline distribution p(s_h) of the title h.", "Similar to the Main body-Storyline layer, we obtain p(s_h) and p(s_h = i) of the title h as: p(s_h) = f(h W_2 + b_2) (3), p(s_h = i) = exp(h W_2i + b_2i) / Σ_{j=1..S} exp(h W_2j + b_2j) (4). Similarity Layer (g_sim ∈ R): this layer calculates the similarity between the distributions p(s_d) and p(s_h).", "The similarity score g_sim is calculated with the (negative) Kullback-Leibler (KL) divergence: g_sim(d, h) = Σ_i p(s_d = i) log (p(s_h = i) / p(s_d = i)) (5). The similarity can also be calculated with other metrics.", "Different from the common approach of linking relevant events into storylines in a separate step, we extract storylines in a unified framework.", "According to our second assumption, for the current time period t, we employ the storyline generation results from the previous time period t-1 as constraints to guide the storyline generation process at t.", "For a document d_t (we only use the main body here) in time period t, we first use the model trained at t-1 to predict its storyline distribution p_{t-1}(s_{d_t}).", "Hence, when we learn p_t(s_{d_t}), we expect it to be similar to p_{t-1}(s_{d_t}).", "By doing so, we can link relevant events in different time periods together.", "For cases where intermittent storylines are observed, i.e., the related events occur initially, disappear in certain time periods, and re-occur later, we randomly select documents from all previous time periods and make them participate in the learning of the current model.", "Our first assumption states that for a document, its title and main body should share similar storyline distributions.", "Hence, we use a pairwise ranking approach (Collobert et al., 2011) to optimize p(s_d) and p(s_h).", "The basic idea is that the storyline distribution of the main body d should be more similar to that of the relevant title than to those of irrelevant ones.", "We first define the loss function as: L_1(d, h_pos, h_neg) = max(0, γ - g_sim(d, h_pos) + g_sim(d, h_neg)) (6), where γ denotes the margin parameter, h_pos denotes the relevant title and h_neg denotes an irrelevant title.", 
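Below is a minimal PyTorch sketch of Eqs. (1)-(5): the two storyline layers and the (negative) KL similarity. The class and variable names are ours; the epsilon smoothing is an assumption added for numerical stability and is not mentioned in the paper.

```python
# A sketch of the NSEM layers of Eqs. (1)-(5); names and eps are ours.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NSEM(nn.Module):
    def __init__(self, feat_dim=400, num_storylines=100):
        super().__init__()
        # Main body-Storyline layer: p(s_d) = softmax(d W_1 + b_1), Eqs. (1)-(2)
        self.body_layer = nn.Linear(feat_dim, num_storylines)
        # Title-Storyline layer: p(s_h) = softmax(h W_2 + b_2), Eqs. (3)-(4)
        self.title_layer = nn.Linear(feat_dim, num_storylines)

    @staticmethod
    def g_sim(p_d, p_h, eps=1e-8):
        # Eq. (5): sum_i p(s_d=i) * log(p(s_h=i) / p(s_d=i)),
        # i.e. the negative KL divergence between the two distributions.
        return (p_d * (torch.log(p_h + eps) - torch.log(p_d + eps))).sum(-1)

    def forward(self, d, h_pos, h_neg):
        p_d = F.softmax(self.body_layer(d), dim=-1)
        p_h_pos = F.softmax(self.title_layer(h_pos), dim=-1)
        p_h_neg = F.softmax(self.title_layer(h_neg), dim=-1)
        return p_d, self.g_sim(p_d, p_h_pos), self.g_sim(p_d, p_h_neg)
```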
irrelevant title.", "We choose titles whose elements l, p, o, k have no intersection with those positive titles from the current time period as negative examples.", "Our second assumption assume that for similar documents in neighboring time periods, they should share similar storyline distribution .", "Hence, the model learned in the previous time period can be used for guiding the learning of the model in the current period.", "Hence, when constructing storyline for the main body d in current time period t , we use the model in previous time period t 1 and predict the storyline distribution p t 1 ( s d ) .", "Then we measure current storyline distribution p t ( s d ) and predicted distribution p t 1 ( s d ) by KL divergence which can be defined as below: L 2 ( d ) = p t 1 ( s d ) log p t ( s d ) p t 1 ( s d ) (7) Therefore, the final objective function is to minimize: L = d ( L 1 ( d, h pos , h neg ) + L 2 ( d )) (8) where and are the weights controlling the contributions of the two loss terms.", "For the start time period, we only use L 1 to optimize our model.", "Let t denote the model parameter in the time period t .", "Based on the model structure and the loss function described above, the training procedure for NSEM is given in Algorithm", "1. Algorithm 1 Training procedure for NSEM at the time period t Require: main bodies d ; titles h ; model parameter t 1 at the time period t 1 1: Initialize t 2: for d d do 3: Calculate its storyline distribution based on t 1 4: end for 5: repeat 6: for every minibatch M in ( d , h ) do 7: for every pair ( d i , h i,pos ) in minibatch M do 8: Calculate the storyline distribution p ( s d i ) 9: Calculate the storyline distribution p ( s h i,pos ) 10: Sample an irrelevant title h i,neg where h i,neg h i,pos = 11: Calculate the storyline distribution p ( s h i,neg ) 12: Calculate L 1 ( d i , h i,pos , h i,neg ) 13: Calculate L 2 ( d i ) 14: end for 15: Calculate minibatch loss LM = d i ( L 1 + L 2 ) and gradients t LM 16: Update model parameter t 17: end for 18: until Convergence 3.3 Post-processing As the number of storylines at each time period is assumed to be the same, some newly emerging storylines might be incorrectly linked with previous storylines.", "Therefore, post-processing is needed to filter out such erroneous linkings.", "We assume that if a current storyline does not have any key element in common with previously extracted storyline, it should be flagged as a new storyline.", "We define the Coverage of the storyline s as below: Coverage ( s, t, M ) = ( element ) ts ( element ) t M s (9) where ( element ) ts denotes the set of event elements in the time period t for storyline s and ( element ) t M s denote the set of event elements in the last M time periods for storyline s .", "If the coverage Coverage ( s, t, M ) is less than a threshold N , the current storyline s is considered as a new one.", "For example, if the current storyline' Coverage with index 5 is less than N , then previ-1731 ous storyline with index 5 stops at current period and the current storyline with index 5 is a new one.", "To evaluate the proposed approach, we use the three datasets as in (Zhou et al., 2016).", "The statistics of the three datasets are presented in Table 4.1.", "Among which the Dataset III includes 30 different types of manually annotated storylines which are categorized into four types: (1) long-term storylines which last for more than 2 weeks; (2) short-term storylines which last for less than 1 week; (3) intermittent storylines which last for 
"In our experiments, we used the Stanford named entity recognizer (https://nlp.stanford.edu/software/CRF-NER.html) to identify the named entities.", "In addition, we removed common stopwords and only kept tokens which are verbs, nouns, or adjectives in these news articles.", "We chose the following four methods as baseline approaches.", "1. DLDA (Blei and Lafferty, 2006): dynamic LDA is based on the Markovian assumption that the topic-word distribution at the current time period is only influenced by the topic-word distribution in the previous time period.", "Moreover, topic-word distributions are linked across time periods by a Markov chain.", "2. RCRP (Ahmed et al., 2011a): a nonparametric model for evolutionary clustering based on RCRP, which assumes that past story popularity is a good prior for current popularity.", "3. SDM (Zhou et al., 2015): it assumes that the number of storylines is fixed and models each storyline as a joint distribution over entities and keywords.", "The dependency between different stories of the same storyline at different time periods is captured by modifying Dirichlet priors.", "4. DSEM (Zhou et al., 2016): this model is integrated with CRPs so that the number of storylines can be determined automatically without human intervention.", "Moreover, a per-token Metropolis-Hastings sampler based on LightLDA (Yuan et al., 2015) is used to reduce the sampling complexity.", "For DLDA, SDM and our model NSEM, the storyline number is set to 100 on both Datasets II and III.", "In consideration of the dependency on historical storyline distributions, the number of past epochs M is set to 7 for both SDM and DSEM.", "For RCRP, the hyperparameter is set to 1.", "For our model NSEM, the threshold is set to 0.5 and the loss weights α and β are set to 1 and 0.5, respectively.", "In the post-processing step, we empirically set N to 7.", "To evaluate the performance of the proposed approach, we use precision, recall and F-measure, which are commonly used for evaluating information extraction systems.", "The precision is calculated based on the following criteria: 1) the entities and keywords extracted refer to the same storyline; 2) the duration of the storyline is correct.", "We assume that the start date (or end date) of a storyline is the publication date of the first (or last) related news article.", "As there is no gold standard available for Dataset I, we manually examine the experimental results.", "We search for news from the same period and compare it with our results according to the above criteria.", "The experimental results of the proposed approach in comparison to the baselines on Datasets I, II and III are presented in Table 2.", 
"For Dataset I, as it is hard to know the ground truth of storylines, we only report the precision value by manually examining the extracted storylines.", "It can be observed from Table 2 that the proposed approach achieves the best performance on all three datasets.", "Specifically, for Dataset I, NSEM extracts more storylines with a higher precision value.", "[Table 2: Performance comparison of the storyline extraction results on Datasets I, II and III. Dataset I (Precision %, number of extracted storylines): SDM 70.20, 104; DSEM 75.43, 114; NSEM 76.58, 121. Dataset II (Precision / Recall / F-measure %): DLDA 62.67 / 61.03 / 61.84; RCRP 67.11 / 66.23 / 66.67; SDM 70.67 / 68.80 / 69.27; DSEM 73.17 / 77.92 / 75.47; NSEM 75.31 / 79.22 / 77.22. Dataset III (Precision / Recall / F-measure %): DLDA 46.16 / 43.33 / 42.86; RCRP 61.54 / 53.33 / 57.14; SDM 54.17 / 43.33 / 48.15; DSEM 75.00 / 70.00 / 72.41; NSEM 77.78 / 70.00 / 73.69.]", "For Dataset II, which contains 77 storylines, NSEM extracts 81 storylines, among which 61 are correct, and outperforms DSEM by 2% in F-measure.", "For Dataset III, consisting of 30 storylines, NSEM extracted 27 storylines, among which 21 are correct.", "Although its recall value is the same as DSEM's, its precision value is nearly 3% higher, which results in a better F-measure.", "The proposed approach needs to preset the number of storylines.", "To study the impact of the number of storylines on the performance of the proposed model, we conducted experiments on Dataset III with the number of storylines S varying between 25 and 150.", "Table 3 shows the performance of storyline extraction with different values of S.", "It can be observed that both precision and recall of NSEM increase with the number of storylines until it reaches 100.", "When S is increased further, precision and recall change only slightly and the F-measure remains relatively stable.", "We illustrate the evolution of storylines using structured browsing.", "The structured information of the storylines, such as locations, persons, entities and keywords, is presented, together with the titles of some related documents.", "[Table 3: The performance of NSEM on Dataset III with different S (S: Precision / Recall / F-measure %): 25: 66.67 / 33.33 / 44.44; 50: 73.08 / 46.67 / 56.96; 75: 76.92 / 53.33 / 62.99; 100: 77.78 / 70.00 / 73.69; 125: 78.13 / 73.33 / 75.65; 150: 78.79 / 70.00 / 74.13.]", "The number of related documents for each storyline is also depicted to allow an easy visualization of storyline popularity over time.", "Figure 2 illustrates three different types of storylines: Apple vs Samsung, Pistorius shoots Steenkamp and Egypt election.", "The first storyline, Apple vs Samsung, starts at the beginning of the month and only lasts for 9 days.", "Three representative epochs are highlighted.", "From the extracted organizations, Apple and Samsung, and the keywords patent and infringe, it can easily be deduced that this storyline is about patent infringement between Apple and Samsung.", "The storyline Pistorius shoots Steenkamp is an intermittent storyline which lasts for more than 2 weeks but has no related news articles on some of the days in between.", "From Figure 2, it can be observed that the storyline ceases for 2 days, on Days 10 and 11.", "From the structured representation of the early storylines, it can be observed that there is a shooting event involving Pistorius and Steenkamp in South Africa.", "After two days' silence, on Day 13, public attention was raised once again when Pistorius applied for mental tests.", "The last storyline, Egypt election, starts on Day 20 and 
continues beyond the end of May.", "From the key event elements, the location Egypt and the keywords presidential and election, it can easily be inferred that there was a presidential election in Egypt.", "From the persons extracted on Day 26, Sisi and Morsi, it can also be observed that both were candidates in Egypt's presidential election.", "On Day 29, the storyline reached its climax when Sisi won the election, which can be discovered from the title Sisi elected #Egypt president by landslide.", "To explore the efficiency of the proposed approach, we conducted an experiment comparing NSEM with DSEM.", "[Figure 2: Example storylines shown via structured browsing. The x-axis is the date of the storyline (Days 1-30) and the y-axis the number of documents related to the storyline; for Apple vs Samsung, the extracted elements are l: U.S., Americas; p: Jury; o: Apple, Samsung; w: patent, infringe, with the representative title Jury says Samsung infringed Apple patents.]", "DSEM employs the Metropolis-Hastings sampler to reduce the sampling complexity in order to achieve faster convergence.", "We train both models on training data varying from 1,000 to 10,000 documents.", "Figure 3 illustrates the logarithm of the time consumed for each training set.", "It can be observed that NSEM trains about 30 times faster than DSEM, showing the advantage of a neural network based approach over a Bayesian model based method.", "Our proposed model is based on the two distribution similarity assumptions presented in the Methodology section.", "To investigate the quality of the learned storyline distributions, we conducted an experiment on Dataset III with the storyline number S set to 100.", "We randomly choose three documents and calculate the storyline distributions of their titles and main bodies based on the learned NSEM.", "We also randomly select three pairs of similar documents in different time periods and draw their main-body storyline distributions based on the learned NSEM.", "It can be observed from Figure 4 that the storyline distributions of the title and the main body of a document are similar.", "Moreover, the storyline distributions of two similar documents in different time periods are also similar.", "In this paper, we have proposed a neural network based storyline extraction model, called NSEM, to extract structured representations of storylines from news articles.", "NSEM was designed based on two assumptions: the similarity of the storyline distributions of the title and the main body of the same document, and the similarity of the storyline distributions of similar documents in different time periods.", "Experimental results show that our proposed model outperforms the state-of-the-art approaches and only requires a fraction of the training time.", "In future work, we will explore extending our model to cater for a varying number of storylines automatically and to better deal with intermittent storylines.", "[Figure 4 plots the learned distributions for document titles and main bodies, and for similar documents in different time periods.]", "Figure 4: Visualization of the learned storyline distributions.", "David M. Blei and Peter I. Frazier.", "2011.", "Distance dependent Chinese restaurant processes.", "Journal of Machine Learning Research 12(Aug):2461-2488.", "David M. Blei and John D. Lafferty.", "2006.", "Dynamic topic models.", "In Proceedings of the 23rd International Conference on Machine Learning.", "ACM, pages 113-120.", "David M. Blei, Andrew Y. Ng, and Michael I. Jordan.", "2003.", "Latent Dirichlet allocation.", "Journal of Machine Learning Research 3(Jan):993-1022.", "Ziqiang 
Cao, Sujian Li, Yang Liu, Wenjie Li, and Heng Ji.", "2015.", "A novel neural topic model and its supervised extension.", "In Twenty-Ninth AAAI Conference on Artificial Intelligence.", "pages 2210-2216.", "Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa.", "2011.", "Natural language processing (almost) from scratch.", "Journal of Machine Learning Research 12(Aug):2493-2537.", "Qiming Diao and Jing Jiang.", "2014.", "Recurrent Chinese restaurant process with a duration-based discount for event identification from Twitter.", "In Proceedings of the 2014 SIAM International Conference on Data Mining.", "SIAM, pages 388-397.", "Thomas L. Griffiths and Mark Steyvers.", "2004.", "Finding scientific topics.", "Proceedings of the National Academy of Sciences 101(suppl 1):5228-5235.", "Geoffrey E. Hinton and Ruslan R. Salakhutdinov.", "2009.", "Replicated softmax: an undirected topic model.", "In Advances in Neural Information Processing Systems.", "pages 1607-1614.", "Lifu Huang and Lian'en Huang.", "2013.", "Optimized event storyline generation based on mixture-event-aspect model.", "In EMNLP.", "pages 726-735.", "Rui Huang, Fengyuan Zhu, and Pheng-Ann Heng.", "2015.", "The dynamic Chinese restaurant process via birth and death processes.", "In AAAI.", "pages 2687-2693.", "Noriaki Kawamae.", "2011.", "Trend analysis model: trend consists of temporal words, topics, and timestamps.", "In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining.", "ACM, pages 317-326.", "Hugo Larochelle and Stanislas Lauly.", "2012.", "A neural autoregressive topic model.", "In Advances in Neural Information Processing Systems.", "pages 2708-2716.", "Quoc Le and Tomas Mikolov.", "2014.", "Distributed representations of sentences and documents.", "In Proceedings of the 31st International Conference on Machine Learning (ICML-14).", "pages 1188-1196.", "Jiwei Li and Claire Cardie.", "2014.", "Timeline generation: Tracking individuals on Twitter.", "In Proceedings of the 23rd International Conference on World Wide Web.", "ACM, pages 643-652.", "This work was funded by the National Natural Science Foundation of China (61772132), the Natural Science Foundation of Jiangsu Province of China (BK20161430) and Innovate UK (103652)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "method", "result", "objective", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "other", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Images are often more significant than only the pixels to human eyes, as we can infer, associate, and reason with contextual information from other sources to establish a more complete picture.", "For example, in Figure 1, we can find a way to identify the news articles related to the picture through segment-wise understandings of the signs, the buildings, the crowds, and more.", "This reasoning could provide the time and place the image was taken, which will help us in subsequent tasks, such as automatic storyline construction, correction of image source in intended effect photographs, and upper-stream processing such as image clustering for certain location or time.", "In this work, we formulate this problem and introduce TARA : a dataset with 16k images with their associated news, time, and location, automatically extracted from New York Times 1 (NYT), and an additional 61k examples as distant supervision from WIT (Srini-vasan et al., 2021).", "On top of the extractions, we present a crowdsourced subset in which we believe it is possible to find the images' spatiotemporal information for evaluation purpose.", "We show that there exists a 70% gap between a state-of-the-art joint model and human performance, which is slightly filled by our proposed model that uses segment-wise reasoning, motivating higher-level vision-language joint models that can conduct open-ended reasoning with world knowledge.", "The data and code are publicly available at https://github.", "com/zeyofu/TARA .", "Vision and language are two of most important information sources, and the fact that humans reason jointly with both sources has motivated artificial intelligence research to consider visually-grounded", "Both authors contributed equally to this work.", "language understanding.", "Most work in this area has focused on reasoning with local evidence (Suhr et al., 2019; Hudson and Manning, 2019; Lu et al., 2020; Liu et al., 2021), e.g. 
asking factoid questions such as the colors or shapes of objects and the number of people, yet very few works (Cui et al., 2021) encourage open-ended reasoning where a model needs to look beyond the task inputs.", "However, humans can relate visual cues to corresponding contextual information that could be multi-modal, and draw on background knowledge when interpreting and grounding images.", "For example, as Figure 1 shows, people who are familiar with the news can infer that the location is Times Square through the iconic screen panels, and further estimate the period of time by looking at the crowds and the signs.", "This can be done without explicitly including related news pieces as input.", "In fact, even though some people would not have the prior knowledge to identify the relevant events, it is likely that they would have a good estimate of the location and time by interpreting textual evidence in the image, the language in the signs, entity names, building styles, and other details in the input image.", "[Figure 2: What is the time and location for this image?] In this work, we identify and formulate this problem, spatio-temporal grounding of images, a", "task aiming at identifying the time and location at which the given image was taken.", "Specifically, we develop a novel dataset, TARA (Time and plAce for Reasoning beyond the imAge), a challenging benchmark that tasks models with grounding images to real-world spatial and temporal information.", "In our collection, we make sure that for models to accurately find an image's creation time and location, they would need to successfully ground the visual clues in texts such as news, stories and encyclopedias.", "As a result, this task motivates models to consider the association between visual information, language, and background knowledge more closely and in a more open-ended setting.", "Figure 2 shows an example from TARA, and Figure 3 shows a possible way for a model to ground the image to its spatio-temporal information.", "The system starts by grounding multiple segments from the image, and uses the information to conduct a constrained search in a large news base, until it locates specific textual information related to the image.", "This demonstrates the complexity and significance of this task.", "TARA is collected via a rigorous process that involves rule-based distant-supervision extraction from news-image data, which results in 16k image examples.", "While the training data has high label correctness (around 95%), we further run a crowdsourced validation on 3k examples to form the evaluation dataset.", "During the validation, annotators are asked to verify that there exists a potential path for humans to derive the correct answer, which encourages proper reasoning in future work.", "To better support the study of domain transfer and supervision for the reasoning process, we collect an additional 61k examples from the Wikipedia domain.", "We apply the state-of-the-art joint model CLIP (Radford et al., 2021) and show that it only achieves accuracies of 11.11% and 0.46% for location and time, respectively, on our dataset.", "Additionally, we present a new CLIP-based baseline model that reasons on object and facial segments and achieves 16.46% and 1.07% accuracy for location and time, respectively.", "We show that there exists a large gap (around 70% in accuracy) between state-of-the-art models and human performance, suggesting that the TARA data will provide a benchmark to motivate reasoning based approaches and support 
significant future work.", "Vision and Language Learning Language understanding in the context of images has been widely studied in various datasets covering a wide range of tasks including visual question answering, image retrieval, image and video captioning, etc.", "Earlier datasets mostly focus on simple local object properties identification (Antol et al., 2015; Chen et al., 2016).", "Later on, datasets start to focus on compositional visual reasoning.", "For example, Suhr et al. (2017) and Johnson et al. (2017) use synthetic images or synthetic language to study spatial relations.", "Recently, datasets using real images and real languages such as (Hudson and Manning, 2019; Liu et al., 2021) were proposed for reasoning about natural language descriptions of photos.", "However, all of the datasets focus on local grounding on segments inside the image, but not globally ground beyond the image with open-ended reasoning.", "While there are various tasks and datasets, the underlying associations between language and visual concepts are often common across different tasks (Lu et al., 2020).", "Therefore, we use CLIP (Radford et al., 2021) to study the TARA dataset in this paper.", "CLIP is a recently released state-of-the-art image representation model which has shown impressive performance on various tasks through pre-training on 400 million image and captions pairs collected from the internet.", "Spatio-temporal IE from Texts There has been extensive work on identifying temporal expressions and their associations with events in texts.", "Uz-Zaman et al. (2013); Ning et al. (2018) focus on temporal information extraction within the local contexts, and Zhou et al. (2020, 2021) further extends the scope to consider contextual information from external texts.", "The NLP community has also investigated spacial information extraction, with geocoding (Gritta et al., 2018; Kulkarni et al., 2020), which maps mentions to geological coordinates, being closest to our scope.", "Each example in TARA includes a news image, along with its time, location, caption, and corresponding news background such as headline, abstract, and news type.", "These are included for training or analysis purposes, but the task is to guess the correct time and location as accurately as possible given only the image.", "In developing the dataset, our goal is to collect a large corpus of semantically rich images that human with world knowledge can correctly identify the time and location, using evidence from the image, background knowledge, and appealing to external knowledge (which we call reasoning\" here).", "We design the process of collecting and identifying the images so that it facilitates this type of reasoning, and then use crowd sourcing to label a random 20% of high-quality images for development and testing.", "Figure 4 illustrates our data collection procedure.", "We first collect all the news between January 2010 and May 2021 using the NYT API 2 .", "We did not collect news that are earlier than 2010 because earlier news articles contain much fewer images.", "Each news article comes with a list of attributions 3 such as headline, abstract, news type, and possibly a main image.", "We first filter the news articles that has a valid image, and then scrape image caption for each image.", "Since the NYT covers news in several multimedia formats, the images follow a range of formatting practices, such as representative news images, image collages, images sampled from slideshows and descriptive natural thumbnails for 
videos.", "We setup a NYT specific pipeline to scrape image captions.", "We define a separate scraping procedure to get image specific text information for the different media types mentioned above and remove instances where multiple and/or ambiguous captions are returned.", "Image Pruning and Labeling Next, we describe how we automatically collect time and location of an image from corresponding news articles and captions.", "First, we filter out the images with 2 https://developer.nytimes.com/docs/ archive-product/1/overview 3 For each news, the API provides attributes as listed here: https://developer.nytimes.com/docs/ archive-product/1/types/Article 1140 2010-1, 2010-2, , 2021-5", "unwanted news types such as reviews, series, and obituaries, and unwanted news topics such as food, fashion, and movies, because images from these articles may not be informative enough.", "Then, we filter out the images whose caption does not contain location and time.", "For those that contain temporal and spacial cues, we assign each image a possible time label and location label.", "Specifically, we use the Spacy NER model 4 to find if the caption has both exactly one DATE entity for time and one GPE or LOC typed entity for location.", "Note that each news comes with a publication date and possible locations in attributes.", "We would either directly use our NER-extracted time entity as the possible time label if it's a valid time, or adjust the publication date using the time entity.", "For example, if the time entity is 1936 and publication date is 2021-05-01, then we will use 1936 as the possible time label because it should be an old image occurring in a recent news; in the latter case, if the 4 https://spacy.io/models/en time entity is last month and publication date is 2015-07-18, then we will use 2015-06 as the possible time label.", "We also compare our NER-extracted location entity with the news attribute locations.", "If the only difference is granularity, e.g. 
"Otherwise, we will filter out this image.", "Finally, we add the missing hierarchies for each possible label.", "For time labels, we add the decade and the century.", "For location labels, we use Geopy (https://geopy.readthedocs.io/en/stable/) to identify the location and add missing hierarchies such as country and continent.", "We randomly select an equal number of images from each month, such that a total of about 20% of the images are assigned to development and test.", "On these images, we use two crowdsourcing tasks to (1) prune unanswerable images, and (2) verify the correctness of the labels.", "In the first task, we display a single image, and ask a worker to answer, without searching online, whether any person could guess the time and location of the image.", "We offer different hierarchies in the choices (date, year, decade, and century for time; exact location, city, country, and continent for location) so that workers can choose one of them.", "If the majority of workers agree that a human cannot infer the time or location from the image itself, we mark the corresponding label as null.", "Otherwise, if the majority of them agree on a certain hierarchy, we adjust the possible label to that specific hierarchy.", "Check", "step (c) in Figure 4 for the criteria and for positive and negative examples.", "The second task further verifies the correctness of the current time and location labels.", "Specifically, we provide the same image, but now including its caption, news headline, abstract, and extracted time and location labels.", "We ask the workers to verify whether the background event is the same as in the image, and whether the labels are correct after reading the additional information.", "We use the Semantic Role Labeling (SRL) model from AllenNLP (https://demo.allennlp.org/semantic-role-labeling) to detect the main verb in the image caption by selecting the verb with the most arguments, and mark it as the possible main event to provide to the workers.", "Detailed examples can be found in", "step (d) in Figure 4.", "We further select a small set of 30 interesting images, as shown in Figure 5, that are related to the most famous news happening after January 2021, the CLIP model date (see the CLIP model card at https://github.com/openai/CLIP/blob/main/model-card.md).", "This adversarial test set is specifically chosen to cover images unseen by baseline models, to better test their generalization rather than memorization.", "Additionally, regarding the human baseline, annotators need to have enough knowledge to extract and interpret the key evidence segments in order to reason about the answer.", "For instance, a person with an American cultural background who speaks English but not Hindi may find that Figure 1 is easier than Figure 2 for inferring the precise time and location, compared to a person with an Indian cultural background who speaks Hindi but not English, and vice versa.", "[Table 1: Dataset statistics for TARA and the additional WIT supervision. TARA before validation: train 12,306 / dev 1,644 / test 1,644 / all 15,652; TARA: train 12,306 / dev 1,552 / test 1,571 / all 15,429; WIT: train 61,325.]", 
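Returning to the label-hierarchy step described above, the sketch below expands a time label into all of its hierarchy levels. The exact label formats are assumptions based on the example given later in the paper (e.g. "1967-7-14" expands to 1967-7, 1967, 1960s, 20th century); the ordinal suffix is deliberately naive.

```python
# A sketch of "add missing hierarchies" for time labels; formats assumed.
def time_hierarchy(label):
    parts = label.split("-")  # [year, month, day], shorter if coarser
    year = int(parts[0])
    hierarchy = ["-".join(parts[:n]) for n in range(len(parts), 0, -1)]
    hierarchy.append(f"{year // 10 * 10}s")           # decade, e.g. "1960s"
    hierarchy.append(f"{year // 100 + 1}th century")  # naive ordinal suffix
    return hierarchy

# time_hierarchy("1967-7-14")
# -> ['1967-7-14', '1967-7', '1967', '1960s', '20th century']
```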
which contains 11.5M Wikipedia images and the surrounding paragraphs and captions.", "Since this dataset is much unorganized, we only select images in English Wikipedia articles, and apply two additional NER models (Lample et al., 2016; Peters et al., 2017) from AllenNLP 8 to select locations.", "We further use zero-shot CLIP model to prune unwanted image types.", "Specifically, we provide each image with text sentences in the format of a photo of [ type ], with type being photograph , map , paint , and paper , and retrieve the sentence with highest similarity score.", "We only keep images of type photograph , and use these as additional weak supervision.", "The benefit of adding this additional weak supervision is that it has a wider range of time and location labels than the NYT images, especially because that all the NYT images are taken from news between 2010 and 2021.", "Dataset statistics can be found in Table 1.", "TARA contains about 16K images from New York Times.", "After crowd-sourcing validation on development and testing, about 94% of the images that either has a valid location label or time label are kept, indicating that our training set can serve as a good weak supervision.", "In addition, TARA provides a 61K weak supervision dataset built upon WIT.", "Figure 6 shows the time and location distribution in TARA .", "We can see that most images are taken in North America, Asia, and Europe, between 2010 and 2021.", "This can be the effect of using NYT as image source.", "We assess the quality of our dataset through human annotation, and evaluate on existing visual reasoning approaches.", "As introduced in Section 3.3, an expert annotator works on our test set of interest to gain a better understanding of the human performance on TARA .", "The expert is not allowed to directly search the image online, but can search for anything else such as the keywords she/he infers from the image.", "The expert is presented with all the labels in the test set just as neural models.", "We use the state-of-the-art systems in machine reading comprehension for this task: CLIP (Rad-ford et al., 2021).", "CLIP is the state-of-the-art image representation model and has shown impressive progress on visually grounded language understanding tasks.", "Specifically, we use the ViT-B/32 model 9 for zero-shot classification and analysis.", "During prediction, the model is given a single image and needs to classify the correct label.", "We use a similar prompt template A photo taken in {la-bel}. 
"We compare the similarity between the image and each label prompt, and the highest one is the predicted label.", "We also add several variants of CLIP.", "The first is CLIP+, which is the zero-shot CLIP model fine-tuned on the NYT training data.", "Note that CLIP uses a contrastive loss to train on image and text pairs.", "We concatenate the time and location labels into a single text as the paired caption. [Figure 6: Label distribution in TARA, covering locations (North America, Asia, Europe, Africa, South America, Oceania, with countries such as the United States, China, India, Russia, Brazil and Australia) and times (2009-2021 and earlier decades back to the 1960s).]", "CLIP+Seg is another variant, where we first extract object and face segments, and then fine-tune the CLIP model on the whole images along with the segments, both with the time and location labels concatenated together as the final goal.", "As for object detection, we use the YOLOv5 method, specifically the yolov5s model.", "The intuition is that for objects such as an iPhone, the model benefits from being trained towards times later than 2010.", "We add a limit to the segments so that we only consider important objects that have a size larger than 50.", "We further restrict the number of people segments to no more than 3, since many of the images contain crowds and adding more people does not bring much additional information.", "As for face segments, we use the InsightFace (Guo et al., 2022) facial detection model.", "The intuition is that for famous people such as President Biden, we benefit from training the segments towards the location United States.", "During implementation, we also add a limit to the segments so that we only consider faces that have a size larger than 50, which are more likely to be the most important faces.", "CLIP+WIT is the variant of CLIP where we fine-tune on the training images along with the 61K weak supervision images extracted from WIT.", "We concatenate the possible time and location labels as the paired text.", "Two metrics are adopted in this work: Accuracy and Example-F1 (also known as the micro Dice coefficient), following previous studies (Shen et al., 2021).", "Accuracy is calculated without considering hierarchies: the predicted label needs to exactly match the gold label.", "In contrast, Example-F1 calculates the average F1 score over all hierarchy levels as follows: Example-F1 = (1/N) Σ_{i=1..N} 2 |L_i^true ∩ L_i^pred| / (|L_i^true| + |L_i^pred|) (1), where L_i^true (L_i^pred) is the true (model-predicted) hierarchical label set of image i.", "For example, if the true labels for an image are 1967-7-14 and Newark, New Jersey, United States, North America respectively, then its true hierarchical label sets are [Newark, New Jersey, United States, North America; United States, North America; North America] and [1967-7-14, 1967-7, 1967, 1960s, 20th century].", "In Table 2, we report the experimental results of the CLIP-based baselines on TARA.", "We can see that all of the models still have a large gap with human performance.", "Also, the object and facial segments boost the model to be the highest on location prediction, proving that segment-level reasoning is needed in this task.", 
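The Example-F1 metric of Eq. (1) is straightforward to implement. In this sketch, each example's label set is assumed to already contain all of its hierarchy levels, as in the paper's 1967-7-14 example; the function name is ours.

```python
# A direct implementation of Example-F1 (micro Dice averaged over examples).
def example_f1(true_sets, pred_sets):
    """Average of 2|T ∩ P| / (|T| + |P|) over all examples."""
    assert len(true_sets) == len(pred_sets)
    total = 0.0
    for T, P in zip(true_sets, pred_sets):
        T, P = set(T), set(P)
        total += 2 * len(T & P) / (len(T) + len(P)) if (T or P) else 1.0
    return total / len(true_sets)

# example_f1([["1967-7-14", "1967-7", "1967", "1960s", "20th century"]],
#            [["1967", "1960s", "20th century"]])
# -> 2*3 / (5+3) = 0.75
```

This also makes the observation below concrete: predicting only the two coarsest hierarchy levels correctly already yields an Example-F1 of roughly 40%, even when Accuracy is zero.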
"In contrast, adding the WIT weak supervision does not show consistent improvement or reduction on the performance.", "It can be due to that WIT images are not similar 1144 Model Accuracy Example-F1 CLIP 11.11 44.96 CLIP+ 15.72 49.74 CLIP+WIT 11.11 45.20 CLIP+Seg 16.46 50.52 Human 86.21 92.41 Model Accuracy Example-F1 CLIP 0.46 39.90 CLIP+ 1.00 43.09 CLIP+WIT 1.07 41.73 CLIP+Seg 0.92 42.82 Human 75.86 91.63 Table 2: Summary of the performance(%) for different baselines on the image location prediction (above) and time prediction (bottom).", "to news images, and that WIT images are mostly taken in older times than 2010, thus not providing enough supervision for our test set.", "There is also an obvious gap between the location prediction and time prediction, showing that temporal reasoning in vision language learning is much under explored and needs further research.", "Note that the Example-F1 value is consistently higher than accuracy because if the model predicts the highest two hierarchies correctly (e.g. century and decade), then it gets an Example-F1 around 40%.", "We perform qualitative and quantitative analysis of the baseline results to better understand the strengths and weaknesses of CLIP based models, and hypothesize avenues for future work.", "Specifically, we look into: model performance on test set of interest; effects on performance by using news abstract.", "Test Set of Interest Since we conduct human evaluation only on the test set of interest, we examine how models perform on this set and show the results in Table 3.", "Note that we use the same setting for the models and human experts both are given the entire test set labels.", "From the results, we observe a large gap between between the model performance and human performance, indicating that existing sota model still lacks a certain level of reasoning capability required to solve a such hard task as defined in the TARA dataset.", "Comparing the results in Table 3 to those in Table 2, we can see that there is little performance difference for each Model Accuracy Example-F1 CLIP 13.33 56.44 CLIP+ 13.33 58.67 CLIP+WIT 10.00 55.11 CLIP+Seg 23.33 63.11 Human 86.21 92.41 Model Accuracy Example-F1 CLIP 0.00 24.65 CLIP+ 0.00 26.49 CLIP+WIT 0.00 29.83 CLIP+Seg 3.33 24.43 Human 75.86 91.63 Table 3: Performance(%) of different baselines evaluated on the test set of interest for image location prediction (above) and time prediction (bottom).", "model, indicating that our human performance on the test set of interest can serve as a good reference to human performance on the whole test set, under the assumption that the annotators have enough knowledge about the key evidence segments.", "News Abstracts We also experiment with news abstracts being the classification goal instead of time and location labels given an image, under the assumption that models are given corresponding news abstract for each label.", "The intuition is that the news abstract might provide more descriptions that can map to several local segments, and thus providing additional information.", "Comparing the results shown in Table 4 to Table 2, we can see that providing news abstracts improves the performance a lot, despite that there is still a large gap with human performance.", "In this work, we introduce TARA , a new dataset and task for spatio-temporal grounding of images that requires open-ended joint reasoning with world knowledge.", "TARA provides a dataset of 16K high-quality images from NYT and Wikipedia-based supervision for additional 61K images.", 
"Compared to previous visual-language understanding datasets, TARA requires more complicated reasoning ability and existing state-of-the-art models such as CLIP 1145 are far from human levels, suggesting that our task remains a significant challenge with large room for improvement.", "We hope that TARA will inspire future work on reasoning beyond image's local segments in vision-language understanding.", "We collected data for TARA by downloading raw data from the official NYT API at https: //developer.nytimes.com .", "According to the Terms of Use at https://developer.", "nytimes.com/terms and NYTimes.com Terms of Service located at https://help.", "nytimes.com/hc/en-us/articles/ 115014893428-Terms-of-service , NYT granted us a license to access the NYT APIs and scrape their data.", "We ensure that our dataset has been collected in a manner which is consistent with the terms of use of NYTimes.", "We only release our dataset TARA for academic purpose.", "In order to retrieve the same raw data we scraped from the NYT API, multiple requests for months between January 1, 2010 and May 31, 2020 need to be made following the instructions at https://developer.nytimes.com/ docs/archive-product/1/overview .", "As introduced in Section 3.2, we annotated the data using crowd-workers through Amazon Mechanical Turk.", "They are voluntary participants who were aware of any risks of harm associated with their participation.", "We require the workers to be located in either Australia, Canada, Great Britain or the United States such that they are English speakers.", "We also require the workers to have HIT Approval Rate (%) for all Requesters' HITs greater than or equal to 98%.", "All crowd-workers were compensated by a fair wage determined by estimating the average completing time of each annotation task.", "Each worker earn $2.4 per 10 queries and each query should take less than a minute to annotate.", "Example screenshots of the NYT data and our annotation interface can be found in Appendix A. Acknowledgments This research is based upon work supported in part by the office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600006 under the BETTER Program, by Contracts FA8750-19-2-1004 and FA8750-19-2-0201 with the US Defense Advanced Research Projects Agency (DARPA), and by a grant from the Allen Institute for Artificial Intelligence (allenai.org).", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, the Department of Defense, or the U.S. Government." ]
[ "result", "result", "method", "method", "objective", "objective", "other", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "method", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "abstain", "other", "other", "other", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain" ]
[ "We present a new challenging stance detection dataset, called Will-They-Won't-They 1 ( WT WT ), which contains 51,284 tweets in English, making it by far the largest available dataset of the type.", "All the annotations are carried out by experts; therefore, the dataset constitutes a high-quality and reliable benchmark for future research in stance detection.", "Our experiments with a wide range of recent state-of-the-art stance detection systems show that the dataset poses a strong challenge to existing models in this domain.", "The entire dataset is released for future research 2 .", "Apart from constituting an interesting task on its own, stance detection has been identified as a crucial sub-step towards many other NLP tasks (Mo-hammad et al., 2017).", "In fact, stance detection is the core component of fake news detection (Pomer-leau and Rao, 2017), fact-checking (Vlachos and Riedel, 2014; Baly et al., 2018), and rumor verification (Zubiaga et al., 2018b).", "Despite its importance, stance detection suffers from the lack of a large dataset which would allow for reliable comparison between models.", "We aim at filling this gap by presenting Will-They-Won't-They ( WT WT ), a large dataset of English tweets targeted at stance detection for the rumor verification task.", "We constructed the dataset based on tweets, since Twitter is a highly relevant platform for rumour verification, which is popular with the public as well as politicians and enterprises (Gor-rell et al., 2019).", "To make the dataset representative of a realistic scenario, we opted for a real-world application 1 https://en.wiktionary.org/wiki/will-they-won%27t-they 2 https://github.com/cambridge-wtwt/ acl2020-wtwt-tweets of the rumor verification task in finance.", "Specifi-cally, we constructed the dataset based on tweets that discuss mergers and acquisition (M&A) operations between companies.", "M&A is a general term that refers to various types of financial transactions in which the ownership of companies are transferred.", "An M&A process has many stages that range from informal talks to the closing of the deal.", "The discussions between companies are usually not publicly disclosed during the early stages of the process (Bruner and Perella, 2004; Piesse et al., 2013).", "In this sense, the analysis of the evolution of opinions and concerns expressed by users about a possible M&A deal, from its early stage to its closing (or its rejection) stage, is a process similar to rumor verification (Zubiaga et al., 2018a).", "Moreover, despite the wide interest, most research in the intersection of NLP and finance has so far focused on sentiment analysis, text mining and thesauri/taxonomy generation (Fisher et al., 2016; Hahn et al., 2018; El-Haj et al., 2018).", "While sentiment (Chan and Chong, 2017) and targeted-sentiment analysis (Chen et al., 2017) have an undisputed importance for analyzing financial markets, research in stance detection takes on a crucial role: in fact, being able to model the market's perception of the merger might ultimately contribute to explaining stock price re-valuation.", "We make the following three contributions.", "Firstly, we construct and release WT WT , a large, expert-annotated Twitter stance detection dataset.", "With its 51,284 tweets, the dataset is an order of magnitude larger than any other stance detection dataset of user-generated data, and could be used to train and robustly compare neural models.", "To our knowledge, this is the first resource for stance in the financial 
domain.", "Secondly, we demonstrate the utility of the WT WT dataset by evaluating 11 competitive and state-of-the-art stance detection models on our benchmark.", "Thirdly, we annotate a further M&A Buyer Target Outcome CVS _ AETCVS Health Aetna Succeeded CI _ ESRX Cigna Express Scripts Succeeded ANTM _ CI Anthem Cigna Blocked AET _ HUM Aetna Humana Blocked DIS _ FOXA Disney 21st Century Fox Succeeded Table 1: Considered M&A operations.", "M&A operation in the entertainment domain; we investigate the robustness of best-performing models on this operation, and show that such systems struggle even over small domain shifts.", "The entire dataset is released to enable research in stance detection and domain adaptation.", "We consider five recent operations, 4 in the healthcare and 1 in the entertainment industry (Table 1).", "For each operation, we used Selenium 3 to retrieve IDs of tweets with one of the following sets of keywords: mentions of both companies' names or acronyms, and mentions of one of the two companies with a set of merger-specific terms (refer to Appendix A.1 for further details).", "Based on historically available information about M&As, we sampled messages from one year before the proposed merger's date up to six months after the merger took place.", "Finally, we obtain the text of a tweet by crawling for its ID using Tweepy 4 .", "The annotation process was preceded by a pilot annotation, after which the final annotation guidelines were written in close collaboration with three domain experts.", "We followed the convention in Twitter stance detection (Mohammad et al., 2017) and considered three stance labels: support , refute and comment .", "We also added an unrelated tag, obtaining the following label set:", "1. Support: the tweet is stating that the two companies will merge.", "[ CI _ ESRX ] Cigna to acquire Express Scripts for $52B in health care shakeup via usatoday 3 www.seleniumhq.org 4 www.tweepy.org/", "2. Refute: the tweet is voicing doubts that the two companies will merge.", "[ AET _ HUM ] Federal judge rejects Aetna's bid to buy Louisville-based Humana for $34 billion", "3. Comment: the tweet is commenting on merger, neither directly supporting, nor refuting it.", "[ CI _ ESRX ] Cigna-Express Scripts deal unlikely to benefit consumers", "4. Unrelated: the tweet is unrelated to merger.", "[ CVS _ AET ] Aetna Announces Accountable Care Agreement with Weill Cornell Physicians The obtained four-class annotation schema is similar to those in other corpora for news stance detection (Hanselowski et al., 2018; Baly et al., 2018).", "Note that, depending on the given target, the same sample can receive a different stance label: Merger hopes for Aetna-Humana remain, An-them-Cigna not so much.", "[ AET _ HUM ] support [ ANTM _ CI ] refute As observed in Mohammad et al. 
(2017), stance detection is different from but closely related to targeted sentiment analysis, which considers the emotions conveyed in a text (Alhothali and Hoey, 2015).", "To highlight this subtle difference, consider the following sample: [CVS_AET] #Cancer patients will suffer if @CVSHealth buys @Aetna CVS #PBM has resulted in delays in therapy, switches, etc all documented.", "Terrible!", "While its sentiment towards the target operation is negative (the user believes that the merger will be harmful for patients), following the guidelines, its stance should be labeled as comment: the user is talking about the implications of the operation, without expressing an orientation as to whether the merger will happen (or not).", "Refer to Appendix A.2 for a detailed description of the four considered labels.", "During the annotation process, each tweet was independently labeled by 2 to 6 annotators.", "Ten experts in the financial domain were employed as annotators (see footnote 5).", "Annotators received tweets in batches of 2,000 samples at a time, and were asked to annotate no more than one batch per week.", "The entire annotation process lasted 4 months.", "(Footnote 5: two MPhil students, six PhD students and two lecturers at the Faculty of Economics of the University of Cambridge. Table 2, label distribution across the M&A operations of Table 1, four mergers in the healthcare domain (33,090 tweets) and one merger in the entertainment domain; counts and percentages per operation, in the order CVS_AET, CI_ESRX, ANTM_CI, AET_HUM, DIS_FOXA: support 2,469 (21.24%), 773 (30.58%), 970 (8.78%), 1,038 (13.14%), 1,413 (7.76%); refute 518 (4.45%), 253 (10.01%), 1,969 (17.82%), 1,106 (14.00%), 378 (2.07%); comment 5,520 (47.49%), 947 (37.47%), 3,098 (28.05%), 2,804 (35.50%), 8,495 (46.69%); unrelated 3,115 (26.80%), 554 (21.92%), 5,007 (45.33%), 2,949 (37.34%), 7,908 (43.46%); total 11,622, 2,527, 11,044, 7,897, 18,194.) In case of disagreement, the gold label was obtained through", "majority vote, discarding samples where this was not possible (0.2% of the total).", "The average Cohen's κ between the annotator pairs is 0.67 (see footnote 6), which is substantial (Cohen, 1960).", "To estimate the quality of the obtained corpus, a further domain expert labeled a random sample of 3,000 tweets, which were used as a human upper bound for evaluation (Table 4).", "Cohen's κ between those labels and the gold is 0.88.", "This is well above the agreement obtained in previously released datasets where crowd-sourcing was used (the agreement scores reported, in terms of percentage, range from 63.7% (Derczynski et al., 2017) to 79.7% (Inkpen et al., 2017)).", "Support-comment samples constitute the most common source of disagreement between annotators: this might indicate that such samples are the most subjective to discriminate, and might also contribute to explaining the high number of misclassifications between those classes which have been observed in other research efforts on stance detection (Hanselowski et al., 2018).", "Moreover, w.r.t. stance datasets where unrelated samples were randomly generated (Pomerleau and Rao, 2017; Hanselowski et al., 2018), we report a slightly (Footnote 6: the average was weighted by the number of samples annotated by each pair;", "the standard deviation of the scores between single annotator pairs is 0.074.)", "higher disagreement between unrelated and comment samples, indicating that our task setting is more challenging.", "The distribution of obtained labels for each operation is reported in Table", "2. 
Differences in label distribution between events are usual, and have been observed in other stance corpora (Mohammad et al., 2016a; Kochkina et al., 2018).", "For most operations, there is a clear correlation between the relative proportion of refuting and supporting samples and the merger being approved or blocked by the US Department of Justice.", "Commenting tweets are more frequent than supporting ones across all operations: this is in line with previous findings in financial microblogging (Žnidaršič et al., 2018).", "The first dataset for Twitter stance detection collected 4,870 tweets on 6 political events (Mohammad et al., 2016a) and was later used in SemEval-2016 (Mohammad et al., 2016b).", "Using the same annotation schema, Inkpen et al. (2017) released a corpus on the 2016 US election annotated for multi-target stance.", "In the scope of PHEME, a large project on rumor resolution (Derczynski and Bontcheva, 2014), Zubiaga et al. (2015) stance-annotated 325 conversational trees discussing 9 breaking news events.", "The dataset was used in RumourEval 2017 (Derczynski et al., 2017) and was later extended with 1,066 tweets for RumourEval 2019 (Gorrell et al., 2019).", "Following the same procedure, Aker et al. (2017) annotated 401 tweets on mental disorders (Table 3).", "This makes the proposed dataset by far the largest publicly available dataset for stance detection on user-generated data.", "In contrast with Mohammad et al. (2016a), Inkpen et al. (2017) and (Table 4: Results on the healthcare operations in the WT-WT dataset; macro F1 across healthcare operations, then average per-class accuracy. Columns: CVS_AET, CI_ESRX, ANTM_CI, AET_HUM, avg F1, avg-w F1, sup, ref, com, unr. SVM: 51.0, 51.0, 65.7, 65.0, 58.1, 58.5, 54.5, 43.9, 41.2, 88.4. MLP: 46.5, 46.6, 57.6, 59.7, 52.6, 52.7, 55.7, 40.3, 48.6, 68.1. EmbAvg: 50.4, 51.9, 50.4, 58.9, 52.9, 52.3, 55.2, 50.5, 52.7, 67.4. CharCNN: 49.6, 48.3, 65.6, 60.9, 56.1, 56.8, 55.5, 44.2, 41.6, 82.1. WordCNN: 46.3, 39.5, 56.8, 59.4, 50.5, 51.7, 62.9, 37.0, 31.0, 71.7. BiCE: 56.5, 52.5, 64.9, 63.0, 59.2, 60.1, 61.0, 48.7, 45.1, 79.9. CrossNet: 59.1, 54.5, 65.1, 62.3, 60.2, 61.1, 63.8, 48.9, 50.5, 75.8. SiamNet: 58.3, 54.4, 68.7, 67.7, 62.2, 63.1, 67.0, 48.0, 52.5, 78.3. CoMatchAtt: 54.7, 43.8, 50.8, 50.6, 49.9, 51.6, 71.9, 24.4, 33.7, 65.9. TAN: 56.0, 55.9, 66.2, 66.7, 61.2, 61.3, 66.1, 49.0, 51.7, 74.1. HAN: 56.4, 57.3, 66.0, 67.3, 61.7, 61.7, 67.6, 52.0, 55.2, 69.1. mean: 53.1, 50.5, 61.6, 62.0, 61.9, 44.2, 45.8, 74.6. upper bound: 75.3, 71.2, 74.4, 73.7, 74.7, 75.2, 80.5, 89.6, 71.8, 84.0.)", "PHEME, where crowd-sourcing was used, only highly skilled domain experts were involved in the annotation process of our dataset.", "Moreover, previous work on stance detection focused on a relatively narrow range of mainly political topics: in this work, we widen the spectrum of considered domains in stance detection research with a new financial dataset.", "For these reasons, the WT-WT dataset constitutes a high-quality and robust benchmark for the research community to train and compare the performance of models and their scalability, as well as for research on domain adaptation.", "Its large size also allows for pre-training of models before moving to domains with data scarcity.", "We re-implement 11 architectures recently proposed for stance detection.", "Each system takes as input a tweet and the related target, represented as a string with the two considered companies.", "A detailed description of the models, with references to the original papers, can be found in Appendix B.1.", "Each architecture produces a single vector representation h for each input sample.", "Given h, we predict y with a 
softmax operation over the 4 considered labels.", "We perform common preprocessing steps, such as URL and username normalization (see Appendix B.2).", "All hyper-parameters are listed in Appendix B.1 for replication.", "In order to allow for a fair comparison between models, they are all initialized with GloVe embeddings pretrained on Twitter (https://nlp.stanford.edu/projects/; Pennington et al., 2014), which are shared between tweets and targets and kept fixed during training.", "Results of experiments are reported in Table", "4. Despite its simple architecture, SiamNet obtains the best performance in terms of both averaged and weighted-averaged F1 scores.", "In line with previous findings (Mohammad et al., 2017), the SVM model constitutes a very strong and robust baseline.", "The relative gains in performance of CrossNet w.r.t. BiCE, and of HAN w.r.t. TAN, consistently reflect results obtained by such models on the SemEval 2016-Task 6 corpus (Xu et al., 2018; Sun et al., 2018).", "Moving to single-label classification, analysis of the confusion matrices shows a relevant number of misclassifications between the support and comment classes.", "Those classes have been found difficult to discriminate in other datasets as well (Hanselowski et al., 2018).", "The presence of linguistic features, as in the HAN model, may help in spotting the nuances in the tweet's argumentative structure which allow for its correct classification.", "This may hold true also for the refute class, the least common and most difficult to discriminate.", "Unrelated samples in WT-WT could be about the involved companies, but not about their merger: this makes classification more challenging than in datasets containing randomly generated unrelated samples (Pomerleau and Rao, 2017).", "SVM and CharCNN obtain the best performance on unrelated samples: this suggests the importance of character-level information, which could be better integrated into future architectures.", "Concerning single operations, CVS_AET and CI_ESRX have the lowest average performance across models.", "This is consistent with the higher disagreement among annotators for the two mergers.", "We investigate the robustness of SiamNet, the best model in our first set of experiments, and BiCE, which constitutes a simpler neural baseline (Section 3.2), over domain shifts with a cross-domain experiment on an M&A event in the entertainment business.", "Data.", "We collected data for the Disney-Fox (DIS_FOXA) merger and annotated them with the same procedure as in Section 2, resulting in a total of 18,428 tweets.", "The obtained distribution is highly skewed towards the unrelated and comment classes (Table 2).", "This could be due to the fact that users are more prone to digress and joke when talking about the companies behind their favorite shows than when considering their health insurance providers (see Appendix A.2).", "We train on all healthcare operations and test on DIS_FOXA (and the contrary), considering a 70-15-15 split between train, development and test sets for both sub-domains.", "Results show SiamNet consistently outperforming BiCE.", "The consistent drop in performance according to both accuracy and macro-avg F1 score, which is observed in all classes but particularly evident for commenting samples, indicates strong domain dependency and room for future research.", "We presented WT-WT, a large expert-annotated dataset for stance detection with over 50K labeled tweets.", "Our experiments with 11 strong models indicated a 
consistent (>10%) performance gap between the state of the art and the human upper bound, which shows that WT-WT constitutes a strong challenge for current models.", "Future research directions might explore the usage of transformer-based models, as well as of models which exploit not only linguistic but also network features, which have been proven to work well for existing stance detection datasets (Aldayel and Magdy, 2019).", "Also, the multi-domain nature of the dataset enables future research in cross-target and cross-domain adaptation, a clear weak point of current models according to our evaluations.", "We thank the anonymous reviewers of this paper for their efforts and for the constructive comments and suggestions.", "We gratefully acknowledge funding from the Keynes Fund, University of Cambridge (grant no. JHOQ).", "CC is grateful to NERC DREAM CDT (grant no. 1945246) for partially funding this work.", "CG and FT are thankful to the Cambridge Endowment for Research in Finance (CERF)." ]
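A small sketch of the agreement computation described above, per-pair Cohen's kappa averaged with weights given by how many samples each annotator pair labeled, follows; the toy label sequences are invented purely for illustration.

```python
# Weighted-average Cohen's kappa across annotator pairs, as described above.
import numpy as np
from sklearn.metrics import cohen_kappa_score

LABELS = ["support", "refute", "comment", "unrelated"]

# One (labels_a, labels_b) tuple per annotator pair; toy data for illustration.
pair_annotations = [
    (["support", "comment", "unrelated"], ["support", "comment", "comment"]),
    (["refute", "refute", "comment", "support"],
     ["refute", "comment", "comment", "support"]),
]

kappas = np.array([cohen_kappa_score(a, b, labels=LABELS)
                   for a, b in pair_annotations])
weights = np.array([len(a) for a, _ in pair_annotations], dtype=float)

print(f"weighted average kappa: {np.average(kappas, weights=weights):.2f}")
print(f"std across pairs: {kappas.std():.3f}")
```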
[ "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "In this work, we demonstrate that the contextualized word vectors derived from pretrained masked language model-based encoders share a common, perhaps undesirable pattern across layers.", "Namely, we find cases of persistent outlier neurons within BERT and RoBERTa's hidden state vectors that consistently bear the smallest or largest values in said vectors.", "In an attempt to investigate the source of this information, we introduce a neuron-level analysis method, which reveals that the outliers are closely related to information captured by positional embeddings.", "We also pre-train the RoBERTa-base models from scratch and find that the outliers disappear without using positional embeddings.", "These outliers, we find, are the major cause of anisotropy of encoders' raw vector spaces, and clipping them leads to increased similarity across vectors.", "We demonstrate this in practice by showing that clipped vectors can more accurately distinguish word senses, as well as lead to better sentence embeddings when mean pooling.", "In three supervised tasks, we find that clipping does not affect the performance.", "A major area of NLP research in the deep learning era has concerned the representation of words in low-dimensional, continuous vector spaces.", "Traditional methods for achieving this have included word embedding models such as Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and FastText (Bojanowski et al., 2017).", "However, though influential, such approaches all share a uniform pitfall in assigning a single, static vector to a word type.", "Given that the vast majority of words are polysemous (Klein and Murphy, 2001), static word embeddings cannot possibly represent a word's changing meaning in context.", "In recent years, deep language models, like ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b), have achieved great success across many NLP tasks.", "Such models introduce a new type of word vectors, deemed the contextualized variety, where the representation is computed with respect to the context of the target word.", "Since these vectors are sensitive to context, they can better address the polysemy problem that hinders traditional word embeddings.", "Indeed, studies have shown that replacing static embeddings (e.g. word2vec) with contextualized ones (e.g. 
BERT) can benefit many NLP tasks, including constituency parsing (Kitaev and Klein, 2018), coreference resolution (Joshi et al., 2019) and machine translation (Liu et al., 2020).", "However, despite the major success in deploying these representations across linguistic tasks, there remains little understanding about the information embedded in contextualized vectors and the mechanisms that generate them.", "Indeed, an entire research area central to this core issue, the interpretability of neural NLP models, has recently emerged (Linzen et al., 2018, 2019; Alishahi et al., 2020).", "A key theme in this line of work has been the use of linear probes in investigating the linguistic properties of contextualized vectors (Tenney et al., 2019; Hewitt and Manning, 2019).", "Such studies, among many others, show that contextualization is an important factor that sets these embeddings apart from static ones, the latter of which are unreliable in extracting features central to context or linguistic hierarchy.", "Nonetheless, much of this work likewise fails to engage with the raw vector spaces of language models, preferring instead to focus its analysis on the transformed vectors.", "Indeed, the fraction of work that has done the former has shed some curious insights: that untransformed BERT sentence representations still lag behind word embeddings across a variety of semantic benchmarks (Reimers and Gurevych, 2019) and that the vector spaces of language models are explicitly anisotropic (Ethayarajh, 2019; Li et al., 2020a).", "Certainly, an awareness of the patterns inherent to models' untransformed vector spaces, even if shallow, can only benefit the transformation-based analyses outlined above.", "In this work, we shed light on a persistent pattern that can be observed for contextualized vectors produced by BERT and RoBERTa.", "Namely, we show that, across all layers, select neurons in BERT and RoBERTa consistently bear extremely large values.", "We observe this pattern across vectors for all words in several datasets, demonstrating that these singleton dimensions serve as major outliers to the distributions of neuron values in both encoders' representational spaces.", "With this insight in mind, the contributions of our work are as follows: 1. We introduce a neuron-level method for analyzing the origin of a model's outliers.", "Using this, we show that they are closely related to positional information.", "2. In investigating the effects of clipping the outliers (zeroing-out), we show that the degree of anisotropy in the vector space diminishes significantly.", "3. 
We show that after clipping the outliers, the BERT representations can better distinguish between a word's potential senses in the word-in-context (WiC) dataset (Pilehvar and Camacho-Collados, 2019), as well as lead to better sentence embeddings when mean pooling.", "In this section, we demonstrate the existence of large-valued vector dimensions across nearly all tokens encoded by BERT and RoBERTa.", "To illustrate these patterns, we employ two well-known datasets, SST-2 (Socher et al., 2013) and QQP.", "SST-2 (60.7k sentences) is a widely-employed sentiment analysis dataset of movie reviews, while QQP (727.7k sentences) is a semantic textual similarity dataset of Quora questions, which collects questions across many topics.", "We choose these datasets in order to account for a reasonably wide distribution of domains and topics, but note that (Footnote 1: https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs. Figure 1: Average vectors for each layer of BERT-base.)", "any dataset would illustrate our findings well.", "We randomly sample 10k sentences from the training sets of both SST-2 and QQP, tokenize them, and encode them via BERT-base and RoBERTa-base.", "All models are downloaded from the Huggingface Transformers Library (Wolf et al., 2020), though we replicated our results for BERT by loading the provided model weights via our own loaders.", "When discounting the input embedding layers of each model, we are left with 3.68M and 3.59M contextualized token embeddings for BERT-base and RoBERTa-base, respectively.", "In order to illustrate the outlier patterns, we average all subword vectors for each layer of each model.", "In examining BERT-base, we find that the minimum value of 96.60% of vectors lies in the 557th dimension.", "Figure 1 displays the averaged subword vectors for each layer of BERT-base, corroborating that these patterns exist across all layers.", "For RoBERTa-base, we likewise find that the maximum value of all vectors is the 588th element.", "Interestingly, the minimum element of 88.19% of vectors in RoBERTa-base is the 77th element, implying that RoBERTa has two such outliers.", "Figure 2 displays the average vectors for each layer of RoBERTa-base.", "Our observations here reveal a curious pattern that is present in the base versions of BERT and RoBERTa.", "We also corroborate the same findings for the large and distilled (Sanh et al., 2020) variants of these architectures, which can be found in Appendix A. 
Indeed, it would be difficult to reach any sort of conclusion about the representational geometry of such models without understanding the outliers' origin(s).", "In this section, we attempt to trace the source of the outlier dimensions in BERT-base and RoBERTa-base (henceforth BERT and RoBERTa).", "Similarly to the previous section, we can corroborate the results of the experiments described here (as well as in the remainder of the paper) for the large and distilled varieties of each respective architecture.", "Thus, for reasons of brevity, we focus our forthcoming analyses on the base versions of BERT and RoBERTa and include results for the remaining models in Appendix B.2 for the interested reader.", "In our per-layer analysis in §2, we report that outlier dimensions exist across every layer in each model.", "Upon a closer look at the input layer (which features a vector sum of positional, segment, and token embeddings), we find that the same outliers also exist in positional embeddings.", "Figure 3 shows that the 1st positional embedding of BERT has two such dimensions, where the 557th element is likewise the minimum.", "Interestingly, this pattern does not exist in other positional embeddings, nor in segment or token embeddings.", "Furthermore, Figure 4 shows that the 4th positional embedding of RoBERTa has four outliers, which include the aforementioned 77th and 588th dimensions.", "We also find that, from the 4th position to the final position, the maximum element of 99.", "8% of positional embeddings is the 588th element.", "Digging deeper, we observe similar patterns in the Layer Normalization (LN; Ba et al., 2016) parameters of both models.", "Recall that LN has two learnable parameters, gain (γ) and bias (β), both of which are 768-dimensional vectors (in the case of the base models).", "These are designed as an affine transformation over dimension-wise normalized vectors. (Figure 3: The first positional embedding of BERT-base.)", "Like most normalization strategies, this is intended to improve the vectors' expressive ability and to aid in optimization.", "Every layer of BERT and RoBERTa applies separate LNs post-attention and pre-output.", "For BERT, the 557th element of the γ vector is always among the top-6 largest values for the first ten layers' first LN.", "Specifically, it is the largest value in the first three layers.", "For RoBERTa, the 588th element of the first LN's γ vector is always among the top-2 largest values for all layers; it is the largest in the first five layers.", "Furthermore, the 77th element of the second LN's γ vector is among the top-7 largest values from the second to the tenth layer.", "It is reasonable to conclude that, after the vector normalization performed by LN, the outliers observed in the raw embeddings are lost.", "We hypothesize that these particular neurons are somehow important to the network, such that they are retained after scaling the normalized vectors by the affine transformation involving γ and β.", "Indeed, we observe that, in BERT, only the 1st position's embedding has such an outlier.", "However, it is subsequently observed in every layer and token after the first LN is applied.", "Since LayerNorm is trained globally and is not token specific, it happens to rescale every vector such that the positional information is retained.", "We corroborate this by observing that all vectors share the same γ and β.", "This effectively guarantees the presence of outliers in the 1st layer, which are then propagated upward by means of the Transformer's residual connection (He et al., 
2015).", "Also, it is important to note that, in the case of BERT, the first position's embedding is directly tied to the requisite [CLS] token, which is prepended to all sequences as part of the MLM training objective.", "This has been recently noted to affect e.g. attention patterns, where much of the probability mass is distributed to this particular token alone, despite it bearing the smallest norm among all other vectors in a given layer and head (Kobayashi et al., 2020).", "Neuron-level analysis In order to test the extent to which BERT and RoBERTa's outliers are related to positional information, we employ a probing technique inspired by Durrani et al. (2020).", "First, we train a linear probe W RM N without bias to predict the position of a contextualized vector in a sentence.", "In Durrani et al. (2020), the weights of the classifier are employed as a proxy for selecting the most relevant neurons to the prediction.", "In doing so, they assume that, the larger the absolute value of the weight, the more important the corresponding neuron.", "However, this method disregards the magnitudes of the values of neurons, as a large weights do not necessarily imply that the neuron has high contribution to the final classification result.", "For example, if the value of a neuron is close to zero, a large weight also leads to a small contribution.", "In order to address this issue, we define the contribution of the i th neuron as c ( i ) = abs ( w i v i ) for i = 1 , 2 , 3 , ..., n , where w i is the i th weight and v i is the i th neuron in the contextualized word vector.", "We name C = [ c (1) , c (2) , ..., c ( n )] as a contribution vector.", "If a neuron has a high contribution, this means that this neuron is highly relevant to the final classification result.", "We train, validate, and test our probe on the splits provided in the SST-2 dataset (as mentioned in 2, we surmise that any dataset would be adequate for demonstrating this).", "The linear probe is a 768 300 matrix, which we train separately for each layer.", "Since all SST-2 sentences are shorter than 300 tokens in length, we set M = 300 .", "We use a batch size of 128 and train for 10 epochs with a categorical cross-entropy loss, optimized by Adam (Kingma and Ba, 2017).", "Figure 5a shows that, while it is possible to decode positional information from the lowest three layers with almost perfect accuracy, much of this information is gradually lost higher up in the model.", "Furthermore, it appears that the higher layers of RoBERTa contain more positional information than BERT.", "Looking at Figure 5b, we see that BERT's outlier neuron has a higher contribution in position prediction than the average contribution of all neurons.", "We also find that the contribution values of the same neuron are the highest in all layers.", "Combined with the aforementioned pattern of the first positional embedding, we can conclude that the 557 th neuron is related to positional information.", "Likewise, for RoBERTa, Figure 5c shows that the 77 th and 588 th neurons have the highest contribution for position prediction.", "We also find that the contribution values of the 588 th neurons are always largest for all layers, which implies that these neurons are likewise related to positional information.", "2 Removing positional embeddings In order to isolate the relation between outlier neurons and positional information, we pre-train two RoBERTa-base models (with and without positional embeddings) from scratch using Fairseq (Ott et al., 2019).", 
"Our pre-training data is the English Wikipedia Corpus 3 , where we train for 200k steps with a batch size of 256, optimized by Adam.", "All models share the same hyper-parameters, which are listed in the Appendix C.1.", "We use four NVIDIA A100 GPUs to pre-train each model, costing about 35 hours per model.", "We find that, without the help of positional embeddings, the validation perplexity of RoBERTa-base is very high at 354.0, which is in line with Lee et al. (2019)'s observation that the self-attention mechanism of Transformer Encoder is order-invariant.", "In other words, the removal of PEs from RoBERTa-base makes it a bag-of-word model, whose outputs do not contain any positional information.", "In contrast, the perplexity of RoBERTa equipped with standard positional embeddings is much lower at 4.3, which is likewise expected.", "We also use heatmaps to show the contribution values in Appendix B.1.", "3 We randomly select 158.4M sentences for training and 50k sentences for validation.", "(a) Accuracy of position prediction.", "In examining outlier neurons, we employ the same datasets detailed in 2.", "For the RoBERTa-base model with PEs, we find that the maximum element of 82.56% of all vectors is the 81 st dimension 4 , similarly to our findings above.", "However, we do not observe the presence of such outlier neurons in the RoBERTa-base model without PEs, which indicates that the outlier neurons are tied directly to positional information.", "Similar to 2, we display the averaged subword vectors for each layer of our models in Appendix C.2, which also corroborate our results.", "In 3, we demonstrated that outlier neurons are related to positional information.", "In this section, we investigate the effects of zeroing out these dimensions in contextualized vectors, a process which we refer to as clipping.", "Anisotropy Ethayarajh (2019) observe that contextualized word vectors are anisotropic in all non-input layers, which means that the average cosine similarity between uniformly randomly sampled words is close to 1. 
To corroborate this finding, we randomly sample 2000 sentences from the SST-2 training set and create 1000 sentence pairs.", "Then, we randomly select a token in each sentence, discarding all other tokens.", "This effectively sets the correspondence between the two sentences to two tokens instead.", "Following this, we compute the cosine similarity between these two tokens to measure the anisotropy of contextualized vectors.", "In the left plot of Figure 6, we can see that contextualized representations of BERT and RoBERTa are more anisotropic in higher layers.", "This is especially true for RoBERTa, where the average cosine similarity between random words is larger than 0.5 after the first non-input layer.", "(Footnote 4: Different initializations make our models have different outlier dimensions.)", "This implies that the internal representations in BERT and RoBERTa occupy a narrow cone in the vector space.", "Since outlier neurons tend to be valued higher or lower than all other contextualized vector dimensions, we hypothesize that they are the main culprit behind the degree of observed anisotropy.", "To verify our hypothesis, we clip BERT and RoBERTa's outliers by setting each neuron's value to zero.", "The left plot in Figure 6 shows that, after clipping the outliers, their vector spaces become close to isotropic.", "Self-similarity: In addition to remarking upon the anisotropic characteristics of contextualized vector spaces, Ethayarajh (2019) introduce several measures to gauge the extent of contextualization inherent in models.", "One such metric is self-similarity, which the authors employ to compare the similarity of a word's internal representations in different contexts.", "Given a word $w$ and $n$ different sentences $s_1, s_2, \dots, s_n$ which contain the word, $f_l^i(w)$ is the internal representation of $w$ in sentence $s_i$ in the l-th layer.", "The average self-similarity of $w$ in the l-th layer is then defined as: $\mathrm{SelfSim}_l(w) = \frac{\sum_{i=1}^{n} \sum_{j=i+1}^{n} \cos\left(f_l^i(w), f_l^j(w)\right)}{n(n-1)}$ (1) Intuitively, a self-similarity score of 1 indicates that no contextualization is being performed by the model (e.g. 
static word embeddings), while a score of 0 implies that representations for a given word are maximally different given various contexts.", "To investigate the effect of outlier neurons on a model's self-similarity, we sample 1000 different words from the SST-2 training set, all of which appear in at least 10 different sentences.", "(Figure 6, left: anisotropy measurement of contextualized word vectors in BERT and RoBERTa before and after clipping the outlier dimensions.", "Figure 6, right: self-similarity measurement of BERT and RoBERTa before and after clipping.)", "We then compute the average self-similarity of these words as contextualized by BERT and RoBERTa before and after clipping the outliers.", "To adjust for the effect of anisotropy, we subtract the self-similarity from each layer's anisotropy measurement, as in Ethayarajh (2019).", "The right plot in Figure 6 shows that, similarly to the findings in Ethayarajh (2019), a word's self-similarity is highest in the lower layers, but decreases in higher layers.", "Crucially, we also observe that, after clipping the outlier dimensions, the self-similarity increases, indicating that vectors become closer to each other in the contextualized space.", "This bears some impact on studies attempting to characterize the vector spaces of models like BERT and RoBERTa, as it is clearly possible to overstate the degree of contextualization without addressing the effect of positional artefacts.", "Bearing in mind the findings of the previous section, we now turn to the question of word sense, as captured by contextualized embeddings.", "Suppose that we have a target word $w$, which appears in two sentences.", "$w$ has the same sense in these two sentences, but its contextualized representations are not identical due to the word appearing in (perhaps slightly) different contexts.", "In the previous few sections, we showed that outlier neurons are related to positional information and that clipping them can make a word's contextualized vectors more similar.", "Here, we hypothesize that clipping such dimensions can likewise aid in intrinsic semantic tasks, like differentiating senses of a word.", "To test our hypothesis, we analyze contextualized vectors using the word-in-context (WiC) dataset (Pilehvar and Camacho-Collados, 2019), which is designed to identify the meaning of words (Table 1, the best accuracy scores on the WiC dataset: Baseline, 50.0%; before clipping: BERT, layer 7, threshold 0.7, 67.5%; RoBERTa, layer 10, threshold 0.9, 69.0%; after clipping: BERT-clip, layer 10, threshold 0.5, 68.4%; RoBERTa-clip, layer 11, threshold 0.6, 69.9%)", "in different contexts.", "WiC is a binary classification task, where, given a target word and two sentences which contain it, models must determine whether the word has the same meaning across the two sentences.", "In order to test how well we can identify differences in word senses using contextualized vectors, we compute the cosine similarity between contextualized vectors of target words across pairs of sentences, as they appear in the WiC dataset.", "If the similarity value is larger than a specified threshold, we assign the true label to the sentence pair; otherwise, we assign the false label.", "We use this method to compare the accuracy of BERT and RoBERTa on WiC before and after clipping the outliers.", "Since this method does not require any training, we test our models on the WiC training dataset.", "We compare 9 different thresholds from 0.1 to 0.9, as well as a simple baseline model that assigns the true label to all samples.", "Footnote 5: The WiC test set does not 
provide labels, and the size of the validation set is too small (638 sentence pairs).", "We thus choose to use the training dataset (5,428 sentence pairs).", "Footnote 6: The thresholds are different due to the fact that the cosine similarity is inflated in the presence of outlier neurons.", "(Table 2 header residue: Dataset, STS-B, SICK-R, STS-12, STS-13, STS-14, STS-15, STS-16; first row: Baseline Avg.)", "This suggests that the outliers are less related to word sense information and can be safely clipped for this particular task (if performed in an unsupervised fashion).", "Venturing beyond the word-level, we also hypothesize that outlier clipping can lead to better sentence embeddings when relying on the cosine similarity metric.", "To test this, we follow Reimers and Gurevych (2019) in evaluating our models on 7 semantic textual similarity (STS) datasets, including the STS benchmark (STS-B) (Cer et al., 2017), the SICK-Relatedness (SICK-R) dataset (Bentivogli et al., 2016) and the STS tasks 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016).", "Each sentence pair in these datasets is annotated with a relatedness score on a 5-point rating scale, as obtained from human judgments.", "We load each dataset using the SentEval toolkit (Conneau and Kiela, 2018).", "Indeed, the most common approach for computing sentence embeddings from contextualized models is simply averaging all subword vectors that comprise a given sentence (Reimers and Gurevych, 2019).", "We follow this method in obtaining embeddings for each pair of sentences in the aforementioned tasks, between which we compute the cosine similarity.", "Given a set of similarity and gold relatedness scores, we then calculate the Spearman rank correlation.", "As a comparison, we also consider averaged GloVe embeddings as our baseline.", "Table 2 shows that, after clipping the outliers, the best Spearman rank correlation scores for BERT and RoBERTa increase across all datasets, some by a large margin.", "This indicates that clipping the outlier neurons can lead to better sentence embeddings when mean pooling.", "However, like Li et al.", "(2020b), we also notice that averaged GloVe embeddings still manage to outperform both BERT and RoBERTa on all STS 2012-16 tasks.", "This implies that the post-clipping reduction in anisotropy is only a partial explanation for why contextualized, mean-pooled sentence embeddings still lag behind static word embeddings in capturing the semantics of a given sentence.", "In the previous sections, we analyzed the effects of clipping outlier neurons on various intrinsic semantic tasks.", "Here, we explore the effects of clipping in a supervised scenario, where we hypothesize that a model will learn to discard outlier information if it is not needed for a given task.", "We consider two binary classification tasks, SST-2 and IMDB (Maas et al., 2011), and a multi-class classification task, SST-5, which is a 5-class version of SST-2.", "First, we freeze all the parameters of the pre-trained models and use the same method as in §4.3 to get the sentence embedding of each sentence.", "Then, we train a simple linear classifier $W \in \mathbb{R}^{768 \times N}$ for each layer, where $N$ is the number of classes.", "We use different batch sizes for different tasks: 768 for SST-2, 128 for IMDB and 1,536 for SST-5.", "Then we train for 10 epochs with a categorical cross-entropy loss, optimized by Adam.", "Table 3 shows that there is little difference in employing raw vs. 
clipped vectors in terms of task performance.", "This indicates that using vectors with clipped outliers does not drastically affect classifier accuracy when it comes to these common tasks.", "The experiments detailed in the previous sections point to the dangers of relying on metrics like cosine similarity when making observations about models' representational spaces.", "This is particularly salient when the vectors being compared are taken off-the-shelf and their composition is not widely understood.", "Given the presence of model idiosyncrasies like the outliers highlighted here, mean-sensitive, L2-normalized metrics (e.g. cosine similarity or Pearson correlation) will inevitably weigh the comparison of vectors along the highest-valued dimensions.", "In the case of positional artefacts propagating through the BERT and RoBERTa networks, the basis of comparison is inevitably steered towards whatever information is captured in those dimensions.", "Furthermore, since such outlier values show little variance across vectors, proxy metrics of anisotropy like measuring the average cosine similarity across random words (detailed in §4.1) will inevitably return an exceedingly high similarity, no matter the context.", "When cosine similarity is viewed primarily as a means of semantic comparison between word or sentence vectors, the prospect of calculating cosine similarity for a benchmark like WiC or STS-B becomes erroneous.", "Though an examination of distance metrics is outside the scope of this study, we acknowledge similar points as having been addressed with regard to static word embeddings (Mimno and Thompson, 2017) as well as contextualized ones (Li et al., 2020b).", "Likewise, we would like to stress that our manual clipping operation was performed for illustrative purposes and that interested researchers should employ more systematic post-hoc normalization strategies, e.g. whitening (Su et al., 2021), when working with hidden states directly.", "Relatedly, the anisotropic nature of the vector space that persists even after clipping the outliers suggests that positional artefacts are only part of the explanation.", "Per this point, Gao et al. (2019) prove that, in training any sort of model with likelihood loss, the representations learned for tokens being predicted will naturally be pushed away from most other tokens in order to achieve a higher likelihood.", "They relate this observation to the Zipfian nature of word distributions, where the vast majority of words are infrequent.", "Li et al. 
(2020a) extend this insight specifically to BERT and show that, while high-frequency words concentrate densely, low-frequency words are much more sparsely distributed.", "Though we do not attempt to dispute these claims with our findings, we do hope our experiments will highlight the important role that positional embeddings play in the representational geometry of Transformer-based models.", "Indeed, recent work has demonstrated that employing relative positional embeddings and untying them from the simultaneously learned word embeddings has led to impressive gains for BERT-based architectures across common benchmarks (He et al., 2020; Ke et al., 2020).", "It remains to be seen how such procedures affect the representations of such models, however.", "Beyond this, it is clear that LayerNorm is the reason positional artefacts propagate through model representations in the first place.", "Indeed, our experiments show that the outlier dimension observed for BERT is tied directly to the [CLS] token, which always occurs at the requisite 1st position despite having no linguistic bearing on the sequence of observed tokens being modeled.", "However, the fact that RoBERTa (which employs a similar delimiter) retains outliers originating from different positions' embeddings implies that the issue of artefact propagation is not simply a relic of task design.", "It is possible that whatever positional idiosyncrasies contribute to a task's loss are likewise retained in their respective embeddings.", "In the case of BERT, the outlier dimension may be granted a large negative weight in order to differentiate the (privileged) 1st position from all others.", "The fact that this information is reconstructed by the LayerNorm parameters, which are shared across all positions in the sequence, and then propagated up through the Transformer network is a phenomenon worthy of further attention.", "In recent years, an explosion of work focused on understanding the inner workings of pretrained neural language models has emerged.", "One line of such work investigates the self-attention mechanism of Transformer-based models, aiming to e.g. characterize its patterns or decode syntactic structure (Raganato and Tiedemann, 2018; Vig, 2019; Mareček and Rosa, 2018; Voita et al., 2019; Clark et al., 2019; Kobayashi et al., 2020).", "Another line of work analyzes models' internal representations using probes.", "These are often linear classifiers that take representations as input and are trained with supervised tasks in mind, e.g. POS-tagging, dependency parsing (Tenney et al., 2019; Liu et al., 2019a; Lin et al., 2019; Hewitt and Manning, 2019; Zhao et al., 2020).", "In such work, high probing accuracies are often likened to a particular model having learned the task in question.", "Most similar to our work, Ethayarajh (2019) investigate the extent of contextualization in models like BERT, ELMo, and GPT-2 (Radford et al., 2019).", "Mainly, they demonstrate that the contextualized vectors of all words are non-isotropic across all models and layers.", "However, they do not indicate why these models have such properties.", "Also relevant are the studies of Dalvi et al. (2018), who introduce a neuron-level analysis method, and Durrani et al. (2020), who use this method to analyze individual neurons in contextualized word vectors.", "Similarly to our experiment, Durrani et al. 
(2020) train a linear probe to predict linguistic information stored in a vector.", "They then employ the weights of the classifier as a proxy to select the neurons most relevant to a particular task.", "In a similar vein, Coenen et al. (2019) demonstrate the existence of syntactic and semantic subspaces in BERT representations.", "In this paper, we called attention to sets of outlier neurons that appear in BERT and RoBERTa's internal representations, which bear consistently large values when compared to the distribution of values of all other neurons.", "In investigating the origin of these outliers, we employed a neuron-level analysis method which revealed that they are artefacts derived from positional embeddings and Layer Normalization.", "Furthermore, we found that outliers are a major cause of the anisotropy of a model's vector space (Ethayarajh, 2019).", "Clipping them, consequently, can make the vector space more directionally uniform and increase the similarity between words' contextual representations.", "In addition, we showed that outliers can distort results when investigating word sense within contextualized representations as well as when obtaining sentence embeddings via mean pooling, where removing them leads to uniformly better results.", "Lastly, we find that clipping does not affect models' performance on three supervised tasks.", "It is important to note that the exact dimensions at which the outliers occur will vary depending on initialization and training procedure (as evidenced by our own RoBERTa model).", "As such, future work will aim at investigating strategies for mitigating the propagation of these artefacts when pretraining.", "Furthermore, given that both BERT and RoBERTa are masked language models, it will be interesting to investigate whether or not similar artefacts occur in e.g. autoregressive models like GPT-2 (Radford et al., 2019) or XLNet (Yang et al., 2019).", "Per the insights of Gao et al. (2019), it is very likely that the representational spaces of such models are anisotropic, but it is important to gauge the extent to which this can be traced to positional artefacts.", "Authors' note: We would like to mention Kovaleva et al. (2021)'s contemporaneous work, which likewise draws attention to BERT's outlier neurons.", "While our discussion situates outliers in the context of positional embeddings and vector spaces, Kovaleva et al. (2021) offer an exhaustive analysis of LayerNorm parameterization and its impact on masked language modeling and finetuning.", "We refer the interested reader to that work for a thorough discussion of LayerNorm's role in the outlier neuron phenomenon.", "Acknowledgments: We would like to thank Joakim Nivre and Daniel Dakota for fruitful discussions and the anonymous reviewers for their excellent feedback." ]
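The self-similarity measure in Eq. (1) is compact enough to sketch directly; the PyTorch snippet below implements it literally (sum over pairs with j > i, divided by n(n-1), as in the reconstructed formula), with random vectors standing in for a word's layer-l representations across n contexts. Note that pair-counting conventions differ across papers, so the constant factor is an assumption.

```python
# Sketch of Eq. (1): average pairwise cosine similarity between a word's
# representations f_l^i(w) across n different contexts.
import torch
import torch.nn.functional as F

def self_similarity(reps: torch.Tensor) -> float:
    """reps: (n, d) tensor, one row per occurrence of the word."""
    n = reps.size(0)
    normed = F.normalize(reps, dim=-1)
    cos = normed @ normed.T                    # (n, n) cosine similarities
    upper = torch.triu(cos, diagonal=1).sum()  # sum over j > i, as in Eq. (1)
    return (upper / (n * (n - 1))).item()

reps = torch.randn(10, 768)  # pretend: one word in 10 contexts
print(f"SelfSim: {self_similarity(reps):.3f}")
```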
[ "objective", "result", "objective", "result", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "objective", "objective", "result", "objective", "result", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "other", "other", "method", "objective", "result", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Non-autoregressive Transformer is a promising text generation model.", "However, current non-autoregressive models still fall behind their autoregressive counterparts in translation quality.", "We attribute this accuracy gap to the lack of dependency modeling among decoder inputs.", "In this paper, we propose CNAT, which learns implicitly categorical codes as latent variables into the non-autoregressive decoding.", "The interaction among these categorical codes remedies the missing dependencies and improves the model capacity.", "Experiment results show that our model achieves comparable or better performance in machine translation tasks than several strong baselines.", "Non-autoregressive Transformer (NAT, Gu et al., 2018; Wang et al., 2019; Lee et al., 2018; Ghazvininejad et al., 2019) is a promising text generation model for machine translation.", "It introduces the conditional independent assumption among the target language outputs and simultaneously generates the whole sentence, bringing in a remarkable efficiency improvement (more than 10 speed-up) versus the autoregressive model.", "However, the NAT models still lay behind the autoregressive models in terms of BLEU (Papineni et al., 2002) for machine translation.", "We attribute the low-quality of NAT models to the lack of dependencies modeling for the target outputs, making it harder to model the generation of the target side translation.", "A promising way is to model the dependencies of the target language by the latent variables.", "A line of research works (Kaiser et al., 2018; Roy et al., 2018; Shu et al., 2019; Ma et al., 2019) introduce latent variable modeling to the non-autoregressive Transformer and improves translation quality.", "The latent variables could be regarded as the springboard to bridge the modeling gap, introducing more informative decoder inputs than the previously copied inputs.", "More specifically, the latent-variable based model first predicts a latent variable sequence conditioned on the source representation, where each variable represents a chunk of words.", "The model then simultaneously could generate all the target tokens conditioning on the latent sequence and the source representation since the target dependencies have been modeled into the latent sequence.", "However, due to the modeling complexity of the chunks, the above approaches always rely on a large number (more than 2 15 , Kaiser et al., 2018; Roy et al., 2018) of latent codes for discrete latent spaces, which may hurt the translation efficiency the essential goal of non-autoregressive decoding.", "Akoury et al. 
"Akoury et al. (2019) introduce syntactic labels as a proxy to the learned discrete latent space and improve the NATs' performance.", "The syntactic label greatly reduces the search space of latent codes, leading to better performance in both quality and speed.", "However, it needs an external syntactic parser to produce the reference syntactic tree, which may only be effective in limited scenarios.", "Thus, it is still challenging to efficiently model the dependency between latent variables for non-autoregressive decoding.", "In this paper, we propose to learn a set of latent codes that can act like the syntactic label, but are learned without using explicit syntactic trees.", "To learn these codes in an unsupervised way, we use each latent code to represent a fuzzy target category instead of a chunk, as in the previous research (Akoury et al., 2019).", "More specifically, we first employ vector quantization (Roy et al., 2018) to discretize the target language to the latent space with a smaller number (less than 128) of latent variables, which can serve as the fuzzy word-class information for each target language word.", "We then model the latent variables with conditional random fields (CRF, Lafferty et al., 2001; Sun et al., 2019).", "To avoid the mismatch of training and inference for latent variable modeling, we propose using a gated neural network to form the decoder inputs.", "Equipping it with scheduled sampling (Bengio et al., 2015), the model works more robustly.", "(Figure 1: illustrations of (a) autoregressive decoding, (b) non-autoregressive decoding, and (c) latent-variable based NAT.)", "Experiment results on WMT14 and IWSLT14 show that CNAT achieves the new state-of-the-art performance without knowledge distillation.", "With the sequence-level knowledge distillation and reranking techniques, the CNAT is comparable to the current state-of-the-art iterative-based model while keeping a competitive decoding speedup.", "Neural machine translation (NMT) is formulated as a conditional probability model $p(y|x)$, which models a sentence $y = \{y_1, y_2, \ldots, y_m\}$ in the target language given the input $x = \{x_1, x_2, \ldots, x_n\}$ from the source language.",
"Gu et al. (2018) propose the Non-Autoregressive Transformer (NAT) for machine translation, breaking the dependency among target tokens and thus achieving simultaneous decoding for all tokens.", "For a source sentence, a non-autoregressive decoder factorizes the probability of its target sentence as: $p(y|x) = \prod_{t=1}^{m} p(y_t \mid x; \theta)$, (1) where $\theta$ is the set of model parameters.", "NAT has a similar architecture to the autoregressive Transformer (AT, Vaswani et al., 2017), which consists of a multi-head attention based encoder and decoder.", "The model first encodes the source sentence $x_{1:n}$ as the contextual representation $e_{1:n}$, then employs an extra module to predict the target length and form the decoder inputs.", "Length Prediction: Specifically, the length predictor in the bridge module predicts the target sequence length $m$ by: $m = n + \arg\max_{\Delta L} p(\Delta L \mid \mathrm{mean}(e); \psi)$, (2) where $\Delta L$ is the length difference between the target and source sentence and $\psi$ is the parameter of the length predictor.", "Inputs Initialization: With the target sequence length $m$, we can compute the decoder inputs $h = h_{1:m}$ with SoftCopy (Li et al., 2019; Wei et al., 2019) as: $h_j = \sum_{i}^{n} w_{ij} e_i$ and $w_{ij} = \mathrm{softmax}(-|j - i| / \tau)$, (3) where $\tau$ is a hyper-parameter to control the sharpness of the softmax function.", "With the computed decoder inputs $h$, NAT generates target sequences simultaneously by $\arg\max_{y_t} p(y_t \mid x; \theta)$ for each timestep $t$, effectively reducing the computational overhead in decoding (see Figure 1b).", "Though NAT achieves around 10x speedup in machine translation over autoregressive models, it still suffers from potential performance degradation (Gu et al., 2018).", "The results degrade since the removal of target dependencies prevents the decoder from leveraging the inherent sentence structure in prediction.", "Moreover, taking the copied source representation as decoder inputs implicitly assumes that the source and target language share a similar order, which may not always be the case (Bao et al., 2019).", "To bridge the gap between non-autoregressive and autoregressive decoding, Kaiser et al. (2018) introduce the Latent Transformer (LT).", "It incorporates non-autoregressive decoding with conditional dependency as the latent variable to alleviate the degradation resulting from the absence of dependency: $p(y|x) = p(z|x; \phi) \prod_{t=1}^{m} p(y_t \mid z, x; \theta)$, (4) where $z = \{z_1, \ldots, z_L\}$ is the latent variable sequence, $L$ is the length of the latent sequence, and $\phi$ and $\theta$ are the parameters of the latent predictor and translation model, respectively.", "The LT architecture stays unchanged from the original NAT models, except for the latent predictor and decoder inputs.", "During inference, the Latent Transformer first autoregressively predicts the latent variables $z$, then non-autoregressively produces the entire target sentence $y$ conditioned on the latent sequence $z$ (see Figure 1c).",
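As a concrete illustration of the SoftCopy initialization in Eq. 3 above, the following is a minimal PyTorch sketch; the names `e`, `m`, and `tau` follow the notation in the text, while the default value of `tau` is an illustrative assumption rather than a value taken from the paper.

```python
# A minimal sketch of SoftCopy (Eq. 3): decoder input h_j is a
# distance-weighted average of the encoder states e_i.
import torch

def soft_copy(e: torch.Tensor, m: int, tau: float = 0.3) -> torch.Tensor:
    # e: (n, d_model) encoder outputs; returns (m, d_model) decoder inputs.
    n = e.size(0)
    i = torch.arange(n).view(1, n).float()   # source positions
    j = torch.arange(m).view(m, 1).float()   # target positions
    w = torch.softmax(-(j - i).abs() / tau, dim=-1)  # w_ij; each row sums to 1
    return w @ e                             # h_j = sum_i w_ij * e_i
```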
"Ma et al. (2019) and Shu et al. (2019) extend this idea and model $z$ as continuous latent variables, achieving promising results by replacing the autoregressive predictor with an iterative transformation layer.", "In this section, we present our proposed CNAT, an extension to the Transformer that incorporates non-autoregressive decoding for target tokens and autoregressive decoding for latent sequences.", "In brief, CNAT follows the architecture of the Latent Transformer (Kaiser et al., 2018), except for the latent variable modeling (in 3.1 and 3.2) and inputs initialization (in 3.3).", "Categorical information has achieved great success in neural machine translation, such as the part-of-speech (POS) tag in autoregressive translation (Yang et al., 2019) and the syntactic label in non-autoregressive translation (Akoury et al., 2019).", "Inspired by the broad application of categorical information, we propose to model the implicit categorical information of target words in a non-autoregressive Transformer.", "Each target sequence $y = y_{1:m}$ will be assigned to a discrete latent variable sequence $z = z_{1:m}$.", "We assume that each $z_i$ will capture the fuzzy category of its token $y_i$.", "Then, the conditional probability $p(y|x)$ is factorized with respect to the categorical latent variable: $p(y|x) = \sum_{z} p(z|x)\, p(y|z, x)$. (5)", "However, it is computationally intractable to sum over all configurations of latent variables.", "Following the spirit of the latent based models (Kaiser et al., 2018; Roy et al., 2018), we employ a vector quantization technique to maintain differentiability through the categorical modeling and learn the latent variables straightforwardly.", "Vector Quantization.", "Vector quantization based methods have a long history of success in machine learning models.", "In vector quantization, each target representation $\mathrm{repr}(y_i) \in \mathbb{R}^{d_{model}}$ is passed through a discretization bottleneck using a nearest-neighbor lookup on an embedding matrix $Q \in \mathbb{R}^{K \times d_{model}}$, where $K$ is the number of categorical codes.", "For each $y_i$ in the target sequence, we define its categorical variable $z_i$ and latent code $q_i$ as: $z_i = k$, $q_i = Q_k$, with $k = \arg\min_{j \in [K]} \|\mathrm{repr}(y_i) - Q_j\|_2$, (6) where $\|\cdot\|_2$ is the $l_2$ distance and $[K]$ denotes the set $\{1, 2, \ldots, K\}$.", "Intuitively, we adopt the embedding of $y$ as the target representation: $\mathrm{repr}(y_i) = \mathrm{embedding}(y_i)$, where the embedding matrix of the target language is shared with the softmax layer of the decoder.", "Exponential Moving Average.", "Following the common practice of vector quantization, we also employ the exponential moving average (EMA) technique to regularize the categorical codes.", "Put simply, the EMA technique can be understood as a k-means clustering of the hidden states with a sort of momentum.", "We maintain an EMA over the following two quantities for each $j \in [K]$: 1) the count $c_j$, measuring the number of target representations that have $Q_j$ as their nearest neighbor, and 2) $Q_j$ itself.", "The counts are updated over a mini-batch of targets $\{y_1, y_2, \ldots, y_{m_B}\}$ with: $c_j = \lambda c_j + (1 - \lambda) \sum_{i}^{m_B} \mathbb{1}[z_i = j]$, (7) and the latent code $Q_j$ is then updated with: $Q_j = \lambda Q_j + (1 - \lambda) \sum_{i}^{m_B} \mathbb{1}[z_i = j]\, \mathrm{repr}(y_i) / c_j$, (8) where $\mathbb{1}[\cdot]$ is the indicator function, $\lambda$ is a decay parameter, and $B$ is the size of the batch.", "Our next insight is transferring the dependencies among the target outputs into the latent space.", "Since the categorical variable captures the fuzzy target class information, it can be a proxy of the target outputs.",
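The quantization step in Eq. 6 and the EMA updates in Eqs. 7-8 could look as follows; this is a sketch under the notation above (`Q` is the codebook, `lam` the decay lambda), and the small clamp added to avoid division by zero is our own assumption.

```python
# A sketch of vector quantization (Eq. 6) and the EMA codebook
# update (Eqs. 7-8) for one mini-batch of target representations.
import torch

def quantize(repr_y: torch.Tensor, Q: torch.Tensor):
    # repr_y: (m, d_model); Q: (K, d_model). Returns z_i and q_i = Q_{z_i}.
    z = torch.cdist(repr_y, Q).argmin(dim=-1)   # nearest-neighbor lookup
    return z, Q[z]

@torch.no_grad()
def ema_update(Q, counts, repr_y, z, lam: float = 0.999):
    one_hot = torch.nn.functional.one_hot(z, Q.size(0)).float()   # (m, K)
    counts.mul_(lam).add_((1 - lam) * one_hot.sum(0))             # Eq. 7
    sums = one_hot.t() @ repr_y                                   # per-code sums
    Q.mul_(lam).add_((1 - lam) * sums
                     / counts.clamp(min=1e-6).unsqueeze(1))       # Eq. 8
```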
target outputs.", "We further employ a structural prediction module instead of the standard autoregressive Transformer to model the latent sequence.", "The former can explicitly model the dependencies among the latent variables and performs exact decoding during inference.", "Conditional Random Fields.", "We employ a linear-chain conditional random fields (CRF, Lafferty et al., 2001) to model the categorical latent variables, which is the most common structural prediction model.", "Given the source input x = ( x 1 , , x n ) and its corresponding latent variable sequence z = ( z 1 , , z m ) , the CRF model defines the probability of z as: p ( z | x ) = 1 Z ( x ) exp (cid:16) m (cid:88) i =1 s ( z i , x , i ) + m (cid:88) i =2 t ( z i 1 , z i , x , i ) (cid:17) , (9) where Z ( x ) is the normalize factor, s ( z i , x , i ) is the emit score of z i at the position i , and the t ( z i 1 , z i , x , i ) is the transition score from z i 1 to z i .", "Before computing the emit score and transition score in Eq.", "9, we first take h = h 1: m as the inputs and compute the representation f = Transfer( h ) , where Transfer( ) denotes a two-layer vanilla Transformer decoding function including a self-attention block, an encoder-decoder block followed by a feed-forward neural network block (Vaswani et al., 2017).", "We then compute the emit score and the transition score.", "For each position i , we compute the emit score with a linear transformation: s ( z i , x , i ) = ( WT f i + b ) z i where W R d model K and b RK are the parameters.", "We incorporate the positional context and compute its transition score with: M i d = Biane([ f i 1 ; f i ]) , M i = ET 1 M i d E 2 , t ( z i 1 , z i , x , i ) = M iz i 1 ,z i , (10) where Biane( ) : R 2 d model R d t d t is a biaffine neural network (Dozat and Manning, 2017), E 1 and E 2 R d t K are the transition matrix.", "One potential issue is that the mismatch of the training and inference stage for the used categorical", "variables.", "Suppose we train the decoder with the quantized categorical variables z , which is inferred from the target reference.", "In that case, we may fail to achieve satisfactory performance with the predicted categorical variables during inference.", "We intuitively apply the gated neural network (denote as GateNet ) to form the decoder inputs by fusing the copied decoder inputs h = h 1: m and the latent codes q = q 1: m , since the copied decoder inputs h is still informative to non-autoregressive decoding: g i = (FFN([ h i ; q i ])) , o i = h i g i + q ( z i ) (1 g i ) , (11) where the FFN( ) : R 2 d model R d model is a two-layer feed-forward neural networks and ( . 
"The loss of the CRF-based predictor is computed with: $\mathcal{L}_{\mathrm{crf}} = -\log p(z^{\mathrm{ref}} \mid x)$. (12)", "To equip the GateNet, we randomly mix the $z^{\mathrm{ref}}$ and the predicted $z^{\mathrm{pred}}$ as: $z_i^{\mathrm{mix}} = z_i^{\mathrm{pred}}$ if $p \geq \gamma$, and $z_i^{\mathrm{mix}} = z_i^{\mathrm{ref}}$ if $p < \gamma$, (13) where $p \sim U[0, 1]$ and $\gamma$ is the threshold, which we set to 0.5 in our experiments.", "While training, we first compute the reference $z^{\mathrm{ref}}$ by vector quantization and employ the EMA to update the quantized codes.", "Grounding on $z^{\mathrm{mix}}$, the non-autoregressive translation loss is computed with: $\mathcal{L}_{\mathrm{NAT}} = -\log p(y \mid z^{\mathrm{mix}}, x; \theta)$. (14)", "With the hyper-parameter $\alpha$, the overall training loss is: $\mathcal{L} = \mathcal{L}_{\mathrm{NAT}} + \alpha \mathcal{L}_{\mathrm{crf}}$. (15)", "Inference.", "CNAT selects the best sequence by choosing the highest-probability latent sequence $z$ with Viterbi decoding (Viterbi, 1967), then generates the tokens with: $z^* = \arg\max_z p(z \mid x; \phi)$ and $y^* = \arg\max_y p(y \mid z^*, x; \theta)$, where identifying $y^*$ only requires independently maximizing the local probability for each output position.",
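The reference/predicted mixing in Eq. 13 above could be sketched as follows; `gamma` is the threshold (0.5 in the experiments), and sampling p independently per position is our reading of the equation.

```python
# A sketch of the scheduled-sampling-style latent mixing (Eq. 13).
import torch

def mix_latents(z_ref: torch.Tensor, z_pred: torch.Tensor,
                gamma: float = 0.5) -> torch.Tensor:
    # z_ref, z_pred: (m,) integer latent sequences.
    p = torch.rand(z_ref.shape)                    # p ~ U[0, 1]
    return torch.where(p >= gamma, z_pred, z_ref)  # z_i^mix
```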
98) .", "We use inverse square root learning rate scheduling (Vaswani et al., 2017) for the WMT tasks and linear annealing schedule (Lee et al., 2018) from 3 10 4 to 1 10 5 for the IWSLT14 task.", "Each mini-batch consists of 2048 tokens for IWSLT14 and 32K tokens for WMT tasks.", "Distillation.", "Sequence-level knowledge distillation (Hinton et al., 2015) is applied to alleviate the multi-modality problem (Gu et al., 2018) while training.", "We follow previous studies on NAT (Gu et al., 2018; Lee et al., 2018; Wei et al., 2019) and use translations produced by a pre-trained autoregressive Transformer (Vaswani et al., 2017) as the training data.", "Reranking.", "We also include the results that come at reranked parallel decoding (Gu et al., 2018; Guo et al., 2019; Wang et al., 2019; Wei et al., 2019), which generates several decoding candidates in parallel and selects the best via re-scoring using a 1", "pre-trained autoregressive model.", "Specifically, we first predict the target length m and generate output sequence with arg max decoding for each length candidate m [ m m, m + m ] ( m = 4 in our experiments, means there are N = 9 candidates), which was called length parallel decoding (LPD).", "Then, we use the pre-trained teacher to rank these sequences and identify the best overall output as the final output.", "Baselines.", "We compare the CNAT with several strong NAT baselines, including: The NAT builds upon latent variables: NAT-FT (Gu et al., 2018), LT (Kaiser et al., 2018), Syn-ST (Akoury et al., 2019), LV-NAR (Shu et al., 2019) and Flowseq (Ma et al., 2019).", "The NAT with extra autoregressive decoding or iterative refinement: NAT-DCRF (Sun et al., 2019), IR-NAT (Lee et al., 2018), and CMLM (Ghazvininejad et al., 2019).", "The NAT with auxiliary training objectives: NAT-REG (Wang et al., 2019), imitate-NAT (Wei et al., 2019).", "We compare the proposed CNAT against baselines both in terms of generating quality and inference speedup.", "For all our tasks, we obtain the performance of baselines by either directly using the performance figures reported in the previous works if they are available or producing them by using the open-source implementation of baseline algorithms on our datasets.", "First, we compare CNAT with the NAT models without using advanced techniques, such as knowledge distillation, reranking, Model WMT14 IWSLT14 EN-DE DE-EN DE-EN NAT-FT 17.69 21.47 / LT 19.80 / / NAT-REG 20.65 24.77 23.89 imitate-NAT 22.44 25.67 / Flowseq 23.72 28.39 27.55 NAT-DCRF 23.44 27.22 27.44 Transformer (ours) 27.33 31.69 34.29 NAT (ours) 17.69 18.93 23.78 CNAT (ours) 25.56 29.36 31.15 Table 2: Results of NAT models trained with knowledge distillation on test set of WMT14 and IWSLT14.", "or iterative refinements.", "The results are listed in Table 1.", "The CNAT achieves significant improvements (around 11.5 BLEU in EN-DE, more than 14.5 BLEU in DE-EN) over the vanilla NAT, which indicates that modeling categorical information could improve the modeling capability of the NAT model.", "Also, the CNAT achieves better results than Flowseq and SynST, which demonstrates the effectiveness of CNAT in modeling dependencies between the target outputs.", "The performance of the NAT models with advance techniques (sequence-level knowledge distillation or reranking) is listed in Table 2 and Table 3. 
"Baselines.", "We compare CNAT with several strong NAT baselines, including NATs built upon latent variables: NAT-FT (Gu et al., 2018), LT (Kaiser et al., 2018), SynST (Akoury et al., 2019), LV-NAR (Shu et al., 2019), and Flowseq (Ma et al., 2019).", "NATs with extra autoregressive decoding or iterative refinement: NAT-DCRF (Sun et al., 2019), IR-NAT (Lee et al., 2018), and CMLM (Ghazvininejad et al., 2019).", "NATs with auxiliary training objectives: NAT-REG (Wang et al., 2019) and imitate-NAT (Wei et al., 2019).", "We compare the proposed CNAT against the baselines in terms of both generation quality and inference speedup.", "For all our tasks, we obtain the performance of the baselines either by directly using the performance figures reported in previous works when available, or by producing them using the open-source implementations of the baseline algorithms on our datasets.", "First, we compare CNAT with the NAT models that do not use advanced techniques such as knowledge distillation, reranking, or iterative refinements.", "The results are listed in Table 1.", "The CNAT achieves significant improvements (around 11.5 BLEU in EN-DE, more than 14.5 BLEU in DE-EN) over the vanilla NAT, which indicates that modeling categorical information could improve the modeling capability of the NAT model.", "Also, the CNAT achieves better results than Flowseq and SynST, which demonstrates the effectiveness of CNAT in modeling dependencies between the target outputs.", "The performance of the NAT models with advanced techniques (sequence-level knowledge distillation or reranking) is listed in Table 2 and Table 3.", "Table 2: Results of NAT models trained with knowledge distillation on the test sets of WMT14 and IWSLT14 (BLEU; WMT14 EN-DE / WMT14 DE-EN / IWSLT14 DE-EN): NAT-FT 17.69 / 21.47 / -; LT 19.80 / - / -; NAT-REG 20.65 / 24.77 / 23.89; imitate-NAT 22.44 / 25.67 / -; Flowseq 23.72 / 28.39 / 27.55; NAT-DCRF 23.44 / 27.22 / 27.44; Transformer (ours) 27.33 / 31.69 / 34.29; NAT (ours) 17.69 / 18.93 / 23.78; CNAT (ours) 25.56 / 29.36 / 31.15.", "Coupled with the knowledge distillation techniques, all NAT models achieve remarkable improvements.", "Our best results are obtained with length parallel decoding, which employs a pre-trained Transformer to rerank the multiple candidates of different target lengths generated in parallel.", "Specifically, on the large-scale WMT14 dataset, CNAT surpasses NAT-DCRF by 0.71 BLEU in DE-EN but is slightly below NAT-DCRF by around 0.20 BLEU in EN-DE, which shows that CNAT is comparable to the state-of-the-art NAT model.", "Also, we can see that a larger N leads to better results (N = 100 vs. N = 10 for NAT-FT, N = 19 vs. N = 9 for NAT-DCRF, etc.); however, it always comes at a degradation of decoding efficiency.", "We also compare our CNAT with the NAT models that employ an iterative decoding technique and list the results in Table 4.", "Table 4: Results of NAT models with iterative refinements on the test set of WMT14 (iterations; EN-DE / DE-EN BLEU; speedup): IR-NAT, 1 iteration, 13.91 / 16.77, 11.39x; 2 iterations, 16.95 / 20.39, 8.77x; 5 iterations, 20.26 / 23.86, 3.11x; 10 iterations, 21.61 / 25.48, 2.01x; CMLM, 4 iterations, 26.08 / 30.11, -; 10 iterations, 26.92 / 30.86, -; CNAT, 1 iteration, 25.56 / 29.36, 10.37x; CNAT (N=9), 1 iteration, 26.60 / 30.75, 5.59x.", "The iterative-based non-autoregressive Transformer captures the target language's dependencies by iteratively generating based on the previous iteration's output, which is an important exploration for non-autoregressive generation.", "As the iteration number increases, the performance improves while the decoding speed-up drops, for both IR-NAT and CMLM.", "We can see that CNAT achieves a better result than CMLM with four iterations and IR-NAT with ten iterations, and comes close to CMLM with ten iterations, while keeping the benefits of one-shot generation.", "Translation Efficiency.", "As depicted in Figure 2, we validate the efficiency of CNAT.", "Put simply, the decoding speed is measured sentence-by-sentence, and the speed-up is computed by comparison with the Transformer.", "Figure 2a and Figure 2b show the BLEU scores and decoding speed-up of the NAT models.", "The former compares the pure NAT models.", "The latter compares NAT model inference with advanced decoding techniques (parallel reranking or iterative-based decoding).", "Our results are conducted on a single GeForce GTX 1080-Ti GPU; please note that the results in Figure 2a and Figure 2b may be evaluated under different hardware settings, and it may not be fair to compare them directly.", "In Table 5, w/ $z^{\mathrm{ref}}$ denotes CNAT generating the tokens conditioned on the latent sequence quantized from the reference target, and w/ $m^{\mathrm{ref}}$ denotes CNAT generating the tokens conditioned on the reference length.", "CNAT is located at the top-right of the baselines.", "CNAT outperforms our baselines in BLEU when speed-up is held fixed, and in speed-up when BLEU is held fixed, indicating that CNAT outperforms previous state-of-the-art NAT methods.", "Although iterative models like CMLM achieve competitive BLEU scores, they only maintain minor speed advantages over the Transformer.", "In contrast, CNAT remarkably improves the inference speed while keeping a competitive performance.", "Effectiveness of Categorical Modeling.", "We further conduct experiments on the test set of IWSLT14 to analyze the effectiveness of our categorical modeling and its influence on translation quality.", "We regard the categorical predictor as a sequence-level generation task and list its BLEU score in Table 5.
As seen, a better latent prediction can yield a better translation.", "With $z^{\mathrm{ref}}$ as the latent sequence, the model achieves surprisingly good performance on this task, showing the usefulness of the learned categorical codes.", "We also can see that CNAT decoding with the reference length is only slightly (0.44 BLEU) better than with the predicted length, indicating that length prediction is not a major bottleneck.", "Table 6: Ablation study on the dev set of IWSLT14 (line: K; predictor; GateNet; BLEU): 1: K=32, CRF, GateNet, 30.13; 2: K=64, CRF, GateNet, 31.87; 3: K=128, CRF, GateNet, 30.82; 4: K=64, CRF, no GateNet, 29.32; 5: K=64, AR, GateNet, 28.23; 6: K=64, AR, no GateNet, 24.00; 7: K=128, AR, no GateNet, 25.43; 8: no predictor, no GateNet, 24.25.", "We further conduct an ablation study with different CNAT variants on the dev set of IWSLT14.", "Influence of K.", "We can see that the CRF with the categorical number K = 64 achieves the highest score (line 2).", "Neither a smaller nor a larger K gives a better result.", "The AR predictor may have a different tendency: with a larger K = 128, it achieves better performance.", "However, a larger K may lead to a higher latency during inference, which is not the best for non-autoregressive decoding.", "In our experiments, K = 64 achieves high performance while being small enough to keep the latency low during inference.", "CRF versus AR.", "Experiment results show that the CRF-based predictor is better than the AR predictor.", "We can see that the CRF-based predictor surpasses the Transformer predictor by 3.5 BLEU (line 2 vs. line 5) with the GateNet; without the GateNet, the gap enlarges to 5.3 BLEU (line 4 vs. line 6).", "It is consistent with our intuition that the CRF is better than the Transformer at modeling the dependencies among latent variables in machine translation when the number of categories is small.", "GateNet.", "Without the GateNet, the CNAT with the AR predictor degenerates into a standard LT model with a smaller latent space.", "We can see that its performance is even lower than the NAT baseline (line 6 vs. line 8).", "Equipped with the GateNet and the scheduled sampling, it outperforms the NAT baseline by a large margin (around 4.0 BLEU), showing that the GateNet mechanism plays an essential role in our proposed model.", "To analyze the learned categories, we further compute their relation to two off-the-shelf kinds of categorical information: the part-of-speech (POS) tags and frequency-based clustered classes.", "For the former, we intuitively assign the POS tag of a word to its sub-words and compute the POS tag frequency for the latent codes.", "For the latter, we roughly assign the category of a subword according to its frequency.", "It is worth mentioning that the number of frequency-based classes is the same as that of the POS tags.", "Quantitative Results.", "We first compute the V-measure (Rosenberg and Hirschberg, 2007) score between the latent categories and the POS tags and sub-word frequencies.", "The results are listed in Table 7.", "Table 7: Clustering evaluation metrics on the test set of IWSLT14 to analyze the learned codes (H-score / C-score / V-measure): w/ POS tags 0.70 / 0.47 / 0.56; w/ Frequency 0.62 / 0.48 / 0.54.", "Overall, w/ POS tags achieves a higher V-measure score, indicating that the latent codes are more related to the POS tags than to sub-word frequencies.", "The homogeneity score (H-score) evaluates the purity of the category.", "We also can see that the former has a relatively higher H-score than the latter (0.70 vs. 0.62), which is consistent with our intuition.",
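The clustering metrics in Table 7 correspond to standard quantities that could be computed, for example, with scikit-learn, as in the sketch below; `codes` are the learned latent categories and `labels` the reference POS tags or frequency classes.

```python
# A sketch of the clustering evaluation used for Table 7.
from sklearn.metrics import (homogeneity_score, completeness_score,
                             v_measure_score)

def clustering_report(labels, codes):
    # labels: reference classes (e.g., POS tags); codes: learned categories.
    return {
        "H-score": homogeneity_score(labels, codes),
        "C-score": completeness_score(labels, codes),
        "V-measure": v_measure_score(labels, codes),
    }
```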
"We can see a sharp distribution for each latent variable, showing that our learned fuzzy classes are meaningful.", "Non-autoregressive Machine Translation.", "Gu et al. (2018) first develop a non-autoregressive Transformer (NAT) for machine translation, which produces the outputs in parallel, so the inference speed is significantly boosted.", "However, due to the missing dependencies among the target outputs, the translation quality is largely sacrificed.", "A line of work proposes to mitigate such performance degradation by enhancing the decoder inputs.", "Lee et al. (2018) propose a method of iterative refinement based on the previous outputs.", "Guo et al. (2019) enhance the decoder inputs by introducing the phrase table from statistical machine translation and embedding transformation.", "Some work also focuses on improving the supervision of the decoder inputs, including imitation learning from autoregressive models (Wei et al., 2019) and regularizing the hidden states with a backward reconstruction error (Wang et al., 2019).", "Another line of work proposes modeling the dependencies among target outputs, which are explicitly missing in the vanilla NAT models.", "Qian et al. (2020) and Ghazvininejad et al. (2019) propose to model the target-side dependencies with a masked language model, modeling the directed dependencies between the observed target and the unobserved words.", "Different from their work, we model the target-side dependencies in the latent space, following the latent variable Transformer fashion.", "Latent Variable Transformer.", "Closer to our work is the latent variable Transformer, which takes latent variables as inputs to model the target-side information.", "Shu et al. (2019) combine continuous latent variables and a deterministic inference procedure to find the target sequence that maximizes a lower bound on the log-probability.", "Ma et al. (2019) propose to use generative flows to model the complex prior distribution.", "Kaiser et al. (2018) propose to autoregressively decode a shorter latent sequence encoded from the target sentence, then simultaneously generate the sentence from the latent sequence.", "Bao et al. (2019) model the target position of the decoder input as a latent variable and introduce a heuristic search algorithm to guide the position learning.", "Akoury et al. (2019) first autoregressively predict a chunked parse tree and then simultaneously generate the target tokens from the predicted syntax.", "We propose CNAT, which implicitly models the categorical codes of the target language, narrowing the performance gap between non-autoregressive decoding and autoregressive decoding.", "Specifically, CNAT builds upon the latent Transformer and models the target-side categorical information with vector quantization and a conditional random field (CRF) model.", "We further employ a gated neural network to form the decoder inputs.", "Equipped with scheduled sampling, CNAT works more robustly.", "As a result, CNAT achieves a significant improvement and moves closer to the performance of the Transformer on machine translation.", "We would like to thank the anonymous reviewers for their insightful comments.", "Shujian Huang is the corresponding author.", "This work is supported by the National Science Foundation of China (61772261), National Key R&D Program of China (No. 2019QY1806), the Fundamental Research Funds for the Central Universities (No. 14380076), and the program B for Outstanding Ph.D.
candidate of Nanjing University." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "There is an emerging interest in the application of natural language processing models to source code processing tasks.", "One of the major problems in applying deep learning to software engineering is that source code often contains a lot of rare identifiers, resulting in huge vocabularies.", "We propose a simple, yet effective method, based on identifier anonymization, to handle out-of-vocabulary (OOV) identifiers.", "Our method can be treated as a preprocessing step and, therefore, allows for easy implementation.", "We show that the proposed OOV anonymization method significantly improves the performance of the Transformer in two code processing tasks: code completion and bug fixing.", "Natural language processing (NLP) is widely used for source code processing (SCP), e.", "g.", "for learning the meaningful vector representations of code (Feng et al., 2020; Alon et al., 2019b; Azcona et al., 2019), that can be used in various downstream tasks, e.", "g.", "code summarization (Iyer et al., 2016; Shiv and Quirk, 2019), code completion (Kim et al., 2020), or bug fixing (Hellendoorn et al., 2020).", "An important question, one should answer before building an SCP model, is how to create a vocabulary?", "Karampatsis et al. (2020) underline that modern source code datasets may incorporate millions of unique identifiers, of which less than 1% occur in the dataset frequently, e.", "g.", "more than 5 times.", "The common practice is to crop the vocabulary based on top-N identifiers and replace all occurrences of out-of-vocabulary (OOV) identifiers with an UNK identifier to avoid huge embedding matrices and the meaningless embeddings of rare tokens.", "But can one process rare identifiers in a better way?", "The work was done while working at Samsung-HSE Laboratory, HSE University.", "Both authors contributed equally.", "Vocabulary: { np , sin } Input: my_y = np.sin(my_x) + my_x Standard OOV processing procedure: UNK = np.sin(UNK) + UNK Proposed OOV anonymization procedure: VAR1 = np.sin(VAR2) + VAR2 Figure 1: Illustration of the proposed OOV anonymization procedure.", "copy-based approaches.", "An open vocabulary solution implies splitting rare tokens into subtokens (Sennrich et al., 2016).", "The copy-based approaches are used in generation tasks and imply using the pointer mechanism (Gulcehre et al., 2016) to copy tokens from the input sequence.", "We propose a new, simple, yet effective approach for processing OOV identifiers in source code, namely OOV anonymization.", "Anonymization implies replacing rare identifiers with unique placeholders, i.", "e.", "VAR1 , VAR2 , VAR3 etc., while preserving the names of frequent identifiers.", "An example of OOV anonymization is shown in figure 1.", "The intuition behind using anonymization is that it preserves the semantics of the algorithm that the code snippet implements, i.", "e.", "renaming user-defined identifiers does not change the underlying algorithm.", "By contrast, replacing all rare identifiers with an UNK identifier changes the algorithm.", "We underline that we propose anonymizing only rare identifiers, because frequently used identifier names may serve as an additional source of information, and neural networks are indeed capable of capturing this information.", "The proposed OOV anonymization strategy allows for easy implementation as a preprocessing step, thus no modification of model code is required.", "Another advantage of the OOV anonymization is that it enhances both the encoder and the decoder.", "The proposed approach 
"The proposed approach significantly outperforms the model with all rare identifiers replaced with an UNK, in code completion and bug fixing tasks, with the Transformer (Vaswani et al., 2017) architecture being used (see an example comparison in Fig. 2).", "Our code and data split are available at https://github.com/bayesgroup/code_transformers .", "Handling OOV identifiers in source code.", "Code processing often borrows ideas from NLP.", "Source code can be represented as a sequence of identifiers.", "In this case, identifiers can be further split into subtokens using byte-pair encoding (BPE) (Karampatsis et al., 2020; Sennrich et al., 2016), resulting in an open vocabulary model.", "This approach has several drawbacks.", "Firstly, splitting identifiers into subtokens increases the length of the sequence several times.", "This substantially slows down inference, e.g., the vanilla Transformer's forward pass has a complexity quadratic w.r.t. the input length.", "Secondly, splitting breaks the one-to-one alignment between identifiers and nodes in the parsing tree, e.g., the abstract syntax tree (AST); in other words, several subtokens correspond to one node in the AST, which makes it harder to apply structure-aware models such as (Hellendoorn et al., 2020) or (Alon et al., 2019a).", "To the best of our knowledge, all SCP works proposing structure-aware models either use entire tokens without subtokenization / BPE, or average the embeddings over subtokens (this strategy provides only a slight quality improvement compared to the first one), and the question of how to incorporate BPE in structure-aware models needs further investigation.", "Taking into account the described disadvantages of BPE, we do not consider BPE in this work and do not split tokens into subtokens.", "An orthogonal direction for handling OOV identifiers in source code is the modification of the computational graph.", "For the task of code generation, the pointer mechanism is widely adopted (Li et al., 2018).", "Cvitkovic et al. (2019) also propose a graph-structured cache for inferring the representations of rare identifiers in source code.", "The major drawback of the mentioned approaches is that they are quite hard to implement.", "Identifier anonymization in source code.", "Chirkova and Troshin (2020) conduct an empirical study of Transformers for source code in a setting with all identifiers being anonymized and show that Transformers can make meaningful predictions in this setting.", "By contrast, we propose anonymizing only OOV identifiers and show that it boosts the performance of the model in the setting with frequent identifier names being present in the data.", "The anonymization of all identifiers has also been used in (Gupta et al., 2017) and (Xu et al., 2019) for training recurrent neural networks.",
"Ahmed et al. (2018) replace variables with their types, losing information about identifier repetition.", "Consider a vocabulary of all identifiers in the training data.", "It could be a vocabulary of all tokens if we treat input code snippets as text sequences, or a vocabulary of all user-defined variables if we parse the ASTs of code snippets.", "Let us now select the vocabulary $V_{full}$ of frequent identifiers and call all others OOV identifiers.", "We propose replacing all OOV identifiers with placeholders VAR1, VAR2, VAR3, etc.", "All occurrences of one identifier in one input sequence are replaced with the same placeholder (anonymized identifier), but different identifiers are replaced with different placeholders.", "One identifier may be replaced with different placeholders in different input sequences.", "An example of OOV anonymization is presented in Figure 1.", "We consider two strategies for the OOV anonymization, namely ordered anonymization and randomized anonymization.", "The ordered anonymization implies assigning the anonymized identifier VAR1 to the first seen rare identifier, VAR2 to the next seen rare identifier, etc.", "For example, the snippet from Fig. 1 is transformed into VAR1 = np.sin(VAR2) + VAR2 .", "The randomized anonymization implies fixing the placeholder vocabulary size $|V_{an}|$ and selecting a random subset of anonymized placeholders VAR1, ..., VAR$|V_{an}|$ for each code snippet.", "For example, the snippet from Fig. 1 can be transformed into VAR38 = np.sin(VAR801) + VAR801 .", "To ensure that we can always encode the identifiers in a code snippet injectively, the size $|V_{an}|$ of the placeholder vocabulary should not be smaller than the maximum possible number of tokens per snippet.", "We set $|V_{an}|$ to the maximum length of code snippets.", "The proposed OOV anonymization can be seen as a preprocessing step, thus no model parts change.", "In the encoder, the embedding matrix contains embeddings for both anonymized and in-vocabulary identifiers: $\{e_v\}_{v \in V_{full}} \cup \{e_{VARi}\}_{i=1}^{|V_{an}|}$.", "In the decoder, when generating the next identifier, the softmax is computed over all anonymized and in-vocabulary identifiers.", "We note that the ordered OOV anonymization may need a more careful implementation, e.g., of the metric computation; see details in Section 4.",
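Both anonymization strategies can be sketched as a single preprocessing function; the code below is a minimal illustration of the description above, where `vocab` plays the role of V_full, `v_an` of |V_an|, and the input is assumed to be already tokenized into identifiers.

```python
# A sketch of ordered and randomized OOV anonymization.
import random

def anonymize(tokens, vocab, v_an: int, ordered: bool = True):
    ids = list(range(1, v_an + 1))
    if not ordered:
        random.shuffle(ids)         # randomized strategy: random placeholders
    mapping = {}
    out = []
    for tok in tokens:
        if tok in vocab:
            out.append(tok)         # frequent identifiers keep their names
        else:
            if tok not in mapping:  # same OOV identifier -> same placeholder
                mapping[tok] = f"VAR{ids[len(mapping)]}"
            out.append(mapping[tok])
    return out
```

For the snippet in Figure 1, calling this function in the ordered mode on the tokens of my_y = np.sin(my_x) with vocab = {"np", "sin", "=", ".", "(", ")"} yields VAR1 = np.sin(VAR2), matching the example above.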
"We conduct experiments with the Transformer (Vaswani et al., 2017) on the code completion (CC) and variable misuse (VM) tasks, on the Python150k (Raychev et al., 2016a) (the redistributable version of (Kanade et al., 2020)) and JavaScript150k (Raychev et al., 2016b) datasets.", "We follow the experimental setups of Hellendoorn et al. (2020) for the VM task, and of Kim et al. (2020) for the CC task.", "To validate our implementation, we check that the quality we obtain with the vanilla Transformer is the same as the quality of this model reported in the corresponding works; see details in Appendix B.", "As a base model, we use the 6-layer Transformer equipped with the relative attention mechanism (Shaw et al., 2018) and applied over the depth-first traversal of the AST.", "Chirkova and Troshin (2020) show that such an approach leads to high performance and outperforms the vanilla Transformer and several techniques for capturing AST structure in the Transformer.", "The hyperparameters are given in Appendix A.", "Allamanis (2019) and Chirkova and Troshin (2020) emphasize the importance of thoughtfully splitting data into training and testing parts, which includes splitting by repositories and removing duplicate code.", "We follow the same strategy in our experiments (later referred to as the custom train-test split).", "Variable misuse task.", "For the VM task, we use the same setup as in (Hellendoorn et al., 2020); below we briefly recap this setup.", "In the VM task, given the code of a function, the task is to output two positions (using two pointers): in what position a wrong variable is used, and which position a correct variable can be copied from (any such position is accepted).", "If a snippet is non-buggy, the first pointer should select a special no-bug position.", "We obtain the two pointers by applying two position-wise fully-connected layers, with a softmax over positions, on top of the Transformer encoder outputs.", "We use the joint accuracy to assess the model quality (the portion of buggy examples for which the model correctly localizes and repairs the bug).", "To obtain a dataset for the VM task, we select all top-level functions in the Python150k dataset, including functions inside classes, and filter out functions longer than 250 AST nodes, functions with fewer than three positions containing user-defined variables, and functions with fewer than three distinct user-defined variables.", "The resulting training / testing set consists of 417K / 231K functions (Python) and 202K / 108K functions (JavaScript).", "One function may occur in the dataset up to 6 times: 3 times with a synthetically generated bug and 3 times without a bug.", "The buggy examples are generated synthetically by choosing random bug and fix positions from positions containing user-defined variables.", "When using the ordered OOV anonymization, we firstly inject a synthetic bug and then perform anonymization.", "Figure 3: Results for the Transformer in the code completion task (value prediction) on the Python150k and JavaScript150k datasets (custom train-test split): mean reciprocal rank (MRR) and its standard deviation over 3 runs, as a function of the vocabulary size (in thousands), for OOV anonymization (ordered), OOV anonymization (random), Pointer, and Standard.",
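The two-pointer prediction described above could be implemented roughly as follows; treating a dedicated index as the special no-bug position is an assumption for illustration.

```python
# A sketch of the two-pointer head for the variable misuse task.
import torch.nn as nn

class TwoPointerHead(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.loc = nn.Linear(d_model, 1)  # localization pointer
        self.fix = nn.Linear(d_model, 1)  # repair (copy-from) pointer

    def forward(self, enc_out):
        # enc_out: (batch, seq_len, d_model) Transformer encoder outputs.
        loc_logits = self.loc(enc_out).squeeze(-1)  # (batch, seq_len)
        fix_logits = self.fix(enc_out).squeeze(-1)
        # A softmax over positions is applied when computing the loss.
        return loc_logits, fix_logits
```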
"Code completion task.", "For the CC task, we use the setup of Kim et al. (2020); below we briefly review it.", "The CC task implies predicting the type and value of the next node based on the prefix of the depth-first AST traversal.", "We predict the next type and value using two fully-connected heads on top of the Transformer decoder and optimize the sum of the cross-entropy losses for types and values.", "While computing the loss, we skip the first occurrences of anonymized values and the special positions, i.e., UNK and PAD.", "We tie the embeddings of the input and output layers.", "In this task, we split the large AST traversals into chunks with a maximum length of 500, as described in (Kim et al., 2020).", "The resulting dataset includes 186K / 100K training/testing chunks for Python and 270K / 220K for JavaScript.", "We use the mean reciprocal rank (MRR) to measure the quality of the model: $\mathrm{MRR} = \frac{100\%}{N} \sum_{i=1}^{N} 1 / \mathrm{rank}_i$, where $N$ is the total number of tokens in the dataset and $\mathrm{rank}_i$ is the position of the true token in the model ranking.", "We assign zero scores (a) if the correct token is not in the top 10 predicted tokens, (b) if the correct token is an UNK, and (c) for the first occurrences of anonymized identifiers.", "For the next value prediction task, we add the pointer mechanism to the Transformer for comparison.", "We re-implement the pointer mechanism following the design choice of (Deaton, 2019).", "Given an input sequence $[x_1, \ldots, x_\ell]$ of length $\ell$, the Transformer outputs two distributions: the distribution over the fixed vocabulary $V$, $p_{model}(a)$, $a \in V$, and the probability of copying the input from position $j$, $p_{copy}(j)$, $j = 1, \ldots, \ell$.", "Then both distributions are combined to obtain the final distribution over the extended vocabulary: $p(x_{\ell+1} = a) = p_{gen}\, p_{model}(a)\, [a \in V] + (1 - p_{gen}) \sum_{j=1}^{\ell} p_{copy}(j)\, [x_j = a]$.", "The switcher is computed given the current input and the output of the decoder as $p_{gen}(x_\ell, h_\ell) = \sigma(w_h^T h_\ell + w_i^T x_\ell + b_{gen})$.", "The cross-entropy loss is computed over the extended vocabulary.",
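The combination of the two distributions could be implemented with a scatter-add over the extended vocabulary, as sketched below; the tensor shapes are assumptions for the sake of illustration.

```python
# A sketch of the pointer mechanism's copy/generate mixture.
import torch

def extended_distribution(p_model, p_copy, p_gen, src_ids, ext_vocab_size):
    # p_model: (batch, |V|); p_copy: (batch, len); p_gen: (batch, 1);
    # src_ids: (batch, len) input token ids in the extended vocabulary.
    out = torch.zeros(p_model.size(0), ext_vocab_size,
                      device=p_model.device)
    out[:, : p_model.size(1)] = p_gen * p_model          # generate part
    out.scatter_add_(1, src_ids, (1 - p_gen) * p_copy)   # copy part
    return out  # p(x_{l+1} = a) over the extended vocabulary
```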
"We compare the proposed anonymization of OOV identifiers with the following baseline approaches: (1) Standard: with all OOV identifiers replaced with an UNK identifier; (2) training on fully anonymized data, i.e., with all identifiers anonymized.", "This baseline corresponds to the zero vocabulary size in all plots.", "For the code completion task, we also include the baseline with the pointer mechanism.", "Figure 2 presents the results for the variable misuse task, for different frequent identifier vocabulary sizes.", "We observe that the proposed approach, with the anonymization of OOV identifiers (dark blue and blue lines), performs substantially better than the baseline models, particularly the standard approach with OOV identifiers replaced with an UNK identifier (orange line).", "The leftmost point in both blue lines corresponds to the full anonymization baseline (zero vocabulary size).", "The ordered OOV anonymization (dark blue line) performs slightly better than or similarly to the randomized OOV anonymization (blue line).", "We also experimented with frequency-based OOV anonymization, i.e., sorting rare identifiers by their frequencies in the code snippet and assigning VAR1 to the most frequent one, VAR2 to the next one, etc.", "We found that such a strategy achieves the same quality as the ordered anonymization.", "Increasing the vocabulary size for the standard model does not help much and even hurts the performance, i.e., the standard model with a vocabulary of 50K identifiers outperforms the one with the largest possible vocabulary.", "The reason is that the embeddings of rare identifiers are updated only several times during training and do not change much after being initialized randomly.", "By contrast, anonymized identifiers occur quite frequently in the data, e.g., thousands of times, so their embeddings are updated regularly.", "As a result, it is more beneficial to anonymize rare identifiers than to include them in the vocabulary.", "The intuition behind why OOV anonymization performs well is that it preserves information about variable repetition and thus does not change the algorithm that the code snippet implements.", "For example, in the buggy snippet with open(myfnm) as myfp: data = myfnm.read() (should be myfp.read() ), the model with OOV anonymization detects that the OOV variables after as and before read are different and correctly predicts the bug, while the model with OOVs replaced with UNK does not distinguish the variables myfnm and myfp and cannot detect the bug.", "Figure 3 presents the results for code completion (value prediction), for different frequent identifier vocabulary sizes.", "In this task, the ordered OOV anonymization again slightly outperforms the randomized OOV anonymization, and they both substantially outperform the standard baseline and the baseline with full anonymization.", "Moreover, the proposed OOV anonymization surpasses the strong pointer baseline for almost all vocabulary sizes.", "The advantage of the proposed OOV anonymization approach is that it helps the Transformer to distinguish OOV identifiers in the input, while the pointer mechanism enhances only the output layer.", "Also, in contrast to the pointer mechanism, OOV anonymization is much easier to implement.", "The pointer mechanism and the OOV anonymization could be straightforwardly combined; however, in our experiments, this combination did not increase the scores compared to the maximum score of the OOV anonymization and the pointer.", "The results for type prediction are largely the same as for value prediction and can be found in Appendix C.", "We visualize the t-SNE representations of the learned embeddings in Appendix D.",
"The influence of the anonymized vocabulary size.", "The randomized OOV anonymization strategy comprises the hyperparameter $|V_{an}|$, i.e., the size of the anonymized vocabulary.", "It should not be less than the maximum sequence length, to avoid using the same placeholder for different identifiers, and we select $|V_{an}|$ as the maximum length of code snippets.", "We tried using larger values of $|V_{an}|$ and observed an insignificant difference in quality in the variable misuse task, and a slight drop in quality in the code completion task, as shown in Table 1.", "In this work, we propose an effective anonymization-based encoding of out-of-vocabulary identifiers, with two options, namely ordered and randomized OOV anonymization.", "Our preprocessing technique allows for easy implementation, can be easily plugged into various Transformer models, and outperforms the widely used standard approach by a significant margin.", "The ordered anonymization performs slightly better than the randomized anonymization but requires a more careful implementation.", "We would like to thank Ivan Rubachev and the anonymous reviewers for the valuable feedback.", "The results on the variable misuse task were supported by the Russian Science Foundation grant 19-71-30020.", "The results on the code completion task were supported by Samsung Research, Samsung Electronics.", "The research was supported in part through the computational resources of HPC facilities at NRU HSE." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "objective", "result", "abstain", "other", "other", "other", "other" ]
[ "We propose fill-in-the-blanks as a video understanding evaluation framework and introduce FIBER a novel dataset consisting of 28,000 videos and descriptions in support of this evaluation framework.", "The fill-in-the-blanks setting tests a model's understanding of a video by requiring it to predict a masked noun phrase in the caption of the video, given the video and the surrounding text.", "The FIBER benchmark does not share the weaknesses of the current state-of-the-art language-informed video understanding tasks, namely: (1) video question answering using multiple-choice questions, where models perform relatively well because they exploit linguistic biases in the task formulation, thus making our framework challenging for the current state-of-the-art systems to solve; and (2) video captioning, which relies on an open-ended evaluation framework that is often inaccurate because system answers may be perceived as incorrect if they differ in form from the ground truth.", "The FIBER dataset and our code are available at https: //lit.eecs.umich.edu/fiber/ .", "Despite current progress on multimodal (textual and visual) representations, language-informed video understanding is still a very challenging task for machine learning systems (Zhang et al., 2021; Li et al., 2021).", "This is due in large part to the task setup and the dataset construction.", "Current video understanding datasets often have at least one of two major limitations.", "First, they have limited application value.", "E.g., multiple-choice questions (Lei et al., 2018; Tapaswi et al., 2016; Jang et al., 2017; Castro et al., 2020) do not reflect real-world tasks.", "Second, they are based on subjective evaluation metrics, e.g., video captioning (Tran et al., 2016; Krishna et al., 2017; Zhou et al., 2018; Wang et al., 2019)), and are therefore hard to evaluate automatically, as the ground truth can be expressed in different ways.", "In this paper, we address these limitations by introducing a new dataset named FIBER that collects multiple perspectives on the same video, focusing on noun phrases as a proxy for different entities and their interactions in the video.", "Our data focuses on recall and tests the ability of models to capture a wide range of possible interpretations for a particular aspect of a video.", "We construct the FIBER dataset by systematically blanking captions from an existing video captioning dataset named VaTeX (Wang et al., 2019) and by providing additional correct answers for the blanks.", "VaTeX is a video captioning dataset that contains 40,000 10-second YouTube videos with 10 English captions per video.", "1 We build our 1 Licensed under Creative Commons, more information here: https://eric-xw.github.io/ vatex-website/index.html .", "video fill-in-the-blanks dataset by blanking random noun phrases from one of the English captions for each video, from a subset of VaTeX consisting of 28,000 videos.", "Through extensive analyses, we show that the blanked noun phrases are essential for understanding important visual aspects from the video.", "To address the fill-in-the-blanks task, we propose a Transformer-based (Vaswani et al., 2017) multimodal model.", "Our experiments show that our best multimodal model achieves a token-level F1 score of 71.4 while the F1 score of crowd workers is 82.5, indicating that this task is challenging for video and text understanding.", "The contribution of this work is threefold: (1) We propose a novel fill-in-the-blanks task as an evaluation framework that addresses the 
"The contribution of this work is threefold: (1) We propose a novel fill-in-the-blanks task as an evaluation framework that addresses the drawbacks associated with previous approaches to video understanding.", "In support of this framework, we introduce FIBER, which is a novel dataset of 28,000 videos and fill-in-the-blanks captions with multiple correct answers.", "(2) We propose several unimodal baselines and two multimodal models for solving this task.", "(3) We provide a detailed analysis of the data to measure the diversity and complexity of the answers, and also conduct an error analysis of the models' performance, to gain insights into the blanked captions and videos that are hard for the models to solve.", "Language-informed video understanding is a complex task that has been extensively addressed in multimodal (natural language and computer vision) machine learning research through diverse tasks and benchmarks.", "Multiple-Choice Video Understanding.", "Multiple-choice benchmarks consist of identifying the only correct answer from a set of distractors, where the set of possible answers varies depending on the input.", "Video Question Answering (Video QA), a popular format, consists of answering questions based on the video content.", "Numerous multiple-choice Video Understanding benchmarks have been proposed, such as TVQA (Lei et al., 2018), MovieQA (Tapaswi et al., 2016), TGIF-QA (Jang et al., 2017) (Repetition Action and State Transition tasks), LifeQA (Castro et al., 2020), PororoQA (Kim et al., 2017), MarioQA (Mun et al., 2017), VCQA (Zhu et al., 2017), VideoMCC (Tran et al., 2016), and ActivityNet QA (Yu et al., 2019).", "However, they provide choices and are thus easier to solve than generating arbitrary text.", "A further drawback is that the performance without the visual input is generally already high, as models are able to exploit biases in the dataset (Agrawal et al., 2018) or rely on other modalities that overlap in functionality with the visual one.", "Video Captioning.", "Video Captioning consists of generating a piece of text that describes a given video.", "This task can be carried out using multiple datasets, such as ActivityNet Captions (Krishna et al., 2017) (which also features Dense-Captioning), YFCC100M (Thomee et al., 2016), (Alayrac et al., 2016), DiDeMo (Anne Hendricks et al., 2017), MSR-VTT (Xu et al., 2016), YouCook2 (Zhou et al., 2018), How2 (Sanabria et al., 2018), HowTo100M (Miech et al., 2019), VaTeX (Wang et al., 2019), TGIF (Li et al., 2016), MovieNet (Huang et al., 2020), LSMDC (Rohrbach et al., 2017), and TGIF-QA (Li et al., 2016) (Frame QA task).", "Due to the diversity of the captions provided, Video Captioning benchmarks do not present a high human agreement and are thus hard to evaluate automatically with certainty (Aafaq et al., 2019).", "Video Understanding Based on Filling Blanks.", "The VideoBERT (Sun et al., 2019b), CBT (Sun et al., 2019a), UniVL (Luo et al., 2020), ActBERT (Zhu and Yang, 2020), and HERO (Li et al., 2020) methods propose masking random parts of the input from text and video pairs for training.", "However, they do this only for the purpose of system training and do not use the framework to test and evaluate video understanding.", "The only exception is MovieFIB (Maharaj et al., 2017), which employs a video fill-in-the-blanks scheme based on LSMDC (Rohrbach et al., 2017) for both training and evaluation.", "However, these methods have several drawbacks.", "They blank a single word, which makes it easier to guess; they evaluate correctness with a single ground-truth answer per caption; and they focus on the movies domain (we focus on YouTube videos).",
Work.", "The most similar work to ours is VidQAP (Sadhu et al., 2021), which presents an evaluation framework to fill in blanks with phrases using semantic roles based on ActivityNet Captions (Krishna et al., 2017) and Charades (Sigurdsson et al., 2016); unlike this existing work, we design our benchmark to feature a high 2926 human accuracy (avoiding ActivityNet Captions as it is contextualized, collecting multiple correct answers, and showing a high human performance).", "Our work is also close to (Yang et al., 2021) on evaluating the use of free-form QA; however, they employ a small vocabulary and no human accuracy that serves as an upper bound for the task.", "The novelty of our work lies in our use of a hard task (a considerable gap between human and best model performance) that measures a form of video understanding while at the same time yielding a high human performance due to the large number of possible correct answers we collected ( 13 per caption) from multiple annotators ( 9 per caption).", "We construct FIBER a large video understanding dataset that can evaluate the ability of a model to interpret and use a multimodal context by requiring the models to fill in (generate) a blank (a missing constituent) in this context.", "We build FIBER by following two main steps: (1) data generation, where we compile a large set of video-caption pairs with selectively blanked words; and (2) data annotation, where crowd workers provide additional valid answers for these blanks.", "Note that we could also develop a fill-in-the-blanks dataset by completing only the first step: the data generation.", "However, this would result in only one valid answer (the original blanked word or phrase), which can lead to unfair evaluations that are too strict because of alternative correct answers being dismissed (e.g., child provided as an answer where the blanked word was kid).", "Other than manual annotations, we found no high-quality method to automatically obtain additional correct answers.", "For example, building and t-shirt in Table 7 are too dissimilar but both are correct, pink and yellow in Fig. 
"The dataset is constructed starting with the VaTeX (Wang et al., 2019) dataset.", "VaTeX is a multilingual video captioning dataset, consisting of over 41,250 video clips, each of which is taken from a unique public YouTube video and lasts around 10 seconds.", "For each video clip, there are 10 English and 10 Chinese captions associated with it.", "We choose to mask only noun phrases for three main reasons.", "First, noun phrases often require visual information for identification or understanding.", "They cover a large variety of information regarding visual content, as their head nouns can describe people, objects, scenes, events, and more.", "A model often needs to identify the related objects in the videos, as well as the properties of objects (e.g., color, number, or size), to fill the blank correctly.", "Second, nouns are usually essential to the understanding of visual content and serve as reliable predictors of the ability of a system to understand a video.", "Other phrase types, such as verbs or adjectives, can more easily be guessed from the text alone, ignoring the visual information.", "To illustrate, consider the example A woman _____ in the pool, where a model can easily predict that the blank should be swims from the textual content only, which would not be the case for A woman swims in _____, where the blank could be completed by sea, pool, lake, water, and other similar nouns.", "Third, in preliminary experiments, we found that nouns lead to more robust annotations as compared to, e.g., adjectives, which can have low inter-annotator agreement due to their subjectivity.", "As an example, consider the phrase A _____ hill stands behind the house., where the blank could be filled with a color property, a size property, or another attribute.", "For each video, we choose the first English caption that contains at least one noun phrase as detected by spaCy (Honnibal et al., 2020), and randomly blank one of these noun phrases to generate an instance.", "We used the model en_core_web_trf from spaCy v3.", "An error analysis identified only three tagging errors in a sample of 247 sentences.", "Accordingly, we generate our training, validation, and test data starting with the VaTeX v1.1 training set, a random subset of size 1,000 from the validation set, and a random subset of size 1,000 from the test set, respectively.",
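The caption-blanking step above is simple to reproduce. Below is a minimal Python sketch using spaCy's noun-chunk detector (the model en_core_web_trf is the one reported above); the function name and the _____ placeholder are our illustrative assumptions, not the authors' released code.

import random
import spacy

nlp = spacy.load("en_core_web_trf")  # the tagger reported in the footnote above

def blank_first_usable_caption(captions):
    """Return (blanked_caption, answer) for the first caption with a noun phrase."""
    for caption in captions:
        doc = nlp(caption)
        noun_phrases = list(doc.noun_chunks)
        if noun_phrases:
            chunk = random.choice(noun_phrases)  # randomly blank one noun phrase
            blanked = caption[:chunk.start_char] + "_____" + caption[chunk.end_char:]
            return blanked, chunk.text
    return None  # no caption with a detectable noun phrase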
"We performed a crowdsourced annotation procedure to collect additional correct answers for each blank in the validation and test sets.", "As highlighted earlier, the main reason for collecting these additional annotations is to reflect the natural diversity of language and have multiple alternative answers for each blank.", "We use Amazon Mechanical Turk (AMT) for the annotation.", "Figure 2 shows the annotation interface and a highlight of the data collection instructions (additional guidelines were provided, not shown here for space reasons).", "For each blanked caption, workers were presented with a video clip along with the corresponding masked caption.", "They were then asked to fill in the blank with a noun phrase.", "We blanked multi-word spans for the task, rather than single-word noun phrases, because blanking a single noun at a time led to lower annotator agreement in preliminary experiments, likely due to the lower likelihood of overlap.", "For example, annotator 1 might write young boy and annotator 2 might write young child, which would have at least some overlap, as compared to boy and child (no overlap).", "We also asked annotators to provide answers in confidence-descending order (the first answer should be the most natural one to the annotator).", "We presented five videos in each Human Intelligence Task (HIT).", "Nine workers annotated each of them with at least two answers for each blank.", "We paid a bonus for each extra answer for each blanked caption, from the second one to the fifth one, to encourage them to provide more answers.", "We calculated a $12 hourly rate for a worker who provides at least five answers.", "We estimated the time to annotate one video to be 30 seconds.", "Consequently, the HIT pay rate was $0.20, which could result in a total of $0.50 with the added bonus.", "Additionally, we offered another bonus of $0.20 to the worker with the largest number of correct answers for every HIT, to encourage them to provide more than five answers.", "We required workers to be in Canada or the United States, and to have completed at least 1,000 HITs on AMT with at least a 92% approval rate.", "We restricted the task to these countries because residence there is a good proxy for proficient English and because our task received lower-quality responses otherwise.", "The interface also checked that, for a given worker and caption, the answers were different.", "For this, we first normalized the answers by lower-casing, stripping punctuation and extra spaces, and removing the determiners the, a, and an.", "During the annotation, we manually reviewed a sample to identify cases of incorrectly tagged noun phrases (e.g., inside marked as a noun when it should be a preposition) and factually incorrect noun phrases (e.g., referring to bags as eggs without any information on the contents of the bags); we disqualified workers who consistently provided incorrect annotations.", "After collecting annotations, we filtered for noun phrases using the same method as before, based on whether the text is parsed as a noun phrase (including bare nouns, e.g., man is walking), a wh-phrase (who is speaking), a simple gerund (eating is a good way to stay healthy), or an infinitive (to eat is wonderful).",
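The answer normalization described above (lower-casing, stripping punctuation and extra spaces, removing determiners) can be captured in a few lines. This is a hedged sketch of those steps, not the authors' exact implementation.

import string

DETERMINERS = {"the", "a", "an"}

def normalize_answer(answer):
    answer = answer.lower()
    answer = answer.translate(str.maketrans("", "", string.punctuation))  # strip punctuation
    tokens = [t for t in answer.split() if t not in DETERMINERS]  # also collapses extra spaces
    return " ".join(tokens)

# e.g., normalize_answer("The young Boy!") == "young boy"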
"We compute summary statistics on the annotated data to determine the degree of similarity with the originally blanked phrases.", "The statistics are shown in Table 1.", "We find that, in general, annotators tend to provide around 3 unique answers per blank.", "Compared to the original phrases, annotators tend to use about the same number of tokens.", "Annotators also use visual words at a much lower rate than the original phrases, possibly because the task encouraged the annotators to generate as many distinct nouns as possible without regard to descriptive information.", "To further validate the utility of the annotations collected in this study, we provide an extensive", "analysis of the answers (obtained from the union of the annotations and the originally blanked phrases).", "We compute the most-frequent answers and find, as expected, that noun phrases related to person are the most frequent: the word man appears in 5.7% of total original phrases and 1.2% of total annotations (see Figure 5 in the Appendix).", "Note that our annotations have a long-tail distribution, as the most-frequent noun phrase appears in only 1.2% of total annotations.", "In addition, we find that answers related to person, such as another person, are not trivial.", "On the contrary, in the third example in Fig. 1, a model has to reason about the actions of both persons and distinguish between them.", "The other two examples in Fig. 1 also reflect how a model needs to understand both the video and the text in order to complete the blanks.", "Figure 3 shows what kinds of answers are depicted in the videos.", "This analysis shows the diversity and complexity of the answers that a model needs to fill in while demonstrating strong video understanding.", "As expected, the cluster Person-related has the most answers, followed by the clusters Objects (e.g., shoes, glasses), Places (e.g., mountain, street), Materials (e.g., metal, wood), and Body parts (e.g., fingers, head).", "Note also that the Person-related cluster, among more typical answers such as male and female, also contains complex and diverse answers such as dancer, workers, musician, or audience.", "To establish a reference for the machine models, we compute the agreement among annotators using the evaluation metrics described in Section 5.1, which we also use for model evaluation (Section 5.2).", "Specifically, we apply a leave-one-out strategy to construct the test set and the ground truth set.", "We compare the first answer provided by each crowd worker (which is their most natural/confident answer) against the complete set of answers provided by the other crowd workers, using maximum F1 score (token overlap) and maximum exact match (EM) as agreement metrics, as described in Section 5.1.", "Table 2 shows the inter-annotator agreement.", "We show the mean values of the agreement metrics per-caption and per-answer (recall there are multiple answers per caption, so in the former case we first average among the answers within the caption and then across the captions).", "Figure 3: The 2D t-SNE (Van der Maaten and Hinton, 2008) representation of the clustering of the top 100 most frequent answers provided for the blanks; clusters include Person-related, Place, Object, Body part, Material, Position, and Other.", "The higher rates of agreement at the caption level, compared to the answer level, indicate a high amount of answer diversity among the workers.", "To validate the quality of the crowdsourced annotations, we also compare them against human annotations collected from two trusted annotators (both researchers at the University of Michigan).", "We sample 200 captions from the validation set and ask these two annotators to perform the same labeling task that the MTurk workers performed, and then compare their agreement with the crowdsourced data.", "The annotators obtain a per-caption average of 90.2% F1 score and 49.0% exact match accuracy, comparable to the agreement scores of the workers.", "NPs vs. other phrases.",
other phrases.", "By looking at a video and filling a blank caption with a noun phrase can sometimes indirectly capture other aspects such as actions (verbs, adverbs) and object quality (adjec-tives, modifiers).", "However, this is not always the case.", "This is especially true for noun phrases that are easier to guess (cf. Table 4).", "Focus on human actions.", "Our data focuses mostly on human-related activities (e.g., sports), and may lack general representation available in other datasets related to animals, nature, and technology, to name a few.", "Availability of the videos.", "As we build upon VaTeX (Wang et al., 2019) and YouTube, some videos may become unavailable over time.", "To mitigate this issue, the VaTeX website offers to download pre-extracted video features.", "5 Efficiency of the data annotation process.", "Not all videos have multiple possible captions for noun phrases.", "For example, the fork may be the only reasonable answer for a given video and blanked caption, and annotators may not have anything else to add.", "We propose an encoder-decoder multimodal method to perform the task of video fill-in-the-blanks.", "We first encode the text and visual modalities together to obtain a semantic representation of the blanked caption and video.", "The decoder uses the semantic representation to generate text corresponding only to the answer to the blank.", "To correctly generate an answer, a model needs to learn which parts of videos relate to the missing parts of the caption.", "To accomplish this, we use the original Transformer architecture (Vaswani et al., 2017), whose self-attention mechanism is particularly effective for encoding relations within an input sequence and have been shown to perform well in many language understanding tasks.", "We consider two types of encoders, namely the early-fusion encoder and the late-fusion (two-stream) encoder.", "The structure of our multimodal 5 https://eric-xw.github.io/ vatex-website/download.html Transformer Encoder _____ performs a shot put at an outdoor course.", "model with an early-fusion encoder is shown in Fig. 4a.", "The input to the model consists of the tok-enized blanked caption-text t 1 , . . . , t n , as well as a representation of the video consisting of multiple video sequence features v 1 , . . . , v m from a video feature extractor.", "The blanked captions are embedded by an embedding layer.", "The video features are projected into the encoder by a linear layer.", "We use a special token to represent the masked phrase and another one to separate the input text and video sequences.", "We add positional embeddings to each input token or video feature to represent the sequence order, and another embedding to indicate whether it belongs to the text or video sequence similarly to BERT (Devlin et al., 2019).", "The late-fusion model is shown in Fig. 
4b.", "The late-fusion model encodes the language and video first separately and then jointly.", "This is because the modalities may benefit from learning independently about their own context before using them together.", "For the video encoder, we use the existing I3D (Car-reira and Zisserman, 2017) features (size 1024 every 8 consecutive frames) provided by the VaTeX dataset (Wang et al., 2019), in which videos were sampled at 25 fps.", "We initialize our multimodal model using T5 (Raffel et al., 2020), given its ability to fill in variable-length blanks.", "T5 is an encoder-decoder Transformer (Vaswani et al., 2017) model that is a good starting point as it provides state-of-the-art performance on text-only tasks and it was pretrained to fill arbitrary-length text spans that were previously masked.", "Building upon T5 allows our model to not only leverage the pre-trained large-scale language models that already have strong language abilities but also to fuse it with visual inputs.", "We initialize the early-fusion model with pretrained T5-base weights.", "For the late-fusion model, we use T5-base for the text encoder and for the decoder.", "We use two one-layer transformers to encode videos and fuse text and video features, and the weights of these two transformers are randomly initialized.", "Following T5 model implementation, the special token <extra_id_0> is used to represent the blanked phrase, and <\\s> is used to separate the text and video sequences.", "The generated output follows T5 output format: the special token <extra_id_0> followed by the predicted text for the blanked phrase.", "See Appendix B.1 for more details.", "Most Frequent Answer.", "The baseline makes use of the most frequent answer in the training set (a man) as the answer to all the blanked captions during evaluation.", "Text-based Transformer.", "Previous visual question answering datasets found that a text-only model can nearly match the performance of the multimodal system (Antol et al., 2015).", "We analyze the degree to which language alone can contribute to our video understanding framework by conducting experiments based on text-only models.", "We use the off-the-shelf T5-base transformer model (Raffel et al., 2020) as our baseline model.", "We use both a zero-shot model (not trained on our data) and a fine-tuned model.", "For the latter, we use the base model v1.1 because it performed better in our experiments on the validation set.", "The decoding hyperparameters are the same as in the multimodal models, except the beam size is 8 for both the zero-shot one and 2 for the fine-tuned variant as we obtained the best validation results for each one using these beam sizes.", "Single video feature.", "We consider using a single I3D feature per video to determine how well the model does with a small portion of the video.", "Based on a study of 50 randomly sampled videos, the blanked entity in the caption appeared 95% of the time in the third second of the video (see Fig. 
"Most Frequent Answer.", "This baseline uses the most frequent answer in the training set (a man) as the answer to all the blanked captions during evaluation.", "Text-based Transformer.", "Previous work on visual question answering found that a text-only model can nearly match the performance of a multimodal system (Antol et al., 2015).", "We analyze the degree to which language alone can contribute to our video understanding framework by conducting experiments based on text-only models.", "We use the off-the-shelf T5-base transformer model (Raffel et al., 2020) as our baseline model.", "We use both a zero-shot model (not trained on our data) and a fine-tuned model.", "For the latter, we use the base model v1.1 because it performed better in our experiments on the validation set.", "The decoding hyperparameters are the same as in the multimodal models, except for the beam size, which is 8 for the zero-shot variant and 2 for the fine-tuned variant, as we obtained the best validation results for each one using these beam sizes.", "Single video feature.", "We consider using a single I3D feature per video to determine how well the model does with a small portion of the video.", "Based on a study of 50 randomly sampled videos, the blanked entity in the caption appeared 95% of the time in the third second of the video (see Fig. 11 in the Appendix).", "For this method, we pick the I3D feature which corresponds roughly to that moment and apply it to the proposed multimodal methods instead of using all the video features.", "Note that I3D takes a window of 16 frames as input, which in our case corresponds to 640 milliseconds, centered at the mentioned moment within the video.", "This can be seen as a small generalization of the Image Understanding task, which considers a single image (frame).", "We perform experiments and evaluations using the dataset described in Section 3.", "We use exact match accuracy and ROUGE-1 F1 score (token-level) (Lin, 2004) to evaluate the output of the generation models and to evaluate human agreement (Section 3.4).", "For the exact match, we count a generated text string as correct if it has at least one string-level match among the provided annotations.", "For the token-level F1, we compute the token overlap (true positives) between the generated text string and each annotation, normalized by the sum of the true positives and the average of the false negatives and false positives.", "We then compute the maximum across all annotations.", "For all evaluations, we computed the metrics based on the normalized text (i.e., without articles).",
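Stated in code, the two metrics look as follows; token-level F1 reduces to TP / (TP + (FP + FN) / 2), and both metrics take the maximum over the annotation set. This sketch assumes strings were already normalized as in Section 3.

from collections import Counter

def token_f1(prediction, reference):
    pred, ref = prediction.split(), reference.split()
    tp = sum((Counter(pred) & Counter(ref)).values())  # token overlap (true positives)
    if tp == 0:
        return 0.0
    fp, fn = len(pred) - tp, len(ref) - tp
    return tp / (tp + (fp + fn) / 2)

def score(prediction, annotations):
    em = max(float(prediction == a) for a in annotations)   # exact match
    f1 = max(token_f1(prediction, a) for a in annotations)  # token-level F1
    return em, f1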
4b).", "Both the early-fusion and the late-fusion multimodal models perform worse with a single I3D feature.", "This suggests that the model benefits from the whole video to correctly answer the caption.", "We also find a large performance gap between the multimodal model performance and the human performance.", "Therefore, plenty of space exists for improvements to achieve human performance, and the video fill-in-the-blanks task is worth investigating in future visual understanding research.", "Results per Semantic Label.", "To measure how well the model understands different patterns in the caption data, we compare the predictions generated for blanks corresponding to words of different semantic categories (the rest of the answers generally belong to the same category as the blanked words).", "Two of the authors annotated the originally blanked phrases for common non-overlapping semantic categories, including people, passive entities, and lo-Category Size (%) T5 zs T5 ft T5 + I3D Passive entity 40.4 52.9 63.6 63.6 Person 33.4 37.0 81.8 83.2 Pronoun 6.1 73.5 85.6 84.3 Location 5.5 55.1 74.5 75.4 Preposition 4.5 81.6 95.7 97.5 Action 3.9 47.8 65.5 59.9 Audio 2.5 56.4 73.0 63.6 Abstract 2.2 59.6 70.0 77.9 Other 1.5 56.9 75.0 83.7 Event 1.0 70.0 68.0 84.0 Table 4: F1 scores on the validation set for blanks with different semantic categories, in descending order based on their size.", "We list the categories and their distribution/size in Table 4, and we also show the performance for the best text-only zero-shot method (T5 zero-shot), text-only fine-tuned method (T5 fine-tuned), and multimodal method (T5 + I3D).", "The results of T5 zero-shot show some categories can be easily predicted, without fine-tuning on the dataset, namely Preposition , Pronoun , and Event .", "However, fine-tuning T5 on our dataset yields improvements for nearly all categories.", "The multimodal (T5 + I3D) model improves the categories of Person and Abstract nouns but performs worse for others, namely Audio and Action .", "This finding follows from the fact that understanding higher-order audio and visual concepts requires complex reasoning, for which the video-aware model may need more training.", "In general, Action and Passive entity will likely require extra attention in future work, considering the comparatively low performance for these categories.", "gain insights on how to improve our models for future work, we measure where our best model (T5 +", "I3D) fails and humans perform well.", "We find three main types of wrong predictions.", "The most common error is predicting man instead of women, followed by predicting person instead of child or baby.", "The majority of the remaining errors are predictions close to the ground truth answers such as dance instead of exercise, pillow instead of sheets, rug instead of sand, floor instead of court, knife instead of spatula or basketball game instead of wrestling.", "Based on these types of errors, in future work, the model would benefit from pre-training on unbiased data (both gender and age) and also from pre-training on a large-scale multimodal (language and video) dataset, to learn about more diverse situations and objects.", "This paper introduced the fill-in-the-blanks evaluation framework for video understanding.", "The framework addresses drawbacks of alternative video understanding tasks, such as multiple-choice visual question answering or video captioning.", "Our paper makes three important contributions.", "First, we introduced FIBER , which is a large 
"First, we introduced FIBER, a large dataset consisting of 28,000 videos and tests based on filling blanks, building upon an existing video captioning dataset with a new set of manual annotations, and using a modified annotation framework to encourage diverse responses among annotators.", "This process can be easily replicated to create new fill-in-the-blanks data for other datasets and tasks.", "Second, we conducted extensive analyses on the dataset to evaluate the quality of the annotations and to understand the patterns and limitations of the data.", "Finally, we introduced a multimodal model that fuses language and visual information, and found that the video-aware models significantly outperform the text-only models.", "Notably, we found a consistent gap between model performance and human performance, which suggests room for improvement in future models addressing video understanding through the lens of the fill-in-the-blanks task.", "Even though we compensated the annotators based on the quality of the answers they produced (and stated so in the instructions), they were also rewarded based on the number of answers they provided, since we looked for diversity.", "These incentives may have encouraged the annotators to make many judgments quickly and therefore make biased decisions.", "Due to these biases, we cannot guarantee that annotators' guesses always match reality.", "Based on spot-checking, it seems that annotators made reasonable judgments, but others may disagree.", "We have also observed that our data is skewed toward more male noun phrases (cf. Appendix A.5), which could be due to a bias both in VaTeX and in the annotators we hired.", "Our evaluation weights all errors equally, even though some errors may have a bigger impact than others.", "For example, someone in a video may be misgendered by being referred to as a man when the correct reference should be woman.", "We thank Laura Biester for helping with data quality assurance.", "We thank the following people for reviewing drafts of this document: Artem Abzaliev, Christine Feak, Victoria Florence, Zhijing Jin, and Max Krogius.", "We also want to thank the LIT Research Group @ UMich members for feedback on some of the ideas discussed here.", "This material is based in part upon work supported by the Automotive Research Center (ARC).", "Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of ARC or any other related entity." ]
[ "objective", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "other", "abstain", "result", "abstain", "result", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "result", "result", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other" ]
[ "Conditional Variational AutoEncoder (CVAE) effectively increases the diversity and informativeness of responses in open-ended dialogue generation tasks through enriching the context vector with sampled latent variables.", "However, due to the inherent one-to-many and many-to-one phenomena in human dialogues, the sampled latent variables may not correctly reflect the contexts' semantics, leading to irrelevant and incoherent generated responses.", "To resolve this problem, we propose Self-separated Conditional Variational AutoEncoder (abbre-viated as SepaCVAE ) that introduces group information to regularize the latent variables, which enhances CVAE by improving the re-sponses' relevance and coherence while maintaining their diversity and informativeness.", "SepaCVAE actively divides the input data into groups, and then widens the absolute difference between data pairs from distinct groups, while narrowing the relative distance between data pairs in the same group.", "Empirical results from automatic evaluation and detailed analysis demonstrate that SepaCVAE can significantly boost responses in well-established open-domain dialogue datasets.", "When conversing with a human user, an open-domain dialogue system is expected to generate human-like responses responses that not only are diverse and informative, but also contain relevant and cohesive information that correctly addresses the context dialogue.", "Through using sampled latent variables, Conditional Variational AutoEncoders (CVAE) are powerful tools to ensure diversity and informativeness of the generated responses (Bow-man et al., 2016; Serban et al., 2017; Shen et al., 2017; Zhao et al., 2017; Chen et al., 2018).", "Yet, it is challenging for a CVAE-based dialogue generation model to keep the responses relevant and c 1 z 1 c 2 z 2 Latent Space low similarity Semantic Space high similarity c 1 c 2 z 1 z 2 Latent Space low y similarity Semantic Space highhih similarity c 3 z 3 high similarity low similarity Figure 1: In this example, the latent variables ( z 1 , z 2 , z 3 ) sampled by a general CVAE model don't inherit the semantic relationship of the contexts ( c 1 , c 2 , c 3 ) .", "coherent.", "The challenge arises as human dialogues inherently exhibit the one-to-many and many-to-one phenomena (Csaky et al., 2019), meaning that the same context could lead to very different responses, and different contexts could lead to the same response, respectively.", "As a result, the latent variables sampled by CVAE often fail to capture the correct contextual semantics, as shown in Fig. 1, leaving open the possibility that similar contexts producing drastically different latent variables.", "This has two particular drawbacks: First, the discrepancy between latent variables could lead to irrelevant and incoherent generated responses .", "Different latent variables in a continuous latent space correspond to different responses (Bowman et al., 2016).", "As dissimilar latent variables may be sampled for similar contexts, the generated responses for contexts in the test set could be drastically different from responses to similar contexts in the training set.", "For instance, given a context Everything about this movie is awesome! , a standard CVAE may generate response as dissimilar as Smartphones of the best games!. and Caves would never say yes, but I'd love to know. 
(Gao et al., 2019).", "Thus this approach sacrifices too much relevance and coherence for diversity and informativeness.", "Second, the disparity between contexts and latent variables hurts model generalizability .", "Model generalizability is often evaluated using a separate dataset taken from a similar distribution as the training set ( e.g. , a validation or a noisy version of the training set).", "High generalizability is indicated if the model can transfer favourable abilities from the training set to this second dataset, in the sense that it produces consistent responses between similar contexts across the two datasets.", "This suggests that the model has acquired certain semantic relations between sentences from the training set.", "However, if the sampled latent variable departs significantly from the contextual semantics, the model may perform quite differently on the second dataset from the training set.", "To address these drawbacks, we propose a novel model, namely Self-Separated Conditional Variational Autoencoder ( SepaCVAE ).", "SepaCVAE proactively partitions the input data into a number of groups, and then widens the absolute differences between data pairs across different groups while narrowing the relative distance between data pairs within the same group.", "In this way, SepaCVAE aims to put the contexts that sample similar latent variables into the same groups, thereby regularizing the latent variables.", "The design of SepaCVAE involves three components that are built on top of standard CVAE.", "First, inspired from image augmentation, we propose a dialogue augmentation method to partition data without any prior knowledge.", "For this, we construct N orthogonal vectors to classify data into N groups, which retain the original semantic relationships of data within a group.", "We directly enlarge the semantic distance of the data across different groups.", "Then, we propose a gradient blocking algorithm to select the most suitable group for each data according to gains obtained from different groups.", "Here, the gains are evaluated using reconstruction loss.", "Finally, inspired from the contrastive learning paradigm (Cai et al., 2020; Chen et al., 2020a,b; Mitrovic et al., 2020), we propose relationship enhancement to increase similarity between the representations of data within the same group, and differentiate the representations of data between different groups.", "Contributions: Our first contribution is a theoretical analysis on why sampled latent variables fail to reflect the contexts' semantics.", "The next contribution lies in the proposal of SepaCVAE to overcome issues of irrelevant and incoherent responses caused by standard CVAE.", "Our third contribution involves a series of experiments.", "The results show that our SepaCVAE can generate more relevant and coherent responses compared to existing methods.", "Open-domain dialogue generation is a challenging task in natural language processing.", "Early dialogue models (Shang et al., 2015; Sordoni et al., 2015b) often tend to generate dull responses.", "To improve the quality of these responses, two pathways have been adopted: one is to introduce external semantic information, such as dialogue history (Sordoni et al., 2015a; Serban et al., 2016), topic (Xing et al., 2017), sentiment (Huber et al., 2018), knowledge (Ghazvininejad et al., 2018), persona-style (Li et al., 2016c), and other information (Li et al., 2016a; Wang et al., 2017; Baheti et al., 2018; Feng et al., 2020b).", "The other is through more 
"The other is through more complex models or frameworks, such as attention mechanisms (Bahdanau et al., 2015; Luong et al., 2015), reinforcement learning (RL) (Li et al., 2016d; Zhang et al., 2018a; Liu et al., 2020), generative adversarial networks (GAN) (Yu et al., 2017; Li et al., 2017a; Zhang et al., 2018b; Feng et al., 2020a), and variational reasoning (Bowman et al., 2016; Serban et al., 2017; Shen et al., 2017; Zhao et al., 2017; Chen et al., 2018).", "CVAE models are conversational models that are based on variational reasoning.", "Many existing CVAE models have achieved state-of-the-art performance by generating diverse and informative responses.", "Moreover, as opposed to methods that introduce external semantic information, CVAE models use latent variables to represent such information.", "Hence they can be applied when external information is not available.", "Compared with models based on RL or GAN, CVAE models are simpler and can be easily trained.", "In addition, CVAE models can be enhanced by methods that use RL or GAN as generators to further improve their performance.", "While sampling latent variables may make the generated responses more diverse and informative, it can also reduce their relevance and coherence.", "To alleviate this apparent issue, CVAE models have been used in combination with external information such as persona information, dialogue history, and dialogue acts (Shen et al., 2017; Serban et al., 2017; Zhao et al., 2017).", "However, simply borrowing external information is not sufficient to resolve the one-to-many issue, especially when the amount of data is very large.", "No existing model resolves the core issue of the problem, that is, the latent variable inherits little semantic information from the context sentence, a consequence of the inherent one-to-many and many-to-one phenomena of human conversations.", "To address this issue, we propose the SepaCVAE model, which trains latent variables that inherit contextual semantics.", "Recently, self-supervised methods such as contrastive learning, popularized in computer vision (Chen et al., 2020a,b), are drawing increasing attention in NLP (Wu et al., 2019; Clark et al., 2020; Cai et al., 2020).", "Generally speaking, the major issue with applying contrastive learning is how positive and negative examples are constructed.", "Many existing works explore ways to design reasonable pairs of positive and negative examples to accurately capture the semantic relations of these pairs, so that the obtained representation can be better used on downstream tasks.", "The problem with the standard CVAE model lies in that the sampled latent variables may not accurately reflect the contextual semantics due to the apparent one-to-many (one context may correspond to many responses) and many-to-one (many contexts may also correspond to one response) phenomena.", "This leads to irrelevant and incoherent responses, and harms model generalizability.", "Our aim is to adapt sampled latent variables to capture the contextual semantics, so that the effects of these phenomena are neutralized.", "This will in turn be helpful for generating relevant and coherent responses.", "With this goal, we focus on single-turn dialogue datasets, where one-to-many situations appear more frequently than in multi-turn dialogue datasets.", "This section formally analyzes the many-to-one and one-to-many phenomena; we also present several important assumptions and contextual information (i.e., preconditions) for the CVAE model.", "Notations: $\phi$ and $\theta$ are the parameters of CVAE's recognition network and prior network, respectively; c represents the condition information, x and r represent the generation target, and z represents the latent variable.",
"Precondition 1: Bowman et al. (2016) confirmed that the latent space is continuous; the latent variable z is highly correlated with the target data x, meaning that different z will reconstruct different x.", "Precondition 2: CVAE has a recognition network $q_\phi(z|c,x)$ and a prior network $p_\theta(z|c)$ to approximate the true posterior distribution $p(z|c,x)$ and prior distribution $p(z|c)$, respectively.", "These distributions are assumed to follow the Gaussian distribution, e.g., $q_\phi(z|c,x) \sim \mathcal{N}(\mu, \sigma^2)$.", "Precondition 3: To efficiently train a CVAE model, the Stochastic Gradient Variational Bayes (SGVB) framework (Sohn et al., 2015; Yan et al., 2016; Kingma and Welling, 2014) is adopted, which aims to maximize the variational lower bound of the conditional log likelihood: $\mathcal{L}(\theta, \phi; c, x) = -\mathrm{KL}(q_\phi(z|c,x) \| p_\theta(z|c)) + \mathbb{E}_{q_\phi(z|c,x)}[\log p_\theta(x|z,c)]$ (1), where KL represents the Kullback-Leibler divergence.", "During training, the $\sigma$ of $q_\phi(z|x,c)$ will get smaller and smaller, and the $\mu$ of $q_\phi(z|x,c)$ will get closer and closer to the z corresponding to x, which aims to stabilize $\mathbb{E}_{q_\phi(z|x,c)}[\log p_\theta(x|z,c)]$ and make it converge.", "We use Fig. 2 to illustrate the impact of the one-to-many phenomenon and the many-to-one phenomenon on a trained standard CVAE model.", "Consider the situation in Fig.", "2(a) where the context $c_1$ has two different responses $r_1$ and $r_2$.", "By Precondition 2, we assume two approximate posterior distributions $p(z|c_1,r_1) \sim \mathcal{N}(\mu_1, \sigma_1^2)$, $p(z|c_1,r_2) \sim \mathcal{N}(\mu_2, \sigma_2^2)$, and one approximate prior distribution $p(z|c_1) \sim \mathcal{N}(\mu, \sigma^2)$.", "By Precondition 3, during training, $\mu_1$ and $\mu_2$ will get closer to the latent variables that can be reconstructed into $r_1$ and $r_2$, respectively.", "By Precondition 1, as $r_1$ is different from", "$r_2$, $\mu_1$ should also be different from $\mu_2$.", "Otherwise, the latent variables sampled from $p(z|c_1,r_1)$ and $p(z|c_1,r_2)$ tend to be the same, making these latent variables irrelevant to the responses.", "This leads to the vanishing latent variable problem (Bowman et al., 2016).", "Therefore, $\mu_1$ and $\mu_2$ cannot be the same, and their discrepancy can be considered stable; only in this way can we ensure a one-to-one correspondence between latent variables and responses.", "From Precondition 3, it is easy to see that $p(z|c)$ is only affected by $p(z|c,r)$.", "Hence, we ignore the $\mathbb{E}[\cdot]$ term in Eq.", "(1) and use $\mathrm{KL}(p(z|c,r) \| p(z|c))$ to analyze the trend of $p(z|c)$ during training.", "Considering Fig.", "2(a), the KL term of $(c_1,r_1)$ and $(c_1,r_2)$ equals $\mathrm{KL}(p(z|c_1,r_1) \| p(z|c_1)) + \mathrm{KL}(p(z|c_1,r_2) \| p(z|c_1))$.", "We provide details of the computation in Appendix A.", "The formulation can then be simplified as: $\log\left(\frac{\sigma^2}{\sigma_1 \sigma_2}\right) + \frac{\sigma_1^2 + \sigma_2^2 + (\mu_1 - \mu)^2 + (\mu_2 - \mu)^2}{2\sigma^2} - 1$.", "Hence, we can compute the $\mu$ and $\sigma$ that minimize the above using the Lagrange multiplier method: $\mu = (\mu_1 + \mu_2)/2$ and $\sigma = \sqrt{(\sigma_1^2 + \sigma_2^2)/2 + (\mu_1 - \mu_2)^2/4}$.", "The derivation above provides insights on the problem caused by the one-to-many phenomenon in Fig.", "2(a): after training, the prior conditional probability is $p(z|c_1) \sim \mathcal{N}(\mu, \sigma^2)$, which will be used in inference.",
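The closed-form minimizer above is easy to sanity-check numerically. The following Python sketch compares the analytic μ and σ with a direct numerical minimization of the two-term KL objective (σ is parameterized on a log scale to stay positive); it is a verification aid, not part of the model.

import numpy as np
from scipy.optimize import minimize

def kl_gauss(mu_q, s_q, mu_p, s_p):
    # KL( N(mu_q, s_q^2) || N(mu_p, s_p^2) )
    return np.log(s_p / s_q) + (s_q**2 + (mu_q - mu_p)**2) / (2 * s_p**2) - 0.5

mu1, s1, mu2, s2 = 0.0, 1.0, 3.0, 0.5

def objective(p):  # p = (mu, log sigma)
    mu, s = p[0], np.exp(p[1])
    return kl_gauss(mu1, s1, mu, s) + kl_gauss(mu2, s2, mu, s)

res = minimize(objective, x0=np.array([1.0, 0.0]))
numeric = (res.x[0], np.exp(res.x[1]))
analytic = ((mu1 + mu2) / 2, np.sqrt((s1**2 + s2**2) / 2 + (mu1 - mu2) ** 2 / 4))
print(numeric, analytic)  # both should be approximately (1.5, 1.70)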
"If the difference between $r_1$ and $r_2$ widens, the difference between $\mu_1$ and $\mu_2$ will also widen, and $\mu$ will become further away from $\mu_1$ and $\mu_2$.", "During inference, the latent variables sampled from $p(z|c_1)$ have a high probability of differing from those sampled from $p(z|c_1,r_1)$ and $p(z|c_1,r_2)$.", "These latent variables will introduce irrelevant information and contribute to the generation of irrelevant responses.", "In addition, as one response $r_1$ may correspond to different contexts $c_1$ and $c_2$, as shown in Fig.", "2(b), $p(z|c_1)$ and $p(z|c_2)$ tend to be the same, which contributes to the phenomenon that different contexts can sample similar latent variables.", "In a word, similar contexts could correspond to different latent variables and different contexts could correspond to similar latent variables, which explains why the latent variables cannot accurately reflect the contexts' semantics.", "In this section, we introduce in detail the proposed SepaCVAE model and its three key components: dialogue augmentation, gradient blocking, and relationship enhancement.", "As shown in Fig. 3, SepaCVAE uses $G(\cdot)$ to separate the contexts into different groups.", "For the one-to-many phenomenon, the contexts in different groups will have different prior distributions $p(z|G(\cdot))$, which are easily affected by the different posterior distributions.", "As for the many-to-one phenomenon, SepaCVAE makes the contexts $(c_1, c_2)$ generate latent variables related to the response $r_1$ only when they contain the group information $G_1(\cdot)$.", "The other groups would help the contexts align with other latent variables.", "In SepaCVAE, we first propose dialogue augmentation (see Algorithm 1), which designs a group of orthogonal vectors $(y_1, y_2, \ldots, y_N)$ to separate the contexts into different groups.", "These vectors $(y_1, y_2, \ldots, y_N)$ are called group information.", "Algorithm 1 (Dialogue augmentation). Input: $C^{ori}_{1 \times m}$, the vector representation of the original context sentence after word embedding; $N$, a hyper-parameter; $m$, the dimension of the word embedding. Output: $C^{ext}_{N \times m}$, the vector representations of the context sentences after augmentation; $Y^{ext}_{N \times 1}$, the labels of the augmented contexts. Steps: (1) initialize $C^{ext}_{N \times m}$ and $Y^{ext}_{N \times 1}$; (2) set $d$ to the integer part of $m/N$; (3) for $i = 1$ to $N$: initialize the augment vector $y_i \leftarrow (0, 0, \ldots, 0)_{1 \times m}$, set $y_i((i-1) \cdot d + 1 : i \cdot d) \leftarrow (1, 1, \ldots, 1)_{1 \times d}$, set $C^{ext}(i,:) \leftarrow C^{ori} + y_i$, and set $Y^{ext}(i) \leftarrow i$; (4) return $C^{ext}$ and $Y^{ext}$. In SepaCVAE, we apply Algorithm 1 to extend each dialogue pair $(c_i, r_i)$ to $[(c_i + y_1, r_i), (c_i + y_2, r_i), \ldots, (c_i + y_N, r_i)]$ before feeding them to training.", "If different contexts $c_i, c_j, \ldots$ have the same $y_i$ added, then these contexts belong to the same group.", "In this way, all contexts will keep a certain relationship within the same group.", "In this work, the value N is set to 8.", "Since we use $c + y$ to replace the original $c$, the variational lower bound of SepaCVAE is re-written as: $\mathcal{L}(\theta, \phi; r, c, y) = \mathbb{E}_{q_\phi(z|r,c+y)}[\log p_\theta(r|z,c+y)] - \mathrm{KL}(q_\phi(z|r,c+y) \| p_\theta(z|c+y))$ (2). Gradient blocking. Before the gradient back-propagation, we propose gradient blocking (see Algorithm 2 in Appendix B for implementation details) to filter the gradients.", "Since we extend the dialogue pair $(c, r)$ to $[(c + y_1, r), (c + y_2, r), \ldots, (c + y_N, r)]$, if we optimize the model through all calculated gradients, $y_1, y_2, \ldots, y_N$ would be regarded as noise.",
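Algorithm 1 translates almost line-for-line into NumPy. The sketch below is a direct rendering of the pseudocode (group labels 1..N; the disjoint blocks of ones make the y_i vectors mutually orthogonal); the function name is ours.

import numpy as np

def dialogue_augmentation(c_ori, N=8):
    """c_ori: (m,) context embedding; returns C_ext (N, m) and labels Y_ext (N,)."""
    c_ori = np.asarray(c_ori, dtype=float)
    m = c_ori.shape[0]
    d = m // N                          # width of each block of ones
    C_ext = np.tile(c_ori, (N, 1))
    Y_ext = np.arange(1, N + 1)         # group labels 1..N
    for i in range(N):
        y_i = np.zeros(m)
        y_i[i * d:(i + 1) * d] = 1.0    # ones only in this group's block
        C_ext[i] += y_i
    return C_ext, Y_ext

# each dialogue pair (c, r) is then expanded to [(c + y_1, r), ..., (c + y_N, r)]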
"Therefore, we choose the largest variational lower bound, calculated through the dialogue pair $(c, r)$ with the positive group information $y^+$, which can be represented as (3): $\mathcal{L}(\theta, \phi; r, c, y^+) = \max_{y_i \in Y} \mathcal{L}(\theta, \phi; r, c, y_i)$ (3).", "For each $[(c + y_1, r), (c + y_2, r), \ldots, (c + y_N, r)]$, we only pass $\mathcal{L}(\theta, \phi; r, c, y^+)$ to optimize the model.", "Through dialogue augmentation and gradient blocking, the positive $y^+$ for each dialogue pair $(c, r)$ is captured.", "We then propose relationship enhancement, which is inspired by contrastive learning, to adjust the separated results.", "Those responses under the same $y^+$ are considered to be in the same group, and thus can be seen as positive samples; similarly, those responses under different $y^+$ are seen as negative samples.", "From the perspective of contrastive learning, we design a relationship-enhancement loss named $\mathcal{L}_{re}$ to help our model achieve this representation learning: $\mathcal{L}_{re} = -\log \frac{e^{\sum_{j=1}^{Pos} f(x'_i)^\top f(x'^+_j)}}{e^{\sum_{j=1}^{Pos} f(x'_i)^\top f(x'^+_j)} + e^{\sum_{m=1}^{Neg} f(x'_i)^\top f(x'^-_m)/(N-1)}}$ (4), where $x'$ represents the embedded generated response, $f(\cdot)$ represents our model's encoder, $Pos$ means the number of positive samples, and $Neg$ means the number of negative samples.", "In addition, we introduce an MLP to predict $y^+$ based on the vector representation of the generated response $f(x')$.", "We therefore define $\mathcal{L}_Y$: $\mathcal{L}_Y = -\mathbb{E}_{p(x|z,c+y^+)}[\log p(y^+|x')]$ (5).", "Overall, SepaCVAE is trained by maximizing: $\mathcal{L}_{all} = \mathcal{L}(\theta, \phi; r, c, y^+) - \mathcal{L}_{re} - \mathcal{L}_Y$ (6).", "Following the KL annealing trick (Bowman et al., 2016), the weight of the KL term increases linearly from 0 to 1 over the first 10,000 batches.",
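Under our reading of Eq. (3), a training step evaluates the lower bound under every group and lets gradients flow only through the best copy. The PyTorch sketch below is schematic: elbo() is a placeholder for the CVAE lower bound of Eq. (2), and the function name is an assumption.

import torch

def gradient_blocking_step(elbo, c, r, group_vectors, optimizer):
    """elbo(c_aug, r) returns a scalar lower bound; group_vectors holds y_1..y_N."""
    bounds = torch.stack([elbo(c + y_i, r) for y_i in group_vectors])
    best = torch.argmax(bounds)      # index of the positive group y+
    loss = -bounds[best]             # maximize only the selected lower bound
    optimizer.zero_grad()
    loss.backward()                  # gradients from the other groups are blocked
    optimizer.step()
    return int(best)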
"We use two public dialogue datasets in our experiments, and convert them into single-turn dialogue data.", "Table 1: Statistics for the DailyDialog and OpenSubtitles datasets (vocab / train / valid / test): DailyDialog, 10,064 / 18,406 / 2,008 / 988; OpenSubtitles, 87,840 / 5M / 100K / 50K.", "The first dataset, named DailyDialog (Li et al., 2017b), consists of dialogues that resemble human daily communication.", "The second dataset, named OpenSubtitles (Tiedemann, 2009), includes a large collection of conversations converted from movie transcripts in English.", "In this work, we extract single-turn dialogues from the two dialogue datasets, DailyDialog and OpenSubtitles.", "From a multi-turn dialogue $(u_1, u_2, \ldots, u_T)$, we can extract $T-1$ single-turn dialogues $[(u_1, u_2), (u_2, u_3), \ldots, (u_{T-1}, u_T)]$, where $u$ represents an utterance.", "As discussed above, compared with multi-turn dialogue datasets, single-turn dialogue datasets contain a more serious one-to-many problem.", "Therefore, using single-turn dialogue datasets for experimentation can highlight the problem of the general CVAE model and reflect the effect of our method.", "We utilize 300-dimensional GloVe embeddings (Pennington et al., 2014) to represent these dialogues as vectors.", "Since the tokens in GloVe do not cover all tokens in the DailyDialog and OpenSubtitles datasets, we use the token list of GloVe to filter these datasets.", "Table 1 lists key statistics of the datasets after processing.", "In addition, we counted the one-to-many samples of both datasets and found that 408 contexts in DailyDialog and 90,149 contexts in OpenSubtitles have multiple responses.", "In particular, a context in OpenSubtitles has a maximum of 623 responses, while a context in DailyDialog has a maximum of 29 responses, which shows that the one-to-many phenomenon is more prevalent in the OpenSubtitles dataset.", "We use ppl (Neubig, 2017), response length, and distinct-n (Li et al., 2016b) to evaluate the diversity of generated responses.", "We also use BLEU (Papineni et al., 2002) to evaluate the degree of word overlap between generated responses and the ground truth.", "Moreover, we use Embedding Average (Average) (Liu et al., 2016) to evaluate the semantic relationship between generated responses and ground-truth responses.", "Finally, we introduce coherence (Xu et al., 2018b) to assess the coherence between contexts and generated responses.",
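For reference, distinct-n (Li et al., 2016b) is the ratio of unique n-grams to total n-grams over the generated responses; a minimal sketch:

def distinct_n(responses, n):
    """distinct-n = unique n-grams / total n-grams across all responses."""
    seen, total = set(), 0
    for response in responses:
        tokens = response.split()
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        seen.update(ngrams)
    return len(seen) / total if total else 0.0

# e.g., distinct_n(["i am fine", "i am here"], 2) == 3 / 4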
"We conduct a human evaluation to further evaluate our model and the baseline models.", "Following the work of Li et al. (2017a) and Xu et al. (2018a), we randomly extract 200 samples from the test sets of the two dialogue datasets, respectively.", "Each sample contains one context and the responses generated by the different models.", "Three annotators are invited to rank the generated responses with respect to three aspects: diversity, relevance, and fluency.", "Ties are allowed.", "Diversity indicates how much the generated response provides specific information, rather than generic and repeated information.", "Relevance means how likely the generated response is to be relevant to the context.", "Fluency specifies how likely the generated response is to have been produced by a human.", "Our baseline models include a sequence-to-sequence (Seq2Seq) model, a CVAE model, and cluster-CVAE models.", "They are all implemented based on a 2-layer GRU kgCVAE model (Zhao et al., 2017).", "The cluster-CVAE model denotes kgCVAE utilizing the cluster results as the knowledge.", "We employ three clustering methods, i.e., K-means (K), Spectral (S), and Agglomerative (A).", "For a fair comparison among all models, we utilized 300-dimensional GloVe embeddings as the word embedding matrix.", "The numbers of hidden nodes are all set to 300.", "The parameter max_len is set to 25.", "We set the batch sizes to 64 and 32 for the DailyDialog and OpenSubtitles datasets, respectively.", "Adam is utilized for optimization.", "The parameter init_lr is set to 0.001.", "We train all models for 50 epochs on an RTX 2080Ti GPU card with Tensorflow, and save the generated responses when the ppl reaches its minimum.", "Greedy search is used to generate responses for evaluation.", "Table 2 and Table 3 report the automatic evaluation results of SepaCVAE and the baseline models on the validation and test data of both datasets, respectively.", "For the validation stage, we first select and save the positive group information $y^+$ for each context, and then generate responses under this $y^+$.", "Table 2: Metrics results on validation data of DailyDialog (top) and OpenSubtitles (bottom); columns are ppl, distinct-1, distinct-2, length, BLEU-1, Average, coherence (mean ± std). DailyDialog: Seq2Seq 42.9±.18, 0.033±.01, 0.119±.02, 9.1±.22, 0.386±.00, 0.858±.00, 0.763±.00; CVAE 13.3±.09, 0.074±.00, 0.407±.01, 11.3±.33, 0.405±.01, 0.853±.00, 0.763±.00; CVAE+BOW 13.0±.30, 0.078±.00, 0.415±.01, 11.4±.21, 0.402±.01, 0.855±.00, 0.762±.00; K-CVAE+BOW 13.1±.11, 0.074±.00, 0.406±.01, 11.5±.14, 0.424±.00, 0.868±.00, 0.766±.00; S-CVAE+BOW 12.9±.12, 0.075±.00, 0.414±.01, 11.5±.17, 0.426±.01, 0.867±.00, 0.765±.00; A-CVAE+BOW 13.0±.22, 0.076±.00, 0.418±.02, 11.6±.11, 0.418±.00, 0.863±.00, 0.765±.00; SepaCVAE 9.8±.17, 0.078±.00, 0.504±.01, 11.5±.10, 0.461±.00, 0.862±.00, 0.767±.00. OpenSubtitles: Seq2Seq 45.9±.13, 0.002±.00, 0.010±.00, 11.8±.81, 0.236±.04, 0.465±.08, 0.281±.05; CVAE+BOW 12.2±.17, 0.005±.00, 0.095±.00, 13.1±.26, 0.172±.02, 0.285±.04, 0.195±.03; K-CVAE+BOW 12.1±.20, 0.006±.00, 0.098±.00, 13.1±.10, 0.203±.02, 0.311±.06, 0.200±.05; SepaCVAE 2.0±.06, 0.016±.00, 0.282±.01, 12.6±.11, 0.417±.00, 0.836±.01, 0.707±.01.",
BLEU , Average and coherence ) than Seq2Seq model.", "Moreover, our proposed SepaCVAE outperforms all baseline models in terms of all metrics with statistical significance.", "However, the results obtained on the DailyDialog dataset do not show a clear pattern.", "For DailyDialog's validation data, SepaCVAE achieves good performance on diversity but on relevance the results is unimpressive.", "On the other hand, for test data, SepaCVAE achieves good performance on relevance but generally poor results on diversity.", "We believe that the reason for this phenomenon is related to the level of prevalence of the one-to-many phenomenon in the model diversity relevance fluency Seq2Seq 3.64 3.12 2.16 CVAE+BOW 3.16 3.58 3.42 K -CVAE+BOW 3.27 3.71 3.49 SepaCVAE 2.11 2.95 3.49 Ground-truth 1.88 1.02 1.00 Seq2Seq 3.12 3.11 3.24 CVAE+BOW 2.69 2.98 3.05 K -CVAE+BOW 2.59 3.53 3.72 SepaCVAE 2.57 2.36 2.25 Ground-truth 2.49 1.12 1.02 Table 4: Human evaluation results on test data of DailyDialog (up) and OpenSubtitles (down).", "dataset.", "For instance, only 66,260 contexts have multiple responses among the 90,149 contexts on the OpenSubtitles that was added the cluster results.", "Moreover, one context has a maximum of 296 responses, which amounts to almost half of 623.", "Since the DailyDialog dataset is very small and contains few samples that we focus on, which cause the not specific tendency on its results.", "In a word, the evaluation results illustrate the effectiveness of SepaCVAE in terms of improving the relevance and coherence of responses.", "The results of the human evaluation are shown in Table", "4. To evaluate the consistency of the ranking results assessed by three annotators, we use Pear-son's correlation coefficient.", "This coefficient is 0.22 on diversity , 0.63 on relevance , and 0.70 on fluency , with p < 0 .", "0001 and below 0.001, which indicates high correlation and agreement.", "Similarly with the automatic evaluation results in Table 3, this result shows that our SepaCVAE significantly outperforms baselines in term of relevance and diversity.", "Except the ground-truth responses, our SepaCVAE achieve the best scores of relevance and diversity metrics.", "The fluency result of SepaCVAE on the DailyDialog dataset is slightly worse than that of baselines, which is mainly due to the length of responses generated by SepaCVAE is almost two times than that of baselines (see Table 3).", "When the response lengths are similar on the Opensubtitles dataset, SepaCVAE could also achieve the best fluency score.", "We further analyze the effectiveness of SepaCVAE on regularizing latent variables.", "For the contexts 0 500 1000 1500 2000 2500 batch 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 c o s i n e s c o r e inner-dis of (context+latent virables) Baseline SeparaCVAE 0 500 1000 1500 2000 2500 batch 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 c o s i n e s c o r e inter-dis of (context+latent virables) Baseline SeparaCVAE Figure 4: The average inner-class distance and the average inter-class distance of the jointly vectors .", "in the validation data of DailyDialog dataset, we collect their generated responses and the sampled latent variables of both SepaCVAE and baseline models on the first 2,500 batches.", "Then we calculate the average inner-group distance and the average inter-group distance for each context based on jointly vector representations (concatenating the context vector and the latent variable).", "All distances are calculated by cosine scores, and the higher the distance, the greater the 
similarity.", "For each context, SepaCVAE outputs a positive group information y + , which is used to distinguish whether other contexts are in the same group.", "As for the standard CVAE, we set a threshold of the cosine score to replace the group information.", "In this work, the threshold is set to 0.9.", "Finally, we take the average of all contexts' inner-group distance results and inter-group distance results as inner-dis.", "and inter-dis.", "of each batch, which are shown in Fig.", "4. SepaCVAE achieves significantly higher inner-dis.", "than baseline (standard CVAE) model, while the inter-dis.", "are similar.", "Meanwhile, our method also gets the similar average distance of all jointly vectors with the standard CVAE.", "In addition, past studies conjecture that the posterior z sampled from the recognition network should cluster the responses into meaningful groups that correlate with the knowledge.", "Fig. 5 visualizes the posterior z of responses in the validation data of DailyDialog dataset in 2D space using t-SNE (van der Maaten and Hinton, 2008).", "We found that the learned latent space of our SepaCVAE is more correlated with the group information.", "These results demonstrate that SepaCVAE can effectively regularize latent variables.", "We collected the generated responses of contexts in validation and test set, which are similar to the training set, and showed a sample in Table", "4. The context in training set has two contradictory responses.", "As we analyzed, the standard CVAE and CVAE+BOW generated irrelevant and incoherent response for the similar context in validation and test set.", "In contrast, our SepaCVAE outputted sure, it will be happy and sure.", "i go with my parents are more relevant and coherent than the response generated by baselines, and it also similar with the true response 1 ( oh, that sounds great! ), which means the SepaCVAE is able to handle the one-to-many situation.", "In this paper, we theoretically prove that latent variables hardly reflect the semantics of contexts due to the one-to-many and many-to-one phenomena of dialogues.", "For the standard CVAE model, these issues lead to irrelevant and incoherent responses during the validation or test stage, and also damaging the generalization performance.", "To address these problems, we proposed the SepaCVAE model.", "There are three main technical novelties of SepaCVAE : dialogue augmentation, gradient blocking, and relationship enhancement, which enable the latent variables to reflect semantic relationships between contexts.", "As demonstrated in the experimental results, SepaCVAE could get the best performance for large-scale dataset.", "We would like to thank the anonymous reviewers for their constructive comments.", "This research is supported by Beijing Natural Science Foundation (No. L181010 and 4172054), National Key R&D Program of China (No. 2016YFB0801100).", "Kan Li is the corresponding author." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "Arguably, the visual perception of conversational agents to the physical world is a key way for them to exhibit the human-like intelligence.", "Image-grounded conversation is thus proposed to address this challenge.", "Existing works focus on exploring the multimodal dialog models that ground the conversation on a given image.", "In this paper, we take a step further to study image-grounded conversation under a fully open-ended setting where no paired dialog and image are assumed available.", "Specifi-cally, we present Maria, a neural conversation agent powered by the visual world experiences which are retrieved from a large-scale image index.", "Maria consists of three flexible components, i.e. , text-to-image retriever, visual concept detector and visual-knowledge-grounded response generator.", "The retriever aims to retrieve a correlated image to the dialog from an image index, while the visual concept detector extracts rich visual knowledge from the image.", "Then, the response generator is grounded on the extracted visual knowledge and dialog context to generate the target response.", "Extensive experiments demonstrate Maria outperforms previous state-of-the-art methods on automatic metrics and human evaluation, and can generate informative responses that have some visual commonsense of the physical world.", "Building intelligent conversational agents that can not only converse freely with human but also have the ability to perceive the physical world, has been one of the longest standing goals of natural language processing (NLP) and artificial intelligence (AI).", "Although the recent large-scale conversation models trained on text-only corpora, such as Meena Work performed during the internship at Microsoft.", "(Adiwardana et al., 2020), Blender (Roller et al., 2020) and DialoGPT (Zhang et al., 2020), have shown the compelling performance, they are still lack of the perception ability to our physical world.", "A recent study (Bisk et al., 2020) points out the successful linguistic communication relies on a shared experience of the world that makes language really meaningful.", "The visual perception is a rich signal for modeling a vastness of experiences in the world that cannot be documented by text alone (Harnad, 1990).", "On the other hand, human-human conversations involve their understandings of context, the background knowledge they had, and perhaps most importantly the experiences of the world they shared, e.g. , what they have seen before.", "Figure 1 shows a conversation between humans.", "Human-A recalls his/her past experience of playing volleyball or having BBQ on the beach when human-B talks about vacation on the beach of Hawaii.", "However, the association relationship between beach and volleyball (or BBQ) is hard to capture in traditional knowledge bases, such as knowledge graph.", "Motivated by this, we select a common word pizza and collect the top 17 words that mostly co-occur with pizza on Google Figure 2: The word co-occurrence distribution with pizza on Google knowledge graph and MS-COCO images.", "Knowledge Graph 1 and MS-COCO images 2 (Lin et al., 2014).", "As shown in Figure 2, the words co-occurring with pizza on knowledge graph tend to be the abstract concepts, while the co-occurrence relationship of object tags on images reflects some commonsense of our physical world, e.g. 
, pizza is usually on the dining table, people usually use knife when eating pizza.", "Interestingly, we found the pizza also co-occurs with cell phone and even plotted plant.", "This indicates when people eat pizza, they sometimes would put their cell phones aside on the table, or there might exist some plotted plants in the restaurant.", "Thus, empowering conversational agents to have the visual perception ability about the physical world is a key way for them to exhibit the human-like intelligence.", "The existing works (Mostafazadeh et al., 2017; Huber et al., 2018; Shuster et al., 2020) focus on exploring the multimodal dialog models that ground the conversation on a given image.", "Recently, Yang et al. (2020) propose to learn the dialog generation model with both image-grounded dialogs and textual dialogs by resorting to text-to-image synthesis techniques (Xu et al., 2018; Qiao et al., 2019) to restore a latent image for the text-only dialog.", "Even so, these works are still constrained by the assumption that the dialog is conducted center around a given (or synthesized) image.", "In this paper, we take a step further to extend the assumption of image-grounded conversation to a fully open-ended setting where no image-dialog pairs are assumed available.", "Specifically, we present Maria, a neural conversational agent powered by visual world experiences which are retrieved from a pre-built image index, e.g. , the 1 https://developers.google.com/ knowledge-graph/ 2 We calculate the co-occurrence distribution of object tags from the images in MS-COCO dataset.", "More examples could be found in Appendices.", "Open Images Dataset (Kuznetsova et al., 2018).", "Maria consists of three components: text-to-image retriever, visual concept detector, and visual-knowledge-grounded response generator.", "The retriever is responsible for retrieving a piece of visual world experiences, e.g. , a correlated image to the dialog from an image index.", "The visual concept detector utilizes the object detector from UpDown (Anderson et al., 2018) to extract the regions features ( i.e. , bboxes) and the corresponding visual concepts ( i.e. 
, tags) from the retrieval images.", "Hence, we can construct ( bboxes , tags , context , response ) 4-tuple as the training data.", "Finally, these constructed 4-tuples are used to train the visual-knowledge-grounded response generator, which is built on the top of a multi-layer Transformer architecture (Vaswani et al., 2017).", "To effectively inject the visual knowledge into the response generator, we carry out the Masked Concept Prediction and Visual Knowledge Bias besides the response generation objective.", "The former aims to align the semantic representations between textual words and image regions, while the latter tries to provide more visual knowledge to facilitate the dialog generation.", "The experimental results on Reddit Conversation Corpus (Dziri et al., 2019a) demonstrate that Maria significantly outperforms previous state-of-the-art methods, and can generate informative responses with visual commonsense of our physical world.", "Overall, the contributions of this paper are summarized as follows: We explore the task of image-grounded dialog generation under a fully open-ended setting where no specific image-dialog pairs are assumed available, i.e., zero-resource image-grounded conversation.", "To the best of our knowledge, this is the first work to connect dialog corpus with the unpaired image data; We present Maria, a neural conversational agent consisting of three flexible components, which can effectively capture the visual commonsense from images and accordingly generate informative and vivid responses; Extensive experiments on the widely used Reddit Conversation Corpus are conducted to justify the effectiveness of Maria.", "Vision and Language In the research of vision and language, various tasks have been extensively studied, such as image captioning (Vinyals et al., 2015; Lu et al., 2017; Hu et al., 2020), visual question answering (Antol et al., 2015; Anderson et al., 2018), visual dialog (Das et al., 2017a,b).", "Popular benchmark datasets in this area include MS-COCO (Lin et al., 2014), VisDial (Das et al., 2017a) and Visual Genome (Krishna et al., 2017).", "Visual dialog is a task to answer the questions about the factual content of the image in a multi-turn manner.", "Differently, image-grounded conversation studies how to reply to a dialog context and a given image with proper responses in an open-ended way.", "Dialog Generation Encouraged by the success of the neural sequence-to-sequence architecture (Sutskever et al., 2014) on machine translation, end-to-end neural approaches on open-domain dialog generation (Vinyals and Le, 2015; Shang et al., 2015; Serban et al., 2016; Sordoni et al., 2015; Xing et al., 2017; Wu et al., 2018; Zhang et al., 2020; Xu et al., 2019; Adiwardana et al., 2020) have been widely studied in literature.", "Recently, there is an emerging trend towards grounding the dialog generation models on the external knowledge, such as knowledge graphs (Zhou et al., 2018), documents (Ghazvininejad et al., 2018; Dinan et al., 2019; Kim et al., 2020; Zhao et al., 2020a,b; Li et al., 2020) and images (Mostafazadeh et al., 2017; Shuster et al., 2020; Yang et al., 2020).", "Different from the previous work on knowledge-grounded conversation that connects dialogs with unpaired document knowledge (Li et al., 2020), our work lies in the research of image-grounded conversation where a response is generated with a dialog context and a given image.", "Existing works (Mostafazadeh et al., 2017; Shuster et al., 2020; Yang et al., 2020) in this direction 
assume there is a given (or synthesized) image for the dialog and explore the multimodal dialog models.", "In contrast to these works, we study the image-grounded conversation under Multi-turn Reddit Conversation Image Index (Open Images) Visual Concept Detector Visual-Commonsense-Aware Response Generation Model Training: (,) ; Inference: Text-to-Image Retrieve Module Top-k Images (k=1 here): ( , ,, ) Training Quaternion: ,,, ; Inference Triplet: (,,) Figure 3: The flowchart of our framework.", "a fully open-ended assumption where no paired dialog and image are assumed available, i.e. , zero-resource image-grounded conversation.", "Suppose we have a dialog set D = { ( C i , R i ) } ni =1 , where i { 1 , . . . , n } , C i refers to a dialog context and R i is a response to C i .", "We assume there is a set of images V = { V j } mj =1 , where j { 1 , . . . , m } , V j denotes an image.", "C D , we assume that there is an image V that triggered by the given dialog context C and response R .", "Our goal is to estimate a generation model P ( R | V, C ) from D and V .", "Thus, given a new dialog context C associated with an image V , the model can generate a response R according to P ( R | V, C ) .", "To learn such a generation model P ( R | V, C ) , we need to tackle several challenges: (1) How to bridge the gap between unpaired dialog corpus and image data; (2) After obtaining the correlated images, how to extract the detailed visual features and concepts; (3) How to effectively inject the visual knowledge into response generator and enable it to generate responses that are visual-knowledge-grounded.", "Figure 3 illustrates the framework of our approach.", "We first build a large-scale image dataset and leverage a cross-modal matching model to retrieve a correlated image using the content of the dialog.", "Then an off-the-shelf object detector is applied to extracting the object features and visual concepts from the retrieval image.", "Finally, the response generator is trained to generate the target response conditioned on the context, extracted object features, and visual concepts.", "In this section, we develop a retrieval model that assigns each dialog with a correlated image V .", "Specifically, we train a text-to-image matching model from image captioning dataset and utilize it to construct the ( C, R, V ) triple data.", "Modeling To improve the efficiency of cross-modal retrieval model on large-scale dialog corpus and image dataset, we adopt a two-tower architecture (Lu et al., 2019) to accelerate the retrieval process where the image features can be pre-extracted offline.", "The model takes a sentence T and an image V as input, and predicts the relevance score s ( T, V ) between the sentence and the image.", "We use a text encoder and an image encoder to produce the representations of T and V , respectively.", "The text encoder is a pre-trained BERT-base model (Devlin et al., 2019) and we use the hidden state of special token [CLS] as the embedding of T : e t = BERT ( T ) (1) Then a Multi-Layer Perceptron (MLP) projects the sentence embedding into the cross-modal space.", "We follow Tan and Bansal (2020) to perform L2-normalization on the last output features, by which we can simplify the nearest neighbor search problem in the euclidean space to the Maximum Inner Product problem (Mussmann and Ermon, 2016): f t ( T ) = H t ( e t ) (cid:107) H t ( e t ) (cid:107) (2) Similarly, the image encoder is composed of a pre-trained ResNeXt backbone (Xie et al., 2017) and a MLP with L2 
normalization: f v ( V ) = H v ( e v ) (cid:107) H v ( e v ) (cid:107) , e v = ResNeXt ( V ) (3) Thus, we define the relevance score s ( T, V ) as an inner product of the language feature representation f t ( T ) and image feature representation f v ( V ) : s ( T, V ) = f t ( T ) (cid:62) f v ( V ) (4) Training We train the cross-modal matching model on MS-COCO image captioning dataset (Lin et al., 2014), where each image is paired with 5 sentences describing its visual content.", "The model is optimized by minimizing the hinge loss so that the relevance score s ( T, V ) of the positive image-sentence pair can be larger than the negative pair s ( T, V ) by at least a margin M : L hinge (cid:0) T, V, V (cid:1) = l (cid:88) i =1 max { 0 , M s ( T, V ) + s (cid:0) T, V (cid:1)(cid:9) (5) Inference Given the trained retrieval model, we can now assign each dialog with a correlated image V .", "To ensure the diversity and richness of the retrieval results, we fetch 500,000 images from the large-scale Open Images dataset (Kuznetsova et al., 2018) as our image set V .", "The image V i V with the maximum relevance score is paired with the given dialog ( C i , R i ) D .", "Note that for the dialog in the training set, we use both the context C and response R are concatenated as the query for retrieval ( i.e. , T = ( C, R ) ), which is beneficial to retrieving an image with the related visual knowledge.", "On the other hand, for the validation/test set of the dialog corpus, the query is only the context ( i.e. , T = C ) so as to keep consistent with the real-world setting where the response is unavailable and need to be generated at inference.", "Given the correlated image V i to the dialog as the visual clue, we can now extract the visual knowledge from it.", "One naive approach is to utilize the CNN-based models to extract the latent image features.", "However, this approach does not consider the fine-grained representation modeling for images, which is crucial for the dialog model to understand the local visual features in images.", "To address this issue, we adopt an object detection model (Ander-son et al., 2018) pre-trained on Visual Genome (Krishna et al., 2017) to extract a set of salient object features O = { o k } Kk =1 , where each object feature o k is a 2048-dimensional vector.", "These features represent the images at the level of objects and other salient regions, which has proven to be vital in many high-level image understanding tasks.", "Besides, the same detector is used to extract a set of visual concepts Q = { q m } Km =1 , where each concept q m is the high-precision textual label of the visual region, e.g. 
, sunset, melon, etc.", "In this manner, we simultaneously obtain the fine-grained image representations and the necessary visual concepts for the subsequent dialog generation.", "In this section, we propose a unified architecture to effectively inject a set of region features and corresponding visual concepts into the response generation model.", "In following parts, we describe the model design and training objectives in detail.", "Figure 4 shows the architecture of our response generation model, which is a multi-layer transformer network for both bidirectional vision/context ( O, Q, C ) encoding, and unidirectional response R decoding, via the flexible self-attention masks inspired by (Dong et al., 2019).", "For each token, the final input representation to the multi-layer transformer network is the element-wise summation of four kinds of embeddings, including token-level, turn-level, position-level, and segment-level.", "Then, we concatenate all the input representations to one sequence for model training.", "Token-Level The token-level embeddings are the concatenation of ( O w , Q w , C w , R w ) , which denote the token embedding sequence of visual objects, visual concepts, contexts and response respectively.", "Note that O w is the object embedding transformed by a linear layer into the same dimension as word embedding.", "Turn-Level Since the dialog is multi-turn, we encode this turn order with a relative turn embedding (Bao et al., 2020).", "Specifically, the turn number is counted from the last utterance of the dialogue to the beginning.", "Note that as for the tokens corresponding to O and Q , we simply set them the same as the first utterance of C .", "Position-Level Positional embedding encodes the signal of the token order in the total input sequence, which is the same as positional encoding of the original transformer (Vaswani et al., 2017).", "Segment-Level Segment embedding is employed to differentiate which segment the token is in, i.e. , O, Q, C or R .", "Due to the inherent gap between visual modality and textual modality, directly optimizing the model by response generation objective may result in the insufficient utilization of the visual knowledge.", "To align the semantic representations of two modalities, we devise Masked Concept Prediction (MCP) objective.", "15 % of the visual concepts are randomly replaced with [MASK] tokens in each training instance, which need to be predicted by the model.", "However, one problem still remains, i.e. , the visual concepts have no specific order when extracting from images.", "In other words, we need to model MCP as a matching problem of set, which does not need to consider the order of predicted concepts when there are more than two concepts masked out simultaneously.", "To tackle this, inspired by Hu et al. 
(2020), we adopt the Hungarian Matching Loss (Stewart et al., 2016; Carion et al., 2020) to estimate an optimal mapping so that the prediction for each masked position is assigned one of the target concepts.", "Here we denote the set of all input as X = ( O, Q, C, R ) , the set of the bidirectional self-attention part of X as B = ( O, Q, C ) , the set of masked concepts as Q , the set of unmasked tokens as B \\ Q , and the prediction probabilities of the corresponding representations in the final layer of transformer as H = { h i } mi =1 where h i is the probability distribution of the i -th masked position.", "Hence, the MCP loss can be defined as: LMCP ( Q, H, ) = (cid:88) q ( i ) Q log h i (cid:16) q ( i ) | B \\ Q (cid:17) (6) where ( i ) is the index of the target concept assigned to the i -th prediction.", "When predicting a masked concept, the model will have to resort to visual region features, dialog contexts and other unmasked visual concepts.", "This would help the model to align the cross-modal representations between text and visual regions.", "Encouraged by the success of UniLM (Dong et al., 2019) in Seq2Seq tasks, we adopt the Masked Response Prediction (MRP) objective to model the response generation.", "During training, 70 % of the tokens in R are randomly masked with the special token [MASK] .", "The model is optimized to recover the masked tokens.", "The masked response tokens and other unmasked tokens in the whole input sequence can be denoted as R and X \\ R , respectively.", "Suppose that p i is the conditional probability distribution of the i -th token in R , the MRP loss is the Negative Log-Likelihood (NLL) of the masked Position-Level Token-Level Network Turn-Level Segment-Level Multi-Layer Transformer vis Response ( ) Dialog Context ( ) Visual Concepts ( ) Region Features ( ) vis vis vis vis vis tag tag tag tag usr usr usr usr usr usr usr usr usr usr sys sys sys sys sys sys sys sys sys sys 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 -2 -2 -2 -2 -2 -2 -2 -2 -2 -2 -2 -2 -2 -2 -2 -2 -1 -1 -1 -1 0 0 0 0 0 0 0 0 0 0 did you eat ?", "Note that the self-attention mask in R is left-to-right, but the rest are bidirectional.", "In other words, the tokens in O, Q and C can attend to each other from both directions, while the tokens in R can attend all tokens in O, Q, C and the leftward tokens in R including itself.", "MRP implicitly encourages the model to generate responses by learning the relationship among all input tokens.", "For decoding, we first encode the image regions, visual concepts, dialog contexts, and a special token [BOS] as input.", "Then the model starts the generation by feeding a [MASK] token and samples a word from the predicted distribution over vocabulary.", "Then, the [MASK] token is replaced by the generated token and a new [MASK] is appended to the input sequence for next word prediction.", "The generation process terminates when the model predicts [EOS] token or reaches the pre-defined maximum length.", "Visual Knowledge Bias Normally, the top projection layer of generation model produces a probability distribution over the vocabulary: p = softmax ( W e r + b ) , (8) where the e r R d , W R | V | d and b R | V | are the last output of the transformer network, weight and bias parameters of the decoding head, respectively.", "| V | denotes the vocabulary size.", "So far, the visual world knowledge is introduced into the response generation model by the shared-parameter self-attention layers.", "To further inject 
the visual knowledge into the generation model, we design a simple but effective strategy, namely Visual Knowledge Bias (VKB).", "Concretely, an additional visual vocabulary bias b q is first calculated as follow: b q = F q ( e qavg ) (9) where F q : R d R | V | is a projection layer.", "e qavg denotes the average pooling on all hidden representations of visual concepts, i.e. , e qavg = AvgP ooling ( E q ) where E q = ( e q 1 , ..., e qK ) .", "Then, we mask non-visual-concept tokens in the vocabulary and the masked vocabulary bias b q R | V | is added to the top layer of generation model to get the final distribution over vocabulary: p = softmax ( W e r + b + b q ) (10) We leverage this final vocabulary distribution to calculate the MRP loss in Eq.", "7 to optimize the model.", "This visual knowledge bias would encourage the model to generate more visual knowledge related tokens in the response.", "To sum up, the final objective of our response generation model is to minimize the integrated loss: L = LMRP + LMCP (11) 5 Experimental Setup 5.1 Datasets To evaluate the performance of Maria, we conduct comprehensive experiments on the Reddit dataset released by Yang et al. (2020), which is a large-scale and high-quality multi-turn conversations extracted from Reddit Conversation Corpus (Dziri et al., 2019b).", "Each dialog has 3 to 5 utterances, and the training/validation/test set has 1M/20K/20K dialogs respectively.", "We train and validate the retrieval model using the Karpathy's split 3 of the MS-COCO image captioning data, where the images are split into 3 https://cs.stanford.edu/people/ karpathy/deepimagesent 113.2K/5K/5K samples as training/validation/test set, respectively.", "After the retrieval model is trained, we fetch 500K images from the Open Images dataset as the image index, and then retrieve images from it by dialog context and response to construct the training data for response generator.", "Both automatic metrics and human evaluation are employed to assess the performance of Maria and baselines.", "Automatic metrics include: (1) Fluency : perplexity (PPL) measures the confidence of the generated responses; (2) Relevance : BLEU-1 (Pa-pineni et al., 2002), Rouge-L (Lin, 2004), and we follow Serban et al. 
(2017) to utilize Embedding Average cosine similarity, Vector Extrema cosine similarity, and Embedding Greedy Matching score.", "All this metrics are calculated by running the public NLG evaluation script 4 ; (3) Diversity : Distinct-1 (Dist-1) and Distinct-2 (Dist-2) (Li et al., 2016) are defined as the number of distinct uni-grams or bi-grams divided by the total amount of words.", "In human evaluation, we randomly select 100 dialogue contexts and the corresponding generated responses for Maria and compared baselines.", "Three human annotators are asked to score the response quality on a scale of { 0, 1, 2 } from three aspects, including Fluency , Relevance and Richness .", "The higher score means the better.", "Since each response receives 3 scores on each aspect, we report the average scores over annotators and responses.", "The inter-annotator agreement is measured by Fleiss' Kappa(Fleiss and Cohen, 1973).", "For the retrieval model, ResNeXt-101-32x8d feature is used as the visual embedding, while the concatenation of the last 4 layers of BERT's outputs is used as the textual embedding.", "Both embeddings are then respectively fed into an MLP composed of three layers of size (1024, 1024, 512).", "When training the retrieval model, we set the margin M = 0 .", "5 for the hinge loss, and only tune the parameters of both MLPs while freezing the parameters of ResNeXt and BERT.", "The total training epoch is 20.", "At inference, the FAISS (Johnson et al., 2019) library is utilized to accelerate the inner product search by batch processing.", "We use the off-the-shelf object detector from UpDown (An-derson et al., 2018) to extract top-k (k=36) image 4 https://github.com/Maluuba/nlg-eval region features and the corresponding visual concepts.", "The detector is a Faster R-CNN (Ren et al., 2015) model trained on the Visual Genome dataset (Krishna et al., 2017).", "For the response generation model, we set the number of transformer layers L = 12 and the hidden embedding dimension D = 768 .", "Besides, the network parameters are initialized by UniLM.", "The maximum sequence lengths of context and response are set to 110 and 40, respectively.", "The sequence lengths of region features and concept tokens are both set to 36.", "The batch size is 64.", "We use the Adam Optimizer (Kingma and Ba, 2015) with a learning rate 3e-5 to train the response generation model.", "The training is conducted on 4 Nvidia Tesla P40 24G GPU cards for 20 epochs.", "We compare the following baselines in the experiments: (1) Seq2Seq : A standard Sequence to Seqence model with attention mechanism (Bahdanau et al., 2015).", "(2) HRED : A Hierarchical Recurrent Encoder-Decoder neural network (Serban et al., 2016).", "(3) VHRED : A variation of HRED that introduces latent variables into the generation (Ser-ban et al., 2017).", "(4) ReCoSa : A hierarchical transformer-based model (Zhang et al., 2019) that achieves the state-of-the-art performance on benchmarks of dialog generation.", "(5) ImgVAE : A dialog generation model (Yang et al., 2020) that is trained on both textual dialogs and image-grounded dialogs by recovering a latent image behind the textual dialog within a conditional variational auto-encoding framework.", "(6) DialoGPT : An open-domain dialog model (Zhang et al., 2020) that fine-tunes GPT-2 (Radford et al., 2019) on massive Reddit data.", "Since DialoGPT is a dialog generation model trained on the text-only corpus, we introduce it as an auxiliary baseline.", "For a fair comparison, we choose the same model 
size ( L =12, D =768) of DialoGPT (117M) as our model.", "We summarize the experimental results of automatic evaluations in Table 1.", "Maria achieves the substantial performance improvements over baselines on all metrics except for the comparison to DialoGPT.", "Especially, Maria significantly surpasses ImgVAE on Dist-1/2, which indicates introducing richer visual knowledge, i.e. , image region features Model PPL BLEU-1 Rouge-L Average Extrema Greedy Dist-1 Dist-2 Seq2Seq (Bahdanau et al., 2015) 77.27 12.21 10.81 78.38 40.06 62.64 0.53 1.96 HRED (Serban et al., 2016) 84.02 11.68 11.29 75.54 37.49 60.41 0.89 3.21 VHRED (Serban et al., 2017) 78.01 12.22 11.82 75.57 39.24 62.07 0.87 3.49 ReCoSa (Zhang et al., 2019) 71.75 12.75 11.75 79.84 42.29 63.02 0.66 3.83 ImgVAE (Yang et al., 2020) 72.06 12.58 12.05 79.95 42.38 63.55 1.52 6.34 DialoGPT (Zhang et al., 2020) 36.03 5.87 5.20 77.80 35.40 58.39 10.41 49.86 Maria 54.38 14.21 13.02 82.54 44.14 65.98 8.44 33.35 Maria ( w/o MCP) 66.71 13.91 11.60 81.59 41.06 64.10 8.36 31.80 Maria ( w/o VKB) 65.51 12.76 11.76 82.49 40.22 64.49 7.15 29.44 Maria ( w/o VKB & MCP) 62.64 11.50 10.45 77.52 41.27 61.00 6.92 28.53 Maria ( w/o images) 64.75 10.70 9.15 78.89 39.88 62.39 6.88 28.01 Maria ( w/o concepts) 69.24 11.43 10.61 82.96 41.02 65.07 4.56 16.44 Maria ( w/o images & concepts) 69.50 10.75 8.34 80.62 41.15 64.25 3.69 10.11 Table 1: Evaluation results of generated responses on the test set.", "and the corresponding visual concepts, is beneficial to generating more diverse and informative responses.", "This also reflects in human evaluation of Table 2 that the richness score of Maria is higher than that of ImgVAE.", "Besides, in terms of relevance metrics including BLEU-1, Rouge-L, Average, Extrema and Greedy, Maria outperforms all baselines and even performs better than DialoGPT.", "This indicates introducing the extra visual knowledge related to dialog context can further force the model to produce more relevant responses.", "On the other hand, the discrepancy of data distributions between the training data ( i.e. , Image-Chat (Shuster et al., 2020) dataset) and test data ( i.e. 
, Reddit conversation dataset) of the text-to-image synthesis model in ImgVAE limits its performance in practice.", "Besides, constrained by the capability of the text-to-image synthesis model, the richness and diversity of the synthesized images are undesirable, while Maria can retrieve a variety of images from the large-scale image index.", "That may be the reason why ImgVAE consistently underperforms our Maria on relevance including automatic evaluation and human judgement, which also shows the superiority of the retrieval method for the zero-resource image-grounded conversation.", "Another observation is that Maria slightly underperforms DialoGPT on PPL and Dist-1/2.", "Since DialoGPT is a large-scale pre-training based dialog generation model and introduces the extra mutual information maximization objective to improve the informativeness of generated responses, which is consistent in human evaluation with respect to flu-ency and richness.", "We conduct extensive ablation experiments over different model variants and input components to better understand their relative importance to the dialog generation task.", "As shown in Table 1, training the simplified versions of Maria or removing any visual signals from input components leads to worse performance in terms of relevance and diversity.", "In particular, the results on the ablation study validate that: (1) The performance improvement of dialog generation benefits from the MCP's effectiveness in aligning the representations of text and vision; (2) When training Maria, introducing VKB can further improve the quality and diversity of generated responses; (3) Rich visual knowledge, i.e. , image region features and visual concepts, play a significant role in improving the performance of dialog generation.", "Especially, removing the visual concepts leads to a dramatic performance drop on diversity.", "The phenomenon is due to the lack of necessary visual concepts, Maria can not well understand the visual world knowledge when only learning from the visual features.", "To further investigate the quality of responses generated by Maria, we put an example of generated responses in Figure 5.", "As we can see from Figure 5, when the context talks about the supermarket Aldi, Maria can retrieve a pizza related image and generate the informative response grounded on Dialog Context: Maria A: No Aldi ?", "it, i.e. 
, the pizza at Aldi is the best in the world.", "This implies the commonsense that the supermarket usually has the pizza to sell.", "It is also observed that Maria pays more attention to the relevant image regions when generating the word pizza, which demonstrates that Maria could capture useful visual knowledge from the image and subsequently leverage it to generate commonsense-aware responses.", "More cases are demonstrated in Appendices.", "In this paper, we present Maria, a neural conversational agent powered by the visual world experiences.", "It is able to retrieve the visual world experiences with users and generate human-like responses with some visual commonsense.", "Extensive experiments demonstrate Maria achieves substantial improvements over the state-of-the-art methods in automatic and human evaluation.", "The future works could include: (1) Design a more precise and comprehensive image retriever to include multiple retrieval images; (2) Combining the retrieve module and dialog generation into an end-to-end model, instead of learning them individually; (3) Explore more efficient neural architectures to inject the visual knowledge into response generation." ]
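Eqs. (1)-(5) of the retrieval model above reduce to L2-normalized tower outputs, an inner-product relevance score, and a margin-based hinge loss. A minimal sketch with NumPy, assuming the MLP outputs H_t(e_t) and H_v(e_v) are already computed; the margin M = 0.5 follows the stated implementation details, and everything else is illustrative:

import numpy as np

def l2_normalize(x, eps=1e-8):
    # Unit-normalize tower outputs so the inner product acts as a
    # cosine score (Eqs. 2-3).
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def relevance(f_t, f_v):
    # s(T, V) = f_t(T)^T f_v(V) on normalized features (Eq. 4).
    return float(np.dot(f_t, f_v))

def hinge_loss(f_t, f_v_pos, f_v_negs, margin=0.5):
    # Eq. 5: the positive pair must beat each negative by at least the margin.
    s_pos = relevance(f_t, f_v_pos)
    return sum(max(0.0, margin - s_pos + relevance(f_t, f_v_neg))
               for f_v_neg in f_v_negs)

Because the features are unit-normalized, maximum inner product search over the image index can be served directly by a library such as FAISS at inference time, as the paper does.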
[ "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen during training.", "Recently, several datasets, including Spider, were proposed to support development of XSP systems.", "We propose a challenging evaluation setup for cross-database semantic parsing, focusing on variation across database schemas and in-domain language use.", "We re-purpose eight semantic parsing datasets that have been well-studied in the setting where in-domain training data is available, and instead use them as additional evaluation data for XSP systems instead.", "We build a system that performs well on Spider, and find that it struggles to generalize to our re-purposed set.", "Our setup uncovers several generalization challenges for cross-database semantic parsing, demonstrating the need to use and develop diverse training and evaluation datasets.", "Semantic parsing is the task of mapping natural language utterances to formal meaning representations, and has been studied in tasks including instruction following, evaluating sentence meaning, and building interfaces to knowledge bases.", "In this paper, we focus on the task of mapping from natural language utterances to SQL queries executable in a database.", "Most prior work in mapping from natural language to SQL queries train and test the system on a single database.", "We refer to this setup as single-database semantic parsing ( SSP ).", "Well-studied datasets used in the SSP setting include GeoQuery (Zelle and Mooney, 1996) and ATIS (Hemphill et al., 1990; Dahl et al., 1994).", "as it is often cost-prohibitive to collect a suffi-cient number of training examples for all possible databases.", "Several datasets, including Spider (Yu et al., 2018), were proposed to evaluate this dimension of generalization.", "These datasets include examples grounded in multiple databases, distinguishing between training databases and evaluation databases.", "We refer to this setup as cross-database semantic parsing ( XSP ).", "While these datasets have been valuable in understanding and addressing some of the additional generalization challenges introduced by XSP, current evaluation of XSP systems has been limited to datasets designed for XSP.", "This limits the types of generalization challenges studied to those introduced by these datasets.", "Existing XSP evaluation data such as Spider simplifies some of these challenges, for example by including utterances that closely match their paired SQL query, as shown in last row of Figure 1.", "This setup misses an important opportunity for studying cross-database semantic parsing: evaluating on challenging datasets designed for single-database semantic parsing, like GeoQuery and ATIS.", "While the in-domain challenges of these datasets are relatively well-understood, generalization challenges introduced by studying these datasets in an XSP context have not been addressed.", "In this paper, we propose a more holistic analysis and evaluation setup for XSP.", "We propose to evaluate a semantic parsing system not only on evaluation data designed for XSP, but also on datasets that have only been studied in the SSP setting.", "Our repurposed evaluation set includes eight well-studied datasets like ATIS, but in a completely new setting.", "Instead of training on the original training data for these datasets, we train a single model on training data designed for the XSP setting, and evaluate the trained model on each 
evaluation dataset.", "These datasets were collected at different times, by different researchers, and with different motivations.", "This results in a wide variety of language usage, database structures, and SQL styles across datasets, further stressing a system's ability to adapt to unseen datasets.", "These variations pose many new generalization challenges for cross-database semantic parsing models, where in-domain examples are not available at training time.", "Our proposed XSP evaluation setup addresses several evaluation challenges posed by these dataset variations.", "With our proposed setup, we are able to analyze the potential limitations of current cross-database semantic parsing models.", "We uncover and attempt to address several new forms of generalization in cross-dataset semantic parsing.", "We develop a neural semantic parsing model is competitive all public systems on the Spider development set, and evaluate its ability to generalize to the evaluation datasets.", "First, we observe that the datasets originally designed for SSP become much more difficult under the XSP setting, with a notable drop in performance from both the Spider development results.", "Second, we experiment with several techniques that improve generalization to the eight evaluation datasets.", "Finally, we provide in-depth qualitative analysis on our results.", "Our results and analysis demonstrate a need for diverse training and evaluation datasets for XSP.", "Our code and experimental setup is available at https://github.com/google-research/ language/tree/master/language/xsp .", "We focus on the task of semantic parsing for databases.", "A natural language utterance u is a sequence (cid:10) u 1 , . . . , u | u | (cid:11) , where each u i is a natural language token.", "The task is to map u to an executable formal query y = (cid:10) y 1 , . . . 
, y | y | (cid:11) executable in a database D , where each y i is a SQL query token.", "Single-database Semantic Parsing (SSP) In SSP, all data is grounded in the same knowledge database.", "The training data consists of N pairs of utterances and SQL queries { x ( l ) , y ( l ) } Nl =1 grounded in database D .", "The evaluation data contains M unseen pairs of utterances and SQL queries { x ( l ) , y ( l ) } Ml =1 , also grounded in D .", "SSP has been studied using a number of datasets including ATIS (Hemphill et al., 1990; Dahl et al., 1994) and GeoQuery (Zelle and Mooney, 1996).", "Many prior approaches in SSP assume access to database contents at inference time.", "At test time, this allows the system to resolve the columns containing novel entities by performing a database look-up; for example, by labeling entity mentions in the input utterance with the columns in which they appear (Dong and Lapata, 2016; Iyer et al., 2017; Suhr et al., 2018).", "Cross-database Semantic Parsing (XSP) In the XSP setting, examples from the evaluation databases are not seen at training time (Yu et al., 2018, 2019b,a).", "Previously, the crossdomain semantic parsing task focused mostly on databases consisting of a single table (Pasupat and Liang, 2015; Iyyer et al., 2017; Zhong et al., 2017).", "However, the crossdatabase setting requires generalizing to unseen domains and novel database schemas.", "In XSP, the N training examples are { x ( l ) , y ( l ) , D ( l ) i } Nl =1 and the M evaluation examples are { x ( l ) , y ( l ) , D ( l ) j } Nl =1 , where each D is a database.", "Importantly, the set of training and evaluation datasets do not overlap.", "In addition to the generalization challenges posed by SSP, this setting adds several challenges, including generalizing to new schema structures, domain-specific phrases, and database conventions.", "Unlike SSP, prior work in XSP does not assume that the system has access to database contents at model inference time (Yu et al., 2018).", "Preprocessing steps that perform database look-ups are unavailable at inference time.", "Instead, the model only has access to the database schema for each evaluation example.", "This setting requires additional generalization, where the model must be able to map unfamiliar entities to columns in domains unseen during training.", "Other Related Work Semantic parsing has been widely studied for tasks including sentence understanding (Zettlemoyer and Collins, 2005, 2007; Ba-narescu et al., 2013), instruction following (Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Long et al., 2016; Givoli and Reichart, 2019), and knowledge base querying (Popescu et al., 2004; Poon, 2013; Iyer et al., 2017).", "Related to the task of semantic parsing is code generation (Oda et al., 2015; Ling et al., 2016; Yin et al., 2018; Lin et al., 2018; Iyer et al., 2018).", "While our experiments are performed on English-langauge data, a limited amount of existing work has explored semantic parsing in languages besides English (Wong and Mooney, 2006; Min et al., 2019).", "Annotating SQL queries for new domains can be expensive.", "Several prior works present approaches to reduce this cost, for example by having crowd-workers paraphrase generated examples (Wang et al., 2015; Zhong et al., 2017), give feedback (Iyer et al., 2017), interact with a system (Artzi and Zettlemoyer, 2011; Thomason et al., 2015; Lab-utov et al., 2018), or a combination (Herzig and Berant, 2019).", "Research in SSP and code generation has led to innovations including 
constrained decoding and grammar-based decoding (Xiao et al., 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Lin et al., 2019).", "SSP has also been studied alongside additional generalization challenges, including to new compositional structures (Finegan-Dollak et al., 2018) and with additional context (Miller et al., 1996; Zettlemoyer and Collins, 2009; Suhr et al., 2018).", "Recent works evaluating in the XSP setting have explored methods of jointly embedding an utterance and the database schema (Shaw et al., 2019; Bogin et al., 2019a), interactive learning (Yao et al., 2019), and using intermediate output representations and new inference methods (Herzig and Berant, 2018; Guo et al., 2019; Zhang et al., 2019; Bogin et al., 2019b; Lin et al., 2019).", "We incorporate several such methods proposed into our proposed system.", "We propose to study the task of XSP by training on datasets designed for XSP, and evaluating on datasets originally designed for SSP.", "In our full model, we use both the Spider 1 (Yu et al., 2018) and WikiSQL (Zhong et al., 2017) datasets for training.", "For evaluation, in addition to the Spider development set, 2 we use eight English-language SSP datasets curated by Finegan-Dollak et al. (2018) covering a variety of domains, for example flights, geography, and movies.", "3 For each dataset, we evaluate on as much data as possible, excluding test sets.", "Table 1 describes our evaluation datasets.", "Developing evaluation metrics for these repurposed evaluation sets is challenging because of the diversity of SQL styles across different databases.", "Yu et al. (2018)'s proposed evaluation metric compares components of the predicted and correct query, allowing for variation in the exact form of the query, for example using different table aliases.", "However, it does not capture all possible SQL syntax, and fails to cover some of the gold queries in our evaluation datasets.", "For example, it does not handle assigning an alias to the results of an intermediate SELECT statement.", "Moreover, it does not measure equivalence of values, meaning 1 In addition to introducing Spider, Yu et al. (2018) propose to use a number of SSP datasets, including GeoQuery, as additional training examples for systems evaluated on Spider.", "However, these SSP datasets were not previously used as evaluation data in the XSP setting.", "During training, we use only the original Spider data, and discard this additional training data used by some Spider systems.", "2 WikiSQL contains much more simplified language, SQL queries, and databases than Spider.", "Therefore, we focus on Spider as part of our proposed XSP evaluation setup.", "3 Finegan-Dollak et al. 
(2018) re-split these datasets to evaluate generalization to novel query structures.", "However, this work still operates in the SSP setting, where in-domain training examples are available.", "Our setup uses the original splits of the data, rather than the structure-based splits (Table 1).", "not execute correctly.", "We propose to use variation of execution accuracy as our main metric.", "Execution accuracy over an evaluation set is the proportion of predicted queries which, when executed against the database, result in a table equivalent to the correct query's result.", "If the correct query requires ordering on the final table, we require the tables be exactly the same; if it does not, we consider result tables equivalent if they contain the same set of rows.", "We supplement the results with additional baselines and data filtering to address the problem of over-crediting spurious predictions.", "We report the empty-table prior for each dataset, demonstrating how well a model could perform if predicting incorrect queries that result in empty tables.", "We create a filtered subset where relative performance of systems is more meaningful, including attempting to remove examples that are impossible to solve without in-domain training data.", "Our heuristics include removing examples with correct queries that result in empty tables, and where the correct query contains a value token that is not copiable from the input.", "4 For example, in Restaurants, the phrase good restaurant always requires constraining the SQL query to restaurants with a rating greater than 2.5, even when the rating is not explicitly mentioned.", "Single-database semantic parsing requires recognizing unseen, in-domain entities, understanding new compositional structures, and generating executable representations.", "Cross-database semantic 4 Details are included in Appendix A. 
parsing introduces additional challenges, which we analyze and discuss below.", "We find that with existing XSP datasets, these challenges have been relatively under-explored.", "In our proposed setup, where we evaluate on datasets designed for SSP, these challenges become more prevalent.", "Generalizing to a new domain requires understanding domain-particular language, including entity mentions and their types, and how to map domain-specific phrases to SQL queries.", "Identifying Entities In the XSP setting, identifying spans of tokens comprising relevant entities in the utterance is difficult, especially without access to the database contents.", "For example, in some databases, first and last names are stored in separate columns, so the corresponding tokens should appear in different parts of the SQL query.", "In other databases, a single column is used to store names.", "Even if a model is trained on databases where names are always stored in a single column, it still must generalize to databases where first and last names are stored in separate columns.", "This becomes more challenging with domain-specific entities.", "For example, in the Advising example in Figure 1, the span EECS 478 refers to two distinct database entities, rather than a single entity.", "This requires taking into account the database schema, for example by considering that the course table has distinct columns for department and number .", "a natural language utterance to an executable SQL query requires correctly identifying which columns and", "tables each entity should be compared with.", "Consider the following example (from GeoQuery): NL : what states are next to the mississippi SQL : select traverse from river where river name = mississippi'; To correctly identify that the entity mississippi refers to a river name in the river table, the system must have some domain knowledge.", "mississippi appears twice in the database: as a state and as a river.", "Even in the SSP setup, if the system has access to database contents, this entity mention's type is ambiguous without reasoning about its context in the utterance.", "In the XSP setup, this problem becomes even more difficult.", "Database contents are not available at model inference time, so an exhaustive search over the database for matching columns is not possible.", "Without in-domain training data, the model must still be able to choose the most likely column match for each mentioned entity.", "However, sometimes the column name is explicitly mentioned in the utterance, making the matching problem much easier, as demonstrated in the Spider example of Figure 1.", "We measure how prevalent the challenge of mapping from entities to column names is in our XSP setup.", "In each evaluation set, we estimate the proportion of examples whose entity mentions can be resolved using exact string matching between the utterance and the schema.", "5 Yavuz et al. 
(2018) perform a similar analysis manually on the WikiSQL dataset, estimating that roughly 54.1% of examples can be solved using exact match.", "The rightmost column in Table 1 compares all eight evaluation datasets and the Spider development set.", "In all eight evaluation datasets originally developed for SSP, fewer than half of examples explicitly mention column names for all entities in the utterance.", "In contrast, all column names are explicitly mentioned in least 72.4% of examples in the Spider development set.", "These results demonstrate that addressing this challenge is critical to XSP on completely unseen domains.", "Domain-Specific Phrases Generalizing to new domains requires mapping domain-specific phrases to implementations in SQL.", "Consider the following examples (from GeoQuery): 5 More details on this analysis are available in Appendix A. NL : what is the smallest city in arkansas SQL : select city name from city where population = (select min (population) from city where state name = arkansas') and state name = arkansas' NL : what is the smallest state that borders texas SQL : select state name from state where area = (select min (area) from state where state name in (select border from border info where state name = texas')) and state name in (select border from border info where state name = texas') When smallest describes a city, it requires sorting by the city.population column, but when used to describe a state, it requires sorting by the state.area column, even though the state table also has a population column.", "Another phrase whose implementation may change in a new database is how many .", "This phrase is often mapped to the count operator, but is sometimes mapped to specific database columns.", "For example, in Figure 1, how many credits maps to the credits table in Advising, and how many people maps to the population table in GeoQuery.", "To scope the problem, Yu et al. (2018) avoid including examples in Spider that require commonsense reasoning, including examples of domain-specific phrases.", "However, understanding domain-specific phrases is an important capability for a domain-general semantic parsing system.", "Cross-database semantic parsing requires generalizing to new database schemas, including larger tables and compositions of SQL components.", "Four of our evaluation datasets have at least ten tables in the database, with the largest database being ATIS with 32 tables.", "6 Figure 1 demonstrates that generating queries for large databases such as ATIS often requires reasoning about the relationships between many tables.", "In contrast, our training databases are relatively small, with one table per example in WikiSQL and an average of 5.1 per database in Spider.", "The queries themselves also vary in complexity.", "Finegan-Dollak et al. (2018) show that our eight target evaluation datasets range from using 1.4 to 6.4 tables per SQL query.", "We estimate that in Spider, an average query uses around 1.7 tables, which 6 Finegan-Dollak et al. (2018) and Yu et al. 
"Generalizing to new databases and datasets in our setting requires generating queries that use more tables than the training data.", "In some evaluation datasets, the system must not only reason about the input utterance and schema, but about dataset-specific conventions that are not specified in the inputs.", "Consider the following example (from Scholar): NL: papers on semantic parsing SQL: select distinct T1.paperid from keyphrase as T2, paper as T1, paperkeyphrase as T3 where T2.keyphrasename = 'semantic parsing' and T3.keyphraseid = T2.keyphraseid and T1.paperid = T3.paperid; The annotated SQL query for this utterance returns the paperid column from the paper table.", "However, the paper table also includes a column named title.", "The utterance does not specify whether the final column should be paperid or title.", "While both columns may seem like reasonable options, the dataset's convention is that a list of papers should be presented using the paperid column, and a query selecting the title column will have an incorrect execution result.", "Such conventions are difficult, if not impossible, to learn without any in-domain training data.", "Unfortunately, these cases occur in nearly all target datasets.", "We do not focus on addressing this type of generalization, and instead report how pervasive this problem is during error analysis.", "A possible direction for future work is to assume access to a small number of in-domain training examples and perform few-shot learning.", "Our model takes as input an utterance x and a database schema S.", "Similar to Guo et al. (2019), we serialize S into a sequence of wordpieces s = t_0 + t_1 + ... + t_{|S|}.", "Each t_i is a serialization of table T_i, where t_i = ⟨TAB⟩ + T_i + c_{i,0} + c_{i,1} + ... + c_{i,|T_i|}.",
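The schema serialization just described can be sketched as follows; the concrete token strings (e.g. <TAB>, <text>) and the dict-based schema layout are our own assumptions for illustration, not the paper's exact vocabulary.

```python
def serialize_schema(schema):
    """Serialize a database schema into the flat token sequence described above.

    `schema` is assumed to be a list of tables, each a dict with a "name" and a
    list of (column_name, column_type) pairs.
    """
    tokens = []
    for table in schema:
        tokens.append("<TAB>")  # marks the beginning of a table schema serialization
        tokens.extend(table["name"].split())
        for col_name, col_type in table["columns"]:
            tokens.append(f"<{col_type}>")  # column-type token, e.g. <text> or <number>
            tokens.extend(col_name.split())
    return tokens

# Example: a single-table schema from the Advising discussion above.
schema = [{"name": "course",
           "columns": [("department", "text"), ("number", "number")]}]
print(serialize_schema(schema))
# ['<TAB>', 'course', '<text>', 'department', '<number>', 'number']
```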
"⟨TAB⟩ is a token noting the beginning of a table schema serialization.", "T_i is the tokenization of table T_i's name.", "Each c_{i,j} is a serialization of a column C_{i,j}, where c_{i,j} = CT_{i,j} + C_{i,j}.", "CT_{i,j} is a token denoting the type of the column's contents as provided by the database schema, for example numerical or text.", "C_{i,j} is the tokenization of the column's name.", "The ordering of table schemas in s and table columns in each t_i is arbitrary.", "The input to the encoder is the concatenation of the query wordpieces and the serialized schema, represented as the sequence of tokens x = ⟨CLS⟩ + u + ⟨SEP⟩ + s.", "The inputs to the encoder are embedded and passed to a pretrained Transformer encoder such as BERT (Devlin et al., 2019).", "The decoder is an autoregressive Transformer decoder (Vaswani et al., 2017) that attends over the outputs of the encoder and the generated prefix.", "We use a training set {(x^(l), y^(l), S^(l))}_{l=1}^{N} consisting of pairs of natural language utterances, gold SQL queries, and database schemas.", "We train the encoder and decoder end-to-end, minimizing the token-level cross-entropy loss of the gold query y^(l).", "We update the parameters of the pre-trained encoder during training.", "For training data we use training sets developed for XSP.", "Importantly, to ensure we are evaluating the cross-database setting, our training data does not include examples from the evaluation databases.", "During inference, we use beam search and execute the highest-probability, syntactically correct prediction.", "We impose a maximum execution time of 45 seconds for predictions.", "More details on the model, learning, and evaluation setup are available in Appendix B.",
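A minimal sketch of the inference-time execution budget described above, assuming SQLite databases; the use of sqlite3's progress handler to enforce the 45-second limit is our own assumption about the mechanism, not the authors' setup.

```python
import sqlite3
import time

def execute_with_timeout(db_path, query, timeout_s=45.0):
    """Execute a predicted SQL query, aborting after `timeout_s` seconds."""
    conn = sqlite3.connect(db_path)
    start = time.time()
    # The handler is invoked periodically; returning a nonzero value aborts the query.
    conn.set_progress_handler(
        lambda: 1 if time.time() - start > timeout_s else 0, 10000)
    try:
        return conn.execute(query).fetchall()
    except sqlite3.OperationalError:
        return None  # syntactically invalid or timed-out query
    finally:
        conn.close()
```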
"5.1 Generalization Strategies While using pre-trained language models can help encode natural language text, we need other strategies to reason jointly about the language and the database schema in completely unseen domains.", "We focus on generalizing to domain-specific language and novel database structures.", "Value Copying Similar to previous work (Jia and Liang, 2016; Gu et al., 2016; Gulcehre et al., 2016; See et al., 2017), we use a copy mechanism in the decoder.", "At each output step, the decoder generates a distribution over possible actions, including selecting a symbol from the output vocabulary, and copying a token from the input x.", "We only allow copying of certain token types, and mask out invalid copying actions, including independent wordpieces from u, ⟨TAB⟩, and column-type tokens (a sketch of this masking follows below).", "For table and column tokens, the name of the corresponding table or column is recovered by post-processing the predicted sequence y.", "7 To discourage over-fitting to an arbitrary ordering of schema elements, we duplicate each Spider training example seven times with randomly permuted orderings.", "Duplicating seven times results in the number of Spider training examples roughly matching the number of WikiSQL training examples (Section 5.1).", "Previous approaches on Spider do not evaluate execution accuracy over the databases.", "Because the main metric does not require values in the predicted and gold queries to be the same, many approaches simplify the problem by using a placeholder token for all values during training.", "However, correctly generating values is critical for correctly executing predicted queries.", "To the best of our knowledge, our approach is the first to evaluate on Spider with execution accuracy and to generate SQL queries without placeholder values.", "Multiple Data Sources We train with training data from Spider (Yu et al., 2018) and WikiSQL (Zhong et al., 2017).", "Spider includes examples of complex SQL queries grounded in multi-table databases, while queries in WikiSQL are compositionally simple and grounded in single web tables.", "We use WikiSQL to improve generalization to domain-specific data, as it covers a large variety of domains.", "WikiSQL contains many more tables than Spider, and prior work estimates that roughly half of WikiSQL examples require using domain knowledge to map from entity mentions to column names (Yavuz et al., 2018).",
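A sketch of the copy-action masking described in the Value Copying paragraph above; the set of uncopyable token types is our reading of the text, and the tensor layout is assumed for illustration.

```python
import torch

def masked_copy_logits(copy_logits, token_types):
    """Mask out invalid copy actions over input positions.

    `copy_logits`: a 1-D tensor of scores, one per input token;
    `token_types`: a string tag per input token (our own tagging scheme).
    """
    UNCOPYABLE = {"TAB", "COLUMN_TYPE", "NON_INITIAL_WORDPIECE"}
    mask = torch.tensor([t in UNCOPYABLE for t in token_types])
    # Invalid positions get -inf so they receive zero probability after softmax.
    return copy_logits.masked_fill(mask, float("-inf"))
```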
"Different Output Space Guo et al. (2019) demonstrated improvements on Spider by deterministically mapping SQL to an intermediate representation, SemQL, and learning to predict outputs in this space.", "SemQL does not require predicting all of the tables in the FROM clause of the SQL query, or explicitly predicting the columns on which tables are joined.", "Instead of reasoning about foreign keys, the model predicts queries in the SemQL space, which are deterministically transformed to a final SQL query.", "In most cases, SemQL queries can be mapped back to SQL using database foreign key relations.", "We implement this aspect of SemQL as a mapping from SQL to a representation with an under-specified FROM clause, which we call SQLUF.", "Conversion from SQL to SQLUF removes tables from the FROM clause(s) of the SQL query implicitly referenced via a column elsewhere in the query, and removes JOIN clauses.", "Conversion from SQLUF to SQL restores these tables, and joins between tables are inferred by greedily identifying a path that connects all tables in the FROM clause, given foreign key relations (a sketch follows below).", "Examples of SQLUF are shown in Appendix C.", "6 Experiments Comparison to Existing XSP Systems Our best model performs well on the Spider development set.", "Table 3 compares our system with top systems on the Spider leaderboard.", "On the development set, our model performs competitively with contemporaneous systems.", "Table 3 shows that Spider performance correlates with the choice of pre-trained model.", "Public BERT-Large is better than BERT-Base.", "To further improve performance, we experiment with an enhanced pre-trained model BERT-Large+ following the recipe proposed by Liu et al. (2019).", "The BERT-Large+ model is trained with 8K batch size and 100k training steps, and in contrast to RoBERTa, is only trained on the Wiki + Books Corpus used in Devlin et al. (2019).",
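A minimal sketch of the SQLUF-to-SQL join restoration referenced above, where joins are recovered by searching for a path through foreign-key relations; the breadth-first strategy and data layout are our own assumptions about how the greedy path identification could work.

```python
from collections import deque

def infer_join_path(tables, foreign_keys):
    """Connect the tables of an under-specified FROM clause via foreign keys.

    `tables`: table names in the FROM clause; `foreign_keys`: maps a table to
    its directly joinable neighbor tables. Tables with no connecting path are
    left unjoined (cf. the lossy cases noted in footnote 8).
    """
    connected = {tables[0]}
    joins = []
    for target in tables[1:]:
        if target in connected:
            continue
        # BFS from the already-connected set toward the target table.
        queue = deque((t, [t]) for t in connected)
        seen = set(connected)
        while queue:
            node, path = queue.popleft()
            if node == target:
                joins.extend(zip(path, path[1:]))  # join edges along the path
                connected.update(path)
                break
            for nbr in foreign_keys.get(node, []):
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, path + [nbr]))
    return joins

# Toy ATIS-like schema: flight joins airport only through airport_service.
fks = {"flight": ["airport_service"],
       "airport_service": ["flight", "airport"],
       "airport": ["airport_service"]}
print(infer_join_path(["flight", "airport"], fks))
# [('flight', 'airport_service'), ('airport_service', 'airport')]
```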
(2019).", "Training our model to predict value placeholders ( Value Copying) instead of copying values from the input results in a performance drop, showing a benefit of modeling values even when ignored by the metric.", "10 XSP on Unseen Datasets Table 2 shows results on all evaluation data, including datasets originally studied in the SSP setting.", "We report results on the filtered set (Section 3) as well as the full set of these datasets.", "A large portion of the examples in datasets such as Restaurants and Advising yield empty execution results.", "This shows the need to also evaluate on the filtered set, where incorrect spurious predictions are much less likely to result in the same table as a gold query with an empty table result.", "Second, while execution accuracy on Spider is relatively high, performance on the other evaluation datasets is much lower.", "We find that all three techniques for addressing generalization challenges are effective.", "First, including WikiSQL in the training data results in better performance than only using Spider training data.", "We hypothesize that this is due to the addi-8 Like SemQL, this conversion is not possible if foreign key relations between predicted tables are not provided or if a given table is referenced more than once in a FROM clause.", "This can also result in a lossy or ambiguous conversion if there are multiple foreign key relations between a pair of tables.", "9 https://yale-lily.github.io/spider .", "In Table 3, we include non-anonymized leaderboard submissions, and for anonymous systems, the most recent submission for duplicate systems.", "10 About 55% of examples in Spider do not require copying values from the input utterance to the gold query.", "tional domains in WikiSQL, as well as the larger proportion of examples that require mapping from entities in the utterance to column names (Yavuz et al., 2018).", "Using SQLUF also improves performance, as it produces queries coherent with respect to the schema, for example only selecting columns from tables where the column exists.", "Finally, using value placeholders significantly reduces execution accuracy in all datasets.", "While masking values decreases Exact Set Match on Spider by 10.9%, its effect on execution accuracy can be devastating both for Spider and the eight evaluation datasets.", "This demonstrates the need to consider execution results when evaluating semantic parsing systems.", "Error Analysis For each evaluation dataset, we analyze twenty random predictions from the filtered subset.", "Examples of the most common error types are shown in Figure 2, along with the proportion of analyzed predictions in the eight target datasets that contain the error type.", "Appendix D discusses the complete results of error analysis.", "40% of errors are caused by comparing an entity to the wrong column, for example searching for James Bond' in the director.name column when it actually refers to a movie.title .", "This usually requires using domain knowledge identify to columns that are likely to contain the mentioned entity (Section 4.1).", "31.1% of errors are caused by missing constraints specified in the utterance, for example by failing to use a relevant entity in the predicted query.", "28.8% of errors are also caused by incorrectly identifying entity spans, for example by treating FIN 340 as a single entity rather than two separate entities in the database (Section 4.1).", "Another common error is predicting the wrong final column.", "While choosing what to return is 
"While choosing what to return is difficult for the model when it requires understanding domain-specific phrases such as how many (20.0% of errors; Section 4.1), sometimes the errors are instead due to dataset conventions (26.9% of errors; Section 4.3).", "For example, the paperid column should be selected instead of the title column in Scholar.", "Such dataset conventions could be learned through few-shot learning, where a small number of in-domain training examples are available.", "Our system is required to generalize to larger databases than it was trained on, including more complex compositions of tables (Section 4.2).", "Figure 2 (excerpt): entity-column matching errors, 40.0% (IMDB-XSP); NL: List James Bond directors.", "For example, while SQLUF can be used to represent most gold queries in most evaluation datasets (shown in Appendix C), in ATIS, only 17.3% of gold queries are covered by SQLUF.", "Most of the uncovered examples require mapping two columns, to_airport and from_airport, in the same flight table to the same foreign key airport_service.airport_code.", "This compositional structure is not covered by SQLUF, but is critical to perform well on ATIS.", "We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen at training time.", "While this task has been studied through datasets developed specifically for XSP, we propose a more holistic evaluation for XSP, where we also evaluate on datasets originally studied in a setting where in-domain training data is available.", "We identify several new generalization challenges that arise when evaluating in our proposed setup, including identifying entities, mapping entities and domain-specific phrases to a database schema, and generalizing to more complex database schemas.", "Using a model that performs well on evaluation data designed for XSP, we are able to move towards addressing some of the generalization challenges on these additional evaluation sets without any in-domain training data.", "Our results and analysis demonstrate the need for developing more holistic evaluation of cross-database semantic parsing using a more diverse set of language and databases.", "Several significant generalization challenges remain, including improving commonsense and in-domain reasoning and table schema understanding capabilities.", "Some examples in our filtered evaluation set still require reasoning about dataset conventions that are difficult to acquire without in-domain training examples.", "Future work could also make the stronger assumption that a small number of in-domain training examples are available, and train and evaluate in a few-shot setting.", "The first author is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1650441.", "We thank the Google Research Language and Cornell NLP groups for their comments and feedback during the project's development.", "We also thank Jing Li for pre-training the BERT-Large+ model, and Philip Massey, Zuyao Li, Angelica Chen, Karl Pichotta, and Francesco Piccinno for their contributions to the codebase.", "Finally, we thank the anonymous reviewers for their comments and suggestions." ]
[ "method", "abstain", "objective", "objective", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "result", "result", "objective", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other" ]
[ "Current measures for evaluating text simplification systems focus on evaluating lexical text aspects, neglecting its structural aspects.", "In this paper we propose the first measure to address structural aspects of text simplification, called SAMSA .", "It leverages recent advances in semantic parsing to assess simplification quality by decomposing the input based on its semantic structure and comparing it to the output.", "SAMSA provides a reference-less automatic evaluation procedure, avoiding the problems that reference-based methods face due to the vast space of valid simplifications for a given sentence.", "Our human evaluation experiments show both SAMSA 's substantial correlation with human judgments, as well as the deficiency of existing reference-based measures in evaluating structural simplification.", "1 1 Introduction Text simplification ( TS ) addresses the translation of an input sentence into one or more simpler sentences.", "It is a useful preprocessing step for several NLP tasks, such as machine translation (Chan-drasekar et al., 1996; Mishra et al., 2014) and relation extraction (Niklaus et al., 2016), and has also been shown useful in the development of reading aids, e.g., for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002).", "The task has attracted much attention in the past decade (Zhu et al., 2010; Woodsend and Lapata, 2011; Wubben et al., 2012; Siddharthan and Angrosh, 2014; Narayan and Gardent, 2014), but has yet to converge on an evaluation protocol that yields comparable results across different methods and strongly correlates with human judgments.", "This is in part due to the difficulty to combine the effects of different simplification operations 1 All data and code are available in https://github.", "(e.g., deletion, splitting and substitution).", "Xu et al. (2016) has recently made considerable progress towards that goal, and proposed to tackle it both by using an improved reference-based measure, named SARI, and by increasing the number of references.", "However, their research focused on lexical, rather than structural simplification, which provides a complementary view of TS quality as this paper will show.", "This paper focuses on the evaluation of the structural aspects of the task.", "We introduce the semantic measure SAMSA (Simplification Automatic evaluation Measure through Semantic An-notation), the first structure-aware measure for TS in general, and the first to use semantic structure in this context in particular.", "SAMSA stipulates that an optimal split of the input is one where each predicate-argument structure is assigned its own sentence, and measures to what extent this assertion holds for the input-output pair in question, by using semantic structure.", "SAMSA focuses on the core semantic components of the sentence, and is tolerant towards the deletion of other units.", "2 For example, SAMSA will assign a high score to the output split John got home. John gave Mary a call. for the input sentence John got home and gave Mary a call., as it splits each of its predicate-argument structures to a different sentence.", "Splits that alter predicate-argument relations such as John got home and gave. Mary called. 
are penalized by SAMSA.", "SAMSA's use of semantic structures for TS evaluation has several motivations.", "First, it provides means to measure the extent to which the meaning of the source is preserved in the output.", "Second, it provides means for measuring whether the input sentence was split to semantic units of the right granularity.", "2 We do not consider other structural operations, such as passive to active transformations (Canning, 2002), that are currently not treated by corpus-based simplification systems.", "Third, defining a semantic measure that does not require references avoids the difficulties incurred by their non-uniqueness, and the difficulty in collecting high quality references, as reported by Xu et al. (2015) and by Narayan and Gardent (2014) with respect to the Parallel Wikipedia Corpus (PWKP; Zhu et al., 2010).", "SAMSA is further motivated by its use of semantic annotation only on the source side, which makes it possible to evaluate multiple systems using the same source-side annotation, and avoids the need to parse system outputs, which can be garbled.", "In this paper we use the UCCA scheme for defining semantic structure (Abend and Rappoport, 2013).", "UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for machine translation evaluation (Birch et al., 2016) (Section 2).", "We note, however, that SAMSA can be adapted to work with any semantic scheme that captures predicate-argument relations, such as AMR (Banarescu et al., 2013) or Discourse Representation Structures (Kamp, 1981), as used by Narayan and Gardent (2014).", "We experiment with SAMSA both where semantic annotation is carried out manually, and where it is carried out by a parser.", "See Section 4.", "We conduct human rating experiments and compare the resulting system rankings with those predicted by SAMSA.", "We find that SAMSA's rankings obtain high correlations with human rankings, and compare favorably to existing reference-based measures for TS.", "Moreover, our results show that existing measures, which mainly target lexical simplification, are ill-suited to predict human judgments where structural simplification is involved.", "Finally, we apply SAMSA to the dataset of the QATS shared task on simplification evaluation (Stajner et al., 2016).", "We find that SAMSA obtains competitive correlation with human judgments on the task, despite operating in a more restricted setting, as it does not use human ratings as training data and focuses only on structural aspects of simplicity.", "Section 2 presents previous work.", "Section 3 discusses UCCA.", "Section 4 presents SAMSA.", "Section 5 details the collection of human judgments.", "Our experimental setup for comparing our human and automatic rankings is given in Section 6, and results are given in Section 7, showing superior results for SAMSA.", "A discussion on the results is presented in Section 8.", "Evaluation Metrics for Text Simplification.", "As pointed out by Xu et al. 
(2016), many of the existing measures for TS evaluation do not generalize across systems, because they fail to capture the combined effects of the different simplification operations.", "The two main directions pursued are direct human judgments and automatic measures borrowed from machine translation (MT) evaluation.", "Human judgments generally include grammaticality (or fluency), meaning preservation (or adequacy) and simplicity.", "Human evaluation is usually carried out with a small number of sentences (18 to 20), randomly selected from the test set (Wubben et al., 2012; Narayan and Gardent, 2014, 2016).", "The most commonly used automatic measure for TS is BLEU (Papineni et al., 2002).", "Using 20 source sentences from the PWKP test corpus with 5 simplified sentences for each of them, Wubben et al. (2012) investigated the correlation of BLEU with human evaluation, reporting positive correlation for simplicity, but no correlation for adequacy.", "Stajner et al. (2014) explored the correlation with human judgments of six automatic metrics: cosine similarity with a bag-of-words representation, METEOR (Denkowski and Lavie, 2011), TERp (Snover et al., 2009), TINE (Rios et al., 2011) and two subcomponents of TINE: T-BLEU (a variant of BLEU which uses lower n-grams when no 4-grams are found) and SRL (based on semantic role labeling).", "Using 280 pairs of a source sentence and a simplified output with only structural modifications, they found positive correlations for all the metrics except TERp with respect to meaning preservation and positive albeit lower correlations for METEOR, T-BLEU and TINE with respect to grammaticality.", "Human simplicity judgments were not considered in this experiment.", "In this paper we collect human judgments for grammaticality, meaning preservation and structural simplicity.", "To our knowledge, this is the first work to target structural simplicity evaluation, and it does so both through elicitation of human judgments and through the definition of SAMSA.", "Xu et al. (2016) were the first to propose two evaluation measures tailored for simplification, focusing on lexical simplification.", "The first metric is FKBLEU, a combination of iBLEU (Sun and Zhou, 2012), originally proposed for evaluating paraphrase generation by comparing the output both to the reference and to the input, and of the Flesch-Kincaid Index (FK), a measure of the readability of the text (Kincaid et al., 1975).", "The second one is SARI (System output Against References and against the Input sentence) which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.", "They found that FKBLEU and even more so SARI correlate better with human simplicity judgments than BLEU.", "On the other hand, BLEU (with multiple references) outperforms the other metrics on the dimensions of grammaticality and meaning preservation.", "As the Parallel Wikipedia Corpus (PWKP), usually used in simplification research, has been shown to contain a large portion of problematic simplifications (Xu et al., 2015; Hwang et al., 2015), Xu et al. 
(2016) further proposed to use multiple references (instead of a single reference) in the evaluation measures.", "SAMSA addresses this issue by directly comparing the input and the output of the simplification system, without requiring manually curated references.", "Structural Measures for Text-to-text Generation.", "Other than measuring the number of splits (Narayan and Gardent, 2014, 2016), which only assesses the frequency of this operation and not its quality, no structural measures were previously proposed for the evaluation of structural simplification.", "The need for such a measure is pressing, given recent interest in structural simplification, e.g., in the Split and Rephrase task (Narayan et al., 2017), which focuses on sentence splitting.", "In the task of sentence compression, which is similar to simplification in that they both involve deletion and paraphrasing, Clarke and Lapata (2006) showed that a metric that uses syntactic dependencies better correlates with human evaluation than a metric based on surface substrings.", "Toutanova et al. (2016) found that structure-aware metrics obtain higher correlation with human evaluation than bigram-based metrics, in particular with grammaticality judgments, but that they do not significantly outperform bigram-based metrics on any parameter.", "Both Clarke and Lapata (2006) and Toutanova et al. (2016) use reference-based metrics that use syntactic structure on both the output and the references.", "SAMSA on the other hand uses linguistic annotation only on the source side, with semantic structures instead of syntactic ones.", "Semantic structures were used in MT evaluation, for example in the MEANT metric (Lo et al., 2012), which compares the output and the reference sentences, both annotated using SRL (Semantic Role Labeling).", "Lo et al. (2014) propose the XMEANT variant, which compares the SRL structures of the source and output (without using references).", "As some frequent constructions like nominal argument structures are not addressed by the SRL annotation, Birch et al. (2016) proposed HUME, a human evaluation metric based on UCCA, using the semantic annotation only on the source side when comparing it to the output.", "We differ from HUME in proposing an automatic metric, tackling monolingual text simplification, rather than MT. 
The UCCA annotation has also been recently used for the evaluation of Grammatical Error Correction (GEC).", "The USIM metric (Choshen and Abend, 2018) measures the semantic faithfulness of the output to the source by comparing their respective UCCA graphs.", "Semantic Structures in Text Simplification.", "In most of the work investigating the structural operations involved in text simplification, both in rule-based systems (Siddharthan and Angrosh, 2014) and in statistical systems (Zhu et al., 2010; Woodsend and Lapata, 2011), the structures that were considered were syntactic.", "Narayan and Gardent (2014, 2016) proposed to use semantic structures in the simplification model, in particular in order to avoid splits and deletions which are inconsistent with the semantic structures.", "SAMSA identifies such incoherent splits, e.g., a split of a phrase describing a single event, and penalizes them.", "Glavas and Stajner (2013) presented two simplification systems based on event extraction.", "One of them, named Event-wise Simplification, transforms each factual event mention into a separate sentence.", "This approach fits with SAMSA's stipulation that an optimal structural simplification is one where each (UCCA-) event in the input sentence is assigned a separate output sentence.", "However, unlike in their model, SAMSA stipulates not only that multiple events evoked by a verb in the same sentence should be avoided in a simplification, but penalizes sentences containing multiple events evoked by a lexical item of any category.", "For example, the sentence John's unexpected kick towards the gate saved the game which has two events, one evoked by kick (a noun) and another by saving (a verb) can be converted to John kicked the ball towards the gate. It saved the game. 
3 UCCA's Semantic Structures In this section we will briefly describe the UCCA scheme, focusing on the concepts of Scenes and Centers which are key in the definition of SAMSA.", "UCCA (Universal Conceptual Cognitive Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme based on typological (Dixon, 2010a,b, 2012) and cognitive (Langacker, 2008) theories which aims to represent the main semantic phenomena in the text, abstracting away from syntactic detail.", "UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph (including the words of the text) or to several elements jointly viewed as a single entity according to some semantic or cognitive consideration.", "Unlike AMR, UCCA semantic units are directly anchored in the text (Abend and Rappoport, 2017; Birch et al., 2016), which allows easy inclusion of a word-to-word alignment in the metric model (Section 4).", "UCCA Scenes.", "A Scene, which is the most basic notion of the foundational layer of UCCA considered here, describes a movement, an action or a state which persists in time.", "Every Scene contains one main relation, which can be either a Process or a State.", "The Scene may contain one or more Participants, which are interpreted in a broad sense, including locations and destinations.", "For example, the sentence He ran into the park has a single Scene whose Process is ran.", "The two Participants are He and into the park.", "Scenes can have several roles in the text.", "First, they can provide additional information about an established entity (Elaborator Scenes) as for example the Scene who entered the house in the sentence The man who entered the house is John.", "They can also be one of the Participants of another Scene, for example, he will be late in the sentence: He said he will be late.", "In the other cases, the Scenes are annotated as parallel Scenes (H) which can be linked by a Linker (L): When L [he will arrive at home] H , [he will call them] H .", "Unit Centers.", "With regard to units which are not Scenes, the category Center denotes the semantic head of the unit.", "For example, dogs is the center of the expression big brown dogs and box is the center of in the box.", "There could be more than one Center in a non-Scene unit, for example in the case of coordination, where all conjuncts are Centers.", "SAMSA's main premise is that a structurally correct simplification is one where: (1) each sentence contains a single event from the input (UCCA Scene), (2) the main relation of each of the events and their participants are retained in the output.", "For example, consider John wrote a book. I read that book. as a simplification of I read the book that John wrote.", "Each output sentence contains one Scene, which has the same Scene elements as the source, and would thus be deemed correct by SAMSA.", "On the other hand, the output John wrote. I read the book. is an incorrect split of that sentence, since a participant of the writing Scene, namely the book, is absent in the split sentence.", "SAMSA would indeed penalize such a case.", "Similarly, Scenes which have elements across several sentences receive a zero score by SAMSA.", "As an example, consider the sentence The combination of new weapons and tactics marks this battle as the end of chivalry, and the erroneous split The combination of new weapons and tactics. It is the end of chivalry. 
(adapted from the output of a recent system on the PWKP corpus), which does not preserve the original meaning.", "SAMSA is based on two external linguistic resources.", "One is a semantic annotation (UCCA in our experiments) of the source side, which can be carried out either manually or automatically, using the TUPA parser (Transition-based UCCA parser; Hershcovich et al., 2017) for UCCA.", "3 https://github.com/danielhers/tupa", "UCCA decomposes each sentence s into a set of Scenes {sc_1, sc_2, ..., sc_n}, where each Scene sc_i contains a main relation mr_i (a sub-span of sc_i) and a set of zero or more Participants A_i.", "The second resource is a word-to-word alignment A between the words in the input and one or zero words in the output.", "The monolingual alignment thus permits SAMSA not to penalize outputs that involve lexical substitutions (e.g., commence might be aligned with start).", "We denote by n_inp the number of UCCA Scenes in the input sentence and by n_out the number of sentences in the output.", "Given an input sentence's UCCA Scenes sc_1, ..., sc_{n_inp}, a non-annotated output of a simplification system split into sentences s_1, ..., s_{n_out}, and their word alignment A, we distinguish between two cases:", "1. n_inp ≥ n_out: in this case, we compute the maximal Many-to-1 correspondence between Scenes and sentences.", "A Scene is matched to a sentence in the following way.", "We say that a leaf l in a Scene sc is consistent in a Scene-sentence mapping M which maps sc to a sentence s, if there is a word w ∈ s which l aligns to (according to the word alignment A).", "The score of matching a Scene sc to a sentence s is then defined to be the total number of consistent leaves in sc.", "We traverse the Scenes in their order of occurrence in the text, selecting for each the sentence that maximizes the score.", "If n_inp = n_out, once a sentence is matched to a Scene, it cannot be matched to another one.", "Ties between sentences are broken towards the sentence that appeared first in the output.", "M(sc_i) = argmax_s score(sc_i, s) s.t. s ∉ {M(sc_1), ..., M(sc_{i-1})} if n_inp = n_out", "Minimal Centers.", "The minimal center of a UCCA unit u is UCCA's notion of a semantic head word, defined through recursive rules not unlike the head propagation rules used for converting constituency structures to dependency structures.", "More formally, we define the minimal center of a UCCA unit u (here a Participant or a Main Relation) to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.", "If a Participant (or a Center inside a Participant) is a Scene, its center is the main relation (Process or State) of the Scene.", "For example, the center of the unit The previous president of the commission (u_1) is president of the commission.", "The center of the latter is president, which is a leaf in the graph.", "So the minimal center of u_1 is president.", "Given the input sentence Scenes {sc_1, ..., sc_{n_inp}}, the output sentences {s_1, ..., s_{n_out}}, and a mapping between them M, SAMSA is defined as: SAMSA = (n_out / n_inp) · (1 / (2 n_inp)) · Σ_{sc_i} [ 1_{M(sc_i)}(MR_i) + (1 / k_i) Σ_{j=1}^{k_i} 1_{M(sc_i)}(Par_i^(j)) ], where MR_i is the minimal center of the main relation (Process or State) of sc_i, and Par_i^(j) (j = 1, ..., k_i) are the minimal centers of the Participants of sc_i.",
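A minimal sketch of the Scene-to-sentence matching and the SAMSA score defined above, covering the n_inp ≥ n_out case; the data layout and the aligned(u, s) predicate are our own assumptions, not the authors' implementation.

```python
def samsa_score(scenes, sentences, aligned):
    """Compute the SAMSA score for one input-output pair.

    `scenes`: list of dicts with keys "leaves", "mr" (minimal center of the
    main relation) and "participants" (minimal centers of the Participants);
    `aligned(u, sent)` returns True iff unit u aligns to a word of `sent`.
    """
    n_inp, n_out = len(scenes), len(sentences)

    def match_score(scene, sent):
        # Number of scene leaves aligned to some word of the sentence.
        return sum(aligned(leaf, sent) for leaf in scene["leaves"])

    # Greedy matching in order of occurrence; when n_inp == n_out each
    # sentence may be used once. Python's max breaks ties toward the
    # earliest candidate, matching the tie-breaking rule above.
    matching, used = [], set()
    for sc in scenes:
        cand = [i for i in range(n_out) if not (n_inp == n_out and i in used)]
        best = max(cand, key=lambda i: match_score(sc, sentences[i]))
        used.add(best)
        matching.append(sentences[best])

    total = 0.0
    for sc, sent in zip(scenes, matching):
        total += aligned(sc["mr"], sent)            # 1_{M(sc)}(MR)
        pars = sc["participants"]
        if pars:                                    # (1/k) sum of 1_{M(sc)}(Par)
            total += sum(aligned(p, sent) for p in pars) / len(pars)
    return (n_out / n_inp) * total / (2 * n_inp)
```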
"For an output sentence s, 1_s(u) is a function from the input units to {0, 1}, which returns 1 iff u is aligned (according to A) with a word in s.", "4 In some cases, the unit u can be a sequence of centers (if there are several minimal centers).", "In these cases, 1_s(u) returns 1 iff the condition holds for all centers.", "The role of the non-splitting penalty term n_out/n_inp in the SAMSA formula is to penalize cases where the number of sentences in the output is smaller than the number of Scenes.", "In order to isolate the effect of the non-splitting penalty, we experiment with an additional metric SAMSA_abl (reads SAMSA ablated), which is identical to SAMSA but does not take this term into account.", "Corpus-level SAMSA and SAMSA_abl scores are obtained by averaging their sentence scores.", "In the case of implicit units, i.e. omitted units that do not appear explicitly in the text (Abend and Rappoport, 2013), since the unit preservation cannot be directly captured, the score for the relevant unit will be set to 0.5.", "For example, in the Scene traveling is fun, the people who are traveling correspond to an implicit Participant.", "As implicit units are not covered by TUPA, this will only be relevant for the semi-automatic implementation of the metric (see Section 6).", "5 All the data can be found here: http://homepages.inf.ed.ac.uk/snaraya2/data/simplification-2016.tgz.", "For testing the automatic metric, we first build a human evaluation benchmark, using 100 sentences from the complex part of the PWKP corpus and the outputs of six recent simplification systems for these sentences: (1) TSM (Zhu et al., 2010) using Tree-Based SMT, (2) RevILP (Woodsend and Lapata, 2011) using Quasi-Synchronous Grammars, (3) PBMT-R (Wubben et al., 2012) using Phrase-Based SMT, (4) Hybrid (Narayan and Gardent, 2014), a supervised system using DRS, 
6 Qa addresses grammaticality, Qb and Qc capture two complementary aspects of meaning preservation (the addition and the removal of information) and Qd addresses structural simplicity.", "Possible answers are: 1 (no), 2 (maybe) and 3 (yes).", "Following Glavas and Stajner (2013), we used a 3 point Likert scale, which has recently been shown to be preferable over a 5 point scale through human studies on sentence compression (Toutanova et al., 2016).", "Question Qd was accompanied by a negative example 7 showing a case of lexical simplification, where a complex word is replaced by a simple one.", "A positive example was not included so as not to bias the annotators by revealing the nature of the operations our experiments focus on (i.e., splitting and deletion).", "The PWKP test corpus (Zhu et al., 2010) was selected for our experiments over the development and test sets used in (Xu et al., 2016), as the lat-ter's selection process was explicitly biased towards input-output pairs that mainly contain lexical simplifications.", "Given the annotator's answers, we consider the following scores.", "First, the grammaticality score G is the answer to Qa.", "By inverting (changing 1 to 3 and 3 to 1) the answer for Qb, we obtain a Non-Addition score indicating to which extent no additional information has been added.", "Similarly, inverting the answer to Qc yields the Non-Removal score.", "Averaging these two scores, we obtain the meaning preservation score P .", "Finally, the structural simplicity score S is the answer to Qd.", "Each of these scores is averaged over the five annotators.", "We further compute an average human score: AvgHuman = 1 3( G + P + S ) 5.3 Inter-annotator Agreement Inter-annotator agreement rates are computed in two ways.", "Table 2 presents the absolute agreement and Cohen's quadratic weighted (Cohen, 1968).", "Table 3 presents Spearman's correlation ( ) between the human ratings of the input-output pairs (top row), and between the resulting system scores (bottom row).", "In both cases, the agreement between the five annotators is computed as the average agreement over the 10 annotator pairs.", "Qa Qb Qc Qd AvgHuman Sen. 0.63 0.30 0.48 0.11 0.49 Sys.", "0.92 0.54 (0.1) 0.64 (0.06) 0.14 (0.4) 0.64 (0.06) Table 3: Spearman's correlation (and p -values) of the system-level (top row) and sentence-level (bottom row) ratings of the five annotators.", "We further compute SAMSA for the 100 sentences of the PWKP test set and the corresponding system outputs.", "Experiments are conducted in two settings: (1) a semi-automatic setting where 690 UCCA annotation was carried out manually by a single expert UCCA annotator using the UC-CAApp annotation software (Abend et al., 2017), and according to the standard annotation guidelines; 8 (2) an automatic setting where the UCCA annotation was carried out by the TUPA parser (Hershcovich et al., 2017).", "Sentence segmentation of the outputs was carried out using the NLTK package (Loper and Bird, 2002).", "For word alignments, we used the aligner of Sultan et al. 
(2014).", "9 7 Correlation with Human Evaluation We compare the system rankings obtained by SAMSA and by the four human parameters.", "We find that the two leading systems according to AvgHuman and SAMSA turn out to be the same: Split-Deletion and RevILP.", "This is the case both for the semi-automatic and the automatic implementations of the metric.", "A Spearman correlation between the human and SAMSA scores (com-paring their rankings) is presented in Table 4.", "We compare SAMSA and SAMSA abl to the reference-based measures SARI 10 (Xu et al., 2016) and BLEU, as well as to the negative Levenshtein distance to the reference (-LDSR ).", "We use the only available reference for this corpus, in accordance with the standard practice.", "SARI is a reference-based measure, based on n-gram overlap between the source, output and reference, and focuses on lexical (rather than structural) simplification.", "For completeness, we include the other two measures reported in Narayan and Gardent (2016), which are measures of similarity to the input (i.e., they quantify the tendency of the systems to introduce changes to the input): the negative Levenshtein distances between the output and input compared to the original complex corpus (-LDSC ), and the number of sentences split by each of the systems.", "The highest correlation with AvgHuman and grammaticality is obtained by semi-automatic SAMSA (0.58 and 0.54), a high correlation especially in comparison to the inter-annotator agreement on AvgHuman (0.64, Table 3).", "The automatic version obtains high correlation with human judgments in these settings, where for struc-8 http://www.cs.huji.ac.il/oabend/ ucca.html 9 https://github.com/ma-sultan/ monolingual-word-aligner 10 Data and code for can be found in https://github.", "tural simplicity, it scores somewhat higher than the semi-automatic SAMSA .", "The highest correlation with structural simplicity is obtained by the number of sentences with splitting, where SAMSA (automatic and semi-automatic) is second and third highest, although when restricted to multi-Scene sentences, the correlation for SAMSA (semi-automatic) is higher (0.89, p = 0 . 009 and 0.77, p = 0 . 
"The highest correlation for meaning preservation is obtained by SAMSA_abl, which provides further evidence that the retention of semantic structures is a strong predictor of meaning preservation (Sulem et al., 2015).", "SAMSA in itself does not correlate with meaning preservation, probably due to its penalization of under-splitting sentences.", "Note that the standard reference-based measures for simplification, BLEU and SARI, obtain low and often negative correlation with human ratings.", "We believe that this is the case because SARI and BLEU admittedly focus on lexical simplification, and are difficult to use to rank systems which also perform structural simplification.", "Our results thus suggest that SAMSA provides additional value in predicting the quality of a simplification system and should be reported in tandem with more lexically-oriented measures.", "Human evaluation parameters.", "The fact that the highest correlations for structural simplicity and meaning preservation are obtained by different metrics (SAMSA and SAMSA_abl respectively) highlights the complementarity of these two parameters for evaluating TS quality but also the difficulty of capturing them together.", "Indeed, a given sentence-level operation could both change the original meaning by adding or removing information (affecting the P score) and increase simplicity (S).", "On the other hand, the identity transformation perfectly preserves the meaning of the original sentence without making it simpler.", "For examining this phenomenon, we compute Spearman's correlation at the system level between the simplicity and meaning preservation human scores.", "We obtain a correlation of -0.77 (p = 0.04) between S and P.", "The correlations between S and the two subcomponents of P, the Non-Addition and the Non-Removal scores, are -0.43 (p = 0.2) and -0.77 (p = 0.04) respectively.", "These negative correlations support our use of both parameters in the evaluation.", "Table 4 header (excerpt): Reference-less (SAMSA, SAMSA_abl); Reference-based (BLEU, SARI, -LDSR); from Source (-LDSC, # Split Sents.).", "Distribution at the sentence level.", "In addition to the system-level analysis presented in Section 7, we also investigate the behavior of SAMSA at the sentence level by examining its joint distribution with the human evaluation scores.", "Focusing on the AvgHuman score and the automatic implementation of SAMSA and using the same data as in Section 7, we consider a single pair of scores (AvgHuman_i, SAMSA_i), 1 ≤ i ≤ 100, for each of the 100 source sentences, averaging over the SAMSA and human scores obtained for the 6 simplification systems (see Figure 1).", "The joint distribution indicates a positive correlation between SAMSA and AvgHuman.", "The corresponding Pearson correlation is indeed 0.27 (p = 0.03).", "In order to provide further validation of SAMSA's predictive value for the quality of simplification systems, we report SAMSA's correlation with a recently proposed benchmark, used for the QATS (Quality Assessment for Text Simplification) shared task (Stajner et al., 2016).", "Setup.", "The test corpus contains 126 sentences taken from 3 datasets described in Stajner et al. (2016): 
(1) EventS: original sentences from the EMM News-Brief and their syntactically simplified versions (with significant content reduction) by the EventSimplify TS system (Glavas and Stajner, 2013) (the test corpus contains 54 pairs from this dataset), (2) EncBrit: original sentences from the Encyclopedia Britannica (Barzilay and Elhadad, 2003) and their automatic simplifications obtained using ATS systems based on several phrase-based statistical MT systems (Stajner et al., 2015) trained on the Wikipedia TS corpus (Coster and Kauchak, 2011) (24 pairs), and (3) LSLight: sentences from English Wikipedia and their automatic simplifications (Glavas and Stajner, 2015) by three different lexical simplification systems (Biran et al., 2011; Horn et al., 2014; Glavas and Stajner, 2015) (48 pairs).", "11 http://qats2016.github.io/shared.html", "12 emm.newsbrief.eu/NewsBrief/clusteredition/en/latest.html", "13 takelab.fer.hr/data/symplify", "Human evaluation is also provided by this resource, with scores for overall quality, grammaticality, meaning preservation and simplicity.", "Importantly, the simplicity score does not explicitly refer to the output's structural simplicity, but rather to its readability.", "We focus on the overall human score, and compare it to SAMSA.", "Since different systems were used to simplify different portions of the input, correlation is taken at the sentence level.", "We use the same implementations of SAMSA.", "Manual UCCA annotation is here performed by one of the authors of this paper.", "Results.", "We follow Stajner et al. (2016) and report the Pearson correlations (at the sentence level) between the rankings of the metrics and the human evaluation scores.", "Results show that the semi-automatic and automatic SAMSA obtain Pearson correlations of 0.32 and 0.28, respectively, with the human scores.", "This places these measures in the 3rd and 4th places in the shared task, where the only two systems that surpassed them are marginally better, with scores of 0.33 and 0.34, and where the next system in QATS obtained a correlation of 0.23.", "Figure 1: Joint distribution of the automatic SAMSA and the AvgHuman scores at the sentence level.", "This correlation by SAMSA was obtained in more restricted conditions, compared to the measures that competed in QATS.", "First, SAMSA computes its score by only considering the UCCA structure of the source, and an automatic word-to-word alignment between the source and output.", "Most QATS systems, including OSVCML and OSVCML2 (Nisioi and Nauze, 2016) which scored highest on the shared task, use an ensemble of classifiers based on bag-of-words, POS tags, sentiment information, negation, readability measures and other resources.", "Second, the systems participating in the shared task had training data available to them, annotated by the same annotators as the test data.", "This was used to train classifiers for predicting their score.", "This gives the QATS measures much predictive strength, but hampers their interpretability.", "SAMSA on the other hand is conceptually simple and interpretable.", "Third, the QATS shared task does not focus on structural simplification, but experiments with different types of systems.", "Indeed, some of the data was annotated by systems that exclusively perform lexical simplification, which is orthogonal to SAMSA's structural focus.", "Given these factors, SAMSA's competitive correlation with the participating systems in 
QATS suggests that structural simplicity, as reflected by the correct splitting of UCCA Scenes, captures a major component in overall simplification quality, underscoring SAMSA's value.", "These promising results also motivate a future combination of SAMSA with classifier-based metrics.", "We presented the first structure-aware metric for text simplification, SAMSA, and the first evaluation experiments that directly target the structural simplification component, separately from the lexical component.", "We argue that the structural and lexical dimensions of simplification are loosely related, and that TS evaluation protocols should assess both.", "We empirically demonstrate that strong measures that assess lexical simplification quality (notably SARI) fail to correlate with human judgments when structural simplification is performed by the evaluated systems.", "Our experiments show that SAMSA correlates well with human judgments in such settings, which demonstrates its usefulness for evaluating and tuning statistical simplification systems, and shows that structural evaluation provides a complementary perspective on simplification quality.", "We would like to thank Zhemin Zhu and Sander Wubben for sharing their data, as well as the annotators for participating in our evaluation and UCCA annotation experiments.", "We also thank Daniel Hershcovich and the anonymous reviewers for their helpful comments.", "This work was partially supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) and by the Israel Science Foundation (grant No. 929/17), as well as by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office." ]
[ "abstain", "objective", "abstain", "abstain", "result", "abstain", "other", "abstain", "other", "abstain", "abstain", "result", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "result", "result", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "other", "other", "other" ]
[ "Recent approaches to cross-lingual word embedding have generally been based on linear transformations between the sets of embedding vectors in the two languages.", "In this paper, we propose an approach that instead expresses the two monolingual embedding spaces as probability densities defined by a Gaussian mixture model, and matches the two densities using a method called normalizing flow.", "The method requires no explicit supervision, and can be learned with only a seed dictionary of words that have identical strings.", "We argue that this formulation has several intuitively attractive properties, particularly with the respect to improving robustness and generalization to mappings between difficult language pairs or word pairs.", "On a benchmark data set of bilingual lexicon induction and cross-lingual word similarity, our approach can achieve competitive or superior performance compared to state-of-the-art published results, with particularly strong results being found on etymologically distant and/or morphologically rich languages.", "1 1 Introduction Cross-lingual word embeddings represent words in different languages in a single vector space, capturing the syntactic and semantic similarity of words across languages in a way conducive to use in computational models (Upadhyay et al., 2016; Ruder et al., 2017).", "These embeddings have been shown to be an effective tool for cross-lingual NLP, e.g. the transfer of models trained on high-resource languages to low-resource ones (Klemen-tiev et al., 2012; Guo et al., 2015; Zoph et al., 2016; Zhang et al., 2018; Gu et al., 2018) or unsupervised learning (Artetxe et al., 2018c).", "There are two major paradigms in the learning of cross-lingual word embeddings: online and offline.", "Online methods learn the crosslingual embeddings directly from parallel corpora (Hermann and Blunsom, 2014), optionally augmented with monolingual corpora (Gouws et al., 2015).", "In contrast, offline approaches learn a bilingual mapping function or multilingual projections from pre-trained monolingual word embeddings or feature vectors (Haghighi et al., 2008; Mikolov et al., 2013; Faruqui and Dyer, 2014).", "In this work, we focus on this latter offline approach.", "The goal of bilingual embedding is to learn a shared embedding space where words possessing similar meanings are projected to nearby points.", "Early work focused on supervised methods maximizes the similarity of the embeddings of words that exist in a manually-created dictionary, according to some similarity metric (Mikolov et al., 2013; Faruqui and Dyer, 2014; Jawanpuria et al., 2018; Joulin et al., 2018).", "In contrast, recently proposed unsupervised methods frame this problem as minimization of some form of distance between the whole set of discrete word vectors in the chosen vocabulary, e.g. 
Wasserstein distance or Jensen-Shannon divergence (Xu et al., 2018; Conneau et al., 2017; Zhang et al., 2017; Grave et al., 2018).", "While these methods have shown impressive results for some language pairs despite the lack of supervision, regarding the embedding space as a set of discrete points has some limitations.", "First, expressing each embedding as a single point in the space doesn't take into account the inherent uncertainty involved in learning embeddings, which can cause embedding spaces to differ significantly between training runs (Wendlandt et al., 2018).", "Second, even in a fixed embedding space, the points surrounding those of words that actually exist in the pre-trained vocabulary are often also coherent points in the embedding space.", "In this work, we propose a method for density matching for bilingual word embedding (DeMa-BWE).", "Instead of treating the embedding space as a collection of discrete points, we express it as a probability density function over the entire continuous space of word vectors.", "We assume each vector in the monolingual embedding space is generated from a Gaussian mixture model with components centered at the pretrained word embeddings (Fig. 1), and our approach then learns a bilingual mapping that most effectively matches the two probability densities of the two monolingual embedding spaces.", "To learn in this paradigm, instead of using the pre-trained word embeddings as fixed training samples, at every training step we obtain samples from the Gaussian mixture space.", "Thus, our method explores the entire embedding space instead of only the specific points assigned to observed words.", "To calculate the density of the transformed samples, we use volume-preserving invertible transformations over the target word embeddings, which make it possible to perform density matching in a principled and efficient way (Rezende and Mohamed, 2015; Papamakarios et al., 2017; He et al., 2018).", "We also have three additional ingredients in the model that proved useful in stabilizing training: (1) a back-translation loss to allow the model to learn the mapping jointly in both directions, (2) an identical-word-matching loss that provides weak supervision by encouraging the model to have words with identical spellings be mapped to a similar place in the space, and (3) frequency-matching based Gaussian mixture weights that account for the approximate frequencies of aligned words.", "Empirical results are strong; our method is able to effectively learn bilingual embeddings that achieve competitive or superior results on the MUSE dataset (Conneau et al., 2017) over state-of-the-art published results on bilingual word translation and cross-lingual word similarity tasks.", "The results are particularly encouraging on etymologically distant or morphologically rich languages, as our model is able to explore the integration over the embedding space by treating the space as a continuous one.", "Moreover, unlike previous unsupervised methods that are usually sensitive to initialization or require sophisticated optimization procedures, our method is robust and requires no special initialization.", "In this section, we will briefly describe normalizing flows, the backbone of DeMa-BWE.", "As mentioned in the introduction and detailed later, our model is based on matching two probability density functions, one representing the source embedding space and one representing the target embedding space.", "To learn in this framework, we will use the concept of normalizing 
flows (Rezende and Mohamed, 2015).", "We will explain them briefly here, but refer readers to Rezende and Mohamed (2015) for details due to space constraints.", "Concretely, let $u$ denote a high-dimensional random vector (e.g. representing a point in the source embedding space) and $z$ be a latent variable that corresponds to $u$ (e.g. a point in the target embedding space).", "Flow-based generative models (Kingma et al., 2016; Kingma and Dhariwal, 2018) learn invertible transformations $f(z)$ from the distribution over $z$ to the distribution over $u$.", "The generative story of the model is defined as: $z \sim p(z), \; u = f(z)$ (1), where $p(z)$ is the prior distribution.", "This prior can be any distribution for which we can tractably compute the density of sample points $z$.", "A common choice of such a distribution is a spherical multivariate Gaussian distribution: $p(z) = \mathcal{N}(z; 0, I)$ (Dinh et al., 2016).", "Assuming the transformation function $f(\cdot)$ is invertible, using the rule for change of variables, the probability density of $u$ can be calculated as: $p(u) = p(z)\,|\det(J(f^{-1}(u)))|$ (2), where $\det(J(f^{-1}(u)))$ is the determinant of the Jacobian matrix of the function inverse.", "This determinant accounts for the way in which $f$ locally expands or contracts regions of $z$, and enforces the invertibility of the function.", "A normalizing flow is a cascaded sequence of such invertible transformations, which is learned by maximizing the density in Equation (2) over observed data points $u$.", "One computational issue with these models lies in calculating the Jacobian matrix, which is expensive in the general case.", "A common method is to choose transformations whose Jacobians are triangular matrices, which renders this computation tractable (Dinh et al., 2016; Papamakarios et al., 2017).", "In this section, we present notation used in our method, describe the prior we define for the monolingual embedding space, then detail our density matching method.", "Given two sets of independently trained monolingual embeddings, the problem of bilingual embedding mapping is to learn a mapping function that aligns the two sets in a shared space.", "Let $x \in \mathbb{R}^d$, $y \in \mathbb{R}^d$ denote vectors in the source and target language embedding space respectively.", "Let $x_i$ and $y_j$ denote an actual word in the pretrained source and target vocabularies respectively.", "Words are sorted by their occurrence counts in the monolingual corpus and the index of the word represents its rank.", "We use $\mathbf{x}_i$ and $\mathbf{y}_j$ to denote the pretrained word embeddings for word $x_i$ in the source language and word $y_j$ in the target language respectively.", "Given a pair of languages $s$ and $t$, our approach learns two mapping functions: $f_{xy}$ that maps the source embedding to the target space and $f_{yx}$ that gives the mapping in the reverse direction.", "To learn the mapping, we project a vector $x$ in the source embedding space into the target space as $y$.", "We learn this mapping by maximizing the density of data points in the source space.", "The density can be computed using the idea of normalizing flow described above.", "Thus, for the monolingual embedding spaces, we need to define tractable density functions $p(x)$ and $p(y)$.", "While any number of functions could be conceived to calculate these densities, in the current method we opt to use a Gaussian Mixture Model (GMM) with Gaussian components centered at each pretrained word embedding.",
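The change-of-variables rule in Eq. (2) is compact enough to check numerically. The sketch below is our illustration, not the authors' released code; the function name `log_density_source` and the toy inputs are assumptions. It instantiates Eq. (2) for the linear flow used later in the paper, where the inverse transformation is multiplication by a matrix W and the log-Jacobian term reduces to log|det W|.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_density_source(x, W, target_log_density):
    """Log density of a source-space point x under the linear flow y = W x.

    Instantiates Eq. (2): log p(x) = log p(y) + log|det W|, where p(y) is
    any tractable density on the target space.
    """
    y = W @ x
    _, logabsdet = np.linalg.slogdet(W)  # log|det W|, the Jacobian term
    return target_log_density(y) + logabsdet

# Toy check with a standard Gaussian prior on the target space.
d = 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d)) + 2 * np.eye(d)   # almost surely invertible
prior = multivariate_normal(mean=np.zeros(d), cov=np.eye(d))
print(log_density_source(rng.normal(size=d), W, prior.logpdf))
```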
embeddings, where we define the neighborhood to be characterized as closeness in Euclidean space, and the uncertainty of each neighborhood as being Gaussian.", "Concretely, let N x and N y denote the number of pretrained word embeddings that serve as Gaussian component centers during training for the source and target languages, respectively.", "Then we can express the density of any point in the source embedding space as: p ( x ) = (cid:88) i { 1 ,...,N x } ( x i ) p ( x | x i ) (3) where ( x i ) is the frequency of word x i normalized within the N x component words, and p ( x | x i ) is a Gaussian distribution centered at the embedding of word x i .", "We simply use a fixed variance 2 x for all Gaussian components: p ( x | x i ) = N ( x | x i , 2 x I ) (4) Similarly, the density of any point in the target embedding space can be written as: p ( y ) = (cid:88) j { 1 ,...,N y } ( y j ) p ( y | y j ) (5) where p ( y | y j ) = N ( y | y j , 2 y I ) .", "With the Gaussian mixture model as the prior distribution in the monolingual space, our goal is to learn a mapping function from one embedding space to the other such that the log probabilistic density is maximized in the source space.", "While we are jointly learning the two mapping functions f xy and f yx simultaneously, for conciseness we will illustrate our approach using the source to target mapping f xy .", "First, a continuous vector x is sampled from the Gaussian mixture model (Eq.", "(3)) by sampling x i ( x i ) then x p ( x | x i ) (4).", "Next, we apply the mapping function f xy to obtain the transformed vector y in the target space.", "Concretely, the mapping functions we employ in this work are two linear transformations: f xy ( ) = W xy and f yx = W yx .", "Connecting to the transformation function in Sec. 
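A minimal sketch of the mixture prior in Eqs. (3)-(5): a spherical Gaussian around every pretrained embedding, weighted by normalized word frequency, evaluated stably with log-sum-exp. This is our own illustration under the stated assumptions (function names and toy sampling loop are hypothetical, not the paper's code).

```python
import numpy as np
from scipy.special import logsumexp

def gmm_log_density(x, centers, log_weights, sigma2):
    """log p(x) for a GMM with spherical components N(center, sigma2 * I).

    centers: (N, d) pretrained embeddings serving as component means.
    log_weights: (N,) log of normalized word frequencies pi(x_i).
    """
    n, d = centers.shape
    sq_dist = ((x[None, :] - centers) ** 2).sum(axis=1)               # (N,)
    log_comp = -0.5 * sq_dist / sigma2 - 0.5 * d * np.log(2 * np.pi * sigma2)
    return logsumexp(log_weights + log_comp)

def sample_from_mixture(centers, weights, sigma2, rng):
    """Training-time sampling: pick a component by frequency, then
    perturb its center with Gaussian noise (x_i ~ pi, x ~ p(x | x_i))."""
    i = rng.choice(len(centers), p=weights)
    return centers[i] + rng.normal(scale=np.sqrt(sigma2), size=centers.shape[1])
```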
"Connecting to the transformation function in Sec. 2, we see that $x = f(y) = W_{xy}^{-1} y$, $y = f^{-1}(x) = W_{xy} x$, and $J(f^{-1}(x)) = W_{xy}$.", "We can then express the log density of a sample $x$ as: $\log p(x; W_{xy}) = \log p(y) + \log|\det(W_{xy})|$ (6), where the Jacobian regularization term accounts for the volume expansion or contraction resulting from the projection matrix $W_{xy}$.", "We maximize the likelihood function, which is equivalent to minimizing the expectation of the KL-divergence between the prior distribution and the model distribution of $x$.", "This provides a natural objective for optimization: minimize $\mathrm{KL}(p(x) \,\|\, p(x; W_{xy}))$ (7).", "By replacing $W_{xy} x$ with $y$, this is equivalent to maximizing the log density of transformed source samples in the target space (see Eq. (6)).", "The objective $\mathcal{L}_{xy}$ contains two parts: the log density function $\log p(y)$ and a regularization term $\log|\det(W_{xy})|$.", "Likewise, for the target-to-source mapping $W_{yx}$, we have the density matching objective $\mathcal{L}_{yx}$.", "Conditional Density Matching: The above marginal density matching method does not take into account the dependency between the embeddings in the two monolingual spaces.", "To address this issue, we extend the density matching method to the conditional density function: $\log p(x \mid x_i; W_{xy}) = \log p(y \mid x_i) + \log|\det(W_{xy})|$ (8). The conditional density $p(y \mid x_i)$ is the prior distribution in this simple normalizing flow, and for this we use a Gaussian mixture model in the target monolingual space: $p(y \mid x_i) = \sum_{j \in \{1,\dots,N_y\}} p(y, y_j \mid x_i) = \sum_{j \in \{1,\dots,N_y\}} p(y \mid y_j)\, \pi(y_j \mid x_i)$ (9), where $p(y \mid y_j)$ is the Gaussian density function in the mixture model defined in Equation (5).", "$\pi(y_j \mid x_i)$ allows us to incorporate a priori assumptions about whether two words are likely to match.", "In fact, the density matching method in Eq.", "(6) can be regarded as a special case of the conditional density matching method by adopting a naive prior $\pi(y_j \mid x_i) := \pi(y_j)$.", "However, previous work (Zhang et al., 2017) has noted that word frequency information is a strong signal: words with similar frequency are likely to be matched, and thus we use $\pi(y_j \mid x_i)$ to incorporate this prior knowledge.", "In this work, we assume that the frequencies of aligned bilingual words in the individual monolingual corpora should be correlated, and to match words that are ranked similarly, we model the Gaussian mixture weights as the negative absolute difference between log-scale word ranks and normalize over all the target Gaussian component words by a softmax function with temperature $\tau$: $\pi(y_j \mid x_i) = \exp(-|\log(j) - \log(i)|/\tau) \,/\, \sum_{k=1}^{N_y} \exp(-|\log(k) - \log(i)|/\tau)$. Thus, if a word $x_i$ has a similar frequency rank to word $y_j$, a sample $x$ from the Gaussian distribution centered at $\mathbf{x}_i$ will be assigned a higher weight for the component $p(y \mid y_j) = p(W_{xy} x \mid y_j)$.",
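The frequency-matching weight is a plain softmax over negative absolute log-rank differences. A small sketch of it (ours; the function name is hypothetical, and the temperature default of 2 mirrors the value reported later in the experimental details):

```python
import numpy as np

def freq_match_weights(i, n_target, tau=2.0):
    """pi(y_j | x_i): softmax over -|log j - log i| / tau for j = 1..N_y.

    Ranks are 1-based (words are sorted by corpus frequency); tau is the
    softmax temperature.
    """
    j = np.arange(1, n_target + 1)
    scores = -np.abs(np.log(j) - np.log(i)) / tau
    scores -= scores.max()                 # numerical stability
    w = np.exp(scores)
    return w / w.sum()

w = freq_match_weights(i=100, n_target=20000)
print(w.argmax() + 1)  # -> 100: mass concentrates on similarly ranked words
```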
"Although this assumption will not always hold (e.g. for languages that have different levels of morphological complexity), intuitively we expect that using this signal will help more overall than it will hurt, and empirically we find that this weighting is not sensitive to language variation and works well in practice.", "$\mathcal{L}_{xy} = \mathbb{E}_{x_i \sim \pi(x_i)}\,[\mathrm{KL}(p(x \mid x_i) \,\|\, p(x \mid x_i; W_{xy}))] = -\,\mathbb{E}_{x_i \sim \pi(x_i),\, x \sim p(x \mid x_i)}\,[\log p(y \mid x_i) + \log|\det(W_{xy})|]$ (10), up to an additive constant that does not depend on $W_{xy}$.", "In the conditional density above, both the frequency-matching weight and the Gaussian density function play an important role in matching the density of a source-space sample with the target embedding space.", "The former matches bilingual words with their frequency ranks while the latter matches words with their vector distances.", "A common choice of bilingual mapping function is a linear transformation with an orthogonality constraint.", "Various motivations have been proposed for the orthogonality constraint, such as length normalization of embeddings (Xing et al., 2015) and reversible mapping (Smith et al., 2017).", "In this work, we add a weak orthogonality constraint to the bilingual mappings via a back-translation loss as follows: $\mathcal{L}_{bt} = \mathbb{E}_{x_i \sim \pi(x_i),\, x \sim p(x \mid x_i)}\,[g(W_{yx} W_{xy} x, x)] + \mathbb{E}_{y_j \sim \pi(y_j),\, y \sim p(y \mid y_j)}\,[g(W_{xy} W_{yx} y, y)]$, where $g(\cdot, \cdot) = 1 - \cos(\cdot, \cdot)$ is the cosine loss.", "Jointly learning the two mapping matrices by minimizing this cyclic loss encourages $W_{xy}$ and $W_{yx}$ to be orthogonal to each other.", "To reduce the search space of the mapped bilingual embeddings, we add an additional weakly supervised loss over words that have identical strings in both the source and target languages, denoted $\mathcal{W}_{id}$.", "$\mathcal{L}_{sup} = \sum_{v \in \mathcal{W}_{id}} g(\mathbf{v}_x W_{xy}^T, \mathbf{v}_y) + g(\mathbf{v}_y W_{yx}^T, \mathbf{v}_x)$, where $\mathbf{v}_x$ and $\mathbf{v}_y$ are the pretrained word embeddings of word $v$ on the source and target side respectively, and $g(\cdot, \cdot)$ is the cosine loss described above.", "Although using identical strings for supervision is very noisy, especially for languages with little overlap in vocabularies, we find that they provide enough guidance during training to prevent the model from being trapped in poor local optima.", "Putting everything together, the overall objective function of DeMa-BWE includes three parts: the density matching loss, the weak orthogonality loss and the weak supervision loss: $\mathcal{L} = \mathcal{L}_{xy} + \mathcal{L}_{yx} + \lambda \mathcal{L}_{bt} + \beta \mathcal{L}_{sup}$ (11), where $\lambda$ and $\beta$ are coefficients that trade off between the different losses.", "One standard use case for bilingual embeddings is in bilingual lexicon induction, where the embeddings are used to select the most likely translation in the other language given these embeddings.", "In this case, it is necessary to have a retrieval metric that selects the word or words likely to be translations given these embeddings.", "When performing this retrieval, it has been noted that high-dimensional embedding spaces tend to suffer from the hubness problem (Radovanovic et al., 2010), where some vectors (known as hubs) are nearest neighbors of many other points, which is detrimental to reliably retrieving translations in the bilingual space.", "To mitigate the hubness problem, we adopt the Cross-Domain Similarity Local Scaling (CSLS) metric proposed in (Conneau et al., 2017) that penalizes the similarity score of the hubs.",
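The auxiliary losses are simple cosine penalties; a hedged sketch of how they could combine into Eq. (11) is below. This is our reading, not the released implementation: the function names are hypothetical, the density losses are taken as precomputed inputs, and the default weights 0.5 and 10 mirror the values reported in the experimental details.

```python
import torch
import torch.nn.functional as F

def cosine_loss(a, b):
    """g(a, b) = 1 - cos(a, b), averaged over a batch of row vectors."""
    return (1.0 - F.cosine_similarity(a, b, dim=-1)).mean()

def total_loss(x, y, v_x, v_y, W_xy, W_yx, nll_xy, nll_yx, lam=0.5, beta=10.0):
    """Sketch of Eq. (11): density matching + weak orthogonality + weak supervision.

    x, y: batches sampled from the source/target Gaussian mixtures.
    v_x, v_y: embeddings of identically spelled words (the seed set W_id).
    nll_xy, nll_yx: precomputed density-matching losses L_xy and L_yx.
    """
    # Back-translation (cyclic) loss: W_yx W_xy x should return to x, and vice versa.
    l_bt = cosine_loss(x @ W_xy.T @ W_yx.T, x) + cosine_loss(y @ W_yx.T @ W_xy.T, y)
    # Weak supervision from identical strings: v_x W_xy^T should be near v_y.
    l_sup = cosine_loss(v_x @ W_xy.T, v_y) + cosine_loss(v_y @ W_yx.T, v_x)
    return nll_xy + nll_yx + lam * l_bt + beta * l_sup
```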
"Specifically, given two mapped embeddings $W_{xy} x$, denoted $x'$, and $y$, CSLS first computes the average cosine similarity of $x'$ and $y$ for their $k$ nearest neighbors in the other language, denoted $r_T(x')$ and $r_S(y)$ respectively; then the corrected similarity measure $\mathrm{CSLS}(\cdot, \cdot)$ is defined as: $\mathrm{CSLS}(x', y) = 2\cos(x', y) - r_T(x') - r_S(y)$, where $\cos(\cdot, \cdot)$ is the cosine similarity.", "Following (Conneau et al., 2017), $k$ is set to be 10.", "CSLS consistently outperforms cosine similarity on nearest neighbor retrieval; however, it does not consider the relative frequency of bilingual words, which we hypothesize can be useful in disambiguation.", "As we discussed in Sec. 3.2, our density matching objective considers both the relative frequencies and vector similarities.", "The conditional density $p(y \mid x_i)$ (Eq.", "(9)) in our density matching objective (Eq.", "(8)) is a marginalized distribution over all target component words $y_j$, where the density of each component $p(y, y_j \mid x_i)$ can be directly used as a similarity score for a pair of words $(y_j, x_i)$ to replace the cosine similarity $\cos(x', y)$ in CSLS.", "Let CSLS-D denote this modified CSLS metric, which we compare to standard CSLS in experiments.", "We find that using CSLS-D for nearest neighbor retrieval outperforms the CSLS metric in most cases on the bilingual dictionary induction task.", "Iterative refinement, which learns the new mapping matrix by constructing a bilingual lexicon iteratively, has been shown to be an effective method for improving the performance of unsupervised lexicon induction models (Conneau et al., 2017).", "Given a learned bilingual embedding mapping $W$, the refinement starts by inferring an initial bilingual dictionary using the retrieval method above on the most frequent words.", "Let $X$ and $Y$ denote the ordered embedding matrices for the inferred dictionary words for the source and target languages respectively.", "Then a new mapping matrix $W^*$ is induced by solving the Procrustes problem: $W^* = \operatorname{argmin}_{W \in O_d(\mathbb{R})} \|WX - Y\|_F = UV^T$, s.t. $U \Sigma V^T = \mathrm{SVD}(YX^T)$. The step above can be applied iteratively by inducing a new dictionary with the new mapping $W^*$.", "DeMa-BWE is able to achieve very competitive performance without further refinement, but for comparison we also report results with the refinement procedure, which brings small improvements in accuracy for most language pairs.", "Note that for bilingual dictionary induction during refinement, we use CSLS as the retrieval metric across all experiments for fair comparison to the refinement step in previous work.",
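Both CSLS retrieval and the Procrustes update have short closed-form implementations; a sketch under the stated definitions (ours, not the authors' code; it assumes rows are already length-normalized so dot products equal cosine similarities):

```python
import numpy as np

def csls_scores(Xm, Y, k=10):
    """CSLS(x', y) = 2 cos(x', y) - r_T(x') - r_S(y) for all pairs.

    Xm: mapped source embeddings W_xy X (rows length-normalized);
    Y: target embeddings (rows length-normalized); k = 10 as in the paper.
    """
    cos = Xm @ Y.T                                    # all cosine similarities
    r_t = np.sort(cos, axis=1)[:, -k:].mean(axis=1)   # avg sim to k-NN in target
    r_s = np.sort(cos, axis=0)[-k:, :].mean(axis=0)   # avg sim to k-NN in source
    return 2 * cos - r_t[:, None] - r_s[None, :]

def procrustes_step(X, Y):
    """Closed-form solution of min_{W orthogonal} ||W X - Y||_F, namely
    W* = U V^T where U S V^T = SVD(Y X^T); X, Y are d x n dictionary matrices."""
    u, _, vt = np.linalg.svd(Y @ X.T)
    return u @ vt
```

In an iterative refinement loop, `csls_scores` would induce a dictionary over frequent words, `procrustes_step` would re-fit the mapping on it, and the two steps would alternate.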
"We evaluate our approach extensively on the bilingual lexicon induction (BLI) task, which measures the word translation accuracy in comparison to a gold standard.", "We report results on the widely used MUSE dataset (Conneau et al., 2017), which consists of FastText monolingual embeddings pretrained on Wikipedia (Bojanowski et al., 2017), and dictionaries for many language pairs divided into train and test sets.", "We follow the evaluation setups of (Conneau et al., 2017).", "We evaluate DeMa-BWE by inducing lexicons between English and different languages including related languages, e.g. Spanish; etymologically distant languages, e.g. Japanese; and morphologically rich languages, e.g. Finnish.", "Embedding Normalization: Following (Artetxe et al., 2018b), we pre-process the monolingual embeddings by first applying length normalization, then mean-centering each dimension, and then length-normalizing again to ensure that the final embeddings have unit length.", "We observe that this normalization method helps stabilize training and accelerate convergence.", "Other Experimental Details: We held out 1000 translation pairs randomly sampled from the training set in the MUSE dataset as our validation data.", "We also tried the unsupervised validation criterion proposed in (Conneau et al., 2017) as the model selection method, which computes the average cosine similarity over the model-induced dictionary pairs, and found that this unsupervised criterion can select models that achieve similar performance as the supervised validation criterion.", "All hyperparameters are tuned on the validation set and include the following: For the number of base words used as Gaussian components in the GMM, we typically choose the most frequent 20,000 words for all language pairs but en-ja, for which we use 10,000, which achieves better performance.", "We use a batch size of 2048 for all languages but en-ja, for which we use 1024.", "We use Adam (Kingma and Ba, 2014) for optimization with default hyperparameters.", "We empirically set the Gaussian variance to be 0.01 for both the source and target languages in en-es, en-de, en-fr, en-ru; in the experiments for morphologically rich languages (Sec. 5.4), we set the variance to be 0.015 for all these languages except for et, whose variance is set to be 0.02, while keeping the variance of English at 0.01.", "In the experiments for etymologically distant language pairs en-ja and en-zh, we set different variances for the source and target languages in different mapping directions.", "For details of the variance setting please check the scripts in our code base.", "We empirically find that for a language pair, the variance of the language with relatively more complex morphological properties needs to be set larger than that of the other language, indicating that the model needs to explore more in the embedding space for the morphologically richer language.", "We initialize the mapping matrices $W_{xy}$ and $W_{yx}$ with a random orthogonal matrix.", "For the weak orthogonality constraint loss $\mathcal{L}_{bt}$, we set the weight to be 0.5 throughout all language pairs.", "For the weak supervision loss $\mathcal{L}_{sup}$, we set the weight to be 10 for all languages except for en-zh, where we find 5 performs better.", "We set the temperature $\tau$ used in the softmax function for Gaussian mixture weights to be 2 across all languages.", "In Tab.", "1, we compare the performance of DeMa-BWE extensively with the best-performing unsupervised and supervised methods on the commonly benchmarked language pairs.", "Our unsupervised baselines are: (1) MUSE (U+R) (Conneau et al., 2017), a GAN-based unsupervised method with refinement.", "(2) A strong and robust unsupervised self-learning method SL-unsup from (Artetxe et al., 2018b).", "We also run their published code with identical words as the initial dictionary for fair comparison with our approach, denoted SL-unsup-ID.", "(3) Sinkhorn (Xu et al., 2018), which minimizes the Sinkhorn distance between the source and target word vectors.", "(4) An iterative matching method from (Hoshen and Wolf, 2018): Non-Adv and Non-Adv (R) with refinement.", "Table 1: Precision@1 for the MUSE BLI task compared with previous work (columns: en-es, es-en, en-de, de-en, en-fr, fr-en, en-ru, ru-en, en-zh, zh-en, en-ja, ja-en; '-' marks results not reported). Supervised: Procrustes (R): 81.4, 82.9, 73.5, 72.4, 81.1, 82.4, 51.7, 63.7, 42.7, 36.7, 14.2, 7.44; MSF-ISF: 79.9, 82.1, 73.0, 72.0, 80.4, 81.4, 50.0, 65.3, 28.0, 40.7, -, -; MSF: 80.5, 83.8, 73.5, 73.5, 80.5, 83.1, 50.5, 67.3, 32.3, 43.4, -, -; CSLS-Sp: 80.7, 83.9, 75.1, 72.1, 81.7, 83.2, 51.1, 63.8, -, -, -, -; GeoMM: 81.4, 85.5, 74.7, 76.7, 82.1, 84.1, 51.3, 67.6, 49.1, 45.3, -, -. Unsupervised: MUSE (U+R): 81.7, 83.3, 74.0, 72.2, 82.3, 81.1, 44.0, 59.1, 32.5, 31.4, 0.0, 4.2; SL-unsup: 82.3, 84.7, 75.1, 74.3, 82.3, 83.6, 49.2, 65.6, 0.0, 0.0, 2.9, 0.2; SL-unsup-ID: 82.3, 84.6, 75.1, 74.1, 82.2, 83.7, 48.8, 65.7, 37.4, 34.2, 48.5, 33.7; Sinkhorn: 79.5, 77.8, 69.3, 67.0, 77.9, 75.5, -, -, -, -, -, -; Non-Adv: 81.1, 82.1, 73.7, 72.7, 81.5, 81.3, 44.4, 55.6, 0.0, 0.0, 0.0, 0.0; Non-Adv (R): 82.1, 84.1, 74.7, 73.0, 82.3, 82.9, 47.5, 61.8, 0.0, 0.0, 0.0, 0.0; WS-Procrustes (R): 82.8, 84.1, 75.4, 73.3, 82.6, 82.9, 43.7, 59.1, -, -, -, -. DeMa-BWE: CSLS (w/o R): 82.0, 85.4, 75.3, 74.9, 82.6, 82.4, 46.9, 62.4, 39.6, 40.0, 46.7, 32.9; CSLS-D (w/o R): 82.3, 85.1, 76.3, 75.1, 83.7, 82.5, 48.0, 61.7, 40.5, 37.7, 45.3, 32.4; CSLS (w/ R): 82.8, 84.5, 75.6, 74.1, 82.5, 83.3, 47.3, 63.5, 41.9, 37.7, 50.7, 35.2; CSLS-D (w/ R): 82.8, 84.9, 77.2, 74.4, 83.1, 83.5, 49.2, 63.6, 42.5, 37.9, 52.0, 35.6.", "(5) WS-Procrustes (R) using refinement by (Grave et al., 2018).", "Our supervised methods include: (1) The iterative Procrustes method Procrustes (R) (Smith et al., 2017).", "(2) A multi-step framework MSF-ISF (Artetxe et al., 2018a) and its variant MSF which uses CSLS for retrieval, whose results are from (Jawanpuria et al., 2018).", "(3) CSLS-Sp by (Joulin et al., 2018) that optimizes the CSLS score, and (4) a geometric approach GeoMM by (Jawanpuria et al., 2018).", "For fair comparisons, all supervised results are trained with the training dictionaries in the MUSE dataset.", "All baseline methods employ CSLS for retrieval except for the Sinkhorn method.", "For DeMa-BWE, we present results with and without refinement, and with CSLS and CSLS-D as retrieval methods.", "From Tab.", "1, we can see the overall performance of DeMa-BWE is remarkable compared with other unsupervised methods and is also competitive with strong supervised methods.", "The results without the iterative refinement, CSLS (w/o R), are strong on almost all language pairs, with particularly strong performance being observed on es-en, en-de and en-fr, on which DeMa-BWE outperforms or is on par with the best performing methods.", "Applying refinement to DeMa-BWE brings slightly better performance on most language pairs but degrades the performance on some language pairs such as es-en and zh-en, for which DeMa-BWE already obtains very good results.", "Also, DeMa-BWE demonstrates notably better performance on distant language pairs (en-ru, en-ja and en-zh) over other unsupervised methods, which often achieve good performance on etymologically close languages but fail to converge on the distant language pairs.", "However, when the dictionary is initialized with identical strings for SL-unsup, we obtain decent results on these languages.", "The strong performance of supervised methods on Russian and Chinese demonstrates that on some language pairs supervised seed lexicons are still necessary.", "Finally, when our density-based metric CSLS-D is employed for retrieval, it can achieve further gains in accuracy for most language pairs compared to its counterpart.",
"Søgaard et al. (2018) found that the commonly", "They select several languages with different morphological traits and complexity and then study the impacts of language similarities on bilingual lexicon induction.", "They show that a simple trick, harvesting the identical word strings in both languages as an initial dictionary and running the iterative Procrustes analysis described in Sec. 4.2, enables more robust and competitive bilingual induction results over the GAN-based unsupervised method with refinement.", "We denote this method id+Procrustes (R).", "Tab.", "2 shows results on the morphologically complex languages used by Søgaard et al. (2018).", "For each language pair we run experiments in both directions.", "The baseline methods in Søgaard et al. (2018) include id+Procrustes (R) and MUSE (U+R).", "We run id+Procrustes (R) ourselves and obtain different results from them: except for en-et and en-el, we obtain significantly better results on other language pairs.", "In addition, we add another strong supervised baseline (5k+Procrustes (R)) with the training dictionary in the MUSE dataset and iterative Procrustes refinement.", "From Tab.", "2, we observe that even without refinement, DeMa-BWE (CSLS (w/o R)) outperforms both the unsupervised and supervised baselines on even these difficult morphologically rich languages.", "We evaluate DeMa-BWE on the cross-lingual word similarity task from SemEval 2017 (Camacho-Collados et al., 2016) and compare with some strong baselines in Xu et al. (2018).", "In Tab.", "5, following the convention in benchmark evaluation for this task, we report the Pearson correlation scores (×100).",
"DeMa-BWE achieves the best performance among all the listed unsupervised methods.", "Table 3: Pearson rank correlation (×100) on the cross-lingual word similarity task (columns: de-en, es-en, fa-en, it-en). Supervised: Xing et al. (2015): 72, 71, 69, 72; Shigeto et al. (2015): 72, 72, 69, 72; Artetxe et al. (2016): 73, 72, 70, 73; Artetxe et al. (2017): 70, 70, 67, 71. Unsupervised: Conneau et al. (2017): 71, 71, 68, 71; Xu et al. (2018): 71, 71, 67, 71; DeMa-BWE (w/o R): 72.2, 72.2, 68.6, 72.2.", "Compared with the supervised methods, DeMa-BWE is also very competitive.", "Finally, we perform ablation studies in Tab.", "4 to examine the effects of different components of DeMa-BWE.", "In comparison to the full model, we remove the density matching losses $\mathcal{L}_{xy}$ and $\mathcal{L}_{yx}$, the weakly supervised loss $\mathcal{L}_{sup}$, and the back-translation loss $\mathcal{L}_{bt}$, respectively.", "First, we observe that without the identical strings as the supervised loss, DeMa-BWE fails to converge, as the density matching is difficult given a high-dimensional embedding space to search.", "Second, when we remove the proposed density matching loss, the model is able to produce reasonable accuracy for fr-en and ja-en, but has undesirable results on en-fr and en-ja, which verifies the necessity of the unsupervised density matching.", "Third, the back-translation loss is not a crucial component in DeMa-BWE; removing it only degrades the model's performance by a small margin.", "This indicates that orthogonality is not a must-have constraint given the model has enough capacity to learn a good transformation.", "In addition, we also compare the frequency-matching based Gaussian mixture weights in (9) with the naive target frequency based weights.", "As shown in the fourth row of Tab.", "4, the performance of DeMa-BWE with the naive weights is nominally worse than the model using the frequency-matching based mixture weights.", "In this work, we propose a density matching based unsupervised method for learning bilingual word embedding mappings.", "DeMa-BWE performs well in the task of bilingual lexicon induction.", "In future work, we will integrate Gaussian embeddings (Vilnis and McCallum, 2015) with our approach.", "This work is sponsored by Defense Advanced Research Projects Agency Information Innovation Office (I2O), Program: Low Resource Languages for Emergent Incidents (LORELEI), issued by DARPA/I2O under Contract No.", "HR0011-15-C-0114.", "The authors would also like to thank Ruochen Xu, Barun Patra, and Joel Ruben Antony Moni for their helpful discussions while drafting this paper." ]
[ "abstain", "objective", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "result", "objective", "method", "method", "result", "objective", "abstain", "abstain", "method", "method", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "Document-level information extraction (IE) tasks have recently begun to be revisited in earnest using the end-to-end neural network techniques that have been successful on their sentence-level IE counterparts.", "Evaluation of the approaches, however, has been limited in a number of dimensions.", "In particular, the precision/recall/F1 scores typically reported provide few insights on the range of errors the models make.", "We build on the work of Kummerfeld and Klein (2013) to propose a transformation-based framework for automating error analysis in document-level event and (N-ary) relation extraction.", "We employ our framework to compare two state-of-the-art document-level template-filling approaches on datasets from three domains; and then, to gauge progress in IE since its inception 30 years ago, vs. four systems from the MUC-4 (1992) evaluation.", "1 1 Introduction Although information extraction (IE) research has almost uniformly focused on sentence-level relation and event extraction (Grishman, 2019), the earliest research in the area formulated the task at the document level .", "Consider, for example, the first large-scale (for the time) evaluations of IE systems e.g. MUC-3 (1991) and MUC-4 (1992).", "Each involved a complex document-level event extraction task: there were 24 types of events, over a dozen event arguments (or roles ) to be identified for each event; documents could contain zero to tens of events, and extracting argument entities (or role fillers ) required noun phrase coreference resolution to ensure interpretability for the end-user (e.g. to ensure that multiple distinct mentions of the * These authors contributed equally to this work. 1 Our code for the error analysis tool and its output on different model predictions are available at https://github. com/IceJinx33/auto-err-template-fill/. same entity in the output were not misinterpreted as references to distinct entities).", "The task was challenging: information relevant for a single event could be scattered across the document or repeated in multiple places; relevant information might need to be shared across multiple events; information regarding different events could be intermingled.", "In Figure 1, for example, the DISEASE \"Newcastle\" is mentioned well before its associated event is mentioned (via the triggering phrase \"the disease has killed\"); that same mention of \"Newcastle\" must again be recognized as the DISEASE in a second event; and the COUNTRY of the first event (\"Honduras\") appears only in the sentence describing the second event.", "In fact, the problem of document-level information extraction has only recently begun to be revisited (Quirk and Poon, 2017; Jain et al., 2020; Du et al., 2021b,a; Li et al., 2021; Du, 2021; Yang et al., 2021) in part in an attempt to test the power of end-to-end neural network techniques that have been so successful on their sentence-level counterparts.", "2 Evaluation, however, has been limited in a number of ways.", "First, despite the relative complexity of the task, approaches are only evaluated with respect to their overall performance scores (e.g. 
precision, recall, and F1).", "Even though scores at the role level are sometimes included, no systematic analysis or characterization of the types of errors that occur is typically done.", "The latter is needed to determine strategies to improve performance, to obtain more informative cross-system and cross-genre comparisons, and to identify and track broader advances in the field as the underlying approaches evolve.", "To date, for example, there has been no attempt to directly compare the error landscape and distribution of newly developed neural IE methods with that of the largely hand-crafted systems of the 1990s.", "2 See, for example, Zhang et al. (2019), Du and Cardie (2020) and Lin et al. (2020) for within-sentence event extraction; Akbik et al. (2018) and Akbik et al. (2019) for named entity recognition (NER); and Zhang et al. (2018) and Luan et al. (2019) for sentence-level relation extraction.", "In this work, we first introduce a framework for automating error analysis for document-level event and relation extraction, casting both as instances of a general role-filling, or template-filling, task (Jurafsky and Martin, 2021).", "Our approach converts predicted system outputs into their gold standard counterparts through a series of template-level transformations (Figure 2) and then maps combinations of transformations into a collection of IE-based error types.", "Examples of errors include duplicates, missing and spurious role fillers, missing and spurious templates, and incorrect role and template assignments for fillers.", "(See Figure 3 for the full set.)", "Next, we employ the error analysis framework in a comparison of two state-of-the-art document-level neural template-filling approaches, DyGIE++ (Wadden et al., 2019) and GTT (Du et al., 2021b), across three template-filling datasets (SciREX, ProMED (Patwardhan and Riloff, 2009), and MUC-4).", "Finally, in an attempt to gauge progress in the information extraction field over the past 30 years, we employ the framework to compare the performance of four of the original MUC-4 systems with the two newer deep-learning approaches to document-level IE.", "We find that (1) the best of the early IE models, which strikes a better balance between precision and recall, outperforms modern models that exhibit much higher precision and much lower recall; (2) the modern neural models make more mistakes on scientific vs. news-oriented texts, and missing role fillers are universally the largest source of errors; and (3) modern models have clear advantages over the early IE systems in terms of accurate span extraction, while the early systems make fewer mistakes assigning role fillers to their roles.", "3 http://www.promedmail.org 4 The 1992 model outputs are available in the MUC-4 dataset released by NIST, available at https://www-nlpir.nist.gov/related_projects/muc/muc_data/muc_data_index.html.", "Aside from the original MUC-4 evaluation scoring reports (Chinchor, 1991), which included counts of missing and spurious role filler errors, there have been very few attempts at understanding the types of errors made by IE systems and grounding those errors linguistically.", "Valls-Vargas et al. 
(2017) proposed a framework for studying how different errors propagate through an IE system; however, the framework can only be used for pipelined systems, not end-to-end ones.", "On the other hand, automated error analysis with linguistically motivated error types has been used in other sub-fields of NLP such as machine translation (Vilar et al., 2006; Zhou et al., 2008; Farrús et al., 2010; Kholy and Habash, 2011; Zeman et al., 2011; Popovic and Ney, 2011), coreference resolution (Uryupina, 2008; Kummerfeld and Klein, 2013; Martschat and Strube, 2014; Martschat et al., 2015) and parsing (Kummerfeld et al., 2012).", "Recently, generalized automated error analysis frameworks involving human-in-the-loop testing like Errudite (Wu et al., 2019), CHECKLIST (Ribeiro et al., 2020), CrossCheck (Arendt et al., 2021), and AllenNLP Interpret (Wallace et al., 2019) have successfully been applied to tasks like machine comprehension and relation extraction (Alt et al., 2020).", "Closest to our work are Kummerfeld et al. (2012) and Kummerfeld and Klein (2013), which use model-agnostic transformation-based mapping approaches to automatically obtain error information in the predicted structured output.", "As in Jurafsky and Martin (2021), we will refer to document-level information extraction tasks as template-filling tasks and use the term going forward to refer to both event extraction and document-level relation extraction tasks.", "Given a document $D$ and an IE template specification consisting of a predetermined list of roles $R_1, R_2, \dots, R_i$ associated with each type of relevant event for the task of interest, the goal for template filling is to extract from $D$ one output template $T$ for every relevant event/relation $e_1, e_2, \dots, e_n$ present in the document.", "Notably, in the general case, $n \geq 0$ and is not specified as part of the input.", "In each output template, its roles are filled with the corresponding role filler(s), which can be inferred or extracted from the document depending on the predetermined role types.", "We consider two role types here: Set-fill roles , which must be filled with exactly one role filler from a finite set supplied in the template specification.", "An example of a set-fill role in Figure 1 is STATUS , which can be confirmed , possible , or suspected .", "String-fill roles , whose role filler(s) are spans extracted from the document, or left empty if no corresponding role filler is found in the document.", "VICTIMS , DISEASE and COUNTRY are string-fill roles in Figure 1. Some string-fill roles allow multiple fillers; for example, there might be more than one VICTIMS .", "Importantly, for document-level template filling, exactly one string should be included for each role filler entity (typically a canonical mention of the entity), i.e. coreferent mentions of the same entity are not permitted.", "Evaluation.", "We use the standard (exact-match) F1 score (Chinchor, 1991) to evaluate the output produced by a template-filling system: $F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$.", "There are potentially more role types depending on the dataset (e.g. normalized dates, times, locations); we will not consider those here.", "4 Methodology: Automatic Transformations for Error Analysis.",
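The exact-match scoring is a direct translation of the counts described in the next section. A minimal sketch (ours; the function name and argument names are hypothetical):

```python
def exact_match_prf1(num_correct, num_predicted, num_gold):
    """Exact-match precision/recall/F1 over role fillers.

    num_predicted: predicted role fillers (precision denominator);
    num_gold: coreferent gold role-filler sets (recall denominator);
    num_correct: matched pairs with an exact string match.
    """
    p = num_correct / num_predicted if num_predicted else 0.0
    r = num_correct / num_gold if num_gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```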
"Similar to the work of Kummerfeld and Klein (2013), our error analysis approach is system-agnostic, i.e. it only uses system output and does not consider intermediate system decisions.", "This allows for error analysis and comparison across different kinds of systems: end-to-end or pipeline; neural or pattern-based.", "Given inputs consisting of the system-predicted templates and gold standard templates (i.e. desired output) for every document in the target dataset, our error analysis tool operates in three steps.", "For each document, 1. Perform an optimized mapping of the associated predicted templates and gold templates.", "2. Apply a pre-defined set of transformations to convert each system-predicted template into the desired gold template, keeping track of the transformations applied.", "3. Map combinations of the applied transformations onto a set of IE error types.", "The first stage of the error analysis tool involves matching each system-predicted template to the best-matching gold template for each document in the dataset.", "In particular, the overall F1 score for a given document can vary based on how a predicted template is individually matched with a gold template (or left unmatched).", "Specifically, for each document, we recursively generate all possible template matchings, where each predicted template is matched (if possible) to a gold template.", "In particular, for a document with $P$ predicted templates and $G$ gold templates, the total number of possible template matchings is: $1 + \binom{P}{1}G + \binom{P}{2}G(G-1) + \dots + \frac{G!}{(G-P)!}$ if $G - P \geq 0$, and $1 + \binom{P}{1}G + \binom{P}{2}G(G-1) + \dots + \binom{P}{G}G!$ if $G - P < 0$; in both cases this equals $\sum_{i=0}^{\min(P,G)} \binom{P}{i}\, \frac{G!}{(G-i)!}$.", "Note that template matching can result in unmatched predicted templates ( Spurious Templates ), as well as unmatched gold templates ( Missing Templates ).", "Next, for each predicted-gold pair in a template matching, we iterate through all its roles and recursively generate all possible mention matchings , in each of which a predicted role filler is matched (if possible) to a set of coreferent gold role fillers.", "Similar to template matching, the process of mention matching can also result in unmatched predicted role fillers ( Spurious Role Fillers ) and unmatched coreferent sets of gold role fillers ( Missing Role Fillers ).", "Through the process, each predicted role filler increases the denominator of the total precision by 1, and each set of coreferent gold role fillers increases the denominator of total recall by 1.",
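The matching-count formula is easy to verify numerically. A short sketch (our illustration; the function name is an assumption) that computes the closed form above:

```python
from math import comb, factorial

def num_template_matchings(p, g):
    """Number of ways to match P predicted with G gold templates, allowing
    templates on either side to remain unmatched:
    sum_{i=0}^{min(P,G)} C(P, i) * G! / (G - i)!."""
    return sum(comb(p, i) * factorial(g) // factorial(g - i)
               for i in range(min(p, g) + 1))

# e.g. 2 predicted vs. 2 gold templates: 1 + 4 + 2 = 7 possible matchings
print(num_template_matchings(2, 2))  # -> 7
```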
"Whenever there is a matched mention pair in which the predicted role filler has an exact match to an element of the set of coreferent gold role fillers, this adds 1 to the numerator of both precision and recall.", "These counts are calculated for each template matching.", "Using precision and recall, the total F1 score across all the slots/roles is calculated and the template matching with the highest total F1 score is chosen.", "If there are ties, the template matching with the fewest errors is chosen (see Section 4.3).", "The second part of the error analysis tool involves changing the predicted templates to the desired gold templates with the help of a fixed set of transformations as detailed below.", "Alter Span transforms a role filler into the gold role filler with the lowest span comparison score (SCS).", "The tool provides two options for computing the SCS between two spans, and each depends only on the starting and ending indices of the spans.", "SCS can be interpreted as a distance and is 0 between two identical spans, and 1 for non-overlapping spans.", "The two modes are given as follows:", "a) absolute : This mode captures the (positive) distance between the starting (and ending) character offsets of spans $x$ and $y$ in the document, and scales that value by the sum of the lengths of $x$ and $y$, capping it at a maximum of 1: $\mathrm{SCS} = \min\left(1, \frac{|x_{start} - y_{start}| + |x_{end} - y_{end}|}{\mathrm{length}(x) + \mathrm{length}(y)}\right)$.", "b) geometric mean : This mode captures the degree of disjointedness between spans $x$ and $y$ by dividing the length of the overlap between the two spans by each of their lengths, multiplying those two fractions, and subtracting the final result from 1.", "If $si$ is the length of the intersection of $x$ and $y$, and neither $x$ nor $y$ have length 0, SCS is calculated as shown below; otherwise, SCS is 1.", "$si = \min(x_{end}, y_{end}) - \max(x_{start}, y_{start})$ and $\mathrm{SCS} = 1 - \frac{si}{\mathrm{length}(x)} \cdot \frac{si}{\mathrm{length}(y)}$.",
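A sketch of the two SCS modes as reconstructed above (ours, not the released tool; in particular, the min(1, ·) cap in the absolute mode is our reading of "capping it at a maximum of 1", and spans are assumed to be (start, end) character offsets with end > start):

```python
def scs_absolute(x, y):
    """Absolute mode: start/end offset distance scaled by total length,
    capped at 1."""
    num = abs(x[0] - y[0]) + abs(x[1] - y[1])
    return min(1.0, num / ((x[1] - x[0]) + (y[1] - y[0])))

def scs_geometric(x, y):
    """Geometric-mean mode: 1 minus the product of each span's overlap
    fraction; 1 if the spans do not overlap."""
    si = min(x[1], y[1]) - max(x[0], y[0])   # length of the intersection
    if si <= 0:
        return 1.0
    return 1.0 - (si / (x[1] - x[0])) * (si / (y[1] - y[0]))

print(scs_absolute((5, 10), (5, 10)))   # identical spans -> 0.0
print(scs_geometric((5, 10), (8, 12)))  # partial overlap -> 0.8
```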
"SCS is between 0 and 1 (not inclusive) for partially overlapping spans, and if there is no overlap between the spans, the SCS is 1. The order of comparison of the spans doesn't change the SCS score for both modes.", "As the absolute mode is less sensitive to changes in span indices as compared to the geometric mean, we chose the geometric mean for our analysis, as tiny changes in index positions result in a bigger change in the SCS score.", "Alter Role transforms a role filler's role into another role within the same template.", "Remove Duplicate Role Filler removes a role filler that is coreferent to an already matched role filler.", "Remove Cross Template Spurious Role Filler removes a role filler that would be correct if present in another template (in the same role).", "Remove Spurious Role Filler removes a role filler that has not been mentioned in any of the gold templates for a given document.", "Introduce Role Filler introduces a role filler that was not present in the predicted template but was required to be present in the matching gold template.", "Remove Template removes a predicted template that could not be matched to any gold template for a given document.", "Introduce Template introduces a template that can be matched to an unmatched gold template for a given document.", "For a given document, all singleton Alter Span and Alter Role transformations, as well as sets of Alter Span + Alter Role transformations, are applied first.", "The other transformations are applied in the order in which they were detected, which is dependent on the order of predicted and gold template pairs in the optimized matching and the order of the slots/roles in the template.", "The transformations in Section 4.2 are mapped onto a set of IE-specific error types as shown in Figure 3. In some cases, a single transformation maps onto a single error, while in others a sequence of transformations is associated with a single error.", "Full details are in Appendix A.", "5 Document-level IE Datasets: Our experiments employ three document-level information extraction datasets.", "We briefly describe each below.", "Dataset statistics are summarized in Table 1. MUC-4 (MUC-4, 1992) consists of newswire describing terrorist incidents in Latin America provided by the FBIS (Foreign Broadcast Information Service).", "We converted the optional templates to required templates and removed the subtypes of the incidents as done in previous work (Chambers, 2013; Du et al., 2021b) so that the dataset is transformed into standardized templates.", "The roles chosen from the MUC-4 dataset are PERPIND (individual perpetrator), PERPORG (organization perpetrator), TARGET (physical target), VICTIM (human target), and WEAPON , which are all string-fill roles, as well as INCIDENT TYPE , which is a set-fill role with six possible role fillers: attack , kidnapping , bombing , arson , robbery , and forced work stoppage .", "As seen in Table 1, 44.59% of the documents have no templates, which makes the identification of relevant vs. irrelevant documents critical to the success of any IE model for this dataset.", "ProMED (Patwardhan and Riloff, 2009) consists of just 125 annotated tuning examples and 120 annotated test examples describing global disease outbreaks, annotated by subject matter experts from ProMED.", "We use the tuning data as training data and reserve 10% of the test data, i.e. 
12 examples, to create a development/validation set.", "19.83% of the documents in the dataset have no templates.", "The roles that we extract from the dataset are STATUS , COUNTRY , DISEASE , and VICTIMS .", "DISEASE , VICTIMS , and COUNTRY are string-fill roles; STATUS is a set-fill role with confirmed , possible , and suspected as the possible role filler options.", "(In the ProMED dataset, COUNTRY is annotated as a set-fill role, but since countries are explicitly mentioned in most of the documents, we can treat this role as a string-fill.)", "SciREX (Jain et al., 2020) consists of annotated computer science articles from Papers with Code (https://paperswithcode.com).", "We focus specifically on its 4-ary relation extraction subtask.", "The roles present in each relation are MATERIAL (DATASET ), METRIC , TASK , and METHOD , which are all string-fills.", "We convert the dataset from its original format to templates for our models, and remove individual role fillers (entities) that have no mentions in the text.", "We also remove any duplicate templates.", "During preprocessing, we remove malformed words longer than 25 characters, as the majority of these consist of concatenated words that are not present in the corresponding text.", "According to Jain et al., around 50% of relations in the SciREX dataset contain one or more role fillers that do not appear in the corresponding text (https://github.com/allenai/SciREX/blob/master/README.md).", "These relations are removed during evaluation for our end-to-end task.", "Removing unmentioned entities sometimes eliminates differences between templates.", "This results in some templates becoming identical or making some templates contain information that is a subset of the information present in another template.", "Thus, we only keep one of these processed templates.", "In our experiments, we train and test two neural-based IE models, described briefly below, on the MUC-4, ProMED, and SciREX datasets.", "Note that to create the training data for both the DyGIE++ and GTT models, we use the first mention of each role filler in the document as the mention to be extracted.",
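The first-mention convention can be sketched in a few lines. This is our illustration only: the function name and the assumed data layout (each role mapping to entities, each entity a list of (start offset, mention string) pairs) are hypothetical, not the released preprocessing code.

```python
def first_mention_template(gold_template):
    """Build a training target by keeping, for every role, the earliest
    mention of each coreferent role-filler entity in the document."""
    target = {}
    for role, entities in gold_template.items():
        target[role] = [min(mentions, key=lambda m: m[0])[1]
                        for mentions in entities if mentions]
    return target

# Using the paper's Figure 1 example (offsets are made up):
gold = {"DISEASE": [[(120, "Newcastle disease"), (410, "Newcastle")]],
        "COUNTRY": [[(95, "Honduras")]]}
print(first_mention_template(gold))
# {'DISEASE': ['Newcastle disease'], 'COUNTRY': ['Honduras']}
```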
"DyGIE++ with Clustering: We use DyGIE++, a span-based, sentence-level extraction model, to identify role fillers in the document and associate them with certain role types.", "During training, the maximum span length enumerated by the model is set to 8 tokens as in Wadden et al. (2019) for the SciREX dataset and 11 tokens for the ProMED dataset.", "We use bert-base-cased and allenai/scibert_scivocab_uncased for the base BERT and SciBERT models respectively, which both have a maximum input sequence length of 512 tokens.", "To aggregate entities detected by DyGIE++ into templates, we use a clustering algorithm.", "For the SciREX dataset, we adopt a heuristic approach that assumes there is only one template per document, and in that template, we assign the named entities predicted by DyGIE++ for a document to the predicted role types.", "For the ProMED dataset, we use a different clustering heuristic that ensures that each template has exactly one role filler for the COUNTRY and DISEASE roles, as detailed in the dataset annotation guidelines.", "Also, since STATUS has the value confirmed in the majority of the templates, every predicted template has its STATUS assigned as confirmed .", "GTT is an end-to-end document-level template-generating model.", "For the MUC-4 and SciREX datasets, GTT is run for 20 epochs, while for ProMED it is run for 36 epochs, to adjust for the smaller size of the dataset.", "All other hyperparameters are set as in Du et al. (2021b).", "We use the same BERT and SciBERT base models as described in the DyGIE++ architecture above, both with a maximum input sequence length of 512 tokens.", "We first discuss the results of DyGIE++ and GTT on SciREX, ProMED, and MUC-4; and then examine the performance of these newer neural models on the 1992 MUC-4 dataset vs. a few of the best-performing IE systems at the time.", "Table 2 shows the results of evaluating DyGIE++ and GTT on the SciREX, ProMED, and MUC-4 datasets.", "We can see that all models perform substantially worse on scientific texts (ProMED, SciREX) as compared to news (MUC-4), likely because the model base is pretrained for general-purpose NLP applications (BERT) or there are not enough examples of scientific-style text in the pretraining corpus (SciBERT).", "In addition, models seem to perform better on the news-style ProMED dataset than the scientific-paper-based long-text SciREX dataset.", "This can be explained by the fact that all four models handle a maximum of 512 tokens as input, while the average length of a SciREX document is 5401 tokens.", "Thus, a majority of the text is truncated and, hence, unavailable to the models.", "Nevertheless, we see an increase in F1 scores for all SciBERT-based models when compared to their BERT counterparts for the SciREX dataset.", "The same trend is seen for DyGIE++ for ProMED, but not for GTT.", "This can be explained by the fact that GTT (SciBERT) has more Missing Template errors than GTT (BERT).", "So even if GTT (SciBERT) performs better on the scientific slot VICTIMS , i.e. it extracts more scientific information, it does not identify relevant events as well as GTT (BERT), reducing the F1 score across the remaining slots.", "From the error count results in Figure 4, we see that GTT makes fewer Missing Template errors than DyGIE++ on the MUC-4 dataset (86 vs. 
97).", "However, there is no significant difference in the number of missing templates between the two models on the ProMED and SciREX datasets.", "Table 2: F1 Scores for the Neural Models on SciREX, ProMED, and MUC-4 (in that column order; '-' marks configurations not run): DyGIE++ (BERT): 22.47%, 35.01%, 45.79%; DyGIE++ (SciBERT): 25.39%, 38.15%, -; GTT (BERT): 21.54%, 44.64%, 49.00%; GTT (SciBERT): 27.68%, 42.96%, -.", "Table 3: Precision, Recall, and F1 scores for models on the MUC-4 dataset: GE NLToolset: 56.69%, 52.09%, 54.29%; NYU PROTEUS: 34.23%, 31.28%, 32.69%; SRI FASTUS: 48.47%, 38.42%, 42.86%; UMass CIRCUS: 48.62%, 39.04%, 43.30%; GTT (BERT): 63.18%, 40.02%, 49.00%; DyGIE++ (BERT): 61.90%, 36.33%, 45.79%.", "This could be because DyGIE++ is prone to over-generation: there are significantly more Spurious Role Filler and Spurious Template errors as compared to the results of GTT.", "Since we use heuristics that create templates based on the extracted role fillers, this increases the probability that there was a possible match to a gold template, reducing the number of Missing Template errors.", "We can also conclude that DyGIE++ is worse at coreference resolution when compared to GTT, as DyGIE++ makes more Duplicate Role Filler errors across all datasets.", "Overall, we find that the major source of error for both GTT and DyGIE++ across all the datasets is missing recall in the form of Missing Role Filler and Missing Template errors.", "Table 3 presents the precision, recall, and F1 performance on the MUC-4 dataset for early models from 1992 alongside those of the more recent DyGIE++ and GTT models.", "We summarize key findings below.", "The best of the early models (GE NLToolset) performs better than either of the modern models.", "It does so by doing a better job balancing precision and recall, whereas GTT and DyGIE++ exhibit much higher precision and much lower recall.", "The early models have more span errors than the modern DyGIE++ and GTT models.", "The representative kinds of span errors from the 1992 model outputs are shown in Table 4.", "One interesting difference between the span errors in the early models and the modern models is that the predicted mentions include longer spans with more information than is indicated in the best gold mention match.", "Some could be due to errors in dataset annotation, for example, 'maoist shining path group' versus 'shining path'; but a significant number of the span errors occur because the early models seem to extract the entire sentence or clause which contains the desired role filler mention.", "The modern models tend to leave off parts of the desired spans, and if they do predict larger spans than required, are only off by a few words.", "The early models have fewer Missing Template and Missing Role Filler errors as compared to the modern models.", "However, the former also have more Spurious Template and Spurious Role Filler errors than the latter, indicating these models mitigate the issue of Missing Templates through over-generation.", "The early models have fewer Incorrect Role errors as compared to modern models.", "However, since all the models make relatively few such errors, it suggests that role classification for predicted mentions is not a major problem for modern models.", "The main source of error for both early and modern models is missing recall due to missing templates and missing role fillers.", "This strongly suggests future systems can maximize their performance by being less conservative in generating templates and role fillers.", "Figure 4: Automated Error Analysis Results (Error Counts) for Models on the MUC-4 dataset.",
[Figure 4: Automated error analysis results (error counts) for DyGIE++ (BERT), GTT (BERT), GE, NYU, SRI, and UMass on the MUC-4 dataset, broken down by error type (span errors; duplicate, spurious, and missing role fillers; incorrect roles; wrong-template assignments; spurious and missing templates) across within-template, within+cross-template, and template-detection categories.]", "9 Conclusion As new models for information extraction continue to be developed, we find that their predicted error types contain insights regarding their shortcomings.", "Analyzing error patterns within model predictions in a more fine-grained manner, beyond the scores provided by commonly used metrics, is important for the progress of the field.", "We introduce a framework for the automatic categorization of model prediction errors for document-level IE tasks.", "We used the tool to analyze the errors of two state-of-the-art models on three datasets from varying domains and compared the error profiles of these models to four of the earliest systems in the field on a dataset from that era.", "We find that state-of-the-art models, when compared to the earlier manual feature-based models, perform better at span extraction but worse at template detection and role assignment.", "With a better balance between precision and recall, the best early model outperforms the relatively high-precision, low-recall modern models.", "Missing role fillers remain the main source of errors, and scientific corpora are the most difficult for all systems, suggesting that improvements in these areas should be a priority for future system development.", "This work explores subtypes of Spurious Role Filler errors extensively; however, we would like to further analyze Missing Role Filler and template-level errors for more fine-grained error subtypes and the linguistic reasons behind why they occur.", "Because the tool performs pairwise comparisons between all predicted and gold mentions in a role, for all pairs of predicted and gold templates in an example, it is slow when both the number of predicted and gold templates and the number of role fillers in the templates are high.", "Thus, we would also like to improve the time complexity of our template (and mention) matching algorithms using an approach like bipartite matching (Yang et al., 2021); see the sketch below.", "Currently, the error analysis tool reports exact-match precision/recall/F1, which is more suitable for string-fill roles.", "We would like to extend the tool to further analyze set-fill roles by implementing metrics such as false-positive rate.", "We used a limited number of models in this paper, as we aimed to develop and test the usability of our error analysis tool.", "In the future, however, we would like to test our tool on a wider range of models, in addition to running more experiments in order to reach more generalizable conclusions about the behavior of IE models.
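As a hedged illustration of the bipartite-matching idea mentioned above, a minimal sketch is below. It uses the Hungarian algorithm (scipy's linear_sum_assignment) to align predicted and gold templates in one pass instead of scoring all pairings exhaustively; the template representation and the overlap-based similarity function are simplifying assumptions, not the exact scoring used by our tool.

```python
# Sketch: align predicted and gold templates with bipartite matching
# instead of exhaustive pairwise scoring. Assumes templates are dicts
# mapping role names to lists of filler strings (a simplification).
import numpy as np
from scipy.optimize import linear_sum_assignment

def template_similarity(pred, gold):
    """Fraction of gold role fillers recovered by the prediction (proxy score)."""
    total = sum(len(v) for v in gold.values())
    if total == 0:
        return 0.0
    hits = sum(len(set(pred.get(role, [])) & set(fillers))
               for role, fillers in gold.items())
    return hits / total

def match_templates(preds, golds):
    """Return (pred_idx, gold_idx) pairs maximizing total similarity."""
    sim = np.array([[template_similarity(p, g) for g in golds] for p in preds])
    rows, cols = linear_sum_assignment(-sim)  # negate to maximize similarity
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] > 0]

preds = [{"COUNTRY": ["peru"], "DISEASE": ["cholera"]}]
golds = [{"COUNTRY": ["peru"], "DISEASE": ["cholera"], "STATUS": ["confirmed"]}]
print(match_templates(preds, golds))  # [(0, 0)]
```

Because the assignment is solved globally in polynomial time, this avoids the combinatorial blow-up of comparing every predicted template against every gold template independently.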
", "We thank the anonymous reviewers and Ellen Riloff for their helpful comments, and Sienna Hu for converting the 1992 model outputs to a format compatible with our error analysis tool.", "Our research was supported, in part, by NSF CISE Grant 1815455 and the Cornell CS Department CSURP grants for undergraduate research." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "method", "other", "method", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "other", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Given the prevalence of pre-trained contextualized representations in today's NLP, there have been many efforts to understand what information they contain, and why they seem to be universally successful.", "The most common approach to use these representations involves fine-tuning them for an end task.", "Yet, how fine-tuning changes the underlying embedding space is less studied.", "In this work, we study the English BERT family and use two probing techniques to analyze how fine-tuning changes the space.", "We hypothesize that fine-tuning affects classification performance by increasing the distances between examples associated with different labels.", "We confirm this hypothesis with carefully designed experiments on five different NLP tasks.", "Via these experiments, we also discover an exception to the prevailing wisdom that fine-tuning always improves perfor-mance.", "Finally, by comparing the representations before and after fine-tuning, we discover that fine-tuning does not introduce arbitrary changes to representations; instead, it adjusts the representations to downstream tasks while largely preserving the original spatial structure of the data points.", "Pre-trained transformer-based language models (e.g., Devlin et al., 2019) form the basis of state-of-the-art results across NLP.", "The relative opacity of these models has prompted the development of many probes to investigate linguistic regularities captured in them (e.g., Kovaleva et al., 2019; Conneau et al., 2018; Jawahar et al., 2019).", "Broadly speaking, there are two ways to use a pre-trained representation (Peters et al., 2019): as a fixed feature extractor (where the pre-trained weights are frozen), or by fine-tuning it for a task.", "The probing literature has largely focused on the former (e.g., Kassner and Schtze, 2020; Perone et al., 2018; Yaghoobzadeh et al., 2019; Krasnowska-Kieras and Wrblewska, 2019; Wallace et al., 2019; Pruksachatkun et al., 2020; Agha-janyan et al., 2021).", "Some previous work (Mer-chant et al., 2020; Mosbach et al., 2020b; Hao et al., 2020) does provide insights about fine-tuning: fine-tuning changes higher layers more than lower ones and linguistic information is not lost during fine-tuning.", "However, relatively less is understood about how the representation changes during the process of fine-tuning and why fine-tuning invariably seems to improve task performance.", "In this work, we investigate the process of fine-tuning of representations using the English BERT family (Devlin et al., 2019).", "Specifically, we ask:", "1. Does fine-tuning always improve performance?", "2. How does fine-tuning alter the representation to adjust for downstream tasks?", "3. 
How does fine-tuning change the geometric structure of different layers?", "We apply two probing techniques, classifier-based probing (Kim et al., 2019; Tenney et al., 2019) and DIRECTPROBE (Zhou and Srikumar, 2021), to variants of BERT representations that are fine-tuned on five tasks: part-of-speech tagging, dependency head prediction, preposition supersense role and function prediction, and text classification.", "Beyond confirming previous findings about fine-tuning, our analysis reveals several new findings, briefly described below.", "First, we find that fine-tuning introduces a divergence between training and test sets, which is not severe enough to hurt generalization in most cases.", "However, we do find one exception where fine-tuning hurts the performance; this setting also has the largest divergence between training and test set after fine-tuning (4.1).", "Second, we examine how fine-tuning changes labeled regions of the representation space.", "For a representation where task labels are not linearly separable, we find that fine-tuning adjusts it by grouping points with the same label into a small number of clusters (ideally one), thus simplifying the underlying representation.", "Doing so makes it easier to linearly separate labels with fine-tuned representations than with untuned ones (4.2).", "For a representation whose task labels are already linearly separable, we find that fine-tuning pushes the clusters of points representing different labels away from each other, thus introducing large separating regions between labels.", "Rather than simply scaling the points, clusters move in different directions and to different extents (measured by Euclidean distance).", "Overall, these clusters become distant compared to the untuned representation.", "We conjecture that the enlarged region between groups admits a bigger set of classifiers that can separate them, leading to better generalization (4.3).", "We verify our distance hypothesis by investigating the effect of fine-tuning across tasks.", "We observe that fine-tuning for related tasks can also provide useful signal for the target task by altering the distances between clusters representing different labels (4.4).", "Finally, fine-tuning does not change the higher layers arbitrarily.", "This confirms previous findings.", "Additionally, we find that fine-tuning largely preserves the relative positions of the label clusters, while reconfiguring the space to adjust for downstream tasks (4.5).", "Informally, we can say that fine-tuning only slightly changes higher layers.", "These findings help us understand fine-tuning better, and justify why fine-tuned representations can lead to improvements across many NLP tasks. [Footnote 1: The code and data to replicate our analysis are available at https://github.com/utahnlp/BERT-fine-tuning-analysis.]", "In this work, we probe representations in the BERT family during and after fine-tuning.", "First, let us look at the two supervised probes we will employ: a classifier-based probe (e.g., Tenney et al., 2019; Jullien et al., 2022) to assess how well a representation supports classifiers for a task, and DIRECTPROBE (Zhou and Srikumar, 2021) to analyze the geometry of the representation.", "Trained classifiers are the most commonly used probes in the literature (e.g., Hewitt et al., 2021; Whitney et al., 2021; Belinkov, 2021).
", "To understand how well a representation encodes the labels for a task, a probing classifier is trained over it, with the embeddings themselves kept frozen when the classifier is trained.", "For all our experiments, we use two-layer neural networks as our probe classifiers.", "We use grid search to choose the best hyperparameters.", "Each best classifier is trained five times with different initializations.", "We report the average accuracy and its standard deviation for each classifier.", "The hidden layer sizes are selected from {32, 64, 128, 256} × {32, 64, 128, 256}, and the regularizer weight from the range 10^-7 to 10^0.", "All models use ReLUs as the activation function for the hidden layer and are optimized by Adam (Kingma and Ba, 2015).", "We set the maximum number of learning iterations to 1000.", "We use scikit-learn v0.22 (Pedregosa et al., 2011) for these experiments.", "Classifier probes aim to measure how well a contextualized representation captures a linguistic property.", "The classification performance can help us assess the effect of fine-tuning.", "Classifier probes treat the representation as a black box and only focus on the final task performance; they do not reveal how fine-tuning changes the underlying geometry of the space.
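For concreteness, a minimal sketch of this probing setup with scikit-learn is below. It assumes the frozen embeddings and labels are already in arrays X and y (placeholders here); the grid matches the ranges stated above, but data loading and the five-seed averaging are elided.

```python
# Sketch of the two-layer probe with grid search over hidden sizes and
# regularizer weight; embeddings X stay frozen, only the probe is trained.
import numpy as np
from itertools import product
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X = np.random.randn(200, 64)           # placeholder for frozen BERT embeddings
y = np.random.randint(0, 5, size=200)  # placeholder task labels

param_grid = {
    # All (h1, h2) pairs from {32, 64, 128, 256} x {32, 64, 128, 256}.
    "hidden_layer_sizes": list(product([32, 64, 128, 256], repeat=2)),
    # L2 regularizer weight from 1e-7 to 1e0.
    "alpha": [10.0 ** e for e in range(-7, 1)],
}
probe = MLPClassifier(activation="relu", solver="adam", max_iter=1000)
search = GridSearchCV(probe, param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```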
", "To this end, we use DIRECTPROBE (Zhou and Srikumar, 2021), a recently proposed technique which analyzes embeddings from a geometric perspective. [Footnote 2: We use the DIRECTPROBE implementation from https://github.com/utahnlp/DirectProbe with default settings.]", "We briefly summarize the technique and refer the reader to the original work for details.", "For a given labeling task, DIRECTPROBE returns a set of clusters such that each cluster only contains points with the same label, and there are no overlaps between the convex hulls of these clusters.", "Any decision boundary must cross the regions between the clusters that have different labels (see Figure 1).", "Since fine-tuning a contextualized representation creates different representations for different tasks, it is reasonable to probe the representation based on a given task.", "These clusters allow us to measure three properties of interest.", "Number of Clusters: The number of clusters indicates the linearity of the representation for a task.", "If the number of clusters equals the number of labels, then examples with the same label are grouped into one cluster; a simple linear multi-class classifier will suffice.", "If, however, there are more clusters than labels, then at least two clusters of examples with the same label cannot be grouped together (as in Figure 1, right).", "This scenario calls for a non-linear classifier.", "Distances between Clusters: Distances between clusters can reveal the internal structure of a representation. [Footnote 3: We use Euclidean distance throughout this work.]", "By tracking these distances during fine-tuning, we can study how the representation changes.", "To compute these distances, we use the fact that each cluster represents a convex object.", "This allows us to use max-margin separators to compute distances.", "We train a linear SVM (Chang and Lin, 2011) to find the maximum-margin separator and compute its margin.", "The distance between the two clusters is twice the margin.", "Spatial Similarity: Distances between clusters can also reveal the spatial similarity of two representations.", "Intuitively, if two representations have similar relative distances between clusters, the representations themselves are similar to each other for the task at hand.", "We use these distances to construct a distance vector v for a representation, where each element v_i is the distance between the clusters of a pair of labels.", "With n labels in a task, the size of v is n(n-1)/2.", "This construction works only when the number of clusters equals the number of labels (i.e., the dataset is linearly separable under the representation).", "Surprisingly, we find this to be the case for most representations we studied.", "As a measure of the similarity of two representations for a labeling task, we compute the Pearson correlation coefficient between their distance vectors.", "Note that this coefficient can also be used to measure the similarity between two labeled datasets with respect to the same representation.", "We exploit this observation to analyze the divergence between training and test sets for fine-tuned representations (4.1).
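A minimal sketch of these two measurements is below, assuming each cluster is given as an array of points (DIRECTPROBE itself and data handling are elided): the hull-to-hull distance comes from a near hard-margin linear SVM, and spatial similarity is the Pearson correlation between the pairwise-distance vectors of two representations.

```python
# Sketch: inter-cluster distance via a (near) hard-margin linear SVM,
# and spatial similarity as Pearson correlation of distance vectors.
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr
from sklearn.svm import SVC

def cluster_distance(A, B):
    """Distance between two linearly separable point sets = 2 * margin = 2/||w||."""
    X = np.vstack([A, B])
    y = np.array([0] * len(A) + [1] * len(B))
    svm = SVC(kernel="linear", C=1e6).fit(X, y)  # large C approximates a hard margin
    return 2.0 / np.linalg.norm(svm.coef_[0])

def distance_vector(clusters):
    """Vector of n*(n-1)/2 pairwise distances; clusters maps label -> points array."""
    labels = sorted(clusters)
    return np.array([cluster_distance(clusters[a], clusters[b])
                     for a, b in combinations(labels, 2)])

def spatial_similarity(clusters_1, clusters_2):
    """Pearson correlation between the distance vectors of two representations."""
    return pearsonr(distance_vector(clusters_1), distance_vector(clusters_2))[0]

A = np.array([[0.0, 0.0], [0.0, 1.0]])
B = np.array([[3.0, 0.0], [3.0, 1.0]])
print(cluster_distance(A, B))  # approximately 3.0
```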
(2020a).", "5 All the datasets we use in this work are publicly available under a creative commons or an open source license.", "( w head and w mod ).", "This task can help us understand if, and how well, a representation can characterize syntactic relationships between words.", "This task involves assigning a category to a pair of tokens.", "We concatenate their contextualized representations from BERT and treat the concatenation as the representation of the pair.", "We use the same dataset as the POS task for dependencies.", "Preposition supersense disambiguation involves two categorization tasks of predicting preposition's semantic role ( PS-role ) and semantic function ( PS-fxn ) .", "These tasks are designed for disambiguating semantic meanings of prepositions.", "Following the previous work (Liu et al., 2019), we only train and evaluate on single-token prepositions from Streusle v4.2 corpus (Schneider et al., 2018).", "Text classification , in general, is the task of categorizing sentences or documents.", "We use the TREC-50 dataset (Li and Roth, 2002) with 50 semantic labels for sentences.", "As is the standard practice, we use the representation of the [CLS] token as the sentence representation.", "This task can show how well a representation characterizes a sentence.", "We fine-tune the models in 3.1 on the five tasks from 3.2 separately.", "6 The fine-tuned models (along with the original models) are then used to generate contextualized representations.", "The probing techniques described in 2 are applied to study both original and fine-tuned representations.", "Our preliminary experiments showed that the commonly used 3 5 epochs of fine-tuning are insufficient for the smaller representations, such as BERT tiny , and they require more epochs.", "We fine-tuned all the representations for 10 epochs except BERT base , which we fine-tuned for the usual three epochs.", "Note that the fine-tuning phase is separate from the classifier training phase for probing; for the probe classifiers, we train two-layer neural networks (described in 2.1) from scratch on both original and fine-tuned representations 7 , ensuring a fair comparsion between them.", "In this section, we will use classifier probes to examine if fine-tuning always improves classifier per-6", "per-6 More detailed settings can be found in Appendix A 7 When the fine-tuned representations are probed, their weights are frozen.", "Essentially, after fine-tuning, we treat the fine-tuned representations as a black-box that produces embeddings for analysis.", "formance (4.1).", "Then we propose a geometric explanation for why fine-tuning improves classification performance using DIRECTPROBE (4.2 and 4.3).", "Next, we will confirm this geometric explanation by investigating cross-task fine-tuning (4.4).", "Finally, we will analyze how fine-tuning changes the geometry of different layers of BERT base (4.5).", "It is commonly accepted that the fine-tuning improves task performance.", "Does this always hold?", "Table 2 summarizes the relevant observations from our experiments.", "Appendix C presents the complete fine-tuning results.", "Fine-tuning diverges the training and test set.", "In Table 2, the last column shows the spatial similarity between the training and test set for each representation.", "We apply DIRECTPROBE on the training and test set separately.", "The spatial similarity is calculated as the Pearson correlation coefficient between the distance vectors of training and test set (described in 2).", "We observe that after 
", "We observe that after fine-tuning, all the similarities decrease, implying that the training and test set diverge as a result of fine-tuning.", "In most cases, this divergence is not severe enough to decrease the performance.", "An interesting observation in Table 2 is that BERT small does not show an improvement on the PS-fxn task after fine-tuning, which breaks the well-accepted impression that fine-tuning always improves the performance.", "However, only one such exception is observed across all our experiments (see Appendix C).", "It is insufficient to draw any concrete conclusions about why this is happening.", "We do observe that BERT small shows the smallest similarity (0.44) between the training and test set after fine-tuning on the PS-fxn task.", "We conjecture that controlling the divergence between the training and test sets can help ensure that fine-tuning helps.", "Verifying or refuting this conjecture requires further study.", "Next, let us examine the geometry of the representations before and after fine-tuning using DIRECTPROBE and counting the number of clusters.", "We will focus on the overwhelming majority of cases where fine-tuning does improve performance.", "Table 3 summarizes the results.", "For brevity, we only present the results on BERT tiny.", "The full results are in Appendix C.", "We observe that before fine-tuning, small representations (i.e., BERT tiny) are non-linear for most tasks.", "Although non-linearity does not imply poor generalization, it represents a more complex spatial structure and requires a more complex classifier.", "This suggests that to use small representations (say, due to limited resources), it would be advisable to use a non-linear classifier rather than a simple linear one.", "Fine-tuning makes the space simpler.", "In Table 3, we observe that the number of clusters decreases after fine-tuning.", "This tells us that after fine-tuning, the points associated with different labels are in a simpler spatial configuration.", "The same trend holds for TREC-50 (Table 4), even when the final representation is not linearly separable.", "To better understand the changes in spatial structure, we apply DIRECTPROBE to every intermediate representation encountered during fine-tuning.", "Here, we focus on BERT base.", "Since all representations we considered are linearly separable, the number of clusters equals the number of labels.", "As a result, each cluster exclusively corresponds to one label.", "Going ahead, we will use clusters and labels interchangeably.", "Fine-tuning pushes the clusters representing different labels away from each other.", "This confirms the observation of Zhou and Srikumar (2021), who pointed out that fine-tuning pushes each label away from the others.", "However, they use the global minimum distance between clusters to support this argument, which only partially supports the claim: the distances between some clusters might increase despite the global minimum distance decreasing.", "We track the minimum distance of each label to all other labels during fine-tuning.", "We find that all the minimum distances are increasing.", "Figure 2 shows how these distances change in the last layer of BERT base for the PS-role and POS tagging tasks.", "Appendix D includes the plots for all tasks.", "For clarity, we only show the three labels where the distance increases the most, and the three where it increases the least.", "We also observe that although the trend is increasing, the minimum distance associated with a label may decrease during the course of fine-tuning, e.g., the label STUFF in
the PS-role task, suggesting a potential instability of fine-tuning.", "To see further how labels move during fine-tuning, we track the centroids of each cluster.", "We select the three closest labels from the POS tagging task and track the paths of the centroids of each label cluster in the last layer of BERT base during fine-tuning.", "Figure 3 (right) shows the 2D PCA projection of these paths.", "We observe that before fine-tuning, the centroids of all three labels are close to each other.", "As fine-tuning proceeds, the centroids move around in different directions, away from each other.", "We conclude that fine-tuning enlarges the gaps between label clusters and admits more classifiers consistent with the labels, allowing for better generalization.", "Note that neither the loss nor the optimizer explicitly mandates this change.", "Indeed, nothing in the objective prevents clusters from merging during fine-tuning.", "In 4.3, we hypothesized that fine-tuning improves the performance because it enlarges the gaps between label clusters.", "A natural inference of this hypothesis is that the process may shrink the gaps between labels of an unrelated task, and its performance can decrease.", "In this subsection, we investigate how fine-tuning for one task affects another.", "We fine-tune BERT base on the PS-role and POS tagging tasks separately and use the fine-tuned models to generate contextualized representations for the PS-fxn task.", "Our choice of tasks in this experimental design is motivated by the observation that PS-role and PS-fxn are similar tasks that seek to predict supersense tags for prepositions.", "On the other hand, POS tagging can adversely affect the PS-fxn task because POS tagging requires all the prepositions to be grouped together (label ADP), while PS-fxn requires different prepositions to be far away from each other.", "We apply DIRECTPROBE on both representations to analyze the geometric changes with respect to PS-fxn. [Footnote 9: The PS-fxn task is still linearly separable even after fine-tuning on the PS-role or POS tagging tasks.]", "The effects of cross-task fine-tuning depend on how close the two tasks are.", "The third and fourth columns of Table 5 indicate the number of labels whose minimum distance is increased or decreased after fine-tuning.", "The second column from the right shows the average distance change over all labels, e.g.,
fine-tuning on POS results in the minimum distances of the PS-fxn labels decreasing by 1.68 on average.", "We observe that fine-tuning on the same dataset (PS-fxn) increases the distances between labels (second row), which is consistent with the observations from 4.3; fine-tuning on a similar task also increases the distances between clusters (third row), but to a lesser extent.", "However, fine-tuning on an opposing task decreases the distances between clusters (last row).", "These observations suggest that cross-task fine-tuning can add or remove information from the representation, depending on how close the source and target task are.", "Small distances between label clusters indicate poor performance.", "Based on our conclusion in 4.3 that a larger gap between labels leads to better generalization, we expect that the performance of PS-fxn after fine-tuning on PS-role would be higher than the performance after fine-tuning on POS tagging.", "To verify this, we train two-layer neural networks on the PS-fxn task using the representations that are fine-tuned on the PS-role and POS tagging tasks.", "Importantly, we do not further fine-tune the representations for PS-fxn.", "The last column of Table 5 shows the results.", "Fine-tuning on PS-fxn enlarges the gaps between all PS-fxn labels, which justifies the highest performance; fine-tuning on PS-role enlarges the gaps between some labels in PS-fxn, leading to a slight improvement; fine-tuning on POS tags shrinks the gaps between all labels in PS-fxn, leading to a decrease in performance.", "In summary, based on the results of 4.2, 4.3 and 4.4, we conclude that fine-tuning injects or removes task-related information from representations by adjusting the distances between label clusters, even if the original representation is linearly separable (i.e., when there is no need to change the representation).", "When the original representation does not support a linear classifier, fine-tuning tries to group points with the same label into a small number of clusters, ideally one cluster.", "Previous work (Merchant et al., 2020; Mosbach et al., 2020b) showed that during fine-tuning, lower layers change little compared to higher layers.", "In the following experiments, we confirm their findings and further provide (i) evidence that fine-tuning does not change the representation arbitrarily, even for higher layers, and (ii) an analysis of the changes in different layers via a visual comparison between lower and higher layers.", "Here, we focus on the POS tagging task with BERT base.", "Our conclusions extend to other tasks, whose results are in Appendix E.
", "Higher layers do not change arbitrarily.", "Although previous work (Mosbach et al., 2020b) shows that higher layers change more than the lower layers, we find that higher layers still remain close to the original representations.", "To study the dynamics of fine-tuning, we compare each layer during fine-tuning to its corresponding original pretrained layer.", "The spatial similarity between two representations is calculated as the Pearson correlation coefficient of their distance vectors, as described in 2.", "Intuitively, a classifier learns a decision boundary that traverses the region between clusters, which makes the distances between clusters more relevant to our analysis (as opposed to the spatial structure of points within each cluster).", "To avoid visual clutter, we only show the plots for every alternate layer.", "Figure 4 shows the results for all four tasks.", "For the higher layers, we find that the Pearson correlation coefficient between the original representation and the fine-tuned one is surprisingly high (more than 0.5), reinforcing the notion that fine-tuning does not change the representation arbitrarily.", "Instead, it attempts to preserve the relative positions of the labels.", "This means the fine-tuning process encodes task-specific information, yet it largely preserves the pre-trained information encoded in the representation.", "The labels of lower layers move only in a small region and almost in the same directions.", "The unchanged nature of lower layers raises the question: do they not change at all?", "To answer this question, for every label, we compute the difference between its centroids before and after fine-tuning.", "Figure 5 shows the 2D PCA projection of these difference vectors.", "For brevity, we only present the plots for every alternate layer.", "A plot with all layers can be found in Appendix E.", "We observe that the movements of labels in lower layers concentrate in a few directions compared to the higher layers, suggesting that the labels in lower layers do change, but do not separate the labels as much as higher layers do.", "Also, we observe that the labels INTJ and SYM have distinctive directions in the lower layers.", "Note that, in Figure 5, the motion range of lower layers is much smaller than that of the higher layers.", "The projected two dimensions range from -1 to 3 and from -3 to 3 for layer two, while for layer 12 they range from -12 to 13 and from -12 to 8, suggesting that labels in lower layers only move in a small region compared to higher layers.", "Figure 3 shows an example of this difference.", "Compared to the layer 12 (right) paths, we see that the layer 1 paths (left) traverse almost the same trajectories, which is consistent with the observations from Figure 5.
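A minimal sketch of this centroid-movement analysis is below, assuming one layer's token embeddings before and after fine-tuning are already in arrays (shapes and the toy data are assumptions; the paper's exact preprocessing and plotting are elided):

```python
# Sketch: per-label centroid differences between pretrained and fine-tuned
# embeddings, projected to 2D with PCA (as in the Figure 5 style analysis).
import numpy as np
from sklearn.decomposition import PCA

def centroid_shifts(emb_before, emb_after, labels):
    """emb_*: (num_points, dim) arrays; labels: (num_points,) label ids."""
    shifts, names = [], []
    for lab in np.unique(labels):
        mask = labels == lab
        shifts.append(emb_after[mask].mean(axis=0) - emb_before[mask].mean(axis=0))
        names.append(lab)
    return np.array(shifts), names

# Toy data standing in for one layer's token embeddings.
rng = np.random.default_rng(0)
before = rng.normal(size=(300, 768))
after = before + rng.normal(scale=0.1, size=(300, 768))
labels = rng.integers(0, 5, size=300)

shifts, names = centroid_shifts(before, after, labels)
proj = PCA(n_components=2).fit_transform(shifts)  # one 2D arrow per label
for lab, (x, y) in zip(names, proj):
    print(lab, round(float(x), 3), round(float(y), 3))
```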
", "Does fine-tuning always improve performance?", "Indeed, fine-tuning almost always improves task performance.", "However, rare cases exist where fine-tuning decreases the performance.", "Fine-tuning introduces a divergence between the training set and unseen examples (4.1).", "However, it is unclear how this divergence affects the generalization ability of representations; e.g., does this divergence suggest a new kind of overfitting that is driven by representations rather than classifiers?", "How does fine-tuning alter the representation to adjust for downstream tasks?", "Fine-tuning improves performance by grouping points with the same label into a small number of clusters (4.2) and pushing each label cluster away from the others (4.3).", "We hypothesize that the distances between label clusters correlate with the classification performance and confirm this hypothesis by investigating cross-task fine-tuning (4.4).", "Our findings are surprising because fine-tuning for a classification task does not need to alter the geometry of a representation if the data is already linearly separable in the original representation.", "What we observe reveals geometric properties that characterize good representations.", "We do not show a theoretical analysis connecting our geometric findings to representation learnability, but the findings in this work may serve as a starting point for a learning theory for representations.", "How does fine-tuning change the underlying geometric structure of different layers?", "It is established that higher layers change more than the lower ones.", "In this work, we analyze this behavior more closely.", "We discover that higher layers do not change arbitrarily; instead, they remain similar to the untuned version.", "Informally, we can say that fine-tuning only slightly changes even the higher layers (4.5).", "Nevertheless, our analysis does not reveal why higher layers change more than the lower layers.", "A deeper analysis of model parameters during fine-tuning is needed to understand the difference between lower and higher layers.", "Limitations of this work.", "Our experiments use the BERT family of models for English tasks.", "Given the architectural similarity of transformer language models, we may be able to extrapolate the results to other models, but further work is needed to extend our findings to other languages or model architectures.", "In our analysis, we ignore the structure within each cluster, which is another information source for studying the representation.", "We plan to investigate these aspects in future work.", "We make our code available for replication and extension by the community.", "There are many lines of work that focus on analyzing and understanding representations.", "The most commonly used technique is the classifier-based method.", "Early work (Alain and Bengio, 2017; Kulmizev et al., 2020) starts with using linear classifiers as the probe.", "Hewitt and Liang (2019) pointed out that a linear probe is not sufficient to evaluate a representation.", "Some recent work also employs non-linear probes (Tenney et al., 2019; Eger et al., 2019).", "There are also efforts to inspect representations from a geometric perspective (e.g.,
Ethayarajh, 2019; Mimno and Thompson, 2017), including the recently proposed DIRECTPROBE (Zhou and Srikumar, 2021), which we use in this work.", "Another line of probing work designs control tasks (Ravichander et al., 2021; Lan et al., 2020) to reverse-engineer the internal mechanisms of representations (Kovaleva et al., 2019; Wu et al., 2020).", "However, in contrast to our work, most studies (Zhong et al., 2021; Li et al., 2021; Chen et al., 2021) focused on pre-trained representations, not fine-tuned ones.", "While fine-tuning pre-trained representations usually provides strong empirical performance (Wang et al., 2018; Talmor et al., 2020), how fine-tuning manages to do so has remained an open question.", "Moreover, the instability (Mosbach et al., 2020a; Dodge et al., 2020; Zhang et al., 2020) and forgetting problems (Chen et al., 2020; He et al., 2021) make it harder to analyze fine-tuned representations.", "Despite these difficulties, previous work (Merchant et al., 2020; Mosbach et al., 2020b; Hao et al., 2020) draws valuable conclusions about fine-tuning.", "This work extends this line of effort and provides a deeper understanding of how fine-tuning changes representations.", "In this work, we take a close look at how fine-tuning a contextualized representation for a task modifies it.", "We investigate the fine-tuned representations of several BERT models using two probing techniques: classifier-based probing and DIRECTPROBE.", "First, we show that fine-tuning introduces a divergence between training and test sets, and in at least one case, hurts generalization.", "Next, we show that fine-tuning alters the geometry of a representation by pushing points belonging to the same label closer to each other, thus admitting simpler and better classifiers.", "We confirm this hypothesis with cross-task fine-tuning experiments.", "Finally, we discover that while adjusting representations to downstream tasks, fine-tuning largely preserves the original spatial structure of points across all layers.", "Taken collectively, the empirical study presented in this work not only justifies the impressive performance of fine-tuning, but may also lead to a better understanding of learned representations.", "We thank the ARR reviewers and the Utah NLP group for their constructive feedback.", "This work is partially supported by NSF grants #1801446 (SaTC) and #1822877 (Cyberlearning), and a generous gift from Verisk Inc." ]
[ "abstain", "abstain", "abstain", "method", "objective", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "objective", "objective", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "other", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "objective", "method", "objective", "objective", "result", "method", "abstain", "method", "other", "other" ]
[ "Extreme classification is a classification task on an extremely large number of labels (tags).", "User generated labels for any type of online data can be sparing per individual user but intractably large among all users.", "It would be useful to automatically select a smaller, standard set of labels to represent the whole label set.", "We can then solve efficiently the problem of multi-label learning with an intractably large number of interdependent labels, such as automatic tagging of Wikipedia pages.", "We propose a submodular maximization framework with linear cost to find informative labels which are most relevant to other labels yet least redundant with each other.", "A simple prediction model can then be trained on this label subset.", "Our framework includes both label-label and label-feature dependencies, which aims to find the labels with the most representation and prediction ability.", "In addition, to avoid information loss, we extract and predict outlier labels with weak dependency on other labels.", "We apply our model to four standard natural language data sets including Bibsonomy entries with users assigned tags, web pages with user assigned tags, legal texts with EUROVOC descriptors(A topic hierarchy with almost 4000 categories regarding different aspects of European law) and Wikipedia pages with tags from social bookmarking as well as news videos for automated label detection from a lexicon of semantic concepts.", "Experimental results show that our proposed approach improves label prediction quality, in terms of precision and nDCG, by 3% to 5% in three of the 5 tasks and is competitive in the others, even with a simple linear prediction model.", "An ablation study shows how different data sets benefit from different aspects of our model, with all aspects contributing substantially to at least one data set.", "Multi-label learning has recently attracted attention in the research community due to an increase in applications such as semantic labeling of images and videos, bio-informatics, genetic functions, and music categorization.", "In addition, multilabel learning can address machine learning problems in web data mining, including recommender systems, multimedia sharing websites, and ranking (Zhang and Zhang, 2010).", "An important application of extreme multi-label learning is automatic tagging and social tagging of large information collections such as Wikipedia or the Web.", "A user can add their own keywords to a text, as if they were the keywords they would use to look for the article in a search engine.", "Since tags use an open vocabulary, the number of tags is increasing continually in order to adjust to the needs of new information.", "Moreover, different users can assign different tags to the same resource, resulting in a great diversity of tags for that resource.", "The biggest challenge of extreme multi-label learning is the dimension of the output space.", "As the number of output labels increases, the number of output states increases exponentially.", "In order to overcome this exponential growth, it is necessary to use label dependencies to simplify the problem (Zhang and Zhang, 2010; Tsoumakas et al., 2010).", "We propose a submodular maximization approach with a linear cost to find an informative set of labels.", "In contrast to the other similar approaches (Balasubramanian and Lebanon, 2012; Bi and Kwok, 2013) which consider only label-label dependencies, we also consider label-feature dependencies and outlier labels that are highly independent of 
other labels.", "Solving the problem using the selected (smaller number of) labels leads to minimizing both representation and training error.", "Representation ability is equivalent to the power of the selected subset to reconstruct the remaining labels, and prediction ability is equivalent to training accuracy leading to less error propagation from predicted label subset to the remaining labels during reconstruction.", "Submodular maximization approaches have proved very effective in many applications, such as finding the most influential nodes in social networks to maximize the spread of information (for applications such as advertising and marketing (Kempe et al., 2003; Ohsaka et al., 2014)) and video and image collection summarization (Gygli et al., 2015; Tschiatschek et al., 2014).", "There are many effective algorithms such as (Mirzasoleiman et al., 2015) to make submodular optimization approaches much faster or do them in a distributed way (Mirrokni and Zadimoghaddam, 2015) to perform faster parallel processing for very large scale datasets.", "Many of the early proposed multi-label learning approaches struggle with large-scale applications, as they learn each label separately or investigate the label dependencies in a way that leads to a costly and complicated model (Tsoumakas et al., 2010).", "The other research trends is to transform the label space to a smaller space and map back the predicted results in the compressed space to the original space.", "Hsu et al. (2009) presented the first approach targeting label space compression based on compressed sensing, which assumes sparsity of the label space.", "An expensive optimisation problem has to be solved in the prediction step.", "Tai and Lin (2012); Chen and Lin (2012); Yu et al. (2014), and (Lin et al., 2014) used orthogonal projections and low-rank assumptions to extract a label matrix decomposition and find a low-dimensional embedding space.", "In (Bhatia et al., 2015b), the authors perform local embedding of the label vectors.", "To achieve stronger locality, they cluster the data into smaller regions, which is unstable and costly for high-dimensional spaces and one needs an ensemble of the learners to overcome this instability and achieve a good prediction accuracy.", "Although the previously proposed approaches make the embedding space smaller and more tractable, they may lead to loss of information as a result of transforming the label space to lower-dimensional spaces.", "Many of these approaches rely on low-rank assumptions which transform the sparse label space to a new dense embedding space resulting in even lower accuracy, with a higher prediction cost in the new complicated space (Bhatia et al., 2015a).", "Balasubramanian and Lebanon (2012) and Bi and Kwok (2013) proposed to select a subset of the labels, and solve the problem in the original label space, based on structure sparsity optimization and SVD decomposition, correspondingly.", "However, these methods are not tractable for large scale data and not compatible for the real application data.", "In addition, they have ignored the training error in the label selection step which can lead to selection of the labels that are hard to predict resulting in training error propagation through the next steps.", "Another recent thread of research includes the methods that partition the data into smaller groups: In the framework proposed by Barezi et al. (2017), the label space is divided into smaller independent groups, while Agrawal et al. 
(2013), Prabhu and Varma (2014), and Prabhu et al. (2018) propose tree-based methods which partition the data into tree-structured hierarchical groups.", "These partitioning-based approaches avoid information loss from dimension reduction.", "However, finding a partitioning tree is a very complicated and time-consuming problem, and these approaches require solving a complicated optimization problem to perform the partitioning at each node, which is expensive and needs many training samples.", "In addition, the tree-based approaches suffer from error propagation through the hierarchy and need many training samples to avoid under-fitting in the lower levels of the partitioning tree (Liu et al., 2005).", "Instead of making a structural assumption on the relation between the labels, Yen et al. (2016) assume the label space is highly sparse and has a strong correlation with the feature space.", "They ignore the label space correlation information.", "Yen et al. (2017) proposed the parallel version of (Yen et al., 2016).", "In this paper, we propose a landmark selection framework for selecting the most informative labels and solving the multi-label learning problem with these labels.", "As an example, consider predicting the commercial impact of a new event on some global organizations (equivalent to the labels in our problem) given a history of the impact of previous events (equivalent to the features and training data in our problem).", "Instead of predicting the impact on each organization individually, we predict only the impact on a small number of organizations which are both easier to predict and analyze according to the available data, and more indicative of the economy and the other organizations.", "Being indicative means that if we know the impact of the new event on these organizations, it can help us to predict the reaction of the other organizations.", "More formally, we optimize the following set function f(S), given in Equation 1:", "f(S) = (how members of set S are individually predictable) + (how members of set S can represent the members not included in S) = (prediction ability) + (representation ability) = (label-feature dependency) + (label-label dependency) (1)
", "The proposed method includes both label-label and label-feature dependencies in order to minimize both representation and training error.", "Previous similar methods ignore label-feature dependencies in the subset selection step, allowing the training error for the selected subset of the labels to propagate to the reconstructed labels and affect the final predictions.", "In addition, to avoid information loss, we also extract and predict outlier labels with weak dependency on other labels and treat them separately.", "Our construction results in a monotone submodular function of label sets, allowing us to use a maximization framework that achieves a good theoretical bound via a fast greedy approach with linear cost (Nemhauser et al., 1978).", "We use a method based on Alternating Direction Method of Multipliers (ADMM) (Boyd, 2011) optimization to learn a linear mapping back to the original label space.", "Therefore, during training, we can select and learn the most informative label subset using a submodular maximization framework of linear cost.", "During prediction, we can use the selected subset to represent the remaining labels using a linear equation with a cost linear in the number of labels.", "Submodular functions have a natural diminishing-returns property which makes them suitable for many applications.", "A submodular function is a set function with the property that as the size of the selected subset increases, the incremental value of the function from adding a new element to the selected subset does not increase.", "The formal definition of a submodular function is as follows.", "Definition 1. For a set function f(S): 2^V \\to \\mathbb{R} defined for a finite ground set V = \\{1, 2, \\dots, n\\}, the marginal gain of adding each new member can be computed as f(e \\mid S) = f(\\{e\\} \\cup S) - f(S).", "The function f(.) is submodular if for each A \\subseteq B \\subseteq V and e \\in (V \\setminus A) \\cap (V \\setminus B), we have f(e \\mid A) \\geq f(e \\mid B).", "Equivalently, the function f(S): 2^V \\to \\mathbb{R} is submodular if for any two sets A, B \\subseteq V, f(A \\cup B) + f(A \\cap B) \\leq f(A) + f(B).", "Monotonicity of submodular functions is a useful property, meaning that the value of the function does not decrease by adding a new member to the input set; it can be defined as follows.", "Definition 2. A submodular function f(.) is monotone (non-decreasing) if for every T \\subseteq S, we have that f(T) \\leq f(S).", "A simple example of a submodular function is the setup cost in a factory.", "Suppose that a factory is capable of making any one of a large finite set V of products.", "In order to produce product e \\in V, it is necessary to set up the machines needed to manufacture e, and this costs money.", "The setup cost is non-linear, and it depends on which other products you choose to produce.", "For example, if you are already producing iPhones, then the setup cost for also producing iPads is small, but if you are not producing iPhones, the setup cost for producing iPads is large.", "We can find a good approximation of the optimum answer for a monotone submodular maximization problem by using the greedy approach under the selected subset size constraint.", "More formally:", "Theorem 3 (Nemhauser et al., 1978). For a nonnegative, monotone submodular function f, let S be a set of size k obtained by the greedy strategy similar to Algorithm 1.
Then, f(S) \\geq (1 - 1/e) f(S^*), where S^* is the optimum solution and e is Euler's constant, approximately equal to 2.71828.", "Algorithm 1: greedily select S = \\argmax_S f(S) subject to |S| = k (a sketch is given below).", "3.2 Submodularity for Label Subset Selection. We propose two submodular functions, aiming to select the most informative subset of the labels.", "The first function is a penalized version of the graph cut function.", "It scores label sets by their correlation to the other labels and penalizes their similarity to the previously selected labels (f_pen in Equation 3).", "The graph is constructed using the labels as nodes and label correlations as weights for the graph edges.", "The second function scores the predictability of labels with respect to the problem input features (f_score in Equation 5).", "Our final function for identifying the optimal subset of labels is a weighted sum of these (Equation 6).", "We consider the label correlations as graph weights w.", "The graph cut function f_cut(.) aims to find a subset of the graph nodes (labels) with the highest weights (strongest dependencies) to the remaining nodes (labels).", "This captures strong correlation of a label set to the other labels and thus its ability to reconstruct the other labels.", "The penalized version f_pen(.) adds one more term to increase the diversity of the selected labels and avoid choosing similar labels:", "f_cut(S) = \\sum_{i \\in V \\setminus S} \\sum_{j \\in S} w_{i,j} (2)", "f_pen(S) = f_cut(S) - \\lambda \\sum_{i,j \\in S, i \\neq j} w_{i,j}, \\lambda \\geq 0 (3)", "Theorem 4. f_cut(S) is a submodular function, and it is monotone for non-large values of |S| (Nemhauser et al., 1978).", "Theorem 5. f_pen(S) is a submodular function, and it is monotone for non-large values of \\lambda (Lin et al., 2009).
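A minimal sketch of the greedy selection in Algorithm 1 is below, instantiated with the penalized graph cut f_pen from Equations 2-3; the symmetric correlation matrix W, the weight lambda, and the toy sizes are placeholders, not values from the paper.

```python
# Sketch: greedy maximization of a monotone submodular objective
# (Algorithm 1), using the penalized graph cut f_pen of Eqs. 2-3.
import numpy as np

def f_pen(S, W, lam):
    """f_pen(S) = sum_{i not in S, j in S} w_ij - lam * sum_{i != j in S} w_ij."""
    S = list(S)
    if not S:
        return 0.0
    rest = [i for i in range(len(W)) if i not in set(S)]
    cut = W[np.ix_(rest, S)].sum()
    within = W[np.ix_(S, S)].sum() - np.trace(W[np.ix_(S, S)])
    return cut - lam * within

def greedy_select(f, n_labels, k):
    """Pick k labels, each time adding the element with the largest marginal gain."""
    S = []
    for _ in range(k):
        gains = [(f(S + [e]) - f(S), e) for e in range(n_labels) if e not in S]
        best_gain, best_e = max(gains)
        S.append(best_e)
    return S

rng = np.random.default_rng(0)
W = rng.random((20, 20)); W = (W + W.T) / 2  # symmetric label-correlation matrix
selected = greedy_select(lambda S: f_pen(S, W, lam=0.5), n_labels=20, k=5)
print(selected)
```

By Theorem 3, this simple loop is guaranteed to reach at least a (1 - 1/e) fraction of the optimal objective value.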
", "The proofs for Theorems 4 and 5 are provided in the supplementary section.", "It is also important to consider predictability, i.e., the training error for the selected subset of the labels, in order to avoid the prediction error of labels with high training error being propagated to the whole label space.", "As an estimate of predictability, we use either a G^2 or \\chi^2 independence test for discrete data, and a Fisher's Z or t test for continuous data, in order to reject or accept the null hypothesis of independence (Tsamardinos and Borboudakis, 2010).", "Since these measures include an implicit normalization, the frequency of the classes in the training data does not affect the sampling step.", "A higher dependency score between a label and the input feature space means a stronger correlation of the label with the feature space and higher predictability.", "Given label predictability scores f_{ij} for label i and input feature j, and D input features, we calculate the dependency score f_i of the i-th label and the input features:", "f_i = \\sum_{j=1}^{D} f_{ij} (4)", "Note that f_i \\geq 0.", "We then define the following set function, which also is monotone and submodular (Theorem 6):", "f_score(S) = \\sum_{i \\in S} f_i (5)", "Theorem 6. f_score(S) is a monotone submodular function.", "Proof. For w_i equal to the sum over the dependency scores of the i-th label and the feature space, f(S) = \\sum_{i \\in S} w_i is a linear function with w_i \\geq 0.", "Any linear function of the form f(S) = \\sum_{i \\in S} w_i is a submodular function: if S \\subseteq R, then f(k \\mid S) - f(k \\mid R) = 0, so f(k \\mid S) \\geq f(k \\mid R).", "Additionally, if w_i \\geq 0 for all i, then f is monotone, because f(S \\cup \\{k\\}) - f(S) = w_k and w_k \\geq 0.", "Moreover, \\max_{|S| = k} f(S) = \\max_{|S| = k} \\sum_{i \\in S} w_i.", "Therefore f(S) is a monotone submodular function.
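A minimal sketch of Equations 4-5 is below, using scikit-learn's chi-squared statistic as one concrete dependency test (the paper's G^2 and Fisher's Z/t variants are analogous); the toy arrays and the choice of chi2 are assumptions.

```python
# Sketch: per-label feature-dependency scores f_i (Eq. 4) and f_score (Eq. 5)
# using chi-squared statistics; assumes non-negative features X (e.g., counts).
import numpy as np
from sklearn.feature_selection import chi2

def dependency_scores(X, Y):
    """f_i = sum over features of the chi2 statistic between X and label column i."""
    return np.array([chi2(X, Y[:, i])[0].sum() for i in range(Y.shape[1])])

def f_score(S, f_i):
    return f_i[list(S)].sum()  # Eq. 5: sum of dependency scores of selected labels

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(100, 30))  # toy count features
Y = rng.integers(0, 2, size=(100, 8))   # toy binary label matrix
f_i = dependency_scores(X, Y)
print(f_score({0, 3, 5}, f_i))
```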
) by positive weights, which results in a new submodular function that includes both the representation ability and the prediction ability of the selected labels.", "We choose a model parameter \lambda > 0, giving us our final submodular function: f(S) = f_{pen}(S) + \lambda\, f_{score}(S) \quad (6)", "The main step of our proposed framework is to propagate the predicted values for the selected label subset to the full set of labels in order to recover the original space.", "Therefore, we aim to find a linear relation capturing the dependency between the selected labels and all the other labels.", "In the prediction step, this linear function obtains the full label set by combining the subset (Y_s) and outlier (E) predictions produced by the regression functions discussed in Section 3.4.", "Given 1-hot representations Y_s over the reduced set of labels and Y over the full set of labels, we seek matrices Z and E that recover the original labels: Y = Y_s Z + E \quad (7)", "To find the optimal Z and E, we solve the optimization problem in Equation 8, where Y and Y_s are matrices populated with our training data.", "Note that \|E\|_{2,1} is the L_1 norm of the L_2 norms of the columns of E.", "The sparse matrix Z is a k \times L matrix which, due to the sparsity constraint \|Z\|_1, includes only a few representative labels for each label (Y = Y_s Z).", "The Z matrix encodes the dependency information and performs the propagation of the predicted label subset to the full label set, while the nonzero columns of the matrix E indicate the outlier and tail label set O, which cannot be computed well through their relation to the other labels.", "The index set of the nonzero columns of matrix E indicates the outlier labels.", "\lambda is a model parameter.", "The alternating direction method of multipliers (ADMM) (Boyd, 2011; Nesterov, 2004; Beck and Teboulle, 2009) provides an efficient algorithm for solving this problem, achieving a convergence rate of O(1/T^2) (where T is the number of iterations).", "ADMM handles problems with more than one unknown variable (Z and E in our case) by alternating between optimizing each variable using the augmented Lagrangian.", "Please see the supplementary materials for more detail on the ADMM method and how it is applied in this case.", "We now train a linear classifier to predict labels in the reduced label set S \cup O and map back to the full label set.", "Given the features of the training data X and the corresponding labels Y_s and Y_o from the selected and outlier labels, we learn linear regression parameters w_s, b_s for the selected labels and w_e, b_e for the outlier labels: \arg\min_{w_s, b_s} \|Y_s - (X w_s + b_s)\| + \frac{\lambda_1}{2} \|w_s\|^2, \quad \arg\min_{w_e, b_e} \|Y_o - (X w_e + b_e)\| + \frac{\lambda_2}{2} \|w_e\|^2 \quad (9)", "Since all these training tasks are independent of each other, this step is highly parallelizable.", "The final values for the labels are computed by propagating the selected label subset through the linear relation of Equation 7: \hat{Y}_s = X w_s + b_s, \quad \hat{E} = X w_e + b_e \quad (10), \qquad \hat{Y} = \hat{Y}_s Z + \hat{E} \quad (11)", "An overview of the steps for training and prediction is shown in Algorithms 2 and 3.", "Algorithm 2 (training). Input: training data X and Y. 1: Find the best label subset by submodular optimization over the function in Equation 6; 2: Find the linear propagation equation through ADMM optimization over the problem in Equation 8; 3: Fit the linear regression models over the small subset of labels and the outliers by Equation 9. Output: label subset, outliers, propagation and regression models.", "Algorithm 3: Prediction Algorithm.", "We used six different datasets in the experiments.", "The
Bibtex dataset is a text dataset extracted from the BibSonomy website (Katakis et al., 2008); it contains metadata for bibtex items, such as the title of the paper and the authors, and its features are extracted according to term frequency.", "The Mediamill dataset is extracted from the Mediamill contest datasets, which include low-level multimedia features (visual and textual features) extracted from 85 hours of international news videos from the TRECVID 2005/2006 benchmark datasets (Snoek et al., 2006), labeled using a lexicon of 101 semantic concepts, like commercials, nature, and baseball.", "The Eurlex dataset includes 19,348 legal documents from European nations, containing several different types of documents, including treaties, legislation, case-law and legislative proposals, classified according to the EUROVOC descriptor using 3993 different classes, with 5000 features extracted using common TF-IDF term weighting (Mencia and Furnkranz, 2008).", "The Delicious dataset is a text dataset extracted from the del.icio.us social bookmarking site on the 1st of April 2007; it contains textual data of web pages along with their user-defined tags (Tsoumakas et al., 2008).", "The content of web pages was represented using the Boolean bag-of-words model.", "Wiki10-31K is a collection of social tags for given Wikipedia pages with TF-IDF features (Zubiaga, 2012).", "The statistics of these datasets are provided in Table 1.", "4.2 Experimental Setup For the small datasets, Bibtex, Mediamill, Delicious, and Eurlex, the reported results are the average of 10 different experiments over random partitions of each dataset.", "For the larger dataset, Wiki10-31K, we ran one experiment with the training and testing partition reported in Table 1.", "For all experiments we chose a label subset size of 100, except for Mediamill, where we chose 30, since 100 would represent all labels.", "Model tuning is done in two phases: first we tune the group-sparsity parameter (Equation 8) and the weighting of the submodular functions (Equation 6); then, with these two fixed, we tune \lambda_1 and \lambda_2, the regression parameters for mapping back to the original label set (Equation 9).", "All parameters were chosen by measuring the precision of 10-fold cross validation and using a grid search over the values \{0, 10^{-3}, \ldots, 10^{+3}\} for each dataset.", "The proposed method was compared with several state-of-the-art methods with diverse approaches: LEML (Yu et al., 2014), CPLST (Chen and Lin, 2012), CS (Hsu et al., 2009) and SLEEC (Bhatia et al., 2015b), which are embedding-based approaches with a low-rank or sparse assumption in the label space; ML-CSSP (Bi and Kwok, 2013), which solves the problem in the original label space but ignores the training error in the subset selection step; and FastXML (Prabhu and Varma, 2014) and PD-sparse (Yen et al., 2016), which do not use an embedding transformation and aim to solve the problem without compression or sampling.", "We used the reported results where available, and otherwise tuned the parameters of the baseline algorithms by means of 10-fold cross validation.", "Table 2 shows the average and standard deviation of Precision@k for the four small-scale datasets, Bibtex, Mediamill, Delicious, and Eurlex, and the large-scale dataset Wiki10-31k.", "For Wiki10-31k, results are reported only for those baselines that were tractable.", "The results for nDCG@k are included in the supplementary material, Table 5.",
"Since the SLEEC and FastXML methods are ensemble-based, using multiple nonlinear models, it is not fair to compare them with single-model methods such as our own.", "These methods partition the sample space into smaller tractable clusters and obtain separate classifiers for each partition.", "We compare our method with these in Table 3.", "The proposed approach in most cases has significantly better results than the other methods on both measures.", "The embedding-based approaches suffer from an accumulation of the embedding and training errors (Balasubramanian and Lebanon, 2012); in the proposed approach, however, we have removed the embedding step and considered training error minimization at the label subset selection step.", "On the other hand, the non-embedding approaches such as PD-sparse (Yen et al., 2016) ignore the label-space interdependency information, which can be useful to improve the prediction accuracy for the labels that are not easy to predict from input features alone.",
Table 1: Dataset statistics
Dataset | Domain | Number of features | Number of labels | Training points | Testing points
Bibtex | Text | 1836 | 159 | 4880 | 2515
Delicious | Text (Web) | 500 | 983 | 12920 | 3185
Mediamill | Video | 120 | 101 | 30993 | 12914
Eurlex | Text | 5000 | 3993 | 17413 | 1935
Wiki10-31K | Text | 101938 | 30938 | 14146 | 6616
Table 2: Non-ensemble models with k = 100 or 30 (Mediamill); entries are mean ± standard deviation.
Dataset | Metric | Proposed | PD-sparse | LEML | CPLST | CS | ML-CSSP
Bibtex | P@1 | 64.56 ± 0.79 | 61.29 ± 0.65 | 62.54 ± 0.52 | 62.38 ± 0.63 | 58.87 ± 0.61 | 44.98 ± 1.15
Bibtex | P@3 | 39.51 ± 0.34 | 35.82 ± 0.46 | 38.41 ± 0.42 | 37.84 ± 0.48 | 33.53 ± 0.49 | 30.43 ± 0.59
Bibtex | P@5 | 28.80 ± 0.26 | 25.74 ± 0.30 | 28.21 ± 0.24 | 27.62 ± 0.27 | 23.72 ± 0.29 | 23.53 ± 0.37
Delicious | P@1 | 65.13 ± 0.39 | 51.82 ± 1.40 | 65.67 ± 0.73 | 65.31 ± 0.88 | 61.36 ± 0.38 | 63.04 ± 1.28
Delicious | P@3 | 59.07 ± 0.41 | 44.18 ± 1.04 | 60.55 ± 0.48 | 59.95 ± 0.43 | 56.46 ± 0.33 | 56.26 ± 1.13
Delicious | P@5 | 54.52 ± 0.34 | 38.95 ± 0.94 | 56.08 ± 0.43 | 55.31 ± 0.50 | 52.07 ± 0.30 | 50.16 ± 0.83
Mediamill | P@1 | 84.25 ± 0.27 | 81.86 ± 4.08 | 84.01 ± 0.31 | 83.35 ± 0.33 | 83.82 ± 5.92 | 78.95 ± 0.23
Mediamill | P@3 | 67.29 ± 0.24 | 62.52 ± 2.31 | 67.20 ± 0.23 | 66.18 ± 0.22 | 67.32 ± 4.42 | 60.93 ± 0.24
Mediamill | P@5 | 52.90 ± 0.15 | 45.11 ± 1.14 | 52.80 ± 0.18 | 51.46 ± 0.20 | 52.80 ± 2.61 | 44.27 ± 0.20
Eurlex | P@1 | 81.04 ± 0.81 | 76.43 ± 1.04 | 63.40 ± 1.58 | 72.28 ± 0.99 | 58.52 ± 1.06 | 62.09 ± 2.12
Eurlex | P@3 | 67.91 ± 0.97 | 60.37 ± 0.74 | 50.35 ± 1.44 | 58.16 ± 1.11 | 45.51 ± 0.71 | 48.39 ± 1.31
Eurlex | P@5 | 56.81 ± 0.97 | 49.72 ± 0.74 | 41.28 ± 1.07 | 47.73 ± 0.97 | 32.47 ± 0.58 | 40.11 ± 1.10
Wiki10-31k | P@1 | 86.05 | 82.14 | 73.47 | - | - | -
Wiki10-31k | P@3 | 76.85 | 69.68 | 62.43 | - | - | -
Wiki10-31k | P@5 | 67.77 | 58.76 | 54.35 | - | - | -
"ML-CSSP (Bi and Kwok, 2013) and the work of Balasubramanian and Lebanon (2012) attempt, like us, to find the most informative labels in order to perform label subset selection.", "However, our approach improves on their results, supporting the idea that considering only the label-space information (ignoring label-feature dependency information) in the label selection step can lead to label sets that are not easy to predict and whose training error will be propagated through to the final model predictions.", "The SLEEC and FastXML methods are ensemble-based methods using multiple nonlinear models and can be expected to outperform single-model methods such as ours.", "SLEEC aims to partition the sample space into smaller tractable clusters to obtain a nonlinear embedding and a trained model for each partition.", "FastXML finds a partitioning tree by using nonlinear binary classifiers to partition the samples at each node, which is a very complicated and unstable problem for high-dimensional spaces.", "Therefore, both SLEEC and FastXML need an ensemble of learners in order to overcome this instability and achieve good prediction accuracy.", "Table 3 shows that SLEEC
performs best on the Mediamill dataset and FastXML performs best on the Delicious dataset.", "This suggests that, for these datasets, the assumption that a representative label subset can be found with a linear method holds less well than the low-rank and tree-based assumptions.", "However, for the Bibtex dataset, our proposed method is competitive with the best results, and for Eurlex and Wiki10-31k, our method is substantially better than both SLEEC and FastXML, a notable achievement for a single-model approach.", "The ablation study results in Table 4 show how different datasets benefit from different parts of our proposed framework, with all parts contributing substantially to at least one dataset.", "We report the results obtained by considering only label-label dependency information (f_{pen}), only label-feature dependency information (f_{score}), and all three parts combined (f_{pen}, f_{score} and outlier information).", "The results support the assertions that considering only the label-space information (ignoring label-feature dependency information) in the label selection step causes the prediction error of labels with high training error to be propagated to the whole label space, and that it is important to also select outlier labels that are hard to predict from the other selected labels.", "Figure 1 shows a marked initial increase in performance with subset size; however, the results become more stable as the subset size grows.", "This observation, which is consistent with the submodular property, provides a clue that using a more complicated training model, such as a nonlinear model, for a smaller selected set of labels may lead to higher performance than increasing the subset size while using a linear model.", "We propose a novel approach for extreme multilabel classification that simplifies the problem by selecting an informative and easily modelled subset of labels and subsequently mapping back to the full set of labels.", "While the method is particularly well suited to text datasets, it is applicable as a general machine learning method in other domains.", "Our novel label selection mechanism follows three principles: a new submodular maximisation framework that combines label-label dependencies and label training error, together with a mechanism to identify outlier labels that are hard to reconstruct; modelling only the most informative labels, which helps to avoid transforming the label space to a new embedding space and thereby avoids the accumulation of training and embedding errors; and a greedy approach for our monotone submodular framework with linear cost and good theoretical convergence.", "Extensive experiments using a linear prediction model on the selected labels, conducted on five standard real-world datasets, demonstrate that our method achieves better performance than single-model approaches, and better or comparable performance to ensemble-based methods.", "In future work, we can improve our model by using a nonlinear training model instead of a simple linear regression model for the selected subset of the labels.", "Moreover, the ablation study results suggest that a nonlinear propagation model to reconstruct the full label set may be of benefit.", "This work was partially funded by grants #16214415 and #16248016 of the Hong Kong Research Grants Council, ITS/319/16FP of the Innovation Technology Commission, and RDC 1718050-0 of EMOS.AI." ]
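The subset-selection step in the record above (Algorithm 1 with the objective of Equation 6) reduces to a standard greedy loop for monotone submodular maximization, which enjoys the (1 - 1/e) guarantee quoted at the start of the record. Below is a minimal sketch under stated assumptions: a symmetric label-correlation matrix `W` (L x L) and precomputed dependency scores `dep` (the f_i of Equation 4) are given, and `lam`/`gamma` stand in for the paper's two weighting parameters. All names are illustrative, not taken from the authors' code.

```python
import numpy as np

def f_cut(S, W):
    """Graph cut (Eq. 2): total weight between S and the remaining labels."""
    S = list(S)
    rest = [i for i in range(W.shape[0]) if i not in S]
    return W[np.ix_(rest, S)].sum()

def f_pen(S, W, lam):
    """Penalized graph cut (Eq. 3): subtract within-set similarity for diversity."""
    S = list(S)
    within = W[np.ix_(S, S)].sum() - np.trace(W[np.ix_(S, S)])  # sum over i != j
    return f_cut(S, W) - lam * within

def f_total(S, W, dep, lam, gamma):
    """Final objective (Eq. 6): penalized cut plus weighted predictability score."""
    return f_pen(S, W, lam) + gamma * sum(dep[i] for i in S)

def greedy_select(W, dep, k, lam=0.1, gamma=1.0):
    """Algorithm 1: repeatedly add the label with the largest marginal gain."""
    S, remaining = [], set(range(W.shape[0]))
    for _ in range(k):
        base = f_total(S, W, dep, lam, gamma)
        best = max(remaining,
                   key=lambda j: f_total(S + [j], W, dep, lam, gamma) - base)
        S.append(best)
        remaining.remove(best)
    return S
```

This naive version re-evaluates the objective for every candidate; a production variant would cache marginal gains (lazy greedy), but the selection it returns is the same.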
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "objective", "result", "abstain", "other" ]
[ "The current COVID-19 pandemic has lead to the creation of many corpora that facilitate NLP research and downstream applications to help fight the pandemic.", "However, most of these corpora are exclusively for English.", "As the pandemic is a global problem, it is worth creating COVID-19 related datasets for languages other than English.", "In this paper, we present the first manually-annotated COVID-19 domain-specific dataset for Vietnamese.", "Particularly, our dataset is annotated for the named entity recognition (NER) task with newly-defined entity types that can be used in other future epidemics.", "Our dataset also contains the largest number of entities compared to existing Vietnamese NER datasets.", "We empirically conduct experiments using strong baselines on our dataset, and find that: automatic Vietnamese word segmentation helps improve the NER results and the highest performances are obtained by fine-tuning pre-trained language models where the monolingual model PhoBERT for Vietnamese (Nguyen and Nguyen, 2020) produces higher results than the multilingual model XLM-R (Conneau et al., 2020).", "We publicly release our dataset at: https://github.com/ VinAIResearch/PhoNER_COVID19 .", "As of early November 2020, the total number of COVID-19 cases worldwide has surpassed 50M.", "1 The world is once again hit by a new wave of COVID-19 infection with record-breaking numbers of new cases reported everyday.", "Along with the outbreak of the pandemic, information about the COVID-19 is aggregated rapidly through different types of texts in different languages (Aizawa et al., 2020).", "Particularly, in Vietnam, text reports containing official information from the government about COVID-19 cases are presented in great 1 https://www.worldometers.info/coronavirus/worldwide-graphs/#total-cases detail, including de-identified personal information, travel history, as well as information of people who come into contact with the cases.", "The reports are frequently kept up to date at reputable online news sources, playing a significant role to help the country combat the pandemic.", "It is thus essential to build systems to retrieve and condense information from those official sources so that related people and organizations can promptly grasp the key information for epidemic prevention tasks, and the systems should also be able to adapt and sync quickly with epidemics that take place in the future.", "One of the first steps to develop such systems is to recognize relevant named entities mentioned in the texts, which is also known as the NER task.", "Compared to other languages, data resources for the Vietnamese NER task are limited, including only two public datasets from the VLSP 2016 and 2018 NER shared tasks (Huyen and Luong, 2016; Nguyen et al., 2018b).", "Here, the VLSP-2018 NER dataset is an extension of the VLSP-2016 NER dataset with more data.", "These two datasets only focus on recognizing generic entities of person names, organizations, and locations in online news articles.", "Thus, making them difficult to adapt to the context of extracting key entity information related to COVID-19 patients.", "This leads to our work's main goals that are:", "(i) To develop a NER task in the COVID-19 specified domain, that potentially impacts research and downstream applications, and", "(ii) To provide the research community with a new dataset for recognizing COVID-19 related named entities in Vietnamese.", "In this paper, we present a named entity annotated dataset with newly-defined entity 
types that can be applied to future epidemics.", "The dataset contains informative sentences related to COVID-19, extracted from articles crawled from reputable Vietnamese online news sites.", "Here, we do not consider other types of popular social media in Vietnam, such as Facebook, as they contain much noisy information and are not as reliable as official news sources.",
[Table 1 (excerpt): Label - Definition; PATIENT_ID - Unique identifier of a COVID-19 patient in Vietnam.]
"We then empirically evaluate strong baseline models on our dataset.", "Our contributions are summarized as follows: We introduce the first manually annotated Vietnamese dataset in the COVID-19 domain.", "Our dataset is annotated with 10 different named entity types related to COVID-19 patients in Vietnam.", "Compared to the VLSP-2016 and VLSP-2018 Vietnamese NER datasets, our dataset has the largest number of entities, consisting of 35K entities over 10K sentences.", "We empirically investigate strong baselines on our dataset, including BiLSTM-CNN-CRF (Ma and Hovy, 2016) and the pre-trained language models XLM-R (Conneau et al., 2020) and PhoBERT (Nguyen and Nguyen, 2020).", "We find that: (i) automatic Vietnamese word segmentation helps improve the NER results, and (ii) the highest results are obtained by fine-tuning the pre-trained language models, where PhoBERT does better than XLM-R.", "We publicly release our dataset for research or educational purposes.", "We hope that our dataset can serve as a starting point for future COVID-19 related Vietnamese NLP research and applications.", "Most COVID-19 related datasets are constructed from two types of sources.", "The first one is scientific publications, including the datasets CORD-19 (Wang et al., 2020) and LitCovid (Chen et al., 2020), which help facilitate many types of research works, such as building search engines to retrieve relevant information from scholarly articles (Esteva et al., 2020; Zhang et al., 2020; Verspoor et al., 2020), question answering and summarization (Lee et al., 2020; Su et al., 2020).", "Recently, Colic et al. (2020) fine-tuned a BERT-based NER model on the CRAFT corpus (Verspoor et al., 2012) to recognize and then normalize biomedical ontology and terminology entities in LitCovid.", "The second type is social media data, particularly Tweets.", "COVID-19 related Tweet datasets are built for many analytic tasks, such as the identification of informative Tweets (Nguyen et al., 2020b), and disinformation detection and fact-checking (Shahi and Nandini, 2020; Alam et al., 2020; Alsudias and Rayson, 2020).", "The most relevant work to ours is that of Zong et al. (2020), which aims to extract COVID-19 events reporting test results, death cases, cures and prevention from English Tweets.", "As Twitter is rarely used by Vietnamese people, we could not use it for data collection.", "We define 10 entity types with the aim of extracting key information related to COVID-19 patients, which is especially useful in downstream applications.", "In general, these entity types can be used in the context of not only the COVID-19 pandemic but also other future epidemics.", "Each entity type is briefly described in Table 1.",
"See the Appendix for entity examples as well as some notes on the entity types.", "We first crawl articles tagged with \"COVID-19\" or \"COVID\" keywords from reputable Vietnamese online news sites, including VnExpress (https://vnexpress.net), ZingNews (https://zingnews.vn), BaoMoi (https://baomoi.com) and ThanhNien (https://thanhnien.vn).", "These articles are dated between February 2020 and August 2020.", "We then segment the crawled news articles' primary text content into sentences using RDRSegmenter (Nguyen et al., 2018a) from VnCoreNLP (Vu et al., 2018).", "To retrieve informative sentences about COVID-19 patients, we employ BM25Plus (Trotman et al., 2014) with search queries of common keywords appearing in sentences that report confirmed, suspected, recovered, or death cases, as well as the travel history or location of the cases.", "From the top 15K sentences ranked by BM25Plus, we manually filter out sentences that do not contain information related to patients in Vietnam, thus resulting in a dataset of 10027 raw sentences.", "We develop an initial version of our annotation guidelines and then randomly sample a pilot set of 1K sentences from the dataset of 10027 raw sentences for the first phase of annotation.", "Two of the guideline developers are employed to annotate the pilot set independently.", "Following Brandsen et al. (2020), we utilize the F1 score to measure the inter-annotator agreement between the two annotators at the entity span level, resulting in an F1 score of 0.88.", "We then host a discussion session to resolve annotation conflicts, identify complex cases, and refine the guidelines.", "In the second annotation phase, we divide the whole dataset of 10027 sentences into 10 non-overlapping and equal subsets.", "Each subset contains 100 sentences from the pilot set from the first annotation phase.", "For this second phase, we employ 10 annotators who are undergraduate students with strong linguistic abilities (each annotator annotates one subset and is paid 0.05 USD per sentence).", "The annotation quality of each annotator is measured by the F1 score calculated over the 100 sentences that already have gold annotations from the pilot set.", "All annotators are asked to revise their annotations until they achieve an F1 of at least 0.92.", "Finally, we revisit each annotated sentence to make further corrections if needed, resulting in a final gold dataset of 10027 annotated sentences.", "Note that in written Vietnamese, in addition to marking word boundaries, white space is also used to separate the syllables that constitute words.", "Therefore, the annotation process is performed on syllable-level text for convenience.", "To obtain a word-level variant of the dataset, we apply the RDRSegmenter to perform automatic Vietnamese word segmentation; e.g., the 4-syllable written text \"bệnh viện Đà Nẵng\" (Da Nang hospital) is word-segmented into the 2-word text \"bệnh_viện (hospital) Đà_Nẵng (Da_Nang)\".", "Here, the automatic Vietnamese word segmentation outputs do not affect the gold boundaries of entity mentions.", "We randomly split the gold annotated dataset of 10027 sentences into training/validation/test sets with a ratio of 5/2/3, ensuring comparable distributions of entity types across these three sets.", "Statistics of our dataset are presented in Table 2.",
"4 Experiments 4.1 Experimental Setup We formulate the COVID-19 NER task for Vietnamese as a sequence labeling problem with the BIO tagging scheme.", "We conduct experiments on our dataset using strong baselines to investigate: (i) the influence of automatic Vietnamese word segmentation (here, an input sentence can be represented at either the syllable or the word level), and (ii) the usefulness of pre-trained language models.", "The baselines include BiLSTM-CNN-CRF (Ma and Hovy, 2016) and the pre-trained language models XLM-R (Conneau et al., 2020) and PhoBERT (Nguyen and Nguyen, 2020).", "XLM-R is a multilingual variant of RoBERTa (Liu et al., 2019), pre-trained on a 2.5TB multilingual dataset that contains 137GB of syllable-level Vietnamese texts.", "PhoBERT is a monolingual variant of RoBERTa, pre-trained on a 20GB word-level Vietnamese dataset.", "We employ the BiLSTM-CNN-CRF implementation from AllenNLP (Gardner et al., 2018).", "Training BiLSTM-CNN-CRF requires input pre-trained syllable- and word-level embeddings for the syllable- and word-level settings, respectively.", "Thus we employ the pre-trained Word2Vec syllable and word embeddings for Vietnamese from Nguyen et al. (2020a).", "These embeddings are fixed during training.", "The optimal hyper-parameters that we grid-searched for BiLSTM-CNN-CRF are presented in Table 3.", "We utilize the transformers library (Wolf et al., 2020) to fine-tune XLM-R and PhoBERT for the syllable- and word-level settings, respectively, using Adam (Kingma and Ba, 2014) with a fixed learning rate of 5e-5 and a batch size of 32 (Liu et al., 2019).", "The baselines are trained/fine-tuned for 30 epochs.", "We evaluate the Micro-average F1 score after each epoch on the validation set (here, we apply early stopping if we find no performance improvement after 5 consecutive epochs).", "We then choose the best model checkpoint to report the final score on the test set.", "Note that each F1 score reported is an average over 5 runs with different random seeds.", "Table 4 shows the final entity-level NER results of the baselines on the test set.", "In addition to the standard Micro-average F1 score, we also report the Macro-average F1 score.", "We categorize the results under two comparable settings: using the syllable-level dataset and its automatically-segmented word-level variant for training and evaluation.", "We find that the performances of word-level models are higher than their syllable-level counterparts, showing that automatic Vietnamese word segmentation helps improve NER; e.g., BiLSTM-CNN-CRF improves from 0.906 to 0.910 Micro-F1 and from 0.858 to 0.875 Macro-F1.", "We also find that fine-tuning the pre-trained language models XLM-R and PhoBERT produces better performances than BiLSTM-CNN-CRF.", "Here, PhoBERT outperforms XLM-R (Micro-F1: 0.945 vs. 0.938; Macro-F1: 0.931 vs. 0.911), thus reconfirming the effectiveness of pre-trained monolingual language models on language-specific downstream tasks (Nguyen and Nguyen, 2020).", "We perform an error analysis using the best performing model, PhoBERT-large, which produces 353 incorrect predictions in total on the validation set.", "The first error group consists of 69/353 instances with correct entity boundaries (i.e.
exact spans) and incorrect entity labels.", "This is largely due to the fact that the model could not differentiate between LOCATION and ORGANIZATION entities.", "This is not surprising because of the ambiguity between these two entity types, in which the same entity mention may act as either LOCATION or ORGANIZATION depending on the sentence context.", "Also, in terms of contact tracing, it would be more useful to label an organization-like entity mention as LOCATION if we can infer that a patient presented at that organization; however, such inference requires additional world knowledge about the entity.", "In addition, in this error group, the model also struggles to recognize OCCUPATION entities correctly.", "Recall that an OCCUPATION entity mention must represent the job of a particular person labeled with PERSON_NAME or PATIENT_ID.", "Therefore, the model may be confused when deciding whether an occupation is linked to a determined person or not in a single sentence context.", "The second error group contains 65/353 instances with inexact spans overlapping the gold spans but having correct entity labels.", "These errors generally happen with multi-word ORGANIZATION entity mentions, where (i) an ORGANIZATION entity contains a nested location inside its span, e.g. \"Bệnh viện Lao và Bệnh phổi Cần Thơ\" (Can Tho hospital for Tuberculosis and Lung disease; here, Can Tho is a province in Vietnam), or (ii) an organization is a subdivision of a larger organization, e.g. \"Khoa tim mạch Bệnh viện Bạch Mai\" (Department of Cardiology, Bach Mai Hospital).", "The third group, of 8/353 errors with overlapping inexact spans and incorrect entity labels, does not provide us with any useful insight.", "The final group of the remaining 211/353 errors consists of predicted entities corresponding to gold O labels.", "This happens particularly in the case of LOCATION, where generic mentions, such as \"Bệnh viện tỉnh\" (province hospital), \"Trạm y tế xã\" (commune medical station), and \"chung cư\" (apartment building), are recognized as entities while in fact they are not.", "In this paper, we have presented the first manually-annotated Vietnamese dataset in the COVID-19 domain, focusing on the named entity recognition task.", "We empirically conduct experiments on our dataset to compare strong baselines and find that the input representations and the pre-trained language models all influence this COVID-19 related NER task.", "We hope that our dataset can serve as a starting point for further Vietnamese NLP research and applications in fighting COVID-19 and other future epidemics." ]
[ "abstain", "abstain", "abstain", "objective", "method", "method", "result", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "objective", "abstain", "method", "abstain", "result", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain" ]
[ "Existing works on multimodal affective computing tasks, such as emotion recognition, generally adopt a two-phase pipeline, first extracting feature representations for each single modality with hand-crafted algorithms and then performing end-to-end learning with the extracted features.", "However, the extracted features are fixed and cannot be further fine-tuned on different target tasks, and manually finding feature extraction algorithms does not generalize or scale well to different tasks, which can lead to sub-optimal performance.", "In this paper, we develop a fully end-to-end model that connects the two phases and optimizes them jointly.", "In addition, we restructure the current datasets to enable the fully end-to-end training.", "Furthermore, to reduce the computational overhead brought by the end-to-end model, we introduce a sparse cross-modal attention mechanism for the feature extraction.", "Experimental results show that our fully end-to-end model significantly surpasses the current state-of-the-art models based on the two-phase pipeline.", "Moreover, by adding the sparse cross-modal attention, our model can maintain performance with around half the computation in the feature extraction part.", "Humans show their characteristics through not only the words they use, but also the way they speak and their facial expressions.", "Therefore, in multimodal affective computing tasks, such as emotion recognition, there are usually three modalities: textual, acoustic, and visual.", "One of the main challenges in these tasks is how to model the interactions between different modalities, as they contain both supplementary and complementary information (Bal-truaitis et al., 2018).", "In the existing works, we discover that a two-phase pipeline is generally used (Zadeh et al., 2018a,b; Tsai et al., 2018, 2019; Rahman et al., 2020).", "In the first phase, given raw input data, feature representations are extracted with hand-crafted algorithms for each modality separately, while in the second phase, end-to-end multimodal learning is performed using extracted features.", "However, there are three major defects of this two-phase pipeline: 1) the features are fixed after extraction and cannot be further fine-tuned on target tasks; 2) manually searching for appropriate feature extraction algorithms is needed for different target tasks; and 3) the hand-crafted model considers very few data points to represent higher-level feature, which might not capture all the useful information.", "These defects can result in sub-optimal performance.", "In this paper, we propose a fully end-to-end model that connects the two phases together and optimizes them jointly.", "In other words, the model receives raw input data and produces the output predictions, which allows the features to be learned automatically through the end-to-end training.", "However, the current datasets for multimodal emotion recognition cannot be directly used for the fully end-to-end training, and we thus conduct a data restructuring to make this training possible.", "The benefits from the end-to-end training are that the features are optimized on specific target tasks, and there is no need to manually select feature extraction algorithms.", "Despite the advantages of the end-to-end training, it does bring more computational overhead compared to the two-phase pipeline, and exhaustively processing all the data points makes it computationally expensive and prone to over-fitting.", "Thus, to mitigate these side-effects, we also propose a 
multimodal end-to-end sparse model, a combination of a sparse cross-modal attention mechanism and a sparse Convolutional Neural Network (CNN) (Graham and van der Maaten, 2017), to select the most relevant features for the task and reduce the redundant information and noise in the video and audio.", "Experimental results show that the simple end-to-end training model is able to consistently outperform the existing state-of-the-art models, which are based on the two-phase pipeline.", "Moreover, the incorporation of the sparse cross-modal attention and sparse CNN is able to greatly reduce the computational cost while maintaining the performance.", "We summarize our contributions as follows.", "To the best of our knowledge, we are the first to apply a fully end-to-end trainable model to the multimodal emotion recognition task.", "We restructure the existing multimodal emotion recognition datasets to enable end-to-end training and cross-modal attention based on the raw data.", "We show that the fully end-to-end training significantly outperforms the current state-of-the-art two-phase models, and that the proposed sparse model can greatly reduce the computational overhead while maintaining the performance of the end-to-end training.", "We also conduct a thorough analysis and case study to improve the interpretability of our method.", "Human affect recognition is a popular and widely studied research topic (Mirsamadi et al., 2017; Zhang and Liu, 2017; Xu et al., 2020; Dai et al., 2020b).", "In recent years, there has been a trend to leverage multimodal information to tackle these research tasks; tasks such as emotion recognition (Busso et al., 2008), sentiment analysis (Zadeh et al., 2016, 2018b), and personality trait recognition (Nojavanasghari et al., 2016) have drawn more and more attention.", "Different methods have been proposed to improve the performance and the cross-modal interactions.", "In earlier works, early fusion (Morency et al., 2011; Perez-Rosas et al., 2013) and late fusion (Zadeh et al., 2016; Wang et al., 2017) of modalities were widely adopted.", "Later, more complex approaches were proposed.", "For example, Zadeh et al. (2017) introduced the Tensor Fusion Network to model the interactions of the three modalities by performing the Cartesian product, while Wang et al. (2019) used an attention gate to shift the words using the visual and acoustic features.", "In addition, based on the Transformer (Vaswani et al., 2017), Tsai et al. (2019) introduced the Multimodal Transformer to improve the performance given unaligned multimodal data, and Rahman et al.
(2020) introduced a multimodal adaptation gate to integrate visual and acoustic information into a large pre-trained language model.", "However, unlike some other multimodal tasks (Chen et al., 2017; Yu et al., 2019; Li et al., 2019) that use fully end-to-end learning, all of these methods require a feature extraction phase using hand-crafted algorithms (details in Section 5.2), which makes the whole approach a two-phase pipeline.", "The fully end-to-end multimodal model requires the inputs to be raw data for the three modalities (visual, textual and acoustic).", "The existing multimodal emotion recognition datasets cannot be directly applied for the fully end-to-end training, for two main reasons.", "First, the datasets provide a split of training, validation and test data with the hand-crafted features as the input of the model and emotion or sentiment labels as the output of the model.", "However, this dataset split cannot be directly mapped to the raw data, since the split indices cannot be matched back to the raw data.", "Second, the labels of the data samples are aligned with the text modality.", "However, the visual and acoustic modalities are not aligned with the textual modality in the raw data, which disables the fully end-to-end training.", "To make the existing datasets usable for the fully end-to-end training and evaluation, we need to reorganize them according to two steps: 1) align the text, visual and acoustic modalities; 2) split the aligned data into training, validation and test sets.", "In this work we reorganize two emotion recognition datasets: Interactive Emotional Dyadic Motion Capture (IEMOCAP) and CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI).", "Both have multi-class and multi-labelled data for multimodal emotion recognition, obtained by generating raw utterance-level data, aligning the three modalities, and creating a new split over the aligned data.", "In the following section, we will first introduce the existing datasets, and then we will give a detailed description of how we reorganize them.", "3.1 IEMOCAP IEMOCAP (Busso et al., 2008) is a multimodal emotion recognition dataset containing 151 videos.", "In each video, two professional actors conduct dyadic conversations in English.", "The dataset is labelled with nine emotion categories, but due to the data imbalance issue, we take the six main categories: angry, happy, excited, sad, frustrated, and neutral.", "As the dialogues are annotated at the utterance level, we clip the data per utterance using the provided text transcription times, which results in 7,380 data samples in total.", "Each data sample consists of three modalities: audio data with a sampling rate of 16 kHz, a text transcript, and image frames sampled from the video at 30 Hz.", "The provided pre-processed data from the existing work (Busso et al., 2008), available at http://immortal.multicomp.cs.cmu.edu/raw_datasets/processed_data/iemocap, doesn't provide an identifier for each data sample, which makes it impossible to reproduce the data from the raw sources.", "To cope with this problem, we create a new split for the dataset by randomly allocating 70%, 10%, and 20% of the data into the training, validation, and testing sets, respectively.", "The statistics of our dataset split are shown in Table 1.",
"3.2 CMU-MOSEI CMU-MOSEI (Zadeh et al., 2018b) comprises 3,837 videos from 1,000 diverse speakers with six emotion categories: happy, sad, angry, fearful, disgusted, and surprised.", "It is annotated at the utterance level, with a total of 23,259 samples.", "Each data sample in CMU-MOSEI consists of three modalities: audio data with a sampling rate of 44.1 kHz, a text transcript, and image frames sampled from the video at 30 Hz.", "We generate the utterance-level data from the publicly accessible raw CMU-MOSEI dataset (http://immortal.multicomp.cs.cmu.edu/raw_datasets/processed_data/cmu-mosei/seq_length_20/).", "The generated utterances are perfectly matched with the preprocessed data from the existing work (Zadeh et al., 2018b), but there are two issues with the existing dataset: 1) it includes many misaligned data samples; and 2) many of the samples do not exist in the generated data, and vice versa, in the provided standard split from the CMU MultiModal SDK (https://github.com/A2Zadeh/CMU-MultimodalSDK).", "To cope with the first issue, we perform data cleaning to remove the misaligned samples, which results in 20,477 clips in total.", "We then create a new dataset split following the CMU-MOSEI split for the sentiment classification task (http://immortal.multicomp.cs.cmu.edu/raw_datasets/processed_data/cmu-mosei/seq_length_50/mosei_senti_data_noalign.pkl).", "The statistics of the new dataset split setting are shown in Table 2.",
[Figure 2: overall model architecture; the cross-modal attention block consists of linear layers, a broadcast add, tanh, a linear layer with softmax, and Nucleus Sampling, followed by a sparse CNN; the example text shown in the figure is "Wow."]
"4 Methodology 4.1 Problem Definition We define I multimodal data samples as X = \{(t_i, a_i, v_i)\}_{i=1}^{I}, in which t_i is a sequence of words, a_i is a sequence of spectrogram chunks from the
computational overhead.", "To reduce this overhead without downgrading the performance, we introduce our Multimodal End-to-end Sparse Model (MESM).", "Figure 2 shows the overall architecture of MESM.", "In contrast to the fully end-to-end model, we replace the original CNN layers (except the first one for low-level feature capturing) with N cross-modal sparse CNN blocks.", "A cross-modal sparse CNN block consists of two parts, a cross-modal attention layer and a sparse CNN model that contains two sparse VGG layers and one sparse max-pooling layer.", "The cross-modal attention layer accepts two inputs: a query vector q R d and a stack of feature maps M RC S H W , where C, S, H, and W are the number of channels, sequence length, height, and width, respectively.", "Then, the cross-modal spatial attention is performed over the feature maps using the query vector.", "The cross-modal spatial attention can be formularized in the following steps: M q = tanh (( W m M + b m ) W q q ) (1) M i = softmax ( W i M q + b i ) (2) M ns = Nucleus Sampling ( M i ) (3) M o = M ns M, (4) in which W m R k C , W q R k d , and W i R k are linear transformation weights, and b m R k and b i R 1 are biases, where k is a pre-defined hyper-parameter, and represents the broadcast addition operation of a tensor and a vector.", "In Eq.2, the softmax function is applied to the ( H W ) dimensions, and M i RS H W is the tensor of the spatial attention scores corresponding to each feature map.", "Finally, to make the input feature maps M sparse while reserving important information, firstly, we perform Nucleus Sampling (Holtzman et al., 2019) on M i to get the topp portion of the probability mass in each attention score map ( p is a pre-defined hyper-parameter in the range of (0 , 1] ).", "In M ns , the points selected by the Nucleus Sampling are set to one and the others are set to zero.", "Then, we do broadcast point-wise multiplication between M ns and M to generate the output M o .", "Therefore, M o is a sparse tensor with some positions being zero, and the degree of sparsity is controlled by p .", "We use the submanifold sparse CNN (Graham and van der Maaten, 2017) after the cross-modal attention layer.", "It is leveraged for processing low-dimensional data which lies in a space of higher dimensionality.", "In the multimodal emotion recognition task, we assume that only part of the data is related to the recognition of emotions (an intuitive example is given in Figure 1), which makes it align with the sparse setting.", "In our model, the sparse CNN layer accepts the output from the cross-modal attention layer, and does convolution computation only at the active positions.", "Theoretically, in terms of the amount of computation (FLOPs) at a single location, a standard convolution costs z 2 mn FLOPs, and a sparse convolution costs amn FLOPs, where z is the kernel size, m is the number of input channels, n is the number of output channels, and a is the number of active points at this location.", "Therefore, considering all locations and all layers, the sparse CNN can help to significantly reduce computation.", "Following prior works (Tsai et al., 2018; Wang et al., 2019; Tsai et al., 2019; Dai et al., 2020a), we use the accuracy and F1-score to evaluate the models on the IEMOCAP dataset.", "On the CMU-MOSEI dataset, we use the weighted accuracy instead of the standard accuracy.", "Additionally, according to Dai et al. 
(2020a), we use the standard binary F1 rather than the weighted version.", "Weighted Accuracy Similar to existing works (Zadeh et al., 2018b; Akhtar et al., 2019), we use the weighted accuracy (WAcc) (Tong et al., 2017) to evaluate the CMU-MOSEI dataset, which contains many more negative samples than positive ones on each emotion category.", "If normal accuracy is used, a model will still get a fine score when predicting all samples to be negative.", "The formula of the weighted accuracy is WAcc.", "in which P means total positive, TP true positive, N total negative, and TN true negative.", "For our baselines, we use a two-phase pipeline, which consists of a feature extraction step and an end-to-end learning step.", "Feature Extraction We follow the feature extraction procedure in the previous works (Zadeh et al., 2018b; Tsai et al., 2018, 2019; Rahman et al., 2020).", "For the visual data, we extract 35 facial action units (FAUs) using the OpenFace library 5 (Bal-truaitis et al., 2015; Baltrusaitis et al., 2018) for the image frames in the video, which capture the movement of facial muscles (Ekman et al., 1980).", "For the acoustic data, we extract a total of 142 dimension features consisting of 12 dimension bark band energy (BBE) features, 22 dimension mel-frequency cepstral coefficient (MFCC) features, and 108 statistical features from 18 phonological classes.", "We extract the features per 400 ms time frame using the DisVoice library 6 (Vsquez-Correa et al., 2018, 2019).", "For textual data, we use the pre-trained 5 https://github.com/TadasBaltrusaitis/ OpenFace 6 https://github.com/jcvasquezc/ DisVoice Model #FLOPs ( 10 9 ) Angry Excited Frustrated Happy Neutral Sad Average Acc.", "GloVe (Pennington et al., 2014) word embeddings (glove.840B.300d 7 ).", "Multimodal Learning As different modalities are unaligned in the data, we cannot compare our method with existing works that can only handle aligned input data.", "We use four multimodal learning models as baselines: the late fusion LSTM (LF-LSTM) model, the late fusion Transformer (LF-TRANS) model, the Emotion Embeddings (EmoEmbs) model (Dai et al., 2020a), and the Multimodal Transformer (MulT) model (Tsai et al., 2019).", "They receive the hand-crafted features extracted from the first step as input and give the classification decisions.", "We use the Adam optimizer (Kingma and Ba, 2014) for the training of every model we use.", "For the loss function, we use the binary cross-entropy loss as both of the datasets are multi-class and multi-labelled.", "In addition, the loss for the positive samples is weighted by the ratio of the number of positive and negative samples to mitigate the imbalance problem.", "For all of the models, we perform an exhaustive hyper-parameter search to ensure we have solid comparisons.", "The best hyper-parameters 7 https://nlp.stanford.edu/projects/ glove/ are reported in Appendix A. 
Our experiments are run on an Nvidia 1080Ti GPU, and our code is implemented in the PyTorch (Paszke et al., 2019) framework v1.6.0.", "We perform preprocessing for the text and audio modalities.", "For the text modality, we perform word tokenization for our baseline and subword tokenization for our end-to-end model.", "We limit the length of the text to up to 50 tokens.", "For the audio modality, we use mel-spectrograms with a window size of 25 ms and stride of 12.5 ms and then chunk the spectrograms per 400 ms time window.", "In Table 3, we show the results on the IEMOCAP dataset.", "Compared to the baselines, the fully end-to-end (FE2E) model surpasses them by a large margin on all the evaluation metrics.", "Empirically, this shows the superiority of the FE2E model over the two-phase pipeline.", "Furthermore, our MESM achieves comparable results with the FE2E model, while requiring much less computation in the feature extraction.", "Here, we only show the results of MESM with the best p value of the Nucleus Sampling.", "In Section 6.3, we conduct a more detailed discussion of the effects of the top-p values.", "We further evaluate the methods on the CMU-MOSEI Happy Angry original layer 1 layer 2 layer 3 Fear Sad Surprised Disgusted original layer 1 layer 2 layer 3 Figure 3: Case study of MESM on six basic emotion categories (happy, sad, angry, surprised, fear, disgusted).", "dataset and the results are shown in Table", "4. We observe similar trends on this dataset.", "To improve the interpretability and gain more insights from our model, we visualize the attention maps of our sparse cross-modal attention mechanism on the six basic emotions: happy, sad, angry, surprised, fear, and disgusted.", "As shown in Figure 3, in general, the models attend to several regions of interest such as the mouth, eyes, eyebrows, and facial muscles between the mouth and the eyes.", "We verify our method by comparing the regions that our model captures based on the facial action coding system (FACS) (Ekman, 1997).", "Following the mapping of FACS to human emotion categories (Basori, 2016; Ahn and Chung, 2017), we conduct empirical analysis to validate the sparse cross-modal attention on each emotion category.", "For example, the emotion happy is highly influ-enced by raising of the lip on both ends, while sad is related to a lowered lip on both ends and downward movement of the eyelids.", "Angry is determined from a narrowed gap between the eyes and thinned lips, while surprised is expressed with an open mouth and raising of the eyebrows and eyelids.", "Fear is indicated by a rise of the eyebrows and upper eyelids, and also an open mouth with the ends of the lips slightly moving toward the cheeks.", "For the emotion disgusted , wrinkles near the nose area and movement of the upper lip region are the determinants.", "Based on the visualization of the attention maps on the visual data in Figure 3, the MESM can capture most of the specified regions of interest for the six emotion categories.", "For the emotion angry , the sparse cross-modal attention can retrieve the features from the lip region quite well, but it sometimes fails to capture the gap between the eyes.", "For surprised , the eyelids and mouth regions can be successfully captured by MESM, but sometimes the model fails to consider the eyebrow regions.", "For the acoustic modality, it is hard to analyse the attention in terms of emotion labels.", "We show a general visualization of the attention maps over the audio data in Figure", "4. 
"The model attends to [...]",
[Figure 5: The trend lines of the weighted accuracy (top) and the FLOPs, x 10^9 (bottom), of the MESM with different top-p values used in the Nucleus Sampling. For top-p = 1.0, 0.9, ..., 0.1, the plotted accuracies read 0.84, 0.85, 0.83, 0.84, 0.84, 0.84, 0.75, 0.74, 0.73, 0.62, and the plotted FLOPs read 8.65, 6.47, 5.66, 5.18, 4.71, 4.34, 3.94, 3.58, 3.23, 2.93.]
"More visualized examples are provided in Appendix B.", "6.3 Effects of Nucleus Sampling To gain an in-depth understanding of the effects of Nucleus Sampling on the MESM, we perform more experiments with different top-p values ranging from 0 to 1, with a step of 0.1.", "As shown in Figure 5, empirically, the amount of computation is reduced consistently as the top-p value decreases.", "In terms of performance, with a top-p value from 0.9 to 0.5, there is no significant drop in the evaluation performance.", "From 0.5 down to 0.1, we can see a clear downgrade in the performance, which means some of the information useful for recognizing the emotion is excluded.", "The inflection point of this elbow-shaped trend line can be an indicator to help us decide on the value of the top-p.", "Specifically, with a top-p of 0.5, the MESM can achieve comparable performance to the FE2E model with around half of the FLOPs in the feature extraction.", "We conduct a comprehensive ablation study to further investigate how the models perform when one or more modalities are absent.", "The results are shown in Table 5.", "Firstly, we observe that the more modalities are available, the more the performance improves.", "TAV, representing the presence of all three modalities, results in the best performance for both models, which shows the effectiveness of having more modalities.", "Secondly, with only a single modality, the textual modality results in better performance than the other two, which is similar to the results of previous multimodal works.", "This phenomenon further validates that using the textual modality (T) to attend to the acoustic (A) and visual (V) modalities in our cross-modal attention mechanism is a reasonable choice.", "Finally, with two modalities, the MESM can still achieve a performance that is on par with the FE2E model or even slightly better.", "In this paper, we first compare and contrast the two-phase pipeline and the fully end-to-end (FE2E) modelling of the multimodal emotion recognition task.", "Then, we propose our novel multimodal end-to-end sparse model (MESM) to reduce the computational overhead brought by the fully end-to-end model.", "Additionally, we reorganize two existing datasets to enable fully end-to-end training.", "The empirical results demonstrate that the FE2E model has an advantage in feature learning and surpasses the current state-of-the-art models that are based on the two-phase pipeline.", "Furthermore, MESM is able to halve the amount of computation in the feature extraction part compared to FE2E, while maintaining its performance.", "In our case study, we provide a visualization of the cross-modal attention maps on both visual and acoustic data.", "It shows that our method can be interpretable, and that the cross-modal attention can successfully select important feature points for different emotion categories.", "For future work, we believe that incorporating more modalities into the sparse cross-modal attention mechanism is worth exploring, since it could potentially enhance the robustness of the sparsity (the selection of features).", "This work is funded by
MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government." ]
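The top-p (Nucleus Sampling) mechanism discussed above keeps, for each attention distribution, only the smallest set of cross-modal feature points whose cumulative probability mass exceeds p. Below is a minimal NumPy sketch of that selection step, assuming a precomputed attention distribution over feature points; the function name and setup are illustrative, not taken from the MESM codebase.

```python
import numpy as np

def nucleus_select(attn_weights: np.ndarray, top_p: float) -> np.ndarray:
    """Return a boolean mask keeping the smallest set of feature points
    whose cumulative attention probability exceeds top_p."""
    order = np.argsort(attn_weights)[::-1]           # indices, descending weight
    cumulative = np.cumsum(attn_weights[order])
    k = int(np.searchsorted(cumulative, top_p) + 1)  # number of points to keep
    mask = np.zeros_like(attn_weights, dtype=bool)
    mask[order[:k]] = True
    return mask

# Example: with top_p = 0.5, only the dominant feature points survive, which
# is how a sparse model can cut FLOPs in later feature-extraction layers.
attn = np.array([0.40, 0.25, 0.15, 0.10, 0.05, 0.05])
print(nucleus_select(attn, top_p=0.5))  # [ True  True False False False False]
```

This mirrors the reported trade-off: lowering top-p discards the low-probability feature points first, so computation drops steadily while accuracy only degrades once informative points start to be excluded.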
[ "abstain", "abstain", "objective", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "result", "objective", "other" ]
[ "Yang Gao, Steffen Eger, Ilia Kuznetsov, Iryna Gurevych, and Yusuke Miyao.", "\"Does My Rebuttal Matter? Insights from a Major NLP Conference.\"", "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1274-1290.", "2019.", "The paper uses the term peer pressure to refer to the incentive to update the scores and reach a consensus.", "Such consensus is often explicitly encouraged by the area chairs, e.g. by enforcing a discussion for submissions with high deviation in scores while leaving submissions with low deviation intact.", "In our analysis, we use the data from the ACL 2018 conference to estimate which factor is more responsible for the score updates, the authors' rebuttal or the opinions of the other reviewers, expressed numerically as scores.", "For this, we conduct regression analyses.", "If the scores given by the other reviewers dominate in the regression, we take this as evidence for the peer pressure effect; if factors relating to the rebuttal dominate, we assume the peer pressure effect to be weaker.", "Our experiments in Section 4.3 show that peer pressure dominates the regression.", "Note that both the texts and the scores of the peer reviews can yield peer pressure.", "A reviewer can be influenced by the ideas expressed in the other reviewers' texts, or simply by the pressure to be aligned with the other reviewers in terms of scores.", "The latter kind of influence is also known as herd behavior (Banerjee, 1992) or conformity bias (Buechelet al., 2015).", "In our study, we do not distinguish the influence of these two factors, but instead treat the influence from the other peer reviews as a whole as peer pressure.", "Completely disentangling the causes of peer pressure would require controlled experimentation, e.g. allowing a group of reviewers to only see the scores while allowing another group to only see the comments, and analyze the score-update behavior differences in each group.", "Such experiments are to be set up in advance and integrated into the reviewing process itself and hence are beyond the scope of our study which operates on historical data.", "We encourage the community to perform such controlled experiments in the future." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method" ]
[ "A well-calibrated neural model produces confidence (probability outputs) closely approximated by the expected accuracy.", "While prior studies have shown that mixup training as a data augmentation technique can improve model calibration on image classification tasks, little is known about using mixup for model calibration on natural language understanding (NLU) tasks.", "In this paper, we explore mixup for model calibration on several NLU tasks and propose a novel mixup strategy for pre-trained language models that improves model calibration further.", "Our proposed mixup is guided by both the Area Under the Margin (AUM) statistic (Pleiss et al., 2020) and the saliency map of each sample (Si-monyan et al., 2013).", "Moreover, we combine our mixup strategy with model miscalibration correction techniques (i.e., label smoothing and temperature scaling) and provide detailed analyses of their impact on our proposed mixup.", "We focus on systematically designing experiments on three NLU tasks: natural language inference, paraphrase detection, and commonsense reasoning.", "Our method achieves the lowest expected calibration error compared to strong baselines on both in-domain and out-of-domain test samples while maintaining competitive accuracy.", "Training a well-calibrated classifier that produces a match between confidence (the probability output that a model assigns to a prediction) and correctness (accuracy), is important in modern neural networks.", "As an example, if an AI-based application knows what it does not know , or in other words, the chance that the current prediction is wrong, a human is more helpful to correct the error.", "However, many works reveal that current deep neural networks are prone to over-confidence, which implies that the models' confidence is not reliable (Guo et al., 2017).", "This is a critical issue on the deployment of AI-based user applications such as the healthcare domain (Zhu et al., 2018; Li et al., 2019) or safety-critical domain (Sarabadani, 2019) due to the problem of prediction trustworthiness.", "Recently, the study of calibration on neural network models especially on natural language processing tasks has started to receive attention.", "To overcome the problem of miscalibration, numerous suggestions on how to address it have been proposed.", "For example, Guo et al. (2017) revealed that using temperature scaling before the final softmax layer reduces calibration errors.", "Mller et al. (2019), Kumar and Sarawagi (2019), and Wang et al. 
(2020a) found that label smoothing and its variants yield better calibration for neural machine translation.", "Desai and Durrett (2020) also reported that the aforementioned miscalibration correction methods can be applied to calibrate pre-trained language models, which are often miscalibrated, potentially due to over-parameterization.", "Mixup (Zhang et al., 2018) is a data augmentation method for deep neural networks in which additional samples are generated during training by combining random pairs of training inputs and their associated labels.", "While simple to implement, mixup has been shown to improve both predictive performance and model calibration, particularly on image classification tasks, due to its regularization effect through data augmentation (Thulasidasan et al., 2019).", "The recent success of mixup on image classification has led to the development of various mixup strategies for NLU, especially those that use hidden state representations (Guo et al., 2019a; Chen et al., 2020; Zhang et al., 2020; Sun et al., 2020; Kong et al., 2020; Yin et al., 2021).", "However, most prior works on NLU focus on performance improvement using mixup rather than model calibration.", "Despite its benefits for calibration, mixup for correcting miscalibrated predictions is still an under-explored topic in NLU.", "While Kong et al. (2020) explored BERT (Devlin et al., 2019) calibration using mixup for both in-domain and out-of-domain data, they only focused on generating mixup samples by utilizing the distance between instances in the feature space.", "In contrast, we propose a novel mixup method, in which we first leverage the behavior of a model on individual samples during training (training dynamics), which can reveal samples with distinct pronounced characteristics, i.e., whether they are easy-to-learn or hard-to-learn/ambiguous for the model, and then we generate mixup samples by mixing easy-to-learn with hard-to-learn/ambiguous samples according to their similarity/dissimilarity provided by saliency maps.", "Saliency maps capture how much each data portion contributes to the final classification decision of a sample (Simonyan et al., 2013).", "Intuitively, easy-to-learn samples help with model optimization, whereas hard-to-learn or potentially ambiguous samples are essential for learning since they are the most challenging for the model (Swayamdipta et al., 2020), and mixing them using saliency maps can yield better calibrated models (more realistic model confidence); e.g., mixing easy-to-learn with hard-to-learn/ambiguous samples by similarity in saliency maps can benefit in-domain calibration, and by dissimilarity can benefit out-of-domain calibration.", "To monitor training dynamics, we use the Area Under the Margin (AUM) statistic (Pleiss et al., 2020), which measures how different a true label for a sample is compared to a model's beliefs at each epoch and is calculated as the average difference between the logit values for a sample's assigned class and its highest non-assigned class across training epochs.", "Moreover, we combine our mixup with well-known miscalibration correction methods such as label smoothing and temperature scaling (Guo et al., 2017) to investigate their impact on our proposed mixup.", "We conduct a comprehensive set of experiments using BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) to show the efficacy of our mixup approach by testing on three NLU tasks: natural language inference, paraphrase detection, and commonsense reasoning.", "We achieve the lowest
Expected Calibration Error (ECE) without accuracy drops in comparison with strong baseline methods.", "Our contributions are as follows: We propose a novel mixup method which is guided by AUM and saliency signals and is targeted at improving model calibration.", "Specifically, we compare logits to categorize samples into two sets (i.e., a set of easy-to-learn samples and another set of hard-to-learn/ambiguous samples), and interpolate samples across these two sets by finding the most similar and most dissimilar samples from the other set while leveraging saliency (to compute sample similarities), for pre-trained language models' calibration on in-domain and out-of-domain data.", "We combine our method with miscalibration correction techniques (i.e., label smoothing, temperature scaling) to investigate their impact on our proposed mixup.", "We conduct comprehensive experiments showing that our method achieves the lowest expected calibration errors (ECEs) on both in-domain and out-of-domain samples compared with strong baselines, without accuracy drops, on multiple NLU tasks, namely, natural language inference, paraphrase detection, and commonsense reasoning.", "Model Calibration: Calibration on NLU tasks has been widely studied in the related literature.", "Nguyen and O'Connor (2015) provided methods for analyzing the calibration of non-neural NLP models.", "Guo et al. (2017) examined the calibration of modern deep neural networks and revealed that techniques such as temperature scaling and dropout affect the calibration on binary/multi-class classification tasks.", "Wang et al. (2020b) investigated the calibration of neural machine translation models and found that inference suffers from serious miscalibration.", "Jagannatha and Yu (2020) demonstrated that neural networks show high calibration error on structured prediction tasks such as NER, POS, and QA, and proposed to use a binary class forecaster to calibrate the predictor confidence for a defined output entity of interest.", "Desai and Durrett (2020) explored pre-trained language models' calibration in combination with temperature scaling and label smoothing on both in-domain and out-of-domain datasets.", "Jung et al. (2020) jointly optimized two objectives (a cross-entropy loss and a calibration loss) and directly penalized the difference between the predicted and the true posterior probabilities dynamically over the training steps.", "He et al. (2021) obtained better calibration on natural language understanding tasks by augmenting and training the classifier jointly with an energy-based model using noise-contrastive estimation.", "Mixup: Mixup (Zhang et al., 2018) is a method for data augmentation in which additional samples are generated during training by convexly combining random pairs and their associated labels, and it aims to alleviate overfitting.", "Verma et al. (2019) showed that manipulating hidden representations rather than input-level features in mixup results in better regularization effects, due to the fact that it encourages the neural network to focus more on representations of the real training examples in a low-dimensional subspace.", "Many works have empirically observed regularization effects that improve model performance on deep neural networks.", "For example, Guo et al. (2019a) explored an NLU-specific mixup strategy by using sentence and word embeddings on CNNs and LSTMs to add performance gains in supervised text classification.", "Chen et al.
(2020) proposed mixup for semi-supervised learning in which labeled and unlabeled samples are interpolated with their hidden representations to improve the performance of text classification.", "Zhang et al. (2020) explored mixup with active learning to improve the performance of supervised sequence labeling tasks.", "Yin et al. (2021) proposed a mixup that interpolates every instance in a mini-batch to boost the performance of NLU tasks on the pre-trained language model RoBERTa (Liu et al., 2019).", "Similar to us, Yoon et al. (2021) explored mixup by incorporating saliency signals to generate augmented samples.", "Precisely, they use saliency signals to select a span of text from one sample to be replaced with another text span from another sample.", "However, in contrast, our method first divides data samples into two categories (easy-to-learn and hard-to-learn/ambiguous) according to their AUM (Pleiss et al., 2020) distribution monitored over training epochs, and then uses saliency to find the most similar/dissimilar samples across these two data categories.", "Recently, several works started to explore mixup for NLU model calibration.", "For example, Thulasidasan et al. (2019) investigated the impact of mixup on model calibration but only explored in-domain settings with simple deep learning architectures such as CNNs.", "Kong et al. (2020) explored BERT calibration using mixup as a regularization component on in-domain and out-of-domain data.", "However, their mixup method relied only on the feature-space distance between samples.", "In contrast, we explore a novel mixup method in which we categorize the training samples into two sets using AUM (Pleiss et al., 2020) and combine samples across these two sets based on saliency signals, for in-domain and out-of-domain model calibration.", "Background: Let $D_{train} = \{(x_i, y_i)\}_{i=1,\ldots,n}$ be a training set and $f$ a language model.", "Mixup training generates vicinity training samples according to the rule introduced in Zhang et al.
(2018): $\tilde{x} = \lambda x_i + (1 - \lambda) x_j$, $\tilde{y} = \lambda y_i + (1 - \lambda) y_j$ (1), where $x_i$ and $x_j$ are two randomly sampled input points, $y_i$ and $y_j$ are their associated one-hot encoded labels, and $\lambda$ is a mixing ratio sampled from a $\mathrm{Beta}(\alpha, \alpha)$ distribution with a hyper-parameter $\alpha$.", "In mixup, training data is augmented by linearly interpolating training samples in the input space.", "We propose a mixup method targeted at improving model calibration that synthesizes samples guided by the Area Under the Margin (AUM) (Pleiss et al., 2020) and saliency (Simonyan et al., 2013).", "Data Categorization: In our method, we first categorize $D_{train}$ into two sets (a set of easy-to-learn samples and a set of hard-to-learn/ambiguous samples) according to the AUM of each sample.", "Given a sample $(x_i, y_i)$, we compute $\mathrm{AUM}(x_i, y_i)$ as the area under the margin averaged across all training epochs $T$.", "Specifically, at some epoch $t \in T$, the margin is defined as: $M_t(x_i, y_i) = z_{y_i} - \max_{k \neq y_i}(z_k)$ (2), where $M_t(x_i, y_i)$ is the margin of example $x_i$ with gold label $y_i$, $z_{y_i}$ is the logit corresponding to the gold label $y_i$, and $\max_{k \neq y_i}(z_k)$ is the largest other logit, corresponding to a label $k$ not equal to $y_i$.", "Precisely, the margin measures how different a gold label is compared to the model's beliefs at each epoch $t$.", "The AUM of $(x_i, y_i)$ across all epochs is: $\mathrm{AUM}(x_i, y_i) = \frac{1}{T} \sum_{t=1}^{T} M_t(x_i, y_i)$ (3).", "Intuitively, the samples with high AUM are easy-to-learn (the model's belief matches the gold label), but they are essential for model optimization, while the samples with low AUM are hard-to-learn or ambiguous (and hence they are the most challenging for the model), but they are essential for learning.", "[Algorithm 1: Identify high/low AUM samples. Require: $D_{train} = \{(x_i, y_i)\}_{i=1,\ldots,n}$; model $f$. 1: function DATA-CATEGORIZATION($D_{train}$); 2: initialize $D_{high}$, $D_{low}$; 3: train $f$ for $T$ epochs and compute $\mathrm{AUM}(x_i, y_i)$ for each $i$ as in Eq. (3); the remaining steps are truncated in the source.]", "Our proposed mixup method first splits $D_{train}$ into two data categories depending on whether the AUM value is high or low, namely, $D_{high}$ and $D_{low}$.", "In experiments, we compute the median AUM over all training samples and use it as a threshold to split the dataset.", "If a sample has a lower AUM than the threshold, we add the sample to $D_{low}$; otherwise, we add it to $D_{high}$.", "Accordingly, we balance $D_{high}$ and $D_{low}$, but other splits are possible.", "We then conduct the mixup operation across the two sets, each referring to the other.", "Mixing easy-to-learn and hard-to-learn samples adjusts the difficulty of the synthesized samples, and hence adjusts the model's confidence according to sample difficulty, yielding better calibrated models.", "The data categorization step is summarized in Algorithm 1.",
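The interpolation rule in Eq. (1) and the AUM computation of Eqs. (2)-(3) with the median split of Algorithm 1 translate directly into a few lines of code. A minimal NumPy sketch, assuming the per-epoch logits have already been recorded during training; all names are illustrative and not taken from the authors' code.

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, alpha=0.4):
    """Eq. (1): interpolate two samples and their one-hot labels."""
    lam = np.random.beta(alpha, alpha)
    return lam * x_i + (1 - lam) * x_j, lam * y_i + (1 - lam) * y_j

def aum(logits_per_epoch: np.ndarray, gold: int) -> float:
    """Eqs. (2)-(3): average margin between the gold logit and the largest
    non-gold logit across epochs. logits_per_epoch has shape (T, C)."""
    margins = []
    for z in logits_per_epoch:
        others = np.delete(z, gold)
        margins.append(z[gold] - others.max())
    return float(np.mean(margins))

def split_by_median_aum(aums: np.ndarray):
    """Algorithm 1: median threshold -> indices of D_low and D_high."""
    threshold = np.median(aums)
    low = np.where(aums < threshold)[0]
    high = np.where(aums >= threshold)[0]
    return low, high
```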
"Mixup using Saliency Signals: We conduct a mixup operation on the two data categories generated by Algorithm 1 using saliency signals (as detailed below).", "For the mixup, rather than selecting random samples from $D_{high}$ and $D_{low}$ to mix, we utilize saliency signals to select the samples.", "To measure saliency, gradient-based methods are commonly used (Li et al., 2016; Rei and Søgaard, 2018; Yoon et al., 2021).", "Following this idea, we simply compute the gradient of the classification loss $L$ with respect to each logit value $z_i \in z$ and take the absolute value of the gradient components as the saliency map, or signature, $S$ for a sample $(x_i, y_i) \in D_{train}$.", "For a sample $(x_i, y_i)$, we then leverage its saliency map $S$ to find the most similar and most dissimilar samples from the other data category, i.e., the one that $(x_i, y_i)$ does not belong to according to its AUM, in order to calibrate in-domain and out-of-domain data.", "[Algorithm 2: Proposed Mixup. Require: $D_{train} = \{(x_i, y_i)\}_{i=1,\ldots,n}$; model $f$. 1: $D_{high}, D_{low} \leftarrow$ DATA-CATEGORIZATION($D_{train}$); 2: for $k := 0$ to $T$ do; 3: $Total\_Loss \leftarrow 0$; 4: for $i := 0$ to $|D_{train}|$ do; 5: $Loss \leftarrow \mathrm{CrossEntropy}(f(x_i), y_i)$; 6: construct a saliency map $S$ by computing the gradient of $Loss$ with respect to $z$; 7: if $(x_i, y_i) \in D_{high}$ then; 8: find the most similar/dissimilar samples from $D_{low}$ using Eq. (4); the remaining steps are truncated in the source.]", "For example, if $(x_i, y_i) \in D_{high}$, we find its most similar sample $(x'_i, y'_i)$ and its most dissimilar sample $(x''_i, y''_i)$ from $D_{low}$, which return the largest and smallest cosine similarity, respectively, with the saliency map $S$ of $(x_i, y_i)$.", "That is, the most similar and most dissimilar samples to $(x_i, y_i) \in D_{high}$ are calculated as follows: $(x'_i, y'_i) = \arg\max_{(x_j, y_j) \in D_{low}} \mathrm{CosSim}(S, S_{(x_j, y_j)})$ and $(x''_i, y''_i) = \arg\min_{(x_j, y_j) \in D_{low}} \mathrm{CosSim}(S, S_{(x_j, y_j)})$ (4).", "Similarly, if $(x_i, y_i)$ belongs to $D_{low}$, we find the most similar/dissimilar samples from $D_{high}$ that return the largest/smallest cosine similarity with $S$ as follows: $(x'_i, y'_i) = \arg\max_{(x_j, y_j) \in D_{high}} \mathrm{CosSim}(S, S_{(x_j, y_j)})$ and $(x''_i, y''_i) = \arg\min_{(x_j, y_j) \in D_{high}} \mathrm{CosSim}(S, S_{(x_j, y_j)})$ (5).", "We then generate two mixup samples for a given sample $(x_i, y_i)$ by interpolating it with the selected samples: the most similar sample $(x'_i, y'_i)$ and the most dissimilar sample $(x''_i, y''_i)$.", "For the mixup operation, we follow the original mixing-ratio sampling strategy shown in Eq. (1).", "The ratio $\lambda$ is sampled from a $\mathrm{Beta}(\alpha, \alpha)$ distribution with a hyper-parameter $\alpha$.", "Intuitively, by synthesizing the original sample and the most similar sample from the other data category, we calibrate in-domain data.", "The augmented sample mimics an in-domain sample since it aligns the most with the original sample.", "Furthermore, by selecting the sample from the other category, we allow the generated mixup sample to combine easy-to-learn and hard-to-learn samples properly.", "By synthesizing the original and the most dissimilar sample from the other data category, we calibrate out-of-domain data.", "The augmented sample mimics out-of-domain instances since we pick a sample that is the most dissimilar to the original sample.", "As above, by selecting the sample from the other category, we allow the augmented sample to contain information from both easy-to-learn and hard-to-learn samples, useful for both optimization and learning.",
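The pair selection of Eqs. (4)-(5) reduces to an argmax/argmin of cosine similarity between saliency maps. A sketch under the assumption that saliency maps are stored as fixed-length vectors; the helper name is hypothetical.

```python
import numpy as np

def most_similar_and_dissimilar(s: np.ndarray, other_maps: np.ndarray):
    """Given a saliency map s and the saliency maps of the other AUM
    category (rows of other_maps), return the indices of the argmax and
    argmin of cosine similarity (Eqs. (4)-(5))."""
    s_norm = s / (np.linalg.norm(s) + 1e-12)
    o_norm = other_maps / (np.linalg.norm(other_maps, axis=1, keepdims=True) + 1e-12)
    sims = o_norm @ s_norm
    return int(np.argmax(sims)), int(np.argmin(sims))

# The argmax partner is used to synthesize in-domain-like samples,
# the argmin partner to synthesize out-of-domain-like samples.
```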
"Note that our mixup method mixes samples at the level of the [CLS] hidden state representations generated by a task-specific layer on top of the pre-trained language model.", "We summarize the process in Algorithm 2.", "We combine each loss by a weighted sum (see Alg. 2), where the three loss weights are hyper-parameters that sum up to 1.", "In our experiments, we conduct our mixup operation using mini-batch SGD to update the model weights.", "Note that other saliency measures could be used to compute the similarity/dissimilarity between samples; exploring them will be an interesting future direction.", "A model is perfectly calibrated when the confidence estimate $\hat{p}$ of the model is equal to the true probability (accuracy): $P(\hat{y} = y \mid \hat{p}) = \hat{p}$ (Naeini et al., 2015; Guo et al., 2017; Desai and Durrett, 2020).", "This can be empirically approximated by discretizing the probability interval into a fixed number of bins $M = 10$, where each bin $b_m$ contains the predicted probabilities that fall into its interval.", "The expected calibration error (ECE) is calculated by weighting the average of the difference between each bin's accuracy and confidence: $\mathrm{acc}(b_m) = \frac{1}{|b_m|} \sum_{i \in b_m} \mathbf{1}(\hat{y}_i = y_i)$, $\mathrm{conf}(b_m) = \frac{1}{|b_m|} \sum_{i \in b_m} \hat{p}_i$, and $\mathrm{ECE} = \sum_{m=1}^{M} \frac{|b_m|}{N} \, |\mathrm{acc}(b_m) - \mathrm{conf}(b_m)|$, where $N$ is the total number of predictions.", "We explore the combination of miscalibration correction methods (described below) with mixup to investigate their impact on our proposed mixup for model calibration.", "Label Smoothing (LS): In supervised learning, one-hot encoded labels fail to represent the uncertainty of inputs, due to the fact that all the probability mass is given to one class.", "This results in over-confident models, since the largest logit becomes larger than the others, which removes the uncertainty of the label space.", "Label smoothing (LS) penalizes this by preventing the model from becoming over-confident.", "In this work, we incorporate label smoothing into our proposed mixup.", "We generate a smoothed one-hot target signal while creating mixup instances by distributing $\sigma / (|y| - 1)$ mass over the non-ground-truth classes, where $\sigma \in (0, 1)$ is a hyper-parameter and $|y|$ is the number of classes.", "Temperature Scaling (TS): Temperature scaling is a post-processing step which re-scales the logit vector $z$ using a single scalar temperature parameter $T > 0$ shared across all classes.", "TS with $T > 1$ has the effect of softening the outputs toward uniform, while $T \to 0$ has the effect of collapsing the probability mass onto one class.", "We explore the effect of TS when incorporated with our proposed mixup.", "We evaluate our calibration-targeted mixup on three natural language understanding tasks: natural language inference, paraphrase detection, and commonsense reasoning.", "We evaluate the models in-domain (training and testing on data from the same distribution) and out-of-domain (training and testing on data from different distributions).", "Mixup reduces the number of undesirable oscillations when predicting, especially on out-of-distribution samples (Zhang et al., 2018).", "Hence, an effective mixup should be less prone to overfitting when handling out-of-distribution data.", "To test the benefits of our proposed method for pre-trained language model calibration, we use in-domain trained models to predict out-of-distribution test samples.", "We describe our in-domain and out-of-domain sets as follows.",
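The ECE formula above can be sketched as follows, using M = 10 equal-width bins as in the text; function and variable names are illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """ECE: weighted average over bins of |accuracy - confidence|."""
    confidences = np.asarray(confidences)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()    # acc(b_m)
            conf = confidences[in_bin].mean()  # conf(b_m)
            ece += in_bin.sum() / n * abs(acc - conf)
    return ece
```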
"Natural Language Inference: Stanford Natural Language Inference (SNLI) is a natural language inference task to predict whether the relation between a hypothesis and a premise is entailment, contradiction, or neutral (Bowman et al., 2015).", "Multi-Genre Natural Language Inference (MNLI) captures natural language inference over more diverse domains than SNLI (Williams et al., 2018).", "Paraphrase Detection: Quora Question Pairs (QQP) is a paraphrase detection task to test whether two questions are semantically equivalent (Iyer et al., 2017).", "TwitterPPDB (TPPDB) tests whether sentence pairs from Twitter convey similar semantics when they share URLs (Lan et al., 2017).", "Commonsense Reasoning: Situations With Adversarial Generations (SWAG) is a commonsense reasoning task to choose the most plausible continuation of a sentence among four candidates (Zellers et al., 2018).", "HellaSWAG is a dataset built using adversarial filtering to generate challenging out-of-domain samples.", "It is distributionally different in that its examples exploit statistical biases in pre-trained models.", "In this work, we explore mixup effects on NLU with the goal of producing better calibrated models, in particular the pre-trained language models BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "We consider the following baselines: Pre-trained Language Models: pre-trained language models fine-tuned on each downstream task, i.e., BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "Mixup (Zhang et al., 2018; Thulasidasan et al., 2019): Mixup augments training data by linearly interpolating randomly selected training samples in the input space.", "The interpolation of Mixup is performed on the input embeddings obtained from the first layer of the language model.", "Manifold-mixup (M-mixup) (Verma et al., 2019): an extension of Mixup which interpolates training samples in the hidden feature space.", "The interpolation of Manifold-mixup is performed on the features obtained from the last layer of the language model.", "Each method is compared with two variants where the miscalibration correction methods (label smoothing, LS, and temperature scaling, TS) are applied.", "4.3 Implementation Details: We use the same set of hyper-parameters across all tasks as Desai and Durrett (2020) for a fair comparison.", "We train models for a maximum of 3 epochs.", "For BERT, we use a batch size of 16, a learning rate of 1e-5, a gradient clip of 1.0, and no weight decay.", "For RoBERTa, we use a batch size of 32, a learning rate of 2e-5, a gradient clip of 1.0, and a weight decay of 0.1.", "We follow the published train/validation/test split of Desai and Durrett (2020).", "For mixup, we use a mixing-ratio sampling hyper-parameter $\alpha = 0.4$.", "We use loss-weight hyper-parameter values of 0.8/0.1/0.1, respectively.", "We performed a hyper-parameter search for label smoothing over [0.001, 0.003, 0.01, 0.03, 0.1, 0.3].",
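A small sketch of the smoothed target construction described earlier (the gold class gets 1 - σ; the σ mass is spread uniformly over the |y| - 1 non-gold classes), with σ as reconstructed above; purely illustrative.

```python
import numpy as np

def smooth_one_hot(gold: int, num_classes: int, sigma: float) -> np.ndarray:
    """Gold class gets 1 - sigma; the remaining sigma mass is spread
    uniformly over the |y| - 1 non-gold classes."""
    target = np.full(num_classes, sigma / (num_classes - 1))
    target[gold] = 1.0 - sigma
    return target

print(smooth_one_hot(gold=1, num_classes=3, sigma=0.03))
# [0.015 0.97  0.015]
```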
"We use label-smoothing values $\sigma = 0.01/0.03/0.3$ for BERT and $\sigma = 0.003/0.03/0.3$ for RoBERTa on SNLI/QQP/SWAG, respectively.", "We use threshold values for splitting the data into the two groups $D_{high}$ and $D_{low}$ (the median AUM over the full training set) of 3.5/4.4/2.5 for BERT and 3.4/4.0/2.7 for RoBERTa on SNLI/QQP/SWAG, respectively.", "For all results, we report the mean across five training runs with random restarts.", "Finally, all experiments are conducted on a single NVIDIA RTX A5000 24GB GPU, with a total time for fine-tuning all models of under 24 hours.", "Temperature scaling (TS) searches are performed in the range [0.01, 5.0] with a granularity of 0.01 using the development datasets.", "TS completes very quickly since it uses separately cached logits.", "We show the comparison of experimental results (ECE) for BERT and RoBERTa in Table 1.", "For each task, we train the model on the in-domain training set and evaluate its expected calibration errors (ECEs) on in-domain and out-of-domain test sets.", "We make the following observations: First, for in-domain data, label smoothing (LS) does not exhibit its effectiveness for pre-trained language models' calibration.", "Specifically, for in-domain data, pre-trained language models with LS (i.e., BERT+LS/RoBERTa+LS) achieve higher expected calibration errors (ECEs) compared with the vanilla pre-trained language models (i.e., BERT/RoBERTa) on all tasks.", "In contrast, the out-of-domain settings do benefit from LS (except RoBERTa on MNLI).", "From these results, we conclude that simply incorporating label uncertainty (through label smoothing) is not an effective regularization method, since LS does not consistently improve model calibration (especially in the in-domain setting).", "While temperature scaling (TS) corrects the miscalibration of the vanilla pre-trained language models (see BERT/RoBERTa, No TS vs. TS in the table), it fails to correct the miscalibrated pre-trained language models with LS (see BERT+LS/RoBERTa+LS, No TS vs. TS) in-domain.",
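The TS grid search described above (range [0.01, 5.0], step 0.01, on cached development logits) can be sketched as below; it reuses an ECE function like the earlier sketch and assumes logits are cached as a (num_examples, num_classes) array.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def search_temperature(dev_logits, dev_labels, ece_fn, lo=0.01, hi=5.0, step=0.01):
    """Pick the temperature minimizing ECE on cached dev logits.
    TS only rescales logits, so accuracy is unchanged."""
    best_T, best_ece = 1.0, float("inf")
    for T in np.arange(lo, hi + step, step):
        probs = softmax(dev_logits, T)
        conf = probs.max(axis=1)
        preds = probs.argmax(axis=1)
        ece = ece_fn(conf, preds, dev_labels)
        if ece < best_ece:
            best_T, best_ece = T, ece
    return best_T
```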
"Interestingly, for some cases of out-of-domain data, pre-trained language models with LS show comparatively low ECEs, and TS reduces the ECEs further (e.g., BERT(LS) on TwitterPPDB/HellaSWAG, RoBERTa(LS) on TwitterPPDB).", "However, its impact is insufficient, as it still results in high ECE.", "This implies that TS is not a reliable strategy for pre-trained language models' calibration either.", "Accordingly, we conclude that stronger regularization techniques are required to calibrate pre-trained language models.", "Second, we find that mixup in the hidden feature space (i.e., M-Mixup) generally yields lower ECE than mixup in the input embedding space (i.e., Mixup) on most tasks.", "We infer that Mixup generates augmented samples that are not good (i.e., semantically or syntactically) for model calibration and fails to encourage the regularization effects that arise from mixup.", "We observe that mixup training with LS is beneficial for reducing ECEs on some tasks.", "We find that TS leads to much lower ECEs for Mixup and M-Mixup (with and without LS) on most tasks.", "However, this implies that the baseline mixup methods fail to produce well-calibrated models independently (i.e., without LS or TS).", "This supports our intuition and motivation for the design of a more robust mixup approach.", "Third, we observe that our proposed mixup yields the best calibrated models (lowest ECEs) on both in-domain and out-of-domain data (except on SWAG with RoBERTa).", "We observe that LS often operates effectively alongside our proposed mixup, achieving the lowest ECEs on most tasks in both the in-domain and out-of-domain settings.", "In contrast to the baseline mixup methods, our proposed mixup performs well in-domain and out-of-domain even without applying the post-hoc calibration correction TS (compare the baselines' ECE values with ours).", "We also observe that TS improves model calibration further on top of our mixup training in most cases.", "Accordingly, we confirm the robustness of our AUM- and saliency-guided mixup for pre-trained language model calibration.", "Accuracy: We explore the accuracy of mixup training and show comparisons in Table 2.",
"We make the following observations: 1) Both BERT+LS and RoBERTa+LS generally lead to substantial accuracy drops, especially in-domain, compared with BERT/RoBERTa (e.g., a 4.49% accuracy drop on SWAG).", "This implies that label smoothing (LS) fails to improve model generalization by simply manipulating labels (changing hard labels to soft labels).", "This potentially leads to a loss of information that is correlated with model generalization (Müller et al., 2019).", "2) Mixup and M-Mixup fail to achieve accuracy as good as that of the vanilla pre-trained language models, potentially due to an increased chance of manifold intrusion resulting from conflicts between the synthetic mixup samples and the original training data (Guo et al., 2019b).", "3) In contrast, our proposed mixup method generally achieves competitive accuracy regardless of whether LS is applied.", "This evidence supports the robustness of our proposed mixup.", "Note that TS does not affect a model's accuracy because it does not change the maximum of the softmax function.", "Effect of AUM and Saliency: We investigate the effectiveness of each component (i.e., AUM and saliency) in our proposed mixup.", "As shown in Table 3, our proposed mixup without AUM (i.e., -AUM) and without saliency (i.e., -Saliency) generally increases the expected calibration errors.", "In our method without AUM, we randomly divide the training data into two categories and conduct the mixup operation based on the saliency maps.", "In our method without saliency, we randomly pick samples from the opposite low- or high-AUM set and conduct the mixup operation.", "The results demonstrate that both components (AUM and saliency) are required to improve model calibration.", "Effect of selecting the most similar and dissimilar samples: We explore the effectiveness of selecting the most similar and most dissimilar samples, which are used for mixing for in-domain and out-of-domain calibration, respectively.", "Specifically, in our proposed mixup, we synthesize additional samples that mimic in-domain data by selecting the most similar sample from the other category (e.g., an easy-to-learn sample is mixed with the hard-to-learn/ambiguous sample that is most similar to it according to the saliency maps).", "This is because the selected sample aligns the most with the given sample.", "This intuitively results in better model generalization, due to the effect arising from data augmentation (i.e., augmenting samples that are particularly similar to in-domain data), and allows better in-domain calibration.", "Similarly, we calibrate out-of-domain by augmenting a sample that mimics the out-of-domain distribution.", "This is because we select the sample that is the most different from a given sample, by selecting the most dissimilar sample from the other category.", "To verify this intuition, we conduct our proposed mixup when excluding the most similar instance (i.e., -similar) and the most dissimilar instance (i.e., -dissimilar), respectively.", "Table 3 shows the results of this ablation.", "We observe that our proposed mixup without using the most dissimilar sample (i.e., -dissimilar) results in higher ECEs compared with our mixup that uses dissimilar samples, on all tasks in the out-of-domain setting, for both BERT and RoBERTa.", "Interestingly, we observe that our proposed mixup without using the most similar sample (i.e., -similar) results in higher ECEs compared with our mixup that uses the most similar samples, on in-domain and out-of-domain data, for both BERT and
RoBERTa.", "These results support that selecting the most similar/dissimilar samples effectively calibrates pre-trained models for in-domain/out-of-domain data.", "We proposed a novel mixup guided by the Area Under the Margin (AUM) statistic and saliency maps to mitigate the miscalibration of the pre-trained language models BERT and RoBERTa.", "We showed that our proposed mixup method achieves the lowest Expected Calibration Errors (ECEs) for both pre-trained language models on various types of natural language understanding tasks, for both in-domain and out-of-domain data.", "For future work, we will enhance our proposed mixup further, focusing not only on model calibration but also on performance gains.", "Exploring different saliency maps for computing sample similarity/dissimilarity (and its degree) is another interesting future direction.", "This research is supported in part by NSF CAREER award #1802358 and NSF CRI award #1823292.", "Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF.", "We thank AWS for computing resources.", "We also thank our anonymous reviewers for their constructive feedback and comments, which helped improve our paper." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "result", "result", "objective", "other", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other", "other", "other", "other" ]
[ "Reading comprehension has recently seen rapid progress, with systems matching humans on the most popular datasets for the task.", "However, a large body of work has highlighted the brittleness of these systems, showing that there is much work left to be done.", "We introduce a new English reading comprehension benchmark, DROP, which requires D iscrete R easoning O ver the content of P aragraphs.", "In this crowdsourced, adversarially-created, 96k-question benchmark, a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sort-ing).", "These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets.", "We apply state-of-the-art methods from both the reading comprehension and semantic parsing literatures on this dataset and show that the best systems only achieve 32.7% F 1 on our generalized accuracy metric, while expert human performance is 96.4%.", "We additionally present a new model that combines reading comprehension methods with simple numerical reasoning to achieve 47.0% F 1 .", "The task of reading comprehension , where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years, so much that the most popular datasets available for this task have been solved (Chen et al., 2016; Devlin et al., 2019).", "We introduce a substantially more challenging English reading comprehension dataset aimed at pushing the field towards more comprehensive analysis of paragraphs of text.", "In Work done as an intern at the Allen Institute for Artificial Intelligence in Irvine, California.", "this new benchmark, which we call DROP, a system is given a paragraph and a question and must perform some kind of D iscrete R easoning O ver the text in the P aragraph to obtain the correct answer.", "These questions that require discrete reasoning (such as addition, sorting, or counting; see Table 1) are inspired by the complex, compositional questions commonly found in the semantic parsing literature.", "We focus on this type of questions because they force a structured analysis of the content of the paragraph that is detailed enough to permit reasoning.", "Our goal is to further paragraph understanding ; complex questions allow us to test a system's understanding of the paragraph's semantics.", "DROP is also designed to further research on methods that combine distributed representations with symbolic, discrete reasoning.", "In order to do well on this dataset, a system must be able to find multiple occurrences of an event described in a question (presumably using some kind of soft matching), extract arguments from the events, then perform a numerical operation such as a sort, to answer a question like Who threw the longest touchdown pass? 
.", "We constructed this dataset through crowdsourc-ing, first collecting passages from Wikipedia that are easy to ask hard questions about, then encouraging crowd workers to produce challenging questions.", "This encouragement was partially through instructions given to workers, and partially through the use of an adversarial baseline : we ran a baseline reading comprehension method (BiDAF) (Seo et al., 2017) in the background as crowd workers were writing questions, requiring them to give questions that the baseline system could not correctly answer.", "This resulted in a dataset of 96,567 questions from a variety of categories in Wikipedia, with a particular emphasis on sports game summaries and history passages.", "The answers to the questions are required to be spans in the passage or question, numbers, or dates, which allows for easy and accurate evaluation metrics.", "We present an analysis of the resulting dataset to show what phenomena are present.", "We find that many questions combine complex question semantics with SQuAD-style argument finding; e.g., in the first question in Table 1, BiDAF correctly finds the amount the painting sold for, but does not understand the question semantics and cannot perform the numerical reasoning required to answer the question.", "Other questions, such as the fifth question in Table 1, require finding all events in the passage that match a description in the question, then aggregating them somehow (in this instance, by counting them and then performing an argmax).", "Very often entity coreference is required.", "Table 1 gives a number of different phenomena, with their proportions in the dataset.", "We used three types of systems to judge baseline performance on DROP: (1) heuristic baselines, to check for biases in the data; (2) SQuAD-style reading comprehension methods; and (3) semantic parsers operating on a pipelined analysis of the passage.", "The reading comprehension methods perform the best, with our best baseline achieving 32.7% F 1 on our generalized accuracy metric, while expert human performance is 96.4%.", "Finally, we contribute a new model for this task that combines limited numerical reasoning with standard reading comprehension methods, allowing the model to answer questions involving counting, addition and subtraction.", "This model reaches 47% F 1 , a 14.3% absolute increase over the best baseline system.", "The dataset, code for the baseline systems, and a leaderboard with a hidden test set can be found at https://allennlp.org/drop .", "Question answering datasets With systems reaching human performance on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), many follow-on tasks are currently being proposed.", "All of these datasets throw in additional complexities to the reading comprehension challenge, around tracking conversational state (Reddy et al., 2019; Choi et al., 2018), requiring passage retrieval (Joshi et al., 2017; Yang et al., 2018; Talmor and Berant, 2018), mismatched passages and questions (Saha et al., 2018; Kocisky et al., 2018; Rajpurkar et al., 2018), integrating knowledge from external sources (Mihaylov et al., 2018; Zhang et al., 2019), tracking entity state changes (Mishra et al., 2018; Ostermann et al., 2018) or a particular kind of multi-step reasoning over multiple documents (Welbl et al., 2018; Khashabi et al., 2018).", "Similar facets are explored in medical domain datasets (Pampari et al., 2018; Suster and Daele-mans, 2018) which contain automatically generated queries on 
medical records based on predefined templates.", "We applaud these efforts, which offer good avenues to study these additional phenomena.", "However, we are concerned with paragraph understanding, which on its own is far from solved, so DROP has none of these additional complexities.", "It consists of single passages of text paired with independent questions, with only linguistic facility required to answer the questions.", "[Footnote 1: Some questions in our dataset require limited sports domain knowledge to answer; we expect that there are enough such questions that systems can reasonably learn this knowledge from the data.]", "One could argue that we are adding numerical reasoning as an additional complexity, and this is true; however, it is only simple reasoning that is relatively well-understood in the semantic parsing literature, and we use it as a necessary means to force more comprehensive passage understanding.", "Many existing algebra word problem datasets also contain phenomena similar to what is in DROP (Koncel-Kedziorski et al., 2015; Kushman et al., 2014; Hosseini et al., 2014; Clark et al., 2016; Ling et al., 2017).", "Our dataset is different in that it uses much longer contexts, is more open-domain, and requires deeper paragraph understanding.", "Semantic parsing: The semantic parsing literature has a long history of trying to understand complex, compositional question semantics in terms of some grounded knowledge base or other environment (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Berant et al., 2013a, inter alia).", "It is this literature that we modeled our questions on, particularly looking at the questions in the WikiTableQuestions dataset (Pasupat and Liang, 2015).", "If we had a structured, tabular representation of the content of our paragraphs, DROP would be largely the same as WikiTableQuestions, with similar (possibly even simpler) question semantics.", "Our novelty is that we are the first to combine these complex questions with paragraph understanding, with the aim of encouraging systems that can produce comprehensive structural analyses of paragraphs, either explicitly or implicitly.", "Our dataset construction also follows a recent trend in creating datasets with adversarial baselines in the loop (Paperno et al., 2016; Minervini and Riedel, 2018; Zellers et al., 2018; Zhang et al., 2019; Zellers et al., 2019).", "In our case, instead of using an adversarial baseline to filter automatically generated examples, we use it in a crowdsourcing task, to teach crowd workers to avoid easy questions, raising the difficulty level of the questions they provide.", "Finally, DROP is meant to encourage research on methods that combine neural methods with discrete, symbolic reasoning.", "We present one such model in Section 6.", "Other related work along these lines has been done by Reed and de Freitas (2016), Neelakantan et al. (2016), and Liang et al.
(2017).", "In this section, we describe our annotation protocol, which consists of three phases.", "First, we automatically extract passages from Wikipedia which are expected to be amenable to complex questions.", "Second, we crowdsource question-answer pairs on these passages, eliciting questions which require discrete reasoning.", "Passage extraction We searched Wikipedia for passages that had a narrative sequence of events, particularly with a high proportion of numbers, as our initial pilots indicated that these passages were the easiest to ask complex questions about.", "We found that National Football League (NFL) game summaries and history articles were particularly promising, and we additionally sampled from any Wikipedia passage that contained at least twenty numbers.", "2 This process yielded a collection of about 7,000 passages.", "Question collection We used Amazon Mechanical Turk 3 to crowdsource the collection of question-answer pairs, where each question could be answered in the context of a single Wikipedia passage.", "In order to allow some flexibility during the annotation process, in each human intelligence task (HIT) workers were presented with a random sample of 5 of our Wikipedia passages, and were asked to produce a total of at least 12 question-answer pairs on any of these.", "We presented workers with example questions from five main categories, inspired by questions from the semantic parsing literature (addi-tion/subtraction, minimum/maximum, counting, selection and comparison; see examples in Table 1), to elicit questions that require complex linguistic understanding and discrete reasoning.", "In addition, to further increase the difficulty of the questions in DROP, we employed a novel adverserial annotation setting, where workers were only allowed to submit questions which a real-time QA model BiDAF could not solve.", "4 Next, each worker answered their own question with one of three answer types: spans of text from either question or passage, a date (which was common in history and open-domain text) and numbers, allowed only for questions which explicitly stated a specific unit of measurement (e.g., How many yards did Brady run?), in an attempt to simplify the evaluation process.", "2 We used an October 2018 Wikipedia dump, as well as scraping of online Wikipedia.", "3 www.mturk.com 4 While BiDAF is no longer state-of-the-art, performance is reasonable and the AllenNLP implementation (Gardner et al., 2017) made it the easiest to deploy as a server.", "workers and gradually reduced our worker pool to workers who understood the task and annotated it well.", "Each HIT paid 5 USD and could be completed within 30 minutes, compensating a trained worker with an average pay of 10 USD/ hour.", "Overall, we collected a total of 96,567 question-answer pairs with a total Mechanical Turk budget of 60k USD (including validation).", "The dataset was randomly partitioned by passage into training (80%), development (10%) and test (10%) sets, so all questions about a particular passage belong to only one of the splits.", "Validation In order to test inter-annotator agreement and to improve the quality of evaluation against DROP, we collected at least two additional answers for each question in the development and test sets.", "In a separate HIT, workers were given context passages and a previously crowdsourced question, and were asked to either answer the question or mark it as invalid (this occurred for 0.7% of the data, which we subsequently filtered out).", "We found that the 
resulting inter-annotator agreement was good and on par with other QA tasks; overall Cohen's κ was 0.74, with 0.81 for numbers, 0.62 for spans, and 0.65 for dates.", "In the following, we quantitatively analyze properties of the passages, questions, and answers in DROP.", "Different statistics of the dataset are depicted in Table 2.", "Notably, questions have a diverse vocabulary of around 30k different words in our training set.", "Question analysis: To assess the question type distribution, we sampled 350 questions from the training and development sets and manually annotated the categories of discrete operations required to answer each question.", "Table 1 shows the distribution of these categories in the dataset.", "In addition, to get a better sense of the lexical diversity of the questions in the dataset, we find the most frequent trigram patterns in the questions per answer type.", "[Table 3: Distribution of answer types in the training set, according to automatic named entity recognition - NUMBER: 66.1% (e.g., 12); PERSON: 12.2% (e.g., Jerry Porter); OTHER: 9.4% (e.g., males); OTHER ENTITIES: 7.3% (e.g., Seahawks); VERB PHRASE: 3.5% (e.g., Tom arrived at Acre); DATE: 1.5% (e.g., 3 March 1992).]", "We find that the dataset offers a huge variety of linguistic constructs, with the most frequent pattern (Which team scored) appearing in only 4% of the span-type questions.", "For number-type questions, the 5 most frequent question patterns all start with How many, indicating the need to perform counting and other arithmetic operations.", "A distribution of the trigrams containing the start of the questions is shown in Figure 1.", "Answer analysis: To discern the level of passage understanding needed to answer the questions in DROP, we annotate the set of spans in the passage that are necessary for answering the 350 questions mentioned above.", "We find that on average 2.18 spans need to be considered to answer a question and that the average distance between these spans is 26 words, with 20% of samples needing at least 3 spans (see the appendix for examples).", "Finally, we assess the answer distribution in Table 3, by running the part-of-speech tagger and named entity recognizer from spaCy [Footnote 5: https://spacy.io/] to automatically partition all the answers into various categories.", "We find that a majority of the answers are numerical values and proper nouns.", "In this section we describe the initial baselines that we evaluated on the DROP dataset.", "We used three types of baselines: state-of-the-art semantic parsers (§5.1), state-of-the-art reading comprehension models (§5.2), and heuristics looking for annotation artifacts (§5.3).", "We use two evaluation metrics to compare model performance: Exact-Match, and a numeracy-focused (macro-averaged) F1 score, which measures overlap between a bag-of-words representation of the gold and predicted answers.", "We employ the same implementation of Exact-Match accuracy as used by SQuAD, which removes articles and does other simple normalization, and our F1 score is based on that used by SQuAD.", "Since DROP is numeracy-focused, we define F1 to be 0 when there is a number mismatch between the gold and predicted answers, regardless of other word overlap.", "When an answer has multiple spans, we first perform a one-to-one alignment greedily based on bag-of-words overlap on the set of spans and then compute the average F1 over the spans.", "When there are multiple annotated answers, both metrics take a max over all gold answers.",
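A simplified sketch of the numeracy-focused F1 described above: bag-of-words overlap that is zeroed on a number mismatch, with a max over multiple gold answers. This omits the official script's normalization and greedy multi-span alignment, and the tokenization is deliberately naive.

```python
from collections import Counter

def _is_number(tok: str) -> bool:
    try:
        float(tok)
        return True
    except ValueError:
        return False

def bow_f1(pred: str, gold: str) -> float:
    p, g = pred.lower().split(), gold.lower().split()
    # Numeracy focus: any mismatch in the number tokens forces F1 = 0.
    if {t for t in p if _is_number(t)} != {t for t in g if _is_number(t)}:
        return 0.0
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def drop_f1(pred: str, golds: list) -> float:
    # With multiple annotated answers, take the max over gold answers.
    return max(bow_f1(pred, g) for g in golds)

print(drop_f1("36 yards", ["36 yards", "36"]))  # 1.0
print(drop_f1("35 yards", ["36 yards"]))        # 0.0 (number mismatch)
```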
"Semantic parsing has been used to translate natural language utterances into formal executable languages (e.g., SQL) that can perform discrete operations against a structured knowledge representation, such as knowledge graphs or tabular databases (Zettlemoyer and Collins, 2005; Berant et al., 2013b; Yin and Neubig, 2017; Chen and Mooney, 2011, inter alia).", "Since many of DROP's questions require similar discrete reasoning, it is appealing to port some of the successful work in semantic parsing to the DROP dataset.", "Specifically, we use the grammar-constrained semantic parsing model built by Krishnamurthy et al. (2017) (KDG) for the WikiTableQuestions tabular dataset (Pasupat and Liang, 2015).", "Sentence representation schemes: We experimented with three paradigms to represent paragraphs as structured contexts: (1) Stanford dependencies (de Marneffe and Manning, 2008, Syn Dep), which capture word-level syntactic relations; (2) Open Information Extraction (Banko et al., 2007, Open IE), a shallow semantic representation which directly links predicates and arguments; and (3) Semantic Role Labeling (Carreras and Màrquez, 2005, SRL), which disambiguates senses for polysemous predicates and assigns predicate-specific argument roles.", "[Footnote 6: We used the AllenNLP implementations of state-of-the-art models for all of these representations (Gardner et al., 2017; Dozat et al., 2017; He et al., 2017; Stanovsky et al., 2018).]", "To adhere to KDG's structured representation format, we convert each of these representations into a table, where rows are predicate-argument structures and columns correspond to different argument roles.", "Logical form language: Our logical form language identifies five basic elements in the table representation: predicate-argument structures (i.e., table rows), relations (column headers), strings, numbers, and dates.", "In addition, it defines functions that operate on these elements, such as counters and filters.", "Following Krishnamurthy et al.
(2017), we use the argument and return types of these functions to automatically induce a grammar to constrain the parser.", "We also add context-specific rules to produce strings occurring in both question and paragraph, and those paragraph strings that are neighbors of question tokens in the GloVe embedding space (Pennington et al., 2014), up to a cosine distance of d.", "(Footnote 8: d = 0.3 was manually tuned on the development set.)", "The complete set of functions used in our language and their induced grammar can be found in the code release.", "Training and inference During training, the KDG parser maximizes the marginal likelihood of a set of (possibly spurious) question logical forms that evaluate to the correct answer.", "We obtain this set by performing an exhaustive search over the grammar up to a preset tree depth.", "At test time, we use beam search to produce the most likely logical form, which is then executed to predict an answer.", "We test four different SQuAD-style reading comprehension models on DROP: (1) BiDAF (Seo et al., 2017), which is the adversarial baseline we used in data construction (66.8% EM on SQuAD 1.1); (2) QANet (Yu et al., 2018), currently the best-performing published model on SQuAD 1.1 without data augmentation or pretraining (72.7% EM); (3) QANet + ELMo, which enhances the QANet model by concatenating pre-trained ELMo representations (Peters et al., 2018) to the original embeddings (78.7% EM); (4) BERT (Devlin et al., 2019), which recently achieved improvements on many NLP tasks with a novel pretraining technique (84.7% EM).", "(Footnote 9: The first three scores are based on our own implementation, while the score for BERT is based on an open-source implementation from Hugging Face: https://github.com/huggingface/pytorch-pretrained-bert)", "These models require a few minor adaptations when training on DROP.", "While SQuAD provides answer indices in the passage, our dataset only provides the answer strings.", "To address this, we use the marginal likelihood objective function proposed by Clark and Gardner (2018), which sums over the probabilities of all the matching spans.", "(Footnote 10: For the black-box BERT model, we convert DROP to SQuAD format by using the first match as the gold span.)", "We also omitted the training questions which cannot be answered by a span in the passage (45%), and therefore cannot be represented by these systems.", "For the BiDAF baseline, we use the implementation in AllenNLP but change it to use the marginal objective.", "For the QANet model, our settings differ from the original paper only in the batch size (16 vs. 32) and the number of blocks in the modeling layer (6 vs.
7) due to the GPU memory limit.", "We adopt the ELMo representations trained on the 5.5B-token corpus for the QANet+ELMo baseline and the large uncased BERT model for the BERT baseline.", "The hyper-parameters for our NAQANet model (§6) are the same as for the QANet baseline.", "A recent line of work (Gururangan et al., 2018; Kaushik and Lipton, 2018) has identified that popular crowdsourced NLP datasets (such as SQuAD (Rajpurkar et al., 2016) or SNLI (Bowman et al., 2015)) are prone to have artifacts and annotation biases which can be exploited by supervised algorithms that learn to pick up these artifacts as signal instead of more meaningful semantic features.", "We estimate artifacts by training the QANet model described in Section 5.2 on a version of DROP where either the question or the paragraph input representation vectors are zeroed out (question-only and paragraph-only, respectively).", "Consequently, the resulting models can then only predict answer spans from either the question or the paragraph.", "In addition, we devise a baseline that estimates the answer variance in DROP.", "We start by counting the unigram and bigram answer frequency for each wh question-word in the train set (as the first word in the question).", "The majority baseline then predicts an answer as the set of 3 most common answer spans for the input question word (e.g., for 'when', these were 'quarter', 'end', and 'October').", "DROP is designed to encourage models that combine neural reading comprehension with symbolic reasoning.", "None of the baselines we described in Section 5 can do this.", "As a preliminary attempt toward this goal, we propose a numerically-aware QANet model, NAQANet, which allows the state-of-the-art reading comprehension system to produce three new answer types: (1) spans from the question; (2) counts; (3) addition or subtraction over numbers.", "To predict numbers, the model first predicts whether the answer is a count or an arithmetic expression.", "It then predicts the specific numbers involved in the expression.", "This can be viewed as the neural model producing a partially executed logical form, leaving the final arithmetic to a symbolic system.", "While this model can currently only handle a very limited set of operations, we believe this is a promising approach to combining neural methods and symbolic reasoning.", "Our NAQANet model follows the typical architecture of previous reading comprehension models, which is composed of embedding, encoding, passage-question attention, and output layers.", "We use the original QANet architecture for everything up to the output layer.", "This gives us a question representation $\mathbf{Q} \in \mathbb{R}^{m \times d}$ and a projected question-aware passage representation $\mathbf{P} \in \mathbb{R}^{n \times d}$.", "We have four different output layers, for the four different kinds of answers the model can produce: Passage span As in the original QANet model, to predict an answer in the passage we apply three repetitions of the QANet encoder to the passage representation $\mathbf{P}$ and get their outputs as $\mathbf{M}_0$, $\mathbf{M}_1$, $\mathbf{M}_2$, respectively.", "Then the probabilities of the starting and ending positions from the passage can be computed as: $\mathbf{p}^{p}_{\mathrm{start}} = \mathrm{softmax}(\mathrm{FFN}([\mathbf{M}_0; \mathbf{M}_1]))$ (1), $\mathbf{p}^{p}_{\mathrm{end}} = \mathrm{softmax}(\mathrm{FFN}([\mathbf{M}_0; \mathbf{M}_2]))$ (2), where FFN is a two-layer feed-forward network with the ReLU activation.", "Question span Some questions in DROP have their answer in the question instead of the passage.", "To predict an answer from the question, the model first computes a vector $\mathbf{h}^{P}$ that represents the information it finds in the passage: $\boldsymbol{\alpha}^{P} = \mathrm{softmax}(\mathbf{W}^{P}\mathbf{P})$ (3), $\mathbf{h}^{P} = \boldsymbol{\alpha}^{P}\mathbf{P}$ (4).
Then it computes the probabilities of the starting and ending positions from the question as: $\mathbf{p}^{q}_{\mathrm{start}} = \mathrm{softmax}(\mathrm{FFN}([\mathbf{Q}; \mathbf{e}_{|Q|} \otimes \mathbf{h}^{P}]))$ (5), $\mathbf{p}^{q}_{\mathrm{end}} = \mathrm{softmax}(\mathrm{FFN}([\mathbf{Q}; \mathbf{e}_{|Q|} \otimes \mathbf{h}^{P}]))$ (6), where the outer product with the ones vector ($\mathbf{e}_{|Q|}$) simply repeats $\mathbf{h}^{P}$ for each question word.", "Count We model the capability of counting as a multi-class classification problem.", "Specifically, we consider ten numbers (0-9) in this preliminary model, and the probabilities of choosing these numbers are computed based on the passage vector $\mathbf{h}^{P}$: $\mathbf{p}^{\mathrm{count}} = \mathrm{softmax}(\mathrm{FFN}(\mathbf{h}^{P}))$ (7).", "Arithmetic expression Many questions in DROP require the model to locate multiple numbers in the passage and add or subtract them to get the final answer.", "To model this process, we first extract all the numbers from the passage and then learn to assign a plus, minus, or zero for each number.", "In this way, we get an arithmetic expression composed of signed numbers, which can be evaluated to give the final answer.", "To do this, we first apply another QANet encoder to $\mathbf{M}_2$ and get a new passage representation $\mathbf{M}_3$.", "Then we select an index over the concatenation of $\mathbf{M}_0$ and $\mathbf{M}_3$, to get a representation for each number in this passage.", "The $i$-th number can be represented as $\mathbf{h}^{N}_{i}$ and the probabilities of this number being assigned a plus, minus, or zero are: $\mathbf{p}^{\mathrm{sign}}_{i} = \mathrm{softmax}(\mathrm{FFN}(\mathbf{h}^{N}_{i}))$ (8).", "Answer type prediction We use a categorical variable to decide between the above four answer types, with probabilities computed as: $\mathbf{p}^{\mathrm{type}} = \mathrm{softmax}(\mathrm{FFN}([\mathbf{h}^{P}; \mathbf{h}^{Q}]))$ (9), where $\mathbf{h}^{Q}$ is computed over $\mathbf{Q}$, in a similar way as we did for $\mathbf{h}^{P}$.", "At test time, we first determine this answer type greedily and then get the best answer from the selected type.", "For supervision, DROP contains only the answer string, not which of the above answer types is used to arrive at the answer.", "To train our model, we adopt the weakly supervised training method widely used in the semantic parsing literature (Berant et al., 2013a).", "We find all executions that evaluate to the correct answer, including matching passage spans and question spans, correct count numbers, as well as sign assignments for numbers (a small sketch of this sign search follows the label list below).", "Our training objective is then to maximize the marginal likelihood of these executions.", "(Footnote 11: Due to the exponential search space and the possible noise, we only search the addition/subtraction of two numbers. Given this limited search space, the search and marginalization are exact.)", "7 Results and Discussion The performance of all tested models on the DROP dataset is presented in Table 4.", "Most notably, all models perform significantly worse than on other prominent reading comprehension datasets, while human performance remains at similarly high levels.", "(Footnote 12: Human performance was estimated by the authors collectively answering 560 questions from the test set, which were then evaluated using the same metric as learned systems. This is in contrast to holding out one gold annotation and evaluating it against the other annotations, as done in prior work, which underestimates human performance relative to systems.)", "For example, BERT, the current state-of-the-art on SQuAD, drops by more than 50 absolute F1 points.", "This is a positive indication that DROP is indeed a challenging reading comprehension dataset, which opens the door for tackling new and complex reasoning problems on a large scale.", "The best performance is obtained by our NAQANet model.", "Table 6 shows that our gains are obtained on the challenging and frequent number answer type, which requires various complex types of reasoning.", "Future work may also try combining our model with BERT.", "Furthermore, we find that all heuristic baselines do poorly on our data, hopefully attesting to relatively small biases in DROP.", "Difficulties of building semantic parsers We see
that all the semantic parsing baselines perform quite poorly on DROP.", "This is mainly because our pipeline of extracting tabular information from paragraphs, followed by the denotation-driven logical form search, can yield logical forms only for a subset of the training data.", "For the SRL and syntactic dependency sentence representation schemes, the search was able to yield logical forms for 34% of the training data, whereas with Open IE, it was only 25%.", "On closer examination of a sample of 60 questions and the information extracted by the SRL scheme (the best performing of the three), we found that only 25% of the resulting tables contained information needed to answer the questions.", "These observations show that high-quality information extraction is a strong prerequisite for building semantic parsers for DROP.", "Additionally, the fact that this is a weakly supervised semantic parsing problem also makes training hard.", "The biggest challenge in this setup is the spuriousness of logical forms used for training, where the logical form evaluates to the correct denotation but does not actually reflect the semantics of the question.", "This makes it hard for the model trained on these spurious logical forms to generalize to unseen data.", "From the set of logical forms for a sample of 60 questions analyzed, we found that only 8 questions (13%) contained non-spurious logical forms.", "Error Analysis Finally, in order to better understand the outstanding challenges in DROP, we conducted an error analysis on a random sample of 100 erroneous NAQANet predictions.", "The most common errors were on questions which required complex types of reasoning, such as arithmetic operations (evident in 51% of the errors), counting (30%), domain knowledge and common sense (23%), co-reference (6%), or a combination of different types of reasoning (40%).", "See Table 5 for examples of some of the common phenomena.", "We have presented DROP, a dataset of complex reading comprehension questions that require Discrete Reasoning Over Paragraphs.", "This dataset is substantially more challenging than existing datasets, with the best baseline achieving only 32.7% F1, while humans achieve 96%.", "We hope this dataset will spur research into more comprehensive analysis of paragraphs, and into methods that combine distributed representations with symbolic reasoning.", "We have additionally presented initial work in this direction, with a model that augments QANet with limited numerical reasoning capability, achieving 47% F1 on DROP.", "We would like to thank Noah Smith, Yoav Goldberg, and Jonathan Berant for insightful discussions that informed the direction of this work.", "The computations on beaker.org were supported in part by credits from Google Cloud." ]
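For concreteness, the following is a minimal Python sketch of the numeracy-focused F1 described above (F1 is 0 on a number mismatch, multi-span answers are greedily aligned by bag-of-words overlap, and span scores are averaged). The normalization and the denominator used when gold and predicted span counts differ are our assumptions; this is not the official DROP evaluator.

```python
import re
from collections import Counter

def normalize(text):
    # Lowercase, drop articles and punctuation (rough SQuAD-style normalization).
    text = re.sub(r"\b(a|an|the)\b", " ", text.lower())
    return re.sub(r"[^a-z0-9. ]", " ", text).split()

def bag_f1(gold_span, pred_span):
    # Bag-of-words F1 between one gold span and one predicted span;
    # per the paper, the score is 0 whenever the contained numbers disagree.
    g, p = normalize(gold_span), normalize(pred_span)
    if {t for t in g if t.replace(".", "", 1).isdigit()} != \
       {t for t in p if t.replace(".", "", 1).isdigit()}:
        return 0.0
    overlap = sum((Counter(g) & Counter(p)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def drop_f1(gold_spans, pred_spans):
    # Greedy one-to-one alignment of multi-span answers, then average span F1.
    scores, remaining = [], list(pred_spans)
    for gold in gold_spans:
        if not remaining:
            scores.append(0.0)
            continue
        best = max(remaining, key=lambda pred: bag_f1(gold, pred))
        scores.append(bag_f1(gold, best))
        remaining.remove(best)
    # Assumption: divide by the larger span count to penalize extra predictions.
    return sum(scores) / max(len(gold_spans), len(pred_spans))

# With multiple gold annotations, both metrics take the max over golds:
golds = [["Jerry Porter"], ["J. Porter"]]
print(max(drop_f1(g, ["Jerry Porter"]) for g in golds))  # 1.0
```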
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "abstain", "objective", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "method", "result", "objective", "abstain", "other", "other", "other", "other", "method", "abstain", "other", "method", "other", "method", "other", "abstain", "abstain", "objective", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "objective", "other", "other" ]
[ "In computational psycholinguistics, various language models have been evaluated against human reading behavior (e.g., eye movement) to build human-like computational models.", "However, most previous efforts have focused almost exclusively on English, despite the recent trend towards linguistic universal within the general community.", "In order to fill the gap, this paper investigates whether the established results in computational psycholinguistics can be generalized across languages.", "Specifically, we re-examine an established generalization the lower perplexity a language model has, the more human-like the language model is in Japanese with typologically different structures from English.", "Our experiments demonstrate that this established generalization exhibits a surprising lack of universality; namely, lower perplexity is not always human-like.", "Moreover, this discrepancy between English and Japanese is further explored from the perspective of (non-)uniform information density.", "Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.", "It is well known that the probability of a word in context (i.e., surprisal) impacts its processing difficulty in incremental human language comprehension (Hale, 2001; Demberg and Keller, 2008; Levy, 2008; Smith and Levy, 2013).", "Building on this basis, researchers have compared a variety of language models (LMs) in terms of how well their surprisal correlates with human reading behavior (Roark et al., 2009; Frank and Bod, 2011; Fossum and Levy, 2012; Hale et al., 2018; Goodkind and Bicknell, 2018; Aurnhammer and Frank, 2019; Merkx and Frank, 2020; Wilcox et al., 2020).", "Such investigations could provide insights into the development of a general computational model of human language processing.", "For example, recent studies reported that LMs with better performance for next-word prediction could also better predict the human reading behavior (i.e. 
more human-like) (Fossum and Levy, 2012; Goodkind and Bicknell, 2018; Wilcox et al., 2020).", "In this paper, we re-examine whether the recent findings on human-like computational models can be generalized across languages.", "Despite the community's ongoing search for a language-independent model (Bender, 2011), existing studies have focused almost exclusively on the English language.", "Having said that, broad-coverage cross-linguistic evaluation of the existing reports is prohibitively difficult.", "In fact, data on human reading behavior (e.g., eye movement) is available only for a limited number of languages.", "As an initial foray, this study focuses on the Japanese language as a representative of languages that have typologically different characteristics from the English language.", "If the observation is different between English and Japanese, the current findings on English data might lack universality across languages.", "We specifically revisit the recent report, 'the lower perplexity a LM has, the more human-like the LM is', in the English and Japanese languages (Fossum and Levy, 2012; Goodkind and Bicknell, 2018; Wilcox et al., 2020).", "In addition to the importance of cross-linguistic evaluation, the report itself is worth investigating.", "Recent studies in the machine learning field have reported that more parameters, training data, and computation cost can result in better PPL (Kaplan et al., 2020; Brown et al., 2020).", "Our investigation has implications for whether a human-like model might exist beyond such improvements.", "More concretely, over three dozen LMs were trained for each language, with variants in their architecture, training data size, and the number of parameter updates.", "(Figure 1: Gaze duration from human subjects and surprisal from language models for the Japanese sentence Yononakaniwa samazamana hitoga irutoiu kotoga yoku wakatta, 'I understood well that there are all kinds of people in the world.
')", "Then, the surprisals computed by each LM were compared to human eye movement data (Figure 1).", "The analysis of the relationship between PPL and the psychometric predictive power revealed substantively different trends between the Japanese and English LMs.", "In Japanese, a lower PPL of a LM does not indicate better performance for modeling reading behavior.", "By contrast, in English, there was a clear relationship between the two metrics as reported in the prior studies.", "This opens a remaining and important question: why are English and Japanese different in this aspect?", "We discuss the differing results between English and Japanese from the perspective of the uniform information density hypothesis (Genzel and Charniak, 2002; Levy, 2005; Jaeger and Levy, 2007).", "We find that the processing difficulty (i.e., gaze duration) of segments is less uniformly distributed within a Japanese sentence.", "Given this, the discrepancy of the results between English and Japanese might stem from a mismatch between the information uniformity of the target language and the LM's training objective.", "We demonstrate that tuning Japanese LMs to this training objective collapses the human-like nonuniformity of the processing difficulty observed in Japanese subjects.", "Our code is made publicly available.", "2 Related work 2.1 Human sentence processing and LMs What factor determines the incremental difficulty of human language processing?", "At present, surprisal theory (Hale, 2001; Levy, 2008) has been widely adopted in the field of computational psycholinguistics.", "This theory suggests that the processing difficulty of a segment is determined by how predictable the segment is in its preceding context ($-\log p(\mathrm{segment} \mid \mathrm{preceding\ context})$).", "Existing studies have compared various computational models by checking the effectiveness of their surprisals in modeling human reading behavior (Hale, 2001; Roark et al., 2009; Frank and Bod, 2011; Fossum and Levy, 2012; Hale et al., 2018; Goodkind and Bicknell, 2018; Merkx and Frank, 2020; Wilcox et al., 2020).", "Data such as eye movement (Kennedy et al., 2003) and brain activity (Frank et al., 2015; Brennan et al., 2016) are used as measures of human reading behavior.", "For example, using eye movement data, Frank and Bod (2011) compared the surprisals from phrase-structure grammars (PSGs) with those from a non-hierarchical, sequential model, tentatively concluding that human sentence processing was insensitive to hierarchical structures since non-hierarchical models displayed better psychological predictive power than PSGs.", "Recently, researchers reported that surprisals from LMs with low PPL correlate well with human reading behaviors (Fossum and Levy, 2012; Goodkind and Bicknell, 2018; Aurnhammer and Frank, 2019; Wilcox et al., 2020).", "The work most closely related to this study is Wilcox et al.
(2020).", "They examined the relationship between PPL, psychometric predictive power, and syntactic knowledge in LMs using a variety of models, including modern neural LMs (Radford et al., 2018).", "They found a tight relationship between PPL and psychometric predictive power in the English corpora.", "This study investigates whether this relationship can be generalized across languages.", "In comparison to English speakers, Japanese speakers display different patterns in sentence processing.", "For example, an anti-locality effect (the more modifiers a word has in its preceding context, the easier the word is to process) has typically been observed in head-final languages, including Japanese (Konieczny, 2000).", "Such differences between the languages are assumed to be more or less due to their different sentence structures.", "Recently, eye movement data for naturally occurring Japanese texts have become available (Asahara et al., 2016) and were extensively annotated with various linguistic properties (Asahara and Kato, 2017; Asahara, 2017, 2018).", "This section describes the settings of LMs, eye movement data, and evaluation metrics.", "A variety of sentence-level, left-to-right sequential LMs was used.", "Training data of English LMs: We used the WikiText-103 dataset to train the English LMs.", "Based on the reports that subword-level English LMs exhibit superior psychometric predictive power (Wilcox et al., 2020), input texts were divided into subwords by byte-pair encoding (BPE) (Sennrich et al., 2016).", "The training data consist of approximately 4M sentences (114M subword units).", "Training data of Japanese LMs: We used news articles and the Japanese part of Wikipedia to train the Japanese LMs.", "Input texts were first segmented into morphemes by MeCab (Kudo, 2006) with the UniDic dictionary, and then further divided into subwords by BPE.", "The training data consist of approximately 5M sentences (146M subword units).", "Architectures: The following four variants of LMs were used: Transformer-large (TRANS-LG) (Vaswani et al., 2017), Transformer-small (TRANS-SM), LSTM (LSTM) (Hochreiter and Schmidhuber, 1997), and N-gram LMs (N-GRAM).", "The parameter size was almost the same for TRANS-SM and LSTM.", "With respect to the N-GRAM models, 3-gram, 4-gram, and 5-gram LMs were used.", "Appendix A shows the hyperparameters of the neural LMs.", "(Footnote 2: Implemented in SentencePiece (Kudo and Richardson, 2018). We set character coverage to 0.9995 and vocabulary size to 32,000 in English. In Japanese, the vocabulary size is 100,000, reflecting its rich morphemes.)", "(Footnote 3: The neural LMs were trained with the fairseq toolkit (Ott et al., 2019). N-GRAM LMs were trained using KenLM: https://github.com/kpu/kenlm)", "Training data size: For each neural LM architecture (TRANS-LG, TRANS-SM, and LSTM), three variants were trained using different training data sizes: LG (full training data), MD (1/10 training data), and SM (1/100 training data).", "The N-gram LMs were trained on the LG datasets.", "Number of updates: The parameters of each neural LM were saved at four different points during training: 100, 1K, 10K, and 100K parameter updates.", "To summarize, 39 LM training settings were attained for each language (3 architectures × 3 data sizes × 4 parameter update counts = 36 neural LMs, plus 3 N-GRAM LMs).", "In addition, our experiments use three LMs trained using different random seeds for each neural LM training configuration; hence, 111 LMs (36 neural LMs × 3 seeds, plus 3 N-GRAM LMs) were
tested for each language.", "English: The Dundee Corpus (Kennedy et al., 2003), which contains gaze duration annotation for each word, was used.", "Following Smith and Levy (2013), the first-pass gaze duration was analyzed.", "Then, following Goodkind and Bicknell (2018), the data points that met any of the following criteria were excluded: (i) data points with zero gaze duration or beyond three standard deviations; (ii) segments with punctuation or numeric characters; (iii) segments whose next segment has punctuation or numeric characters; and (iv) the first or last segment in a line.", "In total, the analysis included 214,955 data points in the corpus.", "Japanese: The BCCWJ-EyeTrack (Asahara et al., 2016), which contains gaze duration annotation for each phrasal unit, was used.", "Note that the phrasal unit (i.e., bunsetsu) consists of at least one content morpheme and its postpositional function morphemes.", "Henceforth, an English word and a Japanese phrasal unit are referred to as a segment.", "The same exclusion criteria as for the Dundee Corpus were applied to the BCCWJ-EyeTrack data.", "(Footnote 4: Strictly speaking, the exclusion criteria were slightly different between the Japanese and English data. In the Japanese data, we included the segments whose next segment had punctuation or a numeric character, as there is no spillover effect in Japanese (see Section 3.3).)", "In total, the analysis included 6,009 data points in the corpus.", "Note that the BCCWJ-EyeTrack data was deliberately designed to address language-specific issues in Japanese such as the lack of segmentation spaces in Japanese texts (Asahara et al., 2016).", "Statistics: Table 1 shows the statistics of the Dundee Corpus and BCCWJ-EyeTrack data.", "The BCCWJ-EyeTrack has more than 10 times fewer data points than the Dundee Corpus.", "Notably, the segment annotated with eye movement information differs between English and Japanese.", "On average, a Japanese segment consists of 3.4 subwords, while an English segment consists of 1.3 subwords.", "Smith and Levy (2013) theoretically proved that the more fragments a word is divided into when computing its surprisal, the better the calculated surprisal approximates the human cognitive effort if the human language processing is highly incremental.", "Thus, we tentatively consider that this difference did not have a negative impact on the results using the Japanese data.", "Perplexity (PPL): PPL, the inverse geometric mean of next-word probabilities $p(w_i \mid w_{<i})$ in a text that consists of N signals ($w_1, w_2, \ldots, w_N$), is a typical evaluation metric for unidirectional LMs (Eq.
1):", "Low PPL indicates that the model can accurately predict the upcoming signal based on its preceding context.", "The training objective of LMs works to minimize the PPL computed by the model.", "In the experiments, the PPL of a LM is evaluated with the texts in the eye movement data, which do not overlap with the training data.", "A model with low PPL is also called a linguistically accurate model (Frank and Bod, 2011).", "Psychometric predictive power: The surprisal measure, a negative logarithmic probability of a segment in context ( log p (segment | preceding context) ), is a widely used information-theoretic complexity metric.", "Intuitively, a model is considered to have high psychometric predictive power (i.e., psychological accuracy ) if the surprisals of segments computed by the model have trends similar to the human subject's cognitive load (e.g., measured by gaze duration).", "Following the existing studies (Goodkind and Bicknell, 2018; Merkx and Frank, 2020; Wilcox et al., 2020), the psychometric predictive power of a model was measured by comparing surprisal from the model and gaze duration from human subjects.", "While LMs process a text subword-by-subword, gaze duration is annotated in a larger segment.", "Following the study using subwords (Wilcox et al., 2020), the surprisal of each segment was calculated using the joint probability of its constituent subwords.", "Formally, given a text consisting of N subwords w 1: N = ( w 1 , w 2 , , w N ) , surprisal I ( ) of a segment s k = ( w l , w l +1 , , w m ) , where 1 l m N , was calculated as follows: I ( s k ) = log p ( w l , , w m | w <l ) = m (cid:88) k = l log p ( w k | w 1 , , w k 1 ) .", "(2) The effect of surprisals for modeling human reading behavior was calculated using a linear mixed-effects regression (Bates et al., 2015).", "Specifically, the gaze duration ( GD ) was modeled using the following formula: Trans-smLSTMTrans-lg Model 100000 Number of updates 10000 1000 100 Data size LG MD SM + N-gram Figure 2: Relationship between PPL (X-axis) and psychometric predictive power, i.e., LogLik (Y-axis) in the English and Japanese languages.", "GD surprisal + surprisal prev 1 + surprisal prev 2 + freq length + freq prev 1 length prev 1 + screenN + lineN + segmentN + (1|article) + (1|subj) .", "(3) The regression model includes baseline factors (e.g., frequency of a segment) that are of no interest in the comparison of LMs.", "A collection of factors used in the existing studies (Asahara et al., 2016; Wilcox et al., 2020) were initially examined and the factors that were not significant ( p > 0 . 05 ) for gaze duration modeling both in the Dundee Corpus and BCCWJ-EyeTrack were excluded.", "The frequency of a segment ( freq ) was calculated using the entire training data for LMs.", "Appendix B shows the details of each factor in Eq.", "3. 
In English experiments, surprisals of preceding words (surprisal_prev1 and surprisal_prev2) were included in order to handle the spillover effect (the processing cost of a certain segment is affected by its preceding segments) (Rayner and Well, 1996; Smith and Levy, 2013).", "In Japanese experiments, the surprisals of preceding words were not included because our preliminary experiment showed that these factors were not significantly effective for modeling gaze duration in the BCCWJ-EyeTrack.", "(Footnote 5: The reason is probably that a Japanese phrasal unit (i.e., bunsetsu) could be a larger unit than an English word.)", "All the regression models used in our experiments converged.", "To isolate the effect of surprisal for gaze duration modeling, a baseline regression model was trained without surprisal information (excluding the surprisal, surprisal_prev1, and surprisal_prev2 terms from Eq. 3).", "Following Wilcox et al. (2020), the mean by-segment difference of log-likelihood between the model using surprisal values (Eq. 3) and the baseline model was calculated.", "Henceforth, this metric is called ΔLogLik.", "When surprisal from a LM is not effective for gaze duration modeling, the ΔLogLik score becomes zero.", "A high ΔLogLik means that the surprisal values obtained by the LM are effective for modeling gaze duration (i.e., the LM has a high psychometric predictive power).", "The relationship between PPL and psychometric predictive power is investigated.", "Furthermore, the relationship is analyzed with respect to the training configurations of LMs (e.g., the number of parameter updates).", "Then, we discuss the results from the perspective of the uniformity of information density.", "Figure 2 shows the relationship between PPL and psychometric predictive power (i.e., ΔLogLik) of LMs in each of the languages.", "(Figure 3: Separate effect of model architecture, training data size, and the number of parameter updates on LMs' psychometric predictive power in each language; panels show the BCCWJ-EyeTrack (Japanese) and the Dundee Corpus (English), with psychometric predictive power on the Y-axis.)", "Each point corresponds to each LM, and a score on the Y-axis indicates the psychometric predictive power of a LM (higher is better).", "The X-axis is PPL on a log scale (lower is better).", "Dundee Corpus: First, the results of the data from the Dundee Corpus show a clear relationship between PPL and psychometric predictive power; namely, lower PPL corresponds to more psychometric predictive power, as reported by prior studies (Goodkind and Bicknell, 2018; Wilcox et al., 2020).", "Spearman's rank correlation coefficient between the two metrics was -0.87.", "BCCWJ-EyeTrack: By contrast, in BCCWJ-EyeTrack, there was no clear, consistent trend between the PPL and psychometric predictive power.", "While LMs with PPL over 400 show a correlation between PPL and psychometric predictive power (-0.68 with Spearman's ρ), there is a positive correlation (0.
53 with Spearman's ρ) for LMs with PPL below 400.", "The positive correlation means that the more accurately the LMs can predict the upcoming word, the worse the psychometric predictive power of the LMs is.", "These results demonstrate the non-universality of the recent report across languages; lower perplexity is not always human-like.", "The LSTM LM trained using the MD dataset with 1K updates achieved the best psychometric predictive power.", "Notably, surprisal was effective for gaze duration modeling in all the Japanese LMs.", "ΔLogLik scores were significantly higher than zero with the chi-square test (p < 0.05).", "Which factor (e.g., model architecture, training data size, and the number of parameter updates) characterizes the psychometric predictive power of LMs?", "Is the collection of effective factors consistent between the two languages?", "This study takes a more in-depth look at the separate effects of", "(i) model architecture,", "(ii) training data size, and", "(iii) the number of parameter updates for the psychometric predictive power.", "Figure 3 summarizes the effect of each factor, where the Y-axis denotes the psychometric predictive power.", "The most noticeable trend is that Japanese LMs with relatively few parameter updates (1K) have better psychometric predictive power than the other Japanese LMs (bottom right part of Figure 3), while this trend does not exist in the English LMs (top right part).", "This implies that the training objective of the LMs, maximizing $\frac{1}{N}\sum_{i=1}^{N}\log p(w_i \mid w_{<i})$, had a negative impact on the psychometric predictive power of LMs, at least in Japanese.", "We discuss this point in Section 4.3.", "To quantitatively test the differences in Figure 3, a linear regression model was trained to estimate psychometric predictive power with the factors of the model architecture, the training data size, and the parameter update number in each language.", "The training data size and the parameter update number are represented as logarithmically transformed numerical factors.", "The following trends were found:", "(i) ;", "(ii) the training data size positively affects the performance in English alone; and", "(iii) the number of parameter updates positively affects the performance only in English.", "There was no factor that boosted the psychometric predictive power of LMs in both the English and Japanese languages.", "The key question is: why do Japanese and English show different trends between PPL and psychometric predictive power?", "One possible interpretation connecting our results to uniform information density is discussed in this section.", "In computational psycholinguistics, it is commonly assumed that language is designed to enable efficient communication.", "This principle has been typically investigated under the uniform information density (UID) hypothesis (Genzel and Charniak, 2002; Levy, 2005; Jaeger and Levy, 2007).", "This hypothesis suggests that speakers seek to keep the amount of information constant across the signals (e.g., segments).", "Assuming this hypothesis holds for all languages, the reasonable expectation would be for human subjects to show a near-uniform gaze duration across segments regardless of their native language.", "However, this study found that the coefficient of variation in gaze duration over the whole corpus was around 1.7 times higher in Japanese compared to English (0.75 vs.
0.44; a sketch of this statistic follows the label list below).", "Specifically, in Japanese, the gaze duration tended to speed up towards the end of sentences, whereas the duration was near-uniform in English (Figure 4).", "These observations imply that the Japanese language might have a less uniform information density than English.", "This phenomenon was also investigated through the lens of word order, where SOV languages such as Japanese are reported to show less uniformity of information density (Maurits et al., 2010).", "Based on this observation, the discrepancy between English and Japanese low-PPL LMs' psycholinguistic predictive power could stem from a mismatch between the LM's training objective and the information uniformity of the target language.", "(Footnote 6: The coefficient of variation is σ/μ, where σ and μ are the standard deviation and the mean of the first-pass gaze durations in the eye movement data.)", "(Footnote 7: At least in our experimental setup, token position within the sentence was not significantly effective for gaze duration modeling in English sentences, whereas it was significant in Japanese sentences. We checked the coefficient of the factor of position in sentence (segmentN) using the linear regression model GD ~ segmentN.)", "The objective function, $\frac{1}{N}\sum_{i=1}^{N}\log p(w_i \mid w_{<i})$, defines the ideal as maximizing all next-word probabilities to 1.0 (a uniform goal).", "That is, LMs are, in theory, trained to approach a model satisfying the UID assumption (Bloem, 2016), where all surprisals from the LM are equally, sufficiently small across the segments.", "Therefore, the objective function might lead to a worse approximation of human-like surprisal in languages that are further from the UID assumption, such as Japanese, while it might be more compatible with English, which has a more uniform processing difficulty across segments.", "This explanation would be consistent with the observation that more tuning to the LM training objective (i.e., a lower PPL) had a negative impact on the psycholinguistic performance of the Japanese LMs (Section 4.2).", "Note that the tendency of LMs to assign unreasonably high probabilities to segments has also attracted attention from the viewpoint of the memorization capability of LMs (Carlini et al., 2020).", "In addition, the connection of the UID hypothesis to modern NLP techniques has been recently explored (Meister et al., 2020; Wei et al., 2021).", "We further investigate our hypothesis in Section 5.", "This study hypothesized that tuning to the LM objective (i.e., a uniform goal) obscures the nonuniform trend observed in the reading behavior of Japanese subjects.", "We investigated whether the nonuniformity of the processing difficulty observed in human reading time is mirrored by LM surprisals.", "Settings: In a preliminary experiment, we observed that the syntactic category (similar to part-of-speech) was the most dominant linguistic factor for explaining the difference in human gaze duration in Japanese sentences (see Appendix D).", "Based on this observation, we analyze the nonuniformity of surprisals in Japanese LMs with respect to the syntactic categories.", "The segments in BCCWJ-EyeTrack were classified into one of the following syntactic categories:", "(a) nominal (nouns),", "(b) verbal (verbs),", "(c) modifier (adjectives and adverbs), and", "(d) other entries, as in the example: Kanojo-ga (She-NOM; nominal) akai (red; modifier) kaban-o (bag-ACC; nominal) kat-ta (buy-PAST; verbal).", "As Asahara and Kato (2017) reported, verbal and modifier segments have a shorter gaze duration than the other segments
in Japanese sentences.", "An analysis was conducted on how strongly the Japanese LMs' surprisals on segments are influenced by their syntactic category.", "This influence can be evaluated by examining how effectively syntactic category factors can model LM surprisals.", "In this experiment, surprisal was regarded as simulated gaze duration from an LM subject, and the importance of syntactic category information for modeling the simulated gaze duration (simulated_GD) was evaluated.", "To inspect the effect of the syntactic category information for modeling the simulated gaze duration, the following regression model was used, including a factor defining which syntactic category the segment falls into (syn_category): simulated_GD ~ syn_category + sentN + tokenN + freq * length (4).", "From this regression model, a log-likelihood score for the simulated gaze duration was obtained.", "To evaluate the separate effect of syn_category, the ΔLogLik between Eq. 4 and a baseline model was calculated.", "The baseline model was simulated_GD ~ sentN + tokenN + freq * length.", "This ΔLogLik is denoted as the effect of syntactic category.", "A lower score means that the LM lacked the property of varying processing difficulty with respect to the syntactic category.", "(Footnote 9: sentN and tokenN denote the sentence position and the segment position in a sentence (see Appendix B). Note that tokenN and the syntactic category exhibit low correlation (0.02 with Pearson's r).)", "Results: The results are shown in Figure 5.", "First, the higher psychometric predictive power the LMs exhibit, the greater the effect of syntactic category on surprisals (left part in Figure 5).", "This means that, depending on the syntactic category of the segment they processed, LMs with high psychometric predictive power computed surprisals with a more nonuniform trend.", "The right part of Figure 5 shows that, as PPL decreases below a certain value (PPL < 400), the Japanese LMs compute surprisals that obscure the nonuniform trends with respect to the syntactic category of segments.", "This trend supports our hypothesis that tuning to LM objectives obscures the human-like nonuniformity of the processing difficulty.", "(Footnote 10: The correlation between PPL and the effect of syntactic category in the LMs with PPL less than 400 was 0.45 and 0.34 with Pearson's r and Spearman's ρ, respectively.)", "Even though LMs that are not fully tuned to the LM objective (PPL > 400) acquire human-like trends with respect to syntactic category, these biases tend to be lost by further lowering their PPL.", "Notably, we also observed that not all the types of linguistic nonuniformity were obscured in surprisals computed by the LMs with low PPL.", "For example, Appendix E shows that LMs with lower PPL compute surprisals that better correlate with a particular syntactic factor although that factor is a less dominant trend in human reading behavior than the syntactic category (Appendix D).", "To test the universality of the recent findings in computational psycholinguistics across languages, the initial focus is on English and Japanese as a pair of languages with different linguistic properties.", "Although the discrepancy of the results in the two languages is discussed from the viewpoint of the UID hypothesis, the two languages are also different in various ways, such as writing systems, agglutinative property, case marking, sentence structure, and pro-drop nature.", "To identify the difference that relates to the human-like behaviors of LMs, experiments that include additional languages should be conducted in the future.", "In addition, the corpus size of the BCCWJ-EyeTrack data is smaller than the Dundee Corpus.", "While the reading time
data in the BCCWJ-EyeTrack was collected from various human subjects, the number of independent segments was limited (1,643 segments, 218 sentences).", "Thus, whether the trends reported in this study generalize to more diverse Japanese texts should be explored in future work.", "It is hoped that this study motivates the creation of a large-scale corpus of human reading behaviors in diverse languages.", "This study has investigated whether the recent reports on the psychometric predictive power of LMs can be generalized across languages.", "Our initial investigation has re-examined the recent report, 'the lower PPL a LM has, the more human-like the LM is', using Japanese eye movement data.", "Our experiments have demonstrated a surprising lack of universality of this report; lower perplexity is not always human-like.", "This discrepancy of the results between the languages reinforces the need for the cross-lingual evaluation of the psychometric predictive power of LMs.", "The discussion considers potential factors that make the observation different across languages from the viewpoint of the uniform information density hypothesis.", "We believe that this is an important first step for seeking a language-agnostic model of human sentence processing.", "Hopefully, this study encourages researchers to further investigate the universality of human language processing across languages.", "We would like to thank the members at the Tohoku NLP Lab for their valuable advice, particularly Ana Brassard for proofreading.", "This work was supported by Grant-in-Aid for JSPS Fellows Grant Number JP20J22697, JSPS KAKENHI Grant Number 19H04990, and JST CREST Grant Number JPMJCR20D2.", "This work was also supported by the National Institute for Japanese Language and Linguistics (NINJAL) Collaborative Research Project 'Computational Psycholinguistics of Language Processing with Large Corpora'.", "Language models with low perplexity are typically trained with a high computational cost.", "Our work demonstrates that further up-scaling the models might not be a reasonable direction of searching for human-like language models.", "This could potentially contribute to reducing energy and carbon costs, which are needed to train large-scale language models." ]
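Equations 1 and 2 above reduce to a few lines of code once a LM has produced per-subword log-probabilities. The following sketch (helper names are ours) aggregates subword log-probs into per-segment surprisals and computes corpus-level PPL; it assumes natural-log probabilities and that the segmentation is given as subword counts per segment.

```python
import math

def segment_surprisals(subword_logprobs, segment_lengths):
    # Eq. 2: I(s) = -sum of log p(w_k | w_<k) over the segment's subwords.
    surprisals, pos = [], 0
    for length in segment_lengths:
        surprisals.append(-sum(subword_logprobs[pos:pos + length]))
        pos += length
    return surprisals

def perplexity(subword_logprobs):
    # Eq. 1: PPL = exp(-(1/N) * sum_i log p(w_i | w_<i)).
    return math.exp(-sum(subword_logprobs) / len(subword_logprobs))

# Hypothetical log-probs for 5 subwords forming 2 segments (3 + 2 subwords),
# mimicking a multi-subword Japanese bunsetsu vs. a shorter unit:
logps = [-2.1, -0.7, -1.3, -3.0, -0.2]
print(perplexity(logps))                  # corpus-level PPL over the 5 subwords
print(segment_surprisals(logps, [3, 2]))  # per-segment surprisals in nats
```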
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other", "abstain", "abstain", "abstain" ]
[ "As offensive content has become pervasive in social media, there has been much research in identifying potentially offensive messages.", "However, previous work on this topic did not consider the problem as a whole, but rather focused on detecting very specific types of offensive content, e.g., hate speech, cyberbulling, or cyber-aggression.", "In contrast, here we target several different kinds of offensive content.", "In particular, we model the task hierarchically, identifying the type and the target of offensive messages in social media.", "For this purpose, we complied the Offensive Language Identification Dataset (OLID), a new dataset with tweets annotated for offensive content using a fine-grained three-layer annotation scheme, which we make publicly available.", "We discuss the main similarities and differences between OLID and pre-existing datasets for hate speech identification, aggression detection, and similar tasks.", "We further experiment with and we compare the performance of different machine learning models on OLID.", "Offensive content has become pervasive in social media and thus a serious concern for government organizations, online communities, and social media platforms.", "One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which can then be deleted or set aside for human moderation.", "In the last few years, there have been several studies on the application of computational methods to deal with this problem.", "Prior work has studied offensive language in Twitter (Xu et al., 2012; Burnap and Williams, 2015; Davidson et al., 2017; Wiegand et al., 2018), Wikipedia comments, 1 and Facebook posts (Kumar et al., 2018).", "Previous studies have looked into different aspects of offensive language such as the use of abusive language (Nobata et al., 2016; Mubarak et al., 2017), (cyber-)aggression (Kumar et al., 2018), (cyber-)bullying (Xu et al., 2012; Dadvar et al., 2013), toxic comments 1 , hate speech (Kwok and Wang, 2013; Djuric et al., 2015; Burnap and Williams, 2015; Davidson et al., 2017; Malmasi and Zampieri, 2017, 2018), and offensive language (Wiegand et al., 2018).", "Recently, Waseem et al. (2017) analyzed the similarities between different approaches proposed in previous work and argued that there was a need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity, or towards a generalized group, and whether the abusive content is explicit or implicit.", "Wiegand et al. (2018) further applied this idea to German tweets.", "They experimented with a task on detecting offensive vs. 
non-offensive tweets, and also with a second task on further sub-classifying the offensive tweets as profanity, insult, or abuse.", "However, to the best of our knowledge, no prior work has explored the target of the offensive language, which might be important in many scenarios, e.g., when studying hate speech with respect to a specific target.", "Below, we aim at bridging this gap.", "More generally, in this paper, we expand on the above ideas by proposing a novel three-level hierarchical annotation schema that encompasses the following three general categories: A: Offensive Language Detection; B: Categorization of Offensive Language; C: Offensive Language Target Identification.", "(Table 1 fragment: columns Tweet, A, B, C; first example: @USER He is so generous with his offers.)", "We further use the above schema to annotate a large dataset of English tweets, which we make publicly available online.", "The key contributions of this paper can be summarized as follows: We propose a new three-level hierarchical annotation schema for abusive language detection and characterization.", "We apply the schema to create the Offensive Language Identification Dataset (OLID), a new large-scale dataset of English tweets with high-quality annotation of the target and type of offenses.", "We perform experiments on OLID using different machine learning models for each level of the annotation, thus setting important baselines to compare to in future work.", "While each of these sub-tasks tackles a particular type of abuse or offense, they share similar properties and the hierarchical annotation model proposed in this paper aims to capture this.", "Considering that, for example, an insult targeted at an individual is commonly known as cyberbullying and that insults targeted at a group are known as hate speech, we believe that OLID's use of a hierarchical annotation schema makes it a useful resource for various offensive language identification and characterization tasks.", "In the OLID dataset, we use a hierarchical annotation schema split into three levels to distinguish between whether the language is offensive or not (A), its type (B), and its target (C).", "Each level is described in more detail in the following subsections and examples are shown in Table 1.
(Footnote 2: The data can be downloaded from the following address: http://scholar.harvard.edu/malmasi/olid)", "2.1 Level A: Offensive Language Detection Level A discriminates between the following types of tweets: Not Offensive (NOT): Posts that do not contain offense or profanity; Offensive (OFF): Posts containing any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct.", "This includes insults, threats, and posts containing profane language or swear words.", "2.2 Level B: Categorization of Offensive Language Level B categorizes the type of offense:", "Targeted Insult (TIN): Posts containing an insult/threat to an individual, a group, or others; Untargeted (UNT): Posts containing non-targeted profanity and swearing.", "Posts with general profanity are not targeted, but they contain non-acceptable language.", "2.3 Level C: Offensive Language Target Identification Level C categorizes the targets of insults/threats: Individual (IND): Posts targeting an individual.", "This can be a famous person, a named individual, or an unnamed participant in the conversation.", "Insults and threats targeted at individuals are often defined as cyberbullying.", "Group (GRP): Posts targeting a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic.", "Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech.", "Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g., an organization, a situation, an event, or an issue).", "We retrieved the examples in OLID from Twitter using its API and searching for keywords and constructions that are often included in offensive messages, such as 'she is' or 'to:BreitBartNews'.", "The full list of keywords we used is shown in Table 2.", "We first carried out a round of trial annotation of 300 instances with six experts using nine keywords.", "The goal of the trial annotation was (i) to evaluate the proposed tagset, (ii) to evaluate the data retrieval method, and (iii) to create a gold standard with instances that could be used as test questions to ensure the quality of the annotators for the rest of the data, which was carried out using crowdsourcing.", "The keywords used in the trial annotation are shown in the first nine rows of Table 2.", "We included left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments for such accounts.", "The keyword that resulted in the highest concentration of offensive content was the Twitter 'safe' filter, corresponding to tweets that were flagged as unsafe by Twitter (the '-' symbol indicates 'not safe').", "Since the vast majority of content on Twitter is not offensive, we tried different strategies to keep the distribution of offensive tweets at around 30% of the dataset.", "We excluded some keywords that were not high in offensive content during the trial annotation such as 'they are' and 'to:NewYorker'.", "Although 'he is' was poor in offensive content in the trial dataset (15%), we kept it as a keyword in order to avoid gender bias, and we found that in the full dataset it was more offensive (32.4%).", "The trial keywords that we ultimately decided to exclude due to a low percentage of offensive tweets are shown in the top portion of Table 2.", "We computed Fleiss' kappa on the trial dataset for the five annotators on 21 of the tweets.", "The value was .83 for Layer A (OFF vs.
"As to normalization and anonymization, we did not store any user metadata or Twitter IDs, and we substituted the URLs and the Twitter mentions by placeholders.", "During the full annotation task, we decided to search for more political keywords, as they tend to be richer in offensive content.", "Thus, we sampled our full dataset so that 50% of the tweets come from political keywords and the other 50% come from non-political keywords.", "Within these two groups, tweets were evenly sampled across the keywords.", "In addition to 'gun control' and 'to:BreitbartNews', used during the trial annotation, four new political keywords were used to collect tweets for the full dataset: 'MAGA', 'antifa', 'conservatives', and 'liberals'.", "The breakdown of keywords and their offensive content in the full dataset is shown in the bottom of Table 2.", "We follow prior work in related areas (Burnap and Williams, 2015; Davidson et al., 2017) and annotate our data using crowdsourcing.", "We used Figure Eight, and we ensured data quality by ( i ) only hiring annotators who were experienced on the platform, and ( ii ) using test questions to discard annotations by individuals who did not reach a certain threshold.", "Each instance in the dataset was annotated by multiple annotators, and inter-annotator agreement was calculated at the end.", "We first acquired two annotations for each instance.", "In the case of disagreement, we requested a third annotation, and we then took a majority vote.", "The annotators were asked to label each tweet at all three levels of the annotation scheme, and we considered there to be agreement only when the annotators agreed on the labels for all levels.", "Approximately 60% of the time, the two annotators agreed, and thus no additional annotation was needed.", "A third annotation was requested for the rest of the tweets; there was no instance where more than three annotations were needed.", "The breakdown of the data into training and testing for the labels from each level is shown in Table 3.", "It is worth noting that one of the key challenges we observed when collecting OLID was producing a dataset containing a sufficient number of instances for each class.", "This is particularly evident in the sizes for Subtasks B and C.",
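As a minimal illustration of the adjudication scheme just described (two annotations per tweet, a third requested on disagreement, then a majority vote), the following Python sketch captures the logic; the function name and data layout are illustrative assumptions, not the authors' actual pipeline:

```python
from collections import Counter

def adjudicate(annotations):
    """Resolve a tweet's label from crowdsourced annotations.

    `annotations` is the list of labels collected so far (e.g., ["OFF", "NOT"]).
    Returns the agreed label, or None if a further annotation is still needed.
    """
    counts = Counter(annotations)
    label, freq = counts.most_common(1)[0]
    if len(annotations) == 2:
        # Two annotators: accept only on full agreement (~60% of cases).
        return label if freq == 2 else None
    # Three annotators: take the majority vote.
    return label if freq >= 2 else None

# Example: disagreement triggers a third annotation, then a majority vote.
assert adjudicate(["OFF", "OFF"]) == "OFF"
assert adjudicate(["OFF", "NOT"]) is None          # request a third label
assert adjudicate(["OFF", "NOT", "OFF"]) == "OFF"
```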
"Other studies also had this issue when collecting similar datasets.", "For example, in Davidson et al. (2017), only 5% of the tweets were considered hate speech by the majority of the annotators, and in Burnap and Williams (2015) only 11.6% of the examples were labeled as hate speech.", "SVM Our simplest machine learning model is a linear SVM trained on word unigrams.", "SVMs have achieved state-of-the-art results for many text classification tasks (Zampieri et al., 2018).", "BiLSTM We also experiment with a bidirectional Long Short-Term Memory (BiLSTM) model, which we adapted from a pre-existing model for sentiment analysis (Rasooli et al., 2018).", "The model consists of ( i ) an input embedding layer, ( ii ) a bidirectional LSTM layer, and ( iii ) an average pooling layer of input features.", "The concatenation of the LSTM layer and the average pooling layer is further passed through a dense layer, whose output is ultimately passed through a softmax to produce the final prediction.", "We set two input channels for the input embedding layers: pre-trained FastText embeddings (Bojanowski et al., 2017), as well as updatable embeddings learned by the model during training.", "CNN Finally, we experiment with a Convolutional Neural Network (CNN) model based on the architecture of Kim (2014), using the same multi-channel inputs as the above BiLSTM.", "Our models are trained on the training dataset and evaluated by predicting the labels for the held-out test set.", "As the label distribution is highly imbalanced (see Table 3), we evaluate and compare the performance of the different models using the macro-averaged F1-score.", "We further report per-class Precision (P), Recall (R), and F1-score (F1), as well as the weighted average.", "Finally, we compare the performance of the models against simple majority and minority class baselines.", "The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table 4.", "We can see that all models perform significantly better than chance, with the neural models performing substantially better than the SVM.", "The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80.", "In this set of experiments, the models were trained to discriminate between targeted insults and threats (TIN) and untargeted (UNT) offenses, which generally refer to profanity.", "The results are shown in Table 5.", "We can see that the CNN performs better than the BiLSTM, with a macro-F1 score of 0.69.", "Note that all models perform better at identifying TIN compared to UNT.", "The results for the offensive target identification experiment are shown in Table 6.", "Here the models were trained to distinguish between three targets: a group (GRP), an individual (IND), or others (OTH).", "We can see that all three models achieved similar results, far surpassing the random baselines, with a slight performance edge for the neural models.", "The performance of all models for the OTH class is 0, which can be explained by two factors.", "First, unlike the two other classes, OTH is a heterogeneous collection of targets.", "It includes offensive tweets targeted at organizations, situations, events, etc., thus making it more challenging for models to learn discriminative properties for this class.", "Second, there are fewer training instances for this class compared to the other two: there are only 395 instances for OTH vs. 1,075 for GRP and 2,407 for IND.",
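The SVM baseline and the macro-averaged F1 evaluation described above can be sketched as follows; the toy data, tokenizer, and hyperparameters are assumptions, since the paper does not specify them:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score, classification_report

# Toy stand-ins for the OLID training/test splits (Level A labels).
train_texts = ["you are awful", "have a nice day", "what an idiot", "great talk"]
train_labels = ["OFF", "NOT", "OFF", "NOT"]
test_texts = ["awful idiot", "nice great day"]
test_labels = ["OFF", "NOT"]

# Word-unigram features, as in the paper's SVM baseline.
vectorizer = CountVectorizer(ngram_range=(1, 1))
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

clf = LinearSVC()  # linear SVM
clf.fit(X_train, train_labels)
pred = clf.predict(X_test)

# Macro-averaged F1 treats OFF and NOT equally despite class imbalance.
print("macro-F1:", f1_score(test_labels, pred, average="macro"))
print(classification_report(test_labels, pred))
```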
"We presented OLID, a new dataset with annotation of the type and target of offensive language.", "It is the official dataset of the shared task SemEval 2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval) (Zampieri et al., 2019).", "In OffensEval, each annotation level in OLID is an independent sub-task.", "To the best of our knowledge, this is the first dataset to contain annotation of the type and target of offenses in social media, and it opens interesting research directions.", "We further presented baseline experiments using SVMs and neural networks, which have shown that this is a challenging, yet doable task.", "In future work, we would like to make a cross-corpus comparison of OLID vs. datasets annotated for similar tasks such as aggression identification (Kumar et al., 2018) and hate speech detection (Davidson et al., 2017).", "We further plan to create similar datasets for other languages, following OLID's hierarchical annotation scheme.", "We would like to thank the anonymous NAACL reviewers for their valuable suggestions and Nikola Ljubešić for the feedback provided.", "The research presented was partially supported by an ERAS fellowship awarded to Marcos Zampieri by the University of Wolverhampton." ]
[ "abstain", "abstain", "objective", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "objective", "objective", "method", "objective", "method", "method", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "method", "other", "other" ]
[ "Cross-lingual summarization aims at summarizing a document in one language (e.g., Chinese) into another language (e.g., English).", "In this paper, we propose a novel method inspired by the translation pattern in the process of obtaining a cross-lingual summary.", "We first attend to some words in the source text, then translate them into the target language, and summarize to get the final summary.", "Specifically, we first employ the encoder-decoder attention distribution to attend to the source words.", "Second, we present three strategies to acquire the translation probability, which helps obtain the translation candidates for each source word.", "Finally, each summary word is generated either from the neural distribution or from the translation candidates of source words.", "Experimental results on Chinese-to-English and English-to-Chinese summarization tasks have shown that our proposed method can significantly outperform the baselines, achieving comparable performance with the state-of-the-art.", "Cross-lingual summarization is to produce a summary in a target language (e.g., English) from a document in a different source language (e.g., Chinese).", "Cross-lingual summarization can help people effi-ciently understand the gist of an article written in an unfamiliar foreign language.", "Traditional cross-lingual summarization methods are pipeline-based.", "These methods either adopt summarize-then-translate (Orasan and Chiorean, 2008; Wan et al., 2010) or employ translate-then-summarize (Leuski et al., 2003; Ouyang et al., 2019).", "The pipeline-based approach is intuitive and straightforward, but it suffers from error propagation.", "Due to the difficulty of acquiring cross-lingual summarization dataset, some previous researches focus on zero-shot methods (Ayana et al., 2018; (cid:16)(cid:39)(cid:5)(cid:35)(cid:37)(cid:32)(cid:34)(cid:42)(cid:19)(cid:55)(cid:40)(cid:12)(cid:67)(cid:6)(cid:73)(cid:59)(cid:75)(cid:63)(cid:4)(cid:53)(cid:43)(cid:50)(cid:25) 10 (cid:18)(cid:47)(cid:66)(cid:74)(cid:4)(cid:51)(cid:22)(cid:37)(cid:32) (cid:34)(cid:71)(cid:21)(cid:48)(cid:3)(cid:19)(cid:55)(cid:40)(cid:27)(cid:20)(cid:64)(cid:4)(cid:1)(cid:13)(cid:14)(cid:9)(cid:56)(cid:58)(cid:31)(cid:72)(cid:15)(cid:2)(cid:4)(cid:54)(cid:77)(cid:61)(cid:33)(cid:30)(cid:10)(cid:3)(cid:44)(cid:41)(cid:64) (cid:49)(cid:60)(cid:57)(cid:44)(cid:4)(cid:26)(cid:62)(cid:23)(cid:15)(cid:17)(cid:24)(cid:65)(cid:70)(cid:8)(cid:52)(cid:46)(cid:17)(cid:24)(cid:75)(cid:63)(cid:45)(cid:69)(cid:7)(cid:76)(cid:4)(cid:38)(cid:36)(cid:71)(cid:21)(cid:11)(cid:29)(cid:68)(cid:3) Foshan young couple was detained for charging 10 yuan for buying train tickets online for migrant workers A young couple in Foshan who help migrant workers book train tickets online have been detained after receiving a 10-yuan handling fee for each ticket.", "Duan et al., 2019), i.e., using machine translation or monolingual summarization or both to teach the cross-lingual system.", "Recently, Zhu et al. 
"They incorporate machine translation and monolingual summarization into the training of cross-lingual summarization using multi-task learning to improve the summary quality, with quite promising performance.", "However, we find that there exist the following problems: (1) The multi-task methods adopt extra large-scale parallel data from other related tasks, such as monolingual summarization or machine translation.", "These methods are heavily dependent on data, making them difficult to migrate to low-resource languages.", "(2) The multi-task methods either simultaneously train cross-lingual summarization and monolingual summarization or alternately train cross-lingual summarization and machine translation, resulting in a quite time-consuming training process.", "To alleviate the above problems, we examine some examples extracted from the cross-lingual summarization dataset.", "We find that there exists a translation pattern in the cross-lingual summaries, as shown in Figure 1.", "Inspired by the translation pattern, we can first attend to some specific segments in the input sequence, then translate them into the target language, and integrate this bilingual information into the final summary.", "Therefore, in this paper, we explore an efficient method consistent with the translation pattern.", "To achieve that goal, we propose a novel method (Figure 2) that allows either generating words from the vocabulary or selecting words from the translation candidates of the words in the source article.", "Specifically, we first employ the encoder-decoder attention distribution to help determine which source word should be translated.", "Then we present three strategies, i.e., Naive, Equal, and Adapt, to obtain the translation probability from a probabilistic bilingual lexicon.", "The translation distribution can be acquired based on the encoder-decoder attention distribution and the translation probability.", "Next, we add an extra translation layer to calculate a translating probability.", "The final distribution is the weighted sum (weighted by the translating probability) of the translation distribution and the neural distribution.", "Our main contributions are as follows: We introduce a novel and efficient method which integrates the operations of attending, translating, and summarizing.", "We present three effective strategies to acquire the translation probability.", "We show that all these strategies can significantly improve performance over the baseline.", "Experimental results demonstrate that our method can achieve remarkable improvements over the baselines and achieve comparable performance with the state-of-the-art on both English-to-Chinese and Chinese-to-English cross-lingual summarization tasks.", "Compared with the state-of-the-art multi-task method (Zhu et al., 2019), which trains cross-lingual summarization and machine translation using an alternating training strategy, our method has two advantages: (1) Our model requires only an additional probabilistic bilingual lexicon instead of a large-scale parallel machine translation dataset, which significantly relaxes the model's dependence on data.", "(2) Our model has a much smaller model size and a much faster training speed.", "In this paper, we implement our method based on the Transformer (Vaswani et al., 2017) encoder-decoder framework, where the encoder first maps the input sequence X = (x_1, x_2, \dots, x_n) into a sequence of continuous representations z = (z_1, z_2, \dots, z_n) and the decoder generates an output sequence Y = (y_1, y_2, \dots, y_m) from the continuous representations.",
"The encoder and decoder are trained jointly to maximize the conditional probability of the target sequence given a source sequence: L = \sum_{t=1}^{N} \log P(y_t | y_{<t}, X; \theta) (1).", "The Transformer is composed of stacked encoder and decoder layers.", "The encoder layer is a self-attention block followed by a position-wise feed-forward block.", "Compared with the encoder layer, the decoder layer has an extra encoder-decoder attention block.", "For self-attention and encoder-decoder attention, a multi-head attention block is used to obtain information from different representation subspaces at different positions.", "Each head corresponds to a scaled dot-product attention, which operates on query Q, key K, and value V: Attention(Q, K, V) = softmax(QK^T / \sqrt{d_k}) V (2), where d_k is the dimension of the key.", "Finally, the output values are concatenated and projected by a feed-forward layer to get the final values: MultiHead(Q, K, V) = Concat(head_1, \dots, head_h) W^O, where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V) (3), and where W^O, W_i^Q, W_i^K, and W_i^V are learnable matrices and h is the number of heads.", "Inspired by the phenomenon that some words contained in a cross-lingual summary can be obtained by translating some source words (Figure 1), we introduce a novel cross-lingual summarization method.", "It first attends to some source words, then obtains their translation candidates, and finally generates words from the translation candidates or the neural distribution.", "Our proposed method is a hybrid between the Transformer and an additional translation layer, which is depicted in Figure 2 and described as follows.", "Attend.", "Inspired by the pointer-generator network (See et al., 2017), we employ the encoder-decoder attention distribution \alpha_t^h (from the last decoder layer) to help focus on some salient words in the source text.", "Since the encoder-decoder attention is multi-headed, we take the mean value over the heads as follows: \alpha_t = (1/h) \sum_h \alpha_t^h (4).", "Translate.", "With the attention distribution on the source words, we also need to know what each source word should be translated into.", "To achieve that, we obtain a probabilistic bilingual lexicon P_L(w_1 \to w_2) from existing machine translation corpora and then acquire the translation probability P_T based on P_L(w_1 \to w_2).", "Acquisition of the probabilistic bilingual lexicon.", "There are many different ways to get a probabilistic bilingual lexicon, such as learning from bilingual corpora (Dyer et al., 2013; Chandar A P et al., 2014; Artetxe et al., 2016) and learning from monolingual corpora (Conneau et al., 2018; Zhang et al., 2017; Artetxe et al., 2018).", "To facilitate access to a high-quality probabilistic bilingual lexicon, we apply the method described in Dyer et al. (2013).", "Specifically, we first extract word alignments L using the fast-align tool (Dyer et al., 2013) on a bilingual parallel corpus for machine translation (we employ the 2.08M sentence pairs from the LDC corpora, including LDC2000T50, LDC2002L27, LDC2002T01, LDC2002E18, LDC2003E07, LDC2003E14, LDC2003T17, and LDC2004T07) in both the source-to-target and target-to-source directions.", "To improve the quality of the word alignments, we only keep the alignments existing in both directions.", "Next, the lexicon translation probability P_L(w_1 \to w_2) is the average of the source-to-target and target-to-source probabilities calculated through maximum likelihood estimation on the word alignments L.",
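As a hedged illustration of Equations (2) and (4) above, scaled dot-product attention and averaging the encoder-decoder attention over heads, the following NumPy sketch uses toy shapes; all names and dimensions are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Equation (2): Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    weights = softmax(Q @ K.swapaxes(-1, -2) / np.sqrt(d_k))
    return weights @ V, weights

# Toy shapes: h = 2 heads, one decoder step, source length n = 4, d_k = 8.
rng = np.random.default_rng(0)
h, m, n, d_k = 2, 1, 4, 8
Q = rng.normal(size=(h, m, d_k))   # decoder query, one per head
K = rng.normal(size=(h, n, d_k))   # encoder keys
V = rng.normal(size=(h, n, d_k))   # encoder values

_, alpha_per_head = scaled_dot_product_attention(Q, K, V)

# Equation (4): average the encoder-decoder attention over heads to get a
# single distribution over the source words.
alpha_t = alpha_per_head.mean(axis=0)[0]   # shape (n,), sums to 1
print(alpha_t, alpha_t.sum())
```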
"We filter out the lexicon pairs (w_1, w_2) where P_L(w_1 \to w_2) < 0.05 and renormalize the lexicon translation probabilities to get the final probabilistic bilingual lexicon.", "We propose the following three different strategies (Figure 3) to obtain the translation probability: (1) Naive.", "We directly use the probability in the probabilistic bilingual lexicon as the translation probability.", "We limit the number of translation candidates of the word w_1 to at most m.", "Specifically, we sort the translation candidates of word w_1 in descending order according to the lexicon translation probability and then take the top m.", "Finally, the lexicon translation probability is normalized to get the translation probability: P_T(w_1 \to w_2) = P_L(w_1 \to w_2) / \sum_{w_j} P_L(w_1 \to w_j) (5).", "(2) Equal.", "The Naive strategy brings about a problem: the decoder tends to select words with high probability from the translation candidates of the source word w_1, and those with low translation probability will hardly be selected.", "To alleviate this, we set the translation probabilities of w_1's translation candidates to be equal.", "Therefore, which translation candidate is eventually selected depends on the probability of these translation candidates in the neural distribution.", "This strategy can be seen as achieving the goal of a small vocabulary with the help of translation knowledge.", "(3) Adapt.", "This strategy aims to select the correct translation candidates using the source-side context.", "Specifically, we first limit the number of translation candidates to at most m, which is consistent with the two strategies above.", "Then we propose a translation-attention, which is a multi-head attention block where the hidden state of the source word w_1 is fed as the query and the target-side embeddings of the corresponding translation candidates are treated as the keys and values, where w_2^{tgt} is the target-side embedding of word w_2.", "We take the mean value of the multi-head translation-attention as the final translation probability.", "Since the hidden state of the source word w_1 is obtained by self-attention on the source side, this context-aware strategy can help the model learn to choose the correct translation adaptively with the help of the source-side context.", "Summarize.", "We use H_{dec} to represent the decoder hidden state at timestep t and d_{model} to denote the dimension of the hidden states.", "We employ a translation layer to determine the translating probability p_{trans} \in [0, 1] via a dynamic gate: p_{trans} = \sigma(W_2(W_1 H_{dec} + b_1) + b_2) (7), where W_1 \in R^{d_{model} \times d_{model}} and W_2 \in R^{1 \times d_{model}} are learnable matrices, b_1 \in R^{d_{model}} and b_2 \in R^1 are bias vectors, and \sigma is the sigmoid function.", "Then p_{trans} is regarded as a soft switch that determines whether to generate a word w by sampling from the neural distribution or to directly select a word from the translation candidates of the source words.", "Therefore, the final probability distribution can be calculated as follows: P(w) = p_{trans} \sum_{i: w_i = w_{src}} \alpha_{t,i} P_T(w_{src} \to w) + (1 - p_{trans}) P_N(w) (8), where P_T(w_{src} \to w) denotes the translation probability of word w_{src} to word w and P_N denotes the neural distribution.", "In this study, we focus on Chinese-to-English and English-to-Chinese cross-lingual summarization.",
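A minimal sketch of the Naive and Equal strategies (Equation 5) and the final gated mixture (Equation 8); the function names, the toy lexicon, and the single-word example are assumptions for illustration only:

```python
def naive_strategy(lexicon_probs, m=10):
    """Eq. (5): keep the top-m candidates of a source word and renormalize."""
    top = sorted(lexicon_probs.items(), key=lambda kv: -kv[1])[:m]
    z = sum(p for _, p in top)
    return {w: p / z for w, p in top}

def equal_strategy(lexicon_probs, m=10):
    """Give the top-m candidates equal translation probability."""
    top = sorted(lexicon_probs.items(), key=lambda kv: -kv[1])[:m]
    return {w: 1.0 / len(top) for w, _ in top}

def final_distribution(p_trans, alpha, source_words, trans_tables, p_neural):
    """Eq. (8): mix the translation distribution with the neural distribution."""
    p = {w: (1.0 - p_trans) * pn for w, pn in p_neural.items()}
    for a_i, src in zip(alpha, source_words):
        for tgt, pt in trans_tables[src].items():
            p[tgt] = p.get(tgt, 0.0) + p_trans * a_i * pt
    return p

# Toy example: one Chinese source word with three lexicon candidates.
lex = {"detained": 0.6, "arrested": 0.3, "held": 0.1}
trans = {"拘留": naive_strategy(lex, m=2)}   # {'detained': 2/3, 'arrested': 1/3}
dist = final_distribution(
    p_trans=0.4,
    alpha=[1.0],                 # attention fully on the single source word
    source_words=["拘留"],
    trans_tables=trans,
    p_neural={"detained": 0.5, "the": 0.5},
)
print(dist)  # the mixed probabilities still sum to 1
```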
"We test our proposed method on En2ZhSum and Zh2EnSum datasets 3 released by Zhu et al. (2019).", "En2ZhSum is an English-to-Chinese summarization dataset, which contains 370,687 English documents (755 tokens on average) paired with multi-sentence English (55 tokens on average) and Chinese summaries (96 Chinese characters on aver-age).", "The dataset is split into 364,687 training pairs, 3,000 validation pairs, and 3,000 test pairs.", "Zh2EnSum is a Chinese-to-English summarization dataset, which contains 1,699,713 Chinese short texts (104 Chinese characters on average) paired with Chinese (18 Chinese characters on average) and English short summaries (14 tokens on av-erage).", "The dataset is split into 1,693,713 training pairs, 3,000 validation pairs, and 3,000 test pairs.", "Both the English-to-Chinese and Chinese-to-English test sets are manually corrected.", "We follow the setting of the vocabularies described in Zhu et al. (2019).", "In En2ZhSum, we surround each target sentence with tags < t > and < /t > .", "If there is no special explanation, the limit on the number of translation candidate m in our models is set to 10.", "All the parameters are initialized via Xavier initialization method (Glorot and Bengio, 2010).", "We train our models using configuration transformer base (Vaswani et al., 2017), which contains a 6-layer encoder and a 6-layer decoder with 512-dimensional hidden representations.", "Each mini-batch contains a set of document-summary pairs with roughly 3,072 source and 3,072 target tokens.", "We apply Adam optimizer (Kingma and Ba, 2015) with 1 = 0 .", "9 , 2 = 0 .", "998 , and (cid:15) = 10 9 .", "For evaluation, we use beam search with a beam size 4 and length penalty 0.6.", "All our methods are trained and tested on a single NVIDIA TITAN XP.", "We compare our method with the following relevant methods (Zhu et al., 2019):", "and then summarizes the translated text via LexRank (Erkan and Radev, 2004).", "GLTran : It first summarizes the original article via a Transformer-based monolingual summarization model and then translates the summary into the target language by Google Translator.", "The above methods only employ the cross-lingual summarization dataset, and we also compare our method with the following two methods (Zhu et al., 2019) that use extra datasets in other tasks.", "CLSMS : It refers to the multi-task method, which simultaneously trains cross-lingual summarization and monolingual summarization.", "CLSMT : It is the multi-task method which adopts the alternating training strategy (Dong et al., 2015) to train cross-lingual summarization and machine translation jointly.", "We denote our method as ATS: ATS : It refers to our method with three different strategies (Naive, Equal, and Adapt).", "We evaluate all models with the standard ROUGE metric (Lin, 2004), reporting the F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L.", "All ROUGE scores are reported by the 95% confidence interval measured by the official script 4 .", "Besides, we evaluate the equality of English summaries in Zh2EnSum with MoverScore (Zhao et al., 2019) which compares system output against references based on their semantics rather than surface forms.", "Zhao et al. 
"Results on Zh2EnSum and En2ZhSum.", "Table 1 shows the results of the different models on the Zh2EnSum test set, while Table 2 gives the results on the En2ZhSum test set.", "We use subword-subword and word-character segmentation granularities in Zh2EnSum and En2ZhSum, respectively.", "Our ATS models achieve significant improvements over the baseline TNCLS on both Zh2EnSum and En2ZhSum.", "Furthermore, ATS can significantly outperform CLSMS and CLSMT on Zh2EnSum while achieving comparable performance with CLSMS and CLSMT on En2ZhSum.", "However, both CLSMS and CLSMT employ large-scale parallel datasets of other tasks during the training process, limiting the generality of the models.", "In contrast, our method only requires an extra probabilistic bilingual lexicon, which significantly reduces the dependence on data.", "Among the variants of ATS, the ATS with the Adapt strategy has the best performance.", "The reason is quite straightforward, since the Adapt strategy helps to choose the right translation with the help of the source-side context.", "The Equal strategy performs worst, but its advantage over the Naive strategy is that it is not affected by the prior probability in the probabilistic bilingual lexicon.", "In other words, the Equal strategy only makes use of the corresponding relationship between source language words and target language words, making it effective even if there is only a bilingual vocabulary dictionary.", "In summary, all three of our strategies bring about significant improvements, which demonstrates that our method is robust to the acquisition method of the translation candidates.", "Model size and training time.", "The model size and training time of the various models are given in Table 3.", "[Table 3: Model size, in millions (M) of trainable parameters, and training time (S) of various models. Zh-En: TNCLS 134.92 / 21, CLSMS 211.41 / 48, CLSMT 208.84 / 63, ATS-NE 136.55 / 27, ATS-A 137.60 / 30. En-Zh: TNCLS 113.74 / 24, CLSMS 190.23 / 65, CLSMT 148.16 / 72, ATS-NE 114.00 / 24, ATS-A 115.05 / 25.]", "As shown, ATS is comparable with the Transformer in both model size and training time.", "In terms of model size, ATS is significantly smaller than the multi-task methods CLSMS and CLSMT.", "Especially on the Zh2En task, the size of the multi-task models is nearly twice that of ATS.", "In terms of training time, ATS requires roughly half the time of the multi-task methods on both the Zh2En and En2Zh tasks.", "Therefore, compared with the multi-task methods, ATS can significantly reduce the model size and improve the training efficiency.", "In conclusion, our ATS models have achieved significant improvements over the baseline TNCLS on both Zh2EnSum and En2ZhSum, which demonstrates the effectiveness of our approach.", "Furthermore, ATS achieves comparable performance with the state-of-the-art.", "Compared with the state-of-the-art, ATS can not only relax the model's dependence on datasets but also reduce the model size and improve training efficiency.", "The impact of m.", "To study the impact of m (the limit on the number of translation candidates), we conduct an experiment on how the model performance changes when m varies from 10 to 5 or to a more aggressive value of 1.", "[Table 4: Results of ATS-A on Zh2EnSum (RG-1 / RG-2 / RG-L / MVS) and En2ZhSum (RG-1 / RG-2 / RG-L) under different values of m, the limit on the number of translation candidates. m = 1: 40.93 / 24.17 / 37.11 / 22.31 and 39.85 / 21.45 / 36.12; m = 5: 41.05 / 24.31 / 37.28 / 22.77 and 40.27 / 21.96 / 36.60; m = 10: 40.68 / 24.12 / 36.97 / 22.15 and 40.47 / 22.21 / 36.89.]",
value 1.", "The results are presented in Table", "4. In Zh2En experiment, the ATS-A ( m =5) performs best while ATS-A ( m =1) performs comparably with ATS-A ( m =10).", "In En2Zh experiment, the ATS-A ( m =5) performs comparably with ATS-A ( m =10) while the performance drops a bit when m =1.", "The above results illustrate that (1) A slightly larger m enables the model to learn when to search for translation candidates from the source words and which ones to choose, leading to improve the quality of the final summaries.", "(2) When m =1, the translation probability will contain some noise, but our method is still significantly better than the baseline, which further demonstrates the effectiveness and robustness of our method.", "The impact of segmentation granularity.", "To study the effect of different segmentation granularities on the performance, we compare the performance of the model trained with word-word and subword-subword segmentation granularities on Zh2EnSum dataset.", "The results are given in Table", "5. From ROUGE, our method brings about a similar degree of improvement over the baseline when using these two segmentation granularities.", "From MoverScore, it can be found that our method brings slightly greater improvement over the baseline when using the subword-subword segmentation granularity than using the word-word segmentation granularity.", "MoverScore metric com-Task Unit p macrotrans p microtrans r macro r micro Zh2En sw-sw 21.41 20.71 21.86 21.00 Zh2En w-w 21.17 20.46 21.90 21.05 En2Zh w-c 14.91 14.84 14.27 14.05 Table 6: Statistics on p trans in ATS-A models.", "pares system output against references based on their semantics, thus we believe ATS-A (sw-sw) can improve the semantic accuracy of the generated summary to a greater extent than ATS-A (w-w).", "Although the obtained probabilistic bilingual lexicon is of lower quality when using a smaller segmentation granularity, the source side covers more units, thus more translation candidates are exposed, making up for the noise in the probabilistic bilingual dictionary.", "In summary, our method can improve the performance under the above two different segmentation granularities, which illustrates that our method is robust to the segmentation granularity.", "Translating Probability.", "Table 6 gives the statistics of translating probability in different ATS-A models.", "As it is shown, there is little difference in average translating probability under different segmentation granularities.", "However, the translation probabilities in tasks with different language directions are quite different.", "It is worth noting that the ration of words with translating probability greater than 0.5 does not mean that so many words are generated from translation operations, since the final distribution of summary words is jointly determined by translating probability, translation probability, encoder-decoder attention distribution, and neural distribution.", "Human Evaluation.", "We conduct the human evaluation on 25 random samples extracted from each of Zh2EnSum and En2ZhSum, respectively.", "We compare the summaries generated by ATS (Adapt strategy) with other methods (including TNCLS, CLSMS, and CLSMT).", "Three graduate students are recruited to rate the generated summaries according to the references.", "Each summary is assessed from the three independent aspects: (1) How informative is the summary?", "(2) How concise is the summary?", "(3) How fluent and grammatical is the summary?", "Each aspect is scored from 1 (worst) to 5 
(Best).", "The average results are given in Table 7.", "We can find that the informativeness score, conciseness score, and fluency score of ATS-A are significantly better than those of the baseline TNCLS, which further demonstrates the effectiveness of our method.", "In Zh2En task, ATS-A outperforms CLSMT from all three aspects.", "The conciseness score of ATS-A is comparable with that of CLSMS, but ATS can generate more informative and fluent summaries.", "In En2Zh task, ATS-A outperforms CLSMS from all three aspects as well.", "The informativeness score and conciseness score of CLSMT are comparable with those of ATS-A, but ATS-A can generate more fluent summaries.", "To sum up, ATS-A can outperform CLSMS and CLSMT in Zh2En task, and ATS-A can outperform CLSMS while performing comparably with CLSMT in En2Zh task.", "Although the summary generated by the TNCLS captures the critical character the former director of zengcheng health and the crime of received bribes committed by the character, it mistakenly expresses sentenced as arrested and fails to identify the prison term.", "Both CLSMT-generated summary and CLSMS-generated summary are fluent and grammatically correct.", "How-Input (Chinese): (cid:12)(cid:38)(cid:35)(cid:36)(cid:8)(cid:53)(cid:11)(cid:46)(cid:47)(cid:69)(cid:9)(cid:57)(cid:20)(cid:2)(cid:38)(cid:7)(cid:51)(cid:30)(cid:29)(cid:36)(cid:24) (cid:48)(cid:33)(cid:25)(cid:33)(cid:68)(cid:65)(cid:67)(cid:17)(cid:41)(cid:26)(cid:6)(cid:34)(cid:21)(cid:49)(cid:23)(cid:14) 20 (cid:27)(cid:60)(cid:61)(cid:11)(cid:62)(cid:64)(cid:50)(cid:56)(cid:42)(cid:52)(cid:66) (cid:63) 34 (cid:4)(cid:15)(cid:3)(cid:45)(cid:2)(cid:59)(cid:69)(cid:10)(cid:32)(cid:54)(cid:40)(cid:3)(cid:32)(cid:25)(cid:19)(cid:2)(cid:70)(cid:28)(cid:65)(cid:67)(cid:17)(cid:5)(cid:58)(cid:2)(cid:13)(cid:26) (cid:62)(cid:55)(cid:19)(cid:31)(cid:16)(cid:43)(cid:44)(cid:39)(cid:18) 5 (cid:37)(cid:22)(cid:1) Reference: zengcheng 's former director of health received bribes and was sentenced to five and a half years' imprisonment TNCLS: the former director of zengcheng health bureau was arrested on suspicion of accepting bribes CLSMS: the former director of zengcheng health bureau was sentenced to five and a half years 'imprisonment for accepting bribes of nearly 340,000 yuan ATS-A: the former director of zengcheng health bureau was sentenced to five and a half years for bribery According to the Guangzhou Intermediate People's Court, GuoTiejun, former director of the Zengcheng Municipal Health Bureau in Guangdong Province, received bribes nearly 340,000 yuan in holiday gifts from 20 persons in charge of subordinate medical units. The court upheld the original judgment in the first instance, rejected GuoTiejun's appeal and sentenced him to five and a half years in prison for bribery. (The EnglishTranslation of SourceText) CLSMT: guo tiejun , former director of zengcheng health bureau , was sentenced to five and a half years 'imprisonment for accepting bribes of 340,000 yuan Figure 4: Examples of generated summaries. The English translation of source text is also given for better reading. The blue shading intensity denotes the value of the translating probability p trans . ever, the amount in the source article is an approximate value nearly 340,000 yuan, while CLSMT-generated summary directly expresses the exact value, which is inappropriate. The downside to both CLSMT-generated summary and CLSMS-generated summary is that they contain redundant information, since they are relatively lengthy. 
The summary generated by our ATS-A method matches the reference best and captures nearly all the key points in the source article. In conclusion, our method can generate summaries with more accurate semantics than the baselines. 5 Related Work Cross-Lingual Summarization. Traditional cross-lingual summarization approaches are based on the pipelined paradigm and can be categorized into translate-then-summarize (Leuski et al., 2003; Ouyang et al., 2019) and summarize-then-translate (Orasan and Chiorean, 2008; Wan et al., 2010). Leuski et al. (2003) translate the Hindi document into English and then generate the English headline. Ouyang et al. (2019) train a robust abstractive summarization system on noisy English documents and clean English reference summaries. The system can thus learn to produce fluent summaries from disfluent inputs, which enables it to summarize translated documents. Orasan and Chiorean (2008) summarize Romanian news and then translate the summary into English. Wan et al. (2010) apply the summarize-then-translate scheme to English-to-Chinese cross-lingual summarization, extracting English sentences considering both the informativeness and the translation quality of sentences and automatically translating the English summary into Chinese. They also argue that summarize-then-translate is better, since it can alleviate both the computational expense of translating sentences and the sentence extraction errors caused by incorrect translations. There has also been research on improving cross-lingual summarization with bilingual information. Wan (2011) translates the English document into Chinese and extracts sentences based on the original English sentences and the Chinese translation. Yao et al. (2015) propose a compressive method which calculates sentence scores based on the aligned bilingual phrases obtained by a machine translation service and performs compression by deleting redundant or poorly translated phrases. Zhang et al. (2016) introduce an abstractive method that constructs a pool of bilingual concepts represented by the bilingual elements of the source-side predicate-argument structures and the target-side counterparts. Recently, end-to-end methods have been applied to cross-lingual summarization. Due to the lack of supervised training data, Ayana et al. (2018) and Duan et al. (2019) focus on zero-shot training methods that use machine translation or monolingual summarization or both to teach the cross-lingual system. Zhu et al. (2019) propose to acquire large-scale datasets via a round-trip translation strategy. They incorporate monolingual summarization or machine translation into cross-lingual summarization training using multi-task learning. Neural Abstractive Summarization. Rush et al. (2015) present the first neural abstractive summarization model, an attentive convolutional encoder with a neural network language model decoder, which learns to generate news headlines from the lead sentences of news articles. Their approach has been further improved with recurrent decoders (Chopra et al., 2016), abstract meaning representations (Takase et al., 2016), hierarchical networks (Nallapati et al., 2016), variational autoencoders (Miao and Blunsom, 2016), a hybrid strategy (Zhu et al., 2017), a selective mechanism (Zhou et al., 2017), and entailment knowledge (Li et al., 2018). See et al. (2017) propose a pointer-generator network, which allows copying words from the source text with the copying mechanism (Gu et al., 2016).
Li et al. (2018) incorporate entailment knowledge into a summarization model to improve the correctness of the generated summaries. Li et al. (2020) apply guidance signals of keywords to both the encoder and the decoder in an abstractive summarization model. Inspired by the pointer-generator network and the translation pattern observed in obtaining cross-lingual summaries, we introduce a novel model in this paper, which integrates the operations of attending, translating, and summarizing. 6 Conclusion and Future Work In this paper, we present a novel method consistent with the translation pattern in the process of obtaining a cross-lingual summary. This method first attends to the source words, then obtains the translation candidates, and incorporates them into the generation of the final summary. Experimental results have shown that our method can significantly outperform the baseline and achieve comparable performance with the state-of-the-art. Furthermore, our method has two advantages over the state-of-the-art: (1) Our model requires only an additional probabilistic bilingual lexicon rather than large-scale parallel datasets of other tasks, thus reducing the model's dependence on data and making it easier for the model to migrate to other domains or other language pairs. (2) Our model has a much smaller size and much faster training. In future work, we will consider incorporating our method into the multi-task method. Besides, we will also explore the influence on our method of probabilistic bilingual lexicons obtained by learning only from monolingual data. 7 Acknowledgments The research work described in this paper has been supported by the National Key Research and Development Program of China under Grant No. 2017YFC0820700. We thank the three anonymous reviewers for their insightful comments and suggestions. We would like to thank Haitao Lin, Yu Lu, Yining Wang, Lu Xiang, and Yang Zhao for their invaluable contributions in shaping the early stage of this work. We thank Jinliang Lu, Xina Fu, and Meng Zhang for conducting the human evaluation. We make our code publicly available here: https://github.com/ZNLP/ATSum . References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2289–2294. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 789–798. Association for Computational Linguistics. Ayana, Shiqi Shen, Yun Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2018. Zero-shot cross-lingual neural headline generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 26(12):2319–2327. Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Advances in Neural Information Processing Systems 27 (NIPS), pages 1853–1861. Curran Associates, Inc. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 93–98. Association for Computational Linguistics. Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In International Conference on Learning Representations (ICLR). Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 1723–1732. Association for Computational Linguistics. Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, and Weihua Luo. 2019. Zero-shot cross-lingual abstractive sentence summarization through teaching generation and attention. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 3162–3172. Association for Computational Linguistics. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 644–648. Association for Computational Linguistics. Güneş Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research (JAIR), 22:457–479. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pages 249–256. Society for Artificial Intelligence and Statistics. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1631–1640. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR). Anton Leuski, Chin-Yew Lin, Liang Zhou, Ulrich Germann, Franz Josef Och, and Eduard Hovy. 2003. Cross-lingual C*ST*RD: English access to Hindi information. ACM Transactions on Asian Language Information Processing (TALIP), 2(3):245–269. Haoran Li, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2018. Ensure the correctness of the summary: Incorporate entailment knowledge into abstractive sentence summarization. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 1430–1441. Association for Computational Linguistics. Haoran Li, Junnan Zhu, Jiajun Zhang, Chengqing Zong, and Xiaodong He. 2020. Keywords-guided abstractive sentence summarization. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI). Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81. Association for Computational Linguistics. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 319–328.
Association for Computational Linguistics. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), pages 280–290. Association for Computational Linguistics. Constantin Orasan and Oana Andreea Chiorean. 2008. Evaluation of a cross-lingual Romanian-English multi-document summariser. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08). European Language Resources Association (ELRA). Jessica Ouyang, Boya Song, and Kathy McKeown. 2019. A robust abstractive system for cross-lingual summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2025–2031. Association for Computational Linguistics. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 379–389. Association for Computational Linguistics. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1073–1083. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1715–1725. Association for Computational Linguistics. Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1054–1059. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS), pages 5998–6008. Curran Associates, Inc. Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1546–1555. Association for Computational Linguistics. Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 917–926. Association for Computational Linguistics. Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Phrase-based compressive cross-language summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 118–127. Association for Computational Linguistics. Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2016. Abstractive cross-language summarization via translation model enhanced predicate argument structure fusing. IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 24(10):1842–1853. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1959–1970. Association for Computational Linguistics. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578. Association for Computational Linguistics. Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective encoding for abstractive sentence summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1095–1104. Association for Computational Linguistics. Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3045–3055. Association for Computational Linguistics. Junnan Zhu, Long Zhou, Haoran Li, Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2017. Augmenting neural sentence summarization through extractive summarization. In Proceedings of the 6th Conference on Natural Language Processing and Chinese Computing (NLPCC), pages 16–28. Springer. A Supplemental Material Datasets. Table 8 and Table 9 show the statistics of the Zh2EnSum and En2ZhSum datasets, respectively. [Table 8: Corpus statistics of Zh2EnSum (train / valid / test). #Documents: 1,693,713 / 3,000 / 3,000; #AvgChars (S): 103.59 / 103.56 / 140.06; #AvgWords (R): 13.70 / 13.74 / 13.84; #AvgSentChars: 52.73 / 52.41 / 53.38; #AvgSents: 2.32 / 2.33 / 2.30. #AvgChars (S) is the average number of Chinese characters in the source document; #AvgWords (R) is the average number of English words in the reference; #AvgSentChars is the average number of characters in a sentence of the source document; #AvgSents is the average number of sentences in the source document.] [Table 9: Corpus statistics of En2ZhSum (train / valid / test). #Documents: 364,687 / 3,000 / 3,000; #AvgWords (S): 755.09 / 759.55 / 744.84; #AvgChars (R): 55.21 / 55.28 / 54.76; #AvgSentWords: 19.62 / 19.63 / 19.61; #AvgSents: 40.62 / 41.08 / 40.25. #AvgWords (S) is the average number of English words in the source document; #AvgChars (R) is the average number of Chinese characters in the reference; #AvgSentWords is the average number of words in a sentence of the source document; #AvgSents is the average number of sentences in the source document.] Vocabulary Size. Table 10 gives the vocabulary sizes of models with different segmentation granularities. [Table 10: Vocabulary sizes (source / target). Zh2En sw-sw: 100,000 / 40,000; Zh2En w-w: 100,000 / 40,000; En2Zh w-c: 100,000 / 18,000.] We employ the Urheen tool (http://www.nlpr.ia.ac.cn/cip/software.htm) to segment the Chinese text into words. ROUGE Evaluation Details. In the En2Zh task, we first delete the tags < t > and < /t > generated by the models. Then, we convert the text units in the reference and the system output into English IDs, such as word1, word2, etc. Each text unit has a unique English ID. Finally, we report the ROUGE scores based on these English IDs. The ROUGE scores reported in this paper can also be obtained with the files2rouge tool (https://github.com/pltrdy/files2rouge).
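The ID-mapping step in the ROUGE evaluation details above can be sketched as follows; the helper name and the toy strings are assumptions for illustration:

```python
def to_ids(tokens, vocab):
    """Map each distinct text unit (e.g., a Chinese character) to a stable
    placeholder ID such as 'word1', so that an English-centric ROUGE script
    can score Chinese output."""
    out = []
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = f"word{len(vocab) + 1}"
        out.append(vocab[tok])
    return out

vocab = {}
reference = to_ids(list("前卫生局局长受贿"), vocab)
system = to_ids(list("卫生局局长被判"), vocab)
print(" ".join(reference))
print(" ".join(system))  # feed these ID sequences to the ROUGE script
```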
[Example input (English): ed miliband 's plan to cut university tuition fees is facing internal opposition with predictions it could cause a civil war within the party . ed miliband 's plan to cut university tuition fees was yesterday facing mounting opposition with even a former labour no10 aide joining the attack . there were predictions last night that the party could descend into civil war over the controversial proposals after ex-tony blair aide huw evans was joined by the leader of britain 's nurses in challenging the plans . mr miliband has said his pledge to slash the fees from £9,000 a year to £6,000 is 'cast-iron ' , adding the plan will be a 'red line ' in any possible future coalition talks .]" ]
[ "abstain", "objective", "objective", "objective", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "objective", "objective", "objective", "objective", "result", "abstain", "method", "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "The advent of micro-blogging sites has paved the way for researchers to collect and analyze huge volumes of data in recent years.", "Twitter, being one of the leading social networking sites worldwide, provides a great opportunity to its users for expressing their states of mind via short messages which are called tweets.", "The urgency of identifying emotions and sentiments conveyed through tweets has led to several research works.", "It provides a great way to understand human psychology and impose a challenge to researchers to analyze their content easily.", "In this paper, we propose a novel use of a multi-channel convolutional neural architecture which can effectively use different emotion and sentiment indicators such as hashtags, emoticons and emojis that are present in the tweets and improve the performance of emotion and sentiment identification.", "We also investigate the incorporation of different lexical features in the neural network model and its effect on the emotion and sentiment identification task.", "We analyze our model on some standard datasets and compare its effectiveness with existing techniques.", "Social networking sites (e.g., Twitter) have become immensely popular in the last decade.", "User generated content (e.g., blog posts, statuses, tweets etc.) in social media provides a wide range of opinionated, emotional and sentimental content which gives researchers a massive data source to explore.", "For example, Twitter, being one of the leading social networking giants, provides an online environment that allows people of various backgrounds and locations to share their opinions and views on different matters.", "As of July 2018, over 500 million tweets are sent per day having over 300 million monthly active users.", "1 1 http://www.internetlivestats.com/twitter-statistics/ There is often a misconception about sentiments and emotions as these subjectivity terms have been used interchangeably (Munezero et al., 2014).", "Munezero et al. 
(2014) differentiate these two terms along with other subjectivity terms and provide the computational linguistics community with clear concepts for effective analysis of text.", "While sentiment classification tasks deal with the polarity of a given text (whether a piece of text expresses positive, negative or neutral sentiment) and the intensity of the sentiment expressed, emotion mining tasks deal with human emotions, which are more desirable for some end purposes (Ren and Quan, 2012; Desmet and Hoste, 2013; Mohammad et al., 2015).", "Detecting emotion and sentiment from noisy Twitter data is challenging by nature.", "Tweets tend to be short in length and have a diverse vocabulary, making them harder to analyze due to the limited contextual information they contain.", "In this study, we are interested in tackling these two tasks with a novel use of a single neural network architecture.", "A number of emotion theories are available which suggest different sets of basic emotions.", "Interestingly, joy, sadness, anger, fear and surprise are common to all.", "To the best of our knowledge, the model suggested by Ekman (1999) is the most broadly used emotion model.", "In this study, we use Ekman's basic emotions together with other sets of emotions (Plutchik, 1984; Shaver et al., 1987).", "In early textual emotion mining and sentiment analysis research, the usefulness of using external lexicons along with predefined rules has been demonstrated (Aman and Szpakowicz, 2008; Neviarouskaya et al., 2007; Bandhakavi et al., 2017; Thelwall et al., 2010; Gilbert, 2014).", "Aman and Szpakowicz (2008) used Roget's Thesaurus along with WordNet-Affect for fine-grained emotion prediction from blog data.", "Bandhakavi et al. (2017) propose a unigram mixture model (UMM) to create a domain-specific lexicon which performs better in extracting features than Point-wise Mutual Information and supervised Latent Dirichlet Allocation methods.", "[Figure 1: multichannel embedding layer with a Tweet Matrix (L1 x d) and a Hash-Emo Matrix (L2 x d); a convolutional layer with multiple window sizes (k = 1, 2, 3) for the first channel and a single window size (k = 1) for the second; dropout and max pooling.]", "Neviarouskaya et al. (2007) propose a rule-based system which can handle informal texts in particular.", "They built a database of abbreviations, emoticons, affect words, etc., in which each entry is labeled with an emotion and its intensity.", "Thelwall et al. (2010) propose an algorithm, SentiStrength, which utilizes a dictionary of sentiment words associated with strength measures to deal with short informal texts from social media.", "Gilbert (2014) proposes VADER, a rule-based model for sentiment analysis.", "They built a lexicon which is specially attuned to microblog-like contexts, and their model outperforms individual human raters.", "More recently, deep learning models have proven to be very successful when applied to various text-related tasks (Kim, 2014; Kalchbrenner et al., 2014; dos Santos and Gatti, 2014; Tai et al., 2015; Wang et al., 2016; Felbo et al., 2017; Abdul-Mageed and Ungar, 2017).", "Kim (2014) showed the effectiveness of a simple CNN model that leverages pretrained word vectors for a sentence classification task.", "Kalchbrenner et al. 
(2014) propose a dynamic CNN model using a dynamic k-max pooling mechanism that generates a feature graph capturing a variety of word relations.", "They showed the efficacy of their model by achieving high performance on binary and multi-class sentiment classification tasks without any feature engineering.", "dos Santos and Gatti (2014) propose a deep CNN model that uses both character and word-level information, allowing them to achieve state-of-the-art performance on both binary and fine-grained multi-class sentiment classification for one of the Twitter datasets.", "Tai et al. (2015) propose a Tree-LSTM model which captures syntactic properties in text.", "Their model performs particularly well on sentiment classification.", "Wang et al. (2016) propose a regional CNN-LSTM model for dimensional sentiment analysis.", "Their proposed model computes valence-arousal ratings from texts and outperforms several regression-based methods.", "Felbo et al. (2017) propose a bidirectional LSTM model with attention, showing that their model learns better representations when distant supervision is expanded to a set of noisy labels.", "Abdul-Mageed and Ungar (2017) also used distant supervision to build a large Twitter dataset and proposed a Gated Recurrent Neural Network model for fine-grained emotion detection.", "The recent success of neural models motivated us to take a different look at the task of predicting sentiment and emotion from noisy Twitter data.", "Compared with sequential models, CNN models train faster and seem to work very well on noisy data such as tweets, which are grammatically error-prone.", "We decided to work with CNN models after our initial experiments suggested that they perform comparatively better than a simple BiLSTM model on Twitter datasets.", "We address the following questions in this paper: Can CNN models be used in a way that improves the performance of detecting emotion and sentiment from noisy Twitter data?", "How important are hashtag words, emoticons and emojis as predictors of emotion and sentiment in micro-blogging sites?", "How can we encode them in a multi-channel convolutional neural network?", "How can we add external features to a CNN model effectively?", "The remainder of the paper is organized as follows: We describe our model architecture in detail in Section 2.", "In Section 3, we describe the datasets and lexicons used in our experiments.", "We also describe the experimental setup required for working with Twitter data.", "In Section 4, we discuss the results from our experiments.", "In Section 5, we discuss our findings with particular attention paid to answering the above questions.", "Finally, in Section 6, we give a summary of our work followed by our remarks on future studies.", "We present the architecture of our model in Fig. 1.", "The model consists of an embedding layer with two channels, a convolution layer with different kernel sizes and multiple filters, a dropout layer for regularization, a max pooling layer, multiple hidden layers and a softmax layer (see the illustrative sketch below).", "We now describe each of these layers in detail.", "In this layer, two embedding matrices, the Tweet Matrix and the Hash-Emo Matrix, are passed through two different channels of our convolutional neural network.", "The first matrix represents a particular tweet.", "Each tweet $t_i$ consists of a sequence of tokens $w_1, w_2, \ldots, w_{n_i}$.",
"$L_1$ is the maximum tweet length.", "The height of the Tweet Matrix is $L_1$.", "Short tweets are padded using zero padding.", "In the Tweet Matrix, every word is represented as a $d$-dimensional word vector.", "Since tweets are usually noisy, short in length, and have different kinds of features other than text, it is useful to have a word embedding specially trained on a large amount of Twitter data (Tang et al., 2014).", "Previous research (Collobert et al., 2011; Socher et al., 2011) has shown the usefulness of using pretrained word vectors to improve the performance of various models.", "As a result, in our experiments, we have used the publicly available pre-trained GloVe word vectors for Twitter by Pennington et al. (2014a).", "The word vectors are trained on 27B word tokens in an unsupervised manner.", "In this layer, we also pass another matrix called the Hash-Emo Matrix through a different channel in our network.", "This matrix is composed of three different sets of features: hashtags, emoticons and emojis.", "These are considered distinguishable traits that showcase one's mood (Zhao et al., 2012).", "People like to use hashtags to express their emotional state through various micro-blogging sites (e.g., Twitter) (Qadir and Riloff, 2014).", "Graphical emoticons and emojis can also convey strong emotion or sentiment.", "So for each tweet $t_i$, we extract hashtags $h_1, h_2, \ldots, h_{k_i}$ and emoticons/emojis $e_1, e_2, \ldots, e_{p_i}$.", "We concatenate the hashtag and emoticon/emoji vectors for each tweet $t_i$ to get the Hash-Emo Matrix.", "We introduce a hyper-parameter $L_2$ as a threshold on the height of the Hash-Emo Matrix.", "Tweets with fewer hash-emo features than $L_2$ are zero-padded, while tweets with more hash-emo features than $L_2$ are truncated.", "We use word vectors from GloVe with dimension $d$ for hashtag words.", "In the case that no word vector is found for a particular word, we randomly initialize it.", "We also randomly initialize the word vectors for emoticons.", "For emojis, we first map each one to a descriptive phrase (discussed in more detail in Section 3.2) and then generate random word vectors.", "These word vectors are tuned during the training phase.", "In this layer, we apply $m$ filters of varying window sizes over the Tweet Matrix from the embedding layer.", "Here, window size ($k$) refers to the number of adjacent word vectors in the Tweet Matrix that are filtered together (when $k > 1$).", "Sliding the filter down, we repeat this for the rest of the word vectors.", "Let $w_i \in \mathbb{R}^d$ be the $d$-dimensional word vector corresponding to the $i$-th word in a tweet.", "Also let $w_{i:i+j}$ denote the concatenation of word vectors $w_i, w_{i+1}, \ldots, w_{i+j}$, and let $F \in \mathbb{R}^{k \times d}$ denote the filter matrix.",
"Thus a feature $f_i$ is generated by $f_i = F \cdot w_{i:i+k-1} + b$ (1), where $b$ is a bias term and $\cdot$ represents the convolution action (a sum over element-wise multiplications).", "At this stage, we apply a nonlinear activation function such as ReLU (Nair and Hinton, 2010) before passing it through the dropout layer.", "We use multiple filters with the same window size in order to learn complementary features from the same window.", "Different window sizes ($k$) allow us to extract active local $k$-gram features.", "For the Hash-Emo Matrix, we apply $m$ filters to each vector to generate local unigram features at different scales before passing it to the next layer.", "In this layer, we employ a max pooling operation (Collobert et al., 2011) on the output of the previous layer for each channel to extract the most salient features.", "In this way, for each filter, we get the maximum value.", "So at this stage we get as many features as there are filters.", "We chose max pooling instead of other pooling schemes because Zhang and Wallace (2017) showed that max pooling consistently performs better than other pooling strategies for various sentence classification tasks.", "We concatenate all the feature vectors from the previous layer.", "In addition, we concatenate additional sentiment and affect feature vectors (described in detail in Section 3.2), which forms a large feature vector.", "This is then passed through a number of hidden layers.", "A nonlinear activation function (i.e., ReLU (Nair and Hinton, 2010)) is applied in each layer before the vector is finally passed through the output layer.", "We tried a different activation function (tanh) as well, but ReLU worked the best for us.", "This is a fully connected layer which maps the inputs to a number of outputs corresponding to the number of classes we have.", "For the multi-class classification task, we use softmax as the activation function and categorical cross-entropy as the loss function.", "The output of the softmax function is a categorical probability distribution over the classes.", "For the binary classification task, we use sigmoid as the activation function and binary cross-entropy as our loss function.", "We used a number of emotion and sentiment datasets for our experiments.", "A description of each dataset is given below: BTD.", "Big Twitter Data is an emotion-labeled Twitter dataset provided by Wang et al. (2012).",
(2012).", "The dataset had been automatically annotated based on the seven emotion category seed words (Shaver et al., 1987) being a hashtag and the quality of the data was verified by two annotators as described in (Wang et al., 2012).", "We were only able to retrieve a portion of the original dataset as many tweets were either removed or not available at the time we fetched the data using the Twitter API.", "We applied the heuristics from (Wang et al., 2012) to remove any hashtags from the tweets which belong to the list of emotion seed words.", "TEC.", "Twitter Emotion Corpus has been published by Mohammad (2012) for research purposes.", "About 21,000 tweets were collected based on hashtags corresponding to Ekman's (1999) six basic emotions.", "The dataset has been used in related works (Shahraki and Zaiane, 2017; Balahur, 2013; Mohammad and Kiritchenko, 2015).", "CBET.", "The Cleaned Balanced Emotional Tweet dataset is provided by Shahraki and Zaiane (2017).", "To the best of our knowledge, this is one of the largest publically available balanced datasets for twitter emotion detection research.", "The dataset contains 80,937 tweets with nine emotion categories including Ekman's six basic emotions.", "SE.", "The SemEval-2018 Task 1 Affect dataset was provided by Mohammad et al. (2018).", "The SemEval task was to estimate the intensity of a given tweet and its corresponding emotion.", "However, in this study, we utilize the labeled dataset only to classify the tweets into four emotion categories and use the training, development and test sets provided in this dataset in our experiments.", "STS-Gold.", "This dataset was constructed by Saif et al. (2013) for Twitter sentiment analysis.", "The dataset contains a total of 2,034 tweets labeled (positive/negative) by three annotators.", "This dataset has been extensively used in several works for model evaluation (Saif et al., 2014b; Krouska et al., 2017; Saif et al., 2014a).", "STS.", "The Stanford Twitter Sentiment dataset was introduced by Go et al. (2009).", "It consists of a training set and a test set.", "The training set contains around 1.6 million tweets, whereas the test set contains 498 tweets.", "The training set was built automatically based on several emoticons as potential identifiers of sentiment.", "However, the test set was manually annotated and heavily used for model evaluation in related research.", "We perform one experiment with all three labels (posi-tive/negative/neutral) to compare the performance of different variants of our model and another one with two labels (positive/negative) to make comparison with related works (Jianqiang et al., 2018; dos Santos and Gatti, 2014; Go et al., 2009).", "SS-Twitter.", "The Sentiment Strength Twitter dataset was constructed by Thelwall et al. (2012) to evaluate SentiStrength.", "The tweets were manually labeled by multiple persons.", "Each tweet is assigned a number between 1 and 5 for both positive and negative sentiments, 1 represents weak sentiment strength and 5 represents strong sentiment strength.", "We followed the heuristics used by Saif et al. 
"The transformed dataset has been used in other literature (Go et al., 2009; Zhang et al., 2018).", "Data Cleaning.", "Twitter data is unstructured and highly informal (Yoon et al., 2013), and thus it requires a great deal of effort to make it suitable for any model.", "NLTK (Bird and Loper, 2004) provides a regular-expression based tokenizer for Twitter, TweetTokenizer, which preserves user mentions, hashtags, urls, emoticons and emojis in particular.", "It also reduces the length of repeated characters to three (e.g., Haaaaaapy will become Haaapy).", "In our experiments, we utilized the TweetTokenizer to tokenize tweets (see the cleaning sketch below).", "To accommodate the pretrained word vectors from Pennington et al. (2014b), we pre-processed each tweet in a number of ways.", "We lowercased all the letters in the tweet.", "User mentions have been replaced with the < user > token (e.g., @username1 will become < user >).", "In addition, we also removed urls from the tweets, as urls do not provide any emotional value.", "We also normalized certain negative words (e.g., won't will become will not).", "Using slang words is a very common practice in social media.", "We compiled a list of the most common slang words from various online resources (e.g., https://slangit.com/terms/social_media) and replaced all of the occurrences with their full form (e.g., nvm will become never mind).", "Our list of slang words does not contain any word which has multiple meanings.", "Usage of certain punctuation is often crucial in social media posts, as it helps the user to emphasize certain things.", "We found that two punctuation symbols (! and ?) are commonly used by social media users to express certain emotional states.", "We kept these symbols in our text and normalized the repetitions (e.g., !!! will become ! < repeat >).",
< repeat > ).", "The use of emojis and emoticons has increased significantly with the advent of various social media sites.", "Emoticons (e.g., :-D) are essentially a combination of punctuation marks, letters and numbers used to create pictorial icons which generally display an emotion or sentiment.", "On the other hand, emojis are pictographs of faces, objects and symbols.", "The primary purpose of using emojis and emoticons is to convey certain emotions and sentiments (Dresner and Herring, 2010).", "One advantage of using the TweetTokenizer is that it gives us emoticons and emojis as tokens.", "Though we use the emoticons as is in our experiment, we utilize a python library called emoji to get descriptive details about the pictorial image.", "For example, (cid:44) represents smiling face .", "In our experiments, we removed stop-words and replaced numbers occurring in the tweets with the token < number > .", "We also stripped off # symbols from all the hashtags within the tweets (e.g., 2 Example: https://slangit.com/terms/social media Emotion Dataset BTD TEC CBET SemEval P R F1 P R F1 P R F1 P R F1 joy 68 .", "# depressed will become depressed) and used the stripped version of hashtags on the second channel of our model.", "We only kept tokens with more than one character.", "Input Features.", "Along with word embed-dings, we used additional affect and sentiment features in our network.", "In our experiments, we used a feature vector V f where each value in the vector corresponds to a particular lexical feature ranging between [ 0 , 1 ] .", "We utilized a number of publicly available lexicons which are described briefly below to construct the vector.", "Warriner et al. (2013) provides a lexicon consisting of 14,000 English lemmas with valence, arousal and dominance scores.", "Three components of emotion are scored for each word between 1 and 9 in this lexicon.", "We calculate the average score for each component across all tokens in a tweet and normalize them in the range [0, 1].", "Gilbert (2014) provides a list of lexical features along with their associated sentiment intensity measures.", "We use this lexicon to calculate the average of positive, negative, and neutral scores over all the tokens in a tweet.", "In addition, we used the NRC Emotion Lexicon provided by Mohammad and Turney (2013) which consists of a list of unigrams and their association with one of the emotion categories (anger, anticipation, disgust, fear, joy, sadness, surprise, trust).", "We use the percentage of tokens belonging to each emotion category as features.", "We also used the NRC Affect Intensity Lexicon provided by Mohammad and Bravo-Marquez (2017) and NRC Hashtag Emotion Lexicon provided by Mohammad and Kiritchenko (2015) which contain real-valued fine-grained word-emotion association scores for words and hashtag words.", "We combined two lexicons MPQA and BingLiu provided by Wilson et al. (2005) and Hu and Liu (2004), respectively, and used them to calculate the percentage of positive and negative tokens belonging to each tweet.", "We also used AFINN (Nielsen, 2011) which contains a list of English words rated for valence with an integer between 5 (negative) to + 5 (positive).", "We first normalized the scores in the range [0,1] and then calculated the average of this score over all the tokens in a tweet.", "Lastly, we detect the presence of consecutive exclamation (!) and question marks (?) 
"Network Parameters and Training.", "Zhang and Wallace (2017) performed a sensitivity analysis on various parameters of a one-layer CNN model and showed how tuning the parameters can affect the performance of a model.", "Inspired by Zhang and Wallace (2017), we also searched for the optimal parameter configurations in our network.", "Table 3 shows the different hyper-parameter configurations that we tried and the final configuration that was used in our model.", "The final configuration was based on both performance and training time.", "The embedding dimension was set to 100 for both channels of our network, as it worked best among the dimensions we tried.", "We also experimented with different numbers of filters and varying kernel sizes.", "[Table 3: Hyper-parameter ranges and selected values. Embedding dimension: 50 / 100 / 200, selected 100; Number of filters: 64 / 128 / 256, selected 128; Kernel sizes: 1 / 2 / 3 / 4 / 5, selected 1 / 2 / 3; Batch size: 16 / 30 / 50, selected 16; Epochs: 10 / 20, selected 10; Dropout rate: 0. ...]", "The combination of kernel sizes $k = 1, 2, 3$ in the first channel and $k = 1$ in the second channel worked the best for us.", "We also experimented with various batch sizes; the performance of the model remained reasonably constant, though the training time varied significantly.", "In our network, we used three hidden layers.", "In addition, we used the Adam optimizer (Kingma and Ba, 2014) and the back-propagation (Rumelhart et al., 1986) algorithm for training our model.", "Keras 2.2.0 was used for implementing the model.", "Regularization.", "In order to reduce overfitting, it is a common practice to employ regularization strategies in CNNs.", "In our experiments, we used dropout regularization (Srivastava et al., 2014) for both of the channels after the convolutional layer.", "We experimented with three different dropout rates, as seen in Table 3, and also with no dropout at all.", "The model works better when we apply dropout after the convolutional layer.", "In this section, we describe the results obtained in our experiments.", "We use precision, recall, F1-score and accuracy as our evaluation metrics.", "In recent emotion category recognition studies on Twitter data, researchers tend to construct their own datasets by collecting tweets from Twitter for their experiments.", "Hence, it is hard to find a large enough benchmark dataset against which to compare performance.", "In this study, we experimented with four emotion labeled datasets which have been made publicly available by their authors.", "Table 2 shows the results for each emotion category for all of the datasets.", "For the BTD dataset, we trained our model with 1,136,305 tweets, while we used 140,979 and 142,602 tweets as development and test data, respectively.", "We used the same training, development and test sets as Wang et al. (2012), except that our retrieved dataset contains fewer samples.", "We achieved relatively high F1-scores of 72.6%, 73.6% and 76.8% for joy, sadness and anger, respectively, whereas for surprise we get a low F1-score of 27.1%.", "This is probably due to the imbalanced nature of the dataset, as can be seen in Table 1.", "The number of samples for joy, sadness and anger is much higher than for surprise.", "Our model achieves an accuracy of 69.2%, whereas Wang et al. (2012) reported an accuracy of 65.6% when trained on a much larger dataset.",
"We cannot make a direct comparison with Wang et al. (2012), since we were not able to retrieve the full test set due to the unavailability of some tweets at the time of fetching data from Twitter.", "For the TEC dataset, we evaluated our model with 10-fold cross-validation.", "Mohammad (2012) reported an F1-score of 49.9% with SVM, whereas our model achieves an F1-score of 55.6%.", "For the CBET dataset, we used 80% of the data as the training set and the remaining 20% as the test set.", "We get an average F1-score of 56.5%.", "We also used 10-fold cross-validation for the SemEval dataset and achieved an F1-score of 61.3%.", "Table 4 shows the performance of our model with 10-fold cross-validation on different sentiment datasets with two classes (positive and negative).", "[Table 5: per-label precision, recall, F1 and accuracy of the model variants on the sentiment datasets (positive/negative/neutral).]", "For the STS-Gold dataset, our model achieves an accuracy of 90.7%, whereas the previous best accuracy (86.0%) was reported by Jianqiang et al. (2018) with a deep CNN model.", "Our model achieves the best accuracy (90.3%) for the STS-Test dataset as well, while the previous best (87.6%) was reported by Jianqiang et al. (2018).", "dos Santos and Gatti (2014) also experimented with the same dataset with their Character to Sentence CNN model, reporting an accuracy of 86.4%.", "Lastly, for the SS-Twitter dataset, our model achieves an accuracy of 79.3%, whereas Zhang et al. (2018) and Saif et al. (2013) reported accuracies of 61.9% and 73.4%, respectively.", "Tables 5 and 6 show the performance of three variants of our model on the sentiment labeled datasets and the emotion labeled datasets, respectively.", "The first variant is a basic CNN model without hash-emo embedding or any additional features.", "The second variant includes the hash-emo embedding, while the last variant combines additional lexical features as well.", "It can be observed that when we introduce the second channel with hash-emo embedding, we get a significant increase in accuracy for most of the datasets.", "We can see in Table 5 that, for the STS-Test and SS-Twitter datasets, we get better F1-scores for all three sentiment labels when we include the hash-emo embedding along with external lexical features.", "[Table 6: accuracy of the model variants on the BTD, TEC, CBET and SE datasets.]", "In Table 6, we can see that inclusion of the hash-emo embedding in the network gives us 2.4, 3.3, 2.3 and 3.5 percentage point increases in accuracy, and the further inclusion of additional features gives us 3.1, 4.6, 2.6 and 5.7 percentage point increases in accuracy, for the BTD, TEC, CBET and SE datasets, respectively, over the base models.", "In this study, we have shown the effectiveness of encoding hashtags, emoticons and emojis through a separate channel in a CNN network for emotion and sentiment detection tasks.", "Our MC-CNN model with hash-emo embedding performs well when compared to the basic CNN model.", "To the best of our knowledge, our model achieves the best accuracies on the three sentiment datasets, and has significant improvement in performance on the four emotion labeled datasets over the basic CNN model.", "The results show the importance of hashtags, emoticons and emojis in social media as predictors of emotion and sentiment.", "The model performs even better when additional lexical features are introduced into the network.",
"In this paper, we propose a novel use of a multichannel convolutional neural architecture which effectively encodes different types of emotion indicators that are found in social media posts.", "Results suggest that encoding the emotion indicators through a separate channel provides a significant improvement over traditional CNN-based models.", "We also demonstrate a simple approach to incorporate different lexical features in the network, giving us comparatively better results when used along with our MC-CNN model.", "Our model performs particularly well on two important tasks in social media: emotion detection and sentiment analysis.", "This model can be extended to perform other tasks as well.", "We are thankful for the support by The Social Sciences and Humanities Research Council of Canada (SSHRC) through our Partnership Development Grant titled Living Archives of Rwandan Exiles and Survivors in Canada." ]
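The multi-channel model described above can be summarized in a short sketch. The following is a minimal, hypothetical Keras implementation, not the authors' released code: the vocabulary sizes, the sequence lengths L1 and L2, the lexical-feature width, the class count and the 0.5 dropout rate are placeholder assumptions; only the wiring taken from the text (two embedding channels, kernel sizes k = 1, 2, 3 versus k = 1, dropout before max pooling, the concatenated feature vector V_f, three hidden layers and a softmax output) follows the paper.

from keras.models import Model
from keras.layers import (Input, Embedding, Conv1D, GlobalMaxPooling1D,
                          Dropout, Dense, Concatenate)

# Placeholder sizes; the paper fixes only d = 100, 128 filters, 3 hidden layers.
L1, L2, d, n_lex, n_classes = 50, 10, 100, 20, 6
V_words, V_hashemo = 30000, 5000                      # assumed vocabulary sizes

tweet_in = Input(shape=(L1,), name='tweet')           # channel 1: token ids
hashemo_in = Input(shape=(L2,), name='hash_emo')      # channel 2: hashtag/emoji ids
lex_in = Input(shape=(n_lex,), name='v_f')            # external lexical features

tweet_emb = Embedding(V_words, d)(tweet_in)           # GloVe-initialized in the paper
hashemo_emb = Embedding(V_hashemo, d)(hashemo_in)     # randomly initialized, tuned

pooled = []
for k in (1, 2, 3):                                   # channel 1: multiple window sizes
    conv = Conv1D(128, k, activation='relu')(tweet_emb)
    pooled.append(GlobalMaxPooling1D()(Dropout(0.5)(conv)))
conv = Conv1D(128, 1, activation='relu')(hashemo_emb) # channel 2: k = 1 only
pooled.append(GlobalMaxPooling1D()(Dropout(0.5)(conv)))

x = Concatenate()(pooled + [lex_in])                  # pooled features + V_f
for _ in range(3):                                    # three hidden layers
    x = Dense(128, activation='relu')(x)
out = Dense(n_classes, activation='softmax')(x)       # sigmoid for the binary case

model = Model([tweet_in, hashemo_in, lex_in], out)
model.compile(optimizer='adam', loss='categorical_crossentropy')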
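The data-cleaning steps can be sketched as well. This is a hedged illustration, not the authors' script: the one-entry slang table and the token-level rules are stand-ins for their compiled lists, and whether the tokenizer keeps punctuation runs such as !!! together is left as an assumption in the relevant branch.

import re
from nltk.tokenize import TweetTokenizer

SLANG = {'nvm': 'never mind'}                 # assumed excerpt of the slang list
tokenizer = TweetTokenizer(reduce_len=True)   # Haaaaaapy -> Haaapy

def clean(tweet):
    tweet = re.sub(r"won't", 'will not', tweet.lower())  # negation normalization
    tokens = []
    for t in tokenizer.tokenize(tweet):
        if t.startswith('@') and len(t) > 1:
            tokens.append('<user>')           # user mentions -> <user>
        elif t.startswith(('http://', 'https://')):
            continue                          # urls carry no emotional value
        elif len(t) > 1 and set(t) <= {'!', '?'}:
            tokens.extend([t[0], '<repeat>']) # !!! -> ! <repeat> (if kept as one token)
        else:
            tokens.append(SLANG.get(t, t))    # expand slang (nvm -> never mind)
    return tokens

print(clean("@user1 nvm, it won't break!!! https://t.co/x"))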
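Part of the lexical feature vector V_f can likewise be made concrete. The two tiny dictionaries below are hypothetical excerpts standing in for the full Warriner et al. and AFINN lexicons; the rescaling follows the ranges stated above ([1, 9] and [-5, +5], both mapped to [0, 1]), and the default values for tweets with no lexicon hits are assumptions.

# Hypothetical lexicon excerpts; the real lexicons hold thousands of entries.
VAD = {'happy': (8.47, 6.05, 7.21), 'sad': (2.10, 3.49, 3.84)}  # scored 1-9
AFINN = {'great': 3, 'awful': -3}                               # scored -5..+5

def lexical_features(tokens):
    feats = []
    hits = [VAD[t] for t in tokens if t in VAD]
    if hits:  # average valence, arousal, dominance; rescale [1, 9] -> [0, 1]
        for component in zip(*hits):
            feats.append((sum(component) / len(component) - 1.0) / 8.0)
    else:
        feats.extend([0.0, 0.0, 0.0])
    # AFINN: rescale [-5, +5] -> [0, 1], then average over matched tokens
    scores = [(AFINN[t] + 5.0) / 10.0 for t in tokens if t in AFINN]
    feats.append(sum(scores) / len(scores) if scores else 0.5)
    # boolean flag for normalized consecutive exclamation/question marks
    feats.append(float('<repeat>' in tokens))
    return feats

print(lexical_features(['happy', 'great', '!', '<repeat>']))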
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "objective", "method", "method", "method", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "result", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "other" ]
[ "Representation learning with pivot-based methods and with Neural Networks (NNs) have lead to significant progress in domain adaptation for Natural Language Processing.", "However, most previous work that follows these approaches does not explicitly exploit the structure of the input text, and its output is most often a single representation vector for the entire text.", "In this paper we present the Pivot Based Language Model (PBLM) , a representation learning model that marries together pivot-based and NN modeling in a structure aware manner.", "Particularly, our model processes the information in the text with a sequential NN (LSTM) and its output consists of a context-dependent representation vector for every input word.", "Unlike most previous representation learning models in domain adaptation, PBLM can naturally feed structure aware text classifiers such as LSTM and CNN.", "We experiment with the task of cross-domain sentiment classification on 20 domain pairs and show substantial improvements over strong baselines.", "1 1 Introduction Domain adaptation ( DA , (Daume III, 2007; Ben-David et al., 2010)) is a fundamental challenge in NLP, due to the reliance of many algorithms on costly labeled data which is scarce in many domains.", "To save annotation efforts, DA aims to import algorithms trained with labeled data from one or several domains to new ones.", "While DA algorithms have long been developed for many tasks and domains (e.g. (Jiang and Zhai, 2007; Mc-Closky et al., 2010; Titov, 2011; Bollegala et al., 2011; Rush et al., 2012; Schnabel and Schutze, 1 Our code is publicly available at: https://github. com/yftah89/PBLM-Domain-Adaptation . 2014)), the unprecedented growth of heterogeneous online content calls for more progress.", "DA through Representation Learning (DReL), where the DA method induces shared representations for the examples in the source and the target domains, has become prominent in the Neural Network (NN) era.", "A seminal (non-NN) DReL work is structural correspondence learning (SCL) (Blitzer et al., 2006, 2007) which models the connections between pivot features features that are frequent in the source and the target domains and are highly correlated with the task label in the source domain and the other, non-pivot, features.", "While this approach explicitly models the correspondence between the source and the target domains, it has been outperformed by NN-based models, particularly those based on autoencoders (AEs, (Glorot et al., 2011; Chen et al., 2012)) which employ compress-based noise reduction to extract features that empirically support domain adaptation.", "Recently, Ziser and Reichart (2017) (ZR17) proposed to marry these approaches.", "They have presented the autoencoder-SCL models and demonstrated their superiority over a large number of previous approaches, particularly those that employ pivot-based ideas only or NNs only.", "Current DReL methods, however, suffer from a fundamental limitation: they ignore the structure of their input text (usually sentence or docu-ment).", "This is reflected both in the way they represent their input text, typically with a single vector whose coordinates correspond to word counts or indicators across the text, and in their output which typically consists of a single vector representation.", "This structure-indifferent approach stands in a sharp contrast to numerous NLP algorithms where text structure plays a key role.", "Moreover, learning a single feature vector per 1241 input example, these methods can 
"This may be a serious limitation given the excellent performance of structure aware models in a large variety of NLP tasks, including sentiment analysis and text classification (e.g., Kim, 2014; Yogatama et al., 2017), which are prominent DA evaluation tasks.", "Fig. 1 demonstrates the limitation of structure-indifferent modeling in DA for sentiment analysis.", "While the example review contains more positive pivot features (see definition in Sec. 2), the sentiment expressed in the review is negative.", "A representation learning method should encode the review structure (e.g., the role of the terms at first and However) in order to uncover the sentiment.", "In this paper we overcome these limitations.", "We present (Section 3) the Pivot Based Language Model (PBLM), a domain adaptation model that", "(a) is aware of the structure of its input text; and", "(b) outputs a representation vector for every input word.", "Particularly, the model is a sequential NN (LSTM) that operates very similarly to LSTM language models (LSTM-LMs).", "The fundamental difference is that while for every input word LSTM-LMs output a hidden vector and a prediction of the next word, the output of PBLM is a hidden vector and a prediction of the next word if that word is a pivot feature, or else a generic NONE tag.", "Hence, PBLM not only exploits the sequential nature of its input text, but its output states can naturally feed LSTM and CNN task classifiers.", "Notice that PBLM is very flexible: instead of pivot based unigram prediction it can be defined to predict pivots of arbitrary length (e.g., the next bigram or trigram), or, alternatively, it can be defined over sentences or other textual units instead of words.", "Following a large body of DA work, we experiment (Section 5) with the task of binary sentiment classification.", "We consider adaptation between each domain pair in the four product review domains of Blitzer et al. (2007) (12 domain pairs) as well as between these domains and an airline review domain (Nguyen, 2015) and vice versa (8 domain pairs).", "The latter 8 setups are particularly challenging, as the airline reviews tend to be more negative than the product reviews (see Section 4).", "Pivots are defined with respect to a (source, target) domain pair.", "The pivots highlighted in the figure are the pivots for this review in all the setups we explored.", "[Figure 1: an example review with its pivots underlined: I was at first very excited with my new Zyliss salad spinner it is easy to spin and looks great ... However, ... 
it doesn't get your greens very dry. I've been surprised and disappointed by the amount of water left on lettuce after spinning, and spinning, and spinning. (underlined pivots: very excited, great)]", "We implement PBLM with two task classifiers, LSTM and CNN, and compare them to strong previous models, among which are: SCL (pivot based, no NN), the marginalized stacked denoising autoencoder model (MSDA; Chen et al., 2012; AE based, no pivots), the MSDA-DAN model (Ganin et al., 2016; AE with a Domain Adversarial Network (DAN) enhancement) and AE-SCL-SR (the best performing model of ZR17, combining AEs, pivot information and pre-trained word vectors).", "PBLM-LSTM and PBLM-CNN perform very similarly to each other and strongly outperform previous models.", "For example, PBLM-CNN achieves averaged accuracies of 80.4%, 84% and 76.2% in the 12 product domain setups, 4 product to airline setups and 4 airline to product setups, respectively, while AE-SCL-SR, the best baseline, achieves averaged accuracies of 78.1%, 78.7% and 68.1%, respectively.", "DA is an established challenge in machine learning in general and in NLP in particular (e.g., Roark and Bacchiani, 2003; Chelba and Acero, 2004; Daume III and Marcu, 2006).", "While DA has several setups, the focus of this work is on unsupervised DA.", "In this setup we have access to unlabeled data from the source and the target domains, but labeled data is available in the source domain only.", "We believe that in the current web era, with the abundance of text from numerous domains, this is the most realistic setup.", "Several approaches to DA have been proposed, for example: instance reweighting (Huang et al., 2007; Mansour et al., 2009), sub-sampling from both domains (Chen et al., 2011) and learning joint target and source feature representations (DReL), the approach we take here.", "The rest of this section hence discusses DReL work that is relevant to our ideas, but first we describe our problem setup.", "Unsupervised Domain Adaptation with DReL The pipeline of this setup typically consists of two steps: representation learning and classification.", "In the first step, a representation model is trained on the unlabeled data from the source and target domains.", "In the second step, a classifier for the supervised task is trained on the source domain labeled data.", "To facilitate domain adaptation, every example that is fed to the task classifier (second step) is first represented by the representation model of the first step.", "This is true both when the task classifier is trained and at test time when it is applied to the target domain.", "An exception to this pipeline are end-to-end models that jointly learn to represent the data and to perform the classification task, exploiting the unlabeled and labeled data together.", "A representative member of this class of models (MSDA-DAN; Ganin et al., 2016) is one of our baselines.", "Pivot Based Domain Adaptation This approach was proposed by Blitzer et al. (2006, 2007), through their SCL method.",
"Its main idea is to divide the shared feature space of the source and the target domains into a set of pivot features that are frequent in both domains and have a strong impact on the source domain task classifier, and a complementary set of non-pivot features.", "In SCL, after the original feature set is divided into the pivot and non-pivot subsets, this division is utilized in order to map the original feature space of both domains into a shared, low-dimensional, real-valued feature space.", "To do so, a binary classifier is defined for each of the pivot features.", "This classifier takes the non-pivot features of an input example as its representation, and is trained on the unlabeled data from both the source and the target domains, to predict whether its associated pivot feature appears in the example or not.", "Note that no human annotation is required for the training of these classifiers; the supervision signal is in the unlabeled data.", "The matrix whose columns are the weight vectors of the classifiers is post-processed with singular value decomposition (SVD), and the derived matrix maps feature vectors from the original space to the new space.", "Since the presentation of SCL, pivot-based DA has been researched extensively (e.g., Pan et al., 2010; Gouws et al., 2012; Bollegala et al., 2015; Yu and Jiang, 2016; Ziser and Reichart, 2017).", "PBLM is a pivot-based method but, in contrast to previous models, it relies on sequential NNs to exploit the structure of the input text.", "Even models such as Bollegala et al. (2015), which embed pivots and non-pivots so that the former can predict if the latter appear in their neighborhood, learn a single representation for all the occurrences of a word in the input corpus.", "That is, Bollegala et al. (2015), as well as other methods that learn cross-domain word embeddings (Yang et al., 2017), learn word-type representations, rather than context specific representations.", "In Sec. 3 we show how PBLM's context specific outputs naturally feed structure aware task classifiers such as LSTM and CNN.", "AE Based Domain Adaptation The basic elements of an autoencoder are an encoder function $e$ and a decoder function $d$, and its output is a reconstruction of its input $x$: $r(x) = d(e(x))$.", "The parameters of the model are trained to minimize a loss between $x$ and $r(x)$, such as their Kullback-Leibler (KL) divergence or their cross entropy.", "Variants of AEs are prominent in recent DA literature.", "Examples include Stacked Denoising Autoencoders (SDA; Vincent et al., 2008; Glorot et al., 2011) and marginalized SDA (MSDA; Chen et al., 2012), which is more computationally efficient and scalable to high-dimensional feature spaces than SDA, and has been extended in various manners (e.g., Yang and Eisenstein, 2014; Clinchant et al., 2016).", "Finally, models based on variational autoencoders (Kingma and Welling, 2014; Rezende et al., 2014) have recently been applied in DA (e.g., the variational fair autoencoder (Louizos et al., 2016)), but in our experiments they were still not competitive with MSDA.",
"While AE based models have set a new state-of-the-art for DA in NLP, they are mostly based on noise reduction in the representation and do not exploit task specific and linguistic information.", "This paved the way for ZR17, who integrated pivot-based ideas into domain adaptation with AEs.", "Combining Pivots and AEs in Domain Adaptation ZR17 combined AEs and pivot-based modeling for DA.", "Their basic model (AE-SCL) is a three-layer feed-forward network where the non-pivot features are fed to the input layer and encoded into a hidden representation, and this hidden representation is then decoded into the pivot features of the input example.", "Their advanced model (AE-SCL-SR) has the same architecture, but the decoding matrix consists of pre-trained embeddings of the pivot features, which encourages input documents with similar pivots to have similar hidden representations.", "These embeddings are induced by word2vec (Mikolov et al., 2013) trained with unlabeled data from the source and the target domains.", "ZR17 have demonstrated the superiority of their models (especially AE-SCL-SR) over SCL (pivot-based, no AE), MSDA (AE-based, no pivots) and MSDA-DAN (AE-based with adversarial enhancement, no pivots) in 16 cross-domain sentiment classification setups, including the 12 legacy setups of Blitzer et al. (2007).", "However, as in previous pivot based methods, AE-SCL and AE-SCL-SR learn a single, structure-indifferent, feature representation of the input text.", "Our core idea is to implement a pivot-based sequential neural model that exploits the structure of its input text and whose output representations can be smoothly integrated with structure aware classifiers such as LSTM and CNN.", "Our second goal is motivated by the strong performance of LSTM and CNN in text classification tasks (Yogatama et al., 2017).", "We now introduce our PBLM model that learns representations for DA.", "As PBLM is inspired by language modeling, we assume the original feature set of the NLP task classifier consists of word unigrams and bigrams.", "This choice of features also allows us to directly compare our work to the rich literature on DA for sentiment classification, where this is the standard feature set.", "PBLM, however, is not limited to word n-gram features.", "We start with a brief description of LSTM based language modeling (LSTM-LM; Mikolov et al., 2010) and then describe how PBLM modifies that model in order to learn pivot-based representations that are aware of the structure of the input text.", "We then show how to employ these representations in structure aware text classification (with LSTM or CNN) and how to train such PBLM-LSTM and PBLM-CNN classification pipelines.", "LSTM Language Modeling LSTMs address the vanishing gradient problem commonly found in RNNs (Elman, 1990) by incorporating gating functions into their state dynamics (Hochreiter and Schmidhuber, 1997).", "At each time step, an LSTM maintains a hidden vector $h_t$, computed in a sequence of non-linear transformations of the input $x_t$ and the previous hidden states $h_1, \ldots, h_{t-1}$.",
"Given an input word, an LSTM-LM should predict the next word in the sequence.", "For a lexicon $V$, the probability of the $j$-th word is: $p(y_t = j) = \frac{e^{h_t W_j}}{\sum_{k=1}^{|V|} e^{h_t W_k}}$, where $W_i$ is a parameter vector learned by the network for each of the words in the vocabulary.", "The loss function we consider in this paper is the cross-entropy loss over these probabilities.", "Figure 2a provides an illustration of the PBLM model.", "The first (bottom) layer is an embedding layer, where a 1-hot word vector input is multiplied by a (randomly initialized) parameter matrix before being passed to the next layer.", "The second layer is an LSTM that predicts the next bigram or unigram if one of these is a pivot (if both are, it predicts the bigram).", "Otherwise its prediction is NONE.", "PBLM operates similarly to an LSTM-LM.", "The basic difference between the models is the prediction they make for a given input word ($x_t$).", "While an LSTM-LM aims to predict the next input word, PBLM predicts the next word unigram or bigram if one of these is a pivot, and NONE otherwise (see the toy sketch below).", "PBLM is very flexible.", "It can be of any order: a $k$-order PBLM predicts the longest prefix of the sequence consisting of the next $k$ words, as long as that prefix forms a pivot.", "If none of the prefixes forms a pivot, then PBLM predicts NONE.", "(A word sequence is one of its own prefixes.)", "Moreover, while PBLM is defined here over word sequences, it can be defined over other sequences, e.g., the sentence sequence of a document.", "Intuitively, in the example of fig. 2a, a second-order model is more informative for sentiment classification than a first-order model (that predicts only the next word unigram in case that word is a pivot) would be.", "Indeed, not bad conveys the relevant sentiment-related information, while bad is misleading with respect to that same sentiment.", "Notice that after the prefix very witty the model predicts great and not great story, because in this example great is a pivot while great story is not, as great story is unlikely to be frequent outside the book review domain.", "Figures 2a and 1 also demonstrate a major advantage of PBLM over models that learn a single text representation.", "From the book review example in fig. 2a, PBLM learns the connection between witty (an adjective that is often used to describe books, but not kitchen appliances) and great (a common positive adjective in both domains, and hence a pivot feature).", "Likewise, from the example of fig. 1, PBLM learns the connection between easy (an adjective that is often used to describe kitchen appliances, but not books) and great.", "That is, PBLM is able to learn the connection between witty and easy, which will facilitate adaptation between the books and kitchen appliances domains.", "Previous work that learns a single text representation, in contrast, would learn from fig. 1 a connection between easy and the three pivots: very excited, great and disappointed.", "From fig. 2a such a method would learn the connection between witty and great and not bad.", "The connection between witty and easy will be much weaker.", "Structure Aware Classification with PBLM Representations PBLM not only exploits the sequential nature of its input text, but its output vectors can feed LSTM (PBLM-LSTM, fig. 2b) and CNN (PBLM-CNN, fig. 2c) classifiers.",
"PBLM-LSTM is a three-layer model.", "The bottom two layers are the PBLM model of fig. 2a.", "When PBLM is combined with a classifier, its softmax layer (top layer of fig. 2a) is cut and only its output vectors ($h_t$) are passed to the next LSTM layer (third layer of fig. 2b).", "The final hidden vector of that layer feeds the task classifier.", "Note that since we cut the PBLM softmax layer when it is combined with the task classifier, PBLM should be trained before this combination is performed.", "Below we describe how we exploit this modularity to facilitate domain adaptation.", "In PBLM-CNN, the combination between the PBLM and the CNN is similar to fig. 2b: the PBLM's softmax layer is cut and a matrix whose columns are the $h_t$ vectors of the PBLM is passed to the CNN.", "We employ $K$ different filters of size $|h_t| \times d$, each going over the input matrix in a sliding window of $d$ consecutive hidden vectors, and generating a $1 \times (n - d + 1)$ size vector, where $n$ is the input text length.", "Max pooling is performed for each of the $K$ vectors to generate a single $1 \times K$ vector that is fed into the task classifier.", "PBLM can feed structure aware classifiers other than LSTM and CNN.", "Moreover, PBLM can also generate a single text representation as in most previous work.", "This can be done, e.g., by averaging the PBLM's hidden vectors and feeding the averaged vector into a linear non-structured classifier (e.g., logistic regression) or a feed-forward NN.", "In Sec. 5 we demonstrate that PBLM's ability to feed structure aware classifiers such as LSTM and CNN provides substantial accuracy gains.", "To the best of our knowledge, PBLM is unique in its structure aware representation: previous work generated one representation per input example.", "Domain Adaptation with PBLM Representations We focus on unsupervised DA, where the input consists of a source domain labeled set and a plentiful supply of unlabeled examples from the source and the target domains.", "Our goal is to use the unlabeled data as a bridge between the domains.", "Our fundamental idea is to decouple the PBLM training, which requires only unlabeled text, from the NLP classification task, which is supervised and for which the required labeled example set is available only for the source domain.", "We hence employ a two-step training procedure (see the pipeline sketch below).", "First, PBLM (figure 2a) is trained with unlabeled data from both the source and the target domains.", "Then the trained PBLM is combined with the classifier layers (top layer of fig. 2b, CNN layers of fig. 2c) and the final model is trained with the source domain labeled data to perform the classification task.",
"As noted above, in the second step we cut the PBLM's softmax layer; only its $h_t$ vectors are passed to the classifier.", "Moreover, during this step the parameters of the pre-trained PBLM are held fixed; only the parameters of the classifier layers are trained.", "Task and Domains Following a large body of DA work, we experiment with the task of cross-domain sentiment classification.", "To facilitate comparison with previous work, we experiment with the product review domains of Blitzer et al. (2007): Books (B), DVDs (D), Electronic items (E) and Kitchen appliances (K) (12 ordered domain pairs), replicating the experimental setup of ZR17 (including baselines, design, and hyperparameter details).", "For each domain there are 2000 labeled reviews, 1000 positive and 1000 negative, and unlabeled reviews: 6000 (B), 34741 (D), 13153 (E) and 16785 (K).", "To consider a more challenging setup, we experiment with a domain consisting of user reviews on services rather than products.", "We downloaded an airline review dataset, consisting of reviews labeled by their authors (Nguyen, 2015).", "We randomly sampled 1000 positive and 1000 negative reviews for our labeled set; the remaining 39396 reviews form our unlabeled set.", "We hence have 4 product to airline and 4 airline to product setups.", "Interestingly, in the product domains unlabeled reviews tend to be much more positive than in the airline domain.", "Particularly, in the B domain there are 6.43 positive reviews for every negative review; in D the ratio is 7.39 to 1; in E it is 3.65 to 1; and in K it is 4.61 to 1.", "In the airline domain there are only 1.15 positive reviews for every negative review.", "We hence expect DA from product to airline reviews and vice versa to be more challenging than DA from one product review domain to another.", "The URLs of the datasets and the code (previous models and standard packages) we used are in Appendix A.", "Baselines We consider the following baselines:", "(a) AE-SCL-SR (ZR17).", "We also experimented with the more basic AE-SCL but, like in ZR17, we got lower results in most cases;", "(b) SCL with pivot features selected using the mutual information criterion (SCL-MI; Blitzer et al., 2007).", "For this method we used the implementation of ZR17;", "(c) MSDA (Chen et al., 2012), with code taken from the authors' web page;", "(d) The MSDA-DAN model (Ganin et al., 2016), which employs a domain adversarial network (DAN) with the MSDA vectors as input.", "The DAN code is taken from the authors' repository;", "(e) The no domain adaptation case, where the sentiment classifier is trained in the source domain and applied to the target domain without adaptation.", "For this case we consider three classifiers: logistic regression (denoted NoSt as it is not aware of its input's structure), as well as LSTM and CNN, which provide a control for the importance of the structure aware task classifiers in PBLM models.", "To further control for this effect, we compare to the PBLM-NoSt model, where the PBLM output vectors (the $h_t$ vectors generated after each input word) are averaged and the averaged vector feeds the logistic regression classifier.", "In all the participating methods, the input features consist of word unigrams and bigrams.", "The division of the feature set into pivots and non-pivots is based on the method of ZR17, which followed the work of Blitzer et al. (2007) (details are in Appendix C).",
"The sentiment classifier employed with the SCL-MI, MSDA and AE-SCL-SR representations is the same logistic regression classifier as in the NoSt condition mentioned above.", "For these methods we concatenate the representation learned by the model with the original representation, and this representation is fed to the classifier.", "MSDA-DAN jointly learns the feature representation and performs the sentiment classification task.", "It is hence fed by a concatenation of the original and the MSDA-induced representations.", "While we have the labels for our unlabeled data, we did not use them in our research except in this analysis.", "We considered several additional baselines: (1) the variational fair autoencoder (Louizos et al., 2016), which performed substantially worse than the DA baselines (a)-(d); (2) we tried to compare to Bollegala et al. (2015) but, similarly to ZR17, failed to replicate their results; and (3) we replaced PBLM with an LSTM-LM, but the results substantially degraded.", "We do not report results for these models.", "Five Fold CV We employ a 5-fold cross-validation protocol as in (Blitzer et al., 2007; Ziser and Reichart, 2017).", "In all five folds, 1600 source domain examples are randomly selected for training data and 400 for development, such that both the training and the development sets are balanced and have the same number of positive and negative reviews.", "The results we report are the averaged performance of each model across these 5 folds.", "Hyperparameter Tuning For all previous models, we follow the tuning process described in ZR17 (paper and appendices).", "Hyperparameter tuning for the PBLM models and the non-adapted CNN and LSTM is described in Appendix B.", "Overall Results Table 1 presents our results.", "PBLM models with structure aware classifiers (PBLM-LSTM and PBLM-CNN, henceforth denoted together as S-PBLM) outperform all other alternatives in all 20 setups and three averaged evaluations (All columns in the tables).", "The gaps are quite substantial: the averaged accuracies of PBLM-LSTM and PBLM-CNN compared to the best baseline, AE-SCL-SR, are 79.6% and 80.4% vs. 78.1% for the product review setups, 85% and 84% vs. 78.7% for the product to airline (service) review setups, and 76.1% and 76.2% vs. 68.1% for the airline to product review setups.", "S-PBLM performance in the more challenging product to airline and airline to product setups is particularly impressive.", "The challenging nature of these setups stems from the presumably larger differences between product and service reviews and from the different distribution of positive and negative reviews in the unlabeled data of both domains (Sec. 4).",
4).", "These differences are reflected by the lower performance of the non-adapted classifiers: an averaged accuracy of 70.6%-73.1% across product domain pairs (three lower lines of the All column of the top table), compared to an average of 67.3%-69.9% across product to airline setups and an average of 61.3%-62.4% across airline to product setups.", "Moreover, while the best previous method (AE-SCL-SR) achieves an averaged accuracy of 78.1% for product domains and an averaged accuracy of 78.7% when adapting from product to airline reviews, when adapting from airline to product reviews its averaged accuracy drops to 68.1%.", "The S-PBLM models do consistently better in all three setups, with an 0.96 0.98 1 1.02 1.04 1.06 0.55 0.6 0.65 0.7 0.75 0.8 0.85 1 2 3 4 5 6 7 8 9 10 L o ss A cc u r a c y Epoch B->K PBLM-CNN sentiment accuracy PBLM loss 0.78 0.8 0.82 0.84 0.86 0.88 0.9 0.6 0.65 0.7 0.75 0.8 0.85 1 2 3 4 5 6 7 8 9 10 L o ss A cc u r a c y Epoch D->E PBLM-LSTM sentiment accuracy PBLM loss 0.7 0.71 0.72 0.73 0.74 0.75 0.76 0.77 0.78 0.7 0.75 0.8 0.85 0.9 1 2 3 4 5 6 7 8 9 10 L o ss A cc u r a c y Epoch E->APBLM-LSTM sentiment accuracy PBLM loss 1.85 1.9 1.95 2 2.05 2.1 0.6 0.65 0.7 0.75 0.8 0.85 1 2 3 4 5 6 7 8 9 10 L o ss A cc u r a c y Epoch A->E PBLM-CNN sentiment accuracy PBLM loss Figure 3: PBLM loss (solid, red line) vs. sentiment accuracy (dashed, blue line) of PBLM-CNN (top) and PBLM-LSTM (bottom) in four representative setups.", "averaged accuracy of 80.4%, 85% and 76.2% of the best S-PBLM model, respectively.", "Analysis of S-PBLM Strength The results shed light on the sources of the S-PBLM models success.", "The accuracy of these models, PBLM-LSTM and PBLM-CNN, is quite similar across setups: their accuracy gap is up to 3.1% in all 20 setups and up to 1% in the three averages ( All columns).", "However, the S-PBLM models substantially outperform PBLM-NoSt that employs a structure-indifferent classifier.", "The averaged gaps are 5.6% (80.4% vs. 74.8%) in the product to product setups, 11.1% in the product to airline setups (85% vs. 73.9%) and 10.9% in the airline to product setups (76.2% vs. 
65.3%).", "Hence, we can safely conclude that while the integration of PBLM with a structured task classifier has a dramatic impact on cross-domain accuracy, it is less important if that classifier is an LSTM or a CNN.", "Comparison with non-adapted models reveals that structure aware modeling, as provided by LSTM and CNN, is not sufficient for high performance.", "Indeed, non-adapted LSTM and CNN do substantially worse than S-PBLM in all setups.", "Finally, comparison with AE-SCL-SR demonstrates that while the integration of pivot based learning with NNs leads to stronger results than in any other previous work, the structure awareness of the S-PBLM models substantially improves accuracy.", "Figure 3 further demonstrates the adequacy of the PBLM architecture for domain adaptation.", "The graphs demonstrate, for both S-PBLM models, a strong correlation between the PBLM cross-entropy loss values and the sentiment accuracy of the resulting PBLM-LSTM and PBLM-CNN models.", "We show these patterns for two product domain setups and two setups that involve a product domain and the airline domain the patterns for the other setups of table 1 are very similar.", "This analysis highlights our major contribution.", "We have demonstrated that it is the combination of four components that makes DA for sentiment classification very effective:", "(a) Neural network modeling;", "(b) Pivot based modeling;", "(c) Structure awareness of the pivot-based model; and", "(d) Structure awareness of the task classifier.", "We addressed the task of DA in NLP and presented PBLM: a representation learning model that combines pivot-based ideas and NN modeling, in a structure aware manner.", "Unlike previous work, PBLM exploits the structure of its input, and its output consists of a vector per input word.", "PBLM-LSTM and PBLM-CNN substantially outperform strong previous models in traditional and newly presented sentiment classification DA setups.", "In future we intend to extend PBLM so that it could deal with NLP tasks that require the prediction of a linguistic structure.", "For example, we believe that PBLM can be smoothly integrated with recent LSTM-based parsers (e.g. (Dyer et al., 2015; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017)).", "We also intend to extend the reach of our approach to cross-lingual setups." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "objective" ]
[ "Modern deep learning models for NLP are notoriously opaque.", "This has motivated the development of methods for interpreting such models, e.g., via gradient-based saliency maps or the visualization of attention weights.", "Such approaches aim to provide explanations for a particular model prediction by highlighting important words in the corresponding input text.", "While this might be useful for tasks where decisions are explicitly influenced by individual tokens in the input, we suspect that such highlighting is not always suitable for tasks where model decisions should be driven by more complex reasoning.", "In this work, we investigate the use of influence functions for NLP, providing an alternative approach to interpreting neural text classifiers.", "Influence functions explain the decisions of a model by identifying influential training examples.", "Despite the promise of this approach, influence functions have not yet been extensively evaluated in the context of NLP, a gap addressed by this work.", "We conduct a comparison between influence functions and common word-saliency methods on representative tasks.", "As suspected, we find that influence functions are particularly useful for natural language inference, a task in which saliency maps' may not provide clear interpretation.", "Furthermore, we develop a new quantitative measure based on influence functions that can reveal artifacts in training data.", "1 1 Introduction Deep learning models have become increasingly complex, and unfortunately their inscrutability has grown in tandem with their predictive power (Doshi-Velez and Kim, 2017).", "This has motivated efforts to design example-specific approaches to interpreting black box NLP model predictions, i.e., 1 Code is available at https://github.com/ xhan77/influence-function-analysis .", "indicating specific input tokens as being particularly influential for a given prediction.", "This in turn facilitates the construction of saliency maps over texts, in which words are highlighted with intensity proportional to continuous importance' scores.", "Prominent examples of the latter include gradient-based attribution (Simonyan et al., 2014; Sundararajan et al., 2017; Smilkov et al., 2017), LIME (Ribeiro et al., 2016), and attention-based (Xu et al., 2015) heatmaps.", "While widely used and potentially useful for some lexicon-driven tasks (e.g., sentiment analy-sis), we argue that by virtue of being constrained to highlighting individual input tokens, saliency maps will necessarily fail to explain predictions in more complex semantic tasks involving reasoning, such as natural language inference (NLI), where fine-grained interactions between multiple words or spans are key (Camburu et al., 2018).", "Moreover, saliency maps are inherently limited as a model debugging tool; they may tell us which inputs the model found to be important, but not why .", "To address these shortcomings, we investigate the use of what Lipton (2018) referred to as explanation by example .", "Instead of constructing importance scores over the input texts on which the model makes predictions, such methods rank training examples by their influence on the model's prediction for the test input (Caruana et al., 1999; Koh and Liang, 2017; Card et al., 2019).", "Specifically, we are interested in the use of influence functions (Koh and Liang, 2017), which are in a sense inherently faithful' in that they reveal the training examples most responsible for particular predictions.", "These do not require any 
modifications to the model structure.", "This paper presents a series of experiments intended to evaluate the potential utility of influence functions for better understanding modern neural NLP models.", "In this context, our contributions include answering the following research questions.", "Figure 1: A sentiment analysis example interpreted by gradient-based saliency maps (left) and influence functions (right).", "RQ 1 We empirically assess whether the approximation to the influence functions (Koh and Liang, 2017) can be reliably used to interpret decisions of deep transformer-based models such as BERT (Devlin et al., 2019).", "RQ 2 We investigate the degree to which results from the influence function are consistent with insights gleaned from gradient-based saliency scores for representative NLP tasks.", "RQ 3 We explore the application of influence functions as a mechanism to reveal artifacts (or confounds) in training data that might be exploited by models.", "To the best of our knowledge, this is the first work in NLP to compare interpretation methods that construct saliency maps over inputs with methods that explain predictions via influential training examples.", "We also propose a new quantitative measurement for the effect of hypothesized artifacts (Gururangan et al., 2018; McCoy et al., 2019) on the model's prediction using influence functions.", "Machine learning models in NLP depend on two factors when making predictions: the input text and the model parameters.", "Prior attempts to interpret opaque NLP models have typically focused on the input text.", "Our work investigates the complementary approach of interpreting predictions by analyzing the influence of examples in training data.", "Saliency maps aim to provide interpretability by highlighting parts of the input text, whereas influence functions seek clues in the model parameters, eventually locating interpretations within the training examples that influenced these estimates.", "In this section we explain the two interpretation methods in detail.2", "2.1 Gradient-based saliency maps As a standard, illustrative 'explanation-by-input-features' method, we focus on gradient-based saliency maps, in which the gradient of the loss $L$ is computed with respect to each token $t$ in the input text, and the magnitude of the gradient serves as a feature importance score (Simonyan et al., 2014; Li et al., 2016a).", "Gradients have the advantage of being 'locally faithful' by construction: they tell us how much the loss would change, were we to perturb a token by a small amount.", "Gradient-based attributions are also agnostic with respect to the model, as long as it is differentiable with respect to inputs.", "Finally, calculating gradients is computationally efficient, especially compared to methods that require post-hoc input perturbation and function fitting, like LIME (Ribeiro et al., 2016).", "We are interested in why the model made a particular prediction.", "We therefore define a loss $L_{\hat{y}}$ with respect to the prediction $\hat{y}_i$ that the model actually made, rather than the ground truth $y_i$.", "For each token $t \in x_i$, we define a saliency score $\nabla_{e(t)} L_{\hat{y}} \cdot e(t)$, where $e(t)$ is the embedding of $t$.", "This is also referred to as the gradient $\times$ input method in Shrikumar et al. 
(2017).", "The gradi-ent e ( t ) L y captures the sensitivity of the loss to the change in the input embedding, and the input 2 Here we focus on interpretability approaches which are faithful (Wiegreffe and Pinter, 2019; Jacovi and Goldberg, 2020; Jain et al., 2020) by construction; other approaches are discussed in", "6. e ( t ) leverages the sign and magnitude of the input.", "The final saliency score of each token t would be L1-normalized across all tokens in x i .", "Unlike Simonyan et al. (2014) and Li et al. (2016a), when scoring features for importance, we do not take the absolute value of the saliency score, as this encodes whether a token is positively influencing the prediction (i.e., providing support the prediction) or negatively influencing the prediction (highlighting counter-evidence).", "We show an example in the left part of Figure", "1. 2.2 Influence functions In contrast to explanations in the form of token-level heatmaps, the influence function provides a method for tracing model predictions back to training examples.", "It first approximates how upweight-ing a particular training example ( x i , y i ) in the training set { ( x 1 , y 1 ) , . . . , ( x n , y n ) } by (cid:15) i would change the learned model parameters : d d(cid:15) i = ( 1 n n (cid:88) j =1 2 L ( x j , y j , )) 1 L ( x i , y i , ) We can then use the chain rule to measure how this change in the model parameters would in turn affect the loss of the test input (as in saliency maps, w.r.t. the model prediction): d L y d(cid:15) i = L y d d(cid:15) i More details (including proofs) can be found in Koh and Liang (2017).", "We define the influence score for each training example ( x i , y i ) as d L y d(cid:15) i , and then z -normalize it across all examples in the training set.", "Note that since L y is defined with respect to a particular test input, influence scores of training examples are also defined for individual test instances.", "Intuitively, a positive influence score for a training example means: were we to remove this example from the train set, we would expect a drop in the model's confidence when making the prediction on the test input.", "A negative influence score means that removing the training example would increase the model's confidence in this prediction.", "We show an example in the right part of Figure", "1. 
3 Experimental Setup We are interested in analyzing and comparing the two interpretation approaches (gradient-based attributions and influence functions) on relatively shallow, lexicon-driven tasks and on more complex, reasoning-driven tasks.", "We focus on sentiment analysis and natural language inference (NLI) as illustrative examples of these properties, respectively.", "Both models are implemented on top of BERT encoders (Devlin et al., 2019).", "In particular we use BERT-Base, with the first 8 of the 12 layers frozen, only fine-tuning the last 4 transformer layers and the final projection layer.3", "It is worth noting that influence functions are guaranteed to be accurate only when the model is strictly convex (i.e., its Hessian is positive definite and thus invertible) and is trained to convergence.", "However, deep neural models like BERT are not convex, and one often performs early stopping during training.", "We refer to Koh and Liang (2017) for details on how influence functions can nonetheless provide good approximations.", "To summarize briefly: for the non-convexity issue, we add an appropriate 'damping' term to the model's Hessian so that it is positive definite and invertible.", "Concerning non-convergence: the approximated influence may still be interpretable as the true influence of each training example plus a constant offset that does not depend on the individual examples.", "Aside from this theory, we also perform a sanity check in Section 4 to show that influence functions can be applied to BERT in practice on the two tasks that we consider.", "Sentiment analysis We use a binarized version of the Stanford Sentiment Treebank (SST-2) (Socher et al., 2013).", "Our BERT-based model is trained on 10k examples; this achieves 89.6% accuracy on the SST-2 dev set of 872 examples.", "We randomly sample 50 examples from the SST-2 dev set as the set for which we extract explanations for model predictions.", "Natural language inference Our deeper 'semantic' task is NLI, a classification problem that concerns the relationship between a premise sentence and a hypothesis sentence.", "NLI is a ternary task with three types of premise-hypothesis relations: entailment, neutral, and contradiction.", "We train our BERT model on the Multi-Genre NLI (MNLI) dataset (Williams et al., 2018), which contains 393k 
premise and hypothesis pairs of three relations from 10 different genres.", "3 We used smaller BERT models because influence functions are notoriously expensive to compute.", "We also resort to the same stochastic estimation method, LiSSA (Agarwal et al., 2017), as in Koh and Liang (2017), and we deliberately reduce the size of our training sets.", "Even with these efforts, computing the influence scores of 10k training examples w.r.t. one typical test input would take approximately 10 minutes on one NVIDIA GeForce RTX 2080 Ti GPU.", "We collapse the neutral and contradiction labels to a single non-entailment label and only use 10k randomly sampled examples for training.", "On the MNLI dev set of 9815 examples, the model achieves an accuracy of 84.6%.", "To evaluate model interpretations in a controlled manner, we adopt a diagnostic dataset, HANS (McCoy et al., 2019).", "This contains a balanced number of examples where hypotheses may or may not entail premises with certain artifacts that they call 'heuristics' (e.g., lexical overlap, subsequence).", "The original HANS dataset contains 30k examples that span 30 different heuristic sub-categories.", "We test our model and interpretation methods on 30 examples covering all the sub-categories.", "RQ 1. Are influence functions reliable when used for deep architectures in NLP?", "Influence functions are designed to be an approximation to leave-one-out training for each training example.", "But the theory only proves that this works on strictly convex models.", "While Koh and Liang (2017) show that influence functions can be a good approximation even when the convexity assumption is not satisfied (in their case, a CNN for image classification), it is still not obvious that the influence function would work for BERT.", "Therefore, we conduct a sanity check: for each instance in our test set, we by turns remove the most positively influential 10%, the most negatively influential 10%, the least influential (where influence scores are near zero) 10%, and a random 10% of training examples.", "We are interested in how these removals in retraining would affect the confidence of model predictions.", "Table 1 and Table 2 show the results of experiments on sentiment analysis and NLI, repeated with 5 random initialization seeds.", "The results match our expectation in both tasks: removing the most positively influential training examples would cause the model to have a significantly lower prediction confidence for each test example; removing the most negatively influential examples makes the model slightly more confident during prediction; and removing the least influential examples leads to an effect that is closest to removing the same amount of random examples (although we note that deleting the least influential examples still yields a larger change than removing examples at random in NLI).", "We therefore conclude that the influence function behaves reasonably and reliably for BERT in both sentiment analysis and NLI tasks.", "RQ 2. Are gradient-based saliency maps and 'influential' examples compatible?", "Comparing saliency maps and outputs from application of the influence function is not straightforward.", "Saliency maps communicate the importance of individual tokens in test instances, while influence functions measure the importance of training examples.", "Still, it is reasonable to ask if they seem to tell similar stories regarding specific predictions.", "We propose two experiments that aim to estimate the consistency between these two interpretation methods.", "The first experiment addresses whether a token with high saliency also appears more frequently in the training examples that have relatively high influence.", "For each example in the test set, we find the tokens with the most positive, most negative, and median saliency scores.", "We then find all the influential training examples w.r.t. 
the test inputs that contain one of these tokens.", "These training examples could have any labels in the label set.", "We further only consider examples whose label is the same as the test prediction, because the token saliency scores, whether positive or negative, are defined directly w.r.t. the test prediction, and the effect of a token in an oppositely labeled training example is therefore indirect.", "We compute the average influence score of these training examples and report the results on top 10%, 20%, 50%, and all training examples for both sentiment analysis and NLI tasks in Figure 2 and Figure 3 respectively.", "The reason we have results at different granularity is that, from empirical results in Koh and Liang (2017), we see that the influence function approximations tend to be less accurate as we move down the spectrum from the most influential to the less influential examples.", "In the task of sentiment analysis, we observe that training examples containing the most positively salient token in the test example generally have a higher influence on the test prediction.", "However, we do not see this trend (in fact, it is the opposite) in the task of natural language inference.", "The second experiment answers the question of whether the influence result would change significantly when a salient token is removed from the input.", "Table 3: Average overlap rate of top influential sentiment analysis training examples before and after removal of a token with the most positive, most negative, or median saliency. Saliency of the removed token | @0.1% | @0.2% | @0.5% | @1%: Most negative 75.6%, 77.4%, 80.0%, 82.4%; Median 84.2%, 86.7%, 88.9%, 89.1%; Most positive 65.2%, 68.8%, 71.4%, 72.0%.", "Again, for each of the test examples, we identify the tokens with the most positive, most negative, and median saliency score.", "We by turns remove them from the input and compute the influence distribution over all training examples.", "We compare these new influence results with the one on the original input, and report an overlap rate of the top 0.1%, 0.2%, 0.5%, and 1% influential training examples before and after the token removal.", "Table 3 and Table 4 show results for sentiment analysis and NLI, respectively.", "When removing a token with the most positive saliency score, we expect the model to be less confident about its current prediction; it could possibly make a different prediction.", "Therefore, we expect the influence distribution to differ most from the original influence result, compared to removing the token with the median or the most negative saliency score.", "This is exactly what we observe in Table 3 for sentiment analysis.", "However, for NLI, we again see a rather opposite trend: removing the most negatively salient token (which might make the prediction more confident but should not change the prediction itself) leads to the most different influence distribution.", "We conclude from the above two experiments that gradient-based saliency maps and influential examples are compatible and consistent with each other in sentiment analysis.", "However, for NLI the two approaches do not agree with each other and could potentially tell very different stories.", "To this end, we take a closer look at the task of NLI.", "Are saliency-based explanations useful for NLI?", "Gradient-based saliency maps are faithful by construction, but this does not mean that they will highlight input tokens that humans find plausible or useful.", "We hypothesize that highlighting individual input tokens as important is likely 
most useful for 'shallow' classification tasks like sentiment analysis, and less so for more complex reasoning tasks such as NLI.", "To contrast the types of explanations these methods offer in this context, we show explanations for a prediction made for a typical example in HANS in the form of a saliency map and influential examples in Table 5.", "The tokens that get the most positive and most negative saliency scores are marked in cyan and red, respectively.", "The training examples with the most positive and most negative influence scores are presented as supporting and opposing instances, respectively.", "In NLI, the prediction is often made through an interaction between multiple words or spans.", "Therefore, an importance measure on each individual token might not give us much useful insight into model prediction.", "Though influence functions also do not explicitly tell us which latent interactions between words or spans informed the model prediction, we can test whether the model is relying on some hypothesized artifacts in a post-hoc way by looking at patterns in the influential training examples.", "In Table 5, though the most influential examples (both supporting and opposing) are ostensibly far from the test input, they all exhibit lexical overlap between the premise and hypothesis.", "Some of the influential training examples (e.g., the 4th supporting example and 2nd opposing example) capture a reverse ordering of spans in the premise and hypothesis.", "We note that our test input also has a high lexical overlap and similar reverse ordering.", "This exposes a problem: the model might be relying on the wrong artifacts, like word overlap, during the decision process, rather than learning the relationship between the active and passive voice in our case.", "This problem was surfaced by finding influential examples.", "McCoy et al. (2019) hypothesize that the main artifact NLI models might learn is lexical overlap.", "In fact, for all of the examples in HANS, every word in the hypothesis would appear in the corresponding premise (100% lexical overlap rate).", "Half of the examples would have an entailment relationship while the other half have a non-entailment relationship.", "McCoy et al. 
(2019) compare four models with strong performance in MNLI, and all of them predict far more entailments than non-entailments.", "Because of this imbalance in prediction, they conclude that the models are perhaps exploiting artifacts in data when making decisions.", "We see one potential problem with the above method: it can only be applied to a certain group of examples, and it implies a general model behavior by examining the prediction imbalance.", "However, model behavior should depend on the actual example it sees each time.", "The extent to which the model exploits the artifact in each individual example remains unclear.", "To analyze the effect of artifacts on individual examples, we propose a method using influence functions.", "We hypothesize that if an artifact informs the model's predictions for a test instance, the most influential training examples for this test example should contain occurrences of said artifact.", "For instance, if our model exploits 'lexical overlap' when predicting the relation between a premise and a hypothesis, we should expect the most influential training examples found by the influence function to have a highly overlapping premise and hypothesis.", "In Figure 4a, we plot each training example's influence score and lexical overlap rate between its premise and hypothesis for a typical example in the HANS dataset.", "In line with our expectation, the most influential (both positively and negatively) training examples tend to have a higher lexical overlap rate.", "Note that we also expect this trend for the most negatively influential examples, because they influence the model's prediction as much as the positively influential examples do, only in a different direction.", "To quantify this bi-polarizing effect, we find it natural to fit a quadratic regression to the influence-artifact distribution.", "We would expect a high positive quadratic coefficient if the artifact feature appears more in the most influential examples.", "For an irrelevant feature, we would expect this coefficient to be zero.", "With this new quantitative measure, we are ready to explore the problems below, unanswered by the original diagnostic dataset.", "What about the examples that the model predicted as non-entailment despite the lexical overlap feature?", "Was the artifact not exploited in these cases?", "Figure 4a and Figure 4b show two examples in HANS, one predicted as entailment and the other predicted as non-entailment.", "We observe that the example predicted as non-entailment does not have a significantly different influence-artifact pattern from the entailment example.", "In fact, the average quadratic coefficients for all examples predicted as entailment and non-entailment are $+3.28 \times 10^{-3}$ and $+3.30 \times 10^{-3}$, respectively.", "Therefore, for predicted non-entailment examples, we still see that the most influential training examples tend to have a high rate of lexical overlap, indicating that the model still recognizes the artifact in these cases.", "The model relies on training examples with high lexical overlap when predicting in the artificial HANS dataset.", "Would it still exploit the same artifact for natural examples?", "Apart from finding the most influential training examples for each HANS example, we also apply influence functions to 50 natural MNLI examples, not controlled to exhibit any specific artifacts.", "A typical example is shown in Figure 4c.", "The average quadratic coefficient over all 50 natural examples is $+0.65 \times 10^{-3}$, which is considerably smaller than the above cases in the HANS dataset.", "The model therefore does not rely 
on as much lexical overlap in natural examples as in the diagnostic dataset.", "We have been analyzing scenarios focusing on one data artifact.", "What if a second artifact is present during prediction, possibly indicating a contradicting decision?", "How will the model recognize the two artifacts in such a scenario?", "We know that lexical overlap could be a data artifact exploited by NLI models for making an entailment prediction in HANS.", "On the other hand, as briefly pointed out by McCoy et al. (2019), other artifacts like negation might be indicative of non-entailment.", "We are interested in how two contradicting artifacts might compete when they both appear in an example.", "We take all examples in HANS labeled as entailment and manually negate the hypothesis so that the relationship becomes non-entailment.", "For example, a hypothesis 'the lawyers saw the professor' would become 'the lawyers did not see the professor'.", "Figure 5a and Figure 5b show the influence-artifact distributions on both lexical overlap and negation for an original HANS example.", "Figure 5c and Figure 5d show the distributions for the same HANS example with a negated hypothesis.", "The average quadratic coefficients on all examples are shown in Table 6.", "We observe that in the original HANS example, negation is actually a negative artifact: the training examples with negation tend to be the least influential ones.", "In the negated HANS example, we see that the effect of negation becomes positive, while the effect of lexical overlap is drastically weakened.", "This confirms that the model recognizes the new set of artifacts, and the two are competing with each other.", "Importantly, observing an artifact in the most influential training examples is a necessary but not sufficient condition for concluding that it was truly exploited by the model.", "However, it can serve as a first step towards identifying artifacts in black-box neural models and may be complemented by probing a larger set of hypothesized artifacts.", "Figure 4(a): HANS example predicted as entailment.", "(P: The athlete by the doctors encouraged the senator. H: The athlete encouraged the senator.)", "Quadratic coefficient: $+3.74 \times 10^{-3}$.", "Figure 4(b): HANS example predicted as non-entailment.", "(P: Since the author introduced the actors, the senators called the tourists. H: The senators called the tourists.)", "Quadratic coef: $+3.59 \times 10^{-3}$.", "Figure 4(c): A typical MNLI example.", "(P: And uh as a matter of fact he's a draft dodger. H: They dodged the draft, I'll have you know.)", "Quadratic coefficient: $+0.74 \times 10^{-3}$.", "Figure 5(a): Lexical overlap in the original HANS example.", "(P: The lawyers saw the professor behind the bankers. 
H: The lawyers saw / did not see the professor.)", "Table 6 (average quadratic coefficients): Lexical overlap coef | Negation coef; Original: +3.", "Interpreting NLP model predictions by constructing importance scores over the input tokens is a widely adopted approach (Belinkov and Glass, 2019).", "Since the appearance and rise of attention-based models, much work naturally inspects attention scores and interprets with them.", "However, we are aware of the recent discussion over whether attention is a kind of faithful explanation (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019).", "Using vanilla attention as interpretation could be more problematic in the now ubiquitous deep transformer-based models, such as the ones we use here.", "Gradient-based saliency maps are 'locally faithful' by construction.", "Other than the vanilla gradients (Simonyan et al., 2014) and the gradient $\times$ input method (Shrikumar et al., 2017) we use in this work, there are some variants that aim to make gradient-based attributions robust to potential noise in the input (Sundararajan et al., 2017; Smilkov et al., 2017).", "We also note that Feng et al. (2018) find that gradient-based methods sometimes yield counter-intuitive results when iterative input reductions are performed.", "Other token-level interpretations include input perturbation (Li et al., 2016b), which measures a token's importance by the effect of removing it, and LIME (Ribeiro et al., 2016), which can explain any model's decision by fitting a sparse linear model to the local region of the input example.", "The main focus of this work is the applicability of influence functions (Koh and Liang, 2017) as an interpretation method in NLP tasks, and the possibility of using them to surface annotation artifacts.", "Other methods that can trace the model's decision back into the training examples include deep weighted averaging classifiers (Card et al., 2019), which make decisions based on the labels of training examples that are most similar to the test input by some distance metrics.", "Croce et al. (2019) use kernel-based deep architectures that project test inputs to a space determined by a group of sampled training examples and make explanations through the most activated training instances.", "While these methods can similarly identify the 'influential' training examples, they require special designs or modifications to the model and could sacrifice the model's performance and generalizability.", "Other general methods for model interpretability include adversarial-attack approaches that identify parts of input texts that can lead to drastically different model decisions when minimally edited (Ebrahimi et al., 2018; Ribeiro et al., 2018), probing approaches that test internal representations of models for certain tasks and properties (Liu et al., 2019b; Hewitt and Liang, 2019), and generative approaches that make the model jointly extract or generate natural language explanations to support predictions (Lei et al., 2016; Camburu et al., 2018; Liu et al., 2019a; Rajani et al., 2019).", "Specific to the NLI task, Gururangan et al. (2018) recognize and define some possible artifacts within NLI annotations.", "McCoy et al. 
(2019) create a diagnostic dataset that we use in this work and suggest that the model could be exploiting some artifacts in training data based on its poor performance on the diagnostic set.", "Beyond NLI, the negative influence of artifacts in data was explored in other text classification tasks (Pryzant et al., 2018; Kumar et al., 2019; Landeiro et al., 2019), focusing on approaches to adversarial learning to demote the artifacts.", "We compared two complementary interpretation methods, gradient-based saliency maps and influence functions, in two text classification tasks: sentiment analysis and NLI.", "We first validated the reliability of influence functions when used with deep transformer-based models.", "We found that in a lexicon-driven sentiment analysis task, saliency maps and influence functions are largely consistent with each other.", "They are not consistent, however, on the task of NLI.", "We posit that influence functions may be a more suitable approach to interpreting models for such relatively complex natural language understanding tasks (while simpler attribution methods like gradients may be sufficient for tasks like sentiment analysis).", "We introduced a new potential use of influence functions: revealing and quantifying the effect of data artifacts on model predictions, which have been shown to be very common in NLI.", "Future work might explore how rankings induced over training instances by influence functions can be systematically analyzed in a stand-alone manner (rather than in comparison with interpretations from other methods), and how these might be used to improve model performance.", "Finally, we are interested in exploring how these types of explanations are actually interpreted by users, and whether providing them actually establishes trust in predictive systems.", "Acknowledgments We thank the anonymous ACL reviewers and members of TsvetShop at CMU for helpful discussions of this work. This material is based upon work supported by NSF grants IIS1812327 and SES1926043, and by an Amazon MLRA award. Wallace's contributions were supported by the Army Research Office (W911NF1810328). We also thank Amazon for providing GPU credits.", "References Naman Agarwal, Brian Bullins, and Elad Hazan. 2017. Second-order stochastic optimization for machine learning in linear time. Journal of Machine Learning Research (JMLR), 18:116:1-116:40. Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural language inference with natural language explanations. In Proc. NeurIPS. Dallas Card, Michael Zhang, and Noah A. Smith. 2019. Deep weighted averaging classifiers. In FAT*. Rich Caruana, Hooshang Kangarloo, John David N. Dionisio, Usha S. Sinha, and David B. Johnson. 1999. Case-based explanation of non-case-based learning methods. Proc. AMIA Symposium, pages 212-215. Danilo Croce, Daniele Rossini, and Roberto Basili. 2019. Auditing deep learning processes through kernel-based explanatory models. In Proc. EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL-HLT. Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. 
HotFlip: White-box adversarial examples for text classification. In Proc. ACL. Shi Feng, Eric Wallace, Alvin Grissom, Mohit Iyyer, Pedro Rodriguez, and Jordan L. Boyd-Graber. 2018. Pathologies of neural models make interpretation difficult. In Proc. EMNLP. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proc. NAACL-HLT. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proc. EMNLP, pages 2733-2743. Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? ArXiv, abs/2004.03685. Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proc. NAACL-HLT. Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C. Wallace. 2020. Learning to faithfully rationalize by construction. In Proc. ACL. Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proc. ICML. Sachin Kumar, Shuly Wintner, Noah A. Smith, and Yulia Tsvetkov. 2019. Topics to avoid: Demoting latent confounds in text classification. In Proc. EMNLP, pages 4151-4161. Virgile Landeiro, Tuan Tran, and Aron Culotta. 2019. Discovering and controlling for latent confounds in text classification using adversarial domain adaptation. In Proc. SIAM International Conference on Data Mining, pages 298-305. Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2016. Rationalizing neural predictions. In Proc. EMNLP. Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural models in NLP. In Proc. HLT-NAACL. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding neural networks through representation erasure. ArXiv, abs/1612.08220. Zachary Chase Lipton. 2018. The mythos of model interpretability. Commun. ACM, 61:36-43. Hui Liu, Qingyu Yin, and William Yang Wang. 2019a. Towards explainable NLP: A generative explanation framework for text classification. In Proc. ACL. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019b. Linguistic knowledge and transferability of contextual representations. In Proc. NAACL-HLT. R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proc. ACL. Reid Pryzant, Kelly Shen, Dan Jurafsky, and Stefan Wagner. 2018. Deconfounded lexicon induction for interpretable social science. In NAACL-HLT. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! Leveraging language models for commonsense reasoning. arXiv preprint arXiv:1906.02361. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. 'Why should I trust you?': Explaining the predictions of any classifier. In Proc. HLT-NAACL (System Demonstrations). Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proc. ACL. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In Proc. ICML. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop at International Conference on Learning Representations. Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viegas, and Martin Wattenberg. 2017. 
SmoothGrad: removing noise by adding noise. ArXiv, abs/1706.03825. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proc. ICML. Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP Interpret: A framework for explaining predictions of NLP models. In Proc. EMNLP (System Demonstrations), pages 7-12. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proc. EMNLP, pages 11-20. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc. NAACL-HLT. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's transformers: State-of-the-art natural language processing." ]
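As a concrete companion to the gradient-based saliency method of Section 2.1 above, here is a minimal sketch (not the authors' implementation). The names `embed` and `classify_from_embeddings` are hypothetical hooks into a differentiable classifier: the first returns the token embeddings, the second runs the remainder of the model on them.

```python
# Hedged sketch of the "gradient x input" saliency score: the gradient of
# the loss (w.r.t. the label the model actually predicted) dotted with each
# token embedding, then L1-normalized over the tokens of the input.
import torch
import torch.nn.functional as F

def gradient_x_input(embed, classify_from_embeddings, input_ids, predicted):
    emb = embed(input_ids).detach().requires_grad_(True)  # leaf, (seq_len, d)
    logits = classify_from_embeddings(emb)                # (num_classes,)
    # Loss is taken w.r.t. the predicted label, not the gold label.
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([predicted]))
    loss.backward()
    scores = (emb.grad * emb.detach()).sum(dim=-1)  # one signed score per token
    return scores / scores.abs().sum()              # L1-normalize over tokens
```

The sign is kept (no absolute value), matching the paper's choice to distinguish supporting tokens from counter-evidence.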
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "result", "objective", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "method", "objective", "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "other", "method", "abstain", "method", "abstain", "result", "method", "method", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "result", "method", "result", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "method", "result", "result", "result", "result", "other", "abstain", "method", "method", "objective", "abstain", "method", "result", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "method", "objective", "result", "abstain", "other" ]
[ "This paper proposes a simple and effective approach to address the problem of posterior collapse in conditional variational autoencoders (CVAEs).", "It thus improves performance of machine translation models that use noisy or monolingual data, as well as in conventional settings.", "Extending Transformer and conditional VAEs, our proposed latent variable model measurably prevents posterior collapse by (1) using a modified evidence lower bound (ELBO) objective which promotes mutual information between the latent variable and the target, and (2) guiding the latent variable with an auxiliary bag-of-words prediction task.", "As a result, the proposed model yields improved translation quality compared to existing variational NMT models on WMT Ro En and De En.", "With latent variables being effectively utilized, our model demonstrates improved robustness over non-latent Transformer in handling uncertainty: exploiting noisy source-side monolingual data (up to +3.2 BLEU), and training with weakly aligned web-mined parallel data (up to +4.7 BLEU).", "The conditional variational autoencoder (CVAE; Sohn et al., 2015) is a conditional generative model for structured prediction tasks like machine translation.", "This model, learned by variational Bayesian methods (Kingma and Welling, 2014), can capture global signal about the target in its latent variables.", "Unfortunately, variational inference for text generation often yields models that ignore their latent variables (Bowman et al., 2016), a phenomenon called posterior collapse .", "In this paper, we introduce a new loss function for CVAEs that counteracts posterior collapse, motivated by our analysis of CVAE's evidence lower bound objective (ELBO).", "Our analysis ( 2) reveals that optimizing ELBO's second term not only brings the variational posterior approximation closer to the prior, but also decreases mutual information between latent variables and observed data.", "Based on this insight, we modify CVAE's ELBO in two ways ( 3): (1) We explicitly add a principled mutual information term back into the training objective, and (2) we use a factorized decoder (Chen et al., 2017), which also predicts the target bag-of-words as an auxiliary decoding distribution to regularize our latent variables.", "Our objective is effective even without KullbackLeibler term (KL) annealing (Bowman et al., 2016), a strategy for iteratively altering ELBO over the course of training to avoid posterior collapse.", "In applying our method to neural machine translation (NMT; Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014), we find that we have measurably mitigated posterior collapse.", "The latent variables are not ignored, even in the presence of a powerful Transformer decoder.", "By addressing this problem, the resulting NMT model has improved robustness and performance in low-resource scenarios.", "Noisy data like those scraped from the Internet (Smith et al., 2013; Michel and Neubig, 2018) present a challenge for NMT (Khayrallah and Koehn, 2018; Ott et al., 2018a); we are measurably more able to model this extrinsic uncertainty than the (non-latent) Transformer (Vaswani et al., 2017) or existing variational NMT with the CVAE architecture (Zhang et al., 2016).", "Finally, we extend the model to semi-supervised learning (Cheng et al., 2016) to more effectively learn from monolingual data.", "In summary, our conditional text generation model overcomes posterior collapse by promoting mutual information.", "It can easily and successfully integrate noisy and 
monolingual data, and it does this without the cost of a lower BLEU score than non-latent NMT in typical settings.", "Here we review the standard framework for neural MT. Next, we connect this to the conditional variational autoencoder, a model with latent random variables whose distributions are learned by black-box variational Bayesian inference.", "Finally, we analyze the CVAE's objective to explain why these models will ignore their latent variables (posterior collapse).", "Problem instances in machine translation are pairs of sequences $(x \triangleq [x_1, \ldots, x_m], y \triangleq [y_1, \ldots, y_n])$, where $x$ and $y$ represent the source and target sentences, respectively.", "Conventionally, a neural machine translation model is a parameterized conditional distribution whose likelihood factors in an autoregressive fashion: $p_\theta(y \mid x) = \prod_{t=1}^{n} p_\theta(y_t \mid x, y_{<t})$. (1)", "The dominant translation paradigm first represents the source sentence as a sequence of contextualized vectors (using the encoder), then decodes this representation into a target hypothesis according to Equation 1.", "The parameters are learned by optimizing the log-likelihood of training pairs with stochastic gradient methods (Bottou and Cun, 2004; Kingma and Ba, 2015).", "Decoding is deterministic, using an efficient approximate search like beam search (Tillmann and Ney, 2003).", "The Transformer architecture with multi-head attention has become the state of the art for NMT (Vaswani et al., 2017).", "Our NMT approach extends the conditional variational autoencoder (Sohn et al., 2015), which we identify as a generalization of Variational NMT (Zhang et al., 2016).", "It introduces a latent random variable $z$ into the standard NMT conditional distribution from Equation 1: $p(y \mid x) = \int_z \underbrace{p(y \mid z, x)}_{\text{decoder}}\, \underbrace{p(z \mid x)}_{\text{encoder}}\, dz$. (2)", "For a given source sentence $x$, first a latent variable $z$ is sampled from the encoder, then the target sentence $y$ is generated by the decoder: $z \sim p(z \mid x)$, $y \sim p(y \mid z, x)$.3", "It is intractable to marginalize Equation 2 over $z$.", "Instead, the CVAE training objective is a variational lower bound (the ELBO) of the conditional log-likelihood.", "It relies on a parametric approximation of the model posterior: $q_\phi(z \mid x, y)$.", "The variational family we choose for $q_\phi$ is a neural network whose parameters are shared (i.e., amortized) across the dataset.", "The ELBO lower-bounds the log-likelihood, as can be proven with Jensen's inequality.", "Its form is: $\mathcal{L}_{\text{CVAE}} = \mathbb{E}_{q_\phi(z \mid x, y)}[\log p_\theta(y \mid x, z)] - D_{\text{KL}}(q_\phi(z \mid x, y) \,\|\, p_\theta(z \mid x))$, (3) where $D_{\text{KL}}$ represents the Kullback-Leibler divergence between two distributions.", "We use amortized variational inference to simultaneously perform learning and approximate posterior inference, updating both $\theta$ and $\phi$ with stochastic gradient methods.", "Improving $\theta$ raises the lower bound, and improving $\phi$ keeps the bound tight with respect to the model conditional log-likelihood.", "The same argument pertains to the joint maximization interpretation of the expectation-maximization (EM) algorithm (Neal and Hinton, 1998).", "(Our optimization is a variational generalization of EM.) 
2.3 Posterior Collapse Despite their success when applied to computer vision tasks, variational autoencoders in natural language generation suffer from posterior collapse, where the learnt latent code is ignored by a strong autoregressive decoder.", "This presents a challenge to conditional language generation tasks in NLP like machine translation.", "The phenomenon can be explained mathematically by an analysis of the ELBO objective, as well as from the perspective of a powerful decoder that can model the true distribution without needing the latent code.", "We consider both in this subsection.", "ELBO surgery Recall that the computed objective approximates the objective on the true data distribution $p_D$, using a finite number of samples: $\mathcal{L} = \mathbb{E}_{p_D(x, y)}[\mathcal{L}_{\text{CVAE}}(\theta, \phi; x, y)]$. (4)", "3 The sense of encoder in the context of variational autoencoders differs from the typical sense in neural machine translation, such that the NMT encoder is a component of both the VAE's encoder and decoder.", "We can separate these by computing a second, deterministic latent variable $h$ from $x$ to represent the NMT encoder outputs, used by both the VAE encoder and the NMT/VAE decoder.", "We can factor the KL term of Equation 3 (omitting parameter subscripts) as: $\mathbb{E}_{p_D(x, y)}[D_{\text{KL}}(q(z \mid x, y) \,\|\, p(z \mid x))] = \underbrace{\mathrm{H}(x, y) - \mathrm{H}(x, y \mid z)}_{\triangleq\, I_q(z; x, y)} + \underbrace{\mathbb{E}_{q(z)} \log \frac{q(z)}{p(z)}}_{\triangleq\, D_{\text{KL}}(q(z) \,\|\, p(z))}$, (5) which we prove in Appendix A, following Hoffman and Johnson (2016).", "As both the resulting mutual information and KL terms are non-negative (Cover and Thomas, 2006), the global minimum of Equation 5 is $I_q(z; x, y) = D_{\text{KL}}(q(z) \,\|\, p(z)) = 0$.", "Unfortunately, at this point, the consequence of the optimization is that the latent variable $z$ is conditionally independent of the data $(x, y)$.", "A powerful decoder Revisiting Equation 3, we see that the decoder is conditioned on both the stochastic latent variable $z$ and the source text $x$.", "A sufficiently high-capacity autoregressive decoder can model the conditional density directly, ignoring the latent variable and reducing inference to Equation 1.", "The KL term can then be reduced to its minimum (0) by equating the posterior to the prior.", "To prevent this, some work weakens the decoder in various ways.", "This is a challenge, because NMT requires a powerful decoder such as Transformer with direct attention to the encoder.", "We modify our training objective to explicitly retain mutual information between the latent variable $z$ and the observation $(x, y)$.", "Further, we use an auxiliary decoder that only uses the latent variable, not the encoder states.", "We combine it with the existing decoder as a mixture of softmaxes (Yang et al., 2018a).", "The model is trained with amortized variational inference.", "When source-language monolingual text is available, we augment our modified CVAE objective with a similarly modified (non-conditional) VAE objective.", "The training and inference strategy is summarized in Figure 1.", "
"3.1 Adding $I_q(z;\, x, y)$ to ELBO: To combat the optimization dilemma from Equation 5 (namely, that the objective discourages mutual information between the latent variable and the data), we explicitly add the mutual information term to the CVAE's ELBO and obtain a new training objective: $\mathcal{L}_{\text{MICVAE}} = \mathcal{L}_{\text{CVAE}} + I_q(z;\, x, y) = \mathbb{E}_{q(z \mid x, y)} \log p(y \mid x, z) - D_{\mathrm{KL}}(q(z) \,\|\, p(z))$. (6)", "The new training objective $\mathcal{L}_{\text{MICVAE}}$ aims to match the aggregated approximate posterior distribution of the latent variable q(z) (Hoffman and Johnson, 2016) to the prior distribution p(z).", "(Footnote: It can be seen as extending InfoVAE (Zhao et al., 2019) to conditional generative models, where we have overcome the mismatch between the (joint) data distribution $p_D(x, y)$ and the (conditional) likelihood objective $p(y \mid x)$.) 3.2 Guiding z to Encode Global Information: Several existing approaches weaken the decoder, limiting its capacity to encourage latent variables to be utilized (Bowman et al., 2016; Gulrajani et al., 2017).", "Here we propose a different approach: explicitly guiding the information encoded in z without reducing the decoder's capacity.", "The decision to weaken the decoder can be understood in the context of Bits-Back Coding theory (Chen et al., 2017), which suggests that at optimality the decoder will model whatever it can locally, and only the residual will be encoded in the latent variable z.", "A consequence is that explicit information placement can give more powerful latent representations.", "Inspired by this Bits-Back perspective, we add a global auxiliary loss for z to encode information which cannot be modelled locally by the autoregressive decoder $\prod_t p(y_t \mid x, y_{<t}, z)$.", "We use bag-of-words (BoW) prediction as the auxiliary loss.", "It encodes global information while having a non-autoregressive factorization: $\prod_t p(y_t \mid z)$.", "(We choose not to condition it on the source sentence x.)", "Further, it requires no additional annotated data.", "The auxiliary decoder complements the autoregressive decoder (which is locally factorized), interpolating predictions at the softmax layer, i.e., $p(y_t \mid x, y_{<t}, z)$ is a mixture of softmaxes (Yang et al., 2018b): $p(y_t \mid \cdot) = (1 - \lambda)\, p(y_t \mid x, y_{<t}, z) + \lambda\, p(y_t \mid z)$, (7) with mixing parameter $\lambda$.", "Our model uses discrete latent variables.", "These are used to select a latent embedding, which is concatenated to the decoder state.", "Inference Network We use discrete latent variables with reparameterization via Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) to allow backpropagation through discrete sampling.", "Unlike the multivariate Gaussian distribution commonly used in VAE and CVAE, our parameterization can explicitly account for multiple", "modes in the data.", "(See Rezende and Mohamed (2015) for a perspective on the value of multimodal distributions over latent",
"variables.) To make our model more general, we introduce a set of discrete latent variables $z = \{z_1, \dots, z_K\}$ which are independently sampled from their own inference networks $\phi_k$.", "Specifically, each $\phi_k$ computes scaled dot-product attention with encoder outputs $h \in \mathbb{R}^d$ using latent code embedding $e_k$: $C_k = \mathrm{Attention}(e_k W^k,\ hW^h,\ hW^h) = \mathrm{Softmax}\!\left(\frac{e_k W^k (hW^h)^\top}{\sqrt{d}}\right) hW^h$.", "We can now sample $z_k$ by the Gumbel-Softmax reparameterization trick (Maddison et al., 2017; Jang et al., 2017):", "$z_k \sim \mathrm{GumbelSoftmax}(C_k) = \mathrm{Softmax}\!\left(\frac{C_k + g}{\tau}\right)$,", "where $g = -\log(-\log(u))$, $u \sim \mathrm{Uniform}(0, 1)$ is the Gumbel noise and $\tau$ is a fixed temperature.", "(We use $\tau = 1$ in this paper.)", "At inference time, we use a discrete version by directly sampling from the latent variable distribution.", "BoW Auxiliary Decoder Given an inferred sample $z_k \sim \phi_k(h)$, the BoW decoder predicts all tokens at once without considering their order.", "We compute the cross-entropy loss for the predicted tokens over the output vocabulary space V: $\mathcal{L}_{\text{BoW}} = -\sum_{i=1}^{|V|} p_i \log p_\psi(y_i \mid z)$, with $\sum_{i=1}^{|V|} p_i = 1$.", "We take the (unnormalized) empirical distribution $\tilde{p}_i$ to be a token's frequency within a sentence normalized by its total frequency within a mini-batch, mitigating the effect of frequent (stop) words.", "This is then normalized over the sentence to sum to 1, giving values $p_i$.", "The model distribution $p_\psi$ is computed by conditioning on the latent code only, without direct attention to encoder outputs.", "We use scaled dot-product attention between the latent embeddings and the target embeddings (each of dimensionality d, represented as a matrix $E_V$): $p_\psi(y_i \mid z) = \mathrm{Softmax}\!\left(\frac{e(z) E_V^\top}{\sqrt{d}}\right)_i$.", "For training with parallel data, we optimize $\mathcal{L}_{\text{MICVAE}}$.", "We draw samples z from the approximate posterior $q(z \mid x, y)$ parameterized by the inference network, then feed the samples to both the autoregressive and auxiliary (BoW) decoders to get a Monte Carlo estimate of the gradient.", "Estimating aggregated distributions We estimate p(z) and q(z) over each minibatch, following Zhao et al. (2018).",
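A minimal PyTorch sketch of this inference step: the attention-derived logits $C_k$, Gumbel-Softmax sampling, and the mixture-of-softmaxes interpolation of Equation 7 (shapes, parameter names, and the clamping constant are illustrative assumptions, not the paper's exact implementation):

```python
import torch
import torch.nn.functional as F

def attention_logits(e_k, h, W_k, W_h):
    """C_k: scaled dot-product attention of latent code embedding e_k
    (1, d) over encoder outputs h (batch, src_len, d)."""
    q = e_k @ W_k                                    # (1, d)
    kv = h @ W_h                                     # (batch, src_len, d)
    scores = (q @ kv.transpose(-2, -1)) / q.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ kv            # (batch, 1, d)

def gumbel_softmax_sample(logits, tau=1.0):
    """Differentiable (relaxed) categorical sample z_k."""
    u = torch.rand_like(logits).clamp_min(1e-9)
    g = -torch.log(-torch.log(u))                    # Gumbel(0, 1) noise
    return F.softmax((logits + g) / tau, dim=-1)

def mixture_of_softmaxes(p_ar, p_bow, lam=0.1):
    """Equation 7: interpolate autoregressive and BoW token distributions."""
    return (1.0 - lam) * p_ar + lam * p_bow
```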
"Semi-supervised learning We apply the same modification to the VAE's ELBO, following Zhao et al. (2019).", "For jointly training with source-side monolingual data, we add $I_q(z;\, x)$ to the ELBO, and for target-side monolingual data, we add $I_q(z;\, y)$.", "The joint objective sums the modified CVAE and VAE objectives: $\mathcal{L}_{\text{Mono}} = \log p(x \mid z) - D_{\mathrm{KL}}\!\left(\frac{1}{L}\sum_{\ell=1}^{L} q\big(z^{(\ell)} \mid x^{(\ell)}\big) \,\Big\|\, \frac{1}{L}\sum_{\ell=1}^{L} p\big(z^{(\ell)}\big)\right)$, (13) $\mathcal{L}_{\text{Joint}} = \mathcal{L}_{\text{MICVAE}} + \mathcal{L}_{\text{Mono}}$, (14) where L is the number of monolingual examples.", "Algorithm 1 describes the overall training strategy.", "Here we present empirical results on the Transformer architecture.", "We evaluate our model on four standard datasets and compare against three baselines.", "We use four measures to quantify posterior collapse, then examine translation quality (BLEU score) in standard fully supervised settings, a semi-supervised setting, and a fully supervised setting with noisy source text.", "Hyperparameters, regularization choices, and subword vocabulary information can be found in Section 5.3.", "The results show that we have effectively addressed posterior collapse: latent variables are no longer ignored despite the presence of a powerful decoder.", "As a result, we outperform both the standard Transformer and the Transformer-based variational NMT approach, when using noisy data or source-language monolingual data.", "First, we evaluate our models on a standard high-resource and low-resource benchmark dataset from WMT.", "Second, we focus on situations where noisy or monolingual data is available.", "We note that low-resource scenarios and noisy data are two representative challenges in MT (Lopez and Post, 2013).", "WMT16 Romanian-English We use data from the WMT16 news translation shared task.", "We use the same BPE-preprocessed (Sennrich et al., 2016b) train, dev and test splits as in Gu et al. (2018), with 608k sentence pairs for training.", "MTNT This dataset pairs web-scraped text from Reddit with professional translations.", "We use 30k subword units built jointly from source and target sentences and only keep sentences with fewer than 100 tokens.", "For training, there are 34,380 sentence pairs for English-French and 17,616 sentence pairs for French-English (Michel and Neubig, 2018).", "We also used 18,676 monolingual sentences per language from the same data source (Reddit).", "We compare our model to three baselines:", "Non-latent The standard Transformer, without latent variables.", "VNMT A CVAE model with Gaussian distribution as proposed in Variational NMT by Zhang et al. (2016), which we reimplement using Transformer.",
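Equation 13's KL term compares minibatch-level aggregates of the posterior and prior (following Zhao et al., 2018); a hedged sketch of that estimate for categorical latents, where batch-mean aggregation is the assumed estimator:

```python
import torch

def aggregated_kl(q_probs, p_probs, eps=1e-9):
    """KL between minibatch-aggregated categorical distributions,
    as in Equation 13: q_probs and p_probs are (batch, K) rows of
    per-example posterior / prior probabilities."""
    q_bar = q_probs.mean(dim=0).clamp_min(eps)   # (1/L) sum_l q(z | x_l)
    p_bar = p_probs.mean(dim=0).clamp_min(eps)   # (1/L) sum_l p(z)
    return (q_bar * (q_bar / p_bar).log()).sum()

def joint_loss(micvae_loss, recon_log_prob_x, q_probs, p_probs):
    """Equation 14 written as a loss to minimize: signs are flipped
    relative to the objectives stated in the text."""
    l_mono = -(recon_log_prob_x - aggregated_kl(q_probs, p_probs))
    return micvae_loss + l_mono
```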
"(Zhang et al. (2016) use a GRU-based recurrent", "model.) DCVAE A CVAE model with the same discrete latent variable parameterization as ours but without the new objective (i.e., the mutual information term and bag-of-words regularizer).", "All of our models build on Transformer.", "For WMT14 De-En and WMT16 Ro-En, we use the base configuration (Vaswani et al., 2017): 6 blocks, with 512-dimensional embedding, 2048-dimensional feed-forward network, and 8 attention heads.", "For FLoRes (low-resource) and MTNT (low-resource and noisy), we use a smaller Transformer: 4 layers, 256-dimensional embedding, 1024-dimensional inner layers, and 4 attention heads.", "Input and output embeddings are shared between the inference network and decoder.", "We use K = 4 categorical latent variables of dimension 16 (found by grid search on the dev set).", "Auxiliary bag-of-words predictions are combined with the decoder prediction with mixing parameter", "$\lambda = 0.1$.", "We optimize using Adam (Kingma and Ba, 2015) with", "$\beta_1 = 0.9$, $\beta_2 = 0.98$,", "$\epsilon = 10^{-8}$, weight decay of 0.001, and the warmup and learning rate schedule of Ott et al. (2018b).", "All models are trained on 8 NVIDIA V100 GPUs with 32K tokens per mini-batch.", "We train WMT14 De-En with 200k updates and others with 100k updates.", "We do not use early stopping.", "We employ joint BPE vocabularies.", "The sizes are 32k for En-De and En-Ro; 30k for Fr-En; and 3k for Si-En.", "We also use a word dropout rate of 0.4 during training of all models, which is complementary to our approach.", "We found the default initialization in the FAIRSEQ NMT toolkit was effective; we did not need to explore several initializations to avoid degenerate models.", "We compare our model to a standard DCVAE lacking the new objective.", "We report four metrics of posterior collapse on the validation set of WMT Ro-En:", "1. Kullback-Leibler divergence (KL).", "2. Mutual information between the latent variable and the source: $I_q(z;\, x)$. 3. Mutual information between the latent variable and the target: $I_q(z;\, y)$.", "4. Negative conditional log-likelihood (NLL) per token.", "Table 1 shows that when using the standard DCVAE ELBO, even with the common practice of KL annealing (KLA), both the KL loss and mutual information settle to almost 0, which is consistent with the analysis in Equation 5.", "We also plot the progression of $D_{\mathrm{KL}}$, $I_q(z;\, x)$, and $I_q(z;\, y)$ during training in Figure 2.", "The posterior collapse of the baseline model is apparent: both the $D_{\mathrm{KL}}$ and mutual information terms drop to 0 at the beginning of training as a result of the ELBO's design.", "On the other hand, our model, without using any annealing schedule, effectively increases mutual information and prevents the KL loss from settling to a degenerate solution early on.",
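One way to compute the mutual-information metrics above for categorical latents, a sketch using the same minibatch aggregation as before (the estimator choice is an assumption, not necessarily the paper's exact implementation):

```python
import torch

def mutual_information(q_probs, eps=1e-9):
    """Minibatch estimate of I_q(z; x) for categorical q(z | x):
    I_q = E_x[ KL(q(z | x) || q(z)) ], with q(z) taken as the batch mean.
    q_probs: (batch, K) posterior probabilities per example."""
    q = q_probs.clamp_min(eps)
    q_bar = q.mean(dim=0, keepdim=True)
    return (q * (q / q_bar).log()).sum(dim=-1).mean()
```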
"We report corpus-level BLEU (Papineni et al., 2002) on the test sets, where the translations are generated by sampling each $z_k$ with soft-assignment (vs. argmax).", "Supervised Learning on Parallel Data First, we evaluate our model's performance when trained with parallel data on standard WMT datasets.", "Table 2 shows that our model consistently outperforms both the VNMT and DCVAE models. (We use detokenized SacreBLEU (Post, 2018).)", "Semi-supervised Learning with Monolingual Data Leveraging monolingual data is a common practice to improve low-resource NMT.", "One popular approach uses target-side monolingual data through backtranslation as a data augmentation, but how to effectively leverage source-side monolingual data is an open challenge (Sennrich et al., 2016a; Zhang and Zong, 2016; Wu et al., 2019).", "Table 3: Translation performance (BLEU) when utilizing source-side monolingual data. Model | Fr-En | En-Fr: Non-latent 26.7 | 24.8; DCVAE 26.4 | 26.1; DCVAE + source mono 27.3 | 26.4; Our model 28.6 | 26.3; Our model + source mono 29.8 | 26.7.", "We use the joint training objective described in Equation 14.", "To have a fair comparison, we also extend VNMT and DCVAE with the same joint training algorithm, i.e., the newly added monolingual data is used to train their corresponding sequence encoder and inference network with the standard VAE ELBO.", "That is, the only difference is that our model was trained to promote the mutual information $I_q(z;\, x)$ and $I_q(z;\, y)$.", "As shown in Table 3, by doing so the proposed model brings larger gains during semi-supervised learning with source-side monolingual data.", "Robustness to Noisy Data While high-quality parallel data is scarce for low-resource language pairs, weakly aligned sentence pairs can be mined from massive unpaired data such as Paracrawl (https://paracrawl.eu/).", "We evaluate our model's performance when augmenting the training set with increasingly noisy parallel data filtered by Zipporah (Xu and Koehn, 2017).", "Because VNMT and DCVAE underperform our proposal in previous experiments, we omit them from this experiment.", "Figure 3 shows the results in the Sinhala-English direction.", "Our model always outperforms the standard Transformer, which struggles as more (and noisier) data is added.", "The gap grows from +1.2 to +4.7 BLEU.", "Ablation Study How do the different ingredients of our proposed approach contribute to preventing posterior collapse and improving translation quality?", "We explore two variants of the proposed model: 1) modified ELBO only: adding only the mutual information term to the training objective, without gradients from $\mathcal{L}_{\text{BoW}}$; 2) BoW only: equivalent to DCVAE combined with the BoW decoder.", "First, we perform the same collapse-metrics evaluation as in Table 1.", "Figure 2(B) suggests that by explicitly adding the mutual information term back to the training objective, both $I_q(z;\, x)$ and $I_q(z;\, y)$ are effectively raised, while the remaining aggregated KL term is still optimized to zero.", "Such behavior is consistent with the analysis revealed in Equation 5.", "Table 4: Ablation study on translation quality (BLEU). Model | De-En (3.9M) | Ro-En (608K): BoW and $\mathcal{L}_{\text{MICVAE}}$ 31.4 | 34.8; BoW only 31.1 | 34.2.",
"On the other hand, regularizing z with the BoW decoder only, shown in Figure 2(C), is very effective in preventing KL vanishing as well as increasing mutual information.", "When the two approaches are combined, as shown in Figure 2(A), the model retains higher mutual information for both $I_q(z;\, x)$ and $I_q(z;\, y)$.", "Next, we see whether the difference in mutual information yields different translation quality.", "We compare two models: BoW only (Figure 2(C)) and both (Figure 2(A)), on the WMT14 De-En and WMT16 Ro-En test sets.", "Table 4 shows the difference matters more in a low-data regime.", "Analysis of Outputs Delving into model predictions helps us understand how our model outperforms the others.", "We examined erroneous 1-best predictions on the Ro-En data.", "We provide salient examples of phenomena we identified in Table 5.", "(Naturally, as the Ro-En score differences are not dramatic, the predictions are largely similar.) Several examples support the fact that our model produces more fluent and accurate translations than the baseline or VNMT.", "VNMT often struggles by introducing disfluent words, and both VNMT and Transformer select justifiable but incorrect words.", "For instance, in our second example, the gender and animacy of the possessor are not specified in Romanian.", "Our model selects a more plausible pronoun for this context.", "Analysis of Latent Variables Finally, we probe whether different latent variables encode different information.", "We randomly sample 100 sentences from two test sets of distinct domains, MTNT (Reddit comments) and WMT (news), with 50 sentences each.", "We plot the t-SNE projection of their corresponding samples $z_k$ inferred from $\phi_k$, $k = 1, 2, 3, 4$, respectively.", "Figure 4 suggests that different latent variables learn to organize the data in different manners, but there was no clear signal that any of them exclusively specialize in encoding a domain label.", "We leave a thorough analysis of this to future work. [Table 5 example source: 'ma intristeaza foarte tare.']", "Unlike most prior work in (conditional) text generation, we tackle posterior collapse without requiring an annealing schedule (Bowman et al., 2016; Sønderby et al., 2016; Kim et al., 2018), a weakened decoder (Gulrajani et al., 2017), or a restricted variational family (Razavi et al., 2019).", "Unlike Ma et al. (2018), who also employ bag-of-words as an NMT objective, our BoW decoder only sees the latent variable z, not the encoder states.", "Conversely, unlike Weng et al.
(2017), our generative decoder has access to both the latent variable and the encoder states; bag-of-words prediction is handled by separate parameters.", "VNMT (Zhang et al., 2016) applies CVAE with Gaussian priors to conditional text generation.", "VRNMT (Su et al., 2018) extends VNMT, modeling the translation process at a finer granularity.", "Both needed manually designed annealing schedules to increase the KL loss and avoid posterior collapse.", "Discrete latent variables have been applied to NMT (Kaiser et al., 2017; Gu et al., 2018; Shen et al., 2019), without variational inference or addressing posterior collapse.", "Approaches to stop posterior collapse include aggressively trained inference networks (He et al., 2019), skip connections (Dieng et al., 2019), and expressive priors (Tomczak and Welling, 2018; Razavi et al., 2019).", "Unlike our conditional approach, Shah and Barber (2018) jointly model the source and target text in a generative fashion.", "Their EM-based inference is more computationally expensive than our amortized variational inference.", "Eikema and Aziz (2019) also present a generative (joint) model relying on autoencoding; they condition the source text x on the latent variable z.", "Finally, Schulz et al. (2018), like us, value mutual information between the data and the latent variable.", "While they motivate KL annealing using mutual information, we show that the annealing is unnecessary.", "We have presented a conditional generative model with latent variables whose distribution is learned with variational inference, then evaluated it in machine translation.", "Our approach does not require an annealing schedule or a hamstrung decoder to avoid posterior collapse.", "Instead, by providing a new analysis of the conditional VAE objective to improve it in a principled way and incorporating an auxiliary decoding objective, we measurably prevented posterior collapse.", "As a result, our model has outperformed previous variational NMT models in terms of translation quality, and is comparable to the non-latent Transformer on the standard WMT Ro-En and De-En datasets.", "Furthermore, the proposed method has improved robustness in dealing with uncertainty in data, including exploiting source-side monolingual data as well as training with noisy parallel data.", "We thank Alexandra DeLucia, Chu-Cheng Lin, Hongyuan Mei, Kenton Murray, Guanghui Qin, and Joao Sedoc (alphabetically) for remarks on the exposition." ]
[ "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "objective", "method", "abstain", "objective", "result", "abstain", "other" ]
[ "Given the ubiquitous nature of numbers in text, reasoning with numbers to perform simple calculations is an important skill of AI systems.", "While many datasets and models have been developed to this end, state-of-the-art AI systems are brittle; failing to perform the underlying mathematical reasoning when they appear in a slightly different scenario.", "Drawing inspiration from GLUE (Wang et al., 2018) that was proposed in the context of natural language understanding, we propose NUMGLUE, a multi-task benchmark that evaluates the performance of AI systems on eight different tasks, that at their core require simple arithmetic understanding.", "We show that this benchmark is far from being solved with neural models including state-of-the-art large-scale language models performing significantly worse than humans (lower by 46.4%).", "Further, NUMGLUE promotes sharing knowledge across tasks, especially those with limited training data as evidenced by the superior performance (average gain of 3.4% on each task) when a model is jointly trained on all the tasks as opposed to task-specific modeling.", "Finally, we hope that NUMGLUE will encourage systems that perform robust and general arithmetic reasoning within language, a first step towards being able to perform more complex mathematical reasoning 1 .", "Reasoning with numbers is an important skill that occurs in various day-to-day scenarios and not surprisingly, numbers are ubiquitous in textual data.", "To train AI reasoning systems that can perform simple mathematical reasoning, many tasks have been proposed (Dua et al., 2019b; Ravichander et al., 2019; Koncel-Kedziorski et al., 2016).", "Despite these efforts, current state-of-the-art AI 1 https://allenai.org/data/numglue Original Word Problem John had 5 apples.", "Premise: John had 5 apples.", "He gave 3 apples to Peter.", "Hypothesis: John has 2 apples now.", "Does the hypothesis entail, contradict or is neutral to the premise?", "Comparison Format John had 5 apples.", "He gave 3 to Peter.", "Who has more apples?", "systems are brittle and fail when problems involving similar mathematical reasoning is posed in a slightly different manner.", "For instance, presenting a word problem in a different manner as shown in fig.", "1, while hardly affecting human performance, is sufficient to confuse state-of-the-art AI systems 2 .", "This brittleness in reasoning indicates that the models latch on to spurious signals in the specific dataset resulting in solving the dataset while not truly understanding the underlying reasoning skill of simple arithmetic.", "Further, we believe that building AI systems that can truly understand and apply simple arithmetic reasoning is a mandatory first step towards successfully tackling complex 2 The recently released GPT3-Instruct, a fine-tuned model with 175B parameters produces inconsistent answers for these questions.", "See supplementary material: GPT3-Instruct's Response for more details.", "NumGLUE.", "To this end, we propose NUMGLUE, a multi-task benchmark consisting of eight different tasks that at their core test for arithmetic reasoning skills.", "For example, as discussed in fig.", "1, tasks can involve word problems presented in a slightly different manner or can involve additional reasoning strategies like commonsense reasoning or reading comprehension to be combined with the core skill of simple arithmetic.", "Our benchmark consists of four new tasks in addition to four existing ones; with 100 K problems spread across eight differet tasks.", "The 
motivation behind NUMGLUE is similar to GLUE (Wang et al., 2018, 2019), a multi-task benchmark aimed at models that demonstrate superior language understanding by learning the underlying linguistic features.", "NUMGLUE is designed with the goal of progressing towards AI systems that are capable of performing arithmetic reasoning in a general setting; achieving superior performance on our benchmark requires the ability to correctly identify and perform the underlying arithmetic reasoning without relying on task- or dataset-specific signals.", "Finally, we hope that NUMGLUE will encourage systems that perform robust and general numeric reasoning within language, a first step towards being able to perform more complex mathematical reasoning.", "1. We introduce NUMGLUE, a multi-task benchmark consisting of eight different tasks, including 4 new ones, whose solution at its core requires an understanding of simple arithmetic.", "2. We demonstrate that NUMGLUE is a challenging benchmark even for state-of-the-art large-scale language models, which obtain poor scores not only in zero- or few-shot settings but also after fine-tuning.", "This indicates a fundamental barrier for AI systems; one that needs to be breached before complex mathematical challenges can be successfully tackled.", "3. Finally, we propose a memory-augmented neural model to demonstrate the utility of such a multi-task meta dataset.", "Our proposed model, when trained on the entirety of NUMGLUE, obtains an average improvement of 3.4% on each task as opposed to task-specific modeling, indicating that joint training leads to beneficial transfer owing to the common theme of arithmetic reasoning.", "Datasets for Numerical Reasoning.", "Quantitative reasoning has been a challenging problem for a long time.", "Small question answering datasets were proposed to understand the quantitative aspect of natural language, such as the template-based dataset which solved questions with equations as parameters (Kushman et al., 2014), the addition-subtraction dataset (Hosseini et al., 2014) and the arithmetic problems dataset (Koncel-Kedziorski et al., 2015).", "The difficulty of questions was increased in subsequent datasets (Roy and Roth, 2016), (Upadhyay et al., 2016).", "Later, larger datasets were created to facilitate deep learning research (Ling et al., 2017; Dua et al., 2019b).", "Several other math datasets have been proposed to improve explainability (Amini et al., 2019), diversity (Miao et al., 2020), scale information in language embeddings (Zhang et al.)
and hardness of math questions (Hendrycks et al., 2021).", "One of the motivations behind creating this benchmark is to test for simple arithmetic reasoning independent of the context or the presentation style of the problem.", "Further, to the best of our knowledge, our work is the first to consider multiple tasks in the numerical reasoning space.", "Multi-Task Benchmarks.", "With the increased success of deep learning based models on individual tasks, there has been a significant push both in the NLP community and in the broader AI community towards general purpose models that excel at multiple tasks.", "Naturally, various benchmarks and challenges that test for such understanding have been proposed.", "For instance, the BAbI dataset (Weston et al., 2015), GLUE (Wang et al., 2019) and the subsequent harder SuperGLUE (Wang et al., 2019) were proposed to both evaluate and drive progress in language understanding via shared linguistic knowledge across tasks.", "McCann et al. (2018) build a multi-task dataset via a novel approach, formatting each task as question answering.", "In the more restricted setting of reading comprehension, Dua et al. (2019a) and Downey and Rumshisky build a meta-dataset that spans multiple domains and reasoning skills.", "Multi-task Models.", "With the growing interest in models that go beyond specific datasets, various neural models that can perform multiple tasks have been proposed.", "When the underlying reasoning is similar, e.g.,", "commonsense reasoning, problem decomposition or linguistic understanding, it has been found that training on multi-task datasets yields more robust and accurate models.", "For instance, the Multi-task Question Answering Network (McCann et al., 2018), T5 (Raffel et al., 2019), GPT3 (Brown et al., 2020) and GPT3-Instruct models aim to build general purpose language models that are capable of transferring linguistic understanding across tasks.", "A similar approach is taken by Khashabi et al. (2020) in the setting of question-answering and Lourie et al.
(2021) in the scope of commonsense reasoning.", "Further, Muppet (Aghajanyan et al., 2021) adds an additional step of pre-finetuning between pretraining and finetuning that improves generalization to multiple tasks.", "As mentioned previously, our NUMGLUE benchmark consists of both new and already existing arithmetic reasoning tasks.", "We begin by introducing the novel datasets curated by us before providing a brief overview of existing tasks that are part of NUMGLUE.", "Finally, in this section, we provide an analysis of the datasets demonstrating that they contain interesting and diverse linguistic and mathematical properties.", "NUMGLUE Benchmark.", "Our proposed NUMGLUE benchmark is a collection of eight different tasks that together include 100K questions.", "The tasks may either be self-contained or require additional background knowledge (e.g., commonsense reasoning) to arrive at the final solution; however, all the tasks, at their core, involve arithmetic reasoning.", "Table 1 shows an example question belonging to each task, along with the total number of data points associated with it.", "It is important to note that the tasks are imbalanced, with only 400 examples for Task 1 and nearly 50K questions under Task 5.", "While we could have under-sampled the questions to create a balanced suite, we retain the imbalanced dataset in order to mimic the real world: for instance, arithmetic word problems are more abundant than word problems that may require commonsense reasoning in addition to arithmetic reasoning.", "Data Partition and Evaluation.", "We randomly partition the data in each task into training (70%), development (10%) and test (20%) sets.", "In the case of the reading comprehension tasks (Tasks 5 and 6), we assign all questions corresponding to a passage to the same split; we do this in order to discourage any data leakage that would allow models to rely on memorization to arrive at the correct answer.", "For each task, we report the F1 measure; as an aggregate measure of performance on the NUMGLUE benchmark, similar to Dua et al. (2019b), we report the (unweighted) average of the F1 scores corresponding to each task.", "The novel tasks proposed as part of NUMGLUE are a combination of both freshly collected data and intelligent modifications of already existing datasets.", "The four novel arithmetic reasoning tasks introduced are as follows (Footnote 3: We annotate the datasets manually.): Task 1: Commonsense + Arithmetic Reasoning.", "Consider the following question: How many faces do 10 dice have?", "Answering this not only requires simple arithmetic, i.e., multiplying the number of faces on a die by ten, but also requires knowing that a standard die has six faces.",
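A tiny sketch of the aggregate metric just described; the helper name is hypothetical, and the example reuses the per-task F1 values reported later in Table 2 for the best fine-tuned baseline:

```python
from statistics import mean

def numglue_score(task_f1):
    """Unweighted average of per-task F1 scores, so that large tasks
    (e.g., Task 5) do not dominate small ones (e.g., Task 1)."""
    return mean(task_f1.values())

# Per-task F1 of the Multi-task + CIR baseline (see Table 2):
scores = {1: 7.4, 2: 38.8, 3: 58.0, 4: 36.8,
          5: 69.2, 6: 70.8, 7: 85.8, 8: 23.6}
print(round(numglue_score(scores), 1))  # 48.8
```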
"We collect this dataset by first asking the annotator to write down a numerical commonsense fact (e.g., a human has 2 hands, a day has 24 hours, etc.) and then frame a question that requires using this numerical fact as part of a simple arithmetic calculation.", "Task 2: Domain Specific + Arithmetic Reasoning.", "How many units of hydrogen are required to produce 10 units of water?", "This question, similar to the previously introduced task of arithmetic reasoning questions, requires additional domain-specific knowledge: specifically, that each unit of water contains two units of hydrogen.", "We", "curate a dataset of such questions that require both domain-specific knowledge and arithmetic reasoning, motivated by the finding that QA systems perform poorly on the ARC dataset (Clark et al., 2018) consisting of grade-school level science questions.", "Specifically, the dataset collected by us requires understanding of a small set of chemistry principles (conservation of mass in chemical reactions) and physics principles (speed = distance / time).", "Task 3: Commonsense + Quantitative Comparison.", "A golf ball weighs 40g and a baseball weighs 150g.", "Which has a higher gravitational force?", "Answering this question requires both knowing that mass is directly proportional to gravitational force and a numerical comparison via subtraction.", "We collect such quantitative comparison questions by using the QuaRel dataset (Tafjord et al., 2019), containing questions from diverse fields such as physics and economics, as the starting point.", "The annotator chooses a subset of these questions that involve numerically comparable quantities (for instance, in this example, the mass of the objects involved) to create the required task of quantitative comparison questions.", "Task 4: Fill-in-the-blanks Format. Unlike the previous tasks that require external information (e.g., commonsense knowledge) in addition to simple arithmetic reasoning, this task is self-contained but a stylistic variant of existing math word problems.", "We source word problems from the Arithmetic Word Problem repository (Roy and Roth, 2016, 2017, 2018) and convert them into the fill-in-the-blanks format.", "For an example of such a conversion, refer to fig. 1.",
"3.2 Existing Datasets We now review existing datasets while discussing any modifications made when including them in NUMGLUE.", "In general, for all the datasets included, we perform a filtering step to clean and control for the quality of the data points being included.", "This step includes", "a) discarding questions that do not have answer annotations,", "b) eliminating questions with high lexical overlap with the remainder of the dataset and", "c) fixing any type mismatches present in the data (e.g., 7.0 students → 7 students).", "Task 5: Reading Comprehension (RC) + Explicit Numerical Reasoning.", "We select a subset from the DROP (Dua et al., 2019b) dataset to create this task.", "Specifically, the selected questions involve reading comprehension and numerical reasoning but, importantly, the required answer is also a number.", "Task 6: Reading Comprehension (RC) + Implicit Numerical Reasoning.", "Consider the following question based on a relevant passage: Which state has the highest income tax rate?", "Here, while the final answer is a name, arriving at it requires performing a comparison (i.e., subtraction).", "We classify such questions in the DROP dataset as a separate task in NUMGLUE.", "Task 7: Quantitative NLI EQUATE (Ravichander et al., 2019) introduces quantitative NLI questions that require simple arithmetic calculations to be performed in order to accurately classify the relationship between the provided premise and the hypothesis.", "As noted in fig. 1,", "many word problems can also be easily converted to this format; it is therefore a diverse and interesting task for evaluating the arithmetic reasoning skills of AI systems.", "Task 8: Arithmetic Word Problems Finally, we arrive at one of the earliest and most extensively studied classes of arithmetic reasoning problems, i.e., word problems.", "The specific dataset included as part of our NUMGLUE benchmark is a combination of multiple datasets proposed by Koncel-Kedziorski et al. (2016), Koncel-Kedziorski et al. (2015) and Kushman et al.
(2014).", "Further, to ensure that the benchmark as a whole is diverse, we eliminate questions that have a high sentence similarity with questions from the fill-in-the-blanks task.", "In order to ensure a high-quality test set, three independent annotators evaluate each question in the test set across all tasks.", "A tiny portion of the data marked as invalid or with disagreement between the annotators was excluded, resulting in a verified, high-quality NUMGLUE evaluation suite.", "We also perform a variety of analyses and find that the novel question tasks we created (tasks 1-4) have higher quality than the existing question tasks, since they have higher average vocabulary (number of unique words per number of samples), a higher number of unique nouns, verbs and other POS tags, and less semantic textual similarity among each other (indicating lower repetition).", "Detailed analysis can be found in the supplementary material: Data Quality Analysis of NUMGLUE.", "In this section, we establish multiple baselines on our benchmark and discuss their performance.", "We evaluate several baselines on our benchmark:", "(i) Heuristic,", "(ii) Zero-shot,", "(iii) Few-shot,", "(iv) Fine-tuning and", "(v) Human.", "We use two kinds of model architectures:", "(i) Neuro-symbolic, a novel memory-augmented architecture that extends NumNet+v2 (Ran et al., 2019) and", "(ii) End-to-end, GPT3 (Brown et al., 2020).", "Architectures.", "In the multi-task setting where the same model is trained on all the NUMGLUE tasks, we use Reading Comprehension (RC) as the common format, converting each task to the RC format via a set of hand-coded rules.", "In addition to being capable of faithfully representing all the constituent tasks, the RC format also allows us to inject additional context in the IR setting without affecting the rest of the pipeline.", "On the other hand, GPT3, being a generative model, does not require such modifications.", "Importantly, note that both models are given the exact same information for the multi-task experiments.", "Heuristic Baselines with Task Oracle.", "For this baseline, we assume a task oracle that knows the task a particular question belongs to (in a multi-task setting); we use this to make our heuristic baselines more competitive.", "The first heuristic baseline is random: we randomly select one of the options in case the question has multiple options (tasks 3 and 7), a number between 0 and 100 for questions having a numerical answer, and a random entity present in the passage for questions having a text segment from the passage as the answer.", "In the majority baseline, we select the most frequent answer for each task, such as \"Entailment\" for NLI questions and, similarly, the most frequent number for questions having a numerical answer and the majority entity present in the passage for questions having a span-based answer.", "As the task information is known, we include these baselines under task-specific baselines when discussing results.", "Zero-shot and Few-shot Baselines.", "We use GPT3 (Brown et al., 2020) and the more recent GPT3-Instruct (Footnote 6: newly released by OpenAI as part of the GPT3 fine-tuned series).", "We have two types of few-shot baselines:", "(i) task-specific and", "(ii) multi-task.", "In the case of the task-specific few-shot baseline, instances of the same task are used as in-context examples (Brown et al., 2020), whereas in the case of the multi-task few-shot baseline, instances from all tasks are used to condition the model.", "Multi-task few-shot is naturally a harder setting as it is task-agnostic.", "We use default parameters in GPT3 and GPT3-Instruct.",
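A minimal sketch of the two heuristic baselines with the task oracle described above (function and field names are hypothetical):

```python
import random
from collections import Counter

def random_baseline(q):
    """Task-oracle random baseline: option, number, or passage entity."""
    if q["options"]:                       # multiple choice (tasks 3, 7)
        return random.choice(q["options"])
    if q["answer_type"] == "number":       # numeric-answer questions
        return random.randint(0, 100)
    return random.choice(q["passage_entities"])  # span-based answers

def majority_baseline(task_train_answers):
    """Most frequent training answer per task (e.g., 'Entailment')."""
    return {task: Counter(answers).most_common(1)[0][0]
            for task, answers in task_train_answers.items()}
```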
"In few-shot setting, we experiment after feeding as many examples as it can fit within the tokensize.", "For few shot experiments, we randomly select 6 newly released by OpenAI as part of the GPT3 finetuned series examples and averaged the results over 5 runs.", "first consider variations of the fine-tuning baselines in the context of our neuro-symbolic model, Ex-NumNet.", "We use it as bias-checking baseline to ensure that solving the benchmark correctly requires considering all of the information presented to it.", "To this end, we evaluate the performance of our model when finetuned only on the question (Q-only) or the context (C-only).", "Next, we present task-specific and multi-task baselines where Ex-NumNet is fine-tuned on individual tasks and the entire NUMGLUE benchmark respectively.", "With the goal of addressing the data imbalance across the tasks, we include an oversampling 3510 Learning Baseline Baseline Task 1 Task 2 Task 3 Task 4 Task 5 Task 6 Task 7 Task 8 NumGLUE category name Score HEURISTIC Task-specific Random 0 0.3 46.9 0 0.5 3.4 33 0.4 10.6 Task-specific Majority 1.2 13.9 50 0.5 7.4 3.8 36.5 1.2 14.3 ZERO-SHOT GPT3 0 1 11 2 0 17 6 2 4.9 -GPT3-Instruct 2 1 7 3 3 29 17 3 8.1 FEW-SHOT Task-specific GPT3 44 42 46 40 10 42 35 40 37.4 Task-specific GPT3-Instruct 40 39 51 33 13 43 35 33 35.9 Multi-task GPT3 0 3 27 1 7 28 30 4 12.5 Multi-task GPT3-Instruct 1 2 37 2 6 35 31 7 15.1 FINE-TUNING Multi-task GPT3-13B 21.5 40.7 71.2 11.1 6.3 48.2 48.0 14.2 32.7 FINE-TUNING Multi-task (Q-only) Ex-NumNet 1.2 13.2 25.1 0.5 6.1 25.1 32.8 2.4 13.3 Multi-task (C-only) Ex-NumNet 1.2 14.2 22.8 19.1 0.6 3 0 9.5 8.8 Single-task Ex-NumNet 0 37.8 50.8 22.2 66.6 71.6 85.9 12.2 43.4 Multi-task Ex-NumNet 0 37.5 58 31.4 68.2 70.2 85.7 23.2 46.8 Multi-task + IR Ex-NumNet 5.6 37.5 46.6 36.4 68.6 69.6 85.9 22.4 46.6 Multi-task + CIR Ex-NumNet 7.4 38.8 58 36.8 69.2 70.8 85.8 23.6 48.8 Multi-task + OS Ex-NumNet 7.4 38.8 47.8 35.9 44.3 53.7 85.4 22.4 42.0 -Human 94.4 94.5 97.8 95 94.7 96.1 96.5 92.8 95.2 Table 2: F1 performance of various baselines on the NumGLUE test set across various tasks 1-8.", "baseline that oversamples data from tasks with limited data so as to ensure that the model sees the same number of examples from each constituent task.", "In addition, we propose a new architectural modification to Ex-NumNet.", "Noting that our baseline model Ex-NumNet does not take into account external knowledge, we create a new enhanced architecture in the form of a memory-augmented model that does Information Retrieval (IR) (Khot et al., 2019) with respect to a knowledge base we create, MATH KB to identify the needed knowledge.", "This is inspired by the observation that formula book and mathematical knowledge make the task easier for humans while solving math questions of various types.", "We then use this knowledge in the Ex-NumNet setting.", "Figure 3 illustrates our approach which leverages our newly created knowledge base MATH KB .", "Conditional IR model is different from the regular IR model in the sense that, IR is performed only for questions of task 1 , 2 and 4, since they require external knowledge to get answered.", "More details about the model and the IR process can be found in supplementary material: Proposed Memory-Augmented Model (A.5 and A.6).", "Finally, we discuss fine-tuning baselines in the context of end-to-end models, specifically GPT3.", "We finetune the GPT3-13B model (for which the finetuning capability has been recently provided by OpenAI 7 ) in the multi-task setting i.e. 
"We finetune the GPT3-13B model (for which the fine-tuning capability has recently been provided by OpenAI: https://beta.openai.com/docs/guides/fine-tuning) in the multi-task setting, i.e., the desired setting of the NUMGLUE benchmark.", "Human Baseline.", "The human baseline was calculated on 100 test set samples of each task (81 for Task 1) by averaging the scores of four annotators.", "Table 2 shows the performance of various baseline models on the test set of our benchmark.", "Note that the performance of all baseline models is significantly lower than the human baseline (Figure 2).", "We now discuss various insights based on these results.", "Does the benchmark contain bias that a model can exploit?", "A challenging dataset requires the model to ideally consider all the information provided to it before arriving at an answer.", "To ensure that this is indeed the case, we perform ablations where only one portion of the input is provided, i.e., either the question or the context.", "Both these bias-checking baselines perform poorly even in the task-specific setting, indicating that both the benchmark and the constituent tasks are challenging.", "Which Tasks are Hard to Solve?", "Our results show that task 1, which requires numerical commonsense knowledge, is the hardest task to solve.", "Similarly, tasks 2, 4 and 8 appear to be comparatively harder than the rest.", "One pattern among these tasks is that all of them expect the answer to be numeric.", "A numeric answer requires accurate calculation.", "So, models might have difficulty in learning the task directly from data.", "This hypothesis is also supported by the slight drop in human performance on these", "tasks.", "On the other hand, task 7 has the best performance among all.", "Further, we see that performance on task 6 is slightly better than on task 5; although both tasks are sourced from the same dataset, we observe that models answer span-based questions better than those requiring numeric answers.", "The relatively higher performance on task 3 suggests that models find it easier to answer in an MCQ setting.", "Does IR Help?", "Results show that knowledge helps in improving performance on tasks 1, 2 and 4, where indeed external knowledge like commonsense or domain-specific knowledge is needed in addition to arithmetic reasoning to arrive at the correct answer.", "However, task 3 is an exception to this trend and in fact registers a drop in the score when provided with (unnecessary) additional information; we find that this shortcoming is fixed when using conditional information retrieval (CIR), which leads to the strongest baseline presented in this work.", "Does Oversampling help overcome data imbalance across tasks?", "Even though oversampling results in higher performance on certain tasks (in comparison with the multi-task baseline), specifically the ones with smaller training data, it results in a significant drop in performance at the other extreme, i.e., tasks with larger training data.", "Also, it never performs better than the conditional IR module in the multi-task setting.", "We now present an analysis of the errors made by our baselines to indicate potential avenues for future research.", "We analyze errors associated with 50 samples from each of the 8 tasks and find that there are mainly 4 categories of errors models make: (1) producing invalid output (e.g.
answering text where the answer is supposed to be a number, or answering text different from the classes allowed in a classification problem), (2) copying a number from the question instead of calculating the answer, (3) incorrect calculation: this can be due to multiple reasons, including [Table 3: Error analysis for the best Ex-NumNet (Multi-task + CIR) and GPT3 (Task-specific) models. Invalid output: 16% vs. 7%; Copy number: 5% vs. 3%; Incorrect calculation: 71% vs. 56%; Redundant text: 8% vs. 34%.]", "(i) using an incorrect operation, e.g., subtraction in place of addition,", "(ii) incorrect parsing of numbers or", "(iii) incorrect knowledge of numerical commonsense facts.", "(4) producing redundant text after producing the correct answer.", "Based on the error distribution in Table 3, we observe that the majority of errors come from incorrect calculation.", "Further, GPT3 is better than Ex-NumNet in producing valid outputs, but it produces more redundant text.", "Future Directions: Bigger model, more data, or ...?", "Table 2 shows that fine-tuned GPT3-13B outperforms other baselines on tasks 1, 2 and 3.", "Recall that these tasks require external knowledge and perhaps this is the reason why GPT3, already pre-trained on a diverse web-scale text corpus, has an edge over other baselines on these tasks.", "In the case of the smaller Ex-NumNet, it is interesting that multi-task baselines are higher than the single-task baselines by 3.4% on average and that information retrieval helps in tasks that require external knowledge.", "Also notice that GPT3 is better on smaller datasets and NumNet is better on large datasets.", "This may indicate that GPT3 is a better few-shot learner but not necessarily a better many-shot learner.", "This non-overlapping performance of GPT3 and Ex-NumNet, end-to-end and neuro-symbolic models respectively, indicates that a potential future direction for research is to combine the best of both models.", "We propose NUMGLUE, a multi-task benchmark to test for arithmetic understanding.", "Our benchmark consists of eight tasks including four new ones.", "While some of the tasks require external knowledge like commonsense or domain-specific information in addition to arithmetic reasoning, some are self-contained, e.g., arithmetic word problems.", "Further, we demonstrate that our benchmark is far from being solved, with state-of-the-art large-scale models achieving considerably lower performance than humans.", "This indicates that current AI systems are incapable of performing simple arithmetic reasoning in a general setting, a fundamental hurdle towards AI systems that understand complex mathematical concepts like differential equations or combinatorics.", "Finally, we present various baselines including a novel architecture (memory-augmented Ex-NumNet) that demonstrate the advantages of various modeling choices (e.g.
end-to-end vs. neuro-symbolic models).", "Specifically, we show that training in the multi-task setting leads to meaningful sharing of knowledge across tasks, as evidenced by an average gain of 3.4% on tasks compared to task-specific modeling.", "Finally, we hope that our benchmark not only leads to AI systems that are capable of performing simple arithmetic reasoning in a fairly general setting but also results in progress towards more complex mathematical reasoning capability.", "We thank OpenAI for providing academic access to the GPT3 API, the Aristo team at AI2 for helpful input, the Beaker team for their support with experiments and the anonymous reviewers for their insightful feedback.", "The support of the DARPA SAIL-ON and DARPA CHESS programs is gratefully acknowledged.", "We have verified that all licenses of source datasets used in this paper allow for their use, modification, and redistribution in a research context.", "The dataset will be distributed in a manner similar to SuperGLUE (Wang et al., 2019), i.e., giving full credit assignment to the original data and task creators." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "objective", "result", "result", "other", "other", "abstain", "abstain" ]
[ "For sentence-level extractive summarization, there is a disproportionate ratio of selected and unselected sentences, leading to flatting the summary features when optimizing the classification.", "The imbalanced sentence classification in extractive summarization is inherent, which can't be addressed by data sampling or data augmentation algorithms easily.", "In order to address this problem, we innovatively consider the single-document extractive summarization as a rebalance problem and present a deep differential amplifier framework to enhance the features of summary sentences.", "Specifically, we calculate and amplify the semantic difference between each sentence and other sentences, and apply the residual unit to deepen the differential amplifier architecture.", "Furthermore, the corresponding objective loss of the minority class is boosted by a weighted cross-entropy.", "In this way, our model pays more attention to the pivotal information of one sentence, that is different from previous approaches which model all informative context in the source document.", "Experimental results on two benchmark datasets show that our summarizer performs competitively against state-of-the-art methods.", "Our source code will be available on Github.", "Single-document extractive summarization forms summary by copying and concatenating the most important spans (usually sentences) in a document.", "Sentence-level summarization is a very challenging task, because it arguably requires an in-depth understanding of the source document sentences, and current automatic solutions are still far from human performance.", "Recent approaches frame the task as a sequence labeling problem, taking advantage of the success of neural network architectures.", "1) It should be detrimental to keep tangential information (West et al., 2019).", "The intuitive limitation of those approaches is that they always prefer to model and retain all informative content from the source document.", "This goes against the fundamental goal of summarization, which crucially needs to forget all but the pivotal information.", "Recently, the Information Bottleneck principle (Tishby et al., 2000; West et al., 2019) is introduced to incorporate a tradeoff between information selection and pruning.", "Length penalty and the topic loss (Bazio-tis et al., 2019) are used in the autoencoding system to augment the reconstruction loss.", "However, these methods require external variables or augmentative terms, without enhancing the representation of pivotal information.", "models that have poor predictive performance, specifically for the minority class.", "The distribution of examples across the known classes can vary from a slight bias to a severe imbalance, where there is one example in the minority class for dozens of examples in the majority class.", "For instance, according to the statistics on the popular summarization dataset, only 7.33% sentences of CNN/DM (Hermann et al., 2015) are labeled as 1 and others are 0, indicating whether this sentence should be selected as summary or not.", "Conversely, most machine learning algorithms for classification predictive models are designed and demonstrated on problems that assume an equal distribution of classes.", "This means that a naive application of a model may only focus on learning the characteristics of the abundant observations, neglecting the examples from the minority class.", "Furthermore, as shown in Figure 1, the ROUGE score gradually declines along with the number of sentences 
accumulating, since the valuable summary sentences is generally a tiny minority (with the quantity of 1-4), while more and more majority sentences will swamp the minority ones.", "Unfortunately, the imbalance in summarization is inherent, which can't be addressed by common data augmentation (He and Ma, 2013; Asai and Hajishirzi, 2020; Min et al., 2020; Zoph et al., 2019; Xie et al., 2020), for there is a rare influence on the 0/1 distribution by adding or deleting the entire document.", "These two obstacles are interrelated and interact with each other.", "Highlighting the pivotal information will strengthen the unique semantic and weaken the common informative content.", "Additionally, a more balanced distribution would make minority class more attractive.", "If we can't resolve the category imbalance problem in extractive summarization by data augmentation, how to make the minority class more attractive?", "Inspired by the differential amplifier of analog electronics 1 , we propose a heuristic model, DifferSum , as shorthand for Differ ential Amplifier for Extractive Sum marization to enhance the representation of the summary sentences.", "Specifically, we calculate and amplify the semantic difference between each sentence and other sentences, by the subtraction operation.", "The original differential amplifier consists of two terms and the second term is used to avoid making the final output zero.", "In our model, we use the residual unit instead of the second term to make the architecture deeper.", "We further design a more appropriate objective function to avoid biasing the data, by making the loss of a minority much greater than the majority.", "DifferSum shows superiority over other extractive methods in two aspects: 1) enhancing the representation of the pivotal information and 2) compensating the minority class and penalizing the majority ones.", "1 https://en.wikipedia.org/wiki/Differential amplifier Experimental results validate the effectiveness of DifferSum.", "The human evaluation also shows that our model is better in relevance compared with others.", "Our contributions in this work are concluded as follows: We propose a novel conceptualization of extractive summarization as rebalance problem.", "We introduce a heuristic approach, calculating and amplifying the semantic representation of pivotal information by integrating both the differential amplifier and residual learning.", "Recent research work on extractive summarization spans a large range of approaches.", "These works usually instantiate their encoder-decoder architecture by choosing RNN (Nallapati et al., 2017; Zhou et al., 2018), Transformer (Wang et al., 2019; Zhong et al., 2019b; Liu and Lapata, 2019; Zhang et al., 2019b) or GNN (Wang et al., 2020; Jia et al., 2020b) as encoder, autoregressive (Jad-hav and Rajan, 2018; Liu and Lapata, 2019) or RL-based (Narayan et al., 2018; Arumae and Liu, 2018; Luo et al., 2019) decoders.", "For two-stage summarization, Chen and Bansal (2018) and Bae et al. (2019) follow a hybrid extract-then-rewrite architecture, with policy-based RL to bridge the two networks together.", "Lebanoff et al. (2019), Xu and Durrett (2019) and Mendes et al. (2019) focus on the extract-then-compress learning paradigm, which will first train an extractor for content selection.", "Zhong et al. 
"However, these extractive approaches prefer to model all source informative context, and they pay little attention to the imbalance problem.", "The original deep residual learning was introduced in image recognition (He et al., 2016a) for the notorious degradation problem.", "Then, residual connections were introduced to natural language processing by the Transformer (Vaswani et al., 2017).", "[Figure 2: Overview of DifferSum; diagram labels include Weighted Pooling and FF & Sigmoid.] Essentially, we cannot determine the depth of the network very well when building a deep network.", "There is an optimal number of layers in the network, and layers beyond the optimal ones are redundant.", "We expect the redundant layers to map their input to their output, namely an identity mapping (He et al., 2016a,b; Veit et al., 2016; Balduzzi et al., 2018).", "ResNet (He et al., 2016a) addresses the degradation problem by introducing a deep residual learning framework.", "If an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers (Huang and Wang, 2017).", "In this paper, the residual unit serves as the second term of the differential amplifier to keep our architecture deep enough and capture pivotal information.", "We model the sentence extraction task as a sequence tagging problem (Kedzie et al., 2018).", "Given a document $D$ consisting of a sequence of $M$ sentences $[s_1, s_2, \ldots, s_M]$, each sentence $s_i$ consists of a sequence of $N$ words $[w_{i1}, w_{i2}, \ldots, w_{iN}]$.", "We denote by $h_i$ and $h_{ij}$ the embeddings of sentences and words in a continuous space.", "The extractive summarizer aims to produce a summary $S$ by selecting $m$ sentences from $D$ (where $m \ll M$).", "For each sentence $s_i \in D$, there is a ground-truth label $y_i \in \{0, 1\}$ and we predict a label $\hat{y}_i \in \{0, 1\}$, where 1 means that $s_i$ should be included in the summary.", "We assign a score $p(y_i \mid s_i, D, \theta)$ to quantify $s_i$'s relevance to the summary, where $\theta$ denotes the parameters of the neural network model.", "Finally, we assemble a summary $S$ by selecting $m$ sentences according to the probability $p(1 \mid s_i, D, \theta)$.",
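The formulation above reduces inference to scoring each sentence and keeping the top-m. A minimal sketch of that selection step, assuming per-sentence probabilities p(1 | s_i, D, theta) have already been computed (the function and inputs are illustrative):

```python
# Sketch of the selection step: given per-sentence probabilities
# p(1 | s_i, D, theta), assemble a summary from the top-m sentences.
def select_summary(sentences, probs, m=3):
    ranked = sorted(range(len(sentences)), key=lambda i: -probs[i])
    chosen = sorted(ranked[:m])          # restore document order
    return [sentences[i] for i in chosen]

sents = ["s1", "s2", "s3", "s4"]
print(select_summary(sents, [0.1, 0.9, 0.4, 0.8], m=2))  # ['s2', 's4']
```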
"The sentence encoder in extractive summarization models is usually a recurrent neural network with Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) or Gated Recurrent Units (Cho et al., 2014).", "In this paper, our sentence encoder builds on the BERT architecture (Devlin et al., 2019), a recently proposed, highly efficient model which is based on the deep bidirectional Transformer (Vaswani et al., 2017) and has achieved state-of-the-art performance in many NLP tasks.", "The Transformer aims at reducing the fundamental constraint of sequential computation which underlies most architectures (Liu et al., 2019).", "It eliminates recurrence in favor of a self-attention mechanism which directly models relationships between all words in a sentence.", "Our extractive model is composed of a sentence-level Transformer ($T_S$) and a document-level Transformer ($T_D$) (Liu et al., 2019).", "For each sentence $s_i$ in the input document, $T_S$ is applied to obtain a contextual representation for each word: $[u_{11}, u_{12}, \ldots, u_{MN}] = T_S([w_{11}, w_{12}, \ldots, w_{MN}])$ (1). The representation of a sentence is then acquired by weighted pooling: $a_{ij} = W_0 u_{ij}^{\top}$, $s_i = \frac{1}{N} \sum_{j=1}^{N} a_{ij} u_{ij}$ (2). The document-level Transformer $T_D$ takes $s_i$ as input and yields a contextual representation for each sentence: $[v_1, v_2, \ldots, v_M] = T_D([s_1, s_2, \ldots, s_M])$ (3). (Section 3.3: Deep Differential Amplifier.) In the Transformer model sketched above, inter-sentence relations are modeled by multi-head attention based on softmax functions, which only captures shallow structural information (Liu et al., 2019).", "A differential amplifier is a type of electronic amplifier that amplifies the difference between two input voltages but suppresses any voltage common to the two inputs.", "$V_{out} = A_d (V_{in}^{+} - V_{in}^{-})$ (4), where $V_{in}^{+}$ and $V_{in}^{-}$ are the input voltages and $A_d$ is the differential-mode gain.", "In practice, the gain is not exactly equal for the two inputs $V_{in}^{+}$ and $V_{in}^{-}$.", "For instance, even if $V_{in}^{+}$ and $V_{in}^{-}$ are equal, the output $V_{out}$ is not exactly zero.", "So, modern differential amplifiers are usually described by a more realistic expression, which includes a second term: $V_{out} = A_d (V_{in}^{+} - V_{in}^{-}) + A_c \frac{V_{in}^{+} + V_{in}^{-}}{2}$ (5), where $A_c$ is called the common-mode gain of the amplifier.", "Inspired by the differential amplifier above, we calculate and amplify the semantic difference between each sentence and the other sentences by a subtraction operation over the sentence representations $[v_1, v_2, \ldots, v_M]$.", "In particular, for sentence $s_i$, $V_{in}^{+}$ and $V_{in}^{-}$ are calculated as follows: $V_{in}^{+} = v_i$, $V_{in}^{-} = \frac{\sum_{j \in \{1, 2, \ldots, M\} \setminus \{i\}} v_j}{M - 1}$ (6). The original differential amplifier consists of two terms, and the second one avoids making the final output zero.", "For a deep neural network, however: 1) the inputs of the differential amplifier are vectors in a high-dimensional space, for which an exactly zero output is practically impossible, unlike for scalars; and 2) the second term of the differential amplifier is not suitable for a deep iterative architecture, since it is exposed to the degradation problem.", "Notably, residual learning was introduced in deep learning as shortcut connections that skip one or more layers, which makes it a natural alternative to the second term of the differential amplifier.", "The advantages of this method are: 1) the residual architecture highlights the pivotal information while preserving the original sentence representation; and 2) it is easier to optimize the residual mapping than to optimize the original one (He et al., 2016a).", "Hence, the residual unit is employed as the second term, along with an iterative refinement algorithm to enhance the final representation of sentences.", "The differential amplifier in our architecture consists of a few stacked layers that iteratively refine the pivotal representation.", "Let us consider $H(x)$ as an underlying mapping to be fit, with $x$ denoting the inputs to the first of these layers.", "Since multiple nonlinear layers can asymptotically approximate complicated functions (He et al., 2016a; Montufar et al., 2014), the differential amplifier mapping $H(x)$ is recast into a residual mapping $F(x)$ plus an identity mapping $x$: $H(x) = F(x) + x$ (7).",
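A minimal PyTorch sketch of one differential-amplifier layer following Eqs. (4)-(7): the differential-mode gain A_d is realized as a learnable linear map, which is one plausible reading of the paper rather than its confirmed implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of one differential-amplifier layer (Eqs. 4-7).
class DiffAmpLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.A_d = nn.Linear(dim, dim, bias=False)  # differential-mode gain

    def forward(self, v):                    # v: (M, dim) sentence vectors
        M = v.size(0)
        total = v.sum(dim=0, keepdim=True)   # sum over all sentences
        v_minus = (total - v) / (M - 1)      # mean of the *other* sentences
        F_v = self.A_d(v - v_minus)          # amplified semantic difference
        return F_v + v                       # residual: H(v) = F(v) + v

v = torch.randn(5, 16)
print(DiffAmpLayer(16)(v).shape)             # torch.Size([5, 16])
```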
"Obviously, residual learning is just a variant of the differential amplifier: $H(x) := V_{out}$, $F(x) := A_d (V_{in}^{+} - V_{in}^{-})$ (8), where the output voltage $V_{out}$ becomes the original mapping $H(x)$ and the first term of the amplifier, $A_d (V_{in}^{+} - V_{in}^{-})$, equals the residual mapping $F(x)$. In our model, the second term of the differential amplifier is replaced by the identity mapping $x$, which is the shortcut connection whose output is added to the output of $F(x)$.", "Furthermore, 1) the identity shortcut connections advance the architecture without extra parameters; and 2) the identity shortcut does not add computational complexity (He et al., 2016a). Thus, for sentence representation $v_i$, the deep differential amplifier is: $H(v_i) = A_d \big( v_i - \frac{\sum_{j \in \{1, 2, \ldots, M\} \setminus \{i\}} v_j}{M - 1} \big) + v_i$ (9). (Section 3.5: Iterative Structure Refinement.) The differential amplifier and residual unit specialize in modeling the pivotal information, while deeper neural networks with more parameters are able to infer semantics more accurately.", "So, an iterative refinement algorithm is introduced to enhance the final representation of pivotal information.", "For sentence representation $v_i$, the fundamental iterative unit is: $H(v_i) = F(v_i) + v_i$, $v_i \leftarrow H(v_i)$ (10), where we iteratively refine the representation $v_i$ for $K$ times; and thanks to the built-in residual mechanism, mostly shorter paths are needed during training, as longer paths do not contribute much gradient.", "Along with the supervision, each iteration pays more attention to the key semantic difference $F(v_i)$ of sentences with label 1, while pushing the other $F(v_j)$ toward zero.", "Conversely, previous extractive approaches without a differential amplifier can only classify those sentences by compensating or penalizing $v_i$ / $v_j$ directly, which is more difficult to model.", "Following previous work (Nallapati et al., 2017; Liu et al., 2019), we use a sigmoid function after a linear transformation to calculate the probability $r_i$ of selecting $s_i$ as a summary sentence: $r_i = \mathrm{sigmoid}(W_1 v_i^{\top})$ (11). (Section 3.6: Weighted Objective Function.) To rebalance the bias between the minority 1-class and the majority 0-class, we have built a deep differential amplifier to amplify and capture the unique information of summary sentences.", "Besides, another heuristic method is to make our model pay more attention to the 1-class: a weighted cross-entropy function.", "In particular, we further design a more appropriate objective function to avoid biasing toward the majority, by making the loss of a minority example much greater than that of a majority one.", "The weight we employ rebalances the observations for each class, so that the weighted sums of observations for the two classes are equal.", "Finally, we define the model's loss function as the summation of the losses of all iterations: $L = -\sum_{k=1}^{K} \Big\{ \frac{1}{M} \sum_{i=1}^{M} \Big[ \frac{\sum_{s_j \in D} \mathbb{I}(s_j \notin S)}{\sum_{s_j \in D} \mathbb{I}(s_j \in S)} \, y_i \log(r_i^{k}) + (1 - y_i) \log(1 - r_i^{k}) \Big] \Big\}$ (12), where $\mathbb{I}(\cdot)$ is an indicator function and $K$ is the number of iterations.",
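Putting Eqs. (9)-(12) together, the sketch below iterates the amplifier layer K times and accumulates the rebalanced cross-entropy at each iteration. It reuses the hypothetical `DiffAmpLayer` from the previous sketch; the per-document weight follows the negatives/positives ratio of Eq. (12).

```python
import torch
import torch.nn as nn

# Sketch of K-step iterative refinement (Eq. 10) with the
# class-rebalancing loss of Eq. 12. Assumes DiffAmpLayer from above.
class DifferSumHead(nn.Module):
    def __init__(self, dim, K=4):
        super().__init__()
        self.layer = DiffAmpLayer(dim)
        self.W1 = nn.Linear(dim, 1)
        self.K = K

    def forward(self, v, y):                      # v: (M, dim), y: (M,)
        loss, M = 0.0, v.size(0)
        w = (M - y.sum()) / y.sum().clamp(min=1)  # rebalancing weight
        for _ in range(self.K):
            v = self.layer(v)                     # v <- H(v), Eq. 10
            r = torch.sigmoid(self.W1(v)).squeeze(-1)   # Eq. 11
            loss = loss - (w * y * torch.log(r + 1e-8)
                           + (1 - y) * torch.log(1 - r + 1e-8)).mean()
        return r, loss

head = DifferSumHead(16)
r, loss = head(torch.randn(6, 16), torch.tensor([1., 0., 0., 0., 1., 0.]))
```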
"As shown in Table 1, we employ two widely used datasets with multi-sentence summaries: CNN and DailyMail (CNN/DM) (Hermann et al., 2015) and New York Times (NYT) (Sandhaus, 2008).", "CNN/DM: We used the standard split (Hermann et al., 2015) for training, validation, and test (90,266/1,220/1,093 for CNN and 196,961/12,148/10,397 for DailyMail), splitting sentences with the Stanford CoreNLP toolkit (Manning et al., 2014) and pre-processing the dataset following See et al. (2017) and Zhong et al. (2020).", "This dataset contains news articles and several associated abstractive highlights.", "We use the un-anonymized version, as in previous summarization work, and each document is truncated to 800 BPE tokens.", "NYT: Following previous work (Zhang et al., 2019b; Xu and Durrett, 2019), we use 137,778, 17,222 and 17,223 samples for training, validation, and test, respectively.", "We also followed their filtering procedure: documents with summaries of fewer than 50 words were removed from the dataset.", "Sentences were split with the Stanford CoreNLP toolkit (Manning et al., 2014).", "Input documents were truncated to 800 BPE tokens too.", "Our code is based on PyTorch (Paszke et al., 2019), and the pre-trained model employed in DifferSum is 'albert-xxlarge-v2' from huggingface/transformers (https://github.com/huggingface/transformers).", "We train DifferSum for two days (100,000 steps) on 2 GPUs (Nvidia Tesla V100, 32GB) with gradient accumulation every two steps.", "Adam with $\beta_1 = 0.9$, $\beta_2 = 0.999$ is used as the optimizer.", "The learning rate schedule uses warm-up over the first 10,000 steps.", "We tried 2/4/6/8 iteration steps for iterative refinement, and $K = 4$ is the best choice based on the validation set.", "We select the top-3 checkpoints based on the evaluation loss on the validation set, and report the averaged results on the test set.", "Following Jia et al. (2020a) and Jia et al. (2021), we employ the greedy algorithm for the sentence-level soft labels, which falls under the umbrella of subset selection. [Table 2: ROUGE F1 on CNN/DM.]", "Besides, we employ the Trigram Blocking strategy for decoding, which is a simple but powerful version of Maximal Marginal Relevance (Carbonell and Goldstein, 1998).", "Specifically, when predicting summaries for a new document, we first use the model to obtain the probability score $p(1 \mid s_i, D, \theta)$ for each sentence, and then we rank sentences by their scores and discard those which have trigram overlaps with their predecessors.",
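The Trigram-Blocking decoding step described above is easy to make concrete. A sketch, assuming whitespace tokenization and precomputed probabilities:

```python
# Sketch of Trigram Blocking: rank sentences by p(1 | s_i, D, theta)
# and skip any sentence sharing a trigram with already-selected ones.
def trigrams(sentence):
    toks = sentence.lower().split()
    return {tuple(toks[i:i + 3]) for i in range(len(toks) - 2)}

def trigram_blocking(sentences, probs, m=3):
    selected, seen = [], set()
    for i in sorted(range(len(sentences)), key=lambda i: -probs[i]):
        tri = trigrams(sentences[i])
        if tri & seen:
            continue                      # blocked: redundant trigram
        selected.append(i)
        seen |= tri
        if len(selected) == m:
            break
    return [sentences[i] for i in sorted(selected)]
```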
"ROUGE (Lin, 2004) is the standard metric for evaluating the quality of summaries.", "We report the ROUGE-1, ROUGE-2, and ROUGE-L of DifferSum computed with ROUGE-1.5.5.pl, which calculates the overlapping lexical units between extracted sentences and the ground truth.", "Table 2 shows the results on CNN/DailyMail.", "All of these scores are in accordance with the original papers.", "Following Nallapati et al. (2017) and Liu and Lapata (2019), we compare extractive summarization models against abstractive models, and the abstractive paradigm is certainly still at the frontier of summarization. [Table 3: ROUGE F1 on NYT.]", "The first part of the extractive approaches is the Lead-3 baseline and the Oracle upper bound, while the second part includes other extractive summarization models.", "We present our models at the bottom.", "It is clear that our DifferSum outperforms all extractive baseline models.", "Compared with the large version of BERTSUMEXT, our DifferSum achieves improvements of 0.85/1.02/0.93 on R-1, R-2, and R-L, which indicates that the pivotal information captured by the differential amplifier is more powerful than that captured by the other structures.", "Compared with early approaches, especially BERTSUMEXT, we observe that BERT outperforms all previous non-BERT-based summarization systems, and Trigram Blocking leads to a great improvement on all ROUGE metrics.", "MATCHSUM is a comparable competitor to our DifferSum; it formulates the extractive summarization task as a two-step problem and extracts-then-matches summaries based on a well-trained BERTSUMEXT.", "Therefore, we only train a large version of DifferSum for a fair comparison.", "Results on NYT are summarized in Table 3.", "Note that we use limited-length ROUGE recall as in Durrett et al. (2016), where the selected sentences are truncated to the length of the human-written summaries.", "The structure of Table 3 is similar to that of Table 2.", "The first four lines are abstractive models, and the next two lines are our golden baselines for extractive summarization. [Table 4: Ablation Study on CNN/DM.]", "The third part reports the performance of other extractive works and our model, respectively.", "Again, we observe that our differential amplifier modeling performs better than both LSTM and BERT.", "Meanwhile, we find that extractive approaches show superiority over abstractive models, and the ROUGE scores are higher than those on CNN/DailyMail.", "We propose several strategies to improve the performance of extractive summarization, including the differential amplifier (vs. a normal residual network), pre-trained ALBERT (vs. BERT), and iterative refinement (vs. none).", "To investigate the influence of these factors, we conduct experiments and list the results in Table 4.", "Notably, 1) the differential amplifier is more critical than ALBERT, because the pivotal information is essential and difficult for ALBERT to model; and 2) the iterative refinement mechanism enlarges the advantage of the differential amplifier, demonstrating the superiority of a deep architecture.", "It is not enough to rely only on the ROUGE evaluation for a summarization system, although ROUGE correlates well with human judgments (Owczarzak et al., 2012).", "Therefore, we design an experiment based on a ranking method to evaluate the performance of DifferSum by humans.", "Following Cheng and Lapata (2016), Narayan et al. (2018) and Zhang et al. (2019b), we first randomly select 40 samples from the CNN/DM test set.",
"Then the human participants are presented with one original document and a list of corresponding summaries produced by the different model systems.", "Participants are requested to rank these summaries (ties allowed) by taking into account informativeness (can the summary capture the important information from the document?) and fluency (is the summary grammatical?).", "Each document is annotated by three different participants separately.", "The gold summaries are also shown to the human participants in addition to the three model summaries (SummaRuNNer, BERTSUMEXT, and DifferSum).", "From the results shown in Table 5, it is clear that DifferSum is better in relevance compared with the others.", "Trigram Blocking leads to a great improvement on all ROUGE metrics for many extractive approaches (Liu and Lapata, 2019; Wang et al., 2020).", "It has become a fundamental module in extractive summarization.", "In this paper, DifferSum extracts summary sentences with the Trigram-Blocking algorithm, but does it bring a comparably great improvement, as in SummaRuNNer or BERTSUMEXT?", "As explained by Nallapati et al. (2017) and Liu and Lapata (2019), picking all sentences by comparing the predicted probability with a threshold may not be an optimal strategy, since the training data is very imbalanced in terms of the summary-membership of sentences.", "Therefore, the Trigram-Blocking algorithm is introduced to select the top-k sentences and reduce redundancy.", "Coincidentally, our DifferSum is designed to 1) rebalance the distribution of the majority and minority and 2) filter the tangential and redundant information.", "Thus, the Trigram-Blocking algorithm may be unnecessary for our DifferSum.", "Table 6 further summarizes the performance gain of the Trigram-Blocking strategy.", "It is clear that this strategy is essential for BERTSUMEXT and SummaRuNNer, bringing improvements of more than 2.68 and 0.98 on R-1, respectively, since neither of them models redundancy sufficiently.", "On the other hand, the effect of the Trigram-Blocking strategy is weak for DifferSum.", "In this paper, we emphasize the inherent imbalance problem between the majority 0-class and the minority 1-class.", "In fact, in the CNN/DailyMail dataset, there are plenty of documents with different numbers of sentences, ranging from 3 to 100 sentences. [Table 6: ROUGE Scores for Trigram-Blocking on the CNN/DM Test Set.]", "Meanwhile, the number of summary sentences, labeled with 1, ranges from 1 to 5, and on average only 7.33% of the sentences in CNN/DailyMail are labeled 1.", "What is worse, the distribution of the number of sentences per document is roughly uniform, so we cannot avoid the imbalance by cleaning the data.", "In this paper, we design another experiment to analyze the harmful effect of imbalanced classes.", "We train BERTSUMEXT (12 layers) from scratch on CNN/DailyMail, and evaluate the model on the test set to check the tendency of the ROUGE scores as the number of sentences accumulates.", "The result is shown in the line charts of Figure 1 and Figure 3a; we only consider documents in which the number of sentences is less than 55.", "Specifically, each document is truncated to 2000 BPE tokens to involve more sentences, but this cannot cover whole documents with more than 55 sentences.", "Therefore, we choose to calculate the ROUGE scores for documents with 3 to 55 sentences.",
55.", "For comparison, we train our DifferSum (12-layers) from scratch, and each document is truncated to 2000 BPE tokens too.", "The tendency of our DifferSum is as Figure 3b.", "Compared with the tendency of BERTSUMEXT, there is no obvious ROUGE decrease, demonstrating that our approach has strengthened the representation of pivotal and rebalanced the disproportionate ratio of summary sentences and other sentences.", "Note that more truncated BPE tokens will in-crease the final average ROUGE slightly, for it may lose some summary sentences when truncating too many tokens.", "Unfortunately, our 24-layers DifferSum can only be trained with 800 BPE tokens for the limitation of GPU source.", "A key issue motivating the sentence-level Transformer ( TS ) and the document-level Transformer ( TD ) is that the features for words after the TS might be at different scales or magnitudes.", "This can be due to some words having very sharp or very distributed attention weights when summing over the features of the other words.", "In this paper, we apply two ways to map the words representation into its sentence representation: weighted-pooling at Equation 2 and picking [CLS] token as sentence (Liu and Lapata, 2019).", "Table 7 shows that [CLS] is not enough to convey enough informative information of words for both our DifferSum and BERTSUMEXT.", "Especially, DifferSum is more sensitive to the word features since our differential amplifier may amplify the semantic features effectively.", "In this paper, we introduce a heuristic model, DifferSum, 1) to calculate and amplifier the pivotal information and 2) to rebalance the distribution of minority 1-class and majority 0-class.", "Besides, we employ another weighted cross-entropy function to compensate for the imbalance.", "Experimental results show that our method significantly outperforms previous models.", "In the future, we would like to generalize DifferSum to other fields.", "This research is supported by the National Key Research and Development Program of China", "(NO.2017YFC0820700) and National Natural Science Foundation of China (No.61902394).", "We thank all authors for their contributions and all anonymous reviewers for their constructive comments." ]
[ "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "other", "result", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "abstain", "other", "other", "other" ]
[ "In conversational speech, the acoustic signal provides cues that help listeners disambiguate difficult parses.", "For automatically parsing spoken utterances, we introduce a model that integrates transcribed text and acoustic-prosodic features using a convolutional neural network over energy and pitch trajectories coupled with an attention-based recurrent neural network that accepts text and prosodic features.", "We find that different types of acoustic-prosodic features are individually helpful, and together give statistically significant improvements in parse and disfluency detection F1 scores over a strong text-only baseline.", "For this study with known sentence boundaries, error analyses show that the main benefit of acoustic-prosodic features is in sentences with disfluencies, attachment decisions are most improved, and transcription errors obscure gains from prosody.", "While parsing has become a relatively mature technology for written text, parser performance on conversational speech lags behind.", "Speech poses challenges for parsing: transcripts may contain errors and lack punctuation; even perfect transcripts can be difficult to handle because of disfluencies (restarts, repetitions, and self-corrections), filled pauses (um, uh), interjections (like), paren-theticals (you know, I mean), and sentence fragments.", "Some of these phenomena can be handled in standard grammars, but disfluencies typically require extensions of the model.", "Different approaches have been explored in both constituency parsing (Charniak and Johnson, 2001; Johnson and Charniak, 2004) and dependency parsing (Rasooli and Tetreault, 2013; Honnibal and Johnson, 2014).", "Despite these challenges, speech carries helpful extra information beyond the words associated with the prosodic structure of an utterance and encoded via variation in timing and intonation.", "Speakers pause in locations that are correlated with syntactic structure (Grosjean et al., 1979), and listeners use prosodic structure in resolving syntactic ambiguities (Price et al., 1991).", "Prosodic cues also signal disfluencies by marking the interruption point (Shriberg, 1994).", "However, most speech parsing systems in practice take little advantage of these cues.", "Our study focuses on this last challenge, aiming to incorporate prosodic cues in a neural parser, handling disfluencies as constituents via a neural attention mechanism.", "A challenge of incorporating prosody in parsing is that multiple acoustic cues interact to signal prosodic structure, including pauses, lengthening, fundamental frequency modulation, and spectral shape.", "These cues also vary with the phonetic segment, emphasis, emotion and speaker, so feature extraction typically involves multiple time windows and normalization techniques.", "The most successful constituent parsers have mapped these features to prosodic boundary posteriors by using labeled training data (Kahn et al., 2005; Hale et al., 2006; Dreyer and Shafran, 2007).", "The approach proposed here takes advantage of advances in neural networks to automatically learn a good feature representation without the need to explicitly represent prosodic constituents.", "To narrow the scope of this work and facilitate error analysis, our experiments use known transcripts and sentence segmentation.", "Our work offers the following contributions.", "We introduce a framework for directly integrating acoustic-prosodic features with text in a neural encoder-decoder parser that does not require hand-annotated prosodic 
structure.", "We demonstrate improvements in constituent parsing of conversational 69 speech over a high-quality text-only parser and provide analyses showing where prosodic features help and that assessment of their utility is affected by human transcription errors.", "Our model maps a sequence of word-level input features to a linearized parse output sequence.", "The word-level input feature vector consists of the concatenation of (learnable) word embeddings e i and several types of acoustic-prosodic features, described in Section 2.3.", "We assume the availability of a training treebank of conversational speech (in our case, Switchboard-NXT (Calhoun et al., 2010)) and corresponding constituent parses.", "The transcriptions are preprocessed by removing punctuation and lower-casing all text to better mimic the speech recognition setting.", "Following Vinyals et al. (2015), the parse trees are linearized, and pre-terminals are normalized as XX (see Appendix A.1).", "Our attention-based encoder-decoder model is similar to the one used by Vinyals et al. (2015).", "The encoder is a deep long short-term memory recurrent neural network (LSTM-RNN) (Hochreiter and Schmidhuber, 1997) that reads in a word-level inputs, 1 represented as a sequence of vectors x = ( x 1 , , x T s ) , and outputs high-level features h = ( h 1 , , h T s ) where h i = LSTM ( x i , h i 1 ) .", "2 The parse decoder is also a deep LSTM-RNN that predicts the linearized parse sequence y = ( y 1 , , y T o ) as follows: P ( y | x ) = T o Y t =1 P ( y t | h , y <t ) In attention-based models, the posterior distribution of the output y t at time step t is given by: P ( y t | h , y <t ) = softmax ( W s [ c t ; d t ] + b s ) , where vector b s and matrix W s are learnable parameters; c t is referred to as a context vector that summarizes the encoder's output h ; and d t is the 1 As in Vinyals et al. (2015) the input sequence is processed in reverse order, as shown in Figure", "1. 2 For brevity we omit the LSTM equations.", "The details can be found, e.g., in Zaremba et al. (2014).", "decoder hidden state at time step t , which captures the previous output sequence context y <t .", "u it = v > tanh( W 1 h i + W 2 d t + b a ) t = softmax ( u t ) c t = T s X i =1 ti h i where vectors v , b a and matrices W 1 , W 2 are learnable parameters; u t and t are the attention score and attention weight vector, respectively, for decoder time step t .", "The above attention mechanism is only content based, i.e., it is only dependent on h i , d t .", "It is not location -aware, i.e., it does not consider the loca-tion of the previous attention vector.", "For parsing conversational text, location awareness is benefi-cial since disfluent structures can have duplicate words/phrases that may confuse the attention mechanism.", "In order to make the model location-aware, the attention mechanism takes into account the previous attention weight vector t 1 .", "In particular, we use the attention mechanism proposed by Chorowski et al. 
"In particular, we use the attention mechanism proposed by Chorowski et al. (2015), in which $\alpha_{t-1}$ is represented via a feature vector $f_t = F * \alpha_{t-1}$, where $F \in \mathbb{R}^{k \times r}$ represents $k$ learnable convolution filters of width $r$.", "The filters are used to perform a 1-D convolution over $\alpha_{t-1}$ to extract $k$ features $f_t^i$ for each time step $i$ of the input sequence.", "The extracted features are then incorporated in the alignment score calculation as: $u_t^i = v^{\top} \tanh(W_1 h_i + W_2 d_t + W_f f_t^i + b_a)$, where $W_f$ is another learnable parameter matrix.", "Finally, the decoder state $d_t$ is computed as $d_t = \mathrm{LSTM}([\tilde{y}_{t-1}; c_{t-1}], d_{t-1})$, where $\tilde{y}_{t-1}$ is the embedding vector corresponding to the previous output symbol $y_{t-1}$.", "As we will see in Sec. 4.1, the location-aware attention mechanism is especially useful for handling disfluencies.",
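A PyTorch sketch of the location-aware attention just described, combining the content terms with the convolutional location features f_t; dimensions and filter settings are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of location-aware attention (Chorowski et al., 2015):
# u_t^i = v^T tanh(W1 h_i + W2 d_t + Wf f_t^i + b_a), where f_t comes
# from a 1-D convolution over the previous attention weights.
class LocationAwareAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, att_dim, k=5, r=40):
        super().__init__()
        self.W1 = nn.Linear(enc_dim, att_dim, bias=False)
        self.W2 = nn.Linear(dec_dim, att_dim, bias=False)
        self.Wf = nn.Linear(k, att_dim, bias=False)
        self.conv = nn.Conv1d(1, k, kernel_size=r, padding=r // 2)
        self.v = nn.Linear(att_dim, 1, bias=False)
        self.b_a = nn.Parameter(torch.zeros(att_dim))

    def forward(self, h, d_t, alpha_prev):        # h: (T, enc_dim)
        f = self.conv(alpha_prev.view(1, 1, -1))  # (1, k, ~T)
        f = f[..., :h.size(0)].squeeze(0).t()     # (T, k)
        u = self.v(torch.tanh(self.W1(h) + self.W2(d_t) + self.Wf(f) + self.b_a))
        alpha = F.softmax(u.squeeze(-1), dim=0)   # attention weights
        c_t = (alpha.unsqueeze(-1) * h).sum(dim=0)  # context vector
        return c_t, alpha

h, d_t = torch.randn(12, 64), torch.randn(32)
c_t, alpha = LocationAwareAttention(64, 32, 24)(h, d_t, torch.full((12,), 1 / 12))
```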
phonemes in the word's dictionary pronunciation.", "We refer to the manually defined prosodic feature pair of p i and i as i .", "Fundament frequency (f0) and Energy (E) contours (f0/E).", "We use a CNN to automatically learn the mapping from the time series of f0/E features to a word-level vector.", "The contour features are extracted from 25-ms frames with 10-ms hops using Kaldi (Povey et al., 2011).", "Three f0 features are used: warped Normalized Cross Correlation Function (NCCF), log-pitch with Probability of Voicing (POV)-weighted mean subtraction over a 1.5-second window, and the estimated derivative (delta) of the raw log pitch.", "Three energy features are extracted from the Kaldi 40-mel-frequency filter bank features: E total , the log of total energy normalized by dividing by the speaker side's max total energy; E low , the log of total energy in the lower 20 mel-frequency bands, normalized by total energy, and E high , the log of total energy in the higher 20 mel-frequency bands, normalized by total energy.", "Multi-band energy features are used as a 71 simple mechanism to capture articulatory strengthening at prosodic constituent onsets (Fourgeron and Keating, 1997).", "Figure 1 summarizes the feature learning approach.", "The f0 and E features are processed at the word level: each sequence of frames corresponding to a time-aligned word (and potentially its surrounding context) is convolved with N filters of m sizes (a total of mN filters).", "The motivation for the multiple filter sizes is to enable the computation of features that capture information on different time scales.", "For each filter, we perform a 1-D convolution over the 6-dimensional f0/E features with a stride of", "1. Each filter output is max-pooled, resulting in mN -dimensional speech features s i .", "Our overall acoustic-prosodic feature vector is the concatenation of p i , i , and s i in various combinations.", "Our core corpus is Switchboard-NXT (Calhoun et al., 2010), a subset of the Switchboard corpus (Godfrey and Holliman, 1993): 2,400 telephone conversations between strangers; 642 of these were hand-annotated with syntactic parses and further augmented with richer layers of annotation facilitated by the NITE XML toolkit (Calhoun et al., 2010).", "Our sentence segmentations and syntactic trees are based on the annotations from the Treebank set, with a few manual corrections from the NXT release.", "This core dataset consists of 100K sentences, totaling 830K tokens forming a vocabulary of 13.5K words.", "We use the time alignments available from NXT, which is based on a corrected word transcript that occasionally differs from the Treebank, leading to some missing time alignments.", "We follow the sentence boundaries defined by the parsed data available, 4 and the data split (90% train; 5% dev; 5% test) defined by related work done on Switchboard (Charniak and Johnson, 2001; Kahn et al., 2005; Honnibal and Johnson, 2014).", "The standard evaluation metric for constituent parsing is the parseval metric which uses bracketing precision, recall, and F1, as in the canonical implementation of EVALB.", "5 For written text, punc-4 Note that these sentence units can be inconsistent with other layers of Switchboard annotations, such as slash units .", "tuation is sometimes represented as part of the sequence and impacts the final score, but for speech the punctuation is not explicitly available so it does not contribute to the score.", "Another challenge of transcribed speech is the presence of disfluencies.", 
"Speech repairs are indicated under EDITED nodes in Switchboard parse trees, which include structure under these nodes that is not of interest for simple text clean-up.", "Therefore, some studies report flattened-edit parseval F1 scores (flat-F1), which is parseval computed on trees where the structure under edit nodes has been eliminated so that all leaves are immediate children.", "We report both scores for the baseline text-only model showing that the differences are small, then use the standard parseval F1 score for most results.", "6 Disfluencies are particularly problematic for statistical parsers, as explained by Charniak and Johnson (2001), and some systems incorporate a separate disfluency detection stage.", "For this reason, and because it is useful for understanding system performance, most studies also report disfluency detection performance, which is measured in terms of the F1 score for detecting whether a word is in an edit region.", "Our approach does not involve a separate disfluency detection stage, but identifies disfluencies implicitly via the parse structure.", "Consequently, the disfluency detection results are not competitive with work that directly optimize for disfluency detection.", "We report disfluency detection scores primarily as a diagnostic.", "Most previous work on integrating prosody and parsing has used the Switchboard corpus, but it is still difficult to compare results because of differences in constraints, objectives and the use of constituent vs. dependency structure, as discussed further in Section 6.", "The most relevant prior studies (on constituent parsing) that we compare to are a bit old.", "The text-only result from our neural parser represents a stronger baseline and is important for decoupling the impact of prosody vs. 
the parsing framework.", "Both the encoder and decoder are 3-layer deep LSTM-RNNs with 256 hidden units in each layer.", "For the location-aware attention, the convolution operation uses 5 filters of width 40 each.", "We use 512-dimensional embedding vectors to repre-6 A variant of the flat-F1 score is used in (Charniak and Johnson, 2001; Kahn et al., 2005), which uses a relaxed edited node precision and recall but also ignores filled pauses.", "(S.", "7 A number of configurations are explored for the acoustic-prosodic features, tuning based on dev set parsing performance.", "Pause embeddings are tuned over { 4 , 16 , 32 } dimensions.", "For the CNN, we try different configurations of filter widths w { [10 , 25 , 50] , [5 , 10 , 25 , 50] } and number of filters N { 16 , 32 , 64 , 128 } for each filter width.", "8 These filter size combinations are chosen to capture f0 and energy phenomena on various levels: w = 5 , 10 for sub-word, w = 25 for word, and w = 50 for word and extended context.", "Our best model uses 32-dimensional pause embeddings and N = 32 filters of widths w = [5 , 10 , 25 , 50] , which corresponds to m = 4 and 128 filters.", "For optimization we use Adam (Kingma and Ba, 2014) with a minibatch size of 64.", "The initial learning rate is 0 .", "001 which is decayed by a factor of 0.9 whenever training loss, calculated after every 500 updates, degrades relative to the worst of its previous 3 values.", "All models are trained for up to 50 epochs with early stopping.", "For regularization, dropout with 0.3 probability is applied on the output of all LSTM layers (Pham et al., 2014).", "For inference, we use a greedy decoder to generate the linearized parse.", "The output token with maximum posterior probability is chosen at every time step and fed as input in the next time step.", "The decoder stops upon producing the end-of-sentence symbol.", "We use TensorFlow (Abadi et al., 2015) to implement all models.", "9 4 Results 4.1 Text-only Results Model F1 flat-F1 fluent disf Berkeley 85.41 85.91 90.52 83.08 C-attn 83.33 83.20 90.86 79.94 CL-attn 87.85 87.68 92.07 85.95 Table 1: Scores of text-only models on the dev set: 2044 fluent and 3725 disfluent sentences.", "7 The number of layers, dimension of hidden units, dimension of embedding, and convolutional attention filter parameters of the text-only parser were explored in earlier experiments on the development set and then fixed as described.", "8 Note that a filter of width 10 has size 6 10 , since the features are of dimension 6.", "We first show our results on the model using only text (i.e. x i = e i ) to establish a strong baseline, on top of which we can add acoustic-prosodic features.", "We experiment with the content only attention model used by Vinyals et al. (2015) and the content+location attention of Chorowski et al. 
(2015).", "For comparison with previous nonneural models, we use a high-quality latent-variable parser, the Berkeley parser (Petrov et al., 2006), retrained on our Switchboard data.", "Table 1 compares the three text-only models.", "In terms of F1, the con-tent+location attention beats the Berkeley parser by about 2.5% and content -only attention by about 4.5%.", "Flat-F1 scores for both encoder-decoder models is lower than their corresponding F1 scores, suggesting that the encoder-decoder models do well on predicting the internal structure of EDIT nodes while the reverse is true for the Berkeley parser.", "To explain the gains of content+location attention over content -only attention, we compare their scores on fluent (without EDIT nodes) and disfluent sentences, shown in Table", "1. It is clear that most of the gains for content+location attention are from disfluent sentences.", "A possible explanation is the presence of duplicate words or phrases in disfluent sentences, which can be problematic for a content only attention model.", "Since our best model is the content+location attention model, we will henceforth refer to it as the CL-attn text-only model.", "All models using acoustic-prosodic features are extensions of this model, which provides a strong text-only baseline.", "We extend our CL-attn model with the three kinds of acoustic-prosodic features: pause ( p ), word duration ( ), and CNN mappings of fundamental frequency (f0) and energy (E) features (f0/E-CNN).", "The results of several model configurations on our dev set are presented in Table", "2. First, we note that adding any combination of acoustic-prosodic features (individually or in sets) improves performance over the text-only baseline.", "However, certain combinations of acoustic-prosodic features are not always better than their subsets.", "The text + p + + f0/E-CNN model that uses all three types of features has the best performance with a gain of 0.7% over the already-strong text-only baseline.", "We will henceforth refer to the text + p + + f0/E-CNN model as our best model.", "As a robustness check, we report results of averaging 10 runs on the CL-attn text-only and the best model in Table", "3. We performed a bootstrap test (Efron and Tibshirani, 1993) that simulates 10 5 random test draws on the models giving median performance on the dev set.", "These median models gave a statistically significant difference between the text-only and best model ( p -value < 0 . 
02 ).", "Additionally, a simple t-test over the two sets of 10 results also shows statistical significance p -value < 0 .", "03 .", "Table 4 presents the results on the test set.", "Again, adding the acoustic-prosodic features improves over the text-only baseline.", "The gains are statistically significant for the best model with p -value < 0 .", "02 , again using a bootstrap test with simulated 10 5 random test draws on the two models.", "Table 5 includes results from prior studies that compare systems using text alone with ones that incorporate prosody, given hand transcripts and sentence segmentation.", "It is difficult to compare systems directly, because of the many differences in the experimental set-up.", "For example, the original Charniak and Johnson (2001) result (reporting F=85.9 for parsing and F=78.2 for disfluencies) leverages punctuation in the text stream, which is not realistic for speech transcripts and not used in most other work.", "Our work benefits from more text training material than others, but others benefit from gold part-of-speech tags.", "Kahn et al. (2005) use a modified sentence segmentation.", "There are probably minor differences in handling of word fragments and scoring edit regions.", "Thus, this table primarily shows that our framework leads to more benefits from sentence-internal prosodic cues than others have obtained.", "Effect of sentence length.", "Figure 2 shows performance differences between our best model and the text-only model for varying sentence lengths.", "The performance difference between our best model and the text-only model increases with sentence length.", "This is likely because longer sentences more often have multiple prosodic phrases and disfluencies.", "Effect of disfluencies.", "Table 6 presents parse scores on the subsets of fluent and disfluent sentences, showing that the performance gain is in the disfluent set (65% of the dev set sentences).", "Because sentence boundaries are given, and so many fluent sentences in spontaneous speech are short, there is less potential for benefit from prosody in the fluent set.", "Types of errors.", "We use the Berkeley Parser Analyzer (Kummerfeld et al., 2012) to compare the types of errors made by the different parsers.", "10 Table 7 presents the relative error reductions over the text-only baseline achieved by the text + p model and our best model for disfluent sentences.", "The two models differ in the types of error reductions they provide.", "Including pause information gives largest improvements on PP attachment and Modifier at-10 This analysis omits the 1% of the sentences that did not have timing information.", "tachment errors.", "Adding the remaining acoustic-prosodic features helps to correct more types of attachment errors, especially VP and NP attachment.", "Figure 3 demonstrates one case where the pause feature helps in correcting a PP attachment error made by a text-only parser.", "Other interesting examples (see Appendix A.2) suggest that the learned f0/E features help reduce NP attachment errors where the audio reveals a prominent word at the constituent boundary, even though there is no pause at that word.", "Effect of transcription errors.", "The results and analyses so far have assumed that we have reliable transcripts.", "In fact, the original transcripts contained errors, and the Treebank annotators used these without reference to audio files.", "Mississippi State University (MS-State) ran a clean-up project 75 that produced more accurate word transcripts and time 
alignments (Deshmukh et al., 1998).", "The NXT corpus provides reconciliation between Treebank and MS-State transcripts in terms of annotating missed/extra/substituted words, but parses were not re-annotated.", "The transcript errors mean that the acoustic signal is inconsistent with the gold parse tree.", "Below are some examples of fluent sentences (according to the Treebank transcripts) with transcription errors, for which prosodic features hurt parsing.", "Words that transcribers missed are in brackets and those inserted are underlined.", "a sudden you be > all alone it 'd be nice to go someplace with people similar to you to have friends S2: uh uh < i have had > my wife 's picked up a couple of things saying uh boy if we could refinish that 'd be a beautiful piece of furniture", "Multi-syllable errors are especially problematic, leading to serious inconsistencies between the text and the acoustic signal.", "Further, the missed words lead to an incorrect attachment in the gold parse in S1 and a missing restart edit in S2.", "Indeed, for sentences with consecutive transcript errors, which we expect to impact the prosodic features, there is a statistically significant ( p -value < 0 . 05 ) negative effect on parsing with prosody.", "Not included in this analysis are sentence boundary errors, which also change the gold parse.", "Thus, prosody may be more useful than results here indicate.", "Related work on parsing conversational speech has mainly addressed four problems: speech recognition errors, unknown sentence segmentation, disfluencies, and integrating prosodic cues.", "Our work addresses the last two problems, which involve studies based on hand-transcribed text and known sentence boundaries, as in much speech parsing work.", "The related studies are thus the focus of this discussion.", "We describe studies using the Switchboard corpus, since it has dominated work in this area, being the largest source of treebanked English spontaneous speech.", "One major challenge of parsing conversational speech is the presence of disfluencies, which are grammatical and prosodic interruptions.", "Disfluencies include repetitions (I am + I am'), repairs (I am + we are'), and restarts (What I + Today is the...'), where the +' corresponds to an interruption point.", "Repairs often involve parallel grammatical constructions, but they can be more complex, involving hedging, clarifications, etc.", "Charniak and Johnson (Charniak and Johnson, 2001; Johnson and Charniak, 2004) demonstrated that disfluencies are different in character than other constituents and that parsing performance improves from combining a PCFG parser with a separate module for disfluency detection via parse rescoring.", "Our approach does not use a separate disfluency detection module; we hypothesized that the location-sensitive attention model helps handle these differences based on analysis of the text-only results (Table 1).", "However, more explicit modeling of disfluency pattern match characteristics in a dependency parser (Hon-nibal and Johnson, 2014) leads to better disfluency detection performance (F = 84.1 vs. 
76.7 for our text only model).", "Pattern match features also benefit a neural model for disfluency detection alone (F = 87.0) (Zayats et al., 2016), and similar gains are observed by formulating disfluency detection in a transition-based framework (F = 87.5) (Wang et al., 2017).", "Experiments with oracle disfluencies as features improve the CL-attn text-only parsing performance from 87.85 to 89.38 on the test set, showing that more accurate disfluency modeling is a potential area of improvement.", "It is well known that prosodic features play a role in human resolution of syntactic ambiguities, with more than two decades of studies seeking to incorporate prosodic features in parsing.", "A series of studies looked at constituent parsing informed by the presence (or likelihood) of prosodic breaks at word boundaries (Kahn et al., 2004, 2005; Hale et al., 2006; Dreyer and Shafran, 2007).", "Our approach improves over performance of these systems using raw acoustic features, without the need for hand-labeling prosodic breaks.", "The gain is in part due to the improved text-based parser, but the incremental benefit of prosody here is similar to that in these prior studies.", "(In prior work using acoustic feature directly (Gregory et al., 2004), prosody actually degraded", "performance.) Our analyses of the impact of prosody also extends prior work.", "Prosody is also known to provide useful cues to sentence boundaries (Liu et al., 2006), and automatic sentence segmentation performance has been shown to have a significant impact on parsing performance (Kahn and Ostendorf, 2012).", "In our study, sentence boundaries are given so as to focus on the role of prosody in resolving sentence-internal parse ambiguity, for which prior work had 76 obtained smaller gains.", "Studies have also shown that parsing lattices or confusion networks can improve ASR performance (Kahn and Ostendorf, 2012; Yoshikawa et al., 2016).", "Our analysis of performance degradation for the system with prosody when the gold transcript and associated parse are in error suggests that prosody may have benefits for parsers operating on alternative ASR hypotheses.", "The results we compare to in Section 4 are relatively old.", "More recent parsing results on spontaneous speech involve dependency parsers using only text (Rasooli and Tetreault, 2013; Honnibal and Johnson, 2014; Yoshikawa et al., 2016), with the exception of a study on unsupervised dependency parsing (Pate and Goldwater, 2013).", "With the recent success of transition-based neural approaches in dependency parsing, researchers have adapted transition-based ideas to constituent parsing (Zhu et al., 2013; Watanabe and Sumita, 2015; Dyer et al., 2016).", "These approaches have not yet been used with speech, to our knowledge, but we expect it to be straightforward to extend our prosody integration framework to these systems, both for dependency and constituency parsing.", "We have presented a framework for directly integrating acoustic-prosodic features with text in a neural encoder-decoder parser that does not require hand-annotated prosodic structure.", "On conversational sentences, we obtained strong results when including word-level acoustic-prosodic features over using only transcriptions.", "The acoustic-prosodic features provide the largest gains when sentences are disfluent or long, and analysis of error types shows that these features are especially helpful in repairing attachment errors.", "In cases where prosodic features hurt performance, we observe a 
statistically significant negative effect caused by imperfect human transcriptions that make the ground truth parse tree and the acoustic signal inconsistent, which suggests that there is more to be gained from prosody than observed in prior studies.", "We thus plan to investigate aligning the Treebank and MS-State versions of Switchboard for future work.", "Here, we assumed known sentence boundaries and hand transcripts, leaving open the question of whether increased benefits from prosody can be gained by incorporating sentence segmentation in parsing and/or in parsing ASR lattices.", "Most prior work using prosody in parsing has been on constituent parsing, since prosodic cues tend to align with constituent boundaries.", "However, it remains an open question as to whether dependency, constituency or other parsing frameworks are better suited to leveraging prosody.", "Our study builds on a parser that uses reverse order text processing, since it provides a stronger text-only baseline.", "However, the prosody modeling component relies only on a 1 second lookahead of the current word (for pause binning), so it could be easily incorporated in an incremental parser.", "We thank the anonymous reviewers for their helpful feedback.", "We also thank Pranava Swaroop Madhyastha, Hao Tang, Jon Cai, Hao Cheng, and Navdeep Jaitly for their help with initial discussions and code setup.", "This research was partially funded by a Google Faculty Research Award to Mohit Bansal, Karen Livescu, and Kevin Gimpel; and NSF grant no.", "IIS-1617176.", "The opinions expressed in this work are those of the authors and do not necessarily reflect the views of the funding agency." ]
[ "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "other", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "other", "abstain", "other", "other", "objective", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "objective", "other", "abstain", "other", "abstain", "abstain", "other", "other", "objective", "method", "result", "abstain", "result", "objective", "method", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other" ]
[ "The performance of neural machine translation systems is commonly evaluated in terms of BLEU.", "However, due to its reliance on target language properties and generation, the BLEU metric does not allow an assessment of which translation directions are more difficult to model.", "In this paper, we propose cross-mutual information ( XMI ): an asymmetric information-theoretic metric of machine translation difficulty that exploits the probabilistic nature of most neural machine translation models.", "XMI allows us to better evaluate the difficulty of translating text into the target language while controlling for the difficulty of the target-side generation component independent of the translation task.", "We then present the first systematic and controlled study of cross-lingual translation difficulties using modern neural translation systems.", "Code for replicating our experiments is available online at https://github.com/ e-bug/nmt-difficulty .", "Machine translation (MT) is one of the core research areas in natural language processing.", "Current state-of-the-art MT systems are based on neural networks (Sutskever et al., 2014; Bahdanau et al., 2015), which generally surpass phrase-based systems (Koehn, 2009) in a variety of domains and languages (Bentivogli et al., 2016; Toral and Sanchez-Cartagena, 2017; Castilho et al., 2017; Bojar et al., 2018; Barrault et al., 2019).", "Using phrase-based MT systems, various controlled studies to understand where the translation difficulties lie for different language pairs were conducted (Birch et al., 2008; Koehn et al., 2009).", "However, comparable studies have yet to be performed for neural machine translation (NMT).", "As a result, it is still unclear whether all translation directions are equally easy (or hard) to model for NMT.", "This paper hence aims at filling this gap: Ceteris paribus , MI : characterize language H ( S ) H ( S | T ) MI ( S ; T ) intrinsic source/targetlanguagevariation sharedinfor-mation H ( T ) H ( T | S ) MI ( S ; T ) XMI : characterize models H q LM ( S ) H q MT ( S | T ) XMI ( T S ) intrinsic source/targetmodelingdifficulty transferdiffi-culty H q LM ( T ) H q MT ( T | S ) XMI ( S T ) Figure 1: Left: Decomposing the uncertainty of a sentence as mutual information plus language-inherent uncertainty: mutual information ( MI ) corresponds to just how much easier it becomes to predict T when you are given S .", "is it easier to translate from English into Finnish or into Hungarian?", "And how much easier is it?", "Conversely, is it equally hard to translate Finnish and Hungarian into another language?", "Based on BLEU (Papineni et al., 2002) scores, previous work (Belinkov et al., 2017) suggests that translating into morphologically rich languages, such as Hungarian or Finnish, is harder than translating into morphologically poor ones, such as English.", "However, a major obstacle in the cross-lingual comparison of MT systems is that many automatic evaluation metrics, including BLEU and METEOR (Banerjee and Lavie, 2005), are not cross-lingually comparable.", "In fact, being a function of n -gram overlap between candidate and reference translations, they only allow for a fair comparison of the performance between models when translating into the same test set in the same target language.", "Indeed, one cannot and should not draw conclusions about the difficulty of translating a source language into different target languages purely based on BLEU (or METEOR) scores.", "In response, we propose cross-mutual information ( 
XMI), a new metric towards cross-linguistic comparability in NMT.", "In contrast to BLEU, this information-theoretic quantity no longer explicitly depends on language, model, and tokenization choices.", "It does, however, require that the models under consideration are probabilistic.", "As an initial starting point, we perform a case study with a controlled experiment on 21 European languages.", "Our analysis showcases XMI's potential for shedding light on the difficulties of translation as an effect of the properties of the source or target language.", "We also perform a correlation analysis in an attempt to further explain our findings.", "Here, in contrast to the general wisdom, we find no significant evidence that translating into a morphologically rich language is harder than translating into a morphologically impoverished one.", "In fact, the only significant correlate of MT difficulty we find is source-side type-token ratio.", "Human evaluation will always be the gold standard of MT evaluation.", "However, it is both time-consuming and expensive to perform.", "To help researchers and practitioners quickly deploy and evaluate new systems, automatic metrics that correlate fairly well with human evaluations have been proposed over the years (Banerjee and Lavie, 2005; Snover et al., 2006; Isozaki et al., 2010; Lo, 2019).", "BLEU (Papineni et al., 2002), however, has remained the most common metric to report the performance of MT systems.", "BLEU is a precision-based metric: a BLEU score is proportional to the geometric average of the number of n-grams in the candidate translation that also appear in the reference translation for 1 <= n <= 4. [Footnote 1: BLEU also corrects for reference coverage and includes a length penalty, but we focus on the high-level picture.]", "In the context of our study, we take issue with two shortcomings of BLEU scores that prevent a cross-linguistically comparable study.", "First, it is not possible to directly compare BLEU scores across languages because different languages might express the same meaning with a very different number of words.", "For instance, agglutinative languages like Turkish often use a single word to express what other languages have periphrastic constructions for.", "To be concrete, the expression \"I will have been programming\" is five words in English, but could easily have been one word in a language with sufficient morphological markings; this unfairly boosts BLEU scores when translating into English.", "The problem is further exacerbated by tokenization techniques, as finer granularities result in more partial credit and higher n-gram matches (Post, 2018).", "In summary, BLEU only allows us to compare models for a fixed target language and tokenization scheme, i.e.
it only allows us to draw conclusions about the difficulty of translating different source languages into a specific target one (with downstream performance as a proxy for difficulty).", "Thus, BLEU scores cannot provide an answer to which translation direction is easier between any two source-target pairs.", "In this work, we address this particular shortcoming by considering an information-theoretic evaluation.", "Formally, let V_S and V_T be source- and target-language vocabularies, respectively.", "Let S and T be source- and target-sentence-valued random variables for languages S and T, respectively; then S and T respectively range over V_S^* and V_T^*.", "These random variables S and T are distributed according to some true, unknown probability distribution p.", "The cross-entropy between the true distribution p and a probabilistic neural translation model q_MT(t | s) is defined as: H_{q_MT}(T | S) = - \sum_{t \in V_T^*} \sum_{s \in V_S^*} p(t, s) \log_2 q_MT(t | s). (1)", "Since we do not know p, we cannot compute eq. (1).", "However, given a held-out data set of sentence pairs {(s^(i), t^(i))}_{i=1}^{N} assumed to be drawn from p, we can approximate the true cross-entropy as follows: H_{q_MT}(T | S) \approx -(1/N) \sum_{i=1}^{N} \log_2 q_MT(t^(i) | s^(i)). (2)", "In the limit as N \to \infty, eq. (2) converges to eq. (1).", "We emphasize that this evaluation does not rely on language tokenization provided that the model q_MT does not (Mielke, 2019).", "While common in the evaluation of language models, cross-entropy evaluation has been eschewed in machine translation research since", "(i) not all MT models are probabilistic and", "(ii) we are often interested in measuring the quality of the candidate translation our model actually produces, e.g. under approximate decoding.", "However, an information-theoretic evaluation is much more suitable for measuring the more abstract notion of which language pairs are hardest to translate to and from, which is our purpose here.", "We contend that simply reporting cross-entropies is not enough.", "A second issue in performing a controlled, cross-lingual MT comparison is that the language generation component (without translation) is not equally difficult across languages (Cotterell et al., 2018).", "We claim that the difficulty of translation corresponds more closely to the mutual information MI(S; T) between the source and target language, which tells us how much easier it becomes to predict T when S is given (see Figure 1).", "But what is the appropriate analogue of mutual information for cross-entropy?", "One such natural generalization is a novel quantity that we term cross-mutual information, defined as: XMI(S \to T) = H_{q_LM}(T) - H_{q_MT}(T | S), (3) where H_{q_LM}(T) denotes the cross-entropy of the target sentence T under the model q_LM.", "As in Section 2, this can, analogously, be approximated by the cross-entropy of a separate target-side language model q_LM over our held-out data set: XMI(S \to T) \approx -(1/N) \sum_{i=1}^{N} \log_2 [ q_LM(t^(i)) / q_MT(t^(i) | s^(i)) ], (4) which, again, becomes exact as N \to \infty.", "In practice, we note that we mix different distributions q_LM(t) and q_MT(t | s) and, thus, q_LM(t) is not necessarily representable as a marginal: there need not be any distribution q(s) such that q_LM(t) = \sum_{s \in V_S^*} q_MT(t | s) q(s).", "While q_MT and q_LM can, in general, be any two models, we exploit the characteristics of NMT models to provide a more meaningful, model-specific estimate of XMI.",
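Eqs. (2)-(4) boil down to averaging per-sentence log-probabilities from the two trained models. A minimal Python sketch of the estimator (the input lists are hypothetical stand-ins for base-2 per-sentence log-likelihoods extracted from q_MT and q_LM):

```python
def estimate_xmi(mt_logprobs, lm_logprobs):
    """Monte Carlo estimate of XMI(S -> T) via eqs. (2)-(4).

    mt_logprobs[i] = log2 q_MT(t_i | s_i) and lm_logprobs[i] = log2 q_LM(t_i),
    each summed over the tokens of the i-th held-out target sentence."""
    assert len(mt_logprobs) == len(lm_logprobs)
    n = len(mt_logprobs)
    h_mt = -sum(mt_logprobs) / n  # eq. (2): H_{q_MT}(T | S), bits per sentence
    h_lm = -sum(lm_logprobs) / n  # H_{q_LM}(T), bits per sentence
    return h_lm - h_mt            # eq. (3): XMI(S -> T)

# Toy usage with made-up numbers:
print(estimate_xmi(mt_logprobs=[-40.1, -55.3], lm_logprobs=[-62.0, -80.7]))
```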
"NMT architectures typically consist of two components: an encoder that embeds the input text sequence, and a decoder that generates translated output text.", "The latter acts as a conditional language model, where the source-language sentence embedded by the encoder drives the target-language generation.", "Hence, we use the decoder of q_MT as our q_LM to accurately estimate the difficulty of translation for a given architecture in a controlled way.", "In summary, by looking at XMI, we can effectively decouple the language generation component, whose difficulties have been investigated by Cotterell et al. (2018) and Mielke et al. (2019), from the translation component.", "This gives us a measure of how rich and useful the information extracted from the source language is for the target-language generation component.", "In order to measure which pairs of languages are harder to translate to and from, we make use of the latest release v7 of Europarl (Koehn, 2005): a corpus of the proceedings of the European Parliament containing parallel sentences between English ( en ) and 20 other European languages: Bulgarian ( bg ), Czech ( cs ), Danish ( da ), German ( de ), Greek ( el ), Spanish ( es ), Estonian ( et ), Finnish ( fi ), French ( fr ), Hungarian ( hu ), Italian ( it ), Lithuanian ( lt ), Latvian ( lv ), Dutch ( nl ), Polish ( pl ), Portuguese ( pt ), Romanian ( ro ), Slovak ( sk ), Slovene ( sl ) and Swedish ( sv ).", "Pre-processing steps: In order to precisely effect a fully controlled experiment, we enforce a fair comparison by selecting the set of parallel sentences available across all 21 languages in Europarl.", "This fully controls for the semantic content of the sentences; however, we cannot adequately control for translationese (Stymne, 2017; Zhang and Toral, 2019).", "Our subset of Europarl contains 190,733 sentences for training, 1,000 unique, random sentences for validation and 2,000 unique, random sentences for testing.", "For each parallel corpus, we jointly learn byte-pair encodings (BPE; Sennrich et al., 2016) for the source and target languages, using 16,000 merge operations.", "We use the same vocabularies for the language models. [Footnote 2: For English, we arbitrarily chose the English portion of the en-bg vocabulary.]", "Setup: In our experiments, we train Transformer models (Vaswani et al., 2017), which often achieve state-of-the-art performance on MT for various language pairs.", "In particular, we rely on the PyTorch (Paszke et al., 2019) re-implementation of the Transformer model in the fairseq toolkit (Ott et al., 2019).", "For language modeling, we use the decoder from the same architecture, training it at the sentence level, as opposed to commonly used fixed-length chunks.", "We train our systems using label smoothing (LS; Szegedy et al., 2016; Meister et al., 2020), as it has been shown to prevent models from over-confident predictions, which helps to regularize the models.", "[Table 1: Test scores, from and into English, Europarl; columns are the 20 languages (bg cs da de el es et fi fr hu it lt lv nl pl pt ro sk sl sv) plus the average; visualized in Figure 2. Into English: BLEU 47.4 42.4 46.3 44.0 50.0 50.6 39.3 38.2 44.9 38.4 40.8 37.6 40.3 38.3 39.8 48.3 50.5 44.2 45.3 43.7, avg 43.5; XMI(-> en) 102.3 97.0 99.7 96.5 105.3 103.8 92.8 92.1 97.0 92.5 92.1 89.2 94.2 86.5 91.9 102.5 106.1 99.8 100.1 96.9, avg 96.9; H_qLM(en) 154.2; H_qMT(en | .) 51.8 57.2 54.5 57.7 48.9 50.4 61.4 62.0 57.2 61.6 62.1 65.0 60.0 67.7 62.3 51.7 48.1 54.4 54.1 57.3, avg 57.3. From English: BLEU 46.3 34.7 45.0 36.3 45.5 50.2 27.7 30.5 45.7 30.3 37.9 31.0 34.6 34.9 30.5 46.7 44.2 39.8 41.5 41.3, avg 38.73; XMI(en -> .) 106.2 102.8 103.3 104.0 111.0 108.1 100.2 98.0 99.7 99.1 95.3 96.0 99.3 90.4 98.3 105.2 112.4 105.8 107.9 100.1, avg 102.1; H_qLM(.) 156.5 164.0 152.7 167.6 163.7 159.3 162.5 158.6 154.9 166.6 158.6 159.2 156.4 159.7 163.4 159.3 160.5 157.7 158.2 153.1, avg 159.6; H_qMT(. | en) 50.3 61.2 49.4 63.6 52.7 51.3 62.4 60.6 55.1 67.5 63.3 63.1 57.0 69.3 65.1 54.1 48.1 51.9 50.3 53.0, avg 57.5.]", "We report cross-entropies (H_qMT, H_qLM), XMI, and BLEU scores obtained using SACREBLEU (Post, 2018). [Footnote 3: Signature: BLEU+c.mixed+#.1+s.exp+tok.13a+v.1.2.12.]", "Finally, in a similar vein to Cotterell et al. (2018), we multiply cross-entropy values by the number of sub-word units generated by each model to make our quantities independent of sentence lengths (and divide them by the total number of sentences to match our approximations of the true distributions).", "See App. A for experimental details.",
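The rescaling just described is a one-liner; a small sketch of the conversion to bits per sentence, under the assumption that per-token cross-entropies (in bits) and each model's generated sub-word counts are already available:

```python
def bits_per_sentence(per_token_bits, subword_counts):
    """Rescale per-token cross-entropies (bits per sub-word unit) to bits per
    sentence: multiply by each model's generated sub-word count, then divide
    by the total number of sentences."""
    total_bits = sum(h * n for h, n in zip(per_token_bits, subword_counts))
    return total_bits / len(per_token_bits)

# Toy usage: two held-out sentences of different sub-word lengths.
print(bits_per_sentence(per_token_bits=[2.1, 1.8], subword_counts=[25, 31]))
```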
"We train 40 systems, translating each language into and from English. [Footnote 4: Due to resource limitations, we chose these tasks because most of the information available on the web is in English ( https://w3techs.com/technologies/overview/content_language ) and effectively translating it into any other language would reduce the digital language divide ( http://labs.theguardian.com/digital-language-divide/ ). Besides, translating into English gives most people access to any local information.]", "The models' performance in terms of BLEU scores, and the cross-mutual information (XMI) and cross-entropy values over the test sets, are reported in Table 1, with significant values marked in App. B.", "Translating into English: When translating into the same target language (in this case, English), BLEU scores are, in fact, comparable, and can be used as a proxy for difficulty.", "We can then conclude, for instance, that Lithuanian ( lt ) is the hardest language to translate from, while Spanish ( es ) is the easiest.", "In this scenario, given the good correlation of BLEU scores with human evaluations, it is desirable that XMI correlates well with BLEU.", "This behavior is indeed apparent in the blue points in the left part of Figure 2, confirming the efficacy of XMI in evaluating the difficulty of translation while still being independent of the target language generation component.", "Translating from English: Despite the large gaps between BLEU scores in Table 1, one should not be tempted to claim that it is easier to translate into English than from English for these languages, as often hinted at in previous work (e.g., Belinkov et al., 2017).", "As we described above, different target languages are not directly comparable, and we actually find that XMI is slightly higher, on average, when translating from English, indicating that it is actually easier, on average, to transfer information correctly in this direction.", "For instance, translation from English to Finnish is shown to be easier than from Finnish to English, despite the large gap in BLEU scores. [Figure: target-side cross-entropies H_qLM(T) by language.]", "This suggests that the former model is heavily penalized by the target-side language model; this is likely because Finnish has a large number of inflections for nouns and verbs.", "Another interesting example is given by Greek ( el ) and Spanish ( es ) in Table 1, where, again, the two tasks achieve very different BLEU scores but similar XMI.", "In light of the correlation with BLEU when translating into English, this shows us that Greek is just harder to language-model, corroborating the findings of Mielke et al. (2019).", "Moreover, Figure 2 clearly shows that, as expected, XMI is not as well correlated with BLEU when translating from English, given that BLEU scores are not cross-lingually comparable.", "Correlations with linguistic and data features: Last, we conduct a correlation study between the translation difficulties as measured by XMI and the linguistic and data-dependent properties of each translation task, following the approaches of Lin et al. (2019) and Mielke et al. (2019).", "Table 2 lists Pearson's and Spearman's correlation coefficients for data-dependent metrics, where bold values indicate statistically significant results (p < 0.05) after Bonferroni correction (p < 0.0029).", "Interestingly, the only features that significantly correlate with our metric are related to the type-to-token ratio (TTR) for the source language and the distance between source and target TTRs.", "This implies that a potential explanation for the differences in translation difficulty lies in lexical variation.", "For full correlation results, refer to App. D.", "6 Conclusion: In this work, we propose a novel information-theoretic approach, XMI, to measure the translation difficulty of probabilistic MT models.", "Differently from BLEU and other metrics, ours is language- and tokenization-agnostic, enabling the first systematic and controlled study of cross-lingual translation difficulties.", "Our results show that XMI correlates well with BLEU scores when translating into the same language (where they are comparable), and that higher BLEU scores in different languages do not necessarily imply easier translations.", "In future work, we plan to extend this analysis across more translation pairs, more diverse languages and multiple domains, as well as investigating the effect of translationese or source-side grammatical errors (Anastasopoulos, 2019).", "The authors are thankful to the anonymous reviewers for their valuable feedback.", "The second-to-last author acknowledges a Facebook Fellowship and discussions with Tiago Pimentel.", "This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 801199, the National Science Foundation under grant 1761548, and by Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation, the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan." ]
[ "abstain", "abstain", "objective", "method", "objective", "other", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "result", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other" ]
[ "Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning.", "In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.", "Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus (Jacobs, 2018) to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference.", "For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model fine-tuned on our parallel data to generate high quality metaphors.", "Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average.", "Moreover, a task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.", "Metaphors allow us to communicate not just information, but also feelings and complex attitudes (Veale et al., 2016). While most computational work has focused on metaphor detection (Gao et al., 2018; Stowe et al., 2019; Shutova et al., 2010; Tsvetkov et al., 2014; Veale et al., 2016; Stowe and Palmer, 2018), research on metaphor generation is Work down when the author is interning at UCLA. Literal Input1 The wildfire spread through the forest at an amazing speed. GenMetaphor1 The wildfire danced through the forest at an amazing speed. Literal Input2 The window panes were rattling as the wind blew through them GenMetaphor2 The window panes were trembling as the wind blew through them Table 1: Examples of two generated metaphors GenMetaphor1 and GenMetaphor2 by our best model MERMAID from their literal inputs. under-explored (Yu and Wan, 2019; Stowe et al., 2020). Generating metaphors could impact many downstream applications such as creative writing assistance, literary or poetic content creation. Relevant statistics demonstrate that the most frequent type of metaphor is expressed by verbs (Steen, 2010; Martin, 2006). We therefore focus on the task of generating a metaphor starting from a literal utterance (Stowe et al., 2020), where we transform a literal verb to a metaphorical verb. Table 1 shows examples of literal sentences and the generated metaphors. To tackle the metaphor generation problem we need to address three challenges: 1) the lack of training data that consists of pairs of literal utterances and their equivalent metaphorical version in order to train a supervised model; 2) ensuring that amongst the seemingly endless variety of metaphoric expressions the generated metaphor can fairly consistently capture the same general meaning as the literal one, with a wide variety of lexical variation; and 3) computationally overcome the innate tendency of generative language models to produce literal text over metaphorical one. 
In an attempt to address all these challenges, we introduce our approach for metaphor generation called MERMAID (MEtaphor geneRation with syMbolism And dIscriminative Decoding), making the following contributions: a method to automatically construct a corpus that contains 93,498 parallel [literal sentence, metaphorical sentence] pairs by leveraging the theoretically-grounded relation between metaphor and symbols. Barsalou et al. (1999) showed how perceptual symbols arising from perception are used in conceptual tasks such as representing propositions and abstract concepts. Philosopher Susanne Langer, in her essay Expressiveness and Symbolism, stated: \"A metaphor is not language, it is an idea expressed by language, an idea that in its turn functions as a symbol to express something.\"", "Our approach has two steps: 1) identify a set of sentences that contain metaphorical verbs from an online poetry corpus; 2) convert these metaphorical sentences to their literal versions using Masked Language Models and structured common-sense knowledge obtained from COMET (Bosselut et al., 2019), a language model fine-tuned on ConceptNet (Speer et al., 2017).", "For the latter, we exploit the SymbolOf relation to make sure the generated sentence that contains the literal sense of the verb has the same symbol as the metaphorical sentence.", "For example, for the metaphorical sentence \"The turbulent feelings that surged through his soul\", our method will generate \"The turbulent feelings that continued through his soul\", maintaining the common symbolic meaning of (love, loss, despair, sorrow, loneliness) between the two (Section 2).", "A metaphor discriminator that guides the decoding of a sequence-to-sequence model fine-tuned on our parallel data to generate high-quality metaphors.", "Our system MERMAID fine-tunes BART (Lewis et al., 2019), a state-of-the-art pre-trained denoising autoencoder built with a sequence-to-sequence model, on our automatically collected parallel corpus of [literal sentence, metaphorical sentence] pairs (Sec. 3.1) to generate metaphors.", "A discriminative model trained to identify metaphors is further used to complement our generator and guide the decoding process to improve the generated output (Sec. 3.2).", "Human evaluations show that this approach generates metaphors that are better than those of two literary experts 21% of the time on average, better 81% of the time than two well-crafted baselines, and better 36% of the time than fine-tuned BART (Lewis et al., 2019) (Section 5).", "A task-based evaluation to improve the quality of human-written poems using metaphorical rewriting.", "Evaluation via Amazon Mechanical Turk shows that poems enhanced with metaphors generated by MERMAID are preferred by Turkers 68% of the time compared to poems without metaphors, which are preferred 32% of the time (Section 6).", "2 Dataset Creation with Symbolism: Datasets for metaphors are scarce.", "To our knowledge, there are no large-scale parallel corpora containing literal and metaphoric paraphrases.", "The closest and most useful work is that of Mohammad et al. (2016).", "However, the size of this dataset is small: 171 instances, which is not sufficient to train deep learning models.", "Recently, Stowe et al.
(2020) rely on available metaphor detection datasets to generate metaphors via a metaphor-masking framework, where they replace metaphoric words in the input texts with metaphor masks (a unique metaphor token), hiding the lexical item.", "This creates artificial parallel training data: the input is the masked text, with the hidden metaphorical word, and the output is the original text (e.g., \"The war [MASK] many people\" -> \"The war uprooted many people\").", "The major issue with such a masking strategy is that it ignores the semantic mapping between the literal verb and the metaphorical verb.", "Moreover, there are only 11,593 such parallel instances, still too small to train a neural model.", "The lack of semantic mapping between the artificial parallel training data samples, coupled with the limited size, thus affects the lexical diversity and meaning preservation of generated metaphors at test time.", "In light of these challenges, we propose to compose a large-scale parallel corpus with literal and metaphorical sentence pairs to learn the semantic mappings.", "We start by collecting a large-scale corpus of metaphorical sentences (Section 2.1) and leverage masked language models and symbolism-relevant common-sense knowledge to create a literal version of each metaphorical sentence (Section 2.2).", "Metaphors are frequently used in poetry to explain and elucidate emotions, feelings, relationships and", "other elements that could not be described in ordinary language. [Figure 1: A schematic illustration of our system, which shows the data creation and training process where we use MLM along with COMET to transform an original metaphorical input to a literal output evoking similar symbolic meaning, and use them to fine-tune BART.]", "We use this intuition to identify a naturally occurring poetry corpus that contains metaphors, called the Gutenberg Poetry Corpus (Jacobs, 2018).", "The corpus contains 3,085,117 lines of poetry extracted from hundreds of books.", "Not every sentence in the corpus contains a metaphorical verb.", "So as a first step, we identify and filter sentences containing a metaphorical verb.", "We build a classifier by fine-tuning BERT (Devlin et al., 2018) on a metaphor detection corpus, VU AMSTERDAM (Steen, 2010).", "Since our work is focused on verbs, we only do token classification and calculate loss for verbs.", "Figure 2 illustrates the BERT-based token-level classifier.", "The classification accuracy on the test set is 74.7%, which is on par with most state-of-the-art methods.", "Using the metaphor detection model, we identify 622,248 (20.2%) sentences predicted by our model as containing a metaphoric verb.", "Considering the classifier can introduce noise, as the accuracy of the metaphor detection model is far from an oracle 100%, we only retain sentences which are predicted by our model with a confidence score of 95% (i.e., prediction probability >= 0.95).",
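One way to realize the verb-only loss above is to mark non-verb positions with the ignore index of the cross-entropy loss. A minimal sketch assuming the Hugging Face transformers library; the sentence, verb position and label are illustrative, and verb positions would in practice come from a POS tagger:

```python
import torch
from transformers import BertForTokenClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=2)

sentence = "The war uprooted many people"
verb_index = 2  # word position of "uprooted" (assumed known from a POS tagger)

enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")
labels = torch.full_like(enc["input_ids"], -100)  # -100 = ignored by the loss
for tok_pos, word_id in enumerate(enc.word_ids()):
    if word_id == verb_index:
        labels[0, tok_pos] = 1  # 1 = metaphorical, 0 = literal

loss = model(**enc, labels=labels).loss  # cross-entropy over verb tokens only
```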
"This results in a total of 518,865 (16.8%) metaphorical sentences.", "After identifying high-quality metaphorical sentences, we want to obtain their literal counterparts to create parallel training data.", "Masked language models like BERT (Devlin et al., 2018) or RoBERTa (Liu et al., 2019) can be used for fill-in-the-blank tasks, where the model uses the context words surrounding a masked token to predict the masked word.", "We borrow this framework to mask", "the metaphorical verb (Table 2, Row 1 vs. Row 2) in a sentence and use a BERT-base-cased model to obtain the top 200 candidate verbs to replace the metaphorical one to generate literal sentences (Table 2, Row 3). [Footnote 2: https://github.com/aparrish/gutenberg-poetry-corpus ] [Figure 2: BERT-base-cased model to identify metaphoric verbs, where v1 and v2 represent the verbs in a sentence.]", "There are two main issues in solely relying on MLM-predicted verbs: 1) they are not necessarily literal in nature; 2) after replacing the default MLM-predicted verb, the metaphorical sentence and the new sentence with the replaced verb might be semantically dissimilar.", "Even though our inductive biases tell us that the chance of a predicted token having a literal sense is higher than having a metaphorical one, this cannot be assumed.", "To filter only literal candidate verbs, we re-rank the MLM-predicted mask tokens based on literal scores obtained from the classifier of Section 2.1, since the model can predict the softmax probability of a verb in a sentence being either literal or metaphorical (Table 2, Row 4).", "While we can potentially pair the sentence with the top-ranked literal verb with the input", "sentence containing the metaphorical verb, they might symbolically or semantically represent different abstract concepts.", "For example, in Table 3, after replacing the metaphorical verb \"surged\" with the top-most literal verb \"eased\", the sentence \"The turbulent feelings that eased through his soul\" evokes a different symbolic meaning of peace, love, happiness, joy & hope in comparison to the input containing the metaphorical verb, which evokes a symbolic meaning of love, loss, despair, sorrow & loneliness.", "To tackle this problem, we ensure that the transformed literal output represents the same symbolic meaning as the metaphorical input.", "To generate the common-sense SYMBOL that is implied by the literal or metaphorical sentences, we feed the sentences as input to COMET (Bosselut et al., 2019) and restrict it to return top-5 beams.", "COMET is an adapted knowledge model pre-trained on ConceptNet. [Footnote 3: https://mosaickg.apps.allenai.org/comet_conceptnet ]", "Our work only leverages the SymbolOf relation from COMET.", "We now need a method to combine information from the MLM and the symbolic knowledge obtained from COMET described above.", "To do this, we first filter candidates from the MLM token predictions based on the symbolic meaning overlap between the metaphorical input and the literal output.", "To ensure that the quality is high, we put a strict requirement that all 5 symbolic beams (typically words or short phrases) for the input metaphorical sentence should match all 5 symbolic beams for the output literal sentence.", "Among multiple literal candidates all having a beam overlap of 5, they are further ranked by reverse metaphoricity (i.e., literal) scores.", "The top-most candidate is returned thereafter.", "We finally end up with 90,000 pairs for training and 3,498 pairs for validation.",
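A sketch of this candidate-filtering pipeline is below. Only the fill-mask step uses a real library call; literal_score and comet_symbols are hypothetical wrappers standing in for the Section 2.1 classifier and for COMET's SymbolOf beams:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased", top_k=200)

def literal_score(sentence):   # assumed: P(literal) from the Section 2.1 classifier
    raise NotImplementedError

def comet_symbols(sentence):   # assumed: COMET's top-5 SymbolOf beams
    raise NotImplementedError

def literal_paraphrase(masked_sent, metaphor_sent):
    """masked_sent has the metaphorical verb replaced by [MASK]."""
    target = set(comet_symbols(metaphor_sent))
    keep = []
    for pred in fill_mask(masked_sent):
        cand = masked_sent.replace("[MASK]", pred["token_str"].strip())
        if set(comet_symbols(cand)) == target:  # all 5 beams must match
            keep.append(cand)
    # among full-overlap candidates, pick the least metaphorical one
    return max(keep, key=literal_score, default=None)
```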
"3 Metaphor Generation: Our goal of generating metaphors can be broken down into two primary tasks: 1) generating the appropriate substitutions for the literal verb while being pertinent to the context; 2) ensuring that the generated utterances are actually metaphorical.", "3.1 Transfer Learning from BART: To achieve the first goal, we fine-tune BART (Lewis et al., 2019), a pre-trained conditional language model that combines bidirectional and autoregressive transformers, on the collected parallel corpora.", "Specifically, we fine-tune BART by treating the literal input as the encoder source and the metaphorical output as the decoder target (Figure 1).", "One issue of pre-trained language models is that they have a tendency to generate literal tokens over metaphorical ones.", "To overcome this, we introduce a rescoring model during the decoding process to favor more metaphorical verbs.", "The rescoring model is inspired by Holtzman et al. (2018); Goldfarb-Tarrant et al. (2020) and detailed in the next section.", "3.2 Discriminative Decoding: We have a base metaphor generation model p(z | x) which is learned by fine-tuning BART (Lewis et al., 2019) on pairs of literal (x) and metaphorical (z) sentences.", "We propose to modify the decoding objective to incorporate a metaphor detection rescoring model a and re-rank the base, or \"naive\", BART-generated hypotheses, bringing the metaphoric representation closer to the rescoring model's specialty. [Figure 3: Schematic showing the decoding step where we use fine-tuned BART along with a metaphor-detecting discriminator to generate a metaphorical sentence conditioned on a literal input and desirable attribute.] The modified decoding objective becomes: f(x, z) = \sum_{i}^{m} \log p(z_i | z_{<i}, x) + \lambda a(x, z_{i...m}), (1) where \lambda is a weight of the score given by a.", "Implementation Details: We use a top-k sampling strategy (Fan et al., 2018) (k = 5) to generate metaphors conditioned on a literal input.", "Our rescoring model a is a RoBERTa model fine-tuned on a combined dataset of (Steen, 2010; Beigman Klebanov et al., 2018) to classify sentences as literal or metaphorical based on whether there exists a metaphorical verb.", "It is a sentence-level task where the model predicts a sentence as literal or metaphorical.", "We down-sample the data to maintain a ratio of (1:1) between the two classes and use 90% of the data to train and 10% for validation.", "We achieve a decent validation accuracy of 83%.", "We manually tune \lambda using grid search on a small subset of 3,498 validation samples from our automatically created parallel data and choose the best value.", "Figure 3 shows the process of re-ranking BART hypotheses using the discriminator described above to generate novel metaphorical replacements for literal verbs.",
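A minimal sketch of the re-ranking in eq. (1), assuming the k sampled hypotheses already carry a BART log-probability and a discriminator probability; taking the log of that probability as a(x, z) is our assumption, since the paper does not pin down the exact form of a:

```python
import math

def pick_best(hypotheses, lam=1.0):
    """Re-rank sampled hypotheses with eq. (1): f(x, z) = log p(z | x) + lam * a(x, z).

    Each hypothesis is (text, bart_logprob, disc_prob), where bart_logprob is the
    sum of BART token log-probabilities and disc_prob is the RoBERTa
    discriminator's probability that the sentence is metaphorical."""
    return max(hypotheses, key=lambda h: h[1] + lam * math.log(h[2]))

# Toy usage on two pre-scored candidates:
best = pick_best([("The tax cut will help the economy", -12.4, 0.08),
                  ("The tax cut will stimulate the economy", -13.1, 0.71)])
print(best[0])
```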
"All the hyper-parameters for data creation, fine-tuning and discriminative decoding are exactly as mentioned in Appendix A.", "The reason to use a separate discriminator for decoding, instead of the same BERT-based classifier used for parallel data creation, was to avoid introducing dataset biases or spurious correlations.", "The BERT-based classifier used for automatically creating the parallel dataset ideally has already picked up salient metaphorical phenomena in the VUA dataset.", "To further guide the decoding process, we hypothesize that a model trained on datasets not seen during training would lead to better generalization.", "We experimented with using the BERT model trained on VUA for rescoring, but the results were not better.", "4 Experimental Setup: To compare the quality of the generated metaphors, we benchmark our MERMAID model against human performance (i.e., the two creative writing experts HUMAN1 (a novelist) and HUMAN2 (a poet), who are not the authors of the paper) (Section 4.2) and three baseline systems described below.", "4.1 Baseline Systems: Lexical Replacement (LEXREP): We use the same idea as our data creation process (Section 2.2).", "We use our model described in Section 2.1 to re-rank the predicted tokens from a masked language model based on metaphoricity scores.", "We filter the top 25 ranked metaphorical candidates, further re-rank them based on symbolic meaning overlap with the literal meaning using COMET (Bosselut et al., 2019), and replace the literal verb with the top-scoring candidate.", "Metaphor Masking (META_M): We use the metaphor masking model proposed by Stowe et al. (2020), where the language model learns to replace a masked verb with a metaphor.", "They train a seq2seq model with an encoder input of the format (The tax cut [MASK] the economy) and the decoder output being the actual metaphorical sentence (The tax cut lifted the economy).", "During inference, they mask the literal verb and expect the language model to infill a metaphorical verb.", "BART: We use generations from a BART model fine-tuned on our automatically created data without the discriminative decoding.", "This helps us gauge the effect of transfer learning from a large generative pre-trained model, which also accounts for context, unlike the retrieval-based methods.", "4.2 Test Data: To measure the effectiveness of our approach, we need to evaluate our model on a dataset that is independent of our automatically created parallel data and that is diverse across various domains, genres and types.", "Hence we rely on test data from multiple sources.", "As our first source, we randomly sample literal and metaphorical sentences with high confidence (> 0.7) and unique verbs from the existing dataset introduced by Mohammad et al. (2016).", "We convert the metaphorical sentences from Mohammad et al. (2016) to their literal equivalents in the same way as discussed in Section 2.2, without the use of COMET, as we do not need it.", "To ensure diversity in genre, as our second source we scrape the WRITINGPROMPT and OCPOETRY subreddits for sentences with length up to 12 words which are literal in nature, based on predictions from our model described in Section 2.1.", "We collate 500 such sentences combined from all sources and randomly sample 150 literal utterances for evaluation.", "We use two literary experts (not authors of this paper), a student in computer science who is also a poet and a student in comparative literature who is the author of a novel, to write corresponding metaphors for each of these 150 inputs for evaluation and comparison.", "4.3 Evaluation Criteria: Automatic evaluation. One important aspect in evaluating the quality of the generated metaphors is whether they are faithful to the input: while we change literal sentences to metaphorical ones, the output should still maintain the same denotation as the input.", "To this end, we calculate the semantic similarity between the metaphorical output and the input using sentence-BERT (SBERT) (Reimers and Gurevych, 2019).", "We also calculate corpus-level BLEU-2 (Papineni et al., 2002) and BERTScore (Zhang et al., 2019) with human-written references.",
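A short sketch of the faithfulness check, assuming the sentence-transformers library; the checkpoint name is illustrative, as the specific SBERT model is not specified:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
src = "The tax cut will help the economy"
hyp = "The tax cut will stimulate the economy"
emb = model.encode([src, hyp], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())  # cosine similarity of input and output
```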
"Human evaluation. Since automatic evaluation is known to have significant limitations for creative generation (Novikova et al., 2017), we further conduct human evaluation on a total of 900 utterances, 600 generated from 4 systems and 300 generated by the two human experts.", "We propose a set of four criteria to evaluate the generated output: (1) Fluency (Flu) (How fluent, grammatical, well formed and easy to understand are the generated utterances?), (2) Meaning (Mea) (Are the input and the output referring to or meaning the same thing?), (3) Creativity (Crea) (How creative are the generated utterances?), and (4) Metaphoricity (Meta) (How metaphoric are the generated utterances?).", "The human evaluation is done on the Amazon Mechanical Turk platform.", "Each Turker was given a literal input and 6 metaphorical outputs (4 system outputs, from the 3 baselines and our proposed system MERMAID, and 2 from humans) at a time, with the metaphorical outputs randomly shuffled to avoid potential biases.", "Turkers were instructed to evaluate the quality of the metaphorical sentences with respect to the input and not in isolation.", "As we evaluate on four dimensions for 900 utterances, we have a total of 3,600 evaluations. [Table 4: Automatic evaluation results on the test set (Similarity / BLEU-2 / BERTScore): LEXREP 79.6 / 68.7 / 0.56; META_M 73.2 / 61.0 / 0.62; BART 83.6 / 65.0 / 0.65; MERMAID 85.0 / 66.7 / 0.71; HUMAN1 86.6 / - / -; HUMAN2 84.2 / - / -. MERMAID significantly outperforms the other automatic methods for 2 out of 3 metrics (p < .001 according to the approximate randomization test).]", "Each criterion was rated on a Likert scale from 1 (not at all) to 5 (very).", "Each group of utterances was rated by three separate Turkers, resulting in 42, 48, 44 and 53 Turkers for the four evaluation tasks respectively.", "We pay them at a rate of $15 per hour.", "Based on the semantic similarity metric shown in column 1 of Table 4, our system MERMAID is better at preserving the meaning of the input than the other baselines.", "As mentioned, we calculate BLEU-2 and BERTScore between system outputs and human references.", "MERMAID is better than the other baselines according to BERTScore.", "In terms of BLEU-2, MERMAID is second best.", "Table 5 shows the average scores for the human evaluation on four metaphor quality criteria for MERMAID, the baselines, and human-written metaphors on the test set.", "The inter-annotator agreements computed using Krippendorff's alpha for Creativity, Meaning, Fluency and Metaphoricity are 0.44, 0.42, 0.68 and 0.52 respectively.",
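Krippendorff's alpha can be computed with any standard implementation; a minimal sketch assuming the krippendorff PyPI package, with made-up 1-5 ratings and nan marking items a rater did not score:

```python
import krippendorff  # pip install krippendorff (an assumption; any alpha implementation works)

nan = float("nan")
ratings = [          # one row per rater, one column per rated utterance
    [4, 5, 3, nan, 2],
    [4, 4, 3, 5, nan],
    [5, 4, 2, 5, 2],
]
alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```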
"The results demonstrate that MERMAID is significantly better than the baselines on all four criteria (p < .001 according to the approximate randomization test). [Table 6: Examples of generated outputs from different systems, with human-written metaphors as references; each entry gives the substituted verb and Flu/Mea/Crea/Meta scores. \"The scream filled the night\": HUMAN1 pierced (4.3/5.0/3.7/4.0), HUMAN2 covered (2.7/4.0/3.0/3.0), LEXREP held (1.7/3.7/2.0/1.7), META_M opened (1.0/1.0/1.0/1.0), BART filled (2.3/1.0/2.3/1.0), MERMAID pierced (4.3/5.0/3.7/4.0). \"The wildfire spread through the forest at an amazing speed\": HUMAN1 ravaged (4.7/4.3/4.0/3.0), HUMAN2 leapt (3.7/3.0/5.0/3.7), LEXREP saw (1.3/1.0/2.7/3.3), META_M grows (3.7/2.7/2.7/4.0), BART swept (4.0/3.7/4.7/4.0), MERMAID danced (3.0/4.0/4.0/3.7). \"My heart beats when he walks in the room\": HUMAN1 skips (4.7/5.0/4.0/4.3), HUMAN2 sings (5.0/4.3/3.7/3.3), LEXREP made (1.0/1.0/1.0/1.0), META_M came (1.7/1.0/1.3/1.3), BART sings (5.0/4.3/3.7/3.7), MERMAID jumps (4.7/4.7/4.3/4.0). \"After a glass of wine, he relaxed up a bit\": HUMAN1 loosened (4.7/5.0/5.0/4.0), HUMAN2 unfurled (2.0/5.0/2.0/3.7), LEXREP followed (3.7/1.0/2.7/1.7), META_M touched (1.3/1.0/1.7/2.0), BART dried (2.7/1.0/2.3/2.0), MERMAID loosened (4.3/5.0/5.0/3.7). \"The tax cut will help the economy\": HUMAN1 uplift (4.7/5.0/4.7/4.0), HUMAN2 fertilize (4.0/4.3/4.3/3.7), LEXREP bring (1.7/3.0/2.7/1.7), META_M prevent (1.7/1.0/2.0/1.0), BART strengthen (5.0/5.0/4.3/3.7), MERMAID stimulate (5.0/4.7/3.7/4.0). \"I tried to resolve things over between them\": HUMAN1 tide (4.3/3.0/3.7/4.3), HUMAN2 patch (4.7/4.7/5.0/2.0), LEXREP push (3.3/1.0/2.3/2.0), META_M make (4.0/1.0/2.7/2.7), BART put (4.7/2.0/3.0/2.7), MERMAID smooth (4.7/4.7/5.0/4.0).]",
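The significance test referenced above can be implemented as a paired approximate randomization test; the following generic sketch (the paper does not give its exact implementation) swaps paired system scores at random:

```python
import random

def approximate_randomization(scores_a, scores_b, trials=10000, seed=0):
    """Two-sided paired approximate randomization test: swap each paired
    score with probability 0.5 and count how often the shuffled mean
    difference is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a - b for a, b in zip(scores_a, scores_b)))
    hits = 0
    for _ in range(trials):
        diff = sum((a - b) if rng.random() < 0.5 else (b - a)
                   for a, b in zip(scores_a, scores_b))
        hits += abs(diff) >= observed
    return (hits + 1) / (trials + 1)  # smoothed two-sided p-value
```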
"Table 6 presents several generation outputs from different systems along with human judgements on individual criteria.", "We observe that incorporating a discriminator often guides our model to generate better metaphors than the already strong baseline using BART.", "Finally, incorporating symbolic meaning in the data creation step helps our model to maintain the same meaning as the input.", "Metaphors are frequently used by creative writing practitioners, in particular poets, to embellish their work.", "We posit that MERMAID can be used to edit literal sentences in poems to further enhance creativity.", "To test this hypothesis, we first crawl original poems submitted by authors from the subreddit OCPOETRY. [Figure 4: Percentage of preference of original Quatrains vs. Quatrains rewritten by MERMAID.]", "The poems are of variable lengths, so to ensure parity we break them into Quatrains (four-sentence stanzas).", "We randomly sample 50 such Quatrains containing at least one sentence with a literal verb. [Sample Quatrain fragment: \"And the hills have a shimmer of light between, / And the valleys are covered with misty veils, / And ...\"]", "We then select a sentence containing a literal verb from each Quatrain and use MERMAID to re-write it so that the resulting output is metaphorical.", "We ignore common verbs like is, was, are, were, have, had.", "If there is more than one sentence in a Quatrain with literal verbs, we choose the sentence with the literal verb that has the highest probability of being literal.", "For sentences with multiple literal verbs, we choose the verb with the highest literal probability.", "Our goal is to see if re-written poems are qualitatively better than the original forms.", "To do this, we hire Turkers from Amazon Mechanical Turk and present them with HITs where the task is to choose the better version between the original Quatrain and the re-written version.", "15 Turkers were recruited for the task.", "Each Quatrain was evaluated by 3 distinct Turkers.", "Table 7 shows metaphorical transformations by MERMAID.", "Figure 4 shows that poems rewritten by MERMAID were considered better by the Turkers.", "Most researchers have focused on identification and interpretation of metaphor, while metaphor generation is relatively under-studied.", "For metaphor detection, researchers have focused on a variety of features, including unigrams, imageability, sensory features, WordNet and bag-of-words features (Klebanov et al., 2014; Tsvetkov et al., 2014; Shutova et al., 2016; Tekiroglu et al., 2015; Hovy et al., 2013; Köper and im Walde, 2016).", "With the advent of deep learning approaches, Gao et al. (2018) used BiLSTM models based on GloVe (Pennington et al., 2014) and ELMo word vectors (Peters et al., 2018) to detect metaphoric verbs.", "Inspired by the linguistic theories MIP (Semino et al., 2007; Steen, 2010) and SPV (Wilks, 1975, 1978), Mao et al.
(2019) proposed two detection models consisting of BiLSTMs with attention mechanisms that relied on GloVe and ELMo embeddings.", "Recent work on metaphor detection has also used pretrained language models (Su et al., 2020; Gong et al., 2020).", "While we focus on metaphor generation, we use BERT (Devlin et al., 2018) to detect metaphoric verbs to create parallel data and RoBERTa (Liu et al., 2019) to rescore our generated hypotheses during decoding.", "Some early works made contributions using template and heuristic-based methods (Abe et al., 2006; Terai and Nakagawa, 2010) to generate \"A is like B\" sentences, more popularly referred to as similes.", "Chakrabarty et al. (2020) concentrated on simile generation, applying a seq2seq model to paraphrase a literal sentence into a simile.", "Other attempts learned from the mappings of different domains and generated conceptual metaphors of the pattern \"A is B\" (Hervás et al., 2007; Mason, 2004; Gero and Chilton, 2019).", "These works paid attention to the relationship between nouns and concepts to create elementary figurative expressions.", "Recent metaphor generation works focus mainly on verbs.", "Yu and Wan (2019) proposed an unsupervised metaphor extraction method, and developed a neural generation model to generate metaphorical sentences from literal-metaphorical verb pairs.", "They, however, do not focus on literal-to-metaphorical sentence transfer, but generate a sentence given a metaphorically fit word.", "The closest to our work is that of Stowe et al. (2020), who focus on building a seq2seq model, using a special mask token to mask the metaphorical verbs as input, and the original metaphorical sentences as output.", "However, this model faces challenges in transferring the literal sentences to metaphorical ones while maintaining the same meaning.", "We, on the contrary, focus on maintaining the same meaning through parallel data creation focusing on symbolism.", "Additionally, we incorporate a metaphor detection model as a discriminator to improve decoding during generation.", "We show how to transform literal sentences to metaphorical ones.", "We propose a novel way of creating parallel corpora and an approach for generating metaphors that benefits from transfer learning and discriminative decoding.", "Human and automatic evaluations show that our best model is successful at generating metaphors.", "We further show that leveraging symbolic meanings helps us learn better abstract representations and better preserve the denotative meaning of the input.", "Future directions include learning diverse conceptual metaphoric mappings using our parallel data and constraining our metaphoric generations to a particular mapping.", "Our data is collected from Reddit and we understand and respect user privacy.", "Our models are fine-tuned on sentence-level data obtained from user posts.", "These do not contain any explicit detail which leaks information about a user's name, health, negative financial status, racial or ethnic origin, religious or philosophical affiliation or beliefs, sexual orientation, trade union membership, or alleged or actual commission of crime.", "Second, although we use language models trained on data collected from the Web, which have been shown to have issues with bias and abusive language (Sheng et al., 2019; Wallace et al., 2019), the inductive bias of our models should limit inadvertent negative impacts.", "Unlike model variants such as GPT, BART is a conditional language model, which provides more control of the generated output.",
"Furthermore, we specifically encode writing style from a poetic corpus in our models and train on parallel data in the direction of literal to metaphorical style.", "Open-sourcing this technology will help to generate metaphoric text assisting creative writing practitioners or non native language speakers to improve their writing.", "We do not envision any dual-use that can cause harm for the use of our the metaphor generation system.", "This work was supported in part by the MCS program under Cooperative Agreement N66001-19-2-4032, and the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).", "The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.", "The authors would like to thank the members of PLUS-Lab at the University of California Los Angeles and University of Southern California and the anonymous reviewers for helpful comments." ]
[ "abstain", "objective", "abstain", "method", "result", "objective", "other", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "abstain", "result", "objective", "result", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "zhenhai@google.com", "Radu Soricut Google Research", "rsoricut@google.com", "Abstract", "We describe an efficient hierarchical method to compute attention in the Transformer architecture.", "The proposed attention mechanism exploits a matrix structure similar to the Hierarchical Matrix (H-Matrix) developed by the numerical analysis community, and has linear run time and memory complexity.", "We perform extensive experiments to show that the inductive bias embodied by our hierarchical attention is effective in capturing the hierarchical structure in the sequences typical for natural language and vision tasks.", "Our method is superior to alternative sub-quadratic proposals by over +6 points on average on the Long Range Arena benchmark.", "It also sets a new SOTA test perplexity on One-Billion Word dataset with 5x fewer model parameters than that of the previous-best Transformer-based models.", "Linearly combining information using content-based weights, a method generically known as attention, is a key building block in many deep neural networks such as recurrent neural networks (RNN) (Luong et al., 2015), convolutional neural networks (CNN) (Bello et al., 2019) and graph convolutional networks (GCN) (Velickovic et al., 2018).", "One particular type of such attention, called multi-head scaled dot-product attention, is one of the main components of the Transformer architecture proposed by Vaswani et al. (2017), which has been shown to push the state-of-the-art (SOTA) performance for various understanding and generation tasks.", "These include standard natural language processing (NLP) tasks such as machine translation, document classification, entailment, summarization and question answering (Za-heer et al., 2020; Dai et al., 2019; Baevski and Auli, 2019), as well as music generation (Huang et al., 2018), image generation (Parmar et al., 2018; Chen et al., 2020) and genomics (Zaheer et al., 2020; Choromanski et al., 2020).", "The Transformer is also the backbone architecture for models such as BERT (Devlin et al., 2019) (and its numerous relatives) and GPT3 (Brown et al., 2020), which have delivered impressive performance across many NLP tasks.", "However, the standard attention mechanism of the Transformer has a run time and memory usage that scales quadratically with sequence length.", "Therefore, this quadratic complexity has become a critical bottleneck in processing long sequences (over 1,000 tokens), and has since motivated many new attention algorithms, see (Tay et al., 2020d) for a survey of such work.", "In this paper, we draw inspiration from two branches in numerical analysis: Hierarchical Matrix (H-Matrix) (Hackbusch, 1999, 2000) and Multigrid method (Briggs et al., 2000).", "We propose a hierarchical attention that has linear complexity in run time and memory, and only utilizes dense linear algebra operations optimized for GPUs or TPUs.", "We hypothesize that the inductive bias embodied by the proposed hierarchical structure for the attention matrix is effective in capturing the hierarchical structure in the sequences typically seen in many natural language processing and computer vision tasks.", "The main benchmark we use in this paper is the Long Range Arena (LRA) benchmark (Tay et al., 2020c), which has been specifically designed to evaluate and compare various sub-quadratic attention algorithms.", "Our new hierarchical attention mechanism achieves best average performance to-date on the LRA benchmark by more than 6 points over the previous-best BigBird algorithm (Zaheer 
et al., 2020), while pushing SOTA performance higher in 4 of the 5 successful tasks.", "Furthermore, using this new attention, a Transformer-based language model trained on the One-Billion Word dataset (Chelba et al., 2014) sets a new SOTA performance record by reducing the test perplexity by 1.55 points", "compared to the previous-best Transformer-XL (Dai et al., 2019) with 5x more parameters.", "Overall, these empirical results validate both the soundness of our approximation method for computing attention weights and the appropriateness of the inductive bias present in the proposed hierarchical attention.", "It is well established in the NLP literature that the embeddings of nearby tokens tend to be more similar than the distant ones (Manning and Schütze, 1999).", "This leads to the intuition that token similarity, and hence the attention, should decrease with the sequence distance between a query token and a key token 1 .", "This motivates the sliding-window local attention (Parmar et al., 2018; Ramachandran et al., 2019; Qiu et al., 2019) which amounts to truncating off-diagonal entries in the attention matrix beyond a user-specified sequence distance.", "A second approach is to keep an O(1) number of nonzeros per row in the attention matrix.", "The nonzero entry selection is either content-based (Kitaev et al., 2020; Roy et al., 2020; Tay et al., 2020b; Zhou et al., 2020), hand-crafted (Beltagy et al., 2020; Brown et al., 2020; Child et al., 2019; Ho et al., 2019) or simply random (Zaheer et al., 2020).", "It is also well known in the NLP literature that long-range contextual information is necessary for many NLP tasks (Khandelwal et al., 2018; Liu and Lapata, 2019).", "So a set of global tokens is also considered.", "This adds an O(1) number of dense rows and columns to the attention matrix (Zaheer et al., 2020; Ainslie et al., 2020; Beltagy et al., 2020).", "A third approach is to approximate the attention matrix with a low-rank factored form (Choromanski et al., 2020; Wang et al., 2020; Tay et al., 2020a).", "The first two approaches are based on the premise that one needs to explicitly zero out entries in the attention matrix in order to reduce the quadratic complexity.", "Decades of research by the scientific computing and numerical analysis community have resulted in more sophisticated algorithms to sparsify matrices.", "A 1 Eq.", "small set of samples of these algorithms and their engineering applications include the Fast Multipole Method (Greengard and Rokhlin, 1987; Greengard, 1994; Nabors et al., 1994; Shi et al., 1998), Pre-corrected FFT (Phillips and White, 1997; Zhu et al., 2005), Hierarchical Singular Value Decomposition (SVD) (Kapur and Long, 1997) and the Hierarchical Matrix (H-Matrix) (Hackbusch, 1999, 2000; Zhu and White, 2005).", "These are generally called Multilevel Methods (Brandt and Lubrecht, 1990).", "The hierarchical attention proposed in this paper is inspired by these Multilevel Methods in general and the H-Matrix in particular.", "The hierarchical matrix structure allows a linear complexity in both constructing and applying the attention matrix.", "Given matrices $Q$, $K$ and $V$, with rows representing sequences of token embedding or feature vectors for query, key and value respectively, the output weighted by the scaled dot-product attention in the Transformer (Vaswani et al., 2017) is defined as", "where $Z, Q, K, V \in \mathbb{R}^{L \times d}$, $L$ is the length of the sequences, and $d$ is the embedding or feature size.", "In a more compact matrix form, Eq.", "(1) can be written 
as $Z = D^{-1} A V$ (2), where $A = e^{S}$ (3), $S_{i,j} = \frac{Q_i K_j^T}{\sqrt{d}}$ (4), $D = \mathrm{diag}\{A \mathbf{1}_L\}$ (5), and $\mathbf{1}_L = [1, 1, ..., 1]^T$ (6).", "Here, $A, S \in \mathbb{R}^{L \times L}$, $\mathbf{1}_L \in \mathbb{R}^{L}$ is a vector with all ones, and $S_{i,j}$ represents the unnormalized cosine similarity between query embedding $Q_i$ (the $i$-th row in $Q$) and key embedding $K_j$ (the $j$-th row in $K$).", "For the sake of clarity, we focus on the single-head attention in the exposition of the proposed algorithm.", "Extension to the multi-head case is straightforward since each attention head is computed independently (Vaswani et al., 2017).", "Computing the similarity matrix S in Eq.", "(4) and the attention matrix A in Eq.", "(3) takes $O(L^2 d)$ time and $O(L^2)$ memory.", "Similarly, computing $AV$ in Eq.", "(2) takes $O(L^2 d)$ time, and computing $A \mathbf{1}_L$ in Eq.", "(5) takes $O(L^2)$ time.", "The $O(L^2 d)$ and $O(L^2)$ complexities are the bottlenecks for applying the attention mechanism over very long sequences.", "4.1 H-Matrix", "The singular-value decomposition of the attention matrix A in Eq.", "(3) is $A = U \Sigma V^T$ (7), where $\Sigma = \mathrm{diag}\{\sigma_1, \sigma_2, ..., \sigma_L\}$ and $\sigma_i$ is the $i$-th singular value.", "The numerical rank of matrix $A$ is $r$ if $\sum_{i=r+1}^{L} \sigma_i < \epsilon$ for a given tolerance $\epsilon$ (Trefethen and Bau, 1997).", "The standard rank-$r$ approximation to matrix $A$ is $A \approx \tilde{U} \tilde{\Sigma} \tilde{V}^T = \tilde{U} \hat{V}^T$ (8), where $\tilde{\Sigma} = \mathrm{diag}\{\sigma_1, \sigma_2, ..., \sigma_r\}$, $\tilde{U}, \tilde{V} \in \mathbb{R}^{L \times r}$ have the first $r$ columns of $U$ and $V$, and $\hat{V} = \tilde{V} \tilde{\Sigma}$.", "This is the low-rank approximation used in (Choromanski et al., 2020; Wang et al., 2020; Tay et al., 2020a).", "This approximation compresses the $L^2$ entries in $A$ to $2rL$ entries in $\tilde{U}$ and $\hat{V}^T$.", "So the compression rate is $\frac{L}{2r}$.", "The H-Matrix generalizes this low-rank approximation by using a matrix block hierarchy.", "Consider a two-level H-Matrix with $4 \times 4$ and $2 \times 2$ block partition at level-0 and level-1, respectively.", "Matrix $A$ is partitioned as $A = \begin{bmatrix} \begin{bmatrix} A^{(0)}_{11} & A^{(0)}_{12} \\ A^{(0)}_{21} & A^{(0)}_{22} \end{bmatrix} & A^{(1)}_{12} \\ A^{(1)}_{21} & \begin{bmatrix} A^{(0)}_{33} & A^{(0)}_{34} \\ A^{(0)}_{43} & A^{(0)}_{44} \end{bmatrix} \end{bmatrix}$ (9).", "The low-rank approximation in Eq.", "(8) is applied to the off-diagonal blocks at each level.", "For example, $A^{(l)}_{12} \approx U^{(l)}_{12} (V^{(l)}_{12})^T$ (10), where $l = 0, 1$.", "To give a concrete example, suppose each entry in matrix $A$ has the analytical form $A_{i,j} = e^{S_{i,j}}$ (11), $S_{i,j} = 2 e^{-(i-j)^2} - 1$ (12), where $i, j = 0, 1, 2, ..., 15$ (see footnote 2).", "With the block hierarchy defined in Eq.", "(9), the size of the matrix block at level-1 and level-0 is $8 \times 8$ and $4 \times 4$, respectively.", "For tolerance $\epsilon = 10^{-3}$, one can verify that the numerical rank map of matrix $A$ is $\begin{bmatrix} \begin{bmatrix} 4 & 2 \\ 2 & 4 \end{bmatrix} & 2 \\ 2 & \begin{bmatrix} 4 & 2 \\ 2 & 4 \end{bmatrix} \end{bmatrix}$ (13), where the number in each block is the numerical rank of the corresponding block in Eq.", "(9).", "Note that matrix $A$ still has full numerical rank of 16 at a looser tolerance $10^{-1}$.", "So the standard low-rank approximation is ineffective in this case.", "But even this simple two-level H-matrix already offers a compression rate of 4/3, since storing an H-matrix with the rank map in Eq.", "(13) takes 192 entries (see footnote 3).", "In addition, one can verify that no entry $A_{i,j}$ in Eq.", "(11) is very small, since $S_{i,j} \in [-1, 1]$ in Eq.", "(12).", "Therefore, truncating off-diagonal entries of matrix $A$, as proposed in (Parmar et al., 2018), would produce a poor approximation.", "In practice, the number of levels is adapted to the underlying governing equations that result in matrix $A$, and it can easily be over 10 (Kapur and Long, 1997; Hackbusch, 2000; Zhu and White, 2005).", "In turn, this can substantially increase the compression rate.", "In general, the computation complexity of the H-Matrix is 
either $O(L)$ or $O(L \log L)$, depending on the underlying physics (Hackbusch, 1999, 2000).", "The Multigrid Method is a multi-level nested iterative method for solving large-scale sparse matrices resulting from discretized partial-differential equations (PDEs) (Briggs et al., 2000; Trottenberg et al., 2000).", "At its core are two simple but powerfully complementary ideas: relaxation and correction.", "Our proposed hierarchical attention only uses the correction scheme as a building block since there is no sparse matrix to relax on.", "The correction scheme has two components: restriction or coarsening, and interpolation or pro- (Footnote 2: Matrix $A$ in", "Eq. (11) is a symmetric Toeplitz matrix (Golub and Van Loan, 1996) and hence only has 16 unique entries.", "But we ignore this fact and treat $A$ as a general matrix here.", "Footnote 3: Each one of the four diagonal blocks at level-0 takes 16 entries.", "Each one of the four off-diagonal blocks at level-0 takes 16 entries.", "Each one of the two off-diagonal blocks at level-1 takes 32 entries.)", "longation.", "Consider a vector $v^h$ of scalar values defined on a set of $N$ grids with uniform interval $h$.", "The simplest coarsening is to take the average of the scalar values on each pair of grids, i.e., $v^{2h}_j = \frac{1}{2}(v^h_{2j} + v^h_{2j+1})$ (14), where $j = 0, 1, 2, ..., N/2 - 1$.", "The superscript in Eq.", "(14) indicates that the grid interval at these two levels is $h$ and $2h$, respectively.", "The simplest interpolation is to duplicate the value on each coarse grid to values on a pair of fine grids, i.e., $v^h_{2j} = v^{2h}_j$, $v^h_{2j+1} = v^{2h}_j$ (15), where $j = 0, 1, 2, ..., N/2 - 1$.", "The hierarchical low-rank structure like Eq.", "(13) turns out to be pervasive in many if not all physics phenomena.", "Much of the theoretical analysis by (Greengard and Rokhlin, 1987; Hackbusch, 1999) is concerned with quantifying such aspects.", "The key insight into these Multilevel Methods can be summarized as follows: perform no approximation for near interactions, and apply progressively lower-precision approximation for progressively longer-distance interactions.", "The simple case shown in Eq.", "(9)-(13) is a good example.", "To satisfy the tolerance of $10^{-3}$, we need full rank (no approximation) for the diagonal blocks (near interactions), higher-precision approximation (rank-2 vs full rank of 4) for the $4 \times 4$ off-diagonal blocks at level-0 (mid-distance) and lower-precision approximation (rank-2 vs full rank of 8) for the $8 \times 8$ off-diagonal blocks at level-1 (long-distance).", "In this section, we present some intuition to answer two important questions: 1) Does the hierarchical low-rank structure hold for the attention matrix A in Eq.", "(3)?", "2) What is the algorithm to efficiently compute the hierarchical low-rank structure?", "We only give an informal exposition of the hierarchical attention.", "The formal mathematical derivation is deferred to the Appendix.", "The error analysis in (Greengard and Rokhlin, 1987; Hackbusch, 1999) offers little direct insight since the attention matrix A in Eq.", "(3) is data dependent by definition and hence its analytical form like Eq.", "(11) and (12) is generally unknown.", "So gathering empirical evidence seems the only viable path to answer the first question listed above.", "The ablation studies by (Khandelwal et al., 2018) examine the effect of context words on a language model.", "Within the context range of about 200 tokens, word order is only relevant within the 20 most recent tokens or about a sentence.", "In the long-range context, 
order has almost no effect on performance, suggesting that the model maintains a high-level, rough semantic representation of faraway words.", "The observation is succinctly summarized by the title of the paper: \"sharp nearby, fuzzy far away\".", "Remarkably, this is in spirit very close to the key insight into the Multilevel Methods.", "A few recent attention-related studies have explored this direction with some success, such as word-level and sentence-level attentions in (Miculicich et al., 2018; Abreu et al., 2019), and sentence-level and paragraph-level attentions in (Liu and Lapata, 2019).", "Even though the proposed hierarchical attention in these studies only has two levels, as opposed to the ten or more levels typically used by the Multilevel Methods, the reported positive results are quite suggestive.", "We therefore hypothesize that the same hierarchical low-rank structure as shown in Eq. (13) might also hold for the attention matrix in many NLP tasks.", "We treat it as the inductive bias in the hierarchical attention mechanism proposed in this paper.", "As pointed out in (Goyal and Bengio, 2020), inductive biases encourage the learning algorithm to prioritise solutions with certain properties.", "Hence good benchmark performance delivered by a Transformer-based model with the proposed hierarchical attention can be regarded as positive evidence supporting the hierarchical low-rank structure hypothesis.", "In the standard definition of attention in Eq.", "(3) and (4), there is no preference given to any keys based on the sequence distance between a query and keys.", "The observation in (Khandelwal et al., 2018) clearly suggests that a distance-dependent attention mechanism should be a better alternative.", "We will take three steps to informally explain the hierarchical attention mechanism.", "First, the attention matrix blocks for nearby, mid-distance and long-distance attention are separated in section 5.2.1.", "This is the first step toward the distance-dependent attention mentioned above.", "Second, a token hierarchy is established in section 5.2.2.", "Third, the hierarchical attention is constructed in section 5.2.3. 5.2.1 Attention Partition. Consider a 16-word sentence in Fig. 1. The sentence is partitioned at three segment granularities.", "This induces a three-level partition of the attention matrix $A$ for the original sequence: $A = A^{(2)} + A^{(1)} + A^{(0)}$ (16), where $A^{(2)} = \begin{bmatrix} 0 & A^{(2)}_{12} \\ A^{(2)}_{21} & 0 \end{bmatrix}$ (17), $A^{(1)} = \begin{bmatrix} 0 & A^{(1)}_{12} & & \\ A^{(1)}_{21} & 0 & A^{(1)}_{23} & \\ & A^{(1)}_{32} & 0 & A^{(1)}_{34} \\ & & A^{(1)}_{43} & 0 \end{bmatrix}$ (18), and $A^{(0)} = \begin{bmatrix} A^{(0)}_{11} & A^{(0)}_{12} & & \\ A^{(0)}_{21} & A^{(0)}_{22} & A^{(0)}_{23} & \\ & \ddots & \ddots & \ddots \end{bmatrix}$ 
...", "(19)", "Note that the nonzero entries in A (0) , A (1) and A (2) are the same as the corresponding entries of matrix A in Eq.", "(3).", "Matrix block size of A (0) ij , A (1) ij and A (2) ij is 2 2 , 4 4 and 8 8 , respectively.", "Following the key insight into Multilevel Methods, we perform no approximation to any level-0 matrix block A (0) ij and apply a low-rank approximation to off-diagonal matrix blocks in A (1) and A (2) .", "If we set the numerical rank of all these blocks to 2, then we can assemble the three rank maps into a single rank map as 4 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 .", "this sentence is to illustrate how to set up token hierarchy level by level with aggregation", "a) Level-0: 16 tokens partitioned into 8 segments", "b) Level-1: 16 tokens partitioned in 4 segments", "c) Level-2: 16 tokens partitioned in 2 segments this sentence is to to set up token illustrate how hierarchy level by level with aggregation this sentence is to illustrate how to set up token hierarchy level by level with aggregation Figure 1: Token sequence partitions in three segment granularity.", "The hierarchical structure embodied by the predetermined rank map in Eq.", "(20) represents the inductive bias for the attention matrix A in Eq.", "(16).", "But this construction step is inefficient because we need to form the original attention matrix and then perform SVD to discover the low-rank approximation.", "To illustrate the notion of token hierarchy, consider the same 16-word sentence in Fig. 2. A simple 3-level binary-tree hierarchy can be set up by following the simple coarsening defined in Eq.", "(14): 1) At level-0, each one of the 16 words is mapped to its word embedding; 2) At level-1, each token (parent node) corresponds to a pair of adjacent words at level-0 (child nodes), which are shown inside each box.", "The embedding of each parent token is simply the average of its child token embeddings; 3) At level-2, each token (parent node) corresponds to one pair of adjacent tokens at level-1 (child nodes) or 4 adjacent words at level-0 (grand child nodes), which are shown inside each box.", "The embedding of each parent token is simply the average of its child token embeddings.", "In general, the height of the binary tree is O ( log 2 ( L ) and the total number of tree nodes is O (2 L ) , where L is the sequence length.", "We only need word embeddings for the leaf nodes since the embeddings of all other tree nodes can be recursively computed.", "The formal definition and notations of the recursion for query and key are detailed in section 6.1.", "It is clear from Fig. 2 that the embeddings of higher level tokens represent a coarser level representation of a larger chunk of the text.", "The tokens at different levels can be understood as multi-scale snapshots of the original token sequence at level-0.", "Hence this token hierarchy naturally induces a set of multi-scale attention matrices.", "Let A ( i ) be the attention matrix induced by the tokens at leveli .", "It is clear from Fig. 2 that the size of A (0) , A (1) and A (2) is 16 16 , 8 8 and 4 4 , respectively.", "This multi-scale viewpoint does not directly lead to a useful algorithm since matrix A (0) contains all the information and there is little additional information from A (1) and A (2) .", "A key step to arrive at the hierarchical attention is to apply the contextual sliding window at each hierarchy level.", "The tokens at each level are partitioned into segments of size 2 in Fig. 2. 
One way to implement the local attention is to allow each query token segment to attend to only two adjacent key token segments, one to its left and another to its right.", "At level-0, each query token segment also attends to the collocated key token segment.", "The token segment partition and local attention lead to a tri-diagonal block sparse matrix structure for $\tilde{A}^{(0)}$ and a bi-diagonal block sparse matrix structure for $\tilde{A}^{(1)}$ and $\tilde{A}^{(2)}$.", "Their sparsity patterns are: $\tilde{A}^{(0)}$ is block tri-diagonal with dense $2 \times 2$ blocks (21), $\tilde{A}^{(1)}$ is block bi-diagonal with dense $2 \times 2$ blocks (22), and $\tilde{A}^{(2)} = \begin{bmatrix} & 2 \\ 2 & \end{bmatrix}$ (23), where each 2 in the nonzero blocks indicates a dense block of size $2 \times 2$.", "It is clear that $\tilde{A}^{(0)}$ is identical to $A^{(0)}$ in Eq.", "(19).", "The efficiency gain comes from $\tilde{A}^{(2)}$ and $\tilde{A}^{(1)}$.", "Each nonzero entry in $\tilde{A}^{(2)}$ and $\tilde{A}^{(1)}$ captures the aggregated or coarse attention between two disjoint chunks of four and two tokens, respectively.", "Progressively larger token chunks lead to progressively lower-precision approximation to the original attention blocks.", "This is precisely the intention of the rank map in Eq.", "(20).", "We can now see that $\tilde{A}^{(2)}$ and $\tilde{A}^{(1)}$ provide an efficient way to approximate $A^{(2)}$ in Eq.", "(17) and $A^{(1)}$ in Eq.", "(18), respectively.", "The simple example in Fig. 2 can be easily generalized.", "Eq.", "(14) is used to coarsen or merge rows in matrices $Q$, $K$ and $V$ in Eq.", "(1).", "For sequence length $L = 2^{M+1}$, the coarsening establishes a binary tree of depth $M$ for $Q$, $K$ and $V$, respectively.", "Each tree node represents a matrix row and there are $2^{M+1-l}$ nodes or rows at level-$l$.", "To facilitate the discussion, we define a few hierarchy-related notations here.", "Let $Q^{(l)}$, $K^{(l)}$ and $V^{(l)}$ be coarsened versions of $Q$, $K$ and $V$ at level-$l$ in the binary tree.", "We note that $l = 0$ is a special case, which is defined as $Q^{(0)} = Q$, $K^{(0)} = K$, $V^{(0)} = V$ 
(24). Following Eq.", "(14), the recursion to coarsen $Q$, $K$ and $V$ is: $Q^{(l+1)}_j = \frac{1}{2}(Q^{(l)}_{2j} + Q^{(l)}_{2j+1})$ (25), $K^{(l+1)}_j = \frac{1}{2}(K^{(l)}_{2j} + K^{(l)}_{2j+1})$ (26), $V^{(l+1)}_j = V^{(l)}_{2j} + V^{(l)}_{2j+1}$ (27), where $l = 0, 1, ..., M-2$ and $j = 0, 1, 2, ..., 2^{M-l}$.", "It should be noted that the coarsening of $V$ in Eq.", "(27) does not have the averaging factor $\frac{1}{2}$.", "We defer more details on coarsening to Appendix Section A.1.", "Now we are ready to compute the nonzero entries in Eq.", "(21), (22) and (23) and construct the hierarchical attention matrix $\tilde{A}^{(l)}$.", "Substituting Eq.", "(25) and (26) into (4) and then into (3), we obtain $\tilde{A}^{(l)}_{ij} = e^{S^{(l)}_{ij}} = e^{Q^{(l)}_i (K^{(l)}_j)^T / \sqrt{d}}$ (28). Again, we note that $l = 0$ is a special case because $\tilde{A}^{(0)}_{ij} = A_{ij}$.", "The hierarchical matrix structure in Eq.", "(17), (18) and (19) naturally leads to a hierarchical approach to the matrix-matrix multiplication in Eq.", "(2) and the matrix-vector multiplication in Eq.", "(5).", "We use the matrix-matrix multiplication as an example since matrix-vector multiplication is just a special case of the matrix-matrix multiplication.", "In view of Eq.", "(17), (18) and (19), we write the matrix-matrix multiplication in Eq.", "(2) as $Y = AV \approx \tilde{A}^{(0)} V^{(0)} + \tilde{A}^{(1)} V^{(1)} + \tilde{A}^{(2)} V^{(2)} = Y^{(0)} + P^{(0)} (Y^{(1)} + P^{(1)} Y^{(2)})$ (29), where $Y^{(l)} = \tilde{A}^{(l)} V^{(l)}$, $l = 1, 2$ (30). We defer the detailed derivation of Eq.", "(29) to Appendix Section A.5 and A.6.", "To facilitate the description and the complexity analysis of the algorithm, we define a few more hierarchy-related notations.", "In addition to sequence length $L$, number of hierarchy levels $M$ and embedding or feature size $d$ in Eq.", "(1), the new notations include: 1) $N_r$: numerical rank of the off-diagonal blocks (for instance, 2 in Eq.", "(20)).", "This is also the diagonal block size at level-0; 2) $N^{(l)}_b$: number of blocks at level-$l$.", "Note that $L$ and $d$ are usually data-dependent hyper-parameters, while $N_r$ is the only model hyper-parameter responsible for our method's inductive bias.", "In turn, $N^{(l)}_b$ and $M$ are derived parameters, computed as: $N^{(0)}_b = \frac{L}{N_r}$, $N^{(l+1)}_b = \frac{N^{(l)}_b}{2}$ (31), $M = \log_2(N^{(0)}_b)$.", "It is important to note that only the diagonal blocks at level-0 and the super-diagonal and sub-diagonal blocks at level-$l$ are needed in applying the hierarchical attention matrix.", "This is clearly shown in Eq.", "(21)-(23).", "This means that only $N^{(l)}_b - 1$ super-diagonal and sub-diagonal blocks are computed at level-$l$.", "This is crucial to the overall linear complexity in run time and memory.", "We should also note that all matrix blocks in the coarse attention matrix $\tilde{A}^{(l)}$ have the same size $N_r \times N_r$.", "This is due to the rank map in Eq.", "(20).", "This is crucial for efficiency reasons since the single-instruction-multiple-data (SIMD) programming style supported by the dense linear algebra libraries for GPU and TPU encourages uniform tensor shapes.", "We summarize the main steps to construct and apply the hierarchical attention in Algorithm 1. Algorithm 1 (H-Transformer-1D). Input: $Q$ (query), $K$ (key), $V$ (value). Output: $Z$. Coarsen $Q$ using Eq.", "The computational cost for Algorithm 1 has two parts: 1. computing the coarse attention matrices $\tilde{A}^{(l)}$ in Eq. (28):", "(a) diagonal blocks at level-0: $d N_r^2 N^{(0)}_b$", "(b) super- and sub-diagonal blocks at level-$l$: $4 d N_r^2 (N^{(l)}_b - 1)$", "(c) total: $5 d L N_r = O(dL)$. 2. 
Computing matrix-matrix (MM) multiplication in Eq.", "(2) and matrix-vector (MV) multiplication in Eq.", "(5):", "(a) MM: $5 d L N_r$", "(b) MV: $5 L N_r$", "(c) total: $5(d+1) L N_r = O(dL)$. So the overall run time complexity of the hierarchical attention algorithm is $O(dL)$.", "Likewise, the memory complexity can be shown to be $O(dL)$ as well.", "We defer the detailed analysis to Appendix Sections A.5 and A.6.", "We have implemented the proposed hierarchical attention using Jax, an open-source library 5 for automatic gradient computation and linear algebra operations on GPUs and TPUs.", "All numerical operations in our algorithm use the Numpy native linear algebra functions supported by Jax.", "In all our experiments in this section, we use the standard Transformer architecture described in (Vaswani et al., 2017) as the backbone for our H-Transformer-1D model.", "Unless specified otherwise, the model parameters are: number of layers is 6, number of heads is 8, word embedding size is 512 and the feed-forward module (FFN) size is 2048.", "We follow the API for the standard multihead scaled dot-product attention implementation 6 so that we can perform a simple drop-in replacement of the standard multihead attention with our hierarchical attention implementation.", "This allows for an easy and fair comparison.", "The open-source Long-Range Arena (LRA) benchmark 7 has been proposed as a standard way to probe and quantify the capabilities of various xformer (long-range Transformer) architectures (Tay et al., 2020c).", "In our case, it also serves to highlight the effectiveness of the inductive bias inspired by the H-Matrix method, as well as the capability of our hierarchical attention to handle long sequences. (Footnotes: 5 https://github.com/google/jax; 6 https://github.com/google/flax/blob/master/flax/nn; 7 https://github.com/google-research/long-range-arena)", "The LRA has several desirable qualities that made us focus on it as a primary evaluation benchmark: generality (restricted to encoder-only tasks to accommodate most proposals); simplicity (no pretraining, no data augmentation allowed); difficulty (large headroom with existing approaches); long-input focus (so that modeling improvements in this area are visible); diversity (6 tasks, covering math, language, image, and spatial modeling); and lightweight (so that modeling improvements are measurable independently of the ability to train and run high-capacity models).", "The tasks that comprise LRA are: ListOps (sequences of arithmetical expressions of lengths of up to 2K that tests the ability to reason hierarchically while handling long context); Text (byte/character-level text classification at document level, which both simulates longer input sequences, with max length 4K, and increases the difficulty level); Retrieval (byte/character-level document retrieval, which simulates the ability to model document similarity as a score between two independently-encoded long input sequences, with max length 4K + 4K = 8K); Image (image classification based on the CIFAR-10 dataset, where an NxN image is flattened to a sequence of length N^2 pixels); Pathfinder (long-range spatial dependency task, with images consisting of two small (Table 2 residue; columns are Model, perplexity, parameters: (Dai et al., 2019) 21.8, 800M; (Baevski and Auli, 2019) 23.02, 1000M; (Dai et al., 2019) 23.5, 465M; (Baevski and Auli, 2019) 23.91, 465M; (Shazeer et al., 2018) 24.0, 4900M; Transformer baseline 30.04, 53M; Transformer baseline 24.8, 144M; H-Transformer-1D $N_r$ = 16 23.95, 53M; H-Transformer-1D $N_r$ = 16 20.25, 144M) Table 2: 
Experimental results on the One-Billion Word benchmark.", "circles and dash-line paths that either connect the two circles or not; image dimensions of 32x32 for a pixel sequence of length 1,024); Path-X (same as Pathfinder, but for image dimensions of 128x128 for a total pixel sequence of length 16,384).", "The default Transformer model parameters, such as the number of layers and number of heads, are pre-determined by the benchmark configuration for each task.", "The results obtained by our H-Transformer-1D model on the LRA benchmark are given in Table 1. Overall, the H-Transformer-1D model achieves 61.41 average accuracy, a +6.4 point improvement over the previous-best average performance from BigBird (Zaheer et al., 2020).", "We want to highlight ListOps, Text and Retrieval because they all involve long sequences and the H-Transformer-1D model improves SOTA performance by relatively large margins.", "These should be strong evidence to support our hypothesis in section 5.1 and validate the inductive bias due to the hierarchical attention.", "We have used Flax, an open-source library 8 for training neural networks, as the code base for the model training.", "Our H-Transformer-1D model uses the standard Transformer decoder implementation in Flax as the backbone.", "Only the attention is replaced with our hierarchical attention.", "We trained both the Transformer baseline and H-Transformer-1D on the One-Billion Word benchmark (Chelba et al., 2014).", "We tried different $N_r$ (numerical rank) in our H-Transformer-1D model. (Footnote 8: https://github.com/google/flax)", "These represent different inductive biases.", "We found that H-Transformer-1D with $N_r = 16$ generated text with quality comparable to that of the baseline Transformer.", "For both the Transformer baseline and H-Transformer-1D, we also tried two sets of model parameters: 1) embedding size is 512 and feed-forward module size is 2048, hence the parameter count is 53M; 2) embedding size is 1024 and feed-forward module size is 4096, hence the parameter count is 144M.", "The test perplexity results of these four models and various SOTA models are shown in Table 2. H-Transformer-1D delivers the lowest perplexity to date while using 5x smaller model capacity than that of the previous SOTA model Transformer-XL (Dai et al., 2019).", "This is further strong evidence to support our hypothesis in section 5.1 and validate the inductive bias due to the hierarchical attention.", "We have proposed a new Transformer attention using the inductive bias inspired by the H-Matrix.", "The new algorithm has linear complexity in run time and memory usage and is fully compatible with dense linear algebra libraries on GPU and TPU.", "The effectiveness of this new attention is demonstrated by the empirical evidence from the Long Range Arena benchmark and One-Billion Word language modeling.", "Future work includes applying the new attention to music and genomics, developing a proper inductive bias for cross-attention, and extending the one-dimensional hierarchical attention to 2D images." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain" ]
[ "Transformer-based pre-trained models, such as BERT, have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications.", "However, deploying these models can be prohibitively costly, as the standard self-attention mechanism of the Transformer suffers from quadratic computational cost in the input sequence length.", "To confront this, we propose FCA, a fineand coarse-granularity hybrid self-attention that reduces the computation cost through progressively shortening the computational sequence length in self-attention.", "Specifically, FCA conducts an attention-based scoring strategy to determine the informativeness of tokens at each layer.", "Then, the informative tokens serve as the fine-granularity computing units in self-attention and the uninformative tokens are replaced with one or several clusters as the coarse-granularity computing units in self-attention.", "Experiments on GLUE and RACE datasets show that BERT with FCA achieves 2x reduction in FLOPs over original BERT with <1% loss in accuracy.", "We show that FCA offers significantly better trade-off between accuracy and FLOPs compared to prior methods 1 .", "Transformer-based large pre-trained language models with BERT (Devlin et al., 2018) as a typical model routinely achieve state-of-the-art results on a number of natural language processing tasks (Yang et al., 2019; Liu et al., 2019; Clark et al., 2020), such as sentence classification (Wang et al., 2018), question answering (Rajpurkar et al., 2016, 2018), and information extraction (Li et al., 2020b).", "practicality, especially in the case of limited industry time and resources, such as Mobile Phone and AIoT.", "In addition, the excessive energy consumption and environmental impact caused by the computation of these models also raise the widespread concern (Strubell et al., 2019; Schwartz et al., 2020).", "To improve the efficiency of BERT, the mainstream techniques are knowledge distillation (Hin-ton et al., 2015) and pruning.", "Knowledge distillation aims to transfer the knowledge\" from a large teacher model to a lightweight student model. The student model is then used during inference, such as DistilBERT (Sanh et al., 2019). Pruning technique includes: (1) structured methods that prune structured blocks of weights or even complete architectural components in BERT, for example encoder layers (Zhang and He, 2020), (2) unstructured methods that dynamically drop redundant units, for example, attention head (Voita et al., 2019) and attention tokens (Goyal et al., 2020). However, both types of methods encounter challenges. For the former, a great distillation effect often requires an additional large teacher model and very complicated training steps (Jiao et al., 2019; Hou et al., 2020). For the latter, pruning methods discard some computing units, which inevitably causes information loss. In contrast to the prior approaches, we propose a self-motivated and information-retained technique, namely FCA, a fineand coarse-granularity hybrid self-attention that reduces the cost of BERT through progressively shortening the computational sequence length in self-attention. Specifically, FCA first evolves an attention-based scoring strategy to assign each token with the informativeness. 
Through analyzing the informativeness distribution at each layer, we conclude that maintaining full-length token-level representations is progressively redundant along the layers, especially for classification tasks that only require single-vector representations of sequences. Consequently, the tokens are divided into informative tokens and uninformative tokens according to their informativeness. Then, they are updated through different computation paths. The informative tokens carry most of the learned features and remain unchanged as the fine-grained computing units in self-attention. The uninformative tokens may not be as important as the informative ones, but we will not completely discard them, to avoid information loss. Instead, we replace them with more efficient computing units to save memory consumption. Experiments on the standard GLUE benchmark show that FCA accelerates BERT inference speed and maintains high accuracy as well. Our contributions are summarized as follows: We analyze the progressive redundancies in maintaining full-length token-level representations for the classification tasks. We propose a fine- and coarse-granularity hybrid self-attention, which is able to reduce the cost of BERT and maintain high accuracy. Experiments on the standard GLUE benchmark show that the FCA-based BERT achieves 2x reduction in FLOPs over the standard BERT with <1% loss in accuracy. 2 Related work There has been much prior literature on improving the efficiency of Transformers. The most common technologies include: Knowledge distillation refers to training a smaller student model using outputs from various intermediate representations of larger pre-trained teacher models. In the BERT model, there are multiple representations that the student can learn from, such as the logits in the final layer, the outputs of the encoder units, and the attention maps. The distillation on output logits is most commonly used to train smaller BERT models (Sanh et al., 2019; Sun et al., 2019; Jiao et al., 2019; Sun et al., 2020). The output tensors of encoder units contain meaningful semantic and contextual relationships between input tokens. Some work creates a smaller model by learning from the outputs of the teacher's encoder (Jiao et al., 2019; Sun et al., 2020; Li et al., 2020a). Attention map refers to the softmax distribution output of the self-attention layers and indicates the contextual dependence between the input tokens. A common practice of distillation on attention maps is to directly minimize the difference between the self-attention outputs of the teacher and the student (Jiao et al., 2019; Sun et al., 2020; Mao et al., 2020). This line of work is orthogonal to our approach, and our proposed FCA can be applied to the distilled models to further accelerate their inference speed. Pruning refers to identifying and removing less important weights or computation units. Pruning methods for BERT broadly fall into two categories. Unstructured pruning methods prune individual weights by comparing their absolute values or gradients with a pre-defined threshold (Mao et al., 2020; Gordon et al., 2020; Chen et al., 2020). The weights lower than the threshold are set to zero. Unlike unstructured pruning, structured pruning aims to prune structured blocks of weights or even complete architectural components in the BERT model. Voita et al. (2019) pruned attention heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty. Fan et al. 
(2019) randomly dropped Transformer layers to sample small sub-networks from the larger model during training, which are selected as the inference models. Goyal et al. (2020) progressively reduced sequence length by pruning word-vectors based on the attention values. This work is partly similar to the fine-grained computing units in our proposed FCA. However, they ignored the coarse-grained units, which may cause information loss. In addition, there are some engineering techniques to speed up inference, such as Mixed Precision (Micikevicius et al., 2017) and Quantization (Zafrir et al., 2019; Fan et al., 2020). Using half-precision or mixed-precision representations of floating points is popular in deep learning to accelerate training and inference speed. Quantization refers to reducing the number of unique values required to represent the model weights, which in turn allows representing them using fewer bits. 3 Preliminary BERT (Devlin et al., 2018) is a Transformer-based language representation model, which can be fine-tuned for many downstream NLP tasks, including sequence-level and token-level classification. The Transformer architecture (Vaswani et al., 2017) is a highly modularized neural network, where each Transformer layer consists of two sub-modules, namely the multi-head self-attention sub-layer (MHA) and the position-wise feed-forward network sub-layer (FFN). Both sub-modules are wrapped by a residual connection and layer normalization. MHA. The self-attention mechanism allows the model to identify complex dependencies between the elements of each input sequence. It can be formulated as querying a dictionary with key-value pairs. Formally, $\mathrm{MHA}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, ..., \mathrm{head}_h) W^O$ (1), where $Q$, $K$, and $V$ represent query, key, and value. $h$ is the number of heads. Each head is defined as: $\mathrm{head}_t = \mathrm{Attention}(Q W^Q_t, K W^K_t, V W^V_t) = \underbrace{\mathrm{softmax}(Q W^Q_t (K W^K_t)^T / \sqrt{d_K})}_{A} \, V W^V_t$ (2), where $W^Q_t \in \mathbb{R}^{d_h \times d_Q}$, $W^K_t \in \mathbb{R}^{d_h \times d_K}$, $W^V_t \in \mathbb{R}^{d_h \times d_V}$, and $W^O \in \mathbb{R}^{h d_V \times d_h}$ are learned parameters. $d_K$, $d_Q$, and $d_V$ are dimensions of the hidden vectors. The main cost of the MHA layer is the calculation of the attention mapping matrix $A \in \mathbb{R}^{n \times n}$ in Eq. 2, which is $O(n^2)$ in time and space complexity. This quadratic dependency on the sequence length has become a bottleneck for Transformers. FFN. The self-attention sub-layer in each of the layers is followed by a fully connected position-wise feed-forward network, which consists of two linear transformations with a GeLU (Hendrycks and Gimpel, 2016) activation in between. Given a vector $x_i$ in $[x_1, ..., x_n]$ output by the MHA sub-layer, FFN is defined as: $\mathrm{FFN}(x_i) = \mathrm{GeLU}(x_i W_1 + b_1) W_2 + b_2$ (3), where $W_1, W_2, b_1, b_2$ are learned parameters. Previous research (Ganesh et al., 2021) has shown that in addition to the MHA sub-layer, the FFN sub-layer also consumes large memory in terms of model size and FLOPs. As a result, if we reduce the computational sequence length of MHA, the input to and the consumption of the FFN sub-layer will become smaller accordingly. 
4 Methodologies To shorten the computational sequence length of self-attention, our core motivation is to divide tokens into informative and uninformative ones and replace the uninformative tokens with more efficient units. This section introduces each module of our model in detail. 4.1 Scoring Strategy Our strategy of scoring the informativeness of tokens is based on the self-attention map. Concretely, taking a single token vector $x_i$ as an example, its attention head $x^{(t)}_i$ is updated by: $x^{(t)}_i = \sum_{j=1}^{n} a_{i,j} x^{(t)}_j$ (Eq. 2). $a_{i,j}$ is an element in attention map $A$. Therefore, $a_{i,j}$ represents the information contribution from token vector $x_j$ to $x_i$ over head $t$. Intuitively, we define the informativeness of a token by accumulating along the columns of the attention map: $I^{(t)}(x_j) = \sum_{i=1}^{n} a_{i,j}$ (4). (Figure 2 residue removed here; the figure shows the granularity hybrid layer inserted after multi-head self-attention in layers 1 and 2, annotated with example attention weights such as 0.21, 0.02, 0.01, 0.04, 0.19, 0.11 and 0.32.)", "The overall informativeness of $x_j$ is defined as the average over the heads: $I(x_j) = \frac{1}{h} \sum_{t=1}^{h} I^{(t)}(x_j)$ (5).", "We next analyze some properties of the defined informativeness in BERT-base.", "The first sub-figure in Figure 1 displays the normalized variance and standard deviation of informativeness of layers from 1 to 12 on RTE (a classification dataset), which supports the phenomenon that the informativeness distributions at the bottom layers are relatively uniform and the top layers are volatile.", "The last five sub-figures further present the informativeness distributions of some BERT-base layers, where the first token is [CLS] and its representations are used for the final prediction.", "We can see that as the layers deepen, the informativeness is progressively concentrated on two tokens.", "This means that maintaining full-length token-level representations for the classification tasks may be redundant.", "A straightforward approach for reducing the sequence length of self-attention is to maintain the informative tokens and prune the rest.", "We argue that this approach is effortless but encounters the risk of information loss, especially for lower layers.", "Instead of pruning, we propose to process the uninformative tokens with more efficient units.", "Figure 2 shows the architecture of the FCA layer, which inserts a granularity hybrid sub-layer after MHA.", "At each layer, it first divides tokens into informative and uninformative ones based on their assigned informativeness.", "The CLS token is always assigned to the informative part as it is used to derive the final prediction.", "Let $x^{(l)}_{cls} \oplus X^{(l)}$ be the sequence of token vectors input to the $l$-th layer, where $X^{(l)} = [x^{(l)}_1, ..., x^{(l)}_n]$ and $n$ is the sequence length of $X^{(l)}$.", "We gather the token vectors from $X^{(l)}$ with the top-$k$ informativeness to form the informative sequence $X^{(l)}_{in}$ and the rest to form the uninformative sequence $X^{(l)}_{un}$, where $X^{(l)}_{in} \in \mathbb{R}^{k \times d_h}$ and $X^{(l)}_{un} \in \mathbb{R}^{(n-k) \times d_h}$.", "The length of the uninformative sequence is reduced by performing a certain type of aggregating operation along the sequence dimension, such as average pooling: $\tilde{X}^{(l)}_{un} = \mathrm{Pooling}(X^{(l)}_{un})$ (6), or weighted average pooling with informativeness as weights: $\tilde{X}^{(l)}_{un} = \mathrm{Pooling}(\mathrm{softmax}(I(x^{(l)}_{un})) \cdot X^{(l)}_{un})$ (7), where $x^{(l)}_{un}$ is a token vector in $X^{(l)}_{un}$.", "The aggregated sequence $\tilde{X}^{(l)}_{un} \in \mathbb{R}^{\hat{k} \times d_h}$, and $\hat{k}$ is a fixed parameter.", "After the hybrid layer, the token sequence is updated to $[x^{(l)}_{cls} \oplus X^{(l)}_{in} \oplus \tilde{X}^{(l)}_{un}]$ and the sequence length is shortened by $n - k - \hat{k}$.", 
"Therefore, in addition to the following layers, the computation cost of FFN in l -th layer is reduced as well.", "It should be noted that the relative position of uninformative tokens should be preserved, which contains their contextual features to a certain extent and they can be captured by aggregating operations.", "The parameter k is learnable and progressively shortened.", "Inspired by Goyal et al. (2020), we train n learnable parameters to determine the configuration of k , denoted R = [ r 1 , ..., r n ] .", "The parameters are constrained to be in the range, i.e., r i [0; 1] and added after MHA sub-layer.", "Given a token vector x i output by MHA, it is modified by: x i r pos ( x i ) x i (8) where pos ( x i ) is the sorted position of x i over informativeness.", "Intuitively, the parameter r i represents the extent to which the informativeness of the token at i -th position is retained.", "Then, for the l -th layer, we obtain the configuration of k l from the sum of the above parameters, i.e., k l = ceil ( sum ( l ; R )) s.t. k l +1 k l (9) 5 Loss Function Let be the parameters of the baseline BERT model and L ( ) be cross entropy loss or mean-squared error as defined in the original task.", "We adopt the multi-task learning idea to jointly minimize the loss in accuracy and total sequence length over all layers.", "where L is the number of layers.", "L ( , R ) controls the accuracy and sum ( l ; R ) controls the sequence length of l -th layer.", "The hyper-parameter tunes the trade-off.", "The training schema of our model involves three stages, which are given in Algorithm 1.", "Our experiments are mainly conducted on GLUE (General Language Understanding Evaluation) 2 (Wang et al., 2018) and RACE (Lai et al., 2017) datasets.", "GLUE benchmark covers four tasks: Linguistic Acceptability, Sentiment Classification, Natural Language Inference, and Paraphrase Similarity Matching.", "RACE is the Machine Reading Comprehension dataset.", "For experiments on RACE, we denote the input passage as P , the question as q , and the four answers as { a 1 , a 2 , a 3 , a 4 } .", "We concatenate passage, question and each answer as a input sequence [ CLS ] P [ SEP ] q [ SEP ] a i [ SEP ] , where [ CLS ] and [ SEP ] are the special tokens used in the original BERT.", "The representation of [ CLS ] is treated as the single logit value for each a i .", "Then, a softmax layer is placed on top of these four logits to obtain 2 https://gluebenchmark.com/ 4815 the normalized probability of each answer, which is used to compute the cross-entropy loss.", "The input length of BERT is set to 512 by default.", "However, the instances in these datasets are relatively short, rarely reaching 512.", "If we keep the default length settings, most of the input tokens are [PAD] tokens.", "In this way, our model can easily save computational resources by discriminating [PAD] tokens as the uninformative ones, which is meaningless.", "To avoid this, we constrained the length of the datasets.", "The statistic information of the datasets is summarized in Table 1.", "For accuracy evaluation, we adopt Matthew's Correlation for CoLA, F1-score for QQP, and Accuracy for the rest datasets.", "For efficiency evaluation, we use the number of floating operations (FLOPs) to measure the inference efficiency, as it is agnostic to the choice of the underlying hardware.", "We compare our model with both distillation and pruning methods.", "Distillation methods contain four models DistilBERT (Sanh et al., 2019), BERT-PKD (Sun et al., 
2019), Tiny-BERT (Jiao et al., 2019), and Mobile-BERT (Sun et al., 2020).", "All four models are distilled from BERT-base and have the same structure (6 Transformer layers, 12 attention heads, hidden vector dimension 768).", "Pruning methods contain FLOP (Wang et al., 2020), SNIP (Lin et al., 2020), and PoWER-BERT (Goyal et al., 2020).", "PoWER-BERT (Goyal et al., 2020) is the state-of-the-art pruning method, which reduces sequence length by eliminating word-vectors.", "To make fair comparisons, we set the length of our informative tokens at each layer equal to the sequence length of PoWER-BERT.", "We deploy BERT-base as the standard model, in which the number of Transformer layers $L = 12$, hidden size $d_h = 512$, and number of heads $h = 12$.", "All models are trained for 3 epochs.", "The batch size is selected from {16, 32, 64}.", "The model is optimized using Adam (Kingma and Ba, 2014) with learning rate in the range [2e-5, 6e-5] for the BERT parameters $\theta$ and [1e-3, 3e-3] for $R$.", "The hyper-parameter $\lambda$ that controls the trade-off between accuracy and FLOPs is set in the range [1e-3, 7e-3].", "We conducted experiments with a V100 GPU.", "The FLOPs for our model and the baselines were calculated with Tensorflow and batch size = 1.", "Table 3 and Table 2 display the accuracy and inference FLOPs of each model on the GLUE benchmark, respectively.", "As the FLOPs of PoWER-BERT are almost the same as those of FCA-BERT and the number of coarse units has little effect on FLOPs, Table 2 only lists the FLOPs of FCA-BERT.", "Comparison to BERT.", "The results demonstrate the high efficiency of our model, which has almost no performance gap with BERT-base (<1% accuracy loss) while reducing the inference FLOPs by half on the majority of datasets.", "Table 4 presents the sequence length of FCA at each layer, which illustrates the substantial reduction in computation length relative to standard BERT.", "For example, the input sequence length for the dataset QQP is 128.", "Hence, standard BERT needs to process 128*12=1536 tokens over the twelve layers.", "In contrast, FCA only tackles [85, 78, 73, 69, 61, 57, 54, 52, 46, 41, 35, 35], summing to 686 tokens.", "Consequently, the computational load of self-attention and the feed-forward network is economized significantly.", "Among our models, the weighted average pooling operation yields better performance than the average pooling operation.", "The number of coarse units contributes to model accuracy for both operations, especially for average pooling.", "This is reasonable, as when the number of coarse units increases, the information stored in each FCA layer gradually approaches that of the standard BERT.", "But too many coarse units increase FLOPs.", "Therefore, it is necessary to balance the impact of the coarse units on FLOPs and performance.", "Comparison to Baselines.", "We first compare our model to Distil-BERT.", "Our models dramatically outperform Distil-BERT in accuracy by a margin of at least 3 points in average score.", "As mentioned before, the distillation framework is orthogonal to our proposed method.", "We further investigate whether FCA is compatible with distillation models.", "Table 5 shows the results of Distil-BERT with FCA-Pool 5, which verify that FCA could further accelerate the inference speed on the basis of the distillation model with <1% loss in accuracy.", "As for the SOTA distillation models, Tiny-BERT and Mobile-BERT, our models still outperform them on average performance.", "Combined with the results of Table 2, where our models have slightly fewer (Table 3 header residue: Dataset, CoLA, RTE, QQP, 
SST-2, MNLI-m, QNLI, RACE, Avg.)", "inference FLOPs than the distillation methods, it can be concluded that FCA has better accuracy and computational efficiency than them.", "We next compare our model to the SOTA pruning model PoWER-BERT.", "Their acceleration effects are comparable, so we focus on comparing their accuracy.", "The results in Table 3 show that our models achieve better accuracy than PoWER-BERT on all datasets.", "This is because PoWER-BERT discards computing units, which inevitably causes information loss.", "Instead of pruning, the FCA layer stockpiles the information of uninformative tokens in a coarse fashion (via aggregating operations).", "Moreover, we noticed that coarse units are not always classified as uninformative.", "In other words, they sometimes participate in the calculation of self-attention as informative tokens.", "This shows that the total informativeness contained in the uninformative tokens cannot be treated as negligible and can be automatically learned by self-attention.", "In order to visually demonstrate the advantages of our model, Figure 3 draws accuracy-efficiency trade-off curves on three datasets.", "The results of FCA-BERT and PoWER-BERT are obtained by tuning the hyper-parameter $\lambda$.", "For DistilBERT, the points correspond to the distilled versions with 4 and 6 Transformer layers.", "It can be seen that with the decrease of FLOPs, (1) PoWER-BERT and our model outperform DistilBERT by a large margin; (2) our model consistently exhibits superiority over all the prior methods; (3) more importantly, the advantage of our model over PoWER-BERT gradually becomes apparent.", "This is because PoWER-BERT prunes plenty of computation units to save FLOPs, which results in information loss.", "In contrast, our model preserves all information to a certain extent.", "Extensions to Other PLMs.", "To explore the generalization capabilities of FCA, we extend FCA to other pre-trained language models (PLMs), such as DistilBERT, BERT-large, and ELECTRA-base (Clark et al., 2020).", "The test results are displayed in Table 5, which shows that FCA is applicable to a variety of models, regardless of model size and type.", "In this section, we explore whether we can forgo differentiating between tokens and simply perform average pooling on all tokens to reduce the computation cost.", "To make fair comparisons, we set the length of the pooled sequence at each layer equal to that of FCA-BERT-Pool 5.", "The results show that pooling all tokens decreases the model accuracy from 75.0 to 73.8.", "This is because the pooling operation weakens the semantic features learned by the informative tokens, which are often decisive for the final prediction.", "On the contrary, our model does not conduct pooling on informative tokens and instead delegates the burden of saving computational overhead to the uninformative tokens.", "This does not cause serious damage to the representative features learned by the model.", "In this section, we further investigate the extent to which these compressed models can retain the essential information of the original BERT.", "Concretely, we adopt the Euclidean distance of the CLS representation between BERT and the compressed models as the evaluation metric, which is proportional to the information loss caused by model compression; formally: $\mathrm{Distance}(A, B) = \sum_{k=1}^{M} \sqrt{\sum_{i=1}^{d_h} (A^{cls}_{i,k} - B^{cls}_{i,k})^2}$, where $M$ is the number of instances in the corresponding dataset.", "Table 4 shows the distance of 
the baselines and our models from the standard BERT.", "Combining the results in Table 3, it can be found that the distance is consistent with the test accuracy.", "A large distance leads to low accuracy and vice versa.", "This provides an inspiration: we can add a distance regularization term to the objective function to explicitly shorten the distance between the compressed model and the original BERT, i.e., $Loss_{\theta, R} = L(\theta, R) + \sum_{l=1}^{L} \lambda_l \, \mathrm{sum}(l; R) + \mathrm{Distance}(\theta)$. However, the experimental results show that the accuracy is not significantly improved.", "This may be because the information learned by the compressed model has reached the limit of how closely it can approach BERT, and the regularization term cannot further improve the potential of the compressed model.", "Our proposed FCA is dedicated to the classification tasks that only require single-vector representations, and it cannot be directly applied to tasks that require maintaining the full input sequence in", "the output layer, such as NER and extractive MRC.", "On these tasks, we make a modification: the FCA operation is performed only over $K$ and $V$ in self-attention, while the full length of $Q$ is maintained.", "Eq.", "2 is modified to: $\mathrm{head}_t = \mathrm{Attention}(Q W^Q_t, \mathrm{FCA}(K) W^K_t, \mathrm{FCA}(V) W^V_t)$ (11)", "We also attempted to maintain the full length of $K$ and $V$ and shorten $Q$, but the experimental results were unsatisfactory.", "In this paper, we propose FCA, a fine- and coarse-granularity hybrid self-attention that reduces the computation cost through progressively shortening the computational sequence length in self-attention.", "Experiments on GLUE and RACE datasets show that BERT with FCA achieves 2x reduction in FLOPs over original BERT with <1% loss in accuracy.", "Meanwhile, FCA offers a significantly better trade-off between accuracy and FLOPs compared to prior methods.", "We would like to thank the three anonymous reviewers for their useful feedback.", "This work is supported by the National Key Research and Development Program of China under Grant No. 2020AAA0108600." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "abstain", "abstain", "other", "other" ]
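To make the CLS-distance metric reconstructed in the preceding record concrete, here is a minimal sketch in Python. It is an illustration only, not the FCA authors' code; `cls_bert` and `cls_compressed` are hypothetical tensors holding the [CLS] representations collected from the original and the compressed model.

```python
# Minimal sketch of Distance(A, B) = sum_k sqrt( sum_i (A_cls[i,k] - B_cls[i,k])^2 ):
# the per-instance Euclidean distance between d_h-dimensional [CLS] vectors,
# summed over the M instances of a dataset. Tensors are hypothetical stand-ins.
import torch

def cls_distance(cls_bert: torch.Tensor, cls_compressed: torch.Tensor) -> float:
    # both tensors have shape (M, d_h)
    per_instance = torch.sqrt(((cls_bert - cls_compressed) ** 2).sum(dim=1))
    return per_instance.sum().item()

# Random stand-ins: a "compressed" model whose CLS vectors stay close to BERT's
M, d_h = 8, 768
a = torch.randn(M, d_h)
b = a + 0.01 * torch.randn(M, d_h)
print(cls_distance(a, b))  # small value -> little information loss
```

As the record notes, a larger value of this distance tracks lower downstream accuracy.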
[ "Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form.", "Using context can help, both for unseen and ambiguous words.", "Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in low-resource languages.", "In addition (as shown here), in a low-resource setting, a lemmatizer can learn more from n labeled examples of distinct words (types) than from n (contigu-ous) labeled tokens, since the latter contain far fewer distinct types.", "To combine the efficiency of type-based learning with the benefits of context, we propose a way to train a context-sensitive lemmatizer with little or no labeled corpus data, using inflection tables from the UniMorph project and raw text examples from Wikipedia that provide sentence contexts for the unambiguous UniMorph examples.", "Despite these being unambiguous examples, the model successfully generalizes from them, leading to improved results (both overall, and especially on unseen words) in comparison to a baseline that does not use context.", "Many lemmatizers work on isolated wordforms (Wicentowski, 2002; Dreyer et al., 2008; Rastogi et al., 2016; Makarov and Clematide, 2018b,a).", "Lemmatizing in context can improve accuracy on ambiguous and unseen words (Bergmanis and Goldwater, 2018), but most systems for context-sensitive lemmatization must train on complete sentences labeled with POS and/or morphological tags as well as lemmas, and have only been tested with 20k-300k training tokens (Chrupaa et al., 2008; Muller et al., 2015; Chakrabarty et al., 2017).", "1 1 The smallest of these corpora contains 20k tokens of Bengali annotated only with lemmas, which Chakrabarty et al. (2017) reported took around two person months to create.", "Intuitively, though, sentence-annotated data is inefficient for training a lemmatizer, especially in low-resource settings.", "Training on (say) 1000 word types will provide far more information about a language's morphology than training on 1000 contiguous tokens, where fewer types are represented.", "As noted above, sentence data can help with ambiguous and unseen words, but we show here that when data is scarce, this effect is small relative to the benefit of seeing more word types.", "2 Motivated by this result, we propose a training data augmentation method that combines the efficiency of type-based learning and the expressive power of a context-sensitive model.", "3 We use Lematus (Bergmanis and Goldwater, 2018), a state-of-the-art lemmatizer that learns from lemma-annotated words in their N -character contexts.", "No predictions about surrounding words are used, so fully annotated training sentences are not needed.", "We exploit this fact by combining two sources of training data: 1k lemma-annotated types (with contexts) from the Universal Dependency Treebank (UDT) v2.2 4 (Nivre et al., 2017), plus examples obtained by finding unambiguous word-lemma pairs in inflection tables from the Universal Morphology (UM) project 5 and collecting sentence contexts for them from Wikipedia.", "Although these examples are noisy and biased, we show that they improve lemmatization accuracy in experiments on 10 languages, and that the use of context helps, both overall and especially on unseen words.", "Lematus (Bergmanis and Goldwater, 2018) is a neural sequence-to-sequence model with attention", "2 Garrette et al. 
(2013) found the same for POS tagging.", "3 Code and data: https://bitbucket.org/ tomsbergmanis/data_augumentation_um_wiki 4 http://hdl.handle.net/11234/1-2837 5 http://unimorph.org inspired by the re-inflection model of Kann and Schutze (2016), which won the 2016 SIGMORPHON shared task (Cotterell et al., 2016).", "It is built using the Nematus machine translation toolkit, 6 which uses the architecture of Sennrich et al. (2017): a 2-layer bidirectional GRU encoder and a 2-layer decoder with a conditional GRU (Sen-nrich et al., 2017) in the first layer and a GRU in the second layer.", "Lematus takes as input a character sequence representing the wordform in its N -character context, and outputs the characters of the lemma.", "Special input symbols are used to represent the left and right boundary of the target wordform ( <lc> , <rc> ) and other word boundaries ( <s> ).", "For example, if N = 15 , the system trained on Latvian would be expected to produce the characters of the lemma cels (meaning road ) given input such as: s a k a <s> p a s v a l d b u <lc> c e l u <rc> u n <s> i e l u <s> r e g i s t r When N = 0 ( Lematus 0-ch ), no context is used, making Lematus 0-ch comparable to other systems that do not model context (Dreyer et al., 2008; Rastogi et al., 2016; Makarov and Clematide, 2018b,a).", "In our experiments we use both Lematus 0-ch and Lematus 20-ch (20 characters of context), which was the best-performing system reported by Bergmanis and Goldwater (2018).", "Our data augmentation method uses UM inflection tables and creates additional training examples by finding Wikipedia sentences that use the inflected wordforms in context, pairing them with their lemma as shown in the inflection table.", "However, we cannot use all the words in the tables because some of them are ambiguous: for example, Figure 1 shows that the form celi could be lemmatized either as cels or celis .", "Since we don't know which would be correct for any particular Wikipedia example, we only collect examples for forms which are unambiguous according to the UM tables.", "However, this method is only as good as the coverage of the UM tables.", "For example, if UM doesn't include a table for the Latvian verb celt , then the underlined forms in Table 1 would be incorrectly labeled as unambiguous.", "6 Code for Nematus: https://github.com/ EdinburghNLP/nematus , Code for Lematus: https://bitbucket.org/tomsbergmanis/lematus.git noun: cels noun: celis SG PL SG PL NOM cels celi celis celi GEN cela celu cela celu DAT celam celiem celim celiem ACC celu celus celi celus INS celu celiem celi celiem LOC cela celos cel celos VOC cel celi celi celi Table 1: Example UM inflection tables for Latvian nouns cels ( road ) and celis ( knee ).", "There are several other issues with this method that could potentially limit its usefulness.", "First, the UM tables only include verbs, nouns and adjectives, whereas we test the system on UDT data, which includes all parts of speech.", "Second, by excluding ambiguous forms, we may be restricting the added examples to a non-representative subset of the potential inflections, or the system may simply ignore the context because it isn't needed for these examples.", "Finally, there are some annotation differences between UM and UDT.", "7 Despite all of these issues, however, we show below that the added examples and their contexts do actually help.", "Baselines and Training Parameters We use four baselines: (1) Lemming 8 (Muller et al., 2015) is a context-sensitive system that uses 
log-linear models to jointly tag and lemmatize the data, and is trained on sentences annotated with both lemmas and POS tags.", "(2) The hard monotonic attention model ( HMAM ) 9 (Makarov and Clematide, 2018b) is a neural sequence-to-sequence model with a hard attention mechanism that advances through the sequence monotonically.", "It is trained on word-lemma pairs (without context) 7 Recent efforts to unify the two resources have mostly focused on validating dataset schema (McCarthy et al., 2018), leaving conflicts in word lemmas unresolved.", "We estimated (by counting types that are unambiguous in each dataset but have different lemmas across them) that annotation inconsistencies affect up to 1% of types in the languages we used.", "8 http://cistern.cis.lmu.de/lemming 9 https://github.com/ZurichNLP/ coling2018-neural-transition-based-morphology with character-level alignments learned in a preprocessing step using an alignment model, and it has proved to be competitive in low resource scenarios.", "(3) Our naive Baseline outputs the most frequent lemma (or one lemma at random from the options that are equally frequent) for words observed in training.", "For unseen words it outputs the wordform itself.", "(4) We also try a baseline data augmentation approach ( AE Aug Baseline ) inspired by Bergmanis et al. (2017) and Kann and Schutze (2017), who showed that adding training examples where the network simply learns to auto-encode corpus words can improve morphological inflection results in low-resource settings.", "The AE Aug Baseline is a variant of Lematus 0-ch which augments the UDT lemmatization examples by auto-encoding the inflected forms of the UM examples (i.e., it just treats them as corpus words).", "Comparing AE Aug Baseline to Lematus 0-ch augmented with UM lemma-inflection examples tells us whether using the UM lemma information helps more than simply auto-encoding more inflected examples.", "To train the models we use the default settings for Lemming and the suggested lemmatization parameters for HMAM.", "We mainly follow the hy-perparameters used by Bergmanis and Goldwater (2018) for Lematus; details are in Appendix B. Languages and Training Data We conduct preliminary experiments on five development languages: Estonian, Finnish, Latvian, Polish, and Russian.", "In our final experiments we also add Bulgarian, Czech, Romanian, Swedish and Turkish.", "We vary the amount and type of training data (types vs. 
tokens, UDT only, UM only, or UDT plus up to 10k UM examples), as described in Section 4.", "To obtain N UM-based training examples, we select the first N unambiguous UM types (with their sentence contexts) from shuffled Wikipedia sentences.", "For experiments with j > 1 examples per type, we first find all UM types with at least j sentence contexts in Wikipedia and then choose the N distinct types and their j contexts uniformly at random.", "Evaluation To evaluate models' ability to lemmatize wordforms in their sentence context, we follow Bergmanis and Goldwater (2018) and use the full UDT development and test sets.", "Unlike Bergmanis and Goldwater (2018), who reported token-level lemmatization exact match accuracy, we report type-level micro-averaged lemmatization ex-", "act match accuracy.", "This measure better reflects improvements on unseen words, which tend to be rare but are more important (since a most-frequent-lemma baseline does very well on seen words, as shown by Bergmanis and Goldwater (2018)).", "We separately report performance on unseen and ambiguous tokens.", "For a fair comparison across scenarios with different training sets, we count as unseen only words that are not ambiguous and are absent from all training sets/scenarios introduced in Section 4.", "Due to the small training sets, between 70-90% of dev set types are classed as unseen in each language.", "We define a type as ambiguous if the empirical entropy over its lemmas is greater than 0.1 in the full original UDT training splits.", "10 According to this measure, only 1.2-5.3% of dev set types are classed as ambiguous in each language.", "Significance Testing All systems are trained and tested on ten languages.", "To test for statistically significant differences between the results of two systems, we use a Monte Carlo method: for each set of results (i.e., a set of 10 numerical values) we generate 10000 random samples, where each sample swaps the results of the two systems for each language with a probability of", "0.5.", "We then obtain a p-value as the proportion of samples for which the average difference was at least as large as the difference observed in our experiments.", "Types vs. Tokens and Context in Very Low Resource Settings We compare training on the first", "10 This measure, adjusted ambiguity, was defined by Kirefu (2018), who noticed that many frequent wordforms appear to have multiple lemmas due to annotation errors.", "The adjusted ambiguity filters out these cases.", "1k tokens vs.
first 1k distinct types of the UDT training sets.", "Table 2 shows that if only 1k examples are available, using types is clearly better for all systems.", "Although Lematus does relatively poorly on the token data, it benefits the most from switching to types, putting it on par with HMAM and suggesting is it likely to benefit more from additional type data.", "Lemming requires token-based data, but does worse than HMAM (a context-free method) in the token-based setting, and we also see no benefit from context in comparing Lematus 20-ch vs Lematus 0-ch.", "So overall, in this very low-resource scenario with no data augmentation, context does not appear to help.", "Using UM + Wikipedia Only We now try training only on UM + Wikipedia examples, rather than examples from UDT.", "We use 1k, 2k or 5k unambiguous types from UM with a single example context from Wikipedia for each.", "With 5k types we also try adding more example contexts (2, 3, or 5 examples for each type).", "Figure 1 presents the results (for unseen words only).", "As with the UDT experiments, there is little difference between Lematus 20-ch and Lematus 0-ch in the smallest data setting.", "However, when the number of training types increases to 5k, the benefits of context begin to show, with Lematus 20-ch yielding a 1.6% statistically significant ( p < 0 . 001 ) improvement over Lematus 0-ch.", "The results for increasing the number of examples per type are numerically higher than the one-example case, but the differences are not statistically significant.", "It is worth noting that the accuracy even with 5k UM types is considerably lower than the accuracy of the model trained on only 1k UDT types (see Table 2).", "We believe this discrepancy is due to the issues of biased/incomplete data noted above.", "For example, we analyzed the Latvian data and found that the available tables for nouns, verbs, and adjectives give rise to 78 paradigm slots.", "The 17 POS tags in UDT give rise to about 10 times as many paradigm slots, although only 448 are present in the unseen words of the dev set.", "Of these, 197 are represented amongst the 1k UDT training types, whereas only 25 are included in the 1k UM training types.", "As a result, about 72% of the unseen types of dev set have no representative of their paradigm slot in 1k types of UM, whereas this figure is only 17% for the 1k types of UDT.", "Data Augmentation Although UM + Wikipedia examples alone are not sufficient to train a good lemmatizer, they might improve a low-resource baseline trained on UDT data.", "To see, we augmented the 1k UDT types with 1k, 5k or 10k UM Figure 2: Lematus 20-ch lemmatization accuracy for each language on all types in the dev sets.", "types with contexts from Wikipedia.", "Table 3 summarizes the results, showing that despite the lower quality of the UM + Wikipedia examples, using them improves results of all systems, and more so with more examples.", "Improvements are especially strong for unseen types, which constitute more than 70% of types in the dev set.", "Furthermore, the benefit of the additional UM examples is above and beyond the effect of auto-encoding (AE Aug Baseline) for all systems in all data scenarios.", "Considering the two context-free models, HMAM does better on the un-augmented 1k UDT data, but (as predicted by our results above) it benefits less from data augmentation than does Lematus 0-ch, so with added data they are statistically equivalent ( p = 0 . 
07 on the test set with 10k UM).", "More importantly, Lematus 20-ch begins to outperform the context-free models with as few as 1k UM + Wikipedia examples, and the difference increases with more examples, eventually reaching over 4% better on the test set than the next best model (Lematus 0-ch) when 10k UM + Wikipedia examples are used ( p < 0 . 001 ) This indicates that the system can learn useful contextual cues even from unambiguous training examples.", "Finally, Figure 2 gives a breakdown of Lematus 20-ch dev set accuracy for individual languages, showing that data augmentation helps consistently, although results suggest diminishing returns.", "Data Augmentation in Medium Resource Setting To examine the extent to which augmented data can help in the medium resource setting of 10k continuous tokens of UDT used in previous work, we follow Bergmanis and Goldwater (2018) and train Lematus 20-ch models for all ten languages using the first 10k tokens of UDT and compare them with models trained on 10k tokens of UDT augmented with 10k UM types.", "To provide a better comparison of our results, we report both the type and the token level development set accuracy.", "First Type accuracy: Ambig.", "of all, Table 4 shows that training on 10k continuous tokens of UDT yields a token level accuracy that is about 8% higher than when using the 1k types of UDT augmented with 10k UM typesthe best-performing data augmentation systems (see Table 3).", "Again, we believe this performance gap is due to the issues with the biased/incomplete data noted above.", "For example, we analyzed errors that were unique to the model trained on the Latvian augmented data and found that 41% of the errors were due to wrongly lemmatized words other than nouns, verbs, and adjectivesthe three POSs with available inflection tables in UM.", "For instance, improperly lemmatized pronouns amounted to 14% of the errors on the Latvian dev set.", "Table 4 also shows that UM examples with Wikipedia contexts benefit lemmatization not only in the low but also the medium resource setting, yielding statistically significant type and token level accuracy gains over models trained on 10k UDT continuous tokens alone (for both Unseen and All p < 0 . 001 ).", "We proposed a training data augmentation method that combines the efficiency of type-based learning and the expressive power of a context-sensitive lemmatization model.", "The proposed method uses Wikipedia sentences to provide contextualized examples for unambiguous inflection-lemma pairs from UniMorph tables.", "These examples are noisy and biased, but nevertheless they improve lemmatization accuracy on all ten languages both in low (1k) and medium (10k) resource settings.", "In particular, we showed that context is helpful, both overall and especially on unseen wordsthe first work we know of to demonstrate improvements from context in a very low-resource setting." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "other", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective" ]
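The Monte Carlo significance test described in the preceding record (10000 samples; per-language results of two systems swapped with probability 0.5) can be sketched as below. This is a hedged illustration, not the authors' code; the two accuracy lists are made-up numbers standing in for per-language results on the ten languages, and the one-sided comparison follows the description in the text.

```python
# Minimal sketch of the paired Monte Carlo (approximate permutation) test:
# swap the two systems' results per language with probability 0.5 and count
# how often the sampled mean difference is at least the observed one.
import numpy as np

def paired_mc_test(sys_a, sys_b, n_samples=10000, seed=0):
    rng = np.random.default_rng(seed)
    sys_a, sys_b = np.asarray(sys_a, float), np.asarray(sys_b, float)
    observed = (sys_a - sys_b).mean()
    hits = 0
    for _ in range(n_samples):
        swap = rng.random(len(sys_a)) < 0.5          # swap each language w.p. 0.5
        diff = np.where(swap, sys_b - sys_a, sys_a - sys_b).mean()
        if diff >= observed:
            hits += 1
    return hits / n_samples                          # p-value

acc_a = [78.1, 81.4, 80.2, 76.9, 79.3, 82.0, 77.5, 80.8, 79.9, 78.6]  # hypothetical
acc_b = [77.0, 80.9, 79.5, 76.1, 78.8, 81.2, 77.9, 80.1, 79.0, 77.8]  # hypothetical
print(paired_mc_test(acc_a, acc_b))
```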
[ "Modern deep learning models are notoriously opaque, which has motivated the development of methods for interpreting how deep models predict.", "This goal is usually approached with attribution methods, which assess the influence of features on model predictions.", "As explanation methods, attribution methods should be evaluated by how accurately they reflect the actual reasoning process of the model (faithfulness).", "Meanwhile, since the reasoning process of deep models is inaccessible, researchers design various evaluation methods to support their arguments.", "However, some crucial logic traps in these evaluation methods are ignored in most works, causing inaccurate evaluation and unfair comparison.", "This paper systematically reviews existing methods for evaluating attribution scores and summarizes the logic traps in these methods.", "We further conduct experiments to demonstrate the existence of each logic trap.", "Through both theoretical and experimental analysis, we hope to increase attention on the inaccurate evaluation of attribution scores.", "Moreover, with this paper, we suggest stopping the focus on improving performance under unreliable evaluation systems and starting efforts to reduce the impact of the proposed logic traps.", "The opaqueness of deep models has grown in tandem with their power (Doshi-Velez and Kim, 2017), which has motivated efforts to interpret how these black-box models work (Sundararajan et al., 2017; Belinkov and Glass, 2019).", "Post-hoc explanation aims to explain a trained model and reveal how the model arrives at a decision (Jacovi and Goldberg, 2020; Molnar, 2020).", "This goal is usually approached with attribution methods, which assess the influence of features on model predictions, as shown in Figure 1.
Recent years have witnessed an increasing number of attribution methods being developed.", "For example, Erasure-based method calculate attribution scores by measuring the change of output after removing target features (Li et al., 2016; Feng et al., 2018; Chen et al., 2020); Gradient-based method uses gradients to study the influence of features on model predictions (Sun-dararajan et al., 2017; Wallace et al., 2019; Hao et al., 2020); Meanwhile, these methods also received much scrutiny, arguing that the generated attribution scores are fragile or unreliable (Alvarez-Melis and Jaakkola, 2018; Pruthi et al., 2020; Wang et al., 2020; Slack et al., 2020).", "As an explanation method, the evaluation criteria of attribution methods should be how accurately it reflects the true reasoning process of the model (faithfulness), not how convincing it is to humans (plausibility) Jacovi and Goldberg (2020).", "Meanwhile, since the reasoning process of deep models is inaccessible, researchers design various evaluation methods to support their arguments, some of which appear valid and are widely used in the research field.", "For example, meaningful perturbation is used for making comparison in many works (Samek et al., 2016; Chen et al., 2019; DeYoung et al., 2020; Chen et al., 2020; Kim et al., 2020).", "The philosophy of meaningful perturbation is simple, i.e., modifications to the input instances, which are in accordance with the generated attribution scores, can bring about significant differences to model predictions if the attribution scores are faithful to the target system.", "works, causing inaccurate evaluation and unfair comparison.", "For example, we found that we can manipulate the evaluation results when using meaningful perturbation to make comparisons: by choosing the modification strategy, we can assign any of the three candidate attribution methods as the best method.", "The neglect of these traps has damaged the community in many aspects: First, the existence of logic traps will lead to an inaccurate evaluation and unfair comparison, making the conclusions unreliable; Second, the wide use of evaluation metrics with logic traps brings pressure to newly proposed works, requiring them to compare with other works using the same metrics; Last, the over-belief in existing evaluation metrics encourages efforts to propose more accurate attribution methods, notwithstanding the evaluation system is unreliable.", "In this paper, we systematically review existing methods for evaluating attribution scores and categorize them into classes.", "We summarize the logic traps in these methods and further conduct experiments to demonstrate the existence of each logical trap.", "Though strictly accurate evaluation metrics for attribution methods might be a unicorn which will likely never be found, we should not just ignore logic traps in existing evaluation methods and draw conclusions recklessly.", "Through both theoretical and experimental analysis, we hope to increase attention on the inaccurate evaluation of attribution scores.", "Moreover, with this paper, we suggest stopping focusing on improving performance under unreliable evaluation systems and starting efforts on reducing the impact of proposed logic traps.", "Evaluation 1 verifies the validity of the attribution scores by comparing them with the human problem-solving process.", "In this evaluation, works (e.g., Murdoch et al. (2018); Kim et al. (2020); Sun-dararajan et al. 
(2017)) often give examples consistent with human understandings to demonstrate the superiority of their proposed method.", "For example, as shown in Table 1, Murdoch et al. (2018) shows heat maps for a yelp review generated by different attribution techniques.", "They argue that the proposed method: Contextual decomposition, Method Heat Map Leave One Out used to be my favorite Integrated gradients used to be my favorite Contextual decomposition used to be my favorite Legend: Very Negative Negative Neutral Positive Very Positive Table 1: Heat maps for a portion of a yelp review generated by different attribution techniques.", "is better than others because only it can identify favorite' as positive and used to be' as negative, which is consistent with human understandings.", "Furthermore, resorting to human-annotated explanations, works can also evaluate attribution methods quantitatively in evaluation 1. For example, the SST-2 (Socher et al., 2013) corpus provides not only sentence-level labels, but also five-class word-level sentiment tags ranging from very negative to very positive.", "Thus, many works (Lei et al., 2016; Li et al., 2016; Tsang et al., 2020; Kim et al., 2020) perform quantitative evaluation of attribution scores by comparing them with the word-level tags in SST-2.", "First, we cannot completely deny the rationality of evaluation 1. Since many attribution methods work without any human-annotated information, such as erasure-based and gradient-based methods, the similarity between human-annotated explanations and generated attribution scores can be seen as drawing from the reasoning process of target models.", "However, since the deep model often rely on unreasonable correlations, even when producing correct predictions, attribution scores preposterous to humans may reflect the reasoning process of the deep model faithfully.", "Thus we cannot deny the validity of an attribution score through its inconsistency to human-annotated explanations and cannot use human-annotated explanations to conduct a quantitative evaluation.", "In experiment 1, we give an example to demonstrate that the model might rely on correlations inconsistent with human understandings to get the prediction: though trained with questions, a question-answering model could maintain the same prediction for a large ratio of samples when the", "We experiment on RACE (Lai et al., 2017), a large-scale question-answering dataset.", "As shown in Table 2, RACE requires the model to choose the right answer from candidate options according to the given question and document.", "Document: ...Many people optimistically thought industry awards for better equipment would stimulate the production of quieter appliances. It was even suggested that noise from building sites could be alleviated ...", "Question: What was the author's attitude towards the industry awards for quieter?", "Options: A. suspicious B. positive C. enthusiastic D. 
indifferent Table 2: A sample taken from the RACE dataset.", "We first train a model with BERT base (Devlin et al., 2019) as the encoder 1 with questions, and achieve 65.7% accuracy on the development set.", "Then, we replace the development set questions with empty strings and feed them into the trained model.", "Surprisingly, the trained MRC model maintained its original prediction on 64.0% of the samples (68.4% of correctly answered samples and 55.4% of wrongly answered samples).", "Moreover, we analyze the change in model confidence on these unchanged samples, where the probability on the predicted label is used as the confidence score.", "As shown in Figure 2, most of the samples show a confidence decrease smaller than 0.1, demonstrating that question information is not essential for the model to make predictions on these samples.", "Since question information is usually crucial for humans to answer the question, attribution scores that faithfully reflect the reasoning process of this model may be inconsistent with human annotations.", "Thus, it is improper to use human-annotated 1 Our implementations of experiment 1 and experiment 2 are based on the Huggingface transformer model hub ( https://github.com/huggingface/transformers ), and we use its default model architectures without change for the corresponding tasks.", "explanations as the ground truth to evaluate attribution methods.", "Most existing methods for quantitatively evaluating attribution scores can be summarized as evaluations based on meaningful perturbation.", "The philosophy of meaningful perturbation is simple, i.e., modifications to the input instances, which are in accordance with the generated attribution scores, can bring about significant differences to the target model's predictions if the attribution scores are faithful to the target model.", "For example, Samek et al. (2016); Nguyen (2018); Chen and Ji (2020) use the area over the perturbation curve (AOPC) (Samek et al., 2016) as their evaluation metric.", "Specifically, given the attribution scores of a set of features, AOPC(k) modifies the top k% of features and calculates the average change in the prediction probability as follows: $\mathrm{AOPC}(k) = \frac{1}{N}\sum_{i=1}^{N} \big\{ p(y \mid x_i) - p(y \mid x_i^{(k)}) \big\}$, where $y$ is the predicted label, $N$ is the number of examples, $p(y \mid \cdot)$ is the probability on the predicted class, and $x_i^{(k)}$ is the modified sample.", "A higher AOPC is better, which means that the features chosen by the attribution scores are more important; Feng et al. (2018); Petsiuk et al. (2018); Kim et al. (2020) use the area under the curve (AUC) to evaluate attribution scores.", "As shown in Figure 3, AUC plots the prediction probability as a curve over the number of modified features, where features are modified in order of their attribution scores.", "The argument is that if the attribution scores are faithful, the curve will drop rapidly, resulting in a small area under the curve.", "Besides these, many works (Shrikumar et al., 2017; Chen et al., 2019; Nguyen, 2018; DeYoung et al., 2020; Chen et al., 2020; Hao et al., 2020; Jiang et al., 2021) use similar metrics to perform evaluations and comparisons.", "The main difference between the evaluation metrics in these works is the modification strategy.", "For example, to evaluate word-level attribution scores for SST-2, Chen et al. (2020) uses deleting tokens as modification while Kim et al.
(2020) uses replacing tokens with tokens sampled from the distribution inferred by BERT.", "Evaluation methods based on meaningful perturbation can be seen as an attribution method too.", "For example, AOPC(k), which assesses the importance of k% features, can be seen as an attribution method calculating an attribution score for k% features.", "Specifically, when using deleting tokens as modification and narrowing the k% to one token, AOPC(k) degenerates into the basic attribution method: leave-one-out (Li et al., 2016).", "Thus, evaluation 2 uses an attribution method as the ground truth to evaluate the target attribution method, which measures the similarity between two attribution methods instead of faithfulness.", "Since meaningful perturbation assesses the importance of features by calculating output change after modifications, its results are mainly depend on how to conduct the modifications, which means different modification strategies might lead to different evaluation results.", "Evaluation 2 is widely used to compare attribution methods in the research field.", "Accordingly, the neglect of logic trap 2 has led to a high risk of unfair comparisons and unreliable conclusions.", "In experiment 2, we give an example of unfair comparisons in evaluation 2: the more similar the target attribution method to the modification strategy, the better the evaluation results.", "Specifically, by modifying the modification strategies in APOC and AUC, we can assign any of the three candidate attribution methods as the best method.", "We conduct experiments on on widely used SST-2 task of the GLUE benchmark (Wang et al., 2019)), and use BERT base as encoder to build the target model 1 Figure 4: The overview of LOO, Marg and HEDGE.", "Attribution Methods We experiment with three attribution methods: leave-one-out (LOO) (Li et al., 2016), HEDGE (Chen et al., 2020) and Marg (Kim et al., 2020).", "The schemes of these attribution methods are shown in Figure 4, LOO assign attribution scores to the target word good' by deleting it from the sentence and observing change in the model predictions; Marg marginalizes the target word good' out considering the likelihoods of all candidate words, which uses BERT to measure the likelihoods of candidate words to replace the target word; HEDGE builds hierarchical explanations by recursively detecting the weakest interactions and then dividing large text spans into smaller ones.", "HEDGE assign attribution scores to spans by using '[PAD]' token to replace other words in a sentence and measuring how far the prediction is to the prediction boundary.", "Evaluation metrics and Results We first evaluate three attribution methods with metrics drawn from Marg and HEDGE papers.", "Marg uses AUC as evaluation metrics and modifies words by gradually replacing them with a token sampled from the distribution inferred by BERT, denoted as AUC rep ; HEDGE uses AOPC as evaluation metrics and modifies words by deleting them directly, denoted as AOPC del .", "Both papers modify 20% of words in the sentence.", "The results are shown in Table 3.", "As shown in Table 3, Marg performs best in AUC rep while LOO performs best in AOPC del .", "Since the 5914 Method/Metric AOPC del AUC rep AOPC rep AUC del AOPC pad AUC pad LOO 0.541 0.666 0.378 0.526 0.935 0.896 HEDGE 0.466 0.702 0.324 0.638 0.978 0.984 Marg 0.477 0.617 0.391 0.588 0.928 0.874 Table 3: Evaluation results of three attribution methods.", "modification strategy of AOPC del is consistent with LOO, and that of AUC rep is most 
similar to Marg, the evaluation results are consistent with the inference in logic trap 2: the more similar the target evaluated method to the modification strategy, the better the evaluation results.", "Manipulate Evaluation Results We further conduct ablation experiments by changing the modification strategies in AOPC del and AUC rep .", "Concretely, we switched perturbing strategy in AOPC del and AUC rep and get new evaluation metrics: AOPC rep and AUC del .", "As shown in Table 3, different from the initial results, Marg performs best in APOC metric while LOO performs best in AUC metric.", "The opposite results demonstrate that evaluation results mainly depend on the modification strategies, and we can manipulate evaluation results by changing them.", "Moreover, we note that HEDGE performs worst in all four evaluation metrics.", "Thus, we further customize the modification strategy to HEDGE's advantage: padding unimportant features according to the attribution scores, denoted as AOPC pad and AUC pad .", "Not surprisingly, results in Table 3 show that HEDGE perform best in customized metrics.", "Summarization Because of the existence of logic trap 2, we can manipulate the evaluation results in evaluation 2 by changing the modification strategies, assigning any of the three candidate attribution methods as the best method.", "In fact, because we cannot simply assign a modification strategy as faithful, we should not use evaluation 2 to quantitatively evaluate attribution scores and make comparisons.", "Since the wide use of evaluation 2, the neglect of logic trap 2 has negatively impacted the research field for a long time.", "First, it brings a risk of unfair comparisons: works can customize evaluation metrics to their advantage and thus achieve the best performance.", "Second, the wide use of evaluation 2 also brings pressure to new proposed works, forcing them to make comparisons to others in such evaluation.", "In this evaluation, works evaluate attribution methods by examining the consistency of attribution scores for similar inputs.", "The philosophy of Evaluation 3 is that semantically similar inputs which share the same model predictions should have similar attribution scores if the attribution method is reliable.", "Evaluation 3 is often used to disprove the effectiveness of attribution methods by searching for counterexamples.", "For example, ExplainFooler (Sinha et al., 2021) attacks Integrated Gradients and (Sundararajan et al., 2017) and LIME (Sundararajan et al., 2017), which are two popular attribution methods in NLP, by searching adversarial sentences with different attribution scores.", "As shown in Figure 5, these adversarial sentences are semantically similar to the original sentence and share the same model predictions.", "However, the attribution scores of these sentences are very different from that of the original sentence.", "Sinha et al. (2021) observes the rank order correlation drops by over 20% when less than 10% of words are changed on average and thus draws the conclusion that Integrated Gradients and LIME are fragile.", "A lot of works (Alvarez-Melis and Jaakkola, 2018; Kindermans et al., 2019; Ghorbani et al., 2019; Ding and Koehn, 2021; Sinha et al., 2021) use evaluation 3 to examine the validity of existing attribution methods.", "For example, Ghorbani et al. 
(2019) argues that interpretations of neural networks are fragile by showing that systematic perturbations can lead to different interpretations without changing the label; Alvarez-Melis and Jaakkola (2018) argues that a crucial property that attribution methods should satisfy is robustness to local perturbations of the input.", "Logic Trap 3: The change in attribution scores maybe because the model reasoning process is really changed rather than the attribution method is unreliable.", "When solving similar samples like those shown in Figure 5, humans tend to use similar reasoning methods.", "However, deep models are not as robust enough as humans and often rely on unreasonable correlations.", "Semantically similar texts often cause different reasoning processes in deep models.", "For example, it is well known that deep models are vulnerable to adversarial samples (Goodfellow et al., 2015; Papernot et al., 2016).", "By deliberately adding some subtle interference that people cannot detect to the input sample, the target model will give a different prediction with high confidence.", "The success in adversarial attacks on deep models demonstrates similar inputs for humans can share very different reasoning processes in deep models.", "The main difference between attribution-attacking methods and model-attacking is that attribution-attacking methods require the model to give the same prediction for adversarial samples.", "However, giving the same prediction is very weak to constraint model reasoning because deep models have compressed the complicated calculation process into limited classes in the prediction.", "For example, there is always half probability of giving the same prediction for a binary classification task even with totally random reasoning.", "Thus, it is no surprise that attribution-attacking methods can find adversarial samples which share the same prediction label to the original sample yet have different attribution scores.", "The logic trap in evaluation 3 is that the change in attribution scores may be because the model reasoning process is really changed rather than the attribution method is unreliable.", "As shown in Fig-Figure 6: We use lines connecting inputs and outputs to represent the model reasoning process.", "(a) is a successful attack on the target model while", "(b) might be regarded as a successful attack on attribution methods, falling into the logic trap 3.", "ure 6.", "(b), an attribution method should generate different attribution scores for the original and adversarial samples if it faithfully reflects the model reasoning.", "However, it will be regarded as fragile or unreliable in evaluation 3.", "Unfortunately, existing works ignore this logic trap and propose various methods to attack attribution methods.", "Since the high susceptibility of deep models to adversarial samples, not surprisingly, all of these works get the same conclusion: existing attribution methods are fragile or unreliable.", "In experiment 3, we demonstrate that deep models can assign the same label to semantically similar samples yet use different reasoning.", "We experiment on widely used SST-2 and MNLI tasks of the GLUE benchmark (Wang et al., 2019)).", "MNLI requires the model to predict whether the premise entails the hypothesis, contradicts it, or is neutral.", "Model Since the attribution methods are defaulted as unreliable in evaluation 3, we cannot use existing attribution methods to judge whether the model reasoning is different.", "To solve the problem, we use a two-stage 
model framework, where the model first extracts a subset of inputs and gives prediction based only on the subset information.", "This way, we can observe whether the model reasoning is changed from the chosen subset, i.e., different subsets means the model chooses to use different 5916 Figure 7: The overview of the model scheme, which consists of two components: extractor and classifier.", "The overview of our model is shown in Figure 7.", "To guarantee that only the subset information is included in the classifier, we discretely select the words and pass words instead of the hidden states of the extractor to the classifier.", "Since gradients do not flow through discrete samples, we resort to HardKuma (Bastings et al., 2019) to jointly train the model, which gives support to binary outcomes.", "HardKuma allows for setting the percentage of selected words and is proved more effective and stable than REINFORCE (Williams, 1992) in such scenarios.", "We set the selection ratio as 20% for SST-2 and 40% for MNLI because larger ratios will not cause further performance improvement.", "Finally, We get 85.6% accuracy on SST-2 and 66.2/65.5 % accuracy on MNLI-m/mm.", "Adversarial Attack Method We use TextFooler (Jin et al., 2020) to generate adversarial samples.", "We use the same settings to Jin et al. (2020) to guarantee the semantical similarity of adversarial samples.", "The only difference is that we search for samples with minimal similarity in the selected subset instead of the model prediction.", "We guarantee that the model makes the same predictions, which is often used as the constraint for model reasoning in evaluation 3.", "We generate adversarial samples with 10% and 20% perturbation ratios.", "Results We use F1-score to compute the similarity score between subsets and report the Macro-averaging F1-score of the whole development set.", "2 Note that similar subsets are regarded as a necessary condition rather than a sufficient condition for the similar model reasoning process.", "A lower score is better, reflecting a larger difference in selected subsets.", "Note that since some words in original samples are replaced with their synonyms in adversarial samples, synonyms are seen as identical to their original words when evaluating.", "We evaluate all samples in the SST-2 development set and the first 1000 samples in MNLI-m/mm development sets.", "The results are shown in Table 4 As shown in Table 4, though semantically similar to the original samples and share the same model predictions, the adversarial samples can have subsets with low similarity to the original subset.", "Moreover, with a 10% perturbation ratio, 31.8% of samples in SST-2 have an adversarial subset with none word overlap with the original subset.", "This result increases to 50.5% with a 20% perturbation ratio.", "With no overlap between the two subsets, there is no way we can hypothesis the adversarial samples share similar model reasoning to the original samples.", "Summarization Though evaluation 3 seems reasonable, sharing similar semantics and the same model predictions is a weak constraint for similar model reasoning.", "Thus the change in attribution scores may come from different model reasoning instead of the instability of attribution methods.", "Because of deep models' high sensitivity to adversarial samples, works resorting to evaluation 3 all get the same conclusion that existing attribution methods are fragile or unreliable.", "We argue we should find a more strict constraint for model reasoning first 
instead of ignoring logic trap 3 and disproving attribution methods recklessly.", "Besides resorting to the methods in evaluation 3, there are works (Jain and Wallace, 2019; Wang et al., 2020; Slack et al., 2020) that dispute the reliability of attribution methods by replacing the target model on which the attribution methods operate.", "For example, Slack et al. (2020) trains an adversarial classifier e(x) to distinguish whether the inputs have been perturbed or not and then uses a different sub-model to process perturbed instances.", "Specifically, if we want to attack the LOO method, we can build a loo set from the original dataset and train e(x) in the following form: $e(x) = \begin{cases} f(x), & \text{if } x \in \text{original set} \\ \psi(x), & \text{if } x \in \text{loo set} \end{cases}$ This way, $\psi(x)$, a model irrelevant to the original model's predictions, is used when LOO calculates attribution scores, making the generated attribution scores meaningless.", "Slack et al. (2020) assert that the results of perturbation-based attribution methods such as LIME and SHAP (Lundberg and Lee, 2017) are easily attacked by their method.", "Similarly, Wang et al. (2020) add an extra model to the original model, which has uniform outputs but large gradients for some particular tokens, such as '[CLS]' in BERT.", "Since the extra model generates uniform outputs, it will not affect the predictions of the original model.", "However, the extra model's gradients are added to those of the original model and thus can confuse gradient-based attribution methods.", "The attack methods in Section 3.1 fool the attribution methods by designing a special structure, and they require attribution methods to be used in a black-box way.", "In this setting, the attribution methods are easily attacked and generate meaningless results.", "However, the question is: as a tool to help humans understand how deep models work, is it necessary to use attribution methods in a black-box way?", "Take the linear model as an example.", "The linear model is regarded as a white-box model, and humans do not need attribution methods to understand how it works.", "However, the understanding of a linear model is based on the analysis of its calculation process.", "Meanwhile, the deep model is regarded as a black-box model because its calculation process is too complicated for humans to understand, not because its calculation process is inaccessible.", "Thus, we believe there are no compelling reasons to require attribution methods to be used in a black-box way.", "The attacks in Wang et al. (2020); Slack et al.
(2020) will fail when humans use attribution methods with knowing the model structures.", "Since logic traps in existing evaluation methods can cause an inaccurate evaluation, we believe reducing the impact of these traps is the next question in the research field of post-hoc interpretations.", "In this section, we provide some thoughts for reducing the impact of logic trap 3: The change in attribution scores may be because the model reasoning process is changed rather than the attribution method is unreliable.", "To reduce the impact of this logic trap, we should try to guarantee the similarity in model reasoning when processing semantically similar inputs.", "In other words, we hope the target model used to test attribution methods more robustness to adversarial samples, which can be conducted through the following ways: 1 Enhancing the target model.", "The success of adversarial attacks on deep models motivates efforts to defend against such attacks.", "Thus, we can use these defense techniques, such as adversarial training (Tramr et al., 2018) and randomization (Xie et al., 2018), to enhance the target model and make it more robustness to adversarial samples.", "2 Excluding predictions with low confidence.", "The deep model will give a prediction for a sample regardless of whether knowing how to deal with it.", "The randomness of reasoning increases with the uncertainty in model decisions (Bella et al., 2010).", "Thus, we can guarantee the stability of model reasoning by excluding low-confident predictions.", "For example, we can resorting to Confidence Calibration techniques (Guo et al., 2017; Seo et al., 2019), which calculate confidence interval for a predicted response.", "The proposed logic traps in existing evaluation methods have been ignored for a long time and negatively affected the research field.", "Though strictly accurate evaluation metrics for evaluating attribution methods might be a unicorn which will likely never be found, we should not just ignore these logic traps and draw conclusions recklessly.", "With a clear statement and awareness of these logic traps, we should reduce the focus on improving performance under such unreliable evaluation systems 5918 and shift it to reducing the impact of proposed logic traps.", "Moreover, other aspects of the research field should give rise to more attention, such as the applications of attribution scores (denoising data, improving the model performance, etc.) and proposing new explanation forms.", "This work is supported by the National Key Research and Development Program of China (No.2020AAA0106400), the National Natural Science Foundation of China (No.61922085, No.61976211, No.61906196).", "This work is also supported by the Key Research Program of the Chinese Academy of Sciences (Grant NO. ZDBS-SSW-JSC006), the independent research project of National Laboratory of Pattern Recognition and in part by the Youth Innovation Promotion Association CAS." ]
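The AOPC metric reconstructed in the record above lends itself to a short sketch. This is an illustration under stated assumptions, not any paper's reference implementation: `predict_proba` stands in for the target model's probability on its predicted class, and `modify` stands in for one of the modification strategies the record discusses (deletion, replacement, or padding).

```python
# Minimal sketch of AOPC(k) = (1/N) * sum_i { p(y|x_i) - p(y|x_i^(k)) }:
# modify the top-k% features ranked by attribution score and average the
# drop in the predicted-class probability over all N examples.
from typing import Callable, List, Sequence

def aopc(inputs: List[Sequence[str]],
         attributions: List[Sequence[float]],
         predict_proba: Callable[[Sequence[str]], float],
         modify: Callable[[Sequence[str], List[int]], Sequence[str]],
         k: float = 0.2) -> float:
    total = 0.0
    for x, scores in zip(inputs, attributions):
        n_mod = max(1, int(len(x) * k))
        # indices of the top-k% features, ranked by attribution score
        top = sorted(range(len(x)), key=lambda i: scores[i], reverse=True)[:n_mod]
        total += predict_proba(x) - predict_proba(modify(x, list(top)))
    return total / len(inputs)
```

Note that swapping in a different `modify` (deleting tokens vs. BERT-based replacement vs. padding) changes which attribution method scores best, which is precisely the logic trap the record describes.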
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "objective", "objective", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "objective", "abstain", "other", "other" ]
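For the adversarial classifier e(x) reconstructed earlier in the same record, a toy sketch may help. Everything here is hypothetical (the length-based detector, `f`, and `psi` are stand-ins, not Slack et al.'s implementation); it only illustrates routing perturbed inputs to a task-irrelevant sub-model.

```python
# Toy sketch of the scaffolding e(x): behave like the task model f on natural
# inputs, but route inputs that look perturbed (here, LOO deletes one token,
# so the sequence is shorter than expected) to an unrelated sub-model psi.
from typing import Sequence

def f(x: Sequence[str]) -> int:
    return int("good" in x)        # stand-in task model

def psi(x: Sequence[str]) -> int:
    return len(x) % 2              # stand-in sub-model, unrelated to the task

def e(x: Sequence[str], expected_len: int = 4) -> int:
    return psi(x) if len(x) < expected_len else f(x)

print(e(["the", "movie", "was", "good"]))  # natural input -> f(x) = 1
print(e(["the", "movie", "was"]))          # LOO-perturbed -> psi(x), task-irrelevant
```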
[ "Despite transformers' impressive accuracy, their computational cost is often prohibitive to use with limited computational resources.", "Most previous approaches to improve inference efficiency require a separate model for each possible computational budget.", "In this paper, we extend PoWER-BERT (Goyal et al., 2020) and propose Length-Adaptive Transformer that can be used for various inference scenarios after one-shot training.", "We train a transformer with LengthDrop , a structural variant of dropout, which stochastically determines a sequence length at each layer.", "We then conduct a multi-objective evolutionary search to find a length configuration that maximizes the accuracy and minimizes the efficiency metric under any given computational budget.", "Additionally, we significantly extend the applicability of PoWER-BERT beyond sequence-level classification into token-level classification with Drop-and-Restore process that drops word-vectors temporarily in intermediate layers and restores at the last layer if necessary.", "We empirically verify the utility of the proposed approach by demonstrating the superior accuracy-efficiency trade-off under various setups, including span-based question answering and text classification.", "Code is available at https://github.com/clovaai/length-adaptive-transformer.", "Pre-trained language models (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2019; Yang et al., 2019; He et al., 2020) have achieved notable improvements in various natural language processing (NLP) tasks.", "Most of them rely on transformers (Vaswani et al., 2017), and the number of model parameters ranges from hundreds of millions to billions (Shoeybi et al., 2019; Raffel et al., 2019; Kaplan et al., 2020; Brown et al., 2020).", "Despite this high accuracy, excessive computational overhead during inference, both in terms of time and memory, has hindered its use in real applications.", "This level of excessive computation has further raised the concern over energy consumption as well (Schwartz et al., 2019; Strubell et al., 2019; Cao et al., 2020).", "Recent studies have attempted to address these concerns regarding large-scale transformers' computational and energy efficiency (see 7 for a more extensive discussion.) 
Among these, we focus on PoWER-BERT (Goyal et al., 2020) which progressively reduces sequence length by eliminating word-vectors based on the attention values as passing layers.", "PoWER-BERT establishes the superiority of accuracy-time trade-off over earlier approaches (Sanh et al., 2019; Sun et al., 2019; Michel et al., 2019).", "However, it requires us to train a separate model for each efficiency constraint.", "In this paper, we thus develop a framework based on PoWER-BERT such that we can train a single model that can be adapted in the inference time to meet any given efficiency target.", "In order to train a transformer to cope with a diverse set of computational budgets in the inference time, we propose to train once while reducing the sequence length with a random proportion at each layer.", "We refer to this procedure as LengthDrop, which was motivated by the nested dropout (Rip-pel et al., 2014).", "We can extract sub-models of shared weights with any length configuration without requiring extra post-processing nor additional fine-tuning.", "It is not trivial to find an optimal length configuration given the inference-time computational budget, although it is extremely important in order to deploy these large-scale transformers in practice.", "Once a transformer is trained with the proposed LengthDrop, we search for the length configuration that maximizes the accuracy given a computational budget.", "Because this search is combinatorial and has multiple objectives (accuracy and efficiency), in this work, we propose to use an evolutionary search algorithm, which further allows us to obtain a full Pareto frontier of accuracy-efficiency trade-off of each model.", "PoWER-BERT, which forms the foundation of the proposed two-stage procedure, is only applicable to sequence-level classification, because it eliminates some of the word vectors at each layer by design.", "In other words, it cannot be used for token-level tasks such as span-based question answering (Rajpurkar et al., 2016) because these tasks require hidden representations of the entire input sequence at the final layer.", "We thus propose to extend PoWER-BERT with a novel Drop-and-Restore process ( 3.3), which eliminates this inherent limitation.", "Word vectors are dropped and set aside, rather than eliminated, in intermediate layers to maintain the saving of computational cost, as was with the original PoWER-BERT.", "These set-aside vectors are then restored at the final hidden layer and provided as an input to a subsequent task-specific layer, unlike the original PoWER-BERT.", "The main contributions of this work are twofold.", "First, we introduce LengthDrop, a structured variant of dropout for training a single Length-Adaptive Transformer model that allows us to automatically derive multiple sub-models with different length configurations in the inference time using evolutionary search, without requiring any re-training.", "Second, we design Drop-and-Restore process that makes PoWER-BERT applicable beyond classification, which enables PoWER-BERT to be applicable to a wider range of NLP tasks such as span-based question answering.", "We empirically verify Length-Adaptive Transformer works quite well using the variants of BERT on a diverse set of NLP tasks, including SQuAD 1.1 (Rajpurkar et al., 2016) and two sequence-level classification tasks in GLUE benchmark (Wang et al., 2018).", "Our experiments reveal that the proposed approach grants us fine-grained control of computational efficiency and a superior 
"In this section, we review some of the building blocks of our main approach.", "In particular, we review transformers, which are a standard backbone in natural language processing these days, and PoWER-BERT, which was recently proposed as an effective way to train a large-scale but highly efficient transformer for sequence-level classification.", "A transformer is a particular neural network designed to work with a variable-length sequence input, implemented as a stack of self-attention and fully connected layers (Vaswani et al., 2017).", "It has recently become one of the most widely used models for natural language processing.", "Here, we give a brief overview of the transformer, which is the basic building block of the proposed approach.", "Each token x_t in a sequence of tokens x = (x_1, ..., x_N), representing the input text, is first turned into a continuous vector h^0_t ∈ R^H, which is the sum of the token and position embedding vectors.", "This sequence is fed into the first transformer layer, which returns another sequence of the same length, h^1 ∈ R^{N×H}.", "We repeat this procedure L times, for a transformer with L layers, to obtain h^L = (h^L_1, ..., h^L_N).", "We refer to each vector in the hidden sequence at each layer as a word vector, to emphasize that there exists a correspondence between each such vector and one of the input words.", "Although the transformer was first introduced for the problem of machine translation, Devlin et al. (2018) demonstrated that the transformer can be trained and used as a sentence encoder.", "More specifically, Devlin et al. (2018) showed that the transformer-based masked language model, called BERT, learns a universally useful parameter set that can be fine-tuned for any downstream task, including sequence-level and token-level classification.", "In the case of sequence-level classification, a softmax classifier is attached to the word vector h^L_1 associated with the special token [CLS], and the entire network, including the softmax classifier and BERT, is fine-tuned.", "For token-level classification, we use each h^L_t as the final hidden representation of the associated t-th word in the input sequence.", "This strategy of pre-training followed by fine-tuning, often referred to as transfer learning, is a dominant approach to classification in natural language processing.", "PoWER-BERT keeps only the topmost l_j word vectors at each layer j by eliminating redundant ones based on the significance score, which is the total amount of attention imposed by a word on the other words (Goyal et al., 2020).", "l_j is the hyper-parameter that determines how many vectors to keep at layer j.", "PoWER-BERT has the same model parameters as BERT, but extraction layers are interspersed after the self-attention layer in every transformer block (Vaswani et al., 2017).", "PoWER-BERT reduces inference time successfully, achieving a better accuracy-time trade-off than DistilBERT (Sanh et al., 2019), BERT-PKD (Sun et al., 2019), and Head-Prune (Michel et al., 2019).", "Despite the original intention of maximizing inference efficiency with minimal loss in accuracy, it is possible to set up PoWER-BERT to be both more efficient and more accurate than the original BERT, which was observed but largely overlooked by Goyal et al. (2020).",
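To make the extraction step concrete, the following is a minimal sketch of keeping the top-l_j word vectors by significance score. It is an illustration, not PoWER-BERT's actual implementation: the tensor shapes are assumptions, and the direction of the attention sum (attention a word imposes on the others) simply follows the wording above.

```python
import torch

def extract_top_l(hidden, attn, l_j):
    """Keep the l_j most significant word vectors after a layer.

    hidden: (batch, seq_len, dim) word vectors
    attn:   (batch, heads, seq_len, seq_len) attention probabilities,
            attn[b, h, i, j] = attention word i puts on word j
    """
    # Significance of word i: total attention it imposes on the other
    # words, summed over heads and key positions (per the text above).
    significance = attn.sum(dim=1).sum(dim=-1)      # (batch, seq_len)
    keep = significance.topk(l_j, dim=-1).indices   # (batch, l_j)
    keep, _ = keep.sort(dim=-1)                     # preserve word order
    idx = keep.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
    return hidden.gather(1, idx), keep              # (batch, l_j, dim)
```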
(2020).", "Training a PoWER-BERT model consists of three steps: (1) fine-tuning, (2) length configuration search, and (3) re-training.", "The fine-tuning step is just like the standard fine-tuning step of BERT given a target task.", "A length configuration is a sequence of retention parameters ( l 1 , l L ) , each of which corresponds to the number of word vectors that are kept at each layer.", "These retention parameters are learned along with all the other parameters to minimize the original task loss together with an extra term that approximately measures the number of retained word vectors across layers.", "In the re-training step, PoWER-BERT is fine-tuned with the length configuration fixed to its learned one.", "For each computational budget, we must train a separate model going through all three steps described above.", "Moreover, the length configuration search step above is only approximate, as it relies on the relaxation of retention parameters which are inherently discrete.", "This leads to the lack of guaranteed correlation between the success of this stage and true run-time.", "Even worse, it is a delicate act to tune the length configuration given a target computational budget because the trade-off is implicitly made via a regularization coefficient.", "Furthermore, PoWER-BERT has an inherent limitation in that it only applies to sequence-level classification because it eliminates word vectors in intermediate layers.", "In this section, we explain our proposed framework which results in a transformer that reduces the length of a sequence at each layer with an arbitrary rate.", "We call such a resulting transformer a Length-Adaptive Transformer.", "We train Length-Adaptive Transformer with LengthDrop which randomly samples the number of hidden vectors to be dropped at each layer with the goal of making the final model robust to such drop in the inference time.", "Once the model is trained, we search for the optimal trade-off between accuracy and efficiency using multi-objective evolutionary search, which allows us to use the model for any given computational budget without fine-tuning nor retraining.", "At the end of this section, we describe Drop-and-Restore process as a way to greatly in-crease the applicability of PoWER-BERT which forms a building block of the proposed framework.", "In short, we train a Length-Adaptive Transformer once with LengthDrop and Drop-and-Restore, and use it with an automatically determined length configuration for inference with any target computational budget, on both sequence-level and token-level tasks.", "Earlier approaches to efficient inference with transformers have focused on a scenario where the target computational budget for inference is known in advance (Sanh et al., 2019; Goyal et al., 2020).", "This greatly increases the cost of deploying transformers, as it requires us to train a separate transformer for each scenario.", "Instead, we propose to train one model that could be used for a diverse set of target computational budgets without re-training.", "Before each SGD update, LengthDrop randomly generates a length configuration by sequentially sampling a sequence length l i +1 at the ( i + 1) -th layer based on the previous layer's sequence length l i , following the uniform distribution U ((1 p ) l i , l i ) , where l 0 is set to the length of the input sequence, and p is the LengthDrop probability.", "This sequential sampling results in a length configuration ( l 1 , , l L ) .", "Length-Adaptive Transformer can be thought of as 
"LayerDrop From the perspective of each word vector, the proposed LengthDrop could be thought of as skipping the layers between the one where it was set aside and the final layer where it was restored.", "The word vector, however, does not have any information based on which it can determine whether it will be dropped at any particular layer.", "In our preliminary experiments, we found that this greatly hinders optimization.", "We address this issue by using LayerDrop (Fan et al., 2019), which skips each layer of a transformer uniformly at random.", "LayerDrop encourages each word vector to be agnostic to skipping any number of layers between when it is dropped and when it is restored, just like dropout (Srivastava et al., 2014) prevents hidden neurons from co-adapting with each other by randomly dropping them.", "Sandwich Rule and Inplace Distillation We observed in preliminary experiments that standard supervised training with LengthDrop does not work well.", "We instead borrow a pair of training techniques developed by Yu and Huang (2019), the sandwich rule and inplace distillation, for better optimization as well as better final generalization.", "At each update, we update the full model without LengthDrop as usual to minimize the supervised loss function.", "We simultaneously update n_s randomly-sampled sub-models (the sandwiches) and the smallest-possible sub-model, which corresponds to keeping only (1 - p) * l_i word vectors at each layer i, using knowledge distillation (Hinton et al., 2015) from the full model.", "Here, sub-models mean models with length reduction.", "They are trained to make their predictions close to the full model's predictions (inplace distillation); a sketch of this update is given below.", "After training a Length-Adaptive Transformer with LengthDrop, we search for appropriate length configurations for the possible target computational budgets that will be given at inference time.", "The length configuration determines the model's performance in terms of both accuracy and efficiency.", "In order to search for the optimal length configuration, we propose to use evolutionary search, similarly to Cai et al. (2019) and Wang et al. (2020a).",
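The following is a rough sketch of one training update with the sandwich rule and inplace distillation, reusing sample_length_config from the sketch above. The model(batch, lengths=...) interface and the batch["labels"] field are hypothetical, assumed only for illustration.

```python
import torch.nn.functional as F

def train_step(model, batch, optimizer, seq_len, num_layers, p=0.2, n_s=2):
    """One update: full model on the task loss, then the smallest and n_s
    random sub-models distilled toward the full model's predictions."""
    optimizer.zero_grad()

    # Sandwich rule, upper slice: the full model, ordinary supervised loss.
    full_logits = model(batch, lengths=None)           # assumed interface
    F.cross_entropy(full_logits, batch["labels"]).backward()
    teacher = F.softmax(full_logits.detach(), dim=-1)  # fixed teacher

    # Lower slice plus n_s random sub-models: inplace distillation.
    smallest = [max(1, int(seq_len * (1 - p) ** (i + 1)))
                for i in range(num_layers)]
    configs = [smallest] + [sample_length_config(seq_len, num_layers, p)
                            for _ in range(n_s)]
    for cfg in configs:
        sub_logits = model(batch, lengths=cfg)
        F.kl_div(F.log_softmax(sub_logits, dim=-1), teacher,
                 reduction="batchmean").backward()

    optimizer.step()  # gradients from all slices accumulate into one step
```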
(2020a).", "This procedure is efficient, as it only requires a single pass through the relatively small validation set for each length configuration, unlike re-training for a new computational budget which requires multiple passes through a significantly larger training set for each budget.", "We initialize the population with constant-ratio configurations.", "Each configuration is created by l i +1 = (1 r ) l i for each layer i with r so that the amount of computation within the initial population is uniformly distributed between those of the smallest and full models.", "At each iteration, we evolve the population to consist only of configurations lie on a newly updated efficiency-accuracy Pareto frontier by mutation and crossover.", "Mutation alters an original length configuration ( l 1 , , l L ) to ( l (cid:48) 1 , , l (cid:48) L ) by sampling l (cid:48) i from the uniform distribution U ( l (cid:48) i 1 , l i +1 ) with the probability p m or keeping the original length l (cid:48) i = l i , sweeping the layers from i = 1 to i = L .", "A crossover takes two length configurations and averages the lengths at each layer.", "Both of these operations are performed while ensuring the monotonicity of the lengths over the layers.", "We repeat this iteration G times while maintaining n m mutated configurations and n c crossover'd configurations.", "Repeating this procedure pushes the Pareto frontier further to identify the best trade-off between two objectives, efficiency and accuracy, without requiring any continuous relaxation of length configurations nor using a proxy objective function.", "The applicability of the PoWER-BERT, based on which our main contribution above was made, is limited to sequence-level classification because it eliminates word vectors at each layer.", "In addition to our main contribution above, we thus propose to extend the PoWER-BERT so that it is applicable to token-level classification, such as span-based question-answering.", "Our proposal, to which we refer as Drop-and-Restore, does not eliminate word vectors at each layer according to the length configuration but instead sets them aside until the final hidden layer.", "At the final hidden layer, these word vectors are brought back to form the full hidden sequence, as illustrated graphically in Figure 1. 
"At the final hidden layer, these word vectors are brought back to form the full hidden sequence, as illustrated graphically in Figure 1. 4 Experiment Setup Datasets We test the proposed approach on both sequence-level and token-level tasks, the latter of which would not have been possible with the original PoWER-BERT without the proposed Drop-and-Restore.", "We use MNLI-m and SST-2 from the GLUE benchmark (Wang et al., 2018), as was done to test PoWER-BERT earlier, for sequence-level classification.", "We choose them because standard training yields consistent accuracy scores on them, thanks to their sufficiently large training sets, which makes them reliable for verifying our approach.", "We use SQuAD 1.1 (Rajpurkar et al., 2016) for token-level classification.", "Evaluation metrics We use the number of floating-point operations (FLOPs) as the main metric to measure the inference efficiency given any length configuration, as it is agnostic to the choice of the underlying hardware, unlike other alternatives such as hardware-aware latency (Wang et al., 2020a) or energy consumption (Henderson et al., 2020).", "We later demonstrate that FLOPs and wall-clock time on GPU and CPU correlate well for the proposed approach, which is not necessarily the case for other approaches, such as unstructured weight pruning (Han et al., 2015; See et al., 2016).", "Pre-trained transformers Since BERT was introduced by Devlin et al. (2018), it has become standard practice to start from a pre-trained (masked) language model and fine-tune it for each downstream task.", "We follow the same strategy in this paper and test two pre-trained transformer-based language models, BERT-Base (Devlin et al., 2018) and DistilBERT (Sanh et al., 2019), which allows us to demonstrate that the usefulness and applicability of our approach are not tied to any specific architectural choice, such as the number of layers or the maximum input sequence length.", "Although we focus on BERT-based masked language models here, the proposed approach is readily applicable to any transformer-based model.", "Learning We train a Length-Adaptive Transformer with the LengthDrop probability and LayerDrop probability both set to", "0.2.", "We use n_s = 2 randomly sampled intermediate sub-models, in addition to the full model and the smallest model, for applying the sandwich learning rule.", "We start by fine-tuning the pre-trained transformer without Drop-and-Restore, just as Goyal et al. (2020) did with PoWER-BERT.", "We then continue fine-tuning it for another five epochs with Drop-and-Restore.", "This is unlike the three epochs recommended by Devlin et al. (2018), as learning progresses more slowly due to the higher level of stochasticity introduced by LengthDrop and LayerDrop.", "We use a batch size of 32, a learning rate of 5e-5 for SQuAD 1.1 and 2e-5 for MNLI-m and SST-2, and a maximum sequence length of 384 for SQuAD 1.1 and 128 for MNLI-m and SST-2.", "Search We run up to G = 30 iterations of evolutionary search, using n_m = 30 mutated configurations with mutation probability p_m =", "0.5 and n_c = 30 crossover'd configurations, to find the Pareto frontier of accuracy and efficiency.", "Efficiency-accuracy trade-off We use SQuAD 1.1 to examine the effect of the proposed approach on the efficiency-accuracy trade-off.", "When the underlying classifier is not trained with LengthDrop, as proposed in this paper, the accuracy drops even more dramatically as more word vectors are dropped at each layer.", "The difference between the standard transformer and Length-Adaptive Transformer is stark in Figure 2. This verifies the importance of training a transformer in a way that makes it malleable for inference-time re-configuration.",
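Given a length configuration, the FLOPs metric used throughout these comparisons can be estimated without running the model. The sketch below uses standard multiply-add accounting and BERT-Base shapes as assumptions; it is meant only to rank configurations, not to reproduce the paper's exact FLOPs counts.

```python
def approx_flops(lengths, hidden=768, ffn=3072):
    """Per-example FLOPs of an encoder under a length configuration,
    counting each multiply-add as two operations."""
    total = 0
    for l in lengths:
        total += 2 * l * hidden * hidden * 4  # Q, K, V, output projections
        total += 2 * l * l * hidden * 2       # attention scores and context
        total += 2 * l * hidden * ffn * 2     # the two feed-forward matmuls
    return total

# Compare configurations rather than absolute costs, e.g.:
# approx_flops([128] * 12) vs. approx_flops(sample_length_config(128, 12))
```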
"When the model is trained with the proposed LengthDrop, we notice the efficacy of the proposed approach of using evolutionary search to find the optimal trade-off between inference efficiency and accuracy.", "The trade-off curve from the proposed search strategy has a larger area-under-curve (AUC) than when constant-rate length reduction is used to meet a target computational budget.", "This demonstrates the importance of using both LengthDrop and evolutionary search.", "[Figure 2: Pareto curves of F1 score vs. log FLOPs (G) on SQuAD 1.1 (Rajpurkar et al., 2016); curves shown for BERT-Base, Length-Adaptive BERT-Base, Length-Adaptive BERT-Base ES, DistilBERT, Length-Adaptive DistilBERT, and Length-Adaptive DistilBERT ES.] We make a minor observation that the proposed approach ends up with a significantly higher accuracy than DistilBERT when enough computational", "budget is allowed for inference (log FLOPs > 10).", "This makes our approach desirable in a wide array of scenarios, as it does not require any additional pre-training stage, as DistilBERT does.", "With a severe constraint on the computational budget, the proposed approach can be used on DistilBERT to significantly improve efficiency without compromising accuracy.", "Maximizing inference efficiency We consider all three tasks, SQuAD 1.1, MNLI-m, and SST-2, and investigate how much efficiency can be gained by the proposed approach with a minimal sacrifice in accuracy.", "First, we look at how much efficiency can be gained without losing accuracy.", "That is, we use the length configuration that maximizes inference efficiency (i.e., minimizes FLOPs) while ensuring that the accuracy is above or equal to that of the standard approach without any dropping of word vectors.", "The results are presented in the rows marked Length-Adaptive in Table 1. For example, in the case of BERT-Base, the proposed approach reduces FLOPs by more than half across all three tasks.", "From Figure 2, we have observed that the proposed Length-Adaptive Transformer generalizes better than the standard base model in some cases.", "Thus, we try to maximize both inference efficiency and accuracy, in order to see whether the proposed algorithm can find a length configuration that both maximizes inference efficiency and improves accuracy.", "We present the results in the rows marked Length-Adaptive⋆ in Table 1. In all cases, Length-Adaptive Transformer achieves higher accuracy than a standard transformer while reducing FLOPs significantly.", "Although it is not apparent from the table, for MNLI-m and SST-2, the accuracy of the smallest sub-model is already greater than or equal to that of a standard transformer.", "FLOPs vs. Latency As has been discussed in the recent literature (see, e.g., Li et al. (2020); Chin et al.
(2020)), the number of FLOPs is not a perfect indicator of the real latency measured in wall-clock time, as the latter is affected by the combination of hardware choice and network architecture.", "To understand the real-world impact of the proposed approach, we study the relationship between the FLOPs obtained by the proposed procedure and the wall-clock time measured on both CPU and GPU, measuring both while varying the length configuration.", "As shown in Figure 3, FLOPs and latency exhibit near-linear correlation on GPU when the minibatch size is 16, and regardless of the minibatch size on CPU.", "In other words, the reduction in FLOPs translates into a corresponding reduction in wall-clock latency. [Table 1: Comparison of the standard Transformer and Length-Adaptive Transformer; each cell lists metric (F1 for SQuAD 1.1, Acc for MNLI-m and SST-2) and relative FLOPs. BERT-Base: Standard, SQuAD 1.1 88.5 (1.00x), MNLI-m 84.4 (1.00x), SST-2 92.8 (1.00x); Length-Adaptive⋆, 89.6 (0.89x), 85.0 (0.58x), 93.1 (0.36x); Length-Adaptive, 88.7 (0.45x), 84.4 (0.35x), 92.8 (0.35x). DistilBERT: Standard, 85.8 (1.00x), 80.9 (1.00x), 90.6 (1.00x); Length-Adaptive⋆, 86.3 (0.81x), 81.5 (0.56x), 92.0 (0.55x); Length-Adaptive, 85.9 (0.59x), 81.3 (0.54x), 91.7 (0.54x).]", "Convergence of search Although the proposed approach is efficient in that it requires only one round of training, it needs a separate search stage for each target budget.", "It is important for evolutionary search to converge quickly in terms of the number of forward sweeps over the validation set.", "As exemplified in Figure 4, evolutionary search converges after about fifteen iterations.", "Our framework enables a novel method for anytime prediction with adaptive sequence length given any transformer.", "Thus, our goal is not state-of-the-art classification accuracy, although our experimental results (§5) demonstrate that our method still attains a good accuracy level.", "We emphasize that other adaptive computation works (§7) are orthogonal to ours, meaning that the various adaptive dimensions (sequence length, depth, attention heads, hidden dimension, etc.) can be used jointly.", "In other words, even if other adaptive methods show better curves than ours, our method and theirs can boost each other when combined.", "We provide some comparison results with PoWER-BERT (not an anytime prediction method) and DynaBERT (Hou et al., 2020) (a concurrent adaptive computation method) as follows.", "Comparison with PoWER-BERT According to Goyal et al. (2020), PoWER-BERT achieves a", "2.6x speedup for MNLI-m and a", "2.4x speedup for SST-2 at the cost of 1% of their accuracy.", "Length-Adaptive Transformer obtains a", "2.9x speedup in terms of FLOPs without losing accuracy on MNLI-m and SST-2.", "Considering Figure 3, our speedup in execution time would be close to", "2.9x in the same setting as PoWER-BERT, where the time measurement is done with a batch size of 128 on GPU.", "This indicates that our model offers a better trade-off than PoWER-BERT, even with a single model.", "Comparison with DynaBERT According to Hou et al.
(2020), DynaBERT obtains a gain of", "+1.0,", "+0.1, and", "+0.4 in best accuracy on SQuAD 1.1, MNLI-m, and SST-2, respectively, while Length-Adaptive Transformer achieves a gain of", "+1.1,", "+0.6, and", "+0.3.", "These results imply that Length-Adaptive Transformer gives comparable (or better) performance to DynaBERT.", "The main purpose of the proposed algorithm is to improve the inference efficiency of a large-scale transformer.", "This goal has been pursued from various directions, and here we provide a brief overview of these earlier and some concurrent attempts in the context of the proposed approach.", "Weight pruning Weight pruning (Han et al., 2015) focuses on reducing the number of parameters, which directly reflects the memory footprint of a model and indirectly correlates with inference speed.", "However, the actual speed-up in runtime is usually not significant, especially when executing a model with parallel computation on GPU devices (Tang et al., 2018; Li et al., 2020).", "Adaptive architecture There are three major axes along which computation can be reduced in a neural network: (1) input size/length, (2) network depth, and (3) network width.", "The proposed approach, based on PoWER-BERT, adaptively reduces the input length as the input sequence is processed by the transformer layers.", "To the best of our knowledge, Goyal et al. (2020) is the first work in this direction for transformers.", "Funnel-Transformer (Dai et al., 2020) and multi-scale transformer language models (Subramanian et al., 2020) also successfully reduce the sequence length in the middle and rescale to full length for the final computation.", "However, their inference complexity is fixed, unlike PoWER-BERT's, because they are not designed to control efficiency.", "More recently, TR-BERT (Ye et al., 2021) introduces a policy network trained via reinforcement learning to decide which vectors to skip.", "LayerDrop (Fan et al., 2019) drops random layers during training to be robust to pruning, inspired by Huang et al. (2016).", "Word-level adaptive depth in Elbayad et al. (2019) and Liu et al.
(2020b) might seemingly resemble length reduction, but word vectors that have reached their maximal layer are used for self-attention computation without being updated themselves.", "Exiting a network early (Teerapittayanon et al., 2016; Huang et al., 2017) based on the confidence of the prediction (Xin et al., 2020, 2021; Schwartz et al., 2020; Liu et al., 2020a; Li et al., 2021) also offers control over the accuracy-efficiency trade-off by changing a threshold, but it is difficult to tune a threshold for a desired computational budget because the adaptive computation is example-wise.", "Slimmable neural networks (Yu et al., 2018; Lee and Shin, 2018) reduce the hidden dimension for anytime prediction.", "DynaBERT (Hou et al., 2020) can run at adaptive width (the number of attention heads and intermediate hidden dimension) and depth.", "Hardware-aware Transformers (Wang et al., 2020a) construct a design space with arbitrary encoder-decoder attention and heterogeneous layers in terms of different numbers of layers, attention heads, hidden dimensions, and embedding dimensions.", "SpAtten (Wang et al., 2020b) performs cascade token and head pruning for an efficient algorithm-architecture co-design.", "Structured dropout A major innovation we introduce over the existing PoWER-BERT is the use of stochastic, structured regularization to make a transformer robust to the choice of length configuration at inference time.", "Rippel et al. (2014) propose nested dropout to learn ordered representations.", "Similar to LengthDrop, it samples an index from a prior distribution and drops all units with an index larger than the sampled one.", "Search There has been a series of attempts at finding the optimal network configuration by solving a combinatorial optimization problem.", "In computer vision, Once-for-All (Cai et al., 2019) uses an evolutionary search (Real et al., 2019) to find a better configuration along the dimensions of depth, width, kernel size, and resolution given a computational budget.", "Similarly but differently, our evolutionary search is multi-objective, finding length configurations on the Pareto accuracy-efficiency frontier to cope with any possible computational budget.", "Moreover, we only change the sequence length of the hidden vectors instead of architectural model dimensions.", "Sequence Length Shortformer (Press et al., 2020), initially trained on shorter subsequences and then moved to longer ones, achieves improved perplexity over a standard transformer with normal training while reducing overall training time.", "Novel architectures with efficient attention mechanisms (Kitaev et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020; Ainslie et al., 2020; Choromanski et al., 2020; Peng et al., 2021) have been suggested to reduce the transformer's quadratic computational complexity in the input sequence length.", "Tay et al. (2020b) and Tay et al.
(2020a) provide a survey of these efficient transformers and a benchmark comparison of them, respectively.", "In this work, we propose a new framework for training a transformer once and using it for efficient inference under any computational budget.", "With the help of training with LengthDrop and the Drop-and-Restore process, followed by evolutionary search, our proposed Length-Adaptive Transformer allows any given transformer model to be used with any inference-time computational budget, for both sequence-level and token-level classification tasks.", "Our experiments on SQuAD 1.1, MNLI-m and SST-2 have revealed that the proposed algorithmic framework significantly pushes forward the Pareto frontier of the trade-off between inference efficiency and accuracy.", "Furthermore, we have observed that the proposed Length-Adaptive Transformer can achieve up to a 3x speed-up over the standard transformer without sacrificing accuracy, both in terms of FLOPs and wall-clock time.", "Although our approach finds an optimal length configuration of a trained classifier per computational budget, it leaves an open question whether the proposed approach could be further extended to support per-instance length configurations, for instance by training a small auxiliary neural network for each computational budget.", "Yet another aspect we have not investigated in this paper is the applicability of the proposed approach to sequence generation, such as machine translation.", "We leave both of these research directions for the future.", "Our approach is effective, as we have shown in this paper, and also quite simple to implement on top of existing language models.", "We release our implementation at https://github.com/clovaai/length-adaptive-transformer, which is based on HuggingFace's Transformers library (Wolf et al., 2019), and plan to adapt it to a broader set of transformer-based models and downstream tasks, including other modalities (Dosovitskiy et al., 2020; Touvron et al., 2020; Gulati et al., 2020).", "The authors appreciate Clova AI members and the anonymous reviewers for their constructive feedback.", "Specifically, Dongyoon Han and Byeongho Heo introduced relevant works and gave insights from the view of the computer vision community.", "We use the Naver Smart Machine Learning platform (Sung et al., 2017; Kim et al., 2018) for the experiments." ]
[ "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "objective", "abstain", "method", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "method", "result", "objective", "abstain", "objective", "other", "other", "objective", "other", "other", "abstain", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "result", "other", "other", "other", "other" ]
[ "We present a novel approach for determining learners' second language proficiency which utilizes behavioral traces of eye movements during reading.", "Our approach provides stand-alone eyetracking based English proficiency scores which reflect the extent to which the learner's gaze patterns in reading are similar to those of native English speakers.", "We show that our scores correlate strongly with standardized English proficiency tests.", "We also demonstrate that gaze information can be used to accurately predict the outcomes of such tests.", "Our approach yields the strongest performance when the test taker is presented with a suite of sentences for which we have eyetracking data from other readers.", "However, it remains effective even using eyetracking with sentences for which eye movement data have not been previously collected.", "By deriving proficiency as an automatic byproduct of eye movements during ordinary reading, our approach offers a potentially valuable new tool for second language proficiency assessment.", "More broadly, our results open the door to future methods for inferring reader characteristics from the behavioral traces of reading.", "It is currently estimated that over 1.5 billion people are learning English as a Second Language (ESL) worldwide.", "Their learning progress is commonly evaluated with classroom tests prepared by language instructors, quizzes in language learning software such as Duolingo and Rosetta Stone, and by official standardized language proficiency tests such as TOEFL, IELTS, MET and others.", "In high stakes scenarios, official language proficiency tests are the de-facto standards for language assessment; they are accepted by educational and professional institutions, and are taken by millions of language learners every year (for example, in 2016 over three million people took the IELTS test (IELTS, 2017)).", "These tests probe language proficiency based on performance on various linguistic tasks, including grammar and vocabulary exams, reading and listening comprehension questions, as well as essay writing and speaking assignments.", "Despite their ubiquity, traditional approaches to language proficiency testing have several drawbacks.", "First, such tests are typically prepared manually and require extensive resources for test development.", "Moreover, their validity can be undermined by test specific training, prior knowledge of the evaluation mechanisms (Powers et al., 2002), as well as plain cheating via unauthorized access to test materials.", "Further, the utilized testing and evaluation methodologies vary across different tests, and test materials are in most cases inaccessible to the research community.", "Perhaps most crucially, the reliance of these tests on the end products of linguistic tasks makes it challenging to study learners' language processing patterns and the difficulties they encounter in real time.", "In this work we propose a novel methodology for language proficiency assessment which marks a significant departure from traditional language proficiency tests and addresses many of their drawbacks.", "In our approach, we determine language proficiency from broad coverage analysis of eye movements during reading of free-form text in a foreign language , a special case of the general problem of inferring comprehender characteristics and cognitive state from the measurable traces of real-time language processing.", "Our framework does not require the test taker to prepare for the test or to perform any hand-crafted 
linguistic tasks, but simply to attentively read an arbitrary set of sentences.", "To the best of our knowledge, this work is the first to propose and implement such an approach, yielding a novel language proficiency evaluation scheme which relies solely on ordinary reading.", "Our framework builds on previous research in psycholinguistics demonstrating that the eyetracking record reflects how readers interact with the text and how language processing unfolds over time (Frazier and Rayner, 1982; Rayner, 1998; Rayner et al., 2012).", "In particular, it has been shown that key aspects of the reader's characteristics and cognitive state, such as mind wandering during reading (Reichle et al., 2010), dyslexia (Rello and Ballesteros, 2015) and native language (Berzak et al., 2017), can be inferred from their gaze record.", "Despite these advances, the potential of the rich and highly informative behavioral signal obtainable from human reading for automated inference about readers, and specifically about their linguistic proficiency, has thus far been largely unutilized.", "Here, we first introduce EyeScore, an independent measure of ESL proficiency which reflects the extent to which a learner's English reading patterns resemble those of native speakers.", "Second, we present a regression model which uses gaze features to predict the learner's scores on specific external proficiency tests.", "We address each of our tasks in two data regimes: Fixed Text, which requires eyetracking training data for the specific sentences presented to the test taker, and the more general and challenging Any Text regime, where the test taker is presented with arbitrary sentences for which no previous eyetracking data is available.", "To enable prediction mechanisms in both regimes, we utilize previously proposed gaze features, and develop new linguistically and psychologically motivated feature sets which capture the interaction between eye movements and linguistic properties of the text.", "We demonstrate the effectiveness of our approach via score comparison to standardized English proficiency tests.", "Our primary benchmark test, taken in lab by 145 ESL participants, is the grammar and listening sections of the Michigan English Test (MET), whose scores range from 0 to 50.", "EyeScore yields 0.5 Pearson's correlation to MET in the Fixed Text regime, and 0.48 in the Any Text regime.", "Our regression model for predicting MET scores from eye movement features obtains a correlation of 0.7 and a Mean Absolute Error (MAE) of 3.31 points in the Fixed Text regime, and a 0.49 correlation and 4.11 MAE in the Any Text regime.", "Our results are substantially stronger than a baseline using only raw reading speed, and are reasonably close to the correlations among traditional proficiency tests.", "These outcomes confirm the promise of the proposed methodology to reliably measure language proficiency.", "This paper is structured as follows.", "Section 2 describes the data and the experimental setup.", "In section 3 we delineate our feature sets for characterizing eye movements in human reading.", "Section 4 introduces EyeScore, a second language proficiency metric which is based on similarity of reading patterns to native speakers.", "In section 5 we use eyetracking patterns to predict scores on MET and TOEFL.", "In section 6 we survey related work.", "Finally, we conclude and discuss future work in section 7.", "Our study uses the dataset of eye movement records and English proficiency scores introduced in Berzak et
al. (2017), which we describe here in brief.", "The dataset contains gaze recordings of 37 native English speakers and 145 ESL speakers belonging to four native language backgrounds: 36 Chinese, 36 Japanese, 36 Portuguese and 37 Spanish.", "Participants were presented with free-form English sentences appearing as one-liners.", "To encourage attentive reading, each sentence was followed by a yes/no comprehension question.", "During the experiment participants held a controller with buttons for indicating sentence reading completion and answering the sentence comprehension questions.", "Participants' eye movements were recorded using a desktop-mount EyeLink 1000 eyetracker (SR Research) at a sampling rate of 1000Hz.", "An experimental trial for a sentence starts with the presentation of a target circle at the center left of a blank screen.", "A 300ms fixation on this circle triggers a one-liner sentence on a new screen, starting at the same location.", "After completing reading the sentence, participants are presented with the letter Q on a blank screen.", "A 300ms fixation on this letter triggers a question about the sentence on a new screen.", "(The data was collected under IRB approval, and all the participants provided written informed consent.) Participants provide a yes/no answer to the question and are subsequently informed if", "they answered correctly.", "The first trial of the experiment was presented to familiarize participants with the experimental setup, and is discarded from the analysis.", "Each participant read a total of 156 English sentences, randomly drawn from the Wall Street Journal Penn Treebank (WSJ-PTB) (Marcus et al., 1993).", "The maximal sentence length was set to 100 characters, yielding an average sentence length of 11.4 words.", "All the sentences include the manual PTB annotations of POS tags (Santorini, 1990) and phrase structure trees, as well as Google universal POS tags (Petrov et al., 2012) and dependency trees obtained from the Universal Dependency Treebank (UDT) (McDonald et al., 2013).", "Half of the 156 sentences presented to each participant belong to the Fixed Text regime, and the other half belong to the Any Text regime.", "Sentences from the two regimes were interleaved randomly and presented to all participants in the same order.", "Fixed Text In this regime, all the participants read the same suite of 78 pre-selected sentences (900 words).", "The Fixed Text regime supports token-level comparisons of reading patterns for specific words in the same contexts across readers.", "It enables the construction of a proficiency test which relies on a fixed battery of reading materials for which previous eyetracking data was collected.", "Any Text In the second, Any Text regime, different participants read different sets of 78 sentences each (880 words on average).", "This regime generalizes the Fixed Text scenario; predicting reader characteristics in this regime requires formulating type-level abstractions that allow meaningful comparisons of reading patterns across different sentences.", "It corresponds to a proficiency test in which the sentences presented to the test taker are completely arbitrary, and no prior eyetracking data is available for them.", "Participants additionally provided standardized benchmarks of their English proficiency.", "Michigan English Test (MET) Our primary indicator of English proficiency is the listening and grammar sections of the MET (Form B), which were administered by Berzak et al.
(2017) in-lab, and taken by all 145 non-native participants upon completion of the reading experiment.", "The test has a total of 50 multiple choice questions, comprising 20 listening comprehension questions and 30 written grammar questions.", "The test score is computed as the number of correct answers to these questions, with possible scores ranging from 0 to 50.", "The mean MET score in the dataset is 41.46 (std 6.27).", "TOEFL Berzak et al. (2017) also collected self-reported scores on the most recently taken official English proficiency test, which we use here as a secondary evaluation benchmark.", "We focus on the most commonly reported test, the TOEFL-iBT, whose scores range from 0 to 120.", "We take into account only test results obtained less than four years prior to the experiment, yielding 33 participants.", "We sum the scores of the reading and listening sections of the test, with a total possible score range of 0 to 60.", "In cases where participants reported only the overall score, we divided that score by two.", "We further augment this data with 20 participants who took the TOEIC Listening and Reading test within the same four-year range, resulting in a total of 53 external proficiency scores.", "The TOEIC scores were converted to the TOEFL scale by fitting a third-degree polynomial on an unofficial score conversion table between the tests.", "The converted scores were then divided by two.", "Henceforth we refer to both TOEFL-iBT scores and TOEIC scores converted to the TOEFL-iBT scale as TOEFL scores.", "The mean TOEFL score is 47.6 (std 9.55).", "The Pearson's r correlation between the TOEFL and MET scores in the dataset is 0.74.", "We divide the ESL speakers into training/development and test sets in the following manner.", "For MET, we split our 145 ESL participants into a training/development set of 88 participants and a test set of 57 participants.", "The test set consists of an entire held-out native language, 36 speakers of Portuguese, as well as 7 participants randomly sampled from each of the remaining three native languages.", "(The score conversion used the unofficial table at http://theedge.com.hk/conversion-table-for-toefl-ibt-pbt-cbt-tests/; although both TOEFL and TOEIC are administered by the same company (ETS), to the best of our knowledge there is no publicly available official conversion table between the two tests.) Our test set is thus particularly challenging due to the large fraction of participants belonging to the held-out language, a design which emphasizes", "generalization to language learner populations which are not part of the training set.", "Figure 1 presents a schematic overview of our MET split.", "For TOEFL, due to the limited available data, in Section 4 we report EyeScore correlations for all 53 test takers, and in Section 5 we perform regression experiments using leave-one-out cross validation.", "In order to capture behavioral psycholinguistic traces of language proficiency we utilize several linguistically and psychologically motivated feature representations of eye movements in reading.", "We include features introduced in prior work (see Words in Fixed Context and Syntactic Clusters (Berzak et al., 2017)) as well as newly developed feature sets (see Word Property Coefficients and Transitions).", "All our features rely on the well-established division of gaze trajectories into fixations (stops) and saccades (movements between fixations) that characterizes human reading (Rayner, 1998).", "Our fixation-based features make use of several standard metrics of fixation times, defined below.",
"First Fixation duration (FF) Duration of the first fixation on a word.", "First Pass duration (FP) Time spent from first entering a word to first leaving it (includ-ing re-fixations within the word).", "Total Fixation duration (TF) The sum of all fixation times on a word.", "Regression Path duration (RP) Time from first entering a word until proceeding to its right.", "Our feature sets are divided into two groups.", "The first group consists of type-level features, applicable both in the Any Text and Fixed Text regimes.", "The second group of feature sets is token-based and can be extracted only in the Fixed Text regime, because it presupposes the same textual input for all participants.", "This new feature set quantifies the influence of three key word characteristics on reading times of individual readers: word length, word frequency and surprisal.", "The last measures the difficulty of processing a word in a sentence (Hale, 2001; Levy, 2008), and is defined as its negative log probability given a sentential context: surprisal ( w i | w 1 ...i 1 ) = log( w i | w 1 ...i 1 ) (1) In the reading literature, these three characteristics were suggested as the most prominent linguistic factors influencing word reading times (e.g. Inhoff and Rayner, 1986; Rayner and Well, 1996; Pollatsek et al., 2008; Kliegl et al., 2004; Rayner et al., 2004, 2011; Smith and Levy, 2013; Luke and Christianson, 2016); whereby longer, less frequent and contextually less predictable words are fixated longer.", "To derive this feature set, we measure length as the number of characters in the word.", "Word (log) frequencies are obtained from the BLLIP-WSJ corpus (Charniak et al., 2000).", "Estimates of surprisal are obtained from a trigram language model with Chen and Goodman's modified Kneser-Ney smoothing trained on the BLLIP-WSJ using SRILM (Stolcke et al., 2002).", "We then fit for each participant four regression models that use these three word characteristics to predict the word's raw FF, FP, TF and RP durations.", "The regression models are fitted using Ordinary Least Squares (OLS).", "We also train a logistic regression model for predicting word skips.", "Finally, we extract the weights and intercepts of these models and encode them as features.", "As each of the five models has three coefficients and one intercept term, the resulting WP-Coefficients feature set has 20 features.", "Following Berzak et al. 
"Syntactic Clusters (S-Clusters) Following Berzak et al. (2017), we extract average word reading times clustered by POS tags and", "syntactic functions.", "We utilize three metrics of reading times: FF, FP and TF durations.", "We then cluster words according to three types of syntactic criteria: Google Universal POS tags, PTB POS tags, and the syntactic function label connecting the word to its head word.", "To derive the feature set, we average the word fixation times of each cluster.", "An example of an S-Cluster feature is the average TF duration for words with the PTB POS tag DT.", "We take into account only cluster labels that appear at least once in the reading input of all the participants, yielding a total of 312 S-Clusters features in the Fixed Text regime.", "In the Any Text regime we obtain 156 S-Clusters features for MET and 165 S-Clusters features for TOEFL.", "Transitions is a new feature set which summarizes the sequence of saccades between words in a sentence.", "Given a sentence with n words, we construct an n × n matrix T.", "A matrix entry t_{i,j} records the number of saccades whose launch site falls within word i and whose landing site falls within word j.", "With a total of 11,616 possible transitions in the Fixed Text sentences, the resulting feature set contains 9,077 features with a non-zero value for at least one participant for MET, and 8,132 such features for TOEFL.", "Words in Fixed Context (WFC) This feature set was previously used in Berzak et al. (2017) and consists of reading times for words within fixed contexts.", "We extract FP and TF durations for the 900 words in the Fixed Text sentences, resulting in a total of 1,800 WFC features.", "We hypothesize that language proficiency influences the way that learners process a second language, which in turn is reflected in eye movement patterns in reading.", "Specifically, we propose to examine whether, the more proficient an ESL learner is, the more similar their reading patterns are to those of native English speakers.", "We operationalize the notion of native-like reading in the following manner.", "First, given a feature representation of choice and a dataset D comprising ESL learners D_{L2} and native speakers D_{L1}, we Z-score each feature in D using a Z-scaler derived from D_{L2}.", "We then obtain a prototype feature vector of native reading, v_{L1}, by averaging the feature vectors of the native speakers.", "Finally, we obtain an eyetracking-based proficiency score of an ESL learner by computing the cosine similarity of their feature vector to the native reading prototype.", "Hereafter we refer to this measure as EyeScore.", "Reading Speed Normalization To reduce bias towards fast readers, the feature representations used for EyeScore are normalized to be invariant to the reading speed of the participant.",
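The EyeScore computation itself is compact; a sketch under the definitions above (Z-scaling by the ESL group, a native prototype, cosine similarity) might look as follows.

```python
import numpy as np

def eyescore(learner, esl_feats, native_feats):
    """Cosine similarity between a learner's (Z-scored) feature vector and
    the average native-speaker prototype; Z-scaling uses ESL statistics."""
    mu = esl_feats.mean(axis=0)
    sd = esl_feats.std(axis=0) + 1e-8        # guard against constant features
    z = lambda f: (f - mu) / sd
    prototype = z(native_feats).mean(axis=0)  # v_L1, the native prototype
    v = z(learner)
    return float(v @ prototype /
                 (np.linalg.norm(v) * np.linalg.norm(prototype)))
```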
"Specifically, for the S-Clusters and WFC feature sets we follow the normalization procedure of Berzak et al. (2017), where for a given participant, the reading time of a word w_i according to a fixation metric M is normalized by S_{M,C}, the metric's mean fixation time per word in the linguistic context C: S_{M,C} = (1/|C|) Σ_{w ∈ C} M_w (4)", "The linguistic context is defined as the surrounding sentence in the Fixed Text regime, and the entire textual input in the Any Text regime.", "The normalized fixation time is then obtained as: M^norm_{w_i} = M_{w_i} / S_{M,C} (5)", "For the WP-Coefficients features we take into account only the 15 model coefficients, and omit the 5 intercept features, which capture the reading speed of the participant.", "Finally, we also normalize the Transitions feature matrix T by the total number of saccades in the sentence, to obtain T^norm in which Σ_{i,j} t^norm_{i,j} = 1.", "We evaluate the ability of EyeScore to capture language proficiency by comparing it against our two external proficiency tests, MET and TOEFL.", "Table 1 presents the Pearson's r correlation of EyeScore with MET and TOEFL for the feature sets described in section 3, using the MET training/development set and all the participants who took TOEFL.", "The strongest correlations, 0.5 for MET and 0.54 for TOEFL, are obtained in the Fixed Text regime using the WFC features.", "This outcome confirms the effectiveness of reading-time comparisons when the presented sentences are shared across participants.", "To illustrate the quality of this result, Figure 2 presents a comparison of EyeScore and MET scores in the Fixed Text and WFC features setup.", "We further note good performance of the Transitions and S-Clusters features in this regime across both proficiency tests.", "The strongest performance in the Any Text regime is obtained using the S-Clusters features, yielding a 0.48 correlation with MET and a 0.45 correlation with TOEFL.", "These results are competitive with the WFC feature set in the Fixed Text regime, suggesting that reliable EyeScores can be obtained even when no prior eyetracking data is available for the sentences presented to the test taker.", "In order to contextualize the correlations obtained with the EyeScore approach, we first compare our results to raw reading speed, an informative baseline which does not rely on eyetracking.", "EyeScore substantially outperforms this baseline for nearly all the feature sets on both MET and TOEFL, clearly showing the benefit of eye movement information for our task.", "Next, we consider possible upper bounds for our correlations.", "While obtaining such upper bounds is challenging, we can use correlations between different traditional standardized proficiency tests as informative reference points.", "First, as mentioned previously, in our dataset the MET and reported TOEFL scores have a Pearson's r correlation of 0.74.", "[Figure 2: Comparison of MET (training/development set, 88 participants) with EyeScore using Words in Fixed Context (WFC) features in the Fixed Text regime; x-axis: Michigan score, y-axis: cosine with English L1; Pearson's r = 0.5, p = 7.5e-07.] We further note an external study conducted by the testing company Education First (EF), which measured the correlation of their flagship standardized English", "proficiency test EFSET-PLUS with TOEFL-iBT (Luecht, 2015).", "Using 384 participants who took both tests, the study found a Pearson's r of 0.63 for the reading comprehension and 0.69 for the listening comprehension sections of these tests.",
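Returning to the speed normalization in equations (4) and (5) above, a direct implementation is a one-liner per context; the sketch below treats the context C simply as the array of a participant's word fixation times passed in.

```python
import numpy as np

def speed_normalize(fixation_times):
    """Divide each word's fixation time under a metric M by S_{M,C},
    the mean per-word fixation time in the context C (equations 4-5)."""
    times = np.asarray(fixation_times, dtype=float)
    s_mc = times.mean()        # S_{M,C} = (1 / |C|) * sum over w of M_w
    return times / s_mc        # M^norm_{w_i} = M_{w_i} / S_{M,C}
```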
"Despite the radical difference of our testing methodology, our strongest feature sets obtain rather competitive results relative to these correlations, further strengthening the evidence for the ability of our approach to capture language proficiency.", "In section 4 we introduced EyeScore as an independent metric of language proficiency which is based on eye movements during reading.", "Here, we examine whether eye movements can also be used to explicitly predict the performance of participants on specific external standardized language proficiency tests.", "This task is of practical value for the development of predictive tools for standardized proficiency tests, and constitutes an alternative framework for studying the relevance of eye movement patterns in reading to language proficiency.", "To address this task, we use Ridge regression to predict overall scores on an external proficiency test from eye movement features in reading.", "[Table 2: Pearson's r and Mean Absolute Error (MAE) for prediction of MET scores (test set, 57 participants) and TOEFL scores (leave-one-out cross validation, all 53 participants) from eye movement patterns in reading. MET Fixed (r / MAE): Reading Speed 0.27 / 4.58, WP-Coefficients 0.43 / 4.11, S-Clusters 0.56 / 3.87, Transitions 0.52 / 3.93, WFC 0.70 / 3.31. MET Any: Reading Speed 0.24 / 4.62, WP-Coefficients 0.44 / 4.14, S-Clusters 0.49 / 4.11, Transitions and WFC: NA. TOEFL Fixed: Reading Speed 0.09 / 7.92, WP-Coefficients 0.34 / 7.76, S-Clusters 0.55 / 7.45, Transitions 0.38 / 7.11, WFC 0.50 / 6.68. TOEFL Any: Reading Speed 0.06 / 7.96, WP-Coefficients 0.31 / 7.49, S-Clusters 0.50 / 7.76, Transitions and WFC: NA.] The model parameters θ are obtained by minimizing the Ridge objective Σ_i (y_i - θ^T f(x_i))^2 + λ ||θ||^2,", "where y_i is a participant's test score, x_i is their eye movement record, and f(x_i) are the extracted eye movement features.", "To calibrate the model with respect to native English speakers, we augment each training set D^tr_{L2} with the group of 37 native speakers D_{L1}, whose proficiency scores are assigned to the maximum grade of the respective test (50 for MET and 60 for TOEFL).", "Based on MET performance on the train/dev set, the features used for predicting scores on both tests are not normalized for speed.", "As a preprocessing step, we fit a Z-scaler for each feature using the ESL participants in the training set, and apply it to all the participants in the training and test sets.", "We evaluate prediction accuracy using Pearson's r and the Mean Absolute Error (MAE) from the true proficiency test scores.", "The λ parameter for MET is optimized for MAE using 10-fold cross validation within the training/development set.", "For TOEFL, which has a relatively small number of participants, we report results on leave-one-out cross validation with λ set to 1.",
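A sketch of this regression setup, including the native-speaker augmentation and the ESL-only Z-scaling, might look as follows; scikit-learn's Ridge and StandardScaler stand in for the unspecified implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

def fit_score_model(X_esl, y_esl, X_native, max_score, lam=1.0):
    """Ridge regression from gaze features to test scores: natives are
    added to training pinned to the test's maximum grade, and the
    Z-scaler is fit on the ESL participants only."""
    scaler = StandardScaler().fit(X_esl)
    X = np.vstack([scaler.transform(X_esl), scaler.transform(X_native)])
    y = np.concatenate([y_esl, np.full(len(X_native), float(max_score))])
    return Ridge(alpha=lam).fit(X, y), scaler

# Usage: model, scaler = fit_score_model(X_train, met_scores, X_l1, 50)
#        predictions = model.predict(scaler.transform(X_test))
```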
feature set, with a Pearson's r of 0.7 and MAE of 3.31 for MET.", "This result is highly competitive with correlations between different standardized English proficiency tests.", "Figure 3 depicts a comparison between MET scores and our MET predictions in this setup.", "On TOEFL, WFC features obtain the strongest MAE of 6.68, while S-Clusters have a higher r coefficient of 0.55.", "In the Any Text regime, differently from EyeScore, we obtain comparable results for the S-Clusters and WP-Coefficients feature sets.", "Overall, the improvements of both feature sets over the baselines in the Any Text regime further support the ability of type-level features to generalize the task of language proficiency prediction to arbitrary sentences.", "Our work lies at the intersection of language proficiency assessment, second language acquisition (SLA), the psychology of reading, and NLP.", "Automated language proficiency assessment from free-form linguistic performance has been studied mainly in language production (Dikli, 2006; Williamson, 2009; Shermis and Burstein, 2013).", "Over the past several decades, multiple essay and speech scoring systems have been developed for learner language using a wide range of linguistically motivated feature sets (e.g. Lonsdale and Strong-Krause, 2003; Landauer, 2003; Xi et al., 2008; Yannakoudakis et al., 2011).", "Some of these systems have been deployed in official language proficiency tests, for example the e-rater essay scoring system (Attali and Burstein, 2004) used in TOEFL (Ramineni et al., 2012).", "While this line of work focuses on assessment of language production, here we introduce and address for the first time automated language assessment during online language comprehension.", "In SLA, there has been considerable interest in eyetracking, where studies have mostly focused on controlled experiments examining processing of specific linguistic phenomena such as syntactic ambiguities, cognates and idioms (Dussias, 2010; Roberts and Siyanova-Chanturia, 2013).", "A notable exception is Cop et al. (2015), who used freeform reading to study differences in fixation times and saccade lengths between native and non-native readers.", "Our work also adopts broad coverage analysis of reading patterns, which we use to formulate predictive models of language proficiency.", "Our study draws on a large body of work in the psychology of reading (see Rayner, 1998; Rayner et al., 2012, for overview) which has suggested that eye movement patterns during reading are systematically influenced by a broad range of linguistic characteristics of the text, and reflect how readers mentally engage with the text (Frazier and Rayner, 1982; Rayner and Frazier, 1989; Reichle et al., 1998; Engbert et al., 2005; Demberg and Keller, 2008; Reichle et al., 2009; Levy et al., 2009, among many others).", "Prior work on reading has also demonstrated that gaze provides valuable information about various characteristics of the reader and their cognitive state.", "For example, Reichle et al. (2010) have shown that eye movement patterns are categorically different in attentive versus mindless reading.", "In Rello and Ballesteros (2015), eye movements were used to distinguish between readers with and without dyslexia.", "Berzak et al.
(2017) collected the dataset used in our work and used it to predict the first language of non-native English readers from gaze.", "We build on these studies to motivate our task and design feature representations which encode linguistic factors known to affect the human reading process.", "Related work in NLP developed predictive models of reading times in reading of free-form text (e.g. Nilsson and Nivre, 2009; Hara et al., 2012; Hahn and Keller, 2016).", "In a complementary vein, eyetracking signal has been used for linguistic annotation tasks such as POS tagging (Barrett and Søgaard, 2015a; Barrett et al., 2016) and prediction of syntactic functions (Barrett and Søgaard, 2015b).", "Both lines of investigation provide further evidence for the tight interaction between eye movements and linguistic properties of the text, which we leverage in our work for inference about the linguistic knowledge of the reader.", "We present a novel approach for automated assessment of language proficiency which relies on eye movements during reading of free-form text.", "Our EyeScore test captures the similarity of language learners' gaze patterns to those of native speakers, and correlates well with the standardized tests MET and TOEFL.", "A second variant of our approach accurately predicts participants' scores on these two tests.", "To the best of our knowledge, the proposed framework is the first proof-of-concept for a system which utilizes eyetracking to measure linguistic ability.", "In future work, we plan to extend the analysis of the validity and consistency of our approach, and further explore its applications for language proficiency evaluation.", "In particular, we will examine the impact of factors that can undermine the validity of language proficiency tests, such as test specific training, familiarity with the evaluation system's features (Powers et al., 2002), and cheating via unauthorized prior access to test materials.", "Since participants are less likely to be able to manipulate their eye movements in an informed and systematic manner (readers are generally not even aware that their eye movements are saccadic), and since our test can be performed on arbitrary sentences, we expect it to be robust to prior exposure to the test materials and testing methodology.", "We will further study the consistency of our scores for repeated tests by the same participants.", "A preliminary split-half analysis indicates that eyetracking based scores are expected to be highly consistent across tests.", "Finally, our approach can be combined with traditional proficiency testing methodologies, whereby gaze will be recorded while the participant is taking a standardized language proficiency test.", "This will enable developing novel approaches to language proficiency assessment which will integrate task based performance with real time monitoring of cognitive and linguistic processing.", "This material is based upon work supported in part by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216." ]
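To make the speed normalization in Eqs. 4 and 5 of the record above concrete, here is a minimal Python sketch; the function name and the dict-based input format are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def normalize_fixation_times(fix_times, context_ids):
    """Speed-normalize per-word fixation times (Eqs. 4-5).

    fix_times   : dict word_id -> raw fixation time for one participant
                  under one fixation metric M.
    context_ids : word ids defining the linguistic context C (the
                  surrounding sentence in Fixed Text, all input in Any Text).
    """
    # S_{M,C}: the participant's mean fixation time per word over C (Eq. 4).
    s_mc = np.mean([fix_times[w] for w in context_ids])
    # M^norm_{w_i} = M_{w_i} / S_{M,C} (Eq. 5).
    return {w: t / s_mc for w, t in fix_times.items()}

# Toy usage: word 2 is read relatively slowly for this participant.
print(normalize_fixation_times({0: 180.0, 1: 210.0, 2: 330.0}, [0, 1, 2]))
```

Dividing by the participant's own per-word average removes overall reading speed, so the remaining variation reflects which words attract disproportionate fixation time.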
[ "objective", "method", "result", "objective", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "objective", "method", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "objective", "other", "other", "method", "objective", "abstain", "method", "objective", "objective", "abstain", "method", "method", "abstain", "method", "abstain", "other" ]
[ "Analysing whether neural language models encode linguistic information has become popular in NLP.", "One method of doing so, which is frequently cited to support the claim that models like BERT encode syntax, is called probing; probes are small supervised models trained to extract linguistic information from another model's output.", "If a probe is able to predict a particular structure, it is argued that the model whose output it is trained on must have implicitly learnt to encode it.", "However, drawing a generalisation about a model's linguistic knowledge about a specific phenomena based on what a probe is able to learn may be problematic: in this work, we show that semantic cues in training data means that syntactic probes do not properly isolate syntax.", "We generate a new corpus of semantically nonsensical but syntactically well-formed Jabberwocky sentences, which we use to evaluate two probes trained on normal data.", "We train the probes on several popular language models ( BERT , GPT-2 , and RoBERTa ), and find that in all settings they perform worse when evaluated on these data, for one probe by an average of 15 .", "4 UUAS points absolute.", "Although in most cases they still outperform the baselines, their lead is reduced substantially, e.g. by 53% in the case of BERT for one probe.", "This begs the question: what empirical scores constitute knowing syntax?", "Recently, unsupervised language models like BERT (Devlin et al., 2019) have become popular within natural language processing (NLP).", "These pre-trained sentence encoders, known affectionately as BERT oids (Rogers et al., 2020), have pushed forward the state of the art in many NLP tasks.", "Given their impressive performance, a natural question to ask is whether models like these implicitly learn to encode linguistic structures, such as part-of-speech tags or dependency trees.", "There are two strains of research that investigate this question.", "On one hand, stimuli-analysis compares the relative probabilities a language model assigns to words which could fill a gap in a cloze-style task.", "This allows the experimenter to test whether neural models do well at capturing specific linguistic phenomena, such as subjectverb agreement (Linzen et al., 2016; Gulordava et al., 2018) or negative-polarity item licensing (Marvin and Linzen, 2018; Warstadt et al., 2019).", "Another strain of research directly analyses the neural network's representations; this is called probing .", "Probes are supervised models which attempt to predict a target linguistic structure using a model's representation as its input (e.g. Alain and Bengio, 2017; Conneau et al., 2018; Hupkes and Zuidema, 2018); if the probe is able to perform the task well, then it is argued that the model has learnt to implicitly encode that structure in its representation.", "1 Work from this inchoate probing literature is frequently cited to support the claim that models like BERT encode a large amount of syntactic knowledge.", "For instance, consider the two excerpts below demonstrating how a couple of syntactic probing papers have been interpreted: 2 [The training objectives of BERT/GPT-2/XLNet] have shown great abilities to capture dependency between words and syntactic structures (Jawahar et al., 2019) (Tian et al., 2020) Further work has found impressive degrees of syntactic structure in Transformer encodings (Hewitt and Manning, 2019) (Soulos et al., 2020) 1 Methods which analyse stimuli are also sometimes termed probes' (e.g. 
Niven and Kao, 2019), but in this paper we use the term to refer specifically to supervised models.)", "(Footnote 2: Jawahar et al. (2019) and Hewitt and Manning (2019) are more reserved about their claims; these examples merely show how such work is frequently interpreted, regardless of intent.)", "Our position in this paper is simple: we argue that the literature on syntactic probing is methodologically flawed, owing to a conflation of syntax with semantics.", "We contend that no existing probing work has rigorously tested whether BERT encodes syntax, and a fortiori this literature should not be used to support this claim.", "To investigate whether syntactic probes actually probe syntax (or instead rely on semantics), we train two probes (§4) on the output representations produced by three pre-trained encoders on normal sentences: BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and RoBERTa (Liu et al., 2019).", "We then evaluate these probes on a novel corpus of syntactically well-formed sentences made up of pseudowords (§3), and find that their performance drops substantially in this setting: on one probe, the average BERToid UUAS is reduced by 15.4 points, and on the other the relative advantage that BERT exhibits over a baseline drops by 53%.", "This suggests that the probes are leveraging statistical patterns in distributional semantics to aid them in the search for syntax.", "According to one of the probes, GPT-2 falls behind a simple baseline, but in some cases the lead remains substantial, e.g. 20.4 UUAS points in the case of BERT.", "We use these results not to draw conclusions about any BERToids' syntactic knowledge, but instead to urge caution when drawing conclusions from probing results.", "In our discussion, we contend that evaluating BERToids' syntactic knowledge requires more nuanced experimentation than simply training a syntactic probe as if it were a parser (Hall Maudslay et al., 2020), and call for the separation of syntax and semantics in future probing work.", "When investigating whether a particular model encodes syntax, those who have opted for stimuli-analysis have been careful to isolate syntactic phenomena from semantics (Marvin and Linzen, 2018; Gulordava et al., 2018; Goldberg, 2019), but the same cannot be said of most syntactic probing work, which conflates the two.", "To see how the two can be separated, consider the famous utterance of Chomsky (1957): (1) Colourless green ideas sleep furiously, whose dependency parse is given in Figure 1.", "Syntactic probes are typically evaluated on real-world data, not on Chomsky-style sentences of (1)'s ilk.", "The same is true for parsers, but from a machine-learning point of view this is not problematic, since the goal of a statistical parser is to parse well the data that one may encounter in the real world.", "The probing literature, however, is inherently making an epistemological claim: whether BERT knows syntax.[3]", "Indeed, we already know that BERT significantly improves the performance of statistical parsing models on real-world data (Zhou and Zhao, 2019); there is no reason to develop specialist probes to reinforce that claim.", "As probing considers a scientific question, it follows that the probing literature needs to consider syntax from a linguistic point of view and, thus, it requires a linguistic definition of syntax.", "At least in the generative tradition, it is taken as definitional that grammaticality, i.e.
syntactic well-formedness, is distinct from the meaning of the sentence.", "It is this distinction that the nascent syntactic probing literature has overlooked.", "To tease apart syntax and semantics when evaluating probes, we construct a new evaluation corpus of syntactically valid English Jabberwocky sentences, so called after Carroll (1871), who wrote verse consisting in large part of pseudowords (see App. A).", "In written language, a pseudoword is a sequence of letters which looks like a valid word in a particular language (usually determined by acceptability judgments), but which carries with it no lexical meaning.", "For our Jabberwocky corpus, we make use of the ARC Nonword Database, which contains 358,534 monosyllabic English pseudowords (Rastle et al., 2002).", "We use a subset of these which were filtered out and then manually validated for high plausibility by Kharkwal (2014).", "(Footnote 3: This is not an engineering claim, because the NLP engineer is unlikely to care whether BERT's representations encode syntactic structure; they just care about building reliable models that perform well on real data.", "An open question, however, is whether representations require a notion of syntax to properly generalise; this is not addressed in our work.)", "We conjugate each of these words using hand-written rules, assuming they obey standard English morphology and graphotactics.", "This results in 1361 word types, a total of 2377 varieties when we annotate these regular forms with several possible fine-grained part-of-speech realisations.", "To build sentences, we take the test portion of the English EWT Universal Dependency (UD; Nivre et al., 2016) treebank and substitute words (randomly) with our pseudowords whenever we have one available with matching fine-grained part-of-speech annotation.[4]", "Our method closely resembles Kasai and Frank (2019), except they do so to analyse parsers in place of syntactic probes.", "An example of one of our Jabberwocky sentences is shown in Figure 2, along with its unlabeled undirected parse (used by the probes), which is taken from the vanilla sentence's annotation in the treebank.", "A syntactic probe is a supervised model trained to predict the syntactic structure of a sentence using representations produced by another model.", "The main distinction between syntactic probes and dependency parsers is one of researcher intent: probes are not meant to best the state of the art, but are a visualisation method (Hupkes and Zuidema, 2018).", "As such, probes are typically minimally parameterised so they do not dig for information (but see Pimentel et al., 2020).", "If a syntactic probe performs well using a model's representations, it is argued that that model implicitly encodes syntax.", "(Footnote 4: For nouns we treat (in UD notation) Number=Sing or Number=Plur; for verbs we treat VerbForm=Inf, VerbForm=Fin|Mood=Ind|Number=Sing|Person=3|Tense=Pres, VerbForm=Fin|Mood=Ind|Tense=Pres, or VerbForm=Part|Tense=Pres; for adjectives and adverbs we treat Degree=Cmp or Degree=Sup, along with unmarked.", "These cases cover all regular forms in the EWT treebank.)", "Here we briefly introduce two syntactic probes, each designed to learn the syntactic distance between a pair of words in a sentence, which is the number of steps between them in an undirected parse tree (example in Figure 2; see the sketch below).", "Hewitt and Manning (2019) first introduced syntactic distance, and proposed the structural probe as a means of identifying it; it takes a pair of embeddings and learns to predict the syntactic
distance between them.", "An alternative to the structural probe, which learns parameters for the same function, is a structured perceptron dependency parser, originally introduced in McDonald et al. (2005), and first applied to probing in Hall Maudslay et al. (2020).", "Here we call this the perceptron probe.", "Rather than learning syntactic distance directly, the perceptron probe instead learns to predict syntactic distances such that the minimum spanning tree that results from a sentence's predictions matches the gold standard parse tree.", "The difference between these probes is subtle, but they optimise for different metrics; this is reflected in our evaluation in §5.", "We train the probes on normal UDs, then evaluate them on Jabberwocky sentences; if the probes are really learning to extract syntax, they should perform just as well in the Jabberwocky setting.", "Models to Probe: We probe three popular Transformer (Vaswani et al., 2017) models: BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and RoBERTa (Liu et al., 2019).", "For all three we use the 'large' version.", "We train probes on the representations at multiple layers, and choose whichever layers result in the best performance on the development set.", "For each Transformer model, we also train probes on the layer 0 embeddings; we can treat these layer 0 embeddings as baselines since they are uncontextualised, with knowledge only of a single word and where it sits in a sentence, but no knowledge of the other words.", "As an additional baseline representation to probe, we use FastText embeddings (Bojanowski et al., 2017) appended with BERT position embeddings (Fast+Pos).", "We emphasise that none of these baselines can be said to encode anything about syntax (in a linguistic sense), since they are uncontextualised.", "Training details of these models and baselines can be found in App. B.
[Figure 3 (bar chart): UUAS (percent) for BERT, RoBERTa, GPT-2, Fast+Pos, Path, and Majority, at the best layer and at layer 0, on unchanged vs. Jabberwocky test data.]", "Additional Simple Baselines: In addition to the baseline representations which we probe, we compute two even simpler baselines, which ignore the lexical items completely.", "The first simply connects each word to the word next to it in a sentence (Path).", "The second returns, for a given sentence length, the tree which contains the edges occurring most frequently in the training data (Majority), which is computed as follows: first, we subdivide the training data into bins based on sentence length.", "For each sentence length n, we create an undirected graph G_n with n nodes, each corresponding to a different position in the sentence.", "The edges are weighted according to the number of times they occur in the training data bin which contains sentences of length n.", "The 'majority tree' of sentence length n is then computed by calculating the maximum spanning tree over G_n, which can be done by negating the edges, then running Prim's algorithm (see the sketch below).", "For n > 40, we use the Path baseline's predictions, owing to data sparsity.", "Metrics: As mentioned in §4, the probes we experiment with each optimise for subtly different aspects of syntax; we evaluate them on different metrics which reflect this.", "We evaluate the structural probe on DSpr, introduced in Hewitt and Manning (2019); it is the Spearman correlation between the actual and predicted syntactic distances between each pair of words.", "We evaluate the perceptron probe using the unlabeled undirected attachment score (UUAS), which is the percentage of correctly identified edges.", "These different metrics reflect differences in the probe designs, which are elaborated in Hall Maudslay et al. (2020).", "Figure 3 shows the performance of the probes we trained, when they are evaluated on normal test data (plain) versus our specially constructed Jabberwocky data (hatched).", "Recall that the test sets have identical sentence-parse structures, and differ only insofar as words in the Jabberwocky test set have been swapped for pseudowords.[5]", "For each BERToid, the lower portion of its bars (in white) shows the performance of its layer 0 embeddings, which are uncontextualised and thus function as additional baselines.", "All the probes trained on the BERToids perform worse on the Jabberwocky data than on normal data, indicating that the probes rely in part on semantic information to make syntactic predictions.", "This is most pronounced with the perceptron probe: in this setting, the three BERToids' scores dropped by an average of 15.4 UUAS points.", "Although they all still outperform the baselines under UUAS, their advantage is less pronounced, but in some cases it remains high, e.g. for BERT the lead is 20.4 points over the Fast+Pos baseline.", "With the structural probe, BERT's lead over the simple Majority baseline is reduced from 0.078 to 0.037 DSpr, and RoBERTa's from 0.074 to 0.017, reductions of 53% and 77%, respectively.", "GPT-2 falls behind the baselines, and performs worse than even the simple Path predictions (0.580 compared to
0.584).", "(Footnote 5: This is why the Path and Majority baselines, which do not condition on the lexical items in a sentence, have identical scores on both datasets.)", "Is BERT still the syntactic wunderkind we had all assumed?", "Or do these reductions mean that these models can no longer be said to encode syntax?", "We do not use our results to make either claim.", "The reductions we have seen here may reflect a weakness of the syntactic probes rather than a weakness of the models themselves, per se.", "In order to properly give the BERToids their due, one ought to train the probes on data which controls for semantic cues (e.g. more Jabberwocky data) in addition to evaluating them on it.", "Here, we wish only to show that existing probes leverage semantic cues to make their syntactic predictions; since they do not properly isolate syntax, they should not be cited to support claims about syntax.", "The high performance of the baselines (which inherently contain no syntax) is reason enough to be cautious about claims of these models' syntactic abilities.", "In general, single number metrics like these can be misleading: many correctly labeled easy dependencies may well obfuscate the mistakes being made on comparatively few hard ones, which may well be far more revealing (see, for instance, Briscoe and Carroll, 2006).", "Even if these syntactic probes achieved impressive results on Jabberwocky data, beating the baselines by some margin, that alone would not be enough to conclude that the models encoded a deep understanding of syntax.", "Dependency grammarians generally parse sentences into directed graphs with labels; these probes by comparison only identify undirected unlabeled parse trees (compare Figures 1 and 2 for the difference).", "This much-simplified version of syntax has a vastly reduced space of possible syntactic structures.", "Consider a sentence with e.g. n = 5 words, for which there are only 125 possible unlabeled undirected parse trees (by Cayley's formula, $n^{n-2}$).", "As the high performance of the Majority baseline indicates, these are not uniformly distributed (some parse trees are more likely than others); a probe might well use these statistical confounds to advance its syntactic predictions.", "Although they remain present, biases like these are less easily exploitable in the labeled and directed case, where there are just over one billion possible parse trees to choose from.[6]", "Syntax is an incredibly rich phenomenon, far more so than when it is reduced to syntactic distance.", "(Footnote 6: O Frabjous Day!", "Callooh!", "Callay!)", "In this work, we trained two syntactic probes on a variety of BERToids, then evaluated them using Jabberwocky sentences, and showed that performance dropped substantially in this setting.", "This suggests that previous results from the probing literature may have overestimated BERT's syntactic abilities.", "However, in this context, we do not use the results to make any claims about BERT; we contend that to make such a claim one ought to train the probes on Jabberwocky sentences, which would require more pseudowords than we had available.", "Instead, we advocate for the separation of syntax and semantics in probing.", "Future work could explore the development of artificial treebanks for use specifically for training syntactic probes, which minimise any confounding statistical biases in the data.", "We make our Jabberwocky evaluation data and code publicly available at https://github.com/rowanhm/jabberwocky-probing." ]
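The Majority baseline in the record above builds, per sentence length, a graph weighted by edge frequency and takes its maximum spanning tree. A minimal sketch of that computation, using SciPy's minimum spanning tree on negated weights in place of a hand-rolled Prim's algorithm, and assuming parses are given as undirected edge lists (an input format chosen here for illustration):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def majority_tree(parses):
    """Most-frequent-edges tree for one sentence length: `parses` is a
    list of undirected edge lists, all for sentences of the same length."""
    n = max(max(e) for p in parses for e in p) + 1
    counts = np.zeros((n, n))
    for parse in parses:
        for i, j in parse:
            counts[min(i, j), max(i, j)] += 1  # store upper-triangular
    # Maximum spanning tree over edge frequencies = minimum spanning tree
    # over negated counts (zero entries count as absent edges).
    mst = minimum_spanning_tree(-counts).tocoo()
    return sorted((min(i, j), max(i, j)) for i, j in zip(mst.row, mst.col))

# Toy usage: two 4-word parses; the result keeps the most frequent edges.
print(majority_tree([[(0, 1), (1, 2), (2, 3)], [(0, 1), (1, 2), (1, 3)]]))
```

Binning by sentence length and falling back to the Path baseline for long sentences, as described above, would wrap around this function.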
[ "abstain", "abstain", "abstain", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other" ]
[ "Entity Matching (EM) aims at recognizing entity records that denote the same real-world object.", "Neural EM models learn vector representation of entity descriptions and match entities end-to-end.", "Though robust, these methods require many annotated resources for training, and lack of interpretability.", "In this paper, we propose a novel EM framework that consists of Heterogeneous Information Fusion (HIF ) and Key Attribute Tree (KAT ) Induction to decouple feature representation from matching decision.", "Using self-supervised learning and mask mechanism in pre-trained language modeling, HIF learns the embeddings of noisy attribute values by inter-attribute attention with unlabeled data.", "Using a set of comparison features and a limited amount of annotated data, KAT Induction learns an efficient decision tree that can be interpreted by generating entity matching rules whose structure is advocated by domain experts.", "Experiments on 6 pub-lic datasets and 3 industrial datasets show that our method is highly efficient and outperforms SOTA EM models in most cases.", "Our codes and datasets can be obtained from https:// github.com/THU-KEG/HIF-KAT .", "Entity Matching (EM) aims at identifying whether two records from different sources refer to the same real-world entity.", "This is a fundamental research task in knowledge graph integration (Dong et al., 2014; Daniel et al., 2020; Christophides et al., 2015; Christen, 2012) and text mining (Zhao et al., 2014).", "In real applications, it is not easy to decide whether two records with ad hoc linguistic descriptions refer to the same entity.", "In Figure 1, e 2 and e 3 refer to the same publication, while e 1 refers to a different Corresponding to L.Hou (houlei@tsinghua.edu.cn) Title Author Venue Conference (redundant) !", "one. Venue s of e 2 and e 3 have different expressions; Author s of e 3 is misplaced in its Title field.", "Early works include feature engineering (Wang et al., 2011) and rule matching (Singh et al., 2017; Fan et al., 2009). Recently, the robustness of Entity Matching has been improved by deep learning models, such as distributed representation based models (Ebraheem et al., 2018), attention based models (Mudgal et al., 2018; Fu et al., 2019, 2020), and pre-trained language model based models (Li et al., 2020). Nevertheless, these modern neural EM models suffer from two limitations as follows.", "Low-Resource Training. Supervised deep learning EM relies on large amounts of labeled training data, which is extremely costly in reality. Attempts have been made to leverage external data via transfer learning (Zhao and He, 2019; Thirumuruganathan et al., 2018; Kasai et al., 2019; Loster et al., 2021) and pre-trained language model based methods (Li et al., 2020). Other attempts have also been made to improve labeling efficiency via active learning (Nafa et al., 2020) and crowdsourcing techniques (Gokhale et al., 2014; Wang et al., 2012). However, external information may introduce noises, and active learning and crowdsourcing still require additional labeling work.", "pretability. Though some neural EM models analyze the model behavior from the perspective of attention (Nie et al., 2019), attention is not a safe indicator for interpretability (Serrano and Smith, 2019). 
Deep learning EM also fails to generate interpretable EM rules that meet the criteria set by domain experts (Fan et al., 2009).", "To address the two limitations, we propose a novel EM framework to decouple feature representation from matching decision. Our framework consists of Heterogeneous Information Fusion (HIF) and Key Attribute Tree (KAT) Matching Decision for low-resource settings. HIF is robust for feature representation from noisy inputs, and KAT carries out interpretable decisions for entity matching.", "In particular, HIF learns from unlabeled data a mapping function which converts each noisy attribute value of an entity into a vector representation. This is carried out by a novel self-supervised attention training schema to leverage the redundancy within attribute values and propagate information across attributes.", "KAT Matching Decision learns KAT using decision tree classification. After training, KAT carries out entity matching as classification with the decision tree. For each entity pair, it first computes multiple similarity scores for each attribute using a family of metrics and concatenates them into a comparison feature vector. This classification tree can be directly interpreted as EM rules that share a similar structure with EM rules derived by domain experts.", "Our EM method achieves at least SOTA performance on 9 datasets (3 structured datasets, 3 dirty datasets, and 3 industrial datasets) under various extremely low-resource settings. Moreover, when the number of labeled training data decreases from 60% to 10%, our method achieves almost the same performance. In contrast, other methods' performances decrease greatly.", "The rest of the paper is structured as follows. Section 2 defines the EM task; Section 3 presents HIF and KAT Induction in detail; Section 4 reports a series of comparative experiments that show the robustness and the interpretability of our methods in low-resource settings; Section 5 lists some related works; Section 6 concludes the paper.", "Entity Matching. Let T_1 and T_2 be two collections of entity records with m aligned attributes {A_1, ..., A_m}. We denote the i-th attribute value of entity record e as e[A_i].", "Entity matching aims to determine whether e_1 and e_2 refer to the same real-world object or not.", "Formally, entity matching is viewed as a binary classification function T_1 × T_2 → {True, False} that takes (e_1, e_2) ∈ T_1 × T_2 as input, and outputs True (False) if e_1 and e_2 are matched (not matched).", "Current neural EM approaches simultaneously embed entities in low-dimensional vector spaces and obtain entity matching by computations on their vector representations. Supervised deep learning EM relies on large amounts of labeled training data, which is time-consuming and needs costly manual efforts. Large unlabelled data also contain entity feature information useful for EM, yet have not been fully exploited by the existing neural EM methods. In this paper, we aim at decoupling feature representation from matching decision. Our novel EM model consists of two sub-tasks: learning feature representation from unlabeled data and EM decision making.", "Feature Representation from Noisy Inputs. Entity records are gathered from different sources with three typical types of noise in attribute values: misplacing, missing, or synonym.
Misplacing means that the attribute value of A_i drifts to A_j (i ≠ j); missing means that attribute values are empty; synonym means that attribute values with the same meaning have different literal forms. Our first task is to fuse noisy heterogeneous information in a self-supervised manner with unlabelled data.", "Interpretable EM. Domain experts have some valuable specifications on EM rules, as follows: (1) an EM rule is an if-then rule of feature comparison; (2) it only selects a subset of key attributes from all entity attributes for decision making; (3) feature comparison is limited to a number of similarity constraints, such as = and ≈ (Fan et al., 2009; Singh et al., 2017). Our second task is to realize an interpretable EM decision process by comparing feature representations per attribute utilizing a fixed number of quantitative similarity metrics, and then training a decision tree using a limited amount of labeled data. Our interpretable EM decision making will ease the collaboration with domain experts.", "In this section, we introduce (1) a neural model, Heterogeneous Information Fusion (HIF), for the task of feature representation, and (2) a decision tree model, Key Attribute Tree (KAT), for the task of matching decision.", "Figure 2 illustrates the overall work-flow of our method.", "The following subsections dive into details of the two tasks and propose a novel training scheme for low resource settings by exploiting unlabelled entity records.", "HIF: T → R^{m×d} is a function that maps entity records into vector representations.", "An attribute value e[A_i] of a record e is mapped to a d-dimensional vector, written as HIF(e)[A_i] ∈ R^d.", "HIF treats attribute values as strings of words and performs word embedding (EMB), word information aggregation (AGG), and attribute information propagation (PROP) successively.", "Word Embedding (EMB).", "Word embedding is a pre-trained language model that contains features learned from a large corpus.", "We convert numerical and encoded attribute values into strings of digits or alphabets.", "For Chinese attribute values, we do word-segmentation using pkuseg (Luo et al., 2019).", "Then, we mark the beginning and the end of an attribute value with two special tokens, namely ⟨BEG⟩ and ⟨END⟩.", "Finally, we pad each attribute value with ⟨PAD⟩ so that they are represented in the same length l.", "The representation after padding is illustrated below: (⟨BEG⟩, w_1, w_2, ..., ⟨END⟩, ⟨PAD⟩, ..., ⟨PAD⟩), whose total length is l. Let W be the set of words; each word w ∈ W is mapped into a vector, and each attribute value is mapped into a matrix.", "Formally, EMB: W^N → R^{N×d_e} maps N words into an N×d_e matrix by executing a look-up-table operation.", "N is the dictionary size.", "In particular, we have EMB(e)[A_i] ∈ R^{l×d_e}, in which d_e is the dimension of word embedding vectors.", "It is worth noting that ⟨PAD⟩ is embedded to the zero vector to ensure that it does not interfere with other non-padding words in the following step.", "Word Information Aggregation (AGG).", "Summing up the l word embeddings as the embedding of an attribute value would neglect the importance weights among the l words.", "We leverage a more flexible framework, which aggregates word information by weighted pooling.", "The weighting coefficients α_i for different words are extracted by multiplying the embedding matrix with a learnable, attribute-specific vector a_i ∈ R^{d_e×1}.
Subscript i implies that α_i and a_i are associated with the i-th attribute A_i.", "The weighting coefficients are normalized by a Softmax function among words.", "Finally, we enable a non-linear transformation (e.g., ReLU) during information aggregation, with parameters W_{a_i} ∈ R^{d_e×d_a}.", "Formally, AGG maps each attribute value of entity record e into a d_a-dimensional vector AGG(EMB(e)[A_i]) ∈ R^{d_a} as below: AGG(EMB(e)[A_i]) = ReLU(α_i EMB(e)[A_i] W_{a_i}), where α_i = Softmax(EMB(e)[A_i] a_i)^⊤ ∈ R^{1×l}. Attribute Information Propagation (PROP).", "The mechanism of attribute information propagation is the key component for noise reduction and representation unification.", "This mechanism is inspired by the observation that missing attribute values often appear in other attributes (e.g., Venue and Conference in Figure 1; Mudgal et al. (2018) also reported the misplacing issue).", "We use Scaled Dot-Product Attention (Vaswani et al., 2017) to propagate information among different attribute values (see the sketch below).", "We use parameters Q, K, V_i to convert AGG(EMB(e)[A_i]) into query, key, and value vectors, respectively (notice that only V_i is attribute-specific).", "A ∈ R^{m×m} is the attention matrix.", "A_{ij} denotes the attention coefficient from the i-th attribute to the j-th attribute: A_{ij} = Softmax(q_i k_j / √m), with q_i = AGG(EMB(e)[A_i]) Q, k_j = AGG(EMB(e)[A_j]) K, and v_i = AGG(EMB(e)[A_i]) V_i.", "Record notation e is omitted in vectors q, k, v for brevity.", "To keep the identity information, each attribute value after attribute information propagation is represented by the concatenation of the context and the value vector: PROP(AGG(e))[A_i] = ReLU(v_i ‖ Σ_{j≠i} A_{ij} v_j).", "HIF outputs with a Multi-Layer Perceptron (MLP).", "The whole process can be summarized as follows: HIF(e) = MLP ∘ PROP ∘ AGG ∘ EMB(e) ∈ R^{m×d}. After HIF, each attribute A_i of an entity record e has a feature embedding HIF(e)[A_i].", "KAT Matching Decision consists of two steps: comparison feature computation (CFC) and decision making with KAT.", "CFC computes a similarity score for each paired attribute feature by utilizing a family of well-selected metrics, and concatenates these similarity scores into a vector (the comparison feature).", "KAT takes the comparison feature as input, and performs entity matching with a decision tree.", "Comparison Feature Computing (CFC).", "Given a record pair (e_1, e_2), CFC implements a function that maps (e_1, e_2) to a vector of similarity scores CFC(e_1, e_2).", "The similarity score CFC(e_1, e_2) is a concatenation of a similarity vector between paired attribute values (i.e., e_1[A_i], e_2[A_i]) and a similarity vector between their vector embeddings (i.e., HIF(e_1)[A_i], HIF(e_2)[A_i]).", "To compare paired attribute values, we follow Konda et al. (2016) and classify attribute values into 6 categories, according to type and length, each with a set of comparison metrics for similarity measurement, such as Jaccard similarity, Levenshtein similarity, Monge-Elkan similarity, etc.", "More details are presented in Table 1.
For attribute value embeddings, we choose three metrics: the cosine similarity, the L2 distance, and the Pearson coefficient.", "In this way, we convert an entity record pair into a similarity score vector over attributes.", "Each dimension indicates the similarity degree of one attribute from a certain perspective.", "KAT Induction.", "In the matching decision, we take CFC(e_1, e_2) as input, and output binary classification results.", "We propose Key Attribute Tree, a decision tree, to make the matching decision based on the key attribute heuristic, in the sense that some attributes are more important than others for EM.", "For example, we can decide whether two records of research articles are the same by only checking their Title and Venue without examining their Conference.", "Focusing only on key attributes not only saves computation, but also introduces interpretability with a two-fold meaning: (1) each dimension of CFC(e_1, e_2) is a candidate feature matching which can be interpreted as a component of an EM rule; (2) the decision tree learned by KAT can be converted into EM rules that follow the same heuristics as the EM rules made by domain experts (Fan et al., 2009).", "HIF Training.", "We design a self-supervised training method for HIF to learn from unlabeled data (see the sketches below).", "Our strategy is to let the HIF model predict manually masked attribute values.", "We first represent attribute values, as strings of words, by Weighted Bag Of Words (WBOW) vectors, whose dimensions represent word frequencies.", "Then, we manually corrupt a small portion of entity records in T_1 ∪ T_2 by randomly replacing (masking) their attribute values with an empty string, which forms a new table T′.", "HIF takes T′ as input and uses another MLP to predict the WBOW of masked attribute values.", "HIF is trained by minimizing the Cross-Entropy between the prediction and the ground-truth WBOW: min_HIF CrossEntropy(MLP(HIF(T′)), WBOW).", "KAT Induction Training.", "KAT is trained with a normal decision tree algorithm.", "We constrain its depth, in part to maintain the interpretability of the transformed EM rules.", "We use xgboost (Chen and Guestrin, 2016) and the ID3 algorithm (Quinlan, 1986) in the experiments.", "To preserve interpretability, the booster number of xgboost is set to 1, which means it only learns one decision tree.", "For (e_1, e_2, True) ∈ D, KAT takes CFC(e_1, e_2) as input, and True as the target classification output.", "In order to evaluate our model comprehensively, we collect multi-scale datasets covering both English and Chinese corpora, including Structured, Dirty, and Real datasets.", "Structured and Dirty datasets are benchmark datasets[1] released in Mudgal et al. (2018).", "The Real datasets are sampled from Taobao, one of the biggest E-commerce platforms in China; a portion of the records are manually labeled to indicate whether they are the same entity or not.", "The Real datasets have notably more attributes than the Structured or Dirty datasets.", "Statistics of these datasets are listed in Table 2.
We focus on the low resource EM setting and use Rate% of the labelled data as the training set.", "The validation set uses the last 20% of labeled pairs, and the remaining pairs in the middle are the test set.", "This splitting is different from the sufficient resource EM setting (Mudgal et al., 2018; Konda et al., 2016), where up to 60% of pairs are used in the training set. (Footnote 1: http://pages.cs.wisc.edu/anhai/data1/deepmatcher_data/)", "For I-A1, I-A2, and Phone, we use 10% of labeled pairs as training data, because some of the baselines will crash if the training data is too small.", "We remove trivial entity pairs from the Real datasets, as the Structured and Dirty datasets have been released.", "For the Real datasets, we remove matching pairs with large Jaccard similarity (0.32 for Phone, 0.36 for others) and non-matching pairs with small Jaccard similarity (0.3 for Phone, 0.332 for others).", "We implement 3 variants of our methods with different KAT Induction algorithms.", "HIF+KAT-ID3 and HIF+KAT-XGB induct KAT with the ID3 algorithm and xgboost, respectively, constraining the maximum depth to 3. HIF+DT inducts KAT with the ID3 algorithm with no constraint on the tree depth.", "We include reproducibility details in Appendix B. We compare our methods with three SOTA EM methods, among which two are publicly available end-to-end neural methods, and one is a feature engineering based method.", "1. DeepMatcher (Mudgal et al., 2018) (DM) is a general deep-learning based EM framework with multiple variants (RNN: DM-RNN, Attention: DM-ATT, and Hybrid: DM-HYB), depending on which building block it chooses.[2]", "2. HierMatcher (Fu et al., 2020) is also an end-to-end neural EM method that compares entity records at the word level.[3]", "3. Magellan (Konda et al., 2016) integrates both automatic feature engineering for EM and classifiers.", "A decision tree is used as the classifier of Magellan in our experiments.", "For ablation analysis, we replace a single component of our model with a new model as follows: HIF+LN replaces KAT with a linear classifier; HIF+LR replaces KAT with a logistic regression classifier; HIF-ALONE removes comparison metrics of attribute values (the yellow segment of comparison features in Figure 2).", "(Footnote 2: https://github.com/anhaidgroup/deepmatcher; Footnote 3: https://github.com/cipnlu/EntityMatcher)
[Table 3: F1 score of all methods under the low resource setting (%).
Methods | I-A1 | D-A1 | D-S1 | I-A2 | D-A2 | D-S2 | Phone | Skirt | Toner
DM-RNN | 63.6 | 85.4 | 74.8 | 42.3 | 45.7 | 39.0 | 90.0 | 67.6 | 68.6
DM-ATT | 55.8 | 82.5 | 79.0 | 46.5 | 45.2 | 57.8 | 80.3 | 54.4 | 48.8
DM-HYB | 60.9 | 86.6 | 78.0 | 49.5 | 46.2 | 60.4 | 91.9 | 64.2 | 67.4
HierMatcher | 61.9 | 37.5 | 68.2 | 37.8 | 32.6 | 45.8 | 86.2 | 61.7 | 55.2
Magellan | 92.3 | 93.7 | 85.1 | 50.6 | 65.6 | 71.1 | 93.6 | 96.6 | 97.2
HIF+DT | 96.0 | 96.4 | 87.5 | 54.9 | 80.1 | 74.2 | 94.9 | 96.7 | 97.2
HIF+KAT-ID3 | 95.8 | 96.6 | 88.2 | 51.6 | 79.0 | 79.5 | 94.5 | 96.7 | 97.2
HIF+KAT-XGB | 90.6 | 93.3 | 87.9 | 41.5 | 80.3 | 79.5 | 94.4 | 96.2 | 97.2
HIF+LN | 77.9 | 21.0 | 54.7 | 41.6 | - | 78.5 | 72.2 | 62.8 | 86.0
HIF+LR | 84.2 | 87.1 | 84.6 | 46.5 | - | 68.1 | 87.5 | 41.7 | 62.0
HIF-WBOW | 93.0 | 92.7 | 75.4 | 43.2 | 47.9 | 43.7 | 91.6 | 66.3 | 74.0
HIF-EMB | 91.1 | 90.9 | 76.6 | 30.8 | 53.9 | 46.8 | 89.9 | 65.7 | 79.8
HIF-ALONE | 94.6 | 96.1 | 82.9 | 45.6 | 73.5 | 63.2 | 91.8 | 63.0 | 72.9]", "We also do ablation analysis for HIF-ALONE as follows: HIF-WBOW replaces the outputs of HIF with d-dimensional WBOW vectors using PCA.", "HIF-EMB replaces the outputs of HIF with the mean pooling of word embeddings.", "We use F1 score as the evaluation metric.", "Experiment results are listed in Table 3 and Table 5.
All the reported results are averaged over 10 runs with different random seeds.", "General Results.", "We evaluate the performance of our model against 3 SOTA models under low resource settings, where only 1% or 10% of the total amount of labeled pairs are used for training (see Table 2).", "Comparative experiment results on the 9 datasets are listed in Table 3. Our decoupled framework achieves SOTA EM results on all nine datasets, and demonstrates significant performance on the Dirty datasets, with boosts of 4.3%, 14.7%, and 8.4% in F1 score on I-A2, D-A2, and D-S2, compared to the best performance of the baselines on the corresponding datasets.", "Our methods also outperform all baselines on the Structured and two of the Real datasets (the same as Magellan on Toner).", "The outperformance on the Real datasets is marginal because attribute values in the Real datasets are quite standard, which means that our model does not have many chances to fix noisy attribute values.", "Still, our methods achieve a high F1 score (94.9%) on the Real datasets.", "These results indicate our methods are both effective under low resource settings and robust to noisy data.", "Effectiveness in Low Resource Settings.", "We reduce the training rate from 60% to 10% to see whether our method is sensitive to the number of labeled record pairs as training resources.", "Experimental results are shown in Figure 3. HIF+KAT (red line) achieves a stable performance as the number of labeled record pairs decreases, while the F1 scores of DeepMatcher and HierMatcher decrease simultaneously.", "Besides, our methods continuously outperform DeepMatcher and HierMatcher, ranging from the low resource setting to the sufficient resource setting.", "These results indicate that by exploring unlabelled data, HIF alleviates the reliance on labeled record pairs.", "Effectiveness on Noisy Heterogeneous Data.", "We manually aggravate the quality of the datasets by randomly dropping p% of attribute values (p% ranges from 0% to 40%), and see to what degree the feature representations delivered by HIF will affect the EM matching decision.", "From left to right, the columns of subgraphs in Figure 3 demonstrate results with increasing dropping rate.", "On the I-A1 dataset, the influence of the dropping rate on HIF+KAT is marginal, with its F1 score fluctuating around 95%.", "In contrast, the F1 scores of both DeepMatcher and HierMatcher decrease if more attribute values are dropped.", "On the Phone dataset, the dropping rate's influence on HIF+KAT is not severe, especially when the training rate is low.", "These results show that HIF is efficient in recovering noisy heterogeneous inputs.", "The interpretability of our model means that the decision making process of KAT can be easily transformed into EM rules whose structure is recommended by domain experts.", "Figure 4 illustrates a tree decision process of KAT that determines whether two records denote the same publication in the D-A1 (DBLP and ACM) datasets.", "Each path from the root to a leaf node of the tree structure can be converted into an EM rule as follows (see the sketches below): Rule 1: if L2(HIF(e_1), HIF(e_2))[Authors] ≥ 10.21, then e_1, e_2 are not a match; Rule 2: if L2(HIF(e_1), HIF(e_2))[Authors] < 10.21 and L2(HIF(e_1), HIF(e_2))[Title] < 0.73, then e_1, e_2 are a match; Rule 3: if L2(HIF(e_1), HIF(e_2))[Authors] < 10.21 and L2(HIF(e_1), HIF(e_2))[Title] ≥ 0.73, then e_1, e_2 are not a match.", "They can be further read as
descriptive rules: Rule 1: if two records have different authors, they will be different publications.", "Rule 2: if two records have similar authors and similar titles, they will be the same publication.", "Rule 3: if two records have similar authors and dissimilar titles, they will not be the same publication.", "The soundness of such rules can be examined against our experience.", "Important features of KAT are as follows: (1) KAT is conditioned on attribute comparison; (2) KAT only selects a few key attributes to compare features.", "In our example, there are 4 attributes (Author, Title, Venue, and Conference) in the D-A1 dataset. [Figure 4: The Key Attribute Tree generated by HIF+KAT-XGB for the D-A1 dataset.]", "KAT only selects Title and Author for EM decision making.", "The transformed rules meet the specifications of manually designed EM rules of domain experts (Fan et al., 2009; Singh et al., 2017).", "This kind of interpretability will ease the collaboration with domain experts and increase trustworthiness, compared with uninterpretable end-to-end deep learning EM models.", "Ablation Analysis.", "Experiment results for the ablation models are listed in Table 3. On the one hand, HIF+LN and HIF+LR generally outperform DeepMatcher and HierMatcher on 7 datasets, with on-par performance on 2 Real datasets.", "This indicates that HIF and CFC together extract better comparison features than end-to-end neural methods under low resource settings.", "On the other hand, HIF+LN and HIF+LR are weaker than the tree induction classifiers, suggesting that KAT is more reliable.", "Among HIF+KAT-ID3, Magellan, and HIF-ALONE, HIF+KAT-ID3 achieves the highest performance, indicating that comparisons on both attribute value embeddings and the original attribute values are important.", "Among HIF-ALONE, HIF-WBOW, and HIF-EMB, HIF-ALONE outperforms HIF-WBOW and HIF-EMB on the Dirty datasets, showing the positive effect of its information reconstruction.", "Finally, comparing HIF+KAT with HIF+DT, we find that HIF+KAT has better performance than HIF+DT on most of the datasets, except for I-A2 and Phone.", "This suggests that non-key attributes can disturb the matching decision.
[Table 4: training time for one epoch (Epoch), training time until finish (Train), and testing time (Test).
Epoch: Method | I-A1 | D-A1 | D-S1 | Phone | Skirt | Toner
DM-HYB | 0.98 | 1.0 | 2.3 | 12.7 | 5.1 | 2.5
HierMatcher | 0.47 | 0.3 | 0.7 | 41.7 | 4.0 | 1.4
HIF+KAT-ID3 | 0.45 | 1.0 | 1.5 | 2.2 | 5.5 | 3.2
Train: Method | I-A1 | D-A1 | D-S1 | Phone | Skirt | Toner
DM-HYB | 86 | 434 | 958 | 1,418 | 2,984 | 1,473
HierMatcher | 37 | 139 | 309 | 3,799 | 2,809 | 1,082
HIF+KAT-ID3 | 344 | 819 | 1,085 | 1,097 | 1,669 | 968
Test: Method | I-A1 | D-A1 | D-S1 | Phone | Skirt | Toner
DM-HYB | 2.4 | 31.7 | 67.1 | 56.9 | 229.6 | 113.9
HierMatcher | 2.0 | 25.1 | 50.1 | 113.0 | 181.1 | 74.4
HIF+KAT-ID3 | 0.4 | 1.0 | 1.4 | 2.2 | 5.4 | 3.1]", "Efficiency.", "Table 4 shows the running times of our methods and of the two neural baselines.", "Our methods are highly efficient for inference, because they are highly parallel and memory-saving.", "For example, on the Phone dataset our methods can run inference in a single batch, while HierMatcher can only run with a batch size of 4 on 24 GiB of RAM.", "The training efficiency of our method is comparable with the baselines, because when the training data is small enough, baseline models may finish one epoch of training with only a few batches.", "Sufficient Resource EM.", "Table 5 shows the results with sufficient training data, following the split method of Mudgal et al. (2018) and Fu et al.
(2020).", "Our method outperforms the other methods on 4 datasets, and falls slightly behind on 5 datasets.", "The way of extracting comparison features falls into two categories: monotonic and non-monotonic.", "Monotonic features are (negatively) proportional similarities between attribute values.", "They can be calculated by symbolic rules, such as Jaccard similarity and Levenshtein similarity (Fan et al., 2009; Wang et al., 2011; Konda et al., 2016; Singh et al., 2017), or learned from differentiable comparison operations, such as subtraction and point-wise multiplication (Fu et al., 2019; Ebraheem et al., 2018).", "Non-monotonic features are hidden representations of end-to-end neural networks, such as Softmax or Sigmoid based similarity scores (Fu et al., 2020), attention based scores (Nie et al., 2019), or simply embedding based features (Mudgal et al., 2018; Li et al., 2020).", "EM with limited resources has recently attracted research interest (Thirumuruganathan et al., 2018; Kasai et al., 2019).", "Existing explorations seek solutions ranging from leveraging external data to improving annotation efficiency.", "External data can be aggregated via transfer learning (Zhao and He, 2019; Thirumuruganathan et al., 2018; Kasai et al., 2019; Loster et al., 2021), or via pre-trained language models (Li et al., 2020).", "For better annotations, researchers have tried active learning (Kasai et al., 2019; Nafa et al., 2020; Sarawagi and Bhamidipaty, 2002; Arasu et al., 2010) and crowdsourcing techniques (Wang et al., 2012; Gokhale et al., 2014).", "The interpretability of neural models contributes to their trustworthiness and safety.", "It has become one of the central issues in machine learning.", "Chen et al. (2020) examine interpretability in EM risk analysis.", "There are also attempts to explain from the perspective of attention coefficients (Mudgal et al., 2018; Nie et al., 2019).", "We present a decoupled framework for interpretable entity matching.", "It is robust to both noisy heterogeneous input and the scale of training resources.", "Experiments show that our method can be converted to interpretable rules, which can be inspected by domain experts and make the EM process more reliable.", "In the future, it is intriguing to explore more efficient ways to exploit unlabeled data, such as leveraging connections among entities, or combining with pre-trained language models.", "It is also valuable to explore how to use our heterogeneous information fusion module to boost other EM methods, such as injecting HIF representations as supplementary information into end-to-end models.", "This work is supported by the Science and Technology Innovation 2030 New Generation of Artificial Intelligence Project (2020AAA0106501), the NSFC Key Project (U1736204), the NSFC Youth Project (62006136), the Federal Ministry of Education and Research of Germany as part of the competence center for machine learning ML2R (01IS18038C), and a grant from Alibaba Inc.", "Intended Use.", "The reported technique is intended for reliable entity matching in large-scale E-commerce products, where attribute values are mostly heterogeneous descriptive sentences.", "The 'low resource' feature is intended to avoid heavy labeling labor.", "The 'interpretability' is intended for risk control in entity matching.", "Misuse Potential.", "As a matching/alignment technique, our method may be misused for matching private information.", "Failure Modes.", "Our method provides a promising way to have domain experts check the generated rules, thus reducing the
failure risk.", "Energy and Carbon Costs.", "The efficiency test in Section 4.4 shows that our method costs less computations and is more energy saving than existing methods." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "result", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain" ]
[ "Transformers are unable to model long-term memories effectively, since the amount of computation they need to perform grows with the context length.", "While variations of efficient transformers have been proposed, they all have a finite memory capacity and are forced to drop old information.", "In this paper, we propose the -former , which extends the vanilla transformer with an unbounded long-term memory.", "By making use of a continuous-space attention mechanism to attend over the long-term memory, the -former's attention complexity becomes independent of the context length, trading off memory length with precision.", "In order to control where precision is more important, -former maintains sticky memories, being able to model arbitrarily long contexts while keeping the computation budget fixed.", "Experiments on a synthetic sorting task, language modeling, and document grounded dialogue generation demonstrate the -former's ability to retain information from long sequences.", "1 1 Introduction When reading or writing a document, it is important to keep in memory the information previously read or written.", "Humans have a remarkable ability to remember long-term context, keeping in memory the relevant details (Carroll, 2007; Kuhbandner, 2020).", "Recently, transformer-based language models have achieved impressive results by increasing the context size (Radford et al., 2018, 2019; Dai et al., 2019; Rae et al., 2019; Brown et al., 2020).", "However, while humans process information sequentially, updating their memories continuously, and recurrent neural networks (RNNs) update a single memory vector during time, transformers do not they exhaustively query every representation associated to the past events.", "Thus, the amount 1 The code is available at https://github.com/ deep-spin/infinite-former .", "of computation they need to perform grows with the length of the context, and, consequently, transformers have computational limitations about how much information can fit into memory.", "For example, a vanilla transformer requires quadratic time to process an input sequence and linear time to attend to the context when generating every new word.", "Several variations have been proposed to address this problem (Tay et al., 2020b).", "Some propose using sparse attention mechanisms, either with data-dependent patterns (Kitaev et al., 2020; Vyas et al., 2020; Tay et al., 2020a; Roy et al., 2021; Wang et al., 2021) or data-independent patterns (Child et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020), reducing the self-attention complexity (Katharopoulos et al., 2020; Choromanski et al., 2021; Peng et al., 2021; Jaegle et al., 2021), and caching past representations in a memory (Dai et al., 2019; Rae et al., 2019).", "These models are able to reduce the attention complexity, and, consequently, to scale up to longer contexts.", "However, as their complexity still depends on the context length, they cannot deal with unbounded context.", "In this paper, we propose the -former ( infinite former ; Fig. 
1): a transformer model extended with an unbounded long-term memory (LTM), which allows the model to attend to arbitrarily long contexts.", "The key for making the LTM unbounded is a continuous-space attention framework (Mar-tins et al., 2020) which trades off the number of information units that fit into memory (basis functions) with the granularity of their representations.", "In this framework, the input sequence is represented as a continuous signal , expressed as a linear combination of N radial basis functions (RBFs).", "By doing this, the -former's attention complexity is O ( L 2 + L N ) while the vanilla transformer's is O ( L ( L + LLTM )) , where L and LLTM correspond to the transformer input size and the long-term memory length, respectively (details in 3.1.1).", "Thus, this representation comes with 5468 two significant advantages:", "(i) the context can be represented using a number of basis functions N smaller than the number of tokens, reducing the attention computational cost; and", "(ii) N can be fixed , making it possible to represent unbounded context in memory, as described in 3.2 and Fig. 2, without increasing its attention complexity.", "The price, of course, is a loss in resolution: using a smaller number of basis functions leads to lower precision when representing the input sequence as a continuous signal, as shown in Fig. 3.", "To mitigate the problem of losing resolution, we introduce the concept of sticky memories", "(3.3), in which we attribute a larger space in the LTM's signal to regions of the memory more frequently accessed.", "This creates a notion of permanence in the LTM, allowing the model to better capture long contexts without losing the relevant information, which takes inspiration from long-term potentiation and plasticity in the brain", "(Mills et al., 2014; Bamji, 2005).", "To sum up, our contributions are the following: We propose the -former, in which we extend the transformer model with a continuous long-term memory", "(3.1).", "Since the attention computational complexity is independent of the context length, the -former is able to model long contexts.", "We propose a procedure that allows the model to keep unbounded context in memory", "(3.2).", "We introduce sticky memories, a procedure that enforces the persistence of important information in the LTM", "(3.3).", "We perform empirical comparisons in a synthetic task", "(4.1), which considers increasingly long sequences, in language modeling", "(4.2), and in document grounded dialogue generation", "(4.3).", "These experiments show the benefits of using an unbounded memory.", "A transformer", "(Vaswani et al., 2017)", "is composed of several layers, which encompass a multi-head self-attention layer followed by a feed-forward layer, along with residual connections", "(He et al., 2016)", "and layer normalization", "(Ba et al., 2016).", "input size and e is the embedding size of the attention layer.", "The queries Q , keys K , and values V , to be used in the multi-head self-attention computation are obtained by linearly projecting the input, or the output of the previous layer, X , for each attention head h : Q h = X h WQ h , K h = X h WK h , V h = X h WV h ,", "(1)", "where WQ h , WK h , WV h R d d are learnable projection matrices, d = e / H , and H is the number of heads.", "Then, the context representation Z h RL d , that corresponds to each attention head h , is obtained as: Z h = softmax", "where the softmax is performed row-wise.", "The head context representations are concatenated to 
obtain the final context representation Z RL e : Z = [ Z 1 , . . . , ZH ] WR ,", "Continuous attention mechanisms", "(Martins et al., 2020)", "have been proposed to handle arbitrary continuous signals, where the attention probability mass function over words is replaced by a probability density over a signal.", "This allows time intervals or compact segments to be selected.", "To perform continuous attention, the first step is to transform the discrete text sequence represented by X RL e into a continuous signal.", "This is done by expressing it as a linear combination of basis functions.", "To do so, each x i , with i { 1 , . . . , L } , is first associated with a position in an interval, t i [0 , 1] , e.g. , by setting t i = i/L .", "Then, we obtain a continuous-space representation X", "( t )", "R e , for any t [0 , 1] as: X", "( t )", "= B", "(cid:62)", "", "( t )", ",", "where", "( t )", "RN is a vector of N RBFs, e.g., j", "( t )", "= N", "( t ; j , j )", ", with j [0 , 1] , and B RN e is a coefficient matrix.", "B is obtained with multivariate ridge regression", "(Brown et al., 1980)", "so that X", "( t i )", "x i for each i [ L ] , which leads to the closed form", "(see App. A for details): B", "where F = [", "( t 1 )", ", . . . ,", "( t L )] RN L packs the basis vectors for the L locations.", "As G RL N only depends of F , it can be computed offline.", "Having converted the input sequence into a continuous signal X", "( t )", ", the second step is to attend over this signal.", "To do so, instead of having a discrete probability distribution over the input sequence as in standard attention mechanisms", "(like in Eq. 2), we have a probability density p , which can be a Gaussian N", "( t ; , 2 )", ", where and 2 are computed by a neural component.", "A unimodal Gaussian distribution encourages each attention head to attend to a single region, as opposed to scattering its attention through many places, and enables tractable computation.", "Finally, having p , we can compute the context vector c as: c = E p", "Martins et al.", "(2020)", "introduced the continuous attention framework for RNNs.", "In the following section", "(3.1), we will explain how it can be used for transformer multi-head attention.", "To allow the model to access long-range context, we propose extending the vanilla transformer with a continuous LTM, which stores the input embeddings and hidden states of the previous steps.", "We also consider the possibility of having two memories: the LTM and a short-term memory", "(STM), which consists in an extension of the transformer's hidden states and is attended to by the transformer's self-attention, as in the transformer-XL", "(Dai et al., 2019).", "A diagram of the model is shown in Fig.", "1. 
"For simplicity, let us first assume that the long-term memory contains an explicit input discrete sequence X that consists of the past text sequence's input embeddings or hidden states, depending on the layer (we will later extend this idea to an unbounded memory in §3.2).", "(We stop the gradient with respect to the word embeddings or hidden states before storing them in the LTM.)", "Then, this sequence is expressed as a continuous signal X(t) = B^T ψ(t) (7), where ψ(t) ∈ R^N are basis functions and the coefficients B ∈ R^(N×e) are computed as in Eq. 5, B^T = X^T G.", "Then, we can compute the LTM keys, K_h ∈ R^(N×d), and values, V_h ∈ R^(N×d), for each attention head h, as K_h = B_h W^K_h, V_h = B_h W^V_h (8), where W^K_h, W^V_h ∈ R^(d×d) are learnable projection matrices.", "For each query q_h,i, for i ∈ {1, ..., L}, we use a parameterized network, which takes as input the attention scores, to compute μ_h,i ∈ ]0, 1[ and σ²_h,i ∈ R_(>0): μ_h,i = sigmoid(affine(s_h,i)) (9) and σ²_h,i = softplus(affine(s_h,i)) (10), and obtain the attention probability density p_h,i as N(t; μ_h,i, σ²_h,i).", "Finally, having the value function V_h(t) given as V_h(t) = V_h^T ψ(t), we compute the head-specific representation vectors as in Eq. 6: z_h,i = E_(p_h,i)[V_h] = V_h^T E_(p_h,i)[ψ(t)] (11), which form the rows of the matrix Z_LTM,h ∈ R^(L×d) that goes through an affine transformation, Z_LTM = [Z_LTM,1, ..., Z_LTM,H] W^O.", "The long-term representation, Z_LTM, is then added to the transformer context vector, Z_T, to obtain the final context representation Z ∈ R^(L×e): Z = Z_T + Z_LTM (12), which will be the input to the feed-forward layer.", "As the ∞-former makes use of a continuous-space attention framework (Martins et al., 2020) to attend over the LTM signal, its key matrix size K_h ∈ R^(N×d) depends only on the number of basis functions N, and not on the length of the context being attended to.", "Thus, the ∞-former's attention complexity is also independent of the context's length.", "It corresponds to O(L × (L + L_STM) + L × N) when also using a short-term memory and O(L^2 + L × N) when only using the LTM, both ≪ O(L × (L + L_LTM)), which would be the complexity of a vanilla transformer attending to the same context.", "For this reason, the ∞-former can attend to arbitrarily long contexts without increasing the amount of computation needed.", "When representing the memory as a discrete sequence, to extend it we need to store the new hidden states in memory.", "In a vanilla transformer, this is not feasible for long contexts due to the high memory requirements.", "However, the ∞-former can attend to unbounded context without increasing memory requirements by using continuous attention, as next described and shown in Fig. 2.", "To be able to build an unbounded representation, we first sample M locations in [0, 1] and evaluate X(t) at those locations.", "These locations can simply be linearly spaced, or sampled according to region importance, as described in §3.3.", "Then, we concatenate the corresponding vectors with the new vectors coming from the short-term memory.", "For that, we first need to contract this function by a factor τ ∈ ]0, 1[ to make room for the new vectors.", "We do this by defining X_contracted(t) = X(t/τ) = B^T ψ(t/τ) (13).", "Then, we can evaluate the contracted signal at M locations 0 ≤ t_1, t_2, ..., t_M ≤ τ as x_m = B^T ψ(t_m/τ), for m ∈ [M] (14), and define a matrix X_past = [x_1, x_2, ..., x_M]^T ∈ R^(M×e) with these vectors as rows.",
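A minimal sketch of one head's LTM read (Eqs. 8-11). The maps that produce (μ, σ²) from the scores are hypothetical stand-ins for the paper's parameterized network, and E_p[ψ(t)] is approximated by numerical integration on a grid:

```python
import numpy as np

def gaussian_pdf(t, mu, var):
    return np.exp(-(t - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def ltm_head_read(B_h, W_K, W_V, q, rbf, grid=np.linspace(0, 1, 1000)):
    """One head's continuous LTM read: B_h is (N, d); rbf maps t -> psi(t) in R^N."""
    K = B_h @ W_K                              # Eq. 8: keys over basis functions
    V = B_h @ W_V                              # Eq. 8: values over basis functions
    s = K @ q / np.sqrt(len(q))                # attention scores for this query
    mu = 1.0 / (1.0 + np.exp(-s.mean()))       # stand-in for sigmoid(affine(s)), Eq. 9
    var = 0.01 * np.log1p(np.exp(s.std()))     # stand-in for softplus(affine(s)), Eq. 10
    p = gaussian_pdf(grid, mu, var)            # density p_{h,i} = N(t; mu, var)
    Psi = np.stack([rbf(t) for t in grid])     # (T, N) basis evaluations
    E_psi = (p[:, None] * Psi).sum(0) * (grid[1] - grid[0])   # E_p[psi(t)]
    return V.T @ E_psi                         # Eq. 11: z = V^T E_p[psi(t)]
```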
, x M ]", "(cid:62)", "RM e with these vectors as rows.", "After that, we concatenate this matrix with the new vectors X new , obtaining: X =", "(cid:104)", "X past", "(cid:62)", ", X new", "(cid:62)", "(cid:105)", "(cid:62)", "R", "( M + L )", "e .", "(15)", "Finally, we simply need to perform multivariate ridge regression to compute the new coefficient matrix B RN e , via B", "(cid:62)", "= X", "(cid:62)", "G , as in Eq.", "5.", "To do this, we need to associate the vectors in X past with positions in [0 , ] and in X new with positions in ] , 1] so that we obtain the matrix G R", "( M + L )", "N .", "We consider the vectors positions to be linearly spaced.", "When extending the LTM, we evaluate its current signal at M locations in [0 , 1] , as shown in Eq.", "14.", "These locations can be linearly spaced.", "However, some regions of the signal can be more relevant than others, and should consequently be given a larger memory space in the next step LTM's signal.", "To take this into account, we propose sampling the M locations according to the signal's relevance at each region", "(see Fig. 6 in App. B).", "To do so, we construct a histogram based on the attention given to each interval of the signal on the previous step.", "For that, we first divide the signal into 5471 D linearly spaced bins { d 1 , . . . , d D } .", "Then, we compute the probability given to each bin, p", "( d j )", "for j { 1 , . . . , D } , as: p", "where H is the number of attention heads and L is the sequence length.", "Note that Eq.", "16's integral can be evaluated efficiently using the erf function:", "a N", "( t ; , 2 )", "= 1 2", "erf", "b 2", "erf", "a 2", ".", "(17)", "Then, we sample the M locations at which the LTM's signal is evaluated at, according to p .", "By doing so, we evaluate the LTM's signal at the regions which were considered more relevant by the previous step's attention, and will, consequently attribute a larger space of the new LTM's signal to the memories stored in those regions.", "Discrete sequences can be highly irregular and, consequently, difficult to convert into a continuous signal through regression.", "Because of this, before applying multivariate ridge regression to convert the discrete sequence X into a continuous signal, we use a simple convolutional layer", "(with stride = 1 and width = 3 )", "as a gate, to smooth the sequence: X = sigmoid", "To train the model we use the cross entropy loss.", "Having a sequence of text X of length L as input, a language model outputs a probability distribution of the next word p", "( x t +1 | x t , . . . 
, x t L )", ".", "Given a corpus of T tokens, we train the model to minimize its negative log likelihood: LNLL = T 1", "Additionally, in order to avoid having uniform distributions over the LTM, we regularize the continuous attention given to the LTM, by minimizing the Kullback-Leibler", "(KL)", "divergence, DKL , between the attention probability density, N", "( h , h )", ", and a Gaussian prior, N", "( 0 , 0 )", ".", "As different heads can attend to different regions, we set 0 = h , regularizing only the attention variance, and get: LKL = T 1", "(cid:88)", "t =0 H", "(cid:88)", "h =1 DKL", "( N", "( h , h )", "|| N", "( h , 0 ))", "(20)", "= T 1", "(cid:88)", "t =0 H", "(cid:88)", "h =1 1 2", "(cid:18)", "2 h 20 log", "(cid:18)", "h 0", "(cid:19)", "1", "(cid:19)", ".", "(21)", "Thus, the final loss that is minimized corresponds to: L = LNLL + KLLKL ,", "To understand if the -former is able to model long contexts, we first performed experiments on a synthetic task, which consists of sorting tokens by their frequencies in a long sequence", "(4.1).", "Then, we performed experiments on language modeling", "(4.2)", "and document grounded dialogue generation", "(4.3)", "by fine-tuning a pre-trained language model.", "5 4.1 Sorting In this task, the input consists of a sequence of tokens sampled according to a token probability distribution", "(which is not known to the system).", "The goal is to generate the tokens in the decreasing order of their frequencies in the sequence.", "One example can be: 1 2 1 3 1 0 3 1 3 2", "To understand if the long-term memory is being effectively used and the transformer is not only performing sorting by modeling the most recent tokens, we design the token probability distribution to change over time : namely, we set it as a mixture of two distributions, p = p 0 +", "(1 )", "p 1 , where the mixture coefficient [0 , 1] is progressively increased from 0 to 1 as the sequence is generated.", "The vocabulary has 20 tokens and we experiment with sequences of length 4,000, 8,000, and 16,000.", "Baselines.", "We consider the transformer-XL 6", "(Dai et al., 2019)", "and the compressive transformer 7", "(Rae et al., 2019)", "as baselines.", "The transformer-XL consists of a vanilla transformer", "(Vaswani et al., 2017)", "extended with a short-term memory which is composed of the hidden states of the previous steps.", "The compressive transformer is an extension of the transformer-XL: besides the short-term memory, it has a compressive long-term memory which is composed of the old vectors of the short-term memory, compressed using a CNN.", "Both the transformer-XL and the compressive transformer require relative positional encodings.", "In contrast, there is no need for positional encodings in the memory in our approach since the memory vectors represent basis coefficients in a predefined continuous space.", "For all models we used a transformer with 3 layers and 6 attention heads, input size L = 1 , 024 and memory size 2,048.", "For the compressive transformer, both memories have size 1,024.", "For the -former, we also consider a STM of size 1,024 and a LTM with N = 1 , 024 basis functions, having the models the same computational cost.", "Further details are described in App.", "C.1.", "Results.", "As can be seen in the left plot of Fig. 
"Results.", "As can be seen in the left plot of Fig. 3, the transformer-XL achieves a slightly higher accuracy than the compressive transformer and the ∞-former for a short sequence length (4,000).", "This is because the transformer-XL is able to keep almost the entire sequence in memory.", "However, its accuracy degrades rapidly when the sequence length is increased.", "(We use the authors' implementation available at https://github.com/kimiyoung/transformer-xl .)", "Both the compressive transformer and the ∞-former also lead to smaller accuracies when increasing the sequence length, as expected.", "However, this decrease is not as significant for the ∞-former, which indicates that it is better at modeling long sequences.", "Regression error analysis.", "To better understand the trade-off between the ∞-former's memory precision and its computational efficiency, we analyze how its regression error and sorting accuracy vary when varying the number of basis functions used, on the sorting task with input sequences of length 8,000.", "As can be seen in the right plot of Fig. 3, the sorting accuracy is negatively correlated with the regression error, which is positively correlated with the number of basis functions.", "It can also be observed that when increasing the number of basis functions substantially, the regression error reaches a plateau and the accuracy starts to drop.", "We posit that the latter is caused by the model having a harder task at selecting the locations it should attend to.", "This shows that, as expected, when increasing the ∞-former's efficiency or increasing the size of the context being modeled, the memory loses precision.", "To understand whether long-term memories can be used to extend a pre-trained language model, we fine-tune GPT-2 small (Radford et al., 2019) on Wikitext-103 (Merity et al., 2017) and on a subset of PG-19 (Rae et al., 2019) containing the first 2,000 books (approximately 200 million tokens) of the training set.", "To do so, we extend GPT-2 with a continuous long-term memory (∞-former) and a compressed memory (compressive transformer) with a positional bias based on Press et al. (2021).", "Table 1 (perplexity, Wikitext-103 / PG19): GPT-2 16.85 / 33.44; Compressive 16.87 / 33.09; ∞-former 16.64 / 32.61.", "For these experiments, we consider transformers with input size L = 512; for the compressive transformer we use a compressed memory of size 512, and for the ∞-former we consider an LTM with N = 512 Gaussian RBFs and a memory threshold of 2,048 tokens, with the same computational budget for the two models.", "Further details and hyperparameters are described in App. C.2.", "Results.", "The results reported in Table 1 show that the ∞-former leads to perplexity improvements on both Wikitext-103 and PG19, while the compressive transformer only has a slight improvement on the latter.", "The improvements obtained by the ∞-former are larger on the PG19 dataset, which can be justified by the nature of the datasets: books have more long-range dependencies than Wikipedia articles (Rae et al., 2019).", "In document grounded dialogue generation, besides the dialogue history, models have access to a document concerning the conversation's topic.", "In the CMU Document Grounded Conversation dataset (CMU-DoG) (Zhou et al., 2018), the dialogues are about movies and a summary of the movie is given as the auxiliary document; the auxiliary document is divided into parts that should be considered for the different utterances of the dialogue.", "In this paper, to evaluate the usefulness of the long-term memories, we make this task slightly more challenging by only giving the models access to the document before the start of the dialogue.",
"We fine-tune GPT-2 small (Radford et al., 2019) using an approach based on Wolf et al. (2019).", "To allow the model to keep the whole document in memory, we extend GPT-2 with a continuous LTM (∞-former) with N = 512 basis functions.", "As baselines, we use GPT-2, with and without access (GPT-2 w/o doc) to the auxiliary document, with input size L = 512, and GPT-2 with a compressed memory of size 512 with attention positional biases (compressive).", "(The compressive transformer requires relative positional encodings; when using only GPT-2's absolute positional encodings, the model gives too much attention to the compressed memory and, consequently, diverges, so we adapted it by using positional biases on the attention mechanism.)", "Further details and hyperparameters are stated in App. C.3.", "To evaluate the models we use the following metrics: perplexity, F1 score, Rouge-1 and Rouge-L (Lin, 2004), and Meteor (Banerjee and Lavie, 2005).", "Results.", "As shown in Table 2, by keeping the whole auxiliary document in memory, the ∞-former and the compressive transformer are able to generate better utterances, according to all metrics.", "While the compressive transformer and the ∞-former achieve essentially the same perplexity in this task, the ∞-former achieves consistently better scores on all other metrics.", "Also, using sticky memories leads to slightly better results on those metrics, which suggests that attributing a larger space in the LTM to the most relevant tokens can be beneficial.", "Analysis.", "In Fig. 4, we show examples of utterances generated by the ∞-former along with the excerpts from the LTM that receive higher attention throughout the utterances' generation.", "In these examples, we can clearly see that these excerpts are highly pertinent to the answers being generated.", "Also, in Fig. 5, we can see that the phrases which are attributed larger spaces in the LTM, when using sticky memories, are relevant to the conversations.",
"Continuous attention.", "Martins et al. (2020) introduced 1D and 2D continuous attention, using Gaussians and truncated parabolas as densities.", "They applied it to RNN-based document classification, machine translation, and visual question answering.", "Several other works have also proposed the use of (discretized) Gaussian attention for natural language processing tasks: Guo et al. (2019) proposed a Gaussian prior on the self-attention mechanism to bias the model to give higher attention to nearby words, and applied it to natural language inference; You et al. (2020) proposed the use of hard-coded Gaussian attention as an input-agnostic self-attention layer for machine translation; Dubois et al. (2020) proposed using Gaussian attention as a location attention mechanism to improve the model's generalization to longer sequences.", "[Figure 4: Examples of dialogue utterances generated by the ∞-former, shown with the movie-summary excerpts from the LTM that receive the highest attention during generation.]", "[Figure 5: Phrases that hold larger spaces of the LTM, when using sticky memories, for two dialogue examples.]", "However, these approaches still consider discrete sequences and compute the attention by evaluating the Gaussian density at the token positions.", "Farinhas et al. (2021) extend continuous attention to multimodal densities, i.e., mixtures of Gaussians, and apply it to VQA.",
"In this paper, we opt for the simpler case, a unimodal Gaussian, and leave sparse and multimodal continuous attention for future work.", "Efficient transformers.", "Several methods have been proposed that reduce the transformer's attention complexity and can, consequently, model longer contexts.", "Some of these do so by performing sparse attention, either by selecting pre-defined attention patterns (Child et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020) or by learning these patterns from data (Kitaev et al., 2020; Vyas et al., 2020; Tay et al., 2020a; Roy et al., 2021; Wang et al., 2021).", "Other works focus on directly reducing the attention complexity by applying the (reversed) kernel trick (Katharopoulos et al., 2020; Choromanski et al., 2021; Peng et al., 2021; Jaegle et al., 2021).", "Closer to our approach are the transformer-XL and compressive transformer models (Dai et al., 2019; Rae et al., 2019), which extend the vanilla transformer with a bounded memory.", "Memory-augmented language models.", "RNNs, LSTMs, and GRUs (Hochreiter et al., 1997; Cho et al., 2014) are able to keep a memory state of the past.", "However, they require backpropagation through time, which is impractical for long sequences.", "Because of this, Graves et al. (2014), Weston et al. (2014), Joulin and Mikolov (2015), and Grefenstette et al. (2015) proposed extending RNNs with an external memory, while Chandar et al. (2016) and Rae et al. (2016) proposed efficient procedures to read from and write to these memories, using hierarchies and sparsity.", "Grave et al. (2016) and Merity et al. (2017) proposed the use of cache-based memories which store pairs of hidden states and output tokens from previous steps.", "The distribution over the words in the memory is then combined with the distribution given by the language model.", "More recently, Khandelwal et al. (2019) and Yogatama et al. (2021) proposed using nearest neighbors to retrieve words from a key-based memory constructed from the training data.", "Similarly, Fan et al. (2021) proposed retrieving sentences from a memory based on the training data and auxiliary information.", "Khandelwal et al. (2019) proposed interpolating the retrieved words' probability distributions with the probability over the vocabulary words when generating a new word, while Yogatama et al. (2021) and Fan et al. (2021) proposed combining the information at the architecture level.", "These models have the disadvantage of needing to perform a retrieval step when generating each token / utterance, which can be computationally expensive.", "These approaches are orthogonal to the ∞-former's LTM, and in future work the two can be combined.", "In this paper, we proposed the ∞-former: a transformer extended with an unbounded long-term memory.", "By using a continuous-space attention framework, its attention complexity is independent of the context's length, which allows the model to attend to arbitrarily long contexts while keeping a fixed computation budget.", "By updating the memory taking past usage into account, the model keeps sticky memories, enforcing the persistence of relevant information in memory over time.", "Experiments on a synthetic sorting task show that the ∞-former scales up to long sequences, maintaining high accuracy.", "Experiments on language modeling and document grounded dialogue generation by fine-tuning a pre-trained language model have shown improvements across several metrics.",
"Transformer models that attend to long contexts, in order to improve their generation quality, need large amounts of computation and memory to perform self-attention.", "In this paper, we propose an extension to the transformer model that makes the attention complexity independent of the length of the context being attended to.", "This can lead to a reduced number of parameters needed to model the same context, which can, consequently, lead to gains in efficiency and reduce energy consumption.", "On the other hand, the ∞-former, like other transformer language models, can be used in questionable scenarios, such as the generation of fake news (Zellers et al., 2019), defamatory text (Wallace et al., 2019), or other undesired content.", "This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by the P2020 project MAIA (contract 045909), by the Fundação para a Ciência e a Tecnologia through project PTDC/CCI-INF/4703/2021 (PRELUNA, contract UIDB/50008/2020), by the EU H2020 SELMA project (grant agreement No. 957017), and by contract PD/BD/150633/2020 in the scope of the Doctoral Program FCT PD/00140/2013 NETSyS.", "We thank Jack Rae, Tom Schaul, the SARDINE team members, and the reviewers for helpful discussion and feedback." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "method", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other" ]
[ "The open-ended nature of visual captioning makes it a challenging area for evaluation.", "The majority of proposed models rely on specialized training to improve human-correlation, resulting in limited adoption, generalizability, and explainabilty.", "We introduce typicality, a new formulation of evaluation rooted in information theory, which is uniquely suited for problems lacking a definite ground truth.", "Typicality serves as our framework to develop a novel semantic comparison, SPARCS, as well as referenceless fluency evaluation metrics.", "Over the course of our analysis, two separate dimensions of fluency naturally emerge: style, captured by metric SPURTS, and grammar, captured in the form of grammatical outlier penalties.", "Through extensive experiments and ablation studies on benchmark datasets, we show how these decomposed dimensions of semantics and fluency provide greater system-level insight into captioner differences.", "Our proposed metrics along with their combination, SMURF, achieve state-of-the-art correlation with human judgment when compared with other rule-based evaluation metrics 1 .", "Visual captioning serves as a foundation for im-age/video understanding tools and relies on caption evaluation for identifying promising research directions.", "Rule-based caption evaluation approaches like the n-gram based CIDEr (Vedantam et al., 2015) and parsed semantic proposal based SPICE (Anderson et al., 2016) specifically are able to provide researchers with meaningful feedback on what their algorithm is lacking.", "However, n-gram based methods are sensitive to stop words and sentence parsers are often inconsistent, leading to Liu et al. (2017) showing that neither method 1 SMURF source codes and data will be released at https: //github.com/JoshuaFeinglass/SMURF .", "fully captures either the fluency or the semantic meaning of text.", "More recently proposed metrics attempt to learn cues of caption quality by training models via image grounding techniques (Cui et al., 2018) or human and generated captions (Sellam et al., 2020).", "These approaches, however, lack generality, require domain specific training, and offer little insight for improving captioners, leading to none of the proposed models being adopted for use as a caption evaluation benchmark.", "We instead postulate that quality in semantics and descriptive language is universally recognizable.", "is no longer a single outcome, but a large set of potential outcomes of varying levels of quality.", "From this problem setting, the novel concept of typical-ity arises naturally.", "A desirable caption is one that is atypical enough linguistically that it uniquely describes the scene, follows typical natural language protocols, and matches a typical semantic description of a scene.", "Linguistically, the number of typical sequences is characterized by the entropy rate (Cover, 1999).", "Current work estimates the English language as having an entropy rate of only 1 .", "44 bits/letter (Taka-hashi and Tanaka-Ishii, 2018), implying that the typical set of English is only a tiny fraction of the full space of potential text.", "Self-attention transformers are language models that are able to identify the distinguishing contextual features of this typical set and as a result have now become the staple of natural language understanding tasks.", "Here we define typicality based on the distance of a candidate text's features from expected features of the typical set.", "We call this linguistic typicality estimation method 
"Rather than assuming a predefined evaluation task and introducing bias by fine-tuning the self-attention transformer, our method extracts the inherent properties of language learned by transformers (Devlin et al., 2019; Liu et al., 2019) by treating self-attention layers as probability distributions, as demonstrated in Clark et al. (2019).", "Our approach represents the first integration of a fluency-specific metric that demonstrably improves correlation with human judgment for caption evaluation.", "By removing stop words from the candidate text, f_MIMA is able to create a metric that assesses a relatively new fluency criterion in captioning: style.", "We refer to this metric as Stochastic Process Understanding Rating using Typical Sets (SPURTS).", "Style can be thought of as the instantiation of diction and is necessary for generating captions of human-level quality.", "Stylized captions describe a much smaller set of media, leading machines to instead generate the most typical caption that is still semantically correct.", "This results in a significant gap between machine and human captioners that can be seen in diction-based examples such as the use of common words like dog and food instead of more descriptive words like Schnauzer and lasagna.", "The other aspect of fluency assessed by f_MIMA is grammar.", "Unlike style, grammar is not essential for caption quality; however, highly atypical syntax can potentially lead to awkward captions, so we develop a separate grammatical outlier penalty.", "We then define a lightweight and reliable typicality-based semantic similarity measure, Semantic Proposal Alikeness Rating using Concept Similarity (SPARCS), which complements our referenceless metrics and grounds them to the reference captions.", "By matching word sequences, current methods limit the scope of their evaluation.", "Instead, we take non-stopword unigrams and further coalesce them into concepts through stemming, then combine the reference texts, as in Yi et al. (2020), using a novel semantic typicality measure over the reference texts' concepts to evaluate the semantic similarity of a candidate and reference text.", "SPURTS and SPARCS can be used to assess system-level differences between captioners, as shown in Figure 1.", "Based on this analysis, the M² Transformer lags behind 2015 models in terms of similarity to human captions, even though both 2020 captioners achieved state-of-the-art results by CIDEr standards.", "This difference becomes even more significant when one considers that the use of style makes it more difficult for a caption to be semantically correct.", "Human captions, the M² Transformer (Cornia et al., 2020), the X-Transformer (Pan et al., 2020), and Google (Vinyals et al., 2015) incur total grammar outlier penalties of 44.93, 7.47, 7.56, and 4.46, respectively.", "In order to provide caption-level insight as well, we combine SPURTS, SPARCS, and our grammar outlier penalty into one metric, SeMantic and linguistic UndeRstanding Fusion (SMURF), which rewards captions based on semantics and fluency.", "Contributions: Our key contributions are:", "1. A novel and widely-applicable model meta-analysis technique, MIMA, which estimates the typicality of candidate text and provides a means of assessing transformer robustness.",
"2. Three novel evaluation metrics useful for both caption-level and system-level evaluation: the style-focused SPURTS, the semantic-focused SPARCS, and their combination, SMURF, which incorporates grammatical outliers as well.", "3. Experiments showing that SPARCS and SMURF achieve SOTA performance in their respective areas of semantic evaluation and human-machine evaluation at both a system and caption level.", "4. Evidence showing that the performance of automatic evaluation metrics has been underestimated relative to voting-based human evaluation metrics.", "Originally, popular rule-based metrics from machine translation that were mostly n-gram based, namely METEOR (Banerjee and Lavie, 2005), BLEU (Papineni et al., 2002), and ROUGE (Lin, 2004), were used for caption evaluation.", "Vedantam et al. (2015) introduced the more semantically sensitive CIDEr, which uses tf-idf to identify distinguishing n-grams and then compares them using cosine similarity.", "SPICE (Anderson et al., 2016) greatly improved upon n-gram based approaches by using a sentence parser to generate semantic propositions.", "Word moving distance scores (Zhao et al., 2019; Kilickaya et al., 2017) have also been used for semantic evaluation with limited success.", "BERTScore (Zhang et al., 2019) used the cosine similarity of embeddings from the self-attention transformer BERT and achieved state-of-the-art results on COCO, but provided little interpretation of this approach.", "Domain-specific training approaches have also been introduced, with limited adoption.", "Cui et al. (2018); Jiang et al. (2019); Sharif et al. (2019) present a training approach for caption evaluation where an image grounding and/or caption based Turing test is learned from training data of human and machine captioners.", "An adjusted BERTScore (Yi et al., 2020), BLEURT (Sellam et al., 2020), and NUBIA (Kane et al., 2020) utilize transformer embeddings for comparison between reference and candidate text, then perform caption dataset specific fine-tuning of the model downstream.", "The importance of fluency in captioning has been widely recognized.", "Liu et al. (2017) attempted to integrate CIDEr and SPICE to create a cost function attuned to both lexicographical and semantic qualities for captioning optimization.", "Cui et al. (2018) identified the presence of less frequent, distinguishing words within human-generated text in the COCO dataset.", "Mathews et al. (2018) recognized the importance of style in captions and integrated it into their model without sacrificing semantics.", "Referenceless evaluation, first proposed in Napoles et al. (2016) as a referenceless grammar error correction (GEC) evaluation metric, has been recognized as an effective avenue for fluency evaluation as a whole (Asano et al., 2017), along with combined approaches (Choshen and Abend, 2018).", "More recently, Perception Score (Gu et al., 2021) outlined a general paradigm for training referenceless quality evaluation.", "First introduced in Vaswani et al. (2017), transformers are made of layers of parallel attention heads which extract contextual information about inputs using attention.",
"They take in a sequence vector of tokenized words from candidate text, y^n, add start and separator/end tokens, and pass the input through a series of separate linear transforms with parameters, p, to create query, key, and value vectors, denoted q_i, k_i, and v_i, respectively.", "These vectors are then used to compute the attention weights of the heads as follows: α_ij(y^n, p) = exp(q_i^T k_j) / Σ_(l=1)^n exp(q_i^T k_l) (1), o_i(y^n, p) = Σ_(j=1)^n α_ij v_j (2), where α_ij and o_i are each layer's attention weights and output, respectively.", "Here α_ij(y^n, p) is a joint distribution with marginal distributions α_i(y^n, p) = Σ_j α_ij(y^n, p) and α_j(y^n, p) = Σ_i α_ij(y^n, p).", "BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) are encoder-decoder instantiations of transformers, pretrained on fundamental language tasks over large corpora.", "Both BERT and RoBERTa have achieved state-of-the-art results in various language understanding tasks.", "In order to speed up inference time, many papers have employed knowledge distillation to reduce the number of parameters these transformers require while still preserving their inference capabilities (Sun et al., 2019; Sanh et al., 2019; Chen et al., 2020).", "Transformers like BERT and RoBERTa take text tokenized into sub-word components as input, capturing both the syntax and the morphology of the text.", "The text sequences used as training data, x^n, can be modelled as a stationary ergodic stochastic process, {X_k}_(k=1)^∞, with instantiations limited to a finite alphabet X and based on the joint probability distribution P(X_1 = x_1, ..., X_n = x_n), whose transition predictability is governed by the entropy rate, H(X).", "[Figure 2: Visualization of the typicality formulation, introducing the concept of a typical set on the left and showing the distance proportional to typicality on the right.]", "The entropy of a distribution, or the entropy rate in the case of a stochastic process, can be used to describe the number of instantiations expected to be observed from a random variable or process, referred to as the typical set.", "From the Asymptotic Equipartition Property (AEP), it is known that the size of the typical set of sequences is bounded by |A_ε^n| ≤ 2^(n(H(X)+ε)) (3), where 2^(nH(X)) estimates the size of the typical set.", "We assume that a self-attention transformer learns to fill in words from a sentence by extracting features, F.", "The quality of a piece of text can then be assessed by determining the distance of the features taken by the model from candidate text, Y^n = y^n, from the expected value of the features taken from correctly written text, X^n = (x^n ∈ A_ε^n), shown visually in Figure 2 and mathematically in Equation 4: D_typical = dist(F | y^n, E[F | (x^n ∈ A_ε^n)]) (4).", "Here dist does not refer to a specific distance metric and is instead an unspecified norm that exists in some realizable projection space.", "We then postulate the existence of a surrogate function, f_MIMA, which maps the sequence input and the transformer parameter set, p, such that f_MIMA(y^n, p) ∝ D_typical (5), resulting in a value indicating the typicality of a candidate input sequence.", "This value can be used to characterize the input for evaluation purposes.", "We postulate that input text that differs more greatly from members of the typical set generates a greater spark of interest in a transformer, resulting in greater information flow through parts of the network, as shown in Figure 3.",
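The joint-distribution view of a head's attention can be sketched as follows. Normalizing the row-stochastic weight matrix so it sums to one (making α_ij a proper joint with the stated marginals) is our reading of the formulation, not something spelled out above:

```python
import numpy as np

def attention_joint_and_marginals(Q, K):
    """Eq. 1 plus the joint/marginal view: returns (alpha_ij, alpha_i, alpha_j)."""
    scores = Q @ K.T
    scores -= scores.max(axis=-1, keepdims=True)       # stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=-1, keepdims=True)         # Eq. 1: rows sum to 1
    joint = alpha / alpha.sum()                        # normalize to a joint
    return joint, joint.sum(axis=1), joint.sum(axis=0)

def entropy_bits(p, eps=1e-12):
    """Shannon entropy in bits of a (flattened) distribution."""
    p = np.asarray(p).reshape(-1)
    return float(-(p * np.log2(p + eps)).sum())
```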
"Conversely, if the input text is similar to the positive examples the transformer trains on, less information flows through the layer, indicating that the model has already captured information about the sequence previously.", "We formulate information flow in terms of the attention marginals α_i(y^n, p) and α_j(y^n, p) and their joint distribution α_ij(y^n, p), as defined in Section 3.1.", "We consider information flow based on the redundancy between α_i(y^n, p) and α_j(y^n, p) and use the normalized mutual information (MI): I_flow(y^n, p) = NMI = 2 × [H(α_i(y^n, p)) + H(α_j(y^n, p)) - H(α_ij(y^n, p))] / [H(α_i(y^n, p)) + H(α_j(y^n, p))] (6), as defined in Witten and Frank (2005), to capture this redundancy.", "We are interested in attention heads with large information flow values, but find empirically that the heads with the largest information flow values depend very little on the input and simply function as all-pass layers.", "Thus, we downselect to a single attention head information flow value to obtain f_MIMA(y^n, p) = 1 - median_layer(max_head[I_flow(y^n, p)]) (7).", "Here, the max over a given layer's attention heads captures the largest spark of interest.", "The median removes outlier layers that have largely invariant information flow values.", "MIMA provides us with a foundation for computing the fluency of input text.", "We divide fluency into two categories: grammar and style.", "Grammar depends on the typicality of the sequence as a whole, f_MIMA, and is computed using the distilled BERT model, since it achieves the highest Pearson correlation in the grammar experiment from Table 1.", "Style depends on the distinctness, or atypicality, of the words directly associated with the image description, which we evaluate by removing the stop words from the text and then computing what we define as SPURTS: SPURTS = 1 - f_MIMA(y_w/o, p) (8), where y_w/o is the candidate sequence without stop words and f_MIMA is computed using the distilled RoBERTa model, since it performs well on out-of-distribution text, as shown in Figure 5.", "We formulate semantic similarity using typicality as well.", "Assuming a comprehensive set of all valid captions for a single image were available, we consider the distribution of all concepts, S.", "Here we define concepts as the set of stem terms that would remain if all stop words and affixes/suffixes were removed from the text.", "The distribution of concepts sampled from such a set of captions, S^m, would have a typical set of the most relevant concepts.", "Thus, a valid caption that is representative of the image semantically and demonstrates fluency should contain concepts that are members of the typical set of concepts and be a member of the typical set of correctly formed language sequences defined in Section 3.2, A_ε^n, as shown in Figure 4.",
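Eqs. 6-8 then compose into a short scoring routine. In practice the attention tensors would come from a pretrained transformer (e.g., requested via output_attentions=True in the HuggingFace transformers library); here they are taken as given, already normalized to joint distributions:

```python
import numpy as np

def normalized_mi(joint, eps=1e-12):
    """Eq. 6: NMI = 2 [H(a_i) + H(a_j) - H(a_ij)] / [H(a_i) + H(a_j)]."""
    h = lambda p: float(-(p * np.log2(p + eps)).sum())
    h_i, h_j = h(joint.sum(axis=1)), h(joint.sum(axis=0))
    return 2 * (h_i + h_j - h(joint.reshape(-1))) / (h_i + h_j)

def f_mima(attn_per_layer):
    """Eq. 7: attn_per_layer is a list over layers of (H, L, L) arrays, where each
    head's (L, L) attention matrix has been normalized to sum to 1."""
    flows = [max(normalized_mi(head) for head in layer) for layer in attn_per_layer]
    return 1.0 - float(np.median(flows))

def spurts(attn_per_layer_no_stopwords):
    """Eq. 8: f_MIMA on the stop-word-free candidate, so that more distinctive
    (atypical) wording scores higher."""
    return 1.0 - f_mima(attn_per_layer_no_stopwords)
```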
"To extract concepts from a caption, we use a stemmer on y_w/s and estimate the typicality of each reference concept using the document frequency, df, of the concept across the available reference captions, gt(S), where gt is the function that maps concepts to a reference caption set.", "We then use an adjusted F1 score to determine the similarity between the reference concepts and the candidate concepts.", "The first portion of the F1 score is precision, corresponding to caption correctness.", "Our adjusted precision is P(C, S) = Σ_i [df_gt(S)(C_i) / |gt(S)|] / Σ_i ([df_gt(S)(C_i) / |gt(S)|] + 1[df_gt(S)(C_i) = 0]) (9), where C is the candidate concept set and gt(S) is the reference caption set.", "Our approach equally weights correct and incorrect concepts if only one reference is used but, as the number increases, gradually decreases the importance of less common correct concepts.", "The second portion of the F1 score is recall, corresponding to caption detail.", "Our adjusted recall is R(C, S) = Σ_i df_gt(S)(C_i) / Σ_i df_gt(S)(S_i) (10), where a candidate concept set, C, which included all concepts from the reference set, S, would achieve a score of 1.", "We then use the standard F1 score combination: SPARCS = F1(C, S) = 2 P(C, S) R(C, S) / (P(C, S) + R(C, S)) (11).", "To give an overall evaluation of performance, we fuse the proposed metrics.", "To begin, we standardize the output score distribution of human-generated captions for each metric using the captions from the COCO Karpathy test split from Figure 1: metric′ = (metric - E[metric(COCO_test)]) / σ(metric(COCO_test)), creating SPARCS′, SPURTS′, and f′_MIMA.", "Utilizing this standardization, we use a threshold, T = -1.96, corresponding to the left tail of a 95% confidence interval, to represent the lower bound of expected human captioning performance.", "We then use T to define a grammatical outlier penalty G = min(f′_MIMA - T, 0) and a style reward D = max(SPURTS′ - T, 0).", "The quantities are combined as follows: SMURF = SPARCS′ + G if SPARCS′ < T, and SMURF = SPARCS′ + D + G otherwise.", "It can be interpreted as applying a semantic threshold and then incorporating the style reward, since style is only beneficial for caption quality if the caption is semantically correct.", "[Figure 5: Degradation iteration example and plot of each model's average f_MIMA value as text degrades.]",
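Eqs. 9-11 and the fusion rule can be written compactly; the stemming step is assumed to have happened already, so the inputs below are concept (stem) sets:

```python
from collections import Counter

def sparcs(candidate, references):
    """Eqs. 9-11: candidate is a set of concept stems; references is a list of
    concept-stem sets, one per reference caption."""
    n_refs = len(references)
    df = Counter(c for ref in references for c in set(ref))   # document frequency
    weights = [df[c] / n_refs for c in candidate]
    num = sum(weights)
    den = sum(w if w > 0 else 1.0 for w in weights)           # Eq. 9 denominator
    P = num / den if den else 0.0
    R = sum(df[c] for c in candidate) / sum(df.values())      # Eq. 10
    return 2 * P * R / (P + R) if P + R else 0.0              # Eq. 11

def smurf(sparcs_z, spurts_z, mima_z, T=-1.96):
    """Fusion rule above, on standardized (z-scored) metric values."""
    G = min(mima_z - T, 0.0)       # grammatical outlier penalty
    D = max(spurts_z - T, 0.0)     # style reward
    return sparcs_z + G if sparcs_z < T else sparcs_z + D + G

print(sparcs({"dog", "run", "park"},
             [{"dog", "run", "field"}, {"dog", "play", "park"}]))   # 0.8
```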
"We first seek to validate that our proposed $f_{MIMA}$, extracted from the attention layers of BERT, RoBERTa, and their knowledge-distilled versions, is proportional to the distance from the expected value of features of the typical set.", "To this end, we create an experiment where we can control the randomness of input text.", "We begin with 11 different paragraphs from unrelated Wikipedia articles.", "We extract all the words from the paragraphs and create a word set corpus.", "We then sample 25 sentences from the paragraphs randomly.", "Each sentence is iteratively degraded by substituting a fraction of the words with random words from the word set corpus.", "At each iteration step, the sentences are passed through the transformers and the value of $f_{MIMA}$ is computed.", "Eventually the sentence is incoherent and bears no resemblance to natural text.", "The process and results can be seen in Figure 5.", "The average $f_{MIMA}$ value for our information flow formulation shows a strong correlation with the degradation in both models up until about 10% of the tokens have been replaced, beyond which RoBERTa remains reliable but BERT does not, demonstrating RoBERTa's superior robustness.", "CoNLL-2014 The CoNLL-2014 competition (Ng et al., 2014) was a shared task of correcting grammatical errors of all types present in different sentences of an essay written by a learner of English as a second language.", "The essay consisted of 1312 separate sections to correct.", "A system-level human evaluation study of the grammatical quality of the corrected sentences from 12 competition submissions was presented in Grundkiewicz et al. (2015).", "Participants were asked to rate how natural the corrected sentences sounded and did not have access to any reference sentence.", "Microsoft COCO 2014 We use the Microsoft COCO validation set (Chen et al., 2015), comprising 40,504 images, for a system-level human correlation experiment.", "These images are annotated with five human-generated captions, one of which is used as a baseline caption candidate.", "Human evaluations of competition entries were collected using Amazon Mechanical Turk (AMT).", "These evaluations were framed as questions from which two primary dimensions of system-level caption quality were derived as a ground truth to rank competitors: M1 (percentage better than or equal to human description) and M2 (percentage passing the Turing Test).", "Three additional categories were also included as an experimental ablation study but were not considered in the final competition ranking.", "In total, 255,000 evaluations were collected.", "Flickr 8K We use the graded human quality scores for the 5,822 remapped captions from the Flickr 8k dataset (Hodosh et al., 2013) for a caption-level semantic human correlation study.", "The dataset was formed by selecting captions from one image and assigning them to another.", "These captions are then graded based on how well they align with the image using two different standards.", "The first standard is Expert Annotation, where human experts rate the image-caption pairing on a scale of 1 (caption and image unrelated) to 4 (caption describes image with no errors).", "Each caption-image pairing has 3 scores, which we combine by taking the average.", "The second standard is Crowd Flower Annotation, where at least 3 students vote yes or no on whether the caption and image are aligned.", "Composite Dataset An additional dataset for caption-level study of semantic human correlation, from Aditya et al. (2018).", "It contains 11,095 human judgments (on a scale of 1-5) over Flickr 8K, Flickr 30K (Young et al., 2014), and COCO and, in contrast to the Flickr 8K dataset, includes machine-generated captions in addition to human reference captions as candidates.", "Each evaluation is based purely on either correctness or detailedness.", "PASCAL-50S Human evaluators were asked to identify which of two sentences, B or C, is more similar to reference sentence A. Unlike other caption datasets, human evaluators in PASCAL-50S (Vedantam et al., 2015) did not have access to the original image.",
"The captions for sentence A were sourced from a 1000-image subset of the UIUC PASCAL Sentence Dataset (Rashtchian et al., 2010) for which additional human captions were collected using AMT.", "Sentences B and C were sourced from both human- and machine-generated captions.", "The human captions were sourced from the original PASCAL dataset, resulting in four different pairing combinations: human-correct (HC), human-incorrect (HI), human-model (HM), and model-model (MM).", "System-level experiments evaluate how closely human evaluation and automatic evaluation models align in terms of their overall evaluation of captioning models.", "To confirm that $f_{MIMA}$ can capture grammar information, we replicate the experiment performed in Napoles et al. (2016) and show improved performance over previous benchmarks in Table 1.", "GLEU (Napoles et al., 2015), I-measure (Felice and Briscoe, 2015), and $M^2$ (Dahlmeier and Ng, 2012) are reference-based, while their proposed ER, LT, and LFM are referenceless and based on linguistic features, like $f_{MIMA}$.", "We then benchmark our proposed caption evaluation metrics against the rule-based metrics used in the Microsoft COCO 2015 Captioning Competition, which still serve as the standard for caption evaluation, and the recall-idf configuration of BERTScore.", "We observe that the original COCO submissions and many of the original codebases for the submissions are not publicly available or do not provide pretrained models.", "Other authors attempt to reproduce the submissions using open-source reimplementations that they have trained themselves, which will not be consistent with the submissions for which the human evaluations were performed.", "Thus, we instead opt to use the 4 representative baseline caption sets (Vinyals et al., 2015; Xu et al., 2015; Karpathy and Fei-Fei, 2015) provided publicly by Cui et al. (2018), which include 3 competition submissions from open-sourced models and 1 human caption baseline.", "These are guaranteed to be consistent with their work and reproducible.", "In Table 2, we show the COCO results for SPARCS, SPURTS, and SMURF.", "SMURF and BERTScore demonstrate the highest correlation with human judgment in this dataset.", "BERTScore's performance is partially due to its incorporation of idf dataset priors, also used by CIDEr, which we do not utilize in order to keep our metrics as general and consistent as possible.", "To illustrate this point, we also report BERTScore's correlation without idf weighting (BS-w/oidf) for this experiment.", "Despite its simplicity, SPARCS also performs well, along with SPURTS.", "The rest of the metrics fail to adequately reflect human judgment.", "Caption-level experiments evaluate how closely human evaluation and automatic evaluation models align for each individual caption.", "We begin with the Pascal-50S dataset in Table 3.", "We follow the procedure used in Anderson et al. (2016) and use the first 5 sentence A entries of each image.",
"The Pascal-50S dataset is based on a direct comparison between the reference and candidate captions, which gives similarity-based metrics a distinct advantage.", "As a result, SPARCS achieves the top score in this experiment.", "Another interesting result is the fact that SPURTS performs reasonably well in the human-machine category despite having no access to the reference sentence.", "This shows SPURTS's effectiveness as a Turing Test at both the system and caption level, independent of semantic information.", "The additional information provided by SPURTS to SMURF in the human-machine category actually improves its performance.", "To evaluate our semantic metric specifically, we use the Flickr 8K and Composite datasets and follow the experiments specified in Anderson et al. (2016).", "However, we have discovered a flaw in previous comparisons between the correlation of automatic evaluation metrics with expert evaluation and inter-human correlation using the Flickr 8k dataset.", "Only a small subset of annotations overlaps between the Crowd Flower and Expert Annotations, and it often consists of ties, causing the ranking metric to fail.", "To give a fair comparison, we also test the automatic metrics on a tie-free subset of the Flickr 8k data and use these results for human comparison.", "All of these results can be seen in Table 4.", "SPARCS outperforms other metrics on the Flickr 8k dataset.", "However, SPICE outperforms SPARCS on the Composite dataset.", "This is likely due to the fact that evaluations of correctness in the Composite dataset are based on semantic propositions and do not consider partial correctness.", "Additionally, these new results show that automatic metrics can actually outperform voting-based human metrics in terms of their correlation with experts, further motivating their use.", "This warrants further study, as some recent datasets opt to use voting-based human metrics due to their ease of collection (Levinboim et al., 2021).", "We perform a caption-level generalizability and robustness case study on the most commonly used caption evaluation algorithms using the COCO validation set in Table 5.", "We define a critical failure, F, as a disparity of greater than 1 between the system-level human (M2) and caption-level algorithm correlation of a reference evaluation metric and a tested evaluation metric for a given caption set of an image.", "The last column of Table 5 shows the likelihood of a critical failure occurring for each metric.", "In a human study, we identify the primary cause of critical failure in the 20 most severe discrepancies in order to identify potential areas for improvement for each metric.", "We use SMURF as a reference evaluator for the other evaluators and SPICE as a reference for SMURF.", "The estimated probability of each of these failure causes is shown in the first three columns of Table 5.",
"The first failure cause, $c_1$, refers to a scenario where the metric fails despite there being enough word overlap between the candidate and reference captions for a correct judgment to be made.", "This implies that the choice of words/sequences made by the metric for the comparison needs improvement.", "The second failure cause, $c_2$, refers to the use of correct and distinct words or phrases by the human captioner that are not seen in the references.", "Lastly, we include the case where the reference evaluator may have incorrectly identified the correct caption ranking (according to the human annotator) as matching system-level human judgment.", "We refer to this as a reference failure, RF.", "The focus of previous studies has been robustness to distractors (Sharif et al., 2019; Cui et al., 2018; Hodosh and Hockenmaier, 2016).", "We observe no captions where this is a primary cause of failure.", "On the contrary, we find that each metric is highly susceptible to specific $c_1$ scenarios: n-gram based: Both CIDEr and METEOR are sensitive to stopwords, leading to rewards for words or sequences that supply no additional information.", "SPICE: Semantic proposal formation or sentence parsing issues can lead to the metric unpredictably failing to recognize highly informative proposals.", "SMURF: The metric may fail to adequately reward additional information if the words used are too common, like 'few' or 'some'.", "In this paper, we use information-theory-based typicality analysis to capture a new perspective on the problem of caption evaluation.", "Our analysis leads us to two caption evaluation metrics that capture separate dimensions of caption quality, and a fused metric.", "We have performed experiments demonstrating their correlation with human judgment, showed how these methods could be used to perform multi-aspect system-level analysis of algorithm performance, and performed caption-level studies explaining why combining these two algorithms leads to more robust and generalizable evaluations.", "The underlying mechanism, MIMA, opens many new avenues for the analysis of self-attention transformers and potentially other models.", "Future work could also focus on optimal weighting between semantics and style.", "Harmful bias, especially towards gender (Hendricks et al., 2018), has been shown to be present in image caption datasets and is often further magnified by automatic captioners.", "Prior caption evaluation methods have the potential to further exacerbate the problem by rewarding such captions due to their reliance on dataset-specific images or captions.", "Referenceless evaluations like our style metric, SPURTS, offer a preemptive approach for mitigating harmful dataset bias, like in Simpson's Paradox (Mehrabi et al., 2019), by utilizing intrinsic properties of descriptive language learned by self-attention models over far larger and more diverse corpora.", "This gives the evaluator a more holistic view of caption quality rather than viewing the world through the lens of a single visual dataset.", "The authors acknowledge support from the NSF Project VR-K #1750082, the DARPA KAIROS program (LESTAT project), and the anonymous reviewers for their insightful discussion.", "Any opinions, findings, and conclusions in this publication are those of the authors and do not necessarily reflect the view of the funding agencies." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other" ]
[ "Named entity recognition (NER) is a well-studied task in natural language processing.", "Traditional NER research only deals with flat entities and ignores nested entities.", "The span-based methods treat entity recognition as a span classification task.", "Although these methods have the innate ability to handle nested NER, they suffer from high computational cost, ignorance of boundary information, under-utilization of the spans that partially match with entities, and difficulties in long entity recognition.", "To tackle these issues, we propose a two-stage entity identifier.", "First we generate span proposals by filtering and boundary regression on the seed spans to locate the entities, and then label the boundary-adjusted span proposals with the corresponding categories.", "Our method effectively utilizes the boundary information of entities and partially matched spans during training.", "Through boundary regression, entities of any length can theoretically be covered, which improves the ability to recognize long entities.", "In addition, many low-quality seed spans are filtered out in the first stage, which reduces the time complexity of inference.", "Experiments on nested NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models.", "Named entity recognition (NER) is a fundamental task in natural language processing, focusing on identifying the spans of text that refer to entities.", "NER is widely used in downstream tasks, such as entity linking (Ganea and Hofmann, 2017; Le and Titov, 2018) and relation extraction (Li and Ji, 2014; Miwa and Bansal, 2016).", "Traditional sequence labeling models for NER assign a single label to each token in a sentence.", "Such models lack the ability to identify nested named entities.", "Various approaches for nested NER have been proposed in recent years.", "Some works revised sequence models to support nested entities using different strategies (Alex et al., 2007; Ju et al., 2018; Strakova et al., 2019; Wang et al., 2020a), and some works adopt hypergraphs to capture all possible entity mentions in a sentence (Lu and Roth, 2015; Katiyar and Cardie, 2018).", "We focus on the span-based methods (Sohrab and Miwa, 2018; Zheng et al., 2019; Tan et al., 2020), which treat named entity recognition as a classification task on a span, with the innate ability to recognize nested named entities.", "For example, Sohrab and Miwa (2018) exhaust all possible spans in a text sequence and then predict their categories.", "However, these methods suffer from some serious weaknesses.", "First, due to numerous low-quality candidate spans, these methods require high computational costs.", "Second, it is hard to identify long entities because the lengths of the spans enumerated during training are bounded.", "Next, boundary information is not fully utilized, although it is important for the model to locate entities.", "Although some methods (Zheng et al., 2019; Tan et al., 2020) have used a sequence labeling model to predict boundaries, without dynamic adjustment the boundary information is still not fully utilized.", "Finally, the spans which partially match with entities are not effectively utilized.", "These methods simply treat the partially matched spans as negative examples, which can introduce noise into the model.", "Different from the above studies, we observed that NER and object detection tasks in computer vision have a high degree of consistency.", "They both need to locate regions of interest (ROIs) in the context (image/text) and then assign corresponding categories to them.", "Furthermore, both flat NER and nested NER have corresponding structures in the object detection task, as shown in Figure 1.",
"For the flat structure, there is no overlap between entities or between objects.", "For nested structures, fine-grained entities are nested inside coarse-grained entities, and, correspondingly, small objects are nested inside large objects.", "In computer vision, two-stage object detectors (Girshick et al., 2014; Girshick, 2015; Ren et al., 2017; Dai et al., 2016; He et al., 2017; Cai and Vasconcelos, 2018) are the most popular object detection algorithms.", "They divide the detection task into two stages, first generating candidate regions, and then classifying and fine-tuning the positions of the candidate regions.", "Inspired by these, we propose a two-stage entity identifier and treat NER as a joint task of boundary regression and span classification to address the weaknesses mentioned above.", "In the first stage, we design a span proposal module, which contains two components: a filter and a regressor.", "The filter divides the seed spans into contextual spans and span proposals, and filters out the former to reduce the candidate spans.", "The regressor locates entities by adjusting the boundaries of span proposals to improve the quality of candidate spans.", "Then in the second stage, we use an entity classifier to label entity categories for the number-reduced and quality-improved span proposals.", "During training, to better utilize the spans that partially match with the entities, we construct soft examples by weighting the loss of the model based on the IoU.", "In addition, we apply the soft non-maximum suppression (Soft-NMS) (Bodla et al., 2017) algorithm to entity decoding to drop the false positives.", "Our main contributions are as follows: Inspired by the two-stage detectors popular in object detection, we propose a novel two-stage identifier for NER that locates entities first and labels them later.", "We treat NER as a joint task of boundary regression and span classification.", "We make effective use of boundary information.", "Taking the identification of entity boundaries a step further, our model can adjust the boundaries to accurately locate entities.", "When training the boundary regressor, in addition to the boundary-level Smooth L1 loss, we also use a span-level loss, which measures the overlap between two spans.", "During training, instead of simply treating the partially matched spans as negative examples, we construct soft examples based on the IoU.", "This not only alleviates the imbalance between positive and negative examples, but also effectively utilizes the spans which partially match with the ground-truth entities.", "Experiments show that our model achieves state-of-the-art performance consistently on the KBP17, ACE04 and ACE05 datasets, and outperforms several competing baseline models on F1-score by +3.08% on KBP17, +0.71% on ACE04 and +1.27% on ACE05.", "Figure 2 illustrates an overview of the model structure.", "We first obtain the word representations through the encoder and generate seed spans.", "Among these seed spans, some with higher overlap with the entities are the proposal spans, and others with lower overlap are the contextual spans.", "In the span proposal module, we use a filter to keep the proposal spans and drop the contextual spans.", "Meanwhile, a regressor regresses the boundary of each span to locate the left and right boundaries of entities.", "Next, we adjust the boundaries of the span proposals based on the output of the regressor, and then feed them into the entity classifier module.",
"Finally, the entity decoder decodes the entities using the Soft-NMS algorithm.", "We will cover our model in the following sections.", "Considering the $i$-th word in a sentence with $n$ words, we represent it by concatenating its word embedding $x^w_i$, contextualized word embedding $x^{lm}_i$,", "part-of-speech (POS) embedding $x^{pos}_i$, and character-level embedding $x^{char}_i$ together.", "The character-level embedding is generated by a BiLSTM module with the same setting as (Ju et al., 2018).", "For the contextualized word embedding, we follow (Yu et al., 2020) to obtain the context-dependent embedding for a target token with one surrounding sentence on each side.", "Then, the concatenation of them is fed into another BiLSTM to obtain the hidden state as the final word representation $h_i \in \mathbb{R}^d$.", "Seed spans are subsequences sampled from a sequence of words.", "By filtering them, adjusting their boundaries, and classifying them, we can extract entities from the sentence.", "Under the constraint of a pre-specified set of lengths, where the maximum does not exceed $L$, we enumerate all possible start and end positions to generate the seed spans.", "We denote the set of seed spans as $B = \{b_0, \ldots, b_K\}$, where $b_i = (st_i, ed_i)$ denotes the $i$-th seed span, $K$ denotes the number of generated seed spans, and $st_i$, $ed_i$ denote the start and end positions of the span, respectively.", "For training the filter and the regressor, we need to assign a corresponding category and regression target to each seed span.", "Specifically, we pair each seed span in $B$ with the ground-truth entity with which the span has the largest IoU.", "The IoU measures the overlap between spans, defined as $\mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|}$, where $A$ and $B$ are two spans.", "Then we divide them into positive and negative spans based on the IoU between the pair.", "The spans whose IoU with the paired ground truth is above the threshold $\tau_1$ are classified as positive examples, and those with IoU less than $\tau_1$ are classified as negative examples.", "For a positive span, we assign it the same category $y$ as the paired ground truth and compute the boundary offset $t$ between them.", "For a negative span, we only assign a NONE label.", "We downsample the negative examples such that the ratio of positive to negative is 1:5.", "The quality of the generated seed spans is variable.", "If we directly input them into the entity classifier, it will lead to a lot of computational waste.", "High-quality spans have higher overlap with entities, while low-quality spans have lower overlap.", "We denote them as span proposals and contextual spans, respectively.", "Our span proposal module consists of two components: a span proposal filter and a boundary regressor.", "The former is used to drop the contextual spans and keep the span proposals, while the latter is used to adjust the boundaries of the span proposals to locate entities.",
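The seed-span generation and IoU-based target assignment described above reduce to a few lines; in this sketch the length set and the first-stage threshold value are illustrative placeholders.

```python
def span_iou(a, b):
    """IoU of two inclusive (start, end) token spans."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    return inter / ((a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter)

def generate_seed_spans(n_tokens, lengths=(1, 2, 3, 5, 7)):
    """Enumerate every span of each pre-specified length."""
    return [(st, st + l - 1) for l in lengths
            for st in range(n_tokens - l + 1)]

def assign_targets(seed_spans, entities, tau1=0.6):
    """Pair each seed span with its highest-IoU entity; spans at or
    above tau1 become positives carrying the entity label and a
    boundary-offset regression target, the rest are labeled NONE."""
    targets = []
    for sp in seed_spans:
        ent, iou = max(((e, span_iou(sp, e["span"])) for e in entities),
                       key=lambda pair: pair[1], default=(None, 0.0))
        if ent is not None and iou >= tau1:
            offset = (ent["span"][0] - sp[0], ent["span"][1] - sp[1])
            targets.append((sp, ent["label"], offset, iou))
        else:
            targets.append((sp, "NONE", (0, 0), iou))
    return targets

ents = [{"span": (2, 4), "label": "PER"}]
print(assign_targets(generate_seed_spans(8), ents)[:4])
```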
"Span Proposal Filter For the seed span $b_i(st_i, ed_i)$, we concatenate the max-pooled span representation $h^p_i$ with the inner boundary word representations $(h_{st_i}, h_{ed_i})$ to obtain the span representation $h^{filter}_i$.", "Based on it, we calculate the probability $p^{filter}_i$ that the span $b_i$ belongs to the span proposals, computed as follows: $h^p_i = \mathrm{MaxPooling}(h_{st_i}, h_{st_i+1}, \ldots, h_{ed_i})$ (1), $h^{filter}_i = [h^p_i; h_{st_i}; h_{ed_i}]$ (2), $p^{filter}_i = \mathrm{Sigmoid}(\mathrm{MLP}(h^{filter}_i))$ (3), where $[;]$ denotes the concatenation operation and MLP consists of two linear layers and a GELU (Hendrycks and Gimpel, 2016) activation function.", "Boundary Regressor Although a span proposal has a high overlap with the entity, it cannot hit the entity exactly.", "We design another boundary regression branch where a regressor locates entities by adjusting the left and right boundaries of the span proposals.", "The boundary regression requires not only the information of the span itself but also the outer boundary words.", "Thus we concatenate the max-pooled span representation $h^p_i$ with the outer boundary word representations $(h_{st_i-1}, h_{ed_i+1})$ to obtain the span representation $h^{reg}_i$.", "Then we calculate the offsets $t_i$ of the left and right boundaries: $h^{reg}_i = [h^p_i; h_{st_i-1}; h_{ed_i+1}]$ (4), $t_i = W_2\,\mathrm{GELU}(W_1 h^{reg}_i + b_1) + b_2$ (5), where $W_1 \in \mathbb{R}^{3d \times d}$, $W_2 \in \mathbb{R}^{d \times 2}$, $b_1 \in \mathbb{R}^d$ and $b_2 \in \mathbb{R}^2$ are learnable parameters.", "With the boundary offsets $t_i$ predicted by the boundary regressor, we adjust the boundaries of the span proposals.", "The adjusted start position $\widetilde{st}_i$ and end position $\widetilde{ed}_i$ of $b_i$ are calculated as follows: $\widetilde{st}_i = \max(0, st_i + \lfloor t^l_i + \frac{1}{2} \rfloor)$ (6), $\widetilde{ed}_i = \min(L - 1, ed_i + \lfloor t^r_i + \frac{1}{2} \rfloor)$ (7), where $t^l_i$ and $t^r_i$ denote the left and right offsets, respectively.",
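A compact PyTorch sketch of the filter and regressor heads in Eqs. (1)-(7) follows; the hidden size, the MLP widths, and the clamping of the right boundary to the sentence length are placeholder choices rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanProposal(nn.Module):
    """Sketch of the filter (Eqs. 1-3) and regressor (Eqs. 4-7)."""
    def __init__(self, d=16):
        super().__init__()
        self.filter_mlp = nn.Sequential(nn.Linear(3 * d, d), nn.GELU(),
                                        nn.Linear(d, 1))
        self.reg_w1 = nn.Linear(3 * d, d)  # W1, b1 in Eq. (5)
        self.reg_w2 = nn.Linear(d, 2)      # W2, b2 -> offsets (t_l, t_r)

    def forward(self, h, st, ed):
        """h: [seq_len, d] word representations; (st, ed): a seed span."""
        pooled = h[st:ed + 1].max(dim=0).values              # Eq. (1)
        h_filter = torch.cat([pooled, h[st], h[ed]])         # Eq. (2): inner words
        p_filter = torch.sigmoid(self.filter_mlp(h_filter))  # Eq. (3)
        # Eq. (4): the regressor looks at the *outer* boundary words
        h_reg = torch.cat([pooled, h[max(st - 1, 0)],
                           h[min(ed + 1, len(h) - 1)]])
        t = self.reg_w2(F.gelu(self.reg_w1(h_reg)))          # Eq. (5)
        # Eqs. (6)-(7): round the offsets and clamp to the sentence
        new_st = max(0, st + int(torch.floor(t[0] + 0.5)))
        new_ed = min(len(h) - 1, ed + int(torch.floor(t[1] + 0.5)))
        return p_filter, (new_st, new_ed)

m = SpanProposal(d=16)
p, span = m(torch.randn(10, 16), 3, 5)
print(float(p), span)
```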
"As in the filter above, we concatenate the max-pooled span representation $\widetilde{h}^p_i$ with the inner boundary word representations $(h_{\widetilde{st}_i}, h_{\widetilde{ed}_i})$.", "Then we perform entity classification: $\widetilde{h}^p_i = \mathrm{MaxPooling}(h_{\widetilde{st}_i}, h_{\widetilde{st}_i+1}, \ldots, h_{\widetilde{ed}_i})$ (8), $h^{cls}_i = [\widetilde{h}^p_i; h_{\widetilde{st}_i}; h_{\widetilde{ed}_i}]$ (9), $p_i = \mathrm{Softmax}(\mathrm{MLP}(h^{cls}_i))$ (10), where MLP consists of two linear layers and a GELU activation function, as in the filter above.", "For training the entity classifier, we need to reassign the categories based on the IoU between the newly adjusted span proposal and the paired ground-truth entity.", "Specifically, if the IoU between a span and its corresponding entity is higher than the threshold $\tau_2$, we assign the span the same category as the entity; otherwise we assign it a NONE category and treat the span as a negative example.", "The spans that partially match with the entities are very important, but previous span-based approaches simply treat them as negative examples.", "Such practice not only fails to take advantage of these spans but also introduces noise into the model.", "We treat partially matched spans as soft examples by weighting each span's loss based on its IoU with the corresponding ground truth.", "For the $i$-th span $b_i$, the weight $w_i$ is calculated as follows: $w_i = \mathrm{IoU}(b_i, e_i)^{\alpha}$ if $\mathrm{IoU}(b_i, e_i) \ge \tau$, and $w_i = (1 - \mathrm{IoU}(b_i, e_i))^{\alpha}$ if $\mathrm{IoU}(b_i, e_i) < \tau$, (11) where $\tau \in \{\tau_1, \tau_2\}$ denotes the IoU threshold used in the first or the second stage and $e_i$ denotes the corresponding ground-truth entity of $b_i$.", "$\alpha$ is a focusing parameter that can smoothly adjust the rate at which partially matched examples are down-weighted.", "We can find that if we set $\alpha = 0$, the above formula degenerates to a hard one.", "Also, if a span does not overlap with any entity or matches exactly with some entity, the loss weight is $w_i = 1$.", "Then, we calculate the losses for the span proposal filter, boundary regressor and entity classifier, respectively.", "For the span proposal filter, we use focal loss (Lin et al., 2017) to solve the imbalance problem: $L_{filter} = -\sum_i \left[ w_i\, \mathbb{I}_{y \neq 0}\, (1 - p^{filter}_i)^{\gamma} \log(p^{filter}_i) + w_i\, \mathbb{I}_{y = 0}\, (p^{filter}_i)^{\gamma} \log(1 - p^{filter}_i) \right]$ (12), where $w_i$ is the weight of the $i$-th example calculated in Equation 11 and $\gamma$ denotes the focusing parameter of the focal loss.", "For the boundary regressor, the loss consists of two components, the Smooth L1 loss at the boundary level and the overlap loss at the span level, calculated as follows: $L_{reg}(t, t^*) = L_{f1} + L_{olp}$ (13), $L_{f1}(t, t^*) = \sum_i \sum_{j \in \{l, r\}} \mathrm{smoothL1}(t^j_i, t^{*j}_i)$ (14), $L_{olp} = \sum_i \left( 1 - \frac{\min(d_i) - \max(e_i)}{\max(d_i) - \min(e_i)} \right)$ (15), where $d_i = \{\widetilde{ed}_i, ed^*_i\}$ and $e_i = \{\widetilde{st}_i, st^*_i\}$.", "$st^*_i$, $ed^*_i$, $t^{*l}_i$ and $t^{*r}_i$ denote the ground-truth left boundary, right boundary, left offset and right offset, respectively.", "For the entity classifier, we simply use the cross-entropy loss: $L_{cls} = \sum_i w_i\, \mathrm{CELoss}(y, p_i)$ (16), where $w_i$ is the weight of the $i$-th example calculated in Equation 11.", "$L = \lambda_1 L_{filter} + \lambda_2 L_{reg} + \lambda_3 L_{cls}$ (17)", "where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the weights of the filter, regressor and classifier losses, respectively.",
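The soft-example weighting of Eq. (11) is a one-liner in practice; the threshold and focusing values below are placeholders.

```python
def soft_weight(iou, tau, alpha=1.0):
    """Eq. (11): loss weight for a span given its IoU with the paired
    entity; tau is the stage's IoU threshold and alpha the focusing
    parameter (both values here are placeholders)."""
    if iou >= tau:
        return iou ** alpha          # soft positives keep an IoU-based weight
    return (1.0 - iou) ** alpha      # low-overlap spans act as soft negatives

# alpha = 0 recovers hard uniform weighting; exact matches (IoU = 1)
# and non-overlapping spans (IoU = 0) both get weight 1
for iou in (0.0, 0.3, 0.7, 1.0):
    print(iou, soft_weight(iou, tau=0.6, alpha=2.0))
```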
"In the model prediction phase, after the above steps, we get the classification probability and boundary offset regression results for each span proposal.", "Based on them, we need to extract all entities in the sentence (i.e., find the exact start and end positions of the entities as well as their corresponding categories).", "We assign the label $y_i = \mathrm{argmax}(p_i)$ to span $s_i$ and use $score_i = \max(p_i)$ as the confidence of span $s_i$ belonging to the $y_i$ category.", "Now for each span proposal, our model has predicted the exact start and end positions, the entity class and the corresponding score, denoted as $s_i = (l_i, r_i, y_i, score_i)$.", "Given the score threshold $\delta$ and the set of span proposals $S = \{s_1, \ldots, s_N\}$, where $N$ denotes the number of span proposals, we use the Soft-NMS (Bodla et al., 2017) algorithm to filter the false positives.", "As shown in Algorithm 1, we traverse the span proposals in the order of their score (the traversal term is denoted as $s_i$) and then adjust the scores of the other span proposals $s_j$ to $f(s_i, s_j)$, which is defined as: $f(s_i, s_j) = score_j \cdot u$ if $\mathrm{IoU}(s_i, s_j) \ge k$, and $f(s_i, s_j) = score_j$ if $\mathrm{IoU}(s_i, s_j) < k$, (18) where $u \in (0, 1)$ denotes the decay coefficient of the score and $k$ denotes the IoU threshold.", "Then we keep all span proposals with a score $> \delta$ as the final extracted entities.",
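Below is a sketch of the Soft-NMS decoding of Eq. (18); decaying every overlapping proposal regardless of its predicted label, and the particular values of u, k, and the score threshold, are our assumptions about Algorithm 1.

```python
def span_iou(a, b):
    """IoU of two inclusive (start, end) token spans."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    return inter / ((a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter)

def soft_nms(proposals, u=0.7, k=0.5, delta=0.5):
    """Traverse proposals by score; decay the scores of overlapping
    proposals by u (Eq. 18); keep everything still above delta."""
    pool = sorted(proposals, key=lambda s: s[3], reverse=True)
    kept = []
    while pool:
        best = pool.pop(0)                       # current highest score
        kept.append(best)
        pool = [(st, ed, y,
                 sc * u if span_iou((st, ed), best[:2]) >= k else sc)
                for st, ed, y, sc in pool]
        pool.sort(key=lambda s: s[3], reverse=True)
    return [s for s in kept if s[3] > delta]

spans = [(2, 4, "PER", 0.9), (2, 5, "PER", 0.8), (7, 9, "ORG", 0.6)]
print(soft_nms(spans))
```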
"To provide empirical evidence for the effectiveness of the proposed model, we conduct our experiments on four nested NER datasets: ACE04 (https://catalog.ldc.upenn.edu/LDC2005T09), ACE05 (https://catalog.ldc.upenn.edu/LDC2006T06), KBP17 (https://catalog.ldc.upenn.edu/LDC2019T02) and GENIA (http://www.geniaproject.org/genia-corpus).", "Please refer to Appendix A.1 for statistical information about the datasets.", "ACE 2004 and ACE 2005 (Doddington et al., 2004; Christopher Walker and Maeda, 2006) are two nested datasets, each of which contains 7 entity categories.", "We follow the same setup as previous work (Katiyar and Cardie, 2018; Lin et al., 2019) and split them into train, dev and test sets by 8:1:1.", "KBP17 (Ji et al., 2017) has 5 entity categories, including GPE, ORG, PER, LOC, and FAC.", "We follow Lin et al. (2019) to split all documents into 866/20/167 documents for the train/dev/test sets.", "GENIA (Ohta et al., 2002) is a biology nested named entity dataset and contains five entity types, including DNA, RNA, protein, cell line, and cell type categories.", "Following Yu et al. (2020), we use a 90%/10% train/test split.", "We use strict evaluation metrics: an entity is confirmed correct when the entity boundary and the entity label are correct simultaneously.", "We employ precision, recall and F1-score to evaluate the performance.", "In most experiments, we use GloVe (Pennington et al., 2014) and BERT (Devlin et al., 2019) in our encoder.", "For the GENIA dataset, we replace GloVe with BioWordvec (Chiu et al., 2016) and BERT with BioBERT (Lee et al., 2019).", "The dimensions for $x^w_i$, $x^{lm}_i$, $x^{pos}_i$, $x^{char}_i$ and $h_i$ are 100, 1024, 50, 50 and 1024, respectively.", "For all datasets, we train our model for 35 epochs and use the Adam optimizer with a linear warmup-decay learning rate schedule, and a dropout before the filter, regressor and entity classifier with a rate of 0.5.", "See Appendix A for more detailed parameter settings and the baseline models we compared.", "Our code is available at https://github.com/tricktreat/locate-and-label.", "Table 1 illustrates the performance of the proposed model as well as baselines on ACE04, ACE05, GENIA and KBP17.", "Our model outperforms the state-of-the-art models consistently on three nested NER datasets.", "Specifically, the F1-scores of our model advance previous models by +3.08%, +0.71% and +1.27% on KBP17, ACE04 and ACE05, respectively.", "On GENIA, we achieve comparable performance.", "We analyze the performance on entities of different lengths on ACE04, as shown in Table 2.", "We observe that the model works well on entities whose lengths are not enumerated during training.", "For example, although entities of length 6 are not enumerated while those of length 5 and 7 are, our model can achieve a comparable F1-score for entities of length 6.", "In particular, the entities whose lengths exceed the maximum length (15) enumerated during training are still well recognized.", "This verifies that our model has the ability to identify length-uncovered entities and long entities by boundary regression.", "We also evaluated our model on two flat NER datasets, as shown in Appendix B.", "4.2 Ablation Study We choose the ACE04 and KBP17 datasets to conduct several ablation experiments to elucidate the main components of our proposed model.", "To illustrate the performance of the model on entities of different lengths, we divide the entities into three groups according to their lengths.", "The results are shown in Table 3.",
"Firstly, we observe that the boundary regressor is very effective for the identification of long entities.", "Lack of the boundary regressor leads to a decrease in F1-score for long entities ($L \ge 10$) on ACE04 by 36.73% and on KBP17 by 30.54%.", "Then, compared with the w/o filter setting, the F1-scores of our full model on the two datasets improve by 0.52% and 0.75%, respectively.", "In addition, the experimental results also demonstrate that the soft examples we constructed are effective.", "This allows the model to take full advantage of the information of partially matched spans in training, improving the F1-score by 0.87% on ACE04 and 0.16% on KBP17.", "However, Soft-NMS plays a limited role and improves the model performance only a little.", "We believe that text is sparse data compared to images and the number of false positives predicted by our model is quite small, so the Soft-NMS can hardly perform the role of a filter.", "Theoretically, the number of possible spans of a sentence of length $N$ is $\frac{N(N+1)}{2}$.", "Previous span-based methods need to classify almost all spans into corresponding categories, which leads to a high computational cost with $O(cN^2)$ time complexity, where $c$ is the number of categories.", "The words in a sentence can be divided into two categories: contextual words and entity words.", "Traditional approaches waste a lot of computation on the spans composed of contextual words.", "However, our approach retains only the span proposals containing entity words via the filter, and the time complexity is $O(N^2)$.", "Although in the worst case the model keeps all seed spans, generating $\frac{N(N+1)}{2}$ span proposals, we observe that in practice we generate approximately three times as many span proposals as there are entities.", "Assuming that the number of entities in the sentence is $k$, the total time complexity of our model is $O(N^2 + ck)$, where $k \ll N^2$.", "Examples of model predictions are shown in Table 4.",
"The first line illustrates that our model can recognize entities with multi-level nested structures.", "We can see that the three nested entities from inside to outside are united nations secretary general kofi annan, united nations secretary general and united nations, all of which can be accurately recognized by our model.", "The second line illustrates that our model can recognize long entities well, although it was trained without seed spans of the same length.", "The long entity Aceh, which is rich in oil and gas and has a population of about 4.1 million people, with a length of 20, exceeds the maximum length of the generated seed spans, but can still be correctly located and classified.", "However, our model has difficulties in resolving ambiguous entity references.", "As shown in the third line, our model incorrectly classifies the reference phrase both sides, which refers to ORG, into the PER category.", "NER is usually modeled as a sequence labeling task, and a sequence model (e.g., LSTM-CRF (Huang et al., 2015)) is employed to output the sequence of labels with maximum probability.", "However, traditional sequence labeling models cannot handle nested structures because they can only assign one label to each token.", "In recent years, several approaches have been proposed to solve the nested named entity recognition task, mainly including tagging-based (Alex et al., 2007; Wang et al., 2020a), hypergraph-based (Muis and Lu, 2017; Katiyar and Cardie, 2018), and span-based (Sohrab and Miwa, 2018; Zheng et al., 2019) approaches.", "The tagging-based nested NER models transform the nested NER task into a special sequential tagging task by designing a suitable tagging schema.", "Layered-CRF (Alex et al., 2007) dynamically stacks flat NER layers to identify entities from inner to outer.", "Pyramid (Wang et al., 2020a) designs a pyramid-structured tagging framework that uses CNN networks to identify entities from the bottom up.", "The hypergraph-based models construct the hypergraph according to the structure of nested NER and decode the nested entities on the hypergraph.", "Lu and Roth (2015) were the first to propose the use of mention hypergraphs to solve the overlapping mention recognition problem.", "Katiyar and Cardie (2018) proposed a hypergraph representation for the nested NER task and learned the hypergraph structure in a greedy way with LSTM networks.", "The span-based nested NER models first extract the subsequences (spans) in a sequence and then classify these spans.", "The Exhaustive Model (Sohrab and Miwa, 2018) exhausts all possible spans in a text sequence and then predicts their classes.", "Zheng et al. (2019) and Tan et al. (2020) took a sequence labeling model to identify entity boundaries and then predicted the categories of boundary-relevant regions.", "Different from the above methods, some works adopt methods from other tasks.", "For example, Yu et al. (2020) reformulated NER as a structured prediction task and adopted a biaffine model for nested and flat NER.",
"Li et al. (2020b) treated NER as a reading comprehension task and constructed type-specific queries to extract entities from the context.", "Object detection is a computer vision technique that can localize and identify objects in an image.", "With this identification and localization, object detection can determine the exact location of objects while assigning them categories.", "Neural-based object detection algorithms are divided into two main categories: one-stage and two-stage approaches.", "One-stage object detectors densely propose anchor boxes by covering the possible positions, scales, and aspect ratios, and then predict the categories and accurate positions based on them in a single-shot way, such as OverFeat (Sermanet et al., 2013), YOLO (Redmon et al., 2016) and SSD (Liu et al., 2016).", "Two-stage object detectors can be seen as an extension of the dense detectors and have been the most dominant object detection algorithms for many years (Girshick et al., 2014; Girshick, 2015; Ren et al., 2017; Dai et al., 2016; He et al., 2017; Cai and Vasconcelos, 2018).", "They first obtain sparse proposal boxes containing objects from a dense set of region candidates, and then adjust the position and predict a category for each proposal.", "In this paper, we treat NER as a joint task of boundary regression and span classification and propose a two-stage entity identifier.", "First we generate span proposals through a filter and a regressor, then classify them into the corresponding categories.", "Our proposed model can make full use of the boundary information of entities and reduce the computational cost.", "Moreover, by constructing soft samples during training, our model can exploit the spans that partially match with the entities.", "Experiments illustrate that our method achieves state-of-the-art performance on several nested NER datasets.", "For future work, we will combine the named entity recognition and object detection tasks, and try to use a unified framework to address joint identification on multimodal data.", "This work is supported by the Key Research and Development Program of Zhejiang Province, China (No. 2021C01013), the National Key Research and Development Project of China (No. 2018AAA0101900), the Chinese Knowledge Center of Engineering Science and Technology (CKCEST) and the MOE Engineering Research Center of Digital Library." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "result", "method", "method", "objective", "abstain", "abstain", "method", "method", "method", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "result", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "method", "result", "method", "other" ]
[ "In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers that adapts a hierarchical recurrent neural network and a latent topic clustering module.", "With our proposed model, a text is encoded into a vector representation from a word-level to a chunk-level to effectively capture the entire meaning.", "In particular, by adapting the hierarchical structure, our model shows very small performance degradation in longer text comprehension, while other state-of-the-art recurrent neural network models suffer from it.", "Additionally, the latent topic clustering module extracts semantic information from the target samples.", "This clustering module is useful for any text-related tasks by allowing each data sample to find its nearest topic cluster, thus helping the neural network model analyze the entire data.", "We evaluate our models on the Ubuntu Dialogue Corpus and a consumer electronic domain question answering dataset, which is related to Samsung products.", "The proposed model shows state-of-the-art results for ranking question-answer pairs.", "Recently, neural network architectures have shown great success in many machine learning fields such as image classification, speech recognition, machine translation, chat-bots, question answering, and other task-oriented areas.", "Among these, the automatic question answering (QA) task has long been considered a primary objective of artificial intelligence.", "In the commercial sphere, the QA task is usually tackled by using pre-organized knowledge bases and/or by using information retrieval (IR) based methods, which are applied in popular intelligent voice agents such as Siri, Alexa, and Google Assistant (from Apple, Amazon, and Google, respectively).", "Another type of advanced QA system is IBM's Watson, which builds knowledge bases from unstructured data.", "These raw data are also indexed in search clusters to support user queries (Fan et al., 2012; Chu-Carroll et al., 2012).", "In the academic literature, researchers have intensively studied the sentence-pair ranking task, which is a core technique in QA systems.", "The ranking task selects the best answer among candidates retrieved from knowledge bases or IR-based modules.", "Many neural network architectures with end-to-end learning methods have been proposed to address this task (Yin et al., 2016; Wang and Jiang, 2016; Wang et al., 2017).", "These works focus on matching sentence-level text pairs (Wang et al., 2007; Yang et al., 2015; Bowman et al., 2015).", "Therefore, they have limitations in understanding longer texts such as multi-turn dialogues and explanatory documents, resulting in ranking performance degradation as the length of the text becomes longer.", "With the advent of a huge multi-turn dialogue corpus (Lowe et al., 2015), researchers have proposed neural network models to rank longer text pairs (Kadlec et al., 2015; Baudis et al., 2016).", "These techniques are essential for capturing context information in multi-turn conversations or understanding multiple sentences in explanatory text.", "In this paper, we focus on investigating a novel neural network architecture with an additional data clustering module to improve the performance in ranking answer candidates that are longer than a single sentence.", "This work can be used not only for the QA ranking task, but also to evaluate the relevance of a next utterance given a dialogue generated by a dialogue model.", "The key contributions of our work are as follows: First, we introduce a Hierarchical Recurrent Dual Encoder (HRDE) model to effectively calculate the affinity among question-answer pairs to determine the ranking.",
"By encoding texts from a word-level to a chunk-level with a hierarchical architecture, the HRDE prevents performance degradation in understanding longer texts while other state-of-the-art neural network models suffer.", "Second, we propose a Latent Topic Clustering (LTC) module to extract latent information from the target dataset, and apply this additional information in end-to-end training.", "This module allows each data sample to find its nearest topic cluster, thus helping the neural network model analyze the entire data.", "The LTC module can be combined with any neural network as a source of additional information.", "This is a novel approach to using latent topic cluster information for the QA task, especially by applying the combined model of HRDE and LTC to the QA pair ranking task.", "Extensive experiments are conducted to investigate the efficacy and properties of the proposed model.", "Our proposed model outperforms previous state-of-the-art methods on the Ubuntu Dialogue Corpus, which is one of the largest text pair scoring datasets.", "We also evaluate the model on real-world QA data crawled from crowd-QA web pages and from Samsung's official web pages.", "Our model also shows the best results for the QA data when compared to previous neural-network-based models.", "Researchers have released question and answer datasets for research purposes and have proposed various models to solve these datasets.", "(Wang et al., 2007; Yang et al., 2015; Tan et al., 2015) introduced small datasets, such as WikiQA and insuranceQA, to rank sentences that have higher probabilities of answering questions.", "To alleviate the difficulty of aggregating datasets that are large and have no license restrictions, some researchers introduced new datasets for sentence similarity ranking (Baudis et al., 2016; Lowe et al., 2015).", "As of now, the Ubuntu Dialogue dataset is one of the largest corpora openly available for text ranking.", "To tackle the Ubuntu dataset, (Lowe et al., 2015) adopted the term frequency-inverse document frequency approach to capture important words among the context and next utterances (Ramos et al., 2003).", "(Bordes et al., 2014; Yu et al., 2014) proposed deep neural network architectures for embedding sentences and measuring similarities to select an answer sentence for a given question.", "(Kadlec et al., 2015) used a convolutional neural network (CNN) architecture to embed the sentence, while a final output vector was compared to the target text to calculate the matching score.", "They also tried using long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), bidirectional LSTM and an ensemble method with all of those neural network architectures, and achieved the best results on the Ubuntu Dialogue Corpus dataset.", "Another type of neural architecture is the RNN-CNN model, which encodes each token with a recurrent neural network (RNN) and then feeds them to a CNN (Baudis et al., 2016).", "Researchers also introduced attention-based models to improve the performance (Tan et al., 2015; Wang and Jiang, 2016; Wang et al., 2017).", "Recently, the hierarchical recurrent encoder-decoder model was proposed to embed contextual information in user query prediction and dialogue generation tasks (Sordoni et al., 2015; Serban et al., 2016).", "This shows improvement in the dialogue generation model where the context for the utterance is important.", "As another type of neural network architecture, the memory network was proposed by (Sukhbaatar et al., 2015).",
"Several researchers adopted this architecture for reading comprehension (RC) style QA tasks, because it can extract contextual information from each sentence and use it in finding the answer (Xiong et al., 2016; Kumar et al., 2016).", "However, none of this research is applied to the QA pair ranking task directly.", "In this section, we depict a previously released neural text ranking model, and then introduce our proposed neural network model.", "A subset of sequential data is fed into the recurrent neural network (RNN), which leads to the formation of the network's internal hidden state $h_t$ to model the time series patterns.", "This internal hidden state is updated at each time step with the input data $w_t$ and the hidden state of the previous time step $h_{t-1}$ as follows: $h_t = f_\theta(h_{t-1}, w_t)$, (1) where $f$ is the RNN function with weight parameters $\theta$, $h_t$ is the hidden state at the $t$-th word input, and $w_t$ is the $t$-th word in a target question $w^Q = \{w^Q_{1:t_q}\}$ or an answer text $w^A = \{w^A_{1:t_a}\}$.", "The previous RDE model uses two RNNs for encoding the question text and the answer text to calculate the affinity between the texts (Lowe et al., 2015).", "After encoding each part of the data, the affinity between the text pairs is calculated by using the final hidden state value of the question and answer RNNs.", "The matching probability between question text $w^Q$ and answer text $w^A$, together with the training objective, is as follows: $p(\mathrm{label}) = \sigma((h^Q_{t_q})^T M h^A_{t_a} + b)$, $L = -\log \prod_{n=1}^{N} p(\mathrm{label}_n \mid h^Q_{n,t_q}, h^A_{n,t_a})$, (2) where $h^Q_{t_q}$ and $h^A_{t_a}$ are the last hidden states of the question and answer RNNs, with dimensionality $h_t \in \mathbb{R}^d$.", "The matrix $M \in \mathbb{R}^{d \times d}$ and the bias $b$ are learned model parameters.", "$N$ is the total number of samples used in training, and $\sigma$ is the sigmoid function.", "From now on, we explain our proposed model.", "The previous RDE model tries to encode the text of the question or the answer with an RNN architecture.", "It becomes less effective as the length of the word sequences in the text increases because of the RNN's natural characteristic of forgetting information from long-ranging data.", "To address this forgetting phenomenon of RNNs, (Bahdanau et al., 2014) proposed an attention mechanism; however, we found that it still shows a limitation when we consider data with very large sequential lengths, such as the 162-step average in the Ubuntu Dialogue Corpus dataset (see Table 1).", "To overcome this limitation, we designed the HRDE architecture.", "The HRDE model divides long sequential text data into small chunks, such as sentences, and encodes the whole text from the word level to the chunk level by using two hierarchical levels of RNN architecture.", "Figure 1 shows a diagram of the HRDE model.", "The word-level RNN part is responsible for encoding the word sequence $w_c = \{w_{c,1:t}\}$ in each chunk.", "The chunks can be sentences in a paragraph, paragraphs in an essay, turns in a dialogue, or any kind of smaller meaningful subset of the text.", "Then the final hidden states of each chunk are fed into the chunk-level RNN with their original sequence order kept.", "Therefore the chunk-level RNN can deal with pre-encoded chunk data with fewer sequential steps.", "The hidden states of the hierarchical RNNs are as follows: $h_{c,t} = f_\theta(h_{c,t-1}, w_{c,t})$, $u_c = g_\theta(u_{c-1}, h_c)$, (3) where $f$ and $g$ are the RNN functions in the hierarchical architecture with weight parameters $\theta$, and $h_{c,t}$ is the word-level RNN's hidden state at the $t$-th word in the $c$-th chunk.", "$w_{c,t}$ is the $t$-th word in the $c$-th chunk of the target question or answer text.", "$u_c$ is the chunk-level RNN's hidden state at the $c$-th chunk in the sequence, and $h_c$ is the word-level RNN's last hidden state of each chunk, $h_c \in \{h_{1:c,t}\}$.",
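A compact PyTorch sketch of the hierarchical encoding of Eq. (3): a word-level GRU summarizes each chunk and a chunk-level GRU consumes the summaries in order. The sizes and the batch-of-one chunk handling are illustrative simplifications, not the paper's training setup.

```python
import torch
import torch.nn as nn

class HRDEEncoder(nn.Module):
    """Sketch of Eq. (3): word-level GRU f per chunk, chunk-level GRU g."""
    def __init__(self, emb_dim=300, hid=300):
        super().__init__()
        self.word_rnn = nn.GRU(emb_dim, hid, batch_first=True)   # f
        self.chunk_rnn = nn.GRU(hid, hid, batch_first=True)      # g

    def forward(self, chunks):
        """chunks: list of [1, words_in_chunk, emb_dim] tensors."""
        finals = []
        for ch in chunks:
            _, h_c = self.word_rnn(ch)       # last word-level state of chunk
            finals.append(h_c[-1])           # chunk summary, [1, hid]
        seq = torch.stack(finals, dim=1)     # [1, n_chunks, hid], order kept
        _, u = self.chunk_rnn(seq)           # chunk-level final state
        return u[-1].squeeze(0)              # [hid] text representation

enc = HRDEEncoder(emb_dim=8, hid=16)
text = [torch.randn(1, 5, 8), torch.randn(1, 7, 8)]  # two chunks
print(enc(text).shape)  # torch.Size([16])
```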
"We use the same training objective as the RDE model, and the final matching probability between the question and answer text is calculated using the chunk-level RNN as follows: $p(\mathrm{label}) = \sigma((u^Q_{c_q})^T M u^A_{c_a} + b)$, (4) where $u^Q_{c_q}$ and $u^A_{c_a}$ are the chunk-level RNN's last hidden states of the question and answer text, with dimensionality $u_c \in \mathbb{R}^{d_u}$, which involves $M \in \mathbb{R}^{d_u \times d_u}$.", "To learn how to rank QA pairs, a neural network should be trained to find the proper features that represent the information within the data and to fit the model parameters that can approximate the true hypothesis.", "For this type of problem, we propose the LTC module for grouping the target data to help the neural network find the true hypothesis with more information from the topic clusters in end-to-end training.", "The blue dotted box on the right side of Figure 2 shows the LTC structure diagram.", "To assign topic information, we build an internal latent topic memory $m \in \mathbb{R}^{d_m \times K}$, which is the only model parameter to be learned, where $d_m$ is the vector dimension of each latent topic and $K$ is the number of latent topic clusters.", "For a given input sequence $x = \{x_{1:t}\}$ with these $K$ vectors, we construct the LTC process as follows: $p_k = \mathrm{softmax}((x)^T m_k)$, $x^K = \sum_{k=1}^{K} p_k m_k$, $e = \mathrm{concat}\{x, x^K\}$. (5)", "First, the similarity between $x$ and each latent topic vector is calculated by dot product.", "Then the resulting $K$ values are normalized by the softmax function $\mathrm{softmax}(z_k) = e^{z_k} / \sum_i e^{z_i}$ to produce a similarity probability $p_k$.", "After calculating the latent topic probability $p_k$, $x^K$ is retrieved by summing over the $m_k$ weighted by $p_k$.", "Then we concatenate this result with the original encoding vector to generate the final encoding vector $e$ with the LTC information added.", "Note that the input sequence of the LTC could come from any type of neural-network-based encoding function $x = f_{enc}(w)$, such as an RNN, a CNN, or a multilayer perceptron (MLP).", "In addition, if the dimension of $x$ is different from that of the memory vectors, an additional output projection layer should be placed after $x$ before applying the dot product to the memory.", "As the LTC module extracts additional topic cluster information from the input data, we can combine this module with any neural network in its end-to-end training flow.", "In our experiments, we combine the LTC module with the RDE and HRDE models.", "The RDE model encodes the question and answer texts to $h^Q_{t_q}$ and $h^A_{t_a}$, respectively.", "Hence, the LTC module can take these vectors as the input to generate the vector $e$ with latent topic cluster information added.", "With this vector, we calculate the affinity between the question and answer texts together with the additional cluster information.", "The following equation shows our RDE-LTC process: $p(\mathrm{label}) = \sigma((h^Q_{t_q})^T M e^A + b)$. (6)", "In this case, we applied the LTC module only to the answer side, assuming that the answer text is longer than the question.", "Thus, it needs to be clustered.", "To train the network, we use the same training objective, minimizing the cross-entropy loss, as in equation (2).", "The LTC can be combined with the HRDE model in the same way it is applied in the RDE-LTC model, by modifying equation (6) as follows: $p(\mathrm{label}) = \sigma((u^Q_{c_q})^T M e^{u,A} + b)$, (7) where $u^Q_{c_q}$ is the final hidden state vector of the chunk-level RNN for a question input sequence.", "$e^{u,A}$ is the LTC-information-added vector from equation (5), where the LTC module takes the input $x = u^A$ from the HRDE model, equation (3).",
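The LTC read-out of Eq. (5) can likewise be sketched in a few lines of PyTorch; the memory dimension (256) and cluster count (3) mirror the hyper-parameters reported in the experiments section, but the [K, d_m] layout of the memory matrix is our convenience choice.

```python
import torch
import torch.nn as nn

class LatentTopicClustering(nn.Module):
    """Sketch of Eq. (5): dot-product similarity against K latent topic
    vectors, softmax to p_k, weighted read x^K, then concatenation."""
    def __init__(self, d_m=256, K=3):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(K, d_m))  # m: the only learned parameter

    def forward(self, x):
        """x: [batch, d_m] encoded text (project first if sizes differ)."""
        p = torch.softmax(x @ self.memory.t(), dim=-1)   # p_k, [batch, K]
        x_topic = p @ self.memory                        # x^K = sum_k p_k m_k
        return torch.cat([x, x_topic], dim=-1)           # e = concat{x, x^K}

ltc = LatentTopicClustering(d_m=16, K=3)
print(ltc(torch.randn(2, 16)).shape)  # torch.Size([2, 32])
```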
"The $e^{u,A}$ is the LTC-information-added vector from equation (5), where the LTC module takes as input $x = u^A$ from the HRDE model in equation (3).", "The HRDE-LTC model also uses the same training objective, minimizing the cross-entropy loss, as in equation (2).", "Figure 2 shows a diagram of the combined model with the HRDE and the LTC.", "The Ubuntu Dialogue Corpus was developed by Lowe et al. (2015) by expanding and preprocessing the Ubuntu Chat Logs (available from http://irclogs.ubuntu.com), a collection of logs from the Ubuntu-related chat room for solving problems in using the Ubuntu system.", "Among the utterances in the dialogues, they consider each utterance, starting from the third one, as a potential {response}, while the preceding utterances are considered as the {context}.", "The data was processed by extracting ({context}, {response}, flag) tuples from the dialogues.", "We call this original Ubuntu dataset the Ubuntu-v1 dataset.", "After the release of the Ubuntu-v1 dataset, researchers published a v2 version of the dataset.", "Its main updates are separating the train/valid/test sets by time, which mimics a real-life setting where a model is trained on past data to predict future data, and changing the sampling procedure to increase the average number of turns in the {context}.", "We consider this Ubuntu dataset to be one of the best datasets for evaluating the performance of a text ranking model, in terms of its quality, quantity, and availability.", "To encode text with the HRDE and HRDE-LTC models, a text needs to be divided into several chunk sequences according to predefined criteria.", "For the Ubuntu-v1 dataset, we divide the {context} part by splitting on the end-of-sentence delimiter eos, and we do not split the {response} part since it is normally short and does not contain eos information.", "For the Ubuntu-v2 dataset, we split the {context} part in the same way as in the Ubuntu-v1 dataset, except using only the end-of-turn delimiter eot.", "Table 1 shows properties of the Ubuntu dataset.", "Table 2 example answer: (1) from within the clock application, tap the timer tab; (2) tap the hours, minutes, or seconds field and use the on-screen keypad to enter the hour, minute, or seconds (the timer plays an alarm at the end of the countdown); (3) tap start to start the timer; (4) tap stop to stop the timer, or reset to reset the timer and start over; (5) tap restart to resume the timer counter.", "To test the robustness of the proposed model, we introduce an additional question-and-answer pair dataset related to actual users' interactions in the consumer electronics product domain.", "We crawled data from various sources, such as the official Samsung Electronics website (http://www.samsung.com/us) and crowd QA websites (http://answers.yahoo.com and http://answers.us.samsung.com), in a similar way to how Yoon et al. (2016) built a QA system for consumer products.", "From the official web page, we can retrieve data consisting of user questions and matched answers, such as frequently asked questions and troubleshooting guides.", "On the crowd QA sites, there are many answers from various users for each question.", "Among these answers, we choose answers from company-certified users to keep the reliability of the answers high.", "If there are no such answers, we skip that question-answer pair.", "Table 2 shows an example of a question-answer pair crawled from the web page.", "In addition, we crawl hierarchical product category information related to the QA pairs.",
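A sketch of the chunk-splitting preprocessing follows; the literal delimiter strings are assumptions, since the exact tokens in the corpus files are not reproduced here.

```python
# A hedged sketch of chunk splitting for the hierarchical encoder.
# The literal "eos"/"eot" delimiter strings are assumed forms of the
# end-of-sentence / end-of-turn markers described above.
def split_into_chunks(context: str, delimiter: str) -> list:
    """Split a {context} into chunk sequences; drop empty chunks."""
    return [c.strip() for c in context.split(delimiter) if c.strip()]

# Ubuntu-v1 splits the {context} on the end-of-sentence delimiter,
# Ubuntu-v2 on the end-of-turn delimiter; the {response} is left unsplit.
chunks = split_into_chunks("hi there eos try rebooting eos thanks", "eos")
print(chunks)  # ['hi there', 'try rebooting', 'thanks']
```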
"In particular, mobile, office, photo, tv/video, accessories, and home appliance are used as top-level categories, with more specific categories such as galaxy s7, tablet, and led tv below them.", "We collect this meta-information for further use.", "The total size of the Samsung QA data is over 100,000 pairs, and we split the data into approximately 80,000/10,000/10,000 samples to create train/valid/test sets, respectively.", "To create the train set, we use each QA pair sample as a ground truth and perform negative sampling over the answers in the training set to create false-label data.", "In this way, we generate ({question}, {answer}, flag) triples (see Table 1).", "We follow the same procedure to create the valid and test sets, differing only in that we perform more negative sampling within each dataset, generating 9 false-label samples for each ground-truth sample.", "We apply the same method by which the Ubuntu dataset was generated from the Ubuntu Dialogue Corpus, to maintain consistency.", "The Samsung QA dataset is available via a web repository (http://github.com/david-yoon/QA_HRDE_LTC).", "We refer the reader to Appendix A for more examples of each dataset.", "To implement the RDE model, we use two single-layer Gated Recurrent Units (GRUs) (Chung et al., 2014) with 300 hidden units.", "The two GRUs are used to encode the {context} and the {response}, respectively.", "The weights of the two GRUs are shared.", "The hidden-unit weight matrices of the GRUs are initialized with orthogonal weights (Saxe et al., 2013), while the input embedding weight matrix is initialized with 300-dimensional pre-trained GloVe embeddings (Pennington et al., 2014).", "The vocabulary size is 144,953 and 183,045 for the Ubuntu-v1 and v2 cases, respectively.", "We use the Adam optimizer (Kingma and Ba, 2014), with gradients clipped to a norm value of 1.", "The maximum time step for calculating the gradient of the RNN is determined according to the input data statistics in Table 1.", "For the HRDE model, we use two single-layer GRUs with 300 hidden units for the word-level RNN part, and another two single-layer GRUs with 300 hidden units for the chunk-level RNN part.", "The GRU weights are shared within the same hierarchical part, word-level and chunk-level.", "The other settings are the same as in the RDE model.", "For the combined models of the (H)RDE and the LTC, we set the latent topic memory dimension to 256 for both Ubuntu-v1 and Ubuntu-v2.", "The number of clusters in the LTC module is set to 3 for both the RDE-LTC and the HRDE-LTC cases.", "In the HRDE-LTC case, we applied the LTC module to the {context} part because it is longer and thus has enough information to be clustered.", "All of these hyper-parameters were selected through additional parameter-search experiments.", "Dropout (Srivastava et al., 2014) is applied for regularization, with ratios of 0.2 for the RNN in the RDE and the RDE-LTC, 0.3 for the word-level RNN part in the HRDE and the HRDE-LTC, and 0.8 for the latent topic memory in the RDE-LTC and the HRDE-LTC.", "We note that our implementation of the RDE module has the same architecture as the LSTM model (Kadlec et al., 2015) in the Ubuntu-v1/v2 experiments.", "It also has the same architecture as the RNN model (Baudis et al., 2016) in the Ubuntu-v2 experiments.", "We implement the same model ourselves because we need a baseline model to compare with the other proposed models, such as the RDE-LTC, HRDE, and HRDE-LTC.",
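The negative-sampling construction of (question, answer, flag) triples described above can be sketched as follows; the data layout and helper name are illustrative assumptions.

```python
# A minimal sketch of the negative-sampling procedure described above.
import random

def build_triples(qa_pairs, n_negatives=1, seed=0):
    """Turn (question, answer) pairs into (question, answer, flag) triples."""
    rng = random.Random(seed)
    answers = [a for _, a in qa_pairs]
    triples = []
    for q, a in qa_pairs:
        triples.append((q, a, 1))            # ground-truth pair, flag = 1
        for _ in range(n_negatives):         # negatively sampled answers
            neg = rng.choice(answers)
            while neg == a:                  # avoid sampling the true answer
                neg = rng.choice(answers)
            triples.append((q, neg, 0))
    return triples

# n_negatives=1 mirrors the train set; the valid/test sets use n_negatives=9.
pairs = [("how do I reset?", "hold the power button"),
         ("timer won't stop", "tap stop, then reset")]
print(len(build_triples(pairs, n_negatives=1)))  # 4
```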
"To test on the Samsung QA dataset, we use the same implementations of the models (RDE, RDE-LTC, HRDE, and HRDE-LTC) used in testing the Ubuntu dataset.", "The only differences are that we use 100 hidden units for the RDE and the RDE-LTC, 300 hidden units for the HRDE, and 200 hidden units for the HRDE-LTC, with a vocabulary size of 28,848.", "For the combined models of the (H)RDE and LTC, the dimension of the latent topic memory is 64 and the number of latent clusters is 4.", "We chose the best-performing hyper-parameters for each model through additional extensive hyper-parameter search experiments.", "We regard all the tasks as selecting the best answer among text candidates for a given question.", "Following previous work (Lowe et al., 2015), we report model performance as recall at k (R@k) for the relevant texts among 2 or 10 given candidates (e.g., 1 in 2 R@1).", "Though this metric is useful for the ranking task, the R@1 metric is also meaningful for classifying the best relevant text.", "Each model we implement is trained multiple times (10 and 15 times for the Ubuntu and Samsung QA datasets in our experiments, respectively) with random weight initialization, which strongly influences the performance of a neural network model.", "Hence we report model performance as mean and standard deviation values (Mean ± Std).", "As Table 3 shows, our proposed HRDE and HRDE-LTC models achieve the best performance on the Ubuntu-v1 dataset.", "We also find that the RDE-LTC model shows improvements over the baseline model, RDE.", "Table 3 (Ubuntu-v1 results; 1 in 2 R@1, 1 in 10 R@1, 1 in 10 R@2, 1 in 10 R@5): TF-IDF 0.659, 0.410, 0.545, 0.708; CNN 0.848, 0.549, 0.684, 0.896; LSTM 0.901, 0.638, 0.784, 0.949; CompAgg 0.884, 0.631, 0.753, 0.927; BiMPM 0.897, 0.665, 0.786, 0.938.", "For the Ubuntu-v2 dataset, Table 4 reveals that the HRDE-LTC model is best in three cases (1 in 2 R@1, 1 in 10 R@2, and 1 in 10 R@5).", "Comparing the same model between our implementation (RDE) and Baudis et al. (2016)'s implementation (RNN), there is a large gap in accuracy (0.610 and 0.664 for 1 in 10 R@1 for RDE and RNN, respectively).", "We think this is largely influenced by the data preprocessing method, because the only difference between these models is the data preprocessing, which is Baudis et al. (2016)'s contribution to the research.", "We expect our model to perform better on datasets that adopt such extensive preprocessing, because we see improvements from the RDE model to the HRDE model, and additional improvements with the LTC module, in all test cases (Ubuntu-v1/v2 and Samsung QA).", "In the Samsung QA case, Table 5 indicates that the proposed RDE-LTC, HRDE, and HRDE-LTC models show performance improvements compared to the baseline models, TF-IDF and RDE.", "The average accuracies are higher in the Samsung QA case than in the Ubuntu case.", "We think this is due to the smaller vocabulary size and lower context variety.", "The Samsung QA dataset deals with narrower topics than the Ubuntu dataset.", "We conclude that our proposed model shows robustness across several datasets and different vocabulary-size environments.", "To verify the HRDE model's ability compared to the baseline model RDE, we split the test sets of the Ubuntu-v1/v2 datasets based on the number of chunks in the {context}.", "Then, we measured the top-1 recall (the same case as 1 in 10 R@1 in Tables 3 and 4) for each group.",
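The 1-in-N R@k metric can be computed as in this small sketch, assuming each candidate list stores the ground-truth answer's score at index 0.

```python
# A minimal sketch of the 1-in-N R@k metric: for each question, one ground
# truth is ranked among N candidates by model score (truth assumed at index 0).
def recall_at_k(candidate_score_lists, k):
    hits = 0
    for scores in candidate_score_lists:
        truth_score = scores[0]
        rank = 1 + sum(s > truth_score for s in scores[1:])
        hits += rank <= k
    return hits / len(candidate_score_lists)

# Example: R@1 over two questions with three candidates each.
print(recall_at_k([[0.9, 0.1, 0.3], [0.2, 0.8, 0.4]], k=1))  # 0.5
```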
and", "4) for each group.", "Figure 3 demonstrates that the HRDE models, in darker blue and red colors, shows better performance than the RDE models, in lighter colors, for every number of chunks evaluations.", "In particular, the HRDE models are consistent when the number-of-chunks increased, while the RDE models degrade as the number-of-chunks increased.", "We analyze the RDE-LTC model for different numbers of latent clusters.", "Table 6 indicates that the model performances increase as the number of latent clusters increase (until 3 for the Ubuntu and 4 for the Samsung QA case).", "This is probably a major reason for the different number of subjects in each dataset.", "The Samsung QA dataset has an internal category related to the type of consumer electronic products (6 top-level categories ; mobile , office , photo , tv/video , accessories , and home appliance ), so that the LTC module makes clusters these categories .", "The Ubuntu dataset, however, has diverse contents related to issues in using the Ubuntu system.", "Thus, the LTC module has fewer clusters with the sparse topic compared to the Samsung QA dataset.", "We conduct quantitative and qualitative analysis on the HRDE-LTC model for four latent topic clusters.", "The Samsung QA dataset has category Cluster Example 1 How to adjust the brightness on the s**d300 series monitors 2 How do I reject an incoming call on my Samsung Galaxy Note 3?", "information; hence, latent topic clustering results can be compared with real categories .", "We randomly choose 20 k samples containing real category information and evaluate each sample with the HRDE-LTC model.", "The cluster with the highest similarity among the latent topic clusters is considered a representative cluster of each sample.", "Figure 4 shows proportion of four latent clusters among these samples according to real category information.", "Even though the HRDE-LTC model is trained without any ground-truth category labels, we observed that the latent cluster is formed accordingly.", "For instance, cluster 2 is shown mostly in Mobile category samples while clusters 2 and 4 are rarely shown in Home Ap-pliance category samples.", "Additionally, we explore sentences with higher similarity score from the HRDE-LTC module for each four cluster.", "As can be seen in Table 7, cluster 1 contains screen related sentences (e.g., brightness, pixel, display type) while cluster 2 contains sentences with exclusive information re-1582 lated to the Mobile category (e.g., call rejection, voice level).", "This qualitative analysis explains why cluster 2 is shown mostly in the Mobile category in Figure 4.", "We also discover that cluster 3 has the largest portion of samples.", "As cluster 3 contains security and maintenance related sentences (e.g., password, security, log-on, main-tain), we assume that this is one of the frequently asked issues across all categories in the Samsung QA dataset.", "Table 7 shows example sentences with high scores from each cluster.", "In this paper, we proposed the HRDE model and LTC module.", "HRDE showed higher performances in ranking answer candidates and less performance degradations when dealing with longer texts compared to conventional models.", "The LTC module provided additional performance improvements when combined with both RDE and HRDE models, as it added latent topic cluster information according to dataset properties.", "With this proposed model, we achieved state-of-the-art performances in Ubuntu datasets.", "We also evaluated our model in real world 
"This demonstrated the robustness of the proposed model, which achieved the best results.", "K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea.", "This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016M3C4A7952587), and by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No.10073144)." ]
[ "objective", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "other", "abstain", "method", "method", "result", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "abstain", "other", "other" ]
[ "Open Information Extraction (OpenIE), the problem of harvesting triples from natural language text whose predicate relations are not aligned to any pre-defined ontology, has been a popular subject of research for the last decade.", "However, this research has largely ignored the vast quantity of facts available in semi-structured webpages.", "In this paper, we define the problem of OpenIE from semi-structured websites to extract such facts, and present an approach for solving it.", "We also introduce a labeled evaluation dataset to motivate research in this area.", "Given a semi-structured website and a set of seed facts for some relations existing on its pages, we employ a semi-supervised label propagation technique to automatically create training data for the relations present on the site.", "We then use this training data to learn a classifier for relation extraction.", "Experimental results of this method on our new benchmark dataset obtained a precision of over 70%.", "A larger scale extraction experiment on 31 websites in the movie vertical resulted in the extraction of over 2 million triples.", "Knowledge extraction is the problem of extracting (subject, predicate, object) triples from unstructured or semi-structured data, where the subject and object are entities and the predicate indicates the relationship between them.", "In conventional information extraction (which we call ClosedIE), a closed set of potential predicates and their semantics are pre-defined in an ontology.", "Open Information Extraction (OpenIE) is an alternative approach that has no pre-defined ontology and instead represents the predicate with a string extracted from the source data.", "These extractions can capture a much vaster array of semantic relationships than ClosedIE and have been used to support many downstream use-cases, including question-answering, ontology discovery, embedding generation, fact checking, and summarization (Mausam, 2016).", "Previous OpenIE work has concentrated on raw text, with the aim to extract open triples from natural language sentences (Niklaus et al., 2018), with another line of work focused on extracting from webtables (Cafarella et al., 2008).", "Semi-structured websites (e.g. 
IMDb) contain many pages displaying information in stand-alone fields in relatively consistent locations on each page, with entities and the relationships between them indicated via formatting features such as section headers and lists of key-value pairs.", "Figure 1 shows an example page and the triples it conveys.", "Semi-structured websites have recently been shown to be a rich target for IE; the Knowledge Vault large-scale web extraction experiment, which extracted from semi-structured websites, natural language text, webtables, and Semantic Web annotations, found that semi-structured websites contributed 75% of total extracted facts and 94% of high-confidence extractions (Dong et al., 2014); the Ceres system showed that one can automatically extract from semi-structured sites with a precision over 90% using distant supervision (Lockard et al., 2018).", "These works, however, all build on the tradition of semi-structured ClosedIE techniques (Kushmerick et al., 1997; Soderland, 1999; Gulhane et al., 2011; Furche et al., 2014).", "Interestingly, we are not aware of any work that applies OpenIE to semi-structured sources, despite the great potential to identify new relationships and new knowledge triples.", "We investigated 8 movie websites from the Structured Web Data Extraction (SWDE) corpus (Hao et al., 2011) and found that the IMDb ontology can cover only 7% of the semantically unique predicates on these sites.", "The major challenges that distinguish natural language text from semi-structured data are the basic unit and the inherent structure of the data.", "In natural language text, each sentence is a unit; it typically consists of a subject, a verb, and an object, which correspond naturally to the subject, predicate, and object of a knowledge triple.", "Similarly, in webtables, each table is a unit; its rows, columns, and cells also naturally correspond to subjects, predicates, and objects in triples.", "In the semi-structured setting, the basic unit is the webpage, which may contain hundreds or thousands of entity mentions.", "There is no fixed layout between the subject entity, the object entity, and their relation, which may be far apart on the webpage.", "For example, Figure 1 contains object strings that are below, to the left, and to the right of their corresponding predicates; even trickier, for the object string Uma Thurman, the correct predicate string Cast is much farther away than the incorrect one Crew.", "Despite the challenges, semi-structured pages do provide an inherent visual structure that helps distinguish the subject of a page and the (predicate, object) pairs for that subject.", "In this paper we answer the following question: given the semi-structured webpages of a website, how can we tell which field contains the subject, and which fields contain the (predicate, object) pairs for the subject, through visual or DOM-structured clues?", "This paper makes three contributions.", "Our first contribution is to formally define a new problem of OpenIE from semi-structured websites (Section 2).", "We created a benchmark for this problem by enhancing the SWDE corpus (Hao et al., 2011); our benchmark, available at https://homes.cs.washington.edu/~lockardc/expanded_swde.html, contains a high-accuracy and high-coverage set of ground truth extractions for 21 websites spanning three domains, comprising 855,748 labels across 27,641 pages (Section 4).", "Our second contribution is OpenCeres, a solution for OpenIE on semi-structured websites (Section 3).", "Our solution is novel in three aspects.", "First, whereas ClosedIE techniques on semi-structured data focus on extracting objects for given predicates, we also identify predicate strings on the website that represent the relations.",
"Second, while ClosedIE techniques can only learn extraction patterns for predicates where seed knowledge exists, we identify unseen predicates by applying semi-supervised label propagation.", "Third, whereas most existing extraction techniques on semi-structured sites leverage only DOM patterns as evidence, we also use visual aspects of the webpage.", "Figure 1: A cropped portion of the detail page from allmovie.com for the film Tape, with some triples indicated.", "Our final contribution is a comprehensive evaluation on our new benchmark dataset and online websites (Section 5).", "Our proposed method obtained an F1 of 0.68, significantly higher than baseline systems, while extracting 7 times as many predicates as were present in the original ontology.", "In addition, we evaluate on a set of 31 movie websites, yielding 1.17 million extractions at a precision of 0.70.", "Our results inspire new directions for improvement, as discussed in Section 7, and serve as a good baseline for future work.", "We propose the problem of OpenIE from semi-structured websites.", "A semi-structured website W consists of a set of detail pages, each containing information about a particular entity, the topic entity of the page.", "This information is typically populated from an underlying database into an HTML template to create the detail pages.", "The goal of semi-structured OpenIE is to recover all (subject, predicate, object) triples represented in these templatized fields, including the extraction of the string that semantically represents each predicate.", "Relation objects are sometimes present without an explicit predicate string defining the relationship; since OpenIE requires the extraction of a predicate string, we only consider the case where a meaningful predicate string exists on the page.", "Given a semi-structured website W consisting of a set of detail pages, with each page $w_i$ containing facts about a topic entity $t_i$, Semi-Structured OpenIE extracts a set of triples from W such that the subject, predicate, and object of each triple is a string value on a page in W, with the subject representing $t_i$ and the predicate string semantically representing the relation between the subject and object as asserted by the webpage.", "Following the tradition of relation extraction, which considers only binary relationships, we do not consider the extraction of compound value types (CVTs) (Freebase, 2018), which express multi-way relationships.", "In this work, we narrow our focus to Semi-Structured OpenIE within a given domain, since we rely on pre-existing knowledge about that domain to provide seed annotations.", "We leave the extension to the general semi-structured OpenIE problem for future work.", "We first summarize the Ceres techniques proposed in (Lockard et al., 2018), which represent the state of the art for ClosedIE from semi-structured websites.", "Ceres learns a model capable of generalizing across variations within a website, using training labels automatically generated by a distant supervision technique.", "The automatic annotation consists of two steps.", "First, topic annotation annotates the topic name on the page.", "Second, relation annotation annotates each object field, where the relation is guessed as a relationship in the seed ontology that is valid between the topic and the object.", "OpenIE needs to go beyond the existing relations in the ontology, identifying relations not present in the seed knowledge.",
"As such, it raises two challenges for the relation annotation step.", "First, in addition to annotating the objects, we also need to be able to identify the predicate fields in order to extract predicate strings.", "Second, in addition to annotating the predicates already in the seed knowledge, we also need to identify new predicates on a webpage.", "Figure 2 shows the infrastructure of our OpenIE solution, OpenCeres.", "We propose a relation annotation method that is suitable for OpenIE (shown in the shaded blocks), and inherit other components from Ceres (Lockard et al., 2018).", "Our key intuition to solve this problem is that different predicates often share some visual features, such as being aligned vertically or horizontally, sharing the same font, size or color, and so on.", "Thus, if we can identify at least one (predicate, object) pair on the page, we can look for other similarly formatted Figure 2: An overview of our proposed semi-structured OpenIE model learning process.", "pairs of nodes and assume that they also represent (predicate, object) pairs.", "Accordingly, we propose a three-stage process that combines distant supervision and semi-supervised learning.", "1. Predicate-object candidate generation: We first generate potential candidate (predicate, object) pairs, as described in Section 3.1.", "The search for these candidate pairs is quasilinear in the number of DOM nodes, thereby avoiding examination of every pair of DOM nodes.", "2. Distant supervision: We then use a seed knowledge base (KB) to identify instances of (predicate, object) pairs appearing in the seed, where the predicate exists in a pre-defined ontology, as described in Section 3.2.", "For example, for the page in Figure 1, we would hope to identify (Director, Richard Linklater) assuming that fact was in our KB.", "3. 
"3. Semi-supervised label propagation: We perform a semi-supervised label propagation step to identify pairs of nodes that are formatted similarly to the known (predicate, object) pairs, as described in Section 3.3; these new (predicate, object) pairs give us training labels for new predicates.", "For example, in Figure 1, we should identify the pair (Cinematographer, Maryse Alberti) since it is formatted similarly to our seed pair, even though the concept of cinematographer does not exist in our seed ontology.", "We now describe the three key steps we use to generate training labels.", "Recall that for ClosedIE we only need to annotate objects; for OpenIE we need to additionally identify the predicates, which will then allow us to find other predicates not found in the seed KB.", "Our first step is thus to find all potential (predicate, object) pairs on a webpage for further annotation.", "Determining which nodes should be checked as potential predicates for a given object is not trivial.", "On the one hand, there may be hundreds of nodes on the page, so considering all potential pairs of nodes would be computationally expensive.", "On the other hand, consider that a webpage about a movie may contain a long list of actors under the section header Cast; in Euclidean distance, the actor at the bottom of the list may be quite far from the Cast string at the top of the list, so searching only nearby nodes may miss the predicate node.", "To identify potential predicate nodes, we use our intuition that predicate strings should be more common across a website than their related object strings.", "Consider (Language, English) as an example.", "Even though the object string English might be quite common on a site, it should be less frequent than its corresponding predicate string Language.", "According to the site-wide frequency rankings, we consider a (predicate, object) pair a candidate pair only if the predicate appears more frequently than the object (we consider strings consisting only of numeric values as potential objects and not predicates, regardless of their frequency).", "Following this intuition, we start by computing the frequency of each string found across all pages of the website, and create a ranked list.", "Second, for each DOM node, we create candidate pairs consisting of the node as object and the k-nearest higher-ranking nodes as predicate.", "Third, to identify potential predicates that may be farther away but still represent a section header for the region of the page containing the object, we recursively travel up the DOM tree, and at each level we find the k highest-ranked candidate predicates paired with any candidate object in that DOM subtree.", "We create additional candidate pairs pairing those candidate predicates with all candidate objects in that subtree.", "Thus, in total, each candidate object is paired with up to dk candidate predicates, where d is the depth of the DOM.", "As an example, consider the Cast section of Figure 1, with a k value of 1.",
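The candidate-generation step (frequency ranking plus k-nearest higher-ranked nodes) is sketched below. The page/node objects with `text_nodes` and `distance_to` members are hypothetical stand-ins for a real DOM rendering API, and the recursive DOM-subtree step is omitted for brevity.

```python
# A simplified sketch of candidate (predicate, object) pair generation:
# a node may serve as a predicate for an object only if its string is more
# frequent site-wide. The node/DOM API used here is a hypothetical stand-in.
from collections import Counter

def candidate_pairs(pages, k=5):
    # Rank every text string by its frequency across all pages of the site.
    freq = Counter(node.text for page in pages for node in page.text_nodes)
    pairs = []
    for page in pages:
        for obj in page.text_nodes:
            # Candidate predicates: the k nearest nodes whose strings are
            # more frequent than the object's; purely numeric strings are
            # never predicates, per the rule described above.
            preds = [n for n in sorted(page.text_nodes, key=obj.distance_to)
                     if freq[n.text] > freq[obj.text] and not n.text.isdigit()]
            pairs.extend((p, obj) for p in preds[:k])
    return pairs
```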
"Since Cast appears on every page, an initial candidate pair would be created for (Cast, Ethan Hawke).", "If Hawke is mentioned more frequently across the site than the other actors, he would be paired as the potential predicate for their strings, since his name is closer to theirs than Cast is, e.g., (Ethan Hawke, Robert Sean Leonard).", "However, since Hawke and Leonard are in the same <div>, the recursive process would add the candidate pair (Cast, Robert Sean Leonard), since Cast would be the most highly ranked string associated with any object in that section.", "In the second stage, given a webpage, the subject and objects we have identified on the page, and the candidate (predicate, object) pairs, we are ready for distant supervision annotation to generate seed labels, i.e., (predicate, object) pairs for the subject that appear in the seed knowledge.", "We start with the Ceres object identification to generate a list of nodes containing object strings corresponding to KB predicates, and look up (predicate, object) pairs in the candidate list that contain each object node.", "We use lexical clues to filter out a candidate (predicate, object) pair if the predicate name is not semantically similar to the predicate in the ontology.", "There are multiple ways of doing this.", "One way is to compute the cosine similarity of word embeddings (such as FastText (Bojanowski et al., 2017)) representing the predicate string and the ontology predicate name, and filter using a threshold.", "Another way is to manually compile a few terms for each predicate in our ontology, and filter out a predicate if it does not contain any of the terms as a substring.", "Empirically, we found that using a manually compiled list, which takes about a minute per predicate, gives higher precision than using embeddings, though it limits us to the particular language of those terms.", "After the filtering step, we can fairly safely choose the (predicate, object) pair where the predicate is closest to the object in Euclidean distance.", "In the third stage, given the set of (predicate, object) pairs on a webpage generated in the first stage, we aim to follow visual clues to find other (predicate, object) pairs on the same page.", "These new candidate pairs serve as training labels for predicates that may not occur in the seed knowledge.", "We apply semi-supervised learning, which typically relies on a similarity graph in which similar instances are connected by edges, and propagates existing labels to neighboring vertices.", "Our intuition is that (predicate, object) pairs should share similar formatting; we capture this intuition as we construct the graph.", "Graph construction: Each vertex in the similarity graph represents a candidate pair, an edge connecting two vertices indicates that the two candidate pairs are similar, and the edge weight gives the level of similarity.", "We compute similarity between candidate pairs from visual clues (to harvest these features, we render the page using the headless Chrome browser and access element attributes with Selenium, https://www.seleniumhq.org/), creating an edge between two pairs if they have similar predicate formatting and similar object formatting.", "Formatting similarity requires having the same font, font size, font weight, and text alignment, and being either vertically or horizontally aligned.", "We then weight the edges by adding up the similarities of the horizontal, vertical, and DOM relationships between predicate and object.", "Similarity of the DOM relationship is 1 for an exact match and 0 otherwise.",
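The embedding-based variant of the lexical filter might look like the following sketch; `vectors` is any word-vector lookup (e.g., loaded FastText vectors), and the 0.5 threshold is an assumed value, not one reported in the paper.

```python
# A hedged sketch of the lexical filtering step: drop a candidate pair when
# its predicate string is not similar enough to the ontology predicate name.
import numpy as np

def keep_pair(pred_string, ontology_pred, vectors, threshold=0.5):
    """Cosine similarity between predicate string and ontology predicate name."""
    # Single-token strings assumed for simplicity; multi-word predicates would
    # need tokenization and averaging of their word vectors.
    a, b = vectors[pred_string.lower()], vectors[ontology_pred.lower()]
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= threshold
```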
"Similarity of the horizontal relationship is computed by measuring the distance between the predicate and the object in each (pred, obj) pair, and then taking the ratio of the minimum distance to the maximum distance.", "In practice, there are multiple ways to calculate the horizontal distance: left side to left side, left side to right side, right to right, and right to left; the same holds for the vertical distance.", "We calculate each possible ratio and use the one that gives the highest weight; in the case that the ratio is negative (e.g., one pair had the predicate to the left of the object while the other pair had it to the right), we set it to 0.", "We compute the similarity of the vertical relationship in a similar way, giving: $w_{i,j} = \mathbb{1}_{r_d(i) = r_d(j)} + \max\left(0, \frac{\min(r_h(i), r_h(j))}{\max(r_h(i), r_h(j))}\right) + \max\left(0, \frac{\min(r_v(i), r_v(j))}{\max(r_v(i), r_v(j))}\right)$, where $i$ and $j$ are candidate pairs, $r_d$ calculates the DOM path, $r_h$ calculates the horizontal distance between candidate predicate and candidate object, and $r_v$ calculates the vertical distance.", "A sample graph for the webpage in Figure 1 is shown in Figure 3.", "The pair (Director, Richard Linklater) is connected to the pair (Cinematographer, Maryse Alberti) with a weight of 3, since they have identical values for all three relationships, while (Sub-Genres, Psychological Drama) and (Sub-Genres, Reunion Films) have an edge weight of 2.1, since the latter's horizontal distance is ten times greater than the former's.", "To speed up propagation, we keep only the 10 top-weighted edges for each pair.", "On average, on the dataset in Section 4, pages have 1,142 text fields resulting in 2,813 candidate pairs connected by 14,733 edges, far fewer than the 1.3 million candidate pairs (and corresponding increase in edges) that would result from naive pairwise matching.", "Label propagation: We use the MultiRankWalk label propagation algorithm (Lin and Cohen, 2010), which has been shown to be successful in very low-data situations.", "This allows us to propagate even when we have only a single seed label on a page.", "MultiRankWalk adapts the Personalized PageRank (PPR) algorithm to a classification setting, conducting a PageRank run for each class, with the personalization vector set to divide the restart probability equally among the positive labeled examples of the class.", "The PageRank runs are conducted over the weighted graph constructed in the prior step.", "Each unlabeled vertex is assigned to the class whose PageRank run gives it the highest score.", "In our case we have two PPR runs: a positive run for labeled (predicate, object) candidates and a negative run for unlabeled candidates.", "The results of this process are then used as training data to train a supervised Ceres (Lockard et al., 2018) extraction model.",
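A minimal version of the two personalized PageRank runs can be written with networkx, as below; treating every unlabeled vertex as a negative seed follows the description above, while graph construction details are elided.

```python
# A minimal sketch of MultiRankWalk-style label propagation: one PPR run
# seeded on positive pairs, one on all unlabeled pairs, over the weighted
# similarity graph (pagerank uses edge 'weight' attributes by default).
import networkx as nx

def multirankwalk(graph, positive_seeds, alpha=0.85):
    seeds = set(positive_seeds)
    pos = nx.pagerank(graph, alpha=alpha,
                      personalization={n: 1.0 for n in seeds})
    neg = nx.pagerank(graph, alpha=alpha,
                      personalization={n: 1.0 for n in graph if n not in seeds})
    # Assign each candidate pair to the class with the higher PPR score.
    return {n: pos[n] > neg[n] for n in graph}
```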
"The Structured Web Data Extraction (SWDE) dataset has served as a benchmark for semi-structured web extraction, with webpages and ground truth extractions from 10 websites in each of 8 domains (Hao et al., 2011).", "However, the ground truth in SWDE only covers a subset of the predicates found on each site, typically 3-4 predicates per domain.", "We extend it as follows: of the 8 domains, we kept the 4 domains whose page topics are named entities.", "We extended their gold set to include extractions identifying all key-value semi-structured fields on the websites.", "Since not all SWDE websites can still be rendered in the browser (due to missing resources), we eliminated websites that we were unable to successfully render in the Chrome browser, resulting in 30 websites.", "We then attempted to create ground truth via a combination of wrapper induction based on manually-labeled training data (with an extractor implementation based on (Gulhane et al., 2011)), hand-crafted extraction rules, and manual cleanup of remaining errors.", "Eventually, we generated accurate labels for 21 sites in 3 domains.", "This new extended benchmark includes both the extracted object values and the predicate string that accompanies each value on the page.", "The statistics of the augmented dataset are shown in Table 1.", "Table 1 (statistics of the augmented SWDE dataset: domain, site, # predicates, # labels): Movie: AllMovie 65, 104,303; AMCTV 20, 85,916; Hollywood 272, 77,047; iHeartMovies 9, 21,253; IMDb 36, 152,880; Metacritic 18, 43,450; RottenTomatoes 12, 65,524; Yahoo 12, 28,354; University: CollegeProwler 26, 40,707; ECampusTours 17, 18,448; Embark 67, 46,431; MatchCollege 68, 107,763; USNews 22, 21,269; NBAPlayer: ESPN 22, 6,757; FanHouse 16, 6,656; FoxSports 15, 6,157; MSNca 12, 5,208; SI 12, 6,082; Slam 13, 5,453; USAToday 5, 2,178; Yahoo 9, 3,912.", "We enhanced SWDE in two ways.", "First, SWDE on average contains 4,480 triples for 3 predicates from these 21 websites, whereas we have an average of 41K triples for 36 predicates.", "The number of predicates per website ranges from 5 to 272 (Hollywood features very fine-grained relationships like Assistant Art Director).", "Second, when multiple predicate strings may apply on the webpage, we list all of them in order of specificity.", "Taking Figure 1 as an example, we include both Director and Crew for a relation, considering the former to be more specific.", "To our knowledge, this is the first dataset that represents all key-value pairs found in semi-structured web data.", "Datasets: Our primary dataset is the augmented SWDE corpus described in Section 4.", "In addition, we used the set of 31 movie websites (comprising 433,000 webpages) found in CommonCrawl (www.commoncrawl.org) from Lockard et al. (2018); we removed the two sites on which Lockard et al. (2018) reported Ceres made no annotations.", "To generate seed KBs for the distant supervision, we relied on the methodology from Lockard et al. (2018), using the IMDb database for the Movie domain, and using the original SWDE ground truth for the websites CollegeBoard and ESPN to create KBs for the University and NBAPlayer domains, respectively.", "Implementations: We compared OpenCeres with two baselines.", "The three algorithms apply the same method to extract topic subjects but differ in how they extract (predicate, object) pairs.", "1. WEIR: Proposed by Bronzi et al. (2013), the Web Extraction and Integration of Redundant data (WEIR) approach takes as input a set of websites in the same subject domain and makes use of the overlap in observed entities across sites to learn extraction rules for predicates.", "The system is unsupervised, though it does require a dictionary of potential page-topic entities for the domain in order to align pages between sites.", "WEIR also contains a method for automatically identifying predicate strings for the extraction rules it learns, by finding strings that frequently occur near extracted objects in the HTML templates of sites in the domain.",
"2. Colon Baseline: Semi-structured pages frequently represent a (predicate, object) pair via a set of adjacent DOM nodes, with the predicate string ending in a colon and the object string either to its right or below it.", "This baseline starts with the (predicate, object) candidate pairs generated in Section 3.1, identifies those where the predicate field ends with a colon, and extracts each such predicate along with its closest candidate object either to its right or below it.", "3. OpenCeres: This implements our system exactly as described in Section 3, using the generated training data to train a Ceres extractor.", "In addition, to understand the upper bound of OpenCeres, we implemented two versions that use ground truth data for training seeds:", "4. OpenCeres-Gold: This implements our system, but skips the label propagation step and replaces the noisy seed labels (Section 3.2) with samples from the ground truth triples.", "We sampled 25% of the triples for each predicate, so this method is essentially ClosedIE Ceres with incomplete but clean training labels, giving an upper bound on the system's performance when no errors are introduced during training data generation and label propagation.", "5. OpenCeres-GoldProp: This implements OpenCeres-Gold, but adds the label propagation step described in Section 3.3.", "Rather than sampling 25% of ground truth triples from all predicates, we instead sample p% of the ground truth predicates for a site (with p varying from 10 to 100) and then sample 25% of the corresponding triples for each page.", "The process is run five times for each setting of p and the results are averaged.", "Evaluation: Evaluation is tricky for semi-structured OpenIE because a page may contain multiple valid predicates for a relation.", "Recall that the SWDE benchmark data we generated (Section 4) lists all predicate strings that are valid, ranked in order of specificity.", "We thus define two scores for an extracted triple.", "A strict score requires an exact match between the extracted predicate and the most specific predicate string in the ground truth.", "A lenient score counts an extraction as correct if the extracted predicate matches any of the predicate strings in the ground truth.", "For the SWDE dataset, where we have complete ground truth, we compute precision, recall, and F1.", "For the CommonCrawl dataset, where no ground truth exists, we sampled 500 extractions at each confidence threshold (giving a 4% margin of error) and manually scored them; since we cannot calculate true recall, we report precision and yield.", "Overall results: Table 2 shows the precision and recall obtained via lenient scoring.", "Our results show that OpenCeres outperformed both baselines, achieving an average precision of 72% across the three domains, with an average recall of 48%.", "Compared with OpenCeres-Gold on Movie, our precision is 22% lower, while our recall is only 5% lower, showing that our label propagation is fairly effective in preserving recall, though it introduces errors that reduce precision.", "WEIR does not perform as well as ColonBaseline, showing that our (predicate, object) candidate identification technique works well.", "Our recall is a robust 68% in the Movie domain, but is much lower in the other two domains.", "This is because we failed to make any extractions on 3 of the 5 University sites and 2 of the 8 NBA sites, due to the inability to find a predicate string for the seed predicates.", "In some cases no predicate string existed, but in others the string was not in our lexicon.",
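The two scoring rules can be stated precisely in a few lines; the ground-truth lookup keyed by (subject, object) is an assumed data layout for illustration.

```python
# A small sketch of the strict vs. lenient scoring rules defined above.
# ground_truth maps (subject, object) to the list of valid predicate strings,
# ordered by specificity with the most specific first (an assumed layout).
def score_triple(extracted, ground_truth, mode="lenient"):
    subj, pred, obj = extracted
    valid_preds = ground_truth.get((subj, obj), [])
    if mode == "strict":
        # Strict: must match the most specific predicate string exactly.
        return bool(valid_preds) and pred == valid_preds[0]
    return pred in valid_preds  # lenient: any listed predicate counts

gt = {("Tape", "Richard Linklater"): ["Director", "Crew"]}
print(score_triple(("Tape", "Crew", "Richard Linklater"), gt))            # True
print(score_triple(("Tape", "Crew", "Richard Linklater"), gt, "strict"))  # False
```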
our lexicon.", "In fact, if we skip those websites where we extract nothing, our recall increases to 58% for NBA and 44% for University.", "Other recall misses occur when a page has some semi-structured fields that differ significantly in format from those found in our seed ontology, so they were too dissimilar for the label propagation to extend to them.", "Details: We now deep dive to results of OpenCeres, shown in Table", "3. First, our scoring under the strict rules is only slightly lower than under lenient rules, because the case that multiple predicates apply is not common and we are often able to find the most specific ones.", "Across all triples, the overall lenient F1 is 0.68 and strict F1 is 0.61.", "Second, at predicate-level, OpenCeres has an average precision of 74% and recall of 39%, showing that our method attains high precision for the new predicates it identifies.", "Third, through the label propagation technique, we are able to extract an average of 10.5 new predicates for every predicate in our seed ontology.", "A sample of 100 erroneous OpenCeres extractions shows that 33% of errors are due to the presence of CVTs on the page.", "For example, the movie site Rotten Tomatoes contains a Full Review predicate that contains review date, writer, publication, and text; we extracted only the date, which arguably is not useful.", "Considering these extractions as correct will increase the precision to 81%.", "Among the errors, 22% were due to incoherent predicates such as See More, while 20% were due to incorrect extraction of a template string as an object of a correct predicate.", "Label propagation: Figure 4 shows how the label propagation process successfully creates new training examples from a small number of seeds.", "While propagation does introduce some precison errors, when only 10% of predicates are given to OpenCeres-GoldProp as seeds (and only 25% of triples sampled for each predicate), training data recall is already nearly 50%.", "As the percentage of seed predicates rises, the seeds become more likely to capture the full variety of predicate formatting, and recall rises.", "There are a number of reasons why the recall upper bound demonstrated by OpenCeres-Gold (and OpenCeres-GoldProp) is less than perfect.", "First, a small number of relations in the dataset have predicate or object strings that span multiple text fields (particularly in the University vertical); the implemented system can only extract a single text field, so these will be missed.", "Second, the Candidate Pair Identification algorithm has imperfect recall.", "Finally, because only 25% of ground truth triples were used for each page of training data, some true positive examples were sampled as negative examples for training, thereby lowering classification recall.", "Parameter setting: Table 4 shows that Candidate Pair Identification has increasing recall in capturing true candidate pairs in the SWDE-Movie vertical with more neighbors considered, with a tradeoff in increased runtime due to the creation of more pairs; we used k = 5 in our experiments.", "We now report results of ClosedIE and OpenIE extractions on the 31 CommonCrawl websites; the ClosedIE implementation is a subset of the OpenIE system, without the shaded components in", "Figure", "2. 
"Of these 31 websites, we successfully extracted from 22 sites using OpenIE, and failed to extract from 9 sites because of our inability to match their predicate strings to our lexicon for seed predicates (4 sites were in foreign languages while our lexicon is in English, on 3 sites the pages had no predicate strings labeling the seed objects, and 2 sites used terms outside our lexicon).", "Figure 5 shows the precision-yield curve of our ClosedIE and OpenIE extractions as we vary the confidence threshold.", "At a 0.5 confidence threshold, we extracted 2.3M triples at a precision of 0.58, where 1.17M (51%) have new predicates.", "A higher threshold of 0.8 yielded 1.17M extractions at a precision of 0.70, with 50% of extractions representing new predicates.", "The high percentage of extractions with new predicates shows the big promise of our method in enriching existing knowledge bases not only with new entities and new facts, but also with new relationships.", "In unstructured text, OpenIE was originally proposed by Banko et al. (2007), an approach extended by ReVerb (Fader et al., 2011) and Ollie (Mausam et al., 2012), which relied on syntactic constraints to identify relation patterns.", "Our approach is influenced by Wu and Weld (2010), who aligned Wikipedia infobox contents to article text to automatically create training data for an extractor.", "Recent work on neural extraction models (Cui et al., 2018) has explored entirely supervised models learned from a modified version of the QA-SRL dataset (Stanovsky et al., 2018).", "The line of research that has most closely examined the prospect of OpenIE-style extractions using webpage structure is the work on Webtables (Cafarella et al., 2008; Dalvi et al., 2012; Balakrishnan et al., 2015; Cafarella et al., 2018).", "This work specifically examines the identification of subject, predicate, and object strings, but is limited to fields in rows and columns created using HTML <table> tags.", "Extractions from webtables have recently been harnessed as a source of facts for question-answering systems (Pasupat and Liang, 2015; Krishnamurthy et al., 2017).", "In extraction from semi-structured websites, the traditional approach is wrapper induction, in which a rule-learning algorithm is applied to a set of labeled training examples (Kushmerick et al., 1997).", "Work in this line of research has achieved high accuracy from only a few labeled examples, but requires manually-annotated examples for each website (Gulhane et al., 2011).", "To remove this bottleneck, researchers have explored alternative ways to automatically create labeled data and learn models from such potentially noisy labels (Dalvi et al., 2011; Gentile et al., 2015; Furche et al., 2014; Lockard et al., 2018).", "However, these approaches cannot find triples for predicates that are not in the seed ontology.", "The Roadrunner project (Crescenzi et al., 2001) does attempt to identify the objects of all relations represented on a site, but does not extract predicate strings.", "However, the WEIR project (Bronzi et al., 2013) extended this framework with a heuristic that harvests predicate strings based on words found in DOM nodes that form part of the path of the learned XPath extraction rule.", "This is the first work that could truly be considered an OpenIE approach to semi-structured extraction.", "However, as we show in our experiments, the constraints of their heuristic limit the recall of this approach.", "We presented a new problem of Open Information Extraction from
semi-structured websites, and are releasing a new set of over 855,000 ground truth extractions for 21 websites available in the SWDE corpus.", "We also proposed an algorithm for OpenIE that employs semi-supervised label propagation to discover new predicates based on a set of seed predicates in a known ontology.", "This method attained a 68% F1 score in OpenIE extractions on our benchmark.", "In addition, a large-scale evaluation on 31 CommonCrawl movie websites yielded extractions of over two million triples.", "In the future, we would like to improve extraction by training a model to extract (predicate, object) pairs directly without having to train on particular predicates.", "Such a model could potentially be based on visual clues common across websites, so a single model could be applied to many sites.", "We wish to thank Hannaneh Hajishirzi for helpful advice, Paolo Merialdo and Valter Crescenzi for help in running WEIR, and Andrew Bridges, Marc Landers, and Alexander Macdonald for their help in annotating data." ]
[ "abstain", "abstain", "objective", "objective", "method", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "objective", "objective", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "objective", "abstain", "abstain", "result", "abstain", "other" ]
[ "Machine learning has shown promise for automatic detection of Alzheimer's disease (AD) through speech; however, efforts are hampered by a scarcity of data, especially in languages other than English.", "We propose a method to learn a correspondence between independently engineered lexicosyntactic features in two languages, using a large parallel corpus of out-of-domain movie dialogue data.", "We apply it to dementia detection in Mandarin Chinese, and demonstrate that our method outperforms both unilingual and machine translation-based baselines.", "This appears to be the first study that transfers feature domains in detecting cognitive decline.", "Alzheimer's disease (AD) is a neurodegenerative disease affecting 5.7 million people in the US (As-sociation et al., 2018), and is the most common cause of dementia.", "Although no cure yet exists, early detection of AD is crucial for an effective treatment to delay or prepare for its effects (Dubois et al., 2016).", "One of the earliest symptoms of AD is speech impairment, including a difficulty in finding words and changes to grammatical structure (Taler and Phillips, 2008).", "These early signs can be detected by having the patient perform a picture description task, such as the Cookie Theft task from the Boston Diagnostic Aphasia Examination (Goodglass and Kaplan, 1983).", "Previous models have applied machine learning to automatic detection of AD, for example, Fraser et al. (2016) extracted a wide variety of lexicosyntactic and acoustic features to classify AD and obtained 82% accuracy on the DementiaBank (DB) dataset.", "However, clinical studies of AD are expensive, so datasets of patient data are often scarce.", "Noorian et al. (2017) augmented DB with more a much larger corpus of normative data and improved the classification accuracy to 93% on DB.", "Similar linguistic differences between healthy and AD speech have been observed in Mandarin Chinese (Lai et al., 2009), but machine learning has not yet been applied to detecting AD in Mandarin.", "Daume III (2007) proposed a simple way of combining features in different domains, assuming that the same features are extracted in each domain.", "In our case, ensuring consistency of features across domains is challenging because of the grammatical differences between Mandarin and English.", "For example, Mandarin doesn't have determiners or verb tenses, and has classifiers, which don't exist in English (Chao, 1965).", "Another method trains a classifier jointly on multiple domains with different features on each domain, by learning a projection to a common subspace (Duan et al., 2012).", "However, this method only accepts labelled samples in each domain, and cannot make use of unlabelled, out-of-domain data.", "Other work from our broader group (Fraser et al., 2019) combined English and French data by extracting features based on conceptual information units rather than words, thus limiting the effects of multilingual differences.", "In the current work, we train an unsupervised model to detect dementia in Mandarin, requiring only the English DB dataset and a large parallel Mandarin-English corpus of normative dialogue.", "We extract lexicosyntactic features in Mandarin and English using separate pipelines, and use the OpenSubtitles corpus of bilingual parallel movie dialogues to learn a correspondence between the different feature sets.", "We combine this correspondence model with a classifier trained on DB to predict dementia on Mandarin speech.", "To evaluate our system, we apply it to a 
dataset of speech from Mandarin speakers with dementia. Figure 1: Diagram of our model.", "We train two separate models: the first is trained on OpenSubtitles and learns to map Mandarin features to English features; the second is trained on DementiaBank and predicts dementia given English features.", "During evaluation, the two models are combined to predict dementia in Mandarin.", "We demonstrate that our method outperforms several baselines.", "We use the following datasets:", "DementiaBank (Boller and Becker, 2005): a corpus of Cookie Theft picture descriptions, containing 241 narrations from healthy controls and 310 from patients with dementia.", "Each narration is professionally transcribed and labelled with part-of-speech tags.", "In this work, we only use the narration transcripts, and neither the part-of-speech tags nor the raw acoustics.", "Lu Corpus (MacWhinney et al., 2011): contains 49 patients performing the Cookie Theft picture description, category fluency, and picture naming tasks in Taiwanese Mandarin.", "The picture description narrations were human-transcribed; patient diagnoses are unspecified but exhibit various degrees of dementia.", "OpenSubtitles2016 (Lison and Tiedemann, 2016): a corpus of parallel dialogues extracted from movie subtitles in various languages.", "We use the Traditional Chinese / English language pair, which contains 3.3 million lines of dialogue.", "The Lu Corpus is missing specifics of diagnosis, so we derive a dementia score for each patient using the category fluency and picture naming tasks.", "For each category fluency task, we count the number of unique items named; for the picture naming tasks, we score the number of pictures correctly named, awarding partial credit if a hint was given.", "We apply PCA to the scores across all tasks, and assign the first principal component to be the dementia score for each patient.", "This gives a relative ordering of all patients by degree of dementia, which we treat as the ground truth for evaluating our models.", "We extract a variety of lexicosyntactic features in Mandarin and English, including type-token ratio, the number of words per sentence, and proportions of various part-of-speech tags.", "A detailed description of the features is provided in the supplementary materials (Section A.1).", "In total, we extract 143 features in Mandarin and 185 in English.", "To reduce sparsity, we remove features in both languages that are constant for more than half of the dataset.", "Due to the size of the OpenSubtitles corpus, it was computationally infeasible to run feature extraction on the entire corpus.", "Therefore, we randomly select 50,000 narrations from the corpus, where each narration consists of between 1 and 50 contiguous lines of dialogue (about the length of a Cookie Theft narration).", "For English, we train a logistic regression classifier to classify between dementia and healthy controls on DB, using our features as input.", "Using L1 regularization and 5-fold CV, our model achieves 77% classification accuracy on DB.", "The feature extraction pipeline is open-source, available at: https://github.com/SPOClab-ca/COVFEFE .", "The lex and lex_chinese pipelines were used for English and Chinese, respectively.", "This is slightly lower than the 82% accuracy reported by Fraser et al. (2016).",
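The English classifier described above maps to a few lines of scikit-learn. The following is a minimal illustrative sketch, not the authors' code: the feature matrix is random stand-in data, while the 241/310 narration counts and the 185-feature dimensionality come from the surrounding text.

```python
# A minimal sketch of the English DementiaBank classifier described above:
# L1-regularized logistic regression over lexicosyntactic features with
# 5-fold cross-validation. X is random stand-in data, not real features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(551, 185))   # 241 control + 310 dementia narrations
y = np.concatenate([np.zeros(241), np.ones(310)])

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.3f}")
```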
"However, our model does not use any acoustic features as input.", "Next, we use the OpenSubtitles corpus to train a model to transform Mandarin feature vectors to English feature vectors.", "For each target English feature, we train a separate ElasticNet linear regression (Zou and Hastie, 2005), using the Mandarin features of the parallel text as input.", "We perform a hyperparameter search independently for each target feature, using 3-fold CV to minimize the MSE.", "Although the output of the ElasticNet regressions may be given directly to the logistic regression model to predict dementia, this method has two limitations.", "First, the model considers each target feature separately and cannot take advantage of correlations between target features.", "Second, it treats all target features equally, even though some are noisier than others.", "We introduce two regularization mechanisms to address these drawbacks: reduced rank regression and joint feature selection.", "Reduced rank regression (RRR) trains a single linear model to predict all the target features: it minimizes the sum of MSE across all target features, with the constraint that the rank of the linear mapping is bounded by some given R (Izenman, 1975).", "Following recommended procedures (Davies, 1982), we standardize the target features and find the best value of R with cross-validation.", "However, this procedure did not significantly improve results, so it was not included in our best model.", "A limitation of the above models is that they are not robust to noisy features.", "For example, if some English feature is useful for predicting dementia, but cannot be accurately predicted using the Mandarin features, then including this feature might hurt the overall performance.", "A desirable English feature in our pipeline needs to not only be useful for predicting dementia in English, but also be reconstructable from Mandarin features.", "For joint feature selection, we rank the English features by their R^2 (coefficient of determination) measured on the training set, where higher values indicate a better fit.", "Then, for each K between 1 and the number of features, we select only the top K features and re-train the DB classifier (Section 3.1) to use only those features as input.", "The result of this experiment is shown in Figure 2.", "Experiments.", "Baseline Models.", "We compare our system against two simple baselines:", "1. Unilingual baseline: using the Mandarin features, we train a linear regression to predict the dementia score.", "We take the mean across 5 cross-validation folds.",
"2. Translate baseline: the other intuitive way to generate English features from a Mandarin corpus is by using translation.", "We use Google Translate ( https://translate.google.com/ ) to translate each Mandarin transcript to English.", "Then, we extract features from the translated English text and feed them to the dementia classifier described in Section 3.1.", "We evaluate each model by computing Spearman's rank-order correlation (Spearman, 1904) between the ground-truth dementia scores and the model's predictions.", "This measures the model's ability to rank the patients from the highest to the lowest severity of dementia, without requiring a threshold value.", "Figure 2: Accuracy of the DementiaBank classifier and Spearman's rho on the Lu corpus, using only the top K English features ordered by R^2 on the OpenSubtitles corpus.", "Our best model achieves a Spearman's rho of 0.549, beating the translate baseline (n = 49, p = 0.06).", "Joint feature selection appears to be crucial, since the model performs worse than the baselines if we use all of the features.", "This is the case whether we predict each target feature independently or all at once with reduced rank regression.", "RRR does not outperform the baseline model, probably because it fails to account for the noisy target features in the correspondence model and considers each feature equally important.", "We did not attempt to use joint feature selection and RRR at the same time, because the multiplicative combination of hyperparameters K and R would produce a multiple comparisons problem on the small validation set.", "Using joint feature selection, we find that the best score is achieved when we use K = 13 target features (Figure 2).", "With K < 13, performance suffers because the DementiaBank classifier is not given enough information to make accurate classifications.", "With K > 13, the accuracy of the DementiaBank classifier improves; however, the overall performance degrades because it is given noisy features with low R^2 coefficients.", "A list of the top features is given in Table 2 in the supplementary materials.", "In our experiments, the correspondence model worked better when absolute counts were used for the Chinese CFG features (e.g., the number of NP → PN productions in the narration) rather than ratio features (e.g., the proportion of CFG productions that were NP → PN).", "Figure 3: Ablation experiment in which varying numbers of OpenSubtitles samples were used for training.", "When ratios were used for source features, the R^2 coefficients for many target features decreased.", "A possible explanation is that the narrations have varying lengths, and dividing features by the length introduces a nonlinearity that adversely affects our linear models.", "However, more experimentation is required to examine this hypothesis.", "Next, we investigate how many parallel OpenSubtitles narrations are necessary to learn the correspondence model.", "We choose various training sample sizes from 10 to 50,000 and, for each training size, we train and evaluate the whole model end-to-end 10 times with different random seeds (Figure 3).", "As expected, Spearman's rho increased as more samples were used, but only 1000-2000 samples were required to achieve comparable performance to the full model.",
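The joint feature selection loop described above can be summarized in a short sketch. Here `en_true` and `en_pred` (actual vs. ElasticNet-predicted English features on the OpenSubtitles training set) and `train_clf` (a callback that retrains the DementiaBank classifier on a feature subset and returns a validation score) are hypothetical helpers for illustration, not the authors' API.

```python
# A sketch of joint feature selection: rank English target features by R^2
# (how well they are reconstructed from Mandarin features), then pick the
# K whose top-K subset gives the best downstream score.
import numpy as np
from sklearn.metrics import r2_score

def rank_features_by_r2(en_true, en_pred):
    r2 = np.array([r2_score(en_true[:, j], en_pred[:, j])
                   for j in range(en_true.shape[1])])
    return np.argsort(r2)[::-1]              # best-reconstructed first

def select_top_k(en_true, en_pred, train_clf, k_max):
    order = rank_features_by_r2(en_true, en_pred)
    scores = {k: train_clf(order[:k]) for k in range(1, k_max + 1)}
    return max(scores, key=scores.get)       # e.g., K = 13 in the paper
```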
"We propose a novel method to use a large parallel corpus to learn mappings between engineered features in two languages.", "Combined with a dementia classifier model for English speech, we constructed a model to predict dementia in Mandarin Chinese.", "Our method achieves state-of-the-art results for this task and beats baselines based on unilingual models and Google Translate.", "It is successful despite the stark differences between English and Mandarin, and the fact that the parallel corpus is out-of-domain for the task.", "Lastly, our method does not require any Mandarin data for training, which is important given the difficulty of acquiring sensitive clinical data.", "Future work will investigate the use of automatic speech recognition to reduce the need for manual transcripts, which are impractical in a clinical setting.", "Also, our model only uses lexicosyntactic features, and ignores acoustic features (e.g., pause duration) which are significant for dementia detection in English.", "Finally, it remains to apply this method to other languages, such as French (Fraser et al., 2019), for which datasets have recently been collected.", "We thank Kathleen Fraser and Nicklas Linz for their helpful comments and earlier collaboration which inspired this project." ]
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "objective", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "result", "abstain", "other" ]
[ "This work deals with the challenge of learning and reasoning over language and vision data for the related downstream tasks such as visual question answering (VQA) and natural language for visual reasoning (NLVR) .", "We design a novel cross-modality relevance module that is used in an end-to-end framework to learn the relevance representation between components of various input modalities under the supervision of a target task, which is more generalizable to unobserved data compared to merely reshaping the original representation space.", "In addition to modeling the relevance between the textual entities and visual entities, we model the higher-order relevance between entity relations in the text and object relations in the image.", "Our proposed approach shows competitive performance on two different language and vision tasks using public benchmarks and improves the state-of-the-art published results.", "The learned alignments of input spaces and their relevance representations by NLVR task boost the training efficiency of VQA task.", "Real-world problems often involve data from multiple modalities and resources.", "Solving a problem at hand usually requires the ability to reason about the components across all the involved modalities.", "Examples of such tasks are visual question answering (VQA) (Antol et al., 2015; Goyal et al., 2017) and natural language visual reasoning (NLVR) (Suhr et al., 2017, 2018).", "One key to intelligence here is to identify the relations between the modalities, combine and reason over them for decision making.", "Deep learning is a prominent technique to learn representations of the data for decision making for various target tasks.", "It has achieved supreme performance based on large scale corpora (Devlin et al., 2019).", "However, it is a challenge to learn joint representations for cross-modality data because deep learning is data-hungry.", "There are many recent efforts to build such multimodality datasets (Lin et al., 2014; Krishna et al., 2017; Johnson et al., 2017; Antol et al., 2015; Suhr et al., 2017; Goyal et al., 2017; Suhr et al., 2018).", "Researchers develop models by joining features, aligning representation spaces, and using Transformers (Li et al., 2019b; Tan and Bansal, 2019).", "However, generalizability is still an issue when operating on unobserved data.", "It is hard for deep learning models to capture high-order patterns of reasoning, which is essential for their generalizability.", "There are several challenging research directions for addressing learning representations for cross-modality data and enabling reasoning for target tasks.", "First is the alignment of the representation spaces for multiple modalities; second is designing architectures with the ability to capture high-order relations for generalizability of reasoning; third is using pre-trained modules to make the most use of minimal data.", "An orthogonal direction to the above-mentioned aspects of learning is finding relevance between the components and the structure of various modalities when working with multi-modal data.", "Most of the previous language and visual reasoning models try to capture the relevance by learning representations based on an attention mechanism.", "Finding relevance, known as matching, is a fundamental task in information retrieval (IR) (Mitra et al., 2017).", "Ben-efiting from matching, Transformer models gain the excellent ability to index, retrieve, and combine features of underlying instances by a matching score (Vaswani et al., 2017), 
which leads to state-of-the-art performance in various tasks (Devlin et al., 2019).", "However, the matching in the attention mechanism is used to learn a set of weights to highlight the importance of various components.", "In our proposed model, we learn representations directly based on the relevance score, inspired by ideas from IR models.", "In contrast to the attention mechanism and Transformer models, we claim that the relevance patterns themselves are just as important.", "With proper alignment of the representation spaces of different input modalities, matching can be applied to those spaces.", "The idea of learning relevance patterns is similar to Siamese networks (Koch et al., 2015), which learn transferable patterns of similarity between two image representations for one-shot image recognition.", "A similarity metric between two modalities has been shown to be helpful for aligning multiple spaces of modalities (Frome et al., 2013).", "The contributions of this work are as follows: 1) We propose a cross-modality relevance (CMR) framework that considers entity relevance and high-order relational relevance between the two modalities with an alignment of representation spaces.", "The model can be trained end-to-end with customizable target tasks.", "2) We evaluate the methods and analyze the results on both the VQA and NLVR tasks using the VQA v2.0 and NLVR2 datasets, respectively.", "We improve on the published state-of-the-art results for both tasks.", "Our analysis shows the significance of the patterns of relevance for reasoning, and the CMR model trained on NLVR2 boosts the training efficiency of the VQA task.", "Language and Vision Tasks.", "Learning and decision making based on natural language and visual information has attracted the attention of many researchers because it exposes many interesting research challenges to the AI community.", "Among many other efforts (Lin et al., 2014; Krishna et al., 2017; Johnson et al., 2017), Antol et al. (2015) proposed the VQA challenge, which contains open-ended questions about images that require an understanding of and reasoning about language and visual components.", "Suhr et al.
proposed the NLVR task that asks models to determine whether a sentence is true based on the image.", "Attention Based Representation.", "Transformers are stacked self-attention models for general-purpose sequence representation (Vaswani et al., 2017).", "They have been shown to achieve extraordinary success in natural language processing, not only for better results but also for efficiency due to their parallel computation.", "Self-attention is a mechanism to reshape representations of components based on relevance scores.", "They have been shown to be effective in generating contextualized representations for text entities.", "More importantly, there are several efforts to pre-train huge Transformers on large-scale corpora (Devlin et al., 2019; Yang et al., 2019; Radford et al., 2019) over multiple popular tasks, enabling them to be exploited for other tasks with small corpora.", "Researchers have also extended Transformers to both textual and visual modalities (Li et al., 2019b; Sun et al., 2019; Tan and Bansal, 2019; Su et al., 2020; Tsai et al., 2019).", "Sophisticated pre-training strategies were introduced to boost the performance (Tan and Bansal, 2019).", "However, as mentioned above, modeling relations between components is still a challenge for approaches that try to reshape the entity representation space, while the relevance score can be more expressive for these relations.", "In our CMR framework, we model high-order relations in the relevance representation space rather than the entity representation space.", "Matching Models.", "Matching is a fundamental task in information retrieval (IR).", "There are IR models that focus on global representation matching (Huang et al., 2013; Shen et al., 2014), local component (a.k.a. term) matching (Guo et al., 2016; Pang et al., 2016), and hybrid methods (Mitra et al., 2017).", "Our relevance framework is partially inspired by local component matching, which we apply here to model the relevance of the components of the model's inputs.", "However, our work differs in several significant ways.", "First, we work under the cross-modality setting.", "Second, we extend the relevance to a high order, i.e., we model the relevance of entity relations.", "Third, our framework can work with different target tasks, and we show that the parameters trained on one task can boost the training of another.", "Cross-Modality Relevance (CMR) aims to establish a framework for general-purpose relevance in various tasks.", "As an end-to-end model, it encodes the relevance between the components of input modalities under task-specific supervision.", "We further add a high-order relevance between relations that occur in each modality.", "Figure 1 shows the proposed architecture.", "We first encode data from different modalities with single-modality Transformers and align the encoding spaces by a cross-modality Transformer.", "(Figure 1 components: raw text and Faster-RCNN inputs, single-modality Transformers, a cross-modality Transformer, entity representations, entity and relational relevance affinity matrices, and the output.)", "We consistently refer to the words in text and objects in images (i.e.,
bounding boxes in images) as entities, and to their representations as entity representations.", "We use the relevance between the components of the two modalities to model the relation between them.", "The relevance includes the relevance between their entities, as shown in the Entity Relevance, and the high-order relevance between their relations, as shown in the Relational Relevance.", "We learn the representations of the affinity matrix of relevance scores by convolutional layers and fully-connected layers.", "Finally, we predict the output by a non-linear mapping based on all the relevance representations.", "This architecture can help to solve tasks that need reasoning over two modalities based on their relevance.", "We argue that the parameters trained on one task can boost the training of other tasks that deal with multi-modality reasoning.", "In this section, we first formulate the problem.", "Then we describe our cross-modality relevance (CMR) model for solving the problem.", "The architecture, loss function, and training procedure of CMR are explained in detail.", "We will use the VQA and NLVR tasks as showcases.", "Formally, the problem is to model a mapping from a cross-modality data sample D = {D^α} to an output y in a target task, where α denotes the type of modality.", "D^α = {d^α_1, ..., d^α_{N_α}} is the set of entities in modality α.", "In visual question answering (VQA), the task is to predict an answer given two modalities, that is, a textual question (D^t) and a visual image (D^v).", "In NLVR, given a textual statement (D^t) and an image (D^v), the task is to determine the correctness of the textual statement.", "Single Modality Representations.", "For the textual modality D^t, we utilize BERT (Devlin et al., 2019), as shown in the bottom-left part of Figure 1, which is a multi-layer Transformer (Vaswani et al., 2017) with three different inputs: WordPiece embeddings (Wu et al., 2016), segment embeddings, and position embeddings.", "We refer to all the words as the entities in the textual modality and use the BERT representations as the textual single-modality representations {s^t_1, ..., s^t_{N_t}}.", "We assume to have N_t words as textual entities.", "For the visual modality D^v, as shown in the top-left part of Figure 1, Faster-RCNN (Ren et al., 2015) is used to generate regions of interest (ROIs), extract dense encoding representations of the ROIs, and predict the probability of each ROI.", "We refer to the ROIs in images as the visual entities {d^v_1, ..., d^v_{N_v}}.", "We consider a fixed number, N_v, of visual entities with the highest probabilities predicted by Faster-RCNN each time.", "The dense representation of each ROI is a local latent representation, a 2048-dimensional vector (Ren et al., 2015).", "To enrich the visual entity representation with the visual context, we further project the vectors with feed-forward layers and encode them by a single-modality Transformer, as shown in the second column of Figure 1.",
"The visual Transformer takes the dense representation, segment embedding, and pixel position embedding (Tan and Bansal, 2019) as input and generates the single-modality representations {s^v_1, ..., s^v_{N_v}}.", "When there are multiple images (for example, the NLVR data (NLVR2) has two images in each example), each image is encoded by the same procedure and we keep N_v visual entities per image.", "We refer to this as different sources of the same modality throughout the paper.", "We restrict all the single-modality representations to be vectors of the same dimension d.", "However, these original representation spaces still need to be aligned.", "Cross-Modality Alignment.", "To align the single-modality representations in a uniform representation space, we introduce a cross-modality Transformer, as shown in the third column of Figure 1.", "All the entities are treated uniformly in the cross-modality Transformer.", "Given the set of entity representations from all modalities, we define the matrix with all the elements in the set: S = [s^t_1, ..., s^t_{N_t}, s^v_1, ..., s^v_{N_v}] ∈ R^{d×(N_t+N_v)}.", "Each cross-modality self-attention calculation is computed as follows (Vaswani et al., 2017): Attention(K, Q, V) = softmax(K^T Q / sqrt(d)) V, (1) where in our case the key K, query Q, and value V are all the same tensor S, and softmax(·) normalizes along the columns.", "A cross-modality Transformer layer consists of a cross-modality self-attention representation followed by a residual connection with normalization from the input representation, a feed-forward layer, and another residual connection with normalization.", "We stack several cross-modality Transformer layers to get a uniform representation over all modalities.", "We refer to the resulting uniform representations as the entity representations and denote the set of entity representations of all the entities as {s'^t_1, ..., s'^t_{N_t}, s'^v_1, ..., s'^v_{N_v}}.", "Although the representations are still organized by their original modalities per entity, they carry the information from the interactions with the other modality and are aligned in a uniform representation space.", "The entity representations, shown in the fourth column of Figure 1, alleviate the gap between representations from different modalities, as we will show in the ablation studies, and allow them to be matched in the following steps.", "(Please note that here we keep the usual notation of the attention mechanism for this equation.", "The notation might be overloaded in other parts of the paper.)", "Relevance plays a critical role in reasoning ability, which is required in many tasks such as information retrieval, question answering, and intra- and inter-modality reasoning.", "Relevance patterns are independent of the input representation space, and can have better generalizability to unobserved data.", "To consider the entity relevance between two modalities D^α and D^β, the entity relevance representation is calculated as shown in Figure 1.", "Given entity representation matrices S'^α = [s'^α_1, ..., s'^α_{N_α}] ∈ R^{d×N_α} and S'^β = [s'^β_1, ..., s'^β_{N_β}] ∈ R^{d×N_β}, the relevance representation is calculated by A^{α,β} = (S'^α)^T S'^β, (2a) and M(D^α, D^β) = CNN_{D^α,D^β}(A^{α,β}), (2b) where A^{α,β} is the affinity matrix of the two modalities, as shown on the right side of Figure 1.",
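A compact PyTorch rendering of the entity relevance computation in Eqs. 2a-2b just above may help: the affinity matrix between two modalities' entity representations is fed to a small CNN whose flattened feature maps are mapped to a d-dimensional vector. This is an illustrative sketch; the layer sizes are assumptions, not the paper's exact settings.

```python
# A sketch of the entity relevance module (Eqs. 2a-2b).
import torch
import torch.nn as nn

class EntityRelevance(nn.Module):
    def __init__(self, out_dim=768):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.LazyLinear(out_dim)      # flattened maps -> d dims

    def forward(self, S_a, S_b):              # S_a: (d, N_a), S_b: (d, N_b)
        A = S_a.t() @ S_b                     # affinity matrix, Eq. 2a
        feat = self.conv(A[None, None])       # add batch/channel dims
        return self.fc(feat.flatten(1)).squeeze(0)   # M(D^a, D^b), Eq. 2b
```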
"A^{α,β}_ij is the relevance score of the i-th entity in D^α and the j-th entity in D^β.", "CNN_{α,β}(·) is a CNN, corresponding to the sixth column of Figure 1, which contains several convolutional layers and fully-connected layers.", "Each convolutional layer is followed by a max-pooling layer.", "The fully-connected layers finally map the flattened feature maps to a d-dimensional vector.", "We refer to M(D^α, D^β) as the entity relevance representation between modalities α and β.", "We compute the relevance between different modalities.", "For the modalities considered in this work, when there are multiple images in the visual modality, we calculate the relevance representation between them too.", "In particular, for the VQA dataset, the above setting results in one entity relevance representation: the textual-visual entity relevance M(D^t, D^v).", "For the NLVR2 dataset, there are three entity relevance representations: two textual-visual entity relevances, M(D^t, D^{v1}) and M(D^t, D^{v2}), and a visual-visual entity relevance, M(D^{v1}, D^{v2}), between the two images.", "Entity relevance representations will be flattened and joined with other features in the next layer of the network.", "We also consider the relevance beyond entities, that is, the relevance of the entities' relations.", "This extension allows our CMR to capture higher-order relevance patterns.", "We consider pair-wise non-directional relations between entities in each modality and calculate the relevance of the relations across modalities.", "(Figure 2: Relational relevance is the relevance of the top-K relations in terms of intra-modality relevance score and inter-modality importance.)", "The procedure is similar to entity relevance, as shown in Figure 1.", "We denote the relational representation as a nonlinear mapping R^{2d} → R^d, modeled by fully-connected layers over the concatenation of the representations of the entities in the relation: r^α_{(i,j)} = MLP_{α,1}([s'^α_i, s'^α_j]) ∈ R^d.", "The relational relevance affinity matrix can be calculated by matching the relational representations {r^α_{(i,j)}, i ≠ j} from different modalities.", "However, there will be C(N_α, 2) possible pairs in each modality D^α, most of which are irrelevant.", "The relational relevance representations will be sparse because of the irrelevant pairs on both sides.", "Computing the relevance score of all possible pairs would introduce a large number of unnecessary parameters, which makes training more difficult.", "We propose to rank the relation candidates (i.e., pairs) by the intra-modality relevance score and the inter-modality importance.", "Then we compare the top-K ranked relation candidates between the two modalities, as shown in Figure 2.",
"For the intra-modality relevance score, shown in the bottom-left part of the figure, we estimate a normalized score based on the relational representation with a softmax layer.", "To evaluate the inter-modality importance of a relation candidate, which is a pair of entities in the same modality, we first compute the relevance of each entity in the text with respect to the visual objects.", "As shown in Figure 2, we compute a vector that records, for each word, its most relevant visual object; we denote this importance vector as v^t.", "This helps to focus on words that are grounded in the visual modality.", "We use the same procedure to compute the most relevant words for each visual object.", "Then we calculate the relation-candidate importance matrix V^α by an outer product ⊗ of the importance vectors as follows: v^α_i = max_j A^{α,β}_ij, (4a) V^α = v^α ⊗ v^α, (4b) where v^α_i is the i-th scalar element of v^α, corresponding to the i-th entity, and A^{α,β} is the affinity matrix calculated by Equation 2a.", "Notice that the inter-modality importance matrix V^α is symmetric.", "The upper triangular part of V^α, excluding the diagonal, indicates the importance of the corresponding elements with the same index in the intra-modality relevance scores U^α.", "The ranking score for a candidate is the combination (here, the product) of the two scores: W^α_{(i,j)} = U^α_{(i,j)} V^α_ij.", "We select the set of top-K ranked candidate relations K^α = {κ_1, κ_2, ..., κ_K}.", "We reorganize the representations of the top-K relations as R^α = [r_{κ_1}, ..., r_{κ_K}] ∈ R^{d×K}.", "The relational relevance representation between K^α and K^β can be calculated similarly to the entity relevance representations, as shown in Figure 1: M(K^α, K^β) = CNN_{K^α,K^β}((R^α)^T R^β).", "M(K^α, K^β) has its own parameters, which results in a d-dimensional feature space.", "In particular, for the VQA task, the above setting results in one relational relevance representation: the textual-visual relational relevance M(K^t, K^v).", "For the NLVR task, there are three relational relevance representations: two textual-visual relational relevances, M(K^t, K^{v1}) and M(K^t, K^{v2}), and a visual-visual relational relevance, M(K^{v1}, K^{v2}), between the two images.", "Relational relevance representations will be flattened and joined with other features in the next layers of the network.", "After acquiring all the entity and relational relevance representations, namely the M(D^α, D^β) and M(K^α, K^β), we concatenate them and use the result as the final feature vector.", "A task-specific classifier MLP(·) takes this final feature as input and predicts the output of the target task, as shown in the right-most column of Figure 1.",
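The relation-candidate ranking described above (Eqs. 4a-4b and the product score W) reduces to a few tensor operations. In this sketch, `A` and `U` are stand-ins for the affinity matrix and the intra-modality relevance scores; the shapes are assumptions for illustration.

```python
# A sketch of top-K relation-candidate selection: inter-modality importance
# from row-wise maxima of the affinity matrix (Eq. 4a), an outer product for
# pair importance (Eq. 4b), combined with intra-modality scores U.
import torch

def top_k_relations(A, U, k):
    """A: (N_a, N_b) affinity matrix; U: (N_a, N_a) intra-modality scores."""
    v = A.max(dim=1).values            # v_i = max_j A_ij        (Eq. 4a)
    V = torch.outer(v, v)              # pair importance         (Eq. 4b)
    W = U * V                          # ranking score W_(i,j)
    iu = torch.triu_indices(W.size(0), W.size(1), offset=1)
    scores = W[iu[0], iu[1]]           # upper triangle: unordered pairs
    top = scores.topk(min(k, scores.numel())).indices
    return list(zip(iu[0][top].tolist(), iu[1][top].tolist()))
```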
"End-to-end Training.", "CMR can be considered as an end-to-end relevance representation extractor.", "We simply predict the output y of a specific task from the final feature with a differentiable regression or classification function.", "The gradient of the loss function is back-propagated to all the components of CMR to penalize the prediction and adjust the parameters.", "We freeze the parameters of the basic feature extractors, namely BERT for the textual modality and Faster-RCNN for the visual modality.", "The parameters of the following parts are updated by gradient descent: the single-modality Transformers (except BERT), the cross-modality Transformer, CNN_{D^α,D^β}(·), CNN_{K^α,K^β}(·), MLP_{α,1}(·), and MLP_{α,2}(·) for all modalities and modality pairs, and the task-specific classifier MLP(·).", "The VQA task can be formulated as a multi-class classification that chooses a word to answer the question.", "We apply a softmax classifier on the final feature and penalize with the cross-entropy loss.", "For the NLVR2 dataset, the task is a binary classification that determines whether the statement is correct regarding the images.", "We apply a logistic regression on the final feature and penalize with the cross-entropy loss.", "Pre-training Strategy.", "To leverage the pre-trained parameters of our cross-modality Transformer and relevance representations, we use the following training settings.", "For all tasks, we freeze the parameters in BERT and Faster-RCNN.", "We use pre-trained parameters for the (visual) single-modality Transformer, as proposed by Tan and Bansal (2019), and let them be fine-tuned in the following procedure.", "Then we randomly initialize and train all the parameters of the model on NLVR with the NLVR2 dataset.", "After that, we keep and fine-tune all the parameters on the VQA task with the VQA v2.0 dataset.", "(See the data description in Section 4.1.)", "In this way, the parameters of the cross-modality Transformer and relevance representations, pre-trained on the NLVR2 dataset, are reused and fine-tuned on the VQA dataset.", "Only the final task-specific classifier over the input features is initialized randomly.", "The pre-trained cross-modality Transformer and relevance representations help the model for VQA to converge faster and achieve competitive performance compared to the state-of-the-art results.", "NLVR2 (Suhr et al., 2018) is a dataset that aims at joint reasoning about natural language descriptions and related images.", "Given a textual statement and a pair of images, the task is to indicate whether the statement correctly describes the two images.", "NLVR2 contains 107,292 examples of sentences paired with images and is designed to emphasize semantic diversity, compositionality, and visual reasoning challenges.", "VQA v2.0 (Goyal et al., 2017) is an extended version of the VQA dataset.", "It contains 204,721 images from MS COCO (Lin et al., 2014), paired with 1,105,904 free-form, open-ended natural language questions and answers.", "These questions are divided into three categories: Yes/No, Number, and Other.", "We implemented CMR using PyTorch.", "We consider 768-dimensional single-modality representations.", "For the textual modality, the pre-trained BERT base model (Devlin et al., 2019) is used to generate the single-modality representations.", "For the visual modality, we use a Faster-RCNN pre-trained by Anderson et al. (2018), followed by a five-layer Transformer.", "Parameters in BERT and Faster-RCNN are fixed.", "For each example, we keep 20 words as textual entities and 36 ROIs per image as visual entities.",
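The freezing scheme described above can be expressed in a few lines of PyTorch. This is a sketch under assumed module names (`model.bert`, `model.faster_rcnn`), not the released code; the Adam settings are the ones the paper reports.

```python
# Sketch: freeze BERT and Faster-RCNN, train everything downstream with
# the Adam settings reported in the paper.
import torch

def configure_optimizer(model, lr=1e-4):
    for p in model.bert.parameters():
        p.requires_grad = False              # frozen textual extractor
    for p in model.faster_rcnn.parameters():
        p.requires_grad = False              # frozen visual extractor
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=lr, betas=(0.9, 0.999),
                            eps=1e-6, weight_decay=0.01)
```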
"For the relational relevance, the top-10 ranked pairs are used.", "For each relevance CNN, CNN_{D^α,D^β}(·) and CNN_{K^α,K^β}(·), we use two convolutional layers, each of which is followed by max-pooling, and fully-connected layers.", "For the relational representations and their intra-modality relevance scores, MLP_{α,1}(·) and MLP_{α,2}(·), we use one hidden layer each.", "The task-specific classifier MLP(·) contains three hidden layers.", "The model is optimized using the Adam optimizer with learning rate 10^-4, β1 = 0.9, β2 = 0.999, and ε = 10^-6.", "The model is trained with a weight decay of 0.01, a max gradient norm of 1.0, and a batch size of 32.", "Our code and data are available at: https://github.com/HLR/Cross_Modality_Relevance .", "VisualBERT (Li et al., 2019b) is an end-to-end model for language and vision tasks, consisting of Transformer layers that align textual and visual representation spaces with self-attention.", "VisualBERT and CMR have a similar cross-modality alignment approach.", "However, VisualBERT only uses the Transformer representations, while CMR uses the relevance representations.", "LXMERT (Tan and Bansal, 2019) aims to learn cross-modality encoder representations from Transformers.", "It pre-trains the model with one set of tasks and fine-tunes on another set of specific tasks.", "LXMERT is the currently published state-of-the-art on both NLVR2 and VQA v2.0.", "NLVR2: The results of the NLVR task are listed in Table 1.", "Transformer-based models (VisualBERT, LXMERT, and CMR) outperform the other models (N2NMN (Hu et al., 2017), MAC (Hudson and Manning, 2018), and FiLM (Perez et al., 2018)) by a large margin.", "This is due to the strong pre-trained single-modality representations and the Transformers' ability to reshape the representations and align the spaces.", "Furthermore, CMR shows the best performance compared to all Transformer-based baseline methods and achieves the state-of-the-art.", "VisualBERT and CMR have a similar cross-modality alignment approach.", "CMR outperforms VisualBERT by 12.4%.", "The gain mainly comes from the entity relevance and relational relevance that model the relations.", "VQA v2.0: In Table 2, we show the comparison with published models, excluding the ensemble ones.", "Most competitive models are based on Transformers (ViLBERT (Lu et al., 2019), VisualBERT (Li et al., 2019b), VL-BERT (Su et al., 2020), LXMERT (Tan and Bansal, 2019), and CMR).", "BUTD (Anderson et al., 2018; Teney et al., 2018), ReGAT (Li et al., 2019a), and BAN (Kim et al., 2018) also employ an attention mechanism for a relation-aware model.", "The proposed CMR achieves the best test accuracy on Y/N questions and Other questions.", "Table 2: Accuracy on VQA v2.0 (Dev Overall %; Test-Standard Y/N, Num, Other, Overall %). BUTD: 65.32; 81.82, 44.21, 56.05, 65.67. ReGAT: 70.27; 86.08, 54.42, 60.33, 70.58. ViLBERT: 70.55; -, -, -, 70.92. VisualBERT: 70.80; -, -, -, 71.00. BAN: 71.4; 87.22, 54.37, 62.45, 71.84. VL-BERT: 71.79; 87.94, 54.75, 62.54, 72.22. LXMERT: 72.5; 87.97, 54.94, 63.13, 72.54. CMR: 72.58; 88.14, 54.71, 63.16, 72.60.", "However, CMR does not achieve the best performance on Number questions.", "This is because Number questions require the ability to count numbers in one modality, while CMR focuses on modeling relations between modalities.", "Performance on counting might be improved by explicit modeling of quantity representations.", "CMR also achieves the best overall accuracy.", "In particular, we can see a 2.3% improvement over VisualBERT (Li et al., 2019b), consistent with the above-mentioned NLVR2 results.",
"This shows the significance of the entity and relational relevance.", "Another observation is that, if we train CMR for the VQA task from scratch with random initialization, while still using the fixed BERT and Faster-RCNN, the model converges after 20 epochs.", "When we instead initialize the parameters with the model trained on NLVR2, it takes 6 epochs to converge.", "The significant improvement in convergence speed indicates that the optimal model for VQA is close to that for NLVR.", "To investigate the influence of model size, we empirically evaluated CMR on NLVR2 with various Transformer sizes, which account for most of the parameters of the model.", "All other details are kept the same as described in Section 4.2.", "The textual Transformer remains at 12 layers because it is the pre-trained BERT.", "Our model contains 285M parameters.", "Among these parameters, around 230M belong to the pre-trained BERT and the Transformers.", "Table 3 shows the results.", "As we increase the number of layers in the visual Transformer and the cross-modality Transformer, accuracy tends to improve.", "However, the performance becomes stable when there are more than five layers.", "We choose five layers for the visual Transformer and the cross-modality Transformer in the other experiments.", "To better understand the influence of each part of CMR, we perform an ablation study.", "Table 4 shows the performance of four variations on NLVR2.", "Effect of Single Modality Transformer.", "We remove both the textual and visual single-modality Transformers and instead map the raw input to the d-dimensional space with a linear transformation.", "Notice that the raw input of the textual modality consists of the WordPiece (Wu et al., 2016) embeddings, segment embeddings, and position embeddings of each word, while that of the visual modality is the 2048-dimensional dense representation of each ROI extracted by Faster-RCNN.", "It turns out that removing the single-modality Transformers decreases the accuracy by 9.0%.", "Single-modality Transformers play a critical role in producing a strong contextualized representation for each modality.", "Effect of Cross-Modality Transformer.", "We remove the cross-modality Transformer and use the single-modality representations as entity representations.", "As shown in Table 4, the model degenerates dramatically, and the accuracy decreases by 16.2%.", "The huge accuracy gap demonstrates the unparalleled contribution of the cross-modality Transformer to aligning the representation spaces of the input modalities.", "Effect of Entity Relevance.", "We remove the entity relevance representations M(D^α, D^β) from the final feature.", "As shown in Table 4, the test accuracy is reduced by 5.4%.", "This is a significant performance difference among Transformer-based models (Li et al., 2019b; Lu et al., 2019; Tan and Bansal, 2019).", "To highlight the significance of entity relevance, we visualize an example affinity matrix in Figure 3.", "The two major entities, bird and branch, are matched perfectly.", "More interestingly, the three ROIs matching the phrase looking to left capture an indicator (the beak), a direction (left), and the semantics of the whole phrase.", "Effect of Relational Relevance.", "We remove the relational relevance representations M(K^α, K^β) from the final feature.", "A 2.5% decrease in test accuracy is observed in Table 4.",
"We argue that CMR models high-order relations, which are not captured by entity relevance, through relational relevance.", "We present two examples of textual relation ranking scores in Figure 4.", "The learned ranking scores highlight the important pairs, for example (gold, top) and (looking, left), which describe the important relations in the textual modality.", "In this paper, we propose a novel cross-modality relevance (CMR) framework for language and vision reasoning.", "In particular, we argue for the significance of the relevance between the components of the two modalities for reasoning, which includes entity relevance and relational relevance.", "We propose an end-to-end cross-modality relevance framework that is tailored for language and vision reasoning.", "We evaluate the proposed CMR on the NLVR and VQA tasks.", "Our approach exceeds the state-of-the-art on the NLVR2 and VQA v2.0 datasets.", "Moreover, the model trained on NLVR2 boosts the training on the VQA v2.0 dataset.", "The experiments and the empirical analysis demonstrate CMR's capability of modeling relational relevance for reasoning and, consequently, its better generalizability to unobserved data.", "This result indicates the significance of relevance patterns.", "Our proposed architectural component for capturing relevance patterns can be used independently from the full CMR architecture and is potentially applicable to other multi-modal tasks.", "We thank the anonymous reviewers for their helpful comments.", "This project is supported by National Science Foundation (NSF) CAREER award #1845771." ]
[ "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "result", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "Target-dependent classification tasks, such as aspect-level sentiment analysis, perform fine-grained classifications towards specific targets.", "Semantic compositions over tree structures are promising for such tasks, as they can potentially capture long-distance interactions between targets and their contexts.", "However, previous work that operates on tree structures resorts to syntactic parsers or Treebank annotations, which are either subject to noise in informal texts or highly expensive to obtain.", "To address above issues, we propose a reinforcement learning based approach, which automatically induces target-specific sentence representations over tree structures.", "The underlying model is a RNN encoder-decoder that explores possible binary tree structures and a reward mechanism that encourages structures that improve performances on downstream tasks.", "We evaluate our approach on two benchmark tasks: firm-specific cumulative abnormal return prediction (based on formal news texts) and aspect-level sentiment analysis (based on informal social media texts).", "Experimental results show that our model gives superior performances compared to previous work that operates on parsed trees.", "Moreover, our approach gives some intuitions on how target-specific sentence representations can be achieved from its word constituents.", "We investigate target-dependent classification problem in this paper, with a special focus on the sentence level.", "Target-dependent classification aims to identify the fine-grained polarities of sentences towards specific targets, which is challenging but also important for deep text understanding.", "The definitions of polarity vary across different tasks, which can be positive or negative in * Corresponding author Task Example Aspect-level Sentiment Analysis The food is good but the service is dreadful.", "aspect-level sentiment analysis, favor or against in stance detection, and rise or drop in financial news analysis towards the stock price movement of a particular firm.", "Table 1 gives examples of three target-dependent classification tasks.", "We can find that there can be multiple target mentions in the same text scope, which makes it challenging for generic sentence representation approaches.", "For the first example, a restaurant manager or a potential customer may be interested in both food and service ; however, the sentiment polarities towards the two targets are different.", "Hence, it would be beneficial for such tasks to tailor the sentence representations with respect to particular targets.", "Tree structures are promising for such tasks, as they can potentially capture long-distance dependencies between target words and their contexts (Li et al., 2015).", "Therefore, it is not surprising to find work that exploits the syntactically parsed trees for learning target-specific sentence representations.", "Dong et al. (2014) and Chang et al. (2016) adapted the word orders in a parsed tree, depending on their distances to the target entities.", "Nguyen et al. (2015) extended Dong et al. 
(2014) by combining the constituency tree and the dependency tree of a sentence.", "An important assumption of such work is that different tree structures lead to different semantic representations, even for the same sentence.", "However, they all resort to external syntactic resources, such as parse trees or Treebank annotations (Marcus et al., 1993), which limits their broader applications.", "On the one hand, annotated data are highly expensive to produce; and informal texts, such as those on social media, remain a challenge for syntactic parsers (Kong et al., 2014).", "On the other hand, the tree structures in their pipeline-style architectures are fixed during training, which cascades errors to the later representation learning stage.", "A desirable solution would be to automatically and dynamically induce the tree structures for target-specific sentence representations.", "However, the challenge is that the absence of external supervision makes it difficult to evaluate the quality of the tree structures and train the parameters.", "Inspired by Yogatama et al. (2016), we propose a reinforcement learning based approach that integrates target information and generates target-specific tree structures that benefit downstream classification tasks.", "The underlying framework consists of two key components: an RNN encoder-decoder that explores possible binary tree structures according to a given target, and a tree-structured neural network that composes the input words into a sentence representation based on the structure.", "The REINFORCE algorithm with the self-critic baseline (Rennie et al., 2016) is applied to update the parameters of the two components.", "We evaluate our approach on two benchmark tasks: a firm-specific cumulative abnormal return prediction task (based on formal news texts) and an aspect-level sentiment analysis task (based on informal social media texts).", "Experimental results show that our approach achieves superior performance compared to baseline methods that operate on parsed trees.", "Moreover, our model sheds light on how sentences are composed from their word constituents towards specific targets.", "We formalize the problem of learning sentence representations for target-dependent classification tasks as constructing and semantically composing the target-specific binary syntactic trees of sentences.", "The input of the model is a tuple (x, x_target, c_target), in which x is a sentence of n words {x_1, x_2, ..., x_n}, x_target is the target of interest mentioned in the sentence, and c_target is the polarity regarding the target.", "Figure 1: For the input sequence {x_1, x_2, x_3}, the shift-reduce orders can be {S, S, R, S, R} and {S, S, S, R, R}, where S stands for SHIFT and R stands for REDUCE.", "For a sentence x, we can construct a valid binary syntactic tree by n SHIFT and n-1 REDUCE transitions a = {a_1, a_2, ..., a_{2n-1}}, in which a_t ∈ {SHIFT, REDUCE} specifies the transition taken at step t.", "The SHIFT transition adds a leaf node to the tree, while the REDUCE transition combines two nodes to form a parent node.", "Figure 1 illustrates two examples of how we can construct a binary tree using only SHIFT and REDUCE transitions, and how we can obtain different binary trees by varying the SHIFT-REDUCE transition orders.", "We design a transition generator G (Section 3.1) for generating transition orders a, G(x, x_target) → a, and a composition function C (Section 3.2) that composes the sentence x following the transition orders a into a sentence representation s, C(a, x) → s.",
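A tiny self-contained illustration of how a SHIFT/REDUCE sequence deterministically yields one binary tree (cf. Figure 1) may help. Composition here is plain tuple nesting; the model replaces it with a tree-LSTM cell (Section 3.2).

```python
# Build a binary tree over a sentence from a SHIFT ("S") / REDUCE ("R")
# transition sequence, mirroring the buffer/stack procedure in the text.
def build_tree(words, transitions):
    buffer = list(reversed(words))     # next word to shift is at the end
    stack = []
    for t in transitions:
        if t == "S":                   # SHIFT: move one word onto the stack
            stack.append(buffer.pop())
        else:                          # REDUCE: combine top two subtrees
            right, left = stack.pop(), stack.pop()
            stack.append((left, right))
    return stack[0]

print(build_tree(["x1", "x2", "x3"], ["S", "S", "R", "S", "R"]))
# (('x1', 'x2'), 'x3')
print(build_tree(["x1", "x2", "x3"], ["S", "S", "S", "R", "R"]))
# ('x1', ('x2', 'x3'))
```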
"Our ultimate goal is to use the sentence representation s for target-dependent classification.", "The objective is thus to minimize the negative log-likelihood (Eq. 1) with an L2 norm, in which θ denotes all the parameters of our model.", "The architecture of our proposed approach is illustrated in Figure 2, which is made up of two main components: a transition generator G and a composition function C.", "The transition generator is an RNN encoder-decoder that generates discrete target-specific SHIFT-REDUCE transition orders, given a sentence and the target of interest.", "The composition function is a tree-structured neural network that semantically composes the word constituents following the transition orders.", "The main challenges for such a framework are two-fold.", "(Figure 2: The framework of our proposed method.) On", "the one hand, the transition generator is fully unsupervised, as we do not resort to external syntactic resources.", "On the other hand, the transitions generated at each step are discrete, making it difficult to train and propagate errors to update the model parameters.", "We give details of the two components and how we address the challenges in this section.", "The basic idea of the transition generator is to generate different transition orders given different targets.", "We propose using the RNN encoder-decoder framework (Cho et al., 2014), which has shown capacity in shift-reduce parsing (Vinyals et al., 2015; Liu and Zhang, 2017b).", "A standard RNN encoder-decoder contains two recurrent neural networks, one for encoding a variable-length sequence into a vector representation and the other for decoding the representation back into another variable-length sequence.", "Encoder: We employ a standard Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997) as our encoder.", "Given the input sentence {x_1, x_2, ..., x_n}, we first obtain the word vectors {e(x_1), e(x_2), ..., e(x_n)} by looking them up in a pre-trained embedding matrix e.", "We reverse the input sentence and feed the word embeddings sequentially to the LSTM.", "The hidden states of each token, {h_1, h_2, ..., h_n}, are kept for the decoding stage.", "The hidden state and cell state of the last LSTM unit are used as the initial states for the decoder.", "Decoder: Following Bahdanau et al.
(2014), we use an attention-based decoder.", "The decoder aligns with all the encoder hidden states at each step of decoding to obtain a context vector $c_t$, such that each input word receives a different weight during decoding.", "We denote the hidden states of our decoder as $\{d_1, d_2, \ldots, d_{2n-1}\}$.", "The attention score over each encoder hidden state $h_i$ is computed by: $u_{it} = d_{t-1} \odot h_i$ (2), $a_{it} = \frac{\exp(u_{it})}{\sum_{i'} \exp(u_{i't})}$ (3), $c_t = \sum_{i=1}^{n} a_{it} h_i$ (4), in which $\odot$ denotes the element-wise dot product; $a_{it}$ is the normalized attention score and the context vector $c_t$ is a weighted sum of all the encoder hidden states.", "To enable the target of interest to influence the decoding process, we enrich the input of the decoder by concatenating the target entity.", "The hidden state of the decoder at time t is obtained by: $d_t = \mathrm{LSTM}(c_t \oplus e(a_{t-1}) \oplus e(x_{target}), d_{t-1})$ (5), in which $\oplus$ denotes the concatenation operation; $e(x_{target})$ is the embedding of the target entity; $e(a_{t-1})$ is the embedding of the last decoded transition and $c_t$ is the context vector.", "Decoding: In a supervised RNN decoder setting, the goal of each step is to estimate the conditional probability $P(a_t \mid a_{1:t-1}, c_t, d_t) = g(a_{t-1}, c_t, d_t)$ (6), in which $a_{1:t-1}$ are the previously decoded transitions, $c_t$ is the context vector, $d_t$ is the current decoder hidden state and g is a non-linear network.", "$P(a_t \mid a_{1:t-1}, c_t, d_t)$ is a distribution over the transition space {SHIFT, REDUCE}.", "By comparing the decoded outputs with the ground-truth labels, the prediction errors can back-propagate to update the parameters of the encoder-decoder network.", "However, this is no longer applicable in our setting, as we do not have any explicit supervision from external syntactic resources.", "To make training the transition generator possible, we resort to a reinforcement learning framework, obtaining the transitions by sampling from a policy network.", "We represent the current state $S_t$ by concatenating $e(a_{t-1})$, $e(x_{target})$, $c_t$ and $d_t$.", "The policy network $\pi(a_t \mid S_t)$ is defined by Eq 8,", "in which g is a one-layer non-linear feed-forward neural network.", "We decode the transition $a_t$ by sampling from the distribution given by the policy network.", "When a valid binary tree of a sentence is generated, we use the composition function to obtain the representation following the transition orders.", "We maintain two data structures during composition: a buffer that stores words yet to be processed and a stack that stores the partially completed subtrees.", "Initially, the stack is empty, and the buffer stores all the words in the sentence.", "The operations specified by SHIFT and REDUCE are as follows.", "For a SHIFT transition, the topmost word is popped from the buffer and pushed onto the top of the stack.", "For a REDUCE transition, the topmost two elements of the stack are popped and composed.", "The composed node is then pushed back onto the stack.", "To produce a valid binary tree, we follow Yogatama et al. 
(2016) to disallow the SHIFT transition when the buffer is empty and forbid the REDUCE transition when the stack has fewer than two elements.", "We use a tree-LSTM (Tai et al., 2015) to semantically compose the top two elements of the stack.", "Initially, the hidden state $h_t$ and the cell state $s_t$ of leaf nodes are given by another LSTM.", "The tree-LSTM works as follows: $[i_t; f_t^l; f_t^r; o_t; g_t] = [\sigma; \sigma; \sigma; \sigma; \tanh]\left(W \begin{bmatrix} h_t^l \\ h_t^r \end{bmatrix}\right)$ (9), $s_t = f_t^l \odot s_t^l + f_t^r \odot s_t^r + i_t \odot g_t$, $h_t = o_t \odot \tanh(s_t)$, in which $\odot$ denotes the element-wise dot product; $i_t$ and $o_t$ are the input and output gates, respectively; $f_t^l$ and $f_t^r$ are the left and right forget gates; $h_t^l$, $h_t^r$, $s_t^l$, $s_t^r$ are the hidden and cell states of the left and right nodes in the subtree.", "The hidden state of the topmost node is used as the representation of the input sentence.", "The goal of training is to optimize the parameters of the transition generator G and the composition function C.", "It is easy to optimize C: its output is directly connected to the classifier, so the classification loss can back-propagate to update its parameters.", "However, the transitions sampled from the policy network $\pi(a \mid S)$ are discrete, which makes G no longer differentiable with respect to our objective.", "A possible solution is to maximize the expected reward $E_{p(a; G)}[R(a)]$.", "As we are in a reinforcement learning setting, we immediately receive a reward $R(a)$ for transitions $a = \{a_1, a_2, \ldots, a_t\}$ at the end of the classification.", "The reward is defined as the logarithm of the classification probability for the right label $c_{target}$, $R(a) = \log P(c_{target} \mid C(a, x))$.", "However, it is computationally intractable to compute $E_{p(a; G)}[R(a)]$, as the number of possible transition orders a is exponentially large.", "To address this, we use the REINFORCE algorithm to approximate the gradients by running M samples.", "The term $\nabla_G \log p(a)$ can then be used to update G.", "The REINFORCE algorithm is unbiased but may have high variance.", "To reduce the variance, a widely used trick is to subtract a baseline from the reward.", "It has been theoretically proven that any baseline that does not depend on the actions is applicable.", "In this paper, we follow Rennie et al. 
(2016) to apply a self-critical baseline to the rewards.", "Rather than estimating a baseline reward, the self-critical method uses the outputs given by the test-time inference algorithm as the baseline.", "This can thus alleviate the over-fitting problem on the test dataset.", "At inference, we use a greedy decoding strategy, selecting the most probable transitions given by the policy network (Eq 8).", "$\nabla_G J(G) \approx \frac{1}{M} \sum_{m=1}^{M} \left[ \nabla_G \log p(a) \left( R_m(a) - R_m(\hat{a}) \right) \right]$ (12), where $\hat{a}$ denotes the greedily decoded transitions used as the self-critical baseline; a code sketch of this update follows this record. 4 Experiments and Results: The proposed approach is evaluated on two aspect-level tasks: (1) firm-oriented cumulative abnormal return prediction on formal financial news texts and (2) aspect-level sentiment analysis on informal social media texts.", "4.1 Firm-oriented cumulative abnormal return prediction: The firm-specific Cumulative Abnormal Return (CAR) prediction task (Chang et al., 2016) studies the impact of new information on a specific firm.", "Multiple firms may be involved in the same news event; however, the event can have different impacts on these firms.", "Conceptually, Abnormal Return is the difference between the actual return of a stock and its expected return.", "The expected return can be approximated by daily indexes, such as the S&P 500 index.", "For example, if a stock is expected to rise by 5% but on the event day it rises by 2%, then although it gives a positive return, the abnormal return is -3%.", "Cumulative Abnormal Return is the accumulated abnormal return in an event window, which is usually triggered by news events.", "We use a three-day window (-1, 0, 1), denoted as CAR_3, with the event day centered at day 0.", "We predict whether an event has a positive or negative impact on the cumulative abnormal return of a given firm.", "We use the same news dataset as Chang et al. (2016): abstracts extracted from the Reuters news dataset released by Ding et al. (2014; 2015; 2016).", "Compared to the full texts of news documents, abstracts are supposed to be more informative and less noisy.", "Ding et al. (2014) show that modeling abstracts alone can achieve comparable or even better performance compared to full texts in stock market prediction.", "To better interpret our approach, we only extract event days with a single news document, which covers over 70% of the cases in the dataset.", "This final dataset yields a total of 16,469 instances covering 1,291 firms, of which 10% are reserved for validation and 20% are used for testing.", "The numbers of positive and negative CAR_3 examples and the number of firms in the subsets are listed in Table 2.", "
4.1.2 Baseline: To evaluate the performance of our approach on formal news texts, we compare with state-of-the-art target-independent and target-dependent baselines.", "Among the baselines, Sentiment-based and Bi-LSTM are target-independent and learn generic representations for sentences, while Bi-LSTM + Attention and TGT-CTX-TLSTM are target-dependent.", "Sentiment-Based: Sentiments in breaking news, earnings reports and online message boards are found to be correlated with market volatility (Schumaker and Chen, 2009; Das and Chen, 2007).", "We adopt lexicon-based sentiment analysis as our baseline, using the sentiment lexicons released by Loughran and McDonald (2011).", "We follow the prior literature (Mayew and Venkatachalam, 2012) and use the count of positive words, the count of negative words, the difference between positives and negatives, and their length-normalized values as our feature vectors.", "Bi-LSTM: We stack a forward and a backward LSTM to capture the contextual representations of the sentence.", "The last hidden states of both directions are concatenated and then used for classification. (Table 3: hyper-parameters for the firm-oriented cumulative abnormal return task: word dimension 200; LSTM hidden dimension 200; dropout probability 0.5; batch size 64; initial learning rate 0.0005.)", "Bi-LSTM + Attention: We extend the vanilla Bi-LSTM by adding an attention mechanism over the hidden states.", "We concatenate the hidden states $h_t = [h_t^l; h_t^r]$ of each input token $x_t$, and the target representation $e(x_{target})$ is adopted to weight each of the hidden states.", "$u_t = v^{\top} \tanh(W_1 e(x_{target}) + W_2 h_t + b)$ (13), $a_t = \mathrm{softmax}(u_t)$ (14), $d = \sum_t a_t h_t$ (15). TGT-CTX-TLSTM: The method of Chang et al. (2016), which we follow and use as our main baseline.", "It is a hybrid model which integrates both sequential information and syntactic parse-tree information.", "As the first step, the abstract is parsed with an external syntactic parser to obtain the dependency relations between the words.", "The parse trees are then adapted and binarized depending on the words' distances to the target in the dependency graph.", "A tree-structured Long Short-Term Memory network (Tai et al., 2015) is then applied to learn a vector representation of the binarized tree structure.", "The hyper-parameters used in this paper are listed in Table 3.", "We pretrain word vectors with the Word2Vec (Mikolov et al., 2013) tool on the news dataset released by Ding et al. (2014); these are fine-tuned during training.", "The embeddings of target firms are obtained by averaging the embeddings of their constituent words.", "The macro-F1 scores of our method and the baselines are presented in Table 4.", "
(Figure 3: accuracy with respect to sentence length.) The sentiment-based method", "gives the highest F1 score on the positive class.", "However, its performance is not consistent on the negative class, which suggests that it tends to misclassify sentences as positive.", "Bi-LSTM + Attention outperforms the vanilla model without attention and is much more robust on both the positive and negative classes.", "Our approach achieves an overall macro-F1 of 58.2%, with F1 scores of 57.2% and 59.2% on the positive and negative classes, respectively.", "Compared to the state-of-the-art model that exploits automatically parsed structures, we obtain an over 2% absolute gain without using explicit supervision in learning the structures.", "Longer sentences are much more challenging for syntactic parsers.", "To gain insight into the performance of our approach on long sentences, we further inspect the accuracies with regard to different sentence lengths.", "As shown in Figure 3, we compare with the structure-dependent baseline TGT-CTX-TLSTM.", "We divide the sentences into seven bins, each of which contains sentences with length in [5i, 5(i + 1)].", "TGT-CTX-TLSTM gives higher accuracies on sentences with shorter lengths, while its accuracy declines sharply on sentences with lengths over 30.", "Our approach is more consistent on both long and short sentences.", "As the sentence length grows, the accuracy of our model gradually increases, showing its robustness and effectiveness across sentences of variable lengths.", "To verify our proposed approach on informal social media texts, we apply it to aspect-level sentiment analysis on tweets.", "Aspect-level sentiment analysis aims to identify sentiment polarities towards specific targets mentioned in a sentence.", "Target-specific sentence representations can be naturally applied to this task.", "(Table 5: statistics of the aspect-level sentiment analysis dataset: Training: 6,248 targets, 1,568 positive, 1,560 negative, 3,127 neutral; Testing: 692 targets, 173 positive, 173 negative, 346 neutral.) 4.2.1 Dataset: We apply our model to a benchmark aspect-level sentiment analysis dataset used in previous work (Dong et al., 2014).", "The statistics of the dataset are shown in Table 5.", "The target entities and corresponding ground-truth labels are annotated.", "The labels belong to one of {positive, neutral, negative}; thus the task is a three-way classification.", "Dong et al. (2014): They adapt the parse tree of a sentence concerning the target with predefined rules and use a recursive neural network (Socher et al., 2013) to learn a target-specific sentence representation.", "The parameter settings are listed in Table 6.", "We use 100-dimensional GloVe vectors which are pre-trained on a large Twitter corpus (Pennington et al., 2014) and fine-tuned during training.", "The commonly used metrics classification accuracy and macro-F1 are adopted to evaluate the performance.", "The final results on the aspect-level sentiment analysis task are shown in Table 7.", "Dong et al. (2014) is used as our main baseline, as they build target-specific sentence representations over adapted tree structures.", "Neural models outperform Jiang et al. (2011), which relies heavily on feature engineering, showing the effectiveness of automatically induced features.", "Our approach gives superior performance compared to Dong et al. 
(2014), which operates on parsed trees.", "We achieve 68.2% classification accuracy and 66.3% macro-F1.", "We do not rely on a preprocessing syntactic parser as the first step to obtain the tree structures.", "On the one hand, social media texts are informal and extremely noisy, which remains a challenge for syntactic parsers.", "The pipeline-style architecture of Dong et al. (2014) cascades parse errors to later stages, which hurts performance on downstream tasks.", "On the other hand, the adapted tree structures in Dong et al. (2014) are fixed during training, while in our approach the tree structures are tuned dynamically during training, so as to find the optimal structures that benefit downstream classification tasks.", "To gain further insight into the induced structures, we inspect in this section the shift-reduce trees generated by our approach.", "We present two examples for which our model gives high confidence in Figure 4.", "(Figure 4: two tree structures generated by our model, with stop words and punctuation removed; the upper tree is for the sentence Nike NKE.N has sued Wal-Mart WMT.N, saying the world's largest retailer is selling athletic shoes that infringe on its design patents, and the bottom one is for the sentence Walgreen WAG.N, which operates the largest U.S. drugstore chain, raised its dividend on Monday.) For the first sentence, the core part Nike sued Wal-Mart and the rest of the sentence are in two separate subtrees, which potentially reduces information loss about the key event when composing them into the sentence representation.", "Similarly, for the sentence Walgreen WAG.N, which operates the largest U.S. drugstore chain, raised its dividend on Monday., the model learns to make the target Walgreen and the key event raised its dividend on Monday close to each other in the tree, although many words lie between them in the sequence.", "These are good examples given by our model; we also find many highly left- or right-biased tree structures.", "Intuitively, the completely left- and right-biased tree structures are equivalent to forward and backward sequential structures, respectively.", "It is beneficial for numerous tasks, such as aspect-level sentiment analysis and stance detection, to have sentence representations tailored to specific targets.", "Early approaches rely on feature engineering by extracting target-dependent features (Jiang et al., 2011), while recent work mainly focuses on semantic composition over the vector space with deep neural models.", "Depending on how they model the target and context, we further classify related work into three categories.", "The first category relies on syntactic parse trees.", "Dong et al. (2014) are among the first to exploit tree structures: they adapt the parse trees based on the dependency relations between the words and the target, and then use a recursive neural network to learn the sentence representations.", "Similarly, Chang et al. (2016) explore a hybrid model that considers both the sequential and the structural information of a sentence.", "Nguyen et al. (2015) extend Dong et al. 
(2014) by combining the constituency tree and the dependency tree of a sentence.", "The performance of their methods relies heavily on external parsers, which are subject to noise in informal social media texts.", "The second category models the interactions between the target and its left and right contexts.", "Vo and Zhang (2015) split a sentence into three parts and use pooling functions to automatically induce features for a given target.", "Similar to Vo and Zhang (2015), Zhang et al. (2016) exploit gates instead of pooling functions to control the information flow of the contexts.", "Tang et al. (2015) concatenate the word embeddings and target entity embeddings and use two LSTMs to encode the left and right contexts.", "Liu et al. (2017a) propose to use the attention mechanism to assign different weights to the left and right contexts depending on the target.", "The third category controls the information flow from the target to the sentence representation.", "Augenstein et al. (2016) use conditional encoding to encode the target and use it as the initial state for the sentence representation.", "Our method belongs to the first category, which exploits tree structures.", "The main difference is that we do not use external supervision from dependency parsers or treebank annotations.", "Our work is related to syntactic constituency parsing as we build the tree structure in a transition-based", "manner.", "Syntactic constituency parsing is a fundamental task in natural language processing, which uses phrase structure to organize words into nested constituents.", "Early approaches rely on probabilistic context-free grammars or transition-based models with rich features (Collins, 1997; Klein and Manning, 2003).", "Recently, recursive neural networks (Socher et al., 2013) and neural transition-based models (Liu and Zhang, 2009) have also been applied, achieving competitive or even better performance compared to traditional state-of-the-art approaches that rely on hand-crafted features.", "Vinyals et al. (2015), from whom we draw inspiration, use the RNN encoder-decoder to encode the sentence and generate its corresponding full parse tree.", "Bowman et al. (2016) propose a Stack SPINN framework that integrates parsing and interpreting the sentence in a hybrid model.", "Yogatama et al. 
(2016) extend their model by using reinforcement learning to build tree structures that can improve the performance of end tasks.", "We differ from the aforementioned approaches in two aspects.", "First, we do not use any explicit supervision to guide the decoder.", "The parameters of our framework are optimized by the objective of the end tasks.", "Another difference is that we learn target-specific instead of general-purpose sentence representations.", "In this paper, we propose a framework that automatically induces target-specific sentence representations over tree structures without recourse to external syntactic resources.", "Experimental results on formal and informal texts showed that our approach is both robust and effective compared to previous work that operates on parsed trees.", "Moreover, the approach gives intuition about how sentence structures are composed from their word constituents concerning a specific target.", "We would like to thank the anonymous reviewers for their insightful comments and suggestions to help improve this paper.", "This work was partly supported by the National Key Basic Research Program of China via grant 2014CB340503 and the National Natural Science Foundation of China (NSFC) via grants 61472107 and 61702137." ]
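The record above walks through building a binary tree from SHIFT/REDUCE transitions over a buffer and a stack (Figure 1, Section 3.2). Below is a minimal runnable sketch of that procedure; the names (valid_actions, compose, run_transitions) and the string-based composition are illustrative assumptions, not the paper's code, which composes subtrees with a tree-LSTM.

```python
# Minimal sketch of the SHIFT/REDUCE tree construction described above.
# `compose` is a placeholder: the paper composes subtrees with a tree-LSTM;
# here we build a bracketed string just to make the tree shape visible.

SHIFT, REDUCE = "S", "R"

def valid_actions(buffer_len, stack_len):
    """Constraints from the text: no SHIFT on an empty buffer,
    no REDUCE when the stack has fewer than two elements."""
    actions = []
    if buffer_len > 0:
        actions.append(SHIFT)
    if stack_len >= 2:
        actions.append(REDUCE)
    return actions

def compose(left, right):
    return f"({left} {right})"  # stand-in for the tree-LSTM cell

def run_transitions(words, transitions):
    buffer = list(reversed(words))  # pop() now yields the leftmost word first
    stack = []
    for a in transitions:
        assert a in valid_actions(len(buffer), len(stack)), "invalid transition"
        if a == SHIFT:
            stack.append(buffer.pop())
        else:  # REDUCE: pop the two topmost subtrees and compose them
            right, left = stack.pop(), stack.pop()
            stack.append(compose(left, right))
    assert len(stack) == 1 and not buffer  # exactly one finished tree
    return stack[0]

# n words need n SHIFTs and n-1 REDUCEs; different orders give different trees,
# mirroring the two transition orders in the Figure 1 caption.
print(run_transitions(["x1", "x2", "x3"], [SHIFT, SHIFT, REDUCE, SHIFT, REDUCE]))  # ((x1 x2) x3)
print(run_transitions(["x1", "x2", "x3"], [SHIFT, SHIFT, SHIFT, REDUCE, REDUCE]))  # (x1 (x2 x3))
```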
[ "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "method", "method", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "other", "other", "other", "other", "method", "other", "other", "objective", "objective", "objective", "objective", "objective", "result", "abstain", "other", "other" ]
[ "We probe pre-trained transformer language models for bridging inference.", "We first investigate individual attention heads in BERT and observe that attention heads at higher layers prominently focus on bridging relations in-comparison with the lower and middle layers, also, few specific attention heads concentrate consistently on bridging.", "More importantly, we consider language models as a whole in our second approach where bridging anaphora resolution is formulated as a masked token prediction task ( Of-Cloze test ).", "Our formulation produces optimistic results without any fine-tuning, which indicates that pre-trained language models substantially capture bridging inference.", "Our further investigation shows that the distance between anaphor-antecedent and the context provided to language models play an important role in the inference.", "Bridging inference involves connecting conceptually related discourse entities anaphors and antecedents (Clark, 1975).", "A bridging anaphor shares non-identical relation with its antecedent and depends on it for complete interpretation.", "This differs from coreference resolution which links mentions that refer to the same entity (i.e., mentions in the same entity share identical relations).", "Consider the following example In Poland's rapid shift from socialism to an undefined alternative, environmental issues have become a cutting edge of broader movements to restructure the economy , cut cumbersome bureaucracies , and democratize local politics. Bridging inference connects the anaphor the economy and its antecedent Poland and deduces that the economy specifically refers to the economy of Poland.", "We want to investigate if the pre-trained transformer language models capture any bridging inference information.", "Recently there has been an increasing interest in analyzing pre-trained language models' ability at capturing syntactic information (Clark et al., 2019), semantic information (Koval-eva et al., 2019), as well as commonsense knowledge (Talmor et al., 2020).", "There are also a few studies focusing on probing coreference information in pre-tained language models (Clark et al., 2019; Sorodoc et al., 2020).", "So far, there has no work on analyzing bridging, which is an important type of entity referential information.", "We try to fill this gap in our work.", "We employ two different but complementary approaches for the probing of pre-trained transformer language models for bridging inference.", "In the first approach (Section 4), we investigate the core internal part of transformer models self-attention heads in vanilla BERT (Devlin et al., 2019).", "We look at the attention heads of each layer separately and measure the proportion of attention paid from anaphor to antecedent and vice versa.", "This captures the magnitude of bridging signal corresponding to each attention head.", "We observed that attention heads of higher layers are more active at attending at bridging relations as well as some of the individual attention heads prominently look at the bridging inference information.", "In the second approach (Section 5), we treat pre-trained transformer language models as a black box and form bridging inference as a masked token prediction task.", "This formulation takes into consideration the whole architecture and weights of the model rather than concentrating on individual layers or attention heads, thus, complementing our first approach where we looked at the individual parts of the transformer model.", "For each bridging 
anaphor, we provide an input of the form context, anaphor, of [MASK] to the language models and obtain the scores of different antecedent candidates for the masked token.", "We then select the highest-scoring candidate as the predicted antecedent.", "Surprisingly, the best variation of this approach produces a high accuracy score of 28.05% for bridging anaphora resolution on ISNotes (Markert et al., 2012) data without any task-specific fine-tuning of the model.", "On the same corpus, the current state-of-the-art bridging anaphora resolution model BARQA (Hou, 2020a) achieves an accuracy of 50.08%, while a solid mention-entity pairwise model with carefully crafted semantic features (Hou et al., 2013) produces an accuracy score of 36.35%.", "This shows that substantial bridging information is captured in the pre-trained transformer language models.", "Bridging inference requires both commonsense world knowledge and context-dependent text understanding.", "The above-mentioned fill-in-the-gap formulation for the antecedent selection task is flexible enough to easily explore the role of different types of context for bridging inference.", "Our analysis shows that pre-trained language models capture bridging inference substantially; however, the overall performance depends on the context provided to the model.", "It is also observed that bigger language models are more accurate at capturing bridging information.", "This work has two main contributions.", "First, we thoroughly investigate bridging information encoded in pre-trained language models using two probing approaches ( attention heads analysis and fill-in-the-gap ).", "Second, we provide a deeper understanding of the bridging referential capabilities in the current pre-trained language models.", "Our experimental code is available at", "https://github.com/oapandit/probBertForbridging .", "Entity Referential Probing.", "Previous studies on entity referential probing mainly focus on coreference.", "Clark et al. (2019) showed that certain attention heads in pre-trained BERT correspond well to the linguistic knowledge of coreference.", "Particularly, the authors found that one of BERT's attention heads achieves reasonable coreference resolution performance compared to a string-matching baseline and performs close to a simple rule-based system.", "Sorodoc et al. 
(2020) investigated the factors affecting pronoun resolution in transformer architectures.", "They found that transformer-based language models capture both grammatical properties and semantico-referential information for pronoun resolution.", "Recently, Hou (2020b) analyzed the attention patterns of a fine-tuned BERT model for information status (IS) classification and found that the model pays more attention to signals that correspond well to the linguistic features of each IS class.", "For instance, the model learns to focus on a few premodifiers (e.g., more, other, and higher) that indicate the comparison between two entities.", "In this work, we focus on probing bridging, which is a more challenging entity referential relation and one of the oldest topics in computational linguistics (Clark, 1975; Bos et al., 1995; Asher and Lascarides, 1998).", "Attention Analysis.", "Recently there has been an increasing interest in analyzing attention heads in transformer language models.", "Although some researchers argue that attention does not explain model predictions (Jain and Wallace, 2019), analyzing attention weights can still help us understand the information learned by the models (Clark et al., 2019).", "Researchers have found that some BERT heads specialize in certain types of syntactic relations (Htut et al., 2019).", "Kovaleva et al. (2019) reported that pre-trained BERT's heads encode information correlated to FrameNet's relations between frame-evoking lexical units (predicates, such as address ) and core frame elements (such as issues ).", "In our work, we try to analyze whether certain attention heads in a pre-trained BERT model capture bridging relations between entities in an input text.", "Fill-in-the-gap Probing.", "One of the popular approaches to probe pre-trained language models is fill-in-the-gap probing, in which researchers construct various probing datasets to test a model's abilities on different aspects.", "Goldberg (2019) found that BERT considers subject-verb agreement when performing the cloze task.", "Petroni et al. (2019) reported that factual knowledge can be recovered surprisingly well from pre-trained language models.", "For instance, JDK is developed by [Oracle].", "Similarly, we apply fill-in-the-gap to probe bridging by formulating bridging anaphora resolution as an of-Cloze test .", "Commonsense Knowledge Probing.", "A lot of work has been carried out to analyze various types of commonsense knowledge encoded in transformer language models.", "Talmor et al. 
(2020) constructed a set of probing datasets and tested whether specific reasoning skills, such as age comparison and antonym negation, are captured by pre-trained language models.", "Da and Kasai (2019) found that pre-trained BERT fails to encode some abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned.", "In our work, we focus on investigating the effect of context on bridging inference using a well-established task on bridging resolution.", "We extensively analyze the impact of different contexts on bridging anaphora resolution.", "We found that a pre-trained BERT model achieves reasonable results for bridging anaphora resolution by using the word of as the additional context.", "This indicates that pre-trained language models capture certain commonsense world knowledge for bridging.", "In this paper, we mainly investigate the following research questions:", "How important are the self-attention patterns of different heads for bridging anaphora resolution?", "Do pre-trained LMs capture information beneficial for resolving bridging anaphora in English?", "How do the anaphor-antecedent distance and the context influence pre-trained language models for bridging inference?", "We designed a series of experiments to answer these questions, which will be detailed in the coming sections.", "In these experiments, we used the PyTorch (Wolf et al., 2020) implementations of the BERT-base-cased, BERT-large-cased, RoBERTa-base and RoBERTa-large pre-trained transformer language models with the standard number of layers, attention heads, and parameters.", "In the attention-head-based experiments, we limited our investigation to the BERT-base-cased model, as it is relatively small compared to the other models and its findings can be generalized to the other models as well.", "Probing Dataset: We used the ISNotes (Markert et al., 2012) dataset for all experiments.", "We choose this corpus because, among all available English bridging corpora (Roesiger et al., 2018), it contains unrestricted anaphoric referential bridging annotations covering a wide range of different relations.", "(Figure 1: bridging signals with the BERT-base-cased model when only the anaphor and antecedent sentences are provided.) ISNotes contains 663 bridging anaphors, but only 622 anaphors have noun phrase", "antecedents.", "In our experiments, we only consider these 622 anaphors for investigation.", "For any anaphor, the predicted antecedent is selected from a set of antecedent candidates.", "This set is formed by considering all the mentions which occur before the anaphor.", "We obtained the candidate set for each anaphor by considering the gold mentions annotated in ISNotes.", "Further, we observed that only 531 anaphors have antecedents in either the previous 2 sentences before the anaphor or the first sentence of the document.", "Therefore, in the experiments where antecedent candidates are considered from the window of the previous two sentences plus the document's first sentence, only 531 anaphors are considered.", "In all the experiments, accuracy is measured as the ratio between correctly linked anaphors and the total anaphors used in that particular experiment (not the total of 663 anaphors).", "Attention heads are an important part of transformer-based language models.", "Each layer consists of a certain number of attention heads depending on the model design, and each attention head assigns a different attention weight from every token of the input sentence to all the 
tokens.", "In our approach, we measure the attention flow between anaphors and antecedents for each attention head separately.", "In this experiment we investigate all the attention heads of every layer one-by-one.", "Specifically, the BERT-base-cased model used for probing contains 12 layers and 12 attention heads at each layer.", "Therefore, we investigate 144 attention heads for their ability to capture bridging signals.", "We look for two distinct bridging signals one from anaphor to antecedent and other from antecedent to anaphor.", "The bridging signal from anaphor to antecedent is calculated as the ratio of the attention weight assigned to antecedent and the total cumulative attention paid to all the words in the input.", "Similarly, the bridging signal from antecedent to anaphor is found in a reverse way.", "There are two difficulties while getting the attention weights corresponding to anaphor or antecedent.", "First, the anaphor or antecedent can be a phrase with multiple words.", "So, we need to decide how to aggregate words' weights.", "For this, we decide to consider the semantic heads of both anaphor and antecedent, and get the attention weight between them.", "For instance, the semantic head for the political value of imposing sanction against South Africa is value .", "Most of the time, a semantic head of an NP is its syntactic head word as in the above example.", "However, for coordinated NPs such as the courts and the justice department , the syntactic head will be and which does not reflect this NP's semantic meaning.", "In such cases, we use the head word of the first element as its semantic head (i.e., courts ).", "Secondly, transformer language models use the wordpiece tokenizer to break words further.", "This produces multiple tokens from a single word if this word is absent from the language model's dictionary.", "Here, for a bridging anaphor a and its head word a h , we first calculate the average weight of all word piece tokens of the head word a h to other words.", "From these weights, we consider the weight from the anaphor a to its antecedent ( w 1 ).", "Subsequently, we add weights from a h to all other tokens present in the sentence and normalize the weight using sentence length ( w 2 ).", "Note that we neglected weights assigned to special tokens (i.e. 
[CLS], [SEP], [PAD], etc.) while calculating both weights, as previous work suggests that these special tokens are heavily attended in deep heads and might be used as a no-op for attention heads (Clark et al., 2019).", "Finally, the bridging signal is measured as the ratio between $w_1$ and $w_2$, as mentioned earlier (a code sketch of this computation follows this record).", "We provide the sentences containing a bridging anaphor ( Ana ) and its antecedent ( Ante ) to the pre-trained BERT model as a single sentence without the [SEP] token in between.", "However, an anaphor and its antecedent do not always lie in the same or adjacent sentence(s).", "Therefore, we design two different experiments.", "In the first setup, we provide the model with only those sentences which contain Ana and Ante while ignoring all the other sentences in between.", "This setting is a bit unnatural, as we are not following the original discourse narration.", "In the second setup, we provide the model with the sentences which contain Ana and Ante as well as all the other sentences between Ana and Ante .", "Note that in both experiments we add markers to denote the anaphor and its antecedent in order to get the exact corresponding attention weights.", "For the input of only sentences containing anaphors and antecedents, we plot the bridging signals corresponding to each attention head separately (see the heatmaps in Figure 1).", "The left heatmap shows the signals from anaphors to antecedents and the right one shows the signals from antecedents to anaphors.", "Both heatmaps are based on the pre-trained BERT-base-cased model.", "The x-axis represents the attention heads (1-12) and the y-axis represents the layers (1-12).", "A darker shade indicates a stronger bridging signal and a brighter shade indicates a weaker signal.", "The plot shows that the lower layers capture a stronger bridging signal in comparison with the middle layers, with an exception at the first attention head in the fifth layer.", "Also, the higher layers pay the most attention to bridging relations in comparison to the middle and lower layers.", "The observation is consistent in both directions: from anaphors to antecedents and from antecedents to anaphors.", "As stated earlier, for an anaphor, the antecedent can lie in the same sentence or any previous sentence.", "This demands a separate investigation of bridging signals depending on the distance (measured in terms of sentences) between anaphors and antecedents.", "Therefore, in Figure 2 we plot the bridging signals captured by all attention heads depending on the distance between anaphors and antecedents.", "The first plot shows the signals between anaphors and antecedents where the distance between them is 0 (i.e., they occur in the same sentence).", "The second and the third plots show the bridging signals between anaphors and antecedents in which the anaphor-antecedent sentence distance is 1 and 2, respectively.", "In ISNotes, 77% of anaphors have antecedents occurring in the same sentence or up to two sentences prior to the anaphor.", "The remaining anaphors have distant antecedents, and each distance group only contains a small number of anaphor-antecedent pairs.", "Therefore, we divide the remaining anaphors into two coarse groups.", "The plots in Figure 2d and Figure 2e are plotted by combining anaphor-antecedent pairs which are 3 to 5 sentences and 6 to 10 sentences apart, respectively.", "Note that we could not plot attention signals for bridging pairs with a sentence distance longer than 10 sentences because of the limitation of the input size 
in BERT.", "We observe that, the patterns which are visible with only anaphor-antecedent sentences as the input (Section 4.3) are consistent even with considering all the sentences between anaphors and antecedents.", "It is clear that higher layers attend more to bridging relations in comparison with lower and middle layers.", "Also, the lower layers fail to capture bridging signal as the distance between anaphors and antecedents increases.", "Attention weights assigned by certain attention heads (5:1, 9:12, 11:3 and 12:2-4) are fairly consistent.", "One more important thing to observe is that as the distance between anaphors and antecedents increases the overall bridging signal decreases.", "This can be observed by looking at all the heatmaps in Figure 2 as the heatmaps with lower distances are on the darker side.", "Based on the results from the previous two experiments, we observed that in the pre-trained BERT model, the higher layers pay more attention to bridging relations in comparison with the middle and the lower layers.", "This observation is in-line with other studies in which the authors found that simple surface features were captured in the lower layers and complex phenomenons like coreference were captured in the higher layers (Jawahar et al., 2019).", "Also, the overall attention decreases with the increase in the distance between anaphors and antecedents.", "We also observed that there are some prominent attention heads which consistently capture bridging relations (5:1, 9:12, 11:3 and 12:2-4).", "In order to check which bridging relations are easier or harder for these prominent attention heads to capture, we Easy Bridging Relations The move will make the drug available free of charge for a time to children with the disease and symptoms of advanced infection .", "further investigated qualitatively to identify bridging pairs that get higher or lower attentions in these attention heads.", "Specifically, we consider pairs which have the bridging signal ratio (defined in Section 4.1) more than 70% as easier bridging relations for BERT heads to recognize.", "If the bridging signal ratio is less than 10%, then the corresponding bridging relation is considered as difficult for BERT heads to identify.", "We list a few easy and difficult examples in Table 1.", "In general, we observe that semantically closer pairs are easy for prominent heads to identify (e.g., house-basement, disease-infection).", "On the other hand, pairs that are distant and require more context-dependent as well as common-sense knowledge inference are difficult for the prominent heads to recognize.", "The transformer-based language models are trained with an objective to predict the masked tokens given the surrounding context.", "Thus, they can also produce a score for a word which can be placed at the masked token in a given sentence.", "We make use of this property of the language models and propose a novel formulation to understand the bridging anaphora resolution capacity of the pre-trained language models.", "The syntactic prepositional structure ( X of Y , such as the door of house or the chairman of com-pany) encodes a variety of bridging relations.", "Previous work has used this property to design features and develop embedding resources for bridging (Hou et al., 2013; Hou, 2018a,b).", "Inspired by this observation, we formulate bridging anaphora resolution as a cloze task.", "Specifically, given a bridging anaphor and its context, we insert of [MASK] after the head word of the anaphor (see Example 
1).", "We then calculate the probability of each candidate to be filled as the mask token.", "The highest scoring candidate is selected as the predicted antecedent for the anaphor.", "One of the advantages of our formulation is that we can easily control the scope of the context for each bridging anaphor (e.g., no-context , local context or global context ).", "This allows us to test the effect of different types of context for bridging inference.", "(1) Original context : The survey found that over a three-year period 22% of the firms said employees or owners had been robbed on their way to or from work or while on the job.", "Seventeen percent reported their customers being robbed.", "Cloze test context : The survey found that over a three-year period 22% of the firms said employees or owners had been robbed on their way to or from work or while on the job.", "Seventeen percent of [MASK] reported their customers being robbed.", "Recall that in our Of-Cloze test , antecedent candidates are provided and the highest scoring candidate is selected as the predicted antecedent.", "These candidates are formed by considering mentions which are occuring prior to the anaphor.", "We design two different experiment sets based on the scope of antecedent candidates and the surrounding context.", "Candidates Scope In the first set of experiments, we consider two different sets of antecedent candidates for an anaphor a .", "The first set contains salient and nearby mentions as antecedent candidates.", "Here, mentions only from the first sentence of the document, previous two sentences preceding a and the sentence containing a are considered as candidates.", "This setup follows previous work on selecting antecedent candidates (Hou, 2020a).", "The second set contains all mentions occurring before the anaphor a from the whole document.", "The second setup of forming antecedent candidates is more challenging than the first one because the number of candidates increases which makes selecting the correct antecedent difficult.", "Next, we provide the same context for anaphors in both of the experiments described above.", "We construct the context c for the bridging anaphor a .", "Precisely, c contains the first sentence of the document, the previous two sentences occurring before a , as well as the sentence containing a .", "We replace the head of a as of [MASK].", "We also compare this fill-in-the-gap probing approach with the attention heads-based approach for resolving bridging anaphors.", "Specifically, we use the prominent heads in BERT for identifying bridging relations from Section 4. Here, we obtained attention weights from an anaphor head to all antecedent candidate heads by adding attentions from prominent heads 5:1, 9:12, 11:3, and 12:2-4.", "Then the highest scoring candidate is predicted as the antecedent for the anaphor.", "Context Scope In the second set of experiments, we concentrate on probing the behavior of language models at capturing bridging relations with different contexts.", "We experiment with the following four settings:", "anaphor phrase (with of [MASK] being inserted", "inserted after the anaphor's head word) is given as the input to the model.", "", "b. Anaphor sentence: the sentence containing the anaphor is provided.", "The phrase of [MASK] is inserted after the head word of the anaphor.", "", "c. Ante+Ana sentence: on top of b, the sentence containing the antecedent is also included in the context.", "", "d. 
More context: on top of b, the first sentence from the document as well as the previous two sentences preceding the anaphor are included.", "Without of Context: To test the effect of the strong bridging-indicating signal of , we further execute another set of experiments.", "Specifically, we remove of from anaphor head of [MASK] and instead provide anaphor head [MASK] for each type of context described above.", "Perturbed Context: In this setting, for each type of context mentioned above, we perturb the context by randomly shuffling the words in the context except for the anaphor and antecedent phrases.", "Note that we still have the of indicator in this setup.", "Table 2 shows the accuracy of using only the prominent heads and of our Of-Cloze test approach for bridging anaphora resolution.", "All experiments are based on the same context (i.e., the sentence containing the anaphor, the previous two sentences preceding the anaphor, as well as the first sentence from the document).", "We find that the Of-Cloze probing approach achieves a higher result in comparison to the prominent-attention-head approach (31.64% vs. 20.15%) under the same conditions.", "One reason might be that although the other attention heads do not significantly attend to bridging relations, cumulatively they are effective.", "We also observe that in the Of-Cloze test, the results of using salient/nearby mentions as antecedent candidates are better than choosing antecedents from all previous mentions (Row (2) vs. Row (3), and Row (2) vs. Row (4)).", "This is because the model has to choose from a smaller number of candidates in the first case, as the average number of candidates is smaller (Table 2 column headers: Antecedent Candidate Scope, No.).", "We further divide the 622 anaphors in Row (3) into two groups (Row (4) and Row (5) in Table 2) depending on whether the corresponding antecedents occur in the provided contexts.", "It can be seen that the performance is significantly better when the antecedents occur in the contexts.", "Finally, when comparing the results of each language model in each row separately, it seems that the bigger models are always better at capturing bridging information.", "In general, the RoBERTa-large model performs better than the other models except when antecedents do not occur in the provided contexts (Row (5)).", "Note that the results in Table 2 are not calculated over all 663 anaphors in ISNotes.", "Therefore, if the results are normalized over all anaphors, then we get the best result with the RoBERTa-large model (28.05%), which is reasonably good in comparison with the state-of-the-art result of 50.08% (Hou, 2020a), given that the model is not fine-tuned for the bridging task.", "We further analyze the results of choosing antecedents obtained using the BERT-base-cased model with all previous mentions as the antecedent candidate scope in our Of-Cloze test probing experiment (Row (3) in Table 2) to understand the effect of the distance between anaphors and antecedents.", "The results are shown in Table 3. 
In general, it seems that the accuracy decreases as the distance between anaphors and antecedents increases, except when antecedents are from the first sentences of the documents.", "This is related to the position bias in news articles from ISNotes.", "Normally, globally salient entities are introduced at the beginning of a news article, and these entities are preferred as antecedents.", "The other reason for the lower results in the case of antecedents that are more than two sentences away might be that these antecedents are absent from the provided context.", "The results of experiments with different types of context are shown in Table 4. All experiments are based on the BERT-base-cased model with all previous", "mentions as the antecedent candidate scope.", "We refer to this model as BERT-Of-Cloze in the following discussion.", "In the first column of the table, BERT-Of-Cloze achieves an accuracy score of 17.20% with only the anaphor information plus of [MASK] .", "We can see that the results improve incrementally with the addition of context.", "More specifically, the accuracy score improves from 17.20% to 22.82% by adding the sentences containing the anaphors.", "Adding the sentences which contain the antecedents ( ana + ante sent. ) further improves the accuracy score to 27.81%.", "Finally, adding more local context and the first sentence leads to an accuracy score of 26.36%.", "Note that compared to ana + ante sent. , more context represents a more realistic scenario in which we do not assume that the antecedent position information is known beforehand.", "In general, the results in the first column of Table 4 indicate that the model can leverage context information when predicting antecedents for bridging anaphors.", "The results drop drastically for all context scopes when of is removed from the anaphor of [MASK] phrase (Table 4, column 2).", "Without this indicator, the language model cannot make sense of two adjacent tokens such as consultant company .", "It is interesting to see that the results also drop drastically when we perturb the context between the anaphor and antecedent (Table 4, last column).", "This establishes the importance of meaningful context for performing bridging inference effectively in transformer language models.", "We analyzed anaphor-antecedent pairs that are linked wrongly by the Of-Cloze formulation and observed some common errors.", "Failure at capturing sophisticated commonsense knowledge: We found that pre-trained transformer language models such as BERT acquire simple commonsense knowledge; therefore, they can link anaphor-antecedent pairs such as sand-dunes and principal-school .", "But they fail at capturing sophisticated knowledge, such as consultant-Delmed (a company) and pool-OPEC (Organization of the Petroleum Exporting Countries) .", "This might be happening because of the rare co-occurrence of these pairs in the original text on which BERT is pre-trained.", "Also, BERT has inherent limitations at acquiring such structured knowledge (Park et al., 2020).", "Language modelling bias: In our Of-Cloze test probing, we use pre-trained transformer language models without fine-tuning.", "As a result, the model fills masked tokens so as to fit the language modeling objective, not bridging resolution.", "Thus, sometimes the selected token makes perfect sense in the single sentence, but the choice is incorrect in the broader context.", "Consider the example Only 22% of [MASK] supported private security patrols [...].", "BERT predicts police as a 
suitable antecedent that produces a meaningful local sentence.", "However, the correct antecedent is correspondents according to the surrounding context of this sentence.", "Unsuitable formulation for set-relations: Our Of-Cloze formulation produces awkward phrases for some bridging pairs that possess set-relations.", "Considering the bridging pair One man-employees , the model should assign a high score to the phrase One man of employees .", "But, as this phrase is quite clumsy, BERT, naturally being a language model, assigns low scores to such pairs.", "We investigated the effectiveness of pre-trained transformer language models in capturing bridging relation inference by employing two distinct but complementary approaches.", "In the first approach, we probed individual attention heads in BERT and observed that attention heads from higher layers prominently captured bridging compared to the middle and lower layers, and that some specific attention heads consistently looked for bridging relations.", "In our second approach, we considered using language models for bridging anaphora resolution by formulating the task as an Of-Cloze test.", "We carefully designed experiments to test the influence of different types of context on language models' ability to resolve bridging anaphors.", "Our results indicate that pre-trained transformer language models encode substantial information about bridging.", "Finally, in this work, we only focus on understanding the capacity of the pre-trained language models for bridging inference.", "Based on the insights we gained from the current probing study, in the future we plan to explore how to better use pre-trained transformer language models for bridging resolution.", "We thank the three anonymous reviewers for their comments and feedback.", "This work was partially supported by the French National Research Agency via grant no. ANR-16-CE33-0011-01 as well as by CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020." ]
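The Of-Cloze probing described in this record can be reproduced in a few lines with the HuggingFace transformers library. The sketch below scores candidate antecedent heads at the [MASK] position; it is a simplification under stated assumptions (candidates are scored by the mean logit of their word pieces, and the context already contains the inserted of [MASK]), not the authors' released code.

```python
# Minimal Of-Cloze sketch: rank antecedent candidates by their score at [MASK].
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def rank_candidates(context, candidates):
    inputs = tokenizer(context, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # scores over the vocabulary
    scores = {}
    for cand in candidates:
        piece_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(cand))
        scores[cand] = logits[piece_ids].mean().item()  # crude multi-piece average
    return max(scores, key=scores.get), scores

# The cloze example from the record: "Seventeen percent of [MASK] reported ..."
context = "Seventeen percent of [MASK] reported their customers being robbed."
best, scores = rank_candidates(context, ["firms", "employees", "owners"])
print(best, scores)
```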
[ "method", "objective", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "method", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "result", "abstain", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "method", "method", "result", "method", "objective", "other", "other" ]
[ "Document-level event extraction (DEE) is indispensable when events are described throughout a document.", "We argue that sentence-level extractors are ill-suited to the DEE task where event arguments always scatter across sentences and multiple events may co-exist in a document.", "It is a challenging task because it requires a holistic understanding of the document and an aggregated ability to assemble arguments across multiple sentences.", "In this paper, we propose an end-to-end model, which can extract structured events from a document in a parallel manner.", "Specifically, we first introduce a document-level encoder to obtain the document-aware representations.", "Then, a multi-granularity non-autoregressive decoder is used to generate events in parallel.", "Finally, to train the entire model, a matching loss function is proposed, which can bootstrap a global optimization.", "The empirical results on the widely used DEE dataset show that our approach significantly outperforms current state-of-the-art methods in the challenging DEE task.", "Code will be available at https:// github.com/HangYang-NLP/DE-PPN .", "The goal of event extraction (EE) is to identify events of a pre-specified type along with corresponding arguments from plain texts.", "A great number of previous studies (Ahn, 2006; Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2013; Chen et al., 2015; Nguyen et al., 2016; Yang and Mitchell, 2016; Chen et al., 2017; Huang et al., 2018; Yang et al., 2019; Liu et al., 2020) focus on the sentence-level EE (SEE), while most of these works are based on the ACE evaluation (Doddington et al., 2004).", "1 However, these SEE-based methods make predictions within 1 https://www.ldc.upenn.edu/ collaborations/past-projects/ace [S3] On November 1, 2018 , Shenzhen 007 Co., Ltd. 
received a notice that the corporate shareholder Shanghai Fukong Co., Ltd and the actual controller Jing Yan were judicial frozen .", "a sentence and fail to extract events across sentences.", "To this end, document-level EE (DEE) is needed when the event information scatters across the whole document.", "In contrast to SEE, there are two specific challenges in DEE: arguments-scattering and multi-events .", "Specifically, arguments-scattering indicates that arguments of an event may scatter across multiple sentences.", "For example, As shown in Figure 1, the arguments of Event-1 are distributed in different sentences ([S3] and [S7]) and extraction within an individual sentence will lead to incomplete results.", "So this challenge requires the DEE model to have a holistic understanding of the entire document and an ability to assemble all relevant arguments across sentences.", "Furthermore, it will be more difficult when coupled with the second challenge: multi-events, where multiple events are contained in a document.", "2 As shown in Figure 1, there are two events Event-1 and Event-2 in a document with the same event type and there is no obvious textual boundary between the two events.", "The multi-events problem requires the DEE method to recognize how many events are contained in a document and achieve accurate arguments assembling (i.e., assign arguments to the corresponding event).", "As a result of these two complications, SEE methods are ill-suited for the DEE task, which calls for a model that can integrate document-level information, assemble relevant arguments across multiple sentences and capture multiple events simultaneously.", "To handle these challenges in DEE, previous works (Yang et al., 2018; Zheng et al., 2019) formulate DEE as an event table filling task, i.e., filling candidate arguments into a predefined event table.", "Specifically, they model the DEE as a serial prediction paradigm, in which arguments are predicted in a predefined role order and multiple events are also extracted in predefined event order.", "Such a manner is restricted to the extraction of individual arguments, and the former extraction will not consider the latter extraction results.", "As a result, errors will be propagated and the extraction performance is under satisfaction.", "In this paper, to avoid the shortage of serial prediction and tackle the aforementioned challenges in DEE, we propose an end-to-end model, named D ocument-toE vents via P arallel P rediction N etworks ( DE-PPN ).", "DE-PPN is based on an encoder-decoder framework that can extract structured events from a whole document in a parallel manner.", "In detail, we first introduce a document-level encoder to obtain the document-aware representations.", "In such a way, a holistic understanding of the entire document is obtained.", "Then, we leverage a multi-granularity decoder to generate events, which consists of two key parts: a role decoder and an event decoder.", "The role decoder is designed for handling the argument-scattering challenge, which can assemble arguments for an event based on document-aware representations.", "For addressing the challenge of multi-events effectively, an event decoder is designed to support generating 2 According to our statistics, there are about 30% documents include multiple events in the widely used ChFinAnn (Zheng et al., 2019) multiple events.", "Both of them are based on the non-autoregressive mechanism (Gu et al., 2018), which supports the extraction of multiple events in parallel.", 
"Finally, for comparing extracted events to ground truths, we propose a matching loss function inspired by the Hungarian algorithm (Kuhn, 1955; Munkres, 1957).", "The proposed loss function can perform a global optimization by computing a bipartite matching between predicted and ground-truth events.", "In summary, our contributions are as follows: We propose an encoder-decoder model, DE-PPN, that is based on a document-level encoder and a multi-granularity decoder to extract events in parallel with document-aware representations.", "We introduce a novel matching loss function to train the end-to-end model, which can bootstrap a global optimization.", "We conduct extensive experiments on the widely used DEE dataset and experimental results demonstrate that DE-PPN can significantly outperform state-of-the-art methods when facing the specific challenges in DEE.", "Before introducing our proposed approach for DEE in this section, we first describe the task formalization of DEE.", "Formally, we denote T and R as the set of pre-defined event types and role categories, respectively.", "Given an input document comprised of N s sentences D = { S i } N s i =1 , the DEE task aims to extract one or more structured events Y = { y i } k i =1 , where each event y t i with event type t contains a series of roles ( r 1 i , r 2 i , . . . , r ni ) filled by arguments ( a 1 i , a 2 i , . . . , a ni ) .", "k is the number of events contained in the document, n is the number of pre-defined roles for the event type t , t T and r R .", "The key idea of our proposed model, DE-PPN, is that aggregate the document-level context to predict events in parallel.", "Figure 2 illustrates the architecture of DE-PPN, which consists of five key components: (1) candidate argument recognition, (2) document-level encoder, (3) multi-granularity decoder, (4) events prediction, and (5) matching loss function.", "Given a document D = { S i } N s i =1 with N s sentences, each sentence S i with a sequence of tokens is first embedded as [ w i, 1 , w i, 2 , . . . , w i,l ] , where l is the sentence length.", "Then, the word embeddings are fed into an encoder to obtain the contextualized representation.", "In this paper, we adopt the Transformer (Vaswani et al., 2017) as the primary context encoder.", "Through the encoder, we can get the context-aware embedding C i of sentence S i : C i = Transformer-1 ( S i ) (1) where C i R l d and d is the size of the hidden layer, and we represent each sentence in the given document as { C i } N s i =1 .", "Finally, following Zheng et al. (2019), we model the sentence-level candidate argument recognition as a typical sequence tagging task.", "Through candidate argument recognition, we can obtain candidate arguments A = { a i } N a i =1 from the given sentence S i , where N a is the number of recognized candidate arguments.", "To enable the awareness of document-level contexts for sentences and candidate arguments, we employ a document-aware encoder to facilitate the interaction between all sentences and candidate arguments.", "Formally, given an argument a i with its span covering j -th to k -th in sentence S i , we conduct a max-pooling operation over the token-level embedding [ c i,j , . . . 
, c i,k ] C i to get the local embedding c ai R d for it.", "Similarly, the sentence embedding c s i R d can be obtained by the max-pooling operation over the token sequence representation C i of sentence S i .", "Then, we employ the Transformer module, Transformer-2, as the encoder to model the interaction between all sentences and candidate arguments by a multi-head self-attention mechanism.", "Then we can get the document-aware representations for sentences and arguments.", "Note that we add the sentence representation with sentence position embeddings to inform the sentence order before feeding them into Transformer-2.", "since arguments may have many mentions in a document, we utilize the max-pooling operation to merge multiple argument embeddings with the same char-level tokens into a single embedding.", "After the document-level encoding stage, we can obtain the document-aware sentences representation H s RN s d and candidate arguments A (cid:48) = { a i } N (cid:48) a i =1 with representation H a RN (cid:48) a d .", "Before decoding, we stack a linear classifier over the document representation by operating the max-pooling over H s to conduct a binary classification for each event type.", "Then, for the predicted event type t with pre-defined role types, DE-PPN learns to generate events according to the document-aware candidate argument representations H a RN (cid:48) a d and sentence representations H s RN s d .", "To effectively address arguments-scattering and multi-events in DEE, we introduce a multi-granularity decoder to generate all possible events in parallel based on document-aware representations ( H a and H s ).", "The multi-granularity decoder is composed of three parts: event decoder, role decoder, and event-to-role decoder.", "All of these decoders are based on the non-autoregressive mechanism (Gu et al., 2018), which supports the extraction of all events in parallel.", "Event Decoder.", "The event decoder is designed to support the extraction of all events in parallel and is used to model the interaction between events.", "Before the decoding stage, the decoder needs to know the size of events to be generated.", "We use m learnable embeddings as the input of the event decoder, which are denoted as event queries Q event R m d .", "m is a hyperparameter that denotes the number of the generated events.", "In our work, m is set to be significantly large than the average number of events in a document.", "Then, the event query embeddings Q event are fed into a non-autoregressive decoder which is composed of a stack of N identical Transformer layers.", "In each layer, there are a multi-head self-attention mechanism to model the interaction among events and a multi-head cross-attention mechanism to integrate the document-aware representation H s into event queries Q event .", "Formally, the m event queries are decoded into m output embeddings H event by: H event = Event-Decoder ( Q event ; H s ) (3) where H event R m d .", "Role Decoder.", "The role decoder is designed to support the filling of all roles in an event in parallel and model the interaction between roles.", "As the predicted event type t with semantic role types ( r 1 , r 2 , . . . 
, r n ) , we use n learnable embeddings as the input of the role decoder, which are denoted as event queries Q role R n d .", "Then, the role query embeddings Q role are fed into the decoder, which has the same architecture as the event decoder.", "Specifically, the self-attention mechanism can model the relationship among roles, and the cross-attention mechanism can fuse the information of the document-aware candidate argument representations H a .", "Formally, the n role queries are decoded into n output embeddings H role by: H role = Role-Decoder ( Q role ; H a ) (4) where H role R n d .", "Event-to-Role Decoder.", "To generate diversiform events with relevant arguments for different event queries, an event-to-role decoder is designed to model the interaction between the event queries H event and the role queries H role : H e 2 r = Event2Role-Decoder ( H role ; H event ) (5) where H e 2 r R m n d .", "After the multi-granularity decoding, the m event queries and n role queries are transformed into m predicted events and each of them contains n role embeddings.", "To filter the spurious event, the m event queries H event are fed into a feed-forward networks (FFN) to judge each event prediction is non-null or null.", "Concretely, the predicted event can be obtained by: p event = softmax( H event W e ) (6) where W e R d 2 is learnable parameters.", "Then, for each predicted event with pre-defined roles, the predicted arguments are decoded by filling the candidate indices or the null value with ( N (cid:48) a + 1) -class classifiers: 3 P role = softmax(tanh( H e 2 r W 1 + H a W 2 ) v 1 ) (7) where W 1 R d d , W 2 R d d and v 1 R d are learnable parameters, and P role R m n ( N (cid:48) a +1) .", "After the prediction network, we can obtain the m events Y = ( Y 1 , Y 2 , . . . , Y m ) where each event Y i = ( P 1 i , P 2 i , . . . , P ni ) contains n predicted arguments with role types.", "Where P ji = P role [ i, j, :] R ( N (cid:48) a +1) .", "The main problem for training is that how to assign predicted m events with a series of arguments to the ground truth k events.", "Inspired by the assigning problem in the operation research (Kuhn, 1955; Munkres, 1957), we propose a matching loss function, which can produce an optimal bipartite matching between predicted and ground-truth events.", "Formally, we denote predicted and ground truth events as Y = ( Y 1 , Y 2 , . . . , Y m ) and Y = ( Y 1 , Y 2 , . . . , Y k ) , respectively.", "Where k is the 3 Note that we append candidate argument representations H a with a learnable embedding to represent the null value.", "real number of events in the document and m is fixed size for generated events.", "Note that m (cid:62) k .", "The i -th predicted event is denoted as Y i = ( P 1 i , P 2 i , . . . , P ni ) , where P ji can be calculated by the Equation 7.", "And the i -th ground truth event is denoted as Y i = ( r 1 i , r 2 i , . . . 
, r ni ) , where r ji is the candidate argument indix for j -the role type in i -th target event.", "To find a bipartite matching between these two sets, we search for a permutation of m elements with the lowest cost: = argmax (cid:81) ( m ) m (cid:88) i C match ( Y ( i ) , Y i ) (8) where (cid:81) ( m ) is the space of all m -length permutations and C match ( Y ( i ) , Y i ) is a pair-wise matching cost between ground truth y i and a prediction Y ( i ) with index ( i ) .", "By taking into account all of the prediction arguments for roles in an event, we define C match ( Y ( i ) , Y i ) as: C match ( Y ( i ) , Y i ) = 1 { judge i (cid:54) = } n (cid:88) j =1 P j ( i ) ( r ji )) (9) where the judge i is the judgement of event i to be non-null or null that is calculated by the Equation 6.", "The optimal assignment ( i ) can be computed effectively with the Hungarian algorithm.", "4 Then for all pairs matched in the previous step, we define the loss function with negative log-likelihood as: L ( Y , Y ) = m (cid:88) i =1 1 { judge i (cid:54) = } [ n (cid:88) j =1 log P j ( i ) ( r ji )] (10) Where is the optimal assignment computed in the Equation 8.", "During training, we sum the matching loss for events prediction with preconditioned steps before decoding as follows:", "where L ae and L ec are the cross-entropy loss function for sentence-level candidate argument recognition and event type classification, respectively.", "1 , 2 and 3 are hyper-parameters.", "In this section, we present empirical studies to answer the following questions:", "1. What is the overall performance of our DE-PPN compared to the state-of-the-art (SOTA) method evaluated on the DEE task?", "2. How does DE-PPN perform when facing the arguments-scattering and multi-event challenges in DEE?", "3. How does each design of our proposed DE-PPN matter?", "4. What is the influence of setting different numbers of the generated events on the results?", "Dataset.", "Following Zheng et al. 
"The ChFinAnn is a large-scale DEE dataset, which contains 32,040 documents in total and includes five financial event types: Equity Freeze (EF), Equity Repurchase (ER), Equity Underweight (EU), Equity Overweight (EO) and Equity Pledge (EP).", "Evaluation Metrics.", "For a fair comparison, we adopt the evaluation standard used in Doc2EDAG (Zheng et al., 2019).", "Specifically, for each predicted event, the most similar ground-truth is selected without replacement to calculate the Precision (P), Recall (R), and F1-measure (F1-score).", "As an event type often includes multiple roles, micro-averaged role-level scores are calculated as the final DEE metric.", "Implementation Details.", "For a document as input, we set the maximum number of sentences and the maximum sentence length to 64 and 128, respectively.", "We adopt the basic Transformer as the encoder and decoder architecture, where each layer has 768 hidden units and 8 attention heads.", "During training, we employ the AdamW optimizer (Kingma and Ba, 2014) with a learning rate of 1e-5 and a batch size of 16.", "Test set performance is reported at the step with the best development set performance within 100 epochs.", "We leave detailed hyper-parameters and additional results to the Appendix.", "Table 1: P / R / F1 per event type on the test set.
Models    | EF (P / R / F1)    | ER (P / R / F1)    | EU (P / R / F1)    | EO (P / R / F1)    | EP (P / R / F1)
DCFEE-O   | 66.0 / 41.6 / 51.1 | 84.5 / 81.8 / 83.1 | 62.7 / 35.4 / 45.3 | 51.4 / 42.6 / 46.6 | 64.3 / 63.6 / 63.9
DCFEE-M   | 51.8 / 40.7 / 45.6 | 83.7 / 78.0 / 80.8 | 49.5 / 39.9 / 44.2 | 42.5 / 47.5 / 44.9 | 59.8 / 66.4 / 62.9
GreedyDec | 79.5 / 46.8 / 58.9 | 83.3 / 74.9 / 78.9 | 68.7 / 40.8 / 51.2 | 69.7 / 40.6 / 51.3 | 85.7 / 48.7 / 62.1
Doc2EDAG  | 77.1 / 64.5 / 70.2 | 91.3 / 83.6 / 87.3 | 80.2 / 65.0 / 71.8 | 82.1 / 69.0 / 75.0 | 80.0 / 74.8 / 77.3
DE-PPN-1  | 77.8 / 55.8 / 64.9 | 75.6 / 76.4 / 76.0 | 76.4 / 63.7 / 69.4 | 77.1 / 54.3 / 63.7 | 85.5 / 43.0 / 57.2
DE-PPN    | 78.2 / 69.4 / 73.5 | 89.3 / 85.6 / 87.4 | 69.7 / 79.9 / 74.4 | 81.0 / 71.3 / 75.8 | 83.8 / 73.7 / 78.4", "We compare our DE-PPN with the SOTA methods as follows: DCFEE (Yang et al., 2018) proposed a key-event detection model to guide filling the event table with arguments from the key-event mention and surrounding sentences.", "There are two versions of DCFEE: DCFEE-O only extracts one event and DCFEE-M extracts multiple events from a document.", "Doc2EDAG (Zheng et al., 2019) proposed an end-to-end model for DEE, which transforms DEE into directly filling event tables with entity-based path expanding.", "There is a simple baseline of Doc2EDAG, named GreedyDec , which only fills one event table entry greedily.", "Besides, we further introduce a simple baseline of DE-PPN, named DE-PPN-1 , which only generates one event.", "DE-PPN vs. SOTA.", "Table 1 shows the comparison between DE-PPN and baseline methods on the test set for each event type.", "Overall, our proposed model DE-PPN significantly outperforms the other baselines and achieves SOTA performance on all event types.", "Specifically, DE-PPN improves the F1-score over the SOTA method, Doc2EDAG, by 3.3, 0.1, 2.6, 0.8 and 1.1 points on the event types EF, ER, EU, EO and EP respectively, and by 1.6 points on the average F1-score.", "The improved performance indicates that the encoder-decoder generative framework of DE-PPN is effective, as it can predict events in parallel with a global optimization for training.", "Besides, as a baseline of our proposed method, DE-PPN-1 achieves the best performance compared with DCFEE-O and GreedyDec, while all of them only predict one event per document, which also proves the effectiveness of the document-aware end-to-end modeling of DE-PPN.", "Results on Arguments-Scattering.", "To show the extreme difficulty of the arguments-scattering challenge in DEE, we conduct experiments on different scenarios.", "We introduce an arguments-scattering ratio (ASR) to measure the scatter of arguments in an event for a document.", "The ASR is calculated by: $\mathrm{ASR} = \mathrm{Num}_{ments} / \mathrm{Num}_{args}$ (12), where $\mathrm{Num}_{ments}$ denotes the number of event mentions (i.e., sentences that contain arguments) and $\mathrm{Num}_{args}$ denotes the number of arguments.", "The higher the ASR, the more scattered the arguments of an event are.", "Table 3 shows the results for different intervals of ASR.", "We can observe that it is more difficult to extract scattered arguments as the ASR increases.", "[Figure 3: F1-score for performance differences of event decoder and role decoder layers (x-axis: number of event/role decoder layers, 0-4; y-axis: F1-score (%); one curve each for the multi-event and single-event sets).]", "But DE-PPN still maintains the best performance, and the results indicate that the encoder-decoder framework can better assemble arguments to the corresponding event across sentences with the parallel prediction and the document-aware representations.", "Single-Event vs. Multi-Event.", "To show the extreme difficulty when arguments-scattering meets multi-events in DEE, we conduct experiments on two scenarios: single-event (i.e., documents that contain one event) and multi-event (i.e., documents that contain multiple events).", "Table 2 shows the F1-score on the single-event and multi-event sets for each event type and the average (Avg.).", "We can observe that multi-events is extremely challenging, as the extraction performance of all models drops significantly.", "But DE-PPN still improves the average F1-score from 67.3% to 68.7% over Doc2EDAG.", "The results demonstrate the effectiveness of our proposed method when handling the challenge of multi-events.", "This performance improvement benefits from the event decoder, which can generate multiple events in parallel, and the matching loss function, which can perform a global optimization.", "Besides, the DE-PPN-1 model achieves an acceptable performance in the scenario of single-event extraction, which demonstrates the effectiveness of our end-to-end model.", "But DE-PPN-1 only generates one event and cannot deal with the multi-events problem, resulting in low performance on the multi-event sets.", "To verify the effectiveness of each component of DE-PPN, we conduct ablation tests on the following variants: 1) -DocEnc : removing the Transformer-based document-level encoder, which supports the document-aware information for decoding.", "2) -MultiDec : replacing the multi-granularity decoder module with simple embedding initialization for event queries and role queries.", "3) -MatchingLoss : replacing the matching loss function with a normal cross-entropy loss.", "The results are shown in Table 4 and we can observe that: 1) the document-level encoder is of prime importance, as it enhances the document-aware representations for the generative decoder and contributes +2.6 F1-score on average; 2) the multi-granularity decoder alleviates the challenges of argument-scattering and multi-events by assembling arguments and generating events in parallel, improving by +4.3 F1-score on average.", "3) the matching loss function is a very important component for event extraction, with a +13.4 F1-score improvement, which indicates that the matching loss guides a global optimization between predicted and ground-truth events during training.", "To investigate the importance of the multi-granularity decoder, we explore the effect of different numbers of event decoder and role decoder layers on the results.", "Specifically, the number of decoder layers is set to 0, 1, 2, 3 and 4, where 0 means removing this decoder.", "1) The effect of different event decoder layers is shown on the left of Figure 3, and our method achieves the best average F1-score when the number of layers is set to 2.", "We conjecture that more layers of the non-autoregressive decoder allow for better modeling of the interaction between event queries and for generating more diverse events.", "However, when the number of layers is set to be large, it is easy to generate redundant events.", "2) The effect of different role decoder layers is shown on the right of Figure 3, and we can observe that the more decoder layers, the better the performance.", "We conjecture that more layers of the decoder, with more self-attention modules, allow for better modeling of the relationship between event roles, and more cross-attention modules allow for integrating information of candidate arguments into roles.", "For the training and testing process of DE-PPN, the number of generated events is an important hyperparameter.", "In this section, we explore the influence of setting different numbers of generated events on the results.", "We divide the development set into 5 sub-classes, where each class contains 1, 2, 3, 4 and >= 5 events.", "Table 5 shows the statistics of the documents with different numbers of annotated events in the development set.", "To validate the impact of the number of generated events on the performance, we evaluate DE-PPN with various numbers of generated events: 1, 2, 5, 10, named DE-PPN-1, DE-PPN-2, DE-PPN-5, DE-PPN-10, respectively.", "The results of DE-PPN with different numbers of generated events are shown in Figure 4, which are also compared with the SOTA model Doc2EDAG.", "We can observe that as the number of events increases, event prediction becomes more difficult, which is reflected in the decline of all models' performance.", "In general, DE-PPN almost always achieves the best performance on the average F1-score when the number of generated events is set to 5.", "Besides, there is a performance gap between Doc2EDAG and our method DE-PPN when the number of annotated events in a document is larger than 2.", "This also demonstrates that our proposed parallel decoder can better handle the challenge of multi-events in DEE.", "Most work in EE has focused on the sentence level and is based on the benchmark dataset ACE 2005 (Doddington et al., 2004).", "[Figure 4: F1-score for performance differences of generated events (x-axis: number of annotated events, 1, 2, 3, 4, >=5; y-axis: F1-score (%); curves: DE-PPN-1, DE-PPN-2, DE-PPN-5, DE-PPN-10, Doc2EDAG).]", "Many approaches have been proposed to improve performance on this task.", "These studies are mainly based on hand-designed features (Li et al., 2013; Kai and Grishman, 2015) or on neural models that learn features automatically (Chen et al., 2015; Nguyen et al., 2016; Bjorne and Salakoski, 2018; Yang et al., 2019; Chan et al., 2019; Yang et al., 2019; Liu et al., 2020).", "A few methods make extraction decisions beyond individual sentences.", "Ji and Grishman (2008) and Liao and Grishman (2010) used event type co-occurrence patterns for event detection.", "Yang and Mitchell (2016) introduced event structure to jointly extract events and entities within a document.", "Although these approaches make decisions beyond the sentence boundary, their extractions are still done at the sentence level.", "Many real-world applications need DEE, in which the event information scatters across the whole document.", "MUC-4 (1992) proposed the MUC-4 template-filling task, which aims to identify event role fillers with associated role types from a document.", "Recent works explore the local and additional context to extract the role fillers by manually designed linguistic features (Patwardhan and Riloff, 2009; Huang and Riloff, 2011, 2012) or neural contextual representations (Chen et al., 2020; Du et al., 2020; Du and Cardie, 2020).", "Recently, Ebner et al. (2020) published the Roles Across Multiple Sentences (RAMS) dataset, which contains annotations for the task of multi-sentence argument linking.", "A two-step approach (Zhang et al., 2020) was proposed for argument linking by detecting implicit arguments across sentences.", "Li et al. (2021) extend this task and compile a new benchmark dataset, WIKIEVENTS, for exploring the document-level argument extraction task.", "Then, Li et al. (2021) propose an end-to-end neural event argument extraction model based on conditional text generation.", "However, these works focused on sub-tasks of DEE (i.e., role filler extraction or argument extraction) and ignored the challenge of multi-events.", "To simultaneously address both challenges of DEE (i.e., arguments-scattering and multi-events), previous works focus on the ChFinAnn (Zheng et al., 2019) dataset and model DEE as an event table filling task, i.e., filling candidate arguments into a predefined event table.", "Yang et al. (2018) proposed a key-event detection model to guide filling the event table with arguments from the key-event mention and surrounding sentences.", "Zheng et al. (2019) transform DEE into filling event tables following a predefined order of roles with an entity-based path expanding, which achieved the SOTA for DEE.", "However, these methods suffer from serial prediction, which leads to error propagation and isolated individual-argument predictions.", "In this paper, we propose an encoder-decoder model, DE-PPN, to extract events in parallel from a document.", "To address the challenges in DEE (i.e., arguments-scattering and multi-events), we introduce a document-level encoder and a multi-granularity decoder to generate events in parallel with document-aware representations.", "To train the parallel networks, we propose a matching loss function to perform a global optimization.", "Experimental results show that DE-PPN can significantly outperform SOTA methods, especially when facing the specific challenges in DEE.", "We thank the anonymous reviewers for their constructive and insightful comments.", "This work is supported by the National Natural Science Foundation of China (No. U1936207, No. 61922085 and No. 61806201), Beijing Academy of Artificial Intelligence (No. BAAI2019QN0301), the Key Research Program of the Chinese Academy of Sciences (No. ZDBS-SSW-JSC006), an independent research project of the National Laboratory of Pattern Recognition and a grant from Ant Group." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "result", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "objective", "objective", "objective", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "objective", "abstain", "other", "other" ]
[ "Syntax is fundamental to our thinking about language.", "Failing to capture the structure of input language could lead to generalization problems and over-parametrization.", "In the present work, we propose a new syntax-aware language model: Syntactic Ordered Memory (SOM).", "The model explicitly models the structure with an incremental parser and maintains the conditional probability setting of a standard language model (left-to-right).", "To train the incremental parser and avoid exposure bias, we also propose a novel dynamic oracle, so that SOM is more robust to wrong parsing decisions.", "Experiments show that SOM can achieve strong results in language modeling, incremental parsing and syntactic generalization tests, while using fewer parameters than other models.", "Several recent works have systematically studied the linguistic abilities of modern language models, particularly syntax (Linzen et al., 2016; Marvin and Linzen, 2018; Gulordava et al., 2018).", "They find that most language models are good at capturing frequent syntactic structures but do not generalize well to those in the long tail.", "Moreover, although some excel at having low perplexity scores, this is less due to their syntactic ability but more due to capturing collocations (frequently co-occurring words).", "Recently, Hu et al. (2020) show that RNNs underperform on a syntactic generalization (SG) test set, whereas models that have an explicit notion of syntax, such as RNNG (Dyer et al., 2016), fare well on SG but at the cost of generally poorer language modeling (higher perplexity).", "Transformer-based models achieve strong performance when trained with large datasets, but are worse than random when trained on a small dataset.", "helps in achieving better performance in SG tasks and is also thought to help learn more efficiently in low data settings.", "However, building syntax-aware models that also obtain strong language modeling performance, when compared with recent transformer-based models, has until now seemed elusive.", "In this work, we propose a new syntax-aware language model dubbed Syntactic Ordered Memory (SOM; Fig. 1), which jointly acts as a language model and an incremental parser.", "SOM inherits the syntax representation used in Ordered Memory (OM; Shen et al. 
2019) in which syntax trees are embedded in a grid-like memory representation.", "Whereas OM was trained as an unsupervised parser, SOM is explicitly trained to predict both ground-truth syntax trees incrementally and, using the predicted partial syntactic structure, to predict the next token.", "Fig.1 shows the mechanism of SOM.", "SOM factorizes the next-token prediction process into two steps: first, we predict the attachment position for the next token with a zero-step lookahead parser, trained in a supervised fashion; then, we predict the next token distribution conditioned on the partially predicted structure.", "One way of training the incremental parser is to use teacher-forcing.", "However, this can lead to exposure bias , due to the fact that the model was never exposed to its own predictions during training.", "To avoid this, we introduce a dynamic oracle (Goldberg and Nivre, 2012) for our model, so that our model can learn to recover from previous parsing mistakes during inference.", "We found this to be crucial to obtain good performance.", "We compare SOM with existing methods that integrate syntax into language models.", "RNNGs (Dyer et al., 2016) and Ordered Neurons (Shen et al., 2018) are particularly related.", "RNNGs are generative models of language which define a joint distribution on syntactic structures and sequence of words.", "Ordered Neurons attempt to model the hierarchical structure of language by defining an ordering to the hidden states and the gates that impose that structure.", "We show that our proposed SOM model can achieve strong language modeling, parsing and SG performance even when trained on small amounts of data.", "In summary, our contributions are threefold: We introduce SOM, a new syntax-augmented language model that learns an incremental parser and use its predictions to improve language modeling.", "We propose a novel dynamic oracle that allows to reduce the exposure bias and is instrumental to achieving good downstream performance.", "We report high SG score, language modeling and incremental parsing performance for various dataset sizes.", "We also find that jointly learning both language modelling and parsing improves both these capabilities in the model.", "Syntax-aware models There has been work to integrate syntax into our current models of language.", "Socher et al. (2013) used parse trees for composing sentences in order to predict sentiment over movie reviews.", "However, having an external parser and restriction of batched computations in that early model made the method unwieldy.", "Bowman et al. (2016) introduced the SPINN model, which alleviated those issues, turning sentences into a sequence of actions to be executed by a shift-reduce parser.", "Our SOM model is based on shift-reduce as well, because of the incremental nature of the parsing we want to achieve.", "RNNG (Dyer et al., 2016; Kuncoro et al., 2016) was an example of integrating syntax information for language modelling.", "There is also work that attempts to learn these syntactic structures without supervision.", "Kim et al. (2019) later devised an unsupervised version of the RNNG, a method which produced good parsing performance.", "DIORA (Drozdov et al., 2019, 2020) was a method that leveraged the Inside-Outside algorithm to construct sentence embeddings for downstream tasks, with the benefit of being able to read off parse trees in the encoding process.", "Swayamdipta et al. 
(2019) finds that there are no improvements over using ELMo (Peters et al., 2018) embeddings when shallow syntactic information is included, concluding that ELMo-style pretraining has learned the syntactic information.", "However, Kuncoro et al. (2019) investigated the importance of the learnt syntactic knowledge RNNG in a large pre-trained model like BERT, they found that syntax information helps with downstream tasks.", "In our experiments, we find that explicitly training OM with syntax (with our dynamic oracle scheme) improves performance on syntactic generalization tasks.", "Incremental Parsing & Language Modelling In SOM, we specifically focus on incremental parsing.", "Ghezzi and Mandrioli (1979) discusses incremental parsing in the context of programming languages, with shift-reduce parsers being a spe-cific type of incremental parsing.", "OM, RNNG, and SPINN are models that were designed with shift-reduce in mind.", "Incremental parsing lends itself well to the task of autoregressive language modelling.", "Since the parser only sees the prefix of a sentence, the model can use the partial parse to make a prediction about upcoming words.", "Demberg et al. (2013) sum-marises several empirical results that provide evidence for incremental and predictive parsing in humans, and makes several connections between incrementality (that comprehenders do not wait to the end of the sentence before building a representation) and prediction about future words coming in the sentence.", "Given that an incremental parser processes a sentence from left to right, there are naturally some limitations.", "Hassan et al. (2009) show why either a beam or delay is necessary if performing incremental parsing with monotonic extensions: They experiment with a parser based on Combinatory Categorial Grammar (Steedman, 2000).", "They find that without the look-ahead, there is a 30 % point reduction in the parsing results.", "One of our contributions in this paper is the one-step lookahead while performing parsing, but zero-step lookahead when performing next-word prediction, allowing the model to be trained jointly as a incremental parser and language model.", "Despite the left-to-right nature of incremental parsing, this setting may aid language modelling too.", "Shieber (1983) suggests the biases may correspond to the way humans parse English, and use a modified shift-reduce parser to disambiguate between different parses of a sentence.", "There have been work that show that incremental parsing can improve language modelling.", "Khn and Baumann (2016) demonstrate that combining an incremental dependency parser with a language model yields improvements in perplexity.", "Roark (2001) presents a top-down phrase structure parser that performs beam-search to generate connected intermediate structures for every sentence prefix.", "This model can be used for language modeling and beats trigram models on the Penn Treebank (Marcus et al., 1994) Dynamic Oracles Since incremental parsing requires that we break down the problem of structure prediction into sequential decisions, we are prone to exposure bias .", "There are techniques to address this by allowing the model to make mistakes and supervising future actions based on the state arrived at (Daum et al., 2009).", "Goldberg and Nivre (2012) introduces the concept of dynamic oracles for dependency parsing.", "Coavoux and Crabb (2016) uses this technique for incremental constituency parsing, but uses morphological features, and does not perform language modelling.", 
"Fried and Klein (2018) cover in further detail the related work relating to dynamic oracles and parsing.", "We find that using dynamic oracles for training is crucial in seeing benefits in both language modelling and incremental parsing.", "Evaluating Syntactic Generalization Recent tests have been developed that attempt to probe the linguistic abilities of language models.", "Gulordava et al. (2018) explores the extent to which RNNs are able to model grammar, independent of the semantics of the sentence.", "Marvin and Linzen (2018) evaluate language models on their ability to score sentences with and without the proper subject-verb agreements over a variety of different settings.", "for language models over a series of different sized datasets.", "They find that while GPT-2 performs well, their performance is highly dependent on the scale of the language modeling training dataset, while other models remain more robust.", "In this paper, we use this test suite for the evaluation.", "We first provide useful background on Ordered Memory.", "Ordered Memory (OM, Shen et al. 2019) is a recurrent neural network that explicitly models recursive structure through memory writing and erasing operations.", "OM maps the latent syntax into a T N memory grid M , where T is the length of input sequence and N is the maximum number of memory slots.", "Figure 2 gives an intuition of what the grid contains.", "Empty blocks in the figure represent memory slots that can be discarded during inference.", "Ideally, the memory network should generate the t -th column of the grid M t at time step t .", "But generating M t requires the model to have access about the tree structure which is usually latent.", "For this reason, OM induces the latent structure through inductive biases of its reading and writing operations.", "Memory M t : a matrix of dimension N D , where each occupied slot is a distributed representation for a node spanning an subsequence in x 1 ,", ".., x t 1 conditioned on x t , i.e. 
M t represents a one-step look-ahead parser stack.", "It's represented by gray blocks in Figure 3.", "(a) The transition from time step 4 to 5.", "1 The one-step lookahead parser combines M t 1 and M t 1 considering on the current input x t , in this example, the split point of M t 1 and M t 1 is i = 2 .", "2 Current input x t is written into the lower slot of new candidate memory M i 1 t .", "3 The rest of new candidate memories M i t are generated with bottom-up recurrent composition.", "(b) Predicting the next token at time step 4.", "1 The zero-step look-ahead parser combines M t and M t at time step t .", "2 The recurrent network takes the combined memory M out t as input and output a hidden state h t = f ( w t ) .", "3 h t is then fed into an linear layer to compute p ( x t +1 | x t ) .", "Candidate memory M t : a matrix of dimension N D contains representations for all possible new nodes at time step t .", "At next time step t + 1 , the model will decide whether or not to write these candidates into memory M t +1 conditioned on x t +1 .", "They are represented by orange blocks in Figure 3.", "if the model is making correct parsing decisions, then M t = M t 1 .", "Memory mask t : t { 0 , 1 } N , where each entry indicates whether the respective slot in M t is occupied by a candidate, e.g., if t = (0 , 1 , 1) , then the occupied slots are M 2 t .", "At next time step, the model can only choose a candidate from masked slots to write into the memory M t +1 .", "At each time step, the model takes [ M t 1 , M t 1 , t 1 ] and word embedding x t as inputs, returning the outputs [ M t , M t , t ] .", "To generate the new memory M t , we combine M t 1 and M t 1 to match M t 1 .", "The model uses x t as its query to attend on previous candidates M t 1 .", "The attention distribution is p t , which models the split point of gray blocks and orange blocks in Figure 2.", "Suppose p t is a one-hot distribution and p it = 1 .", "The candidates M i t 1 are written into the respective memory slot M i t , while M >it 1 are copied to M >it : M i t = M i t 1 , M >it = M >it 1 (1) We will refer to the process of generating M t as a one-step look-ahead parser, since the model is using the current input x t as extra information to build the partial parse for time step t 1 .", "To generate new candidates M t , the input embedding x t is written into M i 1 t , and M i t are computed recurrently with eq.3: M <i 1 t = , M i 1 t = x t (2) M jt = cell( M jt , M j 1 t ) , j i (3) where cell() is the composition function that takes its childrens' representations as input and output the parent's representation.", "The non-empty slots in candidate memory are then M i 1 t , and they can be masked by: <i 1 t = 0 , i 1 t = 1 (4) In other words, it = (cid:80) j i +1 p jt , and it is monotonically increasing.", "More details of the OM can be found in Shen et al. 
(2019).", "We propose two augmentations to OM in order to better perform language modelling and incremental parsing: a prediction network and the dynamic oracle.", "a) Previous language models mostly focus on predicting the next token or a missing token.", "In our case, we are explicitly modeling the latent structure.", "By predicting the structure for the next token, we exploit this latent structure for word prediction.", "This helps the model better organize information for predicting next word, allowing shortcuts to be created for long-term dependencies, as shown in Fig.1.", "b) If the model only observes states resulting from correct past decisions at training time, it will not be prepared to recover from its own mistakes during prediction, suffering from exposure bias (Schmidt, 2019; Fried and Klein, 2018).", "In the experiment section, we demonstrate how this phenomenon will significantly hurt the language model performance and, to a lesser extent, also hurt the parsing performance.", "At time step t , the prediction network takes [ M t , M t , t ] as input, and produces a probability distribution over the next token p ( w t +1 | w t ) .", "To do this, we need to have a temporary estimate of the local structure.", "We therefore need to approximate p t +1 with a zero-step look-ahead prediction p (cid:48) t : it = w Att 2 ReLU (cid:16) W Att 1 M it + b 1 (cid:17) + b 2 N (5) p (cid:48) t = masked _ softmax ( t , mask = t ) (6) where W Att 1 is N N weight matrix, w Att 2 is a N dimension weight vector, and it is a scalar.", "We then sample the slot at index i from the distribution p (cid:48) t .", "i is the zero-step look-ahead parsing decision, which means that the next phrase will be a sibling of node M it .", "We therefore need to predict the next token conditioned on M it and its previous contexts.", "So we feed memory slots [ M Nt , MN 1 t , ..., M i +1 t , M it ] into a recurrent neural network: h t = RNN (cid:16) M Nt , MN 1 t , ..., M i +1 t , M it (cid:17) (7) where h t is the final hidden state of the RNN.", "As shown in Figure 3b, the input sequence are representations of non-overlapping subtrees spanning from x 1 to x t .", "h t can therefore be seen as a distributed representation of the sequence w t .", "In the RNN, we use the same architecture as the cell function in OM to model the recurrent transition function: f j i j c j u j = W Cell 2 ReLU (cid:18) W Cell 1 (cid:20) h j +1 t M j (cid:21) + b 1 (cid:19) + b 2 (8) h jt = LN ( ( f j ) (cid:12) h j +1 t + ( i j ) (cid:12) M j + ( c j ) (cid:12) u j ) (9) where is the sigmoid function, LN is layer normalization function, f j , i j , c j are controlling gates, c j is cell state, and h N +1 t is a zero vector.", "After obtaining h t , we can compute the distribution over the next token and the language modelling loss: p ( w t +1 | w t ) = softmax ( W emb h t + b ) (10) LLM = (cid:88) t log( p ( w t +1 | w t )) (11) 4.2 Dynamic Oracle for SOM Data: 1 , ..., T , Result: 1 , ..., T initialize 1 = N ; for i 2 to T do j = first _ sibling ( i ) ; i = max( j +1 , ..., i 1 ) ; i = max( j 1 , i ) ; end Algorithm 1: The structure label generation algorithm, where is the ground-truth tree and i is the structural decisions made by our model.", "This algorithm produces a parse close to the original given the errors already made, and that new gold parse is converted into grid decisions.", "Given , the function first _ sibling ( i ) returns the index of the first token in the smallest clause that contains w i , and where w i is not the 
first token.", "Ideally, w i should be written into the slot ( j 1) .", "For example, in Figure 2, c is written into the slot 2, then d, e should be written into the slot 1.", "However, the model could make a wrong decision between w j and w i .", "If the model has merged information from w j into a higher slot i , x i should be written into slot i as well.", "One way to provide a supervision signal for p t and p (cid:48) t is to train the parser with static oracle: feed the gold tree to the model, and have the model predict future decisions.", "However, static oracle makes the language model overfit on the gold tree, resulting in bad perplexity scores (Table 2).", "Inspired Type Max Median Mean Constituency 29 7 7.7 Dependency 16 4 4.2 Table 1: Statistics of tree depth for Penn Treebank.", "by the dynamic oracles proposed in (Goldberg and Nivre, 2012; Coavoux and Crabb, 2016), we propose a dynamic oracle for ordered memory, which dynamically changes the reference structure based on mistakes made by our model on previous steps.", "To do this, we build the structure label for each time step based on the gold tree and previous decisions made by the model.", "During training, we sample the model's decision from p t : t = Multinomial ( p t ) (12) and we make greedy decisions during evaluation: t = argmax ( p t ) (13) The same operations are applied to p (cid:48) t as well.", "We use the Algorithm.1 to convert the gold tree into labels t for p t .", "Since the zero-step lookahead distribution p (cid:48) t should match the one-step look-ahead distribution p t +1 at next time step t + 1 , we use t +1 as label for p (cid:48) t .", "The structure loss is the negative log-likelihood: LS = (cid:88) t (cid:0) log( p t ( t | w t )) + log( p (cid:48) t ( t +1 | w t )) (cid:1) For our model, the depth of has a linear relation to the computational complexity and GPU memory consumption.", "To maximize the model's efficiently, the gold tree is constructed from universal dependency trees .", "1 There are two reasons 1 https://universaldependencies.org/ we chose universal dependency trees instead of constituency trees: 1) In Table 1, the dependency trees are on average shallower than constituency trees; this means faster computation time and less memory consumption for our model.", "2) Universal dependency trees can be applied to many more languages than Penn Treebank-style constituency grammar.", "Additionally, Penn Treebank-style trees can easily be converted to universal dependency trees.", "As shown in Figure 4, we convert the universal dependency tree into by merging the head and its children into one single constituent.", "We present the results of SOM on language modeling, syntactic generalization, and incremental parsing.", "Details of hyperparameters and experiment settings can be found in Appendix B. 
5.1 Language Modeling Penn Treebank has one million words of 1989 Wall Street Journal corpus annotated with constituency trees.", "Since SOM primarily focuses on sentence-level structure and language modeling, we use the same preprocessing schema as RNNG 2 (Dyer et al., 2016).", "Sentences are modeled separately, punctuation is retained, and singleton words are replaced with the Berkeley parser's mapping rules 3 , resulting in 23,815-word types.", "Orthographic case distinction is preserved, and numbers (beyond singletons) are not normalized.", "BLLIP is a large Penn Treebank-style parsed corpus of approximately 24 million sentences.", "We train and evaluate SOM on three splits of BLLIP: BLLIP-XS (40k sentences, 1M tokens), BLLIP-SM (200K sentences, 5M tokens), and BLLIP-MD (600K sentences, 14M tokens).", "They are obtained by randomly sampling sections from BLLIP 1987-89 Corpus Release 1.", "All models are tested on a shared held-out tested set.", "Following the settings provided in (Hu et al., 2020), datasets are preprocessed into two different versions.", "The first setting is similar to the PTB dataset.", "Singleton words are mapped to UNK classes that preserve fine-grained information, such as orthographic case distinctions and morphological suffixes (e.g. UNK-ed , UNK-ly ).", "The second setting use subword-level vocabulary extracted 2 2-21 for training, 24 for validation, 23 for evaluation.", "Results of language modeling are given in Table 3 and Table 4.", "SOM consistently outperforms both the annotated model and non-annotated models.", "While GPT-2 seems to fail to learn on smaller datasets, SOM still outperforms GPT-2 on the BLLIP-MD dataset with far fewer parameters (34.8M vs 124.4M), and achieves comparable results with the GPT-2 that is trained on a 3 times larger dataset BLLIP-LG (Hu et al., 2020).", "The biggest performance drop comes from replacing the dynamic oracle with static oracle.", "We believe that this is due to the model overfitting on the gold tree, and suffering from exposure bias as a result.", "Another big performance drop happens after removing the prediction network.", "This suggests that predicting the attaching nodes of the next phrase with the zero-step look-ahead parsers helps to predict the next token.", "Replacing the gold tree labels with trivial left-branching tree labels also hurts the perplexity.", "This suggests that learning syntactic structure helps language modeling.", "Syntactic Generalization (SG) test suites evaluate the syntactic knowledge of neural language models.", "Hu et al. 
(2020) proposed a set of 34 test suites to evaluation 6 different aspects of syntax: 1) agreement, 2) licensing, 3) garden-path effects, 4) gross syntactic expectation, 5) center embedding, 6) long-distance dependencies.", "Following their settings, we evaluate our language models trained on the BLLIP datasets.", "Language models are presented with a group of sentences with minor differences.", "To pass each test, the model needs to assign higher conditional probabilities to designated phrases in the sentence that are more grammatical.", "Figure 6 shows the average accuracy over all model on the complete set of SG test suites.", "SOM achieves the best average accuracy, outperforms models with hierarchical structure bias (RNNG, ON-LSTM), and transformer-based model (GPT-2).", "However, according to Figure 8a in Appendix C.1, GPT-2 trained on BLLIP-LG and BLLIP-MD still outperform SOM.", "This could due to that the number of parameters in SOM is largely falling behind GPT-2.", "Figure 5 provides fine-grained results on six SG classes.", "SOM achieves strong performance on licensing, gross syntactic state, center embedding, and long-distance embeddings.", "These classes require the model to keep track of syntactic features across large syntactic chunks (e.g., relative or subordination clauses).", "SOM can effectively keep this long-term information in higher-level memory slots, and revisit the information after the clause in the middle is ended.", "More detailed results can be found in Appendix C.1.", "5.3 Incremental Parsing Model UF1 PRPN* 41.2 ONLSTM* 47.7 ONLSTM-SYD (Du et al., 2020) 61.3 Incremental Shift-reduce Parser 56.82 Shift-reduce + LM + Dynamic Oracle 58.04 SOM 67.27 Oracle Binary Trees 82.5 Table 5: Incremental parsing results on the standard PTB constituency trees.", "To evaluate SOM's performance on incremental parsing, we trained and evaluated our models on the standard PTB constituency trees.", "Baseline models include:", "a) a standard incremental shift-reduce parser with one-step look-ahead;", "b) a incremental shift-reduce parser that equipped with our prediction network and trained on same dynamic oracle and language model loss as our model;", "c) a recently proposed ONLSTM-SYD model (Du et al., 2020) that is also trained on both language model and parsing loss;", "d) unsupervised ONLSTM;", "e) unsupervised PRPN.", "As shown in Table 5, SOMs outperform all baseline models, including the shift-reduce parser that has the same extra components as SOMs.", "For language modelling performance, original constituency tree based models achieve similar perplexity as dependency tree based counterparts.", "But constituency tree based models require 2 GPU time and memory to train and evaluate.", "For ablation test, we also compare parsing results given by SOM with binary constituency trees converted from universal dependency trees.", "4 These results are shown in Table 2.", "We observe that using static oracle instead of dynamic oracle results in the worst parsing performance.", "This suggests that our dynamic oracle helps the model to learn a better parser.", "After removing the language model loss, the UF1 drops 1.7 points.", "This suggests that the language model loss helps the model to learn better representations for syntax.", "In this work, we propose a new language model with an integrated incremental parser.", "This was done by augmenting the Ordered Memory model with a prediction network, and by using a dynamic oracle for training it to perform incremental parsing.", "The 
The resulting model models the joint distribution of syntactic structures and word sequences.", "We find that by using the dynamic oracle and explicitly modeling the syntax, we can achieve strong performance on language modelling and syntactic generalization, and that both of these techniques are crucial to the model's performance.", "UF1 scores are computed by EVALB (https://nlp.cs.nyu.edu/evalb/)." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "result", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "abstain" ]
[ "Cross-document event coreference resolution is a foundational task for NLP applications involving multi-text processing.", "However, existing corpora for this task are scarce and relatively small, while annotating only modest-size clusters of documents belonging to the same topic.", "To complement these resources and enhance future research, we present W ikipedia E vent C oreference ( WEC ), an efficient methodology for gathering a large-scale dataset for cross-document event coreference from Wikipedia, where coreference links are not restricted within predefined topics.", "We apply this methodology to the English Wikipedia and extract our large-scale WEC-Eng dataset.", "Notably, our dataset creation method is generic and can be applied with relatively little effort to other Wikipedia languages.", "To set baseline results, we develop an algorithm that adapts components of state-of-the-art models for within-document coreference resolution to the cross-document setting.", "Our model is suitably efficient and outperforms previously published state-of-the-art results for the task.", "Cross-Document (CD) Event Coreference resolution is the task of identifying clusters of text mentions, across multiple texts, that refer to the same event.", "Successful identification of such coreferring mentions is beneficial for a broad range of applications at the multi-text level, which are gaining increasing interest and need to match and integrate information across documents, such as multi-document summarization (Falke et al., 2017; Liao et al., 2018), multi-hop question answering (Dhin-gra et al., 2018; Wang et al., 2019) and Knowledge Base Population (KBP) (Lin et al., 2020).", "Unfortunately, rather few datasets of reasonable scale exist for CD event coreference.", "Notable datasets include ECB+ (Cybulska and Vossen, 2014), MEANTIME (Minard et al., 2016) and the Gun Violence Corpus (GVC) (Vossen et al., 2018) (described in Section 2), where recent work has been evaluated solely on ECB+.", "When addressed in a direct manner, manual CD coreference annotation is very hard due to its worst-case quadratic complexity, where each mention may need to be compared to all other mentions in all documents.", "Indeed, ECB+ contains less than 7000 event mentions in total (train, dev, and test sets).", "Further, effective corpora for CD event coreference are available mostly for English, limiting research opportunities for other languages.", "Partly as a result of this data scarcity, rather little effort was invested in this field in recent years, compared to dramatic recent progress in modeling within-document coreference.", "Furthermore, most existing cross-document coreference datasets are restricted in their scope by two inter-related characteristics.", "First, these datasets annotate sets of documents, where the documents in each set all describe the same topic , mostly a news event (consider the Malaysia Airlines crash as an example).", "While such topic-focused document sets guarantee a high density of coreferring event mentions, facilitating annotation, in practical settings the same event might be mentioned across an entire corpus, being referred to in documents of varied topics.", "Second, we interestingly observed that event mentions may be (softly) classified into two different types.", "One type, which we term a descriptive mention, pertains to a mention involved in presenting the event or describing new information about it.", "For example, news about the Malaysian Airline crash will include mostly 
descriptive mentions of the event and its sub-events, such as shot-down, crashed and investigated.", "Naturally, news documents about a topic, as in prior event coreference datasets, include mostly descriptive event mentions.", "The other type, which we term a referential mention, pertains to mentions of the event in sentences that do not focus on presenting new information about the event but rather mention it as a point of reference.", "For example, mentions referring to the airplane crash, such as the Malaysian plane crash, Flight MH17 or disaster, may appear in documents about the war in Donbass or about flight safety.", "Since referential event mentions are split across an entire corpus, they are less trivial to identify for coreference annotation, and are mostly missing in current news-based datasets.", "As we demonstrate later, these two mention types exhibit different lexical distributions and seem to require corresponding training data to be properly modeled.", "In this paper, we present the Wikipedia Event Coreference (WEC) methodology, an efficient method for automatically gathering a large-scale dataset for the cross-document event coreference task.", "Our methodology effectively complements current datasets in the above-mentioned respects: data annotation is boosted by leveraging available information in Wikipedia, practically applicable for any Wikipedia language; mentions are gathered across the entire Wikipedia corpus, yielding a dataset that is not partitioned by topics; and finally, our dataset consists mostly of referential event mentions.", "In its essence, our methodology leverages the coreference relation that often holds between anchor texts of hyperlinks pointing to the same Wikipedia article (see Figure 1), similar to the basic idea introduced in the Wikilinks dataset (Singh et al., 2012).", "Focusing on CD event coreference, we identify and target only Wikipedia articles denoting events.", "Anchor texts pointing to the same event article, along with some surrounding context, become candidate mentions for a corresponding event coreference cluster, undergoing extensive filtering.", "We apply our method to the English Wikipedia and extract WEC-Eng, our English version of a WEC dataset.", "The automatically-extracted data that we collected provides a training set of a very large scale compared to prior work, while our development and test sets underwent relatively fast manual validation.", "Due to the large scale of the WEC-Eng training data, current state-of-the-art CD coreference models cannot be easily trained and evaluated on it, for scalability reasons.", "[Figure 1: Example of two anchor texts ('100 metres', '100 m final') from different Wikipedia articles (Usain Bolt, Yohan Blake) pointing to the same event.] We therefore developed a new, more scalable, baseline model for the task, while adapting components of recent competitive", "within-document coreference models (Lee et al., 2017; Kantor and Globerson, 2019; Joshi et al., 2019).", "In addition to setting baseline results for WEC-Eng, we assess our model's competitiveness by presenting a new state-of-the-art on the commonly used ECB+ dataset.", "Finally, we propose that our automatic extraction and manual validation methods may be applied to generate additional annotated datasets, particularly for other languages.", "Overall, we suggest that future cross-document coreference models should be evaluated also on the WEC-Eng dataset, and address its complementary characteristics, while the WEC methodology
may be efficiently applied to create additional datasets.", "To that end, our dataset and code are released for open access.", "This section describes the main characteristics of notable datasets for CD event coreference (ECB+, MEANTIME, GVC).", "Table 1 presents statistics for all these datasets, as well as ours.", "We further refer to the Wikilinks dataset, which also leveraged Wikipedia links for CD coreference detection.", "ECB+ This dataset (Cybulska and Vossen, 2014), which is an extended version of the EventCorefBank (ECB) (Bejan and Harabagiu, 2010), is the most commonly used dataset for training and testing models for CD event coreference", "(Choubey and Huang, 2017; Kenyon-Dean et al., 2018; Barhom et al., 2019).", "This corpus consists of documents partitioned into 43 clusters, each corresponding to a certain news topic.", "In order to introduce some ambiguity and to limit the use of lexical features, each topic is composed of documents describing two different events (called sub-topics) of the same event type (e.g. two different celebrities checking into rehab facilities).", "Nonetheless, as can be seen in Table 1, the ambiguity level obtained is still rather low.", "ECB+ is relatively small, where on average only 1.9 sentences per document were selected for annotation, yielding only 722 non-singleton coreference clusters in total.", "MEANTIME Minard et al. (2016) proposed a dataset that is similar in some respects to ECB+, with documents partitioned into a set of topics.", "The different topics do not correspond to a specific news event but rather to a broad topic of interest (e.g. Apple, stock market).", "Consequently, different documents rarely share coreferring event mentions, resulting in only 11 event coreference clusters that include mentions from multiple documents, making this dataset less relevant for training CD coreference models.", "Gun Violence Corpus (GVC) This dataset (Vossen et al., 2018) was triggered by the same motivation that drove us, of overcoming the huge complexity of direct manual annotation of CD event coreference from scratch.", "To create the dataset, the authors leveraged a structured database recording gun violence events, in which the record for an individual event points at documents describing that event.", "The annotators were then asked to examine the linked documents and mark in them mentions of 5 gun-violence event classes (firing a gun, missing, hitting, injuring, death).", "Considering the recorded event as a pivot, all mentions found for a particular class were considered as coreferring.", "Using this process, they report an annotation rate of about 190 mentions per hour.", "As this corpus assumes a specific event structure scheme related to gun violence, it is more suitable for studying event coreference within a narrow domain rather than for investigating models for broad coverage event coreference.", "Wikilinks (Singh et al., 2012) is an automatically-collected large-scale cross-document coreference dataset, focused on entity coreference.", "It was constructed by crawling a large portion of the web and collecting as mentions hyperlinks pointing to Wikipedia articles.", "Since their method does not include mention distillation or validation, it was mostly used for training models for the Entity Linking task, particularly in noisy texts (Chisholm and Hachey, 2015; Eshel et al., 2017).", "We now describe our methodology for gathering a CD event coreference dataset from Wikipedia, and the WEC-Eng dataset created by applying it to the
English Wikipedia.", "We also denote how this methodology can be applied, with some language-specific adjustments, to other Wikipedia languages.", "Our data is collected by clustering together anchor texts of (internal) Wikipedia links pointing to the same Wikipedia concept.", "This is generally justified Cluster-1 (2010 Polish Air Force Tu-154 crash) Cluster-2 (Lokomotiv Yaroslavl plane crash) ...following the death of President Lech Kaczynski in a plane crash ... ...On 7 September 2011, nearly the entire team perished in an airplane crash ... ...following the Smolensk air disaster which killed the incumbent Polish president... ...fourth season was overshadowed by the Yaroslavl air disaster on 7 September... ...died when the presidential plane went down about a half mile from the runway... ...Early in September, tragedy rocked the hockey world...", "since all these links refer to the same real world theme described by that article, as illustrated in Figure", "1. Accordingly, our dataset consists of a set of mentions, each including the mention span corresponding to the link anchor text, the surrounding context, and the mention cluster ID.", "Since Wikipedia is not partitioned into predefined topics, mentions can corefer across the entire corpus (unlike most prior datasets).", "Since mention annotation is not exhaustive, coreference resolution is performed over the gold mentions.Thus, our goal is to support the development of CD event coreference algorithms, rather than of mention extraction algorithms.", "Our dataset also includes metadata information, such as source and target URLs for the links, but these are not part of the data to be considered by algorithms, as our goal in this work is CD coreference development rather than Event Linking (Nothman et al., 2012).", "In this paper, we focus on deriving from Wikipedia an event coreference dataset.", "The choice to focus on event coreference was motivated by two observations: (1) coreference resolution for Wikipedia anchor texts would be more challenging for event mentions than for entity mentions, since the former exhibits much higher degrees of both ambiguity and lexical diversity, and (2) event structures, with their arguments (such as participants, location and time) available in the surrounding context, would facilitate a more natural dataset for the corpus-wide CD coreference task, compared to Wikipedia entity mentions which are comprised mostly of named entities.", "Accordingly, we seek to consider only Wikipedia pages denoting events, then collect hyperlinks pointing at these pages.", "All anchor texts pointing to the same event then become the mentions of a corresponding event coreference cluster, and are extracted along with their surrounding paragraph as context (see Table 2).", "The following paragraphs describe this process in detail, and how it was applied to generate the WEC-Eng dataset from English Wikipedia.", "Event Identification Many Wikipedia articles contain an infobox 3 element.", "This element can be selected by a Wikipedia author from a pre-defined list of possible infobox types (e.g. Civilian At-tack, Game, Scientist, etc.), each capturing typical information fields for that type of articles.", "For example, the Scientist infobox type consists of fields such as birth date, awards, thesis etc.", "We leverage the infobox element and its parameters in order to identify articles describing events (e.g. accident, disaster, conflict, ceremony, etc.) rather than entities (e.g. 
"To that end, we start by automatically compiling a list of all Wikipedia infobox types that are associated with at least dozens of Wikipedia articles.", "Of those, we manually identify all infobox types related to events (WEC-Eng examples include Awards, Meetings, Civilian Attack, Earthquake, Contest, Concert and more).", "We then (manually) exclude infobox types that are frequently linked from related but non-coreferring mentions, such as sub-events or event characteristics, like location and time (see Appendix A.1 for further details).", "For WEC-Eng, we ended up with 28 English Wikipedia event infobox types (see Appendix A.2 for the full list).", "Gathering Initial Dataset Once the infobox event list is determined, we apply a fully automatic pipeline to obtain an initial crude version of the dataset.", "This pipeline consists of: (1) Collecting all Wikipedia articles (event pivot pages) whose infobox type is in our list.", "(2) Collecting all Wikipedia anchor texts (mentions) pointing to one of the pivot pages, along with their surrounding paragraph.", "(3) Filtering mentions that lack context or those belonging to Wikipedia metadata, such as tables, images, lists, etc., as well as mentions whose surrounding context contains obvious Wikipedia boilerplate code (i.e. HTML and JSON tags).", "(4) Finally, all collected mentions are clustered according to the pivot page they point to. [Table 3 header: Cluster; Mention link and context; Validation]", "Mention-level Filtering An event coreference dataset mined this way may still require some refinement in order to further clean the dataset at the individual mention level.", "Indeed, we observed that many Wikipedia editors have a tendency to position event hyperlinks on an event argument, such as a Named Entity (NE) related to the event date or location (as in the case of the disqualified mention for cluster 3 in Table 3).", "To automatically filter out many of the cases where the hyperlink is placed on an event argument instead of on the event mention itself, we use a Named Entity tagger and filter out mentions identified by one of the following labels: PERSON, GPE, LOC, DATE and NORP (for WEC-Eng we used the SpaCy Named Entity tagger (Honnibal and Montani, 2017)).", "Controlling Lexical Diversity So far, we addressed the need to avoid having invalid mentions in a cluster, which do not actually refer to the linked pivot event.", "Next, we would like to ensure a reasonably balanced lexical distribution of the mentions within each cluster.", "Ideally, it would be desirable to preserve the natural data distributions as much as possible.", "However, we observed that in many Wikipedia hyperlinks, the anchor texts used in an event mention may be lexically unbalanced.", "Indeed, Wikipedia authors seem to have a strong bias to use the pivot article title when creating hyperlinks pointing at that article, while additional ways by which the event can be referred to are less frequently hyperlinked.", "Consequently, preserving the original distribution of hyperlink terms would create a too low level of lexical diversity.", "As a result, training a model on such a dataset might overfit to identifying only the most common mention phrases, leaving little room for identifying the less frequent ones.", "To avoid this, we applied a simple filter that allows a maximum of 4 mentions having identical strings in a given cluster (we also release the non-controlled version of the dataset).", "This hyperparameter was tuned by making the lexical repetition level in our clusters more similar to that of ECB+, in which lexical diversity was not controlled (resulting in an average of 1.9 same-string mentions per cluster in the WEC-Eng train set, compared to 1.3 in ECB+).",
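A sketch of the NE-based argument filter; the paper states that spaCy's tagger was used, but the specific model name below is an assumption, and the whole-span heuristic is illustrative.

```python
import spacy

# Assumes a spaCy English model is installed (model choice is illustrative).
nlp = spacy.load("en_core_web_sm")
BLOCKED = {"PERSON", "GPE", "LOC", "DATE", "NORP"}

def keep_mention(mention_text: str) -> bool:
    """Drop mentions whose whole span is tagged as an event argument
    (person, place, date, etc.) rather than an event mention."""
    doc = nlp(mention_text)
    ents = [e for e in doc.ents]
    return not (len(ents) == 1 and ents[0].label_ in BLOCKED
                and ents[0].text == doc.text.strip())

print(keep_mention("7 September 2011"))        # likely False: a DATE argument
print(keep_mention("Yaroslavl air disaster"))  # True: an event mention
```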
"Using this process we automatically generated a large dataset.", "We designated the majority of the automatically generated data to serve as the WEC-Eng training set.", "The remaining data was left for the development and test sets, which underwent a manual validation phase, as described next.", "Inevitably, some noise would still exist in the automatically derived dataset just described.", "While partially noisy training data can be effective, as we show later, and is legitimate to use, the development set, and particularly the test set, should be of high quality to allow for proper evaluation.", "To that end, we manually validated the mentions in the development and test sets.", "For CD coreference evaluation, we expect to include a mention as part of a coreferring cluster only if it is clear, at least from reading the given surrounding context, that this mention indeed refers to the linked pivot event.", "Otherwise, we cannot expect a system to properly detect coreference for that mention.", "Such cases occasionally occur in Wikipedia, where identifying context is missing while relying on the provided hyperlink (see cluster-1 in Table 3, where the tournament year is not mentioned).", "Such mentions are filtered out by the manual validation.", "Additionally, mentions with misplaced boundaries that do not include the correct event trigger (Table 3, clusters 2-3), as well as mentions representing subevents, are filtered out.", "Summing up, to filter out these cases, we used a strict and easy-to-judge manual validation criterion, where a mention is considered valid only if: (1) the mention boundaries contain the event trigger phrase; (2) the mention's surrounding paragraph suffices to verify that this mention refers to the pivot page and thus belongs to its coreference cluster; and (3) the mention does not represent a subevent of the referenced event.", "Table 3 shows examples of validated vs. disqualified mentions judged for the WEC-Eng development set.",
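The lexical diversity control amounts to a per-cluster cap on identical mention strings; a minimal sketch, assuming mention dictionaries like those built earlier.

```python
from collections import Counter

MAX_SAME_STRING = 4  # the value used for WEC-Eng

def diversity_filter(cluster_mentions, limit=MAX_SAME_STRING):
    """Keep at most `limit` mentions with an identical (lowercased) string
    per coreference cluster, preserving the original order."""
    counts, kept = Counter(), []
    for m in cluster_mentions:
        key = m["mention"].lower()
        if counts[key] < limit:
            counts[key] += 1
            kept.append(m)
    return kept
```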
"For the WEC-Eng validation, we randomly selected 588 clusters and validated them, yielding 1,250 and 1,893 mentions for the development and test sets, respectively.", "Table 1 presents further statistics for WEC-Eng.", "The validation was performed by a competent native English speaker, to whom we explained the guidelines, after making a practice session over 150 mentions.", "Finally, all training mentions that appeared in the same (source) article with a validated mention were discarded from the training set.", "Our manual validation method is much faster and cheaper compared to a full manual coreference annotation process, where annotators would need to identify and compare all mentions across all documents.", "In practice, the average annotation rate for WEC-Eng yielded 350 valid mentions per hour, with the entire process taking only 9 hours to complete.", "In addition, since our validation approach is quite simple and does not require linguistic expertise, the eventual data quality is likely to be high.", "To assess the validation quality, one of the authors validated 50 coreference clusters (311 mentions), randomly selected from the development and test sets, and then carefully consolidated these annotations with the original validation judgements by the annotator.", "Relative to this reliable consolidated annotation, the original annotations scored at 0.95 Precision and 0.96 Recall, indicating the high quality of our validated dataset (the Cohen's Kappa (Cohen, 1960) between the original and consolidated annotations was 0.75, considered substantial agreement).", "In all, 83% of the candidate mentions were positively validated in the development and test sets, indicating a rough estimation of the noise level in the training set.", "That being said, we note that a majority of these noisy mentions were not totally wrong mentions but rather were filtered out due to the absence of substantial surrounding context or the misplacement of mention boundaries (see examples in Table 3).", "This section describes the WEC-Eng dataset content and some of its characteristics.", "The final WEC-Eng dataset statistics are presented in Table 1.", "Notably, the training set includes 40,529 mentions distributed into 7,042 coreference clusters, facilitating the training of deep learning models.", "The relatively high level of lexical ambiguity shown in the table is an inherent characteristic caused by many events (coreference clusters) sharing the same event type, and thus sharing the same terms, as illustrated in Table 2.",
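The reported agreement figures can be reproduced with standard metrics; the binary label vectors below are toy placeholders, not the real annotations.

```python
from sklearn.metrics import precision_score, recall_score, cohen_kappa_score

# 1 = valid mention, 0 = disqualified; original judgements vs. the
# consolidated (reference) annotation.
original     = [1, 1, 0, 1, 0, 1, 1, 0]
consolidated = [1, 1, 0, 1, 1, 1, 1, 0]

print("precision:", precision_score(consolidated, original))
print("recall:   ", recall_score(consolidated, original))
print("kappa:    ", cohen_kappa_score(consolidated, original))
```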
"Identifying that identical or lexically similar mentions refer to different events is one of the major challenges for CD coreference resolution, particularly in the corpus-wide setting, where different documents might refer to similar yet different events.", "With respect to the distinction between descriptive and referential event mentions, proposed in Section 1, WEC-Eng mentions are predominantly referential.", "This stems from the fact that its mentions correspond to hyperlinks that point at a different Wikipedia article, describing the event, while the mention's own article describes a different topic.", "On the other hand, ECB+, being a news dataset, is expected to include predominantly descriptive mentions.", "Indeed, manually analyzing a sample of 30 mentions from each dataset, in WEC-Eng 26 were referential while in ECB+ 28 were descriptive.", "This difference also imposes different lexical distributions for mentions in the two datasets, as sampled in Appendix A.3.", "When describing an event, verbs are more frequently used as event mentions, but nominal mentions are abundant as well.", "This is apparent for the predominantly descriptive ECB+, where 62% of the mentions in our sample are verbal vs. 38% nominal.", "On the other hand, when a previously known event is only referenced, it is mostly referred to by a nominal mention.", "Indeed, in the predominantly referential WEC-Eng, a vast majority of the mentions are nominal (93% in our sample).", "(The higher ambiguity level of the training set stems from its larger size, which includes many more coreference clusters in total, and accordingly more clusters under individual event types.)", "While our process was applied to the English Wikipedia, it can be adapted with relatively few adjustments and resources to other languages for which a large-scale Wikipedia exists.", "Here we summarize the steps needed to apply our dataset creation methodology to other Wikipedia languages.", "To generate a dataset, the first step consists of manually deciding on the list of suitable infobox types corresponding to (non-noisy) event types.", "Then, the automatic corpus creation process can be applied for this list, which takes only a few hours to run on a single CPU.", "After the initial dataset is created, a language-specific named-entity tagger should be used to filter mentions of certain types, like time and location (see Mention-level Filtering (3.2)).", "Next, the criterion for ensuring balanced lexical diversity in a cluster (see Controlling Lexical Diversity (3.2)), which was based on a simple same-string test for English, may need to be adjusted for languages requiring a morphological analyzer.", "Finally, as we perform manual validation of the development and test sets, this process should be performed for any new dataset (see Section 3.3).", "Supporting this step, our validation guidelines are brief and simple, and are not language specific.", "They only require identifying subevents and misplaced mention boundaries, as well as validating the sufficiency of the mention context.", "The current state-of-the-art CD event coreference system (Barhom et al., 2019) cannot be effectively trained on WEC-Eng for two main reasons: (1) computational complexity and (2) reliance on verbal SRL features.", "With respect to computation time, the training phase of this model simulates the clustering operations done at inference time, while recalculating new mention representations and pairwise scores after each cluster merging step.", "Consequently, training this
model on our large-scale training data, which is further not segmented into topics, is computationally infeasible.", "In addition, the model of Barhom et al. (2019) uses an SRL system to encode the context surrounding verbal event mentions, while WEC-Eng is mostly composed of nominal event mentions (Section 3.4).", "We therefore developed a simpler and more scalable model, establishing baseline results for WEC-Eng.", "As common in CD coreference resolution, we train a pairwise scorer s(i, j) indicating the likelihood that two mentions i and j in the dataset are coreferring, and then apply agglomerative clustering over these scores to find the coreference clusters.", "Following the commonly used average-link method (Choubey and Huang, 2017; Kenyon-Dean et al., 2018; Barhom et al., 2019), the merging score for two clusters is defined as the average mention-pair score s(i, j) over all mention pairs (i, j) across the two candidate clusters to be merged.", "For the pairwise model, we replicate the architecture of the mention representation and pairwise scorer from the end-to-end within-document coreference model in (Lee et al., 2017), while including the recent incorporation of transformer-based encoders (Joshi et al., 2019; Kantor and Globerson, 2019).", "Concretely, we first apply a pre-trained RoBERTa (Liu et al., 2019) language model (without fine-tuning), separately for each mention.", "Given a mention span i, we include as context T (set to 250) tokens to the left of i and T tokens to the right of i.", "Applying RoBERTa to this window, we represent each mention by a vector g_i, which is the concatenation of three vectors: the contextualized representations of the mention span boundaries (first and last) and the weighted sum of the mention token vectors according to the head-finding attention mechanism in (Lee et al., 2017).", "The two mention representations g_i and g_j, and the element-wise multiplication of these vectors, are then concatenated and fed into a simple MLP, which outputs a score s(i, j) indicating the likelihood that mentions i and j belong to the same cluster.", "The head-attention layer and the MLP are trained to optimize the standard binary cross-entropy loss over all pairs of mentions, where the label is 1 if they belong to the same coreference cluster and 0 otherwise.", "(We note that this optimization is different from the (linear) antecedent ranking in the model of Lee et al. (2017), since in the CD setting there is no linear order between mentions from different documents.)", "4.2 Experiments We first train and evaluate our model on the commonly used ECB+ dataset, to assess its relevance as an effective baseline model, and then evaluate it on WEC-Eng, setting baseline results for our dataset.", "We also present the performance of the challenging same-head-lemma baseline, which clusters mentions sharing the same syntactic-head lemma.", "For the experiment on ECB+, we follow the recent evaluation setting (Kenyon-Dean et al., 2018; Barhom et al., 2019), clustering gold mentions and concatenating all test documents into one meta-document, as proposed by Upadhyay et al. (2016).",
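A sketch of the pairwise scorer in PyTorch; the hidden size is an illustrative assumption, the mention dimension reflects that g is a concatenation of three RoBERTa-derived vectors, and the span-extraction code is omitted.

```python
import torch
import torch.nn as nn

class PairwiseScorer(nn.Module):
    """The two mention vectors and their element-wise product are
    concatenated and fed to a small MLP that outputs a coreference logit."""
    def __init__(self, dim: int, hidden: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([g_i, g_j, g_i * g_j], dim=-1)
        return self.mlp(pair).squeeze(-1)  # logit for "same cluster"

# Toy usage with random mention vectors (768-dim RoBERTa states, 3 per mention).
scorer = PairwiseScorer(dim=768 * 3)
g_i, g_j = torch.randn(4, 768 * 3), torch.randn(4, 768 * 3)
loss = nn.BCEWithLogitsLoss()(scorer(g_i, g_j), torch.ones(4))
```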
"For a fair comparison, we use the output of the pre-processing document clustering obtained by Barhom et al. (2019) and apply our coreference model separately on each predicted document cluster.", "For both datasets, the positive examples for training consist of all the mention pairs in the dataset that belong to the same coreference cluster.", "For the ECB+ model, we consider only negative examples that belong to the same subtopic, while for WEC-Eng we sample k (tuned to 10) negative examples for each positive one.", "Results are reported by precision, recall, and F1 for the standard coreference metrics MUC, B^3, CEAF-e, and the average F1 of the three metrics, using the official CoNLL scorer (Pradhan et al., 2012).", "4.3 Results Table 4 presents the results on ECB+.", "Our model outperforms the state-of-the-art results of both the JOINT model and the DISJOINT event model of Barhom et al. (2019), with gains of 1.3 and 2.3 CoNLL F1 points, respectively.", "The JOINT model jointly clusters event and entity mentions, leveraging information across the two subtasks, while the DISJOINT event model considers only event mentions, taking the same input as our model.", "These results assess our model as a suitable baseline for WEC-Eng.", "Table 5 presents the results on WEC-Eng.", "First, we observe that despite the certain level of noise in the automatically gathered training data, our model outperforms the same-head-lemma baseline by 9.2 CoNLL F1 points.", "In fact, it achieves similar error reduction rates relative to the lemma baseline as obtained over ECB+, where training is performed on clean but smaller training data (18.3% error reduction in ECB+ and 19.6% in WEC).", "Furthermore, the performance of both the same-head-lemma baseline and our model is substantially lower on WEC-Eng (Table 5) than on ECB+ (Table 4).", "This indicates the more challenging nature of WEC-Eng, possibly due to its corpus-wide nature and higher degree of ambiguity (Table 1).", "Further examining the different nature of the two datasets, we applied cross-domain evaluation, applying the ECB+-trained model on the WEC-Eng test data and vice versa.", "The results suggest that due to their different characteristics, with respect to mention type (descriptive vs. referential) and structure (topic-based vs. corpus-wide), a model trained on one dataset is less effective (by 8-12 points) when applied to the other (further details are presented in Appendix B.1).", "To obtain some qualitative insight about the learned models for both ECB+ and WEC-Eng, we manually examined their most certain predictions, looking at the top 5% of instances with the highest predicted probability and at the bottom 5%, of lowest predictions.", "Some typical examples are given in Appendix B.2.", "Generally, both models tend to assign the highest probabilities to mention pairs that share some lemma, and occasionally to pairs with different lemmas with similar meanings, with the WEC-Eng model making such lexical generalizations somewhat more frequently.", "Oftentimes in these cases, the models fail to distinguish between (gold) positive and negative cases, despite quite clear distinguishing evidence in the context, such as different times or locations.", "This suggests that the RoBERTa-based modeling of context may not be sufficient, and that more sophisticated models, injecting argument structure more extensively, may be needed.", "In both models, the lowest predictions (correctly) correspond mostly to negative mention pairs, and occasionally to positive pairs for which the semantic correspondence is less obvious (e.g., offered vs. candidacy).",
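Inference can be sketched as average-link agglomerative clustering over the pairwise probabilities; the stopping threshold below is an illustrative assumption, not a tuned value.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_mentions(pair_probs: np.ndarray, threshold: float = 0.5):
    """Average-link clustering; pair_probs is a symmetric (n x n) matrix
    of coreference probabilities from the pairwise scorer."""
    dist = 1.0 - pair_probs          # turn similarities into distances
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=1.0 - threshold, criterion="distance")

probs = np.array([[1.0, 0.9, 0.1],
                  [0.9, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
print(cluster_mentions(probs))  # e.g., [1 1 2]: the first two mentions corefer
```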
candidacy ).", "In addition, we observe that longer spans common in WEC-Eng challenge the span representation model of Lee et al. (2017).", "This model emphasizes mention boundaries, but these often vary across lexically-similar coreferring mentions with different word order.", "In this paper, we presented a generic low-cost methodology and supporting tools for extracting cross-document event coreference datasets from Wikipedia.", "The methodology was applied to create the larger-scale WEC-Eng corpus, and may be easily applied to additional languages with relatively few adjustments.", "Most importantly, our dataset complements existing resources for the task by addressing a different appealing realistic setup: the targeted data is collected across a full corpus rather than within topical document clusters, and, accordingly, mentions are mostly referential rather than descriptive.", "Hence, we suggest that future research should be evaluated also on WEC-Eng, while future datasets, particularly for other languages, can be created using the WEC methodology and tool suite, all made publicly available.", "Our released model provides a suitable baseline for such future work.", "We would like to thank Valentina Pyatkin, Daniela Stepanov and Oren Pereg for their valuable assistance in the data validation process.", "The work described herein was supported in part by grants from Intel Labs, Facebook, the Israel Science Foundation grant 1951/17, the Israeli Ministry of Science and Technology and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1)." ]
[ "abstain", "abstain", "method", "method", "method", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "method", "result", "method", "abstain", "method", "method", "abstain", "objective", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "other" ]
[ "Nowadays social media platforms are the most popular way for people to share information, from work issues to personal matters.", "For example, people with health disorders tend to share their concerns for advice, support or simply to relieve suffering.", "This provides a great opportunity to proactively detect these users and refer them as soon as possible to professional help.", "We propose a new representation called Bag of Sub-Emotions (BoSE), which represents social media documents by a set of fine-grained emotions automatically generated using a lexical resource of emotions and subword embeddings.", "The proposed representation is evaluated in the task of depression detection.", "The results are encouraging; the usage of fine-grained emotions improved the results from a representation based on the core emotions and obtained competitive results in comparison to state of the art approaches.", "Mental Disorders affect millions of people around the world.", "Out of these disorders, depression has been ranked among the most common, even with a high incidence in mortality rates (Kessler et al., 2017; Mathers and Loncar, 2006).", "It is imperative then, to come with effective approaches to detect depression before it causes irreparable damage to mere individuals that suffer it and their loved ones.", "In a connected world where we live, it is very normal to share personal information, matters and concerns in social media platforms.", "This fact poses an opportunity, since the understanding of depression through the analysis of social media documents increases the chances to detect people that present signs of depression and could lead to provide them professional help (Guntuku et al., 2017; Pestian et al., 2010).", "Several works in literature have explored how to use linguistic and sentiment analysis to detect depression (Xue et al., 2013).", "For example, in (Huang et al., 2014) the authors applied sentiment analysis (SA) to assign polarity to tweets.", "They count the number of positive, negative, neutral words, and the ratio of the negative and positive words, and found that depressed users post longer emotional tweets.", "The work of Wang et al. (2013) enriched SA with features derived from psychological research like the use of first person pronouns, user social interaction and user behaviors in micro blogs.", "An interesting finding is that the time of the posts is useful to detect people with high risk of committing suicide.", "In a recent work, Chen et al. 
(2018) proposed to use emotions with the aim of identifying depression in Twitter users.", "That study openly exposed the potential of using discrete emotions as features, instead of only using linguistic features and broad categories to represent them.", "To further investigate this latter point, in this study we propose to model emotions in a fine-grained way and use them to build a new representation to tackle the problem of detecting depression in users of social media.", "We construct these fine-grained emotions using lexical information extracted from a lexical resource of emotions combined with subword embeddings.", "The leading hypothesis of our study is that emotions could be better, and more flexibly, represented at a lower level, instead of only using broad categories such as anger, joy, negative or positive.", "Figure 1 depicts our proposed approach.", "In a first step, we compute a set of fine-grained emotions for each broad emotion described in the lexical resource by Mohammad and Turney (2013).", "[Figure 1: Diagram that represents the creation of the Bag of Sub-Emotions (BoSE) representation.]", "Then, we use the obtained fine-grained emotions to mask the texts, eventually representing them by a histogram of their frequencies.", "Accordingly, we named this new representation BoSE, for Bag of Sub-Emotions.", "In the following sections we detail each step of our proposed approach.", "To generate the fine-grained emotions we use a lexical resource based on eight recognized emotions, namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise and Trust (Ekman and Davidson, 1994), and two main sentiments, Positive and Negative.", "We represent this as E = {E_1, E_2, ..., E_10}, where E is the set of emotions in the lexical resource and", "E_i = {w_1, ..., w_n} is the set of words associated with the emotion E_i.", "We compute a word vector for each word using pre-trained Wikipedia subword embeddings from FastText (Bojanowski et al., 2016) of size 300, and then we create subgroups of words by emotion using the Affinity Propagation (AP) clustering algorithm (Thavikulwat, 2008).", "This AP clustering algorithm has several appealing characteristics, e.g., it does not employ artificial elements (centroids) to create clusters and it does not require specifying the number of groups before running the algorithm.", "(In the rest of the paper we refer to these sentiments as emotions as well.)", "To give an idea of how the vocabulary was distributed among emotions, and of the number of clusters generated after applying AP to the lexical resource, we present Table 1.",
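A sketch of the fine-grained emotion construction with scikit-learn's Affinity Propagation; the embedding table and emotion word lists are placeholders for the FastText vectors and the NRC-style lexicon.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def fine_grained_emotions(emotion_words: dict, vec: dict) -> dict:
    """Cluster each emotion's word vectors with AP; return one centroid
    (average vector) per fine-grained emotion, labeled e.g. 'anger3'."""
    centroids = {}
    for emotion, words in emotion_words.items():
        X = np.stack([vec[w] for w in words if w in vec])
        labels = AffinityPropagation(random_state=0).fit_predict(X)
        for k in np.unique(labels):
            centroids[f"{emotion}{k}"] = X[labels == k].mean(axis=0)
    return centroids
```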
"After this process, each (broad) emotion is represented by a set of fine-grained emotions, E_i = {F_i1, ..., F_ij}, where each F_ij is a subset of the words from E_i and is represented by the average vector of their respective embeddings.", "These subgroups of words allow separating each broad emotion into different topics that help identify and capture the fine-grained emotions expressed by users in their posts.", "Figure 2 presents some examples of groups of fine-grained emotions that were automatically computed by our approach.", "We can appreciate that words with similar contexts tend to group together, as shown in each column.", "We can also notice that words corresponding to the same broad emotion cover very different topics; for example, in the Anger emotion, the group anger3 is related to fighting and battles, whereas the group anger2 is about growls or loud noises.", "Another interesting example is the groups from the Surprise emotion, where groups express different kinds of surprises, like art and museums (surprise2), accidents and disasters (surprise1), as well as magic and illusions (surprise3).", "Text masking: In this step documents are masked by replacing each word with the label of its closest fine-grained emotion.", "To this end, we compute the vector representation of each word using subword embeddings from FastText, then we measure the distance of each word vector against the centroid vectors of all fine-grained emotions by means of the cosine similarity, and, finally, we substitute each word by the label of its closest fine-grained emotion.", "To illustrate this process, consider the text 'Leave no stone unturned', which will be masked as 'fear2 negative8 anger10 anticipation3'.", "Text representation: Based on the masked documents, we build their BoSE representations by computing a frequency histogram of their fine-grained emotions.", "To build these representations we follow two different approaches: i) similar to the Bag-of-Words representation, we create a histogram counting the number of occurrences of each fine-grained emotion in the text (we refer to this representation as BoSE-unigrams), and ii) we create a histogram counting the number of occurrences of fine-grained emotion sequences in the text (we refer to this representation as BoSE-ngrams).", "For the latter representation, we tested different sizes and combinations of sequences; using unigrams and bigrams we obtained the best performance for this task.", "Preprocessing:", "For our experiments, we normalized the texts by removing special characters and lowercasing all the words.", "After preprocessing we masked the texts using the fine-grained emotions.", "Classification: Once the BoSE representation is built, we selected the most relevant features (i.e., sequences of fine-grained emotions) using the chi-squared distribution (Walck, 2007).", "Then, we used a Support Vector Machine (SVM) with a linear kernel and C = 1 to classify the documents.", "Baselines: To properly evaluate the relevance of using fine-grained emotions in the detection of depression, we considered a representation based on the occurrences of broad emotions and the words that do not have an associated emotion.", "We named this approach Bag-of-Emotions (BoE).", "We also compared our results against a Bag-of-Words representation based on word unigrams and n-grams, since they are the common baseline approaches for text classification.", "Additionally, we compared our results against the F1 results from the participants of the eRisk 2017 and 2018 evaluation tasks (Losada et al., 2017, 2018).",
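The masking and BoSE steps can be sketched as follows, reusing the centroids from the previous snippet; coverage of every token by the embedding table is assumed here (FastText's subword model can back off for unseen words).

```python
import numpy as np
from collections import Counter

def mask_text(tokens, centroids, vec):
    """Replace each word with the label of its closest fine-grained emotion,
    chosen by cosine similarity against the centroid vectors."""
    labels, mats = list(centroids), np.stack(list(centroids.values()))
    mats = mats / np.linalg.norm(mats, axis=1, keepdims=True)
    masked = []
    for w in tokens:
        v = vec[w] / np.linalg.norm(vec[w])
        masked.append(labels[int(np.argmax(mats @ v))])
    return masked

def bose_ngrams(masked, n_values=(1, 2)):
    """BoSE representation: frequency histogram of label n-grams
    (unigrams and bigrams gave the best results in this task)."""
    feats = Counter()
    for n in n_values:
        for i in range(len(masked) - n + 1):
            feats[" ".join(masked[i:i + n])] += 1
    return feats
```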
"Data Collections: We evaluated our approach in the task of depression detection, using the data sets from the eRisk 2017 and 2018 evaluation tasks (Losada et al., 2017, 2018).", "These data sets contain Reddit posts from several users.", "The users who explicitly mentioned that they were diagnosed with depression were automatically labeled as positive.", "Vague expressions like 'I think I have depression' or 'I'm depressed' were discarded; the rest of the users were labeled as negative.", "Table 2 shows some numbers from these data sets; (Losada and Crestani, 2016) describes these collections in more detail.", "The goal of our first experiment was to evaluate the appropriateness of the BoSE representation to identify depressed users.", "To accomplish this, we compared its performance against the results from a traditional BoW representation as well as against a representation considering only the broad emotions.", "Table 3 shows the F1 performance over the positive class for the BoW, BoE and BoSE approaches.", "It can be noticed that the BoSE representation outperforms both baseline results, particularly when sequences of fine-grained emotions were considered.", "To better characterize the BoSE representation, we evaluated it without considering the clusters associated with the positive and negative sentiments.", "We refer to these experiments as BoSE8.", "Results from this variant show a drop in performance, confirming that sentiment information is relevant to the identification of depressed users.", "To further evaluate the relevance of the BoSE representation, Table 4 compares its results against those from the first three places at the eRisk 2017 and 2018 evaluation tasks (Losada et al., 2017, 2018).", "[Table 3 - F1 results over the positive class against baseline methods (Dep'17 / Dep'18): BoW-unigrams 0.59 / 0.58; BoE-unigrams 0.57 / 0.60; BoSE8-unigrams 0.56 / 0.60; BoSE-unigrams 0.61 / 0.61; BoW-ngrams 0.58 / 0.60; BoE-ngrams 0.61 / 0.58; BoSE8-ngrams 0.57 / 0.59; BoSE-ngrams 0.64 / 0.63] To contextualize this comparison, consider that the first place in both years (Trotzek et al., 2017, 2018) defined multiple strategies and considered a wide range of features to build their models; e.g., they extracted readability features, LIWC features, user-level linguistic metadata, neural word embeddings and specific terms related to depression, and used models based on LSTM and convolutional neural networks, combining four machine learning models in an ensemble.", "Other top performers (Villegas et al., 2017) combined a semantic representation considering partial information with temporal variation features.", "In (Funez et al., 2018) they implemented two models: one based on flexible temporal variation of terms, and a second model based on sequential incremental classification.", "From the obtained results we highlight the following observations:", "1. Our approach outperformed the traditional BoW representation in both data sets, indicating that considering emotional information is quite relevant for the detection of depression in online communications.", "2. The use of fine-grained emotions as features helps to improve the results over a representation that only considers broad emotions.", "This result confirms our hypothesis that depressive users tend to express their emotions in a different way than non-depressive users.",
"3. Our approach obtained comparable results to the best reported approaches in both data sets.", "It is important to highlight that the participants of these tasks tested different complex models with a wide range of features and sophisticated approaches based on traditional and deep learning representations of texts, whereas ours only relies on the use of fine-grained emotions as features.", "To offer a glimpse of what fine-grained emotions actually capture, we selected the most relevant sequences for the detection of depression according to the chi-squared distribution.", "Table 5 shows some relevant sequences of fine-grained emotions as well as some examples of the words that correspond to these sequences.", "Most of the fine-grained emotions that present high relevance for the detection of depression are related to negative topics; for example, the anger emotion is associated with the feeling of abandonment or being unsociable, and the disgust emotion is related to dilution, insecurity and desolation.", "These fine-grained emotions seem to capture the way a depressed user writes about himself or his environment.", "In this study we proposed a new representation built on fine-grained emotions that were automatically generated using a lexical resource of emotions and subword embeddings from FastText.", "Using these fine-grained emotions, our approach can automatically capture the more specific topics and emotions that are expressed in documents by users that have depression.", "BoSE obtained better results than the proposed baselines and also improved the results of only using broad emotions.", "It is worth mentioning the simplicity and interpretability of our approach, which contrasts with the best previous eRisk competition methods that are much more complex and difficult to interpret (most of the participants used plenty of different features and a vast range of models, including deep ones).", "Our results encourage attempting this approach, based on fine-grained emotions, in other relevant health and safety tasks such as the detection of anorexia and self-harm.", "In addition, we also plan to explore the learning of emotion-based representations by means of a deep neural network, from which we could exploit local invariance properties to model fine-grained emotions." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "result", "objective", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "result", "objective" ]
[ "Conducting a manual evaluation is considered an essential part of summary evaluation methodology.", "Traditionally, the Pyramid protocol, which exhaustively compares system summaries to references, has been perceived as very reliable, providing objective scores.", "Yet, due to the high cost of the Pyramid method and the required expertise, researchers resorted to cheaper and less thorough manual evaluation methods, such as Responsiveness and pairwise comparison, attainable via crowdsourcing.", "We revisit the Pyramid approach, proposing a lightweight sampling-based version that is crowdsourcable.", "We analyze the performance of our method in comparison to original expert-based Pyramid evaluations, showing higher correlation relative to the common Responsiveness method.", "We release our crowdsourced Summary-Content-Units, along with all crowdsourcing scripts, for future evaluations.", "Evaluating content quality of summaries is an integral part of summarization research.", "Measuring the performance of a summarization system can be done through either automatic or manual evaluation.", "An automatic evaluation, in practice working at the lexical level, provides an inexpensive means of measuring the validity of a system, both for system comparisons and for quick development cycle testing.", "Due to the shallowness of the automatic approaches, their reliability is often perceived as insufficient (Owczarzak et al., 2012; Chaganty et al., 2018).", "This calls for the more expensive manual evaluation, which employs human-in-the-loop protocols for assessment.", "The Pyramid method (Nenkova and Passonneau, 2004) is a prominent manual evaluation methodology that is considered highly reliable for comparing summarization systems.", "It relies on a small set of manually-crafted reference summaries, out of which all summary content units (SCUs) are manually extracted.", "System summaries are then manually checked for coverage of each individual SCU, from which an overall system score is derived.", "The Pyramid evaluation method's reliability comes at a cost.", "It requires laborious manual work performed by annotators who must browse through non-trivial guidelines (Passonneau, 2006).", "Due to these drawbacks, it was only used in a few DUC and TAC (NIST, 2014, 2018) benchmarks.", "Instead, summarization work in recent years has mostly employed simpler manual evaluation approaches, such as Responsiveness and pairwise comparison, which do not rely on reference summaries and can be attained via crowdsourcing.", "Yet, these methods are quite subjective, since evaluators need to provide only a single global judgment for the quality of a summary (or a pair of summaries).", "Such judgments are far more subjective than the Pyramid score, which is derived from many, more objective, local decisions, each judging independently the presence of an individual SCU.", "Indeed, it was shown that the above subjective crowdsourcing-based evaluation methods are not reliable enough to produce consistent scores across experiments (Gillick and Liu, 2010).", "We propose a simplified crowdsourcable and reproducible version of the Pyramid method, that suggests appealing advantages over prior crowdsourcable evaluation methods.", "Like the original Pyramid, our method leverages the strong signal of the reference summaries and similarly bases its score on less subjective SCU judgments.", "In contrast to the original Pyramid, we rely on statistical sampling rather than exhaustive SCU extraction and testing, lowering overall 
cost.", "Empirically, our method correlates with the original Pyramid scores better than the common Responsiveness method, and shows better stability.", "The Pyramid method (Nenkova and Passonneau, 2004) consists of two manual phases.", "The first phase is pyramid creation , performed once when a dataset is constructed, per each input topic to be summarized (either a single document or a set of documents).", "In this phase, experts exhaustively extract all SCU contributors (mentions), each being a text span describing an individual fact.", "SCU contributors are extracted from several reference summaries of the source text.", "Coreferring SCU contributors across reference summaries are then merged into a single SCU , which is given a representative label.", "Each SCU is then assigned a weight, equal to the number of reference summaries in which it was found, indicating its salience.", "The second phase is system evaluation , performed over the summaries produced by the evaluated system.", "Each Pyramid SCU for the source text is manually checked for its presence in the given system summary, whose Pyramid score is then computed as a normalized sum of the weights of the SCUs it contains.", "The overall system score is defined as the average Pyramid score over all its evaluated summaries.", "Although certain normalization variants attempt to weigh in SCU precision, the score is essentially an absolute recall-style interpretation reflecting the system's ability to cover the content units found in the reference summaries.", "Such a fairly robust score allows, in principle, system comparison across experiments (Nenkova and Passonneau, 2004).", "We note that due to the Pyramid method's reliability, some research has been carried out on simulating the Pyramid method as a fully automatic one (Yang et al., 2016; Hirao et al., 2018).", "The hope of such a line of work is to find an automatic evaluation method that is more reliable than the commonly used ones, by taking the reference summary semantic content into account.", "Despite these efforts, automated Pyramid evaluations did not make their way yet to mainstream summary evaluation practices, where variants of the ROUGE metric (Lin, 2004) still prevail.", "In any case, as this paper focuses on manual evaluation, we compare our results to those of the manual Pyramid.", "The Responsiveness method, introduced in DUC 2003 (NIST, 2003), does not require reference summaries.", "Instead, human evaluators typically read both the source text and the system summary.", "They then assign a single subjective score on a Likert scale for the summary quality, often with respect to a topic statement or guiding question.", "Finally, compared systems are ranked by the average score of their summaries.", "This method naturally developed into a crowdsourcing task, and is now used frequently in some variants (Grusky et al., 2018; Paulus et al., 2018).", "Another common crowdsourcable evaluation method is pairwise comparison (Gao et al., 2018; Falke et al., 2017; Fan et al., 2018): an evaluator is asked to judge which of two competing summaries of the same text is superior, usually while observing the source text.", "This protocol allows comparing only two systems at a time, where the superior is determined by the total votes over all input texts.", "The obvious disadvantage of the approach is the difficulty of comparing many systems, in the absence of absolute scores.", "Also, this method may tend to suffer from transitivity inconsistencies when comparing multiple 
system pairs (Gillick and Liu, 2010).", "The lightweight crowdsourcable Pyramid version we propose aims to preserve the interpretability and relative objectiveness of the Pyramid scores.", "This could provide absolute scores for comparing multiple systems (which the pairwise method cannot), in a more reliable manner than Responsiveness evaluation.", "Our Lightweight Pyramid method mimics the two phases of the original Pyramid protocol in a crowdsourced setting, with some adjustments.", "Pyramid creation.", "The input for this phase is several reference summaries of a topic.", "Each reference is presented to two crowd workers, who are asked to extract eight SCU-like statements, yielding 16 potential SCUs per reference summary.", "The instructions guide workers to copy-and-paste extractions from the text, possibly modifying them into stand-alone sentences, which should", "(a) be brief and focused on a single fact;", "(b) capture important information;", "(c) rely solely on the text rather than general knowledge of the worker.", "Further, the statements should appear in different places in the text.", "The copy-and-paste approach allows us to easily detect and filter duplicate statements extracted from the same reference by both annotators, which we identify via bag-of-lemmas cosine similarity.", "Further, overly long sentences are filtered.", "In our experiments (see Section 4), we were left with an average of about 13 SCUs per reference summary.", "Then, we take the union of SCUs from all reference summaries, which yielded in our experiments 51 SCUs on average per topic, coming from four reference summaries.", "These SCUs are used to create tasks for the system evaluation phase.", "Recall that in the original Pyramid, SCUs are exhaustively collected; then, coreferring SCUs between reference summaries are merged and weighted by the number of reference summaries from which they originate.", "In contrast, our method enables using a sample of SCUs for evaluation, out of the SCUs collected in this phase (we have sampled, for uniformity, 32 SCUs per topic).", "Further, it avoids posing the task of merging coreferring SCUs across references, which is difficult and error-prone, particularly when expected from crowd workers.", "Instead, we rely on the higher likelihood of a repeated fact to be included in our sample, possibly more than once.", "This implicitly increases the expected impact of repeated facts on our evaluation.", "System evaluation.", "In this phase, a crowd worker is presented with a system summary and a fixed-sized small set of SCUs (we used sets of 16 SCUs).", "The worker is asked whether each SCU can be inferred from the system summary text.", "The guidelines advise workers to refrain from using general knowledge and to ignore minor content differences between the SCU and the system summary.", "Each SCU should be assessed by a few crowd workers, to ensure the stability of the results (in our experiments, each SCU was assigned for evaluation to 5 workers).", "Scoring.", "Following common practice in crowdsourcing, we filter out noisy workers who had high disagreement with others (pairwise worker agreement < 0.5).", "Then, using the remaining answers, we take the majority vote for each SCU to decide whether it appears in the system summary.", "We resolve ties with a 'not present' default, as the more likely answer.", "(In our experiments, we also examined the option of using the average answer, which was significantly worse.)", "We then compute the system summary score 
as the percentage of SCUs it matched out of the set of judged SCUs.", "A system's final score is its average score over all topics.", "Experimental setup.", "We used the DUC 2005 and 2006 multi-document summarization datasets (NIST, 2014), which contain expert evaluations for both Pyramid and Responsiveness.", "Each of the two datasets includes 20 document clusters, each pertaining to a target topic, with four reference summaries and 25 (2005) or 22 (2006) system summaries per topic.", "All summaries are 250 words long.", "On average, 105 weighted SCUs were extracted, by experts, for each topic.", "In comparison, our setup gathers 32 sampled crowdsourced unweighted SCUs.", "As suggested in Dang (2006) and Passonneau et al. (2006), the 2005 data tends to be harder to evaluate than the 2006 data, seemingly due to less natural document clusters with respect to practical summarization settings.", "Passonneau et al. (2006) show that the document sets in 2005 were overall more difficult for systems to summarize, as reflected by a lower average Pyramid score across all systems.", "The 2005 topics are more complex as they yield fewer general, context-independent SCUs.", "For example, as Dang (2006) indicates, there are more topics that had a relatively large number of specific named entities.", "Consequently, due to the topic hardness, Passonneau et al. (2006) indicate very few significant differences between overall system Pyramid scores, as evidenced by Tukey's HSD test.", "While 2006 systems can be divided into eight significantly different Pyramid score groups, in 2005 only two such groups emerge.", "Additionally, the guidelines and scoring method were slightly improved in 2006, relative to 2005.", "For these reasons, we focused on the 2006 dataset, fully annotating it, while utilizing half the topics, randomly chosen, from the 2005 data.", "Using Amazon Mechanical Turk (https://www.mturk.com/), we qualified workers with over 5000 approved assignments and a 99% approval rate.", "We paid workers $0.50 per reference summary annotation assignment (generating 8 SCUs), yielding a total Pyramid creation cost of $48 (including fees) for the 2005 dataset (10 topics) and $96 for 2006 (20 topics).", "Pyramid creation cost per topic is thus $4.80.", "For the system summary evaluation phase we split the 32 SCUs into two tasks of 16 SCUs each, in order to ensure that the crowdsourcing platform assigns each SCU to 5 distinct workers.", "We paid workers $0.45, and evaluated all 25 (2005) and 22 (2006) systems.", "The total benchmark evaluation cost was $1350 (including fees) for 2005 and $2376 for 2006, equaling $5.40 per system per topic, or $108 per system evaluation over all 20 topics.", "We release our SCU dataset for DUC 2005 and DUC 2006 (https://github.com/OriShapira/LitePyramids) as a complementary resource, accompanied by the HTML pages for our tasks on Amazon Mechanical Turk and processing and evaluation scripts.", "In the SCU dataset, we mark the SCUs we used in our experiments, including their grouping as tasks in the system evaluation phase.", "These enable future crowdsourced Pyramid evaluations of new systems on these datasets, as well as developing new datasets with crowdsourced pyramids.", "Correlations with original Pyramid.", "We first assess our evaluation methodology by computing the correlation of its system scores (and rankings) to those of the original Pyramid.", "These are compared with the analogous correlations for the expert Responsiveness scores, available in the 
datasets.", "As seen in Table 1, our method produces better correlations, and substantially so on the more characteristic 2006 dataset.", "Importantly, notice that Responsiveness scores here were obtained by experts , and therefore the gap for crowdsourced Responsiveness is expected to be greater, further indicating the advantage of our method as a crowdsourcable approach.", "Stability.", "As an additional assessment, we test the robustness of our method, in terms of its reproducibility.", "To that end, we reran the system evaluation phase on eight randomly chosen systems of the 2006 data, which enabled us to compare our results with those obtained by Gillick and 3 https://github.com/OriShapira/ LitePyramids 4 8 12 16 20 24 28 32 0 .", "Liu (2010) for crowdsourced Responsiveness for a similar setting (8 random systems of the 2006 dataset).", "Notably, the lightweight Pyramid obtained an average 10% relative change in overall system scores, whereas crowdsourced Responsiveness exhibited lower stability with an average of 24% relative change.", "Cost analysis.", "We analyze the impact of randomly reducing the various resources involved in our methodology, aiming to see whether overall cost might be reduced without harming correlation with the original Pyramid.", "The results below, reported as averages over 70 re-sampled iterations for each setting, suggest that such cost reductions would be harmful.", "Number of workers.", "Reducing the number of workers per SCU judgment from five to three drops the correlations by about 8 points in 2006 and 6 points in 2005.", "Number of SCUs.", "Figure 1 shows that correlation increases as a function of the number of judged SCUs per topic.", "The correlation improvement seems to stabilize around 32 SCUs.", "Number of topics.", "Figure 2 presents the effect of the number of topics on which systems are evaluated, showing a steady correlation increase, which does not necessarily saturate at the number of 20 topics available in these datasets.", "Qualitative analysis.", "To identify certain limitations of our methodology, we manually analyzed some suspected topics, for which either worker Krippendorff agreement or correlation with the original Pyramid was low.", "We noticed two interesting phenomena.", "First, some topics seem inherently more difficult to evaluate, particularly for crowd workers.", "Such difficulty may be attributed to SCUs that are more difficult to assess or to less coher-2 4 6 8 10 12 14 16 18 20 0 .", "ent system summaries, due to the respective document set's complexity.", "Indeed, Passonneau et al. 
(2006) indicated that topic characteristics and annotator training experience affect evaluation quality.", "It seems worthwhile investigating, in future research, whether correlations improve by further increasing the overall number of topics, reducing the impact of the problematic ones.", "Another possibility may be to filter out topics with low annotator agreement when computing systems' scores by the lightweight Pyramid method.", "We hypothesize that doing so might improve the reliability of this method, and hence increase its correlation with the original, expert-based, Pyramid method (when the latter is computed over all test topics).", "Indeed, in a preliminary test, we filtered out those 20% of the topics with the lowest Krippendorff annotator agreement.", "This yielded a 6-point Spearman score increase (relative to the correlations reported in Table 1) when correlated with the original Pyramid ranking, as computed over the full set of topics.", "We note that while Figure 2 shows a slight decrease in average correlation when removing 4 random topics, removing specifically the 4 low-agreement topics seems to improve it notably.", "Further analysis might conclude that filtering problematic topics generically improves the reliability of the lightweight Pyramid method.", "The second phenomenon observed among the difficult topics was that in some, the 32 sampled SCUs seem to miss important information, causing an unjustified degradation in system scores.", "In analogy to the variance in the number of SCUs in exhaustive Pyramids, it would be interesting to investigate methods for varying the sample size in our lightweight approach, based on some automatically detected parameters of topic complexity.", "To the best of our knowledge, our method is the first to mimic the reliable Pyramid method as an affordable crowdsourced procedure.", "Our experiments suggest that this lightweight Pyramid is more reliable than the common Responsiveness method.", "It also allows comparing multiple systems with absolute scores, which pairwise comparison does not.", "Future work may improve correlation with the original Pyramid, or reduce annotation cost, by following our qualitative analysis and by reducing crowdsourcing noise (via qualification tests, enhanced guidelines, and post-processing result normalization (Hovy et al., 2013; Plank et al., 2014; Hosseini et al., 2012)).", "It would be appealing to investigate applying our methods to additional evaluation datasets, for which original Pyramid evaluations are not available for comparison.", "For example, addressing the CNN/DailyMail dataset (Nallapati et al., 2016) would involve testing single-document summarization, utilizing a single reference summary per source text and addressing varying lengths of reference and system summaries.", "The Pyramid method is mainly a measurement of recall, which thus also applies to our lightweight Pyramid; but other measurements for summary quality, such as precision, non-redundancy and grammaticality, may also be considered.", "In particular, it may be possible to extend our design of crowdsourcing tasks to supply indications for these complementary measurements as well.", "We would like to thank the anonymous reviewers for their constructive comments, as well as Ani Nenkova for her helpful remarks.", "This work was supported in part by the Bloomberg Data Science Research Grant Program; by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grants DA 1600/1-1 and GU 798/17-1); by the BIU 
Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office; by the Israel Science Foundation (grants 1157/16 and 1951/17); by DARPA Young Faculty Award YFA17-D17AP00022; and by the ArguAna Project GU 798/20-1 (DFG)." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Implicit relation classification on Penn Discourse TreeBank (PDTB) 2.0 is a common benchmark task for evaluating the understanding of discourse relations.", "However, the lack of consistency in preprocessing and evaluation poses challenges to fair comparison of results in the literature.", "In this work, we highlight these inconsistencies and propose an improved evaluation protocol.", "Paired with this protocol, we report strong baseline results from pretrained sentence encoders, which set the new state-of-the-art for PDTB 2.0.", "Furthermore, this work is the first to explore fine-grained relation classification on PDTB 3.0.", "We expect our work to serve as a point of comparison for future work, and also as an initiative to discuss models of larger context and possible data augmentations for downstream transferability.", "Understanding discourse relations in natural language text is crucial to end tasks involving larger context, such as question-answering (Jansen et al., 2014) and conversational systems grounded on documents (Saeidi et al., 2018; Feng et al., 2020).", "One way to characterize discourse is through relations between two spans or arguments (ARG 1/A RG", "2) as in the Penn Discourse TreeBank (PDTB) (Prasad et al., 2008, 2019).", "For instance: [ Arg 1 I live in this world ,] [ Arg 2 assuming that there is no morality, God or police. ] (wsj_0790) Label: EXPANSION", ".M ANNER", ".A RG 2AS-MANNER The literature has focused on implicit discourse relations from PDTB 2.0 (Pitler et al., 2009; Lin et al., 2009), on which deep learning has yielded substantial performance gains (Chen et al., 2016; Liu and Li, 2016; Lan et al., 2017; Qin et al., 2017; Bai and Work done while at IBM Research. Zhao, 2018; Nguyen et al., 2019, i.a. ).", "However, inconsistencies in preprocessing and evaluation such as different label sets (Rutherford et al., 2017) pose challenges to fair comparison of results and to analyzing the impact of new models.", "In this paper, we revisit prior work to explicate the inconsistencies and propose an improved evaluation protocol to promote experimental rigor in future work.", "Paired with this guideline, we present a set of strong baselines from pretrained sentence encoders on both PDTB 2.0 and 3.0 that set the state-of-the-art.", "We furthermore reflect on the results and discuss future directions.", "We summarize our contributions as follows: We highlight preprocessing and evaluation inconsistencies in works using PDTB 2.0 for implicit discourse relation classification.", "We expect our work to serve as a comprehensive guide to common practices in the literature.", "We lay out an improved evaluation protocol using section-based cross-validation that preserves document-level structure.", "We report state-of-the-art results on both top-level and second-level implicit discourse relation classification on PDTB 2.0, and the first set of results on PDTB 3.0.", "We expect these results to serve as simple but strong baselines that motivate future work.", "We discuss promising next steps in light of the strength of pretrained encoders, the shift to PDTB 3.0, and better context modeling.", "In PDTB, two text spans in a discourse relation are labeled with either one or two senses from a three-level sense hierarchy.", "PDTB 2.0 contains around 43K annotations with 18.4K explicit and 16K implicit relations in over 2K Wall Street Journal (WSJ) articles.", "Identifying implicit relations (i.e., without explicit discourse markers such as Model Ji Lin P&K X-Accuracy Majority 
but) is more challenging than explicitly signaled relations (Pitler et al., 2008).", "Table 1: Accuracy on PDTB 2.0 L2 classification (columns: Ji / Lin / P&K / X-Accuracy; standard deviations in parentheses, dashes mark unreported values): Majority class 26.18 / 26.11 / 28.54 / 26.42; Adversarial Net (Qin et al., 2017) 46.23 / 44.65 / - / -; Seq2Seq+MemNet (Shi and Demberg, 2019) 47.83 / 45.82 / - / 41.29; ELMo (Bai and Zhao, 2018) 48.22 / 45.73 / - / -; ELMo, Memory augmented (Bai et al., 2019) 49.15 / 46.08 / - / -; Multitask learning (Nguyen et al., 2019) 49.95 / 46.48 / - / -; BERT+MNLI (Nie et al., 2019) - / - / 53.7 / -; BERT+DisSent Books 5 (Nie et al., 2019) - / - / 54.7 / -; BERT (base, uncased) 52.13 (0.50) / 51.41 (1.02) / 52.00 (1.02) / 49.68 (0.35); BERT (large, uncased) 57.34 (0.79) / 55.07 (1.01) / 55.61 (1.32) / 53.37 (0.22); XLNet (base, cased) 54.73 (1.26) / 55.82 (0.79) / 54.71 (0.45) / 52.98 (0.29); XLNet (large, cased) 61.29 (1.49) / 58.77 (0.99) / 59.90 (0.96) / 57.74 (0.90).", "The new version of the dataset, PDTB 3.0 (Prasad et al., 2019), introduces a new annotation scheme with a revised sense hierarchy as well as 13K additional datapoints.", "The third level in the sense hierarchy is modified to only contain asymmetric (or directional) senses.", "We survey the literature to identify several sources of variation in preprocessing and evaluation that could lead to inconsistencies in the results reported.", "Choice of label sets.", "Due to the hierarchical annotation scheme and skewed label distribution, a range of different label sets has been employed for formulating classification tasks (Rutherford et al., 2017).", "The most popular choices for PDTB 2.0 are: (1) top-level senses (L1) comprised of four labels, and (2) finer-grained Level-2 senses (L2).", "For L2, the standard protocol is to use 11 labels after eliminating five infrequent labels as proposed in Lin et al. (2009).", "Sometimes ENTREL is also included in the L2 label set (Xue et al., 2015).", "Level-3 senses (L3) are not often used due to label sparsity.", "Data partitioning.", "The variability of data splits used in the literature is substantial.", "This is problematic considering the small number of examples in a typical setup with 1-2 WSJ sections as test sets.", "For instance, choosing sections 23-24 rather than 21-22 results in an offset of 149 examples, and a label offset as large as 71 (COMPARISON.CONTRAST).", "This is a large enough difference to cast doubt on claims for state-of-the-art, considering the small size of the test sets (~1000).", "We illustrate the variability of split choices in published work in Appendix B. Recently, splits recommended by Prasad et al. (2008) and Ji and Eisenstein (2015) (Ji) are the most common, but splits from Patterson and Kehler (2013) (P&K), Li and Nenkova (2014), i.a., have also been used.", "The Prasad et al. split is frequently attributed to Lin et al. (2009) (Lin), and thus we adopt this naming convention.", "Multiply-annotated labels.", "Span pairs in PDTB are optionally annotated with multiple sense labels.", "The common practice is either taking only the first label or the approach in Qin et al. 
(2017), i.a., where instances with multiple annotations are treated as separate examples during training.", "A prediction is considered correct if it matches any of the labels during testing.", "However, a subtle inconsistency exists even across works that follow the latter approach.", "In PDTB, two connectives (or inferred connectives for implicit relations) are possible for a span pair, where the second connective is optional.", "Each connective can have two semantic classes (i.e., the labels), where the second class is optional.", "Thus, a maximum of four distinct labels are possible for each span pair.", "However, in the actual dataset, the maximum number of distinct labels turns out to be two.", "An inconsistency arises depending on which of the four possible label fields are counted.", "For instance, Qin et al. (2017) treat all four fields (SCLASS1A, SCLASS1B, SCLASS2A, SCLASS2B) as possible labels, whereas Bai and Zhao (2018) and Bai et al. (2019) use only SCLASS1A and SCLASS2A.", "Often, this choice is implicit and can only be deduced from the codebase.", "Random initialization.", "Different random initializations of a network often lead to substantial variability (Dai and Huang, 2018).", "It is important to consider this variability especially when the reported margin of improvement can be as small as half a percentage point (see cited papers in Table 1).", "We report the mean over 5 random restarts for existing splits, and the mean of mean cross-validation accuracy over 5 random restarts.", "Proposed Evaluation Protocol.", "While Xue et al. (2015) lay out one possible protocol, it does not fully address the issues we have raised in Section 2.", "Another limitation is the unavailability of the preprocessing code as of the date of this submission.", "We describe our proposal below, which is accompanied by publicly available preprocessing code (https://github.com/najoungkim/pdtb3).", "In addition to accounting for the variation previously discussed, we take Shi and Demberg (2017)'s concerns into consideration.", "Cross-validation.", "We advocate using cross-validation for L2 classification, sharing the concerns of Shi and Demberg (2017) on label sparsity.", "However, we propose using cross-validation at the section level rather than the individual example level as suggested by Shi and Demberg (2017).", "This is to preserve paragraph and document structures, which are essential for investigating the effect of modeling larger context (e.g., Dai and Huang 2018).", "We further illustrate the potential utility of document structure in Section 4.", "We suggest dividing the 25 sections of PDTB into 12 folds with 2 development, 2 test and 21 training sections in each fold.", "We used a sliding window of two sections starting from P&K (dev: 0-1, test: 23-24, train: 2-22).", "All sections but one (22) are used exactly once for testing.", "Whether future works should evaluate on these particular cross-validation splits or on randomized splits (Gorman and Bedrick, 2019) is an open issue; we provide an additional discussion in Appendix F. 
Label sets.", "We recommend reporting results on both L1 and L2, using the standard 11-way classification for L2 in PDTB 2.0.", "A standardized label set 3 Due to limitations of compute, we only report random restarts of cross-validation (5 seeds x 12 folds) for our main results.", "For additional experiments in Section 4, we report the average over folds only.", "Generally, variance over seeds were smaller than over folds for our models.", "4 https://github.com/najoungkim/pdtb3 does not exist yet for L2 in PDTB 3.0 (L1 remains unchanged).", "We propose using only the labels with > 100 instances, which leaves us with 14 senses from L2 (see Appendix A for counts).", "We suggest using all four possible label fields if the senses are multiply-annotated, as discussed in Section 2.1.", "Following our proposed protocol, we report baseline results from two strong sentence encoder models: BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019), using a publicly available codebase.", "5 See Appendix C for training details.", "We present L2 results on PDTB 2.0 in Table 1 and results on PDTB 3.0 in Table 2 (see Appendix D for L1 re-sults).", "To maintain backwards compatibility to the literature, we also report PDTB 2.0 results on Ji, Lin and P&K splits (see Section 2.1).", "Ji & Lin are the most common splits, and P&K is the split used by Nie et al. (2019) who claim the current state-of-the-art for L2.", "For PDTB 2.0 (Table 1), our baselines showed strong performance on all splits.", "XLNet-large was the single best model, significantly outperforming every best reported result.", "6 3.2 Single-span baselines Table 4 lists the performance of single-span (ei-ther ARG 1 or ARG", "2) baseline models for both PDTB 2.0 and 3.0.", "This baseline adapts the idea of hypothesis-only baselines in Natural Language Inference (Poliak et al., 2018), where we limit the training data by only showing the models one of the two spans that are in a discourse relation.", "We discuss these baselines further in Section 4.", "6 We used the N 1 2 test to compare proportions instead of a matched test like McNemar's, because we only had access to reported accuracies (rather than raw predictions) of the best models in the literature.", "over PDTB 2.0.", "For instance, the annotation manual (Prasad et al., 2019) remarks that LIST was removed since it was not in practice distinguishable from CONJUNCTION .", "Indeed, models trained on PDTB 2.0 behaved exactly so, classifying most of LIST as CONJUNCTION (but not vice versa, likely due to frequency effect; see Appendix G).", "We conducted an additional experiment testing the impact of the new annotation scheme, in an attempt to address the question If we want to detect relation X in a downstream task, which PDTB should we use to train our models?.", "We trained the same model (BERT-large) twice on the same set of datapoints, only varying the annotation scheme.", "Since PDTB 3.0 has both added and removed examples, we filtered the datasets so that the two PDTBs contained exactly the same span pairs.", "With the model and inputs fixed, the labeling scheme should be the only effective factor.", "After filtering, the majority-class baseline for both were less than 30%.", "Table 5 suggests that PDTB 3.0's annotation scheme does lead to improved distinguishability of CONJUNCTION .", "7 PDTB 3.0 overall yielded better 7 We used pooled cross-validation accuracy (compared us-(or unchanged) distinguishability of shared labels except for CONTRAST .", "This trend was especially salient for 
CONCESSION that was practically unlearnable from PDTB 2.0.", "This supports the utility of PDTB 3.0 over 2.0 if downstream transfer is considered, motivating a transition to 3.0.", "Unsurprisingly, the change in distinguishability was highly dependent on the change in label counts in the training data (Table 5).", "But change in frequency alone does not give us the full picture.", "For instance, SYNCHRONOUS remained difficult to learn even with a substantial increase in labeled examples.", "The absolute size of the class was also not deterministic of performance.", "There were 192 training instances of SYNCHRONOUS in the filtered PDTB 2.0 and 261 for PDTB 3.0.", "Similar or smaller classes such as |ALTERNATIVE| = 118 in PDTB 2.0 and |SUBSTITUTION| = 191 in PDTB 3.0 were still learnable with 26% and 48% accuracy, respectively.", "This was mostly due to SYNCHRONOUS being mislabeled as CONJUNCTION, which was also the case in the unfiltered dataset (see Appendix G).", "The annotation scheme for PDTB 3.0 marks the directionality of relations (e.g., ARG1-AS-MANNER vs. ARG2-AS-MANNER).", "These relations are important for naturally occurring discourse, where order-variable asymmetrical relations are common.", "For example, in Figure 1, span [2] is conditionally dependent on [3], and [5] has a dependency on [4]; such ordered dependencies must be correctly tracked across discourse contexts.", "We investigated whether directional labels are sufficiently identifiable with our models.", "We replaced L2 classes with L3 subclasses (L2+L3), if both subclasses had > 100 examples.", "Except for REASON and RESULT, the distribution of L3 classes under the same L2 is heavily skewed, which led to low performance (Table 3).", "This calls for a data augmentation that would balance subclass ratios and alleviate label sparsity at L3.", "Document-level context is informative, even for shallow discourse parsing.", "We have advocated for an evaluation scheme that preserves larger contexts.", "This is motivated by the fact that discourse relations are not independently distributed from one another (even when they are annotated in isolation, as in PDTB).", "For instance, implicit CONJUNCTION (IC) relations are likely to be adjacent; in PDTB 3.0, the probability of one IC following another is P(IC2|IC1) = 0.14, when P(IC) = 0.08.", "Implicit REASON is likely to be adjacent to RESULT; P(IReason|IResult) = 0.12, P(IReason) = 0.05.", "Vanilla pretrained encoders are strong, but are overreliant on lexical cues.", "A simple fine-tuning of pretrained encoders yielded impressive gains.", "At the same time, they overrelied on lexical cues.", "For instance, ARG2-initial 'to' often signals PURPOSE; 79.9% of such cases are true PURPOSE relations.", "It is reasonable for our models to utilize this strong signal, but the association was much amplified in their prediction.", "For example, XLNet-base predicted PURPOSE for 95.8% of the examples with ARG2-initial 'to'.", "We also found that model predictions were in general brittle; a simplistic lexical perturbation with no semantic effect, such as appending ' to the beginning of spans, resulted in a drop of 9 percentage points in performance for BERT-large models.", "Overall, there still remains much headroom for improvement, with our best model at 66% accuracy on PDTB 3.0 L2 classification.", "Combining pretrained encoders and expanded 
context modeling to better capture document-level distributional properties is a promising future direction.", "Aggregation of single-span baselines as decontextualized upper-bounds.", "Lexical cues continue to be informative even for implicit relations, as with the case of ARG2-initial 'to'.", "Although these signals could be genuine rather than artifactual, they require comparatively less multi-span reasoning.", "How much of our dataset, then, requires only such shallow reasoning?", "To address this question, we constructed a decontextualized baseline by aggregating predictions of single-span models, and assuming that an oracle always chooses the right answer if it is in the prediction set.", "This provides an upper-bound estimate of the performance of a model that only disjointly considers the two input spans, but still has full lexical access.", "Comparing the final rows of Table 4 and Table 2, we see that no model reliably outperforms its decontextualized upper-bound counterpart.", "We have surveyed the literature to highlight experimental inconsistencies in implicit discourse relation classification, and suggested an improved protocol using section-level cross-validation.", "We provided a set of strong baselines for PDTB 2.0 and 3.0 following this protocol, as well as results on a range of existing setups to maintain comparability.", "We discussed several future directions, including data augmentation for downstream transferability, applicability of pretrained encoders to discourse, and utilizing larger discourse contexts.", "This work was supported by IBM Research.", "We thank the three anonymous reviewers for their insightful comments.", "We also thank Sadhwi Srinivas, Grusha Prasad and Tal Linzen for their advice on statistical analysis." ]
[ "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "result", "objective", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "method", "other", "other", "other" ]
[ "Wikipedia abstract generation aims to distill a Wikipedia abstract from web sources and has met significant success by adopting multi-document summarization techniques.", "However, previous works generally view the abstract as plain text, ignoring the fact that it is a description of a certain entity and can be decomposed into different topics.", "In this paper, we propose a two-stage model TWAG that guides the abstract generation with topical information.", "First, we detect the topic of each input paragraph with a classifier trained on existing Wikipedia articles to divide input documents into different topics.", "Then, we predict the topic distribution of each abstract sentence, and decode the sentence from topic-aware representations with a Pointer-Generator network.", "We evaluate our model on the WikiCatSum dataset, and the results show that TWAG outperforms various existing baselines and is capable of generating comprehensive abstracts.", "Our code and dataset can be accessed at https://github.com/THU-KEG/TWAG 1 Introduction Wikipedia, one of the most popular crowd-sourced online knowledge bases, has been widely used as the valuable resources in natural language processing tasks such as knowledge acquisition (Lehmann et al., 2015) and question answering (Hewlett et al., 2016; Rajpurkar et al., 2016) due to its high quality and wide coverage.", "Within a Wikipedia article, its abstract is the overview of the whole content, and thus becomes the most frequently used part in various tasks.", "However, the abstract is often contributed by experts, which is labor-intensive and prone to be incomplete.", "collected from referred websites or search engines, which is essentially a multi-document summarization problem.", "This problem is studied in both extractive and abstractive manners.", "The extractive models attempt to select relevant textual units from input documents and combine them into a summary.", "Graph-based representations are widely exploited to capture the most salient textual units and enhance the quality of the final summary (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Wan, 2008).", "Recently, there also emerge neural extractive models (Yasunaga et al., 2017; Yin et al., 2019) utilizing the graph convolutional network (Kipf and Welling, 2017) to better capture inter-document relations.", "However, these models are not suitable for Wikipedia abstract generation.", "The reason is that the input documents collected from various sources are often noisy and lack intrinsic relations (Sauper and Barzilay, 2009), which makes the relation graph hard to build.", "The abstractive models aim to distill an informative and coherent summary via sentence-fusion and paraphrasing (Filippova and Strube, 2008; Banerjee et al., 2015; Bing et al., 2015), but achieve little success due to the limited scale of datasets.", "Liu et al. 
(2018) proposes an extractive-then-abstractive model and contributes WikiSum, a large-scale dataset for Wikipedia abstract generation, inspiring a branch of further studies (Perez-Beltrachini et al., 2019; Liu and Lapata, 2019; Li et al., 2020).", "The above models generally view the abstract as plain text, ignoring the fact that Wikipedia abstracts describe certain entities, and the structure of Wikipedia articles could help generate comprehensive abstracts.", "We observe that humans tend to describe entities in a certain domain from several topics when writing Wikipedia abstracts.", "As illustrated in Figure 1, the abstract of the Arctic fox covers its adaptation, biological taxonomy and geographical distribution, which is consistent with the content table.", "(Figure 1 shows the abstract and content table of the Wikipedia article on the Arctic fox, Vulpes lagopus.)", "Therefore, given an entity in a specific domain, generating abstracts from corresponding topics would reduce redundancy and produce a more complete summary.", "In this paper, we try to utilize the topical information of entities within their domain (Wikipedia categories) to improve the quality of the generated abstract.", "We propose a novel two-stage Topic-guided Wikipedia Abstract Generation model (TWAG).", "TWAG first divides input documents by paragraph and assigns a topic to each paragraph with a classifier-based topic detector.", "Then, it generates the abstract in a sentence-wise manner, i.e., predicts the topic distribution of each abstract sentence to determine its topic-aware representation, and decodes the sentence with a Pointer-Generator network (See et al., 2017).", "We evaluate TWAG on the WikiCatSum (Perez-Beltrachini et al., 2019) dataset, a subset of WikiSum containing three distinct domains.", "Experimental results show that it significantly improves the quality of abstracts compared with several strong baselines.", "In conclusion, the contributions of our work are as follows: We propose TWAG, a two-stage neural abstractive Wikipedia abstract generation model utilizing the topic information in Wikipedia, which is capable of generating comprehensive abstracts.", "We simulate the way humans recognize entities, using a classifier to divide input documents into topics, and then perform topic-aware abstract generation upon the predicted topic distribution of each abstract sentence.", "Our experimental results against 4 distinct baselines demonstrate the effectiveness of TWAG.", "Multi-document summarization is a classic and challenging problem in natural language processing, which aims to distill an informative and coherent summary from a set of input documents.", "Compared with single-document summarization, the input documents may contain redundant or even contradictory information (Radev, 2000).", "Early high-quality multi-document summarization datasets were annotated by humans, e.g., datasets for the Document Understanding Conference (DUC) and Text Analysis Conference (TAC).", "These datasets are too small to build neural models on, and most of the early works take an extractive approach, attempting to build graphs with inter-paragraph relations and choose the most salient textual units.", "The graph could be built with various information, e.g., TF-IDF similarity (Erkan and Radev, 2004), discourse relation (Mihalcea and Tarau, 2004), document-sentence two-layer relations (Wan, 2008), multi-modal 
(Wan and Xiao, 2009) and query information (Cai and Li, 2012).", "Recently, attempts have emerged to incorporate neural models, e.g., Yasunaga et al. (2017) builds a discourse graph and represents textual units upon the graph convolutional network (GCN) (Kipf and Welling, 2017), and Yin et al. (2019) adopts the entity linking technique to capture global dependencies between sentences and ranks the sentences with a neural graph-based model.", "In contrast, early abstractive models using sentence-fusion and paraphrasing (Filippova and Strube, 2008; Banerjee et al., 2015; Bing et al., 2015) achieved less success.", "Inspired by the recent success of single-document abstractive models (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Huang et al., 2020), some works (Liu et al., 2018; Zhang et al., 2018) try to transfer single-document models to multi-document settings to alleviate the limitations of small-scale datasets.", "Specifically, Liu et al. (2018) defines the Wikipedia generation problem and contributes the large-scale WikiSum dataset.", "Fabbri et al. (2019) constructs a middle-scale dataset named Multi-News and proposes an extractive-then-abstractive model by appending a sequence-to-sequence model after the extractive step.", "Li et al. (2020) models inter-document relations with explicit graph representations, and incorporates pre-trained language models to better handle long input documents.", "Sauper and Barzilay (2009) is the first work focusing on Wikipedia generation, which uses Integer Linear Programming (ILP) to select useful sentences for Wikipedia abstracts.", "Banerjee and Mitra (2016) further evaluates the coherence of selected sentences to improve the linguistic quality.", "Liu et al. (2018) proposes a two-stage extractive-then-abstractive model, which first picks paragraphs according to TF-IDF weights from web sources, then generates the summary with a transformer model by viewing the input as a long flat sequence.", "Inspired by this work, Perez-Beltrachini et al. (2019) uses a convolutional encoder and a hierarchical decoder, and utilizes the Latent Dirichlet Allocation model (LDA) to render the decoder topic-aware.", "HierSumm (Liu and Lapata, 2019) adopts a learning-based model for the extractive stage, and computes the attention between paragraphs to model the dependencies across multiple paragraphs.", "However, these works view Wikipedia abstracts as plain text and do not explore the underlying topical information in Wikipedia articles.", "There are also works that focus on generating other aspects of Wikipedia text.", "Biadsy et al. (2008) utilizes the key-value pairs in Wikipedia infoboxes to generate high-quality biographies.", "Hayashi et al. (2021) investigates the structure of Wikipedia and builds an aspect-based summarization dataset by manually labeling aspects and identifying the aspect of input paragraphs with a fine-tuned RoBERTa model (Liu et al., 2019).", "Our model also utilizes the structure of Wikipedia, but we generate the compact abstract rather than individual aspects, which requires the fusion of aspects and poses the greater challenge of understanding the connections and differences among topics.", "Definition 1: Wikipedia abstract generation takes a set of input paragraphs D = (d_1, d_2, ..., d_n) of size n as input, and outputs a Wikipedia abstract S = (s_1, s_2, 
..., s_m) with m sentences.", "The goal is to find an optimal abstract S* that best concludes the input, i.e., S* = argmax_S P(S|D) (1).", "Previous works generally view S as plain text, ignoring the semantics in Wikipedia articles.", "Before introducing our idea, let's review how Wikipedia organizes articles.", "Wikipedia employs a hierarchical open category system to organize millions of articles, and we name the top-level category the domain.", "As for a Wikipedia article, we focus on three parts, i.e., the abstract, the content table, and the textual contents.", "Note that the content table is composed of several section labels {l}, paired with corresponding textual contents {p}.", "As illustrated in Figure 1, the content table indicates different aspects (we call them topics) of the article, and the abstract semantically corresponds to these topics, telling us that topics could benefit the abstract generation.", "However, general domains like Person or Animal contain millions of articles with diverse content tables, making it infeasible to simply treat section labels as topics.", "Considering that articles in specific domains often share several salient topics, we manually merge similar section labels to convert the section labels into a set of topics.", "Formally, the topic set is denoted as T = {T_1, T_2, ..., T_{n_t}} of size n_t, where each topic T_i = {l_i^1, l_i^2, ..., l_i^m} is a group of section labels.", "Definition 2: Given the input paragraphs D, we introduce the latent topics Z = {z_1, z_2, ..., z_n}, where z_i ∈ T is the topic of the i-th input paragraph d_i, and our objective of Wikipedia abstract generation is re-written as S* = argmax_S P(S|D, Z) P(Z|D) (2).", "Therefore, the abstract generation could be completed with two sub-tasks, i.e., topic detection to optimize argmax_Z P(Z|D) and topic-aware abstract generation to optimize argmax_S P(S|D, Z).", "As shown in Figure 2, our proposed TWAG adopts a two-stage structure.", "(Figure 2: The TWAG framework.)", "First, we train a topic detector based on existing Wikipedia articles to predict the topic of input paragraphs.", "Second, we group the 
input paragraphs by detected topics to encode them separately, and generate the abstract in a sentence-wise manner.", "In each step, we predict the topic distribution of the current sentence, fuse it with the global hidden state to get the topic-aware representation, and generate the sentence with a copy-based decoder.", "Next, we will detail each module.", "The topic detector aims to annotate input paragraphs with their optimal corresponding topics.", "To formalize, given the input paragraphs D, the detector Det returns their corresponding topics Z = {z_1, z_2, ..., z_n}, i.e., Z = Det(D) (3).", "We view topic detection as a classification problem.", "For each paragraph d ∈ D, we encode it with ALBERT (Lan et al., 2019) and then predict its topic z with a fully-connected layer, i.e., d' = ALBERT(d) (4), z = argmax(linear(d')) (5), where d' is the vector representation of d, and we fine-tune a pretrained ALBERT model.", "The topic-aware abstract generator then generates the abstract.", "Specifically, it contains three modules: a topic encoder to encode the input paragraphs into topical representations, a topic predictor to predict the topic distribution of abstract sentences and generate the topic-aware sentence representation, and a sentence decoder to generate abstract sentences based on the topic-aware representations.", "Given the input paragraphs D and the detected topics Z, we concatenate all paragraphs belonging to the same topic T_k to form a topic-specific text group (TTG) G_k, which contains salient information about a certain topic of an entity; each G_k is then encoded with a BiGRU.", "g_k is the final hidden state of G_k, and U_k = (u_1, u_2, ..., u_{n_{G_k}}) represents the hidden states of the tokens in G_k, where n_{G_k} denotes the number of tokens in G_k.", "To generate the abstract S, we first predict the topic distribution of every sentence s_i with a GRU decoder.", "At each time step t, the topic predictor produces a global hidden state h_t, and then estimates the probability distribution q_t over topics: h_t = GRU(h_{t-1}, e_{t-1}) (9), q_t = softmax(linear(h_t)) (10), where e_{t-1} denotes the topical information from the last step.", "e_0 is initialized as an all-zero vector, and e_t could be derived from q_t in two ways.", "The first way, named hard topic, is to pick the representation of the single most probable topic, i.e., e_t^hard = g_j with j = argmax(q_t) (11).", "The second way, named soft topic, is to view every sentence as a mixture of different topics, and take the weighted sum over topic representations, i.e., e_t^soft = q_t G (12), where G = (g_1, g_2, ..., g_{n_t}) is the matrix of topic representations.", "With the observation that Wikipedia abstract sentences normally contain mixed topics, we choose the soft topic mechanism for our model (see Section 5.3 for details).", "Finally, we compute the topic-aware hidden state r_t by adding up h_t and e_t, which serves as the initial hidden state of the sentence decoder: r_t = h_t + e_t (13).", "Additionally, a stop confirmation is executed at each time step: p_stop = σ(linear(h_t)) (14), where σ represents the sigmoid function.", "If p_stop > 0.5, TWAG terminates the decoding process and no more abstract sentences are generated.", "Our sentence decoder adopts the Pointer-Generator network (See et al., 2017), which picks tokens from both the input paragraphs and the vocabulary.", "To copy a token from the input paragraphs, the decoder requires the token-wise hidden states U = (u_1, u_2, 
..., u_{n_u}) of all n_u input tokens, which is obtained by concatenating the token-wise hidden states of all TTGs, i.e., U = [U_1, U_2, ..., U_{n_t}] (15).", "For the k-th token, the decoder computes an attention distribution a_k over tokens in the input paragraphs, where each element a_k^i could be viewed as the probability of the i-th token being selected: a_k^i = softmax(tanh(W_u u_i + W_s s_k + b_a)) (16), where s_k denotes the decoder hidden state with s_0 = r_t to incorporate the topic-aware representation, and W_u, W_s, b_a are trainable parameters.", "To generate a token from the vocabulary, we first use the attention mechanism to calculate the weighted sum of encoder hidden states, known as the context vector, c_k = Σ_i a_k^i u_i (17), which is further fed into a two-layer network to obtain the probability distribution over the vocabulary, P_voc (18).", "The generation probability is p_gen = σ(W_c^T c_k + W_s^T s_k + W_x^T x_k + b_p) (19), where σ represents the sigmoid function, x_k is the decoder input, and W_c, W_s, W_x and b_p are trainable parameters.", "The final probability distribution of words is P(w) = p_gen P_voc(w) + (1 - p_gen) Σ_{i: w_i = w} a_k^i (20), where w_i denotes the token corresponding to u_i.", "Training.", "The modules for topic detection and abstract generation are trained separately.", "Since there are no public benchmarks for assigning input paragraphs with Wikipedia topics, we construct the dataset with existing Wikipedia articles.", "In each domain, we collect all the label-content pairs {(l, p)} (defined in Section 3), and split the content into paragraphs p = (d_1, d_2, ..., d_{n_p}) to form a set of label-paragraph pairs {(l, d)}.", "Afterwards, we choose all pairs (l, d) whose section label l belongs to a particular topic T in the topic set to complete the dataset construction, i.e., the topic-paragraph set {(T, d)}.", "Besides, a NOISE topic is set up in each domain, which refers to meaningless text like scripts and advertisements; the corresponding paragraphs are obtained by utilizing regular expressions to match obvious noisy texts.", "The details are reported in Appendix A. 
, "Note that the dataset for abstract generation is collected from non-Wikipedia websites (refer to Section 5 for details).", "These two datasets are independent of each other, which prevents potential data leakage.", "In the training step, we use the negative log-likelihood loss to optimize the topic detector.", "The loss of the topic-aware abstract generation step consists of two parts: the first part is the average loss of the sentence decoder over abstract sentences, L_sent, and the second part is the cross-entropy loss of the stop confirmation, L_stop.", "Following (See et al., 2017), we compute the loss of an abstract sentence by averaging the negative log-likelihood of every target word in that sentence, and obtain L_sent by averaging over all m sentences: L_sent = -(1/m) Σ_{t=1}^{m} [ (1/n_{s_t}) Σ_{i=1}^{n_{s_t}} log P(w_i) ] (21), where n_{s_t} is the length of the t-th sentence of the abstract.", "L_stop = -y_s log(p_stop) - (1 - y_s) log(1 - p_stop) (22)"
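A compact sketch of how Eqs. (21)-(22) combine during training; the tensor layout is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

def abstract_generation_loss(word_log_probs, stop_logits, stop_labels):
    """Sketch of the generator loss L_sent + L_stop (Eqs. 21-22).
      word_log_probs: list of m 1-D tensors; element t holds log P(w_i)
                      for the n_{s_t} target words of abstract sentence t
      stop_logits:    (m,) raw stop-confirmation scores
      stop_labels:    (m,) 1.0 at the step where decoding should stop
    """
    # Eq. (21): per-sentence mean NLL, averaged over the m sentences
    l_sent = torch.stack([-lp.mean() for lp in word_log_probs]).mean()
    # Eq. (22): binary cross-entropy of the stop confirmation
    l_stop = F.binary_cross_entropy_with_logits(stop_logits, stop_labels)
    return l_sent + l_stop
```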
, "Dataset.", "To evaluate the overall performance of our model, we use the WikiCatSum dataset proposed by (Perez-Beltrachini et al., 2019), which contains three distinct domains (Company, Film and Animal) in Wikipedia.", "Each domain is split into train (90%), validation (5%) and test (5%) sets.", "We build the dataset for training and evaluating the topic detector from the 2019-07-01 English Wikipedia full dump.", "For each record in the WikiCatSum dataset, we find the article with the same title in the Wikipedia dump, and pick all section label-content pairs {(l, p)} in that article.", "We remove all hyperlinks and graphics in the contents, split the contents into paragraphs with the spaCy library, and follow the steps in Section 4.3.1 to complete dataset construction.", "Finally, we conduct an 8:1:1 split for train, validation and test.", "Table 1 presents the detailed parameters of the used datasets.", "Evaluation Metrics.", "We evaluate the performance of our model with ROUGE scores (Lin, 2004), a common metric for comparing generated and reference summaries.", "Considering that we do not constrain the length of generated abstracts, we choose the ROUGE F1 score, which combines precision and recall to eliminate the tendency of favoring long or short results.", "Implementation Details.", "We use the open-source PyTorch and transformers libraries to implement our model.", "All models are trained on an NVIDIA GeForce RTX 2080.", "In topic detection, we choose the top 20 frequent section labels in each domain and manually group them into different topics (refer to Appendix A for details).", "For training, we use the pretrained albert-base-v2 model in the transformers library, keep its default parameters and train the module for 4 epochs with a learning rate of 3e-5.", "For abstract generation, we use a single-layer BiGRU network to encode the TTGs into hidden states of 512 dimensions.", "The first 400 tokens of the input paragraphs are retained and transformed into GloVe (Pennington et al., 2014) embeddings of 300 dimensions.", "The vocabulary size is 50,000, and each out-of-vocabulary token is represented with the average embedding of its 10 adjacent tokens.", "This module is trained for 10 epochs; the learning rate is 1e-4 for the first epoch and 1e-5 for the rest.", "Before evaluation, we remove sentences that have an overlap of over 50% with other sentences to reduce redundancy.", "Baselines.", "We compare our proposed TWAG with the following strong baselines: TF-S2S (Liu et al., 2018) uses a Transformer decoder and compresses key-value pairs in self-attention with a convolutional layer.", "CV-S2D+T (Perez-Beltrachini et al., 2019) uses a convolutional encoder and a two-layer hierarchical decoder, and introduces LDA to model topical information.", "HierSumm (Liu and Lapata, 2019) utilizes the attention mechanism to model inter-paragraph dependencies.", "Table 1: Details about the used datasets. Columns: Domain; #Examples; R1-r; R2-r; RL-r; #Topics; Train/Valid/Test. Company: 62,545; .551; .217; .438; 4; 35,506/1,999/2,212. Film: 59,973; .559; .243; .456; 5; 187,221/10,801/10,085. Animal: 60,816; .541; .208; .455; 4; 51,009/2,897/2,876.", "We fine-tune the pretrained BART-base model on our dataset as an additional baseline, and set the beam size to 5 for all models using beam search at test time.", "The parameters we use for training and evaluation are identical to those in the corresponding papers.", "Table 2 shows the ROUGE F1 scores of the different models.", "In all three domains, TWAG outperforms the other baselines.", "Our model surpasses the other models on ROUGE-1 score by a margin of about 10%, while still retaining an advantage on ROUGE-2 and ROUGE-L scores.", "In the domain Company, our model boosts the ROUGE-L F1 score by about 30%; considering that ROUGE-L is computed upon the longest common subsequence, the highest ROUGE-L score indicates that the abstracts generated by TWAG have the highest holistic quality.", "While CV-S2D+T and BART retain reasonable scores, TF-S2S and HierSumm do not reach the scores they claim in their papers.", "Noticing that the WikiCatSum dataset is a subset of WikiSum, which is used as the training dataset of these two models, we infer that TF-S2S and HierSumm require more training data to converge, and suffer from under-fitting at this dataset scale.", "Learning Rate of Topic Detector.", "We tried two learning rates when training the topic detector module.", "A learning rate of 1e-7 results in a precision of 0.922 in evaluation, while a learning rate of 3e-5 results in a precision of 0.778.", "However, choosing the former learning rate causes a drop of about 10% in all ROUGE scores, which is why we use the latter in our full model.", "We infer that human authors occasionally make mistakes, assigning paragraphs to section labels that belong to other topics.", "A topic detector with a low learning rate overfits these mistakes, harming the overall performance of our model.", "Soft or Hard Topic.", "To further investigate the effectiveness of TWAG's soft topic mechanism, we compare the results of soft and hard topic and report them in Table 4, from which we can see that hard topic does quite poorly in this task.", "This is consistent with the observation that most sentences in the gold-standard abstract express more than one topic.", "Assigning one topic to each sentence results in semantic loss and thus harms the quality of the generated abstract, while the soft topic better simulates the human writing style.", "Number of Section Labels.", "The number of section labels n_t plays a key role in our model: a small n_t would not be informative enough to build topics, while a large one would induce noise.", "We can see from Figure 3 that the frequency of section labels is long-tailed; thus, retaining only a small portion is able to capture the major part of the information.", "Figure 3: The frequency of section labels in three domains.", "When ignoring section labels with extra high or low frequency, the remaining section labels' frequency and rank generally form a straight line in log scale, which matches Zipf's law for long-tail distributions.", "Table 5 records the experiment results we conducted on the domain Company."
, "n_t = 20 reaches a peak on ROUGE-1, -2 and -L scores, indicating that 20 is a reasonable number of section labels.", "Table 5: ROUGE F1 scores of different n_t. #Labels 10: R1 .337, R2 .117, RL .312; #Labels 20: R1 .340, R2 .118, RL .315; #Labels 30: R1 .336, R2 .117, RL .311.", "Table 3 shows the Wikipedia abstracts generated by different models about the film Majina There.", "We can see that the gold abstract contains information about three topics: basic information (region, director, and producer), actors, and music.", "Among the models, TF-S2S produces an abstract with a proper pattern but contains wrong information, and BART misses the musical information topic.", "CV-S2D+T, HierSumm, and our TWAG model all cover the three topics in the gold abstract; however, CV-S2D+T makes several factual errors like the release date and actors, and HierSumm suffers from redundancy.", "TWAG covers all three topics in the gold abstract and discovers extra facts, proving itself to be competent in generating comprehensive abstracts.", "We follow the experimental setup of (Perez-Beltrachini et al., 2019) and conduct a human evaluation consisting of two parts.", "A total of 45 examples (15 from each domain) are randomly selected from the test set for evaluation.", "The first part is a question-answering (QA) scheme proposed in (Clarke and Lapata, 2010) in order to examine factoid information in summaries.", "We create 2-5 questions based on the golden summary which cover the appearing topics, and invite 3 participants to answer the questions by taking the automatically-generated summaries as background information (example questions are listed in Appendix C, and the whole evaluation set is included in our code repository).", "The more questions a summary can answer, the better it is.", "To quantify the results, we assign a score of 1/0.5/0.1/0 to a correct answer, a partially correct answer, a wrong answer, and those that cannot be answered, and report the average score over all questions.", "Notice that we give a score of 0.1 even if the participants answer the question incorrectly, because a wrong answer indicates the summary covers a certain topic and is superior to missing information."
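The QA scoring scheme boils down to a simple average; a toy sketch (the answer-category names are illustrative):

```python
def qa_score(answers):
    """Average QA score: 1 for correct, 0.5 partially correct,
    0.1 wrong (topic still covered), 0 for unanswerable."""
    points = {"correct": 1.0, "partial": 0.5, "wrong": 0.1, "none": 0.0}
    return sum(points[a] for a in answers) / len(answers)

# e.g., qa_score(["correct", "partial", "none"]) -> 0.5
```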
, "Results in Table 6 show that", "1) summaries generated by TWAG are capable of answering more questions and giving the correct answer, and", "2) TF-S2S and HierSumm perform poorly in the domains Film and Animal, which is possibly a consequence of under-fitting on small datasets.", "The second part is an evaluation of linguistic quality.", "We ask the participants to read the different generated summaries and rate them from 3 perspectives, giving a score of 1-5 (larger scores indicate higher quality): Completeness (does the summary contain sufficient information?), Fluency (is the summary fluent and grammatical?) and Succinctness (does the summary avoid redundant sentences?). Specifically, 3 participants are assigned to evaluate each model, and the average scores are taken as the final results.", "Table 7 presents the comparison results, from which we can see that the linguistic quality of the TWAG model outperforms the other baseline models, validating its effectiveness.", "In this paper, we propose a novel topic-guided abstractive summarization model, TWAG, for generating Wikipedia abstracts.", "It exploits the section labels of Wikipedia, dividing the input document into different topics to improve the quality of the generated abstract.", "This approach simulates the way humans recognize entities, and experimental results show that our model clearly outperforms existing state-of-the-art models which view Wikipedia abstracts as plain text.", "Our model also demonstrates high data efficiency.", "In the future, we will try to incorporate pretrained language models into the topic-aware abstract generator module, and apply the topic-aware model to other texts rich in topical information, such as sports match reports.", "We thank the anonymous reviewers for their insightful comments.", "This work is supported by the National Key Research and Development Program of China (2017YFB1002101), NSFC Key Project (U1736204) and a grant from Huawei Inc.", "Ethical Considerations.", "TWAG could be applied to applications like automatically writing new Wikipedia abstracts or other texts rich in topical information.", "It can also help human writers examine whether they have missed information about certain important topics.", "The benefits of using our model include saving human writers' labor and making abstracts more comprehensive.", "There are also important considerations when using our model.", "Input texts may violate copyrights when inadequately collected, and misleading texts may lead to factual mistakes in generated abstracts.", "To mitigate the risks, research on how to avoid copyright issues when collecting documents from the Internet would help." ]
[ "abstain", "abstain", "objective", "objective", "method", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "method", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "abstain", "result", "objective", "method", "other", "other", "other", "other", "other", "other", "other" ]
[ "Supervised training of abstractive language generation models results in learning conditional probabilities over language sequences based on the supervised training signal.", "When the training signal contains a variety of writing styles, such models may end up learning an 'average' style that is directly influenced by the training data make-up and cannot be controlled by the needs of an application.", "We describe a family of model architectures capable of capturing both generic language characteristics via shared model parameters, as well as particular style characteristics via private model parameters.", "Such models are able to generate language according to a specific learned style, while still taking advantage of their power to model generic language phenomena.", "Furthermore, we describe an extension that uses a mixture of output distributions from all learned styles to perform on-the-fly style adaptation based on the textual input alone.", "Experimentally, we find that the proposed models consistently outperform models that encapsulate single-style or average-style language generation capabilities.", "Encoder-decoder models have recently pushed forward the state-of-the-art performance on a variety of language generation tasks, including machine translation (Bahdanau et al., 2015; Wu et al., 2016; Vaswani et al., 2017), text summarization (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017), dialog systems (Li et al., 2016; Asghar et al., 2017), and image captioning (Xu et al., 2015; Ranzato et al., 2015; Liu et al., 2017).", "This framework consists of an encoder that reads the input data and encodes it as a sequence of vectors, which is in turn used by a decoder to generate an output sequence of symbols step by step.", "The prevalent approach to training such a model is to update all the model parameters using all the examples in the training data (over multiple epochs).", "This is a reasonable approach, under the assumption that we are modeling a single underlying distribution in the data.", "However, in many applications and for many natural language datasets, there exist multiple underlying distributions, characterizing a variety of language styles.", "For instance, the widely-used Gigaword dataset (Graff and Cieri, 2003) consists of a collection of articles written by various publishers (The New York Times, Agence France Presse, Xinhua News, etc.), each with its own style characteristics.", "Training a model's parameters on all the training examples results in an averaging effect across style characteristics, which may lower the quality of the outputs; additionally, this averaging effect may be completely undesirable for applications that require a level of control over the output style.", "At the opposite end of the spectrum, one can choose to train one independent model per underlying distribution (assuming we have the appropriate signals for identifying them at training time).", "This approach misses the opportunity to exploit common properties shared by these distributions (e.g., generic characteristics of a language, such as noun-adjective position), and leads to models that are under-trained due to limited data availability per distribution.", "In order to address these issues, we propose a novel neural architecture called SHAPED (shared-private encoder-decoder).", "This architecture has both shared encoder/decoder parameters that are updated based on all the training examples, as well as private encoder/decoder parameters that are updated using only examples from their corresponding underlying training distributions."
, "In addition to learning different parametrizations between the shared model and the private models, we jointly learn a classifier to estimate the probability of each example belonging to each of the underlying training distributions.", "In such a setting, the shared parameters ('shared model') are expected to learn characteristics shared by the entire set of training examples (i.e., language-generic), whereas each private parameter set ('private model') learns the particular characteristics (i.e., style-specific) of its corresponding training distribution.", "At the same time, the classifier is expected to learn a probability distribution over the labels used to identify the underlying distributions present in the input data.", "At test time, there are two possible scenarios.", "In the first one, the input signal explicitly contains information about the underlying distribution (e.g., the publisher's identity).", "In this case, we feed the data into the shared model and also the corresponding private model, and perform sequence generation based on a concatenation of their vector outputs; we refer to this model as the SHAPED model.", "In the second scenario, the information about the underlying distribution is either not available, or it refers to a distribution that was not seen during training.", "In this case, we feed the data into the shared model and all the private models; the output distribution over the symbols of the decoding sequence is estimated using a mixture of distributions from all the decoders, weighted according to the classifier's estimates for that particular example; we refer to this model as the Mix-SHAPED model.", "We test our models on the headline-generation task based on the aforementioned Gigaword dataset.", "When the publisher's identity is presented as part of the input, we show that the SHAPED model significantly surpasses the performance of the shared encoder-decoder baseline, as well as the performance of the private models (where one individual, per-publisher model is trained for each in-domain style).", "When the publisher's identity is not presented as part of the input (i.e., not presented at run-time but revealed at evaluation time for measurement purposes), we show that the Mix-SHAPED model exhibits a high level of classification accuracy based on the textual input alone (accuracy percentages in the 80s overall, varying by individual publisher), while its generation accuracy still surpasses the performance of the baseline models.", "Finally, when the publisher's identity is unknown to the model (i.e., a publisher that was not part of the training dataset), we show that the Mix-SHAPED model's performance far surpasses the shared model's, due to the ability of the Mix-SHAPED model to perform on-the-fly adaptation of output style.", "This feat comes from our model's ability to perform two distinct tasks: matching the incoming, previously-unseen input style to existing styles learned at training time, and using the correlations learned at training time between input and output style characteristics to generate style-appropriate token sequences.", "Encoder-decoder architectures have been successfully applied to a variety of structure prediction tasks recently.", "Tasks for which such architectures have achieved state-of-the-art results include machine translation (Bahdanau et al., 2015; Wu et al., 2016; Vaswani et al., 2017), automatic text summarization (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Paulus et al., 2017; Nema et al., 2017), sentence simplification (Filippova et al., 2015; Zhang and Lapata, 2017), dialog systems (Li et al., 2016, 2017; Asghar et al., 2017), image captioning (Vinyals et al., 2015; Xu et al., 2015; Ranzato et al., 2015; Liu et al., 2017), etc."
, "By far the most used implementation of such architectures is based on the original sequence-to-sequence model (Sutskever et al., 2014), augmented with its attention-based extension (Bahdanau et al., 2015).", "Although our SHAPED and Mix-SHAPED model formulations do not depend on a particular architecture implementation, we do make use of the (Bahdanau et al., 2015) model to instantiate our models.", "One general approach to domain adaptation for natural language tasks is to perform data/feature augmentation that represents inputs as both general and domain-dependent data, as originally proposed in (Daume III, 2009), and ported to neural models in (Kim et al., 2016).", "For computer vision tasks, a line of work related to our approach has been proposed by Bousmalis et al. (2016) using what they call domain separation networks.", "As a tool for studying unsupervised domain adaptation for image recognition tasks, their proposal uses CNNs for encoding an image into a feature representation, and also for reconstructing the input sample.", "It also makes use of a private encoder for each domain, and a shared encoder for both the source and the target domain.", "The approach we take in this paper shares this idea of model parametrization according to the domain/style, but goes further with the Mix-SHAPED model, performing on-the-fly adaptation of the model outputs.", "Other CNN-based domain adaptation methods for object recognition tasks are presented in (Long et al., 2016; Chopra et al., 2013; Tzeng et al., 2015; Sener et al., 2016).", "For NLP tasks, Peng and Dredze (2017) take a multi-task approach to domain adaptation and sequence tagging.", "They use a shared encoder to represent instances from all of the domains, and use a domain projection layer to project the shared layer into a domain-specific space.", "They only consider the supervised domain-adaptation case, in which labeled training data exists for the target domain.", "Glorot et al. (2011) use auto-encoders for learning a high-level feature extraction across domains for sentiment analysis, while Zhou et al. (2016) employ auto-encoders to directly transfer examples across different domains for the same sentiment analysis task."
, "Hua and Wang (2017) perform an experimental analysis of domain adaptation for neural abstractive summarization.", "An important requirement of all the methods in the related work described above is that they require access to the (unlabeled) target-domain data, in order to learn a domain-invariant representation across source and target domains.", "In contrast, our Mix-SHAPED model does not need access to a target domain or style at training time, and instead performs the adaptation on-the-fly, according to the specifics of the input data and the correlations learned at training time between available input and output style characteristics.", "As such, it is a more general approach, which allows adaptation for a much larger set of target styles, under the weaker assumption that there exist one or more styles present in the training data that can act as representative underlying distributions.", "An encoder-decoder model consists of an encoder that reads the input x and encodes it into a sequence of hidden vectors, H = f_enc(x), where f_enc is the computation unit in the encoder, and a decoder that generates output symbols at each time stamp t, conditioned on H as well as the decoder inputs y_{1:t-1}: s_t = f_dec(y_{1:t-1}, H), where f_dec is the computation unit in the decoder.", "Instantiations of this framework include the widely-used attention-based sequence-to-sequence model (Bahdanau et al., 2015), in which f_enc and f_dec are implemented by an RNN architecture using LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Chung et al., 2014) units.", "A more recent instantiation of this architecture is the Transformer model (Vaswani et al., 2017), built using self-attention layers.", "The abstract encoder-decoder model described above is usually trained over all examples in the training data.", "We call such a model a shared encoder-decoder model, because the model parameters are shared across all training and test instances.", "Formally, the shared encoder-decoder consists of the computation units f^s_enc and f^s_dec.", "Given an instance x, it generates a sequence of vectors S^s = (s^s_1, ..., s^s_T) by: H^s = f^s_enc(x), s^s_t = f^s_dec(y_{1:t-1}, H^s).", "The drawback of the shared encoder-decoder is that it fails to account for the particular properties of each style that may be present in the data.", "In order to capture such particular style characteristics, a straightforward solution is to train a private model for each style.", "Assuming a style set D = {D_1, D_2, ..., D_{|D|}}, such a solution implies that each style has its own private encoder computation unit and decoder computation unit.", "At both training and testing time, each private encoder and decoder only processes instances that belong to its own style.", "Given an instance along with its style (x, z), where z ∈ {1, ..., |D|}, the private encoder-decoder generates a sequence of vectors S^z = (s^z_1, ..., s^z_T) by: H^z = f^z_enc(x), s^z_t = f^z_dec(y_{1:t-1}, H^z)."
, "Figure 1: Illustration of the SHAPED model using two styles D_1 and D_2.", "D_1 articles pass through the private encoder f^1_enc and decoder f^1_dec.", "D_2 articles pass through the private encoder f^2_enc and decoder f^2_dec.", "Both of them also go through the shared encoder f^s_enc and decoder f^s_dec.", "Although the private encoder/decoder models do preserve style characteristics, they fail to take into account the common language features shared across styles.", "Furthermore, since each style is represented by a subset of the entire training set, such private models may end up under-trained, due to the limited number of available data examples.", "In order to efficiently capture both common and unique features of data with different styles, we propose the SHAPED model.", "In the SHAPED model, each data point goes through both the shared encoder-decoder and its corresponding private encoder-decoder.", "At each step of the decoder, the outputs from the private and shared decoders are concatenated to form a new vector: s^{r_z}_t = [s^z_t, s^s_t], (5) which contains both private features for style z and shared features induced from the other styles, as illustrated in Fig. 1.", "The output symbol distribution over tokens o_t ∈ V (where V is the output vocabulary) at step t is given by: p(o_t | x, y_{1:t-1}, z) = Softmax(g(s^{r_z}_t)), (6) where g is a multi-layer feed-forward network that maps s^{r_z}_t to a vector of size |V|.", "Given N training examples (x^(1), y^(1), z^(1)), ..., (x^(N), y^(N), z^(N)), the conditional probability of the output y^(i) given article x^(i) and its style z^(i) ∈ {1, ..., |D|} is: p(y^(i) | x^(i), z^(i)) = Π_t p(o_t = y^(i)_t | x^(i), y^(i)_{1:t-1}, z^(i)). (7)", "At inference time, given an article x with style z, we feed x into f^s_enc, f^s_dec, f^z_enc, f^z_dec (Eqs. 3-4) and obtain the symbol distribution at each step t using Eq. (6).", "We sample from the distribution to obtain a symbol o_t, which is used as the estimated y_t and fed to the next steps."
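A minimal sketch of the SHAPED output layer of Eqs. (5)-(6), assuming the private and shared decoder states have already been computed; g is rendered here as a small feed-forward network, and all names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class ShapedOutputLayer(nn.Module):
    """Concatenate the private decoder state s_t^z with the shared state
    s_t^s (Eq. 5) and map the result to next-token log-probabilities (Eq. 6)."""

    def __init__(self, state_dim: int, vocab_size: int):
        super().__init__()
        self.g = nn.Sequential(                  # multi-layer network g
            nn.Linear(2 * state_dim, state_dim),
            nn.Tanh(),
            nn.Linear(state_dim, vocab_size),
        )

    def forward(self, s_private, s_shared):
        s_concat = torch.cat([s_private, s_shared], dim=-1)  # Eq. (5)
        return torch.log_softmax(self.g(s_concat), dim=-1)   # Eq. (6)
```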
, "One limitation of the above model is that it can only handle test data containing an explicit style label from D = {D_1, D_2, ..., D_{|D|}}.", "However, it is frequently the case that, at test time, the style label is not present as part of the input, or that the input style is not part of the modeled set D.", "We treat both of these cases similarly, as instances of modeling an unknown style.", "We first describe our treatment of such a case at run-time.", "We use a latent random variable z ∈ {1, ..., |D|} to denote the underlying style of a given input.", "When generating a token at step t, the output token distribution takes the form of a mixture of SHAPED (Mix-SHAPED) model outputs: p(o_t | x, y_{1:t-1}) = Σ_{d=1}^{|D|} p(o_t | x, y_{1:t-1}, z = d) p(z = d | x), (8) where p(o_t | x, y_{1:t-1}, z = d) is the output symbol distribution of SHAPED decoder d, evaluated as in Eq. (6).", "Fig. 2 contains an illustration of such a model.", "In this formulation, p(z | x) denotes the style conditional probability distribution from a trainable style classifier.", "Training the Mix-SHAPED model involves minimizing a loss function that combines the negative log-likelihood of the style labels and the negative log-likelihood of the symbol sequences (see the model in Fig. 3): Loss_Mix-SHAPED = -Σ_{i=1}^{N} log p(z^(i) | x^(i)) - Σ_{i=1}^{N} log p(y^(i) | x^(i), z^(i)). (10)", "Figure 2: Decoding data with unknown style using a Mix-SHAPED model.", "The data is run through all encoders and decoders.", "The output of the private encoders is fed into a classifier that estimates the style distribution.", "The output symbol distribution is a mixture over all decoder outputs.", "At run-time, if the style d of the input is available and d ∈ D, we decode the sequence using Eq. (6).", "This also corresponds to the case p(z = d | x) = 1 (and 0 for all other styles), and reduces Eq. (8) to Eq. (6).", "If the style of the input is unknown (or known, but with d′ ∉ D), we decode the sequence using Eq. (8), in which case the mixture over SHAPED models given by p(z | x) approximates the desired output style."
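One decoding step of the Mix-SHAPED mixture (Eq. 8) can be sketched as a log-space weighted sum; the shapes are assumptions for illustration:

```python
import torch

def mix_shaped_step(per_style_log_probs, style_log_probs):
    """Sketch of Eq. (8): p(o_t | x, y_{1:t-1}) =
    sum_d p(o_t | x, y_{1:t-1}, z=d) * p(z=d | x).
      per_style_log_probs: (B, |D|, V) per-style SHAPED decoder outputs
      style_log_probs:     (B, |D|)    classifier posterior log p(z | x)
    Returns (B, V) mixed log-probabilities of the next token."""
    joint = per_style_log_probs + style_log_probs.unsqueeze(-1)
    return torch.logsumexp(joint, dim=1)
```

When the style d is known and in D, style_log_probs degenerates to a one-hot distribution (in log space), recovering Eq. (6).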
, "As an implementation of the encoder-decoder model, we use the attention-based sequence-to-sequence model from (Bahdanau et al., 2015), with an RNN architecture using GRU units (Chung et al., 2014).", "The input token sequences are first projected into an embedding space via an embedding matrix E, resulting in a sequence of vectors as input representations.", "The private and shared RNN cells generate sequences of hidden state vectors H^z = {h^z_j}, z ∈ {1, ..., |D|}, and H^s = {h^s_j}, for j ∈ {1, ..., T_x}.", "At each step in the encoder, h^z_j and h^s_j are concatenated to form a new output vector h^{r_z}_j = [h^z_j, h^s_j].", "The final state of each encoder is used as the initial state of the corresponding decoder.", "At time step t in the decoder, the private and shared RNN cells first generate hidden state vectors {s^z_t}, z ∈ {1, ..., |D|}, and s^s_t; then s^s_t is concatenated with each s^z_t to form new vectors {s^{r_z}_t} (z ∈ {1, ..., |D|}).", "Figure 3: Training a Mix-SHAPED model.", "(a) Each example is fed to all private encoders f^1_enc, f^2_enc, whose outputs are concatenated and fed to a style classifier.", "(b) The D_1 examples only use f^1_enc, f^1_dec, f^s_enc, f^s_dec to decode texts.", "Private encoder-decoders of other styles are not used.", "We apply the attention mechanism on s^{r_z}_t, using attention weights calculated as: q^{r_z}_{tj} = v_a tanh(W_a h^{r_z}_j + U_a s^{r_z}_t), (11) which are normalized to a probability distribution: α^{r_z}_{tj} = exp(q^{r_z}_{tj}) / Σ_{i=1}^{T_x} exp(q^{r_z}_{ti}). (12)", "Context vectors are computed using the normalized attention weights: c^{r_z}_t = Σ_{j=1}^{T_x} α^{r_z}_{tj} h^{r_z}_j. (13)", "Given the context vector and the hidden state vectors, the symbol distribution at step t is: p(o_t | x, y_{1:t}, z) = softmax(g([c^{r_z}_t, s^{r_z}_t])). (14)", "The attention parameters W_a, U_a, and v_a, as well as the embedding matrix E and vocabulary V, are shared by all encoders and decoders.", "We use Eq. (14) to calculate the symbol loss in Eq. (10).", "We perform a battery of quantitative experiments, designed to answer several main questions: 1) Does the proposed model improve generation performance over alternative approaches?", "2) Can a style classifier built using an auxiliary loss provide a reliable estimate of text style?", "3) In the case of unknown style, does the Mix-SHAPED model improve generation performance over alternative approaches?", "4) To what extent do our models capture style characteristics as opposed to, say, content characteristics?", "We perform our experiments using text summarization as the main task.", "More precisely, we train and evaluate headline-generation models using the publicly-available Gigaword dataset (Graff and Cieri, 2003; Napoles et al., 2012).", "The Gigaword dataset contains news articles from seven publishers: Agence France-Presse (AFP), Associated Press Worldstream (APW), Central News Agency of Taiwan (CNA), Los Angeles Times/Washington Post Newswire Service (LTW), New York Times (NYT), Xinhua News Agency (XIN), and Washington Post/Bloomberg Newswire Service (WPB).", "We pre-process this dataset in the same way as in (Rush et al., 2015), which results in articles with average length 31.4 words, and headlines with average length 8.5 words.", "We consider the publisher identity as a proxy for style, and choose to model as in-domain styles the set D = {AFP, APW, NYT, XIN}, while holding out CNA and LTW for out-of-domain style testing.", "This results in a training set containing the following number of (article, headline) instances: 993,584 AFP, 1,493,758 APW, 578,259 NYT, and 946,322 XIN.", "For the test set, we sample a total number of 10,000 in-domain examples from the original Gigaword test dataset, which include 2,886 AFP, 2,832 APW, 1,610 NYT, and 2,012 XIN.", "For out-of-domain testing, we randomly sample 10,000 LTW and 10,000 CNA test examples.", "We remove the WPB articles due to their small number of instances.", "We compare the following models: a Shared encoder-decoder model (S) trained on all the training examples; a suite of Private encoder-decoder models (P), each one trained on a particular style from D = {AFP, APW, NYT, XIN} (we also tried to warm-start a private model using the best checkpoint of the shared model, but found that it cannot improve over the shared model); a SHAPED model (SP) trained on all styles in D, where at test time the style of the test data is provided to the model, and the article is only run through its style-specific private network and the shared network (the style classifier is not needed); and a Mix-SHAPED model (M-SP) trained on all styles in D, where at test time the style of the article is not provided to the model, and the output is computed using the mixture model, with the estimated style probabilities from the style classifier used as weights.", "When testing on the out-of-domain styles CNA/LTW, we only compare the Shared (S) model with the Mix-SHAPED (M-SP) model, as the others cannot properly handle this scenario.", "As hyper-parameters for the model instantiation, we used 500-dimension word embeddings and a three-layer, 500-dimension GRU-cell RNN architecture; the encoder was instantiated as a bidirectional RNN.", "The lengths of the input and output sequences were truncated to 40 and 20 tokens, respectively.", "All the models were optimized using Adagrad (Duchi et al., 2011), with an initial learning rate of 0.01.", "The training procedure was done over mini-batches of size 128, and the updates were done asynchronously across 40 workers for 5M steps.", "The encoder/decoder word embedding and the output projection matrices were tied to minimize the number of parameters.", "To avoid the slowness of the softmax operator over large vocabulary sizes, and also mitigate the impact of out-of-vocabulary tokens, we applied a subtokenization method (Wu et al., 2016), which invertibly transforms a native token into a sequence of subtokens from a limited vocabulary (here set to 32K)."
, "Comparison with Previous Work.", "In the next section, we report our main results using the in-domain and out-of-domain (w.r.t. the selected publisher styles) test sets described above, since these test sets have a balanced publisher-style frequency that allows us to measure the impact of our style-adaptation models.", "However, we also report here the performance of our Shared (S) baseline model (with the above hyper-parameters) on the original 2K test set used in (Rush et al., 2015).", "On that test set, our S model obtains a 30.13 F1 ROUGE-L score, compared to 28.34 ROUGE-L obtained by the ABS+ model (Rush et al., 2015), and 30.64 ROUGE-L obtained by the words-lvt2k-1sent model (Nallapati et al., 2016).", "This comparison indicates that our S model is a competitive baseline, making the comparisons against the SP and M-SP models meaningful when using our in-domain and out-of-domain test sets.", "Table 1: ROUGE F1 scores on the combined AFP/APW/XIN/NYT in-domain test set.", "The ROUGE scores for the in-domain test data are reported in Table 1 (over the combined AFP/APW/XIN/NYT test set) and Fig. 4a (over individual-style test sets).", "The numbers indicate that the SP and M-SP models consistently outperform the S and P models, supporting the conclusion that the S model loses important characteristics due to averaging effects, while the P models miss the opportunity to efficiently exploit the training data.", "Additionally, the performance of SP is consistently better than M-SP in this setting, which indicates that the style label is helpful.", "As shown in Fig. 4b, the style classifier achieves around 80% accuracy overall in predicting the style under the M-SP model, with some styles (e.g., XIN) being easier to predict than others.", "The performance of the classifier is directly reflected in the quantitative difference between the SP and M-SP models on individual-style test sets (see Fig. 4a, where the XIN style has the smallest difference between the two models).", "The evaluation results for the out-of-domain scenario are reported in Table 2.", "The numbers indicate that the M-SP model significantly outperforms the S model, supporting the conclusion that the M-SP model is capable of performing on-the-fly adaptation of output style.", "Figure 4: Experimental results on the headline generation task, for in-domain styles.", "(a) ROUGE-L scores on headline generation, shown separately for the four in-domain styles (NYT, AFP, APW, XIN), comparing the P, S, SP and M-SP models.", "(b) Average estimated probability distribution by the M-SP model over the four styles, for each in-domain target style in the test set.", "This conclusion is further strengthened by the style probability distributions shown in Fig. 5: they indicate that, for the out-of-domain CNA style, the output mixture is heavily weighted towards the XIN style (0.6 of the probability mass), while for the LTW style, the output mixture heavily weights the NYT style (0.72 of the probability mass)."
news agencies owned by the same media corporation.", "Model capacity In order to remove the possibility that the improved performance of the SP model is due simply to an increased model size compared to the S model, we perform an experiment in which we triple the size of the GRU cell dimensions for the S model.", "However, we find no sig-nificant performance difference compared to the 1534 CNA Test LTW Test Rouge-1 Rouge-2 Rouge-L Rouge-1 Rouge-2 Rouge-L S 40.73 0.21 17.75 0.18 37.70 0.20 27.08 0.19 8.97 0.15 25.01 0.17 M-SP 42.00 0.20 19.48 0.21 39.24 0.22 27.79 0.19 9.31 0.18 25.60 0.17 Table 2 : ROUGE F1 scores on out-of-domain style test sets CNA and LTW.", "original dimensions (the ROUGE-L score of the triple-size S model is 36.61, compared to 36.51 obtained of the original S model).", "Style embedding A competitive approach to modeling different styles is to directly encode the style information into the embedding space.", "In (Johnson et al., 2016), the style label is converted into a one-hot vector and is concatenated with the word embedding at each time step in the S model.", "The outputs of this model are at 36.68 ROUGE-L, slightly higher than the baseline S model, but significantly lower than the SP model performance (37.52 ROUGE-L).", "Another style embedding approach is to augment the S model with continuous trainable style embeddings for each predefined style label, similar to (Ammar et al., 2016).", "The resulting outputs achieve 37.2 ROUGE-L, which is better than the S model with one-hot style embedding, but still worse than the SP method (statistically significant at p-value=0.025 using paired t-test).", "However, neither of these approaches apply to the cases when the style is out-of-domain or unknown during testing.", "In contrast, such cases are handled naturally by the proposed M-SP model.", "multiple models rather than style adaptation.", "To answer this question, we apply a uniform mixture over the private model output along with the shared model output, rather than using the learnt probability distribution from the style classifier.", "The ROUGE-1/2/L scores are 39.9/19.7/37.0.", "They are higher than the S model but significantly lower than the SP model and the M-SP model (p-value 0.016).", "This result confirms that the information that the style classifier encodes is benefi-ciary, and leads to improved performance.", "Style vs. 
, "Previous experiments indicate that the SP and M-SP models have superior generation accuracy, but it is unclear to what extent the difference comes from improved modeling of style versus modeling of content.", "To clarify this issue, we performed an experiment in which we replace the named entities appearing in both article and headline with corresponding entity tags, in effect suppressing almost completely any content signal.", "For instance, given an input such as China called Thursday on the parties involved in talks on North Korea's nuclear program to show flexibility as a deadline for implementing the first steps of a breakthrough deal approached., paired with the gold-truth output China urges flexibility as NKorea deadline approaches, we replaced the named entities with their types, and obtained: LOC_0 called Thursday on the ORG_0 involved in NON_2 on LOC_1 's NON_3 to show NON_0 as a NON_1 for implementing the first NON_4 of a NON_5 approached., paired with LOC_0 urges NON_0 as LOC_1 NON_1 approaches.", "Under these experimental conditions, both the SP and M-SP models still achieve significantly better performance compared to the S baseline.", "On the combined AFP/APW/XIN/NYT in-domain test set, the SP model achieves 61.70 ROUGE-L and M-SP achieves 61.52 ROUGE-L, compared to 60.20 ROUGE-L obtained by the S model.", "On the CNA/LTW out-of-domain test set, M-SP achieves 60.75 ROUGE-L, compared to 59.47 ROUGE-L by the S model."
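The entity-masking step can be approximated with any off-the-shelf NER pipeline; the following spaCy-based sketch only approximates the paper's preprocessing (the paper's tag inventory LOC/ORG/PER/NON and its tagger are not specified here, and spaCy's entity labels differ):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

def replace_entities(article: str, headline: str):
    """Replace named entities found in the article and headline with
    indexed type tags (e.g., GPE_0), suppressing content while keeping
    sentence structure. Illustrative only."""
    tags, counts = {}, {}
    for doc in (nlp(article), nlp(headline)):
        for ent in doc.ents:
            if ent.text not in tags:
                idx = counts.get(ent.label_, 0)
                counts[ent.label_] = idx + 1
                tags[ent.text] = f"{ent.label_}_{idx}"
    def apply(text: str) -> str:
        for surface, tag in tags.items():
            text = text.replace(surface, tag)
        return text
    return apply(article), apply(headline)
```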
5, albeit a bit flatter as a result of content removal.", "As such, it supports the conclusion that the classifier indeed learns style (in addition to content) characteristics.", "In this paper, we describe two new style-adaptation model architectures for text sequence generation tasks, SHAPED and Mix-SHAPED.", "Both versions are shown to significantly outperform models that are either trained in a manner that ignores style characteristics (and hence exhibit a style-averaging effect in their outputs), or models that are trained single-style.", "The latter is a particularly interesting result, as a model that is trained (with enough data) on a single-style and evaluated on the same style would be expected to exhibit the highest performance.", "Our results show that, even for single-style models NYT AFP APW XIN 0.0 0.1 0.2 0.3 0.4 0.5 CNANYT AFP APW XIN 0.0 0.1 0.2 0.3 0.4 0.5 LTW Figure 6 : Estimated style probabilities over the four in-domain styles AFP/APW/XIN/NYT, for out-of-domain styles CNA and LTW, after named entities in the article and summary are replaced with entity tags.", "trained on over 1M examples, their performance is inferior to the performance of SHAPED models on that particular style.", "Our conclusion is that the proposed architectures are both efficient and effective in modeling both generic language phenomena, as well as particular style characteristics, and are capable of producing higher-quality abstractive outputs that take into account style characteristics." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "other", "result", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "objective" ]
[ "We propose a Transformer-based sequence-to-sequence model for automatic speech recognition (ASR) capable of simultaneously transcribing and annotating audio with linguistic information such as phonemic transcripts or part-of-speech (POS) tags.", "Since linguistic information is important in natural language processing (NLP), the proposed ASR is especially useful for speech interface applications, including spoken dialogue systems and speech translation, which combine ASR and NLP.", "To produce linguistic annotations, we train the ASR system using modified training targets: each grapheme or multi-grapheme unit in the target transcript is followed by an aligned phoneme sequence and/or POS tag.", "Since our method has access to the underlying audio data, we can estimate linguistic annotations more accurately than pipeline approaches in which NLP-based methods are applied to a hypothesized ASR transcript.", "Experimental results on Japanese and English datasets show that the proposed ASR system is capable of simultaneously producing high-quality transcriptions and linguistic annotations.", "End-to-end automatic speech recognition (E2E ASR), which transcribes speech using a single neural network (NN), has recently gained traction (Graves and Jaitly, 2014; Chorowski et al., 2015; Chan et al., 2016; Graves, 2012; Dong et al., 2018).", "Existing E2E ASR models generate audio transcripts by sequentially producing likely graphemes, or multi-graphemic units, from which lexical items of a language can be recovered.", "However, other linguistic annotations, such as phonemic transcripts, part-of-speech (POS) tags, or word boundaries, help understand the underlying audio characteristics (Simonnet et al., 2017).", "Such linguistic annotations are especially important in natural language processing (NLP) tasks done on audio data, including spoken dialogue systems (Jurafsky and Martin, 2008).", "Figure 1: (a) One-to-many model; (b) one-to-one model with a conditional chain mapping; (c) one-to-one model with a single sequence.", "This study aims to endow existing E2E ASR models with the ability to produce such linguistic annotations.", "Prior work explored using E2E ASR systems to predict multiple kinds of labels.", "Fig. 1 shows a diagram of these systems."
, "These approaches use one of the following models: a one-to-many (O2M) model (Kubo and Bacchiani, 2020; Ueno et al., 2018; Gowda et al., 2019; Sanabria and Metze, 2018; Adams et al., 2019), a one-to-one (O2O) model with a conditional chain mapping (Shi et al., 2020), or an O2O model with a single sequence (Audhkhasi et al., 2018; Ghannay et al., 2018; Shafey et al., 2019; Yadav et al., 2020).", "In O2M models, shown in Fig. 1(a), a multitask objective is used in which an extra branch is tasked with estimating the secondary label sequence.", "For example, in (Kubo and Bacchiani, 2020), the phonemic transcript is produced in addition to the graphemic transcript.", "The O2M model can estimate each sequence more accurately than separate models responsible for producing phonemic and graphemic transcripts independently.", "We can implement this approach with little effort by attaching multiple loss functions to the base architecture.", "However, the O2M model does not explicitly consider dependencies between the phonemic and graphemic transcripts.", "Furthermore, aligning phoneme and grapheme sub-sequences requires additional post-processing based on time alignment or alignment across the multiple sequences during inference.", "The performance of downstream NLP tasks built on top of ASR outputs will suffer if this post-processing fails to generate an alignment.", "Fig. 1(b) shows an O2O model with a conditional chain mapping.", "This method for multiple-sequence modeling has been applied to dialog modeling (Liang et al., 2020), speaker diarization (Fujita et al., 2020a), and multi-speaker ASR (Shi et al., 2020).", "Unlike the O2M model, this model can predict a variable number of output sequences while explicitly considering dependencies between the multiple sequences based on the probabilistic chain rule.", "However, modeling these inter-sequence dependencies requires more complicated neural architectures, and alignment of the sequences still requires post-processing during inference.", "Another option for using O2O models is to output multiple sequences as a single sequence instead of using a conditional chain mapping, as shown in Fig. 1(c).", "For example, in (Audhkhasi et al., 2018), the O2O model produces word transcripts by first generating a word's constituent graphemes followed by the word itself.", "Another application, explored in (Shafey et al., 2019), used the O2O model to produce graphemes followed by the speaker role.", "This approach is the simplest to implement because we can reuse the neural network architecture used to produce the primary sequence-to-sequence mapping to produce the secondary label sequence (e.g., connectionist temporal classification (CTC) based systems).", "In contrast to the previous two approaches, the O2O model does not require post-processing to align the label sequences during inference, since the output sequence preserves the alignment between each word and its corresponding annotation labels; alignment is only needed at the data preparation stage during training, to produce the appropriate target sequences.", "For this reason, we used the O2O model in this study.", "One contribution of this study is that we adopt the Transformer (Vaswani et al., 2017) for the O2O model with a single sequence, instead of the CTC-based approaches that are frequently adopted (Audhkhasi et al., 2018; Ghannay et al., 2018).", "Compared with the CTC-based systems, this approach can explicitly model the relationship between the output labels thanks to the autoregressive decoder network, similar to the conditional chain rule model in Fig. 1(b)."
, "We also demonstrate improved performance compared to the CTC-based systems.", "Another contribution is that we conducted an extensive empirical evaluation to analyze and demonstrate the utility of our approach.", "For example, we applied the method to English and Japanese ASR tasks in which phonemic transcripts and POS tags are simultaneously produced.", "Our approach predicts linguistic annotations correctly even when the corresponding graphemes are wrong, whereas the pipeline approach, in which NLP-based methods are applied to a hypothesized ASR transcript, fails.", "This feature is helpful for downstream NLP systems like slot filling or intent detection.", "Besides, our approach is suitable for on-device applications because the E2E model achieves small-footprint prediction (Pang et al., 2018).", "Note that our primary goal is to provide aligned transcripts and linguistic annotations with minimal degradation in ASR performance.", "We are not aiming to improve ASR performance.", "The features of the proposed method are summarized as follows: The proposed Transformer-based O2O model can explicitly model the relationship between the output graphemes and the corresponding linguistic annotations, unlike the O2M and CTC-based O2O models.", "Our approach does not require additional alignment post-processing across the transcriptions and the sequence of linguistic annotations during inference.", "We can easily combine the proposed O2O model with downstream NLP tasks and also conduct intuitive error analysis (e.g., detecting errors caused by homonyms by checking a word against its predicted phoneme output).", "The objective of E2E ASR is to estimate the output token sequence y = {y_m ∈ 𝒴}_{m=1}^{L} from the input feature sequence X = {x_i ∈ ℝ^{D_in}}_{i=1}^{I_in}.", "Here, D_in and I_in denote the dimension of the input features and the length of the input sequence, respectively; and L and 𝒴 denote the output sequence length and the token vocabulary.", "To predict the output token sequence, an NN is trained to maximize the following conditional likelihood objective function: L = log p(y | X) = Σ_{m=1}^{L} log p(y_m | y_1, ..., y_{m-1}, X). (1)", "During run-time, the ASR output ŷ is predicted by ŷ = argmax_{y ∈ 𝒴*} log p(y | X), (2) where 𝒴* denotes the set of all possible hypotheses.", "The Transformer (Vaswani et al., 2017) is a state-of-the-art NN architecture that can be used to maximize Eq. (1).", "The Transformer consists of two NNs: the Encoder network and the Decoder network.", "Let I_emb and D_emb be the sequence length and dimension of the acoustic embedding.", "The Encoder network generates a sequence of embeddings of the acoustic information E = {e_i ∈ ℝ^{D_emb}}_{i=1}^{I_emb} from the input feature sequence, i.e., E = Encoder(X).", "The Decoder network predicts the output of the M-th step, y_M, given the sub-sequence of previous outputs y = {y_1, ..., y_{M-1}} and E, i.e., y_M = Decoder(y, E).", "This conditional autoregressive modeling function is particularly important in this paper since it can explicitly model the relationship between output labels, unlike CTC."
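Under teacher forcing, Eq. (1) is just a sum of per-step log-probabilities of the reference tokens; a sketch assuming a decoder callable that returns next-token logits (the names are illustrative, not the authors' API):

```python
import torch
import torch.nn.functional as F

def sequence_log_likelihood(decoder, y, E):
    """Sketch of Eq. (1): log p(y | X) = sum_m log p(y_m | y_1..y_{m-1}, X).
      decoder: callable (prefix ids, encoder output E) -> (V,) logits
      y:       (L,) LongTensor of reference token ids
      E:       acoustic embeddings, E = Encoder(X)
    """
    total = torch.zeros(())
    for m in range(len(y)):
        logits = decoder(y[:m], E)       # condition on y_1, ..., y_{m-1}
        total = total + F.log_softmax(logits, dim=-1)[y[m]]
    return total
```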
$y_K$.", "To predict both transcriptions and linguistic annotations, the NN is trained to maximize the following log-likelihood of the joint probability: $\mathcal{L} = \log p(y_1, \dots, y_K \mid X)$, (3) where $y_k = \{ y_{1,k}, \dots, y_{M_k,k} \mid y_{m,k} \in \mathcal{Y}_k \}$ denotes an $M_k$-length sequence of the $k$-th type of tokens or linguistic annotations and $\mathcal{Y}_k$ denotes the set of the corresponding tokens or symbols.", "In the rest of this subsection, we explain the following existing models that maximize Eq.", "(3): the O2M model trained with multi-task learning and the O2O model trained with the conditional chain mapping.", "One frequently used NN architecture (Ueno et al., 2018; Gowda et al., 2019; Sanabria and Metze, 2018; Adams et al., 2019) that maximizes Eq.", "(3) is the O2M model trained with multi-task learning.", "Fig.", "1(a) shows the architecture of the model.", "The O2M model outputs several types of sequences independently.", "In other words, multi-task learning is derived by assuming conditional independence of output token types for Eq.", "(3), as follows: $\mathcal{L} = \log \prod_{k=1}^{K} p(y_k \mid \cancel{y_1, \dots, y_{k-1}}, X)$ (4) $= \sum_{k=1}^{K} \sum_{m=1}^{M_k} \log p(y_{m,k} \mid y_{1:m-1,k}, X)$, (5) where $y_{1:m-1,k} = \{ y_{1,k}, \dots, y_{m-1,k} \}$ denotes a sub-sequence of the $k$-th type of tokens or linguistic annotations up to $m-1$.", "The crossed-out part of Eq.", "(4) represents that the sequences $y_1, \dots, y_{k-1}$ are neglected by assuming conditional independence.", "The purpose of this study is to predict words/morphemes and aligned linguistic annotations jointly.", "Since the O2M model deals with sequences of different lengths, post-processing is needed to align the multiple sequences.", "Also, Eq.", "(4) shows that multi-task learning assumes conditional independence, but transcripts and linguistic annotations are often conditionally dependent.", "Hence the O2M model is not ideal for this study.", "The O2O model trained with a conditional chain mapping (Fujita et al., 2020a; Shi et al., 2020) can also be used to maximize Eq.", "(3).", "Fig.", "1(b) shows the architecture of this model.", "This model predicts the different sequence types sequentially, each time conditioning on all previously decoded sequence types $1, \dots,$
$k-1$.", "Different from the multi-task training loss (Eq.", "(4)) used in the O2M model, the O2O conditional chain mapping model is trained to maximize the joint log-likelihood (Eq.", "(3)) via a recursive expansion of the probabilistic chain rule.", "This model does not require or assume conditional independence between sequence types.", "Formally, the O2O model is trained to maximize the following loss function: $\mathcal{L} = \log \prod_{k=1}^{K} p(y_k \mid y_1, \dots, y_{k-1}, X) = \sum_{k=1}^{K} \sum_{m=1}^{M_k} \log p(y_{m,k} \mid y_{1:m-1,k}, Y_{1:k-1}, X)$, (6) where $Y_{1:k-1} = \{ y_1, \dots, y_{k-1} \}$ denotes the first $(k-1)$ sequences.", "While this approach can explicitly model inter-sequence dependencies, it still requires post-processing to align the output sequences.", "Fig.", "1(c) depicts the proposed single sequence O2O E2E ASR model.", "The single sequence O2O model predicts the word/morpheme and the corresponding linguistic annotations simultaneously by regarding multiple sequences as a single sequence.", "In the single sequence representation, the $K$ output sequences are collapsed into a single sequence of $S$ segments.", "The $i$-th segment $s_i$ consists of a fixed order of $K$ jointly aligned sub-sequences.", "Let $y_{i,k}$ be the $i$-th sub-sequence of the $k$-th type of tokens or annotations from index $B(i,k)$ to $E(i,k)$, i.e., $y_{i,k} := y_{B(i,k):E(i,k),k}$.", "Then, $s_i = (y_{i,1}, \dots, y_{i,K})$ denotes the $i$-th variable-length segment composed of aligned graphemic and linguistic annotation sub-sequences.", "Equation 8 shows how the $K$ sequences are collapsed into a single sequence composed of segments $s_i$.", "To obtain $s_i$, we use existing annotation tools or manual annotations to jointly align the training sets of the $K$ output sequence types.", "These segments are used as training targets in an auto-regressive prediction task.", "In this way, our model implicitly learns to simultaneously predict and align $K$ output sequences from an input $X$.", "We discuss further details of the data preparation in Section 3.2.", "Letting $y_m$ denote elements of the collapsed single-sequence representation $(s_1, \dots, s_S)$, the joint log-likelihood (Eq.", "(3)) can be written as $\mathcal{L} = \log p(y_1, \dots, y_K \mid X) = \sum_{m=1}^{M} \log p(y_m \mid y_1, \dots, y_{m-1}, X)$. (9)", "Note that this form is almost equivalent to the single sequence objective function in Eq.", "(1), except that the variable $y_m$ takes values from the union of the $K$ symbol sets that represent the $K$ output sequences and the length of this sequence, $M = \sum_{k=1}^{K} M_k$, is the sum of the lengths of the $K$ output sequences.", "This framework has various benefits compared with the existing frameworks described in Section 2.
Similar to the O2O model trained with the conditional chain mapping in Section 2.2.2, this framework does not assume conditional independence between output labels and has the flexibility to model the dependency between words/morphemes and linguistic annotations.", "Related work also uses the O2O model, e.g., (Yadav et al., 2020), but it is based on CTC and does not consider such an explicit output dependency.", "Also, the proposed method using the Transformer can preserve the relationship between the word/morpheme and the corresponding linguistic annotations across the sequence based on the aligned representation $s_i$ in Eq.", "(8).", "Finally, this framework is equivalent to the original single-sequence objective function, and we can use an existing strong sequence-to-sequence model (the Transformer in this paper) without any modifications of the algorithm.", "The only additional step is to prepare the collapsed single sequence composed of $s_i$, which is discussed in the next section.", "This section describes how we prepare the collapsed single sequence composed of $s_i$ in Eq.", "(8).", "We explain this data preparation with both English (TED-LIUM release 2 (TEDLIUM2) (Rousseau et al., 2014)) and Japanese (Corpus of Spontaneous Japanese (CSJ) (Maekawa et al., 2000)) data as examples.", "The sequence types include the graphemic and phonemic transcripts 1 , as well as the POS tags.", "Fig. 2 shows how to obtain the target sequence.", "First, we predict sequences of phonemes and POS tags from the graphemic sequences using manually [Footnote 1: In the Japanese task, we used the kana character, a syllabic character, and this paper regards it as a phoneme.]", "annotated labels or annotation tools (Fig.", "2(a),(b)).", "For the Japanese data, we use the annotation labels provided in the corpus.", "Note that some of the POS tags are estimated using a morphological analysis model.", "For the English data, we obtain these sequences from the pronunciation dictionary provided in the corpus and WordNet (Miller, 1998), respectively.", "Some words in the vocabulary have two or more pronunciations in the pronunciation dictionary.", "To obtain phoneme sequences, we randomly selected a single pronunciation per word from the candidate pronunciations.", "Since 57% of the words in the corpus are not annotated with POS tags in WordNet, we annotated these labels with the output of the POS tagging system (Loper and Bird, 2002).", "Next, we replaced these phonemes and POS tags with special symbols (Fig.", "2(c)) to distinguish them from the grapheme symbols.", "Third, we split graphemic and linguistic annotation sequences at word boundaries and obtain sub-sequences ($y_{i,k}$ in Eq.", "(8)) (Fig.", "2(d)).", "Then sub-sequences are aggregated into the segments ($s_i$ in Eq.", "(8)) and collapsed into the target sequence in the manner of Eq.", "(8) (Fig.", "2(e)).", "For the English data, we applied byte-pair encoding (BPE) (Kudo and Richardson, 2018) to the collapsed target sequence (Fig.", "2(f)).", "We built a Transformer-based ASR system using the ESPnet toolkit (Watanabe et al., 2018).", "The Transformer architecture and hyper-parameters for training/decoding are based on existing recipes in ESPnet.", "We investigated three models: self-attention-based CTC (Pham et al., 2019), the Transformer (Dong et al., 2018), and a hybrid Transformer trained with an auxiliary CTC objective (Transformer+CTC) (Karita et al., 2019).", "The CTC model was used in prior studies based on O2O models, e.g., (Audhkhasi et al., 2018; Yadav et al.,
2020).", "During training, the CTC model was regularized with the Transformer decoder in the multitask learning fashion similar to Transformer+CTC.", "Such regularization techniques yield a significant improvement over a pure CTC baseline (Fujita et al., 2020b).", "For the training of Transformer+CTC, we applied joint CTC training to improve performance (Karita et al., 2019).", "For CTC-based decoding, we used the greedy search algorithm.", "For Transformer decoding, we used the beam search algorithm and tuned search parameters using the development set.", "For the Transformer+CTC model, we applied Transformer/CTC joint decoding (Karita et al., 2019).", "and tuned the weights of the objective using the development set.", "Note that the language model shallow fusion (Hori et al., 2018) is not applied since we could not find effectiveness in our preliminary experiment.", "We evaluate the performance of the proposed method using the character error rate (CER), phoneme error rate (PER), and word error rate", "(WER).", "CER and WER measure the quality of graphemic transcripts in Japanese and English respectively.", "PER is used to evaluate the quality of phonemic transcripts in both languages.", "This study aims to incorporate linguistic annotation prediction into the state-of-the-art Transformer-based E2E ASR.", "We computed the CER/WER/PER to verify that the E2E model can perform ASR adequately even though the additional downstream NLP tasks are incorporated.", "To obtain a sequence with alignment ( s i in Eq.", "(8)) on the inference stage, grapheme, phoneme, and POS should be generated in the same order as the training stage.", "To confirm this, we define annotation structure accuracy (ASA) as a metric.", "We can compute the correct number of the predicted structure and compute the accuracy.", "For example, the correct order of the output must follow the following grapheme-phoneme-POS order: <s> I <Ph12> <Pos3> go <Ph21> <Pos5> </s> where <s> and </s> denote the start and end symbols of a sentence, respectively.", "However, our sequence-to-sequence model does not have such explicit output constraints and it possibly outputs the following wrong order of the sequence: <s> I <Ph12> <Pos3> go <Pos5> </s> Thus, the second case has 5 correct transition counts among 6 total transition counts, and we can compute the accuracy as 5/6.", "We assume the transition from \"go\" to <Pos5> is incorrect.", "To evaluate Japanese ASR's word segmentation performance, we measure the precision p , recall r , and F-value f of the hypothesized segmentation compared to the ground-truth segmentation.", "Let N hyp , N ref , and N cor be the numbers of the predicted graphemes, the graphemes of the reference, and the graphemes whose predicted linguistic annotation is correct, respectively.", "The precision p , recall r , and F-value f are defined as follows: p = N cor /N hyp , r = N cor /N ref , f = 2 pr/ ( p + r ) .", "We only compared 1,919 utterances whose reference and hypothesis transcripts are exactly matched in order to ignore the effect of the ASR errors.", "Additionally, hypothesized ASR transcripts and reference transcripts are aligned with graphemes, and we computed an annotation accuracy to measure the performance of the linguistic annotation.", "Let N in and N cor be the number of input words whose estimated grapheme is correct and the words whose estimated grapheme and linguistic annotations are correct, respectively.", "The accuracy is computed by N cor /N in .", "Since we do not deal with the words 
whose grapheme is predicted incorrectly by ASR when computing the annotation accuracy, the annotation accuracy is robust to ASR errors.", "Since the above measures for the word segmentation and linguistic annotation do not consider the ASR errors, we finally computed the following measures using all of the utterances (i.e., including ASR errors): normalized edit distance, precision, recall, and F-values.", "To compare the linguistic annotation performance, we prepared a pipeline system, i.e., ASR followed by an NLP-based linguistic annotation.", "In the pipeline system, the separated Transformer+CTC model first predicts graphemic sequences.", "Then, the linear SVM with L2 normalization, trained using KyTea (Graham and Mori, 2010), predicts word boundaries and linguistic annotation from the predicted sequences.", "To train KyTea, we only used the transcriptions in the ASR training set to perform a fair comparison to the proposed method.", "The pipeline system for the Japanese task requires word segmentation before predicting linguistic annotations.", "The proposed ASR, on the other hand, achieves word segmentation and linguistic annotations simultaneously.", "Additionally, the proposed ASR achieves these estimates using graphemic information and acoustic information, but the pipeline system uses only the graphemic information.", "Hence, we expect that the proposed method can predict better word boundaries and linguistic annotations for sentences that are hard to estimate only from graphemic information.", "Besides, our model might predict linguistic annotations correctly even though its transcripts are mispredicted, while the pipeline approach fails to predict linguistic annotation when the hypothesized ASR transcriptions include ASR errors.", "This is helpful for downstream NLP-based systems like slot filling or intent detection.", "We evaluated ASR performance to confirm the proposed method can produce high-quality transcriptions and linguistic annotations.", "Note that our primary goal is to simultaneously predict transcriptions and linguistic annotations while keeping sufficient performance, not to improve the ASR performance itself.", "[Table 1: Comparison between CTC models, Transformer models, and Transformer+CTC models. Scores are CER/PER on CSJ (eval1, eval2, eval3) and WER/PER on TEDLIUM2 (dev, test). Outputs graphemes+phonemes: CTC (baseline) 7.4/5.0, 5.5/3.1, 5.9/3.3, 15.7/7.3, 15.6/7.7; Transformer 6.9/4.4, 4.7/2.6, 6.1/3.7, 15.8/9.3, 15.0/9.1; Transformer+CTC 6.1/3.8, 4.3/2.3, 4.6/2.5, 10.3/4.9, 9.3/4.7. Outputs graphemes+phonemes+POS: CTC (baseline) 10.0/7.0, 7.3/4.4, 8.3/5.1, 15.8/7.2, 14.9/7.0; Transformer 6.4/4.1, 4.7/2.7, 5.2/3.0, 14.6/8.8, 13.5/8.2; Transformer+CTC 6.7/4.3, 4.9/2.7, 5.3/2.9, 10.3/4.7, 9.5/4.7.]", "Table 1 and Fig. 3 show the ASR performance of the Japanese (CSJ) and English (TEDLIUM2) tasks.", "First, we discuss which model architecture is appropriate for predicting the grapheme and phoneme sequences.", "Table 1 shows that the Transformer or Transformer+CTC achieves better performance compared to the CTC model, which corresponds to the conventional method.", "This means that the Transformer is better for predicting transcriptions and linguistic annotations (phoneme in this experiment) than CTC thanks to the explicit dependency modeling, as discussed in Section 3.
Since Transformer+CTC yields better or equivalent performance than the Transformer, we used the Transformer+CTC architecture as the base model in the rest of this paper (referred to as the joint model).", "Second, we discuss whether the proposed joint models predict the grapheme and phoneme with sufficient performance.", "To confirm this, we trained two separate models, which predict either a grapheme sequence or a phoneme sequence.", "Since Transformer+CTC yields better performance than the CTC model and Transformer, we used the Transformer+CTC architecture as the base model.", "Fig. 3 shows that the proposed joint model is almost comparable to the separated model, especially when it predicts both graphemes and phonemes.", "When the joint model prediction includes the POS tag, we observed a slight degradation, especially in the Japanese task.", "However, such degradation is still less than 1%, and we can conclude the proposed O2O model of Transformer+CTC can predict graphemes and phonemes simultaneously with sufficient performance.", "We would emphasize that the proposed joint model can have alignment between grapheme/phoneme/POS while the conventional separated model cannot.", "As we discussed in Section 4.1.2, we computed the annotation structure accuracy (ASA), and it turns out that its range was from 98.9% to 100.0%.", "This means that the proposed joint model can consistently predict transcriptions and the linguistic annotations in the correct order almost perfectly.", "We found that almost all errors of the transition occurred in the last word, which might be caused by beam search errors.", "[Figure caption fragment: (b) Transcription including a heteronym.]", "We evaluated the performance of word segmentation and linguistic annotations using the output of the proposed ASR, which predicts graphemes, phonemes, and POS tags.", "Note that we did not compute the word segmentation performance in the English task because the English sentences include word boundaries.", "Tables 2 and 3 show the performance of word segmentation and of predicting linguistic annotations, respectively.", "Note that these results do not consider ASR errors.", "These tables show that the proposed ASR system achieves better word segmentation and predicts linguistic annotations", "better than the pipeline system. [Table 4: Normalized edit distance averaged over the whole evaluation set. CSJ (Phoneme/POS): Pipeline 0.08/0.06, Proposed 0.08/0.05; TEDLIUM2 (Phoneme/POS): Pipeline 0.10/0.08, Proposed 0.10/0.07.]", "To compute accuracy, we used 41k and 65k morphemes for CSJ and TEDLIUM2, respectively, and we consider this number of samples sufficient to show that our model is better than the pipeline approach.", "Tables 4 and 5 show the performance when ASR errors are considered.", "Table 4 shows that the proposed ASR system achieves better prediction of the POS tags than the pipeline, even though the proposed system sometimes failed to predict the transcriptions 2 .", "Table 5 also confirms that the proposed ASR system achieves better performance in the Japanese task.", "Although the performance of the pipeline system and the proposed ASR system is comparable in the English task, we would like to emphasize that the proposed ASR does not require extra memory for the additional downstream NLP task.", "This is useful for developing a small-footprint system.", "Fig.
4 shows some examples where the proposed ASR estimates the word boundary and phonemes correctly.", "For example, the first sentence correctly segments the word boundary based on the \"repetition\" POS tag estimated from the acoustic information.", "Similarly, the second sentence appropriately chooses the correct pronunciation from the acoustic information.", "Since our E2E ASR system can estimate pairs of graphemes and phonemes for each word, we can build a pronunciation dictionary by considering both grapheme-to-phoneme correspondences and acoustic information.", "Table 6 shows the entries of the pronunciation dictionary extracted from the output of our system.", "The first row of the table lists the entries [Footnote 2: We conducted Welch's t-test and found a significant difference between the POS values of the pipeline system and the proposed ASR system ($p < 0.01$).]", "whose words did not appear in the text of the training set and whose phoneme sequence is estimated correctly.", "These entries indicate that our system can predict the phonemes of OOV words.", "The second row of the table shows the entries whose phonemes are different from the reference but exist in the CMU pronunciation dictionary (CMU).", "In other words, these entries have variations of the phoneme sequence for each word, and the phoneme sequences are predicted correctly.", "In this study, we removed the phoneme sequence variations for each grapheme from the training set.", "If the Transformer is trained to predict phoneme sequences using only linguistic information, the phoneme sequences are likely to be mapped into words deterministically.", "Interestingly, our Transformer recovers the variations of the phoneme sequences for each word.", "It seems that the acoustic information contributed to predicting the phoneme sequences.", "One of the Transformer's additional benefits is that we can deduce what is happening inside the Transformer by visualizing the patterns of self-attention and source-target attention weights.", "Fig. 5 depicts patterns of the self-attention and source-target attention weights on the third layer of the Decoder network.", "This figure shows that self-attention changes monotonically but has additional diagonal dotted lines.", "This means that self-attention uses the multiple (both grapheme and", "phoneme) output symbols while mostly preserving the order of the sequence.", "Similarly, it also shows that source-target attention focuses on the acoustic feature of the same time step twice.", "This shows that graphemes and phonemes are each predicted using the same acoustic features at the same time step.", "We proposed a novel E2E ASR Transformer system for simultaneously estimating transcriptions and linguistic annotations such as phonemic transcripts or POS tags.", "This paper showed that the proposed ASR could estimate these features with sufficient performance and also showed reasonable phoneme and grapheme analyses and attention patterns thanks to the aligned output of both output symbols.", "In future work, we will extend the proposed approach to predict other linguistic annotations such as named entities." ]
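To make the single-sequence data preparation described above concrete, here is a minimal Python sketch of collapsing word-aligned grapheme/phoneme/POS sequences into one target sequence of segments. It assumes the word-level alignment is already available (as produced by the annotation tools discussed in Section 3.2); the function name and the placeholder symbols such as <Ph12> follow the illustrative example in the text, not the authors' code.

```python
from typing import List

def collapse_sequences(words: List[str],
                       phonemes: List[str],
                       pos_tags: List[str]) -> List[str]:
    """Collapse word-aligned grapheme/phoneme/POS sequences into one
    target sequence of segments s_i = (word_i, phoneme_i, pos_i)."""
    assert len(words) == len(phonemes) == len(pos_tags)
    target = ["<s>"]
    for w, ph, pos in zip(words, phonemes, pos_tags):
        # Each segment keeps a fixed grapheme-phoneme-POS order so that
        # the alignment is preserved inside the single output sequence.
        target.extend([w, ph, pos])
    target.append("</s>")
    return target

# Example from the text: "I go" with phoneme and POS placeholder symbols.
print(collapse_sequences(["I", "go"], ["<Ph12>", "<Ph21>"], ["<Pos3>", "<Pos5>"]))
# ['<s>', 'I', '<Ph12>', '<Pos3>', 'go', '<Ph21>', '<Pos5>', '</s>']
```

In an actual setup the resulting sequence would additionally be BPE-encoded for English, as the text describes.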
[ "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective" ]
[ "How to identify, extract, and use phrasal knowledge is a crucial problem for the task of Recognizing Textual Entailment (RTE).", "To solve this problem, we propose a method for detecting paraphrases via natural deduction proofs of semantic relations between sentence pairs.", "Our solution relies on a graph reformulation of partial variable unifications and an algorithm that induces subgraph alignments between meaning representations.", "Experiments show that our method can automatically detect various paraphrases that are absent from existing paraphrase databases.", "In addition, the detection of paraphrases using proof information improves the accuracy of RTE tasks.", "Recognizing Textual Entailment (RTE) is a challenging natural language processing task that aims to judge whether one text fragment logically follows from another text fragment (Dagan et al., 2013).", "Logic-based approaches have been successful in representing the meanings of complex sentences, ultimately having a positive impact on RTE (Bjerva et al., 2014; Beltagy et al., 2014; Mineshima et al., 2015, 2016; Abzianidze, 2015, 2016).", "Although logic-based approaches succeed in capturing the meanings of functional or logical words, it is difficult to capture the meanings of content words or phrases using genuine logical inference alone.", "This remains a crucial problem in accounting for lexical relations between content words or phrases via logical inference.", "To solve this problem, previous logic-based approaches use knowledge databases such as WordNet (Miller, 1995) to identify lexical relations within a sentence pair.", "While this solution has been successful in handling word-level paraphrases, its extension to phrase-level semantic relations is still an unsolved problem.", "There are three main difficul-ties that prevent an effective identification and use of phrasal linguistic knowledge.", "The first difficulty is the presence of out-of-context phrase relations in popular databases such as the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013).", "PPDB may suggest paraphrases that do not adhere to the context of our relevant text segments nor to their semantic structure, which might be problematic.", "The second difficulty is finding semantic phrase correspondences between the relevant text segments.", "Typical approaches only rely on surface (Beltagy et al., 2013) or syntactic correspondences (Arase and Tsujii, 2017), often producing inaccurate alignments that significantly impact our inference capabilities.", "Instead, a mechanism to compute semantic phrase correspondences could potentially produce, if available, more coherent phrase pairs and solve the recurring issue of discontinuity.", "The third difficulty is the intrinsic lack of coverage of databases for logical inference despite their large size.", "Whereas there is a relatively small number of possible word-to-word correspondences and thus their semantic relations can be enumerated, the same is not true for all phrase pairs that might be of interest.", "One alternative is to use functions of infinite domain (e.g., cosine similarity) between phrase representations (Tian et al., 2016), but these techniques are still under development, and we have not seen definitive successful applications when combined with logic systems.", "In this study, we tackle these three problems.", "The contributions of this paper are summarized as follows: First, we propose a new method of detecting phrase correspondences through natu-756 ral deduction proofs of semantic 
relations for a given sentence pair.", "Second, we show that our method automatically extracts various paraphrases that compensate for a shortage in previous paraphrase databases.", "Experiments show that paraphrases extracted using proof information improve the accuracy of RTE tasks.", "In this section, we review previous logical inference systems that are combined with lexical knowledge.", "The RTE system developed by Abzianidze (2016) uses WordNet as axioms and adds missing knowledge manually from the training dataset; however, this technique requires considerable human effort and is not extended to handle phrasal knowledge.", "Martínez-Gómez et al. (2017) proposed an RTE system with an on-the-fly axiom injection mechanism guided by a natural deduction theorem prover.", "Pairs of unprovable sub-goals and plausible single premises are identified by means of a variable unification routine and then linguistic relations between their logical predicates are checked using lexical knowledge such as WordNet and VerbOcean (Chklovski and Pantel, 2004).", "However, this mechanism is limited to capturing word-to-word relations within a sentence pair.", "Bjerva et al. (2014) propose an RTE system where WordNet relations are used as axioms for word-to-word knowledge in theorem proving.", "For phrasal knowledge, PPDB is used to rephrase an input sentence pair instead of translating paraphrases into axioms.", "However, this solution ignores logical contexts that might be necessary when applying phrasal knowledge.", "Moreover, it does not apply to discontinuous phrases.", "Beltagy et al. (2016) use WordNet and PPDB as lexical knowledge in their RTE system.", "To increase their coverage of phrasal knowledge, the system combines a resolution strategy to align clauses and literals in a sentence pair and a statistical classifier to identify their semantic relation.", "However, this strategy only considers one possible set of alignments between fragments of a sentence pair, possibly causing inaccuracies when there are repetitions of content words and meta-predicates.", "In our research, we propose an automatic phrase abduction mechanism to inject phrasal knowledge during the proof construction process.", "In addition, we consider multiple alignments by backtracking the decisions on variable and predicate unifications, which is a more flexible strategy.", "We represent logical formulas using graphs, since this is a general formalism that is easy to visualize and analyze.", "However, we use natural deduction (see Section 3.2) as a proof system instead of Markov Logic Networks for inference.", "Some research has investigated graph operations for semantic parsing (Reddy et al., 2014, 2016) and abstractive summarization (Liu et al., 2015); we contribute to these ideas by proposing a subgraph mapping algorithm that is useful for performing natural language inferences.", "Considerable research efforts have been focused on the identification and extraction of paraphrases.", "One successful technique is associated with bilingual pivoting (Bannard and Callison-Burch, 2005; Zhao et al., 2008), in which alternative phrase translations are used as paraphrases at a certain probability.", "However, this technique requires large bilingual parallel corpora; moreover, word alignment errors likely cause noisy paraphrases.", "Another strategy is to extract phrase pairs from a monolingual paraphrase corpus using alignments between syntactic trees, guided by a linguistically motivated grammar (Arase and Tsujii, 2017).", "The main
difference between these studies and ours is that they typically attempt alignment between words or syntactic trees, whereas we perform alignments between meaning representations, which enables the acquisition of more general paraphrases by distinguishing functional words from content words.", "This point is important in distinguishing among different semantic relations (e.g., antonyms and synonyms).", "In addition, word and syntactic alignments potentially ignore coreferences, making it difficult to find relations between many-to-many sentences.", "Semantic alignments enable this because coreferences must refer to the same variable as the original entity.", "In logic-based approaches to RTE, a text $T$ and a hypothesis $H$ are mapped onto logical formulas $T'$ and $H'$.", "To judge whether $T$ entails $H$, we check whether $T' \rightarrow H'$ is a theorem in a logical system.", "For meaning representations, we use Neo-Davidsonian event semantics (Parsons, 1990).", "In this approach, a verb is analyzed as a one-place predicate over events.", "Both the arguments of a [Figure 1: A graph for the basic formula (2): the event vertex $y_1$ (skip) is linked to $x_1$ (girl) by subj, to $x_2$ (rope) by obj, and to $x_3$ (sidewalk) by on.]", "verb and modifiers are linked to events by semantic roles, and the entire sentence is closed by existential quantification over events.", "For example, (1) is mapped onto (2).", "(1) A girl is skipping rope on a sidewalk.", "(2) $\exists x_1 \exists x_2 \exists x_3 \exists y_1 (\mathbf{girl}(x_1) \wedge \mathbf{rope}(x_2) \wedge \mathbf{sidewalk}(x_3) \wedge \mathbf{skip}(y_1) \wedge (\mathbf{subj}(y_1) = x_1) \wedge (\mathbf{obj}(y_1) = x_2) \wedge \mathbf{on}(y_1, x_3))$ We use $x_i$ as a variable for entities and $y_j$ for events.", "In this semantics, we represent all content words (e.g., girl and skip) as one-place predicates.", "Regarding functional words, we represent a preposition like on as a two-place predicate, e.g., $\mathbf{on}(y_1, x_3)$.", "We also use a small set of semantic roles such as subj and obj as functional terms and use equality ($=$) to connect an event and its participant, as in $\mathbf{subj}(y_1) = x_1$.", "$A ::= F(t) \mid G(t, u) \mid t = u$, where $F(t)$ is a one-place predicate (for content words), $G(t, u)$ is a two-place predicate (for prepositions), and $t$ and $u$ are terms.", "A term is defined as a constant, a variable, or a functional term of the form $f(t)$ where $f$ is a semantic role and $t$ is a term.", "We call a formula constructed by conjunctions and existential quantifiers a basic formula in event semantics.", "Thus, the set of basic formulas in event semantics is defined as: $\phi ::= A \mid \phi \wedge \phi \mid \exists t.\phi$ The formula in (2) is an instance of a basic formula, which captures the predicate-argument structure of a sentence.", "On top of the system of basic formulas, we have a full language of event semantics with negation ($\neg$), disjunction ($\vee$), implication ($\rightarrow$), and a universal quantifier ($\forall$).", "These operators are used to represent additional logical features.", "There is a natural correspondence between basic formulas and directed acyclic graphs (DAGs).", "Figure 1 shows an example 1 .", "In the graph representation, constants and variables correspond to vertices; both two-place predicates for prepositions (e.g., $\mathbf{on}(y_1, x_3)$) and functional terms for semantic roles (e.g., $\mathbf{subj}(y_1) = x_1$) are represented as edges.", "A one-place predicate $F(t)$ in a logical formula can be represented as a functional relation $\mathbf{isa}(t, F)$, where isa is an expression relating a term $t$ and a predicate $F$ represented as a vertex.", "The isa edges are unlabeled for simplicity.", "We use the system of natural deduction (Prawitz, 1965; Troelstra and Schwichtenberg, 2000) to capture phrase
correspondences from a sentence pair $(T, H)$, following the strategies for word axiom injection developed by Martínez-Gómez et al. (2017) and Yanaka et al. (2017).", "The sentence pair $(T, H)$ is first mapped to a pair of formulas $(T', H')$.", "$T'$ is initially set to the premise $P$, and $H'$ is set to the goal $G$ to be proved.", "If formulas $P$ and $G$ are basic formulas, then the proving strategy is to decompose them into a set of atomic formulas using inference rules for conjunctions and existential quantifiers.", "The premise $P$ is decomposed into a pool of premises $\mathcal{P} = \{ p_i(\theta_i) \mid i \in \{1, \dots, m\} \}$, where each $p_i(\theta_i)$ is an atomic formula and $\theta_i$ is a list of terms appearing in $p_i(\theta_i)$.", "The goal $G$ is also decomposed into a set of sub-goals $\mathcal{G} = \{ g_j(\theta'_j) \mid j \in \{1, \dots, n\} \}$, where $\theta'_j$ is a list of terms appearing in $g_j(\theta'_j)$.", "The proof is performed by searching for a premise $p_i(\theta_i)$ whose predicate matches that of a sub-goal $g_j(\theta'_j)$.", "If such a premise is found, then variables in $\theta'_j$ are unified with those in $\theta_i$ and the sub-goal $g_j(\theta'_j)$ can be removed from $\mathcal{G}$.", "If all the sub-goals can be removed, we prove $T' \rightarrow H'$.", "In the presence of two or more variables with the same predicate, there might be multiple possible variable unifications.", "Modern theorem provers explore these multiple possibilities in search of a configuration that proves a theorem.", "Sub-goals may remain unproved when $T$ logically does not entail $H$, i.e., when there are no premise predicates $p_i$ that are matched with $g_j$.", "In this case, the system tries word axiom injection, called word abduction.", "More specifically, if there [Footnote 1: See Jones (2016) for some variants of graphical representations of logical formulas.]", "is a premise $p_i(\theta_i)$ whose predicate has a linguistic relation (according to linguistic knowledge 2 ) with that of a sub-goal $g_j(\theta'_j)$, then variables in $\theta'_j$ are unified with those in $\theta_i$ and the sub-goal $g_j(\theta'_j)$ can be removed from $\mathcal{G}$.", "Figure 2 shows an example to illustrate how the system works.", "To begin with, the input sentence pair $(T, H)$ is mapped onto a pair of formulas, $(T', H')$.", "$T'$ is initially placed at the premise $P$, and $H'$ at the goal $G$.", "Note that these are basic formulas, and they are thus decomposed into the following sets of formulas $\mathcal{P}$ and $\mathcal{G}$, respectively: $\mathcal{P} = \{ \mathbf{lady}(x_1), \mathbf{meat}(x_2), \mathbf{cut}(y_1), \mathbf{up}(y_1), \mathbf{precisely}(y_1), \mathbf{subj}(y_1) = x_1, \mathbf{obj}(y_1) = x_2 \}$ $\mathcal{G} = \{ \mathbf{woman}(x_3), \mathbf{meat}(x_4), \mathbf{cut}(y_2), \mathbf{piece}(x_5), \mathbf{into}(y_2, x_5), \mathbf{subj}(y_2) = x_3, \mathbf{obj}(y_2) = x_4 \}$", "Steps 1 to 3 in Figure 2 demonstrate the variable unification routine and word axiom injection using graphs.", "Note that in step 1, all variables in formulas in $\mathcal{P}$ or $\mathcal{G}$ are initially different.", "In step 2, we run a theorem proving mechanism that uses graph terminal vertices as anchors to unify variables between formulas in $\mathcal{P}$ and those in $\mathcal{G}$.", "The premise $\mathbf{meat}(x_2)$ in $\mathcal{P}$ matches the predicate meat of the sub-goal $\mathbf{meat}(x_4)$ in $\mathcal{G}$ and the variable unification $x_4 := x_2$ is applied (and similarly for the sub-goal $\mathbf{cut}(y_2)$ in $\mathcal{G}$ with the variable unification $y_2 := y_1$).", "In step 3, we use the previous variable unification on $y_1$, the subj edge in $\mathcal{P}$ and $\mathcal{G}$, and the axiom $\forall x.$", "$\mathbf{lady}(x) \rightarrow \mathbf{woman}(x)$ from external knowledge to infer that $x_3 := x_1$.", "There is one critical reason that the word-to-word axiom injection described in Section 3.2 fails to detect phrase-to-phrase correspondences.", "That is, the natural
deduction mechanism decomposes the goal $G$ into atomic sub-goals that are then proved one-by-one (word-by-word), independently of each other except for the variable unification effect.", "This mechanism is particularly problematic when we attempt to prove phrases that resist decomposition, two-place predicates (e.g., $\mathbf{into}(x, y)$), or failures in variable unification (e.g., due to inaccurate semantics).", "Thus, we propose a method to detect phrase-to-phrase correspondences through natural deduction proofs.", "We detect phrase-to-phrase entailing relations between $T'$ and $H'$ by finding alignments between the subgraphs of their meaning representations when $T' \rightarrow H'$ or $T' \rightarrow \neg H'$ holds.", "Finding subgraph alignments is a generalization of the subgraph isomorphism problem, which is NP-complete 3 .", "In this paper, we approximate a solution to this problem by using a combination of a backtracking variable unification and a deterministic graph search on the neighborhood of non-unified variables.", "Using our running example in Figure 2, step 4 displays our proposed subgraph alignment.", "The variable $x_5$ in the graph of $G$ cannot be unified with any variable in the graph of $P$.", "This is a very common case in natural language inferences, as there might be concepts in $H$ that are not directly supported by concepts in $T$.", "In this research, we propose spanning a subgraph starting at non-unified variables (e.g., $x_5$ in $G$) whose boundaries are semantic roles (e.g., subj, obj).", "Its candidate semantics from $P$ are then the attributes of its corresponding unified variables from $G$ (e.g., cut up precisely $\rightarrow$ cut into pieces).", "To formalize this solution we introduce some graph notation.", "Let $V = V_u \cup \bar{V}_u \cup L$ be the set of vertices, where $V_u$ is the set of unified variables (e.g., $x_1$, $x_2$, $y_1$), $\bar{V}_u$ is the set of non-unified variables (e.g., $x_5$), and $L$ is a set of predicates (e.g., lady, woman).", "Let $E$ be the set of labeled, directed edges $\langle v, l, v' \rangle$ where $v, v' \in V$ and $l$ are labels that may represent a functional relation isa, a preposition, or a semantic role.", "We denote the set of two-place predicates for prepositions as PREP and the set of functional terms for semantic roles as ARGS; e.g., $\mathrm{ARGS} = \{ \mathrm{subj}, \mathrm{obj} \}$.", "A graph that represents $P$ is then a tuple $G_P = \langle V_P, E_P \rangle$, and similarly, for $G$, $G_G = \langle V_G, E_G \rangle$.", "We can now define a function to span a subgraph in the neighborhood of non-unified variables $v \in \bar{V}_u^G$ in the graph of $G$.", "We call a connected set of edges in which no semantic roles appear, i.e., $\{ \langle v, l, v' \rangle \mid l \notin \mathrm{ARGS} \}$, a phrase set.", "Let $E(x)$ be the phrase set in $E$ such that each vertex is connected to $x$ with an incoming or outgoing edge, that is, $E(x) = \{ \langle v_i, l, v_k \rangle \in E \mid (x = v_i \vee x = v_k) \wedge l \notin \mathrm{ARGS} \}$.", "[Footnote 3: Emmert-Streib et al.
(2016) gives a good overview.]", "Note that $E(x)$ induces a subgraph in a given graph $G$ and the condition $l \notin \mathrm{ARGS}$ sets the boundaries of the subgraph by excluding the semantic roles of verb phrases.", "Given two phrase sets $E$ and $E'$, we say $E'$ is reachable from $E$, written $E \rightarrow E'$, if $E$ and $E'$ share at least one variable vertex.", "Let $\rightarrow^{*}$ be the transitive closure of $\rightarrow$.", "Given a set of edges $E_G$ and a variable $v$, we define the extended phrase set, written $\mathrm{Reach}(v)$, as follows: $\mathrm{Reach}(v) = \{ e \in E \mid E_G(v) \rightarrow^{*} E \}$", "that is, the set of edges $e$ that can be reached from $v$ without crossing an edge with a semantic role label.", "This function defines a partition or equivalence class for non-unified variables $v \in \bar{V}_u^G$, and each of these partitions induces a (possibly discontinuous) phrase in $G$ that remains unproved.", "The corresponding subgraph in $P$ to each of these partitions is given by the vertices and edges connected with a path of length one to the unified variables that appear in $\mathrm{Reach}(v)$.", "That is, $\mathrm{Corr}(v) = \{ e \in E_P(v') \mid v' \in V_G^{[v]} \cap V_P \}$ where $V_G^{[v]}$ denotes the vertices in the subgraph of $G$ induced by the partition $\mathrm{Reach}(v)$.", "A subgraph alignment between $P$ and $G$ is given by the pair $\langle \mathrm{Corr}(v), \mathrm{Reach}(v) \rangle$ for all $v \in \bar{V}_u^G$, where the phrases can be read from the predicates in the vertices and edges labeled with prepositions.", "We define a mapping $(\cdot)^{*}$ from a labeled edge $\langle v, l, v' \rangle$ to an atomic formula as follows.", "$\langle v, l, v' \rangle^{*} = v'(v)$ if $l$ is isa; $l(v, v')$ if $l \in \mathrm{PREP}$; $l(v) = v'$ if $l \in \mathrm{ARGS}$. Let $E$ be a set of labeled edges, and let $E^{*}$ be $\{ \langle v, l, v' \rangle^{*} \mid \langle v, l, v' \rangle \in E \}$.", "The phrase axiom generated for each non-unified variable $v \in \bar{V}_u^G$ is defined as $\forall C. (\bigwedge \mathrm{Corr}(v)^{*} \rightarrow \exists R. (\bigwedge \mathrm{Reach}(v)^{*}))$, where $C$ is the set of free variables appearing in $\mathrm{Corr}(v)^{*}$ and $R$ is the set of free variables appearing in $\mathrm{Reach}(v)^{*}$ but not in $\mathrm{Corr}(v)^{*}$ (which includes $v$).", "In Figure 2, the only non-unified variable in the sub-goal in step 4 is $x_5$, that is, $\bar{V}_u^G = \{ x_5 \}$.", "Then, starting from the variable $x_5$, $\mathrm{Reach}(x_5)$ is $\{ \langle y_1, \mathrm{into}, x_5 \rangle, \langle x_5, \mathrm{isa}, \mathrm{piece} \rangle \}$.", "Now $V_G^{[x_5]} = \{ y_1, x_5 \}$, and thus $\mathrm{Corr}(x_5)$ is $\{ \langle y_1, \mathrm{isa}, \mathrm{cut} \rangle, \langle y_1, \mathrm{isa}, \mathrm{up} \rangle, \langle y_1, \mathrm{isa}, \mathrm{precisely} \rangle \}$.", "Finally, the following is the axiom generated from $\langle \mathrm{Corr}(x_5), \mathrm{Reach}(x_5) \rangle$ 4 .", "$\forall y_1 (\mathbf{cut}(y_1) \wedge \mathbf{up}(y_1) \wedge \mathbf{precisely}(y_1) \rightarrow \exists x_5 (\mathbf{into}(y_1, x_5) \wedge \mathbf{piece}(x_5)))$.", "If formulas $P$ and $G$ are not basic formulas (i.e., they contain logical operators other than $\wedge$ and $\exists$), they are decomposed according to inference rules of natural deduction.", "There are two types of inference rules: introduction rules decompose a goal formula into smaller sub-goals, and elimination rules decompose a formula in the pool of premises into smaller ones.", "Figure 3 shows introduction rules and elimination rules for decomposing non-basic formulas including negation, disjunction, implication, and a universal quantifier.", "By applying inference rules, a proof of non-basic formulas appearing in sub-goals can be decomposed into a set of subproofs that only have basic formulas in sub-goals.", "If a universal quantifier appears in premises, it is treated in the same way as other premises.", "[Footnote 4: Note that this axiom is logically equivalent to $\forall y_1 (\mathbf{cut}(y_1) \wedge \mathbf{up}(y_1) \wedge \mathbf{precisely}(y_1) \rightarrow \exists x_5 (\mathbf{cut}(y_1) \wedge \mathbf{into}(y_1, x_5) \wedge \mathbf{piece}(x_5)))$, indicated in the colored subgraphs in step 4 of Figure 2.]", "For example, consider the following sentence
pair with the gold label no (contradiction): T: A man is not cutting a potato H: A man is slicing a potato into pieces Figure 4 shows the proof process of $T' \rightarrow \neg H'$.", "To prove the contradiction, the formulas $T'$ and $\neg H'$ are set to $P$ and $G$, respectively.", "Then, the negation in $G$ is removed by applying the introduction rule ($\neg$I) to $G$.", "Here, False is the propositional constant denoting the contradiction.", "In the second stage of the proof, the goal is to prove False in $G_0$ from the two premises $P$ and $P'$.", "By applying ($\neg$E) to $P$, we can eliminate the negation from $P$, resulting in the new goal $G_1$.", "As both the premise $P'$ and the sub-goal $G_1$ are basic formulas, the procedure described in the previous sections applies to the pair $(P', G_1)$; these basic formulas are decomposed into atomic ones, and then the word-to-word abduction generates the desired axiom $\forall y_1 (\mathbf{cut}(y_1) \rightarrow \mathbf{slice}(y_1))$.", "Finally, the graph alignment applies in the same way as described in Figure 2, which generates the phrase axiom: $\forall y_1 (\mathbf{cut}(y_1) \rightarrow \exists x_5 (\mathbf{into}(y_1, x_5) \wedge \mathbf{piece}(x_5)))$ Using this axiom, one can complete the proof of the contradiction between $T'$ and $H'$.", "We use the SemEval-2014 version of the SICK dataset (Marelli et al., 2014) for evaluation.", "The SICK dataset is a dataset for semantic textual similarity (STS) as well as for RTE.", "It was originally designed for evaluating compositional distributional semantics, so it contains logically challenging problems involving quantifiers, negation, conjunction, and disjunction, as well as inferences with lexical and phrasal knowledge.", "The SNLI dataset (Bowman et al., 2015) contains inference problems requiring phrasal knowledge.", "However, it is not concerned with logically challenging expressions; the semantic relationships between a premise and a hypothesis are often limited to synonym/hyponym lexical substitution, replacements of short phrases, or exact word matching.", "This is because hypotheses are often parallel to the premise in structure and vocabulary.", "The FraCaS dataset (Cooper et al., 1994) also contains logically complex problems.", "However, it is confined to purely logical inferences and thus does not contain problems requiring inferences with lexical and phrasal knowledge.", "For these reasons, we choose the SICK dataset to evaluate our method of using logical inference to extract phrasal knowledge.", "The SICK dataset contains 9927 sentence pairs with a 5000/4927 training/test split.", "These sentence pairs are manually annotated with three types of labels: yes (entailment), no (contradiction), or unknown (neutral) (see Table 1 for examples).", "In RTE tasks, we need to consider a directional relation between words such as hypernym and hyponym to prove entailment and contradiction.", "Hence, to extract phrasal knowledge for RTE tasks, we use the training data whose gold label is entailment or contradiction, excluding those with the neutral label.", "For the natural deduction proofs, we used ccg2lambda (Martínez-Gómez et al., 2016) 5 , a higher-order automatic inference system, which converts CCG derivation trees into semantic representations and conducts natural deduction proofs automatically.", "We parsed the tokenized sentences of the premises and hypotheses using three wide-coverage CCG parsers: C&C (Clark and Curran, 2007), EasyCCG (Lewis and Steedman, 2014), and depccg (Yoshikawa et al., 2017).", "CCG derivation trees (parses) were converted into logical semantic representations based on
Neo-Davidsonian event semantics (Section 3.1).", "The validation of semantic templates used for semantic representations was conducted exclusively on the trial split of the SICK dataset.", "We used Coq (Bertot and Castéran, 2010), an interactive natural deduction theorem prover that we run fully automatically with a number of built-in theorem-proving routines called tactics, which include first-order logic.", "We compare phrase abduction with different experimental conditions.", "No axioms is our system without axiom injection.", "W2W is the previous strategy of word abduction (Martínez-Gómez et al., 2017).", "P2P is our strategy of phrase abduction; W2W+P2P combines phrase abduction with word abduction.", "In addition, we compare our system with three purely logic-based (unsupervised) approaches: The Meaning Factory (Bjerva et al., 2014), LangPro (Abzianidze, 2015), and UTexas (Beltagy et al., 2014).", "We also compare our system with machine learning-based approaches: the current state-of-the-art deep learning model GRU (Yin and Schütze, 2017), a log-linear regression model SemEval-2014 best (Lai and Hockenmaier, 2014), and a hybrid approach combining a logistic regression model and probabilistic logic PL+eclassif (Beltagy et al., 2016).", "We extracted 9445 axioms from the SICK training dataset.", "The average proving time to extract phrasal axioms was only 3.0 seconds per sentence pair 6 .", "Table 2 shows some examples of [Footnote 5: Available at https://github.com/mynlp/ccg2lambda.]", "[Footnote 6: Ours is a polynomial-time instance of the graph matching problem, where the vertex cover set (maximum number of variables in a phrase) is bounded to a small constant.]
paraphrase from a natural deduction proof, and the system proves the entailment relation.", "ID 2367 is also a positive example of phrasal axiom injection.", "The phrasal axiom between set fire to cameras and burn cameras with a blow torch was generated.", "This example shows that our semantic alignment succeeds in acquiring a general paraphrase by separating logical expressions such as some from content words and also by accounting for syntactic structures such as the passive-active alternation.", "For ID 3628, the axiom shown in the table was extracted from the following sentence pair with 763 ID Sentence Pair Gold Pred Axiom 9491 A group of four brown dogs are playing in a field of brown grass Yes Yes x 1 ( field ( x 1 ) brown ( x 1 ) grass ( x 1 ) Four dogs are playing in a grassy area grassy ( x 1 ) area ( x 1 )) 2367 A person is burning some cameras with a blow torch Yes Yes x 1 y 1 ( burn ( y 1 ) with ( y 1 ,x 1 ) blow torch ( x 1 ) camera ( obj ( y 1 )) The person is setting fire to the cameras set ( y 1 ) fire ( obj ( y 1 )) to ( y 1 , obj ( y 1 )) camera ( obj ( y 1 ))) 3628 A pan is being dropped over the meat Unk Yes y 1 ( pan ( obj ( y 1 )) into ( y 1 , obj ( y 1 ))) The meat is being dropped into a pan 96 A man is jumping into an empty pool There is no biker jumping in the air Unk No y 1 ( jump ( y 1 ) x 1 ( in ( y 1 ,x 1 ) air ( x 1 ))) y 1 ( man ( y 1 ) biker ( y 1 )) 408 A group of explorers is walking through the grass Yes Unk Some people are walking Table 4: Positive and negative examples on RTE from the SICK dataset.", "But the phrase drop over does not entail the phrase drop into , and a proof for the inference is over-generated in ID 3628.", "We extracted all possible phrasal axioms from the training dataset, so noisy axioms can be extracted as a consequence of multiple factors such as parsing errors or potential disambiguation in the training dataset.", "One possible solution for decreasing such noisy axioms would be to use additive composition models (Tian et al., 2016) and asymmetric learnable scoring functions to calculate the confidence on these asymmetric entailing relations between phrases.", "ID 96 is also an example of over-generation of axioms.", "The first axiom, y 1 ( jump ( y 1 ) x 1 ( in ( y 1 , x 1 ) air ( x 1 ))) was extracted from the proof of T 1 H 1 : T 1 : A child in a red outfit is jumping on a trampoline H 1 : A little boy in red clothes is jumping in the air The second axiom y 1 ( man ( y 1 ) biker ( y 1 )) was extracted from the proof of T 2 H 2 : T 2 : A man on a yellow sport bike is doing a wheelie and a friend on a black bike is catching up H 2 : A biker on a yellow sport bike is doing a wheelie and a friend on a black bike is catching up Although these axioms play a role in the proofs of T 1 H 1 and T 2 H 2 , the wrong axiom y 1 ( man ( y 1 ) biker ( y 1 )) causes the over-generation of a proof for the inference in ID 96.", "The correct one would rather be x 1 y 1 ( man ( y 1 ) on ( y 1 , x 1 ) bike ( x 1 ) biker ( y 1 )) .", "In this case, it is necessary to bundle predicates in a noun-phrase by specifying the types of a variable (entity or event) when making phrase alignments.", "For ID 408, the word explorer is not contained in the training entailment dataset and hence the relevant axiom x 1 ( explorer ( x 1 ) people ( x 1 )) was not generated.", "While our logic-based method enables detecting semantic phrase correspondences in a sentence pair in an unsupervised way, our next step is to predict unseen paraphrases of this type.", "In this 
paper, we proposed a method of detecting phrase correspondences through natural deduction proofs of semantic relations between sentence pairs.", "The key idea is to attempt a proof with automatic phrasal axiom injection by the careful management of variable sharing during the proof construction process.", "Our method identifies semantic phrase alignments by monitoring the proof of a theorem and detecting unproved sub-goals and logical premises.", "The method of detecting semantic phrase alignments would be applicable to other semantic parsing formalisms and meaning representation languages such as abstract meaning representations (AMR) (Banarescu et al., 2013).", "Experimental results showed that our method detected various phrase alignments, including non-contiguous phrases and antonym phrases.", "This result may also benefit previous phrase alignment approaches.", "The extracted phrasal axioms improved the accuracy of RTE tasks.", "In future work, we shall enhance this methodology of phrasal axiom injection to predict unseen paraphrases.", "The pairs of premises and sub-goals that can be detected through the proof process yield semantic alignments within a sentence pair.", "With the use of an additive composition model of distributional vectors, we can evaluate the validity of such semantic alignments.", "A combination of our phrasal axiom injection and an additive composition model of distributional vectors has the potential to detect unseen paraphrases in a sentence pair.", "We thank the three anonymous reviewers for their detailed comments.", "This work was supported by JST CREST Grant Number JPMJCR1301 and AIP Challenge Program, Japan." ]
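As a concrete companion to the Reach/Corr definitions above, here is a minimal Python sketch of the subgraph spanning on edge-list graphs. It assumes the goal edges passed in are those of the still-unproved sub-goals and that a unification map from goal variables to premise variables is available; all function and variable names here are our own, not the paper's.

```python
ARGS = {"subj", "obj"}  # semantic roles bound the phrase sets

def phrase_set(x, edges):
    """E(x): edges incident to x whose label is not a semantic role."""
    return {(v, l, w) for (v, l, w) in edges
            if (x == v or x == w) and l not in ARGS}

def reach(v, edges, variables):
    """Reach(v): edges reachable from v without crossing an ARGS edge."""
    visited, frontier, result = set(), {v}, set()
    while frontier:
        x = frontier.pop()
        visited.add(x)
        for (vi, l, vk) in phrase_set(x, edges):
            result.add((vi, l, vk))
            for nxt in (vi, vk):
                # Only keep expanding through variable vertices, not
                # through predicate vertices reached via isa edges.
                if nxt in variables and nxt not in visited:
                    frontier.add(nxt)
    return result

def corr(v, goal_edges, premise_edges, goal_vars, unify):
    """Corr(v): premise-side phrase sets of the unified counterparts of
    the goal variables appearing in Reach(v)."""
    in_reach = {x for (vi, _, vk) in reach(v, goal_edges, goal_vars)
                for x in (vi, vk) if x in goal_vars}
    return {e for x in in_reach if x in unify
            for e in phrase_set(unify[x], premise_edges)}

# Step-4 example from the text: Reach(x5) spans "into pieces" in the goal,
# and Corr(x5) collects "cut up precisely" from the premise via y1.
goal = {("y1", "into", "x5"), ("x5", "isa", "piece")}
premise = {("y1", "isa", "cut"), ("y1", "isa", "up"),
           ("y1", "isa", "precisely"), ("y1", "subj", "x1")}
print(reach("x5", goal, {"y1", "x5"}))
print(corr("x5", goal, premise, {"y1", "x5"}, {"y1": "y1"}))
```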
[ "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "objective", "result", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other" ]
[ "While GPT has become the de-facto method for text generation tasks, its application to pinyin input method remains unexplored.", "In this work, we make the first exploration to leverage Chinese GPT for pinyin input method.", "We find that a frozen GPT achieves state-of-the-art performance on perfect pinyin.", "However, the performance drops dramatically when the input includes abbreviated pinyin.", "A reason is that an abbreviated pinyin can be mapped to many perfect pinyin, which links to even larger number of Chinese characters.", "We mitigate this issue with two strategies, including enriching the context with pinyin and optimizing the training process to help distinguish homophones.", "To further facilitate the evaluation of pinyin input method, we create a dataset consisting of 270K instances from fifteen domains.", "Results show that our approach improves the performance on abbreviated pinyin across all domains.", "Model analysis demonstrates that both strategies contribute to the performance boost.", "GPT (Radford et al., 2018, 2019) is a Transformer-based (Vaswani et al., 2017) language model that predicts tokens in an autoregressive manner.", "With a generic model architecture and the availability of vast web text data, GPT has been successfully developed for English, Chinese (Du, 2019; Zhang et al., 2021b), and many other languages.", "It shows extraordinary ability to generate fluent sentences and has been successfully applied to a wide range of natural language generation tasks.", "However, it remains unexplored to what extent GPT handles Chinese pinyin input method 1 , which is used by Work done during internship at Tencent AI Lab.", "Pinyin input method allows users to enter Chinese characters based on their pronunciations.", "Given a pinyin 2 as the input, pinyin input method returns a list of Chinese characters pronounced with that pinyin.", "Fundamental elements of pinyin include initials ( ) and finals ( ).", "In most cases, a Chinese character is spelled with one initial followed by one final.", "For example, as shown in Table 1, the initial and final for the Chinese character (me) are w and o , respectively.", "People may enter perfect pinyin (e.g., wo men for ), where initials and finals of all Chinese characters are entered.", "There are about 420 perfect pinyin in common use.", "Sometimes, especially when multiple Chinese characters are entered at once, people may use abbreviated pinyin by only entering the initials of characters (e.g., w m for ).", "This work, to the best of our knowledge, is the first one to explore the use of Chinese GPT for pinyin input method.", "We start by testing the performance of a frozen GPT.", "In this setting, we fix the parameters of GPT and predict Chinese characters from left to right in an autoregressive manner.", "At each time step, only characters pronounced with the same pinyin are legitimate candidates to be predicted.", "We find that, when the input is perfect pinyin, a frozen GPT performs comparably to state-of-the-art systems on the benchmark dataset (Yang et al., 2012).", "However, when the input is abbreviated pinyin with only initials of characters, the 2 https://en.wikipedia.org/wiki/Pinyin 1899 Id Context of Characters Input Pinyin Target Pinyin Type s1 li bai yi you dian shi Perfect s2 l b y y d s Abbreviated s3 l b y y d s Abbreviated Table 2: Illustrative examples of the task of pinyin input method with perfect pinyin and abbreviated pinyin.", "performance of GPT has a drastic drop.", "A major reason is that an 
abbreviated pinyin maps to many perfect pinyin.", "For example, the initial w can be the abbreviation for wo , wei , wang , wai , wu , etc.", "This would lead to exponentially larger number of legitimate candidates of Chinese characters.", "We mitigate this problem by incorporating pinyin information from two directions.", "One is to enrich the input by adding pinyin as additional context.", "The other is learning over pinyin-constrained vocabulary, which enhances the model's ability to distinguish between Chinese characters pronounced with the same pinyin.", "To further facilitate the research on pinyin input method, we construct a new dataset based on the WuDaoCorpora (Yuan et al., 2021).", "Our dataset includes 270K instances from 15 commonly used news domains.", "3 To evaluate towards multiple facets, the dataset covers instances with different numbers of context characters and pinyin.", "From our experiment results, we have these key findings:", "1. On perfect pinyin, frozen GPT achieves state-of-the-art results.", "2. On abbreviated pinyin, the performance of frozen GPT drops drastically.", "Context enrichment with pinyin and pinyin-constrained training both improve the performance.", "3. The performance of GPT-based models increases as the context of Chinese characters becomes longer.", "The input of pinyin input method includes a sequence of Chinese characters C = { w 1 , . . . , w n } as the context and a sequence of pinyin P = { p n +1 , . . . , p n + k } , where w i V w , p n + j V p , and V w and V p are the vocabularies of words and", "pinyin, respectively.", "The output is a sequence of Chinese characters O = { w n +1 , . . . , w n + k } , where w n + i V w .", "The number of output characters is the same as the number of pinyin (i.e., k ) and each character should be pronounced with the corresponding pinyin.", "The output sequence is desired to follow the context of Chinese characters to form a coherent sentence.", "As mentioned earlier in the introduction section, the input pinyin might be perfect (e.g., wo men ) or abbreviated (e.g., w m ).", "Examples of the task are given in Table", "2. 
"In our definition, one situation is that the context of characters is empty, which corresponds to the scenario where people are entering pinyin at the beginning of a sentence.", "The other situation is that the context includes real words, which stands for the scenario where people are entering pinyin in the middle of a written sentence.", "People may also input a mixture of perfect and abbreviated pinyin like l b y you dian shi; we leave this as future work.", "In this paper, we assume that the oracle pinyin segmentation results are provided.", "Sometimes, a raw pinyin sequence can be mapped to different segmentation results.", "For example, the raw pinyin input jianshi can be segmented as ji an shi (集安市, a city in the southwestern part of Jilin province, China) or jian shi (见识, which is translated as experience in English).", "Pinyin segmentation is a subtask (Zhao et al., 2006; Zhou et al., 2007) of pinyin input method, which is well solved, with an accuracy of 98% (Zhang et al., 2017).", "We leave the integration of pinyin segmentation as future work.", "In this section, we first introduce the standard text-based GPT models adopted in this work (Section 3.1).", "Afterwards, we introduce how to extend GPT models for pinyin input method with enriched pinyin context (Section 3.2) and pinyin-constrained training (Section 3.3), respectively.", "In this work, we use character-level Chinese GPT as the backbone.", "We describe character-level GPT models in this subsection.", "We start with a publicly available character-level GPT (Du, 2019; https://github.com/Morizeyao/GPT2-Chinese), which we call GPT (public).", "The model has the same configuration as the standard 12-layer GPT (https://huggingface.co/gpt2).", "It is trained on the CLUECorpusSmall dataset of 14GB (Xu et al., 2020), which consists of Chinese news, Wikipedia, online forum messages, and consumer comments.", "We have tried another well-known Chinese pretrained language model called CPM (Zhang et al., 2021b), which is trained on 100GB of data.", "The vocabulary of CPM contains both Chinese characters and words (a Chinese word may consist of multiple Chinese characters).", "We built a baseline with the CPM model of 12 layers and forced the generated token to be a Chinese character.", "However, this baseline does not work well on pinyin input method, partly because our character-level decoding is inconsistent with the way CPM is trained.", "It is promising to leverage the advantage of CPM on word-level decoding, and we leave this as future work.", "To build a stronger Chinese GPT baseline, we use GPT (public) as the starting point and further pretrain on 800GB of data crawled by us that is composed of news, Wikipedia, and novel texts.", "The model is trained with a batch size of 2,560 on 32x Tesla V100 GPUs.", "We adopt the Adam optimizer (Kingma and Ba, 2015) and set the learning rate to 1e-5 with a linear warmup scheduler.", "We run the warmup process for 10k steps and train for 100k steps in total.", "We call this 12-layer GPT model GPT (ours).", "To apply GPT (public) and GPT (ours) to pinyin input method, we use the traditional decoding pipeline of GPT to generate the sequence of Chinese characters in an autoregressive way.", "After encoding all the context of characters, the model predicts a Chinese character at each time step conditioned on the pinyin.", "Only Chinese characters pronounced with the same pinyin are legitimate candidates to be predicted.", "We explore two simple ways to incorporate pinyin information and build two models correspondingly.", "The first model uses pinyin information horizontally by concatenating the pinyin input to the context of characters.", "The second model incorporates pinyin information vertically by adding a pinyin embedding layer at the bottom of GPT.", "PinyinGPT-Concat In this model, we append a pinyin sequence to the context of Chinese characters.", "In the inference stage, the input has the form x = [w_1, ..., w_n, [SEP], p_{n+1}, ..., p_{n+k}, [SEP]], where [SEP] is a special token to separate text and pinyin.", "The model largely follows the architecture of the standard GPT.", "Since there is a one-to-one relationship between pinyin tokens and generated Chinese characters (i.e., the pronunciation of w_{n+j} is p_{n+j}), we adjust the absolute positions of the characters to be generated.", "We assign the position of p_{n+j} to w_{n+j}, expecting the model to learn the alignments between pinyin and target characters (on abbreviated pinyin, this strategy brings about 0.3 points in terms of P@5).", "We further expand the vocabulary of the word embedding layer by adding pinyin tokens.", "In the training stage, given a training instance of [w_1, ..., w_n, [SEP], p_{n+1}, ..., p_{n+k}, [SEP], w_{n+1}, ..., w_{n+k}], the model is trained to minimize the loss -∑_{j=1}^{k} log p(w_{n+j} | w_{<n+j}, p), where w_{<n+j} stands for the characters before w_{n+j} and p = [p_{n+1}, ..., p_{n+k}].", "PinyinGPT-Embed The original GPT model includes a word embedding layer and a position embedding layer.", "In this model, we add a pinyin embedding layer.", "The basic idea is to provide the model with the pinyin of the character to be generated next.", "Specifically, the embedding of each character is the sum of the token embedding of the current character, the position embedding of the current character, and the pinyin embedding of the next character.", "When a word (e.g., numbers, punctuation marks, and symbols) has no corresponding pinyin, we use a special token [unk] to represent it instead.", "The training process is similar to that of PinyinGPT-Concat.",
"The loss function takes the same form as above: -∑_{j=1}^{k} log p(w_{n+j} | w_{<n+j}, p), with the pinyin information injected through the embedding layer rather than the input sequence.", "In the inference stage, we transform the input sequence to the same format.", "We describe the training details in this subsection.", "In standard GPT, the loss function is computed over the whole vocabulary.", "However, this is suboptimal for pinyin input method because the major challenge in the inference stage is how to select the best one from characters pronounced with the same pinyin (as described at the end of Section 3.1).", "This leads to an inconsistency between the training and inference stages.", "Therefore, in the training stage, the probability of a character is calculated over characters pronounced with the same pinyin, formulated as p(w_i) = exp(g(w_i)) / ∑_{w ∈ V_{p_i}} exp(g(w)), where V_{p_i} is the set of Chinese characters whose pinyin is p_i and g is the logit before the softmax layer.", "In this section, we show the results on pinyin input method over the two settings (i.e., perfect pinyin and abbreviated pinyin).", "We describe the two datasets used in the following experiments and the evaluation metric.", "PD Dataset The PD dataset (Yang et al., 2012) is a commonly used benchmark dataset for the evaluation of pinyin input method (Jia and Zhao, 2014; Zhang et al., 2017; Huang et al., 2018; Zhang et al., 2019).", "The texts in PD are extracted from the People's Daily (http://www.people.com.cn/) from 1992 to 1998.", "It contains 5.04 million segments of consecutive Chinese characters (or Maximum Input Unit in some literature) for training and 2,000 segments for testing.", "For each test case, the input pinyin are all perfect pinyin and the context is null.", "WD Dataset Since the PD data includes out-of-date news from 20 years ago and does not support us in studying the scenario where the context includes real words, we construct a new dataset called WD.", "We use the WuDaoCorpora (Yuan et al., 2021), which contains a 3TB Chinese corpus collected from 822 million web pages.", "Currently, 200GB of the corpus has been made publicly available.", "We randomly select 15 domains from WuDaoCorpora.", "For each domain, we first use an off-the-shelf Chinese segmentation toolkit (Zhang et al., 2020) to segment the documents into sentences.", "Then we automatically obtain the perfect pinyin and abbreviated pinyin of characters with pinyin converting tools.", "For each sentence, we randomly choose a context with a range from 0-3, 4-9, or 10+ words.", "Consecutively, we choose the target to be 1-3, 4-9, or 10+ words, respectively.", "It is further required that the target be continuous characters, each of which has its own pinyin.", "We call each context-target length tuple like (4-9, 10+) an evaluation configuration.", "For each configuration, we sample 2,000 test cases.", "In total, there are 9 configurations with 18,000 cases for each domain.", "The whole dataset consists of 270,000 examples.", "We investigate extremely long target lengths for research purposes, even though these configurations may not appear in real cases.", "All the instances in the WD dataset are used only for evaluation.", "Evaluation Metric We use precision at top-K (P@K) as the evaluation metric, which is widely adopted in the literature (Jia and Zhao, 2014; Zhang et al., 2017, 2019).", "It measures whether the ground truth exists in the top-K generated results.", "Some existing works also use keystroke-based metrics (Jia and Zhao, 2013; Huang et al., 2015) and human evaluation, which we do not use in this work because the evaluation process is more complex and time-consuming.", "Other Settings We train both PinyinGPT models with the training data of GPT (ours).", "To preprocess the corpus, we use the public library pypinyin to get the pinyin of Chinese characters.", "We initialize both PinyinGPT models with GPT (ours).", "Both models are trained for 100k steps on 32 NVIDIA V100 Tensor Core GPUs with a batch size of 25,000.", "The learning rate is 5e-5.", "We maintain a maximum of 128 tokens for every training example.", "We use a probability of 50% to sample a target sequence with fewer than 5 words; otherwise we randomly sample a target sequence with 6 to 25 words.", "This rate is empirically selected, as it is less practical for users to type very long sequences.", "During the inference stage, we use beam search with a beam size of 16 for text generation.", "We report results on the PD dataset (Yang et al., 2012).", "We use pinyin-constrained training in all configurations and train PinyinGPT models with different pinyin vocabularies for perfect pinyin and abbreviated pinyin, respectively.", "We compare with the following baselines.", "Google IME is a commercial Chinese IME which provides a debuggable API.", "On-OMWA (Zhang et al., 2017) is an online model for word acquisition, which adaptively learns new words for Chinese IME.", "On-P2C (Zhang et al., 2019) is a neural pinyin-to-Chinese-character conversion model, which is augmented by an online-updated vocabulary to support open vocabulary learning.", "In Table 3, the first group (top) shows the results of the aforementioned baselines, which are directly extracted from On-P2C (Zhang et al., 2019).", "The bottom group shows the performance of GPT (public) and GPT (ours) with frozen parameters.", "We find that GPT (public) achieves comparable performance with existing systems in terms of P@5 and P@10.", "After being trained with a larger corpus, GPT (ours) surpasses all the baseline models in terms of all metrics.", "It is worth noting that the existing baselines are supervised models fine-tuned on training instances.", "The results demonstrate the effectiveness of GPT models pretrained on vast amounts of text.", "In this section, we report results for both perfect pinyin and abbreviated pinyin on WD.", "In Table 4, we list the overall experiment results of the two GPT baselines as well as our PinyinGPT models.", "We have several findings based on the results.", "First, from each row, we can see that there is a drastic performance drop for all models.", "The reason is that each abbreviated pinyin can be mapped to a large number of candidate characters, so the problem is more challenging compared to perfect pinyin.", "We also believe that the evaluation metric of P@1 might be too strict for abbreviated pinyin because sometimes the top predictions might be correct (as reflected in Figure 3) even though they may be different from the ground truth.", "Second, adding pinyin information to GPT obtains limited improvement on perfect pinyin, but boosts the abbreviated setting by 5 points on P@5 and 7 points on P@10, respectively.", "Third, concatenating pinyin context horizontally is better than adding pinyin embedding vertically.", "Last, fine-tuning all the parameters performs better than keeping the parameters of GPT fixed.", "In this section, we conduct experiments to understand the importance of pinyin context and pinyin-constrained training.", "Results are given in Figure 2.",
"The baseline model is GPT (ours).", "The model + Pinyin Context means that we concatenate the pinyin context (i.e., PinyinGPT-Concat) and learn over the whole vocabulary.", "The model + Pinyin Context + PC-LOSS means that we use both pinyin context and pinyin-constrained training.", "The figure shows that taking pinyin as extra context works well to improve results in terms of P@5 and P@10.", "When the two components are adopted, the performance is further improved.", "To analyze how context length and target length affect performance, we aggregate the experiment results to form a matrix of accuracy for each configuration in Table 5.", "Each score is averaged over all the domains.", "From each column, we can see that a longer context benefits both GPT and our model in pinyin input method, which verifies the context-understanding ability of GPT models.", "An interesting finding is that, when the context is long enough (e.g., 10+), adding pinyin does not help improve P@1.", "We list three cases in Figure 3 to compare the model outputs produced by GPT (ours) and PinyinGPT-Concat.", "The first case shows that, given perfect pinyin as the input, both GPT (ours) and PinyinGPT-Concat make correct predictions.", "In the second case, abbreviated pinyin is given as the input.", "PinyinGPT-Concat makes the correct prediction, while the prediction of GPT (ours) does not fit the context well.", "In Case 3, even though PinyinGPT-Concat ranks the ground truth as the second best, the top-1 prediction still makes much sense and fits well with the context.", "In all cases, GPT (ours) usually generates predictions which are grammatically sound but semantically inappropriate.", "In this subsection, we analyze how performance differs with respect to domains.", "We sample six domains for illustration in Table 6 (the table for all 15 domains is attached in the Appendix).", "The table shows that PinyinGPT-Concat achieves consistent improvement over GPT on all domains.", "We also find that the absolute scores vary a lot across domains.", "This reflects the different predictability of texts in different domains.", "For example, the P@10 score of the Culture domain is 16 points lower than that of the Medical domain.", "In the Medical domain, the texts contain plenty of descriptions of symptoms and instructions for medicines, which are used in a somewhat canonical way.", "In the Culture domain, the texts are less constrained and have more variation.", "Considering that pinyin input method requires both accuracy and efficiency, we further train a 6-layer GPT to investigate the trade-off.", "Our 6-layer GPT is directly truncated and initialized from the 12-layer GPT and is continually trained for 50k steps with the same configuration as the 12-layer GPT.", "The evaluation is conducted over the 9 configurations of context-target length and averaged across all domains.", "Specifically, each configuration is inferred using a data-center GPU (NVIDIA V100 Tensor Core), and the GPU is fully occupied by one model.", "The beam size is set to 16.", "We report the average inference time in milliseconds as well as accuracy in terms of P@K of PinyinGPT-Concat.", "Table 7 shows the result for the configuration (4-9, 4-9).", "The table shows that the inference time of the model with a 6-layer Transformer is almost 30% faster than that with 12 layers.", "However, the performance of the 6-layer model drops consistently in the abbreviated setting (we also list the experiment results for all configurations in the Appendix).", "Table 6 (performance of six sample domains over WD using abbreviated pinyin; each cell gives P@1 / P@5 / P@10): Games: GPT (ours) 24.04 / 32.78 / 34.23, PinyinGPT-Concat 25.78 / 38.26 / 41.89; Culture: GPT (ours) 21.86 / 29.33 / 30.94, PinyinGPT-Concat 22.10 / 33.33 / 36.72; Sports: GPT (ours) 28.54 / 37.13 / 38.69, PinyinGPT-Concat 29.81 / 43.56 / 46.95; Real Estate: GPT (ours) 26.53 / 35.27 / 36.74, PinyinGPT-Concat 27.28 / 40.16 / 43.86; Medical: GPT (ours) 33.59 / 43.54 / 44.93, PinyinGPT-Concat 34.76 / 49.28 / 52.56; Finance: GPT (ours) 29.00 / 37.24 / 38.47, PinyinGPT-Concat 29.17 / 42.17 / 45.52.", "Pinyin Input Method We describe existing works based on whether the input pinyin is perfect or abbreviated.", "A majority of existing works focus on perfect pinyin.", "Traditional models are typically based on statistical language models (Chen and Lee, 2000) and statistical machine translation (Yang et al., 2012).", "Recent works are usually built with neural networks.", "For example, Moon IME (Huang et al., 2018) integrates an attention-based neural network and an information retrieval module.",
"Zhang et al. (2019) improve an LSTM-based encoder-decoder model with online vocabulary adaptation.", "For abbreviated pinyin, Co-CAT (Huang et al., 2015) uses machine translation technology to reduce the number of typing letters.", "Huang and Zhao (2018) propose an LSTM-based encoder-decoder approach with the concatenation of context words and abbreviated pinyin as input.", "Our work differs from existing works in that we are the first to exploit GPT and verify the pros and cons of GPT in different situations.", "In addition, there are some works handling pinyin with typing errors.", "Chen and Lee (2000) investigate a typing model which handles spelling correction in sentence-based pinyin input method.", "CHIME (Zheng et al., 2011) is an error-tolerant Chinese pinyin input method.", "It finds similar pinyin, which are further ranked with Chinese-specific features.", "Jia and Zhao (2014) propose a joint graph model to globally optimize the tasks of pinyin input method and typo correction.", "We leave error-tolerant pinyin input method as future work.", "Pinyin-enhanced Pretrained Models Our methodology also relates to pretrained models that use pinyin information.", "Sun et al. (2021) propose a general-purpose Chinese BERT with new embedding layers to inject pinyin and glyph information of characters.", "There are also task-specific BERT models, especially for the task of grammatical error correction, since an important type of error is caused by characters pronounced with the same pinyin.", "Zhang et al. (2021a) add a pinyin embedding layer and learn to predict characters from similarly pronounced candidates.", "PLOME (Liu et al., 2021) adds two embedding layers, implemented with two GRU networks, to inject both the pinyin and the shape of characters, respectively.", "Xu et al. (2021) add a hierarchical encoder to inject the pinyin letters at character and sentence levels, and add a ResNet encoder to use graphic features of character images.", "In this paper, we explore how to adapt pretrained Chinese GPT to pinyin input method.", "To begin with, we find that a frozen GPT with decoding conditioned on pinyin can reach state-of-the-art performance on perfect pinyin.", "However, in the abbreviated setting, the performance drops by a large gap.", "Through our experiments, we find that both context enrichment with pinyin and pinyin-constrained training improve the performance.", "In the future, we would like to investigate more challenging settings, including error-tolerant pinyin input method and mixtures of perfect pinyin and abbreviated pinyin." ]
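The pinyin-constrained objective above can be made concrete with a short sketch. This is a minimal PyTorch-style illustration, not the paper's code: the vocabulary size, the logit vector, and the pinyin_to_ids map (from a pinyin, perfect or abbreviated, to the ids of characters pronounced with it) are all placeholder assumptions. The softmax and cross-entropy are computed only over the candidate set V_{p_i}, and decoding is likewise restricted to legitimate candidates.

```python
import torch
import torch.nn.functional as F

# Hypothetical map from pinyin to character ids.  For an abbreviated pinyin
# such as "w", the candidate set is the union over all perfect pinyin that
# start with "w" (wo, wei, wang, ...), hence it is much larger.
pinyin_to_ids = {"wo": torch.tensor([11, 42, 97]),
                 "w":  torch.tensor([11, 42, 97, 103, 250, 511])}

def pinyin_constrained_loss(logits, target_id, pinyin):
    """Cross-entropy over characters sharing the target pinyin (Section 3.3)."""
    cand = pinyin_to_ids[pinyin]            # legitimate candidates V_{p_i}
    cand_logits = logits[cand]              # logits g restricted to V_{p_i}
    target_pos = (cand == target_id).nonzero(as_tuple=True)[0]
    return F.cross_entropy(cand_logits.unsqueeze(0), target_pos)

def pinyin_constrained_decode(logits, pinyin):
    """At inference, only characters matching the input pinyin can be emitted."""
    cand = pinyin_to_ids[pinyin]
    return int(cand[torch.argmax(logits[cand])])

logits = torch.randn(1000)                  # stand-in for the GPT head output
print(pinyin_constrained_loss(logits, target_id=42, pinyin="wo"))
print(pinyin_constrained_decode(logits, pinyin="w"))
```

Restricting the normalization in this way keeps training consistent with inference, where the model only ever has to discriminate among homophones of the given pinyin.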
[ "abstain", "objective", "result", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "objective", "result", "abstain", "result", "objective" ]
[ "Text-to-image retrieval is an essential task in cross-modal information retrieval, i.e., retrieving relevant images from a large and unlabelled dataset given textual queries.", "In this paper, we propose VisualSparta, a novel ( Visual text Spar se T ransformer M a tching) model that shows significant improvement in terms of both accuracy and efficiency.", "VisualSparta is capable of outperforming previous state-of-the-art scalable methods in MSCOCO and Flickr30K.", "We also show that it achieves substantial retrieving speed advantages, i.e., for a 1 million image index, VisualSparta using CPU gets 391X speedup compared to CPU vector search and 5.4X speedup compared to vector search with GPU acceleration.", "Experiments show that this speed advantage even gets bigger for larger datasets because VisualSparta can be efficiently implemented as an inverted index.", "To the best of our knowledge, VisualSparta is the first transformer-based text-to-image retrieval model that can achieve real-time searching for large-scale datasets, with significant accuracy improvement compared to previous state-of-the-art methods.", "Text-to-image retrieval is the task of retrieving a list of relevant images from a corpus given text queries.", "This task is challenging because in order to find the most relevant images given text query, the model needs to not only have good representations for both textual and visual modalities, but also capture the fine-grained interaction between them.", "Existing text-to-image retrieval models can be broadly divided into two categories: query-agnostic and query-dependent models.", "The dual-encoder architecture is a common query-agnostic model, which uses two encoders to encode the query This work was partially done during an internship at SOCO Figure 1: Inference Time vs. Model Accuracy.", "and images separately and then compute the relevancy via inner product (Faghri et al. , 2017; Lee et al. , 2018; Wang et al. , 2019a).", "The transformer architecture is a well-known query-dependent model (Devlin et al. , 2018; Yang et al. , 2019).", "In this case, each pair of text and image is encoded by concatenating and passing into one single network, instead of being encoded by two separate encoders (Lu et al. , 2020; Li et al. , 2020b).", "This method borrows the knowledge from large pretrained transformer models and shows much better accuracy compared to dual-encoder methods (Li et al. , 2020b).", "Besides improving the accuracy, retrieval speed has also been a long-existing subject of study in the information retrieval (IR) community (Man-ning et al. , 2008).", "Query-dependent models are prohibitively slow to apply to the entire image corpus because it needs to recompute for every different query.", "On the other hand, query-agnostic model is able to scale by pre-computing an image data index.", "For dual-encoder systems, further speed improvement can be obtained via Approximate Nearest Neighbors (ANN) Search and GPU acceleration (Johnson et al. 
, 2019).", "In this work, we propose VisualSparta, a simple yet effective text-to-image retrieval model that outperforms all existing query-agnostic retrieval models in both accuracy and speed.", "By modeling fine-grained interaction between visual regions with query text tokens, our model is able to harness the power of large pre-trained visual-text models and scale to very large datasets with real-time response.", "To our best knowledge, this is the first model that integrates the power of transformer models with real-time searching, showing that large pre-trained models can be used in a way with significantly less amount of memory and computing time.", "Lastly, our method is embarrassingly simple because its image representation is essentially a weighted bag-of-words, and can be indexed in a standard Inverted Index for fast retrieval.", "Comparing to other sophisticated models with distributed vector representations, our method does not depend on ANN or GPU acceleration to scale up to very large datasets.", "Contributions of this paper can be concluded as the following: (1) A novel retrieval model that achieves new state-of-the-art results on two benchmark datasets, i.e., MSCOCO and Flickr 30K.", "(2) Weighted bag-of-words is shown to be an effective representation for cross-modal retrieval that can be efficiently indexed in an Inverted Index for fast retrieval.", "(3) Detailed analysis and ablation study that show advantages of the proposed method and interesting properties that shine light for future research directions.", "Large amounts of work have been done on learning a joint representation between texts and images (Karpathy and Fei-Fei, 2015; Huang et al. , 2018; Lee et al. , 2018; Wehrmann et al. , 2019; Li et al. , 2020b; Lu et al. , 2020).", "In this section, we revisit dual-encoder based retrieval model and transformer-based retrieval model.", "encode information from text and image modalities.", "In Karpathy and Fei-Fei (2015), the author used a Bi-directional Recurrent Neural Network (BRNN) to encode the textual information and used a Region Convolutional Neural Network (RCNN) to encode the image information, and the final similarity score is computed via the interaction of features from two encoders.", "Lee et al. (2018) proposed stacked cross-attention network, where the text features are passed through two attention layers to learn interactions with the image region.", "Wang et al. (2019a) encoded the location information as yet another feature and used both deep RCNN features (Ren et al. , 2016) and the fine-grained location features for the Region of Interest (ROI) as image representation.", "In Wang et al. (2020), the author utilized the information from Wikipedia as an external corpus to construct a Graph Neural Network (GNN) to help model the relationships across objects.", "Large pre-trained language models (PLM) show great success over multiple tasks in NLP areas in recent years (Devlin et al. , 2018; Yang et al. , 2019; Dai et al. , 2019).", "After that, research has also been done on cross-modal transformer-based models and proves that the self-attention mechanism also helps jointly capture visual-text relationships (Li et al. , 2019; Lu et al. , 2020; Qi et al. , 2020; Li et al. 
"By first pretraining on large-scale visual-text datasets, these transformer-based models capture rich semantic information from both texts and images.", "Models are then fine-tuned for the text-to-image retrieval task and show improvements by a large margin.", "However, the problem with using transformer-based models is that they are prohibitively slow in the retrieval context: the model needs to compute pair-wise similarity scores between all queries and answers, making it almost impossible to use in any real-world scenario.", "Our proposed method borrows the power of large pre-trained models while reducing the inference time by orders of magnitude.", "PLMs have shown promising results in Information Retrieval (IR), despite their slow speed due to the complex model structure.", "The IR community recently started working on empowering classical full-text retrieval methods with contextualized information from PLMs (Dai and Callan, 2019; MacAvaney et al., 2020; Zhao et al., 2020).", "Dai and Callan (2019) proposed DeepCT, a model that learns to generate query importance scores from the contextualized representations of large transformer-based models.", "Zhao et al. (2020) proposed the sparse transformer matching model (SPARTA), where the model learns term-level interactions between query and text answers and generates weighted term representations for answers during index time.", "Our work is motivated by work in this direction and extends its scope to cross-modal understanding and retrieval.", "In this section, we present the VisualSparta retriever, a fragment-level transformer-based model for efficient text-image matching.", "The focus of our proposed model is two-fold: Recall performance: fine-grained relationships between queries and image regions are learned to enrich cross-modal understanding; Speed performance: query embeddings are non-contextualized, which allows the model to put most of the computation offline.", "As query processing is an online operation during retrieval, the efficiency of encoding the query needs to be well considered.", "Previous methods pass the query sentence into a bi-RNN to give each token a representation conditioned on its surrounding tokens (Lee et al., 2018; Wang et al., 2019a, 2020).", "Instead of encoding the query in a sequential manner, we drop the order information of the query and only use the pretrained token embeddings to represent each token.", "In other words, we do not encode the local contextual information of the query and purely rely on the independent word embedding E_tok of each token.", "Let a query be q = [w_1, ..., w_m] after tokenization; we have: w_i = E_tok(w_i) (1), where w_i is the i-th token of the query.", "Therefore, a query is represented as w = {w_1, ..., w_m}, w_i ∈ R^{d_H}.", "In this way, each token is represented independently and agnostic to its local context.", "This is essential for efficient indexing and inference, as described next in Section 3.3.", "Compared with query information, which needs to be processed in real time, answer processing can be rich and complex, as the answer corpus can be indexed offline before the query comes.", "Therefore, we follow the recent works on Vision-Language Transformers (Li et al., 2019, 2020b) and use contextualized representations for the answer corpus.", "Specifically, for an image, we represent it using information from three sources: regional visual features, regional location features, and label features with attributes, as shown in Figure 2.", "Regional visual features and location features Given an image v, we pass it through Faster-RCNN (Ren et al., 2016) to get n regional visual features v_i and their corresponding location features l_i: v_1, ..., v_n = RCNN(v), v_i ∈ R^{d_rcnn} (2), and the location features are the normalized top-left and bottom-right positions of the region proposed by Faster-RCNN, together with the region width and height: l_i = [l_xmin, l_xmax, l_ymin, l_ymax, l_width, l_height] (3).", "Therefore, we represent one region by the concatenation of the two features: E_i = [v_i; l_i] (4), E_image = [E_1, ..., E_n], E_i ∈ R^{d_rcnn + d_loc} (5), where E_image is the representation for a single image.", "Label features with attributes In addition to the deep representations from the proposed image regions, previous work by Li et al. (2020b) shows that object label information is also useful as an additional representation of the image.", "We also encode the predicted objects and corresponding attributes obtained from the Faster-RCNN model with pretrained word embeddings: o_i = E_tok(o_i) + E_pos(o_i) + E_seg(o_i) (6), E_label = [o_1, ..., o_k], o_i ∈ R^{d_H} (7), where k represents the number of tokens after tokenization of the attributes and object labels for the n image regions.", "E_tok, E_pos, and E_seg represent token embeddings, position embeddings, and segmentation embeddings respectively, similar to the embedding structure in Devlin et al. (2018).", "Therefore, one image can be represented by the linearly transformed image features concatenated with the label features: a = [(E_image W + b); E_label] (8), where W ∈ R^{(d_rcnn + d_loc) × d_H} and b ∈ R^{d_H} are the trainable linear combination weights and bias.", "The concatenated embeddings a are then passed into a Transformer encoder T_image, and the final image feature is its hidden output: H_image = T_image(a) (9), where H_image ∈ R^{(n+k) × d_H} is the final contextualized representation for one image.", "Given the visual and query representations, the matching score can now be computed between a query and an image.", "Different from other dual-encoder based interaction models, we adopt the fine-grained interaction model proposed by Zhao et al. (2020) to compute the relevance score by: y_i = max_{j ∈ [1, n+k]}(w_i^T H_j) (10); φ(y_i) = ReLU(y_i + b) (11); f(q, v) = ∑_{i=1}^{m} log(φ(y_i) + 1) (12), where Eq. 10 captures the fragment-level interaction between every image region and every query word token, Eq. 11 produces sparse embedding outputs via a combination of ReLU and a trainable bias, and Eq. 12 sums up the scores and prevents overly large scores via the log operation.", "Following the training method presented in Zhao et al. (2020), we use cross-entropy loss to train VisualSparta.",
"Concretely, we maximize the objective in Eq. 13, log [exp(f(q, v+)) / (exp(f(q, v+)) + ∑_{v- ∈ V-} exp(f(q, v-)))], which tries to decide between the ground-truth image v+ and irrelevant/random images V- for each text query q.", "The parameters to learn include both the query encoder E_tok and the image transformer encoder T_image.", "Parameters are optimized using Adam (Kingma and Ba, 2014).", "In order to achieve efficient training, we use other image samples from the same batch as negative examples for each training instance, an effective technique that is widely used in response selection (Zhang et al., 2018; Henderson et al., 2019).", "Preliminary experiments found that as long as the batch size is large enough (we choose to use a batch size of 160), this simple approach performs equally well compared to other more sophisticated methods, for example, sampling similar images that have nearby labels.", "The VisualSparta model structure is suitable for real-time inference.", "As discussed in Section 3.1.1, since query embeddings are non-contextualized, we are able to compute the relationship between each query term w_i and every image v offline.", "Concretely, during offline indexing, for each image v, we first compute the fragment-level interaction between its regions and every query term in the vocabulary, the same as in Eq. 10.", "Then, we cache the computed ranking score: CACHE(w, v) = log(φ(max_{j ∈ [1, n+k]}(w^T H_j)) + 1) (14).", "During test time, given a query q = [w_1, ..., w_m], the ranking score between q and an image v is: f(q, v) = ∑_{i=1}^{m} CACHE(w_i, v) (15).", "As shown in Eq. 15, the final ranking score during inference time is an O(1) look-up operation followed by summation.", "Also, the query-time computation can be fit into an inverted index architecture (Manning et al., 2008), which enables us to use the VisualSparta index with off-the-shelf search engines, for example, Elasticsearch (Gheorghe et al., 2015).", "In this paper, we use the MSCOCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015) datasets for the training and evaluation of text-to-image retrieval tasks.", "MSCOCO is a large-scale multitask dataset including object detection, semantic segmentation, and image captioning data.", "In this experiment, we follow previous work and use the image captioning data split for text-to-image model training and evaluation.", "Following the experimental settings from Karpathy and Fei-Fei (2015), we split the data into 113,287 images for training, 5,000 images for validation, and 5,000 images for testing.", "Each image is paired with 5 different captions.", "The performance on the 1,000 (1K) and 5,000 (5K) test splits is reported and compared with previous results.", "Flickr30K (Plummer et al., 2015) is another publicly available image captioning dataset, which contains 31,783 images in total.", "Following the split from Karpathy and Fei-Fei (2015), 29,783 images are used for training, and 1,000 images are used for validation.", "Scores are reported based on results from 1,000 test images.", "For speed experiments, in addition to the MSCOCO 1K and 5K splits, we create the 113K split and the 1M split, two new data splits to test the performance in the large-scale retrieval setting.", "Since these splits are only used for speed experiments, we directly reuse the training data from the existing dataset without the concern of data leaking between training and testing phases.", "Specifically, the 113K split refers to the MSCOCO training set, which contains 113,287 images, 23 times larger than the MSCOCO 5K test set.", "The 1M split consists of one million images randomly sampled from the MSCOCO training set.", "Speed experiments are done on these four splits to give comprehensive comparisons under different sizes of image index.", "Following previous works, we use recall rate as our accuracy evaluation metric.", "On both the MSCOCO and Flickr30K datasets, we report Recall@t for t = 1, 5, 10 and compare with previous works.", "For speed performance evaluation, we choose queries per second and latency (ms) as the evaluation metrics to test how each model performs in terms of speed under different sizes of image index.", "All experiments are done using the PyTorch library.", "During training, one NVIDIA Titan X GPU is used.", "During speed performance evaluation, one NVIDIA Titan X GPU is used for models that need GPU acceleration.", "One 10-core Intel 9820X CPU is used for models that need CPU acceleration.", "For the image encoder, we initialize the model weights from the Oscar-base model (Li et al., 2020b) with 12 layers, 768 hidden dimensions, and 110M parameters.", "For the query embedding, we initialize it from the Oscar-base token embedding.", "The Adam optimizer (Kingma and Ba, 2014) is used with the learning rate set to 5e-5.", "The number of training epochs is set to 20.", "The input sequence length is set to 120, with 70 for label-with-attributes features and 50 for deep visual features.", "We search over batch sizes (96, 128, 160) using Recall@1 validation accuracy, and set the batch size to 160.", "We compare both recall and speed performance with the current state-of-the-art retrieval models in text-to-image search.", "Query-dependent model refers to models in which image information cannot be encoded offline, because each image encoding depends on the query information.", "These models usually achieve promising performance in recall but suffer from prohibitively slow inference speed.", "Query-agnostic model refers to models in which image information can be encoded offline and is independent of query information.", "In Sections 4.4.1 and 4.4.2, we evaluate accuracy and speed performance respectively for both lines of methods.", "As shown in Table 1, the results reveal that our model is competitive compared with previous methods.", "Among query-agnostic methods, our model is significantly superior to the state-of-the-art results in all evaluation metrics over both the MSCOCO and Flickr30K datasets and outperforms previous methods by a large margin.", "Specifically, on the MSCOCO 1K test set, our model outperforms the previously best query-agnostic method (Wang et al., 2019a) by 7.1%, 1.6%, and 1.0% for Recall@1, 5, and 10, respectively.",
"On the Flickr30K dataset, VisualSparta also shows strong improvement compared with the previous best method: on Recall@1, 5, and 10, our model gets 4.2%, 2.2%, and 0.4% improvement, respectively.", "We also observe that VisualSparta reduces the gap by a large margin between query-agnostic and query-dependent methods.", "In the MSCOCO-1K split, the performance of VisualSparta is only 1.0%, 2.3%, and 1.0% lower than the Unicoder-VL method (Li et al., 2020a) for Recall@1, 5, and 10, respectively.", "Compared to Oscar (Li et al., 2020b), the current state-of-the-art query-dependent model, our model is 7% lower in MSCOCO-1K Recall@1.", "This shows that there is still room for improvement in terms of accuracy for query-agnostic models.", "To show the efficiency of the VisualSparta model in both small-scale and large-scale settings, we create the 113K dataset and the 1M dataset in addition to the original 1K and 5K test splits, as discussed in Section 4.2.", "Speed experiments are done using these four splits as testbeds.", "To make a fair comparison, we benchmark each method with its preferred hardware and software for speed acceleration.", "Specifically, for the CVSE model (Wang et al., 2020), both CPU and GPU inference times are recorded.", "For the CPU setting, the Maximum Inner Product Search (MIPS) is performed using their original code based on Numpy (Harris et al., 2020).", "For the GPU setting, we adopt the model and use FAISS (Johnson et al., 2019), an optimized MIPS library, to test the speed performance.", "For the Oscar model (Li et al., 2020b), since the query-dependent method cannot be formulated as a MIPS problem, we run the original model using GPU acceleration and record the speed.", "For VisualSparta, we use the top-1000 term scores setting for the experiment.", "Since VisualSparta can be fit into an inverted-index architecture, GPU acceleration is not required.", "For all experiments, we use 5,000 queries from the MSCOCO-1K split as query input to test the speed performance.", "As we can see from Table 2, in all four data splits (1K, 5K, 113K, 1M), VisualSparta significantly outperforms both the best query-agnostic model (CVSE (Wang et al., 2020)) and the best query-dependent model (Oscar (Li et al., 2020b)).", "Under the CPU comparison, the speed of VisualSparta is 2.5, 2.4, 51, and 391 times faster than that of the CVSE model in the 1K, 5K, 113K, and 1M splits, respectively.", "This speed advantage also holds even if previous models are accelerated with GPUs.", "To apply the latest MIPS progress to the comparison, we adapt the CVSE model to use FAISS (Johnson et al., 2019) for better speed acceleration.", "Results in the table reveal that the speed of VisualSparta can also beat that of CVSE by 2.5X in the 1K setting, and this speed advantage increases to 5.4X when the index size increases to 1M.", "Our model holds an absolute advantage when comparing speed to query-dependent models such as Oscar (Li et al., 2020b).", "Since the image encoding is dependent on the query information, no offline indexing can be done for the query-dependent model.", "As shown in Table 2, even with GPU acceleration, the Oscar model is prohibitively slow: in the 1K setting, Oscar is 1,128 times slower than VisualSparta.", "The number increases to 391,000 when the index size increases to 1M.", "As described in Section 3.3, each image can be well represented by a list of weighted tokens independently.", "This feature makes VisualSparta flexible at indexing time: users can choose to index using the top-n term scores based on their memory constraints or speed requirements.", "Table 3 compares recall and speed on both the MSCOCO 1K and 5K splits under different choices of n.", "From the comparison between using all term scores and using the top-2000 term scores, we found that VisualSparta can get a 1.8X speedup with almost no performance drop.", "If higher speed is needed, n can always be set to a lower number at a sacrifice of accuracy, as shown in Table 3.", "Figure 1 visualizes the trade-off between model accuracy and inference speed.", "The x-axis represents the average inference time of a single query in milliseconds, and the y-axis denotes Recall@1 on the MSCOCO 1K test set.", "For VisualSparta, each dot represents the model performance under a certain top-n term scores setting.", "For other methods, each dot represents their speed and accuracy performance.", "The curve reveals that with larger n, the recall becomes higher and the speed gets slower.", "From the comparison between VisualSparta and other methods, we observe that by setting the top-n term scores to 500, VisualSparta can already beat the accuracy of both PFAN (Wang et al., 2019a) and CVSE (Wang et al., 2020) with a 2.8X speedup.", "As shown in Figure 2, the image encoder takes a concatenation of object label features with attributes and deep visual features as input.", "In this section, we do an ablation study and analyze the contribution of each part of the image features to the final score.", "In Table 4, different components are removed from the image encoder for performance comparison.", "From the table, we observe that removing either the attributes features (row 1) or the label features with attributes (row 2) only hurts the performance by a small margin.", "However, when dropping visual features and only using label-with-attributes features for image representation (row 3), the model performance drops by a large margin, where the Recall@1 score drops from 68.7% to 49.1% (-19.6%).", "Table 4 (ablation results, recoverable rows; each cell gives R@1 / R@5 / R@10 on MSCOCO-1K and MSCOCO-5K): 1 VisualSparta: 68.7 / 91.2 / 96.2 (1K), 45.1 / 73.0 / 82.5 (5K); 2 - attributes features: 68.2 (-0.5) / 91.8 (+0.6) / 96.3 (+0.1) (1K), 44.4 (-0.7) / 72.8 (-0.2) / 82.4 (-0.1) (5K).", "From this ablation study, we can conclude that deep visual features make the most contribution to the VisualSparta model structure, which shows that deep visual features are significantly more expressive compared to textual features, i.e., label-with-attributes features.", "More importantly, it shows that VisualSparta is capable of learning cross-modal knowledge, and the biggest gain indeed comes from learning to match query term embeddings with deep visual representations.", "5.3 Cross-domain Generalization", "Table 5 (cross-dataset performance; models are trained on the MSCOCO dataset and tested on the Flickr30K dataset; R@1 / R@5 / R@10): VSE++ (Faghri et al., 2017): 28.4 / 55.4 / 66.6; LVSE (Engilberge et al., 2018): 34.9 / 62.4 / 73.5; SCAN (Lee et al., 2018): 38.4 / 65.0 / 74.4; CVSE (Wang et al., 2020): 38.9 / 67.3 / 76.1; VisualSparta (ours): 45.4 / 71.0 / 79.2.", "Table 5 shows the cross-domain performance of the different models.", "All models are trained on MSCOCO and tested on Flickr30K.", "We can see from the table that VisualSparta consistently outperforms the other models in this setting.", "This indicates that the performance of VisualSparta is consistent across different data distributions, and the performance gain compared to other models is also consistent when testing in this cross-dataset setting.", "We query VisualSparta on the MSCOCO 113K split and check the results.", "As shown in Figure 3, visual and label features together represent the max-attended features for given query tokens.", "Interestingly, we observe that the VisualSparta model is capable of grounding adjectives and verbs to relevant image regions.", "For example, graz grounds to the head of the giraffe in the first example.", "This further confirms the hypothesis that a weighted bag-of-words is a valid and rich representation for images.", "In conclusion, this paper presents VisualSparta, an accurate and efficient text-to-image retrieval model that shows state-of-the-art scalable performance on both MSCOCO and Flickr30K.", "Its main novelty lies in the combination of a powerful pre-trained image encoder with fragment-level scoring.", "Detailed analysis also demonstrates that our approach has substantial scalability advantages compared to previous best methods when indexing large image datasets for real-time searching, making it suitable for real-world deployment." ]
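Because the final score in Eq. 15 is just a sum of per-term cached values, the whole retrieval step can be served from an inverted index. The sketch below is a simplified stand-in, not the released system: it uses plain Python dictionaries instead of a search engine such as Elasticsearch, and the cached scores are made-up numbers; in practice each entry would be CACHE(w, v) from Eq. 14, with only the top-n terms kept per image.

```python
from collections import defaultdict

# Offline stage: per-(term, image) scores, hypothetically produced by Eq. 14.
cache = {
    "dog":   {"img1": 2.1, "img7": 1.4},
    "grass": {"img1": 1.3, "img3": 0.9},
    "torch": {"img7": 0.2},
}

# Invert into term -> posting list so each query token is an O(1) look-up.
index = defaultdict(dict)
for term, postings in cache.items():
    index[term].update(postings)

def rank(query, topk=2):
    """Score candidate images by summing cached term scores (Eq. 15)."""
    scores = defaultdict(float)
    for token in query.lower().split():
        for image_id, s in index.get(token, {}).items():
            scores[image_id] += s
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topk]

print(rank("dog on grass"))  # tokens missing from the index simply contribute 0
```

Since scoring never touches the model at query time, this is what lets CPU-only serving scale in the reported experiments, and why shrinking n trades accuracy for index size and speed.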
[ "abstain", "objective", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "method", "objective", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective" ]
[ "We study the task of long-form opinion text generation, which faces at least two distinct challenges.", "First, existing neural generation models fall short of coherence, thus requiring efficient content planning.", "Second, diverse types of information are needed to guide the generator to cover both subjective and objective content.", "To this end, we propose DYPLOC, a generation framework that conducts dynamic planning of content while generating the output based on a novel design of mixed language models.", "To enrich the generation with diverse content, we further propose to use large pre-trained models to predict relevant concepts and to generate claims.", "We experiment with two challenging tasks on newly collected datasets: (1) argument generation with Reddit ChangeMyView, and (2) writing articles using New York Times' Opinion section.", "Automatic evaluation shows that our model significantly outperforms competitive comparisons.", "Human judges further confirm that our generations are more coherent with richer content.", "Opinion articles serve as an important media to convey the authors' values, beliefs, and stances on important societal issues.", "Automatically generating long-form opinion articles has the potential of facilitating various tasks, such as essay writing and speech drafting, and it is the focus of this work.", "Though opinion generation has been investigated for constructing arguments (Hua and Wang, 2018), writing reviews (Ni and McAuley, 2018), and producing emotional dialogue responses (Song et al., 2019), those outputs are relatively short.", "While impressive progress in generation has been achieved by using large pre-trained Transformers (Radford et al., 2019; Lewis et al., 2020a), directly adopting United_States, Intelligence knowledge, attack America was never prepared and had a bad intelligence system.", "First, large models still fall short of producing coherent text due to the lack of efficient content control and planning (Ko and Li, 2020; Wu et al., 2020; Tan et al., 2021).", "A common solution is to use concatenated phrases or semantic representations to guide the generation process (Yao et al., 2019; Harkous et al., 2020; Ribeiro et al., 2020; Goldfarb-Tarrant et al., 2020), where content planning, including both content selection and ordering, is expected to be learned by attention mechanisms.", "However, attentions have only achieved limited improvements.", "Recent work also explores training a separate planning module to produce sorted content, which is then fed into a generator (Fan et al., 2019; Hua and Wang, 2020; Goldfarb-Tarrant et al., 2020).", "Nonetheless, this strategy results in a disconnection between planning and realization, and the output is not guaranteed to respect the planning results (Castro Ferreira et al., 2019; Prabhumoye et al., 2020).", "The second challenge for opinion generation resides in the diversity of information that is needed to produce an output with consistent stances and supported by pertinent facts.", "Though large models memorize significant amounts of knowledge, they cannot retrieve and operate with them precisely (Lewis et al., 2020b).", "Due to the argumentative nature of opinion text, simply including knowledge bases (Guan et al., 2020; Zhou et al., 2020) is insufficient to uphold the desired quality, as it requires the combination of subjective claims and objective evidence as supports.", "To this end, we propose a novel generation framework, DYPLOC (dynamic planning of content), to conduct content 
selection and ordering as text is produced.", "Concretely, given a set of unordered content items, as displayed in Figure 1, we design mixed language models, with each implemented as a sequence-to-sequence model to encode one item and the input statement.", "At each decoding step, our system selects which items to reflect, and predicts a word based on probabilities marginalized over all language models.", "Crucially, our end-to-end trained framework (1) enables the generator to access multiple content items at all times and select content based on what has been generated so far, (2) can be directly built on large pre-trained Transformers, e.g., BART (Lewis et al., 2020a), with planning and generation modules jointly trained, and (3) outputs learned content selection scores to provide an interface for system decision interpretation.", "Furthermore, to ensure that our framework can be applied to a broad range of generation tasks, we design content items to cover three critical elements: entities and concepts that are central to many generation applications, and claims that are building blocks for opinion text.", "We show an example for counter-argument generation in Figure 1. Importantly, we employ BART to predict additional relevant concepts, derived from Concept- [Footnote 1: Data and code are available at: xinyuhua.github.]", "Net (Speer et al., 2017), and generate claims, as central propositions, to enrich the generated text with both objective and subjective content.", "For experiments, we collect two datasets: (1) posts from Reddit ChangeMyView for argument generation, and (2) articles from the New York Times Opinion section (Sandhaus, 2008) for opinion article writing.", "Our proposed framework outperforms competitive comparisons, such as fine-tuning BART with the same content items, based on the automatic metrics BLEU, ROUGE, and METEOR.", "Human assessment further confirms that our system outputs have richer content and are more coherent in both tasks.", "Our contributions are summarized below: We present a dynamic content planning generation framework, which is directly built on top of BART.", "Our design of mixed language models overcomes the lack of control in existing models that use implicit planning with attentions or hard copying.", "We propose content plan augmentation by automatically generating relevant concepts and claims.", "We construct two opinion text generation datasets with content plans that capture prominent entities and concepts.", "Neural Generation with Planning.", "Text planning is seen as a crucial step to guide the generation of high-quality, well-organized natural language text (McKeown, 1992; Reiter and Dale, 2000).", "Incorporating planning modules into neural text generators has attracted significant research interest (Shen et al., 2019; Moryossef et al., 2019; Puduppully et al., 2019), and proves to be especially beneficial for long-form output (Fan et al., 2019; Hua and Wang, 2019).", "More recently, large pre-trained Transformers have established new state-of-the-art results for a wide range of text generation tasks (Lewis et al., 2020a; Roller et al., 2020; Kale and Rastogi, 2020).", "But it is non-trivial to integrate planning modules into them.", "Existing approaches resort to decoupling the planning and decoding stages (Hua and Wang, 2020; Kedzie and McKeown, 2020), which inevitably increases system complexity and potentially introduces cascading errors.", "We take inspiration from the retrieval-augmented generation framework (Lewis et al., 2020b), which is designed to incorporate relevant documents 
for [Figure 2 residue: 'Figure 2: Our proposed text generation framework, DYPLOC.' The figure shows Content Item Encoding (in parallel), Plan Scoring, and the Mixed Conditional Language Model composed of Content Item-Conditioned LMs, together with Claim Generation and Concept Expansion, for the example title 'CMV. I believe 9/11 would not have happened if Al Gore were elected President.' and example plan scores 0.80/0.10/0.05/0.05.]", "question answering.", "Our adaptation uses a trainable plan scoring module to reflect content selection and ordering, which is more suitable for long text generation and offers better interpretability.", "Concurrent work by Zhang et al. (2021) presents a mixture-of-experts decoder to tackle knowledge-grounded generation.", "However, their score distribution for language models is fixed across all decoding steps, whereas ours is updated as generation progresses and can better reflect the dynamic nature of content planning.", "Controllable Text Generation.", "Another related line of research investigates the controllability of generation models (Wiseman et al., 2017), including conditioning over keywords (Keskar et al., 2019; Hua and Wang, 2020; Xu et al., 2020), syntactic structures (Casas et al., 2020; Goyal and Durrett, 2020), or semantic representations (Wen et al., 2015; Elder et al., 2018).", "Our work differs from all previous methods as we combine different types of content, covering both objective and subjective information, and attain fine-grained sentence-level control using a novel design of mixed conditional language models.", "Opinion Text Generation.", "Our model tackles opinion articles, which differs from traditional text generation systems that mostly concern fact-based generation (Gardent et al., 2017; Novikova et al., 2017; Puduppully et al., 2019).", "An extensive body of work has studied summarizing (Wang and Ling, 2016; Suhara et al., 2020; Brazinskas et al., 2020) or generating (Ni and McAuley, 2018; Li et al., 2019) reviews and building dialogue systems enhanced with emotions (Li et al., 2016; Song et al., 2019).", "More recently, progress has been made in generating argumentative text (El Baff et al., 2019; Hidey and McKeown, 2019), which primarily focuses on constructing single-sentence claims on a limited number of topics.", "In comparison, our model can handle substantially longer output with improved quality.", "Task Formulation.", "Our opinion text generation framework takes as input a set of content items.", "Each content item consists of a title t, a set of entities E_i, such as { United States, 9/11 attacks }, and a set of core concepts C_i, such as { attack, knowledge }, that are often abstract notions.", "Our model first expands C_i by predicting additional relevant concepts C_i^+ and optionally generates a pertinent claim m_i, and then outputs the final text with multiple sentences as y = { y_t }, to faithfully reflect the content items with a coherent structure.", "An overview of our system is illustrated in Figure 2. 
Below we first describe the content item augmentation methods (§3.1), followed by our generator with mixed language models that condition on the expanded content items (§3.2).", "Concept Expansion.", "With a limited number of entities and concepts as input, generation systems are often incapable of producing long text with rich content, resulting in hallucination (Wiseman et al., 2017; Tian et al., 2019).", "Therefore, from the often-abstract core concepts, we aim to predict more specific concepts that are also relevant to the given title.", "For instance, as displayed in Figure 1, for core concepts { make, happen } and [Footnote 2: Note that i distinguishes the items.", "Their order is random.]", "entities { Bill Clinton, 9/11 attacks }, we grow the input with the more concrete concepts { mistake, administration }.", "We thus consider a concept expansion module g(.), which predicts additional relevant concepts, denoted as C_i^+, by conditioning on the original content item: C_i^+ = g(t, E_i, C_i) (1). While g(.) can be any conditional predictor, our experiments show that a fine-tuned BART model performs best on our tasks, where it generates C_i^+ word by word by consuming the content item.", "Training data construction is described in §4.2.", "Claim Generation.", "As discussed in §1, opinion text generation should be controlled with consistent propositions, which cannot be effectively expressed by disconnected concepts.", "Therefore, we argue that natural language is more suitable for delivering central claims, since it better encodes stylistic language, e.g., persuasion strategies.", "Concretely, we fine-tune another BART model by taking in the title t and the entities E_i, which then produces a claim with nucleus sampling for decoding (Holtzman et al., 2020).", "In this work, we assume the subset of content items that can be used to generate claims is known.", "Possible future work includes predicting such subsets and filtering claims with quality measurement.", "After obtaining the augmented content items, we leverage the BART model to encode each of them as a sequence, as illustrated in Figure 2. 
A segmenter <s> is added to indicate the change of elements in a content item.", "Our encoders run over all items { x_i } in parallel, from which we extract content item representations { h_i }, based on the last layer's hidden state of the first token.", "The standard sequence-to-sequence (seq2seq) framework models output probabilities by taking a single sequence as input.", "It is challenging to extend seq2seq to consider multiple sequences simultaneously and conduct content planning concurrently.", "Therefore, we introduce a plan scoring network. [Footnote 3: We also exploited a model that uses the structure of knowledge bases, e.g., ConceptNet, for learning to expand concepts, but it yields lower precision and recall than fine-tuning BART does.]", "The plan scorer d(x_i | y_{<t}) learns to dynamically select and order content based on what has been produced previously while generating the outputs.", "As outlined in Figure 2, our generator is informed of all content items during generation.", "At each decoding step t, the probabilities of output words are estimated as a weighted sum over all content-item-conditioned language models: p(y_t | y_{<t}) = sum_i d(x_i | y_{<t}) p(y_t | y_{<t}, x_i) (2), with d(x_i | y_{<t}) = softmax_i(e_{it}) (3), where p(y_t | y_{<t}, x_i) corresponds to the i-th language model with x_i as the input.", "Crucially, d(x_i | y_{<t}) determines the importance of x_i when generating token y_t, and thus achieves the effect of content planning.", "We design a two-layer feed-forward network to estimate e_{it}: e_{it} = W_o tanh(W_d [h_i; s_t]) (4), where h_i denotes the representation of content item x_i, s_t is the decoder state, and W_o and W_d are learnable parameters.", "Although mixed language models have been used by Lewis et al. 
(2020b) to include retrieved documents for question answering, their relevance scores are given by external retrieval models, whereas our plan scorer d(x_i | y_{<t}) is learned together with the generator.", "Training and Decoding.", "Our model is end-to-end trained with both the standard cross-entropy loss L_gen over the tokens in the target generations and a separate loss L_plan for learning d(x_i | y_{<t}): L(θ) = L_gen(θ) + L_plan(θ) (5). To create labels for L_plan, we leverage the correspondence between content items and target tokens, i.e., d(x_i | y_{<t}) is optimized to approach 1 if y_t is in the sentence that derives x_i, and 0 otherwise.", "Details about training data construction are in §4.2.", "At each decoding step, the individual language models, p(y_t | y_{<t}, x_i), and the distribution scores, d(x_i | y_{<t}), are first calculated in parallel.", "We then decode each token greedily based on the mixed language models in an autoregressive way.", "[Footnote 4: We also experimented with a training objective consisting of the generation loss only, but the performance degraded significantly.", "Future directions include removing the training signals for planning.]", "We experiment with the tasks of argument generation and opinion article writing (§4.1).", "Both tasks require generating multi-sentence output, and contain a substantial amount of opinions and factual content.", "We describe the construction of initial content items and the training data for generating expanded concepts and claims in §4.2.", "We present models for comparison in §4.3.", "Finally, we provide implementation details in §4.4.", "Argument Generation.", "We collect arguments from the Reddit ChangeMyView (CMV) community, an online forum that features argumentative discussions.", "Each thread begins with an original post (OP) stating an opinion towards a controversial topic, e.g., 'The U.S. is too big for one government.'", "High-quality replies that counter-argue with the OP and are labeled with community endorsement were collected in our prior work (Hua and Wang, 2020), covering content posted from 2013 to 2018.", "In this work, we extend the data collection to 2019.", "Our goal is to generate the entire reply (i.e., the target) given the OP title.", "Statistics about the CMV dataset are listed in Table 1. 
We reserve the most recent 1 , 000 samples for test and another 1 , 000 for validation.", "Opinion Article Writing.", "Our second task is to generate opinion articles, as collected from the New York Times (NYT) corpus (Sandhaus, 2008).", "We retain articles whose taxonomy labels include Top/Opinion .", "To ensure that articles can be processed by our computing resource, we only keep the ones with at most 20 sentences, representing 60% of all opinion articles.", "As shown in Table 1, NYT outputs tend to be significantly longer and contain less claims than CMV.", "Similarly, we keep 1 , 000 examples each for test and validation sets.", "From target references, we describe how to automatically construct the input content items consisting of entities and core concepts, and how to collect training data to fine-tune BART to predict more specific concepts and additional claims.", "Prior work has demonstrated the benefits of incorporating knowledge bases for text generation (Clark et al., 2018; Puduppully et al., 2019; Guan et al., 2020).", "We 5 https://www.reddit.com/r/ changemyview/ CMV NYT # Samples 77 , 245 113 , 616 Avg.", "thus consider two sources of knowledge: (1) entities from Wikipedia, which are useful for modeling events and opinion targets, and (2) concept words from ConceptNet (Speer et al., 2017), that cover more related details.", "Note that our setup is generally applicable to other text generation tasks, as these input items can be obtained through standard NLP pipelines, as described below.", "Entity Linking.", "We first segment a reference into sentences.", "The ones with fewer than 5 tokens are discarded for content item construction.", "For the rest, we extract entity mentions using Stanford CoreNLP (Manning et al., 2014), and further include nominal noun phrases.", "For entity linking, we adopt CrossWiki (Spitkovsky and Chang, 2012), which can process our large-scale data within a reasonable amount of time.", "CrossWiki maps a mention to a list of frequently linked Wikipedia entries.", "We further manually verify and correct the linking results for the top 500 most frequent mentions.", "Concept Extraction.", "To identify concepts in a reference, we match the lemmatized unigrams and their part-of-speech (POS) tags against all ConceptNet entries.", "To create a reasonably challenging task, we only keep a subset of the matches for inclusion in the core concept set (i.e., C i ), with the rest used as C + i , to be generated by our concept expansion model.", "Furthermore, we conjecture that an opinion article author tends to start with high-level topics that cover more abstract topical words.", "We thus leverage a lexicon (Brysbaert et al., 2014) with concreteness scores, ranging from 0 (abstract) to 5 (concrete), for over 40 k English words.", "We keep concepts that are verbs or have a concreteness score lower than 3 .", "0 .", "Word coverage of references by using core concepts and additionally with augmented concepts are 13 .", "2% and 16 .", "9% on CMV respectively, and similarly on NYT (Table 1).", "Finally, we train a concept generator with BART to produce C + i , conditional on C i , the title, and the entities.", "Claim Detection and Generation.", "Claims are indispensable for opinion articles.", "As described in 3.1, we aim to enrich content items with claims targeting the given entities within the title's context.", "To this end, we first train a claim detector by fine-tuning a BERT base (Devlin et al., 2019) sequence classifier with a dataset consisting of sentences 
of claims and facts .", "Concretely, we collect 54 , 802 claim sentences from Kialo 6 , a repository for debate arguments.", "We then sample 50 , 000 sentences from Wikipedia, which are treated as facts.", "This classifier is applied on a reference, and sentences that are labeled as claims become the target for our claim generator.", "We then learn a claim generator using BART, which takes in the title and the entities, and outputs the claim.", "We augment our training data with replies collected from 30 active subreddits related to political discussions, with details in Appendix A. In total, 80 , 566 sentences, which contain at least one entity and are labeled by our classifier as claim s, are kept to train the generator.", "We compare with three baselines: (1) RETRIEVAL first calculates the TF-IDF weighted bag-of-words vectors for each content item, which is then used to query the training set sentences.", "The one with the highest cosine similarity is picked for each query, which are then ordered by a trained Pointer-Network (Vinyals et al., 2015) as described in Gong et al. (2016).", "(2) SENTPLANNER (Hua and Wang, 2019) is an LSTM-based seq2seq model with a separate sentence planning decoder, where the planner selects keyphrases by using attentions and the generator reflects the selections.", "We treat our entities and concepts as keyphrases to feed to this model.", "(3) SEQ 2 SEQ is a fine-tuned BART model, whose input is the original content items without augmentation , thus does not have access to the predicted concepts and claims.", "Additionally, we consider a strong comparison SEQ 2 SEQFULL , by fine-tuning BART with the same augmented content items as inputs as in our model.", "The difference is that the content items are 6 https://www.kialo.com/ concatenated before being used as input.", "We implement all models using the Huggingface Transformers library (Wolf et al., 2020) with Py-Torch (Paszke et al., 2019).", "We use the base model for BART, which has 768 dimensional states and 6 layers for both encoder and decoder ( 140 M parameters in total).", "Our newly added plan scoring network only contains 1 .", "2 M parameters, less than 1% of the pre-trained model.", "Our generation model is optimized using Adam (Kingma and Ba, 2014), with a batch size of 3 .", "To improve efficiency, we adopt the mixed-precision (FP16) to train each model, using one NVIDIA Titan RTX GPU card with 24GB memory.", "The number of content items is limited to 10 per sample, and the numbers of entities and concepts per content item are capped at 20 , respectively.", "We also truncate the target output to at most 200 tokens during training.", "Early stopping is applied over validation loss.", "Our model converges after being trained for 38 hours ( 19 epochs) on CMV, and 45 hours ( 15 epochs) on NYT.", "The best validation perplexity reaches about 6 .", "1 after model convergence on both datasets.", "Here we report results on test sets with standard automatic metrics: BLEU (Papineni et al., 2002) measures the n-gram precision (here we consider up to bigrams); ROUGE (Lin, 2004), calculated based on n-gram recall; and METEOR (Denkowski and Lavie, 2014), which also accounts for synonyms.", "In Table 2, we first present the results when gold-standard concept expansion is used.", "Our proposed DYPLOC model achieves significantly higher performance across all metrics on both datasets.", "In particular, the substantial lead over SEQ 2 SEQFULL , which has access to the same content items as ours, indicates that 
dynamic content planning with mixed language models produces superior generations .", "Among comparison models, the gap between SEQ 2 SEQFULL and SEQ 2 SEQ shows the effectiveness of content item augmentation.", "We also observe a significant drop for baselines without using large models, highlighting the importance of pre-training.", "Ablation Study.", "To verify the effect of each element in content items, we further train ablated models by removing concepts, claims, or entities.", "The Argument Generation (CMV) Opinion Article Generation (NYT) BLEU-2 ROUGE-2 METEOR Len.", "Effect of Hard Selection of Content Items.", "To test the necessity of using weighted-sum marginal-ization (Eq. 2), we experiment with two comparisons with hard selections, i.e., either randomly choosing a content item, or using the one with the highest predicted plan score (greedy selection).", "For both cases, we set the selected content item's plan score as 1 .", "0 , with the rest of the candidates having a score of 0 .", "0 , to ensure the probabilities summed up to 1 .", "0 .", "As can be seen from the bottom two rows of Table 2, not surprisingly, random selection performs much worse.", "We observe that its generations lack coherence and fluency, implying the effectiveness of our learnable content planner.", "On the other hand, using greedily selected content items obtains comparable results with DYPLOC, where a weighted sum of content items is considered.", "Indeed, we find that DYPLOC's plan scores are often sharp where one content item has much Data System Gram.", "Results with Generated Concepts.", "Table 3 lists generation results with our system generated concepts as expansion.", "While all systems yield worse results compared to using gold-standard concepts, our DYPLOC still outperforms other models by substantial margins, showing its robustness when input concepts are noisy .", "Yet it also suggests the importance of having more accurate and comprehensive concept expansion, which should be explored in the future work.", "We hire three proficient English speakers to evaluate four key aspects of the generated outputs: (1) grammaticality ; (2) coherence , measuring if the text is logical and cohesive; (3) relevance , gauging topic relatedness to the input title; and (4) content richness , assessing the specificity and whether there is enough details in the outputs.", "Each aspect is rated on a scale of 1 (worst) to 5 (best).", "In addi-Content Items", "We randomly select 50 samples from the test sets for both tasks, and present outputs by SEQ 2 SEQ , SEQ 2 SEQFULL , and DYPLOC in random orders.", "Table 4 shows that DYPLOC receives higher scores across all aspects and tasks.", "In particular, the considerable differences in coherence and content richness indicate that our framework yields better content organization as well as retains more useful information .", "Overall, our system outputs are ranked best for 44 .", "7% and 45 .", "9% of the time in two tasks, significantly more than the comparisons.", "Analysis on Argumentative Quality.", "In the ablation study, we find that our full model's performance is similar to the version without having claims as input.", "We suspect this is because claims are often paraphrased or even not directly used when delivering an argument, which cannot be captured by the automatic metrics.", "To better understand how claims are used for generation, we randomly select 50 examples by DYPLOC and its variant without claims, and ask the same human judges to decide whether there 
is a clear central argument conveyed by each generated argument on CMV.", "We observe that 66 .", "7% of the outputs by our full model are recognized as successfully delivering arguments with consistent stances , whereas only 61 .", "3% are true for the model variant without claims.", "This gap confirms that claim drafts can indeed promote the argumentative quality as perceived by human readers.", "Evaluation results on generation quality have shown the effectiveness of our mixed language models.", "In this section, we aim to further understand the behavior of the plan scoring network, d ( x | y <t ) , such as how it affects the usage of content items for generation.", "Specifically, we adopt the following procedure to construct alignment between each sentence in the output and content items: for each token y t , we establish a mapping y t (cid:55) x i if x i is the most important item for producing y t , i.e., x i = argmax x d ( x | y <t ) , and d ( x i | y <t ) > 0 .", "5 .", "If all tokens in an entire sentence are mapped to the same x i , we consider this sentence is aligned to that content item.", "Based on this rule, we show sample output and corresponding alignments in Table 5.", "For the rest of this section, we conduct analyses based on this alignment result.", "We first examine whether the model learns to utilize enough content items, i.e., high coverage.", "Then we provide insights on whether the generation faithfully reflects the argumentative claims using entailment relation labeling by human inspection.", "How many content items are used by the output?", "Human judges have rated our model output to contain more relevant information (Table 4).", "We believe this can be attributed to the enhanced capacity to access and reflect the input data with dynamic content planning, as a result of mixed language models.", "To verify this hypothesis, we calculate the percentage of content items that are aligned to at least one output sentence.", "Figure 3 shows that, using our system, the coverage reaches 87 .", "25% on CMV and 83 .", "89% for NYT.", "If we replace the generated concepts with gold-standard concepts (as extracted from references) instead, the coverage exceeds 90% on both tasks.", "These observations indicate that our model can indeed adequately utilize the input data, with more accurate concepts further encouraging higher coverage .", "How are claim content items realized?", "Claims are the central elements for opinion text construction.", "As mentioned in 4.2, a subset of the content items are supplied with claim sentences.", "In order to examine whether they are realized as claim sentences in the outputs, we leverage the fine-tuned BERT classifier ( 4.2) to label all output sentences.", "90 .", "96% of the sentences that are aligned to a claim element in the input are also labeled as claim on CMV.", "The percentage is only 69 .", "41% for NYT, though, likely because the NYT opinion articles still contain more objective information.", "claim input and its aligned generated sentence.", "We randomly sample 50 outputs from test sets, and ask four human judges to read each.", "For each sample, we highlight one output sentence that is aligned to a content item with claim element.", "The judges determine a three-way ( ENTAIL , NEUTRAL , CONTRADICTORY ) entailment relation between the input claim (premise) and the output (hypothesis).", "Results show that ENTAIL accounts for 49 .", "06% of all instances, while only 3 .", "77% are deemed CONTRADICTORY .", "Upon inspection, the 
contradictory pairs are usually disagreements with regard to implicit sentiments, e.g., Journalist is the most responsible for the problem vs. Media coverage is a good thing. .", "This suggests that while our conditional language model achieves reasonable semantic control in most cases, it is still not guaranteed to capture more nuanced semantics encoded in opinions and arguments.", "Future work includes designing representations that can better model stances in opinions as well as argumentative structures.", "We present a novel text generation framework that enables dynamic content planning based on mixed conditional language models.", "We further employ large models to augment system inputs with diverse content that covers both objective and subjective information.", "The experiments on two distinct opinion text generation tasks show that our proposed model compares favorably against strong comparisons based on fine-tuned BART models with the same input.", "Human evaluation further confirms that our model generations have richer information and better content organization.", "This research is supported in part by National Science Foundation through Grant IIS-1813341.", "We thank three anonymous reviewers for their valuable suggestions on various aspects of this work.", "Large models that are pre-trained on heterogeneous web data are shown to encode biases and can be potentially harmful for marginalized populations.", "Along with the improved controllability, we also recognize that our system might be misused to create fabricated or offensive content.", "We therefore advocate cautious and responsible practices in real-world deployment." ]
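The DYPLOC record above specifies the mixed conditional language model in Eqs. 2-4: a plan scorer d(x_i | y_{<t}) weights per-item next-token distributions at every decoding step. The PyTorch sketch below is an illustrative reconstruction under assumed shapes, not the released DYPLOC code; `PlanScorer` and `mixed_lm_step` are hypothetical names, and `per_item_logits` is assumed to come from BART decoders run in parallel, one per content item.

```python
import torch
import torch.nn as nn

class PlanScorer(nn.Module):
    """Eq. 3-4: e_it = W_o tanh(W_d [h_i; s_t]); d(x_i | y_<t) = softmax_i(e_it)."""
    def __init__(self, dim):
        super().__init__()
        self.W_d = nn.Linear(2 * dim, dim)
        self.W_o = nn.Linear(dim, 1)

    def forward(self, h, s_t):
        # h: (n_items, dim) content item representations; s_t: (dim,) decoder state.
        pair = torch.cat([h, s_t.expand_as(h)], dim=-1)
        e = self.W_o(torch.tanh(self.W_d(pair))).squeeze(-1)  # (n_items,)
        return torch.softmax(e, dim=-1)                       # d(x_i | y_<t)

def mixed_lm_step(per_item_logits, plan_scores):
    """Eq. 2: p(y_t | y_<t) = sum_i d(x_i | y_<t) * p(y_t | y_<t, x_i)."""
    per_item_probs = torch.softmax(per_item_logits, dim=-1)  # (n_items, vocab)
    return plan_scores @ per_item_probs                       # (vocab,)

# Toy shapes: 4 content items, 8-word vocab, 16-dim states.
scorer = PlanScorer(16)
d = scorer(torch.randn(4, 16), torch.randn(16))
p = mixed_lm_step(torch.randn(4, 8), d)  # p sums to 1 over the vocab
```

Because the plan scores are recomputed from the decoder state at every step, content selection can shift as generation progresses, which is what distinguishes this design from a fixed mixture over language models.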
[ "method", "abstain", "abstain", "objective", "objective", "method", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "method", "result", "abstain", "method", "objective", "method", "method", "method", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "objective", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "method", "method", "method", "method", "objective", "method", "abstain", "abstain", "method", "abstain", "method", "other", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "result", "result", "abstain", "method", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "objective", "method", "other", "other", "abstain", "method", "abstain" ]
[ "Current commonsense reasoning research focuses on developing models that use commonsense knowledge to answer multiple-choice questions.", "However, systems designed to answer multiple-choice questions may not be useful in applications that do not provide a small list of candidate answers to choose from.", "As a step towards making commonsense reasoning research more realistic and useful, we propose to study open-ended commonsense reasoning (OpenCSR) the task of answering a commonsense question without any pre-defined choices using as a resource only a knowledge corpus of commonsense facts written in natural language.", "OpenCSR is challenging due to a large decision space, and because many questions require implicit multi-hop reasoning.", "As an approach to OpenCSR, we propose DRFACT , an efficient D ifferentiable model for multi-hop Reasoning over knowledge Facts.", "To evaluate OpenCSR methods, we adapt three popular multiple-choice datasets, and collect multiple new answers to each test question via crowd-sourcing.", "Experiments show that DRFACT outperforms strong baseline methods by a large margin.", "1 1 Introduction The conventional task setting for most current commonsense reasoning research is multiple-choice question answering (QA) i.e., given a question and a small set of pre-defined answer choices, models are required to determine which of the candidate choices best answers the question.", "Existing commonsense reasoning models usually work by scoring a question-candidate pair (Lin et al., 2019; Lv et al., 2020; Feng et al., 2020).", "Hence, even an accurate multiple-choice The work was mainly done during Bill Yuchen Lin's internship at Google Research.", "1 Our code and data are available at the project website https://open-csr.github.io/ .", "QA model cannot be directly applied in practical applications where answer candidates are not provided (e.g., answering a question asked on a search engine, or during conversation with a chat-bot).", "Because we seek to advance commonsense reasoning towards practical applications, we propose to study open-ended commonsense reasoning (OpenCSR), where answers are generated efficiently, rather than selected from a small list of candidates (see Figure 1).", "As a step toward this, here we explore a setting where the model produces a ranked list of answers from a large question-independent set of candidate concepts that are extracted offline from a corpus of common-sense facts written in natural language.", "The OpenCSR task is inherently challenging.", "One problem is that for many questions, find-ing an answer requires reasoning over two or more natural-language facts from a corpus.", "In the multiple-choice QA setting, as the set of candidates is small, we can pair a question with an answer, and use the combination to retrieve relevant facts and then reason with them.", "In the open-ended setting, this is impractical: instead one needs to retrieve facts from the corpus using the question alone.", "In this respect, OpenCSR is similar to multi-hop factoid QA about named entities, e.g. 
as done for HotpotQA (Yang et al., 2018).", "However, the underlying reasoning chains of most multi-hop factoid QA datasets are relatively clear and context-independent , and are thus easier to infer.", "Commonsense questions, in contrast, exhibit more variable types of reasoning, and the relationship between a question and the reasoning to answer the question is often unclear.", "(For example, a factoid question like who starred in a movie directed by Bradley Cooper? clearly suggests following a directed-by relationship and then a starred-in relationship, while the underlying reasoning chains of a question like what can help alleviate global warming? is relatively implicit from the question.)", "Furthermore, annotations are not available to identify which facts are needed in the latent reasoning chains that lead to an answer the only supervision is a set of questions and their answers.", "We discuss the formulation of OpenCSR and its challenges further in Section", "3. As shown in Fig. 1, another challenge is that many commonsense questions require reasoning about facts that link several concepts together.", "E.g., the fact trees remove carbon dioxide from the atmosphere through photosynthesis cannot be easily decomposed into pairwise relationships between trees, carbon dioxide, the atmosphere, and photosynthesis, which makes it more difficult to store in a knowledge graph (KG).", "However, such facts have been collected as sentences in common-sense corpora, e.g., GenericsKB (Bhakthavatsalam et al., 2020).", "This motivates the question: how can we conduct multi-hop reasoning over such a knowledge corpus, similar to the way multi-hop reasoning methods traverse a KG?", "Moreover, can we achieve this in a differentiable way, to support end-to-end learning?", "To address this question, we extend work by Seo et al. (2019) and Dhingra et al. 
(2020), and propose an efficient, differentiable multi-hop reasoning method for OpenCSR, named DRFACT (for Differentiable Reasoning over Facts).", "Specifically, we formulate multi-hop reasoning over a corpus as an iterative process of differentiable fact-following operations over a hypergraph.", "We first encode all fact sentences within the corpus as dense vectors to form a neural fact index, such that a fast retrieval can be done via maximum inner product search (MIPS).", "This dense representation is supplemented by a sparse fact-to-fact matrix to store symbolic links between facts (i.e., a pair of facts are linked if they share common concepts).", "DRFACT thus merges both neural and symbolic aspects of the relationships between facts to model reasoning in an end-to-end differentiable framework (Section 4).", "To evaluate OpenCSR methods, we construct new OpenCSR datasets by adapting three existing multiple-choice QA datasets: QASC (Khot et al., 2020), OBQA (Mihaylov et al., 2018), and ARC (Clark et al., 2018).", "Note that unlike factoid questions that usually have a single correct answer, open-ended commonsense questions can have multiple correct answers.", "Thus, we collect a collection of new answers for each test question by crowd-sourcing human annotations.", "We compare with several strong baseline methods and show that our proposed DRFACT outperforms them by a large margin.", "Overall DRFACT gives an 4.6% absolute improvement in Hit@100 accuracy over DPR (Karpukhin et al., 2020), a state-of-the-art text retriever for QA, and 3.2% over DrKIT (Dhin-gra et al., 2020), a strong baseline for entity-centric multi-hop reasoning.", "With a relatively more expensive re-ranking module, the gap between DRFACT and others is even larger.", "(Sec. 5) 2 Related Work Commonsense Reasoning.", "Many recent commonsense-reasoning (CSR) methods focus on multiple-choice QA.", "For example, KagNet (Lin et al., 2019) and MHGRN (Feng et al., 2020) use an external commonsense knowledge graph as structural priors to individually score each choice.", "These methods, though powerful in determining the best choice for a multi-choice question, are less realistic for practical applications where answer candidates are typically not available.", "UnifiedQA (Khashabi et al., 2020) and other closed-book QA models (Roberts et al., 2020) generate answers to questions by fine-tuning a text-to-text transformer such as BART (Lewis et al., 2020a) or T5 (Raffel et al., 2020), but a disadvantage of closed-book QA models is that they do not provide intermediate explanations for their answers, i.e., the supporting facts, which makes them less trustworthy in downstream applications.", "Although closed-book models exist that are augmented with an additional retrieval module (Lewis et al., 2020b), these models mainly work for single-hop reasoning.", "QA over KGs or Text.", "A conventional source of commonsense knowledge is triple-based symbolic commonsense knowledge graphs (CSKGs) such as ConceptNet (Speer et al., 2017).", "However, the binary relations in CSKGs greatly limit the types of the knowledge that can be encoded.", "Here, instead of a KB, we use a corpus of generic sentences about commonsense facts, in particular GenericsKB (Bhakthavatsalam et al., 2020).", "The advantage of this approach is that text can represent more complex commonsense knowledge, including facts that relate three or more concepts.", "Formalized in this way, OpenCSR is a question answering task requiring (possibly) iterative retrieval, similar to 
other open-domain QA tasks (Chen et al., 2017) such as HotpotQA (Yang et al., 2018) and Natural Questions (Kwiatkowski et al., 2019).", "As noted above, however, the surface of commonsense questions in OpenCSR have fewer hints about kinds of multi-hop reasoning required to answer them than the factoid questions in open-domain QA, resulting in a particularly challenging reasoning problem (see Sec. 3).", "Multi-Hop Reasoning.", "Many recent models for open-domain QA tackle multi-hop reasoning through iterative retrieval, e.g., GRAFT-Net (Sun et al., 2018), MUPPET (Feldman and El-Yaniv, 2019), PullNet (Sun et al., 2019), and GoldEn (Qi et al., 2019).", "These models, however, are not end-to-end differentiable and thus tend to have slower inference speed, which is a limitation shared by many other works using reading comprehension for multi-step QA (Das et al., 2019; Lee et al., 2019).", "As another approach, Neural Query Language (Cohen et al., 2020) designs differentiable multi-hop entity-following templates for reasoning over a compactly stored symbolic KG, but this KG is limited to binary relations between entities from an explicitly enumerated set.", "DrKIT (Dhingra et al., 2020) is the most similar work to our DRFACT , as it also supports multi-hop reasoning over a corpus.", "Unlike DRFACT , DrKIT is designed for entity-centric reasoning.", "DrKIT begins with an entity-linked corpus, and computes both sparse and dense indices of entity mentions (i.e., linked named-entity spans).", "DrKIT's fundamental reasoning operation is to hop from one weighted set of X entities to another, by 1) find-ing mentions of new entities x (cid:48) that are related to some entity in X , guided by the indices, and then 2) aggregating these mentions to produce a new weighted set of entities.", "DrKIT's operations are differentiable, and by learning to construct appropriate queries to the indices, it can be trained to answer multi-hop entity-related questions.", "Prior to our work DrKIT been applied only on factoid questions about named entities.", "In CSR, the concepts that drive reasoning are generally less precise than entities, harder to disambiguate in context, and are also much more densely connected, so it is unclear to what extent DrKIT would be effective.", "We present here novel results using DrKIT on OpenCSR tasks, and show experimentally that our new approach, DRFACT , improves over DrKIT.", "DRFACT mainly differs from DrKIT in that its reasoning process learns to hop from one fact to another, rather than from one entity to another, thus effectively using the full information from a fact for multi-hop reasoning.", "Task Formulation.", "We denote a corpus of knowledge facts as F , and use V to denote a vocabulary of concepts ; both are sets consisting of unique elements.", "A fact f i F is a sentence that describes generic commonsense knowledge, such as trees remove carbon dioxide from the atmosphere through photosynthesis .", "A concept c j V is a noun or base noun phrase mentioned frequently in these facts (e.g., tree' and carbon dioxide').", "Concepts are considered identical if their surface forms are the same (after lemma-tization).", "Given only a question q (e.g., what can help alleviate global warming? ), an open-ended commonsense reasoner is supposed to answer it by returning a weighted set of concepts, such as { ( a 1 = renewable energy' , w 1 ), ( a 2 = tree' , w 2 ), . . . 
} , where w i R is the weight of the predicted concept a i V .", "To learn interpretable, trustworthy reasoning models, it is expected that models can output intermediate results that justify the reasoning process i.e., the supporting facts from F .", "E.g., an explanation for tree' to be an answer to the ques= carbon dioxide is the major greenhouse gas contributing to global warming .", "Implicit Multi-Hop Structures.", "Commonsense questions (i.e., questions that need commonsense knowledge to reason) contrast with better-studied multi-hop factoid QA datasets, e.g., HotpotQA (Yang et al., 2018), which primarily focus on querying about evident relations between named entities .", "For example, an example multihop factoid question can be which team does the player named 2015 Diamond Head Classic's MVP play for?", "Its query structure is relatively clear and self-evident from the question itself: in this case the reasoning process can be decomposed into q 1 = the player named 2015 DHC's MVP and q 2 = which team does q 1 . answer play for.", "The reasoning required to answer commonsense questions is usually more implicit and relatively unclear.", "Consider the previous example in Fig. 1, q = what can help alleviate global warm-ing?' can be decomposed by q 1 = what contributes to global warming and q 2 = what removes q 1 . answer from the atmosphere but many other decompositions are also plausible.", "In addition, unlike HotpotQA, we assume that we have no ground-truth justifications for training, which makes OpenCSR even more challenging.", "In this section we present DRFACT , a model for multi-hop reasoning over facts.", "More implementation details are in Appendix B. 4.1 Overview In DRFACT , we propose to model reasoning as traversing a hypergraph , where each hyperedge corresponds to a fact in F , and connects the concepts in V that are mentioned in that fact.", "This is shown in Figure", "2. Notice that a fact, as a hyperedge, connects multiple concepts that are mentioned, while the textual form of the fact maintains the contextual information of the original natural language statement, and hence we do not assume a fixed set of relations.", "Given such a hypergraph, our open-ended reasoning model will traverse the hypergraph starting from the question (concepts) and finally arrive at a set of concept nodes by following multiple hyperedges (facts).", "A probabilistic view of this process over T hops is: P ( c | q ) = P ( c | q, FT ) (cid:81) Tt =1 P ( F t | q, F t 1 ) P ( F 0 | q ) Intuitively, we want to model the distribution of a concept c V being an answer to a question q as P ( c | q ) .", "This answering process can be seen as a process of multiple iterations of fact-following, or moving from one fact to another based on shared concepts, and finally moving from facts to concepts.", "We use F t to represent a weighted set of retrieved facts at the hop t , and F 0 for the initial facts below.", "Then, given the question and the current retrieved facts, we iteratively retrieve the facts for the next hop.", "Finally, we score a concept using retrieved facts.", "We encode the hypergraph (Fig. 
2) with a concept-to-fact sparse matrix E and a fact-to-fact sparse matrix S.", "The dense fact index D is pre-computed with a pre-trained bi-encoder.", "A weighted set of facts is represented as a sparse vector F.", "The workflow (left) of DRFACT starts by mapping a question to a set of initial facts that share common concepts with it.", "Then, it recursively performs Fact-Follow operations (right) to compute F_t and A_t.", "Finally, it uses learnable hop weights α_t to aggregate the answers.", "Dense Fact Index D. We pre-train a bi-encoder based on BERT (Devlin et al., 2019), which learns to maximize the score of facts that contain correct answers to a given question, following the steps of Karpukhin et al. (2020) (i.e., dense passage retrieval), so that we can use MIPS to do dense retrieval over the facts.", "After pre-training, we embed each fact in F with a dense vector (using the [CLS] token representation).", "Hence D is a |F| × d dense matrix.", "Sparse Fact-to-Fact Index S.", "We pre-compute the sparse links between facts with a set of connection rules, such as f_i -> f_j when f_i and f_j have at least one common concept and f_j introduces at least two new concepts that are not in f_i (see Appendix B (2) for more).", "Hence S is a binary sparse tensor with the dense shape |F| × |F|.", "Sparse Index of Concept-to-Fact Links E.", "As shown in Figure 2, a concept can appear in multiple facts, and a fact also usually mentions multiple concepts.", "We encode these co-occurrences between each fact and its mentioned concepts into a sparse matrix with the dense shape |V| × |F|, i.e., the concept-to-fact index.", "The most important part of our framework is how to model the fact-following step in our formulation, i.e., P(F_t | F_{t-1}, q).", "To model the translation from one fact to another in the context of a question q, we propose an efficient approach with a differentiable operation that uses both the neural embeddings of the facts and their symbolic connections in the hypergraph.", "The symbolic connections are kept in S, which in our model is efficiently implemented with the tf.RaggedTensor construct of TensorFlow (Dhingra et al., 2020).", "S stores a precomputed dependency between pairs of facts, S_ij.", "Intuitively, if we can traverse from f_i to f_j, these facts should mention some common concepts, and the facts' semantics should be related, so S_ij reflects this intuition.", "The fact embeddings computed by the pre-trained bi-encoder are kept in the dense index of fact vectors D, which contains rich semantic information about each fact and helps measure the plausibility of a fact in the context of a given question.", "The proposed fact-follow operation has two parallel sub-steps: 1) sparse retrieval and 2) dense retrieval.", "The sparse retrieval uses the fact-to-fact sparse matrix to obtain possible next-hop facts.", "We can compute F_t^s = F_{t-1} S efficiently thanks to the ragged representation of sparse matrices.", "For the neural dense retrieval, we use maximum inner product search (MIPS) (Johnson et al., 2019; Guo et al., 2020) over the dense fact embedding index D: z_{t-1} = F_{t-1} D, h_{t-1} = g(z_{t-1}, q_t), F_t^d = MIPS_K(h_{t-1}, D). We first aggregate the dense vectors of the facts in F_{t-1} into the dense vector z_{t-1}, which is fed into a neural layer together with the query embedding at the current step, q_t (encoded by BERT), to create a query vector h_{t-1}.", "Here g(.) is an MLP that maps the concatenation of the two input vectors to a dense output with the same dimensionality as the fact vectors; we name it the fact-translating function.", "Finally, we retrieve the 
next-hop top-K facts F_t^d with the MIPS_K operator.", "To get the best of both the symbolic and neural worlds, we use element-wise multiplication to combine the sparse and dense retrieval results: F_t = F_t^s ⊙ F_t^d.", "We summarize the fact-following operation with these differentiable steps: F_t = Fact-Follow(F_{t-1}, q) = (F_{t-1} S) ⊙ MIPS_K(g(F_{t-1} D, q_t), D) (1). After each hop, we multiply F_t with a precomputed fact-to-concept matrix E, thus generating A_t, a set of concept predictions.", "To aggregate the concept scores, we take the maximum score among the facts that mention a concept c.", "Finally, we take the weighted sum of the concept predictions at all hops as the final weighted concept set A = sum_{t=1}^{T} α_t A_t, where α_t is a learnable parameter.", "Please read Appendix B for more details.", "Equation 1 defines a random-walk process on the hypergraph associated with the corpus.", "We found that performance was improved by making this a lazy random walk; in particular, we augment F_t with the facts in F_{t-1} that have a weight higher than a threshold τ: F_t = Fact-Follow(F_{t-1}, q) + Filter(F_{t-1}, τ).", "We call this self-following, which means that F_t contains highly relevant facts for all distances t' < t; this improves the model when different questions require different numbers of hops.", "Initial Facts.", "Note that the set of initial facts F_0 is computed differently, as they are produced using the input question q instead of a previous-hop F_{t-1}.", "We first use our pre-trained bi-encoder and the associated index D via a MIPS query to find facts related to q, and then select from the retrieved set those facts that contain question concepts (i.e., concepts matched in the question text), using the concept-to-fact index E.", "Intermediate evidence, i.e., supporting facts, is significant for guiding multi-hop reasoning models during training.", "In a weakly supervised setting, however, we usually do not have ground-truth annotations, as they are expensive to obtain.", "We therefore construct distant supervision signals based on the training questions.", "Specifically, we concatenate the question and the best candidate answer to build a query to our pre-trained index D, and then we divide the results into four groups depending on whether they contain question/answer concepts: 1) question-answer facts, 2) question-only facts, 3) answer-only facts, and 4) none-facts.", "Then, to get a 2-hop evidence chain, we first check whether a question-only fact can be linked to an answer-only fact through the sparse fact-to-fact matrix S.", "Similarly, we can also get 3-hop distant evidence.", "In this manner, we can collect the set of supporting facts at each hop position, denoted as { F_1^*, F_2^*, . . . 
"The final learning objective is thus to optimize the sum of the cross-entropy loss l between the final weighted set of concepts A and the answer set A*, and the auxiliary loss from distant evidence, i.e., the mean of the hop-wise loss between the predicted facts F_t and the distant supporting facts at that hop F*_t, defined as follows: L = l(A, A*) + (1/T) Σ_{t=1}^T l(F_t, F*_t). 5 Experiments. 5.1 Experimental Setup. Fact corpus and concept vocabulary: We use the GenericsKB-Best corpus as the main knowledge source.", "In total, we have 1,025,413 unique facts as our F.", "We use the spaCy toolkit to preprocess all sentences in the corpus and then extract frequent noun chunks within them as our concepts.", "The vocabulary V has 80,524 concepts, and every concept is mentioned at least 3 times.", "To facilitate research on open-ended commonsense reasoning (OpenCSR), we reformatted three existing multiple-choice question answering datasets to allow evaluating OpenCSR methods.", "We choose three datasets, QASC, OBQA, and ARC, as their questions require commonsense knowledge about science and everyday objects and are presented in natural language.", "By applying a set of filters and rephrasing rules, we selected those open-ended commonsense questions that query concepts in our vocabulary V.", "As there can be multiple correct answers for a question in OpenCSR, we employed crowd-workers to collect more answers for each test question based on a carefully designed annotation protocol.", "In total, we collect 15,691 answers for 2,138 rephrased questions for evaluation, which results in 7.5 answers per question on average.", "Please find more details about crowd-sourcing and analysis in Appendix A. We show some statistics of the OpenCSR datasets and our new annotations in Table", "1. To understand the multi-hop nature and the difficulty of each dataset, we use a heuristic to estimate the percentage of single-hop questions, for which we can find a fact (from the top-1k facts retrieved by BM25) containing both a question concept and an answer concept.", "The ARC dataset has about 67% one-hop questions and thus is the easiest, while OBQA has only 50%.", "Recall that, given a question q, the final output of every method is a weighted set of concepts A = {(a_1, w_1), ...}.", "We denote the set of true answer concepts, as defined above, as A* = {a*_1, a*_2, ...}.",
} .", "We define Hit@K accuracy to be the fraction of questions for which we can find at least one correct answer concept a i A in the topK concepts of A (sorted in descending order of weight).", "As questions have multiple correct answers, recall is also an important aspect for evaluating OpenCSR, so we also use Rec@K to evaluate the average recall of the top-K proposed answers.", "We present baseline methods and an optional reranker component for boosting the performance on OpenCSR.", "Table 3 shows a summary of the comparisions of the three methods and our DrFact.", "retrieve relevant facts, and then use the concepts mentioned in the top-ranked facts as answer predictions.", "BM25 is one of the most popular unsupervised method for retrieval, while the Dense Passage Retrieval (DPR) model is a state-of-the-art trainable, neural retriever (Karpukhin et al., 2020).", "Following prior work with DPR, we used BM25-retrieved facts to create positive and (hard-)negative examples as supervision.", "For both methods, we score a concept by the max 3 of the relevance scores of retrieved facts that mention it.", "DrKIT.", "Following Dhingra et al. (2020), we use DrKIT for OpenCSR, treating concepts as entities.", "DrKIT is also an efficient multi-hop reasoning model that reasons over a pre-computed indexed corpus, which, as noted above (Sec. 2), differs from our work in that DrKIT traverses a graph of entities and entity mentions, while DRFACT traverses a hypergraph of facts.", "Multiple-choice style re-ranking (MCQA).", "A conventional approach to multiple-choice QA (MCQA) is to fine-tune a pre-trained language model such as BERT, by combining a question and a particular concept as a single input sequence in the form of [CLS] question [SEP] choice and using [CLS] vectors for learning to score choices.", "We follow this schema and train 4 such a multiple-choice QA model on top of BERT-Large, and use this to re-rank the topK concept predictions.", "Main results.", "For a comprehensive understanding, we report the Hit@K and Rec@K of all methods, at K=50 and K=100, in Table", "2. 
"The overall results are the average over the three datasets.", "Footnote 3: We also tried mean and sum, but max performs the best.", "Footnote 4: Specifically, we fine-tune BERT-Large to score true answers over 9 sampled distractors, and use it to rank the top-500 concepts produced by each retrieval method above.", "We can see that DrFact outperforms all baseline methods for all datasets and metrics.", "Compared with the state-of-the-art text retriever DPR, DrFact improves by about 4.1 absolute points in Hit@50 accuracy overall.", "With the expensive yet powerful MCQA re-ranker module, DrFact gives an even larger gap (about 8 points gain in Hit@50 accuracy).", "The performance gains on the QASC and OBQA datasets are larger than the one on ARC.", "This observation correlates with the statistics that the former two have more multi-hop questions, and thus DrFact has a greater advantage there.", "As shown in Figure 4, we can see that DrFact consistently outperforms the other retrieval methods at different K by a considerable margin.", "Interestingly, we find that with the MCQA re-ranker, DrKIT does not yield a large improvement over DPR, and it usually has a lower Hit@K accuracy than the other methods.", "We conjecture this is because the entity-centric reasoning schema produces too many possible concepts and is thus more likely to place irrelevant concepts at the top positions.", "The results on Rec@K in the bottom section of Table 2 show that even our DrFact+MCQA model only recalls about 50% of the correct answers in the top-100 results on average.", "This suggests that OpenCSR is still a very challenging problem. [Figure 4: Hit@K accuracy curves (overall) for K from 10 to 100, comparing BM25, DPR, DrKIT and DrFact, each with and without the MCQA re-ranker.]", "Run-time efficiency analysis.", "We use Table 4 to summarize the online inference speed of each OpenCSR method.", "At inference time, DPR will make one call to BERT-base for encoding a question and do one MIPS search.", "Similarly, DrKIT and DrFact with T hops will make one call to BERT-base for query encoding and do T MIPS searches.", "[Table 4: The major computations of each method and their online (batch-size=1) inference speed in sec/q. BM25: Sparse Retrieval, 0.14; DPR: BERT-base + MIPS, 0.08; DrKIT: BERT-base + T×(MIPS + sp_e2m), 0.47; DrFact: BERT-base + T×(MIPS + sp_f2f), 0.23; X+MCQA: X + K×BERT-Large, 14.12.] However, since the entity-to-mention", "matrix (sp_e2m) of DrKIT is much larger than the fact-to-fact matrix (sp_f2f) of DrFact, DrKIT is about twice as slow as DrFact.", "The MCQA re-ranker is much more computationally expensive, as it makes K calls to BERT-Large for each combination of question and choice.", "Note that in these experiments we use T=2 for DrKIT, T=3 for DrFact and K=500 for the MCQA re-rankers.", "Ablation study.", "Varying the maximum number of hops (T ∈ {1, 2, 3}), i.e., the number of calls to Fact-Follow, indicates that overall performance is best with T=3, as shown in Table 5.", "The performance with T=2 drops by 0.7 points on OBQA.", "We conjecture this is due to the nature of the datasets, in particular the percentage of hard questions.", "We also test the model (with T=3) without the auxiliary learning loss (Sec.
4.4) or the self-following trick.", "Both are seen to be important to DrFact.", "Self-following is especially helpful for QASC and OBQA, where there are more multi-hop questions.", "It also makes learning and inference faster than the alternative approach of ensembling multiple models with different maximum hops, as done in some prior works.", "Qualitative analysis.", "We show a concrete example in Fig. 5 to compare the behaviour of DPR and DrFact in reasoning.", "DPR uses purely dense retrieval without any regularization, yielding irrelevant facts.", "Footnote 5: We note that the MCQA re-ranker could be sped up by scoring more choices in parallel; all run-time tests were performed on an NVIDIA V100 (16GB), but MCQA with a batch size of 1 requires only 5GB, which suggests that more parallel inference on a V100 could obtain about 4.5 sec/q for MCQA. [Figure 5: Q: What will separate iron filings from sand? Retrieved facts include 'magnets attract magnetic metals through magnetism' (f_2) and 'iron filings show the magnetic fields'.] The fact f_2 matches the phrase", "'separating ... from sand', but does not help reason about the question.", "The fact f_3 appears here because of the semantic relatedness of 'steel' and 'iron', while 'filing' in this sense is not related to the question concepts.", "Our DrFact, however, can faithfully reason about the question via fact-following over the hypergraph, and use the neural fact embeddings to cumulatively reason about a concept, e.g., magnet.", "By backtracking over our hypergraph, we can use the retrieved facts as explanations for a particular prediction.", "We introduce and study a new task, open-ended commonsense reasoning (OpenCSR), which is both realistic and challenging.", "We construct three OpenCSR versions of widely used datasets targeting commonsense reasoning, with a novel crowdsourced collection of multiple answers, and evaluate a number of baseline methods for this task.", "We also present a novel method, DrFact.", "DrFact is a scalable multi-hop reasoning method that traverses a corpus (as a hypergraph) via a differentiable fact-following reasoning process, employing both a neural dense index of facts and sparse tensors of symbolic links between facts, using a combination of MIPS and sparse-matrix computation.", "DrFact outperforms several strong baseline methods on our data, making a significant step towards adapting commonsense reasoning approaches to more practical applications.", "Based on the multi-hop reasoning framework of DrFact, we hope this work can benefit future research on neural-symbolic commonsense reasoning.", "Xiang Ren is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No.
2019-19051600007, the DARPA MCS program under Contract No.", "N660011924033 with the United States Office Of Naval Research, the Defense Advanced Research Projects Agency with award W911NF-19-20271, and NSF SMA 18-29268.", "We thank all reviewers for their constructive feedback and comments.", "Crowd-workers.", "This work presents three datasets for addressing a new problem, open-ended commonsense reasoning.", "The datasets are all derived from existing multiple-choice CSR datasets, and were produced by filtering questions and using crowd-workers to annotate commonsense questions by suggesting additional answers.", "Most of the questions are about elementary science and common knowledge about our physical world.", "None of the questions involve sensitive personal opinions or personally identifiable information.", "We study the tasks posted to crowd-workers rather than the crowd-workers themselves, and we do not retrieve any identifiable private information about a human subject.", "Data bias.", "Like most crowdsourced data, and in particular most commonsense data, these crowdsourced answers are inherently subject to bias: for example, a question like 'what do people usually do at work' might be answered very differently by people from different backgrounds and cultures.", "The prior multiple-choice CSR datasets which our datasets are built on are arguably more strongly biased culturally, as they include a single correct answer and a small number of distractor answers, while our new datasets include many answers considered correct by several annotators.", "However, this potential bias (or reduction in bias) has not been systematically measured in this work.", "Sustainability.", "For most of the experiments, we use virtual compute engines on Google Cloud Platform, which is committed to purchasing enough renewable energy to match consumption for all of its operations globally (https://cloud.google.com/sustainability).", "With such virtual machine instances, we are able to use the resources only when we have jobs to run, instead of holding them all the time as with physical machines, thus avoiding unnecessary waste.", "Application.", "The work also evaluates a few proposed baselines for OpenCSR, and introduces a new model which outperforms them.", "This raises the question of whether harm might arise from applications of OpenCSR, or, more generally, since OpenCSR is intended as a step toward making multiple-choice CSR more applicable, whether harm might arise from CSR methods in general.", "Among the risks that need to be considered in any deployment of NLP technology are that responses may be wrong, or biased, in ways that would lead to improperly justified decisions.", "Although in our view the current technology is still relatively immature, and unlikely to be fielded in applications that would cause harm of this sort, it is desirable that CSR methods provide audit trails, and recourse so that their predictions can be explained to and critiqued by affected parties.", "Our focus on methods that provide chains of evidence is largely a reflection of this perceived need." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "result", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method" ]
[ "Current state of the art systems in NLP heavily rely on manually annotated datasets, which are expensive to construct.", "Very little work adequately exploits unannotated data such as discourse markers between sentences mainly because of data sparseness and ineffective extraction methods.", "In the present work, we propose a method to automatically discover sentence pairs with relevant discourse markers, and apply it to massive amounts of data.", "Our resulting dataset contains 174 discourse markers with at least 10 K examples each, even for rare markers such as coincidentally or amazingly .", "We use the resulting data as supervision for learning transferable sentence embeddings.", "In addition, we show that even though sentence representation learning through prediction of discourse markers yields state of the art results across different transfer tasks, it is not clear that our models made use of the semantic relation between sentences, thus leaving room for further improvements.", "Our datasets are publicly available 1 1 Introduction An important challenge within the domain of natural language processing is the construction of adequate semantic representations for textual units from words over sentences to whole documents.", "Recently, numerous approaches have been proposed for the construction of vector-based representations for larger textual units, especially sentences.", "One of the most popular frameworks aims to induce sentence embeddings as an intermediate representation for predicting relations between sentence pairs.", "For instance, similarity judgements (paraphrases) or inference relations have been used as prediction tasks, and the resulting embeddings perform well in practice, even when 1 https://github.com/ synapse-developpement/Discovery the representations are transfered to other semantic tasks (Conneau et al., 2017).", "However, the kind of annotated data that is needed for such supervised approaches is costly to obtain, prone to bias, and arguably fairly limited with regard to the kind of semantic information captured, as they single out a narrow aspect of the entire semantic content.", "Unsupervised approaches have also been proposed, based on sentence distributions in large corpora in relation to their discourse context.", "For instance, Kiros et al. (2015) construct sentence representations by trying to reconstruct neighbouring sentences, which allows them to take into account different contextual aspects of sentence meaning.", "In the same vein, Logeswaran et al. 
(2016) propose to predict if two sentences are consecutive, even though such local coherence can be straightforwardly predicted with relatively shallow features (Barzilay and Lapata, 2008).", "A more elaborate setting is the prediction of the semantic or rhetorical relation between two sentences, as is the goal of discourse parsing.", "A number of annotated corpora exist, such as RST-DT (Carlson et al., 2001) and PDTB (Prasad et al., 2008), but in general the available data is fairly limited, and the task of discourse relation prediction is rather difficult.", "The problem, however, is much easier when there is a marker that makes the semantic link explicit (Pitler et al., 2008), and this observation has often been used in a semi-supervised setting to predict discourse relations in general (Rutherford and Xue, 2015).", "Building on this observation, one approach to learn sentence representations is to predict such markers or clusters of markers explicitly (Jernite et al., 2017; Malmi et al., 2018; Nie et al., 2017).", "Consider the following sentence pair: I live in Paris. But I'm often abroad.", "The discourse marker but highlights an opposition between the first sentence (the speaker lives in Paris) and the second sentence (the speaker is often abroad). [Table 1: Sample from our Discovery dataset: s1 = 'Paul Prudhomme's Louisiana Kitchen created a sensation when it was published in 1984.', c = 'happily', s2' = 'This family collective cookbook is just as good'.]", "The marker can thus be straightforwardly used as a label between sentence pairs.", "In this case, the task is to predict c = but (among other markers) for the pair (I live in Paris, I'm often abroad).", "Note that discourse markers can be considered as noisy labels for various semantic tasks, such as entailment (c = therefore), subjectivity analysis (c = personally) or sentiment analysis (c = sadly).", "More generally, discourse markers indicate how a sentence contributes to the meaning of a text, and they provide an appealing supervision signal for sentence representation learning based on language use.", "A wide variety of discourse usages would be desirable in order to learn general sentence representations.", "Extensive research in linguistics has resulted in elaborate discourse marker inventories for many languages (see for instance the sample on the TextLink project website: http://www.textlink.ii.metu.edu.tr/dsd-view).", "These inventories were created by manual corpus exploration or annotation of small-scale corpora: the largest annotated corpus, the English PDTB, consists of a few tens of thousands of examples, and provides a list of about 100 discourse markers, organized in a number of categories.", "Previous work on sentence representation learning with discourse markers makes use of even more restricted sets of discourse markers, as shown in Table 2.", "Jernite et al. (2017) use 9 categories as labels, accounting for 40 discourse markers in total.", "It should be noted that the aggregate labels do not allow for any fine-grained distinctions; for instance, the TIME label includes both now and next, which is likely to impair the supervision.", "Moreover, discourse markers may be ambiguous; for example, now can be used to express contrast.", "On the other hand, Nie et al.
(2017) make use of 15 discourse markers, 5 of which account for more than 80% of their training data.", "In order to ensure the quality of their examples, they only select pairs matching a dependency pattern manually specified for each marker.", "As such, both of these studies use a restricted or impoverished set of discourse markers; they also both use the BookCorpus dataset, whose size (about 4.7M sentences that contain a discourse marker, according to Nie et al., 2017) is prohibitively small for the prediction of rare discourse markers.", "In this work we use web-scale data in order to explore the prediction of a wide range of discourse markers, with more balanced frequency distributions, along with an application to sentence representation learning.", "We use English data for the experiments, but the same method could be applied to any language that bears a typological resemblance with regard to discourse usage and has sufficient amounts of textual data available (e.g. German or French).", "Inspired by recent work (Dasgupta et al., 2018; Poliak et al., 2018; Levy et al., 2018; Glockner et al., 2018) on the unexpected properties of recent manually labelled datasets (e.g. SNLI), we will also analyze our dataset to check whether labels are easy to guess, and whether the proposed model architectures make use of high-level reasoning for their predictions.", "Our contributions are as follows: we propose a simple and efficient method to discover new discourse markers, and present a curated list of 174 markers for English; we provide evidence that many connectives can be predicted with only simple lexical features; we investigate whether relation prediction actually makes use of the relation between sentences; and we carry out extensive experiments based on the Infersent/SentEval framework.", "Our goal is thus to capture semantic aspects of sentences by means of distributional observations.", "For our training signal, we aim at something more evolved than just plain contextual co-occurrence,
but simpler than a full-fledged encoder-decoder à la Skip-Thought. [Table 2: Discourse markers or classes used by previous work on unsupervised representation learning. Jernite et al. (2017): 9 classes (ADDITION, CONTRAST, TIME, RESULT, SPECIFIC, COMPARE, STRENGTH, RETURN, RECOGNIZE) covering 40 markers. Nie et al. (2017): 15 classes / 15 markers (and, but, because, if, when, before, though, so, as, while, after, still, also, then, although). Current work: 174 classes / 174 markers (later, often, understandably, gradually, or, ironically, namely, ...).]", "In that respect, discourse relations are an interesting compromise, if we can reliably extract them in large quantities.", "This objective is shared with semi-supervised approaches to discourse relation prediction, where automatically extracted explicit instances feed a model targeting implicit instances (Marcu and Echihabi, 2002; Sporleder and Lascarides, 2008; Pitler and Nenkova, 2009; Rutherford and Xue, 2015).", "In this perspective, it is important to collect unambiguous instances of potential discourse markers.", "To do so, previous work used heuristics based on specific constructs, especially syntactic patterns for intra-sentential relations, based on a fixed list of manually collected discourse markers.", "Since we focus on sentence representations, we limit ourselves to discourse arguments that are well-formed sentences, thus also avoiding clause segmentation issues.", "Following a heuristic from Rutherford and Xue (2015), also considered by Malmi et al. (2018) and Jernite et al. (2017), we collect pairs of sentences (s1, s2) where s2 starts with a marker c.", "We only consider the case where c is a single word, as detecting longer adverbial constructions is more difficult.", "We remove c from the beginning of s2 and call the resulting sentence s2'.", "Malmi et al. (2018) make use of a list of the 80 most frequent discourse markers in the PDTB in order to extract suitable sentence pairs.", "We stay faithful to Rutherford and Xue (2015)'s heuristic, as opposed to Malmi et al. (2018) and Jernite et al. (2017): if s2 starts with c followed by a comma, and c is an adverbial or a conjunction, then it is a suitable candidate (a small sketch of this rule is given at the end of this subsection).", "By limiting ourselves to sentences that contain a comma, we are likely to ensure that s2' is meaningful and grammatical.", "As opposed to all the cited work mentioned above, we do not restrict the pattern to a known list of markers, but try to collect new reliable cues.", "This pattern is deliberately restrictive, since discourse markers often appear at the clausal level (e.g. I did it but now I regret it).", "But clauses are not meant to be self-contained, and it is not obvious that they should be included in a dataset for sentence representation learning.", "At the same time, one could easily think of cases where c is not a discourse marker, e.g. (s1, s2) = (It's cold., Very, very cold.).", "However, these uses might be easily predicted with shallow language models.", "In the next section, we use the proposed method for the discovery of discourse markers, and we investigate whether the resulting dataset leads to improved model performance.", "We use sentences from the Depcc corpus (Panchenko et al., 2017), which consists of English texts harvested from Common Crawl web data.", "We sample 8.5 billion consecutive sentence pairs from the corpus.", "We keep the 53% of sentence pairs that contain between 3 and 32 words, have a high probability of being English (> 75%) according to FastText langid from Grave et al.
(2018), have balanced parentheses and quotes, and are mostly lowercase.", "We use NLTK (Bird et al., 2009) as sentence tokenizer and the NLTK PerceptronTagger as part-of-speech tagger for adverb recognition.", "In addition to our automatically discovered candidate set, we also include all (not necessarily adverbial) PDTB discourse markers that are not induced by our method.", "Taking this into account, 3.77% of sentence pairs contained a discourse marker candidate, which is about 170M sentence pairs.", "An example from the dataset is shown in Table 1.", "We only keep pairs in which the discourse marker occurs at least 10K times.", "We also subsample pairs so that the maximum occurrence count of a discourse marker is 200K.", "The resulting dataset is summarized in Figure 1, which shows the frequency distribution of candidate discourse markers; the horizontal line indicates the subsampling threshold.", "We discovered 243 discourse marker candidates.", "Figure 1 shows their frequency distributions.", "As expected, the most frequent markers dominate the training data, but when a wide range of markers is included, the rare ones still contribute up to millions of training instances.", "Out of the 42 single-word PDTB markers that precede a comma, 31 were found by our rule.", "Some markers are missing because of NLTK errors, which mainly result from morphological issues (for instance, lovely is tagged as an adverb because of its suffix, while besides was never tagged as an adverb).", "2.3 Controlling for shallow features. As previously noted, some candidates discovered by our rule may not be actual discourse markers.", "In order to discard them, we put forward the hypothesis that actual discourse markers cannot be predicted with shallow lexical features.", "Inspired by Gururangan et al. (2018), we use a Fasttext classifier (Joulin et al., 2016) in order to predict c from s2'.", "The Fasttext classifier predicts labels from an average of word embeddings fed to a linear classifier.", "We split the dataset in 5 folds, and we predict markers for each fold while training on the remaining folds.", "We use a single epoch, randomly initialized vectors of size 100 (that can be unigrams, bigrams or trigrams) and a learning rate of 0.5.", "In addition, we predict c from the concatenation of s1 and s2' (using separate word representations for each case).", "One might assume that the prediction of c in this case relies on the interaction between s1 and s2; however, the features of s1 and s2 within Fasttext's setup only interact additively, which means that the classification most likely relies on individual cues in the separate sentences, rather than on their combination.", "In order to test this hypothesis, we introduce a random shuffle operation: for each example (s1, s2', c), s2' is replaced by a random sentence from a pair that is equally linked by c (we perform this operation separately in train and test sets).", "Table 3 indicates that shallow lexical features indeed yield relatively high prediction rates.", "Moreover, the shuffle operation indeed increases accuracy, which corroborates the hypothesis that classification with shallow features relies on individual cues from the separate sentences, rather than on their combination.", "Tables 4 and 5 show the least and most predictable discourse markers, and the corresponding recognition rate with lexical features.", "Interestingly, the two most predictable candidates are not discourse markers.",
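The harvesting rule above can be sketched in a few lines. The following Python code is our illustration only: the POS tag set, the placement of the length filter, and the function name are assumptions, and it presumes the NLTK punkt and averaged-perceptron-tagger models are installed.

    # Keep (s1, s2', c) when s2 starts with a single-word candidate marker c followed
    # by a comma and c is tagged as an adverb or a conjunction.
    import nltk  # assumes nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

    def extract_pair(s1, s2):
        tokens = nltk.word_tokenize(s2)
        if len(tokens) < 3 or tokens[1] != ',':
            return None                        # no "marker ," prefix
        tag = nltk.pos_tag(tokens)[0][1]       # POS of the sentence-initial word
        if tag not in {'RB', 'CC', 'IN'}:      # adverb/conjunction tags (our choice)
            return None
        rest = tokens[2:]                      # s2' = s2 without "marker ,"
        if not 3 <= len(rest) <= 32:           # word-count filter from the text
            return None
        return s1, ' '.join(rest), tokens[0].lower()

    print(extract_pair('I live in Paris.', 'However, I am often abroad.'))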
"Upon inspection of harvested pairs, we noticed that even legitimate discourse markers can be guessed with relatively simple heuristics in numerous examples.", "For example, c = thirdly is very likely to occur if s1 contains secondly.", "We use this information to optionally filter out such simple instances, as described in the next section.", "In the following, we call our method Discovery.", "We create several variations of the sentence pairs dataset.", "In DiscoveryHard, we remove examples where the candidate marker was among the top 5 predictions of our shallow Fasttext model, and keep only the 174 candidate markers with a frequency of at least 10K.", "Instances are then sampled randomly so that each marker appears exactly 10K times in the dataset.", "Subsequently, the resulting set of discourse markers is also used in the other variations of our dataset.", "DiscoveryBase designates the dataset for which examples predicted with the Fasttext model were not removed.", "In order to measure the extent to which the model makes use of the relation between s1 and s2', we also create a DiscoveryShuffled dataset, which is the DiscoveryBase dataset subjected to the random shuffle operation described previously.", "To isolate the contribution of our discovery method, the dataset DiscoveryAdv discards all discourse markers from the PDTB that were not found by our method.", "Also, in order to measure the impact of label diversity, Discovery10 uses 174K examples for each of the 10 most frequent markers, thus totalling as many instances as DiscoveryBase.", "Finally, DiscoveryBig contains almost twice as many instances as DiscoveryBase, i.e. 20K instances for each discourse marker (although, for a limited number of markers, the number of instances is slightly lower due to data sparseness).", "We evaluate our dataset variations in standard sentence representation learning setups.", "Thus, we follow the exact setup of Infersent (Conneau et al., 2017), also used in the Dissent (Malmi et al., 2018) model: we learn to encode sentences into h with a bi-directional LSTM sentence encoder using element-wise max pooling over time.", "The dimension size of h is 4096.", "Word embeddings are fixed GloVe embeddings with 300 dimensions, trained on Common Crawl 840B (https://nlp.stanford.edu/projects/glove/).", "A sentence pair (s1, s2) is represented with [h1, h2, h1 ⊙ h2, |h2 − h1|], where h1 ⊙ h2 = (h1,1·h2,1, ..., h1,i·h2,i, ...); this representation is fed to a softmax in order to predict a marker c (see the sketch below).", "Our datasets are split in 90% train, 5% validation, and 5% test.", "Optimization is done with SGD (the learning rate is initialized at 0.1, decayed by 1% at each epoch and by 80% if validation accuracy decreases; learning stops when the learning rate is below 10^-5, and the best model on training-task validation loss is used for evaluation; gradients are clipped when their norm exceeds 5).", "Once the sentence encoder has been trained on a base task, the resulting sentence embeddings are tested with the SentEval library (Conneau et al., 2017).", "We evaluate the different variations of our dataset described above in order to analyze their effect, and compare them to a number of existing models.", "Table 7 displays the tasks used for evaluation.", "For further analysis, Table 9 displays the results of linguistic probing using the method by Conneau et al. (2018).", "Although these tasks are primarily designed for understanding the content of embeddings, they also focus on aspects that are desirable to perform well in general semantic tasks (e.g. prediction of tense, or number of objects).",
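The training setup just described can be summarized in a short PyTorch sketch; the class name and the details beyond the dimensions stated in the text are our assumptions, not the authors' code.

    # BiLSTM-max encoder and marker classifier over [h1, h2, h1*h2, |h2-h1|].
    import torch
    import torch.nn as nn

    class MarkerClassifier(nn.Module):
        def __init__(self, vocab_size, n_markers, emb_dim=300, hid=2048):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.emb.weight.requires_grad = False         # fixed GloVe embeddings
            self.enc = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
            self.out = nn.Linear(4 * 2 * hid, n_markers)  # four 4096-dim features

        def encode(self, ids):
            states, _ = self.enc(self.emb(ids))           # (batch, time, 4096)
            return states.max(dim=1).values               # element-wise max pooling

        def forward(self, s1_ids, s2_ids):
            h1, h2 = self.encode(s1_ids), self.encode(s2_ids)
            pair = torch.cat([h1, h2, h1 * h2, (h2 - h1).abs()], dim=-1)
            return self.out(pair)                         # logits over discourse markers

A cross-entropy loss over the 174 markers, with the SGD schedule given above, completes the picture.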
"Table 6 gives an overview of the transfer learning evaluation, also comparing to other supervised and unsupervised approaches.", "Note that we outperform DisSent on all tasks except TREC (a dataset composed of questions only, which are underrepresented in our training data), with less than half the amount of training examples.", "In addition, our approach is arguably simpler and faster.", "MTL (Subramanian et al., 2018) only achieves stronger results than our method on the MRPC and SICK tasks.", "The MTL model uses 124M training examples with an elaborate multi-task setup, training on 45M sentences with manual translation, 1M pairs from SNLI/MNLI, 4M parse trees of sentences, and 74M consecutive sentence pairs.", "The model also fine-tunes word embeddings in order to achieve a higher capacity.", "It is therefore remarkable that our model outperforms it on many tasks.", "Besides, MTL is not a direct competitor to our approach, since its main contribution is its multi-task setup, and it could benefit from using our training examples.", "Our best model rivals (and indeed often outperforms) QuickThought on all tasks, except relatedness (SICK-R).", "QuickThought's training task is to predict whether two sentences are contiguous, which might incentivize the model to perform well on a relatedness task.", "We also outperform InferSent on many tasks except entailment and relatedness.", "Entailment prediction is the explicit training signal for Infersent.", "To help the analysis of our different model variations, Table 8 displays the test scores on each dataset for the original training task.", "It also shows the related PDTB implicit relation prediction scores.", "The PDTB is annotated with a hierarchy of relations, with 5 classes at level 1 (including the EntRel relation), and 16 at level 2 (with one relation absent from the test).", "It is interesting to see that this form of simple semi-supervised learning for implicit relation prediction performs quite well, especially for fine-grained relations, as the best model slightly beats the best current dedicated model, listed at 40.9% in Xue et al. (2017).", "DiscoveryHard scores lower on its training task than DiscoveryBase, and it also performs worse on transfer learning tasks.", "This makes sense, since lexical features are important to solve the evaluation tasks.", "Our initial hypothesis was that more difficult instances might force the model to use higher-level reasoning, but this does not seem to be the case.", "More surprisingly, preventing the encoders from using the relationship between sentences, as in DiscoveryShuffled, does not substantially hurt the transfer performance, which remains on average higher than Nie et al. (2017).", "Additionally, our models score well on linguistic probing tasks.", "They outperform Infersent on all tasks, which seems to contradict the claim that SNLI data allows for the learning of universal sentence representations (Conneau et al., 2017).", "And a final interesting outcome is that the diversity of markers (e.g.
using DiscoveryBase instead of Discovery10) seems to be important for good performance on those tasks, since Discovery10 has the worst overall performance on average.", "The softmax weights learned during the training phase can be interpreted as embeddings for the markers themselves, and used to visualize their relationships (a minimal sketch of this procedure is given at the end of this section).", "Figure 2 shows a TSNE (van der Maaten and Hinton, 2008) plot of the markers' representations.", "Proximity in the feature space seems to reflect semantic similarity (e.g. usually / normally).", "In addition, the markers we discovered, colored in red, blend with the PDTB markers (depicted in black).", "It would be interesting to cluster markers in order to empirically define discourse relations, but we leave this for future work.", "Though discourse marker prediction in itself is an interesting and useful task (Malmi et al., 2017), discourse markers have often been used as a training cue in order to improve implicit relation prediction (Marcu and Echihabi, 2001; Sporleder and Lascarides, 2005; Zhou et al., 2010; Braud and Denis, 2016).", "This approach has been extended to general representation learning by Jernite et al. (2017), although with empirically unconvincing results, which might be attributed to an inappropriate training/evaluation set-up, or to the use of a limited number of broad categories instead of actual discourse markers.", "Nie et al. (2017) used the more standard InferSent framework and obtained better results, although they were still outperformed by QuickThought (Logeswaran and Lee, 2018),", "which uses a much simpler training task. [Table 9: Accuracy of various models on linguistic probing tasks using logistic regression on SentEval. Columns: BShift, CoordInv, Depth, ObjNum, SubjNum, OddM, Tense, TC, WC, AVG. InferSent: 56.5, 65.9, 37.5, 79.9, 84.3, 53.2, 87, 78.1, 95.2, 70.8. SkipThought: 69.5, 69, 39.6, 83.2, 86.2, 54.5, 90.3, 82.1, 79.6, 72.7. QuickThought: 56.8, 70, 40.2, 79.7, 83, 55.3, 86.2, 80.7, 90.3, 71.4. DiscoveryBase: 63.1, 70.6, 45.2, 83.8, 87.2, 57.3, 89.1, 83.2, 94.7, 74.9. DiscoveryHard: 62.7, 70.4, 44.5, 83.4, 88.1, 57.3, 89.5, 82.8, 94.1, 74.8. Discovery10: 61.3, 69.7, 42.9, 81.8, 86.7, 55.8, 87.8, 81.4, 96.1, 73.7. DiscoveryAdv: 61.5, 70, 43.9, 82.6, 86.2, 56.2, 89.1, 82.8, 96.1, 74.3. DiscoveryShuffled: 62.6, 71.4, 45.3, 84.3, 88, 58.3, 89.3, 82.8, 93.4, 75. DiscoveryBig: 63.3, 71.4, 46.0, 84.1, 87.8, 57.1, 89.4, 84.2, 96, 75.5.]", "Both of these rely on pre-established lists of discourse markers provided by the PDTB, and both perform a manual annotation for each marker: Nie et al. (2017) uses dependency patterns, while Jernite et al. (2017) uses broad discourse categories.", "Our work is the first to automatically discover discourse markers from text.", "More generally, various automatically extracted training signals have been used for unsupervised learning tasks.", "Hashtags (Felbo et al., 2017) have been successfully exploited in order to learn sentiment analysis from unlabelled tweets, but their availability is mainly limited to the microblogging domain.", "Language modeling provides a general training signal for representation learning, even though there is no obvious way to derive sentence representations from language models.", "BERT (Devlin et al., 2018) currently holds the best results in transfer learning based on language modeling, but it relies on sentence pair classification in order to compute sentence embeddings, and it makes use of a simple sentence contiguity detection task (like QuickThought); this task does not seem challenging enough since BERT reportedly achieves 98% detection accuracy.",
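As promised above, here is a minimal sketch of reading the marker embeddings off the softmax layer and projecting them with t-SNE; it reuses the MarkerClassifier sketch given earlier, and both the trained model instance and the markers label list are assumptions of this illustration.

    # Project the softmax weight rows (one per marker) to 2D with t-SNE, as in Figure 2.
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    weights = model.out.weight.detach().cpu().numpy()  # (n_markers, 16384)
    coords = TSNE(n_components=2, random_state=0).fit_transform(weights)
    plt.scatter(coords[:, 0], coords[:, 1], s=2)
    for (x, y), name in zip(coords, markers):          # markers: list of marker strings
        plt.annotate(name, (x, y), fontsize=6)
    plt.savefig('marker_tsne.png', dpi=300)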
"Phang et al. (2018) showed that the use of SNLI datasets yields significant gains for the sentence embeddings from Radford (2018), which are based on language modeling.", "For the analysis of our models, we draw inspiration from critical work on Natural Language Inference datasets (Dasgupta et al., 2018; Levy et al., 2018).", "Gururangan et al. (2018) and Poliak et al. (2018) show that baseline models that disregard the hypothesis yield good results on SNLI, which suggests that the model does not perform the high-level reasoning we would expect in order to predict the correct label.", "They attribute this effect to bias in human annotations.", "In this work, we show that this issue is not inherent to human-labeled data, and propose the shuffle perturbation in order to measure to what extent the relationship between sentences is used.", "In this paper, we introduce a novel and efficient method to automatically discover discourse markers from text, and we use the resulting set of candidate markers for the construction of an extensive dataset for semi-supervised sentence representation learning.", "A number of dataset variations are evaluated on a wide range of transfer learning tasks (as well as implicit discourse recognition), and a comparison with existing models indicates that our approach yields state-of-the-art results on the bulk of these tasks.", "Additionally, our analysis shows that removing 'simple' examples is detrimental to transfer results, while preventing the model from exploiting the relationship between sentences has a negligible effect.", "This leads us to believe that, even though our approach reaches state-of-the-art results, there is still room for improvement: models that adequately exploit the relationship between sentences would be better at leveraging the supervision of our dataset, and could yield even better sentence representations.", "In future work, we also aim to increase the coverage of our method.", "For instance, we can make use of more lenient patterns that capture an even wider range of discourse markers, such as multi-word markers.", "Max Glockner, Vered Shwartz, and Yoav Goldberg.", "2018.", "Breaking NLI Systems with Sentences that Require Simple Lexical Inferences.", "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), (3):16.", "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov.", "2018.", "Learning Word Vectors for 157 Languages.", "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.", "European Language Resources Association (ELRA).", "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith.", "2018.", "Annotation Artifacts in Natural Language Inference Data.", "Yacine Jernite, Samuel R.
Bowman, and David Sontag.", "2017.", "Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning.", "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov.", "2016.", "Bag of tricks for efficient text classification.", "arXiv:1607.01759.", "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler.", "2015.", "Skip-thought vectors.", "In Advances in Neural Information Processing Systems, pages 3294-3302.", "Omer Levy, Samuel R Bowman, and Noah A Smith.", "2018.", "Annotation Artifacts in Natural Language Inference Data.", "Proceedings of NAACL-HLT 2018, pages 107-112.", "Lajanugen Logeswaran and Honglak Lee.", "2018.", "An efficient framework for learning sentence representations.", "pages 116.", "Lajanugen Logeswaran, Honglak Lee, and Dragomir Radev.", "2016.", "Sentence Ordering using Recurrent Neural Networks.", "pages 115.", "Laurens van der Maaten and Geoffrey Hinton.", "2008.", "Visualizing data using t-SNE.", "Journal of Machine Learning Research, 9:2579-2605.", "Eric Malmi, Daniele Pighin, Sebastian Krause, and Mikhail Kozhevnikov.", "2017.", "Automatic Prediction of Discourse Connectives.", "Eric Malmi, Daniele Pighin, Sebastian Krause, and Mikhail Kozhevnikov.", "2018.", "Automatic Prediction of Discourse Connectives.", "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.", "European Language Resources Association (ELRA).", "Daniel Marcu and Abdessamad Echihabi.", "2001.", "An unsupervised approach to recognizing discourse relations.", "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, (July):368." ]
[ "abstain", "abstain", "objective", "result", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "other", "objective", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "objective", "objective", "result", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "In this paper, we tackle the task of Definition Generation (DG) in Chinese, which aims at automatically generating a definition for a word.", "Most existing methods take the source word as an indecomposable semantic unit.", "However, in parataxis languages like Chinese, word meanings can be composed using the word formation process, where a word (\" \", peachblossom ) is formed by formation components (\" \", peach ; \" \", flower ) using a formation rule ( Modifier-Head ).", "Inspired by this process, we propose to enhance DG with word formation features.", "We build a formation-informed dataset and propose a model DeFT , which Decomposes words into formation features, dynamically Fuses different features through a gating mechanism, and generaTes word definitions.", "Experimental results show that our method is both effective and robust.", "1 1 Introduction Definition Generation (DG) aims at automatically generating an explanatory text for a word.", "This task is of practical importance to assist dictionary construction, especially in highly productive languages like Chinese (Yang et al., 2020).", "Most existing methods take the source word as an indecomposable lexico-semantic unit, using features like word embedding (Noraset et al., 2017) and context (Gadetsky et al., 2018; Ishiwatari et al., 2019).", "Recently, Yang et al. (2020) and Li et al. (2020) achieve improvement by decomposing the word meaning into different semantic components.", "In decomposing the word meaning, the word formation process is an intuitive and informative way that has not been explored in DG by far.", "For parataxis languages like Chinese, a word is formed by formation components, i.e., morphemes , and Equal contribution.", "(cid:20914) 2 : (cid:20912)(cid:23608) ( spend) (cid:17790) 2 : (cid:17788)(cid:17788)(cid:9775) ( vainly) (cid:20914) 1 : (cid:20912)(cid:13876) ( flower) (cid:17790) 1 : (cid:17788)(cid:20849)(cid:17795) ( white) Modifier-Head Adverb-Verb (cid:17790)(cid:20914) (cid:20) (cid:29)(cid:3) (cid:17788)(cid:20849)(cid:17795)(cid:20912)(cid:572) ( White flower.) (cid:17790)(cid:20914) (cid:21) (cid:29) (cid:17788)(cid:17788)(cid:9775)(cid:20912)(cid:23608)(cid:572) ( Vainly spend.) Formation Rule Word Definition (cid:17790)(cid:20914) Morphemes: Definitions Word Figure 1: Word formation process for the polysemous \" \".", "a formation rule .", "As shown in Figure 1, the polysemous word \" \" holds two meanings \" 1 \" and \" 2 \", which can be distinguished by different morphemes (\" 1 ; 1 \" vs. \" 2 ; 2 \") and different rules ( Modifier-Head vs. Adverb-Verb ).", "Such intuitive formation process can clearly and unambiguously construct the word meaning.", "Inspired by the word formation process in Chinese, we propose to enhance DG with formation features.", "First, we build a formation-informed dataset under expert annotations.", "Next, we design a DG model DeFT , which Decomposes words into formation features, Fuses different features through a gating mechanism, and generaTes definitions.", "Our contributions are as follows: (1) We first propose to use word formation features to enhance DG and design a formation-informed model DeFT .", "(2) We build a new formation-informed DG dataset under expert annotations.", "(3) Experimental results show that our method brings a substantial performance improvement, and maintains a robust performance even with only word formation features.", "Definition Generation: Noraset et al. 
(2017) first propose the DG task and use word embeddings as the main input.", "The following methods add contexts for disambiguation (Gadetsky et al., 2018; Ishiwatari et al., 2019) or word-pair embeddings to capture lexical relations (Washio et al., 2019).", "Recent methods attempt to decompose the word meaning by using HowNet sememes (Yang et al., 2020) or by modeling latent variables (Li et al., 2020).", "Semantic Components: To systematically define words, linguists decompose the word meaning into semantic components (Wierzbicka, 1996).", "Following this idea, HowNet (Dong and Dong, 2006) uses manually-created sememes to describe the semantic aspects of words.", "Recent studies also show that leveraging subword information produces better embeddings (Park et al., 2018; Lin and Liu, 2019; Zhu et al., 2019), but these methods lack a clear distinction among different formation rules.", "It is linguistically motivated to explore the word formation process to better understand words.", "Instead of combining roots and affixes, Chinese words are formed by characters in a parataxis way (Li et al., 2018).", "Here, we introduce two formation features and construct a formation-informed dataset.", "Chinese formation components are morphemes, defined as the smallest meaning-bearing units (Zhu, 1982).", "Morphemes are unambiguous in representing word meanings, since they can distinguish the different meanings and uses of each character in a word, like the 'flower' and 'spend' morphemes in Figure 1.", "Morphemes are also productive in constructing words, since over 99.48% of Chinese words are formed using a small set of nearly 20,000 morphemes (Fu, 1988).", "These properties make morphemes highly effective as formation components.", "Formation rules specify how morphemes are combined to form words in a parataxis way.", "For example, the Modifier-Head rule uses the first morpheme to modify the second morpheme.", "Following the study of Liu et al. (2018), we adopt 16 Chinese formation rules and show the top 5 by instance percentage in Table 1.", "Complete descriptions of the 16 formation rules are provided in Appendix A.
3.2 Formation-informed dataset: We construct a DG dataset under expert annotations, which contains morphemes and formation rules.", "Each entry consists of (1) the source word, (2) the morphemes and morpheme definitions, (3) the formation rule, (4) a context (a sentence containing the source word), and (5) the source word definition.", "We extract data from the 5th edition of the Contemporary Chinese Dictionary published by the Commercial Press, one of the most influential Chinese dictionaries.", "We collect 45,311 Chinese disyllabic word entries with contexts and definitions.", "To annotate them, we also collect 10,527 Chinese characters and 20,855 morphemes with definitions.", "Our annotators include two professors and six graduate students majoring in Chinese linguistics.", "Given the definition, they annotate each word with its formation rule (as shown in Table 1) and morpheme IDs (as shown in Table 2).", "Each entry is cross-validated by three independent annotators and reviewed by one.", "The detailed annotation process includes the following three steps: (1) Equipped with the definition, annotators annotate each entry with two morpheme IDs (selected from the morphemes of each character) and a formation rule (selected from the 16 formation rules).", "Each entry is independently annotated by three annotators, who also note down a confidence score.", "If the three annotations are the same, turn to (3); otherwise, turn to (2).", "(2) Another annotator reviews the conflicting annotations and confidence scores, and decides the final annotation.", "Turn to (3).", "(3) The annotation is collected as an entry into the final dataset.", "It takes one minute on average for each annotator to annotate an entry.", "Only 8,193 out of 45,311 entries enter Phase (2) in the whole process.", "We extend the DG setting in Ishiwatari et al.
(2019) to incorporate the word formation features, F = {morph_1, morph_2, rule}, where morph_i is the i-th morpheme definition sentence and rule is the formation rule.", "The training goal is to maximize the likelihood of the ground-truth definition D = d_{1:T} given the source word w*, the context sentence C = c_{1:n}, and the word formation features F: p(D | w*, C, F) = ∏_{t=1}^T p(d_t | d_{<t}, w*, C, F).", "Our optimization objective is to minimize the cross-entropy loss L: L = −Σ_{t=1}^T log p(d_t | d_{<t}, w*, C, F), where d_{1:T} is the ground-truth definition, w* is the pretrained embedding of the source word, C is the context sentence, and F is the formation information.", "As shown in Figure 2, DeFT first produces a seed vector in a rule-specific manner as global supervision.", "Then we feed it into the definition generator, which uses a gating mechanism to dynamically fuse different features and generate definitions.", "We first employ a Bi-LSTM (Graves and Schmidhuber, 2005) to encode morph_i.", "Then, we combine the morph_i into a comprehensive morpheme embedding r_m with a rule-specific linear layer, which captures different semantic relations: m_i = Bi-LSTM(morph_i), r_m = W_m^(rule) [m_1; m_2] + b_m^(rule).", "We then use a linear layer to combine r_m and the pretrained source word embedding w* to obtain the seed vector r as the initial generator input: r = W_r [r_m; w*] + b_r.", "We employ an LSTM followed by a GRU-like (Cho et al., 2014) gate GRU-GATE(·), which dynamically fuses different features, as the generator:", "h_t = LSTM(d_{t-1}, h'_{t-1}), h'_t = GRU-GATE(h_t, feat_t), feat_t = [r_m; w*; a; g_t; c_t],", "where h_t is the LSTM hidden state at the t-th step, h'_t is the gated hidden state, d_{t-1} is the embedding of the previous definition word (specially, d_0 := r), and feat_t denotes the features that dynamically control the generation process.", "We explain a, g_t, and c_t as follows.", "a is the character-level embedding, obtained by combining the embedding ch_i of each character in w with a rule-specific linear layer: a = W_a^(rule) [ch_1; ch_2] + b_a^(rule).", "g_t is the gated attended morpheme vector that dynamically focuses on the most relevant parts of the morphemes during generation.", "We first calculate attended morpheme vectors g'_{t,i} by the attention mechanism (Bahdanau et al., 2015): g'_{t,i} = Attention(h_t, morph_i), where Attention(h, seq) denotes the function that uses h to attend over the Bi-LSTM-encoded seq.", "We then design a MorphGATE to compute g_t by assigning different weights to the two morphemes: z_t = σ(W_z [g'_{t,1}; g'_{t,2}; h_t] + b_z), g_t = (1 − z_t) ⊙ g'_{t,1} + z_t ⊙ g'_{t,2}, where σ(·) is the Sigmoid function and ⊙ is the Hadamard product. [Table 3: Statistics of our formation-informed dataset. Train: 29,169 words, 36,248 entries, context length 7.22, morph_1 length 7.69, morph_2 length 7.29, definition length 12.02. Valid: 3,673 words, 4,531 entries, 7.32, 7.45, 7.30, 11.91. Test: 3,666 words, 4,532 entries, 7.26, 7.51, 7.01, 12.03.]", "The inter-rater kappa (Fleiss and Cohen, 1973) is 0.65 for coverage and 0.66 for overall.", "We average the scores of the raters and obtain results consistent with the automatic evaluation. [Table 5 (excerpt): rating guideline for each point; 1 point: nothing is covered.]", "c_t is the attended context vector.", "Following Ishiwatari et al.
"Finally, GRU-GATE(h_t, feat_t) takes the LSTM hidden state h_t and the dynamically controlled features feat_t as input, and updates h_t to h'_t by fusing the different features: u_t = \sigma(W_u [h_t; feat_t] + b_u), v_t = \sigma(W_v [h_t; feat_t] + b_v), \tilde{h}_t = \tanh(W_h [(v_t \odot feat_t); h_t] + b_h), h'_t = u_t \odot h_t + (1 - u_t) \odot \tilde{h}_t, where \sigma denotes the Sigmoid function and \odot denotes the Hadamard product.", "The gate u_t controls how much of the original state h_t is retained, and the gate v_t controls the contribution from the features feat_t.",
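Assuming the dimensions above, the GRU-GATE update can be sketched as follows; this is our reading of the equations, with our own layer names, not the authors' code.

```python
import torch
import torch.nn as nn

class GruGate(nn.Module):
    """GRU-like gate fusing the decoder state h_t with the feature vector feat_t."""
    def __init__(self, hidden: int, feat_dim: int):
        super().__init__()
        self.u = nn.Linear(hidden + feat_dim, hidden)    # update gate u_t
        self.v = nn.Linear(hidden + feat_dim, feat_dim)  # feature gate v_t
        self.h = nn.Linear(feat_dim + hidden, hidden)    # candidate state

    def forward(self, h_t: torch.Tensor, feat_t: torch.Tensor) -> torch.Tensor:
        hf = torch.cat([h_t, feat_t], dim=-1)
        u = torch.sigmoid(self.u(hf))
        v = torch.sigmoid(self.v(hf))
        # Candidate state built from the gated features and the current state.
        h_cand = torch.tanh(self.h(torch.cat([v * feat_t, h_t], dim=-1)))
        # h'_t = u_t * h_t + (1 - u_t) * h~_t
        return u * h_t + (1 - u) * h_cand
```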
"Dataset: We split the dataset described in Section 3 into training, validation and test sets by 8:1:1, as shown in Table 3.", "Table 3 (statistics of our formation-informed dataset; columns: #Words, #Entries, context length, morph_1 length, morph_2 length, definition length): Train: 29,169, 36,248, 7.22, 7.69, 7.29, 12.02; Valid: 3,673, 4,531, 7.32, 7.45, 7.30, 11.91; Test: 3,666, 4,532, 7.26, 7.51, 7.01, 12.03.", "Note that we treat polysemous words as different entries, and the words are mutually exclusive across the three sets.", "Hyper-parameters: We tune hyper-parameters to achieve the best BLEU score on the validation set.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 10^{-3} as the optimizer.", "We set the hidden size to 300, the batch size to 64 and the dropout rate to 0.2.", "Word embeddings are 300-dimensional, pretrained with fastText (Bojanowski et al., 2017).", "We train for up to 50 epochs, and early-stop the training process once the performance does not improve for 10 consecutive epochs.", "We run our experiments on a single NVIDIA GeForce GTX 2080Ti GPU with 11 GB memory.", "Baselines: We compare with models that share a similar setting with us but use different features, including SG (Noraset et al., 2017), which uses only the word feature, and LOG-CaD (Ishiwatari et al., 2019), which uses both the word and context features.", "We conduct both automatic and human evaluations to validate our method, and show the results in Table 4.", "For automatic evaluation, we select BLEU-4 (Papineni et al., 2002) and ROUGE-L (Lin, 2004) as metrics.", "We find that (1) our formation-informed DeFT (F+W+C) significantly outperforms the baselines and the other simplified versions (F, F+W, F+C); (2) based on W or W+C, adding formation features introduces significant improvements; and (3) formation features are robust, since using only F can outperform LOG-CaD by 9.8% and 10.45% in BLEU and ROUGE-L, respectively.", "These findings validate that formation features can effectively enhance DG by assisting word meaning construction.", "For human evaluation, we measure semantic coverage and overall quality.", "The coverage metric measures how much ground-truth information is mentioned in the predicted definition.", "To be specific, the scores are given based on how many semantic aspects of the ground-truth definition are covered by the predicted definition.", "The overall metric measures the overall quality of the predicted definition, referencing the ground-truth definition.", "We randomly select 100 entries from the test set, and hire three raters to rate the predicted definitions on a scale of 1 to 5, where each entry shown to the raters includes (1) the source word, (2) the ground-truth definition, and (3) the predicted definition.", "We show in Table 5 the detailed guideline for raters on each point (for example, a coverage score of 1 means that nothing is covered).", "The inter-rater kappa (Fleiss and Cohen, 1973) is 0.65 for coverage and 0.66 for overall.", "We average the scores of the raters and obtain results consistent with the automatic evaluation: formation features are effective and DeFT performs the best.", "Ablation study: Based on DeFT, we perform an ablation study regarding MorphGATE and the formation rule in Table 6.", "(1) For MorphGATE, we replace it with a simple average function, which leads to a drop in performance.", "This reveals that different morphemes take effect in different generation phases.", "(2) For the formation rule, we replace the rule-specific layers with a rule-shared layer, leading to an even more serious performance drop.", "This verifies that distinguishing the specific formation rule can assist word meaning construction.", "Formation features can assist disambiguation: We present the generated definitions for a polysemous word in Figure 3.", "The example shows that using only the word feature (W) cannot distinguish different meanings.", "By contrast, using only the formation features (F) can capture the meaning difference and disambiguate the word (use vs. money).", "Further, DeFT (F+W+C) generates the exactly correct definition by fusing different features.", "Due to space limits, we put two additional interesting analyses on formation rules in Appendix B.", "Formation features are more feasible and effective compared with sememes: Sememes are expert-crafted words that describe the semantic aspects of words.", "For annotation cost, annotating sememes is as expensive as writing definitions (Li et al., 2020), whereas annotating formation features is a simple multiple-choice task with 1.98 choices on average.", "For effectiveness, we conduct experiments using sememe embeddings from Yang et al. (2020) as additional features.",
"Results show that, based on W, adding sememes brings a BLEU improvement of 0.52, lower than the improvement of 2.30 brought by F+W.", "Further, based on DeFT, adding sememes even introduces noise and decreases BLEU by 0.35.", "This indicates that, compared with sememes, formation features are more feasible and effective.", "In this paper, we propose to use formation features to enhance DG.", "We build a formation-informed dataset and design a model, DeFT, which decomposes words into formation features and fuses the features via a gating mechanism.", "Experimental results show that our method is both effective and robust.", "We would like to thank all the anonymous reviewers and the annotators, including Fuqian Wu, Ming Liu, Yaqi Yin, Yue Wang, etc., for their helpful advice on various aspects of this work.", "This paper is supported by the National Natural Science Foundation of China (No. 62036001, U19A2065) and the National Social Science Foundation of China (No. 16YY137, 18ZDA295).", "(Appendix B excerpt) Figure 4(a) shows that Overlapping is most similar to Suffixation, Prefixation, and Single Morpheme.", "(Appendix B table excerpt; columns: Word, Formation Rule, Explanation, Use Case %) ding zhong (Modifier-Head): morph_1 modifies morph_2 (noun)." ]
[ "objective", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "method", "other", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other", "other", "other", "other", "other", "other" ]
[ "Abstract Leveraging large-scale unlabeled web videos such as instructional videos for pre-training followed by task-specific finetuning has become the de facto approach for many video-and-language tasks.", "However, these instructional videos are very noisy, the accompanying ASR narrations are often incomplete, and can be irrelevant to or temporally misaligned with the visual content, limiting the performance of the models trained on such data.", "To address these issues, we propose an improved video-and-language pre-training method that first adds automatically-extracted dense region captions from the video frames as auxiliary text input, to provide informative visual cues for learning better video and language associations.", "Second, to alleviate the temporal misalignment issue, our method incorporates an entropy minimization-based constrained attention loss, to encourage the model to automatically focus on the correct caption from a pool of candidate ASR captions.", "Our overall approach is named DECEMBERT (Dense Captions and Entropy Minimization).", "Comprehensive experiments on three video-and-language tasks (text-to-video retrieval, video captioning, and video question answering) across five datasets demonstrate that our approach outperforms previous state-of-the-art methods.", "Ablation studies on pre-training and downstream tasks show that adding dense captions and constrained attention loss help improve the model performance.", "Lastly, we also provide attention visualization to show the effect of applying the proposed constrained attention loss.", "1 1 Introduction Video and language are ubiquitous in the world we live.", "The ability to understand the interplay of video and language is thus essential for intelligent agents to operate in real-world scenario.", "Past success in video-and-language has mostly been driven 1 Code and models: https://github.com/ zinengtang/DeCEMBERT by supervised learning, where models are learned on manually labeled data for a particular task (e.g., text-to-video retrieval).", "However, manually annotating video and language data is very expensive, hence limiting the scale of such datasets, and consequently also limiting the performance of models trained on the datasets.", "The self-supervised pretraining then finetuning paradigm offers an easy and generic solution to this dilemma, where models are first pre-trained on large-scale unlabeled data by performing various proxy tasks, followed by finetuning the pre-trained model on downstream tasks where data is often limited.", "Recent advances on language pre-training (De-vlin et al., 2019; Liu et al., 2019) demonstrate the effectiveness of this approach, where transformer-based (Vaswani et al., 2017) models pre-trained on large-scale unlabeled text corpus has shown to perform remarkably well across a wide range of natural language tasks (Rajpurkar et al., 2016; Williams et al., 2017; Zellers et al., 2018; Wang et al., 2018).", "Following this momentum, multimodal pre-training (Tan and Bansal, 2019; Lu et al., 2019; Chen et al., 2020; Su et al., 2019; Cho et al., 2021; Sun et al., 2019; Li et al., 2020c; Zhu and Yang, 2020; Miech et al., 2020; Li et al., 2020b; Lei et al., 2021) on large-scale image-text corpus (Sharma et al., 2018; Chen et al., 2015; Krishna et al., 2017) and video-text corpus (Lei et al., 2018; Miech et al., 2019; Sun et al., 2019) have also shown to outperform existing approaches (Ander-son et al., 2018; Yu et al., 2018a; Lei et al., 2020a,b) on vision and language tasks (Antol et 
"The most commonly used proxy tasks for multimodal pre-training are masked language modeling (MLM) (Devlin et al., 2019) and cross-modal matching (Tan and Bansal, 2019; Lu et al., 2019; Zhu and Yang, 2020) (e.g., video-text matching), where MLM aims to learn a better language model in the presence of the extra vision modality, and the matching objective encourages better association and alignment between relevant image-text or video-text pairs.", "(Figure 1 contents: a video with its ASR captions, [01:15] easier start, [01:17] taking pieces paper go, [01:22] cross, and automatically extracted dense captions such as a blue paper, the table is made of wood, the hand is holding a paper.)", "Existing video-text pre-training models (Sun et al., 2019; Miech et al., 2020; Zhu and Yang, 2020) are typically trained on large-scale instructional video datasets such as HowTo100M (Miech et al., 2019).", "The dataset contains 1.2 million videos with 136 million clips that are automatically harvested from YouTube.", "Each clip is paired with text transcribed from the video narrations via an automatic speech recognition (ASR) system.", "While the models trained on HowTo100M have shown promising results, they suffer from a few inherent drawbacks of the dataset: (i) Semantic misalignment: the narration words are sometimes irrelevant to the visual content (e.g., credits or other nonvisual words; see Figure 1, text highlighted in pink), and vice versa, i.e., some important visual objects and actions are not described by the words.", "(ii) Temporal misalignment: the videos and the captions are far from perfectly aligned, i.e., people might talk about something before or after they actually demonstrate it.", "For example, Figure 1 shows that the caption cross is spoken after the action happened.",
"Miech et al. (2019) reported that around 50% of the clip-caption pairs in HowTo100M suffer from these two misalignments, both of which cause difficulties in optimizing the video-text matching objective.", "(iii) Furthermore, the ASR captions are generally noisy, incomplete, and unpunctuated (Tilk and Alume, 2015) (e.g., in Figure 1, taking pieces paper go), which limits the language modeling ability of systems trained on such text.", "To address the aforementioned issues, we propose to add Dense Captions (Johnson et al., 2016; Yang et al., 2017) as a complementary text input to the ASR captions.", "Beyond serving as an extra language input for better language modeling, dense captions also describe important object, attribute, and action details for several salient regions in the video frames, providing useful signals for video-text matching.", "In addition to their use in the pre-training stage, these dense captions also provide helpful clues for downstream tasks such as video question answering.", "In parallel, to alleviate the temporal misalignment issue, we propose a constrained attention loss that encourages the model to automatically focus on the relevant ASR caption from a pool of continuous caption candidates.", "Instead of using only the single paired ASR caption for each clip, we also use the captions from its neighboring clips.", "We expect one of the neighboring captions to semantically align with the clip.", "To encourage the alignment between the clip and its relevant caption, we employ a constrained attention loss that encourages the attention mass from the video features to the captions to be distributed mostly on one of the captions, by minimizing the entropy of the attention scores.", "We evaluate our DECEMBERT (Dense Captions and Entropy Minimization) model on a wide range of video-and-language tasks, including video question answering (Xu et al., 2017), text-to-video retrieval (Xu et al., 2016; Zhou et al., 2017), and video captioning (Xu et al., 2016; Zhou et al., 2017), where our approach outperforms previous state-of-the-art methods.", "To better understand the underlying factors that contribute to this success, we present comprehensive analyses concerning each of the added components.", "To summarize, our contribution is three-fold: (i) We propose incorporating automatically extracted dense captions as an extra text input for video-text pre-training.", "(ii) We propose an entropy minimization-based constrained attention loss to encourage the model to dynamically select the best-matched captions from a pool of neighboring captions, to alleviate the inherent misalignment between the ASR captions and the videos.", "(iii) Extensive experiments on three video-and-language tasks (text-to-video retrieval, video captioning, and video question answering) across five datasets demonstrate the effectiveness of our approach.", "Furthermore, we also provide a comprehensive ablation study and visualizations to quantitatively and qualitatively examine the effect of using dense captions and the proposed constrained attention loss.", "Since the birth of BERT (Devlin et al., 2019), transformer (Vaswani et al., 2017) language pre-training models (Liu et al., 2019; Yang et al., 2019; Lan et al., 2020; Dong et al., 2019; Song et al., 2019; Raffel et al., 2020; Clark et al., 2020), which perform unsupervised pre-training followed by downstream task-specific finetuning, have become the de facto approach for various natural language understanding tasks (Rajpurkar et al., 2016; Williams et al., 2017; Zellers et al., 2018; Wang et al., 2018).",
"Following this success, image-and-language pretraining models (Tan and Bansal, 2019; Lu et al., 2019; Chen et al., 2020; Zhou et al., 2020; Li et al., 2020a) and video-and-language pre-training models (Sun et al., 2019; Miech et al., 2019; Zhu and Yang, 2020; Miech et al., 2020; Li et al., 2020b; Luo et al., 2020; Huang et al., 2020; Stroud et al., 2020) have also shown promising results on many vision and language tasks (Antol et al., 2015; Xu et al., 2016; Zhou et al., 2017).", "For video-and-language pre-training in particular, most existing works (Sun et al., 2019; Miech et al., 2019; Zhu and Yang, 2020; Miech et al., 2020; Li et al., 2020b; Luo et al., 2020) are trained on large-scale unlabeled instructional videos, such as the HowTo100M (Miech et al., 2019) videos.", "However, the ASR captions associated with these videos are noisy, i.e., they are often temporally or semantically misaligned with the video content.", "Miech et al. (2020) propose Multiple Instance Learning Noise Contrastive Estimation (MIL-NCE) to address the temporal misalignment issue, but the semantic misalignment still remains.", "Moreover, because MIL-NCE requires computing a separate similarity score from the target clip to each of the ASR caption candidates, it is not suitable for the prevailing single-stream transformer pre-training architecture, due to the linearly increasing computation cost.", "Inspired by recent work (Kim and Bansal, 2019; Kim et al., 2020) that uses dense captions (Johnson et al., 2016; Yang et al., 2017) to improve image and video QA models, we propose to add dense captions as an auxiliary text input that provides aligned visual cues to ease the difficulty of learning a video-text matching objective from often temporally and semantically misaligned ASR captions.", "In addition, we also propose a constrained attention loss, which applies entropy minimization-based regularization (Tanaka et al., 2018; Yi and Wu, 2019) to encourage higher attention scores from the video to the correctly matched caption among a pool of ASR caption candidates.", "In this section, we describe the details of DECEMBERT, including its architecture, pre-training objectives, dense caption inputs, and the constrained attention loss.", "Figure 2 shows an overview of DECEMBERT.", "Input Representations.", "Input text (e.g., ASR captions) is tokenized and represented as a sequence of WordPiece (Wu et al., 2016) tokens.", "We use a trainable word embedding layer to encode the tokens into feature representations.", "We use appearance and motion features to represent videos.", "For appearance, we use a ResNet-152 (He et al., 2016) model pre-trained on ImageNet (Deng et al., 2009) to extract 2D video features at 1 FPS.", "Similarly, for motion, we use a 3D ResNeXt (Xie et al., 2017; Hara et al., 2018; Kataoka et al., 2020) to extract 3D video features at 1 FPS.", "The temporally aligned appearance and motion features are L2-normalized and concatenated along the feature dimension.", "We then apply a two-layer MLP to map them to the same dimension as the word embeddings.", "Next, we add learned positional embeddings and token type embeddings (Devlin et al., 2019) to the video and text representations to encode position and token type information.", "The video and text representations are then concatenated into a single sequence as input to a 12-layer transformer encoder for pre-training and downstream task finetuning.",
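As a rough illustration of this input pipeline, here is a hedged PyTorch sketch. The 2048-dimensional feature sizes and the ReLU between the two MLP layers are our assumptions; the text only specifies L2 normalization, concatenation, and a two-layer MLP projecting to the word embedding dimension.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoEmbedder(nn.Module):
    """Projects concatenated 2D appearance + 3D motion features into the text embedding space."""
    def __init__(self, app_dim: int = 2048, mot_dim: int = 2048, txt_dim: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(app_dim + mot_dim, txt_dim),
            nn.ReLU(),  # activation choice is an assumption
            nn.Linear(txt_dim, txt_dim),
        )

    def forward(self, app: torch.Tensor, mot: torch.Tensor) -> torch.Tensor:
        # L2-normalize each stream, then concatenate along the feature dimension.
        x = torch.cat([F.normalize(app, dim=-1), F.normalize(mot, dim=-1)], dim=-1)
        # Output shape (batch, time, txt_dim): ready to concatenate with token embeddings.
        return self.mlp(x)
```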
"Dense Captions.", "The original captions from ASR systems might not describe a video with rich content well, or can even be irrelevant to the video, as discussed in Section 1.", "Moreover, as ASR captions are often incomplete and unpunctuated, they might also be sub-optimal for language modeling.", "(Figure 2: Overview of the DECEMBERT architecture; video features and ASR/dense caption tokens, with position and type embeddings, are fed to a transformer trained with video-text matching, masked language modeling, and the constrained attention loss.)", "Therefore, we use dense captions (Johnson et al., 2016) automatically extracted by an off-the-shelf image dense captioning model (Yang et al., 2017) as additional language input for the model.", "This dense captioning model is pre-trained on Visual Genome (Krishna et al., 2017) regional captions.", "To obtain video-level captions, we extract dense captions from frames sampled every two seconds.", "There are on average 4.4 dense captions per frame; we sample two of them from each frame at each training step to avoid redundant information and to reduce memory and computation cost.", "Note that the other dense captions might still be sampled at another training step.", "The sampled dense captions are then concatenated together as video-level captions for training.", "These extracted dense captions provide rich and comprehensive information regarding the salient objects, attributes, and actions (see the examples in Figure 1 and Figure 2), which helps to optimize the video-text matching objective during pre-training and provides essential visual clues for many downstream tasks such as video question answering.", "Meanwhile, because the dense captions are text input with diverse semantics, they complement the typically short and incomplete ASR captions as an additional resource for better language modeling.", "We observe in our ablation study that adding dense captions improves both MLM accuracy and video-text matching accuracy, demonstrating the effectiveness of using them as extra inputs.", "Pre-Training Objectives.", "During pre-training, we use masked language modeling (MLM) (Devlin et al., 2019) and cross-modality matching (Tan and Bansal, 2019; Lu et al., 2019; Miech et al., 2019; Zhu and Yang, 2020) (also referred to as video-text matching in our context) as our objectives to learn the model parameters.", "For masked language modeling, the goal is to learn better language models conditioned on the bidirectional text context and the video.", "We set a probability of 0.20 to replace an input language token with [MASK]; because ASR captions are typically very short and grammatically less rigorous, we use this higher masking probability instead of the commonly used 0.15 as in BERT (Devlin et al., 2019).", "When dense captions are used as extra text input, we also perform masked language modeling on them with the same masking probability as the ASR captions.", "For video-text matching, with a probability of 0.50, we replace the original ASR captions with randomly sampled captions from other videos or clips as a negative.", "Of the sampled negative ASR captions, 50% are from different videos, while the other 50% are from the same video but different clips.", "Text from the same video is likely to have the same theme or similar context, and thus can serve as hard samples to improve the model's ability to do fine-grained matching.", "We do not designate a [CLS] token before the start of the input caption; instead, we take the mean pooling of the output sequence hidden states to perform binary classification for video-text matching.", "Empirically, we found this approach works better than using a [CLS] token.",
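The negative construction for video-text matching can be sketched in a few lines. The helper name and the clip objects with a .caption field are hypothetical, but the 50/50 replacement rate and the same-video hard negatives follow the text above.

```python
import random

def sample_matching_pair(clip, clips_same_video, clips_other_videos):
    """Builds one (clip, caption, label) training example for video-text matching:
    with p=0.5 keep the true caption (label 1); otherwise swap in a negative caption,
    half of the time from the same video (hard negative), half from another video."""
    if random.random() < 0.5:
        return clip, clip.caption, 1
    if random.random() < 0.5:
        neg = random.choice([c for c in clips_same_video if c is not clip])
    else:
        neg = random.choice(clips_other_videos)
    return clip, neg.caption, 0
```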
"Constrained Attention Loss.", "The ASR captions are often temporally misaligned with their corresponding clips, and simply pre-training a model on these misaligned clip-text pairs may lead to sub-optimal performance.", "To alleviate this issue, we propose a constrained attention loss that encourages the model to automatically select the best-matched ASR caption from a pool of continuous caption candidates.", "This is achieved by minimizing the entropy of the attention from the video to the ASR captions.", "Formally, we denote an input video V as [c_1, c_2, ..., c_N] and its corresponding ASR captions as [s_1, s_2, ..., s_N], where c_i is the i-th clip of V, s_i is the ASR caption of c_i, and N is the total number of clips in the video.", "For a clip c_i, instead of only inputting its associated caption s_i, we also include the captions from its two neighboring clips, i.e., s_{i-1} and s_{i+1} (while our approach works for an arbitrary number of neighbors, we use two neighbors to illustrate the idea for simplicity).", "In most cases, the correct matched caption for the clip is among these three captions.", "In our formulation, we only need to process a single sequence that combines all the neighboring captions, without the need for negative captions.", "We denote X = [X_{c_i}; X_{s_{i-1}}; X_{s_i}; X_{s_{i+1}}] \in R^{l \times d} as the generalized input sequence to each transformer layer (dense captions are ignored for simplicity), where X_{c_i}, X_{s_{i-1}}, X_{s_i}, and X_{s_{i+1}} are the embedding matrices corresponding to the input clip and the three captions.", "We further simplify the notation as X = [X_0; X_1; X_2; X_3].", "A single-head self-attention operation in the transformer encoder layers can then be expressed as: A = softmax(X X^T / \sqrt{d}, dim=1) X, (1) where softmax(\cdot, dim=1) denotes applying softmax along the second dimension of the input matrix, and A is the attention output matrix.", "When multiple attention heads are used, the formulation is similar.", "We use S to denote the similarity matrix computed by X X^T; it can be expressed using block matrices: S_{q,r} = X_q X_r^T, q, r \in {0, 1, 2, 3}. (2)", "Our goal is to encourage the model to focus on the correct matched caption for an input clip, i.e., the attention mass from the video clip to the correct matched caption should be higher than for the others.", "To achieve this, we first define the maximum response between the video hidden states X_0 and the ASR caption hidden states X_j as: z_j = max(S_{0,j}, dim=1), j \in {1, 2, 3}. (3)", "For a single example, we then define the constrained attention loss as: u_j = exp(z_j) / \sum_{k=1}^{3} exp(z_k), (4) L_e = -\sum_{j=1}^{3} u_j \log(u_j). (5)",
"This loss formulation is based on entropy minimization (Tanaka et al., 2018; Yi and Wu, 2019): it forces the model to assign high attention scores to only one of the ASR captions, i.e., to peak at only one caption rather than being flat, because the one-hot distribution has the smallest entropy.", "Figure 3 shows an overview of applying the constrained attention loss.", "During pre-training, we add this loss to each of the attention heads across all layers, and optimize these losses jointly with the MLM loss and the video-text matching loss.", "Meanwhile, as the similarity matrix S is symmetric, the entropy minimization objective also encourages the correctly matched ASR caption to have a higher similarity to the video, while forcing the mismatched captions to put more attention on the other ASR captions rather than on the video.", "In fact, we found that, over 100 randomly sampled videos, using two neighbors already covers 95% of the videos with at least one positively matched ASR caption.", "4 Experiments", "In this section, we compare our model with state-of-the-art methods on three video-and-language downstream tasks (video captioning, text-to-video retrieval, and video question answering) across five datasets.", "We then present a comprehensive ablation study, where we show that each of our proposed components helps improve the pre-training task performance and the downstream task performance.", "Lastly, we also provide an attention visualization example to demonstrate the effect of applying the proposed constrained attention loss.", "Pre-training.", "We use HowTo100M (Miech et al., 2019) for pre-training.", "It contains 1.22 million YouTube instructional videos that cover 23.6K instruction tasks (e.g., making peanut butter, pruning a tree).", "Each video is associated with an English narration automatically transcribed by an Automatic Speech Recognition (ASR) system.", "On average, each video has 110 clip-caption pairs, with an average duration of 4 seconds per clip and 4 words per caption.", "We reserve 10K videos for validation, and use the rest of the videos for pre-training.", "Video Captioning.", "We evaluate video captioning on the MSRVTT (Xu et al., 2016) and YouCook2 (Zhou et al., 2017) datasets.", "The task is to generate a text description (a single sentence or a paragraph of multiple sentences) for a given video.", "(i) MSRVTT contains 10K YouTube videos with 20 descriptions per video.", "The videos in MSRVTT are typically 10-30 seconds long, with an average length of 14.8 seconds.", "It contains 6.5K videos in the train set, 497 videos in the val set, and 3K videos in the test set.", "(ii) YouCook2 is a cooking video dataset harvested from YouTube.", "It contains 2K videos from 89 recipes with a total length of 176 hours.", "Each video is annotated with temporal timestamps that indicate event segments (clips), and a textual description is provided for each segment.", "In total, there are 14K video segments.",
"Text-to-Video Retrieval.", "We evaluate text-to-video retrieval on the MSRVTT and YouCook2 datasets, where the goal is to retrieve a relevant video from a gallery of videos given a text query.", "(i) MSRVTT is the same dataset as in the captioning task.", "We follow previous work (Yu et al., 2018b; Miech et al., 2019) to use the 7K train+val videos for training, and report results on the 1K test set sampled by Yu et al. (2018b).", "(ii) YouCook2 is the same dataset as in the captioning task.", "We evaluate our model on the clip retrieval task as in previous work (Miech et al., 2019; Zhu and Yang, 2020).", "Video Question Answering.", "We evaluate video question answering (QA) performance on the MSRVTT-QA (Xu et al., 2017) dataset.", "It contains 243K open-ended questions constructed based on the videos and captions in MSRVTT.", "We use the BERT-base (Devlin et al., 2019) architecture as our transformer encoder, with hidden size 768 and 12 transformer layers.", "The entire model contains 115M parameters.", "The maximum length of the video features is set to 100 for both pre-training and downstream tasks.", "We use the Adam optimizer (Kingma and Ba, 2014) to optimize the model, with a learning rate of 1e-4, β1 = 0.9, β2 = 0.98, and an L2 weight decay of 0.01.", "For pre-training, we train the model for 20 epochs until convergence.", "Dense captions in different frames are potentially repeated if contiguous frames contain similar objects.", "This is expected, as some videos have smooth shooting that stays at one angle for an extended time.", "We filter those dense captions to avoid redundancy.", "For downstream tasks, we finetune from the same pre-trained weights and use the same training and optimization settings as in pre-training.", "We conduct all the experiments using NVIDIA GeForce GTX 1080Ti GPUs and Intel(R) Xeon(R) E5-2630 v4 CPUs.", "During pre-training, the model's inference speed under this infrastructure with one GPU is 5 samples per second.",
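One plausible implementation of the per-frame dense caption sampling and redundancy filtering described above; the exact filtering rule is not spelled out in the text, so the skip-if-already-seen strategy is our assumption.

```python
import random

def video_level_captions(frames_caps, per_frame=2, seed=None):
    """Samples up to `per_frame` dense captions per frame, skips captions already
    seen in earlier frames, and concatenates them into one video-level caption."""
    rng = random.Random(seed)
    seen, out = set(), []
    for caps in frames_caps:  # one list of dense captions per sampled frame
        fresh = [c for c in caps if c not in seen]
        for c in rng.sample(fresh, min(per_frame, len(fresh))):
            seen.add(c)
            out.append(c)
    return ", ".join(out)
```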
"We present our results on three downstream tasks across five datasets, and compare the results against state-of-the-art methods.", "All the downstream results are obtained by fine-tuning the same pre-trained model, which is pre-trained with dense captions and the constrained attention loss.", "Video Captioning.", "We follow Vaswani et al. (2017) to train auto-regressive captioning models, by only allowing the text tokens to attend to the tokens that precede them during training.", "At inference time, we use beam search with beam size 5 to generate captions.", "For MSRVTT, we evaluate captioning performance at the sentence level.", "For YouCook2, we follow previous work (Lei et al., 2020a; Ging et al., 2020) to evaluate performance at the paragraph level, where single-segment captions are concatenated into a paragraph for evaluation.", "We use the standard metrics BLEU@4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), Rouge-L (Lin, 2004), and CIDEr-D (Vedantam et al., 2015) to report performance.", "Table 1 shows the comparison on MSRVTT; our DECEMBERT model achieves a significant performance gain over the previous state-of-the-art.", "Notably, DECEMBERT outperforms ORG-TRL (Zhang et al., 2020) by 1.6% BLEU@4, 2.6% Rouge-L, and 1.4% CIDEr-D, even though ORG-TRL uses a set of strong visual features (appearance, motion, and object) together with a sophisticated graph encoder network and external language model supervision.", "Table 2 shows the results on the YouCook2 captioning task.", "Overall, DECEMBERT outperforms previous methods across all metrics.", "Compared to the strong baseline MART+COOT+MIL-NCE (Lei et al., 2020a; Ging et al., 2020; Miech et al., 2020) PT, which uses HowTo100M videos for pre-training followed by a designated hierarchical modeling training, our approach still shows better performance by a reasonable margin.", "This shows the effectiveness of our pre-training strategy.", "Text-to-video Retrieval.", "We train text-to-video retrieval models similar to the way we perform video-text matching, where we sample a negative caption 50% of the time.", "We use average recall at K (R@K) and median rank (MdR) to report performance.", "Table 3 (text-to-video retrieval results on the MSRVTT 1K test set (Yu et al., 2018b); columns: R@1, R@5, R@10, MdR): HERO (Li et al., 2020b) w/ ASR, PT: 20.5, 47.6, 60.9; JSFusion (Yu et al., 2018b): 10.2, 31.2, 43.2, 13.0; HowTo (Miech et al., 2019): 12.1, 35.0, 48.0, 12.0; HowTo (Miech et al., 2019) PT: 14.9, 40.2, 52.8, 9.0; Univilm (Luo et al., 2020) PT: 15.4, 39.5, 52.3, 9.0; ActBERT (Zhu and Yang, 2020) PT: 16.3, 42.8, 56.9, 10.0; HERO (Li et al., 2020b) PT: 16.8, 43.4, 57.7; DECEMBERT: 17.5, 44.3, 58.6, 9.0.", "We show the MSRVTT text-to-video retrieval results in Table 3.", "Overall, our approach achieves the best performance.", "Compared to the pre-trained models HowTo (Miech et al., 2019), ActBERT (Zhu and Yang, 2020), and HERO (Li et al., 2020b), DECEMBERT achieves strong performance with a reasonable margin.", "It outperforms HERO by 0.7% R@1; note that HERO is pre-trained with extra TV-show videos (Lei et al., 2018; Liu et al., 2020a) in addition to the HowTo100M videos that we use.", "Moreover, DECEMBERT is also quite competitive with the HERO w/ ASR model, which uses additional ASR features during finetuning.", "For the YouCook2 text-to-video retrieval results shown in Table 4, our approach also shows better performance compared to the pre-trained models HowTo and COOT+MIL-NCE.", "Notably, it outperforms the previous state-of-the-art COOT+MIL-NCE by 7.5% R@10.", "Video Question Answering.", "We use a two-layer MLP followed by a softmax layer for open-ended question answering, where we optimize the probability of choosing the correct answer from a large pool of candidate answers.", "We report accuracy to measure the QA performance.", "Table 5 (video question answering results on the MSRVTT-QA test set): ST-VQA (Jang et al., 2017): 30.9; Co-Memory (Gao et al., 2018): 32.0; AMU (Xu et al., 2017): 32.5; Heterogeneous Memory (Fan et al., 2019): 33.0; HCRN (Le et al., 2020): 35.6; DECEMBERT: 37.4.",
"We show the MSRVTT-QA results in Table 5, where our approach outperforms all the baseline methods by a large margin.", "Compared to HCRN (Le et al., 2020), which employs a complicated hierarchical reasoning module, our approach achieves a 1.8% performance gain, setting a new state-of-the-art for the task.", "4.4 Analysis", "Ablation Study.", "We present an ablation study of our pre-training strategies, on both the pre-training tasks and the MSRVTT captioning downstream task.", "We report ablation results on our 10K held-out HowTo100M videos for the pre-training tasks, i.e., masked language modeling (MLM) accuracy and video-text matching accuracy.", "Because we use MLM for both the dense captions and the original ASR captions, we report their accuracies separately.", "The results are shown in Table 6.", "To understand how the pre-training strategies affect the downstream performance, we also perform downstream finetuning from pre-trained models using these different pre-training strategies.", "The results are shown in Table 7.", "Compared to the basic model that uses only a single paired ASR caption with each clip for training, we observe that the variant that takes three ASR captions achieves significantly higher accuracy in MLM and video-text matching.", "Adding dense captions and the constrained attention loss further improves the performance.", "Overall, the same trend also holds for the downstream performance on the MSRVTT captioning and QA tasks.", "The best captioning and QA models are finetuned from the model pre-trained using both the dense captions and the constrained attention loss.", "Compared to the basic model with only MLM and video-text matching, our best models achieve a significant performance gain: e.g., 3.3% BLEU@4 and 3.1% CIDEr-D for captioning, and 2.3% accuracy for QA.", "Attention Visualization.", "Recall that the constrained attention loss is applied to all attention heads across all layers.", "In Figure 4, we compare the attention maps from models pre-trained with or without the proposed constrained attention loss.", "As we found that the attention weight distributions (not the absolute values) on different layers look similar to each other, we randomly chose the 10th layer to showcase the effect of adding the constrained attention loss.", "We observe that after adding the constrained attention loss as a regularization, the attention mass concentrates on the best-matched ASR caption rather than being distributed over all the captions.", "In this work, we propose DECEMBERT as an improved pre-training method for learning from noisy, unlabeled instructional videos.", "Specifically, we propose adding automatically-extracted frame-level dense captions as an auxiliary text input for learning better video and language associations.", "We also propose a constrained attention loss that forces the model to automatically focus on the best-matched caption from a pool of misaligned caption candidates via entropy minimization.", "Comprehensive experiments on three popular video-and-language tasks (i.e., text-to-video retrieval, video captioning, and video question answering) across five datasets demonstrate the effectiveness of DECEMBERT compared to existing approaches.", "We also provide a detailed ablation study and visualizations to quantitatively and qualitatively examine the impact of our added components.", "We thank the reviewers for their helpful feedback.", "This research is supported by DARPA MCS Grant #N66001-19-2-4031, DARPA KAIROS Grant
#FA8750-19-2-1004, ARO-YIP Award #W911NF-18-1-0336, and Google Focused Research Award.", "The views contained in this article are those of the authors and not of the funding agency." ]
[ "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "result", "objective", "objective", "abstain", "objective", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "objective", "objective", "objective", "abstain", "method", "other", "other", "other" ]
[ "Human communication is multimodal in nature; it is through multiple modalities such as language, voice, and facial expressions, that opinions and emotions are expressed.", "Data in this domain exhibits complex multi-relational and temporal interactions.", "Learning from this data is a fundamentally challenging research problem.", "In this paper, we propose Modal-Temporal Attention Graph (MTAG).", "MTAG is an interpretable graph-based neural model that provides a suitable framework for analyzing multimodal sequential data.", "We first introduce a procedure to convert unaligned multimodal sequence data into a graph with heterogeneous nodes and edges that captures the rich interactions across modalities and through time.", "Then, a novel graph fusion operation, called MTAG fusion, along with a dynamic pruning and read-out technique, is designed to efficiently process this modal-temporal graph and capture various interactions.", "By learning to focus only on the important interactions within the graph, MTAG achieves state-of-the-art performance on multimodal sentiment analysis and emotion recognition benchmarks, while utilizing significantly fewer model parameters.", "1 Introduction. With recent advances in machine learning research, analysis of multimodal sequential data has become increasingly prominent.", "At the core of modeling this form of data, there are the fundamental research challenges of fusion and alignment.", "Fusion is the process of blending information from multiple modalities.", "It is usually preceded by alignment, which is the process of finding temporal relations between the modalities.", "An important research area that exhibits this form of data is multimodal language analysis, where the sequential modalities of language, vision, and acoustics are present (code is available at https://github.com/jedyang97/MTAG).", "These three modalities carry the communicative information and interact with each other through time; e.g. a positive word at the beginning of an utterance may be the cause of a smile at the end.", "When analyzing such multimodal sequential data, it is crucial to build models that perform both fusion and alignment accurately and efficiently by (a) aligning arbitrarily distributed asynchronous modalities in an interpretable manner, (b) efficiently accounting for short and long-range dependencies, and (c) explicitly modeling the inter-modal interactions between the modalities while simultaneously accounting for intra-modal dynamics.", "[Figure 2: The 3-stage MTAG framework: Node Construction, Edge Construction, and Fusion+Pruning. Node Construction: each modality's features are first passed through a distinct Feed-Forward-Network to be mapped into the same embedding size; then, a positional embedding is added to each transformed feature based on its position in its own modality, so that temporal information is encoded; the features are now nodes in the graph. Edge Construction: we then apply an algorithm to construct edges among these nodes by appropriately indexing each edge with a modal type and a temporal type. Fusion+Pruning: finally, we pass the graph into the MTAG module to learn interactions across modality and time; the output graph with updated node embeddings and pruned edges can be passed to downstream modules, e.g. 
a Multi-layer Perceptron, to complete specific tasks such as regression or classification.]", "In this paper, we propose MTAG (Modal-Temporal Attention Graph).", "MTAG is capable of both fusion and alignment of asynchronously distributed multimodal sequential data.", "Modalities do not need to be pre-aligned, nor do they need to follow a similar sampling rate.", "MTAG can capture interactions of various types across any number of modalities all at once, in contrast to previous methods that model bi-modal interactions one pair at a time (Tsai et al., 2019a).", "At its core, MTAG utilizes an efficient trimodal-temporal graph fusion operation.", "Coupled with our proposed dynamic pruning technique, MTAG learns a parameter-efficient and interpretable graph.", "In our experiments, we use two unaligned multimodal emotion recognition and sentiment analysis benchmarks: IEMOCAP (Busso et al., 2008) and CMU-MOSI (Zadeh et al., 2016).", "The proposed MTAG model achieves state-of-the-art performance with far fewer parameters.", "Subsequently, we visualize the learned relations between modalities and explore the underlying dynamics of multimodal language data.", "Our model incorporates all three modalities in both alignment and fusion, a fact that is also substantiated in our ablation studies.", "2 Related Works. Human Multimodal Language Analysis. Analyzing human multimodal language involves learning from data across multiple heterogeneous sources that are often asynchronous, i.e. language, visual, and acoustic modalities that each use a different sampling rate.", "Earlier works assumed multimodal sequences are aligned based on word boundaries (Lazaridou et al., 2015; Ngiam et al., 2011; Gu et al., 2018; Dumpala et al., 2019; Pham et al., 2019) and applied fusion methods for aligned sequences.", "To date, modeling unaligned multimodal language sequences remains understudied, except for (Tsai et al., 2019a; Khare et al., 2020; Zheng et al., 2020), which used cross-modal Transformers to model unaligned multimodal language sequences.", "However, the cross-modal Transformer module is a bi-modal operation that only accounts for two modalities' input at a time.", "In Tsai et al. (2019a), the authors used multiple cross-modal Transformers and applied late fusion to obtain trimodal features, resulting in a large number of parameters needed to retain the original modality information.", "Other works that also used the cross-modal Transformer architecture include Yang et al. (2020) and Siriwardhana et al. (2020).", "In contrast to the existing works, our proposed graph method, with a very small number of model parameters, can aggregate information from multiple (more than 2) modalities at an early stage by building edges between the corresponding modalities, allowing richer and more complex representations of the interactions to be learned.", "Graph Neural Networks. The Graph Neural Network (GNN) was introduced in (Gori et al., 2005; Scarselli et al., 2008) with an attempt to extend deep neural networks to handle graph-structured data.", "Since then, there has been increasing research interest in generalizing deep neural network operations such as convolution (Kipf and Welling, 2016; Schlichtkrull et al., 2017; Hamilton et al., 2017), recurrence (Nicolicioiu et al., 2019), and attention (Velickovic et al., 2018) to graphs.", "Recently, several heterogeneous GNN methods (Wang et al., 2019a; Wei et al., 2019; Shi et al., 2016) have been proposed. 
The heterogeneous nodes referred to in these works consist of uni-modal views of multiple data-generating sources (such as a movie metadata node, an audience metadata node, etc.), whereas in our case the graph nodes represent multimodal views of a single data-generating source (visual, acoustic, and textual nodes from a single speaking person).", "In the NLP domain, multimodal GNN methods have been applied (Khademi, 2020; Yin et al., 2020) to tasks such as Visual Question Answering and Machine Translation.", "However, these settings still differ from ours because they focused on static images and short text which, unlike the multimodal video data in our case, do not exhibit long-term temporal dependencies across modalities.", "Based on these findings, we discovered that there has been little research using graph-based methods for modeling unaligned, multimodal language sequences, which include video, audio and text.", "In this paper, we demonstrate that our proposed MTAG method can effectively model such unaligned, multimodal sequential data.", "3 MTAG. In this section, we describe our proposed framework, Modal-Temporal Attention Graph (MTAG), for unaligned multimodal language sequences.", "We describe how we formulate the multimodal data into a graph G(V, E), and the MTAG fusion operation that operates on G.", "In essence, our graph formulation by design alleviates the need for any hard alignments and, combined with MTAG fusion, allows nodes from one modality to interact freely with nodes from all other modalities at the same time, breaking the limitation of only modeling pairwise modality interactions in previous works.", "[Table 1: Terminologies used in this paper. v_i: node i; e_ij: edge from v_i to v_j; N_i: neighbor nodes incident into v_i; x_i: initial node feature for v_i; x'_i: transformed node feature for v_i; ν_i: node type for v_i; μ_ij: edge modality type for e_ij; τ_ij: edge temporal type for e_ij; M_{ν_i}: node-type-specific transformation matrix; a^{μ_ij,τ_ij}_[h]: edge-type-specific learnable attention vector; α_{[h],i,j}: raw attention score of node pair (v_i, v_j); α̂_{[h],i,j}: attention weight of node pair (v_i, v_j), normalized over N_i; z_i: node output feature for v_i; k: prune percentage; h: index of a multi-head attention head; H: number of total attention heads.]", "Figure 2 gives a high-level overview of the framework.", "3.1 Node Construction. As illustrated in Figure 2, each modality's input feature vectors are first passed through a modality-specific Feed-Forward-Network.", "This allows feature embeddings from different modalities to be transformed into the same dimension.", "A positional embedding (details in Appendix A) is then added (separately for each modality) to each embedding to encode temporal information.", "The output of this operation becomes a node v_i in the graph.", "Each node is marked with a modality identifier ν_i, where ν_i ∈ {Audio, Video, Text} in our case.",
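As a concrete illustration of this node-construction step, the sketch below is our own simplified PyTorch rendering, not the authors' released code; the input dimensionalities (300 for GloVe text, 35 for Facet visual, 74 for COVAREP acoustic features) are only illustrative assumptions:

```python
import math
import torch
import torch.nn as nn

def sinusoidal_positions(seq_len, dim):
    """Standard sinusoidal positional embeddings (dim assumed even)."""
    pos = torch.arange(seq_len).unsqueeze(1).float()
    div = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    pe = torch.zeros(seq_len, dim)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

class NodeConstruction(nn.Module):
    def __init__(self, in_dims, dim):
        super().__init__()
        # One FFN per modality, mapping raw features to a shared embedding size.
        self.ffn = nn.ModuleDict({m: nn.Linear(d, dim) for m, d in in_dims.items()})
        self.dim = dim

    def forward(self, feats):
        """feats: dict modality -> [seq_len_m, in_dim_m]; lengths may differ."""
        nodes, types = [], []
        for m, x in feats.items():
            # Project, then add positions computed separately per modality.
            h = self.ffn[m](x) + sinusoidal_positions(x.size(0), self.dim)
            nodes.append(h)
            types += [m] * x.size(0)
        return torch.cat(nodes, dim=0), types  # graph nodes + modality tags

node_builder = NodeConstruction({"text": 300, "video": 35, "audio": 74}, dim=64)
```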
"3.2 Edge Construction. In this section, we describe our design of modality edges and temporal edges.", "For a given node of a particular modality, its interactions with nodes from different modalities should be considered differently.", "For example, given a Video node, its interaction with an Audio node should be different from that with a Text node.", "In addition, the temporal order of the nodes also plays a key role in multimodal analysis (Poria et al., 2017).", "For example, a transition from a frown to a smile may imply a positive sentiment, whereas a transition from a smile to a frown may imply a negative sentiment.", "Therefore, interactions between nodes that appear in different temporal orders should also be considered differently.", "In GNNs, the edges define how node features are aggregated within a graph.", "In order to encapsulate the diverse types of node interactions, we assign edge types to each edge so that information can be aggregated differently on different types of edges.", "By indexing edges with edge types, different modal and temporal interactions between nodes can be addressed separately.", "Multimodal Edges. As we make no assumption about prior alignment of the modalities, the graph is initialized to be a fully connected graph.", "We use e_ij to represent an edge from v_i to v_j.", "We assign e_ij a modality type identifier μ_ij = (ν_i → ν_j).", "For example, an edge pointing from a Video node to a Text node will be marked with type μ_ij = (Video → Text).", "Temporal Edges. In addition to μ_ij, we also assign a temporal label τ_ij to each e_ij.", "Depending on the temporal order of v_i and v_j connected by e_ij, we set the value of τ_ij to one of {past, present, future}.", "For nodes from the same modality, the temporal orders can be easily determined by comparing their order of occurrence.", "To determine the temporal orders for nodes across different modalities, we first roughly align the two modalities with our pseudo-alignment.", "Then the temporal order can simply be read off.", "Pseudo-Alignment. As mentioned above, it is simple to determine the temporal edge types for nodes in a single modality.", "However, there is no clear definition of \"earlier\" or \"later\" across two modalities, due to the unaligned nature of our input sequences.", "To this end, we introduce the pseudo-alignment heuristic that coarsely defines the past, present and future connections between nodes across two modalities.", "Given a node v_i from one modality ν_i, our pseudo-alignment first determines a set of nodes V_{i,present} in the other modality that can be aligned to v_i and considered as \"present\".", "All nodes in the other modality that exist after V_{i,present} are considered \"future\" (V_{i,future}), and all those before are considered \"past\" (V_{i,past}).", "Once the coarse temporal order is established, the cross-modal temporal edge types can be easily determined.", "Figure 3 shows an example of such a pseudo-alignment, and more details regarding the calculations can be found in Appendix A.2.", "[Figure 3: An example of the pseudo-alignment between two unaligned sequences, with vision and text nodes connected by past, present, and future edges over time. We first align the longer sequence to the shorter one as uniformly as possible. Then the aligned nodes from the longer sequence become the V_{i,present} for node v_i in the shorter sequence. V_{i,past} and V_{i,future} can then be determined accordingly.]",
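The edge-typing logic can be made concrete with a short sketch (again our own illustrative code, including one simple reading of the pseudo-alignment heuristic, whose exact calculation the paper defers to Appendix A.2):

```python
def pseudo_align(len_short, len_long):
    """Map each index of the shorter sequence to a contiguous block of the
    longer one, spreading the longer sequence as uniformly as possible."""
    bounds = [round(i * len_long / len_short) for i in range(len_short + 1)]
    return [range(bounds[i], bounds[i + 1]) for i in range(len_short)]

def temporal_type(j, present_block):
    """Temporal label for a cross-modal edge whose source index is j,
    given the source indices aligned to the target node as 'present'."""
    if j in present_block:
        return "present"
    return "past" if j < min(present_block) else "future"

# Example: a 4-step text sequence against a 10-step video sequence.
blocks = pseudo_align(4, 10)          # text step i <-> video frames blocks[i]
edge_modality = ("video", "text")     # mu_ij for a video -> text edge
edge_temporal = temporal_type(7, blocks[1])  # 'future' in this case
print(list(map(list, blocks)), edge_modality, edge_temporal)
```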
"3.3 Fusion and Pruning. 3.3.1 MTAG Fusion.", "With our formulation of the graph, we design the MTAG fusion operation that can digest our graph data with its various node and edge types, and thus model the modal-temporal interactions.", "An algorithm of our method is shown in Algorithm 1 and a visual illustration is given in Figure 4.", "Specifically, for each neighbor node v_j that has an edge incident into a center node v_i, we compute a raw attention score α_{[h],i,j} based on that edge's modality and temporal type: $\alpha_{[h],i,j} = \mathrm{LeakyReLU}\big(a^{\mu_{ji},\tau_{ji}}_{[h]} [x'_i \,\|\, x'_j]\big)$ (1), where $\|$ denotes the concatenation of two column vectors into one long column vector.", "The [h] index is used to distinguish which multi-head attention head is being used.", "Note that $a^{\mu_{ji},\tau_{ji}}_{[h]}$ depends on both the modality and temporal edge types of e_ji.", "This results in 27 edge types (9 types of modality interaction × 3 types of temporal interaction).", "We normalize the raw attention scores over all neighbor nodes v_j with a Softmax, so that the normalized attention weights sum to 1 and preserve the scale of the node features in the graph: $\hat{\alpha}_{[h],i,j} = \frac{\exp(\alpha_{[h],i,j})}{\sum_{k \in N_i} \exp(\alpha_{[h],i,k})}$ (2).", "Then, we perform node feature aggregation for each node v_i following: $z_i = \big\Vert_{h=1}^{H} \big( \sum_{j \in N_i} \hat{\alpha}_{[h],i,j}\, x'_j \big)$ (3), where N_i defines the neighbors of v_i and the hyperparameter H is the number of total attention heads.", "[Algorithm 1: MTAG with edge pruning. 1: feature transformation x'_i ← M_{ν_i} x_i, ∀i; 2: for h = 1…H do; 3: for j ∈ N_i, ∀i do; 4: calculate the raw attention score using modality- and temporal-edge-type-specific parameters, α_{[h],i,j} = LeakyReLU(a^{μ_{ji},τ_{ji}}_{[h]} [x'_i ‖ x'_j]); 5: normalize the raw attention scores over N_i, ∀i, to get the attention weights α̂_{[h],i,j}; 6: calculate the node output features z_i = ‖_{h=1..H} (Σ_{j∈N_i} α̂_{[h],i,j} x'_j); 7: calculate the average attention weight across all heads, ᾱ_{i,j} = (1/H) Σ_{h=1..H} α̂_{[h],i,j}; 8: sort ᾱ_{i,j} and delete the edges with the smallest k% average attention weight from N_i, ∀i, obtaining N'_i; 9: return z_i, N'_i, ∀i.]", "z_i now becomes the new node embedding for node v_i.", "After aggregation, v_i is transformed from a node with unimodal information into a node encoding the diverse modal-temporal interactions between v_i and its neighbors (illustrated by the mixing of colors of the nodes in Figure 2).", "We designed the operation to have H multi-head attention heads because the heterogeneous input data of the multimodal graph can be of different scales, making the variance of the data high.", "Adding multi-head attention could help stabilize the behavior of the operation.",
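To ground Equations (1)-(3) and Algorithm 1, here is a minimal single-layer sketch of the fusion operation in PyTorch. This is our own simplification (a dense loop over typed edges rather than an optimized sparse implementation), not the released MTAG code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTAGFusion(nn.Module):
    """One MTAG fusion layer with edge-type-specific attention vectors."""
    def __init__(self, dim, edge_types, heads=4):
        super().__init__()
        self.heads = heads
        # One attention vector a^{mu,tau}_[h] per (edge type, head), eq. (1).
        self.att = nn.ParameterDict({
            t: nn.Parameter(torch.randn(heads, 2 * dim) * 0.01) for t in edge_types
        })

    def forward(self, x, edges):
        """x: [N, dim] node features; edges: list of (src j, dst i, type key)."""
        N, H = x.size(0), self.heads
        scores = torch.full((H, N, N), float("-inf"))
        for j, i, t in edges:  # raw attention scores, eq. (1)
            pair = torch.cat([x[i], x[j]])            # [x'_i || x'_j]
            scores[:, i, j] = F.leaky_relu(self.att[t] @ pair)
        alpha = torch.softmax(scores, dim=2)          # normalize over N_i, eq. (2)
        alpha = torch.nan_to_num(alpha)               # nodes with no in-edges
        z = torch.cat([alpha[h] @ x for h in range(H)], dim=1)  # eq. (3)
        return z, alpha.mean(dim=0)                   # features + head-averaged weights

layer = MTAGFusion(dim=64, edge_types=["video->text|past"], heads=4)
```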
"3.3.2 Dynamic Edge Pruning. Our graph formulation models interactions for all 27 edge types.", "This design results in a very large number of edges in the graph, making the computation graph difficult to fit into GPU memory.", "More importantly, when there are so many edges, it is hard to prevent some of them from inducing spurious correlations and distracting the model from focusing on the truly important interactions (Lee et al., 2019; Knyazev et al., 2019).", "To address these challenges, we propose to dynamically prune edges as the model learns the graph.", "Specifically, after each layer of MTAG, we have the attention weight α̂_{[h],i,j} for each attention head h and for each edge e_ij.", "[Figure 4: Visualization of the MTAG operation around a single node. The text on each edge indicates which attention vector is used for that edge. A purple triangle represents a video node, a green circle represents a text node, and a blue square represents an audio node.]", "We take the average of the attention weights over the attention heads: $\bar{\alpha}_{i,j} = \frac{1}{H} \sum_{h=1}^{H} \hat{\alpha}_{[h],i,j}$ (4).", "Then, we sort ᾱ_{i,j} and delete the k% of edges with the smallest attention weights, where k is a hyperparameter.", "These deleted edges will no longer be calculated in the next MTAG fusion layer.", "Our ablation study in Section 5.2 empirically verifies the effectiveness of this approach by comparing it to no pruning and random pruning.", "At the end of the MTAG fusion process, we need to read out the information scattered across the nodes into a single vector so that we can pass it through a classification head.", "Recall that the pruning process drops edges in the graph.", "If all edges incident into a node have been dropped, then that node was not updated based on its neighbors.", "In that case, we simply ignore that node in the read-out process.", "We read out the graph by averaging all the surviving nodes' output features into one vector.", "This vector is then passed to a 3-layer Multi-Layer Perceptron (MLP) to make the final prediction.",
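Equation (4), the top-k% pruning, and the read-out step reduce to a few lines; the sketch below follows our reading of the paper (the k fraction and the handling of isolated nodes are as described above):

```python
import torch

def prune_edges(edges, avg_alpha, k=0.2):
    """Drop the k fraction of edges with the smallest head-averaged
    attention weight (eq. 4), returning the surviving edge list."""
    weights = torch.stack([avg_alpha[i, j] for j, i, _ in edges])
    keep = weights.argsort(descending=True)[: int(len(edges) * (1 - k))]
    return [edges[idx] for idx in keep.tolist()]

def read_out(z, edges):
    """Average the output features of nodes that still have in-edges;
    isolated nodes are ignored. An MLP head would consume this vector."""
    alive = sorted({i for _, i, _ in edges})
    return z[alive].mean(dim=0)
```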
al., 2014)), visual (Facet (iMotions, 2017)), and acoustic (COVAREP (Degottex et al., 2014)) data modalities.", "For basleine evaluations, we use Early Fusion LSTM (EF-LSTM) and Late Fusion LSTM (LF-LSTM) (Tsai et al., 2019a) as baselines.", "In addition, we compare our model against similar methods as in previous works (Tsai et al., 2019a), which combine a Connectionist Temporal Classification (CTC) loss (Graves et al., 2006) with the preexisting methods such as EF-LSTM, MCTN (Pham et al., 2019), RAVEN (Wang et al., 2019b).", "Shown in Table 2 and Table 3, MTAG substantially out-performs previous methods on unaligned IEMOCAP benchmark and CMU-MOSI benchmark on most of the metrics.", "MTAG also achieves on-par performance on the Acc 7 metric on CMU-MOSI benchmark.", "With an extremely small number of parameters, our model is able to learn better alignment and fusion for multimodal sentiment analysis task.", "Details regarding our model and hyper-parameters can be found in the Appendix A. Parameter Efficiency (MTAG vs MulT) We discover that MTAG is a highly parameter-efficient model.", "A comparison of model parameters between MTAG and MulT (Tsai et al., 2019a) (previ-ous state of the art) is shown in Table 4.", "The hyperparameter used for this comparison can be found in the Appendix.", "With only a fraction ( 6 . 25% ) of MulT's parameters, MTAG is able to achieve on-par, and in most cases superior performance on the two datasets.", "This demonstrates the parameter efficiency of our method.", "Qualitative Analysis The attention weights on the graph edges forms a natural way to interpret our model.", "We visualize the edges to probe what MTAG has learned.", "The following case study is a randomly selected video clip from the CMU-MOSI validation set.", "We observe the phenomena shown below is a general trend.", "In Figure 5, we show an example of the asymmetric bi-modal relations between vision and text.", "We observe that our model picks on meaningful relations between words such as I really enjoyed \" and facial expressions such as raising eyebrow, highlighted in the red dashed boxes in Figure 5a. Our model can also learn long-range correlation between I really enjoyed \" and head nodding.", "Interestingly, we discover that strong relations that are not detected by vision-to-text edges can be recovered by the text-to-vision edges.", "This advocates the design of the multi-type edges, which allows the model to learn different relations independently that can complement one another.", "Figure 1 gives a holistic view of the attention weights among all three modalities.", "We observe a pattern where almost all edges involve the text modality.", "A possible explanation for this observation is that the text is the dominant modality with respect to the sentiment analysis task.", "This hypothesis is verified by the ablation study in Sec. 
"Meanwhile, there appears to be only a very small number of edges connecting vision and audio directly, indicating that there might be little meaningful correlation between them.", "This resonates with our ablation studies in Table 5, where vision and audio combined produce the lowest bi-modal performance.", "Under such circumstances, our MTAG learns to kill direct audio-vision relations and instead fuses their information indirectly using the text modality as a proxy, whereas previous methods such as MulT keep audio-vision attention alive along the way, introducing possible spurious relations that could distract model learning.", "We conduct an ablation study using the unaligned CMU-MOSI dataset.", "The MTAG Full Model implements multimodal temporal edge types, adopts TopK edge pruning that keeps the edges with the top 80% of edge weights, and includes all three modalities as its input.", "[Table 5: Ablation on the unaligned CMU-MOSI validation set, reported as Acc2/F1/MAE. Edge Types: No Edge Types 82.4/82.5/0.937; Multimodal Edges Only 85.6/85.7/0.859; Temporal Edges Only 85.2/85.2/0.887. Pruning: Random Pruning (Keep 80%) 75.5/74.5/1.080; No Pruning 84.7/84.7/0.908. Modalities: Language Only 81.5/81.4/0.911; Vision Only 57.0/57.1/1.41; Audio Only 58.1/58.1/1.37; Vision, Audio 62.0/59.2/1.360; Language, Audio 85.9/85.7/0.915; Language, Vision 86.6/86.6/0.896; Full Model, All Modalities 87.0/87.0/0.859.]", "Table 5 shows the performance.", "We present research questions (RQs) as follows and discuss how the ablation studies address them.", "We first study the effect of edge types on our model performance.", "As we incrementally add in multimodal and temporal edge types, our model's performance continues to increase.", "The model with 27 edge types performs the best under all metrics.", "By dedicating one attention vector a^{μ_{ji},τ_{ji}} to each edge type, MTAG can model each complex relation individually, without having one relation interfere with another.", "As shown in Figure 5 and Table 5, such a design enhances multimodal fusion and alignment, helps maintain long-range dependencies in multimodal sequences, and yields better results.", "We compare our TopK edge pruning to no pruning and random pruning to demonstrate its effectiveness.", "We find that TopK pruning exceeds both the no-pruning and random-pruning models in every aspect.", "It is clear that, by selectively keeping the top 80% most important edges, our model learns more meaningful representations than by randomly keeping 80%.", "[Figure 5: Cross-modal attention between the video and text sequences (Video Order vs. Text Order) for the utterance \"I really enjoyed doing them for the Philly movie club so I thought oh why not um\". Annotated moments include emphasis on \"I really enjoyed\" with a raised eyebrow, a neutral facial expression, and a head nod with raised voice on \"Oh why not?!\".]",
"Our model also beats the one where no pruning is applied, which attests to our assumption and the observation from previous work (Lee et al., 2019; Knyazev et al., 2019) that spurious correlations do exist and can distract the model from focusing on important interactions.", "Therefore, by pruning away the spurious relations, the model learns a better representation of the interactions, while using significantly fewer computation resources.", "Lastly, we study the impact of different modality combinations used in our model.", "As shown in Table 5, we find that adding a modality consistently brings performance gains to our model.", "Through the addition of individual modalities, we find that adding the text modality gives the most significant performance gain, indicating that text may be the most dominant modality for our task.", "This can also be qualitatively confirmed by the concentration of edge weights around the text modality in Figure 1.", "This observation also conforms with the observations seen in prior works (Tsai et al., 2019a; Pham et al., 2019).", "On the contrary, adding audio only brings a marginal performance gain.", "Overall, this ablation study demonstrates that all modalities are beneficial for our model to learn better multimodal representations.", "In this paper, we presented the Modal-Temporal Attention Graph (MTAG).", "We showed that MTAG is an interpretable model that is capable of both fusion and alignment.", "It achieves performance similar to SOTA on two publicly available datasets for emotion recognition and sentiment analysis while utilizing a substantially lower number of parameters than a transformer-based model such as MulT.", "We thank Jianing Qian, Xiaochuang Han and Haoping Bai at CMU and the anonymous reviewers at NAACL for providing helpful discussions and feedback.", "This material is based upon work partially supported by BMW of North America, the National Science Foundation and the National Institutes of Health.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of BMW of North America, the National Science Foundation or the National Institutes of Health, and no official endorsement should be inferred." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "other", "other", "other" ]
[ "Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development.", "While the performance of NLP methods has grown enormously over the last decade, this progress has been restricted to a minuscule subset of the world's 6,500 languages.", "We introduce a framework for estimating the global utility of language technologies as revealed in a comprehensive snapshot of recent publications in NLP.", "Our analyses involve the field at large, but also more in-depth studies on both user-facing technologies (machine translation, language understanding, question answering, text-to-speech synthesis) as well as foundational NLP tasks (dependency parsing, morphological inflection).", "In the process, we (1) quantify disparities in the current state of NLP research, (2) explore some of its associated societal and academic factors, and (3) produce tailored recommendations for evidence-based policy making aimed at promoting more global and equitable language technologies.", "1 Introduction. The past decade has seen a rapid advance in natural language processing (NLP); it has grown from a relatively technical niche to a fundamental tool in virtually all domains that involve language data in any shape or form.", "NLP is now instrumental to not only bread-and-butter applications such as translation and question answering, but also tasks as wide-ranging as the detection of neurodegenerative diseases (Orimaye et al., 2017), exposing widespread gender and ethnic biases in societies (Caliskan et al., 2017), and predicting large-scale trends in collective consumer behavior (Kallus, 2014).", "Because of this, NLP has become a staple technology for everyday frequent tasks in most contemporary societies of the world (data and code to reproduce the findings discussed in this paper are available on GitHub: https://github.com/neubig/globalutility).", "For instance, an English speaker with a smartphone can now easily get accurate information on many topics through a quick query to a virtual assistant, they can consult an online translation service to translate a foreign language web page with a click, and they can interact with many different machines and computers through simple speech commands.", "These technological capabilities can be attributed to several developments over the last few decades:", "1. the advent of deep learning methods, which allow for more effective creation of NLP systems from existing data (Goldberg, 2017),", "2. the existence of standardized benchmark datasets and evaluation metrics (Wang et al., 2018; Hu et al., 2020),", "3. the prestige afforded by the research community to researchers who improve upon these benchmarks,", "4. 
the resulting large number of resources, be they computation, data, or ingenuity, that are poured into optimizing performance thereon.", "As both a theoretical and technical endeavor, NLP is experiencing an explosive increase: the annual conference of the Association for Computational Linguistics (ACL) received fewer than 300 papers in 2000, growing to slightly under 1,000 in 2010, and to more than 3,500 submissions in its 2020 edition.", "Largely as a result of this expansion of research effort, state-of-the-art systems have also achieved evaluation benchmark scores on par with human performance on a variety of NLP tasks such as question answering on English (He et al., 2021), or on automatic translation of news from German, Russian, and Chinese to English (Barrault et al., 2020) (although the significance of these parity claims has been disputed; Läubli et al., 2018).", "These upward slanting curves on standard benchmarks fail to show how uneven this development has been for all potential NLP users.", "Extensive research across NLP tasks has found systematic performance drops according to dimensions such as gender, racial identity, and language varieties, among others.", "The reasons for these biases are multifactorial and can be traced to virtually all stages in the process of NLP development, from the data used to train systems (Caliskan et al., 2017; Sap et al., 2019; De-Arteaga et al., 2019; Tatman, 2017; Tatman and Kasten, 2017; Buolamwini and Gebru, 2018; Raji and Buolamwini, 2019) to the very algorithms involved (Speicher et al., 2018; Bellamy et al., 2018; Adebayo et al., 2016).", "The growing awareness of these biases in NLP technologies brought by these studies, along with the development of novel metrics and tests to evaluate these disparities, has resulted in progressively more efficient and principled strategies to understand and mitigate them.", "However, similarly systematic approaches are still lacking in one fundamental dimension of variation across individuals: their languages.", "Out of the over 6,500 languages spoken or signed in the world today (Hammarström, 2015), only a handful are systematically represented in academia and industry (Joshi et al., 2020; Yin et al., 2021).", "In spite of the aforementioned near-human results on translation or understanding of languages from the world's economic and political superpowers, the experience of any NLP practitioner is that, for the vast majority of languages, they fall far below such standards.", "Critically, the languages of the world showcase substantial variation in most domains of description, and in fact, the performance of language technologies has been shown to be sensitive to diverse aspects of the language under study, including morphology, word order, or phonological repertoire, as well as more mundane aspects like writing script or data availability (Arivazhagan et al., 2019; Tsarfaty et al., 2020; Xia et al., 2020; Muller et al., 2021).", "Hence, the transfer of NLP developments from one language to another is far from trivial, as it often means that building highly functional language technologies for any particular language is a non-automatic, costly, and technically challenging task.", "Taking all these considerations together, and given that even the consequences brought by unequal NLP technologies across (racial, gender, socioeconomic) groups within the same nominal language are already substantial, there is a pressing need for measuring and understanding NLP performance inequalities across the world's languages.",
"Here we develop novel estimates of how the utility afforded by NLP systems is distributed across individuals, languages, and tasks at an unprecedented global scale.", "These estimates allow us to identify which languages are systematically under-served by language technologies and could benefit the most individuals from focused technology development.", "We finally trace these inequalities to the societal, economic, and academic correlates of NLP systems' performance, shedding light on its latent causes, and indicate how our results favor specific evidence-based policies in research and development.", "Our fundamental goal is evaluating the distribution of diverse representative language technologies (and their qualities) across the world's languages and their populations.", "Minimally, we would attempt to account for the patterns of association between the demand for language technologies and the utility they confer to users across languages.", "Thus, the first component of our analysis pertains to quantifying the utility that users of a given language l receive from a language technology.", "Ideally, such a measure would capture to what extent a given NLP system solves the specific problems an individual can pose to it: for instance, how successfully the user can obtain information from an automatically translated web page, or how satisfied the user is by a speech-based virtual assistant's execution of a series of verbal commands.", "Intuitively, utility is associated with the nominal performance of the technology: a more performant system will allow the user to obtain a greater degree of utility.", "How performance is measured depends on the task (see Section 1).", "Since our purpose is to allow for comparisons, we define the utility of a task and language, u_l, as the corresponding performance normalized by the best possible performance afforded by such a task, i.e. $u_l = \frac{\mathrm{performance}_l}{\text{theoretical max performance}}$.", "In cases where the best possible performance is undefined or technically unattainable, we take the empirical maximum as an estimate of the theoretical one and normalize by the best-performing language across all languages L, i.e. we replace the denominator in the above definition by $\max_{l' \in L}(\mathrm{performance}_{l'})$.",
"Defining utility in this manner allows us to explore and contrast language technologies at the broadest scale, which is possible thanks to some necessary simplifying assumptions.", "As we pointed out before, not all users of the same language technology might benefit in the same manner given a fixed performance, and the relation between nominal performance and \"true\" utility might be complex and non-linear (to give just one example, in machine-assisted human translation, the relationship between MT accuracy and productivity gain, which is directly associated with utility, is complex; Sanchez-Torron and Koehn, 2016).", "With these caveats in mind, we further quantify the second component of our analysis, the demand for a language technology in each language l, d_l.", "We characterize d_l by taking into consideration demographic and linguistic perspectives.", "Under the first perspective, the demand for a given technology in a language is estimated to be proportional to the number of speakers of the language itself, n_l (d_l ∝ n_l).", "Under the second perspective, the demand across the approximately 6,500 languages of the world is identical (d_l ∝ 1).", "These two alternatives, as well as any intermediate combination of them, can be simply parameterized through a single exponent τ: $d_l^{(\tau)} = \frac{n_l^{\tau}}{\sum_{l' \in L} n_{l'}^{\tau}}$, where $\tau = 1$ corresponds to the demographic notion of demand, $\tau = 0$ to the linguistic one, and $0 < \tau < 1$ to anything in between.", "Equipped with these notions, we construct a simple family of global metrics ($M_\tau$) revealing to what degree the global demand for language technologies is actually met: $M_\tau = \sum_{l \in L} d_l^{(\tau)} u_l$.", "$M_\tau$ has a number of intuitive properties we would like such a metric to have.", "$M_\tau$ is bounded between 0 and 1; 0 corresponds to a case where no-one benefits from a given language technology, whereas 1 would correspond to a situation where all languages enjoy perfect technology.", "Increasing the utility of a given language leads to an increase in $M_\tau$, and the magnitude of this increase is influenced by both the size of the improvement and the demand in that language.",
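Under these definitions, the whole metric family reduces to a few lines of code; the sketch below (with toy numbers purely for illustration, not measured utilities) computes M_τ from per-language utilities and speaker counts:

```python
def global_metric(utilities, populations, tau):
    """M_tau = sum_l d_l^(tau) * u_l, with d_l^(tau) proportional to n_l^tau.

    utilities:   dict language -> u_l in [0, 1]
    populations: dict language -> number of speakers n_l
    tau:         1.0 demographic, 0.0 linguistic, values in between mix both
    """
    weights = {l: populations[l] ** tau for l in utilities}
    total = sum(weights.values())
    return sum(weights[l] / total * utilities[l] for l in utilities)

# Toy example (numbers are illustrative only):
u = {"eng": 0.95, "hin": 0.55, "swa": 0.20}
n = {"eng": 1.35e9, "hin": 6.0e8, "swa": 7.0e7}
print(global_metric(u, n, tau=1.0), global_metric(u, n, tau=0.0))
```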
"We apply our measures of utility and demand to a set of diverse and representative NLP tasks, which are described below and summarized in Table 1.", "The first three are tasks that technology users interact with directly in their everyday life, so that their output is already in a shape and form that is usable for most individuals.", "Question Answering (QA) consists of crafting a relevant answer to a question formulated in natural language, such as \"what is the capital city of the Philippines?\" or \"why do dogs like bones?\"; this task is ubiquitous in online search and virtual assistants.", "Machine Translation (MT) is the task of translating from one language to another (e.g. from Tagalog to Estonian or from Japanese to Basque), and is typically used to facilitate inter-personal communication, information gathering, and e-commerce.", "Text-to-speech (TTS) is the task of rendering speech from textual input, which is used widely in spoken virtual assistants and car navigation systems, and is becoming a gateway for internet-of-things devices.", "Next, Natural Language Inference (NLI) is a central task in AI and involves the evaluation of information presented in propositional format.", "More specifically, given a sentence called the premise (e.g. the dog chewed a big bone), NLI systems decide whether a separate sentence called the hypothesis is entailed by the premise (e.g. the dog gnawed at a bone), negated by it (e.g. the dog was sleeping), or neither (e.g. the dog likes bones).", "While not a user-facing task per se, it measures the ability of NLP systems to adequately represent (and understand) user queries.", "Beyond these three (plus one) user-facing tasks, we also consider two more foundational, linguistically-focused tasks, which often inform part of the pipelines of the user-facing tasks but which are rarely if ever encountered \"in the wild\" by language technology users.", "Morphological Inflection (Inflection) is the task of generating an inflected wordform given a lemma and a morphological specification, e.g. producing the third person singular form of run: run + 3;SG → runs.", "Syntactic Parsing under the dependency formalism (DEP) is the task of producing a syntactic parse of an input sentence, e.g. given the sentence \"dogs like bones\", specifying that \"dogs\" and \"bones\" are the subject and object of \"like\", respectively.", "2.3 Correlates of NLP utility. Beyond the performance of individual tasks, we take a bird's-eye view of the field of language technologies in general, as we analyze some of the correlates of scientific production in NLP.", "In particular, we follow two broad guiding questions: (1) does the system of academic incentives promote the development of a more linguistically diverse NLP? and (2) is economic centrality or sheer demographic demand the best predictor of NLP technologies in any given language?", "While a full understanding of the complex causal mechanisms binding society and NLP in general is outside of the scope of the present article, we set out to provide a first large-scale exploration of these matters by considering scientific publications appearing in major international NLP conferences as the basic units of science production.", "This simplification is not without challenges: for instance, some widely used language technologies are developed outside of the traditional scientific circuit based on proprietary technology, or they are published in local conferences, possibly in languages other than English (e.g. the Japanese NLP society's 2020 conference published 396 papers: https://www.anlp.jp/proceedings/annual_meeting/2020/).", "In spite of this, studying scientific publications (and their correlates) allows us to evaluate transparent questions on the basis of publicly available data at a scale that is unfeasible for in-depth analyses.", "Therefore, we study the first question by determining whether the cumulative number of citations a paper receives is correlated with the number of languages it is associated with.", "We investigate our second question by finding the best predictive model of the number of NLP papers in any given language, contrasting two predictors: the estimated number of users worldwide and the approximate GDP associated with its users.", "We model these regression problems in a Bayesian generalized mixed effects framework (see Appendix B).",
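The paper specifies a Bayesian generalized mixed effects framework (Appendix B); as a rough, non-Bayesian stand-in for the predictor comparison, a Poisson GLM can be sketched with statsmodels. All numbers below are made up, and the AIC comparison is our simplified proxy for the model-comparison procedure:

```python
import numpy as np
import statsmodels.api as sm

# Toy stand-in data: log-predictors for a handful of languages.
log_gdp   = np.log(np.array([2.1e13, 4.3e12, 2.8e12, 1.1e11, 9.0e9]))
log_users = np.log(np.array([1.35e9, 6.0e8, 1.25e8, 8.0e7, 7.0e6]))
papers    = np.array([5000, 300, 800, 40, 5])   # hypothetical paper counts

for name, x in [("GDP", log_gdp), ("users", log_users)]:
    X = sm.add_constant(x)                       # intercept + predictor
    fit = sm.GLM(papers, X, family=sm.families.Poisson()).fit()
    print(name, "AIC:", round(fit.aic, 1))       # lower AIC = better predictor
```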
"2.4 Data. We manually aggregate information on task performance and demand for the tasks summarized in Table 1 from a number of sources (we relegate many details to Appendix A, and give a high-level overview here).", "The data is taken from a combination of multilingual benchmarks, shared tasks, and published results in NLP conferences, including the following.", "Question answering: We use data from the TyDi-QA (Clark et al., 2020), MLQA (Lewis et al., 2020), and SD-QA (Faisal et al., 2021) benchmarks and measure raw accuracy to calculate utility.", "Machine translation: We aggregate scores from the WMT and IWSLT evaluation campaigns, and 50 studies from the last three years' ACL, EMNLP, and NAACL conferences, using BLEU (Papineni et al., 2002) as an accuracy metric.", "Text-to-speech: We use data from the CMU Wilderness Project (Black, 2019) and use normalized negative mel-cepstral distortion (Kubichek, 1993) as an accuracy metric.", "Natural language inference: We use results from the XNLI leaderboard (Conneau et al., 2018) and raw accuracy as the evaluation metric.", "Syntactic parsing: We use the accuracies provided by UDPipe (Straka, 2018) and UDify (Kondratyuk and Straka, 2019) on the Universal Dependencies corpus (Zeman et al., 2017), with labeled attachment score as an accuracy metric.", "Morphological inflection: We use results from the SIGMORPHON workshops' inflection shared tasks (e.g. Vylomova et al., 2020), measuring utility with exact-match accuracy.", "[Figure 1: Left panel: linguistic and demographic global utility metrics for a number of language technology tasks. The red curve corresponds to the sequence where first the language with the largest number of users is set to utility 1, then the second, and so on. Right panel: recent historical progression of two language technology tasks, Inflection and Machine Translation from English.]", "Demographic and linguistic information necessary for the estimation of demands was obtained from a variety of sources, including Ethnologue, Glottolog, and the World Trade Organisation.", "For most tasks, the number of first-language speakers is used to measure demand, but for MT we estimate the need for translation between two languages based on economic indicators of interaction between countries, and the language-speaking populations within the countries where the language is spoken.",
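Because the tasks above report heterogeneous raw metrics (exact-match accuracy, BLEU, normalized negative mel-cepstral distortion, labeled attachment score), each must first be mapped onto the common utility scale u_l before computing M_τ. A sketch of that mapping, with function names and example values of our own:

```python
def utilities_from_scores(scores, theoretical_max=None):
    """Normalize raw per-language scores into u_l in [0, 1].

    If a theoretical maximum exists (e.g. 1.0 for raw accuracy), divide
    by it; otherwise fall back to the empirical best-performing language,
    as described in the utility definition above.
    """
    denom = theoretical_max if theoretical_max is not None else max(scores.values())
    return {l: s / denom for l, s in scores.items()}

# Accuracy-style metric with a known maximum vs. BLEU with empirical max.
acc_u  = utilities_from_scores({"eng": 0.81, "swa": 0.55}, theoretical_max=1.0)
bleu_u = utilities_from_scores({"deu": 42.0, "tam": 11.3})  # normalized by deu
```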
"3 Results and Analysis. 3.1 General observations.", "Figure 1 presents an overview of our main findings.", "Unsurprisingly, most NLP tasks we focus on fare substantially better when utility is measured demographically rather than linguistically.", "Text-to-speech synthesis is the task with the most linguistic coverage: the published results (due to a single study (Black, 2019)) cover more than 630 languages (or about 10% of the world's languages).", "However, for the vast majority of these languages the measured quality of the generated speech is about half as good as that of the exceptionally good English system (Ren et al., 2021).", "The next most linguistically diverse tasks are those regarding morphosyntactic analysis, i.e. morphological inflection and dependency parsing, which have been evaluated over 140 and 90 languages respectively.", "For these more esoteric tasks, which do not necessarily convey direct utility to a downstream user, the majority of the systems are in general very good.", "Natural language inference (NLI; a representative natural language understanding task) and question answering (QA) lie on the opposite side of the spectrum: the established benchmarks have only focused on up to 15 and 17 languages respectively, leading to very low scores on the linguistic axis.", "In Figure 1 (right panel) we observe the progression of the utility metrics in tasks for which we had access to comparable data across a span of the last 7 years.", "The extensive efforts of the UniMorph project (Kirov et al., 2018) to cover as many languages as possible are visible in the Inflection plot, with significant improvements over time.", "On the other hand, the machine translation field is still in the process of ramping up following demographic and/or socioeconomic priorities, with improved linguistic coverage over the years.", "The granularity of these findings can be increased on the basis of available data.", "Figure 2 additionally presents demographic utility across language populations for all tasks.", "The visualization allows for the identification of evident gaps in received utility.", "The two bottom plots of Figure 2 display our metrics over speakers of a single language, based on question answering results for different spoken Arabic and Swahili lectal varieties (Faisal et al., 2021).", "This analysis shows that utility differences are small between Arabic vernaculars, although these systems still lag behind the systems for Modern Standard Arabic, while the utility level of Coastal Swahili speakers in Tanzania is about 10% lower than that for speakers in Kenya.", "[Figure 2: Illustration of our metric on demographic-focused utility (τ = 1) on various NLP tasks. Each panel plots relative quality against number of speakers: Dependency Parsing (M_1 = 0.63), Morphological Inflection (M_1 = 0.64), Natural Language Inference (M_1 = 0.42), Question Answering (M_1 = 0.36), Speech Synthesis (M_1 = 0.32), Machine Translation X→English (M_1 = 0.49), X→Spanish (M_1 = 0.36), X→Bengali (M_1 = 0.10), QA on Arabic vernaculars (M_1 = 0.58), and QA on Swahili vernaculars (M_1 = 0.23). Language codes are ISO 639-3.]",
"[Figure 3: Priority languages (top-3 shown per task, for τ ranging from 1 down to 0.01) change with different balancing of demographic and linguistic utility, with focus shifting from populous languages, e.g. Mandarin (cmn) and Hindi (hin), to more under-served languages, e.g. Kuranko (knk), Bambara (bam), Tatar (tat), or Aimele (ail). Tasks shown: MT to English, MT from English, Syntactic Analysis (Dep. Parsing), Morphological Inflection, and Speech Synthesis.]", "3.2 Priorities in NLP development. Given the current snapshot of NLP systems, we can ask which languages would lead to the largest global utility improvement.", "The relative importance of linguistic vs. demographic demands determines the priority ranking, as can be observed in Figure 3 for a sample of five tasks.", "Improving on the demographic-focused utility entails a greater emphasis on Mandarin Chinese, Hindi, Spanish, and other populous languages that are generally well-served by current technologies.", "Balancing linguistic and demographic considerations leads to prioritizing a more diverse set of languages, mostly Asian and African languages like Amharic, Bambara, Bengali, Thai, or Yoruba, which are both populous and under-served, along with large but severely under-served languages like Kurdish, Urdu, and Oromo.", "Further emphasis on linguistic utility would lead to the prioritization of indigenous and potentially endangered languages of small communities like Aimele, Itelmen, North Sami, or Warlpiri, which are currently largely ignored by NLP research (Bird, 2020).",
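One simple operationalization of this priority question (our own sketch, not necessarily the exact procedure behind Figure 3) is to rank languages by the marginal gain in M_τ obtained by raising their utility to 1:

```python
def priority_ranking(utilities, populations, tau, top=3):
    """Rank languages by the increase in M_tau from perfecting language l.

    With the demand weights d fixed, the gain from raising u_l to 1 is
    d_l^(tau) * (1 - u_l), so populous languages dominate at tau = 1 while
    under-served small languages rise to the top as tau approaches 0.
    """
    weights = {l: populations[l] ** tau for l in utilities}
    total = sum(weights.values())
    gains = {l: (weights[l] / total) * (1.0 - utilities[l]) for l in utilities}
    return sorted(gains, key=gains.get, reverse=True)[:top]

u = {"cmn": 0.7, "hin": 0.4, "ben": 0.2, "wbp": 0.0}   # toy utilities
n = {"cmn": 9.2e8, "hin": 6.0e8, "ben": 2.3e8, "wbp": 3_000}
print(priority_ranking(u, n, tau=1.0), priority_ranking(u, n, tau=0.0))
```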
Nevertheless, in order to contribute to this investigation, we determined whether approximate measures of economic centrality or number of language users were better predictors of the sheer number of papers published for any given language (see Appendix C). While both variables are substantially collinear, we find that approximate GDP (rather than number of users) leads to a substantially smaller prediction error on the number of published papers.

4 Discussion

Our study, covering diverse NLP tasks and types of evidence, makes apparent the immense inequality in the development of language technologies across the world's languages. After English, a handful of Western European languages dominate the field (in particular German, French, and Spanish), as well as even fewer non-Indo-European languages, primarily Chinese, Japanese, and Arabic. Our preliminary investigation suggests that it is the economic prowess of the users of a language (rather than the sheer demographic demand) that drives the development of language technologies. In spite of this, for some tasks (such as Inflection) there is an encouraging trend of both demographic and linguistic utility improving year over year.

[Figure 4: Left panel: treemap of the number of NLP publications per language (with area proportional to the number). Right top panel: relative citation rate vs. number of languages in the publication. Right bottom panel: number of publications according to number of language users and approximate GDP; point size and transparency scale with the number of publications. Language codes: eng: English, zho: Chinese, deu: German, fra: French, spa: Spanish, jpn: Japanese, rus: Russian, nld: Dutch, ces: Czech, por: Portuguese, tur: Turkish, swe: Swedish, ita: Italian, fin: Finnish, ell: Greek, lat: Latin, hun: Hungarian, ara: Arabic, kor: Korean, hin: Hindi, pol: Polish, dan: Danish.]

This is due to the nature of the task; reasonably accurate solutions can be achieved through small but highly-curated data. Since linguistic expertise on the languages of the world is, naturally, globally distributed, the main hurdle these tasks face is to pool such expertise under the premise of a common technical goal. In this respect, relatively low-cost and bottom-up actions that gather experts to work on specific NLP tasks (such as Universal Dependencies and UniMorph) have succeeded in accelerating the cross-linguistic development of language technologies. These prosper mainly on the basis of academic incentives, as those individuals or groups who contribute data and/or expertise are rewarded with individual publications or co-authorship in collective publications. Many of these contributions, which do not necessarily involve hefty resource investments but rather linguistic expertise, are markedly different from the typical publications in language technologies. However, these more esoteric tasks are tenuously associated with those that users are more likely to interact with, such as Machine Translation or Speech Synthesis. User-facing tasks all have in common a tight dependency on computational resources and large data, which in turn hinge on substantial financial means. In a context of pressing user needs across multiple populations and languages, we submit that future developments on policies aimed at furthering cross-linguistic technologies would benefit from clear (and possibly standardized) metrics that assist in streamlining complex decisions regarding resource allocation.
Our measures of global coverage fulfill that role, and help identify large but currently under-served languages. While they are not meant to supplant the necessary in-depth evaluation of the needs of each individual group and language, they provide a common ground for coordinating global efforts across heterogeneous actors. In addition, we would like to reiterate that our work here has necessarily made a large number of simplifying assumptions to even attempt to quantify disparities in language technology utility on a global scale. These most notably involve simplifying assumptions regarding the measurement of demand (based on native-speaker population and/or economic indicators) and the measurement of utility (based on simple accuracy metrics). Future work may further clarify these assumptions, making more accurate estimates of true user demand on a technology-by-technology level, or more accurately clarifying the relationship between standard accuracy metrics and the utility derived by users.

Acknowledgements

This work was supported by NSF Award 2040926.

References

Julius A Adebayo et al. 2016. FairML: ToolBox for diagnosing bias in predictive modeling. Ph.D. thesis, Massachusetts Institute of Technology.

Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elgohary, Sergey Feldman, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Peters, Joanna Power, Sam Skjonsberg, Lucy Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the literature graph in Semantic Scholar. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 84–91, New Orleans, Louisiana. Association for Computational Linguistics.

Gopala Krishna Anumanchipalli, Kishore Prahallad, and Alan W Black. 2011. Festvox: Tools for creation and analyses of large speech corpora. In Workshop on Very Large Scale Phonetics Research, UPenn, Philadelphia, page 70.

Mihael Arcan, Maja Popović, Paul Buitelaar, et al. 2016. Asistent: a machine translation system for Slovene, Serbian and Croatian. In Proceedings of the 10th Conference on Language Technologies and Digital Humanities, Ljubljana, Slovenia.

Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv:1907.05019.

Loïc Barrault, Magdalena Biesialska, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubešić, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1–55, Online. Association for Computational Linguistics.

Regina Barzilay and Min-Yen Kan, editors. 2017. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada.
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, et al. 2018. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv:1810.01943.

Emily M. Bender. 2011. On achieving and evaluating language-independence in NLP. Linguistic Issues in Language Technology, 6(3):1–26.

Steven Bird. 2020. Decolonising speech and language technology. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3504–3519, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Alan W Black. 2019. CMU Wilderness multilingual speech dataset. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5971–5975. IEEE.

Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, pages 77–91.

Paul-Christian Bürkner. 2017. brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80(1):1–28.

Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186.

Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus A. Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. Stan: a probabilistic programming language. Grantee Submission, 76(1):1–32.

Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454–470.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485.

Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D. McCarthy, Katharina Kann, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 1–27, Brussels. Association for Computational Linguistics.

Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 1–30, Vancouver. Association for Computational Linguistics.

Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task: Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10–22, Berlin, Germany. Association for Computational Linguistics.
Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120–128.

David M. Eberhard, Gary F. Simons, and Charles D. Fennig. 2018. Ethnologue: Languages of the World, twenty-second edition. SIL International.

Fahim Faisal, Sharlina Keshava, Md Mahfuz Ibn Alam, and Antonios Anastasopoulos. 2021. SD-QA: Spoken dialectal question answering for the real world. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3296–3315, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Yoav Goldberg. 2017. Neural network methods for natural language processing. Synthesis Lectures on Human Language Technologies, 10(1):1–309.

Iryna Gurevych and Yusuke Miyao, editors. 2018. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Melbourne, Australia.

Harald Hammarström. 2015. Ethnologue 16/17/18th editions: A comprehensive review. Language, 91(3):723–737.

Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with disentangled attention. In Proceedings of the International Conference on Learning Representations.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pages 4411–4421. PMLR.

Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan, editors. 2019. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.

Nathan Kallus. 2014. Predicting crowd behavior with big public data. In Proceedings of the 23rd International Conference on World Wide Web, pages 625–630.

Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sabrina J. Mielke, Arya D. McCarthy, Sandra Kübler, et al. 2018. UniMorph 2.0: Universal Morphology. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).

Kevin Knight, Ani Nenkova, and Owen Rambow, editors. 2016. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California.

Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing Universal Dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779–2795.

Anna Korhonen, David Traum, and Lluís Màrquez, editors. 2019.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy.

R. Kubichek. 1993. Mel-cepstral distance measure for objective speech quality assessment. In Proceedings of IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, volume 1, pages 125–128. IEEE.

Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? A case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791–4796.

M. Paul Lewis, Gary F. Simons, Charles D. Fennig, et al. 2009. Ethnologue: Languages of the World, volume 16. SIL International.

Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7315–7330.

Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics.

Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sabrina J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229–244, Florence, Italy. Association for Computational Linguistics.

Benjamin Muller, Antonios Anastasopoulos, Benoît Sagot, and Djamé Seddah. 2021. When being unseen from mBERT is just the beginning: Handling new languages with multilingual language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 448–462, Online. Association for Computational Linguistics.

Sylvester O. Orimaye, Jojo S. M. Wong, Karen J. Golden, Chee P. Wong, and Ireneous N. Soyiri. 2017. Predicting probable Alzheimer's disease using linguistic deficits and biomarkers. BMC Bioinformatics, 18(1):1–13.

Martha Palmer, Rebecca Hwa, and Sebastian Riedel, editors. 2017. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. ACL, pages 311–318.

Inioluwa Deborah Raji and Joy Buolamwini. 2019. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 429–435.

Yi Ren, Chenxu Hu, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2021. FastSpeech 2: Fast and high-quality end-to-end text-to-speech. In Proceedings of the International Conference on Learning Representations (ICLR).

Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii, editors. 2018.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Brussels, Belgium.

Marina Sanchez-Torron and Philipp Koehn. 2016. Machine translation quality and post-editor productivity. In Proceedings of AMTA, volume 2016, page 16.

Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678.

Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P. Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar. 2018. A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2239–2248.

Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL 2018 UD shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197–207.

Rachael Tatman. 2017. Gender and dialect bias in YouTube's automatic captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 53–59, Valencia, Spain. Association for Computational Linguistics.

Rachael Tatman and Conner Kasten. 2017. Effects of talker dialect, gender & race on accuracy of Bing Speech and YouTube automatic captions. In INTERSPEECH, pages 934–938.

Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From SPMRL to NMRL: What did we learn (and unlearn) in a decade of parsing morphologically-rich languages (MRLs)? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7396–7408, Online. Association for Computational Linguistics.

Aki Vehtari, Andrew Gelman, and Jonah Gabry. 2017. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5):1413–1432.

Ekaterina Vylomova, Jennifer White, Elizabeth Salesky, Sabrina J. Mielke, Shijie Wu, Edoardo Maria Ponti, Rowan Hall Maudslay, Ran Zmigrod, Josef Valvoda, Svetlana Toldova, Francis Tyers, Elena Klyachko, Ilya Yegorov, Natalia Krizhanovsky, Paula Czarnowska, Irene Nikkarinen, Andrew Krizhanovsky, Tiago Pimentel, Lucas Torroba Hennigen, Christo Kirov, Garrett Nicolai, Adina Williams, Antonios Anastasopoulos, Hilaria Cruz, Eleanor Chodroff, Ryan Cotterell, Miikka Silfverberg, and Mans Hulden. 2020. SIGMORPHON 2020 shared task 0: Typologically diverse morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 1–39, Online. Association for Computational Linguistics.

Marilyn Walker, Heng Ji, and Amanda Stent, editors. 2018. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, New Orleans, Louisiana.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.

Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu, editors. 2020.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Online.

Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, and Graham Neubig. 2020. Predicting performance for natural language processing tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8625–8646, Online. Association for Computational Linguistics.

Kayo Yin, Amit Moryossef, Julie Hochgesang, Yoav Goldberg, and Malihe Alikhani. 2021. Including signed languages in natural language processing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7347–7360, Online. Association for Computational Linguistics.

Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–19, Vancouver, Canada. Association for Computational Linguistics.

A Materials

Publication data. We rely on papers available through the Anthology of the Association for Computational Linguistics (https://www.aclweb.org/anthology/), which hosts more than 60 thousand papers from all major NLP conferences. We rely on Semantic Scholar (Ammar et al., 2018) for citation information. We make the working assumption that a mention of a language in a research paper likely entails that the underlying research involves this language. We follow an automatic pipeline for finding language mentions in a paper, which starts by converting the paper PDF to a machine-readable format. We then search within the paper for any mention of a language's English name(s), its endonym, as well as its ISO or Glottolog code. We then apply a post-processing step to ensure the precision of this pipeline, as our simple text-based search is prone to false positives for languages whose names match common English words (e.g. She, Male, Label, Even, The, Are), common placenames (e.g. Colorado, Nara, Sydney), parts of author names (e.g. Su, Kim, Dan, Ali, Rama), or mathematical notation (e.g. Dji, Dii). In addition, we enrich each publication by imputing its research area. There were 16 research areas identified, based on the ones represented at recent major NLP conferences (specifically starting with the 2019 version of EMNLP, and removing some of the areas that were unique to that conference).
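A hedged sketch of the mention-search step described above: the name inventory, the ambiguity stoplist, and the crude context filter are illustrative stand-ins, not the authors' actual resources or post-processing rules.

```python
# Sketch of the language-mention search described above (illustrative only:
# the name inventory and ambiguity stoplist are stand-ins, not the authors').
import re

LANGUAGE_NAMES = {
    "swa": ["Swahili", "Kiswahili"],   # English name(s) and endonym
    "ell": ["Greek"],
    "eve": ["Even"],                   # collides with the English word "even"
}
AMBIGUOUS = {"Even", "She", "Male", "Label", "The", "Are"}  # need extra checks

def mentioned_languages(paper_text):
    found = set()
    for iso, names in LANGUAGE_NAMES.items():
        for name in names + [iso]:     # also try the ISO code itself
            for m in re.finditer(r"\b" + re.escape(name) + r"\b", paper_text):
                if name in AMBIGUOUS:
                    # crude precision filter: require "language" nearby
                    context = paper_text[max(0, m.start() - 20):m.end() + 20]
                    if "language" not in context.lower():
                        continue
                found.add(iso)
    return found

print(mentioned_languages("We evaluate on Swahili and on the Even language."))
# {'swa', 'eve'}
```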
For each area, we identified 1–6 publication venues from the ACL Anthology, where more venues were chosen when each venue had relatively few publications. Based on the abstracts of papers from each of these venues, we trained a bag-of-words classifier using the linear support vector machine implementation in scikit-learn, and applied this classifier to the abstracts of the papers we wanted to classify. Necessary data and code to reproduce these results are released in the supplementary material.

Data Sources and Metrics for Utility. The majority of NLP research relies on automatic evaluation metrics over datasets annotated with gold-standard outputs. The advantage of this approach is that it allows consistent comparisons between systems and a seamless evaluation of progress on a specific evaluation set. On the other hand, there is no guarantee that even a statistically significant improvement on an automatic metric translates to improvements in user-perceived utility. Nevertheless, the reality is that virtually all published NLP research reports automatic evaluation metrics, with only a tiny fraction diverging from the norm by e.g. using human evaluations. Our analysis assumes that all named languages have standard versions that are comprehensible and acceptable to all members of the population identified as speakers in our sources." ]
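The research-area classifier just described can be approximated in a few lines. The TF-IDF bag-of-words features and scikit-learn's LinearSVC follow the description above; the toy abstracts and area labels are placeholders, not the released training data.

```python
# Sketch of the research-area classifier: bag-of-words features with the
# linear SVM from scikit-learn, as described above. Toy training data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_abstracts = [
    "We present a neural machine translation system for low-resource pairs.",
    "A new treebank and parser improve dependency parsing accuracy.",
    "We study toxicity and bias in hate speech classifiers.",
]
train_areas = ["Machine Translation", "Syntax", "Ethics"]  # placeholder areas

clf = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                    LinearSVC())
clf.fit(train_abstracts, train_areas)

print(clf.predict(["Our parser uses a biaffine scorer over dependency arcs."]))
```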
[ "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other" ]
[ "Discourse representation tree structure (DRTS) parsing is a novel semantic parsing task which has received considerable attention recently.", "State-of-the-art performance can be achieved by a neural sequence-to-sequence model, treating the tree construction as an incremental sequence generation problem.", "Structural information such as the input syntax and the intermediate skeleton of the partial output has been ignored in the model, although it could be potentially useful for DRTS parsing.", "In this work, we propose a structure-aware model at both the encoder and decoder phases to integrate this structural information, where a graph attention network (GAT) is exploited for effective modeling.", "Experimental results on a benchmark dataset show that our proposed model is effective and obtains the best performance in the literature.", "Discourse representation tree structure (DRTS) is a form of discourse structure based on the Discourse Representation Theory of Kamp and Reyle (1993), a popular theory of meaning representation (Kamp, 1981; Asher, 1993; Asher and Lascarides, 2003).", "It is designed to account for a variety of linguistic phenomena, including the interpretation of pronouns and temporal expressions within and across sentences.", "Correspondingly, as one type of discourse parsing, DRTS parsing (Liu et al., 2018) can be helpful for paragraph or document-level text understanding by converting DRS to tree-style DRTS.", "(Liu et al., 2019).", "Figure 1 shows an example of DRTS, where the leaf nodes are discourse representation units (DRUs), upon which a discourse tree structure is built.", "In particular, a DRU consists of several individual tuples, where each tuple denotes a relation inside the DRU.", "For example, there is a relationship [...].", "[Figure 1 (left): our proposed model with the two structure-aware modules.]", "The relationships between the DRUs are organized by a tree skeleton, which includes three types of nodes: the S(DRS) nodes that introduce DRUs, the relation nodes for inter-DRU relationships, and the variable nodes, which are used to define S(DRS) (e.g., p_4, k_1 and k_4).", "There have been only a few existing studies related to DRTS parsing (van Noord et al., 2018a,b).", "In particular, the end-to-end encoder-decoder model of Liu et al. (2019) gives the state-of-the-art performance, converting the task into a sequence-to-sequence problem.", "The input sequence consists of the words in the paragraph, encoded by a BiLSTM structure, and the output sequence is a top-to-bottom depth-first traversal of the output DRTS tree, which is decoded incrementally with an attention-based LSTM feature representation module.", "During decoding, Liu et al. (2019) separate the skeleton generation and the DRU production, as illustrated by Figure 1.
Although highly effective, the above model ignores structural information, in both the encoder and the decoder, that could be potentially useful for our task.", "Specifically, for encoding, syntax-based tree structure information has been demonstrated effective for a number of NLP tasks (Kasai et al., 2019; Li et al., 2018), including several other types of discourse parsing (Yu et al., 2018; Li et al., 2015).", "For decoding, the skeleton structure of DRTS can also be beneficial for our task.", "As a two-phase decoding strategy is exploited, the skeleton tree from the first phase could be helpful for the DRU parsing of the second phase.", "We propose to improve DRTS parsing by making use of the above structure information, modeling the dependency-based syntax of the input sentences as well as the skeleton structure to enhance the baseline model of Liu et al. (2019), using the Graph Attention Network (GAT) (Veličković et al., 2018), which has been demonstrated effective for tree/graph encoding (Huang and Carley, 2019; Linmei et al., 2019).", "In particular, we first derive dependency tree structures for each sentence in a paragraph from the Stanford Parser, and then encode them directly via one GAT module, whose outputs are fed as inputs for decoding.", "Second, after the first-stage skeleton parsing is finished, we encode the skeleton structures by another GAT module, feeding the outputs to DRU parsing.", "Following Liu et al. (2019), we conduct experiments on the Groningen Meaning Bank (GMB) dataset.", "Results show that structural information is highly useful for our task, bringing significantly better performance than the baseline.", "In particular, dependency syntax gives an improvement of 2.84% on the standard evaluation metrics, and the skeleton structure information gives a further improvement of 1.41%.", "Finally, our model achieves a 71.65% F1-score for the task, 4.25% better than the baseline model.", "Additionally, our model is also effective for sentence-level DRTS parsing, with an increase of 1.72% in F1-score by our final model.", "We release our code and best models at http://github.com/seanblank/DRTSparsing to facilitate future research.", "Formally, a DRT structure consists of two components according to their function: (1) the leaf nodes and (2) the tree skeleton (non-terminal nodes).", "Similar to other types of discourse representation methods, we have minimum semantic units, named DRUs, and then a discourse tree is built by the discourse relationships between these minimum units.", "[Figure 2: A full DRTS tree for the document. k_1: At least 27 wives of Israeli rabbis have signed a letter urging Jewish women to avoid dating Arab men. k_4: The letter warns Jewish women that they will suffer if they date Arab men.]", "Figure 2 shows the full tree version of Figure 1 in the introduction.", "DRU.", "DRU nodes serve as the terminal nodes of a DRT structure; each DRU is constituted by a set of unordered relation tuples, as shown by the dashed components at the bottom of the tree in Figure 1.
A relation tuple consists of a relation $r$ and several arguments $v_1, \dots, v_n$ of $r$; it can be denoted as $r(v_1, \dots, v_n)$.", "Variables refer to entities $x$, events $e$, states $s$, times $t$, propositions $p$, segments $k$ and constants $c$.", "The relation is used to indicate the discourse connections among the inside variables.", "A total of 262 relation labels are defined in DRTS.", "One DRU may include an unlimited number of relation tuples, which are all extracted from the corresponding text pieces.", "Skeleton.", "The skeleton reflects the structural connections between DRUs.", "Nodes in a skeleton can be divided into three categories: the (S)DRS nodes, the relation nodes and the variable nodes.", "In particular, an (S)DRS node denotes a full, semantically-complete node of discourse analysis.", "A relation node defines a specific discourse relationship over its covered (S)DRS nodes.", "DRTS defines six types of DRS relations, namely IMP (implication), OR (disjunction), DUP (duplex), POS (possibility), NEC (necessity) and NOT (negation), which are orthogonal to the relations inside the DRUs.", "A variable node assigns one (S)DRS node a specific symbol.", "There are two types of variable nodes, namely proposition and segment.", "For example, in Figure 2, the root is an SDRS node, IMP is a relation node, and $k_1$, $p_4$ denote variable nodes.", "We take the multi-step encoder-decoder method of Liu et al. (2019) as the baseline model for DRTS parsing.", "First, an encoder is used to convert one input paragraph into neural vectors by using word embeddings as well as BiLSTMs, and then a multi-step decoder is exploited to generate a full tree structure incrementally, in a sequential manner.", "Given a paragraph, we concatenate all the sentences into one sequence, where each sentence is augmented with a start symbol $\langle s \rangle$ and an end token $\langle e \rangle$ at the front and end positions, respectively, obtaining a final input sequence for the paragraph $D = \langle s \rangle, w_{1,1}, \dots, w_{1,n_1}, \langle e \rangle, \langle s \rangle, w_{2,1}, \dots, w_{m,n_m}, \langle e \rangle$.", "For simplicity, we use $D = w_1, \dots, w_n$ to denote the sequence for short.", "Each word is represented by $v_i = e^{rand}(w_i) \oplus e^{pret}(w_i) \oplus e^{lem}(w_i)$ (1), where $e^{rand}(\cdot)$ and $e^{pret}(\cdot)$ denote random and pretrained embeddings of the current word, $e^{lem}(\cdot)$ denotes the random embedding of the current word lemma, and $\oplus$ denotes concatenation.", "We then apply an MLP over the word representations, and further use a BiLSTM to encode the vector sequence: $x_1 \cdots x_n = \mathrm{MLP}(v_1 \cdots v_n)$, $H^{enc} = h_1 \cdots h_n = \mathrm{BiLSTM}(x_1 \cdots x_n)$ (2), where $H^{enc} = h_1 \cdots h_n$ is the encoder output.", "We transform the DRTS structure into a sequence of symbols, so that the original DRTS can be restored from the symbol sequence as well.", "By this transformation, we can apply the sequence-to-sequence architecture for decoding.", "In particular, a two-stage strategy for decoding is adopted, first generating the skeleton structure, and then generating the DRUs.", "The key step is the transformation strategy of the two stages.", "Generating the skeleton structure.", "We define two types of symbols for each skeleton node, where the first is the node label conjoined with a left bracket, indicating the start of traversal of the current node, and the second symbol is a right bracket, indicating the end of traversal of the current node.", "We exploit a top-down depth-first order to traverse the skeleton subtree, finishing a node's traversal when all its child nodes have been finished.", "Figure 2 shows an example illustrating the transformation.",
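To make the first-stage linearization concrete, here is a small sketch of the top-down depth-first skeleton traversal that emits a label-plus-left-bracket symbol on entering a node and a right bracket on leaving it. The Node class and the toy label strings are illustrative, not the released code.

```python
# Sketch of the skeleton linearization: depth-first, top-down traversal,
# emitting "LABEL(" on entry and ")" on exit. Illustrative data structures.
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def linearize_skeleton(node):
    symbols = [node.label + "("]          # start of this node's traversal
    for child in node.children:
        symbols.extend(linearize_skeleton(child))
    symbols.append(")")                   # all children finished
    return symbols

# toy skeleton in the style of Figure 2 (labels are illustrative)
tree = Node("SDRS", [Node("k1", [Node("DRS")]),
                     Node("k4", [Node("IMP", [Node("p4", [Node("DRS")]),
                                              Node("DRS")])])])
print(" ".join(linearize_skeleton(tree)))
# SDRS( k1( DRS( ) ) k4( IMP( p4( DRS( ) ) DRS( ) ) ) )
```

Reversing the process (rebuilding the tree from the bracketed symbols) is what makes the sequence-to-sequence formulation lossless.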
"In this way, we can obtain a symbol sequence $Y^{skt} = y^{skt}_1, \dots, y^{skt}_s$ which is equivalent to the skeleton tree.", "Generating the DRUs.", "After the skeleton is ready, we start the DRU generation process.", "The DRU nodes are only related to the (S)DRS nodes in the skeleton.", "Thus we generate DRU nodes one by one according to the (S)DRS nodes in the skeleton structure.", "For each DRU, we have two types of symbols, one for the relations and the other for the variables.", "We first generate all the relations and then generate the variables of each relation incrementally.", "In this way, we can obtain a sequence $Y^{dru} = y^{dru}_1, \dots, y^{dru}_t$ for DRU generation.", "Sequence decoding.", "We follow the standard sequence-to-sequence architecture (Liu et al., 2018) to obtain the final sequence $Y = Y^{skt} \oplus Y^{dru} = y^{skt}_1, \dots, y^{skt}_s, y^{dru}_1, \dots, y^{dru}_t$ incrementally.", "At each step, we score the candidate next-step symbols based on the current observations: $o^{skt}_j = g^{skt}(H_{y^{skt}_{<j}}, H^{enc})$, $o^{dru}_k = g^{dru}(H_{y^{dru}_{<k}}, H^{skt}, H^{enc})$ (3), where $H^{enc}$ refers to the encoder outputs, $H^{skt}$ and $H^{dru}$ denote the outputs of left-to-right LSTMs run by the skeleton decoder over $Y^{skt}$ and by the DRU decoder over $Y^{dru}$, respectively, and $g^{skt}(\cdot)$ and $g^{dru}(\cdot)$ are neural feature extraction functions for predicting skeleton and DRU symbols, respectively.", "Here we omit the detailed descriptions of $g^{skt}(\cdot)$ and $g^{dru}(\cdot)$, which can be found in Liu et al. (2019).", "[Footnote 1] We follow a predefined order for relations; in fact, the order has little impact on the final performance.", "[Footnote 2] Our description is equivalent to that of Liu et al. (2019), who split this process into two steps (i.e., relation prediction and variable prediction); we merge the relation and variable predictions for brevity.", "Training.", "Given a set of labeled data, the model is trained to minimize average cross-entropy losses over all individual symbol predictions: $\mathcal{L}(\theta) = -\frac{1}{N} \sum_i \log p_{y^*_i}$ (4), where $\theta$ is the set of model parameters, $p_{y^*_i}$ denotes the output probability of $y^*_i$ (computed by a softmax over $o_i$), and $N$ is the total length of the output sequence.",
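Equation 4 is the standard sequence cross-entropy. A minimal sketch, where the logits, gold indices, and shapes are illustrative placeholders:

```python
# Eq. 4 as code: average negative log-probability of the gold symbols,
# where each o_i is a score vector over the output symbol vocabulary.
import torch
import torch.nn.functional as F

N, vocab = 5, 12                      # output length, symbol vocabulary size
o = torch.randn(N, vocab)             # per-step scores o_i (placeholder)
gold = torch.randint(0, vocab, (N,))  # gold symbols y*_i (placeholder)

log_p = F.log_softmax(o, dim=-1)      # p_{y*_i} via softmax over o_i
loss = -log_p[torch.arange(N), gold].mean()
print(float(loss), float(F.cross_entropy(o, gold)))  # identical values
```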
"To represent the structural features, we use a GAT module on top of the encoder and the skeleton decoder to enhance the baseline model.", "The graph module is designed to learn non-local and non-sequential information from structural inputs.", "In this section, we first describe the GAT in detail and then illustrate its application to our task.", "Given a graph $G = (V, E)$, where each node $v_i$ has an initial vectorial representation, the GNN module enriches node representations with neighbor information derived from the graph structure: $H^{l+1} = f(H^l, A; W^l)$ (5), where $H^l \in \mathbb{R}^{n \times d}$ is the stacked hidden output of all nodes at layer $l$ ($H^0$ denotes the initial input representations), $A \in \mathbb{R}^{n \times n}$ denotes the graph adjacency matrix, and $W^l$ is the parameter set of the GNN at layer $l$.", "Different information aggregation functions $f$ lead to different GNN architectures.", "In particular, GAT uses the attention mechanism (Bahdanau et al., 2014) on graph neighbors, which has been demonstrated more effective than the graph convolutional network (GCN).", "The aggregation weights in GAT are computed by a multi-head attention mechanism (Vaswani et al., 2017).", "Specifically, given a node $i$ with hidden representation $h^l_i$ at layer $l$ and its neighbors $N_i$ with their hidden representations, a GAT updates the node's hidden representation at layer $l+1$ using multi-head attention: $h^{l+1}_i = \Vert_{k=1}^{K} \sigma\big(\sum_{j \in N_i} \alpha^k_{ij} W^k h^l_j\big)$ (6), where $\Vert$ represents concatenation, $\sigma$ is a sigmoid function, and $W^k$ is the weight matrix of the input linear transformation for the $k$-th head.", "$\alpha^k_{ij}$ are normalized attention coefficients computed by the $k$-th attention mechanism: $\alpha^k_{ij} = \mathrm{softmax}_j(e_{ij}) = \frac{\exp(e_{ij})}{\sum_{j' \in N_i} \exp(e_{ij'})}$ (7), where $e_{ij}$ is an attention coefficient indicating the importance of node $j$ to node $i$, computed by $e_{ij} = \mathrm{LeakyReLU}\big(f[W h_i \Vert W h_j]\big)$ (8), where $f(\cdot)$ is a single-layer feed-forward neural network parameterized by a shared weight, $W$ denotes a shared linear transformation, and LeakyReLU is a non-linear activation function.", "On the encoder side, we equip the inputs with dependency syntax structures, which have been demonstrated helpful for closely related tasks such as RST discourse parsing.", "A GAT module is used to represent the encoder output as mentioned in Section 4.1.", "We transform the document into a dependency graph represented by an undirected adjacency matrix, using an off-the-shelf dependency parser (Chen and Manning, 2014).", "The hidden states of the nodes are updated with a multi-layer GAT network over the adjacency matrix $A$: $H^{g\text{-}enc} = \mathrm{GAT}^{enc}(H^{enc} \oplus E^{syn}, A; W)$ (9), where $E^{syn}$ is the embedding output of the syntactic labels in the dependency tree.", "The learned representation $H^{g\text{-}enc}$ is used to substitute the original $H^{enc}$ for predictions.", "We further enhance the baseline model by exploiting the partial output after the skeleton prediction step is finished.",
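A compact sketch of one multi-head GAT layer following Eqs. 6-8. The dense adjacency formulation, the masking trick before the softmax, and all names are implementation choices of this sketch, not details given by the paper.

```python
# Sketch of a single multi-head GAT layer (Eqs. 6-8), dense adjacency.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim, out_dim, heads):
        super().__init__()
        self.W = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
                               for _ in range(heads))
        self.attn = nn.ModuleList(nn.Linear(2 * out_dim, 1, bias=False)
                                  for _ in range(heads))

    def forward(self, h, adj):                       # h: [n, d], adj: [n, n]
        n, outs = h.size(0), []
        for W, a in zip(self.W, self.attn):
            wh = W(h)                                # W^k h, [n, out_dim]
            # e_ij = LeakyReLU(f[W h_i || W h_j])    (Eq. 8)
            pairs = torch.cat([wh.unsqueeze(1).expand(n, n, -1),
                               wh.unsqueeze(0).expand(n, n, -1)], dim=-1)
            e = F.leaky_relu(a(pairs).squeeze(-1))   # [n, n]
            # alpha_ij: softmax over the neighbors j (Eq. 7)
            e = e.masked_fill(adj == 0, float("-inf"))
            alpha = torch.softmax(e, dim=-1)
            # sigma(sum_j alpha_ij W h_j), one head  (Eq. 6)
            outs.append(torch.sigmoid(alpha @ wh))
        return torch.cat(outs, dim=-1)               # concatenate the K heads

# toy usage: 4 nodes, self-loops plus a 0-1-2-3 chain
adj = torch.eye(4) + torch.diag(torch.ones(3), 1) + torch.diag(torch.ones(3), -1)
layer = GATLayer(in_dim=8, out_dim=4, heads=2)
print(layer(torch.randn(4, 8), adj).shape)  # torch.Size([4, 8])
```

Stacking two such layers matches the best layer setting reported below, with each layer aggregating one more hop of neighborhood information.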
"On one hand, the skeleton structure can guide DRU parsing.", "On the other hand, joint skeleton and DRU parsing can further help to rerank the skeleton predictions as well, since global skeleton representations are exploited.", "Specifically, after all the skeleton nodes are generated, we construct a graph based on the nodes except the right parentheses, as shown in Figure 3. We use a GAT network on top of the hidden states to capture global structure information: $H^{g\text{-}skt} = \mathrm{GAT}^{skt}(H^{skt} \oplus E^{skt}, A; W)$ (10), where $E^{skt}$ is the embedding output of the node labels in the generated skeleton tree, and the global skeleton-aware representation $H^{g\text{-}skt}$ is used instead of the original $H^{skt}$ for future predictions.", "Data.", "We conduct experiments on the benchmark GMB dataset, which provides a large collection of English texts annotated with Discourse Representation Structures (Bos et al., 2017).", "We follow Liu et al. (2019) in using the processed tree-based DRTS format, and focus on document-level parsing.", "The data statistics are shown in Table 1.", "[Table 1: Statistics of the GMB document-level benchmark; AVG sent and AVG word denote the average number of sentences and words per document. Train: 7843 documents, 48599 sentences, AVG sent 6.2, AVG word 135.3; Devel: 991 documents, 6111 sentences, AVG sent 6.2, AVG word 134.0; Test: 1035 documents, 6469 sentences, AVG sent 6.3, AVG word 137.2.]", "Hyperparameters.", "We exploit the same hyperparameters as Liu et al. (2019) for a fair comparison.", "In particular, we use the same pre-trained 100-dimensional word embeddings, which are trained on the AFP portion of the English Gigaword corpus.", "The sizes of the random word and lemma embeddings are set to 300 and 100, respectively.", "The hidden sizes of the BiLSTM modules in the encoder and decoder are set to 300 and 600, respectively.", "In addition, the numbers of BiLSTM layers in the encoder and decoder are 2 and 1, respectively, and the hidden size of the GAT modules is set to 300 for the encoder and 600 for the decoder.", "Following Liu et al. (2019), we adopt the COUNTER (van Noord et al., 2018a) tool to evaluate our final experimental results.",
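The undirected adjacency matrix for the encoder-side GAT (Eq. 9) can be derived from dependency arcs in a few lines. The head-index convention and the self-loop choice below are common practice assumed here, not details given in the text.

```python
# Sketch: dependency arcs -> symmetric adjacency matrix for the GAT (Eq. 9).
# heads[i] is the head index of token i (-1 for the root); self-loops are a
# common-practice assumption so each token also attends to itself.
import torch

def dependency_adjacency(heads):
    n = len(heads)
    adj = torch.eye(n)
    for child, head in enumerate(heads):
        if head >= 0:
            adj[child, head] = adj[head, child] = 1.0  # undirected arc
    return adj

# "Jack gave 3 pens to Mary": toy head indices with "gave" (index 1) as root
print(dependency_adjacency([1, -1, 3, 1, 5, 1]))
```

The same helper shape works for the decoder-side skeleton graph of Eq. 10, with skeleton parent-child links in place of dependency arcs.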
"In particular, we first transform the DRTS into a clause format and then run the standard evaluation script to obtain the F1-scores of our results compared with the gold-standard clause form.", "Note that COUNTER is computationally expensive, requiring more than 50 hours for the entire test dataset even when using more than 100 threads.", "To facilitate development and analysis, we suggest three alternatives for evaluation, particularly for development experiments: (1) BLEU: a standard BLEU (Papineni et al., 2002) value is adopted as the metric to evaluate the resulting node sequence against the gold-standard output, since we model the task as a sequence-to-sequence problem.", "(2) Skeleton: the bracket scoring method of constituent parsing is exploited to evaluate the skeleton performance, by regarding terminal DRU nodes as words in comparison with a constituent tree.", "[Footnote 3] https://nlp.cs.nyu.edu/evalb/", "(3) Tuple: the F1-score of tuple-level matching is exploited to measure the DRU performance, since the basic units inside a DRU are tuples of relation-variable functions.", "Exact matching is adopted, taking variable order into account.", "BLEU is used for development, and Skeleton and Tuple are used for analysis.", "We conduct experiments on the development dataset to understand the key factors of our proposed model.", "Impact of structure labels.", "Syntactic arc and skeleton labels are embedded and concatenated to the embedding of the current node when using GAT to model the tree structure.", "We conduct a comparison to examine their effectiveness in our model.", "Figure 4(a) shows the results.", "[Figure 4(a): label ablation, BLEU on dev: Final 50.04, -syntax 49.64, -skeleton 49.11.]", "We can see that a performance degradation occurs without these label embeddings.", "In particular, the BLEU score drops by 0.4 without syntax label embeddings and by 0.93 without skeleton label embeddings, which shows that label information matters even more for the skeleton tree structure, which is not fixed in advance.",
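Before the GAT ablations, here is a small sketch of the Skeleton metric described above: constituent-style bracket F1, where each skeleton node contributes a (label, span) bracket over the yield of DRU leaves. The tree encoding and helper names are my own; only the idea of treating DRU leaves as words follows the description in Section 5.2.

```python
# Sketch of the Skeleton metric: bracket F1 over (label, start, end) spans,
# with terminal DRU nodes playing the role of words. Illustrative encoding.
from collections import namedtuple

Node = namedtuple("Node", ["label", "children"])

def brackets(node, start=0):
    """Return (set of (label, start, end) brackets, yield width)."""
    if not node.children:                  # terminal DRU: occupies one slot
        return set(), 1
    out, offset = set(), start
    for child in node.children:
        child_brackets, width = brackets(child, offset)
        out |= child_brackets
        offset += width
    out.add((node.label, start, offset))
    return out, offset - start

def bracket_f1(gold_tree, pred_tree):
    g, _ = brackets(gold_tree)
    p, _ = brackets(pred_tree)
    correct = len(g & p)
    prec = correct / len(p) if p else 0.0
    rec = correct / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = Node("SDRS", (Node("k1", (Node("DRS", ()),)),
                     Node("k4", (Node("DRS", ()),))))
pred = Node("SDRS", (Node("k1", (Node("DRS", ()),)),
                     Node("NOT", (Node("DRS", ()),))))
print(round(bracket_f1(gold, pred), 2))  # 0.67
```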
"Impact of GAT setting.", "As our proposed modules involve an l-layer GAT, we investigate the effect of the layer number l on the dev set, as shown in Table 2.", "In particular, we vary the value of l in the set {1, 2, 3, 4, 5} and measure the corresponding BLEU scores.", "[Table 2: GAT settings, BLEU on the development set. Heads: 1 = 48.76, 2 = 49.48, 3 = 50.01, 4 = 50.04, 5 = 50.04. Layers: 1 = 49.11, 2 = 50.04, 3 = 49.72, 4 = 49.01, 5 = 48.54.]", "The structure-aware model equipped with GAT achieves the best performance when l is 2, which justifies the choice of the number of layers in the experimental setting section.", "Moreover, a dropping trend is present as l increases further.", "For a larger l, the GAT module becomes more difficult to train due to the larger number of parameters.", "One intuitive reason is that each layer of the GAT module aggregates the direct neighbor information of a node.", "After 2 layers, each node can obtain sufficient information, and further layers can bring noise.", "We also compare different numbers of attention heads, varying the head count in the set {1, 2, 3, 4, 5} and checking the corresponding BLEU scores.", "Theoretically, the larger the number of heads, the better the performance of the model.", "As can be seen in Table 2, when the number of heads reaches 4, the performance becomes relatively stable.", "We thus choose 4 heads for the remaining experiments.", "Influence of the encoder and decoder GAT modules.", "As shown in Figure 4(b), without using structure information, the baseline encoder-decoder (Liu et al., 2019) model gives a development BLEU of 46.83.", "Adding a GAT module to the encoder as described in Section 4.2 increases the BLEU score to 48.35, demonstrating the usefulness of the syntax-aware module.", "Furthermore, adding a GAT module to the decoder as described in Section 4.3 improves the performance to 49.73, which shows that our skeleton structure model is useful.", "Finally, a combination of both gives a 50.04 BLEU score.", "Table 3 shows the final results on the GMB test dataset.", "We report the performance of the baseline and various tree-structure systems using the exact F1-score by COUNTER in addition to BLEU.", "The observations are consistent with the development set.", "Our final model, the joint GAT-enc+dec model, achieves competitive performance, with an exact F1-score of 71.65%.",
"Our GAT-enhanced models outperform the state-of-the-art model.", "For the vanilla encoder-decoder model, our GAT-encoder obtains an absolute improvement of 2.84% in exact F1-score, which demonstrates that modeling syntax information is useful.", "The GAT decoder improves the performance to 70.81%, a 4.25% gain, which indicates that the skeleton structure is helpful to DRTS parsing.", "As shown in Table 3, Tree-LSTM and GCN based systems also give competitive results relative to the state-of-the-art baseline model, which again demonstrates the effectiveness of modeling tree structures.", "GCN achieves better performance than Tree-LSTM by 1.06%, which may be because the GNN-based model obtains global information during layer stacking, while Tree-LSTM can only capture local structural information.", "GAT performs better than GCN by 0.84%, showing that GAT is a competitive choice of GNN.", "Consistent with the observations on BLEU scores, our proposed GAT-enc+dec model shows the best performance on both evaluation metrics.", "In addition, we perform experiments on the sentence-level dataset as well, as shown in Table 4, following Liu et al. (2019).", "[Table 4: Results on the sentence-level dataset (BLEU / exact F1): Liu et al. (2018) 57.61 / 68.72; Liu et al. (2019), the baseline, 64.96 / 77.85; GAT-encoder 66.02 / 78.22; GAT-decoder 66.69 / 79.14; GAT-enc+dec 68.14 / 79.94.]", "We use the same setup as the document-level structure-aware model.", "As shown, both the GAT encoder and decoder bring better results (i.e., gains of 0.37% and 1.29%, respectively), and their combination gives significant further improvements (i.e., 0.80% over the GAT-decoder), which is consistent with the findings of the document-level parsing.", "Finally, the sentence-level performance reaches 79.94%, a new state-of-the-art score.", "The results demonstrate that our model is also applicable to sentence-level DRTS parsing.", "Interestingly, we find that the BLEU metric is highly indicative of model performance.", "Based on the observed pairs of values on the test results, we are able to approximate the correlation between BLEU and COUNTER by a line quite well, demonstrating a faithful alignment with the COUNTER metric.", "[Figure 5: Skeleton-level evaluation F1 (%) results for the (S)DRS, Variable, Relation and All node types, comparing the Baseline, +GAT-encoder, +GAT-decoder and +GAT-enc+dec models.]", "The observation indicates that BLEU is also a good metric for the task.", "Noticeably, one advantage of BLEU is that the metric calculation is much faster (i.e., only several seconds) than the exact F1-score, since the latter consumes at least 24 hours as well as 100G+ of memory for the evaluation of the test dataset.", "We conduct analysis to examine the benefits of the structure-aware model.", "As the decoding process is decomposed into two steps, we examine the respective gains with respect to the two components, namely skeleton prediction and DRU parsing.", "Influence on skeleton prediction.", "The bracket scoring metric suggested in Section 5.2 is used to measure the performance of skeleton prediction.", "Figure 5 shows the F1-scores with respect to node types, which are categorized into three types (Section 2), namely (S)DRS, relation and variable.", "In addition, the overall performance is reported as well.", "First, we can see that the (S)DRS nodes achieve the best performance across the three types, the relation nodes rank second, and the variable type has the worst performance.", "This indicates the relative difficulty of parsing the three types of nodes.", "In particular, locating a DRU is relatively simple, as (S)DRS connects with DRU directly, followed by the coarse-grained discourse relations over the DRUs, while variable nodes are much more difficult since order matters a great deal (i.e., the subscript number in the variable).",
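Returning to the BLEU-COUNTER relationship noted above, a line can be fit to observed (BLEU, exact F1) pairs; the five pairs below are the sentence-level numbers from Table 4, while the least-squares fit itself is my own minimal rendering of "approximating the correlation by a line".

```python
# Fit a line to observed (BLEU, exact F1) pairs to check how well cheap BLEU
# tracks the expensive COUNTER exact-F1 metric. Pairs come from Table 4.
import numpy as np

bleu = np.array([57.61, 64.96, 66.02, 66.69, 68.14])
exact_f1 = np.array([68.72, 77.85, 78.22, 79.14, 79.94])

slope, intercept = np.polyfit(bleu, exact_f1, deg=1)
r = float(np.corrcoef(bleu, exact_f1)[0, 1])

print(f"exact_F1 ~= {slope:.2f} * BLEU + {intercept:.2f}, r = {r:.3f}")
```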
"Second, the tendencies of the different models on the three categories are the same as the overall tendency: our final model brings the best skeleton performance, and the baseline shows the worst.", "This observation demonstrates the robustness of our proposed structure-aware model: we achieve consistently better performance than the baseline on all node types.", "Influence on relation tuples inside DRUs.", "Further, we analyze the model performance on DRU parsing.", "A strict matching strategy on the relation tuples inside DRUs is used to measure the performance, as described in Section 5.2.", "Table 5 shows the performance, where the F1-scores of the overall matching, relation-only matching, as well as unary and binary relation tuples are reported.", "First, we can find that the overall exact matching F1-score is rather low (below 40).", "When considering the relation performance while ignoring the variables, the F1-score increases by 31.88, which indicates that variable recognition is extremely difficult.", "Variables in DRUs are similar to the variable nodes in the skeleton; however, the scale of the inside-DRU variables is much larger.", "We further categorize the relation tuples by their number of variables.", "The unary tuples (i.e. tuples consisting of only one variable node) obtain better performance than the binary tuples (i.e. tuples consisting of two variable nodes), which is reasonable.", "[Footnote 4] There are no relations containing more than two variables, according to the corpus statistics.", "In addition, we look into the performance in terms of the different models.", "We can see that all structure-aware models obtain better performance than the baseline in all settings, demonstrating the effectiveness of our proposed models.", "In particular, the GAT-decoder demonstrates relatively higher performance compared to the GAT-encoder, which is consistent with the results observed in Table 3.",
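The strict tuple-matching metric can be sketched as set overlap between gold and predicted relation tuples; treating each tuple as a (relation, ordered variables) pair matches the description above, though the data representation itself is an assumption.

```python
# Sketch of the tuple-level exact-match F1: a predicted tuple counts as
# correct only if its relation and its variables (in order) both match a
# gold tuple. The tuple representation is an illustrative assumption.
def tuple_f1(gold_tuples, pred_tuples):
    gold, pred = set(gold_tuples), set(pred_tuples)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

gold = {("Agent", ("e1", "x1")), ("Theme", ("e1", "x2")), ("male", ("x1",))}
pred = {("Agent", ("e1", "x1")), ("Theme", ("e1", "x3")), ("male", ("x1",))}
print(round(tuple_f1(gold, pred), 3))  # 0.667
```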
"As expected, the final joint GAT-enc+dec model obtains a better score than both individual GAT models.", "Case study.", "Figure 6 shows one case study to illustrate the gains of our proposed models over the baseline model, where the detailed differences are highlighted in red.", "As shown, the baseline model is already able to obtain strong results with linguistically-motivated copy strategies, constraint-based inference and so on.", "However, without structure-aware information, the model is ineffective at handling several implicit long-distance dependencies.", "For example, the relation That(x_16, p_4) cannot be recognized by the baseline model, while the models with the structure-aware GAT decoder get it correctly.", "The major reason is that the structure-aware decoder can transmit the information from p_4 to its parent node, which can facilitate the next-step generation of the parent node.", "On the other hand, the syntactic information from the input sentences can help the first-step skeleton disambiguation.", "For example, as shown in Figure 6, the models without the GAT encoder misclassify the relation between k_1 and k_4, which is the discourse relation between the two short input sentences.", "The major reason for this error may be the word if in the second sentence, which is one indicator of the After relation.", "When the syntactic information is encoded by the GAT encoder, the GAT-enc+dec model can learn the fine-grained dependencies introduced by the word if, and is thus able to obtain the accurate relation between the two sentences (i.e., Conti.).", "6 Related work", "Discourse parsing is one important topic in NLP.", "There are several main types of discourse parsing tasks in the literature, including rhetorical structure theory (RST; Mann and Thompson, 1988) based parsing, centering theory (CT; Grosz et al., 1995; Barzilay and Lapata, 2008) based parsing, and the DRT based parsing of this study.", "Discourse Representation Theory (DRT) based parsing is a relatively classic, yet not fully researched semantic analysis task because of its complexity.", "Le and Zuidema (2012) present the first data-driven DRT parser, using a graph-based representation of DRT structures.", "Recently, van Noord et al. (2018b) apply the idea of neural machine translation to graph-based DRT parsing, achieving impressive performance.", "These studies only focus on sentence-level DRT representations, as the complexity would increase greatly at the paragraph level.", "In contrast, we investigate paragraph-level DRT parsing.", "DRTS parsing simplifies graphs into trees.", "There are two existing papers in this line.", "Liu et al. (2018) are the first to work on DRTS parsing, proposing an end-to-end sequence-to-sequence model for the task.", "Further, Liu et al. (2019) improve the model by suggesting several effective strategies including supervised attention, copying from alignments, and constraint-based inference.", "In this work, we improve DRTS parsing over Liu et al. (2019) with two types of structure information.",
"Syntax information has been widely exploited for NLP tasks.", "Seminal work exploits discrete features designed by experts (Feng and Hirst, 2014; Heilman and Sagae, 2015).", "Recently, a range of neural modules have been proposed to encode syntax, such as Tree-LSTM (Tai et al., 2015; Zhu et al.; Teng and Zhang, 2016), Tree-CNN (Roy et al., 2020) and recently proposed implicit approaches (Yin et al., 2018; Zhang et al., 2019).", "Syntax has been demonstrated effective for RST based discourse parsing as well (Yu et al., 2018).", "Our work builds a syntax tree-aware model, and we are the first to use syntax for DRT based discourse parsing.", "GNNs have received increasing interest for their strong capability of encoding structural information (Kipf and Welling, 2016; Bastings et al., 2017; Zhang et al., 2018; Zhang and Zhang, 2019; Song et al., 2018).", "GAT is one representative model, which has demonstrated success in a number of NLP tasks (Huang and Carley, 2019; Linmei et al., 2019).", "In this work, we exploit GAT to represent tree-structural information for DRTS parsing.", "We investigated the representation of structural information for discourse representation tree structure parsing, showing that a graph neural network can bring significant improvements.", "In particular, we use GAT for representing syntax in encoding, and for representing a structural backbone in decoding.", "Experiments on the standard GMB dataset show that our method is highly effective, achieving the best results in the literature.", "We thank all reviewers for the valuable comments, which greatly helped to improve the paper.", "This work is supported by the National Natural Science Foundation of China (NSFC No. 61976180), the funds of Beijing Advanced Innovation Center for Language Resources (No. TYZ19005) and the Westlake University and Bright Dream Joint Institute for Intelligent Robotics.", "Meishan Zhang is the corresponding author." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "objective", "method", "result", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other", "other", "other" ]