sentences
sequence | labels
sequence |
---|---|
[
"Recent work has shown that neural rerankers can improve results for dependency parsing over the top k trees produced by a base parser.",
"However, all neural rerankers so far have been evaluated on English and Chinese only, both languages with a configurational word order and poor morphology.",
"In the paper, we re-assess the potential of successful neural reranking models from the literature on English and on two morphologically rich(er) languages, German and Czech.",
"In addition, we introduce a new variation of a discriminative reranker based on graph convolutional networks (GCNs).",
"We show that the GCN not only outperforms previous models on English but is the only model that is able to improve results over the baselines on German and Czech.",
"We explain the differences in reranking performance based on an analysis of",
"a) the gold tree ratio and",
"b) the variety in the k -best lists.",
"Neural models for dependency parsing have been a tremendous success, pushing state-of-the-art results for English on the WSJ benchmarking dataset to over 94% LAS (Dozat and Manning, 2017).",
"Most state-of-the-art parsers, however, are local and greedy and are thus expected to have problems finding the best global parse tree.",
"This suggests that combining greedy, local parsing models with some mechanism that adds a global view on the data might increase parsing accuracies even further.",
"In this work, we look into incorporating global information for dependency parsing via reranking .",
"Different model architectures have been proposed for neural reranking of dependency parse trees (Le and Zuidema, 2014; Zhu et al., 2015; Zhou et al., 2016).",
"Despite achieving modest or even substantial improvements over the baseline parser, however, all the systems above only report performance on English and Chinese data, both morphologically poor languages with a configurational word order and mostly projective tree structures.",
"In the paper, we thus try to reproduce results for different reranking models from the literature on English data and compare them to results for German and Czech, two morphologically rich(er) languages (MRLs) with a high percentage of nonprojective structures.",
"In addition, we present a new discriminative reranking model based on graph convolutional networks (GCNs).",
"Our GCN reranker outperforms the other rerankers on English and is also the only model able to obtain small improvements over the baseline parser on German and Czech while the other rerankers fail to beat the baselines.",
"The improvements, however, are not significant and raise the question what makes neural reranking of MRLs more difficult than reranking English or Chinese.",
"We analyze the differences in performance on the three languages and show that the reasons for this failure are due to the composition and quality of the k -best lists.",
"In particular, we show that the gold tree ratio in the English k -best list is much higher than for German and Czech, and that the trees in the English k -best list show a higher variety , thus making it easier for the reranker to distinguish between highand low-quality trees.",
"The paper is structured as follows.",
"In 2, we review related work on reranking for neural dependency parsing.",
"The different reranking models are described in detail in 3.",
"In 4, we first reproduce reranking results for English and evaluate our new reranker on the English data.",
"Then we test the different models on the two morphologically rich(er) languages and present the results of our evaluation and our analysis, before we conclude in",
"5. 2 Related Work Reranking is a popular technique to improve parsing performance of the output of a base parser.",
"First, the top k candidate trees are generated by the base parser, then these trees are reranked using additional features not accessible to the base parser.",
"This adds a more global and complete view of the trees, in contrast to the local and incomplete features used by the parser.",
"Discriminative rerankers have been a success story in constituency parsing (Collins and Koo, 2005; Charniak and Johnson, 2005).",
"A disadvantage of the traditional feature-rich rerankers is that the large number of potentially sparse features makes them prone to overfitting, and also reduces the efficiency of the systems.",
"Neural rerankers offer a solution to that problem by learning dense, low-dimensional feature representations that are better at generalization, and so reduce the risk of overfitting.",
"Neural reranking The first neural reranker has been presented by Socher et al. (2013) for constituency parsing, based on a recursive neural network which processes the nodes in the parse tree bottom-up and learns dense feature presentations for the whole tree.",
"This approach was adapted for dependency parsing by Le and Zuidema (2014).",
"Zhu et al. (2015) improve on previous work by proposing a recursive convolutional neural network (RCNN) architecture for reranking which can capture syntactic and semantic properties of words and phrases in the parse trees (see 3 for a more detailed description of the two models).",
"k -best vs. forest reranking There exist two different approaches to reranking for parsing: k -best reranking and forest reranking.",
"In k -best reranking, the complete parse tree is encoded and presented to the reranker.",
"A disadvantage of k -best reranking is the limited scope of the k -best list which provides an upper bound for reranking performance.",
"In contrast, a packed parse forest is a compact representation of exponentially many trees of which each node represents a deductive step.",
"Forest reranking (Huang, 2008; Hayashi et al., 2013) approximately decodes the highest scored tree with both local and non-local features in a parse forest with cube pruning (Huang and Chiang, 2005).",
"In our work, we focus on neural reranking of a k -best list of parses generated by a base parsing system as we could not find any available parsers that are both non-projective and produce packed parse forests as output.",
"In this section, we look into reranking for dependency parsing and compare two different types of models: the generative inside-outside recursive neural network ( IORNN ) reranker (Le and Zuidema, 2014) and the discriminative reranker based on recurrent convolutional neural networks ( RCNNs ) (Zhu et al., 2015).",
"In addition, we propose a new reranking model for dependency parsing that employs graph convolutional networks ( GCNs ) to encode the trees.",
"A generative reranking model scores a dependency structure by estimating its generation probability.",
"The probability of generating a fragment of a dependency tree (e.g., a node) D depends on its dependency context CD .",
"The amount of information used in CD is called the order of the generative model.",
"Ideally, we want to generate a dependency subtree D based on -order context C D which includes all ancestors of D , their siblings, and all siblings of D .",
"As the -order counting model is impracticable due to data sparsity, Le and Zuidema (2014) propose the IORNN model to encode the context to generate each node in a dense vector.",
"IORNN The IORNN extends the idea of recursive neural networks (Socher et al., 2010) for constituent parsing where the inner representation of a node is computed bottom up.",
"It also adds a second vector to each node, an outer representation , which is computed top down.",
"The inner representation represents the content of the subtree at the current node, while the outer representation represents the context used to generate that node.",
"The model is further adapted to -order dependency trees with partial outer representations that represent the partial context while generating dependents from left to right.",
"For details on how to compute these representations, please refer to Le and Zuidema (2014).",
"Training The IORNN is trained to maximize the probability of generating each word w given its partial outer representation o w : L () = 1 m (cid:88) T D (cid:88) w T log P ( w | o w ) (1) where D is the set of dependency trees, and m is the total number of words.",
"In contrast to generative models, a discriminative reranker learns to distinguish the correct parse tree of a sentence from the incorrect ones.",
"Since the tree space is huge, one cannot generate all possible trees to train the model, but can only use a subset of the trees generated by the base parser.",
"Therefore, a discriminative reranker is only optimized for one specific parser and can easily overfit the error types of the k -best list.",
"The common idea of all models in this section is to encode the structure of a dependency tree via its node and/or edge representations.",
"Node representations are computed either recursively bottom-up ( RCNN ) or in a step-by-step recurrent manner ( GCN ).",
"RCNN A RCNN recursively encodes each subtree with regards to its children using a convolutional layer.",
"At each dependency node h , a RCNN module computes its hidden representation h and a plausibility score s ( h ) based on the representation of its children.",
"For details, see Zhu et al. (2015).",
"Given a sentence x and its dependency tree y , the score of y is computed by summing up the scores of all inner nodes h : s ( x, y, ) = (cid:88) h y s ( h ) (2) The network then outputs the predicted tree y from the input list gen ( x ) with the highest score: y = argmax y gen ( x ) s ( x, y, ) (3) The bottom-up fashion used in the RCNN can cause disproportion between the tree structure and its representation due to the order in the recursive computation.",
"Consider two trees that only differ in one edge.",
"Their node representations will be more similar if the edge appears higher up in the tree and less so if the edge is closer to the lower level, since the difference spreads to the upper level.",
"Thus, we believe that a discriminative reranker can benefit from a model that considers nodes in a tree more equally, as done in our GCN model below.",
"GCN GCNs have been used to encode nodes in a graph with (syntactic) information from their neighbors.",
"By stacking several layers of GCNs, the learned representation can capture information about directly connected nodes (with only one layer), or nodes that are K hops away (with K layers).",
"We adapt the syntactic gated GCNs for semantic role labeling from Marcheggiani and Titov (2017) to encode the parse trees in our experiments.",
"To our best knowledge, this is the first time GCNs are used for reranking in dependency parsing.",
"Let the hidden representation of node v after K GCN layers be h ( K ) v .",
"The plausibility score of each tree is the sum of the scores of all nodes in the tree: s ( x, y, ) = (cid:88) v y v h ( K ) v (4) Training Given an input sentence x , the input to the reranker is the corresponding correct parse tree y and a list of trees generated by a base parser gen ( x ) .",
"As in conventional ranking systems, all discriminative rerankers can be trained with a margin-based hinge loss so that the score of the correct tree is higher than the score of the incorrect one with a margin of at least m : L ( y, t ) = max(0 , s ( x, t, ) + m s ( x, y, )) t gen ( x ) \\ { y } (5) Zhu et al. (2015) use a structured margin m = ( y, t ) , which is computed by counting the number of incorrect edges of t with respect to y .",
"is a discount hyperparameter indicating the importance of to the loss.",
"In addition, the tree predicted by the model y (i.e., the highest scored tree) (3) is used to calculate the final loss.",
"Alternatively, the loss of the predicted tree can be replaced by the average loss over all trees in the list.",
"None of the models above does consider the scores from the base parser when ranking trees.",
"Therefore, it seems plausible to try combining the advantages from both models, base parser and reranker, to produce a better final model.",
"The most common way to do so is to consider the base parser and the reranker as a mixture model.",
"The score of any reranking model s r can be combined with the score of the base parser s b using a linear combination: s ( x, y ) = s r ( x, y, ) + (1 ) s b ( x, y ) (6) where [0 , 1] is a parameter.",
"We are now providing a systematic evaluation of different neural reranking models used to rank the k -best lists generated by different parsers.",
"In our first experiments, we try to reproduce the results for the available rerankers (IORNN, RCNN) on English.",
"After that, we compare the performance of the rerankers on German and Czech data.",
"Unless stated otherwise, results are compared based on UAS and LAS including punctuation .",
"English Following Zhu et al. (2015), we use the Penn Treebank (PTB) with standard splits: sections 2-21 for training, section 22 for development and section 23 for testing.",
"Their reranking models are applied to unlabeled trees.",
"The authors used the linear incremental parser from Huang and Sagae (2010) to produce k -best lists and achieved slight improvements due to differences in optimization.",
"In contrast, we obtained the data and pre-trained model from the public repository.",
"1 Although not emphasized in their paper, Zhu et al. (2015) obtained the top k parses from the forests (a by-product of dynamic programming) rather than by using beam search.",
"This is very important for reranking because the forest encodes exponentially many trees and so the k -best list extracted from the parse forest has a higher upper bound (Huang and Sagae, 2010).",
"Following previous work, we refer to the greedy, one-best results from the base parser as the baseline .",
"Oracle worst and best are the lower and upper bound accuracies of the trees in the k -best list, respectively.",
"Top tree results are calculated on the highest scored trees by the base parser in the list.",
"Table 1 shows that both our baseline and upper bound results are lower than those from Zhu et al. (2015).",
"Extracting the top trees from the parse forest results in a much higher upper bound (+3.97%, development set) compared to using beam search (+1.46%, although not shown here).",
"The maximum gain of our k -best list at k = 64 using the forest is about 1% lower than in Zhu et al. (2015).",
"We follow the original train/dev/test splits and use the predicted POS and morphological tags provided by the shared task organizers.",
"The top k parses are produced using the graph-based parser in the MATE tools (Bohnet, 2010), 2 a non-neural model that employs second order, approximate nonprojective parsing (McDonald and Pereira, 2006).",
"The algorithm first finds the highest scored projective tree with exact inference, then rearranges the edges one at a time as long as the overall score improves and the parse tree does not violate the tree constraint.",
"This algorithm also creates a list of k best trees through its search process.",
"We also tried to generate the k -best lists with a transition-based parser by adding a beam search decoder, but the beam failed to improve the parsing upper bound.",
"Czech We use the Czech Universal Dependencies (UD) Treebank, 3 based on the Prague Dependency Treebank 3.0 (Bejcek et al., 2013).",
"We use the original train/dev/test split and use MarMoT (Mueller et al., 2013) to predict UD POS tags by 5-way jackknifing.",
"The k -best lists are created using the same parser as for German.",
"The properties of the k -best lists extracted from the German and Czech data are shown in table 2.",
"Extracting the top k parses results in scores lower than the baseline when using the top trees as output, as the reranking scores do not always correlate with the quality of the trees.",
"Pre-trained word embeddings In all experiments on English, we use the 50-dimensional GloVe word embeddings (Pennington et al., 2014) trained on Wikipedia 2014 and Gigaword",
"5. For German, we train 100-dimensional dependency-based word embeddings (Levy and Goldberg, 2014) on the SdeWaC corpus (Faa and Eckart, 2013) with a cutoff frequency of 20 for both words and contexts and set the number of negative samples to 15.",
"In experiments on Czech, we reduce the number of dimensions of the word vectors from fastText (Bojanowski et al., 2017) to 100 using PCA (Raunak et al., 2019).",
"This section is dedicated to the reproduction of the published results for the IORNN and RCNN rerankers on the English PTB.",
"All results are from one run since we observe little variation between different runs 4 (and even between different settings the results hardly vary).",
"IORNN The results from Le and Zuidema (2014) can be reproduced with 93.01% UAS using the data and instructions from the public repository 5 .",
"We are able to replicate this trend on our unlabeled English data described in 4.1, i.e., the reranking results are better than the baseline.",
"The IORNN 4 For instance, the standard deviations of 5 runs on the development and test sets are dev = 0 .",
"mixture model achieves 92.06% UAS on the test set, which is lower than the reproduced results on the paper's original data.",
"Our baseline, however, is also lower due to the use of different data conversion rules for the conversion from constituency trees to dependencies, and the use of different base parsers.",
"Note that Le and Zuidema (2014) also optimize the results on k while we keep k fixed in our experiments to make the results comparable between the different models.",
"In addition, the authors do a logarithmic scaling for the score of the reranker in the mixture model combination (equa-tion 6) and we use this function as it is. 6,7 Table 3 summarizes the results from our reproduction study.",
"RCNN Since the code is not publicly available, we re-implemented the RCNN model following the description in the paper (Zhu et al., 2015).",
"However, we were not able to reproduce the results on the 10-best list extracted from the parse forest.",
"The authors report 93.83% (+1.48) UAS without punctuation using the mixture reranker with k = 64 , and the same trend sets for all k .",
"All our attempts to get better results than the base parser fail.",
"Even when combining the reranking score with the score from the base parser, results do not improve over the baseline.",
"We run an ablation study to investigate the effect of different hyperparameters on the model's performance.",
"We achieve best scores (UAS 90.65% and 90.29%) on both development and test set when removing L2 and structured margin and replacing 6 The IORNN code does not output the reranking scores to train a mixture model separately.",
"7 Applying a scaling to either score only affects the range of the combination parameter , not the final results.",
"the largest margin with the average margin.",
"However, one thing we noted during training is that the learning curves indicate severe overfitting.",
"In conclusion, despite our efforts we were not able to reproduce the RCNN results from Zhu et al. (2015).",
"RCNN-shared As the learning curves for the RCNN models show severe overfitting, we propose to simplify the original model.",
"The original RCNN has a large number of parameters, due to its use of different weight matrices and vectors for the POS tags of the current head-child pair.",
"In the simplified model, we replace those matrices W ( h,c ) and vectors v ( h,c ) with a shared matrix W and vector v .",
"Word embeddings and POS embeddings (randomly initialized) are concatenated as the input to the RCNN.",
"Following common practice, we also test a model where we place several BiLSTM layers before the RCNNs to learn better representations from the input embeddings (+BiLSTMs).",
"By switching from RCNN to the RCNN-shared model, we are now able to beat the baseline, even though by only a small margin (UAS 90.65% and 90.29% on the dev and test sets respectively).",
"We also study the effect of k to the model's performance (table 4).",
"Training the reranker on a larger k -best list 8 improves the UAS by 0.36% on the development set, which shows that the model learns better with more negative examples.",
"Increasing k at test time, on the other hand, hurts performance because the longer list now contains more low quality trees.",
"The drop caused by using a longer list at test time is also smaller (0.20% vs 0.68%) when the model is trained with more trees.",
"We now present results for our new GCN reranking model on the English data.",
"The best GCN model 8 In practice, we do not train on the whole k -best trees when k is large, but down-sample k in each batch to keep the training time efficient.",
"See the appendix for details.",
"(using 1 BiLSTM layer and 3 GCN layers) trained on k = 64 parse trees significantly outperforms the RCNN-shared model 9 with 92.40% UAS on the development set, compared to 91.86% for RCNN-shared ( p < . 001 ), an increase of +0.54%.",
"The best results for the different reranking models on the PTB test set are summarized in table",
"5. We include in the table the results for reranking the top parse trees of different sizes ( k = 10 , 64 ).",
"Reranker is the ranked list produced by the reranking model only.",
"Mixture is the result for combining the output score given by the rerankers and the score of the base parser as described in 3.3.",
"Following Zhu et al. (2015), we do not use the exact linear equation (6), but do logarithmic scaling of the base parser's score.",
"The parameter is optimized based on the results on the development set, which has the same k as the test set.",
"Since the correct tree is not always in the k -best list, we also show an upper bound performance for our 9 We did not do a hyperparameter optimization, but increased the number of parameters in the best RCNN-shared models and observed no significant improvement.",
"rerankers where we manually add the gold trees to the input list ( with oracle ).",
"Note that with oracle is the result from the reranker, not from the mixture reranking model because the correct tree does not have a score from the base parser if it is not included in the k -best list.",
"Combining the score from both the reranker and the base parser consistently improves over the reranking score alone (except for the GCN reranker k test = 10 ), which confirms our hypothesis that the parser and the reranker complement each other by looking at different scoring criteria.",
"Although the accuracy drops when reranking longer lists, the mixture scores are always higher.",
"Compared to the RCNN-shared models, the GCN models benefit less from the mixture models, maybe because the Model UAS LAS Baseline 90.19 87.90 Top tree 88.36 86.28 IORNN ( k train = 10 ) Reranker ( k test = 10 ) 89.32 87.16 Mixture ( = 0 . 91 ) 89.47 87.41 RCNN-shared ( k train = 50 ) Reranker ( k test = 50 ) 89.50 86.12 Mixture ( = 0 . 1 ) 90.12 87.87 With oracle 92.76 90.06 GCN ( k train = 50 ) Reranker ( k test = 50 ) 89.96 87.50 Mixture ( = 0 . 11 ) 90.33 88.21 With oracle 94.29 92.85 Table 7: Performance of different rerankers on the German SPMRL test set.",
"GCNs rank trees more similar to the base parser.",
"The upper bound performance ( with oracle ) shows that we can still improve results with a better k -best list.",
"Interestingly, although we achieve modest improvements compared to Zhu et al. (2015), our upper bound is higher than theirs.",
"A comparison of results with the original RCNN paper on their data is given in table",
"6. 4.4 Neural Reranking for MRLs We now evaluate the reranking models that have proved to be effective for English (IORNN, RCNN-shared (+BiLSTMs) and GCNs) on German and Czech data.",
"Note that the RCNN model only ranks unlabeled trees while the other two models also consider the dependency labels, which is particularly important for non-configurational languages.",
"All models are trained with the same hyperparameter settings as for English.",
"The mixture scores are combined using equation 6 except that we optimize the IORNN mixture model using the original tool provided by the authors.",
"The results for the different reranking models are presented in table 7 and 8.",
"Neither the IORNN nor the RCNN-shared reranker can surpass the baseline.",
"The GCN mixture model is the only model that shows significant improvements over the other models ( p < . 001 ) including the baseline, although small ( 0.15-0.3% LAS).",
"Taking a closer look at different grammatical functions in the output, we can see a clear differ-Model UAS LAS Baseline 91.87 88.85 Top tree 91.02 88.28 IORNN ( k train = 10 ) Reranker ( k test = 10 ) 91.07 87.97 Mixture ( = 0 . 94 ) 91.42 88.54 RCNN-shared ( k train = 50 ) Reranker ( k test = 50 ) 90.68 86.63 Mixture ( = 0 . 07 ) 91.79 88.80 With oracle 93.28 89.99 GCN ( k train = 50 ) Reranker ( k test = 50 ) 91.12 87.84 Mixture ( = 0 . 09 ) 91.89 89.01 With oracle 94.47 92.42 Table 8: Performance of different rerankers on the Czech UD test set.",
"ence between the reranking results and the baseline (table 9).",
"Although the overall accuracy is similar, our reranking results show a better performance for core arguments (nsubj: subject, obj: direct object, iobj: indirect object) and conjunctions (conj).",
"Through our experiments, we have shown that neural reranking models, which have demonstrated their effectiveness on English data, fail to improve baseline parsing results when applied to German and Czech.",
"This brings us to the question whether this failure is due to the differences between the languages or simply due to the lower quality in the German and Czech k -best lists that are input to the rerankers.",
"It is conceivable that language-specific properties such as the freer word order and richer morphology in German and Czech might make it harder for our models to learn a good representation capturing the quality of a specific parse tree.",
"However, when we add the correct parse tree to the k -best list ( with oracle results in table 5, 7 and 8), the accuracy goes up to 94% for English, German and Czech, which effectively eliminates the first reason.",
"This points to the method used to obtain the k 0 20 40 60 88 89 90 91 92 Gold tree ratio (%) UAS English mixture German mixture Czech mixture Figure 1: UAS for the GCN reranking mixture model with respect to the gold tree ratio in the k -best lists.",
"best list as the main factor responsible for the low results for German and Czech.",
"Beam search, although being straightforward to implement, fails to create high quality k -best lists for the base parsers used for both languages ( 4.1).",
"While several projective parsers support k -best parsing (Huang and Sagae, 2010; McDonald and Pereira, 2006), there is, to the best of our knowledge, no out-of-the-box parsing system that implements an effective non-projective k -best parsing algorithm (as, for example, Hall (2007)'s algorithm).",
"Gold tree ratio Clearly, the (upper bound) tree accuracy in the k -best list determines the reranking performance.",
"In all datasets, we observe that the accuracy decreases when sentence length increases.",
"Overall, the (unlabeled) tree accuracy in the English k -best list is 5% higher than in the German data, but is behind that in the Czech data.",
"This, however, is not caused by a larger amount of long sentences in the German data.",
"For sentences of same length, the top k trees from the PTB contain more gold trees than those from the German SPMRL and Czech UD datasets.",
"We further study the effect of the gold tree ratio for reranking by removing the gold trees from the k best list to reduce the ratio to a certain level.",
"Figure 1 shows that the gold tree ratio strongly correlates with the reranking results.",
"k -best list variation We measure the variation between the trees in the k -best lists by calculating the standard deviation of their UAS.",
"Figure 2 illustrates the UAS standard deviation distribution in the data for the three languages for k = 10 .",
"In each dataset, the tree UAS variation in the English data is the highest, followed by German and then Czech, which shows that the re-arranging method used to generate German and Czech k -best trees tends to return more similar trees.",
"We hypothesize train dev test 0 20 40 Figure 2: Tree UAS standard deviation of 10-best lists.",
"that reranking benefits from diversity, especially if the data contains hard negative examples (incorrect trees that are very similar to the correct one).",
"The gap between reranker performance and with oracle results shows that the reranker is able to detect the correct tree among the incorrect ones because they are very different from each other.",
"Reranking models Among the neural rerankers, the RCNNs are prone to error propagation from the lower levels, and the IORNNs are sensitive to the order of the child nodes.",
"Both models did not work very well when moving to German and Czech compared to the GCNs, which disregard the top-down or left-to-right order.",
"In practice, parser output reranking is not a very cost effective way to improve parsing performance, unless we have a fast way to generate high quality output trees.",
"However, the small improvement in core arguments might be useful for downstream applications that require high quality prediction of core arguments.",
"We have evaluated recent neural techniques for reranking dependency parser output for English, German and Czech and presented a novel reranking model, based on graph convolutional networks (GCNs).",
"We were able to reproduce results for English, using existing rerankers, and showed that our novel GCN-based reranker even outperformed them.",
"However, none of the rerankers works well on the two morphologically rich(er) languages.",
"Our analysis gave some insights into this issue.",
"We showed that the failure of the rerankers to improve results for German and Czech over the baseline is due to the lower quality of the k -best lists.",
"Here the gold tree ratio in the k -best list plays an important role, as the discriminative rerankers are very well able to distinguish the gold trees from other trees in the list, but their performance drops notably when we remove the gold trees from the list.",
"In addition, we observe a higher diversity in the English k -best list, as compared to German and Czech, which helps the rerankers to learn the differences between highand low-quality trees.",
"We conclude that the prerequisite for improving dependency parsing with neural reranking is a diverse k -best list with a high gold-tree ratio.",
"The latter is much harder to achieve for MRLs where the freer word order and high amount of non-projectivity result in a larger number of tree candidates, reflected by a lower gold tree ratio.",
"This work was supported by the Leibniz Science Campus Empirical Linguistics and Computational Modeling, funded by the Leibniz Association under grant no.",
"SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art (MWK) of the state of Baden-Wurttemberg."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"method",
"result",
"result",
"abstain",
"other",
"other"
] |
[
"Learning to follow instructions is of fundamental importance to autonomous agents for vision-and-language navigation (VLN).",
"In this paper, we study how an agent can navigate long paths when learning from a corpus that consists of shorter ones.",
"We show that existing state-of-the-art agents do not generalize well.",
"To this end, we propose BabyWalk, a new VLN agent that is learned to navigate by decomposing long instructions into shorter ones (BabySteps) and completing them sequentially.",
"A special design memory buffer is used by the agent to turn its past experiences into contexts for future steps.",
"The learning process is composed of two phases.",
"In the first phase, the agent uses imitation learning from demonstration to accomplish BabySteps.",
"In the second phase, the agent uses curriculum-based reinforcement learning to maximize rewards on navigation tasks with increasingly longer instructions.",
"We create two new benchmark datasets (of long navigation tasks) and use them in conjunction with existing ones to examine BabyWalk's generalization ability.",
"Empirical results show that BabyWalk achieves state-of-the-art results on several metrics, in particular, is able to follow long instructions better.",
"The codes and the datasets are released on our project page https://github.com/ Sha-Lab/babywalk .",
"Autonomous agents such as household robots need to interact with the physical world in multiple modalities.",
"As an example, in vision-and-language navigation (VLN) (Anderson et al., 2018), the agent moves around in a photo-realistic simulated environment (Chang et al., 2017) by following a sequence of natural language instructions.",
"To infer its whereabouts so as to decide its moves, the Author contributed equally On leave from University of Southern California agent infuses its visual perception, its trajectory and the instructions (Fried et al., 2018; Anderson et al., 2018; Wang et al., 2019; Ma et al., 2019a,b).",
"Arguably, the ability to understand and follow the instructions is one of the most crucial skills to acquire by VLN agents.",
"Jain et al. (2019) shows that the VLN agents trained on the originally proposed dataset ROOM 2R OOM ( i.e . R 2 R thereafter) do not follow the instructions, despite having achieved high success rates of reaching the navigation goals.",
"They proposed two remedies: a new dataset ROOM 4R OOM (or R 4 R ) that doubles the path lengths in the R 2 R , and a new evaluation metric Coverage weighted by Length Score (CLS) that measures more closely whether the ground-truth paths are followed.",
"They showed optimizing the fidelity of following instructions leads to agents with desirable behavior.",
"Moreover, the long lengths in R 4 R are informative in identifying agents who score higher in such fidelity measure.",
"In this paper, we investigate another crucial aspect of following the instructions: can a VLN agent generalize to following longer instructions by learning from shorter ones?",
"This aspect has important implication to real-world applications as collecting annotated long sequences of instructions and training on them can be costly.",
"Thus, it is highly desirable to have this generalization ability.",
"After all, it seems that humans can achieve this effortlessly 1 .",
"To this end, we have created several datasets of longer navigation tasks, inspired by R 4 R (Jain et al., 2019).",
"We trained VLN agents on R 4 R and use the agents to navigate in ROOM 6R OOM ( i.e ., R 6 R ) and ROOM 8R OOM ( i.e ., R 8 R ).",
"We contrast to the performance of the agents which are trained on those datasets directly (in-domain).",
"The results 1 Anecdotally, we do not have to learn from long navigation experiences.",
"Instead, we extrapolate from our experiences of learning to navigate in shorter distances or smaller spaces (perhaps a skill we learn when we were babies or kids).",
"Our findings are that the agents trained on R 4 R (denoted by the purple and the pink solid lines) perform significantly worse than the in-domain agents (denoted the light blue dashed line).",
"Also interestingly, when such out-of-domain agents are applied to the dataset R 2 R with shorter navigation tasks, they also perform significantly worse than the corresponding in-domain agent despite R 4 R containing many navigation paths from R 2 R .",
"Note that the agent trained to optimize the aforementioned fidelity measure ( RCM (fidelity)) performs better than the agent trained to reach the goal only ( RCM (goal)), supporting the claim by Jain et al. (2019) that following instructions is a more meaningful objective than merely goal-reaching.",
"Yet, the fidelity measure itself is not enough to enable the agent to transfer well to longer navigation tasks.",
"To address these deficiencies, we propose a new approach for VLN.",
"The agent follows a long navigation instruction by decomposing the instruction into shorter ones (micro-instructions, i.e ., BABYSTEP s), each of which corresponds to an intermediate goal/task to be executed sequentially.",
"To this end, the agent has three components:",
"(a) a memory buffer that summarizes the agent's experiences so that the agent can use them to provide the context for executing the next BABY-STEP .",
"(b) the agent first learns from human experts in bite-size.",
"Instead of trying to imitate to achieve the ground-truth paths as a whole, the agent is given the pairs of a BABY-STEP and the corresponding human expert path so that it can learn policies of actions from shorter instructions.",
"(c) In the second stage of learning, the agent refines the policies by curriculum-based reinforcement learning, where the agent is given increasingly longer navigation tasks to achieve.",
"In particular, this curriculum design reflects our desiderata that the agent optimized on shorter tasks should generalize well to slightly longer tasks and then much longer ones.",
"While we do not claim that our approach faithfully simulates human learning of navigation, the design is loosely inspired by it.",
"We name our approach BABYWALK and refer to the intermediate navigation goals in",
"(b) as BABY-STEP s.",
"Fig. 1 shows that BABYWALK (the red solid line) significantly outperforms other approaches and despite being out-of-domain, it even reach the performance of in-domain agents on R 6 R and R 8 R .",
"The effectiveness of BABYWALK also leads to an interesting twist.",
"As mentioned before, one of the most important observations by Jain et al. (2019) is that the original VLN dataset R 2 R fails to reveal the difference between optimizing goal-reaching (thus ignoring the instructions) and optimizing the fidelity (thus adhering to the instruc-tions).",
"Yet, leaving details to section 5, we have also shown that applying BABYWALK to R 2 R can lead to equally strong performance on generalizing from shorter instructions ( i.e ., R 2 R ) to longer ones.",
"In summary, in this paper, we have demonstrated empirically that the current VLN agents are ineffective in generalizing from learning on shorter navigation tasks to longer ones.",
"We propose a new approach in addressing this important problem.",
"We validate the approach with extensive benchmarks, including ablation studies to identify the effectiveness of various components in our approach.",
"Vision-and-Language Navigation (VLN) Recent works (Anderson et al., 2018; Thomason et al., 2019; Jain et al., 2019; Chen et al., 2019; Nguyen and Daum III, 2019) extend the early works of instruction based navigation (Chen and Mooney, 2011; Kim and Mooney, 2013; Mei et al., 2016) to photo-realistic simulated environments.",
"For instance, Anderson et al. (2018) proposed to learn a multi-modal Sequence-to-Sequence agent (Seq2Seq) by imitating expert demonstration.",
"Fried et al. (2018) developed a method that augments the 2541 paired instruction and demonstration data using a learned speaker model, to teach the navigation agent to better understand instructions.",
"Wang et al. (2019) further applies reinforcement learning (RL) and self-imitation learning to improve navigation agents.",
"Ma et al. (2019a,b) designed models that track the execution progress for a sequence of instructions using soft-attention.",
"Different from them, we focus on transferring an agent's performances on shorter tasks to longer ones.",
"This leads to designs and learning schemes that improve generalization across datasets.",
"We use a memory buffer to prevent mistakes in the distant past from exerting strong influence on the present.",
"In imitation learning stage, we solve fine-grained subtasks (BABY-STEP s) instead of asking the agent to learn the navigation trajectory as a whole.",
"We then use curriculum-based reinforcement learning by asking the agent to follow increasingly longer instructions.",
"Transfer and Cross-domain Adaptation There have been a large body of works in transfer learning and generalization across tasks and environments in both computer vision and reinforcement learning (Andreas et al., 2017; Oh et al., 2017; Zhu et al., 2017a,b; Sohn et al., 2018; Hu et al., 2018).",
"Of particular relevance is the recent work on adapting VLN agents to changes in visual environments (Huang et al., 2019; Tan et al., 2019).",
"To our best knowledge, this work is the first to focus on adapting to a simple aspect of language variability the length of the instructions.",
"Curriculum Learning Since proposed in (Ben-gio et al., 2009), curriculum learning was successfully used in a range of tasks: training robots for goal reaching (Florensa et al., 2017), visual question answering (Mao et al., 2019), image generation (Karras et al., 2018).",
"To our best knowledge, this work is the first to apply the idea to learning in VLN.",
"In the VLN task, the agent receives a natural language instruction X composed of a sequence of sentences.",
"We model the agent with an Markov Decision Process (MDP) which is defined as a tuple of a state space S , an action space A , an initial state s 1 , a stationary transition dynamics : SA S , a reward function r : S A R , and the discount factor for weighting future rewards.",
"The agent acts according to a policy : S A 0 R + .",
"The state and action spaces are defined the same as in (Fried et al., 2018) (cf. 4.4 for details).",
"For each X , the sequence of the pairs ( s , a ) is called a trajectory Y = (cid:2) s 1 , a 1 , . . . , s | Y | , a | Y | (cid:3) where || denotes the length of the sequence or the size of a set.",
"We use a to denote an action taken by the agent according to its policy.",
"Hence, Y denotes the agent's trajectory, while Y (or a ) denotes the human expert's trajectory (or action).",
"The agent is given training examples of (X , Y) to optimize its policy to maximize its expected rewards.",
"In our work, we introduce additional notations in the following.",
"We will segment a (long) instruction X into multiple shorter sequences of sentences { x m , m = 1 , 2 , , M } , to which we refer as BABY-STEP s.",
"Each x m is interpreted as a microinstruction that corresponds to a trajectory by the agent y m and is aligned with a part of the human expert's trajectory, denoted as y m .",
"While the alignment is not available in existing datasets for VLN, we will describe how to obtain them in a later section ( 4.3).",
"Throughout the paper, we also freely interexchange the term following the m th micro-instruction, executing the BABY-STEP x m , or complete the m th subtask.",
"We use t [1 , | Y | ] to denote the (discrete) time steps the agent takes actions.",
"Additionally, when the agent follows x m , for convenience, we sometimes use t m [1 , | y m | ] to index the time steps, instead of the global time t = t m + (cid:4) m 1 i =1 | y i | .",
"We describe in detail the 3 key elements in the design of our navigation agent:",
"(i) a memory buffer for storing and recalling past experiences to provide contexts for the current navigation instruction ( 4.1);",
"(ii) an imitation-learning stage of navigating with short instructions to accomplish a single BABY-STEP ( 4.2.1);",
"(iii) a curriculum-based reinforcement learning phase where the agent learns with increasingly longer instructions ( i.e . multiple BABY-STEP s) ( 4.2.2).",
"We describe new benchmarks created for learning and evaluation and key implementation details in 4.3 and 4.4 (with more details in the Appendix).",
"The basic operating model of our navigation agent BABYWALK is to follow a micro instruction x m ( i.e",
"., a short sequence of instructions, to which we (cid:6)(cid:23)(cid:29) (cid:7) (cid:26)(cid:8) (cid:22) (cid:11) (cid:22) (cid:2)(cid:14) (cid:19) (cid:27)(cid:17) (cid:19) (cid:16) (cid:19) (cid:3)(cid:4)(cid:6)(cid:7)(cid:8)(cid:10)(cid:1)(cid:2)(cid:9)(cid:5)(cid:5)(cid:4)(cid:8) (cid:4) (cid:7)(cid:23)(cid:26)(cid:27)(cid:25)(cid:28)(cid:16)(cid:27)(cid:20)(cid:24)(cid:23)(cid:1)(cid:26)(cid:17)(cid:18)(cid:22)(cid:17)(cid:23)(cid:27)(cid:15)(cid:27)(cid:20)(cid:24)(cid:23) (cid:1) (cid:10) (cid:27)(cid:19)(cid:1)(cid:5) (cid:4)(cid:5)(cid:14) (cid:3)(cid:11) (cid:12)(cid:6)(cid:10) (cid:2) (cid:16) (cid:27)(cid:17) (cid:16) (cid:27)(cid:17) (cid:16) (cid:27)(cid:17) (cid:28) (cid:28) (cid:28) (cid:3) (cid:29) (cid:3) (cid:29) (cid:3) (cid:29) (cid:9) (cid:21)(cid:23)(cid:19)(cid:19)(cid:18)(cid:20)(cid:24) (cid:12) (cid:13) (cid:5) (cid:4)(cid:5)(cid:14) (cid:13) (cid:4)(cid:9)(cid:8) (cid:10)(cid:24)(cid:21)(cid:20)(cid:16)(cid:30)(cid:1) (cid:15) (cid:10) (cid:5)(cid:25) (cid:10) (cid:5)(cid:25) (cid:5) (cid:5) (cid:6) (cid:6) Figure 2: The BABYWALK agent has a memory buffer storing its past experiences of instructions x m , and its trajectory y m .",
"also refer as BABY-STEP ), conditioning on the context z m and to output a trajectory y m .",
"A schematic diagram is shown in Fig. 2. Of particularly different from previous approaches is the introduction of a novel memory module.",
"We assume the BABYSTEP s are given in the training and inference time 4.3 explains how to obtain them if not given a prior (Readers can directly move to that section and return to this part afterwards).",
"The left of the Fig. 3 gives an example of those micro-instructions.",
"Context The context is a summary of the past experiences of the agent, namely the previous ( m 1) mini-instructions and trajectories: z m = g (cid:5) f SUMMARY ( x 1 , , x m 1 ) , f SUMMARY ( y 1 , , y m 1 ) (cid:6) (1) where the function g is implemented with a multilayer perceptron.",
"The summary function f SUMMARY is explained in below.",
"Summary To map variable-length sequences (such as the trajectory and the instructions) to a single vector, we can use various mechanisms such as LSTM.",
"We reported an ablation study on this in 5.3.",
"In the following, we describe the forgetting one that weighs more heavily towards the most recent experiences and performs the best empirically.",
"where the weights are normalized to 1 and inverse proportional to how far i is from m ,",
"is a hyper-parameter (we set to 1 / 2 ) and ( ) is a monotonically nondecreasing function and we simply choose the identity function.",
"Note that, we summarize over representations of micro-instructions ( x m ) and experiences of executing those micro-instructions y m .",
"The two encoders u ( ) and v ( ) are described in 4.4.",
"They are essentially the summaries of low-level details, i.e",
"., representations of a sequence of words, or a sequence of states and actions.",
"While existing work often directly summarizes all the low-level details, we have found that the current form of hierarchical summarizing ( i.e ., first summarizing each BABY-STEP , then summarizing all previous BABY-STEP s) performs better.",
"Policy The agent takes actions, conditioning on the context z m , and the current instruction x m : a t ( | s t , a t 1 ; u ( x m ) , z m ) (5) where the policy is implemented with a LSTM with the same cross-modal attention between visual states and languages as in (Fried et al., 2018).",
"The agent learns in two phases.",
"In the first one, imitation learning is used where the agent learns to execute BABY-STEP s accurately.",
"In the second one, the agent learns to execute successively longer tasks from a designed curriculum.",
"BABY-STEP s are shorter navigation tasks.",
"With the m th instruction x m , the agent is asked to follow the instruction so that its trajectory matches the human expert's y m .",
"To assist the learning, the context is computed from the human expert trajectory up to the m th BABY-STEP ( i.e",
"., in eq.",
"(1), y s are replaced with y s).",
"We maximize the objective (cid:6) = M (cid:7) m =1 | y m | (cid:7) t m =1 log ( a t m | s t m , a t m 1 ; u ( x m ) , z m ) We emphasize here each BABY-STEP is treated independently of the others in this learning regime.",
"Each time a BABY-STEP is to be executed, we preset the agent in the human expert's context (cid:7)(cid:24)(cid:28)(cid:29)(cid:27)(cid:30)(cid:15)(cid:29)(cid:20)(cid:25)(cid:24)(cid:25)(cid:18)(cid:1)(cid:28)(cid:30)(cid:14)(cid:3)(cid:29)(cid:13)(cid:28)(cid:21)(cid:28) (cid:12)(cid:13)(cid:27)(cid:23)(cid:30)(cid:26)(cid:5) (cid:7)(cid:8) (cid:8)(cid:23)(cid:26)(cid:25)(cid:17)(cid:17)(cid:34)(cid:27)(cid:17)(cid:28)(cid:30)(cid:36)(cid:29)(cid:14)(cid:17)(cid:20)(cid:13)(cid:32)(cid:21)(cid:26)(cid:28)(cid:30)(cid:26)(cid:15)(cid:26)(cid:24)(cid:27)(cid:23)(cid:17)(cid:30)(cid:17)(cid:29)(cid:21)(cid:25)(cid:19)(cid:23)(cid:17)(cid:29)(cid:31)(cid:14)(cid:4)(cid:30)(cid:13)(cid:29)(cid:22)(cid:29) (cid:10)(cid:8)(cid:1)(cid:15)(cid:30)(cid:27)(cid:27)(cid:20)(cid:15)(cid:30)(cid:22)(cid:30)(cid:23) (cid:8)(cid:17)(cid:15)(cid:29)(cid:30)(cid:27)(cid:17) (cid:2)(cid:4) (cid:10)(cid:26)(cid:1)(cid:17)(cid:34)(cid:27)(cid:17)(cid:28)(cid:30)(cid:1)(cid:16)(cid:17)(cid:24)(cid:26)(cid:3)(cid:1)(cid:23)(cid:17)(cid:13)(cid:28)(cid:25)(cid:1)(cid:18)(cid:28)(cid:26)(cid:24)(cid:1)(cid:17)(cid:34)(cid:30)(cid:17)(cid:28)(cid:25)(cid:13)(cid:23)(cid:1)(cid:28)(cid:17)(cid:33)(cid:13)(cid:28)(cid:16)(cid:29) (cid:6)(cid:1)(cid:29)(cid:31)(cid:14)(cid:4)(cid:30)(cid:13)(cid:29)(cid:22)(cid:1)(cid:19)(cid:21)(cid:32)(cid:17)(cid:25)(cid:1)(cid:20)(cid:21)(cid:29)(cid:30)(cid:26)(cid:28)(cid:35)(cid:15)(cid:26)(cid:25)(cid:30)(cid:17)(cid:34)(cid:30) (cid:3) (cid:8)(cid:17)(cid:15)(cid:29)(cid:30)(cid:27)(cid:17) (cid:2)(cid:29) (cid:30)(cid:1)(cid:15)(cid:26)(cid:25)(cid:29)(cid:17)(cid:15)(cid:31)(cid:30)(cid:21)(cid:32)(cid:17)(cid:1)(cid:29)(cid:31)(cid:14)(cid:4)(cid:30)(cid:13)(cid:29)(cid:22)(cid:29)(cid:1)(cid:19)(cid:21)(cid:32)(cid:17)(cid:25)(cid:1)(cid:20)(cid:21)(cid:29)(cid:30)(cid:26)(cid:28)(cid:35)(cid:1)(cid:15)(cid:26)(cid:25)(cid:30)(cid:17)(cid:34)(cid:30) (cid:3) (cid:8)(cid:17)(cid:15)(cid:29)(cid:30)(cid:27)(cid:17) (cid:2)(cid:11) (cid:12)(cid:20)(cid:17)(cid:1)(cid:33)(cid:20)(cid:26)(cid:23)(cid:17)(cid:1)(cid:30)(cid:13)(cid:29)(cid:22) (cid:17)(cid:34)(cid:21)(cid:30)(cid:1)(cid:30)(cid:20)(cid:17)(cid:1)(cid:28)(cid:26)(cid:26)(cid:24)(cid:1)(cid:30)(cid:20)(cid:17)(cid:25)(cid:1)(cid:19)(cid:26)(cid:1)(cid:29)(cid:30)(cid:28)(cid:13)(cid:21)(cid:19)(cid:20)(cid:30)(cid:1)(cid:13)(cid:25)(cid:16)(cid:1)(cid:30)(cid:31)(cid:28)(cid:25)(cid:1)(cid:23)(cid:17)(cid:18)(cid:30)(cid:5) (cid:19)(cid:26)(cid:1)(cid:29)(cid:30)(cid:28)(cid:13)(cid:21)(cid:19)(cid:20)(cid:30)(cid:1)(cid:31)(cid:25)(cid:30)(cid:21)(cid:23)(cid:1)(cid:35)(cid:26)(cid:31)(cid:1)(cid:27)(cid:13)(cid:29)(cid:29)(cid:1)(cid:13)(cid:25)(cid:1)(cid:17)(cid:35)(cid:17)(cid:1)(cid:15)(cid:20)(cid:13)(cid:28)(cid:30)(cid:1)(cid:27)(cid:21)(cid:15)(cid:30)(cid:31)(cid:28)(cid:17)(cid:1)(cid:18)(cid:28)(cid:13)(cid:24)(cid:17)(cid:1)(cid:26)(cid:25)(cid:1)(cid:30)(cid:20)(cid:17)(cid:1)(cid:23)(cid:17)(cid:18)(cid:30)(cid:1)(cid:33)(cid:13)(cid:23)(cid:23)(cid:1)(cid:30)(cid:20)(cid:17)(cid:25)(cid:1)(cid:33)(cid:13)(cid:21)(cid:30)(cid:1)(cid:30)(cid:20)(cid:17)(cid:28)(cid:17)(cid:5)(cid:1) (cid:19)(cid:26)(cid:1)(cid:29)(cid:30)(cid:28)(cid:13)(cid:21)(cid:19)(cid:20)(cid:30)(cid:5)(cid:1)(cid:27)(cid:13)(cid:29)(cid:29)(cid:1)(cid:30)(cid:20)(cid:17)(cid:1)(cid:14)(cid:13)(cid:28)(cid:1)(cid:33)(cid:21)(cid:30)(cid:20)(cid:1)(cid:30)(cid:20)(cid:17)(cid:1)(cid:29)(cid:30)(cid:26)(cid:26)(cid:23)(cid:29)(cid:5)(cid:1) 
(cid:33)(cid:13)(cid:23)(cid:22)(cid:1)(cid:29)(cid:30)(cid:28)(cid:13)(cid:21)(cid:19)(cid:20)(cid:30)(cid:1)(cid:31)(cid:25)(cid:30)(cid:21)(cid:23)(cid:1)(cid:35)(cid:26)(cid:31)(cid:1)(cid:19)(cid:17)(cid:30)(cid:1)(cid:30)(cid:26)(cid:1)(cid:13)(cid:1)(cid:30)(cid:13)(cid:14)(cid:23)(cid:17)(cid:1)(cid:33)(cid:21)(cid:30)(cid:20)(cid:1)(cid:15)(cid:20)(cid:13)(cid:21)(cid:28)(cid:29)(cid:1)(cid:30)(cid:20)(cid:17)(cid:25)(cid:1)(cid:29)(cid:30)(cid:26)(cid:27)(cid:5)(cid:1) (cid:5) (cid:11) (cid:5) (cid:10) (cid:5) (cid:9) (cid:5) (cid:8) (cid:6) (cid:8) (cid:6) (cid:9) (cid:6) (cid:10) (cid:6) (cid:11) (cid:9)(cid:17)(cid:15)(cid:30)(cid:31)(cid:28)(cid:17)(cid:1)(cid:2)(cid:6) (cid:11)(cid:17)(cid:33)(cid:13)(cid:28)(cid:16) (cid:5) (cid:11) (cid:2) (cid:5) (cid:8) (cid:6) (cid:8) (cid:5) (cid:9) (cid:6) (cid:9) (cid:5) (cid:10) (cid:6) (cid:10) (cid:12)(cid:6) (cid:11) (cid:1)(cid:7) (cid:11) (cid:2) (cid:2) BabyWalk (cid:11)(cid:17)(cid:33)(cid:13)(cid:28)(cid:16) (cid:12)(cid:6) (cid:10) (cid:12)(cid:6) (cid:10) (cid:12)(cid:6) (cid:11) (cid:9)(cid:17)(cid:15)(cid:30)(cid:31)(cid:28)(cid:17)(cid:1)(cid:2)(cid:7) (cid:5) (cid:8) (cid:6) (cid:8) (cid:5) (cid:9) (cid:6) (cid:9) (cid:5) (cid:10) (cid:2) (cid:2) (cid:2) (cid:5) (cid:11) (cid:1)(cid:7) (cid:11) (cid:5) (cid:10) (cid:5) (cid:8) (cid:6) (cid:8) (cid:5) (cid:9) (cid:6) (cid:9) (cid:2) (cid:2) (cid:1)(cid:7) (cid:10) BabyWalk BabyWalk (cid:6)(cid:17)(cid:15)(cid:25)(cid:23)(cid:26)(cid:25)(cid:28)(cid:20)(cid:29)(cid:20)(cid:25)(cid:24)(cid:1)(cid:25)(cid:18)(cid:1)(cid:13)(cid:1)(cid:24)(cid:13)(cid:31)(cid:20)(cid:19)(cid:13)(cid:29)(cid:20)(cid:25)(cid:24)(cid:1)(cid:29)(cid:13)(cid:28)(cid:21)(cid:1) (cid:10)(cid:8)(cid:1)(cid:15)(cid:30)(cid:27)(cid:27)(cid:20)(cid:15)(cid:30)(cid:22)(cid:30)(cid:23)(cid:1)(cid:16)(cid:17)(cid:28)(cid:20)(cid:19)(cid:24) (cid:9)(cid:20)(cid:26)(cid:17)(cid:22)(cid:20)(cid:24)(cid:17) (cid:4)(cid:13)(cid:5) (cid:11) (cid:14) (cid:4)(cid:13)(cid:5) (cid:11) (cid:14) (cid:4)(cid:13)(cid:5) (cid:10) (cid:14) Figure 3: Two-phase learning by BABYWALK .",
"and the last visited state.",
"We follow existing literature (Anderson et al., 2018; Fried et al., 2018) and use student-forcing based imitation learning, which uses agent's predicted action instead of the expert action for the trajectory rollout.",
"We want the agent to be able to execute multiple consecutive BABY-STEP s and optimize its performance on following longer navigation instructions (instead of the cross-entropy losses from the imitation learning).",
"However, there is a discrepancy between our goal of training the agent to cope with the uncertainty in a long instruction and the imitation learning agent's ability in accomplishing shorter tasks given the human annotated history .",
"Thus it is challenging to directly optimize the agent with a typical RL learning procedure, even the imitation learning might have provided a good initialization for the policy, see our ablation study in 5.3.",
"Inspired by the curriculum learning strategy (Bengio et al., 2009), we design an incremental learning process that the agent is presented with a curriculum of increasingly longer navigation tasks.",
"Fig. 3 illustrates this idea with two lec-tures.",
"Given a long navigation instruction X with M BABY-STEP s, for the k th lecture, the agent is given all the human expert's trajectory up to but not including the (M k + 1) th BABY-STEP , as well as the history context z M k +1 .",
"The agent is then asked to execute the k th micro-instructions from x M k +1 to x M using reinforcement learning to produce its trajectory that optimizes a task related R 2 R R 4 R R 6 R R 8 R Train seen instr.",
"metric, for instance the fidelity metric measuring how faithful the agent follows the instructions.",
"As we increase k from 1 to M , the agent faces the challenge of navigating longer and longer tasks with reinforcement learning.",
"However, the agent only needs to improve its skills from its prior exposure to shorter ones.",
"Our ablation studies show this is indeed a highly effective strategy.",
"To our best knowledge, this is the first work studying how well VLN agents generalize to long navigation tasks.",
"To this end, we create the following datasets in the same style as in (Jain et al., 2019).",
"ROOM 6R OOM and ROOM 8R OOM We concatenate the trajectories in the training as well as the validation unseen split of the ROOM 2R OOM dataset for 3 times and 4 times respectively, thus extending the lengths of navigation tasks to 6 rooms and 8 rooms.",
"To join, the end of the former trajectory must be within 0.5 meter with the beginning of the later trajectory.",
"Table 1 and Fig. 4 contrast the different datasets in the # of instructions, the average length (in words) of instructions and how the distributions vary.",
"Table 1 summarizes the descriptive statistics of BABY-STEP s across all datasets used in this paper.",
"The datasets and the segmentation/alignments are made publically available 2 .",
"In the following, we describe key information for research reproducibility, while the complete details are in the Appendix.",
"States and Actions We follow (Fried et al., 2018) to set up the states as the visual features ( i.e .",
"ResNet-152 features (He et al., 2016)) from the agent-centric panoramic views in 12 headings 3 elevations with 30 degree intervals.",
"Likewise, we use the same panoramic action space.",
"Identifying BABY-STEP s Our learning approach requires an agent to follow microinstructions ( i.e ., the BABY-STEP s).",
"Existing datasets (Anderson et al., 2018; Jain et al., 2019; Chen et al., 2019) do not provide fine-grained segmentations of long instructions.",
"Therefore, we use a template matching approach to aggregate consecutive sentences into BABY-STEP s.",
"First, we extract the noun phrase using POS tagging.",
"Then, we employs heuristic rules to chunk a long instruction into shorter segments according to punctuation and landmark phrase ( i.e ., words for concrete objects).",
"We document the details in the Appendix.",
"Without extra annotation , we propose a method to approximately chunk original expert trajectories into sub-trajectories that align with the BABYSTEP s.",
"This is important for imitation learning at the micro-instruction level ( 4.2.1).",
"Specifically, we learn a multi-label visual landmark classifier to identify concrete objects from the states along expert trajectories by using the landmark phrases 2 Available at https://github.com/Sha-Lab/ babywalk extracted from the their instructions as weak supervision.",
"For each trajectory-instruction pair, we then extract the visual landmarks of every state as well as the landmark phrases in BABY-STEP instructions.",
"Next, we perform a dynamic programming procedure to segment the expert trajectories by aligning the visual landmarks and landmark phrases, using the confidence scores of the multi-label visual landmark classifier to form the function.",
"Encoders and Embeddings The encoder u ( ) for the (micro)instructions is a LSTM.",
"The encoder for the trajectory y contains two separate Bi-LSTMs, one for the state s t and the other for the action a t .",
"The outputs of the two Bi-LSTMs are then concatenated to form the embedding function v ( ) .",
"The details of the neural network architectures ( i.e . configurations as well as an illustrative figure), optimization hyper-parameters, etc .",
"are included in the Appendix.",
"Learning Policy with Reinforcement Learning In the second phase of learning, BABYWALK uses RL to learn a policy that maximizes the fidelity-oriented rewards (CLS) proposed by Jain et al. (2019).",
"We use policy gradient as the optimizer (Sutton et al., 2000).",
"Meanwhile, we set the maximum number of lectures in curriculum RL to be 4, which is studied in Section 5.3.",
"We describe the experimental setup ( 5.1),fol-lowed by the main results in 5.2 where we show the proposed BABYWALK agent attains competitive results on both the in-domain dataset but also generalizing to out-of-the-domain datasets with varying lengths of navigation tasks.",
"We report results from various ablation studies in 5.3.",
"While we primarily focus on the ROOM 4R OOM dataset, we re-analyze the original ROOM 2R OOM dataset in 5.4 and were surprised to find out the agents trained on it can generalize.",
"Datasets We conduct empirical studies on the existing datasets ROOM 2R OOM and ROOM 4R OOM (Anderson et al., 2018; Jain et al., 2019), and the two newly created benchmark datasets ROOM 6R OOM and ROOM 8R OOM , described in 4.3.",
"Table 1 and Fig. 4 contrast their differences.",
"Evaluation Metrics We adopt the following metrics: Success Rate ( SR ) that measures the average rate of the agent stopping within a specified distance near the goal location (Anderson et al., 2018), Coverage weighted by Length Score ( CLS ) (Jain et al., 2019) that measures the fidelity of the agent's path to the reference, weighted by the length score, and the newly proposed Success rate weighted normalized Dynamic Time Warping ( SDTW ) that measures in more fine-grained details, the spatiotemporal similarity of the paths by the agent and the human expert, weighted by the success rate (Maga-lhaes et al., 2019).",
"Both CLS and SDTW measure explicitly the agent's ability to follow instructions and in particular, it was shown that SDTW corresponds to human preferences the most.",
"We report results in other metrics in the Appendix.",
"Agents to Compare to Whenever possible, for all agents we compare to, we either re-run, reimplement or adapt publicly available codes from their corresponding authors with their provided instructions to ensure a fair comparison.",
"We also sanity check by ensuring the results from our implementation and adaptation replicate and are comparable to the reported ones in the literature.",
"We compare our BABYWALK to the following: (1) the SEQ 2 SEQ agent (Anderson et al., 2018), being adapted to the panoramic state and action space used in this work; (2) the Speaker Follower ( SF ) agent (Fried et al., 2018); (3) the Reinforced Cross-Modal Agent ( RCM ) (Wang et al., 2019) that refines the SF agent using reinforcement learning with either goal-oriented reward ( RCM ( GOAL )) or fidelity-oriented reward ( RCM ( FIDELITY )); (4) the Regretful Agent ( REGRETFUL ) (Ma et al., 2019b) that uses a progress monitor that records visited path and a regret module that performs backtracking; (5) the Frontier Aware Search with Backtracking agent ( FAST ) (Ke et al., 2019) that incorporates global and local knowledge to compare partial trajectories in different lengths.",
"The last 3 agents are reported having state-of-the art results on the benchmark datasets.",
"Except the SEQ 2 SEQ agent, all other agents depend on an additional pre-training stage with data augmentation (Fried et al., 2018), which improves cross-board.",
"Thus, we train two BABYWALK agents: one with and the other without the data augmentation.",
"In-domain Generalization This is the standard evaluation scenario where a trained agent is assessed on the unseen split from the same dataset as the training data.",
"The leftmost columns in Table 2 reports the results where the training data is from R 4 R .",
"The BABYWALK agents outperform all other agents when evaluated on CLS and SDTW .",
"When evaluated on SR , FAST performs the best and the BABYWALK agents do not stand out.",
"This is expected: agents which are trained to reach goal do not necessarily lead to better instruction-following.",
"Note that RCM ( FIDELITY ) performs well in path-following.",
"Out-of-domain Generalization While our primary goal is to train agents to generalize well to longer navigation tasks, we are also curious how the agents perform on shorter navigation tasks too.",
"The right columns in Table 2 report the comparison.",
"The BABYWALK agents outperform all other agents in all metrics except SR .",
"In particular, on 2546 Figure 5: Performance by various agents on navigation tasks in different lengths.",
"SDTW , the generalization to R 6 R and R 8 R is especially encouraging, resulting almost twice those of the second-best agent FAST .",
"Moreover, recalling from Fig. 1, BABYWALK 's generalization to R 6 R and R 8 R attain even better performance than the RCM agents that are trained in-domain .",
"Fig. 5 provides additional evidence on the success of BABYWALK , where we have contrasted to its performance to other agents' on following instructions in different lengths across all datasets .",
"Clearly, the BABYWALK agent is able to improve very noticeably on longer instructions.",
"Qualitative Results Fig. 6 contrasts visually several agents in executing two (long) navigation tasks.",
"BABYWALK 's trajectories are similar to what human experts provide, while other agents' are not.",
"Memory Buffer is Beneficial Table 3 illustrates the importance of having a memory buffer to summarize the agent's past experiences.",
"Without the memory ( NULL ), generalization to longer tasks is significantly worse.",
"Using LSTM to summarize is worse than using forgetting to summarize (eqs.",
"(2,3)).",
"Meanwhile, ablating of the forgetting Setting R 4 R R 4 R R 4 R others Metrics SR CLS SDTW SR CLS SDTW IL 24.7 27.9 11.1 24.2 25.8 10.2 IL + RL 25.0 45.5 13.6 25.0 43.8 14.1 IL + CRL w/ LECTURE # 1 st 24.1 44.8 13.5 24.1 43.1 13.6 2 nd 26.7 45.9 15.2 26.2 43.7 14.8 3 rd 27.9 47.4 17.0 26.7 45.4 16.3 4 th 27.3 49.4 17.3 27.6 47.9 17.5 Table 4: BABYWALK 's performances with curriculum-based reinforcement learning ( CRL ), which improves imitation learning without or with reinforcement learning ( IL + RL ).",
"mechanism concludes that = 0 .",
"5 is the optimal to our hyperparameter search.",
"Note that when = 0 , this mechanism degenerates to taking average of the memory buffer, and leads to inferior results.",
"Curriculum-based RL (CRL) is Important Table 4 establishes the value of CRL.",
"While imitation learning ( IL ) provides a good warm-up for SR , significant improvement on other two metrics come from the subsequent RL ( IL + RL ).",
"Furthermore, CRL (with 4 lectures) provides clear improvements over direct RL on the entire instruction ( i.e ., learning to execute all BABY-STEP s at once).",
"Each lecture improves over the previous one, especially in terms of the SDTW metric.",
"Our experimental study has been focusing on using R 4 R as the training dataset as it was established that as opposed to R 2 R , R 4 R distinguishes well an agent who just learns to reach the goal from an agent who learns to follow instructions.",
"Given the encouraging results of generalizing to longer tasks, a natural question to ask, how well 2547 HUMANBABYWALK RCM SF SEQ 2 SEQ Figure 6: Trajectories by human experts and VLN agents on two navigation tasks.",
"can an agent trained on R 2 R generalize?",
"Results in Table 5 are interesting.",
"Shown in the top panel, the difference in the averaged performance of generalizing to R 6 R and R 8 R is not significant.",
"The agent trained on R 4 R has a small win on R 6 R presumably because R 4 R is closer to R 6 R than R 2 R does.",
"But for even longer tasks in R 8 R , the win is similar.",
"In the bottom panel, however, it seems that R 2 R R 4 R is stronger (incurring less loss in performance when compared to the in-domain setting R 4 R R 4 R ) than the reverse direction ( i.e ., comparing R 4 R R 2 R to the in-domain R 2 R R 2 R ).",
"This might have been caused by the noisier segmentation of long instructions into BABY-STEP s in R 4 R .",
"(While R 4 R is composed of two navigation paths in R 2 R , the segmentation algorithm is not aware of the natural boundaries between the two paths.) 6 Discussion There are a few future directions to pursue.",
"First, despite the significant improvement, the gap between short and long tasks is still large and needs to be further reduced.",
"Secondly, richer and more complicated variations between the learning setting and the real physical world need to be tackled.",
"For instance, developing agents that are robust to variations in both visual appearance and instruction descriptions is an important next step.",
"Acknowledgments We appreciate the feedback from the reviewers.",
"This work is partially supported by NSF Awards IIS-1513966/1632803/1833137, CCF-1139148, DARPA Award#: FA8750-18-2-0117, DARPA-D3M Award UCB-00009528, Google Research Awards, gifts from Face-book and Netflix, and ARO# W911NF-12-1-0241 and W911NF-15-1-0484."
] | [
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"objective",
"method",
"other",
"other",
"objective",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Detecting disclosures of individuals' employment status on social media can provide valuable information to match job seekers with suitable vacancies, offer social protection, or measure labor market flows.",
"However, identifying such personal disclosures is a challenging task due to their rarity in a sea of social media content and the variety of linguistic forms used to describe them.",
"Here, we examine three Active Learning (AL) strategies in real-world settings of extreme class imbalance, and identify five types of disclosures about individuals' employment status (e.g. job loss) in three languages using BERT-based classification models.",
"Our findings show that, even under extreme imbalance settings, a small number of AL iterations is sufficient to obtain large and significant gains in precision, recall, and diversity of results compared to a supervised baseline with the same number of labels.",
"We also find that no AL strategy consistently outperforms the rest.",
"Qualitative analysis suggests that AL helps focus the attention mechanism of BERT on core terms and adjust the boundaries of semantic expansion, highlighting the importance of interpretable models to provide greater control and visibility into this dynamic learning process.",
"Up-to-date information on individuals' employment status is of tremendous value for a wide range of economic decisions, from firms filling job vacancies to governments designing social protection systems.",
"At the aggregate level, estimates of labor market conditions are traditionally based on nationally representative surveys that are costly to produce, especially in lowand middle-income countries (Devarajan, 2013; Jerven, 2013).",
"As social media becomes more ubiquitous all over the world, more individuals can now share their employment status with peers and unlock the social capital of their networks.",
"This, in turn, can provide a new lens to examine the labor market and devise Lost Job Was hired Job offer Is unemployed Looking for a job Needing a job cuz I dnt like nt makin money.",
"policy, especially in countries where traditional measures are lagging or unreliable.",
"A key challenge in using social media to identify personal disclosures of employment status is that such statements are extremely rare in an abundance of social media content roughly one in every 10,000 posts which renders random sampling ineffective and prohibitively costly for the development of a large labeled dataset.",
"On the other hand, simple keyword-based approaches run the risk of providing seemingly high-accuracy classifiers while substantially missing linguistic variety used to describe events such as losing a job, looking for a job, or starting a new position (see Figure 1 for example).",
"In the absence of a high-quality, comprehensive, and diverse ground-truth about personal employment disclosures, it is difficult to develop classification models that accurately capture the flows in and out of the labor market in any country, let alone robustly estimating it across multiple countries.",
"Furthermore, state-of-the-art deep neural models provide little visibility into or control over the linguistic patterns captured by the model, which hampers the ability of researchers and practitioners to determine whether the model has truly learned new linguistic forms and sufficiently converged.",
"Active Learning (AL) is designed for settings where there is an abundance of unlabeled examples and limited labeling resources (Cohn et al., 1994).",
"It aims to focus the learning process on the most informative samples and maximize model perfor-6564 mance for a given labeling budget.",
"In recent years, AL proved successful in several settings, including policy-relevant tasks involving social media data (Pohl et al., 2018; Palakodety et al., 2020).",
"The success of pre-trained language models such as BERT (Devlin et al., 2019) in a variety of language understanding tasks has sparked interest in using AL with these models for imbalanced text classification.",
"Yet, most research in this field has focused on artificially-generated rarity in data or imbalance that is not as extreme as the present setting (Ein-Dor et al., 2020; Schrder et al., 2021).",
"Therefore, there is no evidence of the efficiency of AL using BERT-based models for sequence classification in real-world settings with extreme imbalance.",
"It is unclear whether some AL strategies will perform significantly better than others in these settings, how quickly the different strategies will reach convergence (if at all), and how the different strategies will explore the linguistic space.",
"In this work, we leverage BERT-based models (Devlin et al., 2019) in three different AL paradigms to identify tweets that disclose an individual's employment status or change thereof.",
"We train classifiers in English, Spanish, and Portuguese to determine whether the author of a tweet recently lost her job, was recently hired, is currently unemployed, posting to find a job, or posting a job offer.",
"We use two standard AL strategies, Uncertainty Sampling (Lewis and Gale, 1994) and Adaptive Retrieval (Mussmann et al., 2020), and propose a novel strategy we name Exploit-Explore Retrieval that uses k-skip-n-grams (n-grams with k skipped tokens) to explore the space and provide improved interpretability.",
"We evaluate the models both quantitatively and qualitatively across languages and AL strategies, and compare them to a supervised learning baseline with the same number of labels.",
"Therefore, our contributions are: An evaluation of three AL strategies for BERT-based binary classification under extreme class imbalance using real-world data.",
"A novel AL strategy for sequence classification that performs on par with other strategies, but provides additional interpretability and control over the learning process.",
"A qualitative analysis of the linguistic patterns captured by BERT across AL strategies.",
"A large labeled dataset of tweets about unemployment and fine-tuned models in three languages to stimulate research in this area 1 .",
"Social media users disclose information that is valuable for public policy in a variety of areas ranging from health (Achrekar et al., 2011; Mahata et al., 2018; Klein et al., 2018) to emergency response to natural disasters (Bruns and Liang, 2012; Kry-vasheyeu et al., 2016) through migration flows (Fio-rio et al., 2017; Chi et al., 2020; Palotti et al., 2020).",
"A key challenge in identifying self-disclosures on social media is the rare and varied nature of such content with a limited labeling budget.",
"Prior work that studied self-disclosures on Twitter had either used pattern matching, which is prone to large classification errors (Antenucci et al., 2014; Proserpio et al., 2016), or focused on curated datasets (Li et al., 2014; Preotiuc-Pietro et al., 2015; Sarker et al., 2018; Ghosh Chowdhury et al., 2019), which provide no guarantees about recall or coverage of the positive class.",
"These issues are more severe in real-world settings of extreme imbalance, where random sampling is unlikely to retrieve any positives, let alone diverse.",
"These challenges motivate the use of AL, as described next.",
"AL has been used successfully in various settings to maximize classification performance for a given labeling budget (see Settles (1995) for a survey).",
"With the emergence of pre-trained language models such as BERT (Devlin et al., 2019) and their success across a number of different language tasks, recent work has studied the combination of AL and BERT, either by using BERT to enhance traditional AL methods (Yuan et al., 2020) or by applying established AL methods to improve BERT's classification performance (Zhang and Zhang, 2019; Shelmanov et al., 2019; Liu et al., 2020; Griehaber et al., 2020; Prabhu et al., 2021; Schrder et al., 2021).",
"In the specific case of binary classification with moderate class imbalance, Ein-Dor et al. (2020) show that AL with BERT significantly outperforms random sampling but that no single AL strategy stands out in terms of BERT-based classification performance, both for balanced and imbalanced 1 Labeled datasets and models can be found at https://github.com/manueltonneau/ twitter-unemployment 6565 settings.",
"Yet, the authors only consider a relatively moderate class imbalance of 10-15% positives, and does not cover extreme imbalance, which is common in many text classification tasks.",
"Our current research examines a considerably more extreme imbalance of about 0.01% positives, where traditional AL approaches can be ineffective (Attenberg and Provost, 2010).",
"Under this extreme imbalance, Mussmann et al. (2020) show the potential of AL for BERT to outperform random sampling for pairwise classification.",
"To the best of our knowledge, this work is the first to compare the performance of AL methods for BERT-based sequence classification in real-world extreme imbalance settings.",
"Our dataset was collected from the Twitter API.",
"It contains the timelines of the users with at least one tweet in the Twitter Decahose and with an inferred profile location in the United States, Brazil, and Mexico.",
"In addition to the United States, we chose to focus on Brazil and Mexico as both of them are middle-income countries where Twitter's penetration rate is relatively high.",
"For each country, we drew a random sample of 200 million tweets covering the period between January 2007 and December 2020 and excluding retweets.",
"We then split it evenly in two mutually exclusive random samples R e and R s .",
"In the following sections, we use R e to evaluate each model's performance in a real-world setting and R s to sample new tweets to label.",
"Our labeling process sought to identify four nonexclusive, binary states that workers may experience during their career: losing a job (Lost Job), being unemployed (Is Unemployed), searching for a job (Job Search), and finding a job (Is Hired).",
"We only considered first-person disclosures as positives.",
"For the classes Lost Job and Is Hired, we only considered such events that happened in the past month as positives as we want to determine the user's current employment status.",
"To complement the focus on workers, we also labeled tweets containing job offers (\"Job Offer\").",
"We used Amazon Mechanical Turk (MTurk) to label tweets according to these 5 classes (see Figure 1 and Section A.2 for details).",
"As previously stated, the extreme imbalance of our classification task of one positive example for every",
"10,000 tweets renders random sampling ineffective and prohibitively costly.",
"In order to build high-performing classifiers at a reasonable cost, we selected a set of 4 to 7 seed keywords that are highly specific of the positives and frequent enough for each class and country.",
"To do so, we defined a list of candidate seeds, drawing from Antenucci et al. (2014) for the US and asking native speakers in the case of Mexico and Brazil, and individually evaluated their specificity and frequency (see Section A.1 for additional details).",
"We then randomly sampled 150 tweets containing each seed from R s , allowing us to produce a stratified sample L 0 of 4,524 English tweets, 2703 Portuguese tweets, and 3729 Spanish tweets respectively (Alg. 1).",
"We then labeled each tweet using Amazon Mechanical Turk (MTurk) allowing us to construct a language-specific stratified sample that is common to the 5 classes (see Section A.3 for descriptive statistics of the stratified sample).",
"We trained five binary classifiers to predict each of the five aforementioned labeled classes.",
"Preliminary analysis found that BERT-based models considerably and consistently outperformed keyword-based models, static embedding models, and the combination of these models.",
"We benchmarked several BERT-based models and found that the following models gave the best performance on our task: Conversational BERT for English tweets (Burtsev et al., 2018), BERTimbau for Brazilian Portuguese tweets (Souza et al., 2020) and BETO for Mexican Spanish tweets (Caete et al., 2020) (see Section A.4 for details on model selection).",
"We fine-tuned each BERT-based model on a 70:30 train-test split of the labeled tweets for 20 epochs (Alg. 1).",
"Following Dodge et al. (2020), we repeated this process for 15 different random seeds and retained the best performing model in terms of area under the ROC curve (AUROC) on the test set at or after the first epoch (see Section A.5 for details).",
"While the standard classification performance measure in an imbalanced setting is the F1 score with a fixed classification threshold (e.g. 0.5), it is not applicable in our case for two reasons.",
"First, we care about the performance on a large random set of tweets and the only labeled set we could compute the F1 metric from is the stratified test set 6566 which is not representative of the extremely imbalanced random sample R e .",
"Second, the fact that neural networks are poorly calibrated (Guo et al., 2017) makes the choice of a predefined classification threshold somewhat arbitrary and most likely sub-optimal.",
"We developed an alternative threshold-setting evaluation strategy.",
"First, we computed the predicted score of each tweet in R e (Alg. 1), which is a random sample.",
"Then, for each class, we labeled 200 tweets in R e along the score distribution (see section A.7.1 for more details).",
"We measured the performance of each classifier on R e by computing: the Average Precision as common in information retrieval.",
"the number of predicted positives , defined as the average rank in the confidence score distribution when the share of positives reaches 0.5.",
"the diversity , defined as the average pairwise distance between true positives.",
"Details about the evaluation metrics can be found in Section A.7.",
"Initialization : for each seed s , sample 150 tweets containing s from R s ; have them labeled for the five classes; the resulting labeled set is the stratified sample L 0 = S 0 ; discard already sampled tweets from R s ( R s = R s L 0 ) At each iteration i and for each class: Finetuning : train-test split of S i in 70/30; finetune 15 BERT models on the train set using different seeds; select the best model M i with the highest AUROC on the test set.",
"Inference on R e and R s using M i Active Learning : sample most informative tweets from R s (100 per class); have them labeled for the five classes; the resulting labeled set is L i +1 ; define S i +1 = (cid:83) i +1 j =0 L j and R s = R s L i +1 Evaluation: sample tweets along the score distribution in R e ; have them labeled; compute the average precision, number of predicted positives and diversity metrics Algorithm 1: Experimental procedure 3.5 Active Learning strategies Next, we used pool-based AL (Settles, 1995) in batch mode, with each class-specific fine-tuned model as the classification model, in order to query new informative tweets in R s .",
"We compared three different AL strategies aiming to balance the goal of improving the precision of a classifier while expanding the number and the diversity of detected positives instances: Uncertainty Sampling consists in sampling instances that a model is most uncertain about.",
"In a binary classification problem, the standard approach is to select examples with a predicted score close to 0.5 (Settles, 2009).",
"In practice, this rule of thumb might not always lead to identify uncertain samples when imbalance is high (Mussmann et al., 2020), especially with neural network models known to be poorly calibrated (Guo et al., 2017).",
"To overcome this issue, we contrast a naive approach which consists in querying the 100 instances whose uncalibrated scores are the closest to 0.5, to an approach that uses calibrated scores (see Section A.9 for details).",
"Adaptive Retrieval aims to maximize the precision of a model by querying instances for which the model is most confident of their positivity (Mussmann et al., 2020).",
"This approach is related to certainty sampling (Attenberg et al., 2010).",
"Here, we select the 100 tweets whose predicted score is the highest for each class.",
"Our novel strategy, Exploit-Explore Retrieval (see Section A.8 for details), aims to maximize precision (exploitation') while improving recall by feeding new and diverse instances at each iteration (exploration'): Exploitation : Randomly query 50 new tweets from the top 10 4 tweets with the highest predicted score in R s .",
"Exploration : Identify the 10 k-skip-n-grams with the highest frequency of occurrences in the top 10 4 tweets, relative to their frequency in R s .",
"Then, randomly sample 50 new tweets containing each k-skip-n-gram (see Section A.8 for formal definition of k-skip-n-grams and a discussion on the choice of threshold).",
"Additionally, we compared these AL strategies to a supervised Stratified Sampling baseline, that consists of the same initial motifs defined in Section 3.2 and the same number of labels as available to all other AL strategies.",
"Overall, for each strategy, each iteration and each class, we labeled 100 new tweets in R s .",
"We then combined the 500 new labels across classes with the existing ones to finetune and evaluate a new BERT-based model for 6567 each class as described in Section 3.3, which we then used to select tweets for labeling for the next iteration.",
"We considered that an AL strategy had converged when there was no significant variation of average precision, number of predicted positives and diversity for at least two iterations (see Section A.7.6 for details).",
"At iteration 0, we fine-tuned a BERT-based classifier on a 70:30 train-test split of the initialization sample L 0 for each class and country.",
"All the AUROC values on the test set are reported in Table 7.",
"We obtain very high AUROCs ranging from 0.944 to 0.993 across classes and countries.",
"Job Offer has the highest AUROCs with values ranging from 0.985 for English to 0.991 for Portuguese and 0.993 for Spanish.",
"Upon closer examination of positives for this class, we find that the linguistic structure of tweets mentioning job offers is highly repetitive, a large share of these tweets containing sentences such as We're #hiring! Click to apply: or naming job listing platforms (e.g: #Ca-reerArc).",
"By contrast, the most difficult class to predict is Lost Job, with an AUROC on the test set equal to 0.959 for English and 0.944 for Spanish.",
"This class also has the highest imbalance, with approximately 6% of positives in the stratified sample for these two languages.",
"Taken together, these results show that a fine-tuned BERT model can achieve very high classification performance on a stratified sample of tweets across classes and languages.",
"However, these num-bers cannot be extrapolated to directly infer the models' performance on random tweets, which we discuss in the next section.",
"Next, we compared the performance of our exploit-explore retrieval strategy on English, Spanish and Portuguese tweets.",
"We used exploit-explore retrieval as it provides similar results to other strategies (Section 4.3), while allowing greater visibility into selected motifs during the development process (Section 4.4).",
"We ran 8 AL iterations for each language and report the results in Fig. 2, Fig. 5 and Table 10.",
"First, we observe substantial improvements in average precision (AP) across countries and classes with just one or two iterations.",
"These improvements are especially salient in cases where precision at iteration 0 is very low.",
"For instance, for the English Is Unemployed class and the Spanish Is Hired class, average precision goes respectively from 0.14 and 0.07 to 0.83 and 0.8 from iteration 0 to iteration 1 (Fig. 2 and Fig. 5).",
"A notable exception to this trend is the class Job Offer, especially for English and Portuguese.",
"These performance differences can in part be explained by the varying quality of the initial seed list across classes.",
"This is confirmed by the stratified sampling baseline performance discussed in 4.3.",
"In the case of Job Offer, an additional explanation discussed earlier in Section 4.1 is the repetitive structure of job offers in tweets which makes this class easier to detect compared to others.",
"Also, the class Lost Job has the worst performance in terms of AP across countries.",
"One reason is that the data imbalance for this class is even higher than for other classes, as mentioned in Section 4.1.",
"Another explanation for the low precision is the ambiguity inherent to the recency constraint, namely that an individual must have lost her job at most one month prior to posting the tweet.",
"Apart from the Job Offer class in English and Portuguese, AL consistently allows to quickly expand from iteration 0 levels with the number of predicted positives multiplied by a factor of up to 10 4 (Fig. 2).",
"Combined with high AP values, this result means that the classifiers manage to capture substantially more positives compared to iteration",
"0. This high expansion is combined with increasing semantic diversity among true positive instances.",
"The class Job Offer stands out with little expansion and diversity changes in the English and Portuguese cases.",
"For Spanish, expansion and diversity changes are higher.",
"One explanation is that the structure of Mexican job offers is less repetitive, with individual companies frequently posting job offers, as opposed to job aggregators in the case of the US and Brazil.",
"Overall, apart from a few edge cases, we find that AL used with pre-trained language models is successful at significantly improving precision while expanding the number and the diversity of predicted positive instances in a small number of iterations across languages.",
"Indeed, precision gains reach up to 90 percentage points from iteration 0 to the last iteration across languages and classes and the number of predicted positives is multiplied 6568 Figure 2: Average precision, number of predicted positives and diversity of true positives (in row) for each class (in column) for English (green), Portuguese (orange), and Spanish (purple).",
"by a factor of up to 10 4 .",
"Furthermore, on average, the model converges in only 5.6 iterations across classes for English and Portuguese, and in 4.4 iterations for Spanish (see Table 10 for details).",
"In this section, we evaluated on English tweets the stratified sampling baseline and the four AL strategies described in Section 3.5, namely exploit-explore retrieval, adaptive retrieval and uncertainty sampling with and without calibration.",
"We ran five iterations for each strategy and reported the results on Figure 3 in this section as well as Table 11 and Figure 6 in Section A.10.",
"We find that AL brings an order of magnitude more positives and does so while preserving or improving both the precision and the diversity of results.",
"Apart from the Job Offer class discussed in Section 4.2, AL consistently outperforms the stratified sampling baseline.",
"This is especially true for the classes Is Unemployed and Lost Job where the baseline performance stagnates at a low level, suggesting a poor seed choice, but also holds for classes Is Hired and Job Search with stronger baseline performance.",
"We also find that no AL strategy consistently dominates the rest in terms of precision, number and diversity of positives.",
"The gains in performance are similar across AL strategies, and are particularly high for the classes Lost Job and Is Unemployed, which start with a low precision.",
"The number of predicted positives and the diversity measures also follow similar trends across classes and iterations.",
"We also observe occasional drops in average precision of more than 25% from one iteration to the next.",
"Uncalibrated uncertainty sampling seems particularly susceptible to these drops, with at least one occurrence for each class.",
"Upon examination of the tweets sampled for labeling by this strategy, the vast majority of tweets are negatives and when a few positives emerge, their number is not large enough to allow the model to generalize well.",
"This variability slows down the convergence process of uncertainty sampling when it is not uncalibrated (table 11).",
"In contrast, calibrated uncertainty sam-6569 Figure 3: Average precision, number of predicted positives and diversity of true positives (in row) for each class (in column) across AL strategies.",
"pling is less susceptible to these swings, emphasizing the importance of calibration for more stable convergence in settings of extreme imbalance.",
"Taken together, our quantitative results show that the positive impact of AL on classification performance in an extremely imbalanced setting holds across AL strategies.",
"Aside from a few occasional performance drops, we find significant gains in precision, expansion and diversity across strategies.",
"Yet, we find that no AL strategy consistently dominates the others across a range of prediction tasks for which the number and the linguistic complexity of positive instances vary widely.",
"Next, we investigate the results qualitatively to gain deeper understanding of the learning process.",
"We qualitatively examined the tweets selected for labeling by each strategy to understand better what BERT-based models capture and reflect on the quantitative results.",
"We focused on English tweets only and took a subsample of tweets at each iteration to better understand each strategy's performance.",
"We excluded the Job Offer class from this analysis since the performance, in this case, is exceptionally high, even at iteration",
"0. Our analysis finds that many tweets queried by the various AL strategies capture a general tone that is present in tweets about unemployment, but that is not specific to one's employment status.",
"For example, these include tweets of the form of I'm excited to ... in two days for the recently hired class, I've been in a shitty mood for ... for unemployment or I lost my ... for job loss.",
"This type of false positives seems to wane down as the AL iterations progress, which suggests that a key to the success of AL is first to fine-tune the attention mechanism to focus on the core terms and not the accompanying text that is not specific to employment status.",
"In the stratified sampling case, the focus on this unemployment tone remains uncorrected, explaining the poor performance for classes Lost Job and Is Unemployed and the performance drops for Is Hired and Job Search.",
"A second theme in tweets queried by AL in-6570 volves the refinement of the initial motifs.",
"Uncertainty sampling (calibrated and uncalibrated), adaptive retrieval, and the exploitation part of our exploit-explore retrieval method seem to query tweets that either directly contain a seed motif or a close variant thereof.",
"For example, tweets for the class Lost Job may contain the seed motifs laid off, lost my job, and just got fired.",
"As mentioned in Section 4.2 to explain occasional drops in performance, many tweets labeled as negatives contain over-generalization of the semantic concept such as expanding to other types of losses (e.g. lost my phone), other types of actions (e.g. got pissed off), or simply miss the dependence on first-person pronouns (e.g. @user got fired).",
"Many of the positively labeled tweets contain more subtle linguistic variants that do not change the core concept such as I really need a job, I really need to get a job, I need to find a job, or I need a freaken job.",
"Adaptive retrieval chooses these subtle variants more heavily than other strategies with some iterations mostly populated with I need a job variants.",
"Overall, these patterns are consistent with a view of the learning process, specifically the classification layer of the BERT model, as seeking to find the appropriate boundaries of the target concept.",
"Finally, the exploration part of the exploit-explore retrieval makes the search for new forms of expression about unemployment more explicit and interpretable.",
"For example, the patterns explored in the first few iterations of explore-exploit retrieval include I ... lost ... today, quit .. my .. job, I ... start my ... today, and I'm ... in ... need.",
"A detailed presentation of the explored k-skip-n-grams for US tweets can be found in Table 9 of Section A.8.",
"While this strategy suffers from issues that also affect other AL strategies, we find that the explore part of exploit-explore retrieval is more capable of finding new terms that were not part of the seed list (e.g., quit, career) and provides the researcher with greater insight into and control over the AL process.",
"This work developed and evaluated BERT-based models in three languages and used three different AL strategies to identify tweets related to an individual's employment status.",
"Our results show that AL achieves large and significant improvements in precision, expansion, and diversity over stratified sampling with only a few iterations and across languages.",
"In most cases, AL brings an order of magnitude more positives while preserving or improving both the precision and diversity of results.",
"Despite using fundamentally different AL strategies, we observe that no strategy consistently outperforms the rest.",
"Within the extreme imbalance setting, this is in line with and complements the findings of Ein-Dor et al. (2020).",
"Additionally, our qualitative analysis and exploration of the exploit-explore retrieval give further insights into the performance improvements provided by AL, finding that substantial amounts of queried tweets hone the model's focus on employment rather than surrounding context and expand the variety of motifs identified as positive.",
"This puts exploit-explore retrieval as a valuable tool for researchers to obtain greater visibility into the AL process in extreme imbalance cases without compromising on performance.",
"While the present work demonstrates the potential of AL for BERT-based models under extreme imbalance, an important direction for future work would be to further optimize the AL process.",
"One could for instance study the impact on performance of the stratified sample size or the AL batch size.",
"To overcome the poor seed quality for some classes, other seed generation approaches could be tested, such as mining online unemployment forums using topic modeling techniques to discover different ways to talk about unemployment.",
"In terms of model training and inference, the use of multitask learning for further performance improvement could be studied due to the fact that classes of unemployment are not mutually exclusive.",
"We hope that our experimental results as well as the resources we make available will help bridge these gaps in the literature.",
"We acknowledge that there is some risk, like any other technology that makes inferences at the individual level, that the technology presented here will be used for harm.",
"However, due to the public nature of the content and the fact that the potential harm already exists using basic keyword search, we believe that the marginal risk added by our classifier is minimal.",
"We thank participants of the Israeli Seminar on Computational Linguistics at Ben-Gurion University of the Negev as well as the anonymous reviewers for their valuable comments.",
"We also thank Aleister Montfort, Varnitha Kurli Reddy and Boris Sobol for their excellent research assistance.",
"This work was supported by the SDG Partnership Fund."
] | [
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"other",
"other",
"other"
] |
[
"As a more natural and intelligent interaction manner, multimodal task-oriented dialog system recently has received great attention and many remarkable progresses have been achieved.",
"Nevertheless, almost all existing studies follow the pipeline to first learn intra-modal features separately and then conduct simple feature concatenation or attention-based feature fusion to generate responses, which hampers them from learning inter-modal interactions and conducting cross-modal feature alignment for generating more intention-aware responses.",
"To address these issues, we propose UniTranSeR, a Unified Transformer Semantic Representation framework with feature alignment and intention reasoning for multimodal dialog systems.",
"Specifically, we first embed the multimodal features into a unified Transformer semantic space to prompt inter-modal interactions, and then devise a feature alignment and intention reasoning (FAIR) layer to perform cross-modal entity alignment and fine-grained key-value reasoning, so as to effectively identify user's intention for generating more accurate responses.",
"Experimental results verify the effectiveness of UniTranSeR, showing that it significantly outperforms state-of-the-art approaches on the representative MMD dataset.",
"The multimodal task-oriented dialog systems are designed to help users achieve specific goals such as clothing recommendation or restaurant reservation, which is in growing demand in the current business environment.",
"As a leading study, Saha et al. (2018) released a multimodal dialog dataset (MMD) in the online retail domain.",
"Based on such a benchmark dataset, many multimodal dialog models incorporating domain knowledge have recently been proposed (Chauhan et al., 2019; Zhang et al., Corresponding author.",
"2019, 2021), which basically exploit taxonomy-based method (Liao et al., 2018; Cui et al., 2019) or attention-based method (Nie et al., 2019; He et al., 2020) to incorporate knowledge base (KB) information for better performance.",
"Though achieving remarkable progress, existing multimodal task-oriented dialog systems still suffer from the following three limitations.",
"Firstly , prior models only learn the intra-modal features (including textual features, visual features and domain knowledge) separately before fusing them.",
"Since these multimodal cues in general can enhance and complement each other, projecting them into a unified semantic space to learn the inter-modal features, with no doubt, can help improve the abilities of natural language understanding, which in turn will benefit the response generation.",
"Secondly , prior models only conduct simple feature concatenation (Saha et al., 2018; Nie et al., 2019) or attention-based feature fusion (Cui et al., 2019) af-103 ter acquiring intra-modal representations, but without learning fine-grained alignment between different modalities before fusion, which is not favorable to query knowledge for accurate multimodal response generation.",
"Take the dialog in Figure 1 as an example, when answering the user's query on similar style of jackets, the model is expected to align the word jackets with the corresponding visual features for proper semantic complement and entity enhancement.",
"Thirdly , prior models basically lack the capability of entity-level reasoning, which prevents them from performing reasoning over crucial entities to guide intention-aware response generation.",
"For example, in Figure 1, when the user asks show some similar jackets in black color , the chatbot is expected to properly explore the pivot attribute black that connects the start query cue jackets with the target recommended product images.",
"Specifically, the model needs to perform a 2 -hop reasoning over triples (jacket_q, attribute, black_v) and (black_q, image, jacket_v) and obtain the intended 4 images.",
"To address the aforementioned limitations, we propose a Unified Transformer Semantic Representation framework with feature alignment and intention reasoning, UniTranSeR for short.",
"Specifically, to address the first limitation, we stand on the shoulder of Vision-and-Language Pre-training (VLP) methods (Lu et al., 2019; Li et al., 2019; Chen et al., 2020; Li et al., 2021) to propose a unified-modal Transformer encoder, which is used to project all the multimodal features into a unified semantic space to prompt inter-modality interactions, with the objective of learning better representations.",
"Based on the unified encoder, we further address the second limitation by designing a feature alignment module to perform cross-modal feature alignment.",
"Finally, to address the third limitation, we devise a fine-grained intention reasoning module for capturing users' real intentions, by leveraging a key-value attention based memory mechanism to perform multi-hop knowledge query for generating text or image responses.",
"We conduct experiments on MMD, one of the most influential benchmark datasets for multimodal dialog generation.",
"We follow the mainstream evaluation script of dialog generation and demonstrate that UniTranSeR significantly outperforms the current state-of-the-art baselines.",
"Ablation study also shows the efficacy of each component in improving the performance of dialog generation, and a further case study reveals that our model can effectively perform fine-grained token-level feature alignment for multimodal dialog generation.",
"2.1 Unimodal Dialog Systems Recent years has witnessed the remarkable success in textual dialog systems, which can be roughly divided into two categories: open-domain conversations with casual chi-chat (Song et al., 2020; Gangal et al., 2021; Chan et al., 2021; Yang et al., 2021) and task-oriented dialog systems (Pei et al., 2021; Santra et al., 2021; Wang et al., 2021; Mi et al., 2021; Madotto et al., 2021; Gou et al., 2021; Raghu et al., 2021), which are designed to help users achieve specific goals.",
"Early efforts mainly adopt a sequence-to-sequence (Seq2Seq) architecture, but cannot work well in KB retrieval and reasoning.",
"To alleviate this problem, copy mechanism (Eric and Manning, 2017) have been adopted and many memory augmented Seq2Seq models have been proposed (Bordes et al., 2017; Wen et al., 2018; Madotto et al., 2018; Wu et al., 2019; Reddy et al., 2019; Qin et al., 2019; Wang et al., 2020; Qin et al., 2020), which achieve promising results.",
"With the flourishing of social media platforms, massive amounts of multimedia data are generated daily, which poses great demand for multimodal dialog systems.",
"However, due to the lack of large-scale multimodal dialog datasets, researches in this domain have been limited.",
"To this end, Saha et al. (2018) provided a vertical retail domain dataset MMD to promote the research and proposed a multimodal hierarchical encoder-decoder model (MHRED) as a baseline.",
"Based on MHRED, Liao et al. (2018) incorporated the style tips into a knowledge-aware multimodal dialog model (KMD).",
"Cui et al. (2019) designed a user attention-guided multimodal dialog system (UMD) by additionally considering the hierarchical product taxonomy and user's attention to products.",
"Chauhan et al. (2019) introduced an ordinal and attribute aware multimodal dialog system (OAM) by employing a novel position and attribute aware attention mechanism.",
"Later, Nie et al. (2019) proposed a multimodal dialog system with adaptive decoders (MAGIC), which can incorporate different forms of domain knowledge to generate different kinds of responses.",
"Recently, combining with 104 show some similar T-shirts but in a lighter color T-shirt_231 color nattier_blue nattier_blue_style img img_url Unified-modal Transformer Semantic Encoder [CLS] 0/1 K VTVI T-shirt_232 color dark-grey dark_grey_style img img_url Q Align Reason general response intention-aware response image output Hierarchical Transformer Response Decoder HTR Decoder UTS Encoder show FAIR Layer Text Image Knowledge Figure 2: The Proposed Framework.",
"Transformer (Vaswani et al., 2017), He et al. (2020) advanced a multimodal dialog system via capturing context-aware dependencies of semantic elements (MATE) for textual response generation.",
"Most existing multimodal dialog systems learn intra-modal features separately for later feature concatenation or fusion.",
"Different from them, our proposed UniTranSeR can project all the multimodal features into a unified semantic space to perform fine-grained feature alignment and intention reasoning, which can lead to more accurate responses.",
"Vision-and-Language Pre-training (VLP) (Lu et al., 2019; Li et al., 2021) is another line of research relevant to our work, but different from ours in that it focuses more on boosting the performance of representation learning, while the multimodal dialog systems focus more on multi-turn multimodal interaction between users and agents.",
"The proposed UniTranSeR mainly comprises three parts: Unified-modal Transformer Semantic (UTS) encoder (Sec. 3.1), Feature Alignment and Intention Reasoning (FAIR) layer (Sec. 3.2), and Hierarchical Transformer Response (HTR) decoder (Sec. 3.3), as shown in Figure",
"2. We define the multimodal dialog generation task as generating the most likely response sequence Y = { y 1 , y 2 , , y n } and selecting topk most matched images, giving multimodal context utterances U = { u 1 , u 2 , . . . , u | U | } and multimodal knowledge base B as inputs.",
"The probability of a textual response can be formally defined as, P ( Y | U, B ) = n (cid:89) t =1 P ( y t | y 1 , . . . , y t 1 , U, B ) (1) where y t represents the current token decoded by the HTR decoder.",
"The UTS encoder is used to project all the multimodal features into a unified vector space for inter-modal interactions, while the FAIR layer is designed to align cross-modal hidden features, with textual features and visual features from previous UTS encoder as inputs.",
"Similar to MAGIC (Nie et al., 2019), our HTR decoder is designed to decode three types of responses: general responses that refer to the highly frequent responses (e.g., courtesy greetings) in the conversation, such as How can I help you? ; intention-aware responses that refer to the task-oriented utterances, such as Found some similar black leather-jackets for you ; and multimodal responses that refer to the intention-aware responses with image output.",
"The response type is determined by a query vector Q from the FAIR layer, in which an intention classifier is trained to decide which kind of response should be given out.",
"We first use a text embedder and an image embedder to extract textual features and visual features, respectively, and extract informative features from external knowledge by utilizing both text and image embedders.",
"Afterwards, we feed these three kinds of features into a unified Transformer encoder for unified-modal semantic representation learning.",
"Text Embedder.",
"To learn textual intra-modal features, we use a BERT tokenizer to split the input sentence into words and exploit a single transformer layer to obtain these words' initial embed-dings.",
"Note the self-attention mechanism in Transformer is order-less.",
"So, it is necessary to encode the words' position as additional inputs.",
"The final 105 representation for each word is derived via summing up its word embedding and position embedding, followed by a layer normalization (LN) layer.",
"Image Embedder.",
"To learn visual intra-modal features, we use a contour slicer to cut the input images into patches and exploit ResNet-50 (He et al., 2016) to extract these patches' visual features.",
"We notice that people usually focus on four parts of a clothing image: head, upper body, lower body, and feet, so we intuitively use an equal-height mode to slice an image into four patches, which efficiently solves the problem of region feature extraction, without using complex target detection networks such as Faster R-CNN (Ren et al., 2015).",
"Then, we feed the patches into ResNet-50 to get the patches' initial embed-dings.",
"Similarly, we also encode the position features for each patch via a 4 -dimensional vector [ image _ index, patch _ index, width, height ] .",
"Both visual and position features are then fed through a fully-connected (FC) layer, to be projected into the same embedding space.",
"The final visual embedding for each patch is obtained by first summing up the two FC outputs, and then passing them through an LN layer.",
"Knowledge Embedder.",
"To integrate informative features from external knowledge 1 into the task-oriented dialog, we equip the product knowledge base for each utterance through searching a fashion item table provided by MMD.",
"We then treat these searched knowledge entries into the same triplet format, i.e., (product, match, product) , (product, attribute, value) , (product, celebrity, pas-sion_score) .",
"Next, for the text and image elements of these triples, we use the text and image embed-ders to obtain their respective representations.",
"Unified Transformer Encoder.",
"After obtaining the multimodal initial embeddings, denoted as h t , h v and h k respectively, we project them into a unified semantic space to obtain interactive representations by using a unified Transformer encoder.",
"Specifically, in each utterance, the textual features, visual features and informative features correspond to l tokens with [TXT], 4 tokens 2 with [IMG] and 4 tokens 3 with [KNG].",
"In order to integrate 1 External knowledge of MMD includes: style tips graph, attributes table and celebrities histogram, as shown in Figure",
"1. 2 Note when an utterance contains multiple images, it can be unrolled into a sequence of utterances, each containing a single image, the same as previous work.",
"Unified-modal MLM show some similar [MASK] ...",
"T-shirts Unified-modal MPM show some similar ...",
"T-shirts Figure 3: Illustration of MLM and MPM.",
"dialog history of previous rounds, we initialize the current [CLS] p by using the representation of the previous round [CLS] p 1 .",
"The output hidden state representations can then be phrased as: H p = f (cid:0) [CLS] p 1 h pt [TXT] h pv [IMG] h pk [KNG] (cid:1) (2) where f ( ) denotes the Transformer encoder, H p 0 denotes the hidden state representation of the current round [CLS] p , which is regarded as the contextual semantic vector of the entire utterance in this round, H p 1: l denotes the representations for the text sequence, H pl +1: l +4 denotes the representations for the patch sequence, and H pl +5: l +8 denotes the representations for knowledge entries.",
"Note the superscript p is omitted for simplicity if no confusion occurs in the following discussion.",
"To obtain better representations, we introduce the Masked Language Modeling (MLM) loss and Masked Patch Modeling (MPM) loss to train them.",
"We denote the input words as w = { w 1 , . . . , w l } , the image patches as v = { v 1 , . . . , v 4 } , the knowledge elements as k = { k 1 , . . . , k 4 } , and the mask indices as m NL , where N is the natural numbers and L is the length of masked tokens.",
"In MLM, we randomly mask out the input words with a probability of 15% , and replace the masked ones w m with a special token [MASK], as illustrated in Figure",
"3. The goal is to predict these masked words by attentively integrating the information of their surrounding words w \\ m , image patches v and knowledge elements k , by minimizing the following loss: LMLM ( ) = E ( w,v,k ) U log P (cid:0) w m | w \\ m , v, k (cid:1) (3) Similar to MLM, in MPM, we also randomly mask out the image patches and use zeros tensor to replace them, as shown in Figure",
"3. Unlike textual words that can be categorized as discrete labels, visual features are high-dimensional and continuous tensors, thus cannot be supervised via a negative log-likelihood loss.",
"Following UNITER (Chen et al., 2020), we built the MPM loss as: LMPM ( ) = E ( w,v,k ) U g (cid:0) v m | v \\ m , w, k (cid:1) (4) where v m are masked image patches and v \\ m are remaining patches.",
"g (cid:0) v m | v \\ m , w, k (cid:1) = L (cid:88) i =1 (cid:13)(cid:13)(cid:13) f (cid:16) v ( i ) m (cid:17) h v ( i ) m (cid:13)(cid:13)(cid:13) 2 2 (5)",
"To align the cross-modal features for accurate intention classification and knowledge query, we devise a feature alignment and intention reasoning (FAIR) layer.",
"In feature alignment, we use Image-Text Matching (ITM) and Word-Patch Alignment 4 (WPA) to conduct a two-level alignment.",
"That is, ITM is used to align text and image in sentence-level, while WPA is used to align each split word and each sliced patch in token-level.",
"In intention reasoning, we fuse f ( [CLS] ) and aligned entities' hidden state representations to obtain a query vector Q , which is then used for intention classification and knowledge query.",
"Image-Text Matching (ITM).",
"In ITM, we use the output f ( [CLS] ) of the unified Transformer encoder to compute the match probability of the sampled pair.",
"Specifically, we feed f ( [CLS] ) into an FC layer and a sigmoid function to predict a probability score P ( w, v ) , which is between 0 and 1 .",
"During training, we sample a positive or negative pair ( w, v ) from the dataset D at each step.",
"The negative pair is created by randomly replacing the image or text in the same batch.",
"We employ a binary cross-entropy loss for optimization: LITM ( ) = E ( w,v ) D [ y log P ( w, v )+ (1 y ) log (1 P ( w, v ))] (6) where y is a binary truth label.",
"Note here we only use ITM to train image-text pairs but without considering the knowledge vector, because it has already matched the textual sequence when being searched out.",
"Word-Patch Alignment (WPA).",
"For more fine-grained alignment between each word and image patch, we introduce a WPA technology, which is used to train the consistency and exclusiveness between these cross-modal features to prompt alignment.",
"We use a WPA loss to supervise the process, 4 A modified version of the previous Word-Region Alignment (WRA), which can be adapted to the alignment between textual words and visual patches.",
"LWPA ( ) = (cid:88) l i =1 (cid:88) 4 j =1 T ij ( w i , v j ) (7)",
"where denotes the cos( ) similarity function, T R l 4 is a ground truth table and each T ij T is a binary label 0 or",
"1. During training, we sample positive or negative pairs ( w i , v j ) from each multimodal utterance to construct a probability table, as shown in Figure",
"2. The above loss function LWPA is then used to update the parameters .",
"During inference, we continue to fuse aligned entities' hidden state representation and f ( [CLS] ) to obtain a unified query vector Q , which contains multimodal query information with entity enhancement, and will be used for subsequent intention reasoning.",
"Intention Classify (IC).",
"Given the query vector Q , this component aims to understand the users' intention and thereafter determine which type of response should be generated.",
"To be clear, there are a total of 17 types labeled in the MMD dataset, and each user's utterance is labeled with a specific intention type.",
"Following MAGIC, we customize the type of response specifically for each intention, as shown in Table",
"1. Subsequently, we leverage an MLP layer to predict Q 's probability distribution and select the highest probability to generate a response.",
"Besides, a cross-entropy loss is applied to optimizing the intention classifier: LIC ( ) = (cid:88) | U | i =1 (cid:88) 17 j =1 I ij log P ( I ij | Q ) (8) where P ( I ij | Q ) denotes the probability of being predicted as intention I ij , and I ij is a ground truth label.",
"The intention classifier is trained by the loss function LIC ( ) to update parameter , and finally outputs a reliable intention prediction result I in the inference phase.",
"Knowledge Query (KQ).",
"Given the predicted intention result I , this component first determines whether knowledge query is required based on Table",
"1. If required, we adopt a key-value memory mechanism to query all embedded knowledge triples 5 .",
"Specifically, these embedded knowledge triples are divided into key parts and value parts, which are respectively denoted as vector K and vector V .",
"Note here K is obtained through a linear 5 The triple is in the form of ( head, relation, tail ) 107 Id Intention categories Response type Component Id Intention categories Response type Component 1 greeting general IC 10 ask-attribute intention-aware IC+KQ 2 self-info general IC 11 suited-for intention-aware IC+KQ 3 give-criteria multimodal IC+KQ+MR 12 celebrity intention-aware IC+KQ 4 show-image multimodal IC+KQ+MR 13 filter-results multimodal IC+KQ+MR 5 give-description multimodal IC+KQ+MR 14 sort-results multimodal IC+KQ+MR 6 show-more multimodal IC+KQ+MR 15 switch-synset general IC 7 show-orientation multimodal IC+KQ+MR 16 buy general IC 8 show-similar multimodal IC+KQ+MR 17 exit general IC 9 goes-with intention-aware IC+KQ Table 1: The categories of user's intentions, their corresponding response types and required components.",
"fusion of the embedded head-entities and relations.",
"The knowledge query process is as follows: i = Softmax (cid:0) QT K i (cid:1) (9) VT = (cid:88) | M | i =1 i V i (10) where i denotes the attentive probability score for K i , | M | is the number of knowledge triples, and VT is a weighted sum of V i , which will be used for textual decoding in an intention-aware response.",
"Multi-hop Recommend (MR).",
"Given the predicted intention result I and one-hop query result VT , this component first needs to determine whether an image recommendation is required based on Table",
"1. If required, we continue to use VT as a query vector to perform another hop query over the entire knowledge base, which implies that the product images will be recommended, if the key parts of their corresponding triples have high similarity to VT .",
"Specifically, i = Softmax (cid:0) VT T K i (cid:1) (11) After deriving i , we use VI = { q i } , an image pointer vector, to select images with top i for recommendation, where q i = (cid:26) 1 , if V i = 1 1 512 0 , otherwise (12) and 1 1 512 is a column vector with each element equal to 1 , which denotes for the special token [URL] of the image's link.",
"Note here 512 is the embedding size in our unified Transformer encoder.",
"It is not difficult to see that UniTranSeR can extend the above one-hop knowledge query to multi-hop by iteratively performing attention-based key-value reasoning and ultimately achieve multi-hop image recommendation.",
"As mentioned earlier, we used a hierarchy mechanism to decode different types of response sequences, including general responses, intention-aware responses and multimodal responses.",
"They Dataset Statistics Train Valid Test Dialogs 105,439 22,595 22,595 Proportion 70% 15% 15% Table 2: Statistics of the MMD dataset.",
"share the same uni-directional Transformer layer, but the semantic representations fed to this decoder are different.",
"Specifically, for general responses, we just take the sentence-level representations f ( [CLS] ) as input.",
"For intention-aware responses, we take the concatenation of f ( [CLS] ) and attentive vector VT followed by an FC layer as input.",
"For multimodal responses, we take the input for the intention-aware responses, as well as VI , the image pointer vector, as input.",
"To evaluate the performance of UniTranSeR, we conduct experiments on the widely-used benchmark dataset MMD contributed by Saha et al. (2018).",
"The MMD dataset consists of over 150k conversations between users and chatbots in the retail domain, and each conversation describes a complete online shopping process.",
"During the conversations, the user proposes his/her requirements in multimodal utterances and the chatbot introduces different products step by step until they make a deal.",
"In our experiments, we follow Nie et al. (2019) to partition MMD.",
"The statistics the dataset after partition are presented in Table 2, and more detailed statistics can be found in Appendix A.4.",
"Following several previous work (Nie et al., 2019; He et al., 2020; Zhang et al., 2021), we use Bleun , Nist and Recall@ k to evaluate our model over two basic tasks separately, i.e., text task and image task.",
"For the text task, we employ the proposed HTR decoder to produce all general responses and intention-aware responses.",
"As the length of 20.07% target responses in MMD is less than 4 , such as Hello! and Thanks a lot! , we follow Nie et al. (2019) to calculate Bleun by 108 Methods Text Task Image Task Bleu-1 Bleu-2 Bleu-3 Bleu-4 Nist Recall@1 Recall@2 Recall@3 PreviousMethods MHRED (Saha et al., 2018) 32.60 25.14 23.21 20.52 3.0901 0.7980 0.8859 0.9345 KMD (Liao et al., 2018) ---0.9198 0.9552 0.9755 UMD (Cui et al., 2019) 42.78 33.69 28.06 23.73 -0.9796 0.9980 0.9990 OAM (Chauhan et al., 2019) 48.30 38.24 32.03 27.42 4.3236 -MAGIC (Nie et al., 2019) 50.71 39.57 33.15 28.57 4.2135 0.9813 0.9927 0.9965 MATE (He et al., 2020) 56.55 47.89 42.48 38.06 --Ours UniTranSeR 63.27 55.93 51.31 48.07 4.9774 0.9983 0.9995 0.9998 Table 3: Main results.",
"varying n from 1 to",
"4. Note higher Bleu and Nist scores indicate that more n -gram overlaps exist between the predicted and target responses, and hence are more favorable.",
"For the image task, we adopt Recall@ k to evaluate the efficacy of image response, where k is varied from 1 to",
"3. Note the image response is correct only if the positive image is recommended in the topk product images.",
"We compare our model with the following state-of-the-art baselines.",
"MHRED (Saha et al., 2018) 6 is the first baseline work to integrate the visual features into a hierarchical encoder-decoder model for their constructed MMD dataset.",
"KMD (Liao et al., 2018) incorporates the style tips into the memory augmented neural model and adopts deep reinforcement learning to boost the performance.",
"UMD (Cui et al., 2019) 7 proposes a user attention-guided multimodal dialog system by considerring the hierarchical product taxonomy and the user's attention to products.",
"OAM (Chauhan et al., 2019) proposes a novel ordinal and attribute aware attention mechanism for multimodal dialog generation.",
"MAGIC (Nie et al., 2019) 8 adopts the adaptive decoders with intention understanding to explicitly generate three types of responses.",
"MATE (He et al., 2020) 9 utilizes a multimodal element-level encoder to integrate dialog context and leverages a knowledge-aware two-stage decoder for response generation, and achieves state-of-the-art performance.",
"Following Saha et al. (2018) and Nie et al. (2019), we utilize two-turn utterances prior to the target response as the context, and set the vocabulary size to 26 , 422 .",
"In our trainings, the batch size is set to 64, learning rate is set to 1 e 4 and the max number of training epoches is set to 1 e 4 .",
"Adam optimizer is used to optimize all models.",
"All experiments are conducted with PyTorch.",
"More details about hyperparameter settings can be found in Appendix A.1.",
"Automatic Evaluation Following KMD, UMD and MAGIC, we evaluate model performance automatically from two aspects: text response and image response.",
"From the results in Table 3, we can observe that our model UniTranSeR achieves the state-of-the-art performance on both tasks.",
"Specifically, in text task, UniTranSeR exhibits the highest Bleun with varying n from 1 to 4 compared with other baselines, indicating that our model can generate responses closer to the golden ones.",
"Moreover, our model outperforms MATE, a recent model that can capture context-aware dependencies of semantic elements, by 26 .",
"3% in Bleu-4 score, which verifies the effectiveness of our model in learning cross-modal feature alignment and conduct intention reasoning to generate more accurate and informative responses.",
"In image task, an extremely difficult performance improvement can be observed, which further verifies the superiority of our model.",
"Human Evaluation The human evaluation mainly focuses on four aspects: fluency, relevance, correctness, and informativeness, which are all important for task-oriented dialogue systems (Cui et al., 2019; Nie et al., 2019; He et al., 2020).",
"We first randomly selected 200 dialogs from the MMD datasets, and used different models to generate responses, including UMD, OAM, MAGIC, MATE 109 Model Flue.",
"and UniTranSeR.",
"Then, we hired human experts to score the responses and golden responses in blind review on a scale from 1 to 5 , which simulated a real-life multimodal task-oriented conversation scenario.",
"By calculating the average score of the above metrics, we obtained the final manual evaluation results, as shown in Table",
"4. It can be observed that UniTranSeR consistently outperforms the other four models on all metrics, which is in line with the results of automatic evaluation.",
"In this part, we perform ablation experiments to evaluate the effectiveness of each component.",
"We focus on five crucial components and set them accordingly: 1) w/o UTS Encoder denotes that we use a BiGRU to replace the unified-modal Transformer encoder for multimodal encoding; 2) w/o HTR Decoder denotes that we use a Uni-directional GRU to replace the hierarchical Transformer decoder for response generation; 3) w/o ITM denotes that we remove the LITM loss to make the parameters not updated; 4) w/o WPA denotes that we remove the LWPA loss and just regard the sentence-level representation f ( [CLS] ) as query vector Q to query knowledge; 5) w/o IR Module denotes that we remove the IC and KQ components and just adopt the context vector f ( [CLS] ) to generate responses 10 ; From Table 5, we can observe that removing each component will result in a performance degradation.",
"Specifically, w/o IR Module causes 54.96% drops in Bleu-4 score and 54.18% drops in Nist 10 Equivalent to generating general responses, since there is no knowledge query.",
"score, which verifies the great efficacy of intention classify and knowledge query components.",
"Moreover, w/o WPA, w/o ITM and w/o UTS Encoder respectively cause 28.54%, 20.48% and 14.37% drops in Nist score, which further demonstrates the effectiveness of cross-modal feature alignment and unified-modal semantic encoding.",
"To better illustrate the advantage of our model and understand what the feature alignment module has learned, we visualize several examples of text-to-image attention, as shown in Figure",
"4. It can be observed that our model is able to capture fine-grained entity alignment between different modalities.",
"The reason may be that: 1) We adopt a unified-modal Transformer semantic encoder, which enables to map different modalities of semantic cues into a same vector space to prompt inter-modality interactions for better representations; 2) Based on the obtained representations, the WPA technology can help supervise fine-grained word-patch alignment, which is beneficial to identifying user's real intention and generate more intention-aware responses.",
"In this paper, we propose a Unified Transformer Semantic Representation framework with feature alignment and intention reasoning, referred to UniTranSeR.",
"Specifically, we project the multimodal features into a unified semantic space by utilizing a Transformer encoder to prompt inter-modal interactions.",
"We further design a feature alignment and intention reasoning layer to conduct cross-modal feature alignment and fine-grained intention rea-110 soning, with the objective of generating more accurate and intention-aware responses.",
"Experiments on the representative MMD dataset demonstrate the effectiveness and superior performance of our UniTranSeR model in both automatic and human evaluation."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"method",
"method",
"method",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"objective",
"method",
"objective",
"objective"
] |
[
"Recent work in neural machine translation has demonstrated both the necessity and feasibility of using inter-sentential context context from sentences other than those currently being translated.",
"However, while many current methods present model architectures that theoretically can use this extra context, it is often not clear how much they do actually utilize it at translation time.",
"In this paper, we introduce a new metric, conditional cross-mutual information , to quantify the usage of context by these models.",
"Using this metric, we measure how much document-level machine translation systems use particular varieties of context.",
"We find that target context is referenced more than source context, and that conditioning on a longer context has a diminishing effect on results.",
"We then introduce a new, simple training method, context-aware word dropout , to increase the usage of context by context-aware models.",
"Experiments show that our method increases context usage and that this reflects on the translation quality according to metrics such as BLEU and COMET, as well as performance on anaphoric pronoun resolution and lexical cohesion contrastive datasets.",
"1 1 Introduction While neural machine translation (NMT) is reported to have achieved human parity in some domains and language pairs (Hassan et al., 2018), these claims seem overly optimistic and no longer hold with document-level evaluation (Toral et al., 2018; Laubli et al., 2018).",
"Recent work on context-aware NMT attempts to alleviate this discrepancy by incorporating the surrounding context sentences (in either or both the source and target sides) in the translation system.",
"This can be done by, for example, feeding context sentences to standard NMT 1 https://github.com/neulab/contextual-mt Figure 1: Illustration of how we can measure context usage by a model q MT as the amount of information gained when a model is given the context C and source X vs when the model is only given the X .",
"models (Tiedemann and Scherrer, 2017), using different encoders for context (Zhang et al., 2018), having cache-based memories (Tu et al., 2018a), or using models with hierarchical attention mechanisms (Miculicich et al., 2018; Maruf et al., 2019a) more details in 2.",
"While such works report gains in translation quality compared to sentence-level baselines trained on small datasets, recent work has shown that, in more realistic high-resourced scenarios, these systems fail to outperform simpler baselines with respect to overall translation accuracy, pronoun translation, or lexical cohesion (Lopes et al., 2020).",
"We hypothesize that one major reason for these lacklustre results is due to the fact that models with the architectural capacity to model cross-sentential context do not necessarily learn to do so when trained with existing training paradigms.",
"However, even quantifying model usage of context is an ongoing challenge; while contrastive evaluation has been proposed to measure performance on inter-sentential discourse phenomena (Muller et al., 2018; Bawden et al., 2018), this approach is confined to a narrow set of phenomena, such as pronoun translation and lexical cohesion.",
"A toolbox to measure the impact of context in broader settings is still missing.",
"To address the limitations above, we take inspiration from the recent work of Bugliarello et al. (2020) and propose a new metric, conditional cross-mutual information (CXMI, 3), to measure quantitatively how much context-aware models actually use the provided context by comparing the model distributions over a dataset with and without context.",
"Figure 1 illustrates how it measures context usage.",
"This metric applies to any probabilistic context-aware machine translation model, not only the ones used in this paper.",
"We release a software package to encourage the use of this metric in future context-aware machine translation research.",
"We then perform a rigorous empirical analysis of the CXMI between the context and target for different context sizes, and between source and target context.",
"We find that: (1) context-aware models use some information from the context, but the amount of information used does not increase uniformly with the context size, and can even lead to a reduction in context usage; (2) target context seems to be used more by models than source context.",
"Given the findings, we next consider how to encourage models to use more context.",
"Specifically, we introduce a simple but effective variation of word dropout (Sennrich et al., 2016a) for context-aware machine translation, dubbed COWORD dropout (4).",
"Put simply, we randomly drop words from the current source sentence by replacing them with a placeholder token.",
"Intuitively, this encourages the model to use extra-sentential information to compensate for the missing information in the current source sentence.",
"We show that models trained with COWORD dropout not only increase context usage compared to models trained without it but also improve the quality of translation, both according to standard evaluation metrics (BLEU and COMET) and according to contrastive evaluation based on inter-sentential discourse phenomena such as anaphoric pronoun resolution and lexical cohesion (4.2, Table 1).",
"We are interested in learning a system that translates documents consisting of multiple sentences between two languages.",
"2 More formally, given a corpus of parallel documents in two languages, D = { D 1 , ..., DN } , where each document is a sequence of source and target sentences, D = { ( x (1) , y (1) ) , ..., ( x ( K ) , y ( K ) ) } , we are interested in learning the mapping between the two languages.",
"We consider the typical (auto-regressive) neural machine translation system q parameterized by .",
"The probability of translating x ( i ) into y ( i ) given the context of the sentence C ( i ) is q ( y ( i ) | x ( i ) , C ( i ) ) = T (cid:89) t =1 q ( y ( i ) t | x ( i ) , y ( i ) <t , C ( i ) ) where y ( i ) t represents the t th token of sentence y ( i ) .",
"This context can take various forms.",
"On one end, we have the case where no context is passed, C ( i ) = , and the problem is reduced to sentence-level translation.",
"On the other end, we have the case where all the source sentences and all the previous generated target sentences are passed as context C ( i ) = { x (1) , ..., x ( K ) , y (1) , ..., y ( i 1) } .",
"As mentioned, there are many architectural approaches to leveraging context (see 5 for a more complete review), and the methods that we present in this paper are compatible with most architectures because they do not specify how the model q uses the context.",
"In experiments, we focus mostly on the simpler approach of concatenating the context to the current sentences (Tiedemann and Scherrer, 2017).",
"Recent work by Lopes et al. (2020) has shown that, given enough data (either through pretraining or larger contextual datasets), this simple approach tends to be competitive with or even outperform its more complex counterparts 2 Here, a document could be an actual document but it could also represent other contextual collections of text, such as a sequence of dialogue utterances.",
"While context-aware models allow use of context, they do not ensure contextual information is actually used: models could just be relying on the current source sentence and/or previously generated target words from the same sentence when generating the output.",
"Contrastive evaluation, where models are assessed based on the ability to distinguish correct translations from contrastive ones, is a common way to assess the ability of context-aware models to capture specific discourse phenomena that require inter-sentential context, such as anaphora resolution (Muller et al., 2018) and lexical cohesion (Bawden et al., 2018).",
"However, these methods only provide an indirect measure of context usage with respect to a limited number of phenomena and can fail to capture other, unknown ways in which the model might be using context.",
"Kim et al. (2019) showed that most improvements to translation quality are due to non-interpretable usages of context, such as the introduction of noise that acts as a regularizer to the encoder/decoder.",
"This problem is further exacerbated by the fact that there is no clear definition of what entails context usage.",
"In a different context, Bugliarello et al. (2020) introduced cross-mutual information (XMI), to measure the difficulty of translating between different language pairs in sentence-level neural machine translation.",
"Given a language model q LM for a target sentence Y and a translation model q MT for translating from X to Y , XMI is defined as: XMI ( X Y ) = H q LM ( Y ) H q MT ( Y | X ) , where H q LM denotes the cross-entropy of the target sentence Y under the language model q LM and H q MT the conditional cross-entropy of Y given X under the translation model q MT .",
"This allows us to measure how much information the source sentence gives us about the target sentence (an analogue of mutual information for cross-entropy).",
"In the case where q LM and q MT perfectly model the underlying probabilities we would have XMI ( X Y ) = MI ( X, Y ) , the true mutual information.",
"Taking inspiration from the above, we propose Conditional Cross-Mutual Information (CXMI), a new measure of the influence of context on a model's predictions.",
"This is done by considering an additional variable for the context C and measuring how much information the context C provides about the target Y given the source X .",
"where H q MTA is the entropy of a context-agnostic machine translation model, and H q MTC refers to a context-aware machine translation model.",
"This quantity can be estimated (see Appendix A for a more formal derivation) over an held-out test set with N sentence pairs and the respective context as: CXMI ( C Y | X ) 1 NN (cid:88) i =1 log q MTA ( y ( i ) | x ( i ) ) q MTC ( y ( i ) | x ( i ) , C ( i ) ) While q MTA and q MTC can, in theory, be any models, we are interested in removing any confounding factors other than the context that might lead to instability in the estimates of the distributions.",
"For example, if q MTA and q MTC use completely different models, it would not be clear if the difference in the probability estimates is due to the introduction of context or due to other extraneous factors such as differences in architectures, training regimens, or random seeds.",
"To address this we consider a single model, q MT , that is able to translate with and without context (more on how this achieved in 3.2).",
"We can then set the context-agnostic model and the contextual model to be the same model q MTA = q MTC = q MT .",
"This way we attribute the information gain to the introduction of context.",
"Throughout the rest of this work, when we reference context usage we will precisely mean this information gain (or loss).",
"Data We experiment with a document-level translation task by training models on the IWSLT2017 (Cettolo et al., 2012) dataset for language pairs EN DE and EN FR (with approximately 200K sentences for both pairs).",
"We use the test sets 2011-2014 as validation sets and the 2015 as test sets.",
"To address the concerns pointed out by Lopes et al. (2020) that gains in performance are due to the use of small training corpora and weak baselines, we use Paracrawl (Espl`a et al., 2019) and perform some data cleaning based on language identifica-tion tools, creating a pretraining dataset of around 82M and 104M sentence pairs for EN DE and EN FR respectively.",
"All data is encoded/vectorized with byte-pair encoding (Sennrich et al., 2016b) using the Senten-cePiece framework (Kudo and Richardson, 2018).",
"For the non-pretrained case, we use 20K vocabulary size shared across source/target, while for the pretrained case we use a 32K vocabulary size.",
"Besides translation quality, we also evaluate our models on two contrastive datasets for different discourse phenomena to better assess the ability of our models to capture context (more on this in 4.2): For the EN DE language pair, we evaluate on the ContraPro dataset (Muller et al., 2018), targeting anaphoric pronoun resolution.",
"Source-side sentences contain the English anaphoric pronoun it while target-side sentences contain the corresponding German translations er , sie or es .",
"Contrastive erroneous translations are automatically created by replacing the correct pronoun with one of the other two.",
"The test set contains 4,000 examples for each target pronoun type and context is needed to correctly disambiguate.",
"Context includes the four previous sentences For the EN FR language pair, we evaluate on the dataset by Bawden et al. (2018) targeting anaphoric pronoun resolution and lexical cohesion.",
"It contains 200 manually curated examples for each phenomenon.",
"Anaphora examples include singular and plural personal and possessive pronouns that require context to be correctly inferred and the dataset is balanced such that a model that does not use context can only achieve 50% accuracy.",
"Context includes the previous sentence Models and Optimization For all our experiments, we consider an encoder-decoder Transformer architecture (Vaswani et al., 2017).",
"In particular, we train the transformer small (hidden size of 512, feedforward size of 1024, 6 layers, 8 attention heads).",
"For the pretrained setup, we also pre-train a transformer large architecture (hidden size of 1024, feedforward size of 4096, 6 layers, 16 attention heads) and subsequently fine-tune on the IWSL2017 datasets.",
"As in Vaswani et al. (2017), we train using the Adam optimizer with 1 = 0 .",
"9 and 2 = 0 .",
"98 and use an inverse square root learning rate scheduler, with an initial value of 10 4 and 5 10 4 for pretrained and non-pretrained cases respectively, and with a linear warm-up in the first 4000 steps.",
"We train the models with early stopping on the validation perplexity.",
"We train all our models on top of the Fairseq framework (Ott et al., 2019).",
"To assess the relative importance of different context sizes on both the source and target side, we start by considering two models, one for the source-side context and one for the target-side context, that receive context of size k , C ( i ) = { x ( i k ) , . . . , x ( i 1) } or C ( i ) = { y ( i k ) , . . . , y ( i 1) } .",
"During training, k is selected randomly to be in { 1 , . . . , 4 } for every example.",
"This way the model is trained to translate the same source without and with different context sizes and is thus able to translate based on any context size in that interval.",
"Figure 2 shows the CXMI values computed over the test set as a function of the context size for both the source-side and target-side contextual models for both the non-pretrained and pretrained regimens for the EN DE language pair.",
"Results for the EN FR language pair are similar and can be found in Appendix B. For the non-pretrained case, for both the source and target context, the biggest jump in context usage is when we increase the context size from 0 to 1.",
"After that, increasing the context size leads to diminishing increases in context usage and even reduced context usage for the source-side context.",
"Interestingly, when the model is stronger, such as in the pretrained case, we can see that it can leverage target-side context even better than the non-pretrained case, with a similar trend of diminishing increases in context usage for both regimes.",
"However, this is not the case for the source-side context, and it seems that the pretrained model is barely able to use the contextual information on this side.",
"Overall, for this regime, we can conclude that having a context size of one or two previous sentences on both sides is beneficial to the model, and that target-side context is slightly more used than source-side context.",
"This appears to corroborate the findings of Bawden et al. (2018) that target-side context is more effective than the source context.",
"Does CXMI Really Measure Context Usage?",
"To assert that CXMI correlates with interpretable measures of context usage, we perform a correlation analysis with the performance in the contrastive datasets mentioned.",
"In these datasets, usage of context is evident where the model picks the right answer when it is passed the context and is not able to do so when no context is given.",
"Thus Table 2 shows the point-biserial correlation coefficient 3 between the per-sample CXMI and binary random variable and a binary variable that takes the value 1 if the contextual model picks the correct translation and the non-contextual model picks the incorrect one, for different context sizes on the pretrained model.",
"We can see that there is a statistically significant correlation between both values, which strengthens the notion that CXMI captures previous measures of context usage to some extent.",
"3 The Point-Biserial correlation coefficient is a special case of the Pearson correlation coefficient when one of the random variables is dichotomous.",
"Motivated by the above results demonstrating the limited context usage of models trained using the standard MLE training paradigm, particularly with respect to more distant context, we now ask the question: Is it possible to modify the training methodology to increase context usage by the model?",
"As an answer, we extend a popular regularization technique used in sentence-level machine translation, word dropout (Sennrich et al., 2016a), to the context-aware setting.",
"The idea behind context-aware word (COWORD ) dropout is to model the translation probability between x ( i ) and y ( i ) as p ( y ( i ) | x ( i ) ) = T (cid:89) t =1 p ( y ( i ) t | x ( i ) , y ( i ) <t , C ( i ) ) , where x ( i ) is a perturbed version of the current source sentence generated by randomly dropping tokens and replacing them with a mask token given a dropout probability p : r ( i ) t Bernoulli ( p ) x ( i ) t = (cid:40) (cid:104) MASK (cid:105) if r ( i ) t = 1 x ( i ) t otherwise.",
"In the case where no context is passed C ( i ) = , COWORD dropout reduces to word dropout.",
"The intuition behind such a perturbation is that, by dropping information from the current source and not the context, we increase the relative reliability of context C ( i ) , therefore providing the inductive bias 0 1 2 3 4 0 1 2 10 2 Context Size CXMI p = 0 .",
"that context is important for the translation.",
"We will see in 4.2 that this inductive bias is beneficial and that COWORD dropout not only improves performance but also increases context usage.",
"Setup As in 3.2, we consider transformer models trained on the IWSLT2017 for both EN DE and EN FR, both from scratch and pretrained using the procedure previously described.",
"In particular, due to findings in the previous section, we consider models with either only target-side context or both source-side and target-side context.",
"Context Usage To assess if our proposed regularization technique, COWORD dropout, increases context usage by models, we train a model using the same dynamic context size setting used in 3.2.",
"Figure 3 plots the CXMI values on the test set as a function of the target context size as we increase the dropout value p .",
"We see that increasing this value consistently increases context usage according to CXMI across different context sizes.",
"Note that, at test time, COWORD dropout is disabled, which means that it provides inductive bias only during training and models learn to use more context by themselves.",
"Table 3 illustrates some examples where the COWORD dropout increased the per-sample CXMI significantly.",
"While the model only has access to target context, we present the source context for clarity.",
"In the first example, while the source is a complete sentence, the target is only a fragment of one so the context helps complete it.",
"In the other two examples shown, we can see that context helps disambiguate the gender of the German translation of the English pronoun it .",
"Interestingly, the words that use context the most according to CXMI match very closely to the ones that native speakers annotated.",
"Translation Quality To evaluate if the increased usage of context correlates with better machine translation quality, based on the previous experiments on context usage and values for COWORD dropout, we consider three models trained with fixed-size context: A baseline that has no context, reducing to sentence-level model ie: i.e. , C ( i ) = ; a one-to-two model having as context the previous target sentence, i.e. , C ( i ) = { y ( i 1) } ; a two-to-two model having as context the previous source sentence and the previous target sentence, i.e. , C ( i ) = { x ( i 1) , y ( i 1) } .",
"In addition, to explore the benefits of COWORD dropout in other architectures, we also train a one-to-two multi-encoder (Jean et al., 2017) transformer small model (more details in Appendix C).",
"For all models with target context, when decoding, we use the previous decoded sentences as target context.",
"Table 4 shows the performance across three different seeds of the baseline and contextual models for both the non-pretrained and pretrained setting, with increasing values of COWORD dropout p .",
"We also run the baseline with COWORD dropout (which, as said previously, reduces to word dropout) to ensure that improvements were not only due to regularization effects on the current source/target.",
"We report the standard BLEU score (Papineni et al., 2002) calculated using sacreBLEU (Post, 2018) and COMET, a more accurate evaluation method using multilingual embeddings (Rei et al., 2020).",
"For the non-pretrained case, we can see that a COWORD dropout value p > 0 consistently improves the performance of the contextual models when compared to models running with p = 0 and with the sentence-level baseline with the same values for word dropout.",
"For the pretrained case, the improvements are not as noticeable, although models trained with COWORD dropout still always outperform models trained without it.",
"This is perhaps a reflection of the general trend that better models are harder to improve.",
"Table 5 shows that COWORD dropout is also helpful for the multi-encoder model, with COWORD dropout helping significantly.",
"This shows that this method could be helpful for context-aware architectures other than concatenation-based.",
"Discourse Phenomena While automatic metrics such as BLEU and COMET allow us to measure translation quality, they mostly target sentence-level quality and do not specifically focus on phenomena that require context-awareness.",
"Contrastive datasets, as described in 3.2, allow us to measure the performance of context-aware models in specific discourse phenomena by comparing the probability of correct translation against the contrastive translations.",
"Models that capture the targeted discourse phenomena well will consistently rank the correct translation higher than the contrastive ones.",
"While there is a disconnect between the translation (done via decoding) and contrastive evaluation, it is currently the best way to measure a model's performance on context-aware discourse phenomena.",
"Table 6 shows the average performance over the contrastive datasets of the baseline and contextual models for both the (non-)pretrained settings, with increasing values of COWORD dropout p .",
"We can see that in general, increasing COWORD dropout leads to improved performance, particularly for the non-pretrained case.",
"This gain is particularly clear for pronoun resolution and the EN DE language pair.",
"We hypothesise that this is due to the small size of the contrastive sets for the EN FR language pair, which leads to high variance.",
"Table 7 similarly shows that COWORD dropout improves the performance of the multi-encoder model across all phenomena, which again shows that our proposed regularization method has benefits for multiple architectures for context-aware machine translation.",
"Curiously, when these models are trained without COWORD dropout, they achieve performance similar to the sentence-level baseline, while when dropout is applied, they are able to effectively start using context.",
"Context-aware Machine Translation There have been many works in the literature that try to incorporate context into NMT systems.",
"Tiedemann and Scherrer (2017) first proposed the simple approach of concatenating the previous sentences in both the source and target side to the input to the system; Jean et al. (2017), Bawden et al. (2018), and Zhang et al. (2018) used an additional context-specific encoder to extract contextual features from the previous sentences; Maruf and Haffari (2018) and Tu et al. (2018b) used cache-based memories to encode context; Wang et al. (2017) used a hierarchical RNN to encode the global context from all previous sentences; Miculicich et al. (2018) and Maruf et al. (2019a) used hierarchical attention networks to encode context; Chen et al. (2020) added document-level discourse structure information to the input; Sun et al. (2020) trained a simple concatenation-based model with varying context size during training to have a model that is able to translate with any context size, similar to what is done in this work.",
"Similarly to what we do with COWORD dropout, Jean and Cho (2019) attempted to maximise sensitivity to context by introducing a margin-based regularization term to explicitly encourage context usage.",
"For a more detailed overview, Maruf et al. (2019b) extensively describe the different approaches and how they leverage context.",
"While these models lead to improvements with small training sets, Lopes et al. (2020) showed that the improvements are negligible when compared with the concatenation baseline when using larger datasets.",
"However, importantly, both our metric CXMI for measuring context usage and the proposed regularization method of COWORD dropout, can theoretically be applied to any of the above-mentioned methods.",
"Evaluation In terms of evaluation, most previous work focuses on targeting a system's performance on contrastive datasets for specific inter-sentential discourse phenomena.",
"Muller et al. (2018) built a large-scale dataset for anaphoric pronoun resolution, Bawden et al. (2018) manually created a dataset for both pronoun resolution and lexical choice and Voita et al. (2019) created a dataset that targets deixis, ellipsis and lexical cohesion.",
"Sto-janovski et al. (2020) showed through adversarial attacks that models that do well on other contrastive datasets rely on surface heuristics and create a contrastive dataset to address this.",
"In contrast, our CXMI metric is phenomenon-agnostic and can be measured with respect to all phenomena that require context in translation.",
"Information-Theoretic Analysis Bugliarello et al. (2020) first proposed cross-mutual information (XMI) in the context of measuring the difficulty of translating between languages.",
"Our work differs in that we propose a conditional version of XMI, where S is always observed, and we use it to assess the information gain of context rather than the difficulty of translating different languages.",
"We introduce a new, architecture-agnostic, metric to measure how context-aware machine translation models are using context and propose a simple regularization technique to increase context usage by these models.",
"Our results are theoretically applicable to almost all recently proposed context-aware models and future work should go about measuring exactly how much these models leverage context and if COWORD dropout also improves context usage and performance in these.",
"We also hope this work motivates exploring (C)XMI for other uses cases where measuring the relevance/usage of inputs to a particular model other than context-aware machine translation.",
"It could, for example, be used in conditional language modelling to analyse how the inputs we are conditioning on are being used by the model.",
"We would like to thank all the members of DeepSPIN, NeuLab, and Unbabel who provided feedback on earlier versions of this work.",
"This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by the P2020 programs MAIA and Unbabel4EU (LISBOA-01-0247-FEDER-045909 and LISBOA-01-0247-FEDER-042671), and by the Fundacao para a Ciencia e Tec-nologia through contracts SFRH/BD/150706/2020 and UIDB/50008/2020."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Towards building intelligent dialogue agents, there has been a growing interest in introducing explicit personas in generation models.",
"However, with limited persona-based dialogue data at hand, it may be difficult to train a dialogue generation model well.",
"We point out that the data challenges of this generation task lie in two aspects: first, it is expensive to scale up current persona-based dialogue datasets; second, each data sample in this task is more complex to learn with than conventional dialogue data.",
"To alleviate the above data issues, we propose a data manipulation method, which is model-agnostic to be packed with any persona-based dialogue generation model to improve its performance.",
"The original training samples will first be distilled and thus expected to be fitted more easily.",
"Next, we show various effective ways that can diversify such easier distilled data.",
"A given base model will then be trained via the constructed data curricula, i.e. first on augmented distilled samples and then on original ones.",
"Experiments illustrate the superiority of our method with two strong base dialogue models (Transformer encoder-decoder and GPT2).",
"The ability to generate responses with consistent personas is important towards building intelligent dialogue agents.",
"In past years, there has been a growing interest in introducing explicit personas in dialogue generation models (Song et al., 2019; Wolf et al., 2019).",
"A piece of persona text generally consists of profiles and background personal facts.",
"A clipped persona-based dialogue from the PersonaChat (Zhang et al., 2018a) dataset is shown in Figure 1, which covers rich persona features.",
"For Work was done when Yu Cao was an intern at Tencent AI Lab.",
"a persona-based dialogue generation model, generated responses need to be relevant to the dialogue context as well as consistent with personas.",
"Most existing generation models for this task rely heavily on training with sufficient persona-based dialogues.",
"However, available data are limited due to their expensive collection costs.",
"Take the PersonaChat as an example, two crowd-sourced annotators are hired to play the part of a provided persona and converse naturally with each other.",
"In total, about 162 thousand dialogue utterances are collected with less than 5 thousand unique persona profiles.",
"Compared with conventional dialogue datasets such as OpenSubtitles (Lison and Tiedemann, 2016) and Weibo (Shang et al., 2015) with millions of utterances, persona-based dialogue datasets are relatively small.",
"Besides the limited data scale, another data issue we want to point out is that a persona-based dialogue is more complex to learn with, in comparison with conventional dialogues.",
"Recall that a persona-based dialogue involves not only multiple dialogue utterances, but also auxiliary persona sentences.",
"Welleck et al. (2019) showed that not all responses in the PersonaChat dataset are consistent with the provided personas.",
"This makes it difficult for a model to capture a reliable mapping from training data.",
"Supposing we apply a similar dialogue model as in conventional dialogue generation tasks with a comparable parameter size, we 7984 should expect more data would be necessary to train a robust model on the more difficult data setting.",
"Moreover, it may be difficult to use existing data augmentation methods (Li et al., 2019; Niu and Bansal, 2019) to automatically construct such complex persona-based dialogue data.",
"For example, if we apply back translation (Sennrich et al., 2016) to every sentence in persona-based samples, the augmented ones may not maintain the coherence between the dialogue history and the response as well as the consistency between the persona and the response simultaneously.",
"A few studies have been conducted to alleviate the above data issues by finetuning existing pretrained models such as GPT (Wolf et al., 2019; Golovanov et al.) or BERT ( ? Song et al., 2021).",
"They often stick to a certain pretrained model.",
"Sophisticated finetuning strategies, including proper network modifications and loss functions, are required to get satisfactory performance, making them not useful across different pretrained models.",
"Moreover, they do not address the data difficulty issue explicitly.",
"Most of them simply concatenate all persona and dialogue history sentences into a single input sequence for finetuning, and rely on the ability of the pretrained model to fast adapt to the target data domain.",
"Hence, we want to design a model-agnostic method to address both the data scale and data difficulty issue, which can be packed with any base model, either trained from scratch or finetuned from a pretrained model.",
"In this work, we propose a data manipulation method for persona-based dialogue data, which is model-agnostic to be packed with any base model to improve their robustness and consistency.",
"Our method includes three operations on data, namely D 3 , in sequence:",
"(i) D ata distillation: original training samples are simplified into contain only useful and less redundant persona sentences and dialogue utterances, which are expected to be fitted more easily;",
"(ii) D ata diversification: with the easier distilled samples, we can also perform data augmentation more reliably.",
"We design various methods to edit new personas, and then align them with new and consistent responses to improve data diversity;",
"(iii) D ata curriculum: with both augmented distilled and original data at hand, we arrange them into a data curriculum for model learning (Bengio et al., 2009), where the base model is trained on the easier augmented distilled data and then the harder original data.",
"To validate the effectiveness of our method, we perform experiments on two strong base dialogue models, Transformer-based encoder-decoder and GPT2.",
"Persona-based dialogue generation It sees growing interest in recent years, thanks to the released benchmark datasets such as PersonaChat/ ConvAI2 (Zhang et al., 2018a; Dinan et al., 2020).",
"Previous works mostly focus on modifying dialogue models to condition auxiliary persona information, including extra persona embedding(Li et al., 2016b), profile memory (Zhang et al., 2018a), copying from personas (Yavuz et al., 2019), CVAE with persona information (Song et al., 2019), and using meta-learning to augment low-resource personas (Tian et al., 2021).",
"Recent works try to adopt large-scale pretrained models on this task.",
"GPT/GPT2 (Radford et al., 2018, 2019) are chosen the most often and shown to improve the generation quality with different finetuning strategies (Wolf et al., 2019; Golovanov et al.; Cao et al., 2020).",
"Some leverage BERT (De-vlin et al., 2019) as backbones ( ? Song et al., 2021).",
"Other pretrained models also demonstrate their effectiveness (Lin et al., 2021).",
"The aforementioned methods often need proper network modifications and finetuning loss functions in order to get satisfactory performance.",
"It is hard to transfer them to be useful across different pretrained models.",
"Moreover, most of them simply concatenate persona texts and dialogue history together as a single input sequence (Wolf et al., 2019; Roller et al., 2021), highly depending on the ability of the pretrained model to fast adapt to the target data domain.",
"Text data manipulation Various data augmentation methods have been widely used in many NLP tasks (Sennrich et al., 2016; Hou et al., 2018; Guo et al., 2019; Min et al., 2020), which are also effective to boost the performance of dialogue models.",
"New generated dialogue utterances (Li et al., 2019; Niu and Bansal, 2019) and retrieval results (Zhang et al., 2020) can be used to augment the training data.",
"However, all previous work only studies the pairwise relationship between a query and a response to design the augmentation techniques, which are not applicable to involving auxiliary information, such as personas, simultaneously.",
"uninformative or noisy samples to enhance data quality (Csky et al., 2019; Akama et al., 2020).",
"Cai et al. (2020a) combine data augmentation and re-weighting to make models learn more effectively.",
"Tian et al. (2019) utilize learnable memory based on dialogue clusters to enhance the model.",
"Curriculum learning Bengio et al. (2009) examine the benefits of training models using various curricula successively from easy to hard.",
"It has been applied to many NLP tasks such as machine translation (Platanios et al., 2019), reading comprehension (Tay et al., 2019) and language understanding (Xu et al., 2020).",
"Cai et al. (2020b) adopt the idea in open-domain dialogue generation, where curriculum plausibility is determined by the response properties, including coherence and diversity.",
"Our work is different in that we introduce new distilled data regarding as a curriculum.",
"We first formally define a persona-based training sample.",
"It consists of L persona description sentences P = { p 1 , p 2 ,",
".., p L } , M dialogue history utterances H = { h 1 , h 2 ,",
".., h M } , and a gold response R .",
"The given training dataset is denoted as D = { ( P, H, R ) } .",
"Note that L and M in different training samples can be different.",
"A dialogue model needs to generate a response R , which is coherent with the dialogue history H and consistent with persona information in P .",
"Our proposed data manipulation method D 3 is model-agnostic.",
"For any dialogue model, we will not change the model itself, but only manipulate its training data.",
"We develop three data manipulation operations in sequel, former two for augmentation and the last one eases training, shown in Figure 2: 1. D ata distillation.",
"We construct simple persona-consistent data D dis = { ( (cid:101) P , (cid:101) H, (cid:101) R ) } by removing redundant information in P and H ; 2. D ata diversification.",
"Due to the limited amount of distilled samples, we design various methods to increase the data variety and scale, and obtain the diversified data D div = { ( (cid:101) p, (cid:101) h, (cid:101) r ) } ; 3. D ata curriculum.",
"We combine D dis and D div as the augmented dataset D a .",
"A curriculum strategy is defined to train the model with the easier distilled samples in D a first and then the original ones in D .",
"Before introducing our distillation method, we discuss the difficulty of training a model with the original",
"original training samples in detail.",
"The dependency of a response on the given persona fluctuates between different parts of the persona sentences.",
"As shown in Figure 1, most responses only correspond to one persona sentence.",
"The remaining persona information is mostly redundant, and may confuse the model to attend on useful persona information.",
"Similarly, we notice that models tend to attend more on the last few utterances of H rather than the historical ones.",
"We find that by using a Transformer encoder-decoder model, the attention weights of the last Transformer layer on the last utterance is 45% higher than the average on the other utterances.",
"See Appendix C.1 for the experiment and results.",
"This observation is also consistent with previous studies on multi-turn context understanding (Khan-delwal et al., 2018; Sankar et al., 2019).",
"A few previous works have demonstrated that attention-based models will be distracted by noisy attended information, and accurate attention supervisions can be very beneficial (Liu et al., 2016; Hsu et al., 2018).",
"Inspired by them, we mimic a hard attention supervision between the response and useful persona/dialogue history by directly removing redundant tokens in the attended sequences.",
"Therefore, different from previous work that modify the model to inject attention supervisions, our method only manipulates data.",
"Persona distillation We aim to determine which persona sentence the current response is consistent with, and thus remove the remaining non-consistent ones.",
"To do so, we associate each persona sentence p k with the target response R , and determine the consistency between each p k and R .",
"Following previous work (Welleck et al., 2019), we cast it as a natural language inference (NLI) problem.",
"If R entails p k , it is considered to be consistent with p k , otherwise irrelevant to p k .",
"A trained RoBERTa (Liu et al., 2019) model is used here as the NLI model, with an accuracy of 90.8% on the DialogueNLI dev set provided in Welleck et al. (2019).",
"Details are provided in Appendix A.1.",
"Dialogue history distillation We can adopt a trained attention-based model to determine useful context sentences.",
"For simplicity, we could also keep only the most useful last utterance HM in a distilled sample (as suggested by our preliminary experiments discussed in the beginning of this sec-tion).",
"In our experiments in 4, we find that using the last utterance is enough for our method to work 7986 P k Persona distillation ...",
"A distilled sample ( (cid:101) P , (cid:101) H, (cid:101) R ) is ready to be constructed now.",
"Here, (cid:101) P and (cid:101) H both contain only one sentence.",
"(cid:101)",
"P is any p k that entails R , and (cid:101) H is the last utterance in the dialogue history, and (cid:101) R = R .",
"Such samples form the distilled dataset D dis .",
"Note that an original sample in D may result in none, one, or multiple distilled samples, as R may entail none, one, or multiple persona sentences.",
"Distilled samples should ease model training as their responses are highly dependent on their (cid:101) P and (cid:101) H .",
"However, samples in D dis are limited in terms of both scale (around 40% of the original data) and diversity (about 4.5k unique persona sentences).",
"Hence, it is necessary to augment D dis .",
"Thanks to the assured relationship between (cid:101) P / (cid:101) H and R , we can devise possible methods to diversify distilled samples with more semantically varied samples.",
"Our data diversification operation contains the following three parts along with quality filtering, as shown in Figure 2. Persona editing We aim to obtain new persona sentences to improve the data scale, and more importantly the persona diversity.",
"Hence, we here consider both token-level and phrase-level editing methods given a persona sentence (cid:101) P : Token-level editing: we randomly mask a pre-defined ratio of tokens in (cid:101) P , then use a pretrained BERT (Devlin et al., 2019) model to make predictions on the masked positions one by one.",
"Phrase-level editing: we remove the last few tokens in (cid:101) P with the removal length determined by a random ratio, and utilize a pretrained GPT2 (Rad-ford et al., 2019) to rewrite the removal part.",
"Multiple edited persona sentences can be obtained from one certain (cid:101) P .",
"Here, we finetune pretrained models using all persona sentences for a trade-off between semantic diversity and domain similarity.",
"To ensure a satisfactory fluency and novelty of an edited persona (cid:101) p , we rate it via a scoring function: f = PPL ( (cid:101) p ) + (1 ) BS f ( (cid:101) p, (cid:101) P ) .",
"Here, PPL calculates the normalized perplexity via a GPT2 model to measure its fluency, and the rescaled F1 value of BERTScore (BS f ) (Zhang et al., 2019) is employed to evaluate the semantic similarity between two sentences.",
"Lower values for both functions are preferred, indicating higher fluency or novelty.",
"is a hyper-parameter.",
"We rank all edited personas originated from the same (cid:101) P with the ascending order of their scores in Eq.",
"1, and select the top N p ones.",
"Response aligning Since the semantic meaning of an edited persona sentence obtained above could change, the original response may not be consistent with it.",
"Therefore, we need to get a new aligned response to maintain the persona consistency.",
"Two approaches are utilized to obtain an aligned response (cid:101) r given an edited persona sentence (cid:101) p and the corresponding distilled history utterance (cid:101) H : Token-level editing: We observe that some overlapped tokens can be found between (cid:101) P and (cid:101) R .",
"If an overlapped token w has been changed to a new token w (cid:48) in the edited persona (cid:101) p , we directly replace w in (cid:101) R with w (cid:48) in the same positions, resulting in an aligned response (cid:101) r .",
"An illustration figure can be found in Appendix A.2.",
"Model predicting: If no overlapped token can be found, token-level editing will not be applicable.",
"Then we employ a GPT2-based encoder-decoder model (Cao et al., 2020) finetuned on the distilled 7987 P p r my husband is a lawyer.",
"Dialogue history augmentation To further scale up the size of distilled samples, we also manipulate the dialogue history (cid:101) H .",
"Since the diversity scarcity issue is not severe in (cid:101) H , we use a popular sentence-level data augmentation method, back translation (BT) (Sennrich et al., 2016), to obtain variants of dialogue utterances.",
"We could consider the semantics of the variants are identical.",
"Distilled history utterance (cid:101) H is translated into an intermediate language, then back into the source language using a couple of existing translation models.",
"The original dialogue history and its N h variants compose the augmented dialogue history set { (cid:101) h } .",
"Combining the above three parts together, we now obtain new samples { ( (cid:101) p, (cid:101) h, (cid:101) r ) } .",
"We evaluate them with respect to fluency, persona consistency and history coherence: s = PPL ( (cid:101) r ) + NLI ( (cid:101) p, (cid:101) r ) + (1 ) NLI c ( (cid:101) h, (cid:101) r ) , (2) where NLI measures the entailment between a persona sentence and the response by the same NLI model in 3.1, and NLI c evaluates the entailment between a dialogue history utterance and the response using another NLI model (Dziri et al., 2019)(details in Appendix A.2).",
"and are hyper-parameters.",
"We filter samples below a threshold T , and the remaining samples constitute the diversified data set D div .",
"The whole augmented training dataset is the union of D dis and D div .",
"The quality of augmented samples is discussed in Appendix B. D D dis D div D a D + D a #sample 65,719 26,693 26,700 53,393 119,112 #persona 4,710 4,522 9,788 14,310 14,498 #token 20,467 13,420 12,794 17,835 23,269 Table 1: Statistics of samples obtained in each stage.",
"During inference, the model should be capable to handle testing data with multiple persona sentences and dialogue history utterances as the original data.",
"Therefore, a model trained using D a only is not proper.",
"We should use both D a and D .",
"Unlike previous studies that treat the original and augmented data equally and mix them directly, we design a curriculum strategy.",
"Considering the different training difficulty of data in D a and D , we treat D a as an easy curriculum while the original dataset D as a hard curriculum.",
"The model is trained on such data curriculum successively until convergence.",
"To validate the effectiveness of our proposed model-agnostic data manipulation method, we first experiment on two strong persona-based dialogue generation models (Transformer encoder-decoder and GPT2) on the benchmark PersonaChat (Zhang et al., 2018a) dataset.",
"Next we conduct a series of analysis to examine the usefulness of different data manipulation operations in our method.",
"1 4.1 Experimental Setup Dataset The PersonaChat (Zhang et al., 2018a) data is widely used in this field (Song et al., 2019, 2020; Wolf et al., 2019; Golovanov et al.).",
"Each sample has a dialogue history H with no more than 15 utterances ( M 15 ) and a persona P with between 4 and 6 sentences ( 4 L 6 ).",
"Numbers of samples, unique persona sentences, and tokens in each stage of our method are listed in Table 1. Base models Two dialogue model architectures are considered: TRANSFORMER (Vaswani et al., 2017): an encoder-decoder architecture using Transformer as the backbone with pointer generator (See et al., 2017) integrated; GPT2: one of the most powerful pretrained models on this task (Wolf et al., 2019; Golovanov et al.; Cao et al., 2020).",
"1 Code is available at https://github.com/ caoyu-noob/D3 .",
"TRANSFORMER is trained from scratch, and GPT2 is finetuned.",
"For both models, we construct training data by concatenating persona and dialogue history as a single input sequence, in which special symbols and token type embeddings are involved to distinguish between them.",
"The negative log-likelihood loss is used to train models using Adam optimizer (Kingma and Ba, 2015).",
"Compared methods We pack two base models with our method D 3 and other data manipulation approaches for comparison: BACKTRANSLATION (BT) (Sennrich et al., 2016): we perform BT on all sentences in a training sample, including the persona sentences and dialogue utterances, and train the model with the augmented and original data jointly; CVAE (Li et al., 2019): a CVAE-based generation model is trained on the original data and then used to generate new responses via sampling with different latent codes.",
"Since it can only handle pairwise data, we concatenate all input sentences as a single input sequence in this method; ENTROPYFILTER ( FILTER ) (Csky et al., 2019): it removes generic responses according to the entropy, which is calculated using the dialogue history and the response without using the persona.",
"The detailed configurations of each method are given in Appendix B. Automatic metrics We adopt multiple widely used metrics to measure the response quality, including Perplexity (PPL), BLEU (Papineni et al., 2002), NIST-4 (Doddington, 2002) and BERTScore (Zhang et al., 2019).",
"We use the same BS f in Eq.",
"1 for BERTScore.",
"To evaluate the response diversity, we use Distinct-n (Li et al., 2016a) (Dist, n=1,2,3) which is the ratio of unique n-grams among the corpus, and Entropy-n (Zhang et al., 2018b) (Ent, n=1,2,3) that is the entropy obtained via the n-gram distribution in a sentence.",
"Moreover, C-score (Madotto et al., 2019) ( C ) is involved, where we follow the default setting and use the output of an NLI model trained on the DialogueNLI dataset (Welleck et al., 2019) to indicate the consistency between a response and persona sentences.",
"Human evaluation We randomly selected 200 samples from the test set for human evaluations.",
"Five professional annotators from a third-party company were asked to rate the responses from three aspects: 1) Fluency (Flu.); 2) Coherence (Coh.) with the dialogue history, 3) Persona consistency (Pcon.).",
"The scores for the first two aspects have three scales, in which 1/2/3 indicates unac-ceptable/moderate/satisfactory respectively.",
"The last one is binary, where 1 means the response is consistent with at least one persona sentence in the sample and 0 otherwise.",
"The agreement rate from raters is 97.5%, 89.5%, 100% @3 (at least 3 of them reach an agreement) in the these aspects, indicating the validity of scores.",
"The instruction of human evaluation is given in Appendix B. 4.2 Results Table 2 reports the results on two based models trained with the use of various compared data manipulation methods.",
"T-test is conducted between our D 3 and other compared methods on each base model for metrics including BS f , C-score and three human evaluation metrics.",
"Other automatic metrics have similar results or are not applicable such as Distinct-n.",
"Details of the significant tests are given in Appendix C.2.",
"On TRANSFORMER , all methods achieve improvements on most metrics compared with training with the original dataset.",
"Our method yields the best performance except for Ent-1.",
"On GPT2, many methods fail to improve the various metrics consistently.",
"For example, on the persona consistency (Pcon.), only ENTROPY FILTER and our method can get higher scores than training with the original dataset.",
"The reason is that the data scarcity issue is less severe with a pretrained model, and it is more important to address the data diversity issue.",
"In our method, the augmented distilled samples are encouraged to have different semantics with the original ones and improve the data diversity, and thus continue to get improvements on the strong pretrained GPT2.",
"We further analyze the contributions made by different data manipulation operations in our method by answering the following three questions: 1. Is there a need to construct simple data D dis as in data distillation?",
"2. Can data diversification effectively obtain diverse distilled data?",
"3. Does the curriculum strategy better exploit the augmented data and help model training?",
"We use results on TRANSFORMER here for discussion in the following part.",
"Refer to Appendix C.3 for extensive results on GPT2 model.",
"Analysis of data distillation To examine the effectiveness of data distillation, we need to neutralize the influence of data diversification as it is only applicable to distilled data.",
"Following variants of our D 3 are considered: 1) w/o diversification : only using distilled data D dis in the easy curriculum; 2) w/o distillation : based on 1), we recover samples in D dis into their original format, which means all their persona sentences and history utterances are included; 3) only distillation : only D dis is used in training without using the original data in D .",
"Results of these variants are shown in the middle of Table 3. Obviously, removing data diversification decreases the performance in all aspects as the model has less training data.",
"If we further remove data distillation and use the same amount of data in their original formats, the model performs even worse, especially on the C-score.",
"This validates the effectiveness of data distillation in our method.",
"However, it is not proper to completely rely on distilled data.",
"From the results of only using distilled data in training, our method improves the C-score, yet significantly degenerates in other aspects.",
"The reason is that the relationship between persona/dialogue history and the response has changed from the original data to their distilled ones.",
"Thus a model trained with distilled data should serve as a warm start to learn the original data, but not to replace the original data.",
"We also test the robustness of our data distillation method by using an NLI model trained in a few-shot setting (200 samples).",
"Results are included in Table 3 as D 3 *.",
"It is slightly worse than our method with sufficient NLI training data, but still superior to most compared methods.",
"Note that the response diversity metrics nearly remain unchanged.",
"This means that our data diversification methods are still effective when starting from noisy distilled samples.",
"It also shows that our method can be useful when only limited in-domain NLI labeled data are available for data distillation.",
"1 shows that the diversified data contain many new persona sentences as well as tokens.",
"Besides, we compute 7990 PPL BLEU NIST-4 BS f Ent-1 Ent-2 Ent-3 Dis-1 Dis-2 Dis-3 C TRANSD 3 37.30 3.358 1.206 0.1574 4.223 6.165 7.298 1.826 7.923 14.42 0.485 Original 38.28 3.140 1.148 0.1486 4.046 5.484 6.262 1.609 6.298 11.71 0.235 Only augment 126.3 1.603 0.956 0.0852 4.315 6.309 7.426 1.747 7.530 12.66 0.942 Shuffle 37.66 3.203 1.175 0.1521 4.128 6.096 6.979 1.659 6.889 13.79 0.404 Reverse 48.17 2.137 1.019 0.1508 3.947 5.291 6.039 1.368 5.503 9.211 0.912 Table 4: Performance comparison between different curriculum variants, using TRANSFORMER as the base model.",
"the Novelty metrics (Wang and Wan, 2018; Zhang et al., 2020) of diversified samples in D div .",
"It takes the original distilled samples in D dis as references, and uses the Jaccard similarity function to measure the proportion of n-grams ( n = 1 , 2 , 3 , 4 ) in D div but not in D dis .",
"A higher value means more novel content.",
"Note that we particularly prefer more novel personas, while not encouraging more novel dialogue histories.",
"Thus, the Novelty scores on the overall samples which include dialogue histories, personas and responses, are lower than those on the personas.",
"To further examine how each part of data diversification works, we conduct the following ablation studies: 1) w/o persona editing : no persona sentence will be edited; 2) w/o history augmentation : only original dialogue history is used; 3) w/o response filtering : all constructed samples are directly used without using Eq.",
"2. Results in the bottom of Table 3 show that all these designs contribute to the performance of the whole method.",
"Among them, response filtering is the most important as it ensures the quality of augmented samples.",
"We also investigate the proportions of diversified samples coming from various source combinations.",
"Results are shown in Figure 4, which shows that more than 80% diversified samples have their responses obtained via model predicting, as token editing sets a strict condition that overlapped tokens must exist.",
"Phrase-level editing also contributes to more high-quality personas with satisfactory fluency and semantic novelty.",
"Analysis of data curriculum We first compare other data curriculum variants to show the usefulness of training with the designed data curriculum.",
"The following variants are included: 1) Original : only the original dataset D (the hard curriculum in D 3 ) is used, which is equal to the base model; 2) Only augment : only the augmented dataset D a (the easy curriculum in D 3 ) is used; 3) Shuffle : shuffling of the original dataset D and the augmented dataset D a together to train the model; 4) Reverse : using the curricula in a reverse order, which means the hard curriculum first and then the easy one.",
"Relevant results are shown in Table 4.",
"There is no doubt that our curriculum is the best when comprehensively considering all aspects.",
"Although Only augment and Reverse show high C-scores, their responses are much worse in n-gram accuracy as they involve more persona information while focusing less on the dialogue coherence during generating.",
"Shuffle shows better performance than Original as it includes more augmented data than the original dataset, which may benefit the training.",
"However, such a mixing strategy is not so efficient as our data curriculum as it neglects the learning difficulty of different data sources.",
"Next, we further quantify the effect of curriculum training on models using the attention from the response on the persona sentences.",
"We define two metrics, token-level/sentence-level consistent attention weight ( a t and a s ), to measure how the attention contributes to reflecting the proper personas.",
"Recall that we concatenate the persona sentences and history utterances as a single model input.",
"We record the token positions of the entailed persona sentences in the input sequence, which are determined by our NLI model, denoted as S .",
"Then for each index s S , if its corresponding token in the input also occurs in the response, we put this 7991 0 1 2 3 4 5 a t / 10 2 0 1 2 3 4 5 0 5 10 15 20 25 30 layer-2 0 5 10 15 20 25 a s / 10 2 layer-4 0 5 10 15 20 25 layer-6 0 5 10 15 20 25 Orig.",
"index pair into a set T = { ( s, l ) } , where s and l are the token positions in the input sequence and response sequence respectively.",
"Then we have two measurements for each sample: a t = 1 |T | (cid:88) ( i,j ) T a ij , a s = 1 YY (cid:88) i =1 (cid:88) j S a ij , (3) where a ij [0 , 1] is the normalized scalar attention weight at the i -th decoding step on the j -th input token, i.e. (cid:80) j a ij = 1 , and Y is the length of the generated response.",
"A higher a t / a s indicates that the model poses more attention on proper persona tokens, where the former one is fine-grained for reflecting how the attention works properly at each step, while the latter one is coarse-grained for the whole generated response.",
"Part of the results with selected TRANSFORMER layers for these two metrics on all samples from the PersonaChat dev set are shown in Figure 5 (Refer to Appendix C.4 for the complete results).",
"Obviously, our method shows the highest a t and a s on all given layers compared to other two curriculum variants.",
"Such a superiority is more significant in higher layers, which is more decisive for generating responses (Fan et al., 2019).",
"While the attentions weights tend to distribute uniformly in lower layers, which are close to the uniform values.",
"Case study Some response samples generated when using TRANSFORMER as the base model are shown in Figure 6. Here H indicates dialogue history, a persona sentence shaded in a darker color denotes that it has a higher attention weight posed by the model.",
"Our method D 3 can offer a model with the capability to pose more attention on the i love running , it is a stress reliever.",
"proper persona texts during generating responses.",
"More cases can be found in Appendix C.6.",
"Our work targets the challenging personal-based dialogue generation task.",
"Unlike previous work that designs a new dialogue model to improve the generation performance, we analyze the data issues affecting current models.",
"On one hand, the data scale and diversity are expensive to increase by data collection.",
"On the other hand, current data are difficult to learn with.",
"Based on such an understanding, we propose a model-agnostic data manipulation method for this task.",
"It first distills the original data and then augments both the amount and diversity of the distilled data.",
"A curriculum training is then applied to utilize both augmented and original data.",
"Experimental results showed that our method effectively improves the performance of two strong dialogue models, i.e. Transformer encoder-decoder and GPT2.",
"We would like to thank Piji Li and Lemao Liu for their helpful discussion and feedback.",
"We also thank anonymous reviewers for their constructive comments."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"other",
"other"
] |
[
"Linguistic probing of pretrained Transformer-based language models (LMs) revealed that they encode a range of syntactic and semantic properties of a language.",
"However, they are still prone to fall back on superficial cues and simple heuristics to solve downstream tasks, rather than leverage deeper linguistic information.",
"In this paper, we target a specific facet of linguistic knowledge, the interplay between verb meaning and argument structure.",
"We investigate whether injecting explicit information on verbs' semantic-syntactic behaviour improves the performance of pretrained LMs in event extraction tasks, where accurate verb processing is paramount.",
"Concretely, we impart the verb knowledge from curated lexical resources into dedicated adapter modules ( verb adapters ), allowing it to complement, in downstream tasks, the language knowledge obtained during LM-pretraining.",
"We first demonstrate that injecting verb knowledge leads to performance gains in English event extraction.",
"We then explore the utility of verb adapters for event extraction in other languages: we investigate 1) zero-shot language transfer with multilingual Transformers and 2) transfer via (noisy automatic) translation of English verb-based lexical knowledge.",
"Our results show that the benefits of verb knowledge injection indeed extend to other languages, even when relying on noisily translated lexical knowledge.",
"Large Transformer-based encoders, pretrained with self-supervised language modeling (LM) objectives, form the backbone of state-of-the-art models for most NLP tasks (Devlin et al., 2019; Yang et al., 2019b; Liu et al., 2019).",
"Recent probes showed that they implicitly extract a non-negligible amount of linguistic knowledge from text corpora in an unsupervised fashion (Hewitt and Manning, 2019; Vulic et al., 2020; Rogers et al., 2020, inter alia ).",
"In downstream tasks, however, they often rely on spurious correlations and superficial cues (Niven and Kao, 2019) rather than a deep understanding of language meaning (Bender and Koller, 2020), which is detrimental to both generalisation and interpretability (McCoy et al., 2019).",
"In this work, we focus on a specific facet of linguistic knowledge: reasoning about events.",
"1 Identifying tokens in the text that mention events and classifying the temporal and causal relations among them is crucial to understand the structure of a story or dialogue (Carlson et al., 2002; Miltsakaki et al., 2004) and to ground a text in real-world facts.",
"Verbs (with their arguments) are prominently used for expressing events (with their participants).",
"Thus, fine-grained knowledge about verbs, e.g., the syntactic patterns in which they partake and the semantic frames, may help pretrained encoders to achieve a deeper understanding of text and improve their performance in event-oriented downstream tasks.",
"There already exist some expert-curated computational resources that organise verbs into classes based on their syntactic-semantic properties (Jackendoff, 1992; Levin, 1993).",
"In particular, here we consider English VerbNet and FrameNet as rich sources of verb knowledge.",
"Expanding a line of research on injecting external linguistic knowledge into pretrained LMs (Peters et al., 2019; Levine et al., 2020; Lauscher et al., 2020b), we integrate verb knowledge into the LMs for the first time.",
"We devise a new method to distil verb knowledge into dedicated adapter modules (Pfeiffer et al., 2020b), which reduce the risk of (catastrophic) forgetting of and allow seamless modular integration with distributional knowledge.",
"1 For instance, in the sentence Stately, plump Buck Mulligan came from the stairhead, bearing a bowl of lather (...) , an event of COMING occurs in the past, with BUCKMULLIGAN as a participant, simultaneously to an event of BEARING with an additional participant, a BOWL .",
"We hypothesise that complementing pretrained LMs with verb knowledge should benefit model performance in downstream tasks that involve event extraction and processing.",
"We first put this hypothesis to the test in English monolingual event identification and classification tasks from the TempEval (UzZaman et al., 2013) and ACE (Doddington et al., 2004) datasets.",
"We report modest but consistent improvements in the former, and significant performance boosts in the latter, thus verifying that verb knowledge is indeed paramount for a deeper understanding of events and their structure.",
"Moreover, expert-curated resources are not available for most of the languages spoken worldwide.",
"Therefore, we also investigate the effectiveness of transferring verb knowledge across languages; in particular, from English to Spanish, Arabic and Chinese.",
"The results demonstrate the success of the transfer techniques, and also shed some light on an important linguistic question: to what extent can verb classes (and predicateargument structures) be considered cross-lingually universal, rather than varying across languages (Hartmann et al., 2013)?",
"Overall, our main contributions consist in 1) mitigating the limitations of pretrained encoders regarding event understanding by supplying external verb knowledge; 2) proposing a new method to do so in a modular way through verb adapters; 3) exploring techniques to transfer verb knowledge to resource-poor languages.",
"The performance gains across four diverse languages and several event processing tasks and datasets validate that complementing distributional knowledge with curated verb knowledge is both beneficial and cost-effective.",
"Figure 1 illustrates our framework for injecting verb knowledge from VerbNet or FrameNet and leveraging it in downstream event processing tasks.",
"First, we inject the external verb knowledge, formulated as the so-called lexical constraints (Mrkic et al., 2017; Ponti et al., 2019) (in our case verb pairs, see 2.1), into a (small) additional set of adapter parameters (2.2) (Houlsby et al., 2019).",
"Second (2.3), we combine the language knowledge encoded in the original LM parameters and the verb knowledge from verb adapters for event processing tasks.",
"To this end, we either",
"a) fine-tune both sets of parameters (1. pretrained LM; 2. verb adapters) or",
"b) freeze both sets of parameters and insert an additional set of task-specific adapter pa-Multi-head attention Add & Normalize Feed-forward Add & Normalize Add & Normalize Verb (VN/FN)adapter Verb-pair classifier LVERB or [ convert , transform , TRUE] [ separate , split , FALSE] LTASK Multi-head attention Add & Normalize Feed-forward Add & Normalize Add & Normalize Verb (VN/FN)adapter ...it also [affects] STATE small businesses, which [pay] OCCURRENCE premiums...",
"rameters .",
"In both cases, the task-specific training is informed both by the general language knowledge captured in the pretrained LM, and the specialised verb knowledge, captured in the verb adapters.",
"Given the inter-connectedness between verbs' meaning and syntactic behaviour (Levin, 1993; Kipper Schuler, 2005), we assume that refining latent representation spaces with verb knowledge would have a positive effect on event extraction tasks that strongly revolve around verbs.",
"Lexical classes, defined in terms of verbs' shared semantic-syntactic properties, provide a mapping between the verbs' senses and the morpho-syntactic realisation of their arguments (Jackendoff, 1992; Levin, 1993).",
"The potential of verb classifications lies in their predictive power: for any given verb, a set of rich semantic-syntactic properties can be inferred based on its class membership.",
"In this work, we explicitly harness this rich linguistic knowledge to aid pretrained LMs in capturing regularities in the properties of verbs and their arguments.",
"We select two major English lexical databases VerbNet (Kipper Schuler, 2005) and FrameNet (Baker et al., 1998) as sources of verb knowledge at the semantic-syntactic interface, each representing a different lexical framework.",
"VerbNet (VN) (Kipper Schuler, 2005; Kipper et al., 2006), the largest available verb-focused lexicon, organises verbs into classes based on the overlap in their semantic properties and syntactic behaviour; it builds on the premise that a verb's predicate-argument structure informs its meaning (Levin, 1993).",
"Each entry provides a set of thematic roles and selectional preferences for the verbs' arguments; it also lists the syntactic contexts characteristic for the class members.",
"Its hierarchical classification starts from broader classes and spans several granularity levels where each subclass further re-fines the semantic-syntactic properties inherited from its parent class.",
"2 The VN class membership is English-specific, but the underlying verb class construction principles are thought to apply cross-lingually (Jackendoff, 1992; Levin, 1993); its translatability has been indicated in previous work (Vulic et al., 2017; Majewska et al., 2018).",
"The current English VN contains 329 main classes.",
"FrameNet (FN) (Baker et al., 1998) is more semantically oriented than VN.",
"Grounded in the theory of frame semantics (Fillmore, 1976, 1977, 1982), it organises concepts according to semantic frames, i.e., schematic representations of situations and events, which they evoke, each characterised by a set of typical roles assumed by its participants.",
"The word senses associated with each frame (FN's lexical units) are similar in terms of their semantic content, as well as their typical argument structures.",
"Currently, English FN covers 1,224 frames and its annotations illustrate the typical syntactic realisa-tions of the frame elements.",
"Frames themselves are, however, semantically defined: this means that they may be shared even across languages with different syntactic properties.",
"3 2.2 Training Verb Adapters Training Task and Data Generation.",
"In order to inject external verb knowledge into pretrained LMs, we devise an intermediary training task: we train 2 For example, within a top-level class free-80', which includes verbs like liberate , discharge , and exonerate which participate in a NP V NP PP.",
"THEME frame (e.g., It freed him of guilt ), there exists a subset of verbs participating in a syntactic frame NP V NP S_ING (free-80-1'), within which there exists an even more constrained subset of verbs appearing with prepositional phrases headed specifically by the preposition from (e.g., The scientist purified the water from bacteria ).",
"3 For instance, descriptions of transactions will include the same frame elements Buyer, Seller, Goods, Money in most languages.",
"Indeed, English FN has inspired similar projects in other languages: e.g., Spanish (Subirats and Sato, 2004), Japanese (Ohara, 2012), and Danish (Bick, 2011).",
"a dedicated VN-/FN-knowledge adapter (hereafter VN-Adapter and FN-Adapter ).",
"We frame the task as binary word-pair classification: we predict if two verbs belong to the same VN class or FN frame.",
"We extract training instances from FN and VN independently.",
"This allows for a separate analysis of the impact of verb knowledge from each resource.",
"We generate positive training instances by extracting all unique verb pairings from the set of members of each main VN class/FN frame (e.g., walkmarch ), resulting in 181,882 instances created from VN and 57,335 from FN.",
"We then generate k = 3 negative examples per positive example by combining controlled and random sampling.",
"In controlled sampling, we follow prior work on semantic specialisation (Wieting et al., 2015; Glava and Vulic, 2018b; Lauscher et al., 2020b).",
"For each positive example p = ( w 1 , w 2 ) in the training batch B , we create two negatives p 1 = ( w 1 , w 2 ) and p 2 = ( w 1 , w 2 ) ; w 1 is the verb from batch B other than w 1 that is closest to w 2 in terms of their cosine similarity in an auxiliary static word embedding space X aux R d ; conversely, w 2 is the verb from B other than w 2 closest to w 1 .",
"We additionally create one negative instance p 3 = ( w 1 , w 2 ) by randomly sampling w 1 and w 2 from batch B , not considering w 1 and w 2 .",
"We ensure that the negatives are not present in the global set of all positive verb pairs.",
"Similar to Lauscher et al. (2020b), we tokenise each (positive and negative) training instance into WordPiece tokens, prepended with sequence start token [CLS] , and with [SEP] tokens in between the verbs and at the end of the input sequence.",
"We use the representation of the [CLS] token x CLS R h (with h as the hidden state size of the Transformer) from the last Transformer layer as the latent representation of the verb pair, and feed it to a simple binary classifier: 4 y = softmax ( x CLSW cl + b cl ) , with W cl R h 2 and b cl R 2 as classifier's trainable parameters.",
"We train by minimising the standard cross-entropy loss ( LVERB in Figure 1).",
"Adapter Architecture.",
"Instead of directly fine-tuning all parameters of the pretrained Transformer, we opt for storing verb knowledge in a separate set of adapter parameters, keeping the verb knowledge 4 We also experimented with sentence-level tasks: we fed",
"(a) pairs of sentence examples from VN/FN in a binary classification setup (e.g., Jackie leads Rose to the store. Jackie escorts Rose. ); and",
"(b) individual sentences in a multi-class classification setup (predicting the correct VN class/FN frame).",
"These variants, however, led to weaker performance.",
"separate from the general language knowledge acquired in pretraining.",
"This (1) allows downstream training to flexibly combine the two sources of knowledge, and (2) bypasses the issues with catastrophic forgetting and interference (Hashimoto et al., 2017; de Masson d'Autume et al., 2019).",
"We adopt the standard efficient adapter architecture of Pfeiffer et al. (2020a,c).",
"In each Transformer layer l , we insert a single adapter ( Adapter l ) after the feed-forward sub-layer.",
"The adapter itself is a two-layer feed-forward neural network with a residual connection, consisting of a down-projection D R h m , a GeLU activation (Hendrycks and Gimpel, 2016), and an up-projection U R m h , where h is the hidden size of the Transformer model and m is the dimensionality of the adapter: Adapter l ( h l , r l ) = U l ( GeLU ( D l ( h l ))) + r l ; where r l is the residual connection, output of the Transformer's feed-forward layer, and h l is the Transformer hidden state, output of the subsequent layer normalisation.",
"The next step is downstream fine-tuning for event processing tasks.",
"We experiment with (1) token-level event trigger identification and classification and (2) span extraction for event triggers and arguments (a sequence labeling task); see 3.",
"For the former, we mount a classification head a simple single-layer feed-forward softmax regression classifier on top of the Transformer augmented with VN-/FN-Adapters.",
"For the latter, we follow the architecture from prior work (M'hamdi et al., 2019; Wang et al., 2019) and add a CRF layer (Laf-ferty et al., 2001) on top of the sequence of Transformer's outputs (for subword tokens).",
"For all tasks, we propose and evaluate two different fine-tuning regimes: (1) full fine-tuning , where we update both the original Transformer's parameters and VN-/FN-Adapters (see 2a in Figure 1); and (2) task-adapter ( TA ) fine-tuning , where we keep both Transformer's original parameters and VN-/FN-Adapters frozen, while stacking a new trainable task adapter on top of the VN-/FN-Adapter in each Transformer layer (see 2b in Figure 1).",
"Creation of curated resources like VN or FN takes years of expert linguistic labour.",
"Consequently, such resources do not exist for a vast majority of languages.",
"Given the inherent cross-lingual nature of verb classes and semantic frames (see VerbNet FrameNet English (EN) 181,882 57,335 Spanish (ES) 96,300 36,623 Chinese (ZH) 60,365 21,815 Arabic (AR) 70,278 24,551 Table 1: Number of positive verb pairs in English, and in each target language obtained via VTRANS (2.4).",
"2.1), we investigate the potential for verb knowledge transfer from English to target languages, without any manual target-language adjustments.",
"Massively multilingual LMs, such as multilingual BERT (mBERT) (Devlin et al., 2019) or XLM-R (Conneau et al., 2020) have become the de facto standard mechanisms for zero-shot ( ZS ) cross-lingual transfer.",
"In our first transfer approach: we fine-tune mBERT first on the English verb knowledge, then on English task data, and then simply make task predictions for the target language input.",
"The second approach, dubbed VTRANS , is inspired by the work on cross-lingual transfer of semantic specialisation for static word embeddings (Glavas et al., 2019; Ponti et al., 2019; Wang et al., 2020b).",
"In brief (with full details in Appendix C), starting from a set of positive pairs from English VN/FN, VTRANS involves three steps: (1) automatic translation of verbs in each pair into the target language, (2) filtering of the noisy target language pairs by means of a transferred relation prediction model trained on the English examples, and (3) training the verb adapters injected into the pretrained model, now with the translated and filtered target-language verb pairs.",
"For the monolingual target-language FN-/VN-Adapter training, we follow the protocol used for English, see 2.2.",
"Event Processing Tasks and Data.",
"In event processing tasks, systems are tasked with detecting that something happened , identifying what type of occurrence took place, as well as what entities were involved.",
"Verbs typically act as the organisational core of each such event schema, carrying a lot of semantic and structural weight.",
"Therefore, a model's grasp of verbs' properties should have a bearing on final task performance.",
"Based on this assump-tion, we select event extraction and classification as suitable tasks to profile the methods from 2.",
"These tasks and the corresponding data are based on the two prominent frameworks for annotating event expressions: TimeML (Pustejovsky et al., 2003, 2005) and the Automatic Content Extraction (ACE) (Doddington et al., 2004).",
"First, we rely on the TimeML-annotated corpus from TempEval tasks (Verhagen et al., 2010; UzZaman et al., 2013), which targets automatic identification of temporal expressions and relations, and events.",
"Second, we use the ACE dataset: it provides annotations for entities, the relations between them, and for events in which they participate in newswire text.",
"5 Task 1: Trigger Identification and Classification (TempEval).",
"We frame the first event processing task as a token-level classification problem, predicting whether a token triggers an event and assigning it to one of the following event types: OCCURRENCE (e.g., died, attacks ), STATE (e.g., share, assigned ), REPORTING ( e.g., announced, said ), IACTION (e.g., agreed, trying ), I-STATE (e.g., understands, wants, consider ), ASPECTUAL (e.g., ending, began ), and PERCEPTION (e.g., watched, spotted ).",
"6 We use the TempEval-3 data for English and Spanish (UzZaman et al., 2013), and the TempEval-2 data for Chinese (Verhagen et al., 2010) (see Table 6 in the appendix for exact dataset sizes).",
"Task 2: Trigger and Argument Identification and Classification (ACE).",
"In this sequence labeling task, we detect and label event triggers and their arguments, with four individually scored subtasks:",
"(i) trigger identification, where we identify the key word conveying the nature of the event, and",
"(ii) trigger classification, where we classify the trigger word into one of the predefined categories;",
"(iii) argument identification, where we predict whether an entity mention is an argument of the event identified in",
"(i), and (iv) argument classification, where the correct role needs to be assigned to the identified event arguments.",
"We use the ACE data available for English, Chinese, and Arabic.",
"7 Event extraction as specified in these two frameworks is a challenging, highly context-sensitive problem, where different words (most often verbs) may trigger the same type of event, and conversely, the same word (verb) can evoke differ-5 We provide more details about the frameworks and their corresponding annotation schemes in Appendix A. 6 E.g., in the sentence: The rules can also affect small businesses, which sometimes pay premiums tied to employees' health status and claims history. , affect and pay are event triggers of type STATE and OCCURRENCE , respectively.",
"7 The ACE annotations distinguish 34 trigger types (e.g., Business:Merge-Org , Justice:Trial-Hearing , Conflict:Attack ) and 35 argument roles.",
"Following previous work (Hsi et al., 2016), we conflate eight time-related argument roles e.g., Time-At-End', Time-Before', Time-At-Beginning' into a single Time' role in order to alleviate training data sparsity.",
"ent types of event schemata depending on the context.",
"Adopting these tasks for evaluation thus tests whether leveraging fine-grained curated knowledge of verbs' semantic-syntactic behaviour can improve pretrained LMs' reasoning about event-triggering predicates and their arguments.",
"Model Configurations.",
"For each task, we compare the performance of the underlying vanilla BERT-based model (see 2.3) against its variant with an added VN-Adapter or FN-Adapter 8 (see 2.2) in two regimes:",
"(a) full fine-tuning, and",
"(b) task adapter (TA) fine-tuning (see Figure 1).",
"To ensure that any performance gains are not merely due to increased parameter capacity offered by the adapters, we also evaluate a variant where we replace the verb adapter with a randomly initialised adapter of the same size ( +Random ).",
"Additionally, we examine the impact of increasing the capacity of the trainable task adapter by replacing it with a Double Task Adapter' (2TA), i.e., a task adapter with double the number of trainable parameters compared to the base architecture from 2.2.",
"Finally, we compare the VN/FN-Adapter approach with a computationally more expensive alternative method of injecting external verb knowledge, sequential fine-tuning , where the full BERT is first fine-tuned on the FN/VN data (as in 2.2) and then on the task (see Appendix D for details).",
"Training Details: Verb Adapters.",
"We experimented with k { 2 , 3 , 4 } negative examples and the following combinations of controlled ( c ) and randomly ( r ) sampled negatives (see 2.2): k = 2 [ cc ] , k = 3 [ ccr ] , k = 4 [ ccrr ] .",
"In our preliminary experiments we found k = 3 [ ccr ] to yield best-performing adapters.",
"The evaluation and analysis presented in 4 are thus based on this setup.",
"Our VNand FN-Adapters are injected into the BERT Base cased model: the details on adapter training and hyperparameter search are in Appendix B. Downstream Task Fine-Tuning.",
"In downstream fine-tuning on TempEval, we train for 10 epochs in batches of size 32, with a learning rate 1 e 4 and maximum input sequence length of T = 128 WordPiece tokens.",
"For ACE, in light of a greater data sparsity, 9 we search for optimal hyperparameters 8 We also experimented with inserting both verb adapters simultaneously; however, this resulted in weaker downstream performance than adding each separately, a likely product of the partly redundant, partly conflicting information encoded in these adapters (see 2.1 for comparison of VN and FN).",
"9 Most event types ( 70% ) have fewer than 100 labeled instances, and three have fewer than 10 (Liu et al., 2018).",
"for each language and evaluation setup from the following grid: learning rate l { 1 e 5 , 1 e 6 } , epochs n { 3 , 5 , 10 , 25 , 50 } , batch b { 8 , 16 } (maximum input sequence length T = 128 ).",
"Transfer Experiments in zero-shot ( ZS ) setups are based on mBERT, to which we add the VNor FN-Adapter trained on the English VN/FN data.",
"We train the model on English training data available for each task, and evaluate it on the target-language test set.",
"For the VTRANS approach (2.4), we use language-specific BERT models available for our target languages, and leverage target-language adapters trained on translated and automatically refined verb pairs.",
"The model, with or without the target-language VN-/FN-Adapter, is trained and evaluated on the training and test data available in the language.",
"We carry out the procedure for three target languages (see Table 1).",
"We use the same negative sampling parameter configuration proven strongest in our English experiments ( k = 3 [ ccr ] ).",
"English Event Processing.",
"Table 2 shows the performance on English Task 1 (TempEval) and Task 2 (ACE).",
"First, we note that the computationally more efficient setup with a dedicated task adapter (TA) yields higher absolute scores compared to full fine-tuning (FFT) on TempEval.",
"When the underlying BERT is frozen along with the added FN-/VN-Adapter, the TA is enforced to encode additional task-specific knowledge into its parameters, beyond what is provided in the verb adapter.",
"This yields two strongest results overall from the +FN/VN setups.",
"On ACE, the primacy of TA-based training is overturned in favour of FFT.",
"Encouragingly, boosts provided by verb adapters are visible regardless of the chosen task fine-tuning regime.",
"We notice consistent statistically significant 10 improvements in the +VN setup, although the performance of the TA-based setups clearly suffers in argument ( ARG ) tasks due to decreased trainable parameter capacity.",
"Lack of visible improvements from the Random Adapter supports the interpretation that performance gains indeed stem from the added useful non-random' signal in the verb adapters.",
"In addition, we verify how our principal setup with added adapter modules compares to an alternative established approach, sequential fine-tuning (+FN/VN seq ).",
"In TempEval, we note that 10 We test significance with the Student's t -test with a significance value set at = 0 .",
"fine-tuning all model parameters on VN/FN data allows retrieving more additional verb knowledge beneficial for task performance than adding smaller pre-trained adapters on top of the underlying model.",
"However, FN/VN seq scores are still inferior to the results achieved in the TA-based +FN/VN setup.",
"In ACE, the FN/VN seq results in trigger tasks are weaker than those achieved through the addition of self-contained knowledge adapters, however, they offer additional boosts in argument tasks.",
"Multilingual Event Processing.",
"Table 3 compares the performance of zero-shot ( ZS ) transfer and monolingual target training (via VTRANS ) on TempEval in Spanish and Chinese.",
"For both, the addition of the FN-Adapter in the TA-based setup boosts ZS transfer.",
"The benefits extend to the FFT setup in Chinese, achieving the top score overall.",
"In monolingual evaluation, we observe consistent gains from the added transferred knowledge via VTRANS in Spanish.",
"In Chinese performance boosts come from the transferred VN-style class membership information (+VN).",
"This suggests that even the noisily translated verb pairs carry enough useful signal through to the target language.",
"To tease apart the contribution of the language-specific encoders and transferred verb knowledge, we carry out an additional monolingual evaluation substituting the target-language BERT with mBERT, trained on (noisy) target language verb signal (ES-M BERT/ZHMBERT).",
"Although mBERT scores are lower than monolingual BERTs in absolute terms, the use of the transferred verb knowledge helps reduce the gap between the models, with gains achieved over the baselines in Spanish.",
"11 In ACE, the top scores are achieved in the monolingual FFT setting; as with English, keeping the full capacity of BERT parameters unfrozen noticeably helps performance.",
"12 In Arabic, FN knowledge provides performance boosts across the four tasks and with both the zero-shot ( ZS ) and monolingual ( VTRANS ) transfer approaches, whereas the addition of the VN adapter boosts scores in ARG tasks.",
"The usefulness of FN knowledge extends to zero-shot transfer in Chinese, and both adapters benefit the ARG tasks in the monolingual ( VTRANS ) 11 Due to analogous patterns in relative scores of mBERT and monolingual BERTs in monolingual ACE evaluation, we show the VTRANS mBERT results in ACE in Appendix E. 12 This is especially the case in ARG tasks, where the TA-based setup fails to achieve meaningful improvements over zero, even with extended training up to 100 epochs.",
"Due to the computational burden of such long training, the results in this setup are limited to trigger tasks (after 50 epochs).",
"transfer setup.",
"Notably, in zero-shot transfer, we observe that the highest scores are achieved in the task adapter (TA) fine-tuning, where the inclusion of the verb adapters offers additional gains.",
"Overall, however, the argument tasks elude the restricted capacity of the TA-based setup, with very low scores.",
"Additionally, in Appendix E we show the results with sequential fine-tuning.",
"Similarly to our EN results (Table 2), we observe advantages of using the full capacity of BERT parameters to encode verb knowledge in most setups in TempEval, while the comparison to the adapter-based approach is less clear-cut on ACE.",
"In sum, sequential fine-tuning is a strong verb knowledge injection variant; however, it is computationally more expensive and less portable.",
"The modular and efficient adapter-based approach therefore presents an attractive alternative, while offering competitive task performance.",
"Crucially, the strong results from the sequential setup further corroborate our core finding that external lexical verb information is indeed beneficial for event processing tasks across the board.",
"Zero-shot Transfer vs Monolingual Training.",
"The results reveal a considerable gap between the performance of ZS transfer versus monolingual fine-tuning.",
"The event extraction tasks pose a significant challenge to zero-shot transfer via mBERT; however, mBERT exhibits much more robust performance in the monolingual setup, with available target-language training data for event tasks.",
"In the latter, mBERT trails language-specific BERTs by less than 5 points (Table 3).",
"This is encouraging, given that monolingual pretrained LMs currently exist only for a small set of high-resource languages.",
"For all other languages should there be language-specific event task data one can leverage mBERT.",
"Moreover, mBERT's performance is further improved by the inclusion of transferred verb knowledge via VTRANS : in Spanish, where its typological closeness to English renders direct transfer of semantic-syntactic information viable, the addition of VTRANS -based verb adapters yields significant gains both in the FFT and the TA setup.",
"13 These results confirm the effectiveness of lexical knowledge transfer suggested previously in the work on semantic specialisation of static word vectors (Ponti et al., 2019; Wang et al., 2020b).",
"Double Task Adapter.",
"Promisingly, we see in Table 5 that the relative performance gains from FN/VN adapters are preserved regardless of the added trainable task adapter capacity.",
"As expected, the increased task adapter size helps argument tasks in ACE, where verb adapters produce additional gains.",
"Overall, this suggests that verb adapters indeed encode additional, non-redundant information beyond what is offered by the pretrained model alone, and boost the dedicated task adapter.",
"Cleanliness of Verb Knowledge.",
"Despite the promising results with the VTRANS approach, there are still fundamental limitations: (1) noisy translation based on cross-lingual semantic similarity may already break the VerbNet class membership alignment; and (2) the language-specificity of verb classes due to which they cannot be directly ported to another language without adjustments.",
"14 The fine-grained class divisions and exact class membership in VN may be too English-specific to allow direct automatic translation.",
"On the contrary, semantically-driven FrameNet lends itself better to cross-lingual transfer: we report higher average gains in cross-lingual setups with the FN-Adapter.",
"To quickly verify if the noisy direct transfer curbs the usefulness of injected knowledge, we evaluate the injection of clean verb knowledge from a small lexical resource available in Spanish: we train an ES FN-Adapter on top of ES-BERT on 13 We noted analogous positive effects on performance of the more powerful XLM-R Large model (Appendix E).",
"14 This is in contrast to the proven cross-lingual portability of synonymy and antonymy relations shown in previous work on semantic specialisation transfer (Mrkic et al., 2017; Ponti et al., 2019), which rely on semantics alone.",
"2,866 verb pairs derived from its FrameNet (Subi-rats and Sato, 2004).",
"The results (Appendix E) reveal that, despite having 12 times fewer positive examples for training the verb adapter compared to VTRANS , the native' ES FN-Adapter offers gains between +0.2 and +0.4 points over VTRANS , compensating the limited coverage with gold standard accuracy.",
"This suggests that work on optimising and accelerating resource creation merits future research efforts on a par with modeling work.",
"Event Extraction.",
"The cost and complexity of event annotation requires robust transfer solutions capable of making fine-grained predictions in the face of data scarcity.",
"Traditional event extraction methods relied on hand-crafted, language-specific features (Ahn, 2006; Gupta and Ji, 2009; Llorens et al., 2010; Hong et al., 2011; Li et al., 2013; Glava and najder, 2015) (e.g., POS tags, entity knowledge), which limited their generalisation ability and effectively prevented language transfer.",
"More recent approaches commonly resorted to word embedding input and neural text encoders such as recurrent nets (Nguyen et al., 2016; Duan et al., 2017; Sha et al., 2018) and convolutional nets (Chen et al., 2015; Nguyen and Grishman, 2015), as well as graph neural networks (Nguyen and Gr-ishman, 2018; Yan et al., 2019) and adversarial networks (Hong et al., 2018; Zhang et al., 2019).",
"Most recent empirical advancements in event trigger and argument extraction tasks stem from fine-tuning of LM-pretrained Transformer networks (Yang et al., 2019a; Wang et al., 2019; M'hamdi et al., 2019; Wadden et al., 2019; Liu et al., 2020).",
"Limited training data nonetheless remains an obstacle, especially when facing previously unseen event types.",
"The alleviation of such data scarcity issues was attempted through data augmentation automatic data annotation (Chen et al., 2017; Zheng, 2018; Araki and Mitamura, 2018) and bootstrapping for training data generation (Ferguson et al., 2018; Wang et al., 2019).",
"The recent release of the large English event detection dataset MAVEN (Wang et al., 2020c), with annotations of event triggers only, partially remedies for English data scarcity.",
"MAVEN also demonstrates that even the state-of-the-art Transformer models fail to yield satisfying event detection performance in the general domain.",
"The fact that it is unlikely to expect datasets of similar size for other event extraction tasks and especially for other languages only em-phasises the need for external event-related knowledge and transfer learning approaches, such as the ones introduced in this work.",
"Semantic Specialisation.",
"Representation spaces induced through self-supervised objectives from large corpora, be it the word embedding spaces (Mikolov et al., 2013; Bojanowski et al., 2017) or those spanned by LM-pretrained Transformers (Devlin et al., 2019; Liu et al., 2019), encode only distributional knowledge.",
"A large body of work focused on semantic specialisation of such distributional spaces by injecting lexico-semantic knowledge from external resources (e.g., WordNet (Fellbaum, 1998), BabelNet (Navigli and Ponzetto, 2010) or ConceptNet (Liu and Singh, 2004)) in the form of lexical constraints (Faruqui et al., 2015; Mrkic et al., 2017; Glava and Vulic, 2018b; Ka-math et al., 2019; Vulic et al., 2021).",
"Joint specialisation models (Yu and Dredze, 2014; Lauscher et al., 2020b; Levine et al., 2020, inter alia ) train the representation space from scratch on the large corpus, but augment the self-supervised training objective with an additional objective based on external lexical constraints.",
"Lauscher et al. (2020b) add to the Masked LM (MLM) and next sentence prediction (NSP) pretraining objectives of BERT (Devlin et al., 2019) an objective that predicts pairs of (near-)synonyms, aiming to improve word-level semantic similarity in BERT's representation space.",
"In a similar vein, Levine et al. (2020) add the objective that predicts WordNet supersenses.",
"While joint specialisation models allow the external knowledge to shape the representation space from the very beginning of the distributional training, this also means that any change in lexical constraints implies a new, computationally expensive pretraining from scratch.",
"Retrofitting and post-specialisation methods (Faruqui et al., 2015; Mrkic et al., 2017; Vulic et al., 2018; Ponti et al., 2018; Glava and Vulic, 2019; Lauscher et al., 2020a; Wang et al., 2020a), in contrast, start from a pretrained representation space (word embedding space or a pretrained encoder) and fine-tune it using external lexico-semantic knowledge.",
"Wang et al. (2020a) fine-tune the pre-trained RoBERTa (Liu et al., 2019) with lexical constraints obtained automatically via dependency parsing, whereas Lauscher et al. (2020a) use lexical constraints derived from ConceptNet to inject knowledge into BERT: both adopt adapter-based fine-tuning, storing the external knowledge in a separate set of parameters.",
"Our work adopts a similar adapter-based specialisation approach, however, focusing on event-oriented downstream tasks, and knowledge from VerbNet and FrameNet.",
"We investigated the potential of leveraging knowledge about semantic-syntactic behaviour of verbs to improve the capacity of large pretrained models to reason about events in diverse languages.",
"We proposed an auxiliary pretraining task to inject VerbNetand FrameNet-based lexical verb knowledge into dedicated verb adapter modules.",
"We demonstrated that state-of-the-art pretrained models still benefit from the gold standard linguistic knowledge stored in lexical resources, even those with limited coverage.",
"Crucially, we showed that the benefits of the knowledge from resource-rich languages can be extended to other, resource-leaner languages through translation-based transfer of verb class/frame membership information.",
"Acknowledgements.",
"This work is supported by the ERC Consolidator Grant LEXICAL (no 648909) awarded to AK.",
"The work of GG is supported by the Baden-Wrttemberg Stiftung (Eliteprogramm, AGREE grant)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"In selective prediction, a classifier is allowed to abstain from making predictions on low-confidence examples.",
"Though this setting is interesting and important, selective prediction has rarely been examined in natural language processing (NLP) tasks.",
"To fill this void in the literature, we study in this paper selective prediction for NLP, comparing different models and confidence estimators.",
"We further propose a simple error regularization trick that improves confidence estimation without substantially increasing the computation bud-get.",
"We show that recent pre-trained transformer models simultaneously improve both model accuracy and confidence estimation effectiveness.",
"We also find that our proposed regularization improves confidence estimation and can be applied to other relevant scenarios, such as using classifier cascades for accuracy efficiency trade-offs.",
"Source code for this paper can be found at https://github.com/ castorini/transformers-selective .",
"Recent advances in deep learning models have pushed the frontier of natural language processing (NLP).",
"Pre-trained language models based on the transformer architecture (Vaswani et al., 2017) have improved the state-of-the-art results on many NLP applications.",
"Naturally, these models are deployed in various real-world applications.",
"However, one may wonder whether they are always reliable, as pointed out by Guo et al. (2017) that modern neural networks, while having better accuracy, tend to be overconfident compared to simple networks from 20 years ago.",
"In this paper, we study the problem of selective prediction (Geifman and El-Yaniv, 2017) in NLP.",
"Under the setting of selective prediction, a model is allowed to abstain from making predictions on uncertain examples (Figure 1) and thereby reduce I enjoyed movies.",
"the error rate.",
"This is a practical setting in a lot of realistic scenarios, such as making entailment judgments for breaking news articles in search engines (Carlebach et al., 2020) and making critical predictions in medical and legal documents (Zhang et al., 2019).",
"In these cases, it is totally acceptable, if not desirable, for the models to admit their uncertainty and call for help from humans or better (but more costly) models.",
"Under the selective prediction setting, we construct a selective classifier by pairing a standard classifier with a confidence estimator.",
"The confidence estimator measures how confident the model is for a certain example, and instructs the classifier to abstain on uncertain ones.",
"Naturally, a good confidence estimator should have higher confidence for correctly classified examples than incorrect ones.",
"We consider two choices of confidence estimators, softmax response (SR; Hendrycks and Gimpel, 2017), and Monte-Carlo dropout (MC-dropout; Gal and Ghahramani, 2016).",
"SR interprets the output of the final softmax layer as a probability distribution and the highest probability as confidence.",
"MC-dropout repeats the inference process multiple times, each time with a different dropout mask, and treats the negative variance of maximum probability as confidence.",
"Confidence estimation is critical to selective prediction, and therefore studying this problem also helps relevant tasks such as active learning (Cohn et al., 1995; Shen et al., 2018) and early exiting (Schwartz et al., 2020; Xin et al., 2020; Zhou et al., 2020; Xin et al., 2021).",
"In this paper, we compare selective prediction performance of different NLP models and confidence estimators.",
"We also propose a simple trick, error regularization, which can be applied to any of these models and confidence estimators, and improve their selective prediction performance.",
"We further study the application of selective prediction on a variety of interesting applications, such as classification with no valid labels (no-answer problem) and using classifier cascades for accuracy efficiency trade-offs.",
"Experiments show that recent powerful NLP models such as BERT (Devlin et al., 2019) and ALBERT (Lan et al., 2020) improve not only accuracy but also selective prediction performance; they also demonstrate the effectiveness of the proposed error regularization by producing better confidence estimators which reduce the area under the riskcoverage curve by 10% .",
"Selective prediction has been studied by the machine learning community for a long time (Chow, 1957; El-Yaniv and Wiener, 2010).",
"More recently, Geifman and El-Yaniv (2017, 2019) study selective prediction for modern deep learning models, though with a focus on computer vision tasks.",
"Selective prediction is closely related to confidence estimation, as well as out-of-domain (OOD) detection (Scholkopf et al., 2000; Liang et al., 2018) and prediction error detection (Hendrycks and Gimpel, 2017), albeit more remotely.",
"There have been many different methods for confidence estimation.",
"Bayesian methods such as Markov Chain Monte Carlo (Geyer, 1992) and Variational Inference (Hin-ton and Van Camp, 1993; Graves, 2011) assume a prior distribution over model parameters and obtain confidence estimates through the posterior.",
"Ensemble-based methods (Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017; Geifman et al., 2019) estimate confidence based on statistics of the ensemble model's output.",
"These methods, however, are computationally practical for small models only.",
"Current large-scale pre-trained NLP models, such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), are too expensive to run multiple times of inference, and therefore require lightweight confidence estimation.",
"Previously, selective prediction and confidence estimation have been studied in limited NLP scenarios.",
"Dong et al. (2018) train a separate confidence scoring model to explicitly estimate confidence in semantic parsing.",
"Kamath et al. (2020) introduce selective prediction for OOD question answering, where abstention is allowed for OOD and difficult questions.",
"However, selective prediction for broader NLP applications has yet to be explored, and we hope to draw the attention of the NLP community to this problem.",
"There are two notable related topics, confidence calibration and unanswerable questions, but the difference between them and selective prediction is still nontrivial.",
"Calibration (Guo et al., 2017; Jiang et al., 2018; Kumar et al., 2018; Wang et al., 2020; Desai and Durrett, 2020) focuses on adjusting the overall confidence level of a model, while selective prediction is based on relative confidence among the examples.",
"For example, the most widely used calibration technique, temperature scaling (Platt, 1999), globally increases or decreases the model's confidence on all examples, but the ranking of all examples' confidence is unchanged.",
"Unanswerable questions are considered in previous datasets, e.g., SQuAD2.0 (Rajpurkar et al., 2018).",
"The unanswerable questions are impossible to answer even for humans, while abstention in selective prediction is due to model uncertainty rather than model-agnostic data uncertainty.",
"Given a feature space X and a set of labels Y , a standard classifier f is a function f : X Y .",
"A selective classifier is another function h : X Y {} , where is a special label indicating the abstention of prediction.",
"Normally, the selective classifier is composed of a pair of functions h = ( f, g ) , where f is a standard classifier and g is the selective function g : X { 0 , 1 } .",
"Given an input x X , the output of the selective classifier is as follows: h ( x ) = (cid:40) f ( x ) , if g ( x ) = 1 , , if g ( x ) = 0 , (1) and we can see that the output of g controls prediction or abstention.",
"In most cases, g consists of a confidence estimator g : X R , and a confidence threshold : g ( x ) = 1 [ g ( x ) > ] .",
"g ( x ) indicates how confident the classifier f is on the example x , and controls the overall prediction versus abstention level.",
"A selective classifier makes trade-offs between coverage and risk .",
"Given a labeled dataset S = { ( x i , y i ) } ni =1 X Y and an error function L to calculate each example's error l i = L ( f ( x i ) , y i ) , the coverage and the selective risk of a classifier h = ( f, g ) on S are, respectively, ( h ) = 1 | S | (cid:88) ( x i ,y i ) S g ( x i ) , (3) r ( h ) = (cid:80) ( x i ,y i ) S g ( x i ) l i (cid:80) ( x i ,y i ) S g ( x i ) .",
"The selective classifier aims to minimize the selective",
"selective risk at a given coverage.",
"The performance of a selective classifier h = ( f, g ) can be evaluated by the riskcoverage curve (RCC; El-Yaniv and Wiener, 2010), which is drawn by varying the confidence threshold (see Figure 2 for an example).",
"Quantitatively, the area under curve (AUC) of RCC measures the effectiveness of a selective classifier.",
"1 In order to minimize the AUC of RCC, the selective classifier should, intuitively, output g ( x ) = 1 for correctly classified examples and g ( x ) = 0 for incorrect ones.",
"Therefore, an ideal g has the following property: ( x i , y i ) , ( x j , y j ) S , g ( x i ) g ( x j ) iff l i l j .",
"We propose the following metric, reversed pair proportion (RPP), to evaluate how far the confidence estimator g is to ideal, given the labeled dataset S of size n : RPP = n (cid:80) 1 i,j n 1 [ g ( x i ) < g ( x j ) , l i < l j ] n 2 .",
"RPP measures the proportion of example pairs with a reversed confidenceerror relationship, and the n 2 in the denominator is used to normalize the value.",
"An ideal confidence estimator has an RPP value of 0.",
"In most cases for multi-class classification, the last layer of the classifier is a softmax activation, which",
"1 AUC in this paper always corresponds to RCCs.",
"outputs a probability distribution P ( y ) over the set of labels Y , where y Y is a label.",
"In this case, the classifier can be written as f ( x ) = y = arg max y Y P ( y ) , (6) where y is the label with highest probability.",
"Perhaps the most straightforward and popular choice for the confidence estimator is softmax response (Hendrycks and Gimpel, 2017): g SR ( x ) = P ( y ) = max y Y P ( y ) .",
"Alternatively, we can use the difference between probabilities of the top two classes for confidence estimation.",
"We refer to this method as PD (proba-bility difference).",
"Gal and Ghahramani (2016) argue that softmax outputs are often erroneously interpreted as model confidence, and propose to use MC-dropout as the confidence estimator.",
"In MC-dropout, P ( y ) is computed for a total of R times, using a different dropout mask at each time, producing P 1 ( y ) , P 2 ( y ) , , PR ( y ) .",
"The variance of them is used to estimate the confidence: g MC ( x ) = Var[ P 1 ( y ) , , PR ( y )] .",
"We use the negative sign here because a larger variance indicates a greater uncertainty, i.e., a lower confidence (Geifman and El-Yaniv, 2017; Kamath et al., 2020).",
"By using different dropout masks, MC-dropout is equivalent to using an ensemble for confidence estimation, but does not require actually training and storing multiple models.",
"Nevertheless, compared to SR, the inference cost of MC-dropout is multiplied by R , which can be a problem when model inference is expensive.",
"SR and MC-dropout are often used directly out of the box as the confidence estimator.",
"We propose a simple regularization trick that can be easily applied at training (or fine-tuning for pre-trained models) time and can improve the effectiveness of the induced confidence estimators.",
"Considering that a good confidence estimator should minimize RPP defined in Equation 5, we 0.0 0.2 0.4 0.6 0.8 1.0 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.175 MRPC 0.0 0.2 0.4 0.6 0.8 1.0 0.00 0.02 0.04 0.06 0.08 QNLI 0.0 0.2 0.4 0.6 0.8 1.0 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 MNLI BERT-base-MC BERT-large-MC BERT-base-SR BERT-large-SR Coverage R i s k Figure 2: Riskcoverage curves of BERT-base and BERT-large models with SR and MC confidence estimators.",
"add the following regularizer to the original training loss function: L total = n (cid:88) i =1 H ( f ( x i ) , y i ) + L reg , (9) L reg = (cid:88) 1 i,j n i,j 1 [ e i > e j ] , (10) i,j = max { 0 , g SR ( x i ) g SR ( x j ) } 2 .",
"(11)",
"Here, H ( , ) is the task-specific loss function such as cross entropy ( H is not the same with the error function L ), is the hyperparameter for regularization, g SR is the maximum softmax probability defined in Equation 7, and e i is the error of example i at the current iterationdetails to calculate it will be explained in the next paragraph.",
"We use SR confidence here because it is easily accessible at training time, while MC-dropout confidence is not.",
"The intuition of this regularizer is as follows: if the model's error on example i is larger than its error on example j (i.e., example i is considered more difficult for the model), then the confidence on example i should not be greater than the confidence on example j .",
"In practice, at each iteration of training (fine-tuning), we can obtain the error e i in one of the two following ways.",
"Current iteration error We simply use the error function L to calculate the error of the example at the current iteration, and use it as e i .",
"In the case of multi-class classification, L is often chosen as the 01 error.",
"is, we draw inspiration from forgettable examples (Toneva et al., 2019).",
"We calculate example error with L throughout the training process, and use the error averaged from the beginning to the current iteration as e i .",
"In this case, e i takes value from [0 , 1] .",
"In practice, it is computationally prohibitive to either strictly compute L reg from Equation 10 for all example pairs, or to calculate history record error after every iteration.",
"We therefore make the following two approximations.",
"For L reg from Equation 10, we only consider examples from the mini-batch of the current iteration.",
"For current iteration error, where e i takes value from { 0 , 1 } , we consider all pairs where e i = 1 and e j = 0 .",
"For history record error, where e i takes value from [0 , 1] , we sort all examples in the mini-batch by their errors, and divide the mini-batch into 20% of examples with high error values and 80% of examples with low error values; 2 then we consider all pairs where example i is from the former 20% and j from the latter 80% .",
"For calculating history record error, we compute and record the error values for the entire training set 10 times per epoch (once after each 10% iterations).",
"At each training iteration, we use the average of error values recorded so far as e i .",
"We conduct experiments of selective prediction on NLP tasks.",
"Since the formulation of selective prediction is model agnostic, we choose the 2 We choose this 2080 division to mimic the current iteration error case, where roughly 20% of training examples have an error of 1 and 80% have an error of 0.",
"following representative models: (1) BERT-base and BERT-large (Devlin et al., 2019), the dominant transformer-based models of recent years; (2) ALBERT-base (Lan et al., 2020), a variant of BERT featuring parameter sharing and memory efficiency; (3) Long Short-Term Memory (LSTM; Hochreiter and Schmidhuber, 1997), the popular pre-transformer model that is lightweight and fast.",
"In this section, we compare the performance of selective prediction of these models, demonstrate the effectiveness of the proposed error regularization, and show the application of selective prediction in two interesting scenariosthe no-answer problem and the classifier cascades.",
"We conduct experiments mainly on three datasets: MRPC (Dolan and Brockett, 2005), QNLI (Wang et al., 2018), and MNLI (Williams et al., 2018).",
"In Section 5.4, we will need an additional non-binary dataset SST-5 (Socher et al., 2013).",
"Statistics of these datasets can be found in Table",
"2. Following the setting of the GLUE benchmark (Wang et al., 2018), we use the training set for training/fine-tuning and the development set for evaluation (the test set's labels are not publicly available); MNLI's development set has two parts, matched and mismatched (m/mm).",
"These datasets include semantic equivalence judgments, entailment classification, and sentiment analysis, which are important application scenarios for selective prediction as discussed in Section",
"1. The implementation is based on PyTorch (Paszke et al., 2019) and the Huggingface Transformers Library (Wolf et al., 2020).",
"Training/fine-tuning and inference are done on a single NVIDIA Tesla V100 GPU.",
"Since we are evaluating the selective prediction performance of different models instead of pursuing state-of-the-art results, we do not extensively tune hyperparameters; instead, most experiment settings such as hidden sizes, learning rates, and batch sizes are kept unchanged from the Huggingface Library.",
"Further setup details can be found in Appendix A. 5.2 Comparing Different Models We compare selective prediction performance of different models in Table",
"1. For each model, we report the performance given by the two confidence estimators, softmax response (SR) and MC-dropout (MC); the results of using PD for confidence estimation are very similar to those of SR, and we report them in Appendix B due to space limitations.",
"The accuracy and the F1 score 3 measure the effectiveness of the classifier f , RPP measures the reliability of the confidence estimator g , and AUC is a comprehensive metric for both the classifier and the confidence estimator.",
"The choice of confidence estimator does not affect the model's accuracy.",
"We also provide riskcoverage curves (RCCs) of different models and confidence estima-3 We henceforth refer to both accuracy and F1 scores simply as accuracy for the sake of conciseness.",
"tors in Figure",
"2. MC in the table and the figure uses a dropout rate of 0 .",
"01 and repetitive runs R = 10 .",
"We first notice that models with overall higher accuracy also have better selective prediction performance (lower AUC and RPP).",
"For example, compared with LSTM, BERT-base has higher accuracy and lower AUC/RPP on all datasets, and the same applies to the comparison between BERT-base and BERT-large.",
"Since the classifier's effectiveness does not directly affect RPP, the consistency of RPP's and accuracy's improvement indicates that sophisticated models simultaneously improve both model accuracy and confidence estimation.",
"This is in contrast to the discovery by Guo et al. (2017) that sophisticated neural networks, despite having better accuracy, are more easily overconfident and worse calibrated than simple ones.",
"We also notice that MC-dropout performs consistently worse than softmax response, shown by both AUC and RPP.",
"This shows that for NLP tasks and models, model confidence estimated by MC-dropout fails to align well with real example diffi-culty.",
"We further study and visualize in Figure 3 the effect of different dropout rates and different numbers of repetitive runs R on MC-dropout's selective prediction performance.",
"We can see that (1) a dropout rate of 0 .",
"01 is a favorable choice: larger dropout rates lead to worse performance while smaller ones do not improve it; (2) MC-dropout needs at least 20 repetitions to obtain results comparable to SR, which is extremely expensive.",
"Although MC-dropout has a sound theoretical foundation, its practical application to NLP tasks needs further improvements.",
"In this part, we show that our simple regularization trick improves selective prediction performance.",
"In Model Reg.",
"Table 3, we report the accuracy, AUC, and RPP for each model, paired with three different regularizers: no regularization (none), current error regularizer (curr.), and history error regularizer (hist.), as described in Section",
"4. We first see that applying error regularization (either current or history) does not harm model accuracy.",
"There are minor fluctuations, but generally speaking, error regularization has no negative effect on the models' effectiveness.",
"We can also see that error regularization improves models' selective prediction performance, reducing AUC and RPP.",
"As we mention in the previous section, AUC is a comprehensive metric for both the classifier f and the confidence estimator g .",
"We therefore focus on this metric in this section, and we bold the lowest AUC in Table",
"3. We see that error regularization consistently achieve the lowest AUC values, and on average, the best scores are approximately 10% lower than the scores without regularization.",
"This shows that error regularization produces confidence estimators that give better confidence rankings.",
"The two regularization methods, current error and history error, are similar in quality, with neither outperforming the other across all models and datasets.",
"Therefore, we can conclude only that the error regularization trick improves selective prediction, but the best specific method varies.",
"We leave this exploration for future work.",
"In this section, we conduct experiments to see how selective classifiers perform on datasets that either allow abstention or, equivalently, provide the no-answer label.",
"This no-answer problem occurs whenever a trained classifier encounters an example whose label is unseen in training, which is common in practice.",
"For example, in the setting of ultrafine entity typing with more than 10,000 labels (Choi et al., 2018), it is unsurprising to encounter examples with unseen types.",
"Ideally, in this case, the classifier should choose the no-answer label.",
"This setting is important yet often neglected, and there exist few classification datasets with the no-answer label.",
"We therefore build our own datasets, binarized MNLI and SST-5 (bMNLI and bSST-5), to evaluate different models in this setting (Table 2).",
"The MNLI dataset is for sentence entailment classification.",
"Given a pair of sentences, the goal is to predict the relationship between them, among three labels: entailment, contradiction, and neutral.",
"The SST-5 dataset is for fine-grained sentence sentiment classification.",
"Given a sentence, the goal is to predict the sentiment of it, among five labels: strongly positive, mildly positive, strongly negative, mildly negative, and neutral.",
"To convert the original MNLI and SST-5 datasets into our binarized versions bMNLI and bSST-5, we modify the following: for SST-5, we merge strongly and mildly positive/negative into one positive/negative class; for MNLI, we simply regard entailment as positive and contradictory as negative.",
"We then remove all neutral instances from the training set but keep those in the development and test sets.",
"This way, neutral instances in the development and test sets should be classified as no-answer by the model.",
"A good model is expected to assign neutral examples in the development and test sets with low confidence scores , thereby predicting the no-answer label for them.",
"We report results for these two datasets with 0 5 10 15 20 GFLOPs 0.82 0.84 0.86 0.88 F 1 MRPC 0 5 10 15 20 GFLOPs 0.65 0.70 0.75 0.80 0.85 0.90 A cc .",
"the no-answer label in Table",
"4. Accuracy (Acc), AUC, and RPP have the same meaning from the previous sections.",
"We also consider a new metric specifically for the no-answer setting, augmented accuracy (Acc*), which is calculated as follows: (1) we make a number of attempts by searching a threshold from 0 .",
"7 to 1 .",
"0 in increments of 0 .",
"01 ; (2) for each attempt, we regard all examples with predicted confidence lower than as neutral, and then calculate the accuracy; (3) among all attempts, we take the highest accuracy as Acc*.",
"Choosing the optimal requires knowing the ground-truth answers in advance and is not practical in reality.",
"4 Instead, Acc* indicates how well a model recognizes examples whose label is likely unseen in the training set.",
"We first see that Acc* is consistently higher than Acc in all cases.",
"This is unsurprising, but it demonstrates that unseen samples indeed have lower confidence and shows that introducing the abstention option is beneficial in the no-answer scenario.",
"Also, we observe that error regularization improves the models' selective prediction performance, producing lower AUC/RPP and higher Acc* in most cases.",
"This further demonstrates the effectiveness of the simple error regularization trick.",
"Secondly, we can see that the improvement of Acc* over Acc is larger in bMNLI than in bSST-5.",
"The reason is that in bMNLI, neutral examples constitute about a third of the entire development set, while in bSST-5 they constitute only a fifth.",
"The 4 Alternatively, one may use a validation set to choose the optimal .",
"In our experiments, however, we use the development set for evaluation, since the labels of the test set itself are not publicly available.",
"Holding out a part of the training set for validation is left for future exploration.",
"improvement is positively correlated with the proportion of neutral examples, since they are assigned lower confidence scores and provide the potential for abstention-based improvements.",
"In this section, we show how confidence estimation and abstention can be used for accuracyefficiency trade-offs.",
"We use classifier cascades: we first use a less accurate classifier for prediction, abstain on examples with low confidence, then send them to more accurate but more costly classifiers.",
"Here we choose LSTM and BERT-base to constitute the cascade, but one can also choose other models and more levels of classifiers.",
"We first use an LSTM for all examples' inference, and then send difficult ones to BERT-base.",
"Since the computational cost of LSTM is negligible 5 compared to BERT-base, the key to efficiency here is correctly picking the difficult examples.",
"In Figure 4, we show the results of accuracy/F1 score versus average FLOPs 6 per inference example.",
"Each curve represents a method to choose difficult examples: The blue curves are obtained by randomly selecting examples, as a simple baseline.",
"The orange and green curves are obtained by using SR of LSTM as the indicator of example difficulty; the orange curves represent the LSTM trained with no regularization while the green curves are with history error regularization.",
"Different points on the curves are chosen by varying the proportion of examples sent to the more accurate model, BERT-5 BERT-base's cost is 10 5 times larger than LSTM here.",
"6 We use the torchprofile toolkit to measure multiply accumulate operations (MACs), and then double the number to obtain floating point operations (FLOPs).",
"base.",
"A curve with a larger area under it indicates a better accuracyefficiency trade-off.",
"We can see that the blue curves are basically linear interpolations between the LSTM (the lower-left dot) and BERT-base (the upper-right dot), and this is expected for random selection.",
"Orange and green curves are concave, indicating that using SR for confidence estimation is, unsurprisingly, more effective than random selection.",
"Between these two, the green curves (history error regularization) have larger areas under themselves than orange ones (no regularization), i.e., green curves have better accuracy given the same FLOPs.",
"This demonstrates the effectiveness of error regularization for better confidence estimation.",
"In this paper, we introduce the problem of selective prediction for NLP.",
"We provide theoretical background and evaluation metrics for the problem, and also propose a simple error regularization method that improves selective prediction performance for NLP models.",
"We conduct experiments to compare different models under the selective prediction setting, demonstrate the effectiveness of the proposed regularization trick, and study two scenarios where selective prediction and the error regularization method can be helpful.",
"We summarize interesting experimental observations as follows:",
"1. Recent sophisticated NLP models not only improve accuracy over simple models, but also provide better selective prediction results (better confidence estimation).",
"2. MC-dropout, despite having a solid theoretical foundation, has difficulties matching the effectiveness of simple SR in practice.",
"3. The simple error regularization helps models lower their AUC and RPP, i.e., models trained with it produce better confidence estimators.",
"4. Selective prediction can be applied to scenarios where estimating example difficulties is necessary.",
"In these cases, our proposed error regularization trick can also be helpful, such as providing better accuracyefficiency trade-offs.",
"Future Work (1) Despite the effectiveness of the proposed error regularization trick, we are not certain on the best way for computing the error (current or history); it is important to unify them into one method that consistently does well.",
"(2) We have only covered a selection of NLP tasks, and there are still other unexplored categories: token-level classification such as named entity recognition and question answering, sequence generation such as summarization and translation, and so on; it would be interesting to extend selective prediction to these problems.",
"(3) There exists another setting for selective prediction where abstention induces a fixed cost (Bartlett and Wegkamp, 2008) and the goal is to minimize the overall cost instead of AUC; it would also be interesting to investigate this setting for NLP applications.",
"We thank anonymous reviewers for their constructive suggestions.",
"This research is supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada."
] | [
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"Graph convolutional network (GCN) has become popular in various natural language processing (NLP) tasks with its superiority in long-term and non-consecutive word interactions.",
"However, existing single-hop graph reasoning in GCN may miss some important nonconsecutive dependencies.",
"In this study, we define the spectral graph convolutional network with the high-order dynamic Chebyshev approximation (HDGCN), which augments the multi-hop graph reasoning by fusing messages aggregated from direct and long-term dependencies into one convolutional layer.",
"To alleviate the over-smoothing in high-order Chebyshev approximation, a multi-vote based cross-attention (MVCAttn) with linear computation complexity is also proposed.",
"The empirical results on four transductive and inductive NLP tasks and the ablation study verify the efficacy of the proposed model.",
"Our source code is available at https://github.com/ MathIsAll/HDGCN-pytorch .",
"Graph neural networks (GNNs) are usually used to learn the node representations in Euclidean space from graph data, which have been developed to one of the hottest research topics in recent years (Zhang, 2020).",
"The primitive GNNs relied on recursive propagation on graphs, which takes a long time to train (Zhang et al., 2019b).",
"One major variant of GNNs, graph convolutional networks (GCNs) (Kipf and Welling, 2017; Yao et al., 2019), takes spectral filtering to replace recursive message passing and needs only a shallow network to convergent, which have been used in various NLP tasks.",
"For example, Yao et al. (2019) constructed the text as a graph and input it to a GCN.",
"This method achieved better results than conventional deep learning models in text classification.",
"Afterward, the GCNs corresponding author: Qingcai Chen have became popular in more tasks, such as word embedding (Zhang et al., 2020b), semantic analysis (Zhang et al., 2019a), document summarization (Wang et al., 2020), knowledge graph (Wang et al., 2018), etc.",
"The spectral graph convolution in Yao's GCN is a localized first-order Chebyshev approximation.",
"It is equal to a stack of 1-step Markov chain (MC) layer and fully connected (FC) layer.",
"Unlike the multi-step Markov chains, the message propagation in vanilla GCN lacks the node probability transitions.",
"As a result, the multi-hop graph reasoning is very tardy in GCN and easily causes the suspended animation problem (Zhang and Meng, 2019).",
"However, the probability transition on the graph is useful to improve the efficiency in learning contextual dependencies.",
"In many NLP tasks (like the question answering (QA) system and entity relation extraction), the features of the two nodes need to be aligned.",
"As an example, Figure 1 shows a simple graph where the node n 4 is a pronoun of node n 1 .",
"In this example, the adjacency matrix is masked on nodes n 2 , n 3 , n 5 to demonstrate the message passing between n 1 and n 4 .",
"Figure 1",
"(c) and",
"(d) plot the processes of feature alignment on two nodes without and with probability transitions respectively.",
"In this example, the feature alignment process without probability transition needs 10 more steps than which with probability transition.",
"It is shown that encoding the multi-hop dependencies through the spectral graph filtering in GCN usually requires a deep network.",
"However, as well known that the deep neural network (DNN) is tough to train and easily causes the over-fitting problem (Rong et al., 2019).",
"Some newest studies to improve the multi hop graph reasoning include graph attention networks (GATs) (Velickovic et al., 2018), graph residual neural network (GRESNET) (Zhang and Meng, 2019), graph diffusive neural network (DIFNET) Figure 1:",
"(Zhang, 2020), TGMC-S (Zhang et al., 2020c) and Graph Transformer Networks (Yun et al., 2019; Zhang and Zhang, 2020).",
"GATs enhance the graph reasoning by implicitly re-defining the graph structure with the attention on the 1-hop neighbors, but there is equilibrial optimization on the whole graph.",
"GRESNET solves the suspended animation problem by creating extensively connected highways to involve raw node features and intermediate representations throughout all the model layers.",
"However, the multi-hop dependencies are still reasoned at a slow pace.",
"DIFNET introduces a new neuron unit, i.e., GDU (gated diffusive unit), to model and update the hidden node states at each layer.",
"DIFNET replaces the spectral filtering with a recursive module and realizes the neural gate learning and graph residual learning.",
"But the time cost is aggravated in DIFNET compared with GCN.",
"TGMC-S stacks GCN layers on adjacent matrices with different hops of traffic networks.",
"Different from the ground-truth traffic network in TGMC-S, it is hard to construct the multi-hop word-word relationships objectively from the text.",
"TGMC-S hadn't given a way to improve the multi-hop message passing in GCN.",
"Transformers (Vaswani et al., 2017) and corresponding pre-trained models (Xu et al., 2019) could be thought of as fully-connected graph neural networks that contain the multi-hop dependencies.",
"They figure out the contextual dependencies on the fully-connected graph with the attention mechanism.",
"The message propagation in transformers follows the relations self-adaptively learned from input sequence instead of the fixed graph structures.",
"Publications have shown that transformers outperform GCNs in many NLP tasks.",
"Graph Transformer (Dwivedi and Bresson, 2020) generalizes the Transformer to arbitrary graphs, and improves inductive learning from Laplacian eigenvectors on graph topology.",
"However, due to the connections scale quadratically growth with node number N in graphs, things get out of hand for very large N .",
"Additionally, the fully-connected graph is not an interpretable architecture in practical tasks.",
"For example, whether Transformers are the best choice to bring the text in linguistic theory?",
"1 To improve the efficiency and performance of multi hop graph reasoning in spectral graph convolution, we proposed a new graph convolutional network with high-order dynamic Chebyshev approximation (HDGCN).",
"A prime ChebNet and a high-order dynamic (HD) ChebNet are firstly applied to implement this Chebyshev approximation.",
"These two sub-networks work like a trade-off on low-pass signals (direct dependencies) and high-pass signals (multi-hop dependencies) respectively.",
"The prime ChebNet takes the same frame as the convolutional layer in vanilla GCN.",
"It mainly extracts information from direct neighbors in local contexts.",
"The HD-ChebNet aggregates messages from multi-hop neighbors following the transition direction adaptively learned by the attention mechanism.",
"The standard self-attention (Vaswani et al., 2017) has a O (cid:0) N 2 (cid:1) computation complexity and it is hard to be applied on long sequence.",
"Even the existing sparsity attention methods, like the Star-Transformer (Guo et al., 2019) and Extended Transformer Construction (ETC) (Ainslie et al., 2020), have reduced the quadratic dependence limit of sequence length to linear dependence, but the fully-connected graph structure cannot be kept.",
"We design a multi-vote-based cross-attention (MVCAttn) mechanism.",
"The MVCAttn scales the computation complexity O ( N 2 ) in self-attention to O ( N ) .",
"To improve the efficiency and performance of multi-hop reasoning in spectral graph convolution, we propose a novel graph convolutional network with high-order dynamic Chebyshev Approximation (HDGCN).",
"1 https://towardsdatascience.com/transformers-are-graph-neural-networks-bca9f75412aa To avoid the over-smoothing problem in HD-ChebNet, we propose a multi-vote based cross-attention (MVCAttn) mechanism, which adaptively learn the direction of node probability transition.",
"MVCAttn is a variant of the attention mechanism with the property of linear computation complexity.",
"The experimental results show that the proposed model outperforms compared SOTA models on four transductive and inductive NLP tasks.",
"Our work draws supports from the vanilla GCN and the attention mechanism, so we first give a glance at the paradigm of these models in this section.",
"The GCN model proposed by (Kipf and Welling, 2017) is the one we interested, and it is defined on graph G = {V , E} , where V is the node set and E is the edge set.",
"The edge ( v i , v j ) E represents a link between nodes v i and v j .",
"The graph signals are attributed as X R |V| d , and the graph relations E can be defined as an adjacency matrix A R |V||V| (binary or weighted).",
"Each convolutional layer in GCN is a 1st Chebyshev approximation on spectral graph convolution, and its layer-wise propagation rule in neural network is defined as: H ( l +1) = (cid:16)(cid:101) AH ( l ) W ( l ) (cid:17) , L l 0 (cid:101) A = ( D + IN ) 12 ( A + IN ) ( D + IN ) 12 , (1) where H (0) = X , (cid:101) A is the normalized adjacency matrix and is a non-linear activation function.",
"The node embeddings output from the last convolutional layer are fed into a softmax classifier for node or graph classification, and the loss function L can be defined as the cross-entropy error.",
"The weight set { W ( l ) } Ll =0 can be jointly optimized by minimizing L via gradient descent.",
"The attention mechanism is an effective way to extract task-relevant features from inputs, and it helps the model to make better decisions (Lee et al., 2019).",
"It has various approaches to compute the attention score from features, and the scaled dot-product attention proposed in Transformers (Vaswani et al., 2017) is the most popular one.",
"where X RN d is the input sequence, and weights W q R d d k , W k R d k d , W v R d d v are used to transform sequence to queries, keys and values.",
"As showed in Equation 2, the attention scores A can be viewed as a dynamic adjacency matrix on sequence X .",
"This process in self-attention is similar to the graph convolutional layer defined in Equation 1. The only difference is that the adjacency matrix in Equation 2 is adaptively learned from input instead of prior graph structures.",
"In our model, the input graph G = ( V , E ) takes the same form as the one in GCN.",
"The nodes are attributed as X R |V| d , and the adjacency matrix A R |V||V| (binary or weighted) is defined on graph edges E .",
"The spectral graph convolution in Fourier do-main is defined as, g (cid:63) x = U g (cid:16)(cid:101) (cid:17) UT x (3) where x R d is the signal on a node, U is the matrix of eigenvectors on normalized graph Laplacian L = IN D 12 AD 12 = UU T , and the filter g ( (cid:101) ) is a function of the eigenvalues on normalized (cid:101) L in Fourier domain.",
"The K -th ( K > 2 ) order truncation of Chebyshev polynomials on this spectral graph convolution is, g (cid:63) x K (cid:88) i =0 i U T i (cid:16)(cid:101) (cid:17) UT x (4) where T 0 (cid:16)(cid:101) (cid:17) = I , T 1 = (cid:101) , T i> 1 (cid:16)(cid:101) (cid:17) = 2 (cid:101) T i 1 (cid:16)(cid:101) (cid:17) T i 2 (cid:16)(cid:101) (cid:17) .",
"To replace the parameters { i } Ki =1 with another parameter set { ( i ) } K/ 2 i =1 , the K th-order Chebyshev polynomials in Equation 4 are approximated as: g (cid:63) x K/ 2 (cid:88) k =0 (cid:16) U (cid:101) U T (cid:17) 2 k (cid:16) I U (cid:101) U T (cid:17) x ( k ) K/ 2 (cid:88) k =1 (cid:101) A 2 k (cid:101) A x ( i ) (5) Figure 2:",
"where the (cid:101) A is normalized adjacency matrix (as defined in Equation 1).",
"As the node state transition (cid:101) A 2 k causes the over-smoothing problem (Li et al., 2018; Nt and Maehara, 2019), we take the dynamic pairwise relationship A d self-adaptively learned by the attention mechanism to turn the direction of node state transition.",
"kd",
"In our implementation, the first-order and higher-order Chebyshev polynomials in Equation 5 is approximated with a prime Chebyshev network (ChebNet) and high-order dynamic Chebyshev networks (HD-ChebNets) respectively.",
"We generalize the graph convolution on K th-order dynamic Chebyshev approximation (Equation 5) to the layer-wise propagation as follows, H K/ 2 (cid:88) k =0 Z ( k ) , Z (0) = (cid:16)(cid:101) AXW (0) (cid:17) (cid:124) (cid:123)(cid:122) (cid:125) Prime ChebNet , Z ( k ) = (cid:16)(cid:101) A (cid:16) A ( k ) d Z ( k ) W ( k ) d (cid:17) W ( k ) (cid:17) (cid:124) (cid:123)(cid:122) (cid:125) Unit in HD-ChebNet , (6) where k is the order and W (0) , W ( k ) , W ( k ) d are nonlinear filters on node signals.",
"For the convenience of writing, we just define the first layer of HDGCN.",
"We consider the same convolutional architecture as the one in GCN to implement the prime ChebNet,",
"and it mainly aggregates messages from the direct dependencies.",
"As the multi-hop neighbors can be interacted via the 1-hop neighbors, we take the Z (0) output from the prime ChebNet as input of the HD-ChebNet.",
"The multi-vote based cross-attention (MVCAttn) mechanism first adaptively learns the direction of node probability transition A ( k ) d , its schematic is showed in Figure 2",
"(b).",
"MVCAttn has two phases graph information aggregation and diffusion.",
"Graph Information Aggregation coarsens the node embeddings Z ( k 1) to a small supernode set S ( k ) RM d , M (cid:28) |V| .",
"The first step is multi-vote projection (MVProj).",
"In which node embeddings Z ( k 1) are projected to multiple votes V ( k ) R |V| M d , and these votes are aggregated to supernode set S ( k ) = { s ( k ) m } Mm =1 .",
"Next, the forward cross-attention (FCAttn) updates the supernode values as: (cid:98) S ( k ) = FCAttn (cid:16) Z ( k ) , S ( k ) (cid:17) = A ( k ) f Z ( k 1) W fv A ( k ) f = Softmax (cid:32) Z ( k 1) W fk W fq S ( k ) d (cid:33) (9) where W fk R d k d c , W fq R d c d k and W fv R d k d k .",
"Graph Information Diffusion feeds the supernodes (cid:98) S ( k ) back to update node set Z ( k ) .",
"With the node embeddings Z ( k 1) and supernode embeddings (cid:98) S ( k ) , the backward cross-attention (BCAttn) is defined as, Z ( k ) = BCAttn (cid:16)(cid:101) S ( k ) , Z ( k 1) (cid:17) = A ( k ) b Z ( k 1) W bv A ( k ) b = Softmax (cid:32) (cid:98) S ( k ) W bq W bk Z ( k 1) d (cid:33) (10) where W bq R d k d a , W bk R d a d k and W bv R d k d k .",
"The last step is adding the probability transition with (cid:101) A .",
"The output of k -th order HD-ChebNet (Equation A) is, (cid:98) Z ( k ) = (cid:16) (cid:101) AZ ( k ) W ( k ) (cid:17) (11) Finally, the outputs from the prime ChebNet and HD-ChebNets are integrated as the node embeddings, H = norm Z (0) + K/ 2 (cid:88) k =1 (cid:98) Z ( k ) .",
"Node Classification The node representations H output from the last graph convolutional layer are straightforward fed into a Softmax classifier for node classification.",
"whole graph is constructed via a readout layer on the outputs H ,",
"h v = ( f 1 ( h v )) (cid:12) tanh ( f 2 ( h v )) h g = 1 |V| |V| (cid:88) v =1 h v + Maxpool (cid:0) h 1 h |V| (cid:1) (14)",
"where (cid:12) denotes the Hadamard product and f 1 () , f 2 () are two non-linear functions.",
"The graph representation h g R d is fed into the Softmax classifier to predict the graph label.",
"In this section, we evaluate HDGCN on transductive and inductive NLP tasks of text classification, aspect-based sentiment classification, natural language inference, and node classification.",
"In experiment, each layer of HDGCN is fixed with K = 6 order Chebyshev approximation and the model stacks L = 1 layer.",
"The dimension of input node embeddings is d = 300 of GlVe or d = 768 of pre-trained BERT, and the hyper-parameter d k = 64 , d a = 64 .",
"So the weights W (0) R d 64 , W ld , W ( k ) R 64 64 and W fk , W fq , W bq , W bk R 64 64 .",
"The number of super-nodes is set as M = 10 .",
"Our model is optimized with adaBelief (Zhuang et al., 2020) with a learning rate 1 e 5 .",
"The schematics about the HDGCN is shown in Figure 2. To analyze the effectiveness of MVCAttn in avoiding over-smoothing, we report the results of ablation study HDGCNstatic in Table 1, 2 5.",
"The ablation model HDGCNstatic is an implementation of Equation 5, in which the node state transition is determined by the static adjacency matrix (cid:101) A 2 k .",
"The first experiment is designed to evaluate the performance of HDGCN on the text graph classification.",
"Four small-scale text datasets 2 MR, R8, R52, Ohsumed, and four large-scale text datasets AG's News 3 , SST-1, SST-2 4 , Yelp-F 5 are used in this task.",
"The graph structures are built on word-word co-occurrences in a sliding window 2 https://github.com/yao8839836/text gcn 3 http://groups.di.unipi.it/ gulli/AG corpus of news articles.html 4 https://nlp.stanford.edu/sentiment/treebank.html 5 https://www.yelp.com/dataset (width=3 and unweighted) on individual documents.",
"HDGCN is initialized with word embeddings pre-trained by 300 -d GloVe and 768 -d BERT-base on small and large scale datasets respectively.",
"The baselines include TextCNN, TextRNN, fastText, SWEM, TextGCN, GraphCNN, TextING, minCUT, BERT-base, DRNN, CNN-NSU, CapNets, LK-MTL, TinyBERT, Star-Transformer.",
"Table 1 shows the test accuracies on four small-scale English datasets, in which HDGCN ranks top with accuracies 86.50%, 98.45%, 96.57%, 73.97% respectively.",
"HDGCN beats the best baselines achieved by TextING (the newest GNN model) and the fine-tuned BERT-base model.",
"Our ablation model HDGCNstatic also achieves higher accuracies than the newest GNN models TextING and minCUT.",
"Therefore, the outperformance of HDGCN verifies that (1) the node probability transition in high-order Chebyshev approximation improves the spectral graph convolution; (2) the MVCAttn mechanism in high-order ChebNet further raises the effectiveness by avoiding the over-smoothing problem.",
"Table 2 shows the test accuracies of HDGCN and other SOTA models on large-scale English datasets.",
"HDGCN achieves the best results 95.5%, 53.9%, 69.6% on AG, SST-1, Yelp-F respectively, and performs a slight gap 0.3% with the top-1 baseline (TinyBERT) on SST-2.",
"These results support that HDGCN outperforms the fully-connected graph module in Transformers and corresponding pre-trained models.",
"Additionally, these comparisons also demonstrates that the combination of prior graph structures and self-adaptive graph structures in graph convolution is able to improve the multihop graph reasoning.",
"In the second experiment, we make a case study on the MR dataset to visualize how the HDGCN improve multi-hop graph reasoning.",
"Here, we take the positive comment inside the film's conflict powered plot there is a decent moral trying to get out, but it's not that , it's the tension that keeps you in your seat Affleck and Jackson are good sparring partners as an example.",
"First, the word interactions on prior graph structure (cid:101) A (word-word co-occurrence in a sliding window with width=3) is showed in Figure 3.",
"We can see that the word mainly interacts with its consecutive neighbors.",
"It is hard for the vanilla GCN to encode multi-hop and non-consecutive word-word interactions as the example shown in Figure 1. Figure 4 shows the node interactions from node embeddings Z (0) to supernodes (cid:98) S (1) and the graph diffusion from (cid:98) S (1) to node embeddings Z (1) .",
"In which, the supernode S 4 puts greater attention on the segment it's the tension that keeps you in your seat .",
"This segment determines its positive Initializedembeddings Model TWITTER LAP14 REST14 REST15 REST16 Acc.",
"polarity significantly.",
"The other supernodes S 1 , S 2 , S 3 , S 5 just aggregate messages from the global context evenly.",
"Next, the messages aggregated in supernodes S 1 S 5 are mainly diffused to four tokens conflict , decent , moral , you .",
"That verifies the self-adaptively learned graph structure A (1) f A (1) b by the MVCAttn improves the multihop graph reasoning on nodes conflict, decent, moral, you .",
"From the perspective of semantics, these four words determine the positive sentiment in this comment significantly.",
"Figure 5 shows the message aggregation from node embeddings Z (1) to supernodes (cid:98) S (2) and the message diffusion from (cid:98) S (2) to node embeddings Z (2) .",
"We can see that the supernode S 4 puts greater attention on another segment there is a decent moral young to get out , which also contributes to the sentiment polarity.",
"Then messages aggregated to supernodes S 1 S 5 are diffused to all words evenly.",
"The backward interactions from supern-Figure 5: The word interactions in MVCAttn A (2) f A (2) b of the 2nd HD-ChebNet, where S 1 S 5 represent the supernodes.",
"odes S 1 S 5 to all graph nodes do not have visible differences.",
"These results demonstrate that the multi-hop graph reasoning in HDGCN just needs one graph convolutional layer to attain the stationary state.",
"The third experiment evaluates HDGCN's performance on the task of aspect-based sentiment classification.",
"This task aims to identify whether the sentiment polarities of aspect are explicitly given in sentences (Zhao et al., 2020).",
"The datasets used in this task include TWITTER, LAP14, REST14, REST15, REST16 (Zhao et al., 2020).",
"The details about the statistics on these datasets are shown in Figure 6.",
"The SOTA comparison models include AOA, TNet-LF, ASCNN, ASGCN-DT, ASGCN-DG, AEN-BERT, BERT-PT, SDGCN-BERT.",
"Each sample in this task includes a sentence pair, an aspect, and a label.",
"The sentence pair and the aspect are concatenated into one long sentence, and the text graph is preprocessed with the dependency tree on this sentence.",
"HDGCN is tested twice with word embeddings initialized by pre-trained 300 -d GloVe and 768 -d BERT-base respectively.",
"Table 3 shows the test accuracies and micro-F1 scores on 5 datasets, where HDGCN achieves new state-of-the-art results on TWITTER, REST14, REST15, REST16, and a top-3 result on the LAP14.",
"As shown in Figure 6 that the LAP14 has the maximum percentage of long sentences among all datasets.",
"A shallow network in HDGCN does not outperform the SOTA result on the LAP14.",
"Additionally, compared with the newest ASGCN and attention-based AOA, HDGCN achieves the best results on TWITTER, LAP14, REST15, REST16 (Acc) and performs very close with the highest accuracy on REST14 and macro-F1 score on REST16.",
"Above comparison supports that the matching between aspect and sentence pair in HDGCN is more accurate than the newest GNN and attention-based models, which verifies that the multi-hop graph reasoning is improved in HDGCN.",
"The fourth experiment evaluates HDGCN's performance on the Stanford natural language inference (SNLI) task (Bowman et al., 2015).",
"This task aims to predict the semantic relationship is entailment or contradiction or neutral between a premise sentence and a hypothesis sentence.",
"All the comparison methods include fine-tuned BERT-base, MT-DNN (Liu et al., 2020), SMART (Jiang et al., 2020), and CA-MTL (Pilault et al., 2021).",
"In this task, the premise and hypothesis sentences are concatenated and constructed into a long sentence.",
"Which is preprocessed to a text graph with the dependency tree.",
"The word embeddings in HDGCN were initialized from the pre-trained 768 -d BERT-base.",
"All test accuracies are shown in Table 4, where HDGCN achieves the new state-of-the-art results Model Total parameters % data used 0.1% 1.0% 10% BERT-base (Devlin et al., 2019) 1 .",
"on the 10% data.",
"As the MT-DNN, SMART and CA-MTL are all fine-tuned on multi-task learning, they perform better than HDGCN in low resource regimes ( 0 . 1% and 1 . 0% of the data).",
"HDGCN just uses 0 .",
"02 more parameters than the BERT-base, and it outperforms the later model on all scales of data.",
"These results verify that the combination of prior graph structure and self-adaptive graph structure in HDGCN performs comparably with the fully-adaptive graph structures in Transformers and BERT-based multi-task learning models.",
"The fifth experiment evaluates the effectiveness of HDGCN on the node classification task.",
"We use three standard citation network benchmark datasets Cora, Citeseer, and Pubmed, to compare the test accuracies on transductive node classification.",
"In the three datasets, the nodes represent the documents and edges (undirected) represent citations.",
"The node features correspond to elements of a bag-of-words representation of a document (Velickovic et al., 2018).",
"We also use the PPI dataset to compare the results on inductive node classification, which consists of graphs corresponding to different human tissues.",
"The baselines for comparison include GCN, GAT, Graph-Bert, GraphNAS, Loopy-Net, HGCN, GRACE, GCNII.",
"The results of our evaluation are recorded in Table 5.",
"HDGCN achieves the new state-of-the-art results on Cora, Citeseer and Pubmed, and performs equally best with the newest GCNII on PPI.",
"Our ablation model, HDGCNstatic , also achieves close results with the newest GNNs on Cora, Citeseer, Pubmed, but it performs poorly on PPI.",
"Which verifies that the high-order Chebyshev approximation of spectral graph convolution has more serious over-smoothing problem in inductive node classification than transductive node classification.",
"All comparisons in this experiment demonstrate the effectiveness of MVCAttn to avoid the over-smoothing problem.",
"This study proposes a multi-hop graph convolutional network on high-order dynamic Chebyshev approximation (HDGCN) for text reasoning.",
"To improve the multi-hop graph reasoning, each convolutional layer in HDGCN fuses low-pass signals (direct dependencies saved in fixed graph structures) and high-pass signals (multi-hop dependencies adaptively learned by MVCAttn) simultaneously.",
"We also firstly propose the multi-votes based cross-attention (MVCAttn) mechanism to alleviate the over-smoothing in high-order Chebyshev approximation, and it just costs the linear computation complexity.",
"Our experimental results demonstrate that HDGCN outperforms compared SOTA models on multiple transductive and inductive NLP tasks.",
"This work is supported by Natural Science Foundation of China (Grant No.61872113, 62006061), Strategic Emerging Industry Development Special Funds of Shenzhen (Grant No.XMHT20190108009), the Tencent Group Science and Technology Planning Project of Shenzhen (Grant No.JCYJ20190806112210067) and Shenzhen Foundational Research Funding (Grant No.JCYJ20200109113403826)."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"other",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"other"
] |
[
"Current state-of-the-art NMT systems use large neural networks that are not only slow to train, but also often require many heuristics and optimization tricks, such as specialized learning rate schedules and large batch sizes.",
"This is undesirable as it requires extensive hyperparameter tuning.",
"In this paper, we propose a curriculum learning framework for NMT that reduces training time, reduces the need for specialized heuristics or large batch sizes, and results in overall better performance.",
"Our framework consists of a principled way of deciding which training samples are shown to the model at different times during training, based on the estimated difficulty of a sample and the current competence of the model.",
"Filtering training samples in this manner prevents the model from getting stuck in bad local optima, making it converge faster and reach a better solution than the common approach of uniformly sampling training examples.",
"Furthermore, the proposed method can be easily applied to existing NMT models by simply modifying their input data pipelines.",
"We show that our framework can help improve the training time and the performance of both recurrent neural network models and Transformers, achieving up to a 70% decrease in training time, while at the same time obtaining accuracy improvements of up to 2.2 BLEU.",
"Neural Machine Translation (NMT; Kalchbrenner and Blunsom (2013); Bahdanau et al. (2015)) now represents the state-of-the-art adapted in most machine translation systems (Wu et al., 2016; Crego et al., 2016; Bojar et al., 2017a), largely due to its ability to benefit from end-to-end training on massive amounts of data.",
"In particular, recently-introduced self-attentional Transformer architectures (Vaswani et al., 2017) are rapidly becoming the de-facto standard in NMT, having demonstrated CURRICULUM LEARNINGDIFFICULTY Use sample only if: difficulty(sample) competence(model) COMPETENCEMODELTRAINER DATASAMPLEMODELSTATE Figure 1: Overview of the proposed curriculum learning framework.",
"both superior performance and training speed compared to previous architectures using recurrent neural networks (RNNs; (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014)).",
"However, large scale NMT systems are often hard to train, requiring complicated heuristics which can be both time-consuming and expensive to tune.",
"This is especially true for Transformers which, when carefully tuned, have been shown to consistently outperform RNNs (Popel and Bojar, 2018), but on the other hand, also rely on a number of heuristics such as specialized learning rates and large-batch training.",
"In this paper, we attempt to tackle this problem by proposing a curriculum learning framework for training NMT systems that reduces training time, reduces the need for specialized heuristics or large batch sizes, and results in overall better performance.",
"It allows us to train both RNNs and, perhaps more importantly, Transformers, with relative ease.",
"Our proposed method is based on the idea of teaching algorithms in a similar manner as humans, from easy concepts to more difficult ones.",
"This idea can be traced back to the work of Elman (1993) and Krueger and Dayan (2009).",
"The main motivation is that training algorithms can perform better if training data is presented in a specific order, starting from easy examples and moving on to more difficult ones, as the learner becomes more competent .",
"In the case of machine learning, it can also be thought of as a means to avoid getting stuck in bad local optima early on in training.",
"An overview of the proposed framework is shown in Figure 1. Notably, we are not the first to examine curriculum learning for NMT, although other related works have met with mixed success.",
"Kocmi and Bojar (2017) explore impact of several curriculum heuristics on training a translation system for a single epoch, presenting the training examples in an easy-to-hard order based on sentence length and vocabulary frequency.",
"However, their strategy introduces all training samples during the first epoch, and how this affects learning in following epochs is not clear, with official evaluation results (Bo-jar et al., 2017b) indicating that final performance may indeed be hurt with this strategy.",
"Contemporaneously to our work, Zhang et al. (2018) further propose to split the training samples into a prede-fined number of bins (5, in their case), based on various difficulty metrics.",
"A manually designed curriculum schedule then specifies the bins from which the model samples training examples.",
"Experiments demonstrate that benefits of curriculum learning are highly sensitive to several hyperparameters (e.g., learning rate, number of iterations spent in each phase, etc.), and largely provide benefits in convergence speed as opposed to final model accuracy.",
"In contrast to these previous approaches, we define a continuous curriculum learning method (in-stead of a discretized regime) with only one tunable hyperparameter (the duration of curriculum learn-ing).",
"Furthermore, as opposed to previous work which only focuses on RNNs, we also experiment with Transformers, which are notoriously hard to train (Popel and Bojar, 2018).",
"Finally, unlike any of the work described above, we show that our curriculum approach helps not only in terms of convergence speed, but also in terms of the learned model performance.",
"In summary, our method has the following desirable features: 1. Abstract: It is a novel, generic, and extensible formulation of curriculum learning.",
"that of Kocmi and Bojar (2017), can be formulated as special cases of our framework.",
"2. Simple: It can be applied to existing NMT systems with only a small modification to their training data pipelines.",
"3. Automatic: It does not require any tuning other than picking the value of a single parameter, which is the length of the curriculum (i.e., for how many steps to use curriculum learning, before easing into normal training).",
"4. Efficient: It reduces training time by up to 70%, whereas contemporaneous work of Zhang et al. (2018) reports reductions of up to 46%.",
"5. Improved Performance: It improves the performance of the learned models by up to 2.2 BLEU points, where the best setting reported by Zhang et al. (2018) achieves gains of up 1.55 BLEU after careful tuning.",
"We propose competence-based curriculum learning , a training framework based on the idea that training algorithms can perform better if training data is presented in a way that picks examples appropriate for the model's current competence .",
"More specifically, we define the following two concepts that are central to our framework: Difficulty: A value that represents the difficulty of a training sample and that may depend on the current state of the learner.",
"For example, sentence length is an intuitive difficulty metric for natural language processing tasks.",
"The only constraint is that difficulty scores are comparable across different training samples (i.e., the training samples can be ranked according to their difficulty).",
"Competence: A value between 0 and 1 that represents the progress of a learner during its training.",
"It is defined as a function of the learner's state.",
"More specifically, we define the competence, c ( t ) at time t (measured in terms of training steps), of a learner as the proportion of training data it is allowed to use at that time.",
"The training examples are ranked according to their difficulty and the learner is only allowed to use the top c ( t ) portion of them at time t .",
"Using these two concepts, we propose Algorithm 1 (a high-level overview is shown in Figure 1, an example visualization of the first two steps is shown Thankyouverymuch!",
"Sentence Difficulty",
"in Figure 2, and an example of the interaction between difficulty and competence is shown in Figure 3).",
"Note that, at each training step, we are not changing the relative probability of each training sample under the input data distribution, but we are rather constraining the domain of that distribution, based on the current competence of the learner.",
"Eventually, once the competence becomes 1 , the training process becomes equivalent to that without using a curriculum, with the main difference that the learner should now be more capable to learn from the more difficult examples.",
"Given the dependence of this algorithm on the specific choices of the difficulty scoring function, d , and the competence function, c , we now describe our instantiations for training NMT models.",
"There are many possible ways of defining the difficulty of translating a sentence.",
"We consider two heuristics inspired by what we, as humans, may consider difficult when translating, and by factors which can negatively impact the optimization algorithms used when training NMT models.",
"In the rest of this section we denote our training corpus as a collection of M sentences, { s i } M i =1 , where each sentence is a sequence of words.",
"s i = { w i 0 , . . . , w iN i } .",
"Sentence Length: We argue that it is harder to translate longer sentences, as longer sentences require being able to translate their component parts, which often consist of short sentences.",
"Furthermore, longer sentences are intuitively harder to translate due to the propagation of errors made early on when generating the target language sentence.",
"Therefore, a simple way to define the difficulty of a sentence s i = { w i 0 , . . . , w iN i } is as follows: d length ( s i ) (cid:44) N i .",
"Note that we can compute this difficulty metric on either the source language sentence or the target language sentence.",
"We only consider the source sentence in this paper 1 .",
"Word Rarity: Another aspect of language that can affect the difficulty of translation is the frequency with which words appear.",
"For example, humans may find rare words hard to translate because we rarely ever see them and it may be hard to recall their meaning.",
"The same can be true for NMT models where:",
"(i) the statistical strength of the training examples containing rare words is low and thus the model needs to keep revisiting such words in order to learn robust representations for them, and",
"(ii) the gradients of the rare word embeddings tend to have high variance; they are overestimates of the true gradients in the few occasions where they are non-zero, and underestimates otherwise.",
"This suggests that using word frequencies may be a helpful difficulty heuristic.",
"Given a corpus of sentences, { s i } Mi =1 , we define relative word frequencies as: p ( w j ) (cid:44) 1 N total M (cid:88) i =1 N i (cid:88) k =1 1 w ik = w j , (2) where j = 1 , . . . , # { unique words in corpus } and 1 condition is the indicator function which is equal to 1 if its condition is satisfied and 0 otherwise.",
"Next we need to decide how to aggregate the relative word frequencies of all words in a sentence to obtain a single difficulty score for that sentence.",
"Previous research has proposed various pooling operations, such as minimum, maximum, and average (Zhang et al., 2018), but they show that they do not work well in practice.",
"We propose a different approach.",
"Ultimately, what might be most important is the overall likelihood of a sentence as that contains information about both word frequency and, implicitly, sentence length.",
"An approximation to this likelihood is the product of the unigram probabilities, which is related to previous work in the area of active learning (Settles and Craven, 2008).",
"This product can be thought of as an approximate language model (assuming words are sampled independently) and also implicitly incorporates in-1 NMT models typically first pick up information about producing sentences of correct length.",
"It can be argued that presenting only short sentences first may lead to learning a strong bias for the sentence lengths.",
"In our experiments, we did not observe this to be an issue as the models kept improving and predicting sentences of correct length, throughout training.",
"formation about the sentence length that was proposed earlier (longer sentence scores are products over more terms in [0 , 1] and are thus likely to be smaller).",
"We thus propose the following difficulty heuristic: d rarity ( s i ) (cid:44) N i (cid:88) k =1 log p ( w ik ) , (3) where we use logarithms of word probabilities to prevent numerical errors.",
"Note that negation is used because we define less likely (i.e., more rare) sentences as more difficult.",
"These are just two examples of difficulty metrics, and it is easy to conceive of other metrics such as the occurrence of homographs (Liu et al., 2018) or context-sensitive words (Bawden et al., 2018), the examination of which we leave for future work.",
"For this paper, we propose two simple functional forms for c ( t ) and justify them with some intuition.",
"More sophisticated strategies that depend on the loss function, the loss gradient, or on the learner's performance on held-out data, are possible, but we do not consider them in this paper.",
"In this case, new training examples are constantly being introduced during the training process, with a constant rate r (as a proportion of the total number of available training examples).",
"Note that we can also define r = (1 c 0 ) /T , where T denotes the time after which the learner is fully competent, which results in: c linear ( t ) (cid:44) min (cid:18) 1 , t 1 c 0 T + c 0 (cid:19) .",
"Root: In the case of the linear form, the same number of new and more difficult, examples are added to the training set, at all times t .",
"However, as the training data grows in size, it gets less likely that any single data example will be sampled in a training batch.",
"Thus, given that the newly added examples are less likely to be sampled, we propose to reduce the number of new training examples per unit time as training progresses to give the learner sufficient time to assimilate their information content.",
"More specifically, we define the rate in which new examples are added as inversely proportional to the current training data size: dc ( t ) dt = P c ( t ) , (6) for some constant P 0 .",
"for some constants P and D .",
"Then, we consider the following constraint: c 0 (cid:44) c (0) = D D = c 20 .",
"Finally, we also have that c ( T ) = 1 P = (1 c 20 ) / 2 T , where T denotes the time after which the learner is fully competent.",
"This, along with the constraint that c ( t ) [0 , 1] for all t 0 , results in the following definition: c sqrt ( t ) (cid:44) min (cid:32) 1 , (cid:114) t 1 c 2 0 T + c 20 (cid:33) .",
"In our experiments, we refer to this specific formulation as the square root competence model.",
"If we want to make the curve sharper , meaning that even more time is spent per sample added later on in training, then we can consider the following more general form, for p 1 : c rootp ( t ) (cid:44) min (cid:32) 1 , p (cid:114) t 1 c p 0 T + c p 0 (cid:33) .",
"We observed that best performance is obtained when p = 2 and then, as we increase p , performance converges to that obtained when training without a curriculum.",
"Plots of the competence functions we presented are shown in Figure 4. 2.3 Scalability Our method can be easily used in large-scale NMT systems.",
"This is because it mainly consists of a preprocessing step of the training data that computes the difficulty scores.",
"The implementation we are releasing with this paper computes these scores in an efficient manner by building a graph describing their dependencies, as well as whether they are sentence-level scores (e.g., sentence length), or corpus-level (e.g., CDF), and using that graph to optimize their execution.",
"Using only 8GB of memory, we can process up to 20k sentences per second when computing sentence rarity scores, and up to 150k sentences per second when computing sentence length scores.",
"For our experiments, we use three of the most commonly used datasets in NMT, that range from a small benchmark dataset to a large-scale dataset with millions of sentences.",
"Statistics about the datasets are shown in Table 1. We perform experiments using both RNNs and Transformers.",
"For the RNN experiments we use a bidirectional LSTM for the encoder, and an LSTM with the attention model of Bahdanau et al. (2015) for the decoder.",
"The number of layers of the encoder and the decoder are equal.",
"We use a 2-layer encoder and a 2-layer decoder for all experiments on IWSLT datasets, and a 4-layer encoder and a 4-layer decoder for all experiments on the WMT dataset, due to the dataset's significantly larger size.",
"For the Transformer experiments we use the BASE model proposed by Vaswani et al. (2017).",
"It consists of a 6-layer encoder and decoder, using 8 attention heads, and 2,048 units for the feed-forward layers.",
"The multi-head attention keys and values depth is set to the word embedding size.",
"The word embedding size is 512 for all experiments.",
"Furthermore, for the Transformer experiments on the two smaller datasets we do not use any learning rate schedule, and for the experiments on the largest dataset we use the default Transformer schedule.",
"A detailed discussion on learning rate schedules for Transformers is provided near the end of this section.",
"All of our experiments were conducted on a machine with a single Nvidia V100 GPU, and 24 GBs of system memory.",
"During training, we use a label smoothing factor of 0.1 (Wu et al., 2016) and the AMSGrad optimizer (Reddi et al., 2018) with its default parameters in TensorFlow, and a batch size of 5,120 tokens Dataset # Train # Dev # Test IWSLT-15 En (cid:41) Vi 133 k 768 1268 IWSLT-16 Fr (cid:41) En 224 k 1080 1133 WMT-16 En (cid:41) De 4 .",
"(due to GPU memory constraints).",
"During inference, we employ beam search with a beam size of 10 and the length normalization scheme of Wu et al. (2016).",
"2 Curriculum Hyperparameters.",
"We set the initial competence c 0 to 0.01, in all experiments.",
"This means that all models start training using the 1% easiest training examples.",
"The curriculum length T is effectively the only hyperparameter that we need to set for our curriculum methods.",
"In each experiment, we set T in the following manner: we train the baseline model without using any curriculum and we compute the number of training steps it takes to reach approximately 90% of its final BLEU score.",
"We then set T to this value.",
"This results in T being set to 5,000 for the RNN experiments on the IWSLT datasets, and 20,000 for the corresponding Transformer experiments.",
"For WMT, we set T to 20,000 and 50,000 for RNNs and Transformers, respectively.",
"Furthermore, we use the following notation and abbreviations when presenting our results: Plain: Trained without using any curriculum.",
"SL: Curriculum with sentence length difficulty.",
"SR: Curriculum with sentence rarity difficulty.",
"Linear: Curriculum with the linear competence shown in Equation 5. Sqrt: Curriculum with the square root competence shown in Equation 7.",
"Data Preprocessing.",
"Our experiments are performed using the machine translation library released by Platanios et al. (2018).",
"We use the same data preprocessing approach the authors used in their experiments.",
"While training, we consider sentences up to length 200.",
"Similar to them, for the IWSLT-15 experiments we use a per-language vocabulary which contains the 20,000 most frequently 2 We emphasize that we did not run experiments with other architectures or configurations, and thus our baseline architectures were not chosen because they were favorable to our method, but rather because they were frequently mentioned in existing literature.",
"occurring words, while ignoring words that appear less than 5 times in the whole corpus.",
"For the IWSLT-16 and WMT-16 experiments we use a byte-pair encoding (BPE) vocabulary (Sennrich et al., 2016) trained using 32,000 merge operations, similar to the original Transformer paper by Vaswani et al. (2017).",
"Results.",
"We present a summary of our results in Table 2 and we also show complete learning curves for all methods in Figure 5. The evaluation metrics we use are the test set BLEU score and the time it takes for the models using curriculum learning to obtain the BLEU score that the baseline models attain at convergence.",
"We observe that Transformers consistently benefit from our curriculum learning approach, achieving gains of up to 2 BLEU, and reductions in training time of up to 70%.",
"RNNs also benefit, but to a lesser extent.",
"This is consistent with our motivation for this paper, which RNN TRANSFORMER Plain SL Curriculum SR Curriculum Plain Plain* SL Curriculum SR Curriculum c linear c sqrt c linear c sqrt c linear c sqrt c linear c sqrt BLEU En (cid:41) Vi 26.27 26.57 27.23 26.72 26.87 28.06 29.77 29.14 29.57 29.03 29.81 Fr (cid:41) En 31.15 31.88 31.92 31.39 31.57 34.05 34.88 34.98 35.47 35.30 35.83 En (cid:41) De 26.53 26.55 26.54 26.62 26.62 27.95 28.71 29.28 29.93 30.16 T i m e En (cid:41) Vi 1.00 0.64 0.61 0.71 0.57 1.00 1.00 0.44 0.33 0.35 0.31 Fr (cid:41) En 1.00 1.00 0.93 1.10 0.73 1.00 1.00 0.49 0.44 0.42 0.39 En (cid:41) De 1.00 0.86 0.89 1.00 0.83 1.00 0.58 0.55 0.55 0.55 Table 2: Summary of experimental results.",
"stems from the observation that training RNNs is easier and more robust than training Transformers.",
"Furthermore, the square root competence model consistently outperforms the linear model, which fits well with our intuition and motivation for introducing it.",
"Regarding the difficulty heuristics, sentence length and sentence rarity both result in similar performance.",
"We also observe that, for the two small datasets, RNNs converge faster than Transformers in terms of both the number of training iterations and the overall training time.",
"This is contrary to other results in the machine translation community (e.g., Vaswani et al., 2017), but could be explained by the fact that we are not using any learning rate schedule for training Transformers.",
"However, they never manage to outperform Transformers in terms of test BLEU score of the final model.",
"Furthermore, to the best of our knowledge, for IWSLT-15 we achieve state-of-the-art performance.",
"The highest previously reported result was 29 .",
"03 BLEU (Pla-tanios et al., 2018), in a multi-lingual setting.",
"Using our curriculum learning approach we are able to achieve a BLEU score of 29 .",
"81 for this dataset.",
"Overall, we have shown that our curriculum learning approach consistently outperforms models trained without any curriculum , in both limited data settings and large-scale settings.",
"Learning Rate Schedule.",
"In all of our IWSLT experiments so far, we use the default AMSGrad learning rate of 0.001 and intentionally avoid using any learning rate schedules.",
"However, Transformers are not generally trained without a learning rate schedule, due to their instability.",
"Such schedules typically use a warm-up phase, which means that the learning rate starts at a very low value and keeps increasing until the end of the warm-up period, after which a decay rate is typically used.",
"In order to show that our curriculum learning approach can act as a principled alternative to such highly tuned learning rate schedules, we now present the results we obtain when training our Transformers using the following learning rate schedule: lr ( t ) (cid:44) d 0 .",
"where t is the current training step, d embedding is the word embeddings size, and T warmup is the number of warmup steps and is set to 10,000 in these experiments.",
"This schedule was proposed in the original Transformer paper (Vaswani et al., 2017), and was tuned for the WMT dataset.",
"The results obtained when using this learning rate schedule are also shown in table 2, under the name Plain*.",
"In both cases, our curriculum learning approach obtains a better model in about 70% less training time .",
"This is very important, especially when applying Transformers in new datasets, because such learning rate heuristics often require careful tuning.",
"This tuning can be both very expensive and time consuming, often resulting in very complex mathematical expressions, with no clear motivation or intuitive explanation (Chen et al., 2018).",
"Our curriculum learning approach achieves better results, in significantly less time, while only requiring one parameter (the length of the curriculum).",
"Note that even without using any learning rate schedule, our curriculum methods were able to achieve performance comparable to the Plain* in about twice as many training steps.",
"Plain was not able to achieve a BLEU score above 2 .",
"00 even after fives times as many training steps, at which point we stopped these experiments.",
"Implementation and Reproducibility.",
"We are releasing an implementation of our proposed method and experiments built on top of the machine translation library released by Platanios et al. (2018), using TensorFlow Scala (Platanios, 2018), and is available at https://github.com/ eaplatanios/symphony-mt .",
"Furthermore, all experiments can be run on a machine with a single Nvidia V100 GPU, and 24 GBs of system memory.",
"Our most expensive experiments the ones using Transformers on the WMT-16 dataset take about 2 days to complete, which would cost about $125 on a cloud computing service such as Google Cloud or Amazon Web Services, thus making our results reproducible, even by independent researchers.",
"The idea of teaching algorithms in a similar manner as humans, from easy concepts to more difficult ones, has existed for a long time (Elman, 1993; Krueger and Dayan, 2009).",
"Machine learning models are typically trained using stochastic gradient descent methods, by uniformly sampling mini-batches from the pool of training examples, and using them to compute updates for the model parameters.",
"Deep neural networks, such as RNNs and Transformers, have highly non-convex loss functions.",
"This makes them prone to getting stuck in saddle points or bad local minima during training, often resulting in long training times and bad generalization performance.",
"Bengio et al. (2009) propose a curriculum learning approach that aims to address these issues by changing the mini-batch sampling strategy.",
"They propose starting with a distribution that puts more weight on easy samples, and gradually increase the probability of more difficult samples as training progresses, eventually converging to a uniform distribution.",
"They demonstrate empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization.",
"Perhaps the earliest attempt to apply curriculum learning in MT was made by Zou et al. (2013).",
"The authors employed a curriculum learning method to learn Chinese-English bilingual word embeddings, which were subsequently used in the context of phrase-based machine translation.",
"They split the word vocabulary in 5 separate groups based on word frequency, and learned separate word embeddings for each of these groups in parallel.",
"Then, they merged the 5 different learned embeddings and continued training using the full vocabulary.",
"While this approach makes use of some of the ideas behind curriculum learning, it does not directly follow the original definition introduced by Bengio et al. (2009).",
"Moreover, their model required 19 days to train.",
"There have also been a couple of attempts to apply curriculum learning in NMT that were discussed in section 1. There also exists some relevant work in areas other than curriculum learning.",
"Zhang et al. (2016) propose training neural networks for NMT by focusing on hard examples, rather than easy ones.",
"They report improvements in BLEU score, while only using the hardest 80% training examples in their corpus.",
"This approach is more similar to boosting by Schapire (1999), rather than curriculum learning, and it does not help speed up the training process; it rather focuses on improving the performance of the trained model.",
"The fact that hard examples are used instead of easy ones is interesting because it is somewhat contradictory to that of curriculum learning.",
"Also, in contrast to curriculum learning, no ordering of the training examples is considered.",
"Perhaps another related area is that of active learning, where the goal is to develop methods that request for specific training examples.",
"Haffari et al. (2009), Bloodgood and Callison-Burch (2010), and Ambati (2012) all propose methods to solicit training examples for MT systems, based on the occurrence frequency of n-grams in the training corpus.",
"The main idea is that if an n-gram is very rare in the training corpus, then it is difficult to learn to translate sentences in which it appears.",
"This is related to our sentence rarity difficulty metric and points out an interesting connection between curriculum learning and active learning.",
"Regarding training Transformer networks, Shazeer and Stern (2018) perform a thorough experimental evaluation of Transformers, when using different optimization configurations.",
"They show that a significantly higher level of performance can be reached by not using momentum during optimization, as long as a carefully chosen learning rate schedule is used.",
"Such learning rate schedules are often hard to tune because of the multiple seemingly arbitrary terms they often contain.",
"Furthermore, Popel and Bojar (2018) show that, when using Transformers, increasing the batch size results in a better model at convergence.",
"We believe this is indicative of very noisy gradients when starting to train Transformers and that higher batch sizes help increase the signal-to-noise ratio.",
"We show that our proposed curriculum learning method offers a more principled and robust way to tackle this problem.",
"Using our approach, we are able to train Transformers to state-of-the-art performance, using small batch sizes and without the need for peculiar learning rate schedules, which are typically necessary.",
"We have presented a novel competence-based curriculum learning approach for training neural machine translation models.",
"Our resulting framework is able to boost performance of existing NMT systems, while at the same time significantly reducing their training time.",
"It differs from previous approaches in that it does not depend on multiple hyperparameters that can be hard to tune, and it does not depend on a manually designed discretized training regime.",
"We define the notions of competence , for a learner, and difficulty , for the training examples, and propose a way to filter training data based on these two quantities.",
"Perhaps most interestingly, we show that our method makes training Transformers faster and more reliable, but has a much smaller effect in training RNNs.",
"In the future, we are mainly interested in:",
"(i) exploring more difficulty heuristics, such as measures of alignment between the source and target sentences (Kocmi and Bojar, 2017), sentence length discrepancies, or even using a pre-trained language model to score sentences, which would act as a more robust replacement of our sentence rarity heuristic, and",
"(ii) exploring more sophisticated competence metrics that may depend on the loss function, the loss gradient, or on the learner's performance on held-out data.",
"Furthermore, it would be interesting to explore applications of curriculum learning to multilingual machine translation (e.g., it may be easier to start with high-resource languages and move to low-resource ones later on).",
"We would also like to explore the usefulness of our framework in more general machine learning tasks, outside of NMT.",
"We would like to thank Maruan Al-Shedivat and Dan Schwartz for the useful feedback they provided in early versions of this paper.",
"This research was supported in part by AFOSR under grant FA95501710218."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"method",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"objective",
"method",
"objective",
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"UCLA Los Angeles, USA",
"Anjie Fang Amazon.com, Inc.",
"Seattle, WA, USA Oleg Rokhlenko Amazon.com, Inc.",
"Seattle, WA, USA Shervin Malmasi Amazon.com, Inc.",
"Seattle, WA, USA tmeng@cs.ucla.edu {njfn,olegro,malmasi}@amazon.com Abstract Named Entity Recognition (NER) remains difficult in real-world settings; current challenges include short texts (low context), emerging entities, and complex entities (e.g. movie names).",
"Gazetteer features can help, but results have been mixed due to challenges with adding extra features, and a lack of realistic evaluation data.",
"It has been shown that including gazetteer features can cause models to overuse or underuse them, leading to poor generalization.",
"We propose GEMNET , a novel approach for gazetteer knowledge integration, including (1) a flexible Contextual Gazetteer Representation (CGR) encoder that can be fused with any word-level model; and (2) a Mixture-of-Experts gating network that overcomes the feature overuse issue by learning to conditionally combine the context and gazetteer features, instead of assigning them fixed weights.",
"To comprehensively evaluate our approaches, we create 3 large NER datasets (24M tokens) reflecting current challenges.",
"In an uncased setting, our methods show large gains (up to +49% F1) in recognizing difficult entities compared to existing baselines.",
"On standard benchmarks, we achieve a new uncased SOTA on CoNLL03 and WNUT17.",
"Identifying entities is a core NLP task.",
"Named Entity Recognition (NER) is the task of finding entities and recognizing their type (e.g., person or location).",
"Mention Detection (MD) is a simpler task of identifying entity spans, without the types.",
"Advances in neural NER have produced high scores on benchmark datasets like CoNLL03 and OntoNotes (Devlin et al., 2019).",
"However, a number of challenges remain.",
"As noted by Augenstein et al. (2017), these scores are driven by the use of well-formed news text, the presence of easy entities, and memorization effects due to entity overlap between train/test sets; these models perform sig-nificantly worse on unseen entities or noisy text.",
"Beyond news text, many challenges remain in NER.",
"Context information has been shown to be important for NER (Jayarao et al., 2018), and short texts like search queries are very challenging due to low context and a lack of surface features (Guo et al., 2009; Carmel et al., 2014).",
"Unseen and emerging entities also pose a challenge (Bernier-Colborne and Langlais, 2020).",
"Finally, some entities, like movie names are not simple noun phrases and are harder to recognize (Ashwini and Choi, 2014).",
"Table 1 lists more details about these challenges, and how they can be evaluated.",
"Entity Knowledge is essential for overcoming these issues, and critical in the absence of casing.",
"Even a human may not correctly parse what is [[life is beautiful]]? without knowing that a movie is being referenced.",
"However, most models start with no knowledge of real world entities, learning them from the training data.",
"Continuous data annotation can add new entities, but is expensive and often not feasible.",
"Consequently, methods for integrating external knowledge, e.g., Knowledge Bases (KBs) or gazetteers, into neural architectures have gained renewed attention.",
"However, such studies have reported limited gains (Liu et al., 2019; Rijhwani et al., 2020).",
"The mixed success of gazetteers stems from three main limitations in current work: gazetteer feature representation, their integration with contextual models, and a lack of data.",
"For the representation, one-hot binary encoding is often used to represent gazetteer features (Song et al., 2020).",
"However, this does not capture contextual info or span boundaries.",
"Alternatively, independent span taggers trained on gazetteers have been proposed to extract potential entities Liu et al. (2019), but such models can be difficult to train and may not provide reliable features.",
"There are also limitations in the integration of gazetteer features.",
"Existing studies often add extra features to a word-level model's Contextual Word Representations (CWRs), which typically contain no info about real world entities or their spans (Yamada et al., 2020).",
"This concatenation approach is sub-optimal as it creates additional, and often highly correlated features.",
"This has been shown to cause feature under-training, where the model will learn to mostly rely on either context or gazetteer during training, and underuse the other (Yang et al., 2016).",
"This can be problematic as the utility of the gazetteer is variable: it is valuable in low-context cases, but may not be useful when rich syntactic context (from the CWR) can identify entities.",
"Conversely, a true entity may be missing from the gazetteer.",
"However, when gazetteers are represented as an independent feature, the model assigns it a fixed weight, and its contribution to the prediction is static.",
"To overcome this, external knowledge should dynamically be infused into relevant dimensions of the CWR, with the model learning to conditionally balance the contribution of the CWR and gazetteer to the prediction.",
"Finally, these issues are compounded by a lack of data reflecting the challenges from Table 1, which prevents the exploration of effective architectures for knowledge injection.",
"The key contributions of this paper are new data and methods to address the above challenges.",
"We propose GEMNET , a gazetteer expert mixture network for effectively integrating gazetteers into any word-level model.",
"The model includes an encoder for C ontextual G azetteer R epresentations (CGRs) as a way to incorporate any number of gazetteers into a single, span-aware, dense representation.",
"We also propose a gated Mixture-of-Experts (MoE) method to fuse CGRs with Contextual Word Representations from any word-level model (e.g., BERT), something not explored in previous work.",
"Our novel MoE approach allows the model to conditionally compute a joint CGR-CWR representation, training a gating network to learn how to balance the contribution of context and gazetteer features.",
"Finally, we employ multi-stage training to drive further improvements by aligning the CGR/CWR vectors.",
"To evaluate our proposed approaches, we create 3 challenging NER datasets that represent short sentences, questions, and search queries.",
"The created datasets have complex entities with low-context and represent the challenges in Table 1.",
"Extensive experiments in an uncased setting show that our MoE model outperforms other baselines, including concatenation, in all experiments.",
"We achieve state-of-the-art (SOTA) results on CoNLL03/WNUT17, but its utility is more notable on our difficult low-context data.",
"We show that short texts make NER much harder, but gazetteers yield huge gains of up to +49% F1, specially in recognizing complex/unseen entities.",
"We also show that gazetteer coverage during training is important.",
"Deep Learning for NER Neural approaches have greatly improved NER results in recent years.",
"A shift to encoders e.g., BiLSTM-CRF models (Huang et al., 2015), using static word embeddings eliminated the need for manual feature engineering (e.g., capitalization features).",
"More recently, transformer-based Language Models (LMs), e.g., BERT (Devlin et al., 2019), achieved further improvements by using deep contextual word representations.",
"Such models jointly learn syntactic cues and entity knowledge, and may fail to recognize unseen or syntactically ambiguous entities.",
"Consequently, training data is augmented with gazetteers.",
"NER with Gazetteers Annotated NER data can only achieve coverage for a finite set of entities, but models face a potentially infinite entity space in the real world.",
"To address this, researchers have integrated gazetteers into models (Bender et al., 2003; Malmasi and Dras, 2015).",
"String matching is commonly used to extract gazetteer matches, which are then concatenated to word representations.",
"Song et al. (2020) use gazetteers from the Wikidata KB to generate one-hot vectors that are concatenated to BERT representations, yielding minor improvements on CoNLL03.",
"This concatenation approach has been shown to cause feature under-training (Yang et al., 2016), as discussed in 1.",
"An alternative approach uses gazetteers to train a subtagger model to recognize entity spans.",
"Liu et al. (2019) propose a hybrid semi-Markov CRF subtagger, reporting minor improvements.",
"While a subtagger may learn regularities in entity names, a key limitation is that it needs retraining and evaluation on gazetteer updates.",
"Recent work has considered directly integrating knowledge into transformers, e.g., KnowBert adds knowledge to BERT layers (Peters et al., 2019), and LUKE is pretrained to predict masked entities (Yamada et al., 2020).",
"The drawbacks of such methods are that they are specific to Transformers, and the model's knowledge cannot be updated without retraining.",
"We aim to overcome the limitations of previous work by designing a model-agnostic gazetteer representation that can be fused into any word-level model.",
"Mixture-of-Experts (MoE) Models MoE is an approach for conditionally computing a representation, given several expert inputs, which can be neural models with different architectures (Arnaud et al., 2020) or models using different knowledge sources (Jain et al., 2019).",
"In MoE, a gating network is trained to dynamically weight experts per-instance, according to the input.",
"It has demonstrated to be useful in various applications like recommendation (Zhu et al., 2020), domain adaptation for sentiment analysis, and POS tagging (Guo et al., 2018).",
"For NER, Liu et al. (2020) proposed a Mixture of Entity Experts (MoEE) approach where they train an expert layer for each entity type, and then combine them using an MoE approach.",
"Their approach does not include external gazetteers, and the experts provide an independent representation that is not combined with the word representation.",
"In our work we treat word and external gazetteer representations as independent experts, applying MoE to learn a dynamically fused representation.",
"We experiment using three standard benchmarks: CoNLL03, OntoNotes, and WNUT17.",
"However, these corpora do not capture the issues from Table 1; rich context and common entities (country names) allow a simple RNN model to achieve near-SOTA results.",
"A key contribution of our paper is the creation of 3 new datasets that represent those challenges.",
"They are difficult, as shown in 5.1.",
"NER Taxonomy: We adopt the WNUT 2017 (Derczynski et al., 2017) taxonomy entity types: PERSON ( PER for short, names of peo-ple), LOCATION ( LOC , locations/physical facili-ties), CORPORATION ( CORP , corporations and busi-nesses), GROUPS ( GRP , all other groups), PRODUCT ( PROD , consumer products), and CREATIVE-WORK ( CW , movie/song/book/etc. titles).",
"Our datasets are described below.",
"1 All data are uncased, and we make them publicly available.",
"2 Their statistics, listed in Table 2, show that they reflect the challenges from 1: short inputs (low context), with many unseen entities in the test set.",
"create our training set, we take advantage of the rich interlinks in Wikipedia.",
"We parse the English Wikpedia dump and extract sentences from all articles.",
"The sentences are parsed, and linked pages are resolved to their respective Wikidata entities to identify their type.",
"To mimic search and voice settings, we minimize the context around the entities by dropping sentences with unlinked entities, iden-tified using interlinks and a capitalization heuristic.",
"The result is a corpus of 1.4 million low-context sentences with annotated entities, e.g., A version for the [sega cd] was also announced. 1 More details about their development are in Appendix A 2 https://registry.opendata.aws/ lowcontext-ner-gaz Set Dataset Type # Sentence # Token # Entity Avg.",
"MSQ-NER : MS-MARCO Question NER To represent NER in the QA domain, we create a set of natural language questions, based on the MS-MARCO QnA corpus (V2.1) (Bajaj et al., 2016).",
"Like Wu et al. (2020), we templatize the questions by applying NER to extract item names, which are then mapped to our taxonomy.",
"Entities are replaced with their types to create templates, e.g., who sang <CW> and when did <PROD> come out .",
"Approx 3 .",
"5 k Templates (appearing > = 5 times) are chosen and slotted with entities from a knowledge base to generate 18 k annotated questions e.g., when did [xbox 360] come out .",
"There are a wide range of question shapes and entity types, please see Appendix A for examples.",
"ORCAS-NER : Search Query NER To represent the search query domain, we utilize 10 million Bing user queries from the ORCAS dataset (Craswell et al., 2020) and apply the same templatization procedure as MSQ-NER .",
"This yields search templates e.g., <PROD> price and <CORP> phone number , which are used to create annotated queries, e.g., [airpods pro] reviews .",
"A total of 472 k queries are generated from 97 k unique templates, please see examples in Appendix A. 3.1 Gazetteer Data Our gazetteer is composed of 1 .",
"67 million entities from the English Wikidata KB.",
"Instead of collecting entities from the web (Khashabi et al., 2018), we focused on entities that map to our taxonomy.",
"Alternative names (aliases) for entities are included.",
"Gazetteer statistics are listed in Appendix B. 4 The GEMNET Model We propose GEMNET , a generic gazetteer fusion approach that can be integrated with any word-level model, e.g., RNNs and Transformers.",
"We experiment with both BiLSTM-CRF and BERT-CRF models which produce (contextual) word representations, and complement these word experts with gazetteers.",
"The overall architecture is shown in Figure 1, and the components are detailed below.",
"Our gazetteer representations is obtained in two steps: entry matching, and contextual encoding.",
"Gazetteer Entry Matching A gazetteer g is a list of entries that are associated with a category.",
"For instance, a PERSON gazetteer contains a list of known people.",
"The k -th entry g ( k ) is associated with a tokenized string ( John Carpenter' ) and t ( k ) holds the IOB2 tags ( [B-PER, I-PER] ).",
"We use T to denote the tag set over all gazetteers, e.g., T = { B-PER,I-PER,B-LOC,I-LOC,O, ... } .",
"We denote input sentences as ( w 1 , w 2 , . . . , w L ) , where w i is the i -th word, and L is the length.",
"Full string matching is applied to inputs to identify matches across all gazetteers.",
"Overlapping matches are resolved by preferring longer ones over shorter ones, and earlier matches over later ones.",
"A match matrix, M { 0 , 1 } L | T | , represents the matching results.",
"It is initialized with zeros, and successful matches ( w i , w i +1 , . . . , w i + m ) = g ( k ) will set M i + j, t ( k ) j = 1 , j = 0 , 1 , . . . , m, indicating that the word w i + j is represented by a one-hot vector over the tag set T .",
"A key advantage of this representation is that it captures multiple matches for a given span in a sentence.",
"As shown in Table 3, the word apple can be matched to product and organization types.",
"Furthermore, it is span-aware due to the IOB2 encoding.",
"Any number of types and gazetteers can be added as needed, allowing the model to learn from correlations, and identify ambiguous entities.",
"M is extracted by a gazetteer matcher, as a preprocessing step outside the network.",
"This modular approach has an important advantage: it allows the gazetteer to be updated without model retraining.",
"This is useful for efficiently recognizing emerging entities, and supporting personalized user-defined entities (e.g., contact lists).",
"representation and f is an activation function.",
"This creates a dense representation that captures interactions between multiple matches.",
"We then contextualize this representation by applying a BiLSTM: h iforward = LSTM( h i 1 forward , h igaz ) h ibackward = LSTM( h i +1 backward , h igaz ) h iCGR = [ h iforward , h ibackward ] where [ , ] is the concatenation.",
"A sample visualization of the embeddings is shown in Appendix D. This dense contextualized gazetteer representation (CGR) can capture information about entity span boundaries (present in M ), as well as interactions between entities in a sentence.",
"The CGR operates on IOB2 tags and cannot memorize specific patterns; it is designed to be integrated with a lexical model.",
"We consider these representations to be orthogonal: CGRs can complement the model's knowledge and syntactic representation.",
"Mixture-of-Experts (MoE) Model The word-level model and CGRs complement each other and may not always be in agreement.",
"The word model may have low confidence about the span of an unseen entity, but the gazetteer may have knowledge of it.",
"Conversely, the model's syntactic context may be confident about a span not in the gazetteer.",
"In fact, the two sources can be considered as independent experts and an effective model should learn to use their outputs dynamically.",
"Inspired by the MoE architecture (Pavlitskaya et al., 2020), we apply conditional computation to combine our representations, allowing the model to learn the contexts where it should rely more on each expert.",
"We add a gating network to create a weighted linear combination of the word and gazetteer representations.",
"For a sentence, the two models output 3 their representations h word and h gaz , which are used to train the gating network: w e = ( [ h word , h CGR ]) , h = w e h word + (1 w e ) h CGR , where are trainable parameters with size 2 L , [ , ] is the concatenation and is the Sigmoid activation function.",
"We learn gating weights, w e , so that the model can learn to dynamically compute the hidden information h for each word.",
"The architecture of our model is shown in Figure 1.",
"After obtaining h , we feed it to a CRF layer to predict a tag.",
"Two-stage Training Our architecture jointly optimizes over both experts, but their initial states differ.",
"The word expert often contains pretrained elements, either as word embeddings or transformers.",
"The randomly-initialized CGR will have high initial loss, and its representation is not aligned with the word expert, preventing correct convergence.",
"We tackle this problem through a two-stage training method to adapt the two experts to each other.",
"In the first stage, we freeze the word ex-3 Outputs sizes must be equal, e.g., CGR must match BERT.",
"pert and only train the CGR encoder with the MoE and CRF layers, forcing the model to use gazetteer knowledge in order to minimize the loss.",
"Importantly, this also adapts the CGR encoder to align its representation with that of the word expert, e.g., the dimensions with noun signals will be aligned with those of BERT, enabling the computation of their linear combination.",
"In the second stage, the two experts are jointly fine-tuned to co-adapt them.",
"This ensures that the CGR encoder starts with reasonable weights, and allows the MoE gating network to better learn how to balance the two experts.",
"Data: All experiments are uncased, using standard benchmarks (CoNLL03, OntoNotes, WNUT17) and the new datasets we create (see 3).",
"Models: We integrate GEMNET with both BERT and BiLSTM word encoders.For BERT, we use the pretrained BERTBASE model.",
"The last output layer is used, and for each word, we use the first wordpiece representation as its representation.",
"The BiLSTM model has 3 inputs: GloVe embeddings (Pennington et al., 2014), ELMo embeddings (Peters et al., 2018) and CharCNN embeddings (Ma and Hovy, 2016).",
"Evaluation: We evaluate MD and NER, and report entity-level precision, recall and F1 scores.",
"Our first experiment aims to measure the difficulty of our datasets (3) relative to existing benchmarks.",
"We train a BERT model on CoNLL03 and use it to measure MD performance on our data.",
"Measuring NER performance is not possible as we use a different tag set (WNUT17 vs CoNLL03).",
"Results: Compared to the CoNLL03 results, the LOWNER performance is worse.",
"Although the evaluation on LOWNER is a transfer setting, the large gap shows the existing model cannot generalize well to our datasets due to the hard entities.",
"Results for MSQ-NER and ORCAS-NER , which are short texts, are even lower.",
"Overall, we note the difficulty of our datasets due to low context and hard entities.",
"We explore all model architectures by training on LOWNER (set 1 in Table 2) and evaluating MD and NER performance on all datasets (sets 35 in Table 2).",
"See Appendix C for training details.",
"Models: The GEMNET model is jointly trained and fused with BERT and BiLSTM word encoders, with and without two-stage training.",
"To assess the impact of the MoE component, we also concatenate the CGR and CWR vectors, without MoE.",
"Baselines: We compare against three baselines: (1) no gazetteer baselines; (2) binary concatenation: we simply concatenate the binary match features ( M ) to the word representations, as is common in the literature; (3) the subtagger model of Liu et al. (2019).",
"They are shown as baselines in table 5. Results: MD and NER performance for all models is shown in Table 5. Overall we note the high effectiveness of the GEMNET model.",
"In particular, our BiLSTM-based GEMNET approach improves F1 by up to 49% over the no gazetteer BiLSTM baseline in ORCAS-NER .",
"Different aspects of the results are discussed below.",
"Word Encoder Performance: For LOWNER , we note that BERT achieves the best results, which is to be expected since the data consists of full sentences.",
"MD is easier than NER, and represents the upper bound for NER.",
"Performance in all cases decreases with low context, with search queries (ORCAS-NER ) being the hardest.",
"BiLSTMs perform better on shorter inputs, e.g., ORCAS-NER .",
"Impact of Gazetteers: Results improve in all cases with external knowledge.",
"While the subtagger and the binary concatenation baselines yield gains compared to the no gazetteer baselines, our CGR-based approach outperforms all of them in all NER tests.",
"This indicates the high effectiveness of our CGR.",
"For LOWNER , using CGR+MoE, MD performance improves by 2 .",
"4% , while NER increases 4 .",
"7% over the no gazetteer BERT baseline.",
"Low-context data, MSQ-NER and ORCASNER , have much lower baseline performance, and benefit greatly from external knowledge.",
"The best MSQ-NERNER model improves 36% over the no gazetteer BiLSTM baseline, while ORCAS-NER increases by 49% .",
"This clearly demonstrates the impact of gazetteer integration.",
"Effect of Integration Method: CGR outperforms baselines in all NER experiments, showing the effectiveness of a span-aware, contextual representation that is jointly trained with the word-level model.",
"The MoE integration is superior to concatenation in all cases.",
"This is more salient in low context settings, demonstrating that the MoE model can rely on the CGR feature when the syntactic context (CWR) is not discriminative.",
"In some cases baselines actually degrade performance as the model can not effectively balance the experts.",
"Effect of Two-stage Training: We observe that two-stage training is crucial for BERT, including concatenation models and MoE models, but not for the BiLSTM model.",
"This confirms our hypothesis that the CGR cannot be jointly trained with a large pretrained model.",
"Freezing BERT and then jointly fine-tuning them provides great improvements.",
"Results on Benchmarks: We applied GEMNET , i.e., BERT using CGR+MoE with two stage training, to the standard benchmarks.",
"We experiment in an uncased setting, and and compare with the reported uncased SOTA (Mayhew et al., 2019).",
"The SOTA uses BERT-CRF, which are the same as our baseline architecture.",
"For comparison, we also reproduce the BERT baseline using our implementation.",
"Results are shown in Table 6. Our models achieve SOTA results in all uncased settings, demonstrating generalization across domains; we improve by 3.9% on WNUT17.",
"We also look at performance across different entity classes to understand the source of our improvements.",
"Table 7 shows relative gains per class, comparing the no gazetteer baseline performance against the best model.",
"Detailed precision/recall values are in Appendix E (Table 16).",
"The smallest gains are on PER and LOC types, and the largest gains are on products and creative works ( CW ).",
"This agrees with our hypothesis that these complex entities are the hardest to recognize.",
"Comparing datasets, increases are much larger on MSQ-NER and ORCAS-NER , confirming the challenges of short low-context inputs, and our models effectiveness in such cases.",
"We also conduct a qualitative error analysis to identify instances where the best non-gazetteer baseline fails, but our model provides correct output.",
"Some examples are shown in Table 8.",
"The baseline often lacks knowledge about complex and long-tail entities, either missing them (#1,6,8 show full or partial MD failure) or misclassifying them (#3-5 show NER errors).",
"Another common trend we observe is baselines incorrectly predicting nested entities within complex entities (#2,10).",
"We consider the impact of gazetteer coverage 4 on performance.",
"We hypothesize that training coverage impacts how much the model learns to rely on the gazetteer.",
"To verify this we examine two 4 The proportion of entities that are present in the gazetteer",
"scenarios: (1) the gazetteer coverage for train and test match (i.e., both high or low); and (2) there is a coverage gap between train and test, e.g., train coverage is 90% but is 25% for test, or vice versa.",
"Model and Data: For each train/test set we create gazetteers that have p % coverage of the set's gold entities, with p { 5 , 10 , 20 , 30 , 50 , 75 , 90 , 95 } .",
"This is achieved by randomly dropping entities.",
"We then train models using each p and evaluate on test sets, using all values of p .",
"This experiment is done using LOWNER and MSQ-NER .",
"Results: Results are plotted as heatmaps in Figure 2.",
"Best results occur with high train and test coverage, while the worst results fall under high training coverage but low test coverage.",
"When train coverage is low, test coverage has no impact as the model presumably ignores the gazetteer input.",
"Across test coverage values, best results are generally around the diagonal, i.e., matching training coverage.",
"These patterns are identical across datasets, indicating that a train/test coverage gap should be avoided.",
"In practice, if test set coverage cannot be measured, or high coverage is not guaranteed, then using lower training coverage (e.g., 50% ) prevents performance degradation in very low test coverage cases.",
"We also note that the gap between the best and worst result for LOWNER is not huge, showing the impact of sentence context.",
"This gap is much larger for ORCAS-NER , where the model cannot rely on the context.",
"Finally, we note that an alternative dynamic dropout method 5 achieved similar results.",
"We also consider the impact of a low-resource setting (limited annotations) on performance, hypothesizing that gazetteers are more helpful in such settings.",
"To verify this, we create random subsets of 5/10/20% of the training data and compare the NER performance of a baseline (BERT-base) vs our best model(BERT+CGR+MoE+2stage) when trained on this data.",
"Results are shown in Table 9. The results show that gazetteers are always more effective than baseline in low-resource scenarios.",
"Specifically, they improve much faster with less data, achieving close to maximum performance with only 20% of the data.",
"5 Gazetteer matches are randomly dropped during training (i.e., random entity dropout).",
"We focused on integrating gazetteers into NER models.",
"We proposed GEMNET , a flexible architecture that includes a Contextual Gazetteer Representation encoder, combined with a novel Mixture-of-Expert gating network to conditionally utilize this information alongside any word-level model.",
"GEMNET supports external gazetteers, allowing the model's knowledge to be updated without retraining.",
"We also developed new datasets to represent the current challenges in NER.",
"Experimental results demonstrated that our method can alleviate the feature weight under-training issue, achieving significant improvements on our data and a standard benchmark, WNUT17.",
"The datasets we released can serve as benchmarks for evaluating the entity knowledge possessed by models in future work.",
"Future work involves investigating integration with different model architectures, partial gazetteer matching, and additional entity features.",
"We would like to extend our gratitude to Eugene Agichtein, Alexandre Salle, and Besnik Fetahu for their valuable inputs and discussion during this project.",
"We also thank the anonymous reviewers for their constructive remarks."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"result",
"result",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"objective",
"method",
"abstain",
"other",
"other"
] |
[
"A core step in statistical data-to-text generation concerns learning correspondences between structured data representations (e.g., facts in a database) and associated texts.",
"In this paper we aim to bootstrap generators from large scale datasets where the data (e.g., DBPedia facts) and related texts (e.g., Wikipedia abstracts) are loosely aligned.",
"We tackle this challenging task by introducing a special-purpose content selection mechanism.",
"1 We use multi-instance learning to automatically discover correspondences between data and text pairs and show how these can be used to enhance the content signal while training an encoder-decoder architecture.",
"Experimental results demonstrate that models trained with content-specific objectives improve upon a vanilla encoder-decoder which solely relies on soft attention.",
"A core step in statistical data-to-text generation concerns learning correspondences between structured data representations (e.g., facts in a database) and paired texts (Barzilay and Lapata, 2005; Kim and Mooney, 2010; Liang et al., 2009).",
"These correspondences describe how data representations are expressed in natural language ( content realisation ) but also indicate which subset of the data is verbalised in the text ( content selection ).",
"Although content selection is traditionally performed by domain experts, recent advances in generation using neural networks (Bahdanau et al., 2015; Ranzato et al., 2016) have led to the use of large scale datasets containing loosely related data and text pairs.",
"A prime example are online data sources like DBPedia ( Auer et al., 2007) and Wikipedia and their associated texts which 1 Our code and data are available at https://github.com/EdinburghNLP/wikigen .",
"are often independently edited.",
"Another example are sports databases and related textual resources.",
"Wiseman et al. (2017) recently define a generation task relating statistics of basketball games with commentaries and a blog written by fans.",
"In this paper, we focus on short text generation from such loosely aligned data-text resources.",
"We work with the biographical subset of the DBPedia and Wikipedia resources where the data corresponds to DBPedia facts and texts are Wikipedia abstracts about people.",
"Figure 1 shows an example for the film-maker Robert Flaherty , the Wikipedia infobox, and the corresponding abstract.",
"We wish to bootstrap a data-to-text generator that learns to verbalise properties about an entity from a loosely related example text.",
"Given the set of properties in Figure (1a) and the related text in Figure (1b), we want to learn verbalisations for those properties that are mentioned in the text and produce a short description like the one in Figure (1c).",
"In common with previous work (Mei et al., 2016; Lebret et al., 2016; Wiseman et al., 2017) our model draws on insights from neural machine translation (Bahdanau et al., 2015; Sutskever et al., 2014) using an encoder-decoder architecture as its backbone.",
"Lebret et al. (2016) introduce the task of generating biographies from Wikipedia data, however they focus on single sentence generation.",
"We generalize the task to multi-sentence text, and highlight the limitations of the standard attention mechanism which is often used as a proxy for content selection.",
"When exposed to sub-sequences that do not correspond to any facts in the input, the soft attention mechanism will still try to justify the sequence and somehow distribute the attention weights over the input representation (Ghader and Monz, 2017).",
"The decoder will still memorise high frequency sub-sequences in spite of these not being supported by any facts in the input.",
"(a)",
"(b) Robert Joseph Flaherty, (February 16, 1884 July 23, 1951) was an American film-maker who directed and produced the first commercially successful feature-length documentary film, Nanook of the North (1922).",
"The film made his reputation and nothing in his later life fully equalled its success, although he continued the development of this new genre of narrative documentary, e.g., with Moana (1926), set in the South Seas, and Man of Aran (1934), filmed in Ireland's Aran Islands.",
"He is considered the father of both the documentary and the ethnographic film.",
"Flaherty was married to writer Frances H. Flaherty from 1914 until his death in 1951.",
"Frances worked on several of her husband's films, and received an Academy Award nomination for Best Original Story for Louisiana Story (1948).",
"(c) Robert Joseph Flaherty, (February 16, 1884 July 23, 1951) was an American film-maker.",
"Flaherty was married to Frances H. Flaherty until his death in 1951.",
"ings via a specific content selection mechanism based on multi-instance learning (MIL; Keeler and Rumelhart, 1992) which automatically discovers correspondences, namely alignments, between data and text pairs.",
"These alignments are then used to modify the generation function during training.",
"We experiment with two frameworks that allow to incorporate alignment information, namely multi-task learning (MTL; Caruana, 1993) and reinforcement learning (RL; Williams, 1992).",
"In both cases we define novel objective functions using the learnt alignments.",
"Experimental results using automatic and human-based evaluation show that models trained with content-specific objectives improve upon vanilla encoder-decoder architectures which rely solely on soft attention.",
"The remainder of this paper is organised as follows.",
"We discuss related work in Section 2 and describe the MIL-based content selection approach in Section 3.",
"We explain how the generator is trained in Section 4 and present evaluation experiments in Section 5.",
"Section 7 concludes the paper.",
"Previous attempts to exploit loosely aligned data and text corpora have mostly focused on extracting verbalisation spans for data units.",
"Most approaches work in two stages: initially, data units are aligned with sentences from related corpora using some heuristics and subsequently extra content is discarded in order to retain only text spans verbalising the data.",
"Belz and Kow (2010) obtain verbalisation spans using a measure of strength of association between data units and words, Walter et al. (2013) extract textual patterns from paths in dependency trees while Mrabet et al. (2016) rely on crowd-sourcing.",
"Perez-Beltrachini and Gardent (2016) learn shared representations for data units and sentences reduced to subject-predicate-object triples with the aim of extracting verbalisations for knowledge base properties.",
"Our work takes a step further, we not only induce data-to-text alignments but also learn generators that produce short texts verbalising a set of facts.",
"Our work is closest to recent neural network models which learn generators from independently edited data and text resources.",
"Most previous work (Lebret et al., 2016; Chisholm et al., 2017; Sha et al., 2017; Liu et al., 2017) targets the generation of single sentence biographies from Wikipedia infoboxes, while Wiseman et al. (2017) generate game summary documents from a database of basketball games where the input is always the same set of table fields.",
"In contrast, in our scenario, the input data varies from one entity (e.g., athlete) to another (e.g., scientist) and properties might be present or not due to data incompleteness.",
"Moreover, our generator is enhanced with a content selection mechanism based on multi-instance learning.",
"MIL-based techniques have been previously applied to a variety of problems including image retrieval (Maron and Ratan, 1998; Zhang et al., 2002), object detection (Carbonetto et al., 2008; Cour et al., 2011), text classification (Andrews and Hofmann, 2004), image captioning (Wu et al., 2015; Karpathy and Fei-Fei, 2015), paraphrase detection (Xu et al., 2014), and information extraction (Hoffmann et al., 2011).",
"The application of MIL to content selection is novel to our knowledge.",
"We show how to incorporate content selection into encoder-decoder architectures following training regimes based on multi-task learning and reinforcement learning.",
"Multi-task learning aims to improve a main task by incorporating joint learning of one or more related auxiliary tasks.",
"It has been applied with success to a variety of sequence-prediction tasks focus-1517 ing mostly on morphosyntax.",
"Examples include chunking, tagging (Collobert et al., 2011; Sgaard and Goldberg, 2016; Bjerva et al., 2016; Plank, 2016), name error detection (Cheng et al., 2015), and machine translation (Luong et al., 2016).",
"Reinforcement learning (Williams, 1992) has also seen popularity as a means of training neural networks to directly optimize a task-specific metric (Ranzato et al., 2016) or to inject task-specific knowledge (Zhang and Lapata, 2017).",
"We are not aware of any work that compares the two training methods directly.",
"Furthermore, our reinforcement learning-based algorithm differs from previous text generation approaches (Ranzato et al., 2016; Zhang and Lapata, 2017) in that it is applied to documents rather than individual sentences.",
"We consider loosely coupled data and text pairs where the data component is a set P of property-values { p 1 : v 1 , , p | P | : v | P | } and the related text T is a sequence of sentences ( s 1 , , s | T | ) .",
"We define a mention span as a (possibly discontinuous) subsequence of T containing one or several words that verbalise one or more property-value from P .",
"For instance, in Figure 1, the mention span married to Frances H. Fla-herty verbalises the property-value { Spouse ( s ) : Frances Johnson Hubbard } .",
"In traditional supervised data to text generation tasks, data units (e.g., p i : v i in our particular setting) are either covered by some mention span j or do not have any mention span at all in T .",
"The latter is a case of content selection where the generator will learn which properties to ignore when generating text from such data.",
"In this work, we consider text components which are independently edited, and will unavoidably contain unaligned spans , i.e., text segments which do not correspond to any property-value in P .",
"The phrase from 1914 in the text in Figure (1b) is such an example.",
"Similarly, the last sentence, talks about Frances' awards and nominations and this information is not supported by the properties either.",
"Our model checks content in both directions; it identifies which properties have a corresponding text span (data selection) and also foregrounds (un)aligned text spans (text selection).",
"This knowledge is then used to discourage the generator from producing text not supported by facts in the prop-married spouse : FrancesJohnsonFlaherty to spouse : FrancesJohnsonFlaherty Frances spouse : FrancesJohnsonFlaherty Flaherty spouse : FrancesJohnsonFlaherty death died : july 23 , 1951 in died : july 23 , 1951 1951 died : july 23 , 1951 Table 1: Example of word-property alignments for the Wikipedia abstract and facts in Figure 1.",
"erty set P .",
"We view a property set P and its loosely coupled text T as a coarse level, imperfect alignment.",
"From this alignment signal, we want to discover a set of finer grained alignments indicating which mention spans in T align to which properties in P .",
"For each pair ( P , T ) , we learn an alignment set A ( P , T ) which contains property-value word pairs.",
"For example, for the properties spouse and died in Figure 1, we would like to derive the alignments in Table 1.",
"We formulate the task of discovering finer-grained word alignments as a multi-instance learning problem (Keeler and Rumelhart, 1992).",
"We assume that words from the text are positive labels for some property-values but we do not know which ones.",
"For each data-text pair ( P , T ) , we derive | T | pairs of the form ( P , s ) where | T | is the number of sentences in T .",
"We encode property sets P and sentences s into a common multimodal h -dimensional embedding space.",
"While doing this, we discover finer grained alignments between words and property-values.",
"The intuition is that by learning a high similarity score for a property set P and sentence pair s , we will also learn the contribution of individual elements (i.e., words and property-values) to the overall similarity score.",
"We will then use this individual contribution as a measure of word and property-value alignment.",
"More concretely, we assume the pair is aligned (or unaligned) if this individual score is above (or below) a given threshold.",
"Across examples like the one shown in Figure (1a-b), we expect the model to learn an alignment between the text span marriedtoFrancesH.Flaherty and the property-value { spouse : Frances Johnson Hubbard } .",
"Property Set Encoder As there is no fixed order among the property-value pairs p : v in P , we individually encode each one of them.",
"Furthermore, both properties p and values v may consist of short phrases.",
"For instance, the property cause o f death and value cerebral thrombosis in Figure 1.",
"We 1518 therefore consider property-value pairs as concatenated sequences pv and use a bidirectional Long Short-Term Memory Network (LSTM; Hochreiter and Schmidhuber, 1997) network for their encoding.",
"Note that the same network is used for all pairs.",
"Each property-value pair is encoded into a vector representation: p i = biLSTM denc ( pv i ) (1) which is the output of the recurrent network at the final time step.",
"We use addition to combine the forward and backward outputs and generate encoding { p 1 , , p | P | } for P .",
"Sentence Encoder We also use a biLSTM to obtain a representation for the sentence s = w 1 , , w | s | .",
"Each word w t is represented by the output of the forward and backward networks at time step t .",
"A word at position t is represented by the concatenation of the forward and backward outputs of the networks at time step t : w t = biLSTM senc ( w t ) (2) and each sentence is encoded as a sequence of vectors ( w 1 , , w | s | ) .",
"Alignment Objective Our learning objective seeks to maximise the similarity score between property set P and a sentence s (Karpathy and Fei-Fei, 2015).",
"This similarity score is in turn defined on top of the similarity scores among property-values in P and words in s .",
"Equation (3) defines this similarity function using the dot product.",
"The function seeks to align each word to the best scoring property-value: SP s = | s | t = 1 max i { 1 ,..., | P |} p i w t (3) Equation ( 4) defines our objective which encourages related properties P and sentences s to have higher similarity than other P 6 = P and s 6 = s : LCA = max ( 0 , SP s SP s + 1 ) + max ( 0 , SP s SP s + 1 ) (4) 4 Generator Training In this section we describe the base generation architecture and explain two alternative ways of using the alignments to guide the training of the model.",
"One approach follows multi-task training where the generator learns to output a sequence of words but also to predict alignment labels for each word.",
"The second approach relies on reinforcement learning for adjusting the probability distribution of word sequences learnt by a standard word prediction training algorithm.",
"We follow a standard attention based encoder-decoder architecture for our generator (Bahdanau et al., 2015; Luong et al., 2015).",
"Given a set of properties X as input, the model learns to predict an output word sequence Y which is a verbalisation of (part of) the input.",
"More precisely, the generation of sequence Y is conditioned on input X : P ( Y | X ) = | Y | t = 1 P ( y t | y 1: t 1 , X ) (5) The encoder module constitutes an intermediate representation of the input.",
"For this, we use the property-set encoder described in Section 3 which outputs vector representations { p 1 , , p | X | } for a set of property-value pairs.",
"The decoder uses an LSTM and a soft attention mechanism (Luong et al., 2015) to generate one word y t at a time conditioned on the previous output words and a context vector c t dynamically created: P ( y t + 1 | y 1: t , X ) = so ftmax ( g ( h t , c t )) (6) where g ( ) is a neural network with one hidden layer parametrised by W o R | V | d , | V | is the output vocabulary size and d the hidden unit dimension, over h t and c t composed as follows: g ( h t , c t ) = W o tanh ( W c [ c t ; h t ]) (7) where W c R d 2 d .",
"h t is the hidden state of the LSTM decoder which summarises y 1: t : h t = LSTM ( y t , h t 1 ) (8) The dynamic context vector c t is the weighted sum of the hidden states of the input property set (Equa-tion (9)); and the weights ti are determined by a dot product attention mechanism: c t = | X | i = 1 ti p i (9) ti = exp ( h t p i ) i exp ( h t p i ) (10) We initialise the decoder with the averaged sum of the encoded input representations ( Vinyals et al., 2016).",
"The model is trained to optimize negative log likelihood: L wNLL = | Y | t = 1 logP ( y t | y 1: t 1 , X ) (11) 1519 We extend this architecture to multi-sentence texts in a way similar to Wiseman et al. (2017).",
"We view the abstract as a single sequence, i.e., all sentences are concatenated.",
"When training, we cut the abstracts in blocks of equal size and perform forward backward iterations for each block (this includes the back-propagation through the en-coder).",
"From one block iteration to the next, we initialise the decoder with the last state of the previous block.",
"The block size is a hyperparameter tuned experimentally on the development set.",
"The generation of the output sequence is conditioned on the previous words and the input.",
"However, when certain sequences are very common, the language modelling conditional probability will prevail over the input conditioning.",
"For instance, the phrase from 1914 in our running example is very common in contexts that talk about periods of marriage or club membership, and as a result, the language model will output this phrase often, even in cases where there are no supporting facts in the input.",
"The intuition behind multi-task training (Caruana, 1993) is that it will smooth the probabilities of frequent sequences when trying to simultaneously predict alignment labels.",
"Using the set of alignments obtained by our content selection model, we associate each word in the training data with a binary label a t indicating whether it aligns with some property in the input set.",
"Our auxiliary task is to predict a t given the sequence of previously predicted words and input X : P ( a t + 1 | y 1: t , X ) = sigmoid ( g ( h t , c t )) (12) g ( h t , c t ) = v a tanh ( W c [ c t ; h t ]) (13) where v a R d and the other operands are as defined in Equation (7).",
"We optimise the following auxiliary objective function: L aln = | Y | t = 1 logP ( a t | y 1: t 1 , X ) (14) and the combined multi-task objective is the weighted sum of both word prediction and alignment prediction losses: LMTL = L wNLL + ( 1 ) L aln (15) where controls how much model training will focus on each task.",
"As we will explain in Section 5, we can anneal this value during training in favour of one objective or the other.",
"Although the multi-task approach aims to smooth the target distribution, the training process is still driven by the imperfect target text.",
"In other words, at each time step t the algorithm feeds the previous word w t 1 of the target text and evaluates the prediction against the target w t .",
"Alternatively, we propose a training approach based on reinforcement learning (Williams 1992) which allows us to define an objective function that does not fully rely on the target text but rather on a revised version of it.",
"In our case, the set of alignments obtained by our content selection model provides a revision for the target text.",
"The advantages of reinforcement learning are twofold:",
"(a) it allows to exploit additional task-specific knowledge (Zhang and Lapata, 2017) during training, and",
"(b) enables the exploration of other word sequences through sampling.",
"Our setting differs from previous applications of RL (Ranzato et al., 2016; Zhang and Lapata, 2017) in that the reward function is not computed on the target text but rather on its alignments with the input.",
"The encoder-decoder model is viewed as an agent whose action space is defined by the set of words in the target vocabulary.",
"At each time step, the encoder-decoder takes action y t with policy P ( y t | y 1: t 1 , X ) defined by the probability in Equation (6).",
"The agent terminates when it emits the End Of Sequence (EOS) token, at which point the sequence of all actions taken yields the output sequence Y = ( y 1 , , y | Y | ) .",
"This sequence in our task is a short text describing the properties of a given entity.",
"After producing the sequence of actions Y , the agent receives a reward r ( Y ) and the policy is updated according to this reward.",
"Reward Function We define the reward function r ( Y ) on the alignment set A ( X , Y ) .",
"If the output action sequence Y is precise with respect to the set of alignments A ( X , Y ) , the agent will receive a high reward.",
"Concretely, we define r ( Y ) as follows: r ( Y ) = pr r pr ( Y ) (16) where pr adjusts the reward value r pr which is the unigram precision of the predicted sequence Y and the set of words in A ( X , Y ) .",
"Training Algorithm We use the REINFORCE algorithm (Williams, 1992) to learn an agent that maximises the reward function.",
"As this is a gradient descent method, the training loss of a sequence 1520 is defined as the negative expected reward: LRL = E ( y 1 , , y | Y | ) P ( | X )[ r ( y 1 , , y | Y | )] where P is the agent's policy, i.e., the word distribution produced by the encoder-decoder model (Equation (6)) and r ( ) is the reward function as defined in Equation (16).",
"The gradient of LRL is given by: LRL | Y | t = 1 log P ( y t | y 1: t 1 , X )[ r ( y 1: | Y | ) b t ] where b t is a baseline linear regression model used to reduce the variance of the gradients during training.",
"b t predicts the future reward and is trained by minimizing mean squared error.",
"The input to this predictor is the agent hidden state h t , however we do not back-propagate the error to h t .",
"We refer the interested reader to Williams (1992) and Ranzato et al. (2016) for more details.",
"Document Level Curriculum Learning Rather than starting from a state given by a random policy, we initialise the agent with a policy learnt by pretraining with the negative log-likelihood objective (Ranzato et al., 2016; Zhang and Lapata, 2017).",
"The reinforcement learning objective is applied gradually in combination with the log-likelihood objective on each target block subsequence.",
"Recall from Section 4.1 that our document is segmented into blocks of equal size during training which we denote as MAXBLOCK .",
"When training begins, only the last tokens are predicted by the agent while for the first ( MAXBLOCK ) we still use the negative log-likelihood objective.",
"The number of tokens predicted by the agent is incremented by units every 2 epochs.",
"We set = 3 and the training ends when ( MAXBLOCK ) = 0.",
"Since we evaluate the model's predictions at the block level, the reward function is also evaluated at the block level.",
"Data We evaluated our model on a dataset collated from WIKIBIO (Lebret et al., 2016), a corpus of 728,321 biography articles (their first paragraph) and their infoboxes sampled from the English Wikipedia.",
"We adapted the original dataset in three ways.",
"Firstly, we make use of the entire abstract rather than first sentence.",
"Secondly, we reduced the dataset to examples with a rich set of properties and multi-sentential text.",
"We eliminated examples with less than six property-value pairs and abstracts consisting of one sentence.",
"We also placed a minimum restriction of 23 words in the length of the abstract.",
"We considered abstracts up to a maximum of 12 sentences and property sets with a maximum of 50 property-value pairs.",
"Finally, we associated each abstract with the set of DBPedia properties p : v corresponding to the abstract's main entity.",
"As entity classification is available in DBPedia for most entities, we concatenate class information c (whenever available) with the property value, i.e., p : vc .",
"In Figure 1, the property value spouse : FrancesH .",
"Flaherty is extended with class information from the DBPedia ontology to spouse : FrancesH .",
"FlahertyPerson .",
"Pre-processing Numeric date formats were converted to a surface form with month names.",
"Numerical expressions were delexicalised using different tokens created with the property name and position of the delexicalised token on the value sequence.",
"For instance, given the property-value for birth date in Figure (1a), the first sentence in the abstract (Figure (1b)) becomes Robert Joseph Flaherty, (February DLX birth date 2,DLX birth date 4July... .",
"Years and numbers in the text not found in the values of the property set were replaced with tokens YEAR and NUMERIC.",
"2 In a second phase, when creating the input and output vocabularies, VI and VO respectively, we delexicalised words w which were absent from the output vocabulary but were attested in the input vocabulary.",
"Again, we created tokens based on the property name and the position of the word in the value sequence.",
"Words not in VO or VI were replaced with the symbol UNK.",
"Vocabulary sizes were limited to | VI | = 50 k and | VO | = 50 k for the alignment model and | VO | = 20 k for the generator.",
"We discarded examples where the text contained more than three UNKs (for the content aligner) and five UNKs (for the generator); or more than two UNKs in the property-value (for generation).",
"Finally, we added the empty relation to the property sets.",
"Table 2 summarises the dataset statistics for the generator.",
"We report the number of abstracts in the dataset (size), the average number of sentences and tokens in the abstracts, and the average number of properties and sentence length in tokens 2 We exploit these tokens to further adjust the score of the reward function given by Equation (16).",
"Each time the predicted output contains some of these symbols we decrease the reward score by which we empirically set to 0.025 .",
"(sent.len).",
"For the content aligner (cf. Section 3), each sentence constitutes a training instance, and as a result the sizes of the train and development sets are 796,446 and 153,096, respectively.",
"Training Configuration We adjusted all mod-els' hyperparameters according to their performance on the development set.",
"The encoders for both content selection and generation models were initialised with GloVe (Pennington et al., 2014) pre-trained vectors.",
"The input and hidden unit dimension was set to 200 for content selection and 100 for generation.",
"In all models, we used encoder biLSTMs and decoder LSTM (reg-ularised with a dropout rate of 0.3 (Zaremba et al., 2014)) with one layer.",
"Content selection and generation models (base encoder-decoder and MTL) were trained for 20 epochs with the ADAM opti-miser (Kingma and Ba, 2014) using a learning rate of 0.001.",
"The reinforcement learning model was initialised with the base encoder-decoder model and trained for 35 additional epochs with stochastic gradient descent and a fixed learning rate of 0.001.",
"Block sizes were set to 40 (base), 60 (MTL) and 50 (RL).",
"Weights for the MTL objective were also tuned experimentally; we set = 0 .",
"1 for the first four epochs (training focuses on alignment prediction) and switched to = 0 .",
"9 for the remaining epochs.",
"Content Alignment We optimized content alignment on the development set against manual alignments.",
"Specifically, two annotators aligned 132 sentences to their infoboxes.",
"We used the Yawat annotation tool (Germann, 2008) and followed the alignment guidelines (and evaluation metrics) used in Cohn et al. (2008).",
"The inter-annotator agreement using macro-averaged f-score was 0.72 (we treated one annotator as the reference and the other one as hypothetical system output).",
"Alignment sets were extracted from the model's output (cf. Section 3) by optimizing the threshold avg ( sim ) + a std ( sim ) where sim denotes the similarity between the set of property values and words, and a is empirically set to 0.75; avg and std are the mean and standard deviation of sim scores across the development set.",
"Each word was aligned to a property-value if their similarity exceeded a threshold of 0.22.",
"Our best content alignment model (Content-Aligner) obtained an f-score of 0.36 on the development set.",
"We also compared our Content-Aligner against a baseline based on pre-trained word embeddings (EmbeddingsBL).",
"For each pair ( P , s ) we computed the dot product between words in s and properties in P (properties were represented by the the averaged sum of their words' vectors).",
"Words were aligned to property-values if their similarity exceeded a threshold of 0.4.",
"EmbeddingsBL obtained an f-score of 0.057 against the manual alignments.",
"Finally, we compared the performance of the Content-Aligner at the level of property set P and sentence s similarity by comparing the average ranking position of correct pairs among 14 dis-tractors, namely rank@15.",
"The Content-Aligner obtained a rank of 1.31, while the EmbeddingsBL model had a rank of 7.99 (lower is better).",
"We compared the performance of an encoder-decoder model trained with the standard negative log-likelihood method (ED), against a model trained with multi-task learning (EDMTL ) and reinforcement learning (EDRL ).",
"We also included a template baseline system (Templ) in our evaluation experiments.",
"The template generator used hand-written rules to realise property-value pairs.",
"As an approximation for content selection, we obtained the 50 more frequent property names from the training set and manually defined content ordering rules with the following criteria.",
"We ordered personal life properties (e.g., birth date or occupation ) based on their most common order of mention in the Wikipedia abstracts.",
"Profession dependent properties (e.g., position or genre ), were assigned an equal ordering but posterior to the personal properties.",
"We manually lexicalised properties into single sentence templates to be concatenated to produce the final text.",
"The template for the property position and example verbalisation for the property-value position : de f ender of the entity zanetti are [ NAME ] played as [ POSITION ] . and Zanettiplayedasdefender. respectively.",
"(Papineni et al., 2002) against the noisy Wikipedia abstracts.",
"Considering these as a gold standard is, however, not entirely satisfactory for two reasons.",
"Firstly, our models generate considerably shorter text and will be penalized for not generating text they were not supposed to generate in the first place.",
"Secondly, the model might try to reproduce what is in the imperfect reference but not supported by the input properties and as a result will be rewarded when it should not.",
"To alleviate this, we crowd-sourced using AMT a revised version of 200 randomly selected abstracts from the test set.",
"Crowdworkers were shown a Wikipedia infobox with the accompanying abstract and were asked to adjust the text to the content present in the infobox.",
"Annotators were instructed to delete spans which did not have supporting facts and rewrite the remaining parts into a well-formed text.",
"We collected three revised versions for each abstract.",
"Inter-annotator agreement was 81.64 measured as the mean pairwise BLEU-4 amongst AMT workers.",
"Automatic evaluation results against the revised abstracts are also shown in Table 3.",
"As can be seen, all encoder-decoder based models have a significant advantage over Templ when evaluating against both types of abstracts.",
"The model enabled with the multi-task learning content selection mechanism brings an improvement of 1.29 BLEU-4 over a vanilla encoder-decoder model.",
"Performance of the RL trained model is inferior and close to the ED model.",
"We discuss the reasons for this discrepancy shortly.",
"To provide a rough comparison with the results reported in Lebret et al. (2016), we also computed BLEU-4 on the first sentence of the text generated by our system.",
"3 Recall that their model generates the first sentence of the abstract, whereas we out-3 We post-processed system output with Stanford CoreNLP (Manning et al., 2014) to extract the first sentence.",
"put multi-sentence text.",
"Using the first sentence in the Wikipedia abstract as reference, we obtained a score of 37.29% (ED), 38.42% (EDMTL ) and 38.1% (EDRL ) which compare favourably with their best performing model (34.7% 0.36).",
"Human-Based Evaluation We further examined differences among systems in a human-based evaluation study.",
"Using AMT, we elicited 3 judgements for the same 200 infobox-abstract pairs we used in the abstract revision study.",
"We compared the output of the templates, the three neural generators and also included one of the human edited abstracts as a gold standard (reference).",
"For each test case, we showed crowdworkers the Wikipedia infobox and five short texts in random order.",
"The annotators were asked to rank each of the texts according to the following criteria: (1) Is the text faithful to the content of the table?",
"and (2) Is the text overall comprehensible and fluent?",
"Ties were allowed only when texts were identical strings.",
"Table 5 presents examples of the texts (and properties) crowdworkers saw.",
"Table 4 shows, proportionally, how often crowdworkers ranked each system, first, second, and so on.",
"Unsurprisingly, the human authored gold text is considered best (and ranked first 47% of the time).",
"EDMTL is mostly ranked second and third best, followed closely by EDRL .",
"The vanilla encoder-decoder system ED is mostly forth and Templ is fifth.",
"As shown in the last column of the table (Rank), the ranking of EDMTL is overall slightly better than EDRL .",
"We further converted the ranks to ratings on a scale of 1 to 5 (assigning ratings 5 ... 1 to rank placements 1 ... 5).",
"This allowed us to perform Analysis of Variance (ANOVA) which revealed a reliable effect of system type.",
"Post-hoc Tukey tests showed that all systems were significantly worse than RevAbs and significantly better than Templ (p < 0.05).",
"EDMTL is not significantly better than EDRL but is significantly (p < 0.05) different from ED.",
"Discussion The texts generated by EDRL are shorter compared to the other two neural systems 1523 property-set name = dorsey burnette, date = may 2012, bot = blevintron bot, background = solo singer, birth = december 28 , 1932, birth place = memphis, tennessee, death place = { los angeles; canoga park, california } , death = august 19 , 1979, associated acts = the rock and roll trio, hometown = memphis, tennessee, genre = { rock and roll; rockabilly; country music } , occupation = { composer; singer } , instruments = { rockabilly bass; vocals; acoustic guitar } , record labels = { era records; coral records; smash records; imperial records; capitol records; dot records; reprise records } RevAbs Dorsey Burnette (December 28 , 1932 August 19 , 1979) was an american early Rockabilly singer.",
"which might affect BLEU-4 scores and also the ratings provided by the annotators.",
"As shown in Table 5 (entity dorsey burnette ), EDRL drops information pertaining to dates or chooses to just verbalise birth place information.",
"In some cases, this is preferable to hallucinating incorrect facts; however, in other cases outputs with more information are rated more favourably.",
"Overall, EDMTL seems to be more detail oriented and faithful to the facts included in the infobox (see dorseyburnette , aaronmoores , or kirillmoryganov ).",
"The template system manages in some specific configurations to verbalise appropriate facts ( indrani bose ), however, it often fails to verbalise infrequent properties ( aaronmoores ) or focuses on properties which are very frequent in the knowledge base but are rarely found in the abstracts ( kirillmoryganov ).",
"In this paper we focused on the task of bootstrapping generators from large-scale datasets consisting of DBPedia facts and related Wikipedia biography",
"biography abstracts.",
"We proposed to equip standard encoder-decoder models with an additional content selection mechanism based on multi-instance learning and developed two training regimes, one based on multi-task learning and the other on reinforcement learning.",
"Overall, we find that the proposed content selection mechanism improves the accuracy and fluency of the generated texts.",
"In the future, it would be interesting to investigate a more sophisticated representation of the input (Vinyals et al., 2016).",
"It would also make sense for the model to decode hierarchically, taking sequences of words and sentences into account (Zhang and Lapata, 2014; Lebret et al., 2015).",
"We thank the NAACL reviewers for their constructive feedback.",
"We also thank Xingxing Zhang, Li Dong and Stefanos Angelidis for useful discussions about implementation details.",
"We gratefully acknowledge the financial support of the European Research Council (award number 681760)."
] | [
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"objective",
"abstain",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"We present a direct speech-to-speech translation (S2ST) model that translates speech from one language to speech in another language without relying on intermediate text generation.",
"We tackle the problem by first applying a self-supervised discrete speech encoder on the target speech and then training a sequence-to-sequence speech-to-unit translation (S2UT) model to predict the discrete representations of the target speech.",
"When target text transcripts are available, we design a joint speech and text training framework that enables the model to generate dual modality output (speech and text) simultaneously in the same inference pass.",
"Experiments on the Fisher Spanish-English dataset show that the proposed framework yields improvement of 6.7 BLEU compared with a baseline direct S2ST model that predicts spectrogram features.",
"When trained without any text transcripts, our model performance is comparable to models that predict spectrograms and are trained with text supervision, showing the potential of our system for translation between unwritten languages 1 .",
"Speech translation aims at converting speech from one language into speech or text in another language.",
"The technology helps bridge the communication barriers between people speaking different languages and can provide access to multimedia content in different languages.",
"Conventional speech-to-text translation (S2T) systems take a cascaded approach by concatenating automatic speech recognition (ASR) and machine translation (MT).",
"In recent years, end-to-end S2T (Brard et al., 2016) is proposed to alleviate the error propagation issue between ASR and MT. These S2T models can be further combined with text-to-speech (TTS) 1 Audio samples are available at https: //facebookresearch.github.io/speech_translation/direct_s2st_units/index.html synthesis to provide both speech and text translation, which allows the technology to be adopted in a wider range of applications.",
"More recently, researchers have started exploring building direct speech-to-speech translation (S2ST) models without relying on text generation as an intermediate step (Jia et al., 2019b, 2021).",
"Direct S2ST has the benefits of lower computational costs and inference latency as fewer decoding steps are needed compared to cascaded systems.",
"In addition, direct S2ST is a natural approach for supporting translation for languages without a writing system (Tjandra et al., 2019; Zhang et al., 2020).",
"Jia et al. (2019b) first addresses the problem by training an attention-based sequence-to-sequence model that maps source speech spectrograms into target spectrograms.",
"Model training is challenging as it requires the model to learn not only the alignment between two languages but also the acoustic and linguistic characteristics of both languages.",
"As a result, there is a performance gap between the direct S2ST system and an S2T+TTS cascaded system.",
"The recent success in self-supervised learning for speech has demonstrated that speech representations learned from a large unlabelled speech corpus can lead to impressive performance on a variety of downstream tasks (Yang et al., 2021) including ASR (Baevski et al., 2020; Hsu et al., 2021), speaker and language identification (Fan et al., 2020), etc.",
"Moreover, discretized speech units obtained from the clustering of self-supervised speech representations allow researchers to take advantage of existing NLP modeling techniques on speech, such as spoken generative language modeling (Lakhotia et al., 2021).",
"In this work, we tackle the challenge of modeling target speech in direct S2ST by predicting self-supervised discrete representations of the target speech instead of mel-spectrogram features.",
"Compared with spectrogram features, self-supervised 3327 discrete units can disentangle linguistic content from speaker identify or prosodic information in speech (Polyak et al., 2021).",
"With the use of discrete units, we can also apply common practice such as beam search during inference.",
"We investigate direct S2ST with discrete units in the scenarios where the source and target transcripts may or may not be available, the latter case being representative of unwritten languages.",
"For the written languages, we present a framework that jointly generates speech and text output by combining S2ST and S2T tasks through a shared encoder and a partially shared decoder.",
"We resolve the length mismatch issue between the speech and text output during decoding with connectionist temporal classification (CTC) (Graves et al., 2006).",
"Experiments show that with the combination of discrete units prediction, speech and text joint training and beam search, our direct S2ST system matches the performance of a cascaded S2T+TTS system.",
"For the unwritten target languages, we first extend the use of discrete units to text-to-speech translation (Zhang et al., 2020) when there are source text transcripts available.",
"Then we show that with multitask learning using both discrete representations for the source and the target speech, it is possible to train a direct S2ST system without the use of any text transcripts.",
"In addition, we measure the system runtime and memory usage during inference and empirically show that the proposed framework is the most efficient compared to the direct S2ST system that predicts spectrogram features or other cascaded systems.",
"The rest of this paper is organized as follows.",
"After introducing background and related work in the next section, we describe our system in detail in Sec. 3. Following this, we present experimental results including objective evaluation on translation quality, subjective evaluation on speech quality, and system benchmark in Sec. 4. Finally Sec. 5 concludes with a discussion of potential future work.",
"Conventional S2ST systems are built by combining either cascaded or end-to-end S2T models with TTS (Lavie et al., 1997; Nakamura et al., 2006).",
"The majority of the speech translation research has focused on the S2T setup.",
"Studies on ASR+MT systems explore better ways to integrate ASR output lattice to MT models (Matusov et al., 2005) in order to alleviate the error propagation issue between the two.",
"End-to-end S2T (Brard et al., 2016) has the potential to resolve the issue, as long as it is properly trained with multitask learning (Weiss et al., 2017), model pre-training (Bahar et al., 2019; Li et al., 2021) or data augmentation (Jia et al., 2019a) to overcome the data scarcity problem.",
"Studies on TTS for S2ST focus more on synthesizing the paralinguistic information transferred from the source speech, such as prosody (Aguero et al., 2006; Anu-manchipalli et al., 2012) and word-level emphasis (Do et al., 2017).",
"On the other hand, Translatotron (Jia et al., 2019b) is an attention-based sequence-to-sequence framework that directly translates mel-spectrogram of the source speech into spectrogram features of the target speech.",
"Multitask learning is essential in facilitating the model to converge, though there is still a performance gap towards a S2T+TTS cascaded system.",
"The follow-up and concurrent work, Translatotron 2 (Jia et al., 2021), addresses the over-generation issue by conditioning the spectrogram synthesizer directly on the output from the auxiliary target phoneme decoder.",
"Kano et al. (2021) propose to build a single deep-learning framework step-by-step by pre-training ASR, MT and TTS models separately and connecting them with Transcoder layers.",
"However, the inference process requires the ASR and MT decoders to complete decoding a full sequence, and thus it loses the latency advantage of a direct S2ST system.",
"Tjandra et al. (2019); Zhang et al. (2020) both investigate direct S2ST models under the unwritten language setup by transforming the target speech into discrete representations through Variational Auto-Encoder (VAE), training a sequence-to-sequence model for translation into target discrete units, and an inverter for converting the units to speech.",
"In this work, we propose to train a transformer-based speech-to-discrete unit model for direct S2ST.",
"We design a text decoding task conditioned on the intermediate representation of the discrete unit decoder in addition to the auxiliary tasks proposed in (Jia et al., 2019b).",
"We choose to use HuBERT (Hsu et al., 2021) to generate the target self-supervised discrete units, since Yang et al. (2021); Lakhotia et al. (2021); Polyak et al. (2021) have shown its superior performance across ASR, spoken language modeling and speech synthesis, compared to other unsupervised representations, including VAE-based representations used in (Tjan-dra et al., 2019; Zhang et al., 2020).",
"Overall, there exists little work on direct S2ST due to the lack of parallel S2ST training data.",
"While Jia et al. (2019b) performs one set of experiments on in-house real-world S2ST data, Jia et al. (2019b, 2021); Tjandra et al. (2019); Zhang et al. (2020); Kano et al. (2021) all take advantage of TTS services to produce synthetic target speech for model training.",
"We follow the same approach and conduct our experiments with single-speaker synthetic target speech.",
"Our proposed system (Fig. 1) is a transformer-based sequence-to-sequence model with a speech encoder and a discrete unit decoder and incorporates auxiliary tasks (shown in dashed lines) similar to (Jia et al., 2019b) during training to facilitate model learning.",
"For written target languages, we further apply target text CTC decoding conditioned on the intermediate representations from the discrete unit decoder for joint speech and text training and generation.",
"Finally, a vocoder is separately trained to convert discrete units into waveform.",
"HuBERT (Hsu et al., 2021) learns speech representations in a self-supervised manner by leveraging k-means clustering on the model's intermediate representations (or the Mel-frequency cepstral",
"co-(a) stacked",
"efficient features for the first iteration) to generate discrete labels of masked audio segments.",
"A HuBERT model pre-trained on an unlabelled speech corpus of the target language can encode the target speech into continuous representations at every 20-ms frame.",
"A k-means algorithm is applied on the learned representations of the unlabelled speech to generate K cluster centroids (Lakhotia et al., 2021; Polyak et al., 2021), which are used to encode target utterances into sequences of cluster indices at every 20-ms.",
"In the end, a target utterance y is represented as [ z 1 , z 2 , ..., z T ] , z i { 0 , 1 , ..., K 1 } , 1 i T , where T is the number of frames.",
"We build the S2UT model by adapting from the transformer model for MT (Vaswani et al., 2017).",
"A stack of 1D-convolutional layers, each with stride 2 and followed by a gated linear unit activation function, is prepended to the transformer layers in the encoder for downsampling the speech input (Synnaeve et al., 2019).",
"As the target sequence is discrete, we train the S2UT model with cross-entropy loss with label smoothing.",
"We explore two strategies for predicting the discrete unit sequence.",
"In the first strategy (Fig.",
"2(a), dubbed as stacked ), we apply the concept of reduction factor, r , from TTS (Wang et al., 2017) and generate a K r vector at every decoding step for predicting r consecutive discrete units.",
"In the second strategy (Fig.",
"2(b), dubbed as reduced ), we collapse a consecutive sequence of the same units into one single unit, resulting a sequence of unique discrete units.",
"Both strategies help speed up training and inference time.",
"We follow the design in (Jia et al., 2019b) to incorporate auxiliary tasks with additional attention and decoder modules conditioned on the intermediate layers of the encoder.",
"The target output of the auxiliary tasks can be either phonemes, characters, subword units or any discrete representations of the source or target utterances.",
"These auxiliary tasks are only used during training and not in inference.",
"For written target languages, we add target text CTC decoding conditioned on an intermediate layer from the discrete unit decoder for the model to generate dual modality output.",
"The use of CTC can mitigate the length mismatch between the speech and text output.",
"However, since it only allows monotonic alignment, we rely on the transformer layers that the CTC decoder conditioned on to take care of the reordering from source to target.",
"During training, we do teacher-forcing with the ground truth target discrete unit sequence and compute CTC loss using the teacher-forced intermediate representations from the decoder.",
"During inference, we can perform discrete unit decoding and CTC decoding for text at each decode step simultaneously.",
"We adopt the modified version of the HiFi-GAN neural vocoder (Kong et al., 2020) proposed in (Polyak et al., 2021) for unit-to-waveform conversion.",
"For the stacked discrete unit output, we train the vocoder with only discrete unit sequence and without extra pitch information as the input.",
"For the reduced discrete unit output, we enhance the vocoder with a lightweight duration prediction module from Fastspeech 2 (Ren et al., 2020), which consists of two 1D-convolutional layers, each with ReLU activation and followed by layer normalization and dropout, and a linear layer.",
"We train the enhanced vocoder by minimizing the mean square error (MSE) between the module prediction and the ground truth duration of each unit segment in logarithmic domain, together with the generator-discriminator loss from HiFi-GAN.",
"We perform our experiments using the Fisher Spanish-English speech translation corpus (Post et al., 2014) as in (Jia et al., 2019b; Zhang et al., 2020).",
"The dataset consists of 139k sentences (ap-proximately 170 hours) from telephone conversa-train dev dev2 test # samples 126k 4k 4k 3.6k source (hrs) 162.5 4.6 4.7 4.5 target (hrs) 139.3 4.0 3.8 3.9 Table 1: Statistics (number of samples and duration) of the Fisher Spanish-English dataset (Post et al., 2014) after pre-processing tions in Spanish, the corresponding Spanish text transcriptions and their English text translation.",
"As in previous studies on direct S2ST (Jia et al., 2019b, 2021; Zhang et al., 2020), we use a high-quality in-house TTS engine to prepare synthetic target speech with a single female voice as the training targets.",
"We perform all experiments, including the baselines, with the synthetic target speech and do not rely on the TTS engine for other uses.",
"We apply the ASR model described in Sec. 4.4 on the synthetic speech and filter out samples with word error rate (WER) greater than 80.",
"Table 1 lists the statistics of the resulting training set, the two development sets and the test set.",
"S2UT model We use the pre-trained HuBERT Base model 2 trained on Librispeech (Panayotov et al., 2015) for two iterations and follow (Hsu et al., 2021; Lakhotia et al., 2021) to perform k-means with K = 100 on representations from the sixth layer of the model for extracting discrete units for all target English speech.",
"We compute 80-dimensional mel-filterbank features at every 10-ms for the source speech as input to the speech encoder and apply cepstral mean and variance normalization and SpecAugment (Park et al., 2019) with the Librispeech basic policy.",
"The downsampling stack in the speech encoder contains two 1D-convolutional layers with kernel size 5 and 1024 channels, resulting in a downsampling factor of 4 on the input speech.",
"The encoder contains 12 transformer layers with embedding size 256, feed-forward network (FFN) embedding size 2048 and 4 attention heads.",
"The decoder consists of 6 transformer layers with the same embedding size and FFN embedding size as the encoder and 8 attention heads.",
"We explore four targets for the auxiliary tasks: source phonemes ( sp ), target phonemes ( tp ), source characters ( sc ) and target characters ( tc ).",
"For sp 2 https://github.com/pytorch/fairseq/ tree/master/examples/hubert 3330 or sc , we append an attention module and a decoder to the sixth layer of the encoder based on preliminary experimentation.",
"For tp or tc , we attach the attention and the decoder to the eighth layer of the encoder.",
"All multihead attention modules have 4 heads and the decoders have 2 transformer layers, 256-dimensional embeddings and a FFN embedding size of 2048.",
"Each auxiliary loss has a constant weight of 8.0 during training.",
"For written target languages, we condition the CTC decoding on the third layer of the discrete unit decoder.",
"The target text for CTC is encoded as 1k unigram subword units (Kudo, 2018) to guarantee that the text sequence length is shorter than the length of the stacked or reduced discrete unit sequence.",
"The weight on the CTC loss is set to 1.6 during training.",
"We train the models for 400k steps using Adam with 1 = 0 .",
"9 , 2 = 0 .",
"98 , (cid:15) = 10 8 , label smoothing 0 .",
"2 , and apply an inverse square root learning rate decay schedule with 10k warmup steps.",
"All other hyper-parameters, such as dropout and learning rate, are tuned on the development set.",
"All models are implemented using FAIRSEQ S2T (Ott et al., 2019; Wang et al., 2020b) 3 .",
"Unit-based vocoder We follow the same vocoder design and training procedure in (Polyak et al., 2021) and incorporate a duration prediction module from Fastspeech 2 (Ren et al., 2020).",
"The two 1D-convolutional layers in the module have a filter size of 128 and a kernel size of 3. We apply a dropout of 0.5, and the weight on the MSE loss from the duration prediction module is set to 1.0 during training 4 .",
"The vocoder is trained on the synthetic target speech from the Fisher training set.",
"We build two cascaded baselines, ASR+MT+TTS and S2T+TTS, and one direct S2ST baseline that predicts spectrogram features.",
"All models in the cascaded baselines are trained with character input or output.",
"ASR We train the transformer-based Spanish ASR system with the default hyper-parameters and s2t_transformer_s architecture in FAIRSEQ S2T (Wang et al., 2020b).",
"pytorch/fairseq/tree/main/examples/speech_to_speech .",
"4 Code for vocoder training is available at https://github.com/facebookresearch/speech-resynthesis/tree/main/examples/speech_to_speech_translation MT As the input to the MT model is characters, we follow the default gru_transformer setup in FAIRSEQ (Ott et al., 2019) to prepend a bidirectional recurrent layer with gated recurrent units (GRU) to the transformer encoder to incorporate a larger context (Wang et al., 2020a).",
"S2T We explore both LSTM-based (Weiss et al., 2017) and transformer-based end-to-end S2T systems.",
"The former consists of 8 bidirectional LSTM layers for the encoder and 4 LSTM layers for the decoder.",
"Embedding and hidden state sizes are all 256.",
"The latter has the same model architecture as the S2UT model except that it predicts characters as output.",
"We do not apply pre-training or multitask learning and find that the LSTM-based model works better.",
"TTS The transformer-based TTS model (Li et al., 2019) has 6 transformer layers, 4 attention heads, embedding size 512 and FFN embedding size 2048 for both the encoder and the decoder.",
"We use 32-dimensional layer for the decoder prenet.",
"The model is trained on the English text and the synthetic target speech with a reduction factor of 5 on the output feature frames.",
"The vocoder is a HiFi-GAN model (Kong et al., 2020) fine-tuned on the mel-spectrogram features from teacher-forcing.",
"Transformer Translatotron We implement a transformer-based Translatotron instead of the LSTM architecture in (Jia et al., 2019b) to speed up model training.",
"The model predicts mel-spectrogram features of the target speech and consists of the same speech encoder design as in the S2UT model, the same speech decoder design as in the TTS model for the cascaded baselines, and a fine-tuned HiFi-GAN vocoder (Kong et al., 2020).",
"We use the same auxiliary task setup as in the S2UT model with a constant weight of 0.1 on each auxiliary loss, apply a reduction factor of 5 on the output feature frames and tune the hyper-parameters on the development sets.",
"Preliminary studies show no performance degradation for the transformer-based model compared with our implementation of the LSTM version of the model.",
"We evaluate both the translation quality and the speech quality of the system output.",
"To evaluate the translation quality, we follow the setup in (Jia et al., 2019b; Zhang et al., 2020) to apply ASR on the speech output and compute BLEU scores 3331 BLEU MOS dev dev2 test test ID speech text speech text speech text 1 Synthetic target 88.5 100.0 89.4 100.0 90.5 100.0 3.49 0.14 Cascaded systems: 2 ASR (beam=10) + MT (beam=5) + TTS 42.1 45.1 43.5 46.1 43.9 46.3 3.37 0.15 3 S2T (beam=10) + TTS 38.5 41.1 39.9 42.4 40.2 42.1 3.43 0.14 Direct systems: 4 Transformer Translatotron ( r = 5 , w/ sp, tp ) 25.0 -26.3 -26.2 -5 Transformer Translatotron ( r = 5 , w/ sc, tc ) 32.9 -34.1 -33.2 -3.31 0.11 6 S2UT, no reduction ( r = 1 , w/ sc, tc ) 33.4 -34.6 -34.1 -3.35 0.14 7 S2UT stacked ( r = 5 , w/ sc, tc ) 34.0 -34.5 -34.4 -Direct systems with dual modality output: 8 S2UT stacked + CTC ( r = 5 , w/ sc, tc ) 34.4 36.4 36.4 37.9 34.4 35.8 3.32 0.14 9 S2UT reduced + CTC (w/ sc, tc ), beam=1 36.8 40.0 38.4 41.5 38.5 40.7 10 S2UT reduced + CTC (w/ sc, tc ), beam=10 38.2 41.3 39.5 42.2 39.9 41.9 3.41 0.14 From the literature : 11 Translatotron (Jia et al., 2019b) 24.8 -26.5 -25.6 -3.69 0.07 12 + pre-trained encoder (Jia et al., 2019b) 30.1 -31.5 -31.1 -13 Translatotron 2 (Jia et al., 2021) --37.0 -3.98 0.08 14 + data augmentation (Jia et al., 2021) --40.3 -3.79 0.09 Table 2: Results from systems using target transcripts during training.",
"We adopt an open-sourced English ASR model 5 built with the combination of wav2vec 2.0 pre-training and self-training (Xu et al., 2021).",
"The model, which is pre-trained on Libri-Light (Kahn et al., 2020) and fine-tuned on full Librispeech (Panayotov et al., 2015), achieves WER of 1.9 and 3.9 on the Librispeech test-clean and other sets, respectively.",
"As the ASR output is in lowercase and without punctuation except apostrophes, we normalize the reference text before computing BLEU using SACREBLEU (Post, 2018) 6 .",
"In addition to measuring the translation quality via an objective metric, we conduct human listening tests to collect mean opinion scores (MOS) to evaluate the naturalness of the speech output.",
"We randomly sample 200 utterances from the test set, and each sample is rated by 8 raters on a scale of 15, with 1 being the worst and 5 being the best.",
"We explore model training under both written and unwritten language scenarios.",
"For the former, we take advantage of text transcriptions of source and target speech during S2UT model training.",
"For the latter, we focus on the cases where the source is in either a written or unwritten language, while the target language is without a writing system.",
"Source & Target Written Table 2 summarizes the experimental results under the written language setup.",
"In the following discussion, we first focus on the translation content quality evaluated by BLEU.",
"We include the results from (Jia et al., 2019b, 2021) as references (11-14).",
"However, as different ASR models are used for evaluation, we should not directly compare the BLEU scores with our experiments.",
"We also list the BLEU scores evaluated on the synthetic target speech (1) to show the impact of the ASR errors on the evaluation metric.",
"First, we explore using different targets for the auxiliary tasks with transformer Translatotron and see that using characters as targets for the auxiliary tasks gives 7 BLEU gain on the test set compared to phonemes (4 vs. 5).",
"In all following experiments, we use characters as the auxiliary task targets.",
"Second, we compare the proposed S2UT model to transformer Translatotron .",
"We start with the stacked strategy as both models have the same reduction ratio of 5.",
"We can see that S2UT stacked outperforms the transformer Translatotron by 1.2 BLEU on the test set (5 vs. 7), indicating that discrete units are easier to model than continuous-valued mel-spectrogram features.",
"We also experiment with S2UT training using the full discrete unit sequence ( r = 1 ) and see that a larger reduction factor can speed up training and inference and does not hurt the performance (6 vs. 7).",
"Third, we incorporate target text CTC decoding to the S2UT model and evaluate both speech and text output.",
"Joint training with discrete unit loss and text CTC loss brings an average gain of 1.2 BLEU on the dev sets for S2UT stacked (7 vs. 8), while the performance on the test set remains the same.",
"Moreover, we see that the reduced strategy is more effective than stacked .",
"When decoding with a beam size of 1, we see 1.4 BLEU improvement on speech output and 1.2 BLEU gain on text output on the test set (8 vs. 9).",
"Finally, we apply beam search on the best setup we find, S2UT reduced with joint speech and text training and auxiliary tasks, and the resulting direct S2ST system performs on par with the S2T+TTS system (3 vs. 10) and bridges 63% of the gap between transformer Translatotron (5) and the three-stage ASR+MT+TTS cascaded system (2).",
"Compared with the cascaded system, the proposed framework has the advantage of being able to generate consistent speech and text output in one inference pass.",
"We also examine the output from the tc auxiliary task, which can serve as another way to generate translated text from the direct S2ST system.",
"By using ASR decoded text from the speech output as reference, we see a character error rate (CER) of 4.5 for the CTC decoded text and 30.3 for the tc decoded text on the dev set, indicating that the former is more aligned with the generated audio.",
"From the MOS results in Table 2, we see that direct S2ST systems that predict all frames, such as Translatotron and S2UT stacked models, tend to have slightly lower MOS than others.",
"The proposed S2UT reduced system has an MOS close to that for synthetic target (1 vs. 10).",
"The latter can be viewed as the upper bound of the best MOS we can get, since the model is trained with the synthetic speech as target.",
"Source Written, Target Unwritten We explore the unwritten target language setup by starting from the scenario where the source speech has a text writing system.",
"Table 3 summarizes the results.",
"First, we build cascaded systems by combining ASR and text-to-speech translation (Zhang et al., 2020).",
"The latter can be built by either training a TTS model that predicts spectrogram features or a text-to-unit model with source text and target speech in two languages.",
"We refer to the first approach as text-to-spectrogram translation (T2ST) and the second as text-to-unit translation (T2UT).",
"We use the same architecture as the transformer TTS model to train the T2ST model with reduction ratio 2, and the same setup as the MT model to train the T2UT model with reduced unit sequences.",
"From Table 3, we see that the model that predicts discrete units outperforms the one that predicts spectrogram features by 15.1 BLEU on the test set (15 vs. 16), which is another evidence showing that discrete units are easier to model as translation targets than continuous spectrogram features.",
"In fact, ASR+T2UT also outperforms S2T+TTS by 0.8 BLEU on the test set (3 vs. 16), which provides another option for building two-stage cascaded systems.",
"Next, we focus on S2UT reduced based on the findings from the written language setup for direct S2ST.",
"We find that training an S2UT model with sc auxiliary task can already achieve 88% of the performance from a system trained with both 3333",
"source and target text (10 vs. 17).",
"This is in contrary to the findings in (Jia et al., 2019b) where training Translatotron with only source transcripts attains 28% of the performance of a system trained with both source and target text.",
"Source & Target Unwritten We extend our experiments to a fully unwritten language setup by training models without using any text transcripts (Table 3).",
"Jia et al. (2019b) has pointed out that the model has difficulty in learning to attend to the input speech when trained without auxiliary tasks.",
"Zhang et al. (2020) addresses the challenge by training with discrete unit targets and shows potential, while it uses labelled speech from languages other than the source or the target to guide the VAE learning for the discrete units.",
"When S2UT reduced is trained without auxiliary tasks, the performance greatly deteriorates (19).",
"We notice that the model can still generate meaningful text.",
"However, the generated speech does not reflect the content in the source speech, and the 7.4 BLEU score is mostly contributed by the function words.",
"This shows that the discrete unit decoder can learn a language model over the unit sequence, while the challenge is in the attention on the encoder output.",
"To facilitate the S2UT model training, we apply the HuBERT model pre-trained on English to extract discrete representations for the source Spanish speech, and the source units ( su ) are used as an auxiliary task target.",
"The resulting S2UT model achieves only a 1.4 BLEU difference on the test set compared with transformer Translatotron trained with both source and target text supervision (5 vs. 20).",
"This shows that source units are effective in guiding the model to properly learn the attention, and the self-supervised discrete representations can capture basic pronunciations that are transferable across languages.",
"In addition to evaluating the quality of the system output, we examine the efficiency of the models during inference by benchmarking the runtime, total number of floating point operations (FLOPs) and max memory on an Intel Xeon Gold 6230 CPU.",
"We conduct the study with three subsets of 500 samples from the Fisher dev set, one with random samples, one with the shortest and the other one with the longest utterances.",
"Fig. 3 shows the comparison of two direct S2ST systems, the proposed S2UT reduced and transformer Translatotron , one two-stage cascaded system (S2T+TTS) and one three-stage cascaded system (ASR+MT+TTS).",
"For each system, we report the runtime and FLOPs measured by timeit and PyPAPI from all stages, and the maximum memory from any single stage measured by memory-profiler .",
"All metrics are averaged by the total number of samples.",
"For cascaded models we only consider the metrics for model inference at different stages and ignore any intermediate data/IO processing overhead.",
"First, we see that TTS is the bottleneck for cascaded systems, as it takes up the largest percentage of runtime ( > 89% in S2T+TTS and > 81% in ASR+MT+TTS) and contributes to the maximum memory used.",
"The runtime may be improved with the use of non-autoregressive TTS systems.",
"We leave the investigation to future work, as it is also possible to apply non-autoregressive translation 3334 with discrete units.",
"Next, the proposed S2UT reduced model is the most efficient among the four systems across all subsets.",
"Compared to S2T+TTS, our direct system runs 1.5X faster and reduces 47% FLOPs and 55% max memory, while maintaining the same level of translation quality (Table 2).",
"This verifies one of the benefits of direct S2ST systems, which is lower computational costs and inference latency.",
"Lastly, the proposed S2UT reduced can not only produce better translation than transformer Translatotron but also run 1.3X faster and reduce 39% FLOPs and 51% max memory.",
"This demonstrates an addition advantage of modeling discrete units instead of spectrogram features.",
"We investigate training direct S2ST models with the use of self-supervised discrete representations as targets.",
"We examine model training under both the written and unwritten language scenarios.",
"For the former, we propose a framework with joint speech and text training that performs on par with an S2T+TTS baseline, yet it can run more effi-ciently.",
"We demonstrate the possibility of translating between two unwritten languages by taking advantage of discrete representations of both the source and the target speech for model training.",
"Our empirical benchmark shows that the proposed direct S2ST system with discrete units is the most efficient during inference compared with a direct S2ST model that predicts spectrogram features or other cascaded systems.",
"With the recent release of large-scale S2S dataset (Wang et al., 2021), we plan to investigate the proposed framework with real data in the future.",
"Another important aspect in generating speech output is the voice and prosody.",
"In our work, we focus on content translation and leave the para-linguistic aspect of speech translation to future work.",
"We use an open-sourced ASR model for evaluation, so the results should be comparable with all future research in the field that follows the same evaluation protocol.",
"We will also release the code for reproducing the experiments.",
"We would like to thank Jade Copet, Emmanuel Dupoux, Evgeny Kharitonov, Kushal Lakhotia, Abdelrahman Mohamed, Tu Anh Nguyen and Morgane Rivire for helpful discussions on discrete"
] | [
"method",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"abstain",
"result",
"abstain",
"result",
"objective",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other"
] |
[
"We investigate grounded language learning through real-world data, by modelling a teacher-learner dynamics through the natural interactions occurring between users and search engines; in particular, we explore the emergence of semantic generalization from unsupervised dense representations outside of synthetic environments.",
"A grounding domain, a denotation function and a composition function are learned from user data only.",
"We show how the resulting semantics for noun phrases exhibits compositional properties while being fully learnable without any explicit labelling.",
"We benchmark our grounded semantics on compositionality and zero-shot inference tasks, and we show that it provides better results and better generalizations than SOTA non-grounded models, such as word2vec and BERT.",
"Most SOTA models in NLP are only intra-textual .",
"Models based on distributional semantics such as standard and contextual word embeddings (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2019) learn representations of word meaning from patterns of co-occurrence in big corpora, with no reference to extra-linguistic entities.",
"While successful in a range of cases, this approach does not take into consideration two fundamental facts about language.",
"The first is that language is a referential device used to refer to extra-linguistic objects.",
"Scholarly work in psycholinguistics (Xu and Tenenbaum, 2000), formal semantics (Chierchia and McConnell-Ginet, 2000) and philosophy of language (Quine, 1960) show that (at least some aspects of) linguistic meaning can be represented as a sort of mapping between linguistic and extra-linguistic entities.",
"The second is Corresponding author.",
"that language may be learned based on its usage and that learners draw part of their generalizations from the observation of teachers' behaviour (Tomasello, 2003).",
"These ideas have been recently explored by work in grounded language learning, showing that allowing artificial agents to access human actions providing information on language meaning has several practical and scientific advantages (Yu et al., 2018; Chevalier-Boisvert et al., 2019).",
"While most of the work in this area uses toy worlds and synthetic linguistic data, we explore grounded language learning offering an example in which unsupervised learning is combined with a language-independent grounding domain in a real-world scenario.",
"In particular, we propose to use the interaction of users with a search engine as a setting for grounded language learning.",
"In our setting, users produce search queries to find products on the web: queries and clicks on search results are used as a model for the teacher-learner dynamics.",
"1. we provide a grounding domain composed of dense representations of extra-linguistic entities constructed in an unsupervised fashion from user data collected in the real world.",
"In particular, we learn neural representations for our domain of objects leveraging prod2vec (Grbovic et al., 2015): crucially, building the grounding domain does not require any linguistic input and it is independently justified in the target domain (Tagli-abue et al., 2020a).",
"In this setting, lexical denotation can also be learned without explicit labelling, as we use the natural interactions between the users and the search engine to learn a noisy denotation for the lexicon (Bianchi et al., 2021).",
"More specifically, we use DeepSets (Cotter et al., 2018) constructed from user behavioural signals as the extra-linguistic reference of words.",
"For instance, the denotation of the word shoes is constructed from the clicks produced by real users on products that are in fact shoes after having performed the query shoes in the search bar.",
"Albeit domain specific, the resulting language is significantly richer than languages from agent-based models of language acquisition (Sowik et al., 2020; Fitzgerald and Tagliabue, 2020), as it is based on 26k entities from the inventory of a real website.",
"2. We show that a dense domain built through unsupervised representations can support compositionality .",
"By replacing a discrete formal semantics of noun phrases (Heim and Kratzer, 1998) with functions learned over DeepSets, we test the generalization capability of the model on zero-shot inference: once we have learned the meaning of Nike shoes, we can reliably predict the meaning of Adidas shorts.",
"In this respect, this work represents a major departure from previous work on the topic, where compositional behavior is achieved through either discrete structures built manually (Lu et al., 2018; Krishna et al., 2016), or embeddings of such structures (Hamilton et al., 2018).",
"3. To the best of our knowledge, no dataset of this kind (product embeddings from shopping sessions and query-level data) is publicly available.",
"As part of this project, we release our code and a curated dataset, to broaden the scope of what researchers can do on the topic 1 .",
"Methodologically, our work draws inspiration from research at the intersection between Artificial Intelligence and Cognitive Sciences: as pointed out in recent papers (Bisk et al., 2020; Bender and Koller, 2020), extra-textual elements are crucial in advancing our comprehension of language acquisition and the notion of meaning.",
"While synthetic environments are popular ways to replicate child-like abilities (Kosoy et al., 2020; Hill et al., 2020), our work calls the attention on real-world Information Retrieval systems as experimental settings: cooperative systems such as search engines offer new ways to study language grounding, in between the oversimplification of toy models and the 1 Please refer to the project repository for additional information: https://github.com/coveooss/ naacl-2021-grounded-semantics .",
"daunting task of providing a general account of the semantics of a natural language.",
"The chosen IR domain is rich enough to provide a wealth of data and possibly to see practical applications, whereas at the same time it is sufficiently self-contained to be realistically mastered without human supervision.",
"Following our informal exposition in Section 1, we distinguish three components, which are learned separately in a sequence: learning a language-independent grounding domain, learning noisy denotation from search logs and finally learning functional composition.",
"While only the first model (prod2vec) is completely unsupervised, it is important to remember that the other learning procedures are only weakly supervised, as the labelling is obtained by exploiting an existing user-machine dynamics to provide noisy labels (i.e. no human labeling was necessary at any stage of the training process).",
"Learning a representation space .",
"We train product representation to provide a dense ontol-ogy for the (small) world we want our language to describe.",
"Those representations are known in product search as product embeddings (Grbovic et al., 2015): prod2vec models are word2vec models in which words in a sentence are replaced by products in a shopping session.",
"For this study, we pick CBOW (Mu et al., 2018) as our training algorithm, and select d = 24 as vector size, optimizing hyperparameters as recommended by Bianchi et al. (2020); similar to what happens with word2vec, related products (e.g. two pairs of sneakers) end up closer in the embedding space.",
"In the overall picture, the product space just constitutes a grounding domain, and re-using tried and tested (Tagliabue et al., 2020b) neural representations is an advantage of the proposed semantics.",
"Learning lexical denotation .",
"We interpret clicks on products in the search result page, after a query is issued, as a noisy pointing signal (Tagliabue and Cohn-Gordon, 2019), i.e., a map between text (shoes) and the target domain (a portion of the product space).",
"In other words, our approach can be seen as a neural generalization of model-theoretic semantics, where the extension of shoes is not a discrete set of objects, but a region in the grounding space.",
"Given a list of products clicked by shoppers after queries, we represent meaning through an order-invariant operation over product embeddings (average pooling weighted by empirical frequencies, similar to Yu et al. (2020)); following Cotter et al. (2018), we refer to this representation as a DeepSet .",
"Since words are now grounded in a dense domain, set-theoretic functions for NPs (Chierchia and McConnell-Ginet, 2000) need to be replaced with matrix composition, as we explain in the ensuing section.",
"Learning functional composition.",
"Our functional composition will come from the composition of DeepSet representations, where we want to learn a function f : DeepSet DeepSet DeepSet .",
"We address functional composition by means of two models from the relevant literature (Hartung et al., 2017): one, Additive Compositional Model ( ADM ), sums vectors together to build the final DeepSet representation.",
"The second model is instead a Matrix Compositional Model ( MDM ): given in input two DeepSets (for example, one for Nike and one for shoes) the function we learn as the form Mv + Nu , where the interaction between the two vectors is mediated through the learning of two matrices, M and N .",
"Since the output of these processes is always a DeepSet, both models can be recursively composed, given the form of the function f .",
"Data.",
"We obtained catalog data, search logs and detailed behavioral data (anonymized product interactions) from a partnering online shop, Shop X .",
"Shop X is a mid-size Italian website in the sport apparel vertical 2 .",
"Browsing and search data are sampled from one season (to keep the underlying catalog consistent), resulting in a total of 26 , 057 distinct product embeddings, trained on more than 700 , 000 anonymous shopping sessions.",
"To prepare the final dataset, we start from comparable literature (Baroni and Zamparelli, 2010) and the analysis of linguistic and browsing behavior in Shop X , and finally distill a set of NP queries for our compositional setting.",
"In particular, we build a rich, but tractable set by excluding queries that are too rare (<5 counts), queries with less than three different products clicked, and queries for which no existing product embedding is present.",
"Afterwards, we zoom into NP-like constructions, by inspecting which features are frequently used in the query log (e.g. shoppers 2 For convenience of exposition, all queries and examples cited in the paper are translated into English. search for sport, not colors), and matching logs and NPs to produce the final set.",
"Based on our experience with dozens of successful deployments in the space, NPs constitute the vast majority of queries in product search: thus, even if our intent is mainly theoretical, we highlight that the chosen types overlap significantly with real-world frequencies in the relevant domain.",
"Due to the power-law distribution of queries, one-word queries are the majority of the dataset (60%); to compensate for sparsity we perform data augmentation for rare compositional queries (e.g. Nike running shoes): after we send a query to the existing search engine to get a result set, we simulate n = 500 clicks by drawing products from the set with probability proportional to their overall popularity (Bianchi et al., 2021) 3 .",
"The final dataset consists of 104 activity + sortal 4 queries running shoes ; 818 brand + sortal queries Nike shoes , and 47 gender + sortal queries women shoes; our testing data consists of 521 brand + activity + sortal (BAS) triples, 157 gender + activity + sortal (GAS) triples, 406 brand + gender + activity + sortal (BGAS) quadruples.",
"5 Tasks and Metrics.",
"Our evaluation metrics are meant to compare the real semantic representation of composed queries (Nike shoes) with the one predicted by the tested models: in the case of the proposed semantics, that means evaluating how it predicts the DeepSet representation of Nike shoes, given the representation of shoes and Nike.",
"Comparing target vs predicted representations is achieved by looking at the nearest neighbors of the predicted DeepSet, as intuitively complex queries behave as expected only if the two representations share many neighbors.",
"For this reason, quantitative evaluation is performed using two well-known ranking metrics: nDCG and Jac-3 Since the only objects users can click on are those returned by the search box, query representation may in theory be biased by the idiosyncrasies of the engine.",
"In practice, we confirmed that the embedding quality is stable even when a sophisticated engine is replaced by simple Boolean queries over TF-IDF vectors, suggesting that any bias of this sort is likely to be very small and not important for the quality of the compositional semantics.",
"4 Sortal refers to a type of object: shoes and polo are sortals, while black and Nike are not; activity is the sport activity for a product, e.g. tennis for a racket.",
"5 Dataset size for our compositional tests is in line with intra-textual studies on compositionality (Baroni and Zam-parelli, 2010; Rubinstein et al., 2015); moreover, the lexical atoms in our study reflect a real-world distribution that is independently generated, and not frequency on general English corpora.",
"card (Vasile et al., 2016; Jaccard, 1912).",
"We focus on two tasks: leave-one-brand-out ( LOBO ) and zero-shot ( ZT ).",
"In LOBO , we train models over the brand + sortal queries but we exclude from training a specific brand (e.g., Nike); in the test phase, we ask the models to predict the DeepSet for a seen sortal and an unseen brand.",
"For ZT we train models over queries with two terms (brand + sortal, activity + sortal and gender + sortal) and see how well our semantics generalizes to compositions like brand + activity + sortal; the complex queries that we used at test time are new and unseen.",
"Models.",
"We benchmark our semantics (tagged as p in the results table) based on ADM and MDM against three baselines: one is another grounded model, where prod2vec embeddings are replaced by image embeddings (tagged as v in the results table), to test the representational capabilities of the chosen domain against a well-understood modality image vectors are extracted with ResNet-18, taking the average pooling of the last layer to obtain 512-dimensional vectors; two are intra-textual models, where word embeddings are obtained from state-of-the-art distributional models, BERT ( UM ) (the Umberto model 6 ) and Word2Vec ( W2V ), trained on textual metadata from Shop X catalog.",
"For UM , we extract the 768 dimensional representation from the [CLS] embedding of the 12th layer of the query and learn a linear projection 6 https://huggingface.co/Musixmatch/ umberto-commoncrawl-cased-v1 to the product-space (essentially, training to predict the DeepSet representation from text).",
"The generalization to different and longer queries for UM comes from the embeddings of the queries themselves.",
"Instead, for W2V , we learn a compositional function that concatenates the two input DeepSets, projects them to 24 dimensions, pass them through a Rectified Linear Unit, and finally project them to the product space.",
"7 We run every model 15 times and report average results; RMSProp is the chosen optimizer, with a batch size of 200, 20% of the training set as validation set and early stopping with patience = 10 .",
"Results.",
"Table 1 shows the results on LOBO , with grounded models outperforming intra-textual ones, and prod2vec semantics (tagged as p ) beating all baselines.",
"Table 2 reports performance for different complex query types in the zero-shot inference task: grounded models are superior, with the proposed model outperforming baselines across all types of queries.",
"MDM typically outperforms ADM as a composition method, except for GAS , where all models suffer from gender sparsity; in that case, the best model is ADM , i.e. the one without an implicit bias from the training.",
"In general, grounded models outperform intra-textual models, often by a wide margin, and prod2vec-based semantics outperforms image-based semantics, proving that the 7 First results with the same structure as ADM and MDM showed very low performances, thus we made the architecture more complex and non-linear.",
"chosen latent grounding domain supports rich representational capabilities.",
"The quantitative evaluations were confirmed by manually inspecting nearest neighbors for predicted DeepSets in the LOBO setting as an example, MDM predicts for Nike shoes a DeepSet that has (correctly) all shoes as neighbors in the space, while, for the same query, UM suggests shorts as the answer.",
"Figure 1 shows some examples of compositions obtained by the MDM model on the LOBO task; the last example shows that the model, given in input the query Nike shirt, does not reply with a shirt, but with a Nike jacket: even if the correct meaning of shirt was not exactly captured in this contest, the model ability to identify a similar item is remarkable.",
"In the spirit of Bisk et al. (2020), we argued for grounding linguistic meaning in artificial systems through experience.",
"In our implementation, all the important pieces domain, denotation, composition are learned from behavioral data.",
"By grounding meaning in (a representation of) objects and their properties, the proposed noun phrase semantics can be learned bottom-up like distributional models, but can generalize to unseen examples, like traditional symbolic models: the implicit, dense structure of the domain (e.g. the relative position in the space of Nike products and shoes ) underpins the explicit, discrete structure of queries picking objects in that domain (e.g. Nike shoes) in other words, compositionality is an emergent phenomenon.",
"While encouraging, our results are still preliminary: first, we plan on extending our semantics, starting with Boolean operators (e.g. shoes NOT Nike); second, we plan to improve our representational capabilities, either through symbolic knowledge or more discerning embedding strategies; third, we wish to explore transformer-based architectures (Lee et al., 2019) as an alternative way to produce set-like representations.",
"We conceived our work as a testable application of a broader methodological stance, loosely following the agenda of the child-as-hacker (Rule et al., 2020) and child-as-scientist (Gopnik, 2012) programs.",
"Our search-engine-as-a-child metaphor may encourage the use of abundant real-world search logs to test computational hypotheses about language learning inspired by cognitive sciences (Carey and Bartlett, 1978).",
"We wish to thank Christine Yu, Patrick John Chia and the anonymous reviewers for useful comments on a previous draft.",
"Federico Bianchi is a mem-ber of the Bocconi Institute for Data Science and Analytics (BIDSA) and the Data and Marketing Insights (DMI) unit.",
"User data has been collected in the process of providing business services to the clients of Coveo : user data is collected and processed in an anonymized fashion, in full compliance with existing legislation (GDPR).",
"In particular, the target dataset uses only anonymous uuids to label sessions and, as such, it does not contain any information that can be linked to individuals."
] | [
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"method",
"method",
"method",
"objective",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain"
] |
[
"Sequence labeling is a fundamental task for a range of natural language processing problems.",
"When used in practice, its performance is largely influenced by the annotation quality and quantity, and meanwhile, obtaining ground truth labels is often costly.",
"In many cases, ground truth labels do not exist, but noisy annotations or annotations from different domains are accessible.",
"In this paper, we propose a novel framework Consensus Network (CONNET ) that can be trained on annotations from multiple sources (e.g., crowd annotation, cross-domain data).",
"It learns individual representation for every source and dynamically aggregates source-specific knowledge by a context-aware attention module.",
"Finally, it leads to a model reflecting the agreement (con-sensus) among multiple sources.",
"We evaluate the proposed framework in two practical settings of multi-source learning: learning with crowd annotations and unsupervised cross-domain model adaptation.",
"Extensive experimental results show that our model achieves significant improvements over existing methods in both settings.",
"We also demonstrate that the method can apply to various tasks and cope with different encoders.",
"1 1 Introduction Sequence labeling is a general approach encompassing various natural language processing (NLP) tasks including part-of-speech (POS) tagging (Ratnaparkhi, 1996), word segmentation (Low et al., 2005), and named entity recognition (NER) (Nadeau and Sekine, 2007).",
"Typically, existing methods follow the supervised learning paradigm, and require high-quality annotations.",
"While gold standard annotation is expensive and The first two authors contributed equally.",
"time-consuming, imperfect annotations are relatively easier to obtain from crowdsourcing (noisy labels) or other domains (out-of-domain).",
"Despite their low cost, such supervision usually can be obtained from different sources, and it has been shown that multi-source weak supervision has the potential to perform similar to gold annotations (Ratner et al., 2016).",
"Specifically, we are interested in two scenarios: 1) learning with crowd annotations and 2) unsupervised cross-domain model adaptation .",
"Both situations suffer from imperfect annotations, and benefit from multiple sources.",
"Therefore, the key challenge here is to aggregate multi-source imperfect annotations for learning a model without knowing the underlying ground truth label sequences in the target domain.",
"Our intuition mainly comes from the phenomenon that different sources of supervision have different strengths and are more proficient with distinct situations.",
"Therefore they may not keep consistent importance during aggregating supervisions, and aggregating multiple sources for a specific input should be a dynamic process that depends on the sentence context.",
"To better model this nature, we need to (1) explicitly model the unique traits of different sources when training and (2) find best suitable sources for generalizing the learned model on unseen sentences.",
"In this paper, we propose a novel framework, named Consensus Network (CONNET ), for sequence labeling with multi-source supervisions.",
"We represent the annotation patterns as different biases of annotators over a shared behavior pattern.",
"Both annotator-invariant patterns and annotator-specific biases are modeled in a decoupled way.",
"The first term comes through sharing part of low-level model parameters in a multi-task learning schema.",
"For learning the biases, we decouple them from the model as the transformations Figure 1: Illustration of the task settings for the two applications in this work :",
"on top-level tagging model parameters, such that they can capture the unique strength of each annotator.",
"With such decoupled source representations, we further learn an attention network for dynamically assigning the best sources for every unseen sentence through composing a transformation that represents the agreement among sources (consen-sus).",
"Extensive experimental results in two scenarios show that our model outperforms strong baseline methods, on various tasks and with different encoders.",
"CONNET achieves state-of-the-art performance on real-world crowdsourcing datasets and improves significantly in unsupervised cross-domain adaptation tasks over existing works.",
"There exists three threads of related work with this paper, which are sequence labeling, crowdsourcing and unsupervised domain adaptation.",
"Neural Sequence Labeling.",
"Traditional approaches for sequence labeling usually need significant efforts in feature engineering for graphical models like conditional random fields (CRFs) (Lafferty, 2001).",
"Recent research efforts in neural network models have shown that end-to-end learning like convolutional neural networks (CNNs) (Ma and Hovy, 2016a) or bidirectional long short-term memory (BLSTMs) (Lam-ple et al., 2016) can largely eliminate human-crafted features.",
"BLSTM-CRF models have achieved promising performance (Lample et al., 2016) and are used as our base sequence tagging model in this paper.",
"Crowd-sourced Annotation.",
"Crowd-sourcing has been demonstrated to be an effective way of fulfilling the label consumption of neural models (Guan et al., 2017; Lin et al., 2019).",
"It collects annotations with lower costs and a higher speed from non-expert contributors but suffers from some degradation in quality.",
"Dawid and Skene (1979) proposes the pioneering work to aggregate crowd annotations to estimate true labels, and Snow et al. (2008) shows its effectiveness with Amazon's Mechanical Turk system.",
"Later works (Dempster et al., 1977; Dredze et al., 2009; Raykar et al., 2010) focus on Expectation-Maximization (EM) algorithms to jointly learn the model and annotator behavior on classification.",
"Recent research shows the strength of multitask framework in semi-supervised learning (Lan et al., 2018; Clark et al., 2018), cross-type learning (Wang et al., 2018), and learning with entity triggers (Lin et al., 2020).",
"Nguyen et al. (2017); Rodrigues and Pereira (2018); Simpson et al. (2020) regards crowd annotations as noisy gold labels and constructs crowd components to model annotator-specific bias which were discarded during the inference process.",
"It is worth mentioning that, it has been found even for human curated annotations, there exists certain label noise that hinders the model performance (Wang et al., 2019).",
"Unsupervised Domain Adaptation.",
"Unsupervised cross-domain adaptation aims to transfer knowledge learned from high-resource domains (source domains) to boost performance on low-resource domains (target domains) of interests such as social media messages (Lin et al., 2017).",
"Different from supervised adaptation (Lin and Lu, 2018), we assume there is no labels at all for target corpora.",
"Saito et al. (2017) and Ruder and Plank (2018) explored bootstrapping with multitask tri-training approach, which requires unlabeled data from the target domain.",
"The method is developed for one-to-one domain adaptation and does not model the differences among multiple source domains.",
"Yang and Eisenstein (2015) represents each domain with a vector of metadata domain attributes and uses domain vectors to train the model to deal with domain shifting, which is highly dependent on prior domain knowledge.",
"(Ghifary et al., 2016) uses an auto-encoder method by jointly training a predictor for source labels, and a decoder to reproduce target input with a shared encoder.",
"The decoder acts as a normalizer to force the model to learn shared knowledge between source and target domains.",
"Adversarial penalty can be added to the loss function to make models learn domain-invariant feature only (Fernando et al., 2015; Long et al., 2014; Ming Harry Hsu et al., 2015).",
"However, it does not exploit domain-specific information.",
"We formulate the multi-source sequence labeling problem as follows.",
"Given K sources of supervision, we regard each source as an imperfect annotator (non-expert human tagger or models trained in related domains).",
"For the k -th source data set S ( k ) = { ( x ( k ) i , y ( k ) i ) } m k i =1 , we denote its i -th sentence as x ( k ) i which is a sequence of tokens: x ( k ) i = ( x ( k ) i, 1 , , x ( k ) i,N ) .",
"The tag sequence of the sentence is marked as y ( k ) i = { y ( k ) i,j } .",
"We define the sentence set of each annotators as X ( k ) = { x ( k ) i } m k i =1 , and the whole training domain as the union of all sentence sets: X = (cid:83) ( K ) k =1 X ( k ) .",
"The goal of the multi-source learning task is to use such imperfect annotations to train a model for predicting the tag sequence y for any sentence x in a target corpus T .",
"Note that the target corpus T can either share the same distribution with X (Application I) or be significantly different (Application II).",
"In the following two subsections, we formulate two typical tasks in this problem as shown in Fig. 1.",
"Application I: Learning with Crowd Annotations.",
"When learning with crowd-sourced data, we regard each worker as an imperfect annotator ( S ( k ) ), who may make mistakes or skip sentences in its annotations.",
"Note that in this setting, different annotators tag subsets of the same given dataset ( X ), and thus we assume there are no input distribution shifts among X ( k ) .",
"Also, we only test sentences in the same domain such that the distribution in target corpus T is the same as well.",
"That is, the marginal distribution of target corpus PT ( x ) is the same with that for each individual source dataset, i.e. PT ( x ) = P k ( x ) .",
"However, due to imperfectness of the annotations in each source, P k ( y | x ) is shifted from the underlying truth P ( y | x ) (illustrated in the top-left part of Fig. 1).",
"The multi-source learning objective here is to learn a model PT ( y | x ) for supporting inference on any new sentences in the same domain.",
"Application II: Unsupervised Cross-Domain Model Adaptation.",
"We assume there are available annotations in several source domains, but not in an unseen target domain.",
"We assume that the input distributions P ( x ) in different source domains X ( k ) vary a lot, and such annotations can hardly be adapted for training a target domain model.",
"That is, the prediction distribution of each domain model ( P k ( y | x ) ) is close to the underlying truth distribution ( P ( y | x ) ) only when x X ( k ) .",
"For target corpus sentences x T , such a source model P k ( y | x ) again differs from underlying ground truth for the target domain PT ( y | x ) and can be seen as an imperfect annotators.",
"Our objective in this setting is also to jointly model PT ( y , x ) while noticing that there are significant domain shifts between T and any other X ( k ) .",
"In this section, we present our two-phase framework CONNET for multi-source sequence labeling.",
"As shown in Figure 2, our proposed framework first uses a multi-task learning schema with a special objective to decouple annotator representations as different parameters of a transformation around CRF layers.",
"This decoupling phase (Sec-tion 4.2) is for decoupling the model parameters into a set of annotator-invariant model parameters and a set of annotator-specific representations.",
"Secondly, the dynamic aggregation phase (Sec-tion 4.3) learns to contextually utilize the annotator representations with a lightweight attention mechanism to find the best suitable transformation for each sentence, so that the model can achieve a context-aware consensus among all sources.",
"The inference process is described in Section 4.4.",
"Many recent sequence labeling frameworks (Ma and Hovy, 2016b; Misawa et al., 2017) share a very basic structure: a bidirectional LSTM network followed by a CRF tagging layer (i.e. BLSTM-CRF).",
"The BLSTM encodes an input sequence x = { x 1 , x 2 , . . . , x n } into a sequence of hidden state vectors h 1: n .",
"The CRF takes as input the hidden state vectors and computes an emission score matrix U R n L where L is the size of tag set.",
"It also maintains a trainable transition matrix M RL L .",
"We can consider U i,j is the score of labeling the tag with id j { 1 , 2 , ..., L } for i th source ID () sentence ( 01 ) sentence () Attention () prediction (8 (1) ) prediction (8) Weighted Voting Consensus 0 BLSTM CRF BLSTM CRF Annotator{ (1) } Decoupling Phase Aggregation Phase Figure 2: Overview of the CONNET framework.",
"The CRF further computes the score s for a predicted tag sequence y = { y 1 , y 2 , ..., y k } as s ( x , y ) = T (cid:88) t =1 ( U t,y t + M y t 1 ,y t ) , (1) and then tag sequence y follows the conditional distribution P ( y | x ) = exp s ( x , y ) (cid:80) y Y x exp s ( x , y ) .",
"(2) 4.2 The Decoupling Phase: Learning annotator representations For decoupling annotator-specific biases in annotations, we represent them as a transformation on emission scores and transition scores respectively.",
"Specifically, we learn a matrix A ( k ) RL L for each imperfect annotator k and apply this matrix as transformation on U and M as follows: s ( k ) ( x , y ) = T (cid:88) t =1 (cid:16) ( UA ( k ) ) t,y t + ( MA ( k ) ) y t 1 ,y t (cid:17) .",
"From this transformation, we can see that the original score function s in Eq.",
"1 becomes an source-specific computation.",
"The original emission and transformation score matrix U and M are still shared by all the annotators, while they both are transformed by the matrix A ( k ) for k -th annotator.",
"While training the model parameters in this phase, we follow a multi-task learning schema.",
"That is, we share the model parameters for BLSTM and CRF (including W , b , M ), while updating A ( k ) only by examples in S k = {X ( k ) , Y ( k ) } .",
"The learning objective is to minimize the negative log-likelihood of all source annotations: L = log K (cid:88) k =1 |X ( k ) | (cid:88) i =1 P ( y ( k ) i | x ( k ) i ) , (4) P ( y ( k ) i | x ( k ) i ) = exp s ( k ) ( x ( k ) i , y ( k ) i ) (cid:80) y (cid:48) exp s ( k ) ( x , y (cid:48) ) .",
"The assumption on the annotation representation A ( k ) is that it can model the pattern of annotation bias.",
"Each annotator can be seen as a noisy version of the shared model.",
"For the k -th annotator, A ( k ) models noise from labeling the current word and transferring from the previous label.",
"Specifically, each entry A ( k ) i,j captures the probability of mistakenly labeling i -th tag to j -th tag.",
"In other words, the base sequence labeling model in Sec. 4.1 learns the basic consensus knowledge while annotator-specific components add their understanding to predictions.",
"In the second phase, our proposed network learns a context-aware attention module for a consensus representation supervised by combined predictions on the target data.",
"For each sentence in target data T , these predictions are combined by weighted voting.",
"The weight of each source is its normalized F 1 score on the training set.",
"Through weighted voting on such augmented labels over all source sentences X , we can find a good approximation of underlying truth labels.",
"For better generalization and higher speed, an attention module is trained to estimate the relevance of each source to the target under the supervision of generated labels.",
"Specifically, we compute the sentence embedding by concatenating the last hidden states of the forward LSTM and the backward LSTM, i.e. h ( i ) = [ h ( i ) T ; h ( i ) 0 ] .",
"The attention module inputs the sentence embedding and outputs a normalized weight for each source: q i = softmax ( Qh ( i ) ) , where Q RK 2 d .",
"where d is the size of each hidden state h ( i ) .",
"Source-specific matrices { A ( k ) } Kk =1 are then aggregated into a consensus representation A i for sentence x i X by A i = K (cid:88) k =1 q i,k A ( k ) .",
"In this way, the consensus representation contains more information about sources which are more related to the current sentence.",
"It also alleviates the contradiction problem among sources, because it could consider multiple sources of different emphasis.",
"Since only an attention model with weight matrix Q is required to be trained, the amount of computation is relatively small.",
"We assume the base model and annotator representations are well-trained in the previous phase.",
"The main objective in this phase is to learn how to select most suitable annotators for the current sentence.",
"CONNET learns parameters through two phases described above.",
"In the decoupling phase, each instance from source S k is used for training the base sequence labeling model and its representation A ( k ) .",
"In the aggregation phase, we use aggregated predictions from the first phase to learn a lightweight attention module.",
"For each instance in the target corpus x i T , we calculate its embedding h i from BLSTM hidden states.",
"With these sentence embeddings, the context-aware attention module assigns weight q i to each source and dynamically aggregates source-specific representations { A ( k ) } for inferring y i .",
"In the inference process, only the consolidated consensus matrix A i is applied to the base sequence learning model.",
"In this way, more specialist knowledge helps to deal with more complex instances.",
"The proposed model can be applied to two practical multi-sourcing settings: learning with crowd annotations and unsupervised cross-domain model adaptation.",
"In the crowd annotation learning setting, the training data of the same domain is annotated by multiple noisy annotators, and each annotator is treated as a source.",
"In the decoupling phase, the model is trained on noisy annotations, and in the aggregation phase, it is trained with combined predictions on the training set.",
"In the cross-domain setting, the model has access to unlabeled training data of the target domain and clean labeled data of multiple source domains.",
"Each domain is treated as a source.",
"In the decoupling phase, the model is trained on source domains, and in the aggregation phase, the model is trained on combined predictions on the training data of the target domain.",
"Our framework can also extend to new tasks other than sequence labeling and cope with different encoders.",
"We will demonstrate this ability in experiments.",
"Our method is also incorporated as a feature for controlling the quality of crowd-annotation in annotation frameworks such as AlpacaTag (Lin et al., 2019) and LEAN-LIFE (Lee et al., 2020).",
"We evaluate CONNET in the two aforementioned settings of multi-source learning: learning with crowd annotations and unsupervised cross-domain model adaptation.",
"Additionally, to demonstrate the generalization of our framework, we also test our method on sequence labeling with transformer encoder in Appendix B and text classification with MLP encoder in Section 5.5.",
"Crowd-Annotation Datasets.",
"We use crowd-annotation datasets based on the 2003 CoNLL shared NER task (Tjong Kim Sang and De Meul-der, 2003).",
"The real-world datasets, denoted as AMT, are collected by Rodrigues et al. (2014) using Amazon's Mechanical Turk where F1 scores of annotators against the ground truth vary from 17.60% to 89.11%.",
"Since there is no development set in AMT, we also follow Nguyen et al. (2017) to use the AMT training set and CoNLL 2003 development and test sets, denoted as AMTC.",
"Overlapping sentences are removed in the training set, which is ignored in that work.",
"Additionally, we construct two sets of simulated datasets to investigate the quality and quantity of annotators.",
"To simulate the behavior of a non-expert annotator, a CRF model is trained on a small subset of training data and generates predictions on the whole set.",
"Because of the limited size of training data, each model would have a bias to certain patterns.",
"Cross-Domain Datasets.",
"In this setting, we investigate three NLP tasks: POS tagging, NER and text classification.",
"For POS tagging task, we use the GUM portion (Zeldes, 2017) of Universal Dependencies (UD) v2.3 corpus with 17 tags and 7 Methods AMTC AMT Precision(%) Recall(%) F1-score(%) Precision(%) Recall(%) F1-score(%) CONCAT-SLM 85.95 ( 1.00) 57.96( 0.26) 69.23( 0.13) 91.12 ( 0.57) 55.41( 2.66) 68.89( 1.92) MVT-SLM 84.78( 0.66) 62.50( 1.36) 71.94( 0.66) 86.96( 1.22) 58.07( 0.11) 69.64( 0.31) MVS-SLM 84.76( 0.50) 61.95( 0.32) 71.57( 0.04) 86.95( 1.12) 56.23( 0.01) 68.30( 0.33) DS-SLM (Nguyen et al., 2017) 72.30 61.17 66.27 -HMM-SLM (Nguyen et al., 2017) 76.19 66.24 70.87 -MTL-MVT (Wang et al., 2018) 81.81( 2.34) 62.51( 0.28) 70.87( 1.06) 88.88( 0.25) 65.04( 0.80) 75.10( 0.44) MTL-BEA (Rahimi et al., 2019) 85.72( 0.66) 58.28( 0.43) 69.39( 0.52) 77.56( 2.23) 67.23( 0.72) 72.01( 0.85) CRF-MA (Rodrigues et al., 2014) --49.40 85.60 62.60 Crowd-Add (Nguyen et al., 2017) 85.81( 1.53) 62.15( 0.18) 72.09( 0.42) 89.74( 0.10) 64.50( 1.48) 75.03( 1.02) Crowd-Cat (Nguyen et al., 2017) 85.02( 0.98) 62.73( 1.10) 72.19( 0.37) 89.72( 0.47) 63.55( 1.20) 74.39( 0.98) CL-MW (Rodrigues and Pereira, 2018) --66.00 59.30 62.40 CONNET (Ours) 84.11( 0.71) 68.61 ( 0.03) 75.57 ( 0.27) 88.77( 0.25) 72.79( 0.04) 79.99 ( 0.08) Gold (Upper Bound) 89.48( 0.32) 89.55( 0.06) 89.51( 0.21) 92.12( 0.31) 91.73( 0.09) 91.92( 0.21) Table 1: Performance on real-world crowd-sourced NER datasets.",
"domains: academic, bio, fiction, news, voyage, wiki, and interview.",
"For NER task, we select the English portion of the OntoNotes v5 corpus (Hovy et al., 2006).",
"The corpus is annotated with 9 named entities with data from 6 domains: broadcast conversation (bc), broadcast news (bn), magazine (mz), newswire (nw), pivot text (pt), telephone conversation (tc), and web (web).",
"Multi-Domain Sentiment Dataset (MDS) v2.0 (Blitzer et al., 2007) is used for text classification, which is built on Amazon reviews from 4 domains: books, dvd, electronics, and kitchen.",
"Since the dataset only contains word frequencies for each review without raw texts, we follow the setting in Chen and Cardie (2018) considering 5,000 most frequent words and use the raw counts as the feature vector for each review.",
"For sequence labeling tasks, we follow Liu et al. (2018) to build the BLSTM-CRF architecture as the base model.",
"The dimension of character-level, word-level embeddings and LSTM hidden layer are set as 30 , 100 and 150 respectively.",
"For text classification, each review is represented as a 5000 -d vector.",
"We use an MLP with a hidden size of 100 for encoding features and a linear classification layer for predicting labels.",
"The dropout with a probability of 0 .",
"5 is applied to the nonrecurrent connections for regularization.",
"The network parameters are updated by stochastic gradient descent (SGD).",
"The learning rate is initialized as 0 .",
"015 and decayed by 5% for each epoch.",
"The training process stops early if no improvements in 15 continuous epochs and selects the best model on the development set.",
"For the dataset without a development set, we report the performance on the 50 -th epoch.",
"For each experiment, we report the average performance and standard variance of 3 runs with different random initialization.",
"We compare our models with multiple baselines, which can be categorized in two groups: wrapper methods and joint models.",
"To demonstrate the theoretical upper bound of performance, we also train the base model using ground-truth annotations in the target domain ( Gold ).",
"A wrapper method consists of a label aggregator and a deep learning model.",
"These two components could be combined in two ways: (1) aggregating labels on crowd-sourced training set then feeding the generated labels to a Sequence Labeling Model ( SLM ) (Liu et al., 2017); (2) feeding multi-source data to a Multi-Task Learning ( MTL ) (Wang et al., 2018) model then aggregating multiple predicted labels.",
"We investigate multiple label aggregation strategies.",
"CONCAT considers all crowd annotations as gold labels.",
"MVT does majority voting on the token level, i.e. , the majority of labels { y ki,j } is selected as the gold label for each token x i,j .",
"MVS is conducted on the sequence level, addressing the problem of violating Begin/In/Out (BIO) rules.",
"DS (Dawid and Skene, 1979), HMM (Nguyen et al., 2017) and BEA (Rahimi et al., 2019) induce consensus labels with probability models.",
"In contrast with wrapper methods, joint models incorporate multi-source data within the structure of sequential taggers and jointly model all individual annotators.",
"CRF-MA models CRFs with Multiple Annotators by EM algorithm (Rodrigues et al., 2014).",
"Nguyen et al. (2017) augments the LSTM 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 Overall PER ORG LOC MISC ( a ) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 Annotator ID snt1(PER) snt2(ORG) snt3(LOC) snt4(MISC) ( b ) 0.0 0.2 0.4 0.6 0.8 0.5 0.0 0.5 1.0 1.5 Figure 3: Visualizations of",
"architecture with crowd vectors.",
"These crowd components are element-wise added to tags scores ( Crowd-Add ) or concatenated to the output of hidden layer ( Crowd-Cat ).",
"These two methods are the most similar to our decoupling phase.",
"We implemented them and got better results than reported.",
"CL-MW applies a crowd layer to a CNN-based deep learning framework (Rodrigues and Pereira, 2018).",
"Tri-Training uses bootstrapping with multi-task Tri-Training approach for unsupervised one-to-one domain adaptation (Saito et al., 2017; Ruder and Plank, 2018).",
"Performance on real-world datasets.",
"Tab.",
"1 shows the performance of aforementioned methods and our CONNET on two real-world datasets, i.e. AMT and AMTC 2 .",
"We can see that CONNET outperforms all other methods on both datasets significantly on F 1 score, which shows the effectiveness of dealing with noisy annotations for higher-quality labels.",
"Although CONCAT-SLM achieves the highest precision, it suffers from low recall.",
"Most existing methods have the high-precision but low-recall problem.",
"One possible reason is that they try to find the latent ground truth and throw away illuminating annotator-specific information.",
"So only simple mentions can be clas-sified with great certainty while difficult mentions fail to be identified without sufficient knowledge.",
"In comparison, CONNET pools information from all annotations and focus on matching knowledge to make predictions.",
"It makes the model be able to identify more mentions and get a higher recall.",
"Case study.",
"It is enlightening to analyze whether the model decides the importance of annotators given a sentence.",
"Fig. 3 visualizes test F1 score of all annotators, and attention weights q i in Eq.",
"6 2 We tried our best to re-implement the baseline methods for all datasets, and left the results blank when the re-implementation is not showing consistent results as in the original papers.",
"for 4 sampled sentences containing different entity types.",
"Obviously, the 2 nd sample sentence with ORG has higher attention weights on 1 st, 5 th and 33 rd annotator who are best at labeling ORG .",
"More details and cases are shown in Appendix A.1.",
"Ablation study.",
"We also investigate multiple variants of two phases on AMT dataset, shown in Fig. 4. We explore 3 approaches to incorporate source-specific representation in the decoupling phase (DP).",
"CRF means the traditional approach as Eq.",
"1 while DP(1+2) is for our method as Eq.",
"3. DP(1) only applies source representations A ( k ) to the emission score U while DP(2) only transfers the transition matrix M .",
"We can observe from the result that both variants can improve the result.",
"The underlying model keeps more consensus knowledge if we extract annotator-specific bias on sentence encoding and label transition.",
"We also compare 4 methods of generating supervision targets in the aggregation phase (AP).",
"OMV uses ma-Task & Corpus Multi-Domain POS Tagging: Universal Dependencies v2.3 GUM Target Domain academic bio fiction news voyage wiki interview AVG Acc (%) CONCAT 92.68 92.12 93.05 90.79 92.38 92.32 91.44 92.11( 0.07) MTL-MVT (Wang et al., 2018) 92.42 90.59 91.16 89.69 90.75 90.29 90.21 90.73( 0.29) MTL-BEA (Rahimi et al., 2019) 92.87 91.88 91.90 91.03 91.67 91.31 91.29 91.71( 0.06) Crowd-Add (Nguyen et al., 2017) 92.58 91.91 91.50 90.73 91.74 90.47 90.61 91.36( 0.14) Crowd-Cat (Nguyen et al., 2017) 92.71 91.71 92.48 91.15 92.35 91.97 91.22 91.94( 0.08) Tri-Training (Ruder and Plank, 2018) 92.84 92.15 92.51 91.40 92.35 91.29 91.00 91.93( 0.01) CONNET 92.97 92.25 93.15 91.06 92.52 92.74 91.66 92.33( 0.17) Gold (Upper Bound) 92.64 93.10 93.15 91.33 93.09 94.67 92.20 92.88( 0.14) Task & Corpus Multi-Domain NER: OntoNotes v5.0 English Target Domain nw wb bn tc bc mz AVG F 1 (%) CONCAT 68.23 32.96 77.25 53.66 72.74 62.61 61.24( 0.92) MTL-MVT (Wang et al., 2018) 65.74 33.25 76.80 53.16 69.77 63.91 60.44( 0.45) MTL-BEA (Rahimi et al., 2019) 58.33 32.62 72.47 47.83 48.99 52.68 52.15( 0.58) Crowd-Add (Nguyen et al., 2017) 45.76 32.51 50.01 26.47 52.94 28.12 39.30( 4.44) Crowd-Cat (Nguyen et al., 2017) 68.95 32.61 78.07 53.41 74.22 65.55 62.14( 0.89) Tri-Training (Ruder and Plank, 2018) 69.68 33.41 79.62 47.91 70.85 68.53 61.67( 0.31) CONNET 71.31 34.06 79.66 52.72 71.47 70.71 63.32 ( 0.81) Gold (Upper Bound) 84.70 46.98 83.77 52.57 73.05 70.58 68.61( 0.64) Task & Corpus Multi-Domain Text Classification: MDS Target Domain books dvd electronics kitchen AVG Acc (%) CONCAT 75.68 77.02 81.87 83.07 79.41( 0.02) MTL-MVT (Wang et al., 2018) 74.92 74.43 79.33 81.47 77.54( 0.06) MTL-BEA (Rahimi et al., 2019) 74.88 74.60 79.73 82.82 78.01( 0.28) Crowd-Add (Nguyen et al., 2017) 75.72 77.35 81.25 82.90 79.30( 9.21) Crowd-Cat (Nguyen et al., 2017) 76.45 77.37 81.22 83.12 79.54( 0.25) Tri-Training (Ruder and Plank, 2018) 77.58 78.45 81.95 83.17 80.29( 0.02) CONNET 78.75 81.06 84.12 83.45 81.85 ( 0.04) Gold (Upper Bound) 78.78 82.11 86.21 85.76 83.22( 0.19) Table 2: Performance on cross-domain data The best score (except the Gold) in each column that is significantly ( p < 0 . 05 ) better than the second best is marked bold , while those are better but not significantly are underlined .",
"jority voting of original annotations, while PMV substitutes them with model prediction learned from DP.",
"AMV extends the model by using all prediction, while AWV uses majority voting weighted by each annotator's training F 1 score.",
"The results show the effectiveness of AWV , which could augment training data and well approximate the ground truth to supervise the attention module for estimating the expertise of annotator on the current sentence.",
"We can also infer labels on the test set by conducting AWV on predictions of the underlying model with each annotator-specific components.",
"However, it leads to heavy computation-consuming and unsatisfying performance, whose test F 1 score is 77 .",
"35( 0 . 08) .",
"We can also train a traditional BLSTM-CRF model with the same AMV labels.",
"Its result is 78 .",
"93( 0 . 13) , which is lower than CONNET and show the importance of extracted source-specific components.",
"origin train set into z folds and each fold could be used to train a CRF model whose reliability could be represented as r = 1 /z assuming a model with less training data would have stronger bias and less generalization.",
"We tried 5 settings where z = { 5 , 10 , 15 , 30 , 50 } and randomly select 5 folds for each setting.",
"When the reliability level is too low, i.e. 1 / 50 , only the base model is used for prediction without annotator representations.",
"Shown in Fig.",
"5(a), CONNET achieves significant improvements over MVT-SLM and competitive performance as Crowd-Cat , especially when annotators are less reliable.",
"Regarding the annotator quantity, we split the train set into 50 subsets ( r = 1 / 50 ) and randomly select { 5 , 10 , 15 , 30 , 50 } folds for simulation.",
"Fig.",
"5(b) shows CONNET is superior to baselines and able to well deal with many annotators while there is no obvious relationship between the performance and annotator quantity in baselines.",
"We can see the performance of our model Figure 6: Heatmap of averaged attention scores from each source domain to each target domain.",
"increases as the number of annotators and, regardless of the number of annotators, our method consistently outperforms than other baselines.",
"The results of each task on each domain are shown in Tab.",
"2.",
"We can see that CONNET performs the best on most of the domains and achieves the highest average score for all tasks.",
"We report the accuracy for POS tagging and classification, and the chunk-level F 1 score for NER.",
"We can see that CONNET achieves the highest average score on all tasks.",
"MTL-MVT is similar to our decoupling phase and performs much worse.",
"Naively doing unweighted voting does not work well.",
"The attention can be viewed as implicitly doing weighted voting on the feature level.",
"MTL-BEA relies on a probabilistic model to conduct weighted voting over predictions, but unlike our approach, its voting process is independent from the input context.",
"It is probably why our model achieves higher scores.",
"This demonstrates the importance of assigning weights to domains based on the input sentence.",
"Tri-Training trains on the concatenated data from all sources also performs worse than CONNET , which suggests the importance of a multi-task structure to model the difference among domains.",
"The performance of Crowd-Add is unstable (high standard deviation) and very low on the NER task, because the size of the crowd vectors is not controllable and thus may be too large.",
"On the other hand, the size of the crowd vectors in Crowd-Cat can be controlled and tuned.",
"These two methods leverage domain-invariant knowledge only but not domain-specific knowledge and thus does not have comparable performance.",
"each sentence in the target domain we collected the attention score of each source domain, and fi-nally the attention scores are averaged for each source-target pair.",
"Fig. 6 shows all the source-to-target average attention scores.",
"We can see that some domains can contribute to other related domains.",
"For example, bn (broadcast news) and nw (newswire) are both about news and they contribute to each other; bn and bc (broadcast conversation) are both broadcast and bn contributes to bc; bn and nw both contributes to mz (magzine) probably because they are all about news; wb (web) and tc (telephone conversation) almost make no positive contribution to any other, which is reasonable because they are informal texts compared to others and they are not necessarily related to the other.",
"Overall the attention scores can make some sense.",
"It suggests that the attention is aware of relations between different domains and can contribute to the model.",
"In this paper, we present CONNET for learning a sequence tagger from multi-source supervision.",
"It could be applied in two practical scenarios: learning with crowd annotations and cross-domain adaptation.",
"In contrast to prior works, CONNET learns fine-grained representations of each source which are further dynamically aggregated for every unseen sentence in the target data.",
"Experiments show that our model is superior to previous crowd-sourcing and unsupervised domain adaptation sequence labeling models.",
"The proposed learning framework also shows promising results on other NLP tasks like text classification.",
"This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, United States Office Of Naval Research under Contract No.",
"N660011924033, and NSF SMA 18-29268.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.",
"We would like to thank all the collaborators in USC INK research lab for their constructive feedback on the work."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Given the complexity of combinations of tasks, languages, and domains in natural language processing (NLP) research, it is computationally prohibitive to exhaustively test newly proposed models on each possible experimental setting.",
"In this work, we attempt to explore the possibility of gaining plausible judgments of how well an NLP model can perform under an experimental setting, without actually training or testing the model .",
"To do so, we build regression models to predict the evaluation score of an NLP experiment given the experimental settings as input.",
"Experimenting on 9 different NLP tasks, we find that our predictors can produce meaningful predictions over unseen languages and different modeling architectures, outperforming reasonable baselines as well as human experts.",
"Going further, we outline how our predictor can be used to find a small subset of representative experiments that should be run in order to obtain plausible predictions for all other experimental settings.",
"1 1 Introduction Natural language processing (NLP) is an extraordinarily vast field, with a wide variety of models being applied to a multitude of tasks across a plenitude of domains and languages.",
"In order to measure progress in all these scenarios, it is necessary to compare performance on test datasets representing each scenario.",
"However, the cross-product of tasks, languages, and domains creates an explosion of potential application scenarios, and it is infeasible to collect high-quality test sets for each.",
"In addition, even for tasks where we do have a wide variety of test data, e.g. for well-resourced tasks such as machine translation (MT), it is still 1 Code, data and logs are publicly available at https: //github.com/xiamengzhou/NLPerf .",
"computationally prohibitive as well as not environmentally friendly (Strubell et al., 2019) to build and test on systems for all languages or domains we are interested in.",
"Because of this, the common practice is to test new methods on a small number of languages or domains, often semi-arbitrarily chosen based on previous work or the experi-menters' intuition.",
"As a result, this practice impedes the NLP community from gaining a comprehensive understanding of newly-proposed models.",
"Table 1 illustrates this fact with an example from bilingual lexicon induction, a task that aims to find word translation pairs from cross-lingual word embeddings.",
"As vividly displayed in Table 1, almost all the works report evaluation results on a different subset of language pairs.",
"Evaluating only on a small subset raises concerns about making inferences when comparing the merits of these methods: there is no guarantee that performance on EnglishSpanish ( EN ES , the only common evaluation dataset) is representative of the expected performance of the models over all other language pairs (Anastasopoulos and Neubig, 2020).",
"Such phenomena lead us to consider if it is possible to make a decently accurate estimation for the performance over an untested language pair without actually running the NLP model to bypass the computation restriction.",
"Toward that end, through drawing on the idea of characterizing an experiment from Lin et al. (2019), we propose a framework, which we call NLPERF , to provide an exploratory solution.",
"We build regression models, to predict the performance on a particular experimental setting given past experimental records of the same task, with each record consisting of a characterization of its training dataset and a performance score of the corresponding metric.",
"Concretely, in 2, we start with a partly populated table (such as the one from BLI Method Evaluation Set DE EN EN DE ES EN EN ES FR EN EN FR IT EN EN IT EN PT EN RU ES DE PT RU Zhang et al. (2017) ?",
"Table 1) and attempt to infer the missing values with the predictor.",
"We begin by introducing the process of characterizing an NLP experiment for each task in 3.",
"We evaluate the effectiveness and robustness of NLPERF by comparing to multiple baselines, human experts, and by perturbing a single feature to simulate a grid search over that feature (4).",
"Evaluations on multiple tasks show that NLPERF is able to outperform all baselines.",
"Notably, on a machine translation (MT) task, the predictions made by the predictor turn out to be more accurate than human experts.",
"An effective predictor can be very useful for multiple applications associated with practical scenarios.",
"In 5, we show how it is possible to adopt the predictor as a scoring function to find a small subset of experiments that are most representative of a bigger set of experiments.",
"We argue that this will allow researchers to make informed decisions on what datasets to use for training and evaluation, in the case where they cannot experiment on all experimental settings.",
"Last, in 6, we show that we can adequately predict the performance of new models even with a minimal number of experimental records.",
"In this section we formalize the problem of predicting performance on supervised NLP tasks.",
"Given an NLP model of architecture M trained over dataset(s) D of a specific task involving language(s) L with a training procedure (optimiza-tion algorithms, learning rate scheduling etc.) P , we can test the model on a test dataset D (cid:48) and get a score S of a specific evaluation metric.",
"The resulting score will surely vary depending on all the above mentioned factors, and we denote this relation as g : SM , P , L , D , D (cid:48) = g ( M , P , L , D , D (cid:48) ) .",
"In the ideal scenario, for each test dataset D (cid:48) of a specific task, one could enumerate all different settings and find the one that leads to the best performance.",
"As mentioned in Section 1, however, such a brute-force method is computationally infeasible.",
"Thus, we turn to modeling the process and formulating our problem as a regression task by using a parametric function f to approximate the true function g as follows: SM , P , L , D , D (cid:48) = f ([ M ; P ; L ; D ; D (cid:48) ]) where denotes a set of features for each influ-encing factor.",
"For the purpose of this study, we mainly focus on dataset and language features L and D , as this already results in a significant search space, and gathering extensive experimental results with fine-grained tuning over model and training hyperparameters is both expensive and relatively complicated.",
"In the cases where we handle multiple models, we only use a single categorical model feature to denote the combination of model architecture and training procedure, denoted as C .",
"We still use the term model to refer to this combination in the rest of the paper.",
"We also omit the test set features, under the assumption that the data distributions for training and testing data are the same (a fairly reasonable assumption if we ignore possible domain shift).",
"Therefore, for all experiments below, our final prediction function is the following: SC , L , D = f ([ C ; L ; D ]) In the next section we describe concrete instantiations of this function for several NLP tasks.",
"To build a predictor for NLP task performance, we must 1) select a task, 2) describe its featurization, and 3) train a predictor.",
"We describe details of these three steps in this section.",
"Tasks We test on tasks including bilingual lexicon induction (BLI); machine translation trained on aligned Wikipedia data (Wiki-MT), on TED talks (TED-MT), and with cross-lingual transfer for translation into English (TSF-MT); cross-lingual dependency parsing (TSF-Parsing); cross-lingual POS tagging (TSF-POS); cross-lingual entity linking (TSF-EL); morphological analysis (MA) and universal dependency parsing (UD).",
"Ba-sic statistics on the datasets are outlined in Table",
"2. For Wiki-MT tasks, we collect experimental records directly from the paper describing the corresponding datasets (Schwenk et al., 2019).",
"For TED-MT and all the transfer tasks, we use the results of Lin et al. (2019).",
"For BLI, we conduct experiments using published results from three papers, namely Artetxe et al. (2016), Artetxe et al. (2017) and Xu et al. (2018).",
"For MA, we use the results of the SIGMORPHON 2019 shared task 2 (McCarthy et al., 2019).",
"Last, the UD results are taken from the CoNLL 2018 Shared Task on universal dependency parsing (Zeman et al., 2018b).",
"Featurization For language features, we utilize six distance features from the URIEL Typological Database (Littell et al., 2017), namely geographic, genetic, inventory, syntactic, phonological, and featural distance.",
"The complete set of dataset features includes the following:",
"1. Dataset Size: The number of data entries used for training.",
"2. Word/Subword Vocabulary Size: The number of word/subword types.",
"3. Average Sentence Length: The average length of sentences from all experimental.",
"4. Word/Subword Overlap: | T 1 T 2 | | T 1 | + | T 2 | where T 1 and T 2 denote vocabularies of any two corpora.",
"5. Type-Token Ratio (TTR): The ratio between the number of types and number of tokens (Richards, 1987) of one corpus.",
"6. Type-Token Ratio Distance: (cid:18) 1 TTR 1 TTR 2 (cid:19) 2 where TTR 1 and TTR 2 denote TTR of any two corpora.",
"7. Single Tag Type: Number of single tag types.",
"8. Fused Tag Type: Number of fused tag types.",
"9. Average Tag Length Per Word: Average number of single tags for each word.",
"10. Dependency Arcs Matching WALS Features: the proportion of dependency parsing arcs matching the following WALS features, computed over the training set: sub-ject/object/oblique before/after verb and ad-jective/numeral before/after noun.",
"For transfer tasks, we use the same set of dataset features D as Lin et al. (2019), including features 16 on the source and the transfer language side.",
"We also include language distance features between source and transfer language, as well as between source and target language.",
"For MT tasks, we use features 16 and language distance features, but only between the source and target language.",
"For MA, we use features 1, 2, 5 and morphological tag related features 79.",
"For UD, we use features 1, 2, 5, and",
"Predictor Our prediction model is based on gradient boosting trees (Friedman, 2001), implemented with XGBoost (Chen and Guestrin, 2016).",
"This method is widely known as an effective means for solving problems including ranking, classification and regression.",
"We also experimented with Gaussian processes (Williams and Rasmussen, 1996), but settled on gradient boosted trees because performance was similar and Xg-boost's implementation is very efficient through the use of parallelism.",
"We use squared error as the objective function for the regression and adopted a fixed learning rate 0.1.",
"To allow the model to fully fit the data we set the maximum tree depth to be 10 and the number of trees to be 100, and use the default regularization terms to prevent the model from overfitting.",
"In this section we investigate the effectiveness of NLPERF across different tasks on various metrics.",
"Following Lin et al. (2019), we conduct k fold cross validation for evaluation.",
"To be specific, we randomly partition the experimental records of (cid:104)L , D , C , S(cid:105) tuples into k folds, and use k 1 folds to train a prediction model and evaluate on the remaining fold.",
"Note that this scenario is similar to filling in the blanks in Table 1, where we have some experimental records that we can train the model on, and predict the remaining ones.",
"For evaluation, we calculate the average root mean square error (RMSE) between the predicted scores and the true scores.",
"Baselines We compare against a simple mean value baseline, as well as against language-wise mean value and model-wise mean value baselines.",
"The simple mean value baseline outputs an average of scores s from the training folds for all test entries in the left-out fold ( i ) as follows: s ( i )mean = 1 |S \\ S ( i ) | (cid:88) s S\\S ( i ) s ; i 1 . . . k (2) Note that for tasks involving multiple models, we calculate the RMSE score separately on each model and use the mean RMSE of all models as the final RMSE score.",
"The language-wise baselines make more informed predictions, taking into account only training instances with the same transfer, source, or target language (depending on the task setting).",
"For example, the source-language mean value baseline s ( i,j ) s lang for j th test instance in fold i outputs an average of the scores s of the training instances that share the same source language features s lang , as shown in Equation 3: s ( i,j ) s lang = (cid:80) s, ( L , src = s lang) s (cid:80) s, ( L , src = s lang) ( s, ) ( |S \\ S ( i ) | , | \\ ( i ) | ) (3) where is the indicator function.",
"Similarly, we define the targetand the transfer-language mean value baselines.",
"In a similar manner, we also compare against a model-wise mean value baseline for tasks that include experimental records from multiple models.",
"Now, the prediction for the j th test instance in the left-out fold i is an average of the scores on the same dataset (as characterized by the language L and dataset D features) from all other models: s ( i,j ) model = (cid:80) s, ( L = lang , D = data) s (cid:80) s, ( L = lang , D = data) ( s, ) ( |S \\ S ( i ) | , | \\ ( i ) | ) (4) where lang = ( i,j ) L and data = ( i,j ) D respectively denote the language and dataset features of the test instance.",
"Main Results For multi-model tasks, we can do either S ingle M odel prediction (SM), restricting training and testing of the predictor within a single model, or M ultiM odel (MM) prediction using a categorical model feature.",
"The RMSE scores of NLPERF along with the baselines are shown in Table",
"3. For all tasks, our single model predictor is able to more accurately estimate the evaluation score of unseen experiments compared to the single model baselines, confirming our hypothesis that the there exists a correlation that can be captured between experimental settings and the downstream performance of NLP systems.",
"The language-wise baselines are much stronger than the simple mean value baseline but still perform worse than our single model predictor.",
"Similarly, the model-wise baseline significantly outperforms the mean value baseline because results from other models reveal much information about the dataset.",
"Even so, our multi-model predictor still outperforms the model-wise baseline.",
"The results nicely imply that for a wide range of tasks, our predictor is able to reasonably estimate left-out slots in a partly populated table given results of other experiment records, without actually running the system.",
"We should note that RMSE scores across different tasks should not be directly compared, mainly because the scale of each evaluation metric is different.",
"For example, a BLEU score (Papineni et al., 2002) for MT experiments typically ranges from 1 to 40, while an accuracy score usually has a much larger range, for example, BLI accuracy ranges from 0.333 to 78.2 and TSF-POS accuracy ranges from 1.84 to 87.98, which consequently makes the RMSE scores of these tasks higher.",
"Comparison to Expert Human Performance We constructed a small scale case study to evaluate whether NLPERF is competitive to the performance of NLP sub-field experts.",
"We focused on the TED-MT task and recruited 10 MT practitioners, 2 all of whom had published at least 3 MT-related papers in ACL-related conferences.",
"In the first set of questions, the participants were presented with language pairs from one of the k data folds along with the dataset features and were asked to estimate an eventual BLEU score for each data entry.",
"In the second part of the questionnaire, the participants were tasked with making estimations on the same set of language pairs, but this time they also had access to features, and BLEU scores from all the other folds.",
"3 2 None of the study participants were affiliated to the au-thors' institutions, nor were familiar with this paper's content.",
"The partition of the folds is consistent between the human study and the training/evaluation for the predictor.",
"While the first sheet is intended to familiarize the participants with the task, the second sheet fairly adopts the training/evaluation setting for our predictor.",
"As shown in Table 4, our participants outperform the mean baseline even without information from other folds, demonstrating their own strong prior knowledge in the field.",
"In addition, the participants make more accurate guesses after acquiring more information on experimental records in other folds.",
"In neither case, though, are the human experts competitive to our predictor.",
"In fact, only one of the participants achieved performance comparable to our predictor.",
"Feature Perturbation Another question of interest concerning predicting performance is how will the model perform when trained on data of a different size (Kolachina et al., 2012a).",
"To test NLPERF 's extrapolation ability in this regard, we conduct an array of experiments on one language pair with various data sizes on the Wiki-MT task.",
"We pick two language pairs, Turkish to English ( TR EN ) and Portuguese to English ( PT EN ) as our testbed for the Wiki-MT task.",
"We sample par-(and make estimations over one of the folds) in the A. 0 100 200 300 400 500 Data Size (k) 5 10 15 20 BLEUTR-EN TR-EN prediction 0 400 800 1200 1600 2000 2400 15 20 25 30 35 BLEUPT-EN PT-EN prediction Figure 1: Our model's predicted BLEU scores and true BLEU scores, on sampled TR EN datasets (sizes 10k/50k/100k/200k/478k) and PT EN datasets (sizes 100k/500k/1000k/2000k/2462k), achieving a RMSE score of 1.83 and 9.97 respectively.",
"allel datasets with different sizes and train MT models with each sampled dataset to obtain the true BLEU scores.",
"On the other hand, we collect the features of all sampled datasets and use our predictor (trained over all other languages pairs) to obtain predictions.",
"The plot of true BLEU scores and predicted BLEU scores are shown in Figure",
"1. Our predictor achieves a very low average RMSE of 1.83 for TR EN pair but a relatively higher RMSE of 9.97 for PT EN pair.",
"The favorable performance on the tr-en pair demonstrates the possibility of our predictor to do feature extrapolation over data set size.",
"In contrast, the predictions on the pt-en pair are significantly less accurate.",
"This is due to the fact that there are only two other experimental settings scoring as high as 34 BLEU score, with data sizes of 3378k (en-es) and 611k (gl-es), leading to the predictor's inadequacy in predicting high BLEU scores for low-resourced data sets during extrapolation.",
"This reveals the fact that while the predictor is able to extrapolate performance on settings similar to what it has seen in the data, NLPERF may be less successful under circumstances unlike its training inputs.",
"As shown in Table 1, it is common practice to test models on a subset of all available datasets.",
"The reason for this is practical it is computationally prohibitive to evaluate on all settings.",
"However, if we pick test sets that are not representative of the data as a whole, we may mistakenly reach unfounded conclusions about how well models perform on other data with distinct properties.",
"For example, models trained on a small-sized dataset may not scale well to a large-sized one, or models that perform well on languages with a particular linguistic characteristic may not do well on languages with other characteristics (Bender and Friedman, 2018).",
"Here we ask the following question: if we are only practically able to test on a small number of experimental settings, which ones should we test on to achieve maximally representative results?",
"Answering the question could have practical implications: organizers of large shared tasks like SIGMORPHON (McCarthy et al., 2019) or UD (Zeman et al., 2018a) could create a minimal subset of settings upon which they would ask participants to test to get representative results; similarly, participants could possibly expedite the iteration of model development by testing on the representative subset only.",
"A similar avenue for researchers and companies deploying systems over multiple languages could lead to not only financial savings, but potentially a significant cut-down of emissions from model training (Strubell et al., 2019).",
"We present an approximate explorative solution to the problem mentioned above.",
"Formally, assume that we have a set N , comprising experimental records (both features and scores) of n datasets for one task.",
"We set a number m ( < n ) of datasets that we would like to select as the representative subset.",
"By defining RMSEA ( B ) to be the RMSE score derived from evaluating on one subset B the predictor trained on another subset of experimental records A , we consider the most representative subset D to be the one that minimizes the RMSE score when predicting all of the other datasets: arg min DN RMSED ( N \\ D ) .",
"Naturally, enumerating all (cid:0) nm (cid:1) possible subsets would be prohibitively costly, even though it would lead to the optimal solution.",
"Instead, we employ a beam-search-like approach to efficiently search for an approximate solution to the best performing subset of arbitrary size.",
"Concretely, we start our approximate search with an exhaustive enumeration of all subsets of size",
"2. At each following step t , we only consider the best k subsets {D ( i ) t ; i 1 , . . . , k } into account and discard the rest.",
"As shown in Equation 6, for each candidate 2 3 4 5 10 20 30 40 RMSE rus-englav-eng bos-engron-engeng-fin cat-englav-engeng-esteng-nor bos-engron-engeng-finkor-enghrv-eng eng-spaeng-ben eng-spaeng-afrafr-eng eng-spaeng-noreng-daneng-afr eng-spaeng-danafr-engeng-afreng-nor BLI 2 3 4 5 10 20 30 40 50 lt_hsegl_treegal lt_hseen_pudhy_armtdp lt_hsepcm_nscpl_lfgpl_sz lt_hsebr_keben_linecs_fictreepl_sz kpv_ikdpsa_ufal kpv_ikdptl_trgsa_ufal kpv_ikdpkpv_latticesa_ufaltl_trg tl_trgsa_ufalcs_pudfo_ofttr_pud MA 2 3 4 5 5 10 15 20 25 RMSE sqi-engkur-eng nob-engmsa-engcmn-eng nob-engmsa-englit-engces-eng nob-engmsa-englit-engfas-engheb-eng spa-engpor-eng rus-engpor-engvie-eng rus-engpor-engvie-engron-eng spa-engfra-engara-engpor-engita-eng TED-MT 2 3 4 5 5 10 15 20 25 glg-rusron-por srp-ukrdeu-epoeng-tur swe-fraita-slkfin-engglg-eng srp-ukrdeu-epoita-rusfra-porukr-srp eng-spaspa-eng por-engeng-spaspa-eng eng-spapor-engeng-porfra-eng glg-spaeng-itaeng-spavie-engspa-glg Wiki-MT Most representative Least representative Random Search Figure 2: Beam search results (beam size=100) for up to the 5 most (and least) representative datasets for 4 NLP tasks.",
"For tasks that involve multiple models, we take experimental records of the selected dataset from all models into account during expansion.",
"Given all expanded subsets, we train a predictor for each to evaluate on the rest of the data sets, and keep the best performing k subsets {D ( i ) t +1 ; i 1 , . . . , k } with minimum RMSE scores for the next step.",
"Furthermore, note that by simply changing the arg min to an arg max in Equation 5, we can also find the least representative datasets.",
"We present search results for four tasks 4 as beam search progresses in Figure 2, with corresponding RMSE scores from all remaining datasets as the y-axis.",
"For comparison, we also conduct random searches by expanding the subset with a randomly selected experimental record.",
"In all cases, the most representative sets are an aggregation of datasets with diverse characteristics such as languages and dataset sizes.",
"For example, in the Wiki-MT task, the 5 most representative datasets include languages that fall into a diverse range of language families such as Romance, Turkic, Slavic, etc. while the least representative ones include duplicate pairs (opposite directions) mostly 4 Readers can find results on other tasks in Appendix B. involving English.",
"The phenomenon is more pronounced in the TED-MT task, where not only the 5 most representative source languages are diverse, but also the dataset sizes.",
"Specifically, the Malay-English (msa-eng) is a tiny dataset (5k parallel sentences), and Hebrew-English (heb-eng) is a high-resource case (212k parallel sentences).",
"Notably, for BLI task, to test how representative the commonly used datasets are, we select the most frequent 5 language pairs shown in Table 1, namely en-de, es-en, en-es, fr-en, en-fr for evaluation.",
"Unsurprisingly, we get an RMSE score as high as 43 .",
"44 , quite close to the performance of the worst representative set found using beam search.",
"This finding indicates that the standard practice of choosing datasets for evaluation is likely unrepresentative of results over the full dataset spectrum, well aligned with the claims in Anastasopoulos and Neubig (2020).",
"A particularly encouraging observation is that the predictor trained with only the 5 most representative datasets can achieve an RMSE score comparable to k-fold validation, which required using all of the datasets for training.",
"5 This indicates that one would only need to train NLP models on a small set of representative datasets to obtain reasonably plausible predictions for the rest.",
"In another common scenario, researchers propose new models for an existing task.",
"It is both time-consuming and computationally intensive to run experiments with all settings for a new model.",
"In this section, we explore if we can use past experimental records from other models and a minimal set of experiments from the new model to give a plausible prediction over the rest of the datasets, potentially reducing the time and resources needed for experimenting with the new model to a large extent.",
"We use the task of UD parsing as our testbed 6 as it is the task with most unique models (25 to be exact).",
"Note that we still only use a single categorical feature for the model type.",
"To investigate how many experiments are needed to have a plausible prediction for a new model, we first split the experimental records equally into a sample set and a test set.",
"Then we randomly sample n (0 n 5) experimental records from the sample set and add them into the collection of experiment records of past models.",
"Each time we re-train a predictor and evaluate on the test set.",
"The random split repeats 50 times and the random sampling repeats 50 times, adding up to a total of 2500 experiments.",
"We use the mean value of the results from other models, shown in Equation 7 as the prediction baseline for the left-out model, and because experiment results of other models reveal significant information about the dataset, this serves as a relatively strong baseline: s k = 1 n 1 n (cid:88) i =1 1 ( i M / { k } ) s i .",
"We show the prediction performance (in RMSE) over 8 systems 7 in Figure",
"3. Interestingly, the predictor trained with no model records (0) outperforms the mean value baseline for the 4 best systems, while it is the opposite case on the 4 worst systems.",
"Since there is no information provided about the new-coming model, the predictions are solely based on dataset and language features.",
"One reason might explain the phenomenon the correlation between the features and the scores of the worse-performing systems is different from 6 MA and BLI task results are in Appendix C 7 The best and worst 4 systems from the shared task.",
"those better-performing systems, so the predictor is unable to generalize well (ONLP).",
"In the following discussion, we use RMSE@n to denote the RMSE from the predictor trained with n data points of a new model.",
"The relatively low RMSE@0 scores indicate that other models' features and scores are informative for predicting the performance of the new model even without new model information.",
"Comparing RMSE@0 and RMSE@1, we observe a consistent improvement for almost all systems, indicating that NLPERF trained on even a single extra random example achieves more accurate estimates over the test sets.",
"Adding more data points consistently leads to additional gains.",
"However, predictions on worse-performing systems benefit more from it than for better-performing systems, indicating that their feature-performance correlation might be considerably different.",
"The findings here indicate that by extrapolating from past experiments, one can make plausible judgments for newly developed models.",
"As discusssed in Domhan et al. (2015), there are two main threads of work focusing on predicting performance of machine learning algorithms.",
"The first thread is to predict the performance of a method as a function of its training time, while the second thread is to predict a method's performance as a function of the training dataset size.",
"Our work belongs in the second thread, but could easily be extended to encompass training time/procedure.",
"In the first thread, Kolachina et al. (2012b) attempt to infer learning curves based on training data features and extrapolate the initial learning curves based on BLEU measurements for statistical machine translation (SMT).",
"By extrapolating the performance of initial learning curves, the predictions on the remainder allows for early termination of a bad run (Domhan et al., 2015).",
"In the second thread, Birch et al. (2008) adopt linear regression to capture the relationship between data features and SMT performance and find that the amount of reordering, the morphological complexity of the target language and the relatedness of the two languages explains the majority of performance variability.",
"More recently, Elsahar and Gall (2019) use domain shift metrics such as H -divergence based metrics to predict drop in performance under domain-shift.",
"Rosenfeld et al. 0 1 2 3 4 5 4 6 8 RMSEHIT-SCIR (78 . 86) 0 1 2 3 4 5 3 4 UDPipe (76 . 07) 0 1 2 3 4 5 4 4 .",
"(2020) explore the functional form of the dependency of the generalization error of neural models on model and data size.",
"We view our work as a generalization of such approaches, appropriate for application on any NLP task.",
"In this work, we investigate whether the experiment setting itself is informative for predicting the evaluation scores of NLP tasks.",
"Our findings promisingly show that given a sufficient number of past training experimental records, our predictor can 1) outperform human experts; 2) make plausible predictions even over new-coming models and languages; 3) extrapolate well on features like dataset size; 4) provide a guide on how we should choose representative datasets for fast iteration.",
"While this discovery is a promising start, there are still several avenues on improvement in future work.",
"First, the dataset and language settings covered in our study are still limited.",
"Experimental records we use are from relatively homogeneous settings, e.g. all datasets in Wiki-MT task are sentence-pieced to have 5000 subwords, indicating that our predictor may fail for other subword settings.",
"Our model also failed to generalize to cases where feature values are out of the range of the training experimental records.",
"We attempted to apply the predictor of Wiki-MT to evaluate on a low-resource MT dataset, translating from Mapudungun (arn) to Spanish (spa) with the dataset from Duan et al. (2019), but ended up with a poor RMSE score.",
"It turned out that the average sentence length of the arnspa data set is much lower than that of the training data sets and our predictors fail to generalize to this different setting.",
"Second, using a categorical feature to denote model types constrains its expressive power for modeling performance.",
"In reality, a slight change in model hyperparameters (Hoos and Leyton-Brown, 2014; Probst et al., 2019), optimization algorithms (Kingma and Ba, 2014), or even random seeds (Madhyastha and Jain, 2019) may give rise to a significant variation in performance, which our predictor is not able to capture.",
"While investigating the systematic implications of model structures or hyperparameters is practically infeasible in this study, we may use additional information such as textual model descriptions for modeling NLP models and training procedures more elaborately in the future.",
"Lastly, we assume that the distribution of training and testing data is the same, which does not consider domain shift.",
"On top of this, there might also be a domain shift between data sets of training and testing experimental records.",
"We believe that modeling domain shift is a promising future direction to improve performance prediction.",
"The authors sincerely thank all the reviewers for their insightful comments and suggestions, Philipp Koehn, Kevin Duh, Matt Post, Shuoyang Ding, Xuan Zhang, Adi Renduchintala, Paul McNamee, Toan Nguyen and Kenton Murray for conducting human evaluation for the TED-MT task, Daniel Beck for discussions on Gaussian Processes, Shruti Rijhwani, Xinyi Wang, Paul Michel for discussions on this paper.",
"This work is generously supported from the National Science Foundation under grant 1761548."
] | [
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"result",
"other",
"other"
] |
[
"The capability to automatically detect human stress can benefit artificial intelligent agents involved in affective computing and human-computer interaction.",
"Stress and emotion are both human affective states, and stress has proven to have important implications on the regulation and expression of emotion.",
"Although a series of methods have been established for multimodal stress detection, limited steps have been taken to explore the underlying inter-dependence between stress and emotion.",
"In this work, we investigate the value of emotion recognition as an auxiliary task to improve stress detection.",
"We propose MUSER a transformer-based model architecture and a novel multi-task learning algorithm with speed-based dynamic sampling strategy.",
"Evaluations on the Multimodal Stressed Emotion (MuSE) dataset show that our model is effective for stress detection with both internal and external auxiliary tasks, and achieves state-of-the-art results.",
"Stress is a feeling of emotional or physical tension, as a response to the environment when people have difficulty dealing with the conditions (Dobson and Smith, 2000; Muthukumar and Nachiappan, 2013).",
"Stress detection is a classification task that predicts whether a certain target is under stress.",
"The task has drawn research attention for two reasons: first, stress detection plays an important role in applications related to psychological well-being (Cohen et al., 1991), cognitive behavior therapies (Tull et al., 2007), and safe driving (Gao et al., 2014; Chen et al., 2017); second, stress is a known regulator of human emotion mechanisms (Tull et al., 2007), and thus research on stress detection can potentially benefit the development of emotionally intelligent agents.",
"The impact of stress on human behavior can be observed through various modalities.",
"Previous work has considered both unimodal and multimodal stress detection using acoustic, video and physiological sensor signals (Lane et al., 2015; Jaques et al., 2016; Aigrain et al., 2016; Alberdi et al., 2016; Bara et al., 2020).",
"However, text-based stress detection remains vastly underexplored, with some studies (Lin et al., 2014) showing the potential for further research.",
"In recent years, the surge of advanced natural language understanding models and structures provides a great opportunity for stress detection systems, especially using the textual modality.",
"In this work, we focus on the textual and acoustic modalities.",
"For the model architecture, we use Transformers (Vaswani et al., 2017) as a textual encoder and Multi-Layer Perceptrons (MLP) as an acoustic encoder.",
"The majority of existing stress detection methods are based on single-task learning with the binary stress/non-stress labels.",
"However, stress is not an isolated affective state, but closely related to the expression and regulation of human emotions.",
"Physiological studies (Wang and Saudino, 2011) have demonstrated that emotion and stress share some neural structures, including prefrontal cortex (Taylor and Stanton, 2007), anterior cingulate cortex (Pruessner et al., 2008), and amygdala (Adolphs, 2003).",
"Acoustic studies (Paulmann et al., 2016) have shown that the pitch and amplitude of human emotional prosody is different under stress and non-stressed status.",
"Inspired by these studies, our work aims to exploit the inter-dependence between emotion and stress.",
"Specifically, we investigate the value of emotion recognition as an auxiliary task for stress detection.",
"Multi-task learning (Pasunuru and Bansal, 2017; Gottumukkala et al., 2020; Guo et al., 2018a; Gong et al., 2019) has proven to be effective for transferring knowledge between different tasks.",
"Dynamic sampling strategies, which aim at adaptively adjusting the ratio of samples from different tasks, are widely used to balance the training schedule.",
"However, strategies based on gradients (Chen et al., 2018b), uncertainty (Kendall et al., 2018) or loss (Liu et al., 2019) cannot leverage the validation performances, while some performance-based strategies (Gottumukkala et al., 2020) are impractical if the metrics for different tasks are not directly comparable (i.e., with different scale ranges).",
"To this end, we propose a novel speed-based strategy that is both effective and efficient in the multi-task learning for stress and emotion.",
"Our method is evaluated on the Multimodal Stressed Emotion (MuSE) dataset (Jaiswal et al., 2019, 2020), which includes both stress and emotion labels, making it the ideal benchmark for an in-depth analysis of their inter-dependence.",
"To test the generalization ability of our method, we also use an external emotion dataset for the auxiliary task.",
"Multimodal emotion recognition is a well-studied field with many existing datasets (Busso et al., 2008, 2016; Chen et al., 2018a; Barros et al., 2018; Zadeh et al., 2018).",
"We choose the OMG-Emotion dataset (Barros et al., 2018) as the external auxiliary task because it is representative and challenging, with numerical emotion scores instead of categorical labels.",
"Our paper makes four main contributions.",
"First, we show the inter-dependence between stress and emotion via quantitative analyses on linguistic and acoustic features, and propose to use emotion recognition as an auxiliary task for stress detection.",
"Second, we establish a stress detection model with a transformer structure, as well as a novel speed-based dynamic sampling strategy for multi-task learning.",
"We name our framework the MUltimodal Stress Detector with Emotion Recognition (MUSER).",
"Third, we achieve state-of-the-art results on the MuSE dataset via multitask training with stress and emotion labels.",
"We also achieve competitive results when we use the OMG-Emotion (Barros et al., 2018) dataset as an external auxiliary task.",
"Finally, experimental results show that our speed-based dynamic sampling significantly outperforms other widely-used methods.",
"Stress detection based on textual modality has been studied by (Lin et al., 2014) and (Jaiswal et al., 2020), using the Linguistic Inquiry and Word Count (LIWC) lexicon (Pennebaker et al., 2001) to",
"extract features that are indicative of human emotion.",
"Acoustic features (Lane et al., 2015; Paul-mann et al., 2016; Horvath, 1982; Lech and He, 2014) have also been used for unimodal stress detection in both physiological and computational studies.",
"A drawback of the unimodal approaches is that they only have access to partial information about the expression of stress, while multiple modalities can potentially be informative at the same time (Aigrain et al., 2016).",
"As demonstrated by previous work on human sentiment and emotion prediction (Zadeh et al., 2016, 2018; Yao et al., 2020), multimodal features usually results in better performances.",
"Commonly-used modalities for stress detection include video, audio, text and physiological signals such as thermal maps from sensors (Aigrain et al., 2016; Alberdi et al., 2016; Lane et al., 2015; Jaques et al., 2016).",
"Jaiswal et al. (2020) proposed the Multimodal Stressed Emotion (MuSE) dataset, which includes records from all the commonly-used modalities.",
"Each video clip is annotated for both stress detection and emotion recognition.",
"Unimodal and multimodal baselines are provided for each task.",
"Bara et al. (2020) developed a multimodal deep learning method that learns modality-independent representations in an unsupervised approach.",
"However, none of these models leverage the intrinsic connections between stress and emotion.",
"Our experiments are conducted on the MuSE dataset using only the textual and acoustic modalities, to be compatible with most external emotion recognition tasks.",
"However, our proposed multitask learning method is model-agnostic and can be generalized to any structure and any modality combinations.",
"Widely-used multimodal emotion recognition datasets include SEMAINE (McKeown et al., 2011), IEMOCAP (Busso et al., 2008), MOSEI (Zadeh et al., 2018) and OMG-Emotion (Barros et al., 2018).",
"Emotion can be annotated either with pre-defined emotion categories or through two-dimensional scores of activation (arousal) and valence, according to the self-assessment manikin proposed by (Bradley and Lang, 1994).",
"MuSE, in particular, has emotion annotations expressed by Table 1: LIWC features that have the top 20 highest regression coefficients in all three tasks.",
"activation and valence scores (1 9), which is more fine-grained than categorical definitions (happy, angry, etc.).",
"The OMG-Emotion dataset we use as external auxiliary task is annotated in the same way with a score range of 0 1.",
"Because of the different task natures, balancing the training procedure with all the tasks is a critical problem for multi-task learning.",
"Loss-balancing strategies (Chen et al., 2018b; Kendall et al., 2018; Liu et al., 2019; Gong et al., 2019; Guo et al., 2018a; Lample et al., 2017; Yao et al., 2019) are suitable for situations in which there are multiple training objectives that can be combined via weighted summation for each data point.",
"In contrast, for multi-task learning across different datasets, a sampling strategy should be applied to decide the mixing ratio (how many batches to sample from each task) in each epoch.",
"To this end, Pasunuru et al. (2017) used a fixed sampling ratio; Guo et al. (2018b) proposed a dynamic sampling strategy based on reinforcement learning, which depends on the estimation of Q-values; Gottumukkala et al. (2020) used a dynamic sampling procedure based on the gap between multi-task and single-task results a performance-based method that requires all the tasks to use the same set of evaluation metrics.",
"For comparison, our proposed strategy is also based on how fast the model is learning each task, but does not require the metrics to be directly comparable.",
"The MuSE dataset (Jaiswal et al., 2020) is collected from the multimodal video recordings of 28 student participants, 9 female and 19 male.",
"Each partici-Table 2: LIWC features that are among the top 20 highest regression coefficients unique to stress, activation and valence tasks.",
"pant is invited to a video-recording session before and after the final exam period; sessions before exams are labeled as stressed, and the remainng ones are labeled as non-stressed.",
"We use only the records from the monologue sub-sessions where both acoustic and textual modalities are available.",
"In these sub-sessions, the participants view five emotion-eliciting questions on the screen in a sequence, and answer them with monologues.",
"After each monologue, the participants provide self-assessment scores for activation (calm vs. excited) and valence (negative vs. positive).",
"The scores range from 1 9.",
"The monologues are segmented into sentences for pre-processing; each sentence is annotated with the same stress label and emotion scores as the whole monologue.",
"We use a train, validation, and test split of 1,853, 200, and 273 sentences, respectively.",
"Textual features come from the automatic transcripts for the audio clips of each sentence.",
"Although the sentences come with visual and thermal features as well, we focus mainly on the textual and acoustic modalities because this allows us to use almost any external emotion recognition dataset as our auxiliary task.",
"In order to analyze the connections between linguistic features that are most indicative of stress, activation, and valence, we first extract a feature vector based on the LIWC lexicon (Pennebaker et al., 2001).",
"Each dimension of the vector corresponds to a certain word category and has a value equal Table 3: Opensmile eGeMap features that have the top 20 highest regression coefficients for different tasks.",
"to the count of words observed in that category in the sentence.",
"We then apply z-normalization on each feature and fit a linear model to predict the stress/non-stress label, as well as the activation and valence scores.",
"For each of the three tasks, we pick the features with the top 20 highest absolute values of the linear classification/regression coefficients, which we assume to be the key indicators.",
"Features that appear in top 20 for all three tasks are shown in Table 1.",
"The features are ranked by the absolute value of their linear coefficients.",
"As shown, the positive-emotion and perceptual word classes are critical for both emotion and stress tasks, which is intuitive because they are a pair of inter-dependent human affect status.",
"Bio, health, and body words are also on the list, suggesting that both stress and emotion are closely related to physiological status and feelings, which is potentially because they share some neural structures in brain (Wang and Saudino, 2011).",
"The intersection of all the three top-indicator sets has nine elements, reflecting a reasonable overlap.",
"Table 2 shows the word classes appearing uniquely in the top 20 indicator list for each task.",
"It is worth noticing that the non-fluent words (er, hmm, etc.) are the strongest unique indicator of stress, which reflects the differences in the audio speeches under stressed/non-stressed conditions.",
"We could also observe that activation is more connected to entities and events, while valence is more related to personal feelings.",
"For stress indicators in the acoustic modality, we extract 88-dimensional features using OpenSmile (Eyben et al., 2010) with the eGeMaps (Eyben et al., 2015) configuration.",
"We follow (Jaiswal et al., 2020) to do speaker-level z-normalization on each feature, and fit a linear classification/regression model as we did for the textual features.",
"Table 3 shows the most indicative acoustic feature classes for all the tasks, as well as the ones that are unique for each task.",
"Amplitude/loudness is the strongest indicator class for all tasks, followed by spectral flux, which is a measure of how quickly the power spectrum of a signal is changing.",
"It also suggests that stress has a closer relationship with spectral features such as slope, describing how quickly the spectrum of an audio sound tails off towards the high frequencies.",
"The intersection of all three indicator sets has 11 elements, suggesting that they share many acoustic patterns.",
"For more detailed explanations and examples of the eGeMaps features please refer to (Eyben et al., 2015) and (Botelho et al., 2019).",
"Regarding the differences in the task nature, as seen in Table 4, the number of unique indicators for each each and for each modality show that the activation task is less independent of the stress task than the valence task.",
"In other words, the activation task has more indicators in common with the stress task.",
"Based on the task inter-dependency demonstrated in Section 3.2 and 3.3, we propose to use multimodal emotion recognition as an auxiliary task for",
"stress detection.",
"Since MuSE has both stress and emotion labels, their activation and valence scores can be used as an internal auxiliary task.",
"To test the generalization capability of our multitask learning method, we choose OMG-Emotion (Barros et al., 2018) as an external emotion recognition dataset for the auxiliary task, which is annotated in the same manner as MuSE (activa-tion/valence).",
"We download the videos from the training and validation sets and filter out all the samples where the video link is broken or the length of automatic transcription is less than 5, resulting in 1,484 videos.",
"The contents and scenarios in the OMG-Emotion dataset are completely different from MuSE.",
"We hold out 300 videos as a validation set to enable dynamic sampling.",
"Note that stress detection is a binary classification task, while the two auxiliary emotion tasks have a regressive nature.",
"Each utterance in the MuSE dataset is automatically segmented into sentences, transcribed, and tokenized by a pre-trained BERT tokenizer (De-vlin et al., 2019).",
"For the acoustic modality, we use OpenSmile (Eyben et al., 2010) with the eGeMAPS configuration (Eyben et al., 2015) to extract 88 utterance-level statistical features.",
"Following (Jaiswal et al., 2020), we perform speaker-level z-normalization on all acoustic features.",
"For videos in the OMG-Emotion dataset, we first extract the audio and automatic transcripts, and then do the same pre-processing as on MuSE.",
"We propose MUSER: MUltimodal Stress Detector using Emotion Recognition.",
"The model structure is based on neural networks.",
"Specifically, we use a Transformer (Vaswani et al., 2017) textual encoder pre-trained with BERT (Devlin et al., 2019), and an MLP-based acoustic encoder to generate representations on each modality, and fuse them before classification or regression.",
"Our model architecture is depicted in Figure 1.",
"For the textual encoder, we use a Transformer neural network pre-trained with BERT on BookCor-pus and English Wikipedia (Devlin et al., 2019).",
"Our Transformer model has 12 layers, 12 attention heads, and 768-dimensional hidden states.",
"The averaged hidden states on the top level are projected to 256-dimensional representations by a fully-connected layer.",
"Our acoustic encoder is a Multi-layer Perceptron network with four hidden layers and ReLU activation.",
"The input of the acoustic encoder is the OpenSmile features extracted from the audio speech of each sentence, and the output of each hidden layer is 256-dimensional.",
"We fuse the multimodal features by concatenating the top-level 256-dimensional textual and acoustic representations.",
"For the emotion recognition tasks, the concatenated representation is fully connected to a single output unit by a task-specific linear layer with a 0.1 dropout rate.",
"For the stress detection Figure 2: Dynamic sampling procedure for MUSER multi-task training.",
"task, two output units are used to predict the logits for stress and non-stress labels.",
"A softmax layer is used to compute probabilities and training loss.",
"Note that in related work (Jaiswal et al., 2020; Bara et al., 2020), the late fusion stands for an ensemble method, while MUSER solves the task with a single model.",
"We directly share all the trainable parameters (hard sharing) except the task-specific output layers.",
"For each batch of training samples, only one task is assigned, and one step of back-propagation is performed according to the task objective function with the task-specific output layer plugged in.",
"In each epoch of multi-task training, different amounts of training data are sampled from both the auxiliary task of activation/valence regression and the main task of stress classification.",
"We explore both uniform sampling and dynamic sampling strategy to adaptively decide the mixing ratio of the multiple tasks in each epoch.",
"Uniform Sampling.",
"In our conditions, the number of training samples in the main task and the auxiliary tasks are approximately on the same scale.",
"Therefore, an intuitive method is to switch between the tasks with uniform sampling: for each batch, we first decide which task to train with an equal chance, and then randomly select 32 (the batch size) samples; the batch is trained with the corresponding supervision signals (either emotion scores or stress labels) from the selected task.",
"Dynamic Sampling.",
"Having an equal number of samples for each task in each epoch is not the most efficient way for multi-task training because it is not searching for the most informative task during each epoch.",
"It is more intuitive that when one task reaches a bottleneck, more samples from the other tasks should be selected instead.",
"Motivated by this idea, we propose to dynamically select the task for each batch according to the model's speed of learning each task.",
"After each training epoch, the sampling distribution is updated based on the model's current and historical performance on each task on the validation set.",
"Specifically, for activation and valence tasks, we compute the ratio of the average rooted mean square error (RMSE) score on the past n epochs to the RMSE score in the current epoch.",
"The ratios are noted as r a and r v , respectively.",
"For the stress task, we compute the ratio of the accuracy in the current epoch to the average of the past n epochs, noted as r s .",
"The history length n is picked by hand.",
"The sampling distribution for the next epoch is then computed as: p a , p v , p s = softmax ([ r a /, r v /, r s / ]) , (1) Table 5: Comparison with look-alike methods for multi-task learning.",
"where is the temperature coefficient set to 0.1.",
"We use the ratios to history instead of the validation scores themselves to compute the distribution because this makes different metrics comparable to each other, and it is a good estimation of which task is the model currently learning the fastest.",
"We name this strategy a speed-based dynamic sampling.",
"The sampling procedure is shown in Figure 2, and a comparison to look-alike multitask learning methods is included in Table 5.",
"We use an AdamW (Loshchilov and Hutter, 2018) optimizer with an initial learning rate of 3e-4 for all our multimodal and multi-task experiments.",
"In each epoch, we repeatedly sample data with a batch-size of 32 from the main task or the auxiliary tasks, and apply one-step back-propagation for each batch, until the total selected number reaches the size of the MuSE training set.",
"Gradients are clipped to have a maximum norm of 1.0.",
"The history length n in speed-based dynamic sampling is chosen from {1, 5, 10} according to the performance on the validation set.",
"We warm up the dynamic sampling by applying uniform sampling for the first n epochs.",
"The maximum epoch number is typically set to 1000, while the training process is controlled by early stopping.",
"For the Transformer textual encoder, we limit the maximum sequence length to be 128.",
"The evaluation metrics include overall accuracy, as well as the precision, recall, and f-score for the stressed class.",
"For unimodal experiments, we use the textual encoder or the acoustic encoder independently to compute representations before regression or classification.",
"For the Transformer textual encoder, we use a learning rate of 2e-5; for the MLP acoustic model, we use a learning rate of 5e-4.",
"These learning rates are separately fine-tuned on each unimodal task.",
"Other hyperparameters of the models are kept the same as the multimodal structure.",
"Table 6 shows the stress detection results with single modalities.",
"Our Transformer encoder outperforms the baseline textual model because of its capability to discover syntactic-level long distance relationships in natural language and the external linguistic knowledge from the advanced BERT pretraining; our acoustic model also improves beyond the baseline results, potentially because we used a more up-to-date version of eGeMaps configuration and a fine-tuned learning rate.",
"To jointly train with both the textual and acoustic features, we use the multimodal fusion model introduced in Section 4.3 as a basic architecture.",
"Our MUSER model is trained from scratch to set up a single-task baseline for multimodal stress detection.",
"Besides, a potential alternative to multitask learning is pre-training on the auxiliary tasks and fine-tuning on the main task.",
"For a complete comparison, we set up several strategies for pretraining.",
"All the pre-training methods use the internal auxiliary task of MuSE.",
"The compared methods are as follows: Activation-100 : pre-train for 100 epochs with the activation annotations, then switch to the main task of stress detection.",
"Valence-100 : pre-train for 100 epochs with the valence annotations, then switch to the main task of stress detection.",
"epochs on the activation task, then 100 epochs on the valence task, and switch to stress detection.",
"Valence-activation-stress : pre-train for 100 epochs on the valence task, then 100 epochs on the activation task, and switch to stress detection.",
"The results are presented in Table",
"7. Among the pre-training and fine-tuning results, Activation-100 shows the most significant improvement.",
"The second-best score is the valence-activation-stress order.",
"Thus, we can conclude that activation is the better auxiliary task under this paradigm.",
"Additionally, using only one auxiliary task is always Table 6: Results: stress detection with single modality.",
"better than using two of them; this is because when the model learns from the second auxiliary task, it forgets the knowledge from the previous task because it lacks a memory mechanism to look back (Hayes et al., 2020).",
"Pre-training on the emotion recognition tasks using either activation or valence improves stress detection because the model is equipped with the capabilities to encode the features and predict emotions before the training of stress detection task starts.",
"For multi-task learning, we compare two sampling strategies: uniform sampling and our proposed speed-based dynamic sampling.",
"We also implement and modify the loss-based weight balancing method proposed by (Liu et al., 2019) to adjust the mixing ratios in dynamic sampling instead, and compare it with our methods.",
"The results using the internal MuSE emotion recognition as an auxiliary task are shown in Table",
"8. Comparing the uniform sampling results with Table 7, we conclude that using any auxiliary task is better than training from scratch.",
"However, multitask training with the activation and valence tasks together is better than using them separately.",
"This is different from the observations in Table 7 and can be explained by the differences in the training procedure: in multi-task learning, the model looks back-and-forth into each task in each epoch, making it able to memorize the shared knowledge from all the tasks.",
"Additionally, when the model is optimized for the two emotion tasks at the same time, the lower-level representation becomes more general and informative because it is frequently plugged with different task-specific layers.",
"Comparing the results of using a single auxiliary task of activation vs. valence, activation leads to better results as compared to valence, which is in agreement with Table",
"7. This is further supported by the analyses in Tables 2 and 4: given the lower unique indicator count of the activation task, as well as the fact that the pre-training and multi-task learning results are all compatible, we can conclude that for stress detection, the nature of the activation dimension of emotion is closer and more helpful than the valence dimension.",
"This potentially suggests that stress has a major effect on whether people feel excited (activation), but a minor effect on their opinion toward events and objects (valence).",
"We test our speed-based dynamic sampling algorithm using activation and valence together as auxiliary tasks and it yields promising results with history set to 5 and 10.",
"It significantly outperforms both the uniform sampling and our implementation of the loss-based strategy (Liu et al., 2019) (t-test, p < 0 . 05 ), achieving state-of-the-art scores on MuSE stress detection task with one single model and only two modalities.",
"Our model works the best with a history length between 5 and 10.",
"If the history is too short, the model takes longer to converge and has unstable performance, while if the history is too long, it fails to capture the dynamics.",
"However, because of the intrinsic inter-dependence between emotion and stress, any existing dataset with emotion labels can potentially serve as an external auxiliary task for stress detection.",
"However, this requires our model and multi-task training algorithm to generalize beyond the internal MuSE emotion tasks.",
"We test our model on OMG-Emotions as an example of external emotion datasets.",
"Table 9 shows results on MuSE stress detection using OMG-Emotion as an auxiliary task.",
"Comparing to Table 7, although the source and content of OMG-Emotions are different from MuSE, multitask learning still outperforms single-task learning and pre-training (t-test, p < 0 . 05 ).",
"This reveals that the connection between stress and emotion widely exists, and our multi-task learning method works in general cases.",
"Additionally, Table 9 suggests that while using an external emotion dataset, our speed-based sampling method still outperforms uniform sampling, as well as our implementation of loss-based dynamic sampling (Liu et al., 2019).",
"This supports the robustness and effectiveness of our speed-based strategy.",
"In this work, we uncovered the connections and differences between stress detection and emotion recognition using textual and acoustic features, and proposed to use emotion recognition as an auxiliary task for stress detection.",
"We proposed MUSER: a Transformer-based model structure, together with a novel speed-based dynamic sampling strategy for multi-task learning.",
"Experimental results support the inter-dependence of stress and emotion (activation/valence), and proves the effectiveness and robustness of our methods.",
"MUSER achieved state-of-the-art results on the MuSE stress detection task both when internal (MuSE) and when external (OMG-Emotions) emotion data and annotations were used.",
"Our code is publicly available at https://lit.eecs.umich.edu/ downloads.html#MUSER Acknowledgements We would like to thank Cristian-Paul Bara and Mimansa Jaiswal for their helpful discussion on the data processing and features of MuSE dataset.",
"We appreciate the insightful comments from all the reviewers and program committee.",
"This material is based in part on work supported by the Toyota Research Institute, the Precision Health initiative at the University of Michigan, the National Science Foundation (grant #1815291), and the John Templeton Foundation (grant #61156).",
"Any opinions, findings, conclusions, or recommendations in this material are those of the authors and do not necessarily reflect the views of the Toyota Research Institute, Precision Health initiative, the National Science Foundation, or the John Templeton Foundation."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"result",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain"
] |
[
"In this paper, we propose a generalizable dialog generation approach that adapts multiturn reasoning , one recent advancement in the field of document comprehension, to generate responses (answers) by taking current conversation session context as a document and current query as a question.",
"The major idea is to represent a conversation session into memories upon which attention-based memory reading mechanism can be performed multiple times, so that (1) user's query is properly extended by contextual clues and (2) optimal responses are step-by-step generated.",
"Considering that the speakers of one conversation are not limited to be one, we separate the single memory used for document comprehension into different groups for speaker-specific topic and opinion embedding.",
"Namely, we utilize the queries' memory, the responses' memory, and their unified memory, following the time sequence of the conversation session.",
"Experiments on Japanese 10-sentence (5-round) conversation modeling show impressive results on how multi-turn reasoning can produce more diverse and acceptable responses than state-of-the-art single-turn and non-reasoning baselines.",
"Dialogue systems such as chatbots are a thriving topic that is attracting increasing attentions from researchers (Sordoni et al., 2015; Serban et al., 2016; Li et al., 2015; Wen et al., 2016).",
"Recent achievements, such as deep neural networks for text generating, user profiling (Li et al., 2014), and natural language understanding, have accelerated the progresses of this field, which was historically approached by conventional rule-based and/or statistical response ranking strategies.",
"Response ranking models retrieve the most suitable response(s) from a fixed set of (question, answer) pairs given a dialogue context and current query from a user (Banchs and Li, 2012; Lowe et al., 2015).",
"Learning-to-rank approaches were applied to compute the similarity scores of between (query, context) and indexed candidate (question, answer) pairs to return the optimal answer to the user.",
"These ranking-based retrieval strategies have been well-applied as an important approach to dialogue systems, yet the set of scripted responses are limited and are short at generalization.",
"On the other hand, statistical machine translation (SMT) systems have been applied to dialogue systems (Ritter et al., 2011), taking user's query as a source language sentence and the chatbot's response as a target language sentence.",
"Labeled data for learning-to-ranking training will not be necessary anymore and all we need is the large-scale (question, answer) pairs.",
"The sequence-to-sequence model proposed in (Sutskever et al., 2014) applied end-to-end training of neural networks to text generation.",
"This model, further enhanced by an attention mechanism (Bahdanau et al., 2014), was generic and allowed its application to numerous sequence-to-sequence learning tasks such as neural machine translation (NMT) (Cho et al., 2014; Bah-danau et al., 2014), image captioning (Donahue et al., 2015; Mao et al., 2015), speech recognition (Chan et al., 2015) and constituency parsing (Vinyals et al., 2015).",
"The simplicity of these models makes them attractive, since translation and alignment are learned jointly on the fly.",
"Specially, Vinyals and Le (2015) applied the sequence-to-sequence model to conversational modeling and achieved impressive results on various datasets.",
"Their model was trained to predict a response given the previous sentence (s).",
"Shang et al. (2015) combined local and global attentions 2049 and reported better results than retrieval based systems.",
"Sordoni et al. (2015) explored three different end-to-end approaches for the problem of predicting the response given a query attached with a single message context.",
"Multi-turn conversation modeling is considered to be more difficult than machine translation, since there are many more acceptable responses for a given (context, query) input and these often rely on external knowledge and/or contextual reasoning.",
"Dialogue systems trained with a maximum likelihood estimation (MLE) objective function, as most SMT utilizes, often learn to reply generic sentences as I don't know or sounds good, which have a high incidence in the answer part of (question, answer) style training datasets.",
"There have been various attempts at diversifying the responses (Li et al., 2016a; Yu et al., 2016; Li et al., 2017) but the lack of variations in the responses remains as an essential challenge.",
"We wonder that if this stress can be relieved by modeling the prior context in a rather fine-grained way.",
"In document comprehension fields, multi-turn reasoning (also called multi-hop reasoning) has delivered impressive results by assimilating various pieces of information to produce an unified answer (Hill et al., 2015; Dhingra et al., 2016).",
"Through multi-turn reading the document's memory using attention models, current question can be extended with much richer knowledge.",
"This makes it easier to figure out the correct answer from that document.",
"Different documents need to be read different times to yield out the correct answer for the input question.",
"Specially, Shen et al. (2016) use a dynamic number of turns by introducing a termination gate to control the number of iterations of reading and reasoning.",
"Motivated by the reasoning network for document comprehension (Shen et al., 2016), we propose multi-turn reasoning neural networks that generate the proper response (or, answer) by attention-based reasoning from current conversation session (document) and current query (identical to question in document comprehension) from the user.",
"In particular, our networks utilize conversation context and explicitly separate speakers' interventions into sentence-level and conversation-level memories.",
"Our first model uses plain single-turn attention to integrate all the memories, and the second approach integrates multi-turn reasoning.",
"The formulation of our proposed approach is designed in a generalized way, allowing for inclusion of additional information such as external knowledge bases (Yih and Ma, 2016; Ghazvininejad et al., 2017; Han et al., 2015) or emotional memories (Zhou et al., 2017).",
"Moreover, our approach for two-speaker scenario can be easily extended to group chatting by a further speaker-specific memory splitting.",
"We evaluate the performances of our methods by comparing three configurations trained on a Japanese twitter conversation session dataset.",
"Each conversation session contains 10 sentences which are 5-round between two real-world speakers.",
"The results provide evidences that multi-turn reasoning neural networks can help improving the consistency and diversity of multi-turn conversation modeling.",
"This paper is structured as follows: Section 2 gives a general description of multi-turn conversation modeling; Section 3 describes background neural language modeling, text generation, and attention mechanisms; Section 4.1 first introduces a model with multiple attention modules and then explains how the multi-turn reasoning mechanism can be further integrated into the previous models; Sections 5, 6 and 7 describe the experimental settings and results using automatic evaluation metrics, detailed human-evaluation based analysis, and conclusions, respectively.",
"Consider a dataset D consisting of a list of conversations between two speakers.",
"Each conversation d D is an ordered sequence of sentences s i , where i [1 , T d ] and T d is the number of sentences in d , produced by two speakers alternately.",
"In this way, for s i , s j d , both sentences are from the same speaker if and only if i j (mod 2) .",
"Note that, our definition includes the case that one speaker continuously expresses his/her message through several sentences.",
"We simply concatenate these sentences into one to ensure that the conversation is modeled with alternate speakers.",
"A multi-turn conversation model is trained to search parameters that maximize the likelihood of every sentence s i d where i 2 , supposing that the beginning sentence s 1 is always given as a precondition: M = arg max {L ( , D ) } , (1) 2050 where L ( , D ) = X d D T d Y i =2 p ( s i | s <i ) .",
"<i j",
"The probability of each sentence, p ( s i | s <i ) , is frequently estimated by a conditional language model.",
"Note that, traditional single-turn conversation models or NMT models are a special case of this model by simply setting T d to be",
"2. That is, the generation of the next sentence is session-insensitive and is only determined by the former single sentence.",
"Another aspect of understanding this contextual conversation model is that, the number of reference contextual sentences s <i is not limited to be one.",
"Suppose there are already 9 sentences known in one conversation session and we want to generate the 10-th sentence, then from p ( s 1 ) to p ( s 9 ) are all preconditions and we will only need to focus on estimating p ( s 10 | s < 10 ) .",
"We adapt sequence-to-sequence neural models (Sutskever et al., 2014) for multi-turn conversation modeling.",
"They are separated into an encoder part and a decoder part.",
"The encoder part applies a RNN on the input sequence(s) s <i to yield prior information.",
"The decoder part estimates the probability of the generated output sequence s i by employing the last hidden state of the encoder as the initial hidden state of the decoder.",
"Sutskever et al. (2014) applied this technique to NMT and impressive experimental results were reported thereafter.",
"Using Equation 2, we are modeling two chatbots talking with each other, since all the s 2 ,...,T d are modeled step-by-step on the fly.",
"However, we can add constraints to determine whose responses to be generated, either one speaker or both of them.",
"That is, when i takes odd integers of 1, 3, 5 and so on, we are modeling the first speaker.",
"Even integers of i indicates a generation of responses for the second speaker.",
"Language models (LM) are trained to compute the probability of a sequence of tokens (words or characters or other linguistic units) being a linguistical sentence.",
"Frequently, the probability of a sentence s with T s tokens is computed by the production of the probabilities of each token y j s given its contexts y <j and y >j : p ( s ) = T s Y j =1 p ( y j | y <j , y >j ) .",
"When generating a sequence based on a LM, we can generate one word at a time based on the previously predicted words.",
"In this situation, only the previously predicted words are known and the probability of current sequence is approximated by dropping the posterior context.",
"That is, p ( y j | y <j , y >j ) p ( y j | y <j ) .",
"We construct a sequence generation LM using sequence-to-sequence neural network f .",
"The neural network intercalates linear combinations and non-linear activate functions to estimate the probability of mass function.",
"Then, in the encoder part of f , the contextual information is represented by a fixed-size hidden vector h j : p ( y j | y <j , y >j ) f ( y j 1 , h j , f ) , (5) where f represents f 's trainable parameters.",
"To embed the previous word sequence into a fixed-size vector, recurrent neural networks (RNN) such as long short term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) or gated recurrent units (GRU) (Cho et al., 2014) are widely used.",
"These networks repeat a recurrent operation on each input word: h j = g ( h j 1 , y j 1 , g ) , (6) where g represents the trainable parameters of a RNN function g , and h j is the hidden state of the RNN at time j .",
"The hidden state h of a RNN can accumulate information from previous words ( y j s and j < T s ) or previous sentences ( s i d and i < T d ) which ensures the encoding and decoding processes in sequence-to-sequence models.",
"Since the contextual sentences are known already, the encoder can represent them in both forward ( ~h j ) and backward ( ~ h j ) directions.",
"The results from both recursions can be combined by a concatenated operation.",
"This is referred to as bidirectional RNN shorted as BiRNN (Schuster and Paliwal, 1997).",
"For each sentence s i d ( i < T d ), we annotate the combination of the final states of each RNN direction as a memory vector m i = [( ~h ( i ) T si ) T ; ( ~ h ( i ) 1 ) T ] T .",
"A projection of annotation m i can be used as the decoder's initial state t 0 such as t 0 = tanh ( W s m T d 0 ) and T d 0 < T d .",
"W s here is a weight matrix that projects m T d 0 into a vector that shares a same dimension with t 0 .",
"In (Bah-danau et al., 2014) for NMT, ~ h 1 , backward encoding of a single source sentence, was used to initialize t 0 = tanh ( W s ~ h 1 ) .",
"Summarizing all contextual information into one single fixed-length vector becomes weaker to guide the generation of the target sentence as the contextual information grows longer.",
"To tackle this problem, an attention mechanism (Bahdanau et al., 2014) was applied to NMT for learning to align and translate jointly.",
"In this attention-based model, the conditional probability in Equation 4 is defined as: p ( y j | y <j ) = f ( y j 1 , t j , c j , f ) , (8) where t j = g ( t j 1 , y j 1 , c j , g ) (9) is a RNN hidden state in the decoder part for time j and c j = PT s i =1 ( j ) i h i is a context vector , a weighted combination of the annotation set memory ( h 1 , ..., h T s ) produced by encoding a source sentence s with length T s .",
"The weight ( j ) i of each (source) annotation h i is computed by ( j ) i = exp( e ( j ) i ) PT s l =1 exp( e ( j ) l ) .",
"(10) where e ( j ) i is an alignment model and is implemented by a feed-forward network a : e ( j ) i = a ( h i , t j 1 ) .",
"This method was applied to single-turn conversation modeling (Vinyals and Le, 2015).",
"We use this model, with attention over each immediately previous sentence s i 1 d for generating s i , as a baseline for our experiments.",
"We annotate this model as SIMPLE subsequently in this paper.",
"The attention mechanism described in Equations 8 and 9 is performed in a single-turn feed-forward fashion.",
"However, for complex context and complex queries, human readers often revisit the given context in order to perform deeper inference after one turn reading.",
"This real-world reading phenomenon motivated the multi-turn reasoning networks for document comprehension (Shen et al., 2016).",
"Considering dialog generation scenario with given rich context, we intuitively think if the attentions can be performed multi-turns so that the conversation session is better understood and the simple query, which frequently omits unknown number of context-sensitive words, can be extended for a better generation of the response.",
"The domain adaptation from document comprehension to dialog generation is feasible by taking the rich context of the speakers as a document, current user's query as a question and the chatbot's response as an answer.",
"However, there are still several major challenges for this domain adaptation.",
"First, a document is frequently written by a single author with one (hidden) personality, one writing style, and one distribution of the engagement rates of the topics appearing in that document.",
"These are not the case for conversation scenario in which at least two speakers are involved with different (hidden) personalities, personalized speaking styles, and diverse engagement rate distributions of the topics in that conversation session.",
"Second, for document comprehension, the output is frequently a single named entity (Shen et al., 2016) and thus a single softmax function can satisfy this one-shot ranking problem.",
"However, we will need a RNN decoder utilizing context vectors for generating the target response sentence being a sequence of tokens (words or characters) instead of one single named entity.",
"We tackle the first challenge by separating the context into multiple type memories upon which attention models are performed.",
"For the second difference, we replace the simple softmax output layer by a GRU decoder employing reasoning-attention context vectors.",
"The SIMPLE model can use multiple turns of context to infer the response by concatenating them",
"during decoding, using a separator symbol such as EOS for end-of-sentence.",
"Sordoni et al. (2015) separated the query message and the previous two context messages when conditioning the response.",
"The previous context messages were concatenated and treated as a single message.",
"In our proposed models, we use more than three turns for the context.",
"We separate the last message (the query) from the previous turns to produce a set of annotations h , one per character 1 in the sentence.",
"While encoding the contextual information, we separate the m i from each speaker into two sets.",
"The motivation is to capture individual characteristics such as personalized topical information and speaking style (Li et al., 2016b).",
"We refer to the set of annotations from the same speaker as one memory.",
"That is, the sentences for which the probabilities are being predicted as M r (response memory, specially corresponds to the chatbot's side) and the question set as M q (query memory, specially corresponds to the user's side).",
"We further apply a RNN on top of m i to produce one more set of vectors M c (context memory): M c = T c [ i =0 { m ( c ) i } , (12) in which, m ( c ) i = RNN ( m i , m ( c ) i 1 ) , (13) where T c is the number of turns (sentences) in the conversation.",
"The initial state m ( c ) 0 is a trainable parameter.",
"We apply an attention mechanism on each of the memories M q , M r , M c and M h (of current query) separately.",
"Refer to Figure 1 for an intuitive illustration of these memories.",
"Following (Shen et al., 2016), we choose projected cosine similarity function as the attention module.",
"The attention score a q j,i on memory m q i M q for a RNN hidden state t j in the decoder part is computed as follows: a qj,i = softmax i =1 ,..., | M q | cos ( W q 1 m qi , W q 2 t j ) , (14) where W q 1 and W q 2 are weight vectors associated with m qi and t j , respectively.",
"Consequently, the attention vector on the query sequences is given by: c q j = | M q | X i =1 a q j,i m q i .",
"Similarly, the attention scores and attention vectors on the other three memories can be derived by replacing q with r , c , and h in Equations 14, 15.",
"We then concatenate these resulting attention vectors into a final context vector c Mj , which is consequently applied to Equations 8 and 9.",
"Since the dimension of the updated context vector c Mj is four times larger, its weight matrix C will need to be enlarged with a same column dimension with the dimension of c Mj so that Cc Mj still aligns with the dimension of the hidden layer vector t j .",
"More details of the GRU function style definition of t j using c j can be found in (Bahdanau et al., 2015).",
"We refer to this model that integrates multiple types of memories through separated attention mechanisms as MULTI .",
"Note that, by separately embedding conversation context into multiple type memories following the number of speakers, we can easily extend this two speaker scenario into group chatting in which tens or hundreds of speakers can be engaged in.",
"The only extension is to further separate M q by speakers.",
"Consequently, the context vector can be concatenated using the attention vectors by read-2053 (cid:0) (cid:0) (cid:0) (cid:0) Figure 2: Illustration of the reason part of the REASON model.",
"ing all the memories.",
"The theoretical benefit is that the chatbot can softly keep track of each individual speaker's topics and then make a decision of how to response to that speaker.",
"Another extension will be using a reinforcement learning policy to determine when to let the chatbot to give a response to which speaker in current group chatting.",
"Generally, the number and type of memories can be enlarged in a reasonable way, such as by introducing external knowledge (Yih and Ma, 2016; Ghazvininejad et al., 2017; Han et al., 2015) or performing sentiment analysis to the fact mem-ories to yield emotional memories (Zhou et al., 2017).",
"A detailed description and experimental testifying is out of the scope of this paper.",
"As illustrated in Figure 2, we apply a multiturn reasoning mechanism, following Shen et al. (2016), to the multiple-type annotation memories.",
"This reasoning mechanism replaces the single-turn attention mechanism.",
"We adapt the idea of using a termination state during the inference to dynamically determine how many turns to reason.",
"The termination module can decide whether to continue to infer the next turn (of re-reading the four types of memories) after digesting intermediate topical and speaker-specific information, or to terminate the whole inference process when it concludes that existing information is sufficient to generate the next word in a response.",
"Generally, the idea is to construct a reasoning attention vector that works as a context vector during generating the next word.",
"This idea is included in the Reasoning box in Figure",
"2. Specially, y j 1 stands for a former word generated by the hidden state s j 1 in the GRU decoder.",
"E y is the embedding matrix.",
"We use a full-connection layer to map from s j 1 to the initial reasoning hidden state h R 1 , since h R m should be with the same length alike each memory vector in M q,r,c,h and s j 1 's dimension is smaller than that.",
"Thus, (1) outside the reasoning box, we use a GRU decoder to yield s j so that a next word y j can be generated, and (2) inside the reasoning box, we read the memories to yield the optimal contextual vector.",
"The reasoning box takes the memories M q,r,c,h and s j 1 as inputs and finally outputs c R j,m .",
"The number of reasoning turns for yielding the reasoning attention vectors ( c R j which is further indexed by reasoning steps of 1, 2 in Figure 2) during the decoding inference is dynamically parameterized by both the contextual memories and current query, and is generally related to the complexities of the conversation context and current query.",
"The training steps are performed as per the general framework as described in Equations 8 and 9.",
"For each reasoning hidden state h R m , the termination probability o m is estimated by f tg ( h R m ; tg ) , which is o m = (1 o m 1 ) ( w > t h R m + b t ) , (16) where tg = { w t , b t } , w t is a weight vector, b t is a bias, and is the sigmoid logistic function.",
"Then, different hidden states h R m are first weighted by their termination probabilities o m and then summed to produce a reasoning-attention context vector c R j (using the equations as described previously in Section 3.2), which is consequently used to construct the next reasoning step's h R 2 = RNN( h R 1 , c R j, 1 ).",
"The final c R j,m ( m 1 is the final reasoning step) will be used in Equations 8 and 9 in a way alike former attention vectors.",
"During our experiments, we instead used a sum of from o 2 c R j, 1 to o m +1 c R j,m as the final c 0 R j,m for next word generation.",
"During generating each word in the response, our network performs a response action r m at the m -th step, which implies that the termination gate variables o 1: m = ( o 1 = 0 , o 2 = 2054 A: i'm bored B: boooring A: isn't it?",
"0 , ..., o m 1 = 0 , o m = 1) .",
"A stochastic policy (( o m , r m ) | h R m , t j ; ) with parameters to get a distribution of termination actions, to continue reading the conversation context (i.e., M q,r,c,h ) or to stop, and of response actions r m for predicting the next word if the model decides to stop at current step.",
"In our experiments, we set a maximum step parameter T max to be 5 for heuristically avoiding too many reasoning times.",
"We follow (Shen et al., 2016) to compute the expected reward and its gradient for one instance.",
"We refer to this model with multi-turn reasoning attentions as REASON .",
"In our experiments, we used a dataset consisting of Japanese twitter conversations.",
"Each conversation contains 10 sentences from two real-world alternating speakers.",
"Given the origin of the dataset, it is quite noisy, containing misspelled words, slang and kaomoji (multi-character sequences of facial emoticons) among meaningful words and characters.",
"Preliminary experiments by using a word-based approach resulted in the vocabulary size being too big and with too many word breaking errors, we instead used a character-based approach.",
"Figure 3 shows a sample 10-sentence conversation in which original Japanese sentences were translated into English and similar spelling patterns were kept in a sense (such as boooring for boring and whyyy for why ).",
"We kept the conversations in which all sentences were no more than 20 characters.",
"This fil-tering strategy resulted in a dataset of 254K conversations from which 100 (1K sentences) where taken out for testing and another 100 for validat-Conversation Characters sessions Sentences (Unique) Train 253K 2.5M 24M (6,214) Validation 100 1K 10K (836) Test 100 1K 9.3K (780) Table 1: Statistics of the filtered datasets.",
"ing and hyper-parameter tuning.",
"The training set contains 6,214 unique characters, which are used as our vocabulary with the addition of two special symbols, an UNK (out-of-vocabulary unknown word) and an EOS (end-of-sentence).",
"Table 1 shows major statistics of the dataset.",
"The training minimizes negative log-likelihood (NLL) per character on the nine sentences s 2 ,..., 10 of each conversation.",
"One configuration in MULTI and REASON is that, we respectively use the reference contexts (instead of former automatically generated sentences) to generate current sentence.",
"That is, when generating s i , we use the golden contextual sentences of from s 1 to s i 1 .",
"These three systems were respectively trained 3 epochs (10,000 iterations) on an AdaDelta (Zeiler, 2012) optimizer.",
"Character embedding matrix was shared by both the encoder and the decoder parts.",
"All the hidden layers, in the encoding/decoding parts and the attention models, were of size 200 and the character embeddings were of size 100.",
"The recurrent units that we used were GRU.",
"The gradients were clipped at the maximum gradient norm of 1.",
"The reasoning module's maximum steps T max was set to be 5.",
"The data was iterated on mini-batches of less than 1,500 symbols each.",
"We initialized the recurrent weight matrices in GRUs as random orthogonal matrices.",
"Unless specially mentioned, all the elements of the 0-indexed vectors and all bias vectors were initialized to be zero.",
"Any other weight matrices were initialized by sampling from the Gaussian distribution of mean 0 and variance 0.01.",
"Figure 4 shows the progression of the NLLs per 2055 SIMPLE MULTI REASONBLEU4 Train 1.98 1.97 2.30 Validation 1.80 2.12 2.62 Test 2.20 2.13 2.89 BLEU2 Train 6.77 6.78 7.03 Validation 6.67 6.89 8.14 Test 7.19 7.24 7.97 Table 2: Character-level BLEU-2/4 (%) scores.",
"character during training.",
"The validation costs begun converging in the third epoch for the three models.",
"The plot roughly shows lower cost for more complex models.",
"Galley et al. (2015) obtained better correlation with human evaluation when using BLEU-2 rather than BLEU-4.",
"We thus report both of these scores for automatic evaluation and comparison.",
"The character-level BLEU-4 and BLEU-2 scores for the trained models are reported in Table",
"2. The REASON model achieved consistently better BLEU-2 and BLEU-4 scores in the three datasets.",
"MULTI performed slightly better than SIMPLE on the validation set yet that performance is less stable than REASON .",
"Figure 4 also reflects that, (1) the final training costs of SIMPLE and MULTI are quite close with each other at iteration 10,000; (2) there is a big margin of between the final training cost of REASON and that of SIMPLE or MULTI ; and (3) the validation costs exactly follows an order of SIMPLE > MULTI > REASON .",
"Figure 5 illustrates an English translation of a conversation and the responses suggested by each of the described models.",
"This conversation is extracted from the test set.",
"The three responses are different from the reference response, but the one from REASON looks the most consistent with the given context.",
"The response from MULTI is contradicting the context of speaker B as he/she said Not at all in a former sentence.",
"As it has been shown in (Liu et al., 2016) that BLEU doesn't correlate well with human judgments, we asked three human evaluators to respectively examine 900 responses from each of the models given their reference contexts.",
"The evaluators were asked to judge (1) whether one response is acceptable and (2) whether one response is better than the other two responses.",
"A summary of this evaluation is displayed in Table",
"3. The acceptable column refers to the percentage of responses A: I feel nostalgic.",
"that were considered acceptable by at least two of the human evaluators while the best-of-three columns refers to the percentage of times that each model's response was considered by at least two evaluators to be better than the other two's, from the contexts that had at least one acceptable response.",
"The last two columns make one-to-one comparisons.",
"In 18% of the contexts, none of the models produced an acceptable response.",
"This human evaluation shows that complexer models are more likely to produce acceptable responses.",
"The MULTI and REASON models are only different in the attention mechanism of multiturn reasoning.",
"The reasoning module performed better than single-turn attention 58% of the times.",
"Table 4 contains the character-level distinct-n (Li et al., 2016a) metrics for n-grams where 1 n 5 .",
"This metric measures the number of distinct n-grams divided by the total number of n-grams in the generated responses.",
"The displayed results are computed on the concatenation of all the responses to the test-set contexts.",
"The Reference column was computed on the reference responses and represents the optimal human-like ratio.",
"SIMPLE performed the best at uni-gram diver-2056 SIMPLE MULTI REASON Reference distinct-1 .039 .028 .032 .088 distinct-2 .112 .095 .121 .407 distinct-3 .199 .180 .238 .588 distinct-4 .248 .241 .310 .587 distinct-5 .255 .265 .328 .530 Table 4: N-gram diversity metrics of between (1) the responses generated to the test set and (2) their reference responses.",
"sity.",
"For n-grams n 2 , REASON produced the most diverse outputs.",
"While the results for REASON were consistently better than the other two models, the results for MULTI were not always better than SIMPLE .",
"This indicates MULTI does not always benefit from the augmented context without the multi-turn reasoning attentions.",
"We have presented a novel approach to multi-turn conversation modeling.",
"Our approach uses multiple explicitly separated memories to represent rich conversational contexts.",
"We also presented multiturn reasoning attentions to integrate various annotation memories.",
"We run experiments on three different models with and without the introduced approaches and measured their performances using automatic metrics and human evaluation.",
"Experimental results verified that the increased contexts are able to help producing more acceptable and diverse responses.",
"Driven by the depth of the reasoning attention, the diversities of the responses are significantly improved.",
"We argue that the reasoning attention mechanism helps integrating the multiple pieces of information as it can combine them in a more complex way than a simple weighted sum.",
"We further observed that as the accuracy of the conversation model improves, the diversity of the generated responses increases.",
"The proposed approach of multi-turn reasoning over multiple memory attention networks is presented in a general framework that allows the inclusion of memories of multiple resources and types.",
"Applying to group chatting with more than two speakers and reasoning over emotion embeddings or knowledge vectors included from an external knowledge base/graph are taken as our future directions.",
"The authors thank the anonymous reviewers for their impressive comments and suggestions for"
] | [
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"other"
] |
[
"Semi-Supervised Text Classification (SSTC) mainly works under the spirit of self-training.",
"They initialize the deep classifier by training over labeled texts; and then alternatively predict unlabeled texts as their pseudo-labels and train the deep classifier over the mixture of labeled and pseudo-labeled texts.",
"Naturally, their performance is largely affected by the accuracy of pseudo-labels for unlabeled texts.",
"Unfortunately, they often suffer from low accuracy because of the margin bias problem caused by the large difference between representation distributions of labels in SSTC.",
"To alleviate this problem, we apply the angular margin loss, and perform Gaussian linear transformation to achieve balanced label angle variances, i.e., the variance of label angles of texts within the same label.",
"More accuracy of predicted pseudo-labels can be achieved by constraining all label angle variances balanced, where they are estimated over both labeled and pseudo-labeled texts during self-training loops.",
"With this insight, we propose a novel SSTC method, namely Semi-Supervised Text Classification with Balanced Deep representation Distributions (S 2 TC-BDD ).",
"To evaluate S 2 TCBDD , we compare it against the state-of-the-art SSTC methods.",
"Empirical results demonstrate the effectiveness of S 2 TC-BDD , especially when the labeled texts are scarce.",
"S emiS upervised L earning ( SSL ) refers to the paradigm of learning with labeled as well as unlabeled data to perform certain applications (van Engelen and Hoos, 2020).",
"Especially, developing effective SSL models for classifying text data has long been a goal for the studies of natural language processing, because labeled texts are difficult to col-lect in many real-world scenarios.",
"Formally, this Contributing equally with the first author.",
"Corresponding author.",
"research topic is termed as S emiS upervised T ext C lassification ( SSTC ), which nowadays draws much attention from the community (Clark et al., 2018; Gururangan et al., 2019; Chen et al., 2020).",
"To our knowledge, the most recent SSTC methods mainly borrow ideas from the successful patterns of supervised deep learning, such as pretraining and fine-tuning (Dai and Le, 2015; Howard and Ruder, 2018; Peters et al., 2018; Gururangan et al., 2019; Devlin et al., 2019).",
"Generally, those methods perform deep representation learning on unlabeled texts followed by supervised learning on labeled texts.",
"However, a drawback is that they separately learn from the labeled and unlabeled texts, where, specifically, the deep representations are trained without using the labeling information, resulting in potentially less discriminative representations as well as worse performance.",
"To avoid this problem, other SSTC methods combine the traditional spirit of self-training with deep learning, which jointly learn the deep representation and classifier using both labeled and unlabeled texts in a unified framework (Miyato et al., 2017, 2019; Sachan et al., 2019; Xie et al., 2020; Chen et al., 2020).",
"To be specific, this kind of methods initializes a deep classifier, e.g., BERT (Devlin et al., 2019) with Angular Margin (AM) loss (Wang et al., 2018), by training over labeled texts only; and then it alternatively predicts unlabeled texts as their pseudo-labels and trains the deep classifier over the mixture of labeled and pseudo-labeled texts.",
"Accordingly, both labeled and unlabeled texts can directly contribute to the deep classifier training.",
"Generally speaking, for deep self-training methods, one significant factor of performance is the accuracy of pseudo-labels for unlabeled texts.",
"Unfortunately, they often suffer from low accuracy, where one major reason is the margin bias problem.",
"To interpret this problem, we look around the AM loss with respect to the label angle , i.e., the angles between deep representations of texts and weight vectors of labels.",
"For unlabeled texts, the pseudo-labels are predicted by only ranking the label angles, but neglecting the difference between label angle variances , i.e., the variance of label angles of texts within the same label, which might be much large in SSL as illustrated in Fig.1.",
"In this context, the boundary of AM loss is actually not the optimal one, potentially resulting in lower accuracy for pseudo-labels (see",
"Fig.2(a)).",
"To alleviate the aforementioned problem, we propose a novel SSTC method built on BERT with AM loss, namely S emiS upervised T ext C lassification with B alanced D eep representation D istributions ( S 2 TC-BDD ).",
"Most specifically, in S 2 TC-BDD , we suppose that the label angles are drawn from each label-specific Gaussian distribution.",
"Therefore, for each text we can apply linear transformation operations to balance the label angle variances.",
"This is equivalent to moving the boundary to the optimal one, so as to eliminate the margin bias (see examples in",
"Fig.2(b)).",
"We can estimate each label angle variance over both labeled and pseudo-labeled texts during the self-training loops.",
"We evaluate the proposed S 2 TC-BDD method by comparing the most recent deep SSTC methods.",
"Experimental results indicate the superior performance of S 2 TC-BDD even with very few labeled texts.",
"The pre-training and fine-tuning framework has lately shown impressive effectiveness on a variety of tasks (Dai and Le, 2015; Radford et al., 2019a; Howard and Ruder, 2018; Peters et al., 2018; DeFigure",
"DeFigure 2: Let solid circle and triangle denote labeled positive and negative texts, and hollow ones denote corresponding unlabeled texts.",
"(a) The large difference between label angle variances results in the margin bias.",
"Many unlabeled texts (in red) can be misclassified.",
"(b) Balancing the label angle variances can eliminate the margin bias.",
"Best viewed in color.",
"vlin et al., 2019; Yang et al., 2019; Chen et al., 2019; Akbik et al., 2019; Radford et al., 2019b; Brown et al., 2020; Chen et al., 2020).",
"They mainly perform deep representation learning on generic data, followed by supervised learning for downstream tasks.",
"Several SSTC methods are built on this framework (Dai and Le, 2015; Howard and Ruder, 2018; Peters et al., 2018; Gururangan et al., 2019; Devlin et al., 2019).",
"For instance, the VAriational Methods for Pretraining In Resource-limited Environments (VAMPIRE) (Gururangan et al., 2019) first pre-trains a Variational Auto-Encoder (VAE) model on unlabeled texts, and then trains a classifier on the augmentation representations of labeled texts computed by the pre-trained VAE.",
"However, the VAE model is trained without using the labeling information, resulting in potentially less discriminative representations for labeled texts.",
"Recent works on SSTC mainly focus on deep self-training (Miyato et al., 2017; Clark et al., 2018; Sachan et al., 2019; Miyato et al., 2019; Xie et al., 2020; Chen et al., 2020), which can jointly learn deep representation and classifier using both labeled and unlabeled texts in a unified framework.",
"It is implemented by performing an alternative process, in which the pseudo-labels of unlabeled texts are updated by the current deep classifier, and then the deep classifier is retrained over both labeled and pseudo-labeled texts.",
"For example, the Virtual Adversarial Training (VAT) method (Miyato et al., 2017, 2019) follows the philosophy of making the classifier robust against random and local perturbation.",
"It first generates the predictions of original texts with the current deep classifier and then trains the deep classifier by utilizing a consistency loss between the original predictions and the outputs of deep classifier over noise texts by applying local perturbations to the embeddings of original texts.",
"Further, the work in (Sachan et al., 2019) combines maximum likelihood, adversarial training, virtual adversarial training, and entropy minimization in a unified objective.",
"Furthermore, rather than applying local perturbations, Unsupervised Data Augmentation (UDA) (Xie et al., 2020) employs consistency loss between the predictions of unlabeled texts and corresponding augmented texts by data augmentation techniques such as back translations and tf-idf word replacements.",
"The work (Clark et al., 2018) exploits cross-view training by matching the predictions of auxiliary prediction modules over the restricted views of unlabeled texts ( e.g., only part of sentence) with ones of primary prediction module over the corresponding full views.",
"Orthogonal to the aforementioned self-training SSTC methods, our S 2 TC-BDD further considers the margin bias problem by balancing the label angle variances.",
"This is beneficial for more accurate pseudo-labels for unlabeled texts, so as to boost the performance of SSTC tasks.",
"In this section, we describe the proposed deep self-training SSTC method, namely S emiS upervised T ext C lassification with B alanced D eep representation D istributions ( S 2 TC-BDD ).",
"Formulation of SSTC Consider a training dataset D consisting of a limited labeled text set D l = { ( x li , y li ) } i = N l i =1 and a large unlabeled text set D u = { x uj } j = N u j =1 .",
"Specifically, let x li and x uj denote the word sequences of labeled and unlabeled texts, respectively; and let y li { 0 , 1 } K denote the corresponding one-hot label vector of x li , where y lik = 1 if the text is associated with the k -th label, or y lik = 0 otherwise.",
"We declare that N l , N u , and K denote the numbers of labeled texts, unlabeled texts and category labels, respectively.",
"In this paper, we focus on the paradigm of inductive SSTC, whose goal is to learn a classifier from the training dataset D with both labeled and unlabeled texts.",
"The important notations are described in Table",
"1. 3.1 Overview of S 2 TC-BDD Overall speaking, our S 2 TC-BDD performs a self-training procedure for SSTC.",
"Given a training dataset, it first trains a fine-tuned deep classifier based on the pre-trained BERT model (Devlin et al., Table 1: Summary of notations Notation Description N l Number of labeled texts N u Number of unlabeled texts K Number of category labels D l Labeled text set D u Unlabeled text set x l Word sequence of labeled text in D l x u Word sequence of unlabeled text in D u y l { 0 , 1 } K One-hot label vector of labeled text 2019) with AM loss (Wang et al., 2018).",
"During the self-training loops, we employ the current deep classifier to predict unlabeled texts as pseudo-labels, and then update it over both labeled and pseudo-labeled texts.",
"In particular, we develop a B alanced D eep representation D istribution ( BDD ) loss, aiming at more accurate pseudo-labels for unlabeled texts.",
"The overall framework of S 2 TC-BDD is shown in Fig.3.",
"We now present the important details of S 2 TC-BDD .",
"BDD Loss Formally, our BDD loss is extended from the AM loss (Wang et al., 2018).",
"For clarity, we first describe the AM loss with respect to angles.",
"Given a training example ( x i , y i ) , it can be formulated below: L am ( x i , y i ; ) = K (cid:88) k =1 y ik log e s (cos( ik ) y ik m ) (cid:80) Kj =1 e s (cos( ij ) y ij m ) , (1) where denotes the model parameters, cos( ik ) = f (cid:62) i W k (cid:107) f i (cid:107) 2 (cid:107) W k (cid:107) 2 , (cid:107)(cid:107) 2 is the (cid:96) 2 -norm of vectors; f i and W k denote the deep representation of text x i and the weight vector of label k , respectively; ik is the angle between f i and W k ; s and m are the parameters used to control the rescaled norm and magnitude of cosine margin, respectively.",
"Reviewing Eq.1, we observe that it directly measures the loss by label angles of texts only.",
"We kindly argue that it corresponds to non-optimal decision boundary in SSTC, where the difference between label angle variances is much larger than supervised learning.",
"To alleviate this problem, we suppose that the label angles are drawn from each label-specific Gaussian distri-Figure 3: Overview the framework of S 2 TC-BDD .",
"Best viewed in color.",
"bution {N ( k , 2 k ) } k = K k =1 .",
"Thanks to the properties of Gaussian distribution, we can easily transfer them into the ones with balanced variances {N ( k , (cid:98) 2 ) } k = K k =1 , (cid:98) 2 = (cid:80) Kk =1 2 k K by performing the following linear transformations to the angles: k ( ik ) = a k ik + b k , k [ K ] , (2) where a k = (cid:98) k , b k = (1 a k ) k .",
"(3) With these linear transformations { k ( ) } k = K k =1 , all angles become the samples from balanced angular distributions with the same variances, e.g., k ( ik ) N ( k , (cid:98) 2 ) .",
"Accordingly, the angular loss of Eq.1 can be rewritten as the following BDD loss: L bdd ( x i , y i ; ) = K (cid:88) k =1 y ik log e s (cos( k ( ik )) y ik m ) (cid:80) K j =1 e s (cos( j ( ij )) y ij m ) .",
"Supervised Angular Loss Applying the BDD loss L bdd of Eq.4 to the labeled text set D l = { ( x li , y li ) } i = N l i =1 , we can formulate the following supervised angular loss:",
"Unsupervised Angular Loss Under the self-training paradigm, we form the loss with unlabeled texts and pseudo-labels.",
"Specifically, we denote the pseudo-label as the output probability of the deep classifier.",
"It is computed by normalizing { cos( k ( ik )) } k = K k =1 with the softmax function: p ( k | x i , ) = e cos( k ( ik )) (cid:80) Kj =1 e cos( j ( ij )) (cid:44) y i , k [ K ] .",
"For each unlabeled text x ui the pseudo-label distribution is given by p ( k | x ui , (cid:101) ) (cid:44) y ui with the fixed copy (cid:101) of the current model parameter during self-training loops.",
"Besides, to avoid those pseudo-label distributions { y ui } N u i =1 too uniform, we employ a sharpen function with a temperature T over them: y ui = Sharpen( y ui , T ) = ( y ui ) 1 /T (cid:107) ( y ui ) 1 /T (cid:107) 1 , i [ N u ] , where (cid:107)(cid:107) 1 is the (cid:96) 1 -norm of vectors.",
"When T 0 , the pseudo-label distribution tends to be the one-hot vector.",
"Applying the BDD loss of Eq.4 to the unlabeled text set D u = { x uj } j = N u j =1 and pseudo-label distributions { y ui } N u i =1 , we can formulate the following unsupervised angular loss: L u ( D u , { y ui } N u i =1 ; ) = 1 N u N u (cid:88) i =1 L bdd ( x ui , y ui ; ) .",
"This conditional entropy regularization is introduced by (Grandvalet and Bengio, 2004), and also utilized in (Sajjadi et al., 2016; Miyato et al., 2019; Sachan et al., 2019).",
"It also sharpens the output probability of the deep classifier.",
"Full Objective of S 2 TC-BDD Combining the supervised angular loss",
"Eq.(5), unsupervised angular loss",
"Eq.(6), and entropy regularization",
"Eq.(7), the full objective of S 2 TC-BDD can be formulated below: L ( D l , D u ; ) = L l ( D l ; ) + 1 L u ( D u , { y ui } N u i =1 ; ) + 2 R ( D l , D u ; ) , (8) where 1 and 2 are regularization parameters.",
"In this section, we describe implementations of label angle variances.",
"As mentioned before, what we concern is the estimations of angular distributions {N ( k , 2 k ) } k = K k =1 , where their draws are the angles between deep representations of texts and label prototypes denoted by { c k } k = K k =1 .",
"Both { ( k , 2 k ) } k = K k =1 and { c k } k = K k =1 are estimated over both labeled and pseudo-labeled texts during self-training loops.",
"In the following, we describe their learning processes in more detail.",
"Within the framework of stochastic optimization, we update the { ( k , 2 k ) } k = K k =1 and { c k } k = K k =1 per-epoch.",
"For convenience, we denote as the index set of labeled and unlabeled texts in one epoch, { f i } i and { y i } i as the deep representations of texts and corresponding label or pseudo-label vectors ( i.e., y li or y ui ) in the current epoch, respectively.",
"Estimating Label Prototypes Given the current { f i } i and { y i } i , we calculate the label prototypes { c k } k = K k =1 by the weighted average of { f i } i , formulated below: c k = (cid:80) i y ik f i (cid:80) i y ik , k [ K ] .",
"To avoid the misleading affect of some mislabeled texts, inspired by (Liu et al., 2020), we update { c k } k = K k =1 by employing the moving average with a learning rate :",
"{ f i } i and { c k } k = K k =1 , the angles between them can be calculated by: ik = arccos (cid:0) f (cid:62) i c k (cid:107) f i (cid:107) 2 (cid:107) c k (cid:107) 2 (cid:1) , i , k [ K ] .",
"(10)",
"Accordingly, we can compute the estimations of { k } k = K k =1 and { 2 k } k = K k =1 as follows: k = (cid:80) i y ik ik (cid:80) i y ik , (11) 2 k = (cid:80) i y ik ( ik k ) 2 (cid:80) i y ik 1 .",
"(12)",
"Further, the moving average is also used to the updates below: ( t ) k (1 ) ( t ) k + ( t 1) k , ( 2 k ) ( t ) (1 )( 2 k ) ( t ) + ( 2 k ) ( t 1) .",
"Datasets To conduct the experiments, we employ three widely used benchmark datasets for text classification: AG News (Zhang et al., 2015), Yelp (Zhang et al., 2015), and Yahoo (Chang et al., 2008).",
"For all datasets, we form the unlabeled training set D u , labeled training set D l and development set by randomly drawing from the corresponding original training datasets, and utilize the original test sets for prediction evaluation.",
"The dataset statistics and split information are described in Table",
"2. Baseline Models To evaluate the effectiveness of S 2 TC-BDD , we choose five existing SSTC algorithms for comparison.",
"The details of baseline methods are given below.",
"NB+EM (Nigam et al., 2000): A semi-supervised text classification method combining a Naive Bayes classifier (NB) and Expectation-Maximization (EM).",
"In experiments, we pre-process texts following (Gu-rurangan et al., 2019) and use tf-idfs as the representations of texts.",
"BERT (Devlin et al., 2019): A supervised text classification method built on the pre-trained BERT-based-uncased model 1 and fine-tuned with the supervised softmax loss on labeled texts.",
"BERT+AM : A semi-supervised text classification method built on the pre-trained BERT-based-uncased 1 and fine-tuned following the self-training spirit with the AM loss on both labeled and unlabeled texts.",
"VAMPIRE (Gururangan et al., 2019): A semi-supervised text classification method based on variational pre-training.",
"The code is available on the net.",
"2 In experiments, the default parameters are utilized.",
"VAT (Miyato et al., 2019): A semi-supervised text classification method based on virtual adversarial training.",
"[parameter configuration: perturbation size (cid:15) = 5 . 0 , regularization co-efficient = 1 . 0 , hyperparameter for finite difference = 0 . 1 ] UDA (Xie et al., 2020): A semi-supervised text classification method based on unsupervised data augmentation with back translation.",
"The code is available on the net.",
"3 In experiments, we utilize the default parameters, and generate the augmented unlabeled data by using FairSeq 4 with German as the intermediate language.",
"For S 2 TC-BDD , BERT, BERT+AM, VAT and UDA, we utilize BERT-based-uncased tokenizer to tokenize texts; average pooling over BERT-based-uncased model as text encoder to encode texts; and a two-layer MLP, whose hidden size and activation function are 128 and tanh respectively, as the classifier to predict labels.",
"We set the max sentence length as 256 and remain the first 256 tokens for texts exceeding the length limit.",
"For optimization, we utilize the Adam optimizer with learning rates of 5e-6 for BERT encoder and 1e-3 for MLP classifier.",
"For BERT, we set the batch size of labeled tests as 8.",
"For S 2 TC-BDD , BERT+AM, VAT and UDA, the batch sizes of labeled and unlabeled tests are 4 and 8, respectively.",
"For all datasets, we iterate 20 epochs, where each one contains 200 inner loops.",
"All experiments are carried on a Linux server with two NVIDIA GeForce RTX 2080Ti GPUs, Intel Xeon E5-2640 v4 CPU and 64G memory.",
"2 https://github.com/allenai/vampire 3 https://github.com/google-research/uda 4 https://github.com/pytorch/fairseq",
"1 .",
"0 , 2 = 1 .",
"0 , s = 1 .",
"0 , m = 0 .",
"01 .",
"Specifically, for Yelp we set m = 0 .",
"3 .",
"For the sharpening temperature T , we set 0 .",
"5 for AG News and Yahoo , 0 .",
"3 for Yelp .",
"The learning rate of label prototypes and label angle variances is set to 0 .",
"1 .",
"Metrics We utilize two metrics of Micro-F1 and Macro-F1, which are two different types of the averaged F1 scores.",
"In experiments, we employ the implementation of Micro-F1 and Macro-F1 in the public Scikit-Learn (Pedregosa et al., 2011) tool.",
"5 4.2 Results For all datasets, we perform each method with five random seeds, and report the average scores.",
"We first evaluate the classification performance of S 2 TC-BDD with different amounts of labeled texts.",
"For all methods, we conduct the experiments by varying the number of labeled texts N l over the set { 100 , 1000 , 10000 } with the number of unlabeled texts N u = 20000 for AG News and Yelp , and N u = 40000 for Yahoo .",
"The classification results of both Micro-F1 and Macro-F1 over all datasets are shown in Table 3, in which the best scores among all comparing baselines are highlighted in boldface.",
"Generally speaking, our proposed S 2 TC-BDD outperforms the baselines in most cases.",
"Across all datasets and evaluation metrics, S 2 TC-BDD ranks 1.1 in average.",
"Several observations are made below.",
"Comparing S 2 TC-BDD against baselines: First, we can observe that S 2 TC-BDD consistently dominates the pre-training methods (in-cluding BERT and VAMPIRE) on both Micro-F1 and Macro-F1 scores by a big margin, especially when labeled texts are scarce.",
"For example, when N l = 100 , the Macro-F1 scores 5 https://scikit-learn.org/stable/ Table 3: Experimental results of Micro-F1 and Macro-F1 varying the number of labeled texts N l .",
"of S 2 TC-BDD are even about 0.17, 0.26 and 0.24 higher than VAMPIRE on the datasets of AG News , Yelp and Yahoo , respectively.",
"Second, when labeled texts are very scarce ( i.e., when N l = 100 ), S 2 TC-BDD performs better than other self-training baseline methods ( i.e., NB+EM, BERT+AM, VAT and UDA) on all datasets, e.g., for Micro-F1 about 0.08 higher than VAT on Yahoo .",
"Otherwise, when labeled texts are large, S 2 TC-BDD can also achieve the competitive performance, even perform better across all datasets.",
"Comparing S 2 TC-BDD against BERT+AM and BERT: Our S 2 TC-BDD method consistently outperforms BERT+AM and BERT across all datasets and metrics.",
"For example, when N l = 100 the Micro-F1 scores of S 2 TCBDD beat those of BERT+AM by 0 .",
"01 0 .",
"03 and those of BERT by 0 .",
"03 0 .",
"05 across all datasets.",
"That is because S 2 TC-BDD employs both labeled and unlabeled texts for training and can predict more accurate pseudo-labels of unlabeled texts than BERT+AM, benefit-ing for the classifier training.",
"This result is expected since S 2 TC-BDD performs a Gaussian linear transformation to balance the label angel variances, so as to eliminate the margin bias, leading to more accurate predicted pseudo-labels of unlabeled texts.",
"Besides, these results empirically prove that unlabeled texts are beneficial to the classification performance.",
"Comparing BERT based methods against NB+EM and VAMPIRE: All BERT based methods ( i.e., BERT, BERT+AM, VAT, UDA and S 2 TC-BDD ) consistently dominate baselines based on small models ( i.e., NB+EM, VAMPIRE).",
"For example, when N l = 10000 , the Micro-F1 and Macro-F1 scores of BERT are about 0.03, 0.18 and 0.05 higher than those of NB+EM on the datasets of AG News , Yelp and Yahoo , respectively.",
"The observation is expected because BERT is a bigger model, hence can extract more discriminative representations of texts than those from the VAE model used in VAMPIRE and tf-idfs used in NB+EM.",
"For NB+EM, BERT+AM, VAMPIRE, VAT, UDA and S 2 TC-BDD , we also perform the experiments with 100 labeled texts and varying the number of unlabeled texts N u over the set { 0 , 200 , 2000 , 20000 } for AG News and Yelp , and { 0 , 400 , 4000 , 40000 } for Yahoo .",
"Note that VAMPIRE needs unlabeled texts for pre-training, thus we omit the experiments for VAMPIRE with N u = 0 .",
"The classification results are reported in Table",
"4. Roughly, for all methods the classification Table 4: Experimental results of Micro-F1 and Macro-F1 varying the number of unlabeled texts N u .",
"performance becomes better as the amount of unlabeled texts increasing.",
"For instance, the Micro-F1 scores of S 2 TC-BDD on all datasets gain about 0.3 improvement as the number of unlabeled texts increasing.",
"These results prove the effectiveness of unlabeled texts in riching the limited supervision from scarce labeled texts and improving the classification performance.",
"Besides, an obvious observation is that the self-training methods ( i.e., NB+EM, BERT+AM, VAT, UDA and S 2 TC-BDD ) consistently outperform the pre-training method ( i.e., VAMPIRE), especially when unlabeled texts are fewer.",
"The possible reason is that the pretraining methods need more unlabeled texts for pre-training while the self-training methods do not have the requirement.",
"We perform ablation studies by stripping each component each time to examine the effectiveness of each component in S 2 TC-BDD .",
"Here, we denote BDD as balanced deep representation angular loss L bdd in Eq.4.",
"Stripping BDD means that we replace the proposed loss L bdd with the AM loss L am in Eq.1.",
"The results are displayed in Table",
"5. Overall, the classification performance will drop when removing any component of S 2 TC-BDD , suggesting that all parts make contributions to the final performance of S 2 TC-BDD .",
"Besides, removing unlabeled texts brings the most significant drop of the performance.",
"This result is expected because label angle variances approximated only with very scarce labeled texts will have lower accuracy, resulting in worse performance.",
"Further, in contrast to entropy regularization, the performance after stripping BDD decrease more.",
"Note that the difference between the proposed L bdd and L am is whether constraining the label angle variances to be balanced or not.",
"This result indicates that the balanced constraint of label angle variances brings a better deep classifier as well as more accurate pseudo-labels for unlabeled texts, especially when labeled texts are limited, and also empirically prove the Table 6: Average per-epoch running time (second, s ) of BERT, BERT+AM and S 2 TC-BDD .",
"effectiveness of our balanced label angle variances.",
"To evaluate the efficiency of our S 2 TC-BDD , we perform efficiency comparisons over BERT, BERT+AM and S 2 TC-BDD on all benchmark datasets.",
"To be fair, for all methods and datasets we set the batch sizes of labeled and unlabeled texts to 4 and 8 respectively, and iterate 100 epochs, where each one consists of 200 inner loops.",
"The average per-epoch running time results are shown in Table",
"6. Generally speaking, the per-epoch running time of our proposed S 2 TC-BDD is close to those of BERT and BERT+AM.",
"This result means that Gaussian linear transformation and estimation of label angle variances in our S 2 TC-BDD only introduce very few computation costs.",
"That is expected since they merely require very few simple linear operations, which are very efficient.",
"In this paper, we propose a novel self-training SSTC method, namely S 2 TC-BDD .",
"Our S 2 TC-BDD addresses the margin bias problem in SSTC by balancing the label angle variances, i.e., the variance of label angles of texts within the same label.",
"We estimate the label angle variances with both labeled and unlabeled texts during the self-training loops.",
"To constrain the label angle variances to be balanced, we design several Gaussian linear transformations and incorporate them into a well established AM loss.",
"Our S 2 TC-BDD empirically outperforms the existing SSTC baseline methods.",
"We would like to acknowledge support for this project from the National Natural Science Foundation of China (NSFC) (No.61876071, No.62006094), the Key R&D Projects of Science and Technology Department of Jilin Province of China (No.20180201003SF, No.20190701031GH)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"other"
] |
[
"Language models pretrained on text from a wide variety of sources form the foundation of today's NLP.",
"In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task.",
"We present a study across four domains (biomedi-cal and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain ( domain-adaptive pretraining ) leads to performance gains, under both highand low-resource settings.",
"Moreover, adapting to the task's unlabeled data ( task-adaptive pretraining ) improves performance even after domain-adaptive pretraining.",
"Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable.",
"Overall, we consistently find that multiphase adaptive pretraining offers large gains in task performance.",
"Today's pretrained language models are trained on massive, heterogeneous corpora (Raffel et al., 2019; Yang et al., 2019).",
"For instance, ROBERTA (Liu et al., 2019) was trained on over 160GB of uncompressed text, with sources ranging from English-language encyclopedic and news articles, to literary works and web content.",
"Representations learned by such models achieve strong performance across many tasks with datasets of varying sizes drawn from a variety of sources (e.g., Wang et al., 2018, 2019).",
"This leads us to ask whether a task's textual domain a term typically used to denote a distribution over language characterizing a given topic or genre (such as science or mystery novels)is still relevant.",
"Do the latest large pretrained models work universally or is it still helpful to build Figure 1: An illustration of data distributions.",
"separate pretrained models for specific domains?",
"While some studies have shown the benefit of continued pretraining on domain-specific unlabeled data (e.g., Lee et al., 2019), these studies only consider a single domain at a time and use a language model that is pretrained on a smaller and less diverse corpus than the most recent language models.",
"Moreover, it is not known how the benefit of continued pretraining may vary with factors like the amount of available labeled task data, or the proximity of the target domain to the original pretraining corpus (see Figure 1).",
"We address this question for one such high-performing model, ROBERTA (Liu et al., 2019) ( 2).",
"We consider four domains (biomedical and computer science publications, news, and reviews; 3) and eight classification tasks (two in each do-main).",
"For targets that are not already in-domain for ROBERTA , our experiments show that continued pretraining on the domain (which we refer to as domain-adaptive pretraining or DAPT ) consistently improves performance on tasks from the target domain, in both highand low-resource settings.",
"Above, we consider domains defined around genres and forums, but it is also possible to induce a domain from a given corpus used for a task, such as the one used in supervised training of a model.",
"This raises the question of whether pretraining on a corpus more directly tied to the task can further improve performance.",
"We study how domain-adaptive pretraining compares to task-adaptive pretraining , or TAPT , on a smaller but directly task-relevant corpus: the unlabeled task dataset ( 4), drawn from the task distribution .",
"Task-adaptive pretraining has been shown effective (Howard and Ruder, 2018), but is not typically used with the most recent models.",
"We find that TAPT provides a large performance boost for ROBERTA , with or without domain-adaptive pretraining.",
"Finally, we show that the benefits from task-adaptive pretraining increase when we have additional unlabeled data from the task distribution that has been manually curated by task designers or annotators.",
"Inspired by this success, we propose ways to automatically select additional task-relevant unlabeled text, and show how this improves performance in certain low-resource cases ( 5).",
"On all tasks, our results using adaptive pretraining techniques are competitive with the state of the art.",
"a thorough analysis of domainand task-adaptive pretraining across four domains and eight tasks, spanning lowand high-resource settings; an investigation into the transferability of adapted LMs across domains and tasks; and a study highlighting the importance of pretraining on human-curated datasets, and a simple data selection strategy to automatically approach this performance.",
"Learning for most NLP research systems since 2018 consists of training in two stages.",
"First, a neural language model (LM), often with millions of parameters, is trained on large unlabeled cor-1 https://github.com/allenai/ dont-stop-pretraining pora.",
"The word (or wordpiece; Wu et al. 2016) representations learned in the pretrained model are then reused in supervised training for a downstream task, with optional updates ( fine-tuning ) of the representations and network from the first stage.",
"One such pretrained LM is ROBERTA (Liu et al., 2019), which uses the same transformer-based architecture (Vaswani et al., 2017) as its predecessor, BERT (Devlin et al., 2019).",
"It is trained with a masked language modeling objective (i.e., cross-entropy loss on predicting randomly masked tokens).",
"The unlabeled pretraining corpus for ROBERTA contains over 160 GB of uncompressed raw text from different English-language corpora (see Appendix A.1).",
"ROBERTA attains better performance on an assortment of tasks than its predecessors, making it our baseline of choice.",
"Although ROBERTA 's pretraining corpus is derived from multiple sources, it has not yet been established if these sources are diverse enough to generalize to most of the variation in the English language.",
"In other words, we would like to understand what is out of ROBERTA 's domain.",
"Towards this end, we explore further adaptation by continued pretraining of this large LM into two categories of unlabeled data:",
"(i) large corpora of domain-specific text ( 3), and",
"(ii) available unlabeled data associated with a given task ( 4).",
"Our approach to domain-adaptive pretraining ( DAPT ) is straightforwardwe continue pretraining ROBERTA on a large corpus of unlabeled domain-specific text.",
"The four domains we focus on are biomedical (BIOMED ) papers, computer science (CS) papers, newstext from REALNEWS , and AMAZON reviews.",
"We choose these domains because they have been popular in previous work, and datasets for text classification are available in each.",
"Table 1 lists the specifics of the unlabeled datasets in all four domains, as well as ROBERTA 's training corpus.",
"1 3.1 Analyzing Domain Similarity Before performing DAPT , we attempt to quantify the similarity of the target domain to ROBERTA 's pretraining domain.",
"We consider domain vocabularies containing the top 10K most frequent unigrams (excluding stopwords) in comparably sized 1 For BIOMED and CS, we used an internal version of S2ORC that contains papers that cannot be released due to copyright restrictions.",
"random samples of held-out documents in each do-main's corpus.",
"We use 50K held-out documents for each domain other than REVIEWS , and 150K held-out documents in REVIEWS , since they are much shorter.",
"We also sample 50K documents from sources similar to ROBERTA 's pretraining corpus (i.e., BOOKCORPUS , STORIES , WIKIPEDIA , and REALNEWS ) to construct the pretraining domain vocabulary, since the original pretraining corpus is not released.",
"Figure 2 shows the vocabulary overlap across these samples.",
"We observe that ROBERTA 's pretraining domain has strong vocabulary overlap with NEWS and REVIEWS , while CS and BIOMED are far more dissimilar to the other domains.",
"This simple analysis suggests the degree of benefit to be expected by adaptation of ROBERTA to different domainsthe more dissimilar the domain, the higher the potential for DAPT .",
"Our LM adaptation follows the settings prescribed for training ROBERTA .",
"We train ROBERTA on each domain for 12.5K steps, which amounts to single pass on each domain dataset, on a v3-8 TPU; see other details in Appendix B. This second phase of pretraining results in four domain-adapted LMs, one for each domain.",
"We present the masked LM loss of ROBERTA on each domain before and after DAPT in Table",
"1. We observe that masked LM loss decreases in all domains except NEWS after DAPT , where we observe a marginal increase.",
"We discuss cross-domain masked LM loss in Appendix E. Under each domain, we consider two text classification tasks, as shown in Table",
"2. Our tasks represent both highand low-resource ( 5K labeled training examples, and no additional unlabeled data) settings.",
"For HYPERPARTISAN , we use the data splits from Beltagy et al. (2020).",
"For RCT, we represent all sentences in one long sequence for simultaneous prediction.",
"Baseline As our baseline, we use an off-the-shelf ROBERTA -base model and perform supervised fine-tuning of its parameters for each classification task.",
"On average, ROBERTA is not drastically behind the state of the art (details in Appendix A.2), and serves as a good baseline since it provides a single LM to adapt to different domains.",
"Classification Architecture Following standard practice (Devlin et al., 2019) we pass the final layer [CLS] token representation to a task-specific feedforward layer for prediction (see Table 14 in Appendix for more hyperparameter details).",
"Results Test results are shown under the DAPT column of Table 3 (see Appendix C for validation results).",
"We observe that DAPT improves over ROBERTA in all domains.",
"For BIOMED , CS, and REVIEWS , we see consistent improve-Domain Task Label Type Train (Lab.) Train (Unl.) Dev.",
"ments over ROBERTA , demonstrating the benefit of DAPT when the target domain is more distant from ROBERTA 's source domain.",
"The pattern is consistent across highand lowresource settings.",
"Although DAPT does not increase performance on AGNEWS , the benefit we observe in HYPERPARTISAN suggests that DAPT may be useful even for tasks that align more closely with ROBERTA 's source domain.",
"Additionally, we compare DAPT against a setting where for each task, we adapt the LM to a domain outside the domain of interest.",
"This controls for the case in which the improvements over ROBERTA might be attributed simply to exposure to more data, regardless of the domain.",
"In this setting, for NEWS , we use a CS LM; for REVIEWS , a BIOMEDLM; for CS, a NEWSLM; for BIOMED , a REVIEWSLM.",
"We use the vocabulary overlap statistics in Figure 2 to guide these choices.",
"Our results are shown in Table 3, where the last column ( DAPT ) corresponds to this setting.",
"For each task, DAPT significantly outperforms adapting to an irrelevant domain, suggesting the importance of pretraining on domain-relevant data.",
"Furthermore, we generally observe that DAPT results in worse performance than even ROBERTA on end-tasks.",
"Taken together, these results indicate that in most settings, exposure to more data without considering domain relevance is detrimental to end-task performance.",
"However, there are two tasks (SCIERC and ACL-ARC) in which DAPT marginally improves performance over ROBERTA .",
"This may suggest that in some cases, continued pretraining on any additional data is useful, as noted in Baevski et al. (2019).",
"Our analysis of DAPT is based on prior intuitions about how task data is assigned to specific domains.",
"For instance, to perform DAPT for HELPFULNESS , we only adapt to AMAZON reviews, but not to any REALNEWS articles.",
"However, the gradations in Figure 2 suggest that the boundaries between domains are in some sense fuzzy; for example, 40% of unigrams are shared between REVIEWS and NEWS .",
"As further indication of this overlap, we also qualitatively identify documents that overlap cross-domain: in Table 4, we showcase reviews and REALNEWS articles that are similar to these reviews (other examples can be found in Appendix D).",
"In fact, we find that adapting ROBERTA to IMDB review REALNEWS article The Shop Around the Corner is one of the great films from director Ernst Lubitsch .",
"NEWS not as harmful to its performance on REVIEWS tasks ( DAPT on NEWS achieves 65.5 2 . 3 on HELPFULNESS and 95.0 0 . 1 on IMDB).",
"Although this analysis is by no means comprehensive, it indicates that the factors that give rise to observable domain differences are likely not mutually exclusive.",
"It is possible that pretraining beyond conventional domain boundaries could result in more effective DAPT ; we leave this investigation to future work.",
"In general, the provenance of data, including the processes by which corpora are curated, must be kept in mind when designing pretraining procedures and creating new benchmarks that test out-of-domain generalization abilities.",
"Datasets curated to capture specific tasks of interest tend to cover only a subset of the text available within the broader domain.",
"For example, the CHEMPROT dataset for extracting relations between chemicals and proteins focuses on abstracts of recently-published, high-impact articles from hand-selected PubMed categories (Krallinger et al., 2017, 2015).",
"We hypothesize that such cases where the task data is a narrowly-defined subset of the broader domain, pretraining on the task dataset itself or data relevant to the task may be helpful.",
"Task-adaptive pretraining ( TAPT ) refers to pretraining on the unlabeled training set for a given task; prior work has shown its effectiveness (e.g. Howard and Ruder, 2018).",
"Compared to domain-adaptive pretraining ( DAPT ; 3), the task-adaptive approach strikes a different trade-off: it uses a far smaller pretraining corpus, but one that is much more task-relevant (under the assumption that the training set represents aspects of the task well).",
"This makes TAPT much less expensive to run than DAPT , and as we show in our experiments, the performance of TAPT is often competitive with that of DAPT .",
"Similar to DAPT , task-adaptive pretraining consists of a second phase of pretraining ROBERTA , but only on the available task-specific training data.",
"In contrast to DAPT , which we train for 12 .",
"5 K steps, we perform TAPT for 100 epochs.",
"We artificially augment each dataset by randomly masking different words (using the masking probability of 0.15) across epochs.",
"As in our DAPT experiments, we pass the final layer [CLS] token representation to a task-specific feedforward layer for classification (see Table 14 in Appendix for more hyperparameter details).",
"Our results are shown in the TAPT column of Table 5.",
"TAPT consistently improves the ROBERTA baseline for all tasks across domains.",
"Even on the news domain, which was part of ROBERTA pretraining corpus, TAPT improves over ROBERTA , showcasing the advantage of task adaptation.",
"Particularly remarkable are the relative differences between TAPT and DAPT .",
"DAPT is more resource intensive (see Table 9 in 5.3), but TAPT manages to match its performance in some of the tasks, such as SCIERC.",
"In RCT, HYPERPARTISAN , AGNEWS , HELPFULNESS , and IMDB, the results even exceed those of DAPT , highlighting the efficacy of this cheaper adaptation technique.",
"Combined DAPT and TAPT We investigate the effect of using both adaptation techniques together.",
"We begin with ROBERTA and apply DAPT then TAPT under this setting.",
"The three phases of pretraining add up to make this the most computationally expensive of all our settings (see Table 9).",
"As expected, combined domainand task-adaptive pretraining achieves the best performance on all tasks (Table 5).",
"2 Overall, our results show that DAPT followed by TAPT achieves the best of both worlds of domain and task awareness, yielding the best performance.",
"While we speculate that TAPT followed by DAPT would be susceptible to catastrophic forgetting of the task-relevant corpus (Yogatama et al., 2019), alternate methods of combining the procedures may result in better downstream performance.",
"Future work may explore pretraining with a more sophisticated curriculum of domain and task distributions.",
"Cross-Task Transfer We complete the comparison between DAPT and TAPT by exploring whether adapting to one task transfers to other tasks in the same domain.",
"For instance, we further pretrain the LM using the RCT unlabeled data, fine-tune it with the CHEMPROT labeled data, and observe the effect.",
"We refer to this setting as TransferTAPT .",
"Our results for tasks in all four domains are shown in Table",
"6. We see that TAPT optimizes for single task performance, to the detriment of cross-task transfer.",
"These results demonstrate that data distributions of tasks within a given domain might differ.",
"Further, this could also explain why adapting only to a broad domain is not sufficient, and why TAPT after DAPT is effective.",
"In 4, we continued pretraining the LM for task adaptation using only the training data for a supervised task.",
"Inspired by the success of TAPT , we next investigate another setting where a larger pool of unlabeled data from the task distribution exists, Pretraining BIOMEDNEWSREVIEWS RCT-500 HYP.",
"We explore two scenarios.",
"First, for three tasks (RCT, HYPERPARTISAN , and IMDB) we use this larger pool of unlabeled data from an available human-curated corpus ( 5.1).",
"Next, we explore retrieving related unlabeled data for TAPT , from a large unlabeled in-domain corpus, for tasks where extra human-curated data is unavailable ( 5.2).",
"Dataset creation often involves collection of a large unlabeled corpus from known sources.",
"This corpus is then downsampled to collect annotations, based on the annotation budget.",
"The larger unlabeled corpus is thus expected to have a similar distribution to the task's training data.",
"Moreover, it is usually available.",
"We explore the role of such corpora in task-adaptive pretraining.",
"Data We simulate a low-resource setting RCT-500, by downsampling the training data of the RCT dataset to 500 examples (out of 180K available), and treat the rest of the training data as unlabeled.",
"The HYPERPARTISAN shared task (Kiesel et al., 2019) has two tracks: lowand high-resource.",
"We use 5K documents from the high-resource setting as CuratedTAPT unlabeled data and the original low-resource training documents for task fine-tuning.",
"For IMDB, we use the extra unlabeled data manually curated by task annotators, drawn from the same distribution as the labeled data (Maas et al., 2011).",
"Results We compare CuratedTAPT to TAPT and DAPT + TAPT in Table",
"7. CuratedTAPT further improves our prior results from 4 across all three datasets.",
"Applying CuratedTAPT after adapting to the domain results in the largest boost in performance on all tasks; in HYPERPARTISAN , DAPT + CuratedTAPT is within standard deviation of CuratedTAPT .",
"Moreover, curatedTAPT achieves Figure 3: An illustration of automated data selection ( 5.2).",
"95% of the performance of DAPT + TAPT with the fully labeled RCT corpus (Table 5) with only 0.3% of the labeled data.",
"These results suggest that curating large amounts of data from the task distribution is extremely beneficial to end-task performance.",
"We recommend that task designers release a large pool of unlabeled task data for their tasks to aid model adaptation through pretraining.",
"Consider a low-resource scenario without access to large amounts of unlabeled data to adequately benefit from TAPT , as well as absence of computational resources necessary for DAPT (see Table 9 for details of computational requirements for different pretraining phases).",
"We propose simple unsupervised methods to retrieve unlabeled text that aligns with the task distribution, from a large in-domain corpus.",
"Our approach finds task-relevant data from the domain by embedding text from both the task and domain in a shared space, then selects candidates from the domain based on queries using the task data.",
"Importantly, the embedding method must be lightweight enough to embed possibly millions of sentences in a reasonable time.",
"Given these constraints, we employ VAMPIRE (Gururangan et al., 2019; Figure 3), a lightweight bag-of-words language model.",
"We pretrain VAMPIRE on a large deduplicated 3 sample of the domain (1M sentences) to obtain embeddings of the text from both the task and domain sample.",
"We then select k candidates of each task sentence from the domain sample, in embeddings space.",
"Candidates are selected",
"(i) via nearest neighbors selection ( k NN-TAPT ) 4 , or",
"(ii) randomly ( RAND-TAPT ).",
"We continue pretraining ROBERTA on this augmented corpus with both the task data (as in TAPT ) as well as the selected candidate pool.",
"Results Results in Table 8 show that k NN-TAPT outperforms TAPT for all cases.",
"RAND-TAPT is generally worse than k NN-TAPT , but within a standard deviation arising from 5 seeds for RCT and ACL-ARC.",
"As we increase k , k NN-TAPT performance steadily increases, and approaches that of DAPT .",
"Appendix F shows examples of nearest neighbors of task data.",
"Future work might consider a closer study of k NN-TAPT , more sophisticated data selection methods, and the tradeoff between the diversity and task relevance of selected examples.",
"The computational requirements for all our adaptation techniques on RCT-500 in the BIOMED domain in Table 9.",
"TAPT is nearly 60 times faster to train than DAPT on a single v3-8 TPU and storage requirements for DAPT on this task are 5.8M times that of TAPT .",
"Our best setting of DAPT + TAPT amounts to three phases of pretraining, and at first glance appears to be very expensive.",
"However, once the LM has been adapted to a broad domain, it can be reused for multiple tasks within that domain, with only a single additional TAPT phase per task.",
"While CuratedTAPT tends to achieve the best cost-3 We deduplicated this set to limit computation, since different sentences can share neighbors.",
"benefit ratio in this comparison, one must also take into account the cost of curating large in-domain data.",
"Automatic methods such as k NN-TAPT are much cheaper than DAPT .",
"Transfer learning for domain adaptation Prior work has shown the benefit of continued pretraining in domain (Alsentzer et al., 2019; Chakrabarty et al., 2019; Lee et al., 2019).",
"5 We have contributed further investigation of the effects of a shift between a large, diverse pretraining corpus and target domain on task performance.",
"Other studies (e.g., Huang et al., 2019) have trained language models (LMs) in their domain of interest, from scratch.",
"In contrast, our work explores multiple domains, and is arguably more cost effective, since we continue pretraining an already powerful LM.",
"Task-adaptive pretraining Continued pretraining of a LM on the unlabeled data of a given task ( TAPT ) has been show to be beneficial for end-task performance (e.g. in Howard and Ruder, 2018; Phang et al., 2018; Sun et al., 2019).",
"In the presence of domain shift between train and test data distributions of the same task, domain-adaptive pretraining ( DAPT ) is sometimes used to describe what we term TAPT (Logeswaran et al., 2019; Han and Eisenstein, 2019).",
"Related approaches include language modeling as an auxiliary objective to task classifier fine-tuning (Chronopoulou et al., 2019; Radford et al., 2018) or consider simple syntactic structure of the input while adapting to task-specific 5 In contrast, Peters et al. (2019) find that the Jensen-Shannon divergence on term distributions between BERT's pretraining corpora and each MULTINLI domain (Williams et al., 2018) does not predict its performance, though this might be an isolated finding specific to the MultiNLI dataset.",
"data (Swayamdipta et al., 2019).",
"We compare DAPT and TAPT as well as their interplay with respect to dataset size for continued pretraining (hence, expense of more rounds of pretraining), relevance to a data sample of a given task, and transferability to other tasks and datasets.",
"See Table 11 in Appendix A for a summary of multi-phase pretraining strategies from related work.",
"Data selection for transfer learning Selecting data for transfer learning has been explored in NLP (Moore and Lewis, 2010; Ruder and Plank, 2017; Zhang et al., 2019, among others).",
"Dai et al. (2019) focus on identifying the most suitable corpus to pretrain a LM from scratch, for a single task: NER, whereas we select relevant examples for various tasks in 5.2.",
"Concurrent to our work, Aharoni and Goldberg (2020) propose data selection methods for NMT based on cosine similarity in embedding space, using DISTILBERT (Sanh et al., 2019) for efficiency.",
"In contrast, we use VAMPIRE, and focus on augmenting TAPT data for text classification tasks.",
"Khandelwal et al. (2020) introduced k NN-LMs that allows easy domain adaptation of pretrained LMs by simply adding a datastore per domain and no further training; an alternative to integrate domain information in an LM.",
"Our study of human-curated data 5.1 is related to focused crawling (Chakrabarti et al., 1999) for collection of suitable data, especially with LM reliance (Remus and Biemann, 2016).",
"What is a domain?",
"Despite the popularity of domain adaptation techniques, most research and practice seems to use an intuitive understanding of domains.",
"A small body of work has attempted to address this question (Lee, 2001; Eisenstein et al., 2014; van der Wees et al., 2015; Plank, 2016; Ruder et al., 2016, among others).",
"For instance, Aharoni and Goldberg (2020) define domains by implicit clusters of sentence representations in pretrained LMs.",
"Our results show that DAPT and TAPT complement each other, which suggests a spectra of domains defined around tasks at various levels of granularity (e.g., Amazon reviews for a specific product, all Amazon reviews, all reviews on the web, the web).",
"We investigate several variations for adapting pretrained LMs to domains and tasks within those domains, summarized in Table 10.",
"Our experiments reveal that even a model of hundreds of millions of parameters struggles to encode the complexity of a single textual domain, let alone all of language.",
"We show that pretraining the model towards a specific task or small corpus can provide significant benefits.",
"Our findings suggest it may be valuable to complement work on ever-larger LMs with parallel efforts to identify and use domainand task-relevant corpora to specialize models.",
"While our results demonstrate how these approaches can improve ROBERTA , a powerful LM, the approaches we studied are general enough to be applied to any pretrained LM.",
"Our work points to numerous future directions, such as better data selection for TAPT , efficient adaptation large pretrained language models to distant domains, and building reusable language models after adaptation.",
"The authors thank Dallas Card, Mark Neumann, Nelson Liu, Eric Wallace, members of the AllenNLP team, and anonymous reviewers for helpful feedback, and Arman Cohan for providing data.",
"This research was supported in part by the Office of Naval Research under the MURI grant N00014-18-1-2670."
] | [
"abstain",
"objective",
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"result",
"result",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"objective",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"other",
"objective",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"method",
"abstain",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"result",
"result",
"method",
"objective",
"method",
"other",
"other"
] |
[
"A language-independent representation of meaning is one of the most coveted dreams in Natural Language Understanding.",
"With this goal in mind, several formalisms have been proposed as frameworks for meaning representation in Semantic Parsing.",
"And yet, the dependencies these formalisms share with respect to language-specific repositories of knowledge make the objective of closing the gap between highand low-resourced languages hard to accomplish.",
"In this paper, we present the BabelNet Meaning Representation (BMR), an interlingual formalism that abstracts away from language-specific constraints by taking advantage of the multilingual semantic resources of BabelNet and VerbAtlas.",
"We describe the rationale behind the creation of BMR and put forward BMR 1.0, a dataset labeled entirely according to the new formalism.",
"Moreover, we show how BMR is able to outperform previous formalisms thanks to its fully-semantic framing, which enables top-notch multilingual parsing and generation.",
"We release the code at https: //github.com/SapienzaNLP/bmr .",
"Natural Language Understanding (NLU) enables machines to understand human language.",
"A key enabling task in NLU is that of Semantic Parsing, whose longed-for dream is that of developing a formalism that can be used as an interlingual representation of meaning, i.e., one that, independently of the language, can explicitly embed sentence meaning into a machineand human-readable form (Navigli, 2018).",
"To this end, different formalisms such as Abstract Meaning Representation (Banarescu et al., 2013, AMR), Universal Conceptual Cognitive Annotation (Abend and Rappoport, 2013, UCCA) and Universal Meaning Representation (Van Gysel et al., 2021, UMR), have been proposed over the years.",
"formalism for Semantic Parsing, being widely applied to a variety of areas of NLP, such as Machine Translation (Song et al., 2019), Question Answering (Lim et al., 2020; Bonial et al., 2020b; Kapani-pathi et al., 2021), Human-Robot Interaction (Bo-nial et al., 2020a), Text Summarization (Hardy and Vlachos, 2018; Liao et al., 2018) and Information",
"Extraction (Rao et al., 2017).",
"The primary precept of AMR is that different sentences carrying the same meaning should have the same graph representation.",
"Nonetheless, a few inherent properties of AMR make it inappropriate for the purpose of providing a language-agnostic representation of meaning.",
"In fact, nodes within AMR graphs are represented by means of either English lemmas or OntoNotes frames (Hovy et al., 2006) which, in turn, are based on PropBank (Kingsbury and Palmer, 2002).",
"The issue with lemmas is that they are merely surface forms devoid of semantics, whereas, with respect to frames, even though analogous repositories exist in other languages such as AnCora for Spanish (Aparicio et al., 2008) or the Chinese PropBank (Xue and Palmer, 2009), they are not mutually interlinked, hence making the cross-lingual application of AMR arduous to achieve (Conia et al., 2021).",
"Against this background, we follow the ideas put forward by Navigli et al. (2022) and develop the BabelNet Meaning Representation (BMR), a formalism providing the building blocks for a language-agnostic representation of meaning by exploiting the wealth of multilingual knowledge contained in BabelNet (Navigli and Ponzetto, 2010; Navigli et al., 2021) 1 and VerbAtlas (Di Fabio et al., 2019) 2 .",
"In outline, the main contributions of this paper are as follows:",
"(i) we introduce BMR, a new Semantic Parsing formalism that can be used as an interlingua,",
"(ii) we produce BMR 1.0, i.e., the first 1 https://babelnet.org/ 2 https://verbatlas.org/ 1727 lexical-semantic dataset annotated according to the BMR formalism,",
"(iii) we create and release models that can generate BMR graphs from text and text from BMR graphs in English, German, Spanish, and Italian, and",
"(iv) we describe a sound experimental setup to show how, thanks to its fully semantic framing, BMR outdoes previous formalisms in both preserving and encoding textual information, as well as in being used as an interlingua in downstream tasks such as Machine Translation.",
"Even though the vast majority of formalisms for Semantic Parsing have been designed with English in mind, several approaches have attempted to narrow the gap between English and other languages.",
"For instance, Universal Conceptual Cognitive Annotation (Abend and Rappoport, 2013, UCCA) was proposed as a cross-lingual annotation formalism in which words in a sentence are connected using semantic relations not tied to specific languages.",
"And yet, while UCCA reflects the semantic relations between nodes via a set of coarse-grained roles, it represents concepts by means of simple lemmas, hence preventing an abstraction from language-specific constraints.",
"Parallel Meaning Bank (Abzianidze et al., 2017, PMB), an approach based on the Discourse Representation Theory (Kamp and Reyle, 1993, DRT), 3 also emerged.",
"In PMB, English sentences are parsed with labels that are automatically projected to non-English translations.",
"PMB too, however, cannot be seen as a unified interlingual representation, since it uses English-specific meaning repositories.",
"As regards Abstract Meaning Representation (Banarescu et al., 2013, AMR), instead, several approaches have tried to adapt it for cross-lingual use.",
"As a case in point, Xue et al. (2014) analyzed the viability of tailoring the AMR formalism to fit other languages by making use of language-specific repositories similar to PropBank (Aparicio et al., 2008; Xue and Palmer, 2009).",
"4 On a different note, Damonte and Cohen (2018) and Blloshmi et al. (2020) attempted to adopt AMR as an interlingual formalism, despite its English-centric nature, by assuming that the AMR graph of an English sentence is also representative of translations of that sentence in other languages.",
"Once again, these 3 DRT is a framework that embeds the semantics of an utterance employing a formal logic semantic structure.",
"approaches testify to the limits of AMR as an interlingua, given the drawbacks of dealing with structural divergences among different languages.",
"In recent years, Zhu et al. (2019) have recommended abstracting the AMR formalism away in order to reduce its language-specific complexity by preserving just the predicate roles and relations that constitute the core semantic information of sentences.",
"Conversely, rather than decreasing the complexity of AMR, the Universal Meaning Representation (Van Gysel et al., 2021, UMR) extends it by including new features that render the formalism less tied to a specific language.",
"In particular, UMR enriches the verbal predicates with information about grammatical aspect and scope, while introducing temporal and modal dependencies at the document level.",
"Finally, it enhances AMR to use it as a cross-lingual formalism by employing language-specific repositories and relations.",
"Yet, the focus of UMR is that of providing languages with the necessary resources to parse texts, rather than being an interlingual representation.",
"In contrast to previous approaches, and thanks to the multilingually-shared word meanings and semantic roles taken from the interlinked repositories of BabelNet (Navigli et al., 2021) and VerbAtlas (Di Fabio et al., 2019), we put forward BMR, a formalism that fully detaches from syntax and thus stands as a lexical-semantic representation that is able to bring different languages together.",
"To accomplish the goal of an interlingual meaning representation, we disconnect our formalism from language-specific constraints of any kind.",
"To this end, we draw on resources that inherently connect word meanings and predicate-argument structures across languages, i.e., BabelNet and VerbAtlas.",
"BabelNet (Navigli et al., 2021) is a multilingual encyclopedic dictionary and semantic knowledge base in which concepts are represented as synsets (sets of synonyms that convey the same meaning), linked via semantic relation edges like hypernymy or meronymy.",
"BabelNet was built by the aggregation of several knowledge resources including WordNet (Fellbaum, 1998), Wikipedia and Wik-tionary, resulting in a remarkable ontology of concepts and named entities covering 500 languages.",
"Given its versatility, which makes it suitable for a wide range of tasks across languages, we employ its most recent version 5.0 as a tool to switch the 1728 Figure 1: AMR graph for the sentence The students and their parents will take the plane at the last minute\" .",
"VerbAtlas (Di Fabio et al., 2019) is a manually-curated lexical-semantic inventory that collapses the BabelNet verbal synsets into around 450 semantically-coherent frames, each defining prototypical argument structures via human-readable relationships (e.g. AGENT , THEME ).",
"Thanks to its linkage to BabelNet, VerbAtlas represents the best option for handling predicate-argument relations in BMR in a language-independent manner.",
"Like AMR, BMR embeds the semantics of a sentence in a directed acyclic graph, with nodes and edges connecting them.",
"However, where AMR relies on English lemmas and OntoNotes frames to represent nodes and relations (see Figure 1), BMR disposes of language-specific constraints, and employs multilingual concepts and self-explanatory semantic roles (see Figure 2).",
"5 In what follows (Sections 4.1 to 4.4), we will describe and detail the features that make BMR stand out with respect to a widely-employed Semantic Parsing formalism such as AMR, as well as their integration into the AMR 3.0 dataset (Knight et al., 2020) to produce the BMR 1.0 dataset.",
"6 4.1 Self-explanatory Semantic Relations As briefly mentioned in Section 1, AMR derives its coarse-grained frames and argument structures 5 Appendix A details how to read BMR graphs.",
"6 AMR 3.0 is licensed by LDC at https://catalog.",
"ldc.upenn.edu/LDC2020T02 .",
"For this reason, we do not make the BMR-annotated dataset (BMR 1.0) publicly available, but rather provide tools to convert the original AMR 3.0 dataset, provided its rightful ownership.",
"from the English PropBank section of OntoNotes, a repository which is circumscribed to the English language and that features semantic relations that are both predicate-specific and largely unintelligible without a gloss.",
"For example, in Figure 1, the subgraph representation of students' parents is pivoted on the frame have-rel-role-91 , where the relations :ARG0 , :ARG1 , and :ARG2 identify the first entity, the second entity, and the role of the first entity, respectively.",
"As importantly, even though language-specific repositories similar to PropBank have been used to annotate non-English sentences with structures comparable to those of AMR (Aparicio et al., 2008; Xue and Palmer, 2009), there is not an exact one-to-one mapping between the frames they define, meaning that, e.g., the frame have-rel-role-91 might not be featured in the other inventories.",
"Therefore, with the aim of overcoming language specificity, we replace PropBank with VerbAtlas as an alternative repository of predicate-argument structure information, which, as explained above, inherently accounts for multilingually-shared semantics.",
"To build the BMR 1.0 dataset, we exploit the mapping provided by Di Fabio et al. (2019), which links VerbAtlas frames and arguments to PropBank, and use it to replace the original frames and semantic roles in the AMR 3.0 dataset with those of VerbAtlas (e.g., the frame take-01 corresponds to MOVE_BY_MEANS_OF in VerbAtlas, and its ARG0 to AGENT ).",
"However, this mapping is incomplete and, as a result, several predicates found within AMR 3.0 can not be transitioned directly.",
"Among these, two kinds of predicates can be identified,",
"(i) predicates that OntoNotes labels as verbal, and",
"(ii) non-verbal predicates and special predicates which AMR uses to define special semantic structures (e.g., have-rel-role-91 ).",
"To deal with these predicates, we asked a linguist 7 to create a mapping between PropBank and VerbAtlas for the missing verbal predicates, and, with respect to the others instead, to map them to BMR adapting previous semantic roles and creating new ones to better accommodate their argument structures.",
"8 4.2 Node Merging Multiword expressions and idioms are rendered word by word in AMR, using node composition.",
"Nevertheless, such an approach is not feasible for an interlingual representation, since the overall meaning of an expression can not, as a general rule, be compositionally inferred from the meanings of its individual words.",
"Therefore, in BMR we make use of the available BabelNet synsets to identify the meaning of a multiword expression or idiom, and hence we represent it with a single node.",
"As a case in point, the idiom at the last minute which, according to Wiktionary, is defined as very close to a deadline or potential crucial event, does not entail that something will happen precisely in the last minute.",
"This exact expression, that in AMR 3.0 is represented using two nodes ( m and l ) as: ( m / minute : mode ( l / last )) appears in BMR as a single node m : ( m / at _ the _ last _ minute / bn :00114428 r ) As a result, we are both able to",
"(i) abstract away from language-specific lexicons making use of concepts connected across languages and, concurrently, to",
"(ii) reduce the graph density, hence easing the computational burden for systems.",
"Another intrinsic limit of AMR as an interlingual representation is that, since the meaning of nodes can only be partially identified using OntoNotes frames, AMR maximizes their usage so as to express as many concepts as possible, even nonverbal ones.",
"The main reason this constitutes an issue is that the OntoNotes frame composition used to define a concept and the concept itself are not semantically equivalent.",
"For example, the concept of student , which AMR represents as a person who studies by means of the connection between the node of person with the OntoNotes frame study-01 , is arguably different from the definition of student as, quoting the BabelNet synset gloss, a learner who is enrolled in an educational institution.",
"Additionally, these language-specific rules are not transferable across languages, and they are not consistent even within AMR itself, as, whenever a verbalization is not viable (AMR does not render professor as a person who professes), the word is included in the graph as it is.",
"In the remainder of this Section, we describe the strategies by means of which we remodel AMR 3.0 to obtain BMR 1.0 employing node merging.",
"Multiword Expression Identification To merge nodes, we must first identify the words or multiword expressions that are represented by several nodes in the AMR graph.",
"In this regard, we proceed by lemmatizing the original sentences in AMR 3.0 using the 3.1 version of the SpaCy software library (Honnibal and Johnson, 2015).",
"At this stage, for each sentence, we check for the longest concatenations of lemmas that match a BabelNet synset lexicalization in BabelNet 5.0.",
"Once the expressions have been identified, we use the automatic AMR aligner of Flanigan et al. (2014) to get the alignments between the tokens in the original sentence (and, consequently, the identified words and multiwords) and the graph nodes.",
"Manual Validation The automatic identification of multiwords can be noisy and lead to poor node merging choices which, in turn, can result in wrong sense attributions.",
"For instance, in the sentence the rest of the world knows the same, the multiword rest of the world is identified, even though its only meaning in BabelNet is that of a team of players from many countries, which is clearly 1730 not appropriate in the reported context.",
"To address this issue, we asked our expert linguist to manually inspect all of the automatically detected multiword instances within the AMR 3.0 dataset in order to maintain, modify or delete them.",
"Graph Conversion Finally, using the multiwords and the alignments derived from the previous steps, we navigate the AMR graphs bottom-to-top and collapse together nodes referring to the same word or multiword expression (i.e., first reducing nodes closer to the graph leaves and then moving towards the graph root).",
"As a result, we move from the original figure of 936 , 769 nodes of AMR 3.0 to 828 , 483 in BMR 1.0, reducing the graph density by a notable 11 .",
"6% .",
"Even though AMR is able to encode textual information in its semantic structure, its formalism does not account for the inclusion of word components that are crucial for understanding meaning, and that languages express via the grammatical categories of number, tense and aspect.",
"This, along with the fact that the importance of incorporating such details in Semantic Parsing formalisms has already been stressed in the literature (Donatelli et al., 2018; Bonial et al., 2019), leads us to implement these features to further enhance the representative power of BMR.",
"To this end, we employ SpaCy in order to retrieve the Penn Treebank part-of-speech tags (Marcus et al., 1993), which inherently provide information with respect to number, tense, and aspect, for all the words and multiword expressions aligned with a node in the graphs.",
"In practice, we account for tense by enriching each verbal node with the semantic role :timing showing a value of + or to indicate events that will take place in the future or that happened in the past, respectively.",
"Similarly, we handle plurality of the nominal nodes by adding the :quantity relation followed by a + value (see Figure 2).",
"Lastly, we account for aspect by adding the relation :ongoing followed by a + mark to verbal nodes expressing the imperfective aspect (ongoing or usual actions).",
"An interlingual representation of meaning has the basic requirement of being fully linked to an inventory of meanings which can be expressed in multiple languages.",
"For this reason, in order to make nodes in BMR graphs language-independent, we enhance them with BabelNet synsets information.",
"An example of why this is needed is provided in Figure 1, where the predicate take-01 employed in AMR is defined in OntoNotes with the very coarse-grained gloss of take, acquire, come to have, choose, bring with you from somewhere, receiving, internalizing, bringing along, enacting, and the ambiguous word plane is merely represented as a lexical node, which provides no cues for understanding whether it refers to an airplane, a geometric plane, or a carpenter's plane, inter alia.",
"Moreover, the combination of the two does not clarify whether take the plane means to take a flight or to take the carpenter's plane somewhere.",
"Lacking a pointer to a more fine-grained and multilingual word sense inventory also has the disadvantage of preventing the use of the formalism as a means of moving across languages effectively.",
"For example, if the word parents is not assigned the proper word sense, it would lead to ambiguous translations in languages such as Spanish, where the corresponding word padres can indicate both the meaning of parents, but also the meaning of fathers.",
"Therefore, the advantages that come from the disambiguation of nodes with BabelNet are twofold:",
"(i) resolving language ambiguity while representing word meaning explicitly, and",
"(ii) interconnecting the same meanings across languages.",
"Adding the disambiguation information to AMR 3.0 graphs is our last step in order to complete its conversion to BMR 1.0.",
"To this end, we employ a set of different strategies:",
"(a) we exploit the mapping from VerbAtlas frames to BabelNet synsets to assign word senses to nodes based on their lemmas,",
"(b), we use the Wikipedia page information featured in AMR nodes representing named entities to retrieve the corresponding synset BabelNet identifies that page with, and",
"(c), we make use of ESCHER (Barba et al., 2021), 9 a state-of-the-art system for Word Sense Disambiguation, i.e., the task of automatically assigning a meaning to a word in context (Bevilacqua et al., 2021b), to disambiguate the nodes without word senses.",
"As a result, we succeed in assigning a BabelNet synset to an overall figure of 92% AMR content nodes (i.e., nodes aligned with content words), with 42 , 549 fully disambiguated graphs out of 59 , 255 .",
"To demonstrate the importance of BMR's semantic framing, its aptness at preserving lexical information, and its effectiveness in acting as an interlingual representation, we devise three experiments to assess its performance in comparison with AMR.",
"Before delving into their details (Section 5.2), as well as describing our models and the evaluation measures we employ (Sections 5.3 and 5.4, respec-tively), we first provide thorough information regarding the datasets used in our experiments.",
"Aside from the original AMR 3.0 and BMR 1.0 datasets described in Section 4, 10 the following datasets are employed in our experiments, namely:",
"(i) AMR + , which features the set of enhancements applied to the English AMR 3.0, as described from Section 4.1 to 4.3 (excluding node disambiguation), and",
"(ii) BMR*, i.e., a version of BMR 1.0 that does not include lemma information.",
"For each dataset, we also create language-specific versions in German (DE), Italian (IT) and Spanish (ES): starting from the English AMR 3.0, we followed Blloshmi et al. (2020) and create training and development sets for these languages by using gold AMR graphs and their converted AMR + , BMR and BMR* versions and pairing them with silver sentences translated with the machine translation models of Tiedemann and Thottingal (2020, OPUS-MT).",
"As test data, we use the 1 , 371 parallel sentences of Abstract Meaning Representation 2.0 Four Translations, 11 that translate into our set of non-English languages their English (EN) counterparts (a subset of AMR 3.0) found in the AMR 2.0 test split.",
"12 5.2 Tasks Graph-to-Text (GtoT) Our first experiment concerns the Graph-to-Text generation task, i.e., the task of transforming graph meaning representations into their corresponding text, and has the goal of appraising the effectiveness of BMR as a tool for generating texts in different languages.",
"In this context, we also conduct an ablation study on AMR + 10 We use the training/development/test splits of AMR 3.0 for both AMR 3.0 and BMR 1.0 datasets.",
"Text-to-Graph (TtoG) Our second experiment deals instead with the Text-to-Graph generation task (Semantic Parsing), i.e., the task of generating a graph according to a given formalism, starting from raw text.",
"The aim of TtoG is to assess the complexity of generating BMR graphs compared to AMR ones.",
"Text-to-Graph-to-Text (TGT) Finally, in the third experiment, we evaluate the suitability of AMR and BMR to be used as interlingual representations by means of the combination of Text-to-Graph and Graph-to-Text parsing going from a source to a target language.",
"In the same context, we also conduct an ablation study on BMR to assess the impact of the disambiguation in the graphs.",
"All models employed in our experiments are built on top of SPRING (Bevilacqua et al., 2021a), an auto-regressive model for AMR parsing and generation based on the BART (Lewis et al., 2020) pretrained language model.",
"Since the original SPRING works with pairs of sentences and linearized versions of the graphs, we modify its tok-enizer to account for BMR nodes, since they contain BabelNet synset IDs too.",
"Furthermore, we add all synsets that appear more than once 13 within BMR 1.0 to the model's vocabulary and adapt SPRING to the mBART language model (Liu et al., 2020) in order to account for multiple languages in the GtoT and TGT experiments.",
"Given the datasets described in Section 5.1, we confront models trained on AMR 3.0, BMR 1.0, AMR + and BMR* for each language (AMR/BMR/AMR + /BMR* EN,DE,IT,ES ).",
"As regards the ablation study of the GtoT experiment, we apply each modification introduced to AMR 3.0 one at a time, and obtain several versions of the dataset, each of which is used to train additional models, namely, AMR 3.0",
"(i) including self-explanatory relations (AMRREL ),",
"(ii) including self-explanatory relations and node merging (AMRNOD ),",
"(iii) featuring the number category (AMRNUM ),",
"(iv) featuring the tense and aspect categories (AMRTEN ), and",
"(v) featuring the number, tense and aspect categories together (AMRNT ).",
"Language English (EN) German (DE) Italian (IT) Spanish (ES) Model AMR AMR + BMR BMR AMR AMR + BMR BMR AMR AMR + BMR BMR AMR AMR + BMR BMR BLEU 44.8 49.8 50.7 45.7 23.2 24.3 24.8 22.2 29.0 31.3 31.4 29.1 34.6 36.8 37.3 35.5 chrF++ 73.4 76.0 76.3 72.1 55.8 57.0 57.1 54.7 60.7 62.1 62.2 60.0 64.0 65.2 65.5 63.7 METEOR 42.2 43.9 44.3 42.4 25.4 26.4 26.4 25.3 28.9 30.4 30.5 29.2 32.4 33.5 33.7 32.8 Rouge-L 68.2 71.7 72.8 69.7 49.3 50.7 51.1 49.7 51.9 54.2 54.3 52.4 57.4 60.9 61.0 59.8",
"To evaluate the text generation tasks (i.e., GtoT and TGT), we use five standard Natural Language Generation measures, namely, BLEU (Papineni et al., 2002), chrF++ (Popovic, 2017), METEOR (Banerjee and Lavie, 2005), and ROUGE-L (Lin, 2004), tokenizing system predictions with the JAMR script (Flanigan et al., 2014).",
"For the TtoG experiment, instead, as is customary, we employ the Smatch measure (Cai and Knight, 2013).",
"Results for the GtoT experiment are reported in Table",
"1. As can be seen, BMR obtains the highest scores for all the measures across the board, testifying to its effectiveness at generating text in multiple languages.",
"Interestingly, when BMR is confronted with AMR + , the benefits of featuring disambiguation information immediately become evident, with highest scores on each measure.",
"Results for the ablation study are, instead, shown in Table",
"2. Even though the impact of self-explanatory relations is not striking in this scenario (AMRREL model), the use of node merging already leads to an evident performance boost, particularly for BLEU and ROUGE-L (AMRNOD ).",
"Not surprisingly, the addition of the grammatical categories of number, tense, and aspect to AMR 3.0 corroborates the thesis of Donatelli et al. (2018) and Bonial et al. (2019), with results for the different measures growing between 1 .",
"3 to 4 .",
"2 points for AMRNT compared to the baseline AMREN model.",
"Moreover, demostrating the beneficial interaction of all features described in Section 4, the AMR + EN model significantly outperforms the baseline model by 1 .",
"7 points on METEOR (low-est) and 5 .",
"0 points on BLEU (highest), while also outscoring each other model featuring only specific modifications.",
"Results for this experiment are shown in Table 3 and provide evidence for the high degree of complexity that BMR graphs have in comparison to their AMR counterparts.",
"In particular, AMR + EN (which, except for the disambiguated nodes, has the same graph structure as BMREN ) outperforms BMREN by 3 .",
"5 Smatch points, demostrating that the extra layer represented by the inclusion of disambiguation information makes BMR graphs harder to generate automatically starting from raw text.",
"As a matter of fact, a model attempting to generate BMR graphs needs to provide disambiguation for each node (and not just for the verbal predi-cates), hence it faces a much more difficult task.",
"Finally, in Table 4 we report the scores for the TGT experiment, by means of which we appraise the capability of formalisms to act as bridges to translate sentences, first, performing a Text-to-Graph step, and then a Graph-to-Text one.",
"Despite having shown lower performances in comparison to AMR 1733 Pairs EN EN EN DE EN IT EN ES Model AMR AMR + BMR BMR AMR AMR + BMR BMR AMR AMR + BMR BMR AMR AMR + BMR BMR BLEU 45.3 49.3 50.1 45.1 23.0 25.1 24.4 22.8 29.0 30.7 30.9 29.1 34.0 36.5 36.6 35.4 chrF++ 73.5 75.2 75.4 71.4 55.6 56.8 56.1 54.1 60.2 61.4 61.2 59.8 63.3 64.6 64.8 63.3 METEOR 42.3 43.5 43.7 41.9 25.4 26.4 26.0 25.1 28.7 29.8 29.9 29.1 32.0 33.3 33.2 32.6 Rouge-L 68.8 71.8 73.0 69.5 49.6 50.8 50.8 49.3 51.9 53.5 53.2 52.6 57.2 61.1 62.2 59.7 Table 4: Results for the TGT experiment.",
"in the TtoG experiment, the high scores obtained by BMR in this experiment demonstrate that it is better suited as an interlingua.",
"Nevertheless, AMR + outperforms BMR in a few settings, likely due to the higher complexity entailed by BMR parsing, as explained in Section 6.2.",
"Corroborating this thesis are the results shown in Table 5, where BMR scores are compared against a model (AMR+*) in which, to perform the Graph-to-Text step, AMR+ uses a BMR parser with the synset information removed, rather than its own parser.",
"The outcome of this ablation study, with BMR now systematically outscoring its competitor, sheds further light upon the effectiveness of synset-driven disambiguation for encoding valuable sentence information.",
"Returning to the results given in Table 4, even though performances for BMR* models are the lowest (yet competitive, and sometimes higher than AMR) on the board, it is worth remarking that this setting does not feature the lemma information.",
"In fact, in order to be purely semantic, BMR graphs should solely feature the BabelNet synset information.",
"However, given that state-of-the-art Semantic Parsing and generation models make use of pre-trained language models such as BART and mBART, which are trained with data in human language (hence devoid of synset information), the performance of fully-semantic models drops if lemmas are not taken into account.",
"Additionally, currently available text generation metrics are suboptimal when employed to assess semantics, since these measures evaluate similarities at the lemma level.",
"Therefore, though a fully-semantic model could infer the meaning of a BabelNet synset, its performances will be penalized for not generating specific lemmas while outputting perfectly suitable synonyms.",
"In view of this, BMR 1.0 incorporates the lemma information along with the BabelNet synset specifying its meaning (see also Appendix A), demostrating that lexical-semantic representations improve over purely lexical ones.",
"Results for the experiments we conducted depict BMR* as the model that, on the whole, achieves the lowest scores.",
"With the aim of showing how such results might arise due to inadequate evaluation measures (see Section 6.3), we propose a focused case study in which we qualitatively inspect the differences between graphs and sentences generated by means of the AMR and BMR* models.",
"Starting from the sentence My friend did not tolerate his father's behaviour (Figure 3), it can be seen how the grammatical categories of number and tense for the words friend and tolerate are correctly preserved by BMR* only.",
"Additionally, it can be noted how the complex structure that defines child in AMR can confuse the model when there is a reentrant node (in this case, the model does not know to whom the father is related).",
"As interestingly, the sentence generated via BMR* replaces tolerate with the synonym put up with , which worsens its performance according to exact string matching metrics, but, at the same time, provides an insight of a higher level of abstraction when lemmas are omitted.",
"Although the experiments reported in Section 5 testify to the quality of BMR, following an in-house behavioral analysis inspired by the work of Ribeiro et al. (2020), we identify three main classes of errors that undermine the application of BMR as an interlingua, one concerning the formalism ( repository contraints ), one tied to the data contained in the BMR 1.0 dataset ( disambiguation constraints ),",
"Repository constraints BabelNet features a wealth of synsets covering content words in a multilingual setting, but, at the same time, does not provide information regarding parts of speech other than nouns, verbs, adjectives and adverbs.",
"As a result, BMR uses language-specific lemmas to represent conjunctions or ambiguous pronouns such as anyone , which can mean either not a single person or everyone, depending on the use of negative or positive phrasing.",
"On a different note, with roughly 6,500 languages spoken in the world and BabelNet 5.0 featuring a subset of them, the definition of BMR as an interlingua is actually constrained to the number, albeit large, of 500 BabelNet languages.",
"Disambiguation constraints The creation of BMR 1.0 is based upon the Word Sense Disambiguation task carried out via a state-of-the-art system (Barba et al., 2021, ESCHER).",
"And yet, this neural architecture is trained to predict word senses featured in the WordNet 3.0 sense inventory only.",
"By virtue of the fact that, following the node merging strategy (Section 4.2), we can obtain polysemous multiwords found in BabelNet but not in WordNet (as is the case of run off at the mouth ), we cannot provide disambiguation for such instances.",
"This justifies the fact that 8% of content nodes in BMR are not disambiguated (see also Section 4.4).",
"Language-specific constraints The number of items in a lexicon and the degree of word polysemy vary from language to language (Talmy, 2000).",
"Using BabelNet synsets to represent abstract concepts and connect them multilingually is certainly a desirable feature.",
"However, there are concepts and expressions that exist in a given language only, e.g., owing to their being culturally connoted.",
"For example, the Spanish word espeto , which refers to a traditional way of cooking freshly-caught sea fish, has no equivalent in English.",
"Though the concept is featured in BabelNet, it has no lexicalizations in other languages and, as such, it would need to be paraphrased in order to be rendered.",
"Current Semantic Parsing formalisms share tight dependencies with semantic repositories which are both language-specific and isolated from word senses in other languages.",
"As a result, they are not fit to be used as interlingual representations of Figure 3: AMR and BMR* graph representations and generated sentences for the original sentence My friend did not tolerate his father's behaviour .",
"meaning.",
"In this paper, we put forward BMR, a new language-independent formalism that abstracts away from language-specific constraints thanks to two multilingual semantic resources, namely, BabelNet and VerbAtlas.",
"To put our formalism into practice, we also created BMR 1.0, the first dataset labeled according to BMR.",
"Our experiments demostrate the impact that the fully-semantic framing of our formalism has in comparison to the widely-employed formalism of AMR, as well as showing its ability to be a better tool at encoding textual information, and a much more effective interlingua in a text-to-graph-to-text machine translation task.",
"As future work, we plan to",
"(i) create a single multilingual model to parse graphs and generate text in any language,",
"(ii) apply BMR cross-lingually to other downstream tasks such as text summarization,",
"(iii) evolve the formalism to prevent the inclusion of lexical information of any kind.",
"We make our code and data available to the research community at https://github.",
"com/SapienzaNLP/bmr .",
"The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487, the Marie Skodowska-Curie project Knowledge Graphs at Scale (Know-Graphs) No. 860801, and the ELEXIS project No. 731015 under the European Union's Horizon 2020 research and innovation programme."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"objective",
"other",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other"
] |
[
"Dialogue state tracker is responsible for inferring user intentions through dialogue history.",
"Previous methods have difficulties in handling dialogues with long interaction context, due to the excessive information.",
"We propose a Dialogue State Tracker with Slot Attention and Slot Information Sharing (SAS) to reduce redundant information's interference and improve long dialogue context tracking.",
"Specially, we first apply a Slot Attention to learn a set of slot-specific features from the original dialogue and then integrate them using a Slot Information Sharing.",
"The sharing improve the models ability to deduce value from related slots.",
"Our model yields a sig-nificantly improved performance compared to previous state-of-the-art models on the MultiWOZ dataset.",
"The recent global adoption of personal assistants such as Alexa and Siri made dialogue system a more popular topic in research.",
"The major difference between dialogue systems and question-answering is that dialogue systems need to track dialogue history effectively.",
"So, we normally use a dialogue state tracking component to track user's intention throughout the conversation.",
"A dialogue state is typically composed as a set of slot value pairs in a task-oriented dialogue, such as hotel-internet-yes.",
"It means the slot hotel-internet has a value of yes.",
"Early dialogue state tracking model needs a predefined ontology which means the values of every slot are enumerated in advance (Henderson et al., 2014; Mrksic et al., 2017; Zhong et al., 2018; Sharma et al., 2019).",
"Such practice is inefficient and costly.",
"The large number of possible slot-value pairs makes deploying these models in the real-life *Corresponding author.",
"applications difficult (Rastogi et al., 2017).",
"This difficulty is further amplified in multi-domain dialogue state tracking where the dialogues have more than one tasks.",
"Because the manual effort grows exponentially with the complexity of the dialogues.",
"In (Wu et al., 2019), Wu et al. introduced a transferable dialogue state generator (TRADE), which can generate dialogue states from utterances using a copy mechanism.",
"This generative model achieved relative good performance, but it still has trouble in extracting relevant information from the original dialogues.",
"For example, a user may tell the agent that he/she needs a taxi in a turn, but the taxi's departure location is implicitly mentioned several turns ago.",
"Inspired by the (Chen et al., 2017; Chen, 2018), (Chen et al., 2019) studied on utilizing attention mechanism to deal with the long distance slot carryover problem.",
"In their work, they first fused the information of the slot, its corresponding value and the dialogue distance into a single vector.",
"Then they computed the attention between this single vector and the concatenation of dialogue and intent information.",
"We simplify the attention method and introduce it into the dialogue state tracking task.",
"Moreover, it is a common sense that there is some kind of relevance between two slots involving the same domain or the same attribute.",
"For example, people tend to have a meal near the attraction they visit, so slot attraction-area and slot restaurant-area have the same value at most times.",
"For these slots with a common or related value, if a slot never or seldom appears in the training set, sharing the learned feature of data-sufficient slot may benefit the model's tracking ability on these rare or unknown slots.",
"So we propose SAS, a new multi-domain dialogue state tracking model to resolve this issue to some extent.",
"To be specific, we use an Slot Attention to localize the key features from the original information-excessive dialogue and a Slot Infor-6367 mation Sharing to improve the models ability to deduce value from related slots.",
"The processed information provided by the slot attention and the sharing module makes the generator more sensitive to the location of the values in the dialogue history and thus generates correct slot values.",
"Experiments on the multi-domain MultiWOZ dataset (Budzianowski et al., 2018) shows SAS can achieve 51.03% joint goal accuracy and outperform previous state-of-the-art model by 2.41%.",
"On the single domain dataset which only contains the restaurant domain, we achieve 67.34% joint goal accuracy, outperforming prior best by 1.99%.",
"In addition, we conduct an analysis of the experimental results to evaluate the quality of values generated by our model.",
"The early research of DST focused on the pipelined approach which involves a special module named Spoken Language Understanding (SLU) before the DST module (Wang and Lemon, 2013; Williams, 2014; Perez and Liu, 2017).",
"But obviously, it was not reasonable to train SLU and DST respectively since the accumulated error in SLU may be passed to the DST.",
"In order to alleviates this problem, later study focuses on the joint training methods (Hen-derson et al., 2014; Zilka and Jurcicek, 2015; Wen et al., 2017).",
"Although the higher performance shows the effectiveness of models without SLU, there still remains some shortcomings.",
"For example, these models typically rely on semantic dictionaries which list the potential rephrasings for all slots and values in advance.",
"Make such a list is costly.",
"Fortunately, the recent development of deep learning and representation learning helps the DST to get rid of this problem.",
"(Mrksic et al., 2017) proposed a novel Neural Belief Tracking (NBT) framework which was able to learn distributed representations of dialogue context over pre-trained word vectors, while (Dernoncourt et al., 2017) described a novel tracking method which used elaborate string matching and coreference resolution to detect values explicitly presented in the utterance.",
"These models greatly improve the performance of DST, but they are not good at handling rare and unknown slot value pairs which seldom or never appear in the training set.",
"There were many efforts to exploit general features between rare slot value pairs and common ones.",
"(Zhong et al., 2018) proposed GLAD, a model which built global modules to share parameters between estimators for different slots and local modules to learn slot-specific features.",
"(Nouri and Hosseini-Asl, 2018) improved GLAD by reducing the latency in training and inference time, while preserving its powerful performance of state tracking.",
"But as the dialogues become increasingly complex, the performance of these models on multi-domain is not as satisfying as on single domain.",
"Because of the dependency on the dialogue ontology, they have difficulty in scaling up with domains.",
"Once the number of domains increases, the amount of slot value pairs will boom.",
"With the copy mechanism, the sequence-to-sequence model TRADE (Wu et al., 2019) successfully got rid of any predefined slot value pairs and generated dialogue states from conversation utterances.",
"But we find there still remain several crucial limitations which have not been well solved on multi-domain dialogues.",
"First, these models rely on the long dialogue history to identify the values which belong to various domains and slots.",
"Sometimes the information contained in the dialogue history is too rich for these models to efficiently utilize and the redundant information tends to interfere with their value identification or value generation.",
"Second, the related information among similar slots is wasted.",
"To alleviate these problems, a slot attention and a slot information sharing module are suggested.",
"The former can isolate the most valuable information for each slot, while the latter integrates information kept by its all similar slots and improve the models ability to deduce value from related slots.",
"The dialogue state tracking models take the interaction context as input and extract slot value pairs explicitly or implicitly presented in conversations.",
"The combinations of these slot value pairs are the representations of the user's goal.",
"In this paper, we denote X = { ( u 1 , r 1 ) , , ( u T , r T ) } as the dialogue history, where u 1 , , u T and r 1 , , r T are respectively the set of user utterances and the set of system responses in T turns.",
"The dialogue state of turn t is marked as ST t = (slot: s j , value: y valuej ).",
"Here, s j indicates the j -th slot, while y value j means the ground turth value sequence for this slot.",
"All the slots in ontology are obtained by preprocessing the original MultiWOZ dataset with the delexicalization.",
"Moreover, we extend the def-6368 * Vocabulary list Slot Information Sharing Slot Attention History Context Slot Similarity Matrix (cid:2169) Slot (cid:2778) Slot (cid:2779) Slot J (cid:2185) (cid:2191)(cid:2196)(cid:2202) history hidden states slot hidden states (cid:2185) (cid:2778) (cid:2185) (cid:2779) (cid:2185) (cid:2166) Final Distribution Slot name Slot type Decoder slot hidden states history hidden states Encoder Figure 1: SAS model's architecture.",
"inition of the slot to include the domain name for convenience.",
"For instance, a slot in this paper will be hotel-star, rather than star.",
"Our primary goal is to learn a generative dialogue state tracker model M : X O ST that can efficiently capture the user's intentions for dialogues including multiple domains.",
"And unlike most of the previous models, the ontology O mentioned in this paper only contains the predefined slots and excludes their values.",
"Figure 1 shows the architecture of SAS.",
"SAS is a sequence-to-sequence model augmented with slot attention and slot information sharing.",
"Slot attention enables better feature representation and slot information sharing helps understanding less-seen slots.",
"We describe the details of every component in SAS as follows: 4.1 Encoder We use a 1-layer bidirectional gated recurrent unit (GRU) (Chung et al., 2014) to encode the dialogue history.",
"As TRADE (Wu et al., 2019), our input to the model is the concatenation of all words in the recent l -turn dialogue history X t = [ u t l +1 , r t l +1 , , u t , r t ] R | X t | d emb , where d emb means the embedding size.",
"First, each word in the dialogue history X is mapped to a distributed embedding vector.",
"Then, a GRU is utilized to obtain the hidden state corresponding to each word in the text and we denote these hidden state as the history hidden states H t = { h enc 1 , h enc 2 , , h enc | X t | } R | X t | d hdd .",
"To isolate key features from the noisy dialogue history, we build the slot attention.",
"In fact, the multi-domain dialogues are usually complex and contain rich features.",
"This challenges the model's ability to cope with the excessively rich information.",
"To be specific, in one dialogue, user can mention various information, such as wanting to book a restaurant for a meal and then planning to see an attraction after the meal by ordering a taxi.",
"There are in total 10 slots mentioned spanning across restaurant, attraction and taxi domains.",
"Information from one domain maybe not useful for other domain and can even cause confusion.",
"For example, both restaurant and taxi mention time and people.",
"So we propose the slot attention to only extract useful history information to every slot.",
"More concretely, for a particular slot s j , we first encode its slot name into slot hidden states SH j = [ sh encj 1 , , sh encj | N | ] , where | N | is the maximum size of the slot name phrase.",
"Since the last hidden state sh encj | N | provided by the GRU contains the context information of the entire phrase, we pick it as the representation of slot s j .",
"After that, we calculate the attention between the slot information, sh encj | N | and the hidden states of the dialogue history H t = [ h enc 1 , , h enc | X t | ] to obtain the context vector c j : 6369 a j = ( h enc ) (cid:3) sh encj | N | (1) sc ji = exp ( a ji ) (cid:2) | X t | i =1 exp ( a ji ) (2) c j = | X t | (cid:3) i =1 sc ji h enci (3) Here, the score sc jt indicates the relevance between info slots s j and dialogue history.",
"The context vector c j R d hdd denotes the slot-specific information grabbed from the entire dialogue history.",
"Finally, we obtain the context vectors c = [ c 1 , c 2 , , c J ] R d hdd J for all J slots.",
"In the slot information sharing, there is a special matrix called the slot similarity matrix.",
"This matrix controls the information flow among similar slots.",
"We introduce two sharing methods according to their different calculation of the slot similarity matrix: fix combination sharing and the k-means sharing.",
"We will compare the effectiveness of the two methods in Section 6.",
"We calculate the similarity between every two slots to construct switch matrix.",
"We first compute the cosine similarity over the two slot names and then calculate the similarity over the slot types.",
"Specifically, the slot types can be divided into several categories such as date, location.",
"For example, if there are two slots restaurant-area and restaurant-book day, then the similarity in the first part may be high since the two slot names share a common word restaurant, while the similarity in the second part is quite low: slot restaurant-area has a value whose type is location, and restaurant-book day has a value which belongs to date.",
"Next, the two calculated similarities sname and vtype will be integrated with a hyperparameter [0 , 1] and we can get a special matrix sim RJ J as a result.",
"Here, the integration ratio actually controls the final similarity of the slots.",
"In Table 2, we show that different choices of this ratio will impact the model's tracking performance.",
"After that, matrix sim is transformed into the slot similarity matrix M by the mask mechanism.",
"Here, hyperparameter acts as a threshold to decide whether the two slots are similar enough to trigger the sharing switch and open the information path between them.",
"Since the fix combination method needs manual efforts to search for the best hyperparameter, we propose another method, K-means Sharing Method, which requires no hyperparameter tuning and can achieve an averagely good performance.",
"In this sharing method, we also compute the slot name similarity sname ij and the value type similarity vtype ij between slot s i and s j as the way in the fix combination one.",
"Then we put vectors ( sname ij , vtype ij ) onto flat space and divide these vectors into two groups by the k-means clustering algorithm.",
"One group stands for the slot s i and s j are similar enough, while the other one not similar.",
"The element in M ij is 1 if they are in similar group, 0 if they are in unsimilar group.",
"After getting the slot similarity matrix whose value is either 1 or 0, we do the matrix multiplication between the context vectors c = [ c 1 , c 2 , , c J ] R d hdd J and the slot similarity matrix M RJ J .",
"Then we get the integrated vectors int = [ int 1 , int 2 , , int J ] R d hdd J .",
"These new vectors keep more expressive information for every slot.",
"Specifically, int j is calculated as following: int j = J (cid:3) i =1 c i M ij , M ij { 0 , 1 } (6) As shown in the above equation, int j is essentially the integrated result of all related context vectors c i in c and the integration is guided by the slot similarity matrix M .",
"The matrix M actually plays the role of a switch which controls the information flow between slots and provides a selective integration.",
"For example, this integration makes the data-insufficient slot attraction-type receive the information from its related and data-sufficient slot attraction-name, and helps our model deduce the related value for data-insufficient slots.",
"The value prediction process of our decoder can be divided into two steps: first, predicting whether the value of a certain slot is constrained by the user; and then extracting the value if the constraint is mentioned in the dialogue.",
"In the first step, a three-way classifier called slot gate is used and it can map a vector taken from the encoded hidden states H t to a probability distribution over ptr, none, and dontcare labels.",
"Once the slot gate predicts ptr, the decoder will fill the slots with the values extracted from the dialogues.",
"Otherwise, it just fills the slots with not-mentioned or does not care.",
"In the second step, another GRU is utilized as the decoder.",
"During the decoding step of the j th slot, given a sequence of word embeddings [ w j 1 , w j 2 , , w j | N | ] , the GRU transforms them into decoded hidden states [ h decj 1 , h decj 2 , , h decj | N | ] with the slot's integrated vector int j : z jk = ( U z 1 w j k + U z 2 h decj k 1 ) (7) r jk = ( U r 1 w j k + U r 2 h decj k 1 ) (8) h jk = tanh( U 1 w j k + U 2 ( r jk h decj k 1 )) (9) h decj k = (1 z jk ) h decj k 1 + z jk h jk (10) Here, | N | is the length of the slot sequence and int j is the initial hidden state input h decj 0 .",
"The integrated vector int j makes the decoded hidden states contain more information about the dialogue history.",
"So they are more sensitive about whether the value of slot j is mentioned in the dialogue and where it locates.",
"With the decoded hidden state h decj k , the generator computes P genjk , the probability of the value generated from the vocabulary list E R | V | d hdd and P copyjk , the one copied from the interaction history.",
"| V | is the vocabulary size and d hdd is the dimension of the hidden state.",
"In the end, we sum the probability P genjk and P copyjk to yield the final prediction P jk : P genjk = Softmax ( E ( h decjk ) (cid:3) ) (11) P copyjk = Softmax ( H t ( h decjk ) (cid:3) ) (12) P jk = g jk P genjk + (1 g jk ) P copyjk (13) g jk = Sigmoid ( W g [ h decjk ; w jk ; P copyjk H t ]) (14) Here, g jk is a scalar which controls the model behaviour.",
"It determines whether to generate values from the vocabulary list or copy words from the historical context.",
"In this section, we first introduce the dataset and the evaluation metrics.",
"We then describe our model's implementation details.",
"Finally, we show our baseline models.",
"MultiWOZ (Budzianowski et al., 2018) is a fully-labelled collection of human-human written conversations spanning over multiple domains and topics.",
"There are 7,032 multi-domain dialogues consisting of 2-5 domains in MultiWOZ.",
"Because these dialogues have multiple tasks, so the long dialogue history makes state tracking more difficult.",
"Since there are no dialogues from hospital and police domains in validation and testing sets of MultiWOZ, we follow TRADE (Wu et al., 2019) and use five out of the seven domains to train, valid and test, including restaurant, hotel, attraction, taxi and train.",
"These domains involve 30 slots.",
"We also test our model on a subset of MultiWOZ which only contains the dialogues from the restaurant domain to verify whether our model still works for single-task dialogues.",
"We evaluate all the models using two metrics, slot accuracy and joint goal accuracy, similar to (Nouri and Hosseini-Asl, 2018): Slot accuracy.",
"We use slot accuracy to check whether each single slot in the ground truth dialogue states is correct.",
"The metric only focuses on if the slot requested is correct or not.",
"Joint goal accuracy.",
"The joint goal accuracy is used to evaluate whether the user's goal in each turn is captured.",
"Only when every slot in the ground-truth dialogue state is considered and has correct value, can we consider the joint goal is achieved.",
"It is the most important metric in the dialogue state tracking task.",
"We use the concatenation embedding of GloVe embedding (Pennington et al., 2014) and the character-wise embedding (Hashimoto et al., 2017) in the",
"experiment.",
"The model is trained with ADAM optimizer (Kingma and Ba, 2014) and a batch size of 32.",
"Both the encoder and the decoder use 400 hidden dimensions.",
"The learning rate is initially set to 0.001, but once the joint goal accuracy does not rise with the training, the network will automatically decrease its learning rate to improve the performance.",
"We apply dropout with 0.2 dropout rate for regularization (Srivastava et al., 2014).",
"Besides that, a word dropout technique is also utilized in the way proposed by (Bowman et al., 2015) which simulates the out-of-vocabulary setting.",
"Our k-means clustering algorithm is implemented with the sklearn module, and we set all the hyperparameter in k-means algorithm as default.",
"We compare SAS with several previous methods: MDBT, GLAD, GCE, SpanPtr and TRADE.",
"Based on the classical NBT model, MDBT (Ramadan et al., 2018) extended the task into multiple domains.",
"MDBT makes full use of the semantic similarities between the dialogue and the slot ontology to track the domain and the value of the slot jointly.",
"GLAD relies on global modules to learn the general information and local modules to catch the slot-specific information (Zhong et al., 2018) from the dialogues.",
"GCE efficiently improves and simplifies GLAD, while keeping the excellent performance of GLAD (Nouri and Hosseini-Asl, 2018).",
"SpanPtr first introduces the pointer network (Vinyals et al., 2015) into the dialogue state tracking task to extract unknown slot values (Xu and Hu, 2018).",
"And in that paper, they also apply an effective dropout technique for training.",
"TRADE directly generates slot values from the dialogues by using the copy mechanism and gets rid of the predefined value list (Wu et al., 2019).",
"It achieves the previous state-of-the-art performance.",
"We use the fix combination version of SAS in Table 1 with the integration ratio of 0.8 and the threshold of 0.8.",
"That's the best hyperparameters we find for MultiWOZ.",
"In this section, we first show the result of our model on MultiWoZ dataset, then on Multi-WoZ(restaurant) and MultiWOZ (except hotel) dataset.",
"After conducting the ablation experiment, we also display the improvement the slot attention and slot information sharing brings.",
"Our model achieves the best performance in the most important metric, joint goal accuracy.",
"Our model outperformed the previous state-of-the-art model, TRADE by 2.41% absolute score on joint goal accuracy.",
"We only observe slight increase of slot accuracy compared to TRADE.",
"We suspect it is because TRADE was already achieving nearly 97% accuracy, which is close to the up-bound of the slot accuracy in this task.",
"After carefully checking the error cases, we found these errors mainly come from the difficulty of generating name phrases.",
"To test SAS's ability on single domain dialogue tasks, we also evaluate our model on the a subset of MultiWOZ which contains only the restaurant search task.",
"As displayed in Table 1, SAS achieved 1.99% improvement over TRADE on the joint goal accuracy as well, suggesting SAS's good performance generalize to single domain task.",
"Table 2 also shows how different choices of the hyperparameters influence the final results.",
"On MultiWOZ, the integration ratio of 0.8 and the threshold of 0.8 are the best hyperparamters.",
"But as 6372 illustrated in Table 2, the best integration ratio is no longer 0.8 on MultiWOZ (except hotel).",
"The best values of the integration ratio and the threshold will vary with the ontology.",
"We also perform ablation study to quantify different modules' contribution.",
"We observe in Table 3 that adding the slot attention improves the state tracking results by 1.37% on MultiWOZ.",
"Such improvement suggests having slot attention that focuses on the key information of the history is useful.",
"And the slot information sharing further enhances the performance by 1.04%.",
"The reason behind this may be that the information sharing of the related slots makes the data-insufficient slot receive more information.",
"This handles the rare or unknown slot-value problems to some extent.",
"As illustrated in Table 3, a model with the fix combination sharing method performs better than the k-means sharing method.",
"But the fix combination method has an obvious shortcoming.",
"It is difficult to generalize to new ontology.",
"We need search the hyperparameters for every new ontology and these efforts are usually costly and time-consuming.",
"Results in Table 2 and Table 3 indicate that the k-means algorithm provides a more robust model with respect to different parameters.",
"To investigate whether the slot similarity matrices used by the two sharing methods really reflect the similarity among slots, we also compare them with a human constructed similarity matrix.",
"We invite three volunteers to carefully rate (1 or 0) the relationship between every two slots and obtain the slot similarity matrix used in the human evaluated method.",
"As shown in Table 2 and Table 3, the performance of the k-means sharing method is close to the one the human constructed method.",
"This indicates human knowledge cannot further improve this task.",
"Besides that, we also notice that the fix combination model usually outperforms the human constructed method, demonstrating that the fix combination model can automatically discover some hidden relationship among all slots that human cannot capture.",
"To better understand why our model improves the performance, we investigated some dialogue examples and shown them in Table 4.",
"In the first dialogue, by asking Could you also find me a hotel with a moderate price that offers internet?, the user has briefly informed the agent that he/she is looking for a hotel with internet.",
"The previous model missed the hotel-internet in the tracked slots.",
"Because the model is mislead by the long interaction history.",
"Our model learns to focus on important information using the slot attention to track the correct internet slot.",
"In the second dialogue, although the previous model manages to capture the value 21:30.",
"It still confused arriveby with leaveat.",
"While SAS can distinguish them.",
"We suspect this is because our model can learn the differences between these slots by training on isolated key features per slot without seeing any irrelevant information.",
"In the third example, the user agrees to visit an attraction named Christ's College from many college-type choices the agent suggests.",
"Previous model fetches a wrong message and fills the slot attraction-name with Clare College.",
"In contrast, SAS captures the correct attraction name and also deduces that the attraction type is college.",
"Similar to the first dialogue, the slot attention helps model gain more clean information to detect slot values more accurately.",
"And by sharing the information fetched from slot attraction-name with the slot attraction-type, our model is more sensitive with the value college.",
"We also investigate the limitation of our model by analyzing the state tracking errors.",
"We noticed two types of errors.",
"First, SAS can not effectively identify value dontcare for most slots.",
"For example, when the agent asks the user about his/her requirement on the hotel rating, though he/she answers that is not really important for me, the model fails to fill dontcare into the slot hotel-star.",
"We believe this is due to the fact that the meaning of dontcare has plenty of expressions, it is much harder for the model to learn the semantic 6373 No Model Context 1 I am looking for a train that leaves on saturday and arrives by 10:30.",
"of dontcare than other slots.",
"Besides that, we also notice that the tracking errors of departure or destination location are still common.",
"The reason may be that location name words are usually rich in variations and have few grammatical feature.",
"We present SAS, an effective DST model which successfully extracts the key feature from the original information excessive dialogue.",
"The slot attention of SAS enables it to isolate the key information for each slot, while the slot information sharing enhances the expressiveness of the infor-6374 mation passed to each slot by integrating the information from similar slots.",
"The sharing allows SAS to generalize on rare slot-value pairs with few training data.",
"Our model reaches the state-of-the-art performance compared with previous models.",
"We believe that SAS provides promising potential extensions, such as adapting our model on other tasks where are troubled by excessive information.",
"Besides that, we also notice that it is hard for SAS to correctly extract names of hotel or attraction which have rich variations.",
"Designing a new model to address these problems may be our future work.",
"This research is funded by the Science and Technology Commission of Shanghai Municipality (19511120200 & 18511105502) and by Xiaoi Research.",
"The computation is performed in ECNU Multifunctional Platform for Innovation (001)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"other",
"other"
] |
[
"Self-attentive neural syntactic parsers using contextualized word embeddings (e.g. ELMo or BERT) currently produce state-of-the-art results in joint parsing and disfluency detection in speech transcripts.",
"Since the contextualized word embeddings are pre-trained on a large amount of unlabeled data, using additional unlabeled data to train a neural model might seem redundant.",
"However, we show that self-training a semi-supervised technique for incorporating unlabeled data sets a new state-of-the-art for the self-attentive parser on disfluency detection, demonstrating that self-training provides benefits orthogonal to the pre-trained contextualized word representations.",
"We also show that ensembling self-trained parsers provides further gains for disfluency detection.",
"Speech introduces challenges that do not appear in written text, such as the presence of disfluencies.",
"Disfluency refers to any interruptions in the normal flow of speech, including false starts, corrections, repetitions and filled pauses.",
"Shriberg (1994) defines three distinct parts of a speech disfluency, referred to as the reparandum , the interregnum and the repair .",
"As illustrated in the example below, the reparandum The first kind of invasion of is the part of the utterance that is replaced or repaired, the interregnum uh I mean (which consists of a filled pause uh and a discourse marker I mean ) is an optional part of the disfluency, and the repair the first type of privacy replaces the reparandum.",
"The fluent version is obtained by removing the reparandum and the interregnum.",
"This paper will focus on joint disfluency detection and constituency parsing of transcribed speech.",
"In the Switchboard treebank corpus (God-frey and Holliman, 1993; Marcus et al., 1999), which is a standard corpus for parsing studies on conversational speech, the reparanda , filled pauses and discourse markers are dominated by EDITED, INTJ and PRN nodes, respectively (see Figure 1).",
"Filled pauses and discourse markers belong to a finite set of words and phrases, so INTJ and PRN nodes are trivial to detect (Johnson and Charniak, 2004).",
"Detecting EDITED nodes, however, is challenging and is the main focus of disfluency detection models.",
"Jamshid Lou et al. (2019) showed that a self-attentive constituency parser achieves state-of-the-art results for joint parsing and disfluency detection.",
"They observed that because the Switchboard trees include both syntactic constituency nodes and EDITED nodes that indicate disfluency, training a parser to predict the Switchboard trees can be regarded as multi-task learning (where the tasks are syntactic parsing and identifying disfluencies).",
"In this paper, we extend the multi-task learning in Jamshid Lou et al. (2019) to explore the impact of self-training (McClosky et al., 2006) and ensembling (Kitaev et al., 2019) on the performance of the self-attentive parser.",
"We aim to answer two questions about the state-of-the-art self-attentive parser: Does self-training improve the performance of the self-attentive parser on disfluency detection?",
"Self-training is a semi-supervised technique for incorporating unlabeled data into a new model, where an existing model trained on manually labeled (i.e. gold) data is used to label unlabeled data.",
"The automatically (i.e. silver) labeled data are treated as truth and combined with the gold labeled data to re-train a new model (McClosky et al., 2006; Choe and Charniak, 2016).",
"Since neural models use rich representations of language pre-trained on a large amount of unlabeled data (Peters et al., 2018; Devlin et al., 2019), we might expect that self-training adds no new information to the self-attentive parser.",
"Surprisingly, however, we find that self-training improves disfluency detection f-score of the BERT-based self-attentive parser , demonstrating that self-training provides benefits orthogonal to the pre-trained contextualized embeddings.",
"Does ensembling improve disfluency detection in speech transcripts?",
"Ensembling is a commonly used technique for improving parsing where scores of multiple instances of the same model trained on the same or different data are combined at inference time (Dyer et al., 2016; Fried et al., 2017; Kitaev et al., 2019).",
"We expect ensembling parsers to improve the performance of the model on disfluency detection, too.",
"We show ensembling four self-trained parsers (using different BERT word representations) via averaging their span label scores increases disfluency detection f-score in comparison with a single self-trained parser.",
"Parsing speech transcripts is challenging for conventional syntactic parsers, mainly due to the presence of disfluencies.",
"In disfluent sentences, the relation between reparandum and repair is different from other words in the sentence.",
"The repair is usually a rough copy of the reparandum, using the same or similar words in roughly the same word order 1 (Charniak and Johnson, 2001).",
"Designed to capture tree-like structures, conventional syntactic parsers fail to detect rough copies which are strong indicators of disfluency.",
"Moreover, the reparandum and repair often do not form a syntactic phrase, which makes detecting the reparandum even harder.",
"For these reasons, specialized disfluency detection models were developed to remove disfluencies prior to parsing (Char-niak and Johnson, 2001; Kahn et al., 2005; Lease and Johnson, 2006) or special mechanisms were added to parsers to handle disfluencies (Rasooli and Tetreault, 2013; Honnibal and Johnson, 2014; Yoshikawa et al., 2016).",
"Conventional parsing based models can use the syntactic location of the disfluency as a feature in a reranker (Johnson et al., 2004).",
"A similar gain can be achieved in neural models by training a joint parsing and disfluency detection model.",
"In this multi-task learning setting, syntactic information helps the neural model detect disfluencies more accurately (Jamshid Lou et al., 2019).",
"State-of-the-art results for disfluency detection have been reported for Transformer models using contextualized embeddings (e.g. ELMo and BERT) (Jamshid Lou et al., 2019; Tran et al., 2019; Dong et al., 2019).",
"The self-attention mechanism of the Transformer is apparently effective for capturing rough copy dependencies between words.",
"A recent study shows that prosody slightly improves the parsing performance of the self-attentive model over the text-only model, especially in long sentences (Tran et al., 2019).",
"In this paper, we use a self-attentive model for joint disfluency detection and constituency parsing.",
"Disfluency detection models are usually trained and evaluated on the Switchboard corpus.",
"Switchboard is the largest disfluency annotated dataset.",
"However, only 5.9% of the words in the Switchboard are disfluent (Charniak and Johnson, 2001).",
"1 For example in Figure 1, the reparandum The first kind of invasion of and the repair the first type of privacy are rough copies of each other.",
"To mitigate the scarcity of labeled data, some studies have leveraged additional data by using:",
"(i) contextualized embeddings pre-trained on enormous amount of unlabeled data (Jamshid Lou et al., 2019; Tran et al., 2019; Bach and Huang, 2019) and",
"(ii) synthetic data generated by adding noise in the form of disfluencies to fluent sentences (e.g. repeating, deleting or inserting words in a sentence) (Wang et al., 2018; Bach and Huang, 2019; Dong et al., 2019).",
"By contrast, this paper focuses on self-training, which is a simple semi-supervised technique that has been effective in different NLP tasks, including parsing (McClosky et al., 2006; Clark et al., 2018; Droganova et al., 2018).",
"To our best knowledge, this is the first work that investigates self-training a neural disfluency detection model.",
"Another technique commonly used for improving parsing is ensembling.",
"Ensembling is a model combination method, where scores of multiple models (they can be the same or different models, trained on the same or different data, with different random initializations) are combined in some way (Dyer et al., 2016; Choe and Charniak, 2016; Fried et al., 2017).",
"The state-of-the-art for parsing written text is an ensemble of four BERT-based self-attentive parsers, where the parsers are combined by averaging their span label scores (Kitaev et al., 2019).",
"While ensembling is widely used in parsing, it has not been investigated for disfluency detection.",
"In this paper, we also explore the impact of ensembling several parsing based disfluency detection models on disfluency detection performance.",
"Following Jamshid Lou et al. (2019), we use a self-attentive constituency parser for joint disfluency detection and syntactic parsing 2 .",
"The parsing model is based on the architecture introduced by Kitaev and Klein (2018), which is state-of-the-art for",
"(i) parsing written texts (Kitaev et al., 2019; Fried et al., 2019),",
"(ii) parsing transcribed speech (Tran et al., 2019), and",
"(iii) joint parsing and disfluency detection (Jamshid Lou et al., 2019).",
"The self-attentive parser assigns a score s ( T ) to each tree T by calculating the sum of the potentials 2 The code is available at: https://github.com/pariajm/joint-disfluency-detection-and-parsing on its labeled constituent spans: s ( T ) = (cid:88) ( i,j,l ) T s ( i, j, l ) (1) where s ( i, j, l ) is the score of a constituent beginning at string position i ending at position j with label l .",
"The input to the parser is a sequence of vectors corresponding to the sequence of words in a sentence followed by one or more self-attention layers.",
"For each span ( i, j ) , a hidden vector h ij is constructed by subtracting the representations of the start and end of the span.",
"A span classifier, including two fully connected layers followed by a non-linearity, assigns labeling scores s ( i, j, . ) to each span.",
"Then, the highest scoring parse tree is found for a given sentence as follows: T = argmax T s ( T ) (2) using a modified CYK algorithm.",
"The parser introduced in Kitaev and Klein (2018) relies on an external POS tagger to predict preterminal labels, but because the parser's accuracy does not decrease when no external POS tagger is used, we use their parser here without an external POS tagger (hence, all the preterminal labels are UNK).",
"For more details, see Kitaev and Klein (2018).",
"We incorporate BERT (Devlin et al., 2019) in our self-attentive parser by fine-tuning the parameters as part of the training process.",
"Following Kitaev et al. (2019), we apply a learned projection matrix on the output of BERT to project the vectors to our desired dimensionality.",
"The representations are then fed into the parser.",
"BERT learns the representations for sub-word units, so to extract the word representations, we consider the representations of the last sub-word unit for each word in the sentence (Kitaev et al., 2019).",
"We train the self-attentive parser on the Penn Treebank-3 Switchboard corpus which contains gold disfluency labeled parse trees (Godfrey and Holliman, 1993; Marcus et al., 1999).",
"Using the trained model, we parse unlabeled data and add the silver parse trees to the gold Switchboard training data and re-train the self-attentive parser using the enlarged training set.",
"The unlabeled data we use include Fisher Speech Transcripts Part 1 (Cieri et al., 2004) and Part 2 (Cieri et al., 2005).",
"Following Charniak and Johnson (2001), we split the Switchboard into training, dev and test sets as follows: training data consists of the sw[23]",
".mrg files, dev data consists of the sw4[5-9]",
".mrg files and test data consists of the sw4[0-1]",
".mrg files.",
"All partial words 3 and punctuations are removed from the data, as they are not available in realistic ASR applications (Johnson and Charniak, 2004).",
"Our baseline is the self-attentive parser trained on the gold Switchboard corpus with BERT word representations.",
"The BERT-based parser is the current state-of-the-art, providing a very strong baseline for our work.",
"We trained different versions of the baseline parser using four different BERT models, namely BERTBASE [cased | uncased] and BERTLARGE [cased | uncased] , and then selected the best model i.e. BERTBASE [cased] on the Switchboard dev set.",
"We also tuned the hyperparameters by optimizing for performance on parsing EDITED nodes F ( SE ) .",
"Preliminary experiments on the Switchboard dev set showed that the hyperparameters given by Kitaev et al. (2019) perform well; therefore, this is what we used here.",
"Since random seeds lead to different results, in this paper we report average scores across 5 runs of each model initialized with different random seeds.",
"We evaluate the self-attentive parser in terms of parsing accuracy, as well as disfluency detection.",
"Since certain words are identified as EDITED in the parse tree, we can measure how well a parser 3 Words tagged as XX or words ending in classifies words as EDITED.",
"We can also evaluate how accurately the parser can identify all disfluency words, i.e., the words dominated by EDITED, INTJ or PRN nodes.",
"Therefore, we report precision (P), recall (R) and f-score (F) for both constituent spans (S) and word positions (W), where each word position is treated as labeled by all the constituents containing that word.",
"We also report the result for subsets of constituent spans and word positions:",
"(i) SE , the set of constituent spans labeled EDITED,",
"(ii) WE , the set of word positions dominated by one or more EDITED nodes, and",
"(iii) WEIP , the set of word positions dominated by one or more EDITED, INTJ or PRN nodes.",
"For more details, see Jamshid Lou et al. (2019).",
"To find the optimal proportion of additional silver training data, we select n percent (ranging from 10% to 90% ) of the training data in each mini-batch from silver parse trees and the rest from the gold ones.",
"This has the same effect as re-weighting the main gold corpus as in McClosky et al. (2006).",
"The results for using different proportions of the silver parse trees are presented in Figure 2.",
"The BERT-based parser self-trained with 40% silver Fisher trees and 60% gold Switchboard trees is our best model.",
"In other words, for a batch size of 30, in each mini-batch 12 parse trees come from the silver Fisher data and 18 parse trees from the gold Switchboard.",
"All self-training results in this paper use this proportion of gold and silver parse trees.",
"Tables 2 and 3 compare the baseline and the self-trained parser in terms of parsing and disfluency detection.",
"The parser self-trained on the silver Fisher data increases parsing and disfluency detection performance, indicating the BERT-based model benefits from additional silver labeled data.",
"Self-training is especially effective for recognizing EDITED disfluency nodes ( 1 . 5% increase in f-score).",
"Only 5 .",
"9% of the words in the Switchboard are disfluent, and BERT is only trained on fluent texts such as books and Wikipedia, so the baseline parser may be starved of disfluent training examples.",
"As a result, self-training on a corpus of conversational speech may compensate for the scarcity of disfluent gold data.",
"To explore this, we tried self-training on a wide variety of fluent clean datasets, including Gigaword 5 (which is an unlabelled newswire corpus) and WSJ and Brown (which include gold parse trees of written text), but the performance did not improve significantly.",
"This suggests that the parser benefits more from additional in-domain (i.e. conversational) silver data than additional out-of-domain (i.e. written) silver/gold data.",
"Moreover, if we learn the embeddings as part of training instead of using pre-trained BERT, EDITED word f-score would drop from 90.9% to 86.4% and self-training on Fisher leads to little improvement (0.2% increase in EDITED word f-score compared to 1.5% improvement when using BERT).",
"This suggests that self-training works well when the baseline model is powerful enough to predict accurate silver labels.",
"To further investigate the influence of self-training on disfluency detection, we randomly select 100 sentences containing disfluencies from Disfluency F(W E ) F(W EIP ) Baseline 90 .",
"the Switchboard dev set.",
"We categorize disfluencies into repetition , correction and restart according to Shriberg's (1994) typology of speech repairs.",
"Repetitions are repairs where the reparandum and repair portions of the disfluency are identical, while corrections are where the reparandum and repairs differ (which are much harder to de-tect).",
"Restarts are where the speaker abandons a sentence and starts a new one (i.e. the repair is empty).",
"As Table 4 shows, the self-trained parser outperforms the baseline in detecting all types of disfluency.",
"It especially has a better performance at detecting corrections and restarts which are more challenging types of disfluency in comparison with repetitions.",
"We investigate the impact of ensembling on the performance of the self-attentive parser, where we combine parsers by averaging their span label scores as follows: s ensemble ( i, j, l ) = 1 4 4 (cid:88) n =1 s n ( i, j, l ) (3) We tried different ensembling of parsers and the best result was achieved when we trained the baseline parser four times using four BERT word rep-# Model EDITED Disfluency Labels 1 Gold if if you call the any eight hundred number if you you can call up any eight hundred number Baseline if if you call the any eight hundred number if you you can call up any eight hundred number Self-trained if if you call the any eight hundred number if you you can call up any eight hundred number 2 Gold she was going to get picked up she was going to pick him up because she only Baseline she was going to get picked up she was going to pick him up because she only Self-trained she was going to get picked up she was going to pick him up because she only 3 Gold It goes back to you know what right what can society impose on people Baseline It goes back to you know what right what can society impose on people Self-trained It goes back to you know what right what can society impose on people 4 Gold and the money they do have they're not they do not use it wisely Baseline and the money they do have they're not they do not use it wisely Self-trained and the money they do have they're not they do not use it wisely 5 Gold For two years we didn't and we which was a kind of stupid Baseline For two years we didn't and we which was a kind of stupid Self-trained For two years we didn't and we which was a kind of stupid 6 Gold We we couldn't survive in a in a juror in a trial system without a jury Baseline We we couldn't survive in a in a juror in a trial system without a jury Self-trained We we couldn't survive in a in a juror in a trial system without a jury 7 Gold I think it's like ninety-nine point ninety-nine think it is Baseline I think it's like ninety-nine point ninety-nine think it is Self-trained I think it's like ninety-nine point ninety-nine think it is 8 Gold Do you think for a big or a little place Baseline Do you think for a big or a little place Self-trained Do you think for a big or a little place Table 5: Some examples from the Switchboard dev set and corresponding EDITED disfluency labels given by the baseline and the best self-trained parser, as well as the gold (i.e. correct) labels.",
"resentations, namely BERTBASE [cased | uncased] and BERTLARGE [cased | uncased] , and combined the results at inference time (Kitaev et al., 2019).",
"The ensembled models not only reflect variations of different pre-trained representations but also the randomness in initialization of the models.",
"As shown in Table 6, ensembling and self-training both improve the performance of the baseline single model on parsing and detecting EDITED disfluency nodes.",
"Self-training is more effective than ensembling, especially for EDITED node detection.",
"The best results are reported for ensembling the best of the self-trained parsers for each of different BERT models from the 5 random restarts 4 .",
"We compare the performance of our best model with previous work on the Switchboard test set.",
"As demonstrated in Table 7, our model outperforms prior work in parsing.",
"The parsing result for our model is higher than Tran et al. (2018) which utilizes prosodic cues, as well as text based features.",
"We compare the performance of the self-attentive parser with state-of-the-art disfluency detection models.",
"As shown in Table 8, our model has the best f-score.",
"We also compare our model with prior work that reported EDITED, INTJ and PRN word f-score for disfluency detection and find that our model has the best performance (see Table 9).",
"Compared to Wang et al. (2018) which uses GANs to leverage additional unlabelled data and Bach and Huang (2019) which leverages synthetic data, our model significantly improves the recall.",
"This demonstrates that standard techniques such as self-training and ensembling are as good or better than these specialized, complex approaches.",
"We conduct a qualitative analysis on the Switchboard dev set to characterize the disfluencies that",
"the baseline model cannot detect but the self-trained one can.",
"We provide representative examples in Table 5.",
"In general, the self-trained model is better at detecting long complex corrections (# 1-4), restarts (# 5) and stutter-like repetitions (# 6).",
"It also does a better job of discriminating fluent repetitions and fluent parallel structures from repetition and correction types of disfluency (# 7 and 8).",
"Figure 3 depicts a sentence parsed by the baseline and the self-trained self-attentive parser, where the self-trained model correctly predicts all disfluency EDITED nodes.",
"As explained in Section 3, we do not use an external POS tagger, so POS tags are not available when parsing from raw text.",
"That's why all preterminal labels in Figure 3 are shown by a dummy token i.e. UNK.",
"We introduced a new state-of-the-art for joint disfluency detection and constituency parsing of transcribed speech.",
"We showed that self-training and ensembling are effective methods for improving disfluency detection.",
"A qualitative analysis of the results also indicated that self-training is helpful for detecting complicated types of disfluencies, including corrections and restarts.",
"In future work, we intend to explore the idea of self-training for parsing written texts.",
"We also aim at integrating syntactic parsing and self-training more closely with automatic speech recognition.",
"The first step is to develop parsing models that parse ASR output, rather than speech transcripts.",
"We would like to thank the anonymous reviewers for their insightful comments and suggestions.",
"This research was supported by a Google award through the Natural Language Understanding Focused Program, by a CSIRO's DATA61 Top-up Scholarship, and under the Australian Research Councils Discovery Projects funding scheme (project number DP160102156)."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"Automatic identification of spurious instances (those with potentially wrong labels in datasets) can improve the quality of existing language resources, especially when annotations are obtained through crowdsourcing or automatically generated based on coded rankings.",
"In this paper, we present an effective approach inspired by queueing theory and psychology of learning to automatically identify spurious instances in datasets.",
"Our approach discriminates instances based on their difficulty to learn, determined by a downstream learner.",
"Our method can be applied to any dataset assuming the existence of a neural network model for the target task of the dataset.",
"Our best approach outperforms competing state-of-the-art baselines and has a MAP of 0 .",
"85 and 0 .",
"22 in identifying spurious instances in synthetic and carefully-crowdsourced real-world datasets respectively.",
"The importance of error-free language resources cannot be overstated as errors can inversely affect interpretations of the data, models developed from the data, and decisions made based on the data.",
"Although the quality of language resources can be improved through good annotation guidelines, test questions, etc., annotation noise still exists (Gupta et al., 2012; Lasecki et al., 2013).",
"For example, Figure 1 shows sample spurious instances (those with potentially wrong labels) in CIFAR-10 (Krizhevsky, 2009) which is a benchmark dataset for object classification.",
"Spurious instances can mislead systems, and, if available in test data, lead to unrealistic comparison among competing systems.",
"Previous works either directly identify noise in datasets (Hovy et al., 2013; Dickinson and Meurers, 2003; Eskin, 2000; Loftsson, 2009),",
"or develop models that are more robust against noise (Guan et al., 2017; Natarajan et al., 2013; Zhu et al., 2003; Zhu and Wu, 2004).",
"Furthermore, recent works on adversarial perturbation have tackled this problem (Goodfellow et al., 2015; Feinman et al., 2017).",
"However, most previous approaches require either annotations generated by each individual annotator (Guan et al., 2017), or both task-specific and instance-type (genuine or adversarial) labels for training (Hendrik Met-zen et al., 2017; Zheng et al., 2016), or noise-free data (Xiao et al., 2015).",
"Such information is often not available in the final release of most datasets.",
"Current approaches utilize prediction probabil-ity/loss of instances to tackle the above challenges in identifying spurious instances.",
"This is because prediction probability/loss of spurious instances tend to be lower than that of genuine instances (He and Garcia, 2009).",
"In particular, the Bayesian Uncertainty model (Feinman et al., 2017) defines spurious instances as those that have greater uncertainty (variance) in their stochastic predictions, and the Variational Inference model (Rehbein and Ruppenhofer, 2017; Hovy et al., 2013) expects greater posterior entropy in predictions made for spurious instances.",
"In this paper, our hypothesis is that spurious instances are frequently found to be difficult to 2006 learn during training process.",
"This difficulty in learning stems from the intrinsic discrepancy between spurious and the cohort of genuine instances which frequently makes a learner less confident in predicting the wrong labels of spurious instances.",
"Based on this hypothesis, we present two frameworks which are inspired by findings in queueing theory and psychology, namely Leitner queue network (Leitner, 1974) and Curriculum Learning (Bengio et al., 2009).",
"Our frameworks can be considered as schedulers that schedule instances to train a downstream learner (e.g. a neural network) with respect to easiness/difficulty of instances determined by the extent to which the learner can correctly label (e.g. classify) instances during the training process.",
"The two frameworks, however, differ in their views on the theory of learning as we describe below: Curriculum learning is inspired by the learning principle that humans can learn more effectively when training starts with easier concepts and gradually proceeds with more difficult ones (Ben-gio et al., 2009).",
"On the other hand, Leitner system is inspired by spaced repetition (Dempster, 1989; Cepeda et al., 2006), the learning principle that effective and efficient learning can be achieved by working more on difficult concepts and less on easier ones.",
"Both frameworks are effective, conceptually simple, and easy to implement.",
"The contributions of this paper are as follows:",
"(a) we develop a cognitively-motivated and effective algorithm for identifying spurious instances in datasets,",
"(b) our approach can be applied to any dataset without modification if there exists a neural network architecture for the target task of the dataset, and",
"(c) we release a tool that can be easily used to generate a ranked list of spurious instances in datasets.",
"1 Our tool requires a dataset and its corresponding network architecture to generate a ranked list of spurious instances in the dataset.",
"Our best approach (Leitner model) has a mean average precision (MAP) of 0 .",
"85 and 0 .",
"22 in identifying spurious instances on real-world and synthetic datasets and outperforms competing state-of-the-art baselines.",
"gen-1 https://scholar.harvard.edu/hadi/spot",
"Bengio et al. (2009) and Kumar et al. (2010) developed training paradigms which are inspired by the learning principle that humans can learn more effectively when training starts with easier concepts and gradually proceeds with more difficult ones.",
"Since easiness of information is not readily available in most datasets, previous approaches used heuristic techniques (Spitkovsky et al., 2010; Basu and Christensen, 2013) or optimization algorithms (Jiang et al., 2015, 2014) to quantify easiness for instances.",
"These approaches consider an instance as easy if its prediction loss is smaller than a threshold ( ).",
"Given a neural network as the learner, we adopt curriculum learning to identify spurious instances as follows (see Figure 2): At each iteration i , we divide all instances into easy and hard batches using the iteration-specific threshold i and the loss values of instances at iteration i , obtained from the current partially-trained network.",
"All instances with a loss smaller than i are considered as easy and the rest are consid-2007 ered as hard.",
"All easy instances in conjunction with i [0 , 1] fraction of easiest hard instances (those with smallest loss values greater than i ) are used for training at iteration i .",
"We set each i to the average 2 loss of training instances that are correctly classified by the current partially-trained network.",
"Furthermore, at each iteration i > 1 , we set i = i/k where k is the total number of iterations.",
"In this way, difficult instances are gradually introduced to the network at every new iteration.",
"The update stat ( . ) function in Figure 2 scores instances based on their frequency of occurrence in the hard batch.",
"In particular, for each instance h i : S e ( h i ) = S e 1 ( h i )+ (1) 1 hard batch e ( h i ) (cid:16) 1 | hard batch e | + loss e ( h i ) (cid:17) , where S e ( h i ) is the score of h i at iteration e , 1 Y ( x ) is an indicator function which is 1 when x Y and otherwise 0 , hard batch e indicates the set of hard instances at iteration e , and loss e ( h i ) is the loss of the network for h i at iteration e .",
"The above function assigns higher scores to instances that are frequently considered as hard instances by the curriculum learning framework (such instances are ranked higher in the final ranked list of spurious instances).",
"It also assigns a final score of S k ( h i ) = 0 to instances that are treated as easy instances throughout the training process, i.e. those that have a loss smaller than the iteration-specific threshold i at each iteration i and, therefore, are always placed in the easy batch .",
"To break the tie for these instances in the final ranking, we resort to their final loss values as follows: S k ( h i ) = loss k ( h i ) , if S k ( h i ) = 0 .",
"The Leitner System is inspired by the broad evidence in psychology that shows human ability to retain information improves with repeated exposure and exponentially decays with delay since last exposure (Cepeda et al., 2006).",
"Spaced repetition forms the building block of many educational devices, such as flashcards, in which small pieces of information are repeatedly presented to a learner on a schedule determined by a spaced repetition 2 We also considered maximum and median loss, but average loss led to greater training gain in terms of effectiveness.",
"algorithm.",
"Such algorithms show that human learners can learn efficiently and effectively by increasing intervals of time between subsequent reviews of previously learned materials (Dempster, 1989; Novikoff et al., 2012).",
"We adopt the Leitner system to identify spurious instances as follows: Suppose we have n queues { q 0 , q 1 , . . . , q n 1 } .",
"The Leitner system initially places all instances in the first queue, q 0 .",
"As Figure 3 shows, the system trains with instances of q i at every 2 i iterations.",
"At each iteration, only instances in the selected queues will be used for training the network.",
"During training, if an instance from q i is correctly classified by the network, the instance will be pro-moted to q i +1 , otherwise it will be demoted to the first queue, q 0 .",
"Therefore, as the network trains through time, higher queues will accumulate easier instances which the network is most accurate about, while lower queues carry either hard or potentially spurious instances.",
"This is because of the intrinsic discrepancy between spurious instances and the cohort of genuine instances which makes the network less confident in predicting the wrong labels of spurious instances.",
"Figure 3 (bot-2008 tom) provides examples of queues and their corresponding processing epochs.",
"The update stat ( . ) function in Figure 3 scores instances based on their occurrence in q 0 .",
"In particular, for each instance h i : S e ( h i ) = S e 1 ( h i )+ 1 q e 0 ( h i ) (cid:16) 1 | q e 0 | + loss e ( h i ) (cid:17) , (3) where | q e 0 | indicates the number of instance in q 0 at iteration e .",
"The above function assigns higher scores to instances that frequently occur in q 0 .",
"It also assigns a final score of S k ( h i ) = 0 (at the last iteration) to instances that have never been demoted to q 0 .",
"To break the tie for such instances, we use their final loss value as follows: S k ( h i ) = loss k ( h i ) , if S k ( h i ) = 0 .",
"We employ a TREC-like evaluation setting to compare models against each other.",
"For this, we create a pool of K most spurious instances identified by different models.",
"If needed, e.g. in case of real-world datasets, we manually label all instances in the pool and come to agreement about their labels.",
"Then, we compare the resulting labels with the original labels in the dataset to determine spurious/genuine instances.",
"We compare models based on the standard TREC evaluation measures, namely mean average precision (MAP), precision after r instances are retrieved (P@r), and, only for synthetic data, precision after all spurious instances are retrieved (Rprec).",
"We use the trec-eval toolkit to compute performance of different models.",
"3 3.2 Datasets We develop synthetic and real-world datasets for our experiments.",
"Since, in contrast to real-world datasets, (most 4 ) synthetic datasets do not contain any noisy instances, we can conduct large-scale evaluation by injecting spurious instances into such datasets.",
"Table 1 shows detail information about our datasets.",
"3 http://trec.nist.gov/trec_eval/ 4 Some synthetic datasets may contain noise, see sample inconsistencies that our model identified in the bAbi dataset (Weston et al., 2016) in Table 3. Dataset Train/Val SpRatio Input Output Synthetic Dataset Addition 10 K/ 2 K (0 , 0 .",
"The Addition dataset, initially developed by Zaremba and Sutskever (2014), is a synthetic dataset in which an input instance is a pair of non-negative integers smaller than 10 l and the corresponding output is the arithmetic sum of the input; we set l = 4 in our experiments.",
"Since this dataset contains only genuine instances, we create noisy datasets by injecting N spurious instances into (1 ) N genuine instances, where N = 10 K is the total number of training instances and 0 .",
"5 indicates the noise level in the dataset.",
"We create spurious instances as follows: given three random numbers x i , x j , x k [0 , 10 l ) such that x j 6 = x k , the wrong sum (output) for the pair ( x i , x j ) is computed as: max(0 , x i + ( 1) o x k ) , where o is a random variable that takes values from O = { 1 , 2 } with equal probability. 3.2.2 Real-world Datasets We crowdsource annotations for two real-world datasets, namely Twitter and Reddit posts (see Table 1). For quality control, we carefully develop annotation schemas as well as high quality test questions (see below) to minimize the chances of spurious labels in the resulting annotations. The Twitter dataset contains tweets about a telecommunication brand. Tweets contain brand name or its products and services. Annotators are instructed to label tweets as positive/negative if they describe positive/negative sentiment about the target brand. We use 500 labeled instances for annotation quality assurance and ignore data generated by annotators who have less than 80 % accuracy on these instances. The resulting Fleiss' kappa (Fleiss, 1971) is = 0 . 66 on our Twitter dataset which indicates substantial agreement. 2009 The Reddit dataset includes posts about colon, breast, or brain cancer. These posts contain phrases like colon cancer , breast cancer , or brain cancer . Annotators are instructed to label a post as relevant if it describes a patient's experience (including sign and symptoms, treatments, etc.,) with respect to the cancer. In contrast, irrele-vant posts are defined as generic texts (such as scientific papers, news, etc.,) that discuss cancer in general without describing a real patient experience. We use 300 labeled instances for annotation quality assurance and ignore annotations generated by users who have less than 80 % accuracy on these instances. The resulting Fleiss' kappa is = 0 . 48 for the Reddit dataset which indicates moderate agreement. 3.3 Settings For the synthetic Addition dataset, we set the size of the TREC pool to K = 10 , 000 (size of training data) which indicates there is no limitation on the number of spurious instances that a model can retrieve; note that we have a spurious/genuine label for each instance in the Addition dataset and therefore do not need to label the resulting TREC pool manually. Furthermore, we consider the LSTM network developed by Sutskever et al. (2014) as the downstream learner. 5 Without noise in data, this network obtains a high accuracy of 99 . 7 % on the Addition task. For the real-world datasets, we allow each model to submits its top 50 most spurious instances to the TREC pool (we have five models including our baselines). As mentioned before, we manually label these instances to determine their spurious/genuine labels. This leads to TREC pools of size 198 and 152 posts (with 59 and 35 spurious instances) for the Twitter and Reddit datasets respectively. We use the MLP network fastText (Joulin et al., 2017) as the downstream learner for more effective prediction, we add a Dense layer of size 512 before the last layer of fastText . This network obtains accuracy of 74 . 6 % and 70 . 2 % on Twitter and Reddit datasets respectively. Finally, for the Leitner system, we experiment with different queue lengths, n = { 3 , 5 , 7 } , and set n = 5 in the experiments as this value leads to slightly better performance in our experiments. 
5 http://github.com/fchollet/keras/ blob/master/examples/addition_rnn.py 3.4 Baselines We consider the following baselines; each baseline takes a dataset and a model as input and generates a ranked list of spurious instances in the dataset: Prediction Probability (PP) : Since prediction loss of spurious instances tend to be higher than that of genuine ones (He and Garcia, 2009; Hendrycks and Gimpel, 2016), this baseline ranks instances in descending order of their prediction loss after networks are trained through standard (rote) training. Variational inference (VI) (Hovy et al., 2013; Rehbein and Ruppenhofer, 2017): This model approximates posterior entropy from several predictions made for each individual instance (see below). 6 Bayesian Uncertainty (BU) (Feinman et al., 2017): This model ranks instances with respect to the uncertainty (variance) in their stochastic predictions. 7 BU estimates an uncertainty score for each individual instance by generating T = 50 predictions for the instance from a distribution of network configurations. The prediction disagreement tends to be common among spurious instances (high uncertainty) but rare among genuine instances (low uncertainty). Uncertainty of instance x with predictions { y 1 , . . . , y T } is computed as follows: 1 TTX i =1 y > i y i (cid:16) 1 TTX i =1 y i (cid:17) > (cid:16) 1 TTX i =1 y i (cid:17) . Variational inference (VI) (Rehbein and Rup-penhofer, 2017; Hovy et al., 2013) detects spurious instances by approximating the posterior p ( y | x ) with a simpler distribution q ( y ) (called variational approximation to the posterior) which models the prediction for each instance. The model jointly optimizes the two distributions through EM: in the E-step, q is updated to minimize the divergence between the two distributions, D ( q || p ) ; in the M-step, q is kept fixed while p is adjusted. The two steps are repeated until convergence. Instances are then ranked based on their posterior entropies. Similar to BU, we generate T = 50 predictions for each instance. For both BU and VI baselines, we apply a dropout rate of 0 . 5 after the first and last hidden 6 http://isi.edu/publications/ licensed-sw/mace/ 7 http://github.com/rfeinman/ detecting-adversarial-samples 2010 layers of our downstream networks to generate predictions. See (Gal and Ghahramani, 2016) for the ability of dropout neural networks in representing model uncertainty. 3.5 Experimental Results The overall mean average precisions (MAPs) of different models on synthetic and real-world datasets are reported in Table 2. For the synthetic dataset (Addition), we report average MAP across all noise levels, and for real-world datasets (Twit-ter and Reddit), we report average MAP at their corresponding noise levels obtained from corresponding TREC pools. We use t-test for signifi-cance testing and asterisk mark (*) to indicate sig-nificant difference at = 0 . 05 between top two competing systems. The results show that Leitner (Lit) and Bayesian uncertainty (BU) models considerably outperform prediction probability (PP) and curriculum learning (CL) on both synthetic and real-world datasets. In case of real-world datasets, we didn't find sig-nificant difference between top two models perhaps because of the small size of corresponding TREC pools ( 198 Twitter posts and 152 Reddit posts, see Table 1). Overall, BU and Lit show average MAP of 0 . 81 , and 0 . 85 on the synthetic dataset and 0 . 15 , 0 . 22 on real-world datasets respectively. 
The higher performance of Lit indicates that spurious instances often appear in q 0 . The lower performance of CL, however, can be attributed to its training strategy which may label spurious instances as easy instances if their loss values are smaller than the loss threshold (section 2.1). The large difference between the performances of Lit and CL (two methods based on repeated scoring across training epochs) shows that the way that repetition is utilized by different methods largely affects their final performance in spotting spurious instances. In addition, VI shows lower performance than BU and Lit on synthetic data, but comparable performance to BU on real-world datasets. Furthermore, the results show that the performance of all models are considerably lower on real-world datasets than the synthetic dataset. This could be attributed to the more complex nature of our real-world datasets which leads to weaker generalizability of downstream learners on these datasets (see next section for discussion on training performance). This can in turn inversely affect the performance of different spotters, e.g. by en-Synthetic Real-world noise = [0 . 1 , 0 . 5] noise = { 0 .",
"couraging most instances to be considered as hard and thus placed in lower queues of Lit or in the hard batch of CL, or by increasing the prediction uncertainty and entropy in case of BU and VI respectively.",
"In addition, as we mentioned before, we carefully setup the annotation task to minimize the chances of spurious labels in the resulting annotations.",
"Therefore, we expect a considerably smaller fraction of spurious instances in our real-world datasets.",
"Figures",
"4(a) and",
"4(d) report MAP and precision after all spurious instances have been retrieved (Rprec) on Addition at different noise levels respectively; note that = 0 .",
"5 means equal number of spurious and genuine instances in training data (here, we do not report the performance of CL due to its lower performance and for better pre-sentation).",
"First, the results show that Lit and BU considerably outperform PP and VI.",
"Furthermore, BU shows considerably high performance at lower noise levels, 0 .",
"2 , while Lit considerably outperforms BU at greater noise levels, > 0 .",
"2 .",
"The lower performance of BU at higher noise levels might be because of the poor generalizability of LSTM in the context of greater noise which may increase the variance in the prediction probabilities of most instances (see section 3.6 for our note on training performance).",
"In terms of average Rprec, the overall performance of PP, CL, VI, BU, and Lit models is 0 .",
"62 , 0 .",
"57 , 0 .",
"65 , 0 .",
"70 , and 0 .",
"74 respectively on the Addition dataset across all noise levels (see the corresponding values for MAP in Table 2).",
"The lower Rprec values than MAP indicate that some spurious instances are ranked very low by models.",
"These are perhaps the most difficult spurious instances to identify.",
"For the real-world datasets, we only report MAP and P@r (precision at rank r ) as spurious/genuine labels are only available for those instances that make it to the TREC pool but not for all instances.",
"The results on Reddit, Figures",
"4(b) and",
"4(e) respectively, show that Lit outperforms other mod-2011 ADDITION REDDIT TWITTER 0.1 0.2 0.3 0.4 0.5 noise ( ) 0.70 0.75 0.80 0.85 0.90 MAPPP VI BU Lit",
"els, but VI and BU show comparable MAP (in contrast to their performance on Addition).",
"Furthermore, Figure",
"4(e) shows that Lit generates a more accurate ranked list of spurious instances and consistently outperforms other models at almost all ranks.",
"In particular, it maintains a MAP of around 60 % at rank 20 , while other models have consistently lower MAP than 50 % at all ranks.",
"The results on the Twitter dataset, Figures",
"4(c) and",
"4(f), show that Lit outperforms other models.",
"However, interestingly, PP outperforms BU in terms of both MAP and P@r across almost all ranks.",
"This result could be attributed to the substantial annotation agreement on Twitter dataset (Fleiss' = 0 . 66 ) which could make network predictions/loss values more representative of gold labels.",
"Figure",
"4(f) also shows that Lit is the most precise model in identifying spurious instances.",
"Note that P@ 5 is an important metric in search applications and as Figures",
"4(e) and",
"4(f) show, at rank 5 , Lit is 2 3 times more precise than the best-performing baseline on our real-world datasets.",
"Given any dataset and its corresponding neural network, our Leitner model simultaneously trains the network and generates a ranked list of spurious instances in the dataset.",
"For this purpose, the model tracks loss values and occurrences of instances in the lower Leitner queue during training.",
"Figure",
"5(a) shows the accuracy of the LSTM network (Sutskever et al., 2014) trained with different training regimes on the validation data of Addition with different noise levels; note that Rote represents standard training where at each iteration all instances are used to train the network.",
"As the results show, at lower noise levels, the training performance (i.e. the generalizability/accuracy of the LSTM network) is generally high and comparable across different training regimes, e.g. close to 100 % at = 0 .",
"However, Lit leads to a slightly weaker training performance than CL and Rote as the noise level increases.",
"This is because Lit learns from spurious instances more frequently than genuine ones.",
"This may decrease the training performance of Lit, especially with greater amount of noise in data.",
"However, this training strategy increases the spotting performance of Lit as spurious instances seem to occur in lower queues of Leitner more frequently, see Figure 4.",
"In addition, the accuracy of fastText (Joulin et al., 2017) is reported in Figure",
"5(b).",
"The results show that different training regimes lead to comparable performance on both datasets (accu-racy of around 75 % and 70 % on Twitter and Reddit respectively).",
"The relatively lower training per-2012 0.0 0.1 0.2 0.3 0.4 0.5 noise ( ) 0.4 0.5 0.6 0.7 0.8 0.9 1.0 A cc u r a c y Rote CL Lit",
"We first report insights on why prediction loss alone is not enough to identify spurious instances.",
"For this analysis, we track the loss of spurious and genuine instances at each training iteration.",
"8 Figure",
"6(a) shows the number of spurious/genuine instances with low/high loss at each epoch; where, we use the average loss of correctly classified training instances at each epoch as a pivot value to determine the low and high loss values for that epoch.",
"Initially, almost all spurious and genuine instances have high loss values (see SH and GH in Figure",
"6(a)).",
"However, the sheer imbalance of genuine instances relative to spurious instances means that there will still be a relatively large number of genuine instances with large loss these are simply difficult instances.",
"Furthermore, the number of spurious instances with lower loss values (SL) slowly increases as the network gradually learns the wrong labels of some spurious instances; this, in turn, decreases the expected loss 8 Here we use Addition with N = 10 K training instances and noise level of = 0 .",
"of such instances.",
"Since PP merely ranks instances based on loss values, the above two factors may cause some spurious instances to be ranked lower than genuine ones by PP; see Figure",
"6(b) for MAP of PP in detecting spurious instances at every iteration.",
"Using queue information from the Leitner system adds information that loss alone does not; we suspect that the learner can find principled solutions that trade off losses between one difficult genuine instance and another (causing them to bounce between q 0 and higher queues) without harming total loss, but that the more random nature of spurious instances means that they are consistently misclassified, staying in q 0 .",
"Verifying this hypothesis will be the subject of future work.",
"For our second analysis, we manually inspect highly ranked instances in q 0 of Lit.",
"We use the synthetic dataset bAbi (Weston et al., 2016) which is a systematically generated QA dataset for which the task is to generate an answer given a question and its corresponding story.",
"As the learner, we use an effective LSTM network specifically developed for this task.",
"9 Table 3 shows sample instances from bAbi which are highly ranked by 9 https://github.com/fchollet/keras/ blob/master/examples/babi_rnn.py 2013 Story : Mary traveled to the garden.",
"Daniel went to the garden.",
"Mary journeyed to the kitchen.",
"Mary went back to the hallway .",
"Daniel traveled to the office.",
"Daniel moved to the garden.",
"Sandra went back to the kitchen.",
"John traveled to the bathroom.",
"Question : Where is Mary?",
"Answer : hallway Story : John went to the office.",
"Daniel journeyed to the office.",
"Sandra picked up the football there .",
"Sandra went to the bedroom.",
"Sandra left the football there.",
"Sandra went back to the kitchen .",
"Sandra traveled to the hallway.",
"Sandra moved to the garden.",
"Question : Where is the football?",
"Answer : bedroom Table 3: Sample inconsistencies in bAbi dataset.",
"Lit.",
"We observe inconsistencies in the given stories.",
"In the first case, the story contains the sentence Mary went back to the hallway, while the previous sentences indicate that Mary was in the garden/kitchen but not hallway before.",
"In the second case, the sentence Sandra picked up the football there is inconsistent with story because the word there doesn't refer to any specific location.",
"We conjecture that these inconsistencies can mislead the learner or at least make the learning task more complex.",
"Our model can be used to explore language resources for such inconsistencies.",
"There is broad evidence in psychology that shows human ability to retain information improves with repeated exposure and exponentially decays with delay since last exposure.",
"Ebbinghaus (1913, 2013), and recently Murre and Dros (2015), studied the hypothesis of the exponential nature of forgetting in humans.",
"Three major indicators were identified that affect memory retention in humans: delay since last review of learning materials and strength of human memory (Ebbinghaus, 1913; Dempster, 1989; Wixted, 1990; Cepeda et al., 2006; Novikoff et al., 2012), and, more recently, difficulty of learning materials (Reddy et al., 2016).",
"The above findings show that human learners can learn efficiently and effectively by increasing intervals of time between subsequent reviews of previously learned materials (spaced repetition).",
"In (Amiri et al., 2017), we built on these findings to develop efficient and effective training paradigms for neural networks.",
"Previous research also investigated the development of cognitively-motivated training paradigms named curriculum learning for artificial neural networks (Bengio et al., 2009; Kumar et al., 2010).",
"The difference between the above models is in their views to learning: curriculum learning is inspired by the learning principle that training starts with easier concepts and gradually proceeds with more difficult ones (Bengio et al., 2009).",
"On the other hand, spaced repetition models are inspired by the learning principle that effective and efficient learning can be achieved by working more on difficult concepts and less on easier ones.",
"In this research, we extend our spaced repetition training paradigms to simultaneously train artificial neural networks and identify training instances with potentially wrong labels (spurious instances) in datasets.",
"Our work is important because spurious instances may inversely affect interpretations of the data, models developed from the data, and decisions made based on the data.",
"Furthermore, spurious instances lead to unrealistic comparison among competing systems if they exist in test data.",
"We present a novel approach based on queueing theory and psychology of learning to identify spurious instances in datasets.",
"Our approach can be considered as a scheduler that iteratively trains a downstream learner (e.g. a neural network) and detects spurious instances with respect to their difficulty to learn during the training process.",
"Our approach is robust and can be applied to any dataset without modification given a neural network designed for the target task of the dataset.",
"Our work can be extended by:",
"(a) utilizing several predictions for each training instance,",
"(b) investigating the extent to which a more sophisticated and effective downstream learner can affect the performance of different spotters,",
"(c) developing models to better distinguish hard genuine instances from spurious ones, and",
"(d) developing ranking algorithms to improve the performance of models on real-world datasets.",
"We thank anonymous reviewers for their thoughtful comments.",
"This work was supported by National Institutes of Health (NIH) grant R01GM114355 from the National Institute of General Medical Sciences (NIGMS).",
"The content is solely the responsibility of the authors and does not represent the official views of the NIH."
] | [
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"objective",
"other",
"objective",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Transformer-based language models (LMs) pretrained on large text collections implicitly store a wealth of lexical semantic knowledge, but it is non-trivial to extract that knowledge effectively from their parameters.",
"Inspired by prior work on semantic specialization of static word embedding (WE) models, we show that it is possible to expose and enrich lexical knowledge from the LMs, that is, to specialize them to serve as effective and universal decontex-tualized word encoders even when fed input words in isolation (i.e., without any context).",
"Their transformation into such word encoders is achieved through a simple and efficient lexical fine-tuning procedure (termed LEXFIT ) based on dual-encoder network structures.",
"Further, we show that LEXFIT can yield effective word encoders even with limited lexical supervision and, via cross-lingual transfer, in different languages without any readily available external knowledge.",
"Our evaluation over four established, structurally different lexical-level tasks in 8 languages indicates the superiority of LEXFIT -based WEs over standard static WEs (e.g., fastText) and WEs from vanilla LMs.",
"Other extensive experiments and ablation studies further profile the LEXFIT framework, and indicate best practices and performance variations across LEXFIT variants, languages, and lexical tasks, also directly questioning the usefulness of traditional WE models in the era of large neural models.",
"Probing large pretrained encoders like BERT (De-vlin et al., 2019) revealed that they contain a wealth of lexical knowledge (Ethayarajh, 2019; Vulic et al., 2020).",
"If type-level word vectors are extracted from BERT with appropriate strategies, they can even outperform traditional word embeddings (WEs) in some lexical tasks (Vulic et al., 2020; Bommasani et al., 2020; Chronis and Erk, 2020).",
"However, LexFit loss (w, v) = (dormant, asleep)",
"both static and contextualized WEs ultimately learn solely from the distributional word co-occurrence signal.",
"This source of signal is known to lead to distortions in the induced representations by con-flating meaning based on topical relatedness rather than authentic semantic similarity (Hill et al., 2015; Schwartz et al., 2015; Vulic et al., 2017).",
"This also creates a ripple effect on downstream applications, where model performance may suffer (Faruqui, 2016; Mrkic et al., 2017; Lauscher et al., 2020).",
"Our work takes inspiration from the methods to correct these distortions and complement the distributional signal with structured information, which were originally devised for static WEs.",
"In particular, the process known as semantic specialization (or retrofitting) injects information about lexical relations from databases like WordNet (Beckwith et al., 1991) or the Paraphrase Database (Ganitke-vitch et al., 2013) into WEs.",
"Thus, it accentuates relationships of pure semantic similarity in the re-fined representations (Faruqui et al., 2015; Mrkic et al., 2017; Ponti et al., 2019, inter alia ).",
"Our goal is to create representations that take advantage of both 1) the expressivity and lexical knowledge already stored in pretrained language models (LMs) and 2) the precision of lexical fine-tuning.",
"To this effect, we develop LEXFIT , a versatile lexical fine-tuning framework, illustrated in Figure 1, drawing a parallel with universal sentence encoders like SentenceBERT (Reimers and Gurevych, 2019).",
"1 Our working hypothesis, extensively evaluated in this paper, is as follows: pretrained encoders store a wealth of lexical knowledge, but it is not straightforward to extract that knowledge.",
"We can expose this knowledge by rewiring their parameters through lexical fine-tuning, and turn the LMs into universal (decontextualized) word encoders.",
"Compared to prior attempts at injecting lexical knowledge into large LMs (Lauscher et al., 2020), our LEXFIT method is innovative as it is deployed post-hoc on top of already pretrained LMs, rather than requiring joint multi-task training.",
"Moreover, LEXFIT is: 1) more efficient, as it does not incur the overhead of masked language modeling pretraining; and 2) more versatile, as it can be ported to any model independently from its architecture or original training objective.",
"Finally, our results demonstrate the usefulness of LEXFIT : we report large gains over WEs extracted from vanilla LMs and over traditional WE models across 8 languages and 4 lexical tasks, even with very limited and noisy external lexical knowledge, validating the rewiring hypothesis .",
"The code is available at: https://github.com/cambridgeltl/lexfit .",
"The motivation for this work largely stems from the recent work on probing and analyzing pretrained language models for various types of knowledge they might implicitly store (e.g., syntax, world knowledge) (Rogers et al., 2020).",
"Here, we focus on their lexical semantic knowledge (Vulic et al., 2020; Liu et al., 2021), with an aim of extracting high-quality static word embeddings from the parameters of the input LMs.",
"In what follows, we describe lexical fine-tuning via dual-encoder networks (2.1), followed by the WE extraction pro-1 These approaches are connected as they are both trained via contrastive learning on dual-encoder architectures, but they provide representations for a different granularity of meaning.",
"2.1 LEXFIT : Methodology Our hypothesis is that the pretrained LMs can be turned into effective static decontextualized word encoders via additional inexpensive lexical fine-tuning (i.e., LEXFIT -ing) on lexical pairs from an external resource.",
"In other words, they can be specialized to encode lexical knowledge useful for downstream tasks, e.g., lexical semantic similarity (Wieting et al., 2015; Mrkic et al., 2017; Ponti et al., 2018).",
"Let P = { ( w, v, r ) m } Mm =1 refer to the set of M external lexical constraints.",
"Each item p P comprises a pair of words w and v , and denotes a semantic relation r that holds between them (e.g., synonymy, antonymy).",
"Further, let P r denote a subset of P where a particular relation r holds for each item, e.g., P syn is a set of synonymy pairs.",
"Finally, for each positive tuple ( w, v, r ) , we can construct 2 k negative no-relation examples by randomly pairing w with another word w ,k (cid:48) , and pairing v with v ,k (cid:48) , k (cid:48) = 1 , . . . , k , ensuring that these negative pairs do not occur in P .",
"We refer to the full set of negative pairs as NP .",
"Lexical fine-tuning then leverages P and NP ; We propose to tune the underlying LMs (e.g., BERT, mBERT), using external lexical knowledge, via different loss functions, relying on dual-encoder networks with shared LM weights and mean pooling, as illustrated in Figure",
"1. We now briefly describe several loss functions, evaluated later in 4.",
"Classification Loss.",
"Similar to prior work on sentence-level text inputs (Reimers and Gurevych, 2019), for each input word pair ( w, v ) we concatenate their d -dimensional encodings w and v (obtained after passing them through BERT and after pooling, see Figure 1) with their element-wise difference | w v | .",
"The objective is then: L = softmax (cid:0) W ( w v | w v | ) (cid:1) .",
"denotes concatenation, and W R 3 d c is a trainable weight matrix of the softmax classifier, where c is the number of classification classes.",
"We experiment with two variants of this objective, termed SOFTMAX henceforth: in the simpler binary variant, the goal is to distinguish between positive synonymy pairs (the subset P syn ) and the corresponding set of 2 k | P syn | no-relation negative pairs.",
"In the ternary variant ( c = 3 ), the classi-fier must distinguish between synonyms ( P syn ), antonyms ( P ant ), and no-relation negatives.",
"The classifiers are optimized via standard cross-entropy.",
"Ranking Loss.",
"The multiple negatives ranking loss ( MNEG ) is inspired by prior work on learning universal sentence encoders (Cer et al., 2018; Henderson et al., 2019, 2020); the aim of the loss, now adapted to word-level inputs, is to rank true synonymy pairs from P syn over randomly paired words.",
"The similarity between any two words w and v is quantified via the similarity function S operating on their encodings S ( w i , w j ) .",
"In this work we use the scaled cosine similarity following Henderson et al. (2019): S ( w i , w j ) = C cos ( w 1 , w 2 ) , where C is the scaling constant.",
"Lexical fine-tuning with MNEG then proceeds in batches of B pairs ( w i , v i ) , . . . , ( w B , v B ) from P syn , with the MNEG loss for a single batch computed as follows: L = B (cid:88) i =1 S ( w i , v i ) + B (cid:88) i =1 log B (cid:88) j =1 ,j (cid:54) = i e S ( w i , v j ) (2) Effectively, for each batch Eq.",
"(2) maximizes the similarity score of positive pairs ( w i , v i ) , and minimizes the score of B 1 random pairs.",
"For simplicity, as negatives we use all pairings of w i with v j -s in the current batch where ( w i , v j ) (cid:54) P syn (Yang et al., 2018; Henderson et al., 2019).",
"Multi-Similarity Loss.",
"We also experiment with a recently proposed state-of-the-art multi-similarity loss of Wang et al. (2019), labeled MSIM .",
"The aim is again to rank positive examples from P syn above any corresponding no-relation 2 k negatives from NP .",
"Again using the scaled cosine similarity scores, the adapted MSIM loss per batch of B positive pairs ( w i , v i ) from P syn is defined as follows: L = 1 BB (cid:88) i =1 (cid:32) log (cid:16) 1 + k (cid:88) k (cid:48) =1 e C ( cos ( w i , w i , , k (cid:48) ) (cid:15) ) (cid:17) + 1 C log (cid:16) 1 + e C ( cos ( w i , v i ) (cid:15) ) (cid:17)(cid:33) .",
"(3) For brevity, in Eq.",
"(3) we only show the formulation with the k negatives associated with w i , but the reader should be aware that the complete loss function contains another term covering k negatives v i, ,k (cid:48) associated with each v i .",
"C is again the scaling constant, and (cid:15) is the offset applied on the similarity matrix.",
"2 MSIM can be seen as an extended variant of the MNEG ranking loss.",
"Finally, for any input word w , we extract its word vector via the approach outlined in 2.2; exactly the same approach can be applied to the original LMs (e.g., BERT) or their lexically fine-tuned variants (L EXFIT -ed BERT), see Figure",
"1. 2.2 Extracting Static Word Representations The extraction of static type-level vectors from any underlying Transformer-based LM, both before and after LEXFIT fine-tuning, is guided by best practices from recent comparative analyses and probing work (Vulic et al., 2020; Bommasani et al., 2020).",
"Starting from an underlying LM with N Transformer layers { L 1 (bottom layer) , . . . , LN (top) } and referring to the embedding layer as L 0 , we extract a decontextualized word vector for some input word w , fed into the LM in isolation without any surrounding context, following Vulic et al. (2020): 1) w is segmented into 1 or more of its constituent subwords [ sw i ] , i 1 , where [] refers to the sequence of i subwords; 2) Special tokens [ CLS ] and [ SEP ] are respectively prepended and appended to the subword sequence, and the sequence [ CLS ][ sw i ][ SEP ] is then passed through the LM; 3) The final representation is constructed as the average over the subword encodings further averaged over n N layers (i.e., all layers up to layer L n included, denoted as AVG ( n ) ).",
"3 Further, Vulic et al. (2020) empirically verified that:",
"(a) discarding final encodings of [ CLS ] and [ SEP ] produces better type-level vectors we follow this heuristic in this work; and",
"(b) excluding higher layers from the average may also result in stronger vectors with improved performance in lexical tasks.",
"This approach operates fully in isolation (ISO): we extract vectors of words without any surrounding context.",
"The ISO approach is lightweight: 1) it disposes of any external text corpora; 2) it encodes words efficiently due to the absence of context.",
"Moreover, it allows us to directly study the richness of lexical information stored in the LM's parameters, and to combine it with ISO lexical knowledge from external resources (e.g., WordNet).",
"Languages and Language Models.",
"Our language selection for evaluation is guided by the following (partially clashing) constraints (Vuli c et al., 2020):",
"a) availability of comparable pretrained monolingual LMs;",
"b) task and evaluation data availabil-3 Note that this always includes the embedding layer ( L 0 ).",
"ity; and",
"c) ensuring some typological diversity of the selection.",
"The final test languages are English ( EN ), German ( DE ), Spanish ( ES ), Finnish ( FI ), Italian ( IT ), Polish ( PL ), Russian ( RU ), and Turkish ( TR ).",
"For comparability across languages, we use monolingual uncased BERT Base models for all languages ( N = 12 Transformer layers, 12 attention heads, hidden layer dimensionality is 768), available (see the appendix) via the HuggingFace repository (Wolf et al., 2020).",
"External Lexical Knowledge.",
"We use the standard collection of EN lexical constraints from previous work on (static) word vector specialization (Zhang et al., 2014; Ono et al., 2015; Vulic et al., 2018; Ponti et al., 2018, 2019).",
"It covers the lexical relations from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009); it comprises 1,023,082 synonymy ( P syn ) word pairs and 380,873 antonymy pairs ( P ant ).",
"For all other languages, we rely on non-curated noisy lexical constraints, obtained via an automatic word translation method by Ponti et al. (2019); see the original work for the details of the translation procedure.",
"LEXFIT : Technical Details.",
"The implementation is based on the SBERT framework (Reimers and Gurevych, 2019), using the suggested settings: AdamW (Loshchilov and Hutter, 2018); learning rate of 2 e 5 ; weight decay rate of 0 .",
"01 , and we run LEXFIT for 2 epochs.",
"The batch size is 512 with MNEG , and 256 with SOFTMAX and MSIM , where one batch always balances between B positive examples and 2 k B negatives (see 2.1).",
"Word Vocabularies and Baselines.",
"We extract decontextualized type-level WEs in each language both from the original BERTs (termed BERT-REG ) 4 and the LEXFIT -ed BERT models for exactly the same vocabulary.",
"Following Vulic et al. (2020), the vocabularies cover the top 100K most frequent words represented in the respective fastText (FT) vectors, trained on lowercased monolingual Wikipedias by Bojanowski et al. (2017).",
"5 The equivalent vocabulary coverage allows for a direct comparison of all WEs regardless of the induc-tion/extraction method; this also includes the FT 4 For the baseline BERT-REG WEs, we report two variants:",
"(a) all performs layerwise averaging over all Transformer layers (i.e., AVG ( 12) );",
"(b) best reports the peak score when potentially excluding highest layers from the layer averaging (i.e., AVG ( n ) , n 12 ; see 2.2) (Vulic et al., 2020).",
"5 Note that the LEXFIT procedure does not depend on the chosen vocabulary, as it operates only on the lexical items found in the external constraints (i.e., the set P ).",
"vectors, used as baseline traditional static WEs (termed FASTTEXT . WIKI ) in all evaluation tasks.",
"Task 1: Lexical semantic similarity (LSIM) is an established intrinsic task for evaluating static WEs (Hill et al., 2015).",
"We use the recent comprehensive multilingual LSIM benchmark Multi-SimLex (Vulic et al., 2020), which comprises 1,888 pairs in 13 languages, for our EN , ES , FI , PL , and RULSIM evaluation.",
"We also evaluate on a verb-focused ENLSIM benchmark: SimVerb-3500 (SV) (Gerz et al., 2016), covering 3,500 verb pairs, and SimLex-999 (SL) for DE and IT (999 pairs) (Le-viant and Reichart, 2015).",
"6 Task 2: Bilingual Lexicon Induction (BLI) , a standard task to assess the semantic quality of static cross-lingual word embeddings (CLWEs) (Ruder et al., 2019), enables investigations on the alignability of monolingual type-level WEs in different languages before and after the LEXFIT procedure.",
"We learn CLWEs from monolingual WEs obtained with all WE methods using the established and supervision-lenient mapping-based approach (Mikolov et al., 2013a; Smith et al., 2017) with the VECMAP framework (Artetxe et al., 2018).",
"We run main BLI evaluations for 10 language pairs spanning EN , DE , RU , FI , TR .",
"7 Task 3: Lexical Relation Prediction (RELP).",
"We assess the usefulness of lexical knowledge in WEs to learn relation classifiers for standard lexical relations (i.e., synonymy , antonymy , hyper-nymy , meronymy , plus no relation ) via a state-of-the-art neural model for RELP which learns solely based on input type-level WEs (Glava and Vulic, 2018).",
"We use the WordNet-based evaluation data of Glava and Vulic (2018) for EN , DE , ES ; they contain 10K annotated word pairs per language, 8K for training, 2K for test, balanced by class and in the splits.",
"We extract evaluation data for two more languages: FI and IT .",
"We report micro-averaged F 1 scores, averaged across 5 runs for each input WE space; the default RELP model setting is used.",
"In RELP and LSIM, we remove all training and test 6 The evaluation metric is the Spearman's rank correlation between the average of human LSIM scores for word pairs and the cosine similarity between their respective WEs.",
"7 A standard BLI setup and data from Glava et al. (2019) is adopted: 5K training word pairs are used to learn the mapping, and another 2K pairs as test data.",
"The evaluation metric is standard Mean Reciprocal Rank (MRR).",
"For EN ES , we run experiments on MUSE data (Conneau et al., 2018).",
"Task 4: Lexical Simplification (LexSIMP) aims to automatically replace complex words (i.e., specialized terms, less-frequent words) with their simpler in-context synonyms, while retaining grammaticality and conveying the same meaning as the more complex input text (Paetzold and Specia, 2017).",
"Therefore, discerning between semantic similarity (e.g., synonymy injected via LEXFIT ) and broader relatedness is critical for LexSIMP (Glava and Vulic, 2018).",
"We adopt the standard LexSIMP evaluation protocol used in prior research on static WEs (Ponti et al., 2018, 2019).",
"1) We use Light-LS (Glava and tajner, 2015), a language-agnostic LexSIMP tool that makes simplifications in an unsupervised way based solely on word similarity in an input (static) WE space; 2) we rely on standard LexSIMP benchmarks, available for EN (Horn et al., 2014), IT (Tonelli et al., 2016), and ES (Saggion, 2017); and 3) we report the standard Accuracy scores (Horn et al., 2014).",
"9 Important Disclaimer.",
"We note that the main purpose of the chosen evaluation tasks and experimental protocols is not necessarily achieving state-of-the-art performance, but rather probing the vectors in different lexical tasks requiring different types of lexical knowledge, 10 and offering fair and insightful comparisons between different LEXFIT variants, as well as against standard static WEs (fastText) and non-tuned BERT-based static WEs.",
"The main results for all four tasks are summarized in Tables 1-4, and further results and analyses are available in 4.1 (with additional results in the ap-pendix).",
"These results offer multiple axes of comparison, discussed in what follows.",
"Comparison to Other Static Word Embeddings.",
"The results over all 4 tasks indicate that static WEs from LEXFIT ed monolingual BERT 1) outperform traditional WE methods such as FT, and 2) offer also large gains over WEs originating from non-L EXFIT ed BERTs (Vulic et al., 2020).",
"These re-8 In BLI and RELP, we do PCA ( d = 300 ) on all input WEs, which slightly improves performance.",
"sults demonstrate that the inexpensive lexical fine-tuning procedure can indeed turn large pretrained LMs into effective decontextualized word encoders, and this can be achieved for a reasonably wide spectrum of languages for which such pretrained LMs exist.",
"What is more, LEXFIT for all non-EN languages has been run with noisy automatically translated lexical constraints, which holds promise to support even stronger static LEXFIT based WEs with human-curated data in the future, e.g., extracted from multilingual WordNets (Bond and Foster, 2013), PanLex (Kamholz et al., 2014), or BabelNet (Ehrmann et al., 2014).",
"The results give rise to additional general implications.",
"First, they suggest that the pretrained LMs store even more lexical knowledge than thought previously (Ethayarajh, 2019; Bommasani et al., 2020; Vulic et al., 2020); the role of LEXFIT fine-tuning is simply to rewire' and expose that knowledge from the LM through (limited) lexical-level supervision.",
"To further investigate the rewiring' hypothesis, in 4.1, we also run LEXFIT with a drastically reduced amount of external knowledge.",
"BERT-REG vectors display large gains over FT vectors in tasks such as RELP and LexSIMP, again hinting that plenty of lexical knowledge is stored in the original parameters.",
"However, they still lag FT vectors for some tasks (BLI for all language pairs; LSIM for ES , RU , PL ).",
"However, LEXFIT -ed BERT-based WEs offer large gains and outperform FT WEs across the board.",
"Our results indicate that classic' WE models such as skip-gram (Mikolov et al., 2013b) and FT are undermined even in their last field of use, lexical tasks.",
"This comes as a natural finding, given that word2vec and FT can in fact be seen as reduced and training-efficient variants of full-fledged language models (Bengio et al., 2003).",
"The modern LMs are pretrained on larger training data with more parameters and with more sophisticated Transformer-based neural architectures.",
"However, it has not been verified before that effective static WEs can be distilled from such LMs.",
"Efficiency differences aside, this begs the following discussion point for future work: with the existence of large pretrained LMs, and effective methods to extract static WEs from them, as proposed in this work, how useful are traditional WE models still in NLP applications?",
"indicate that all LEXFIT variants are effective and can expose the lexical knowledge from the fine-tuned",
"BERTs.",
"However, there are differences across their task performance: the ranking-based MNEG and MSIM variants display stronger performance on similarity-based ranking lexical tasks such as LSIM and BLI.",
"The classification-based SOFTMAX objective is, as expected, better aligned with the RELP task, and we note slight gains with its ternary variant which leverages extra antonymy knowledge.",
"This finding is well aligned with the recent findings demonstrating that task-specific pretraining results in stronger (sentence-level) task performance (Glass et al., 2020; Henderson et al., 2020; Lewis et al., 2020).",
"In our case, we show that task-specific lexical fine-tuning can reshape the underlying LM's parameters to not only act as a universal word encoder , but also towards a particular lexical task .",
"The per-epoch time measurements from Table 1 validate the efficiency of LEXFIT as a post-training fine-tuning procedure.",
"Previous approaches that attempted to inject lexical information (i.e., word senses and relations) into large LMs (Lauscher et al., 2020; Levine et al., 2020) relied on joint LM (re)training from scratch : it is effectively costlier than training the original BERT models.",
"Performance across Languages and Tasks.",
"As expected, the scores in absolute terms are highest for EN : this is attributed to",
"(a) larger pretraining LM data as well as",
"(b) to clean external lexical knowledge.",
"However, we note encouragingly large gains in target languages even with noisy translated lexical constraints.",
"LEXFIT variants show similar relative patterns across different languages and tasks.",
"We note that, while BERT-REG vectors are unable to match FT performance in the BLI task, our LEXFIT methods (e.g., see MNEG and MSIMBLI scores) outperform FT WEs in this task Full 100k 50k 20k 10k 5k #ofL EXFIT fine-tuningexamples 50 55 60 65 70 75 S p e a r m a n c o rr e l a t i o n EN EN (SV) ES FI IT (SL)",
"; = SOFTMAX (the binary variant plotted for RELP and BLI, ternary for RELP).",
"The numbers in the parentheses denote performance of FT vectors.",
"The full results with more languages and LEXFIT variants are in the appendix.",
"as well, offering improved alignability (Sgaard et al., 2018) between monolingual WEs.",
"The large gains of BERT-REG over FT in RELP and LexSIMP across all evaluation languages already suggest that plenty of lexical knowledge is stored in the pretrained BERTs' parameters; however, LEXFIT -ing the models offers further gains in LexSIMP and RELP across the board, even with limited external supervision (see also Figure 2c).",
"High scores with FI in LSIM and BLI are aligned with prior work (Virtanen et al., 2019; Rust et al., 2021) that showcased strong monolingual performance of FIBERT in sentence-level tasks.",
"Along this line, we note that the final quality of LEXFIT based WEs in each language depends on several factors: 1) pretraining data; 2) the underlying LM; 3) the quality and amount of external knowledge.",
"The multi-component LEXFIT framework allows for a plethora of additional analyses, varying components such as the underlying LM, properties of the LEXFIT variants (e.g., negative examples, fine-tuning duration, the amount of lexical constraints).",
"We now analyze the impact of these components on the lexical quality of the LEXFIT -tuned static WEs.",
"Unless noted otherwise, for computational feasibility and to avoid clutter, we focus 1) on a subset of target languages: EN , ES , FI , IT , 2) on the MSIM variant ( k = 1 ), which showed robust perfor-EN ES FI IT Language 50 55 60 65 70 75 S p e a r m a n c o rr e l a t i o n MONOBERTMBERT",
"mance in the main experiments before, and 3) on LSIM, BLI, and RELP as the main tasks in these analyses, as they offer a higher language coverage.",
"Varying the Amount of Lexical Constraints.",
"We also probe what amount of lexical knowledge is required to turn BERTs into effective decontextualized word encoders by running tests with reduced lexical sets P sampled from the full set.",
"The scores over different P sizes, averaged over 5 samples per each size, are provided in Figure 2, and we note that they extend to other evaluation languages and LEXFIT objectives.",
"As expected, we do observe performance drops with fewer external data.",
"However, the decrease is modest even when relying on n = 2 4 6 8 10 12 LSIMEN : REG 51.6 51.8 50.7 49.5 48.0 46.7 EN : MSIM 58.8 61.5 64.2 65.0 71.7 74.3 FI : REG 57.3 59.8 61.5 61.1 59.3 55.3 FI : MSIM 57.0 64.1 66.6 69.6 70.2 71.1 BLIEN FI : REG 39.2 43.8 47.6 48.6 48.3 47.1 EN FI : MSIM 40.2 45.6 50.7 54.3 56.1 57.7 Table 5: Task performance of WEs extracted via layerwise averaging over different Transformer layers (AVG ( n ) extraction variants; 2.2) for a selection of tasks and languages.",
"only 5k external constraints (e.g., see the scores in BLI and RELP for all languages; EN Multi-SimLex score is 69.4 with 50k constraints, 65.0 with 5k), or even non-existent (RELP in FI ).",
"Remarkably, the LEXFIT performance with only 10k or 5k fine-tuning pairs 11 remains substantially higher than with FT or BERT-REG WEs in all tasks.",
"This empirically validates LEXFIT 's sample efficiency and further empirically corroborates our knowledge rewiring hypothesis: the original LMs already contain plenty of useful lexical knowledge implicitly, and even a small amount of external supervision can expose that knowledge.",
"Copying or Rewiring Knowledge?",
"Large gains over BERT-REG even with mere 5k pairs (LEXFIT ing takes only a few minutes), where the large portion of the 100K word vocabulary is not covered in the external input, further reveal that LEXFIT does not only copy the knowledge of seen words and relations into the LM: it leverages the (small) external set to generalize to uncovered words.",
"We confirm this hypothesis with another experiment where our input LM is the same BERT Base architecture parameters with the same subword vocabulary as English BERT, but with its parameters now randomly initialized using the Xavier initialization (Glorot and Bengio, 2010).",
"Running LEXFIT on this model for 10 epochs with the full set of lexical constraints (see 3) yields the following LSIM scores: 23.1 (Multi-SimLex) and 14.6 (SimVerb), and the English RELP accuracy score of 61.8%.",
"The scores are substantially higher than those of fully random static WEs (see also the ap-pendix), which indicates that the LEXFIT procedure does enable storing some lexical knowledge into the model parameters.",
"However, at the same 11 When sampling all reduced sets, we again deliberately excluded all words occurring in our LSIM benchmarks.",
"time, these scores are substantially lower than the ones achieved when starting from LM-pretrained models, even when LEXFIT is run with mere 5k fine-tuning lexical pairs.",
"12 This again strongly suggests that LEXFIT 'unlocks' already available lexical knowledge stored in the pretrained LM, yielding benefits beyond the knowledge available in the external data.",
"Another line of recent work (Liu et al., 2021) further corroborates our findings.",
"Multilingual LMs.",
"Prior work indicated that massively multilingual LMs such as multilingual BERT (mBERT) (Devlin et al., 2019) and XLM-R (Con-neau et al., 2020) cannot match the performance of their language-specific counterparts in both lexical (Vulic et al., 2020) and sentence-level tasks (Rust et al., 2021).",
"We also analyze this conjecture by LEXFIT -ing mBERT instead of monolingual BERTs in different languages.",
"The results with MSIM ( k = 1 ) are provided in Figure 4; we observe similar comparison trends with other languages and LEXFIT variants, not shown due to space constraints.",
"While LEXFIT -ing mBERT offers huge gains over the original mBERT model, sometimes even larger in relative terms than with monolingual BERTs (e.g., LSIM scores for EN in-crease from 0.21 to 0.69, and from 0.24 to 0.60 for FI ; BLI scores for EN-FI rise from 0.21 to 0.37), it cannot match the absolute performance peaks of LEXFIT -ed monolingual BERTs.",
"Storing the knowledge of 100+ languages in its limited parameter budget, mBERT still cannot capture monolingual knowledge as accurately as language-specific BERTs (Conneau et al., 2020).",
"However, we believe that its performance with LEXFIT may be further improved by leveraging recently proposed multilingual LM adaptation strategies that mitigate a mismatch between shared multilingual and language-specific vocabularies (Artetxe et al., 2020; Chung et al., 2020; Pfeiffer et al., 2020); we leave this for future work.",
"Layerwise Averaging.",
"A consensus in prior work (Tenney et al., 2019; Ethayarajh, 2019; Vulic et al., 2020) points that out-of-context lexical knowledge in pretrained LMs is typically stored in bottom Transformer layers (see Table 5).",
"However, Table 5 also reveals that this does not hold after LEXFIT ing: the tuned model requires knowledge from all layers to extract effective decontextualized WEs and reach peak task scores.",
"Effectively, this means 12 The same findings hold for other tasks and languages.",
"that, through lexical fine-tuning, model reformats all its parameter budget towards storing useful lexical knowledge, that is, it specializes as (decontex-tualized) word encoder.",
"Varying the Number of Negative Examples and their impact on task performance is recapped in Figure 3b.",
"Overall, increasing k does not benefit (and sometimes even hurts) performance the exceptions are ENLSIM; and the RELP task with the SOFTMAX variant for some languages.",
"We largely attribute this to the noise in the target-language lexical pairs: with larger k values, it becomes increasingly difficult for the model to discern between noisy positive examples and random negatives.",
"Longer Fine-Tuning.",
"Instead of the standard setup with 2 epochs (see 3), we run LEXFIT for 10 epochs.",
"The per-epoch snapshots of scores are summarized in the appendix.",
"The scores again validate that LEXFIT is sample-efficient: longer fine-tuning yields negligible to zero improvements in ENLSIM and RELP after the first few epochs, with very high scores achieved after epoch 1 already.",
"It even yields small drops for other languages in LSIM and BLI: we again attribute this to slight overfitting to noisy target-language lexical knowledge.",
"We proposed LEXFIT , a lexical fine-tuning procedure which transforms pretrained LMs such as BERT into effective decontextualized word encoders through dual-encoder architectures.",
"Our experiments demonstrated that the lexical knowledge already stored in pretrained LMs can be further exposed via additional inexpensive LEXFIT ing with (even limited amounts of) external lexical knowledge.",
"We successfully applied LEXFIT even to languages without any external human-curated lexical knowledge.",
"Our LEXFIT word embeddings (WEs) outperform traditional static WEs (e.g., fastText) across a spectrum of lexical tasks across diverse languages in controlled evaluations, thus directly questioning the practical usefulness of the traditional WE models in modern NLP.",
"Besides inducing better static WEs for lexical tasks, following the line of lexical probing work (Ethayarajh, 2019; Vulic et al., 2020), our goal in this work was to understand how (and how much) lexical semantic knowledge is coded in pretrained LMs, and how to unlock' the knowledge from the LMs.",
"We hope that our work will be beneficial for all lexical tasks where static WEs from traditional WE models are still largely used (Schlechtweg et al., 2020; Kaiser et al., 2021).",
"Despite the extensive experiments, we only scratched the surface, and can indicate a spectrum of future enhancements to the proof-of-concept LEXFIT framework beyond the scope of this work.",
"We will test other dual-encoder loss functions, including finer-grained relation classification tasks (e.g., in the SOFTMAX variant), and hard (instead of random) negative examples (Wieting et al., 2015; Mrkic et al., 2017; Lauscher et al., 2020; Kalan-tidis et al., 2020).",
"While in this work, for simplicity and efficiency, we focused on fully decontextualized ISO setup (see 2.2), we will also probe alternative ways to extract static WEs from pretrained LMs, e.g., averages-over-context (Liu et al., 2019; Bommasani et al., 2020; Vulic et al., 2020).",
"We will also investigate other approaches to procuring more accurate external knowledge for LEXFIT in target languages, and extend the framework to more languages, lexical tasks, and specialized domains.",
"We will also focus on reducing the gap between pretrained monolingual and multilingual LMs.",
"We thank the three anonymous reviewers, Nils Reimers, and Jonas Pfeiffer for their helpful comments and suggestions.",
"Ivan Vulic and Anna Korhonen are supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no. 648909) awarded to Korhonen, and the ERC PoC Grant MultiConvAI: Enabling Multilingual Conversational AI (no. 957356).",
"Goran Glava is supported by the Baden Wrttemberg Stiftung (Eliteprogramm, AGREE grant)."
] | [
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"method",
"other",
"other",
"other"
] |
[
"Existing software-based energy measurements of NLP models are not accurate because they do not consider the complex interactions between energy consumption and model execution.",
"We present IrEne, an interpretable and extensible energy prediction system that accurately predicts the inference energy consumption of a wide range of Transformer-based NLP models.",
"IrEne constructs a model tree graph that breaks down the NLP model into modules that are further broken down into low-level machine learning (ML) primitives.",
"IrEne predicts the inference energy consumption of the ML primitives as a function of generalizable features and fine-grained runtime resource usage.",
"IrEne then aggregates these low-level predictions recursively to predict the energy of each module and finally of the entire model.",
"Experiments across multiple Transformer models show IrEne predicts inference energy consumption of transformer models with an error of under 7% compared to the ground truth.",
"In contrast, existing energy models see an error of over 50%.",
"We also show how IrEne can be used to conduct energy bottleneck analysis and to easily evaluate the energy impact of different architectural choices.",
"We release the code and data at https:// github.com/StonyBrookNLP/irene .",
"Accurately measuring the energy consumption of NLP models is becoming ever more important.",
"Models are growing exponentially, with billions, even approaching trillions, of parameters with correspondingly large resource consumption (e.g. GPT-3 (Brown et al., 2020) has 175 billion parameters and Switch Transformers can have 1.6 trillion parameters (Fedus et al., 2021)).",
"Recent works have sought to estimate energy consumption and suggest ways to reduce the resulting costs and carbon impacts (Strubell et al., 2019; Schwartz et al., 2019; Henderson et al., 2020; Anthony et al., 2020) Unfortunately, there are no easy-to-use and accurate solutions for measuring or predicting the energy consumption.",
"On the one hand, measuring energy consumption directly through hardware power monitors is not feasible as it requires exclusive access to the hardware and detailed instrumentation.",
"On the other hand, there are software models that predict energy as a function of resource utilization (Strubell et al., 2019; Henderson et al., 2020) but these energy prediction models are inaccurate (Cao et al., 2020).",
"The inaccuracy stems from the prediction models not accounting for the complex interactions between energy consumption and resource utilization.",
"In this work, we focus on inference energy which can incur substantial costs especially for models that support high-volume web services.",
"We ask how we can build an energy prediction method that is accurate, interpretable, and extensible.",
"We make three contributions in answering this question.",
"First, we frame the problem of interpretable energy prediction over a model tree abstraction.",
"This abstraction represents the model as the root node that is composed from model-specific modules, which themselves are recursively composed from lower-level machine learning (ML) primitives, ones that are not model-specific.",
"Given a model, the energy prediction problem is framed as the task of predicting the energy of all the nodes in its model tree abstraction.",
"The result is that IrEne can predict not only the inference energy consumption of the entire model, but also of its components, making the energy prediction highly interpretable.",
"Second, we develop IrEne, that includes a multilevel prediction method that predicts energy in all nodes of the abstraction tree in a bottom-up fashion using resource utilization and model description features.",
"For each of the leaf-nodes that are re-used in different models, the ML primitives, IrEne uses a separate regressor trained on ground-truth energy measurements.",
"One simple way to get energy for all other higher-level nodes is to recursively sum-up the values.",
"While this works reasonably well (even better than a prior prediction model), direct summing of the raw predictions is sub-optimal because the error can propagate through the model tree thus making upper-level nodes estimation more erroneous.",
"Instead, we learn a single regressor for all intermediate nodes, one that essentially adjusts the sum of children's predicted energy values based on features of the children.",
"Since IrEne is built on top of energy predictions of ML primitives that are not model specific, it is generalizable and can be used to predict the energy for previously unseen (Transformer-based) models.",
"Third, to evaluate IrEne, we create an evaluation dataset with ground-truth energy measurements for multiple Transformer-based models at all levels in the model tree abstraction.",
"Evaluations show that IrEne is more accurate with an average model-level energy error of 5 7 % compared against the ground-truth, while existing software-based method (Strubell et al., 2019) has over 55% error.",
"The module-level energy errors are also substantially small showing that IrEne is both accurate and interpretable.",
"Last, we also conduct multiple analyses that show the utility of IrEne for interpretable energy predictions.",
"Over the last couple of years, there has been increased interest in the energy consumption of NLP models, starting with the work by Strubell et al. (Strubell et al., 2019).",
"This work, and a follow up software framework called experiment-impact-tracker (Henderson et al., 2020) tracks the resource (i.e., CPU, GPU, memory) utilization of an NLP model and predicts energy consumption as a function of resources.",
"However, our previous study shows that this type of resource utilization only modeling can be highly inaccurate (Cao et al., 2020).",
"This is in part due to the complex relationship between resource utilization and energy consumption.",
"Further, there are other activities that are not accounted via resource utilization such as data movement in GPU memory which can also cause significant energy footprint (Chen et al., 2016; Boroumand et al., 2018).",
"Other works (Zhou et al., 2020; Schwartz et al., 2019) report the energy numbers through alternate metrics including dollar cost or in terms of floating point operations.",
"However, these do not directly map to the energy consumption.",
"Energy prediction of applications on mobile devices is a well-studied topic in the systems community (Pathak et al., 2011, 2012; Yoon et al., 2012; Cao et al., 2017) but these work require fine-grained understanding of the application.",
"None of the existing systems predict energy for NLP applications.",
"We design the energy prediction model with three design goals:",
"(i) accurate prediction while incurring low profiling overheads; high overheads when measuring runtime resource utilization can hide the true energy costs of the NLP model,",
"(ii) provide interpretable energy analysis of the components inside the NLP model, especially for analyzing energy bottlenecks;",
"(iii) extensible and generalizable , in the sense that, they are trained once but can work on unseen NLP models to remain useful as new models emerge.",
"To achieve the above goals, we first need a representation of the NLP model that is at a suitable abstraction both from interpretability and generalization standpoints.",
"On the one hand, using only low-level abstractions such as the math operations can help with easy generalization to new models as their units are basic math (or other compute) operations that are building blocks of any model.",
"However, they lack interpretability since they don't directly convey the model architecture semantics.",
"For example, a BERT (Devlin et al., 2019) model has matrix multiplications in both the self-attention and feed forward layers.",
"Only having the energy of each matrix multiplication alone, without knowing which higher level logic units (i.e., either self-attention or feed forward layer) they belong to, does not help analyze if they are the bottlenecks for that particular unit.",
"On the other hand, high-level abstractions preserve the architecture semantics and are interpretable for practitioners, but they don't Module Level ML Level BertModel BertEmbeddings BertEncoder BertPooler Embedding:word LayerNorm BertLayer:0 BertAttention BertIntermediate BertOutput BertSelfAttention BertSelfOutput Linear:query matmul softmax Linear:dense LayerNorm Linear:dense Linear:dense LayerNorm Linear:dense Tanh Figure 1: A tree view of a 1-layer BERT model.",
"easily generalize to unseen models that may not",
"have the same modules used for training. Instead, we use a model tree abstraction that represents the model nodes in three-levels: math level, machine learning (ML) level and module level. Math level nodes are a finite set of mathematical operations (like addition, subtraction, matrix multiplication etc); they form model-agnostic ML level nodes (such as Linear, LayerNorm etc.), which further can be used to construct complex module level nodes. Module level nodes are groups of lower ML level node operations that reflect the logic units of the NLP algorithms defined by model authors. The model tree abstraction is such that each parent node captures computation of all of its children nodes. Figure 1 shows an example of a one-layer BERT (Devlin et al., 2019) model (omitted math level nodes). The execution of the model tree nodes can be in parallel, but current systems have a fixed sequential order for executing the sibling nodes. In this work, we only focus on sequential execution. Note that the model tree doesn't capture the order of execution. E.g., BertOutput appears right after BertIntermediate in BERT's computation graph, but here they'll be represented as siblings of the same parent BertLayer:0 , and their energy will be treated separately. The parent node BertLayer:0 encapsulates the energy and computation of its children node BertIntermediate , BertOutput , and BertAttention , in no particular order.",
"of a NLP model. Given a model tree abstraction of a NLP model M consisting of a set of nodes N = { n | n ml n mod } ( n ml is the set of ML level nodes, n mod is the set of module level nodes), for an input size I (a pair of batch size b and sequence length s ) 1 , we can predict the energy E n for every node n in the model tree. The energy of root node is the energy for the entire model.",
"Figure 2 shows the IrEne architecture. IrEne takes the user-specified model and builds an energy predictor for a target hardware device. The model is run once on the target hardware and the runtime resource utilization is logged. During this run, IrEne uses code instrumentation and just-in-time (JIT) run-time tracing to break down the model into sub-components, and extracts a model tree representation (see details in A).",
"IrEne then provides interpretable energy analysis by predicting the energy for every node in the model tree in a bottom-up fashion. At the leaves, where the nodes correspond to the ML primitives, IrEne uses separate regression models for each type of ML primitive (e.g., one regressor for Linear Layer, another for LayerNorm etc.). For the intermediate nodes, their energy is predicted recursively using a single regressor that makes a weighted combination of the predicted energy values from its children. For both types of regressors, they use features that are derived from resource utilization (e.g. cpu utilization) and generalized node features",
"1 The batch size and input sequence length together decide the amount of input data to the model, therefore, they both affect the model energy consumption.",
"IrEne represents higher-level modules via generalizable features and the ML primitives. Even if the intermediate modules are model-specific (e.g. Bert-SelfAttention), the features are general, allowing IrEne to predict energy of unseen models.",
"The IrEne model is trained using ground-truth energy measurements of ML primitives and a handful of NLP models; we use a highly accurate hardware power monitor to measure ground truth energy (A). Of course, one can use the power monitor to measure energy directly at runtime. However, this is cumbersome and requires physical access to the device which is not always feasible with cloud-based deployments. Further, the hardware meter only measures the total energy, which is not interpretable in terms of its components.",
"At the leaf-level, the energy prediction problem requires predicting the energy of ML primitives. As an offline step, IrEne first enumerates all relevant ML primitives and builds a specialized regressor for each primitive by training over ground truth data. In some cases, model developers can define their own ML primitives. We extract information about such custom primitives from the JIT trace. Formally, for a leaf node n with ML primitive i ,",
"PML i e ( n ) = W i feat ( n ) + b i (1) using primitive specific parameters W i the weight vector and b i the bias. We learn these parameters using a mean squared error loss between predicted P e ( n ) and ground-truth energy G e ( n ) .",
"Our hierarchical tree representation gives a naturally interpretable way of propagating this prediction through the tree. Since each node represents total computation of its children nodes, the total energy from children nodes should also roughly correspond to that of the parent node. Formally,",
"P e ( n ) = (cid:88) c child ( n ) P e ( c ) if n is non-leaf = PML i e ( n ) if n is leaf (2)",
"We call this baseline prediction model PredictedSum . This model is interpretable but naively summing up the energy values accumulates error going up the tree and results in noisy module-level predictions. To account for this, we use a weighted sum of child node energy, where the weights are learnt using node features. Formally,",
"P e ( n ) = (cid:88) c child ( n ) ( c ) P e ( c ) if n is non-leaf = PML i e ( n ) if n is leaf ( c ) = 1 + tanh ( W feat ( c ) + b ) / (3)",
"where W and b are parameters and is a hyperparameter. Unlike ML primitives, here we have a single regressor with one set of weight vector ( W ) and bias scalar ( b ) parameters across all non-leaf nodes of any type. Note that this single regressor doesn't predict node's energy directly, but determines how much the predicted energy from its child node should be scaled before summing the children node energy.",
"It does this recursively starting from the root, and hence encodes tree structure in its computation.",
"We do not learn node-specific regressors because that does not allow generalizing to new models that may have different modules than the ones during training.",
"Since the method is essentially calibrating the sum of the energy values, regularizing the model so that the computed weights on the energy values to be around 1 helps the learning.",
"We do this by equation 3, which makes the range of computed weights, ( c ) to be within 1 .",
"To supervise this model, we use the ground-truth energy from all the non-leaf nodes, and we train it in an end-to-end fashion.",
"Formally, loss ( n ) = (cid:88) s subtree ( n ) (cid:0) P e ( s ) G e ( s ) (cid:1) 2 G e ( s ) 2 (4) We scale the mean squared error with ground-truth energy, since scales of energy at different levels of the tree are vastly different.",
"We refer to this model as the End2End regressor, since the error signal in energy prediction of any node back-propagates through the whole subtree.",
"We use this training scheme in IrEne.",
"In our evaluation (sec-tion 5), we perform an ablation study to show why the tree structure and the end-to-end regressor is crucial for accuracy.",
"We design two categories of energy-relevant features in IrEne :",
"(i) the model features that reflect hardware-independent compute and memory information, and",
"(ii) the resource features that capture how the models use hardware resources and cause energy activities.",
"Table 1 shows the features used in IrEne.",
"For the model description related information, we use features that characterize the compute, memory, and size of input etc.",
"These are features that are independent of the underlying hardware.",
"For resource features, we use utilization, usage and clock speed of hardware components including CPU, memory and GPU.",
"Note that these two sets of features are extensible, meaning that one can add more either hardware-specific features or new model features.",
"See Appendix sections A.2 and A.3 for details on how we obtain these features.",
"Our evaluation is aimed at measuring the accuracy of IrEne relative to ground truth and the state-of-the-art.",
"We show the IrEne only causes 5-7% error for the model energy prediction.",
"We also show that for a given Transformer model, IrEne can be used to find the energy bottlenecks and analyze the energy versus task performance trade-offs.",
"flops : floating point operations (unit: million)",
"Target Hardware: we use 2 GPU-equipped desktop PCs as the target hardware for running our models.",
"See Table 2 for details.",
"Software and models: We perform inference in Transformer models using PyTorch (Paszke et al., 2019) v1.7 through the HuggingFace Transformers (Wolf et al., 2020) library.",
"The six models we study are BERT-base (Devlin et al., 2019), RoBERTa-base (Liu et al., 2019), Distill-BERT (Sanh et al., 2020), DistilGPT2 (Sanh et al., 2020; Radford et al., 2019), OpenAI GPT (Radford et al., 2018) and GPT2 (Radford et al., 2019).",
"Software-based Measurement Baseline: For comparisons, we use the software-based energy measurements provided by the experiment-impact-tracker (Henderson et al., 2020) which estimates energy as a function of the GPU, CPU, and memory utilization.",
"The method computes energy by aggregating resource usage as follows: e total = P UE (cid:80) p ( p dram e dram + p cpu e cpu + p gpu e gpu ) , where p resource 2 are the percentages of each system resource used by the attributable processes relative to the total in-use resources and e resource is the energy usage of that resource.",
"The constant 2 resources can be dram , cpu , gpu for power usage effectiveness (PUE) compensates for extra energy used to cool or heat data centers.",
"For each model, we obtain the model tree and for each node in it, we associate ground-truth energy measurements using the power monitor and its resource features using low-overhead logging (Sec-tion A).",
"For each node we run it repetitively for 20 seconds, since it often takes a very short time for one run (e.g. from 0.1 to 100 millisecond).",
"We repeat this process for five rounds (the variations are within <1%) and record the average energy as the ground-truth for the node.",
"We use 1 GPU to run all experiments.",
"We record the start and end timestamp of the model program, and extract the energy values by comparing and aligning the timestamps from the resource profiler logs and power monitor logs.",
"Ground Truth Energy: We measure ground truth energy using a emonPi power monitor (Hudson, 2021) which is open source.",
"The emonPi uses a clip-on CT sensor to monitor the energy consumed by the computer which records the passthrough current and voltage every 170 ms. This allows us to accurately measure the power draw at a sub second granularity.",
"We obtain current, voltage, and timestamp values from the power meter's builtin serial port.",
"The energy ( e ) consumed during a time period is then calculated using the sampled current ( I t ) and voltage ( V t ) values in that period: e = (cid:80) t V t I t .",
"To guarantee the consistency and reliability of the hardware energy measurements, we cool down the PCs after each experiment finishes to avoid potential overheating issue that can cause subsequent energy distortions.",
"We measure the standby power consumption (when the CPU load is < 0 . 1 %) and ensure before running the experiments that the PC does not draw more than the standby power.",
"Further, no other application is running during our experiments.",
"To understand the scale of energy usage, Table 3 shows the estimated energy consumption (in kWh) using our ground truth measurement.",
"We also show the cost of answering one million queries (in USD) when using a BERT-base model in a reading comprehension (over one passage), and in an end-to-end setting (over 150 passages) ignoring retrieval compute.",
"For reference, Google search handles millions of queries every minute (Kenshoo, 2019).",
"Energy Dataset: To evaluate the energy prediction, we create a dataset that cover a wide range of input sizes for the six studied Transformer models and the 24 BERT model variants (Turc et al., 2019).",
"Each instance in the dataset can be of type ML, Module or Model level and is associated with features shown in Table 1 and hardware measured energy.",
"We show the statistics of the dataset for BERT-base, DistilBERT and GPT2 in Table 4.",
"Energy Error Metric: We measure the energy error percentage as 100 | P E GE | /GE , where P E is the predicted energy and GE is the ground truth energy.",
"We compare IrEne with the existing software measurement methods (Strubell et al., 2019; Henderson et al., 2020).",
"We apply their method directly for all the models in our dataset.",
"Note that their method is a fully-defined estimation model with a fixed set of parameters without any training.",
"For IrEne experiments, we report cross-validated evaluation on the energy prediction dataset leaving data from one model out of training set and evaluating on it, and then repeating the same for all the models.",
"3 based on the US national average as of May 2021 according to https://www.electricchoice.com/ electricity-prices-by-state .",
"IrEne is accurate Table 5 shows the energy prediction errors at the model-level for all the models on the two PCs.",
"The existing software-based baseline method from Strubell et al. (2019) incurs large energy prediction errors of over 50%.",
"IrEne on the other hand incurs substantially lower errors, with at most 7.6% errors across the models, showing its value for reliable and accurate energy analysis.",
"As seen from the cumulative distribution function for the model errors in Figure 3, all of IrEne's errors are below 17% and nearly half of its errors are below 10%.",
"We note here that our leave-one-model-out cross validation specifically evaluates the generalizability of IrEne.",
"ML and Module Levels Errors are also low.",
"Table 7, 6 show a break down of the IrEne errors at the ML and module levels respectively.",
"Accurately predicting ML level energy is key to accurate predictions for at the module level and higher, as the errors will accumulate up the model tree in IrEne.",
"It turns out that we can indeed predict ML level energy with high-levels of accuracy errors are lower than 1%, providing reliable values for the module level predictions.",
"Note that even unseen models (ie ones evaluated in the test partition) will be made up of the same set of ML primitives (per-haps with different input and batch sizes).",
"The results here cannot be directly generalized to unseen ML-primitives.",
"Module level errors are higher and vary in range (5.4% to 16.7%) across different models.",
"Module level errors also turn out to be higher than the model level errors.",
"This is mainly because the module level errors are averages across all intermediate module level nodes in the model tree; some modules might have bigger errors, but these get calibrated by our End2End energy regressor.",
"We further characterize these effects in IrEne ablation and validation analysis.",
"Table 8 shows the contribution of model and resource features in IrEne energy prediction.",
"We observe that resource features provide most of the benefits for energy estimation IrEne for all levels, confirming that resource information is important for energy prediction.",
"Model features do not reduce ML level error because the error is already small, but they help further reduce the prediction errors for module and model levels and combining model and resource features together brings the average estimation errors further down to 8.5% and 5.5%.",
"To understand the impact of learning and the architectural choices of aggregating ML level energy into module level energy in IrEne affect the model accuracy, we build three (ablated) models:",
"Is end-to-end learning necessary?",
"To test this, we build a StepWise regressor that simply learns to predict the energy of parent node from the ground-truth energy of its child nodes at the training time.",
"At the test time, it uses predicted energy generating predictions from ground up.",
"P e ( n ) = (cid:88) c child ( n ) ( c ) G e ( c ) Training P e ( n ) = (cid:88) c child ( n ) ( c ) P e ( c ) Testing (5) Here, ( c ) and loss are still as defined in equation 3 and 4 respectively.",
"However, unlike the IrEne ( End2End ) regressor, the errors in the prediction of root node, do not backpropogate to its prediction of descendant nodes i.e. there is no end-to-end training.",
"Is tree-structure necessary?",
"To test this, we build an Unstructured regressor that ignores the tree structure completely, and directly predicts the energy from the feature representation of nodes (Mod-ule and Model level) using linear regression as in equation (1).",
"Unlike ML-level regressor though, here we need to use single set of parameters for common across the nodes.",
"Is learning necessary?",
"To test this, we use the PredictedSum model (equation 2).",
"Recall this model also aggregates energy predictions over the tree-structure but has no parameters to train.",
"Table 9 shows the ablation of IrEne with respect to different algorithmic choices of the module level energy aggregation.",
"First, we find that the regressor that ignores the tree structure ( Unstructured ) Machine System BERT-base DistilBERT RoBERTa-base GPT2 DistilGPT2 OpenaiGPT Average PC1 Strubell et al., 2019 57.9 56.3 62.5 62.6 55.9 61.8 57.8 IrEne 5.8 11.6 7.1 3.5 2.2 2.7 5.5 PC2 Strubell et al., 2019 55.1 52.6 58.9 54.6 49.8 60.6 55.6 IrEne 10.0 9.4 7.1 6.1 4.9 5.9 7.2 Table 5: Energy Prediction Errors at Model level: Comparing IrEne and a software measurement baseline for the two PCs.",
"performs significantly worse than all other regressors that do consider it.",
"Interestingly, learning without structure even performs worse than PredictedSum regressor that naively adds child energy without any learning, highlighting the importance of tree-structure.",
"Further, learnt weighted sum outperforms PredictedSum regressor.",
"In particular, End2End regressor performs better than StepWise regressor showing the importance of optimizing on whole tree in an end-to-end fashion.",
"In this section, we use the interpretable energy analysis from IrEne to show energy bottlenecks for",
"given Transformer models, how energy varies for different model architectures, and how it can be used to effectively pick accuracy-energy trade-offs.",
"Finding energy bottlenecks: We use IrEne to analyze the energy bottlenecks in Transformer models.",
"For simplicity of analysis, we predict the energy for modules that are immediate parents of the ML level nodes and use it calculate the percentage of energy it contributes to the model overall.",
"Table 10 shows the energy breakdown of two models: RoBERTa-base and GPT2.",
"We observe that self-attention layers in RoBERTa-base model consume 31% of the total energy while it is the feed forward layers in GPT2 that consume more than 59% of the energy.",
"The module level energy breakdown of all models in Table 12 in Appendix C. We also present the full energy breakdown of the BERT-base model and annotate each node with predicted energy percentage in Figure 5 in the Appendix.",
"We fine-tune BERT-24 models (Turc et al., 2019) on the Stanford Sentiment Treebank V2 (SST2) (Socher et al., 2013) using the default examples in the HuggingFace Transformers (Wolf et al., 2020) without any hyperparameter tuning.",
"We evaluate the accuracy on the dev set of SST2.",
"These Module Energy % RobertaSelfAttention 31.24 RobertaIntermediate 30.57 RobertaOutput 28.64 RobertaSelfOutput 09.11 RobertaEmbeddings 00.41 RobertaPooler 00.03",
"models are not part of our energy prediction training data.",
"We additionally exclude BERT-base from training data to show the extensibility of IrEne.",
"Given an energy budget, IrEne allows for selection of an optimal architecture that gets the highest accuracy for a task.",
"In Figure 4, we see that it is possible for models to use more energy but return lower accuracy than other models which might use less energy.",
"Similarly, given an accuracy target, we can choose an architecture with the lowest energy use.",
"For example, for a target of 88% accuracy or above, there are many such models ranging from 4J all the way to 12J.",
"Last, we point out that the trade-off curve based on the predicted energy mirrors that of the ground-truth well enough to be used as an accurate proxy.",
"This work focused on inference energy predictions of Transformers on a target hardware device.",
"The model tree abstraction is general and not tied to Transformer architectures nor to specific deep learning frameworks, it is extensible to other neural networks like LSTM and frameworks like Ten-sorFlow.",
"The abstraction is built from the computational graph and knowledge about the model architecture and underlying software.",
"As long as these are available we can apply our methodology to other architectures as well.",
"Predicting the training energy is an important and a more challenging problem.",
"We believe our methodology can be extended.",
"However, it will require tracking the energy of both forward and backward processes and even modeling other aspects training dynamics, for example, time to converge to specific accuracy.",
"Scaling to unseen hardware is an important and challenging area that needs further research.",
"It requires both measuring the ground truth energy for a more diverse collection of hardware and designing proper hardware-specific features (i.e., L1 cache size, CPU cores, etc.).",
"We believe IrEne's methodology can be extended to calibrate software reported energy as a way to scale how we collect ground truths (as weak-supervision).",
"In the future, we plan to study workloads on more hardware to choose proper features that capture the hardware energy differences.",
"Energy consumption of NLP models is an important consideration from a cost perspective and increasingly, from an environmental impact perspective as well.",
"Designing energy efficient and cost-effective models requires both accurate and interpretable energy modeling.",
"In this work, we showed that by carefully combining resource utilization with model description based features, we can develop a multi-level energy prediction model that is not only highly accurate but is also able to provide a break-down of how its different components contribute to its overall energy.",
"This material is based upon work supported by the National Science Foundation under Grant No 2007362."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"other"
] |
[
"Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right order.",
"Supervised RNNGs achieve strong language modeling and parsing performance, but require an annotated corpus of parse trees.",
"In this work, we experiment with unsupervised learning of RNNGs.",
"Since directly marginal-izing over the space of latent trees is intractable, we instead apply amortized variational inference.",
"To maximize the evidence lower bound, we develop an inference network parameterized as a neural CRF constituency parser.",
"On language modeling, unsupervised RNNGs perform as well their supervised counterparts on benchmarks in English and Chinese.",
"On constituency grammar induction, they are competitive with recent neural language models that induce tree structures from words through attention mechanisms.",
"1 Introduction Recurrent neural network grammars (RNNGs) (Dyer et al., 2016) model sentences by first generating a nested, hierarchical syntactic structure which is used to construct a context representation to be conditioned upon for upcoming words.",
"Supervised RNNGs have been shown to outperform standard sequential language models, achieve excellent results on parsing (Dyer et al., 2016; Kuncoro et al., 2017), better encode syntactic properties of language (Kuncoro et al., 2018), and correlate with electrophysiological responses in the human brain (Hale et al., 2018).",
"However, these all require annotated syntactic trees for training.",
"In this work, we explore unsupervised learning of recurrent neural network grammars for language modeling and grammar induction.",
"Work done while the first author was an intern at DeepMind.",
"Code available at https://github.com/harvardnlp/urnng The standard setup for unsupervised structure learning is to define a generative model p ( x , z ) over observed data x (e.g. sentence) and unobserved structure z (e.g. parse tree, part-of-speech sequence), and maximize the log marginal likelihood log p ( x ) = log (cid:80) z p ( x , z ) .",
"Successful approaches to unsupervised parsing have made strong conditional independence assumptions (e.g. context-freeness) and employed auxiliary objectives (Klein and Manning, 2002) or priors (John-son et al., 2007).",
"These strategies imbue the learning process with inductive biases that guide the model to discover meaningful structures while allowing tractable algorithms for marginalization; however, they come at the expense of language modeling performance, particularly compared to sequential neural models that make no independence assumptions.",
"Like RNN language models, RNNGs make no independence assumptions.",
"Instead they encode structural bias through operations that compose linguistic constituents.",
"The lack of independence assumptions contributes to the strong language modeling performance of RNNGs, but make unsupervised learning challenging.",
"First, marginalization is intractable.",
"Second, the biases imposed by the RNNG are relatively weak compared to those imposed by models like PCFGs.",
"There is little pressure for non-trivial tree structure to emerge during unsupervised RNNG (URNNG) learning.",
"In this work, we explore a technique for handling intractable marginalization while also injecting inductive bias.",
"Specifically we employ amortized variational inference (Kingma and Welling, 2014; Rezende et al., 2014; Mnih and Gregor, 2014) with a structured inference network.",
"Variational inference lets us tractably optimize a lower bound on the log marginal likelihood, while employing a structured inference network encourages non-trivial structure.",
"In particular, a conditional random field (CRF) constituency parser (Finkel et al., 2008; Durrett and Klein, 2015), which makes significant independence assumptions, acts as a guide on the generative model to learn meaningful trees through regularizing the posterior (Ganchev et al., 2010).",
"We experiment with URNNGs on English and Chinese and observe that they perform well as language models compared to their supervised counterparts and standard neural LMs.",
"In terms of grammar induction, they are competitive with recently-proposed neural architectures that discover tree-like structures through gated attention (Shen et al., 2018).",
"Our results, along with other recent work on joint language modeling/structure learning with deep networks (Shen et al., 2018, 2019; Wiseman et al., 2018; Kawakami et al., 2018), suggest that it is possible learn generative models of language that model the underlying data well (i.e. assign high likelihood to held-out data) and at the same time induce meaningful linguistic structure.",
"We use x = [ x 1 , . . . , x T ] to denote a sentence of length T , and z ZT to denote an unlabeled binary parse tree over a sequence of length T , represented as a a binary vector of length 2 T 1 .",
"Here 0 and 1 correspond to SHIFT and REDUCE actions, explained below.",
"1 Figure 1 presents an overview of our approach.",
"An RNNG defines a joint probability distribution p ( x , z ) over sentences x and parse trees z .",
"We consider a simplified version of the original RNNG (Dyer et al., 2016) by ignoring constituent labels and only considering binary trees.",
"The RNNG utilizes an RNN to parameterize a stack data structure (Dyer et al., 2015) of partially-completed constituents to incrementally build the parse tree while generating terminals.",
"Using the current stack representation, the model samples an action ( SHIFT or REDUCE ): SHIFT generates a terminal symbol, i.e. word, and shifts it onto the stack, 2 REDUCE pops the last two elements off the stack, composes them, and shifts the composed 1 The cardinality of ZT { 0 , 1 } 2 T 1 is given by the ( T 1) -th Catalan number, |Z T | = (2 T 2)!",
"T",
"!( T 1)!",
"2 A better name for SHIFT would be GENERATE (as in Dyer et al. (2016)), but we use SHIFT to emphasize similarity with the shift-reduce parsing.",
"Formally, let S = [( 0 , 0 )] be the initial stack.",
"Each item of the stack will be a pair, where the first element is the hidden state of the stack LSTM, and the second element is an input vector, described below.",
"We use top( S ) to refer to the top pair in the stack.",
"The push and pop operations are de-fined imperatively in the usual way.",
"At each time step, the next action z t ( SHIFT or REDUCE ) is sampled from a Bernoulli distribution parameterized in terms of the current stack representation.",
"Letting ( h prev , g prev ) = top( S ) , we have z t Bernoulli( p t ) , p t = ( w (cid:62) h prev + b ) .",
"Subsequent generation depend on z t : If z t = 0 ( SHIFT ), the model first generates a terminal symbol via sampling from a categorical distribution whose parameters come from an affine transformation and a softmax, x softmax( Wh prev + b ) .",
"Then the generated terminal is shifted onto the stack using a stack LSTM, h next = LSTM( e x , h prev ) , push( S, ( h next , e x )) , where e x is the word embedding for x .",
"If z t = 1 ( REDUCE ), we pop the last two elements off the stack, ( h r , g r ) = pop( S ) , ( h l , g l ) = pop( S ) , and obtain a new representation that combines the left/right constituent representations using a tree LSTM (Tai et al., 2015; Zhu et al., 2015), g new = TreeLSTM( g l , g r ) .",
"Note that we use g l and g r to obtain the new representation instead of h l and h r .",
"3 We then update the stack using g new , ( h prev , g prev ) = top( S ) , h new = LSTM( g new , h prev ) , push( S, ( h new , g new )) .",
"The generation process continues until an end-of-sentence symbol is generated.",
"The parameters of the generative model are w , b, W , b , and the parameters of the stack/tree LSTMs.",
"For a sentence x = [ x 1 , . . . , x T ] of length T , the binary parse tree is given by the binary vector z = [ z 1 , . . . , z 2 T 1 ] .",
"4 The joint log likelihood decomposes as a sum of terminal/action log likelihoods, log p ( x , z ) = T (cid:88) t =1 log p ( x t | x <t , z <n ( t ) ) (cid:124) (cid:123)(cid:122) (cid:125) log p ( x | z ) + 2 T 1 (cid:88) j =1 log p ( z j | x <m ( j ) , z <j ) (cid:124) (cid:123)(cid:122) (cid:125) log p ( z | x < z ) , (1) where z <n ( t ) refers to all actions before generating the t -th word, and similarly x <m ( j ) refers to all words generated before taking the j -th action.",
"For brevity, from here on we will use log p ( x | z ) to refer to the first term (terminal log likelihood) and log p ( z | x < z ) to refer to the second term (action log likelihood) in the above decomposition.",
"5 3 The update equations for the tree LSTM (and the stack LSTM) also involve cell states in addition to the hidden states.",
"To reduce notational clutter we do not explicitly show the cell states and instead subsume them into g .",
"If one (or both) of the inputs to the tree LSTM is a word embedding, the associated cell state is taken to be zero.",
"See Tai et al. (2015) for the exact parameterization.",
"4 As it stands, the support of z is { 0 , 1 } 2 T 1 , all binary vectors of length 2 T 1 .",
"To restrict our distribution to ZT (binary vectors which describe valid trees), we constrain z t to be valid at each time step, which amounts to deterministically choosing z t = 0 ( SHIFT ) if there are fewer than two elements (not counting the initial zero tuple) on the stack.",
"5 The action log likelihood is the sum of log conditional priors, which is obviously different from the unconditional log prior log p ( z ) = log (cid:80) x p ( x , z ) .",
"In the supervised case where ground-truth z is available, we can straightforwardly perform gradient-based optimization to maximize the joint log likelihood log p ( x , z ) .",
"In the unsupervised case, the standard approach is to maximize the log marginal likelihood, log p ( x ) = log (cid:88) z (cid:48) Z T p ( x , z (cid:48) ) .",
"However this summation is intractable because z t fully depends on all previous actions [ z 1 , . . . , z t 1 ] .",
"Even if this summation were tractable, it is not clear that meaningful latent structures would emerge given the lack of explicit independence assumptions in the RNNG (e.g. it is clearly not context-free).",
"We handle these issues with amortized variational inference.",
"Amortized variational inference (Kingma and Welling, 2014) defines a trainable inference network that parameterizes q ( z | x ) , a variational posterior distribution, in this case over parse trees z given the sentence x .",
"This distribution is used to form an evidence lower bound (ELBO) on the log marginal likelihood, ELBO( , ; x ) = E q ( z | x ) (cid:20) log p ( x , z ) q ( z | x ) (cid:21) .",
"We maximize the ELBO with respect to both model parameters and inference network parameters .",
"The ELBO is still intractable to calculate exactly, but this formulation will allow us to obtain unbiased gradient estimators based on Monte Carlo sampling.",
"Thus, is trained to match the variational posterior q ( z | x ) to the true posterior p ( z | x ) , but is also trained to match the true posterior to the variational posterior.",
"Indeed, there is some evidence to suggest that generative models trained with amortized variational inference (i.e. variational autoencoders) learn posterior distributions that are close to the variational family (Cremer et al., 2018).",
"We can use this to our advantage with an inference network that injects inductive bias.",
"We propose to do this by using a context-free model for the inference network, in particular, a neural CRF parser (Durrett and Klein, 2015).",
"This choice can seen as a form of posterior regularization that limits posterior flexibility of the overly powerful RNNG generative model.",
"The parameterization of span scores is similar to recent works (Wang and Chang, 2016; Stern et al., 2017; Kitaev and Klein, 2018): we add position embeddings to word embeddings and run a bidirectional LSTM over the input representations to obtain the forward [ h 1 , . . . , h T ] and backward [ h 1 , . . . , h T ] hidden states.",
"The score s ij R for a constituent spanning x i to x j is given by, s ij = MLP([ h j +1 h i ; h i 1 h j ]) .",
"Letting B be the binary matrix representation of a tree ( B ij = 1 means there is a constituent spanning x i and x j ), the CRF parser defines a distribution over binary trees via the Gibbs distribution, q ( B | x ) = 1 ZT ( x ) exp (cid:16) (cid:88) i j B ij s ij (cid:17) , where ZT ( x ) is the partition function, ZT ( x ) = (cid:88) B (cid:48) B T exp (cid:16) (cid:88) i j B (cid:48) ij s ij (cid:17) , and denotes the parameters of the inference network (i.e. the bidirectional LSTM and the MLP).",
"Calculating ZT ( x ) requires a summation over an exponentially-sized set BT { 0 , 1 } T T , the set of all binary trees over a length T sequence.",
"However we can perform the summation in O ( T 3 ) using the inside algorithm (Baker, 1979), shown in 6 While it has a similar goal, this formulation differs the from posterior regularization as formulated by Ganchev et al. (2010), which constrains the distributional family via linear constraints on posterior expectations.",
"In our case, the conditional independence assumptions in the CRF lead to a curved exponential family where the vector of natural parameters has fewer dimensions than the vector of sufficient statistics of the full exponential family.",
"This curved exponential family is a subset of the marginal polytope of the full exponential family, but it is an intersection of both linear and nonlinear manifolds, and therefore cannot be characterized through linear constraints over posterior expectations.",
"7 In preliminary experiments, we also attempted to learn latent trees with a transition-based parser (which does not make explicit independence assumptions) that looks at the entire sentence.",
"However we found that under this setup, the inference network degenerated into a local minimum whereby it always generated left-branching trees despite various optimization strategies.",
"Williams et al. (2018) observe a similar phenomenon in the context of learning latent trees for classification tasks.",
"However Li et al. (2019) find that it is possible use a transition-based parser as the inference network for dependency grammar induction, if the inference network is constrained via posterior regularization (Ganchev et al., 2010) based on universal syntactic rules (Naseem et al., 2010).",
"Algorithm",
"1. This computation is itself differentiable and amenable to gradient-based optimization.",
"Finally, letting f : BT ZT be the bijection between the binary tree matrix representation and a sequence of SHIFT / REDUCE actions, the inference network defines a distribution over ZT via q ( z | x ) (cid:44) q ( f 1 ( z ) | x ) .",
"the ELBO,",
"A Monte Carlo estimate for the gradient with respect to is ELBO( , ; x ) 1 KK (cid:88) k =1 log p ( x , z ( k ) ) , with samples z (1) , . . . , z ( K ) from q ( z | x ) .",
"E q ( z | x ) [log p ( x , z )] + H [ q ( z | x )] , where H [ q ( z | x )] = E q ( z | x ) [ log q ( z | x )] is the entropy of the variational posterior.",
"Sampling uses the intermediate values calculated during the inside algorithm to sample split points recursively (Goodman, 1998; Finkel et al., 2006), as shown in Algorithm",
"2. The gradient with respect to involves two parts.",
"The entropy term H [ q ( z | x )] can be calculated exactly in O ( T 3 ) , again using the intermediate values from the inside algorithm (see Algorithm 3).",
"8 Since each step of this dynamic program is differentiable, we can obtain the gradient H [ q ( z | x )] using automatic differentation.",
"9 An estimator for the gradient with respect to E q ( z | x ) [log p ( x , z )] is obtained via the score function gradient estimator (Glynn, 1987; Williams, 1992), E q ( z | x ) [log p ( x , z )] = E q ( z | x ) [log p ( x , z ) log q ( z | x )] 1 KK (cid:88) k =1 log p ( x , z ( k ) ) log q ( z ( k ) | x ) .",
"8 We adapt the algorithm for calculating tree entropy in PCFGs from Hwa (2000) to the CRF case.",
"9 H [ q ( z | x )] can also be computed using the insideoutside algorithm and a second-order expectation semiring (Li and Eisner, 2009), which has the same asymptotic runtime complexity but generally better constants.",
"The above estimator is unbiased but typically suffers from high variance.",
"To reduce variance, we use a control variate derived from an average of the other samples' joint likelihoods (Mnih and Rezende, 2016), yielding the following estimator, 1 KK (cid:88) k =1 (log p ( x , z ( k ) ) r ( k ) ) log q ( z ( k ) | x ) , where r ( k ) = 1 K 1 (cid:80) j (cid:54) = k log p ( x , z ( j ) ) .",
"This control variate worked better than alternatives such as estimates of baselines from an auxiliary network (Mnih and Gregor, 2014; Deng et al., 2018) or a language model (Yin et al., 2018).",
"For English we use the Penn Treebank (Marcus et al., 1993, PTB) with splits and preprocessing from Dyer et al. (2016) which retains punctuation and replaces singleton words with Berkeley parser's mapping rules, resulting in a vocabulary of 23,815 word types.",
"10 Notably this is much larger than the standard PTB LM setup from Mikolov et al. (2010) which uses 10K types.",
"11 Also different from the LM setup, we model each sentence separately instead of carrying information across sentence boundaries, as the RNNG is a generative model of sentences.",
"Hence our perplexity numbers are not comparable to the PTB LM results (Melis et al., 2018; Merity et al., 2018; Yang et al., 2018).",
"corpus (Chelba et al., 2013).",
"We randomly sample 1M sentences for training and 2K sentences for validation/test, and limit the vocabulary to 30K word types.",
"While still a subset of the full corpus (which has 30M sentences), this dataset is two orders of magnitude larger than PTB.",
"Experiments on Chinese utilize version 5.1 of the Chinese Penn Treebank (CTB) (Xue et al., 2005), with the same splits as in Chen and Manning (2014).",
"Singleton words are replaced with a single (cid:104) UNK (cid:105) token, resulting in a vocabulary of 17,489 word types.",
"3.2 Training and Hyperparameters The stack LSTM has two layers with input/hidden size equal to 650 and dropout of 0.5.",
"The tree LSTM also has 650 units.",
"The inference network uses a one-layer bidirectional LSTM with 256 hidden units, and the MLP (to produce span scores s ij for i j ) has a single hidden layer with a ReLU nonlinearity followed by layer normalization (Ba et al., 2016) and dropout of 0.5.",
"We share word embeddings between the generative model and the inference network, and also tie weights between the input/output word embeddings (Press and Wolf, 2016).",
"Optimization of the model itself required standard techniques for avoiding posterior collapse in VAEs.",
"12 We warm-up the ELBO objective by linearly annealing (per batch) the weight on the conditional prior log p ( z | x < z ) and the entropy H [ q ( z | x )] from 0 to 1 over the first two epochs (see equation (1) for definition of log p ( z | x < z ) ).",
"This is analogous to KL-annealing in VAEs with continuous latent variables (Bowman et al., 2016; Snderby et al., 2016).",
"We train for 18 epochs (enough for convergence for all models) with a batch size of 16 and K = 8 samples for the Monte Carlo gradient estimators.",
"The generative model is optimized with SGD with learning rate equal to 1, 12 Posterior collapse in our context means that q ( z | x ) always produced trivial (always left or right branching) trees.",
"except for the affine layer that produces a distribution over the actions, which has learning rate 0.1.",
"Gradients of the generative model are clipped at 5.",
"The inference network is optimized with Adam (Kingma and Ba, 2015) with learning rate 0.0001, 1 = 0 .",
"9 , 2 = 0 .",
"999 , and gradient clipping at",
"1. As Adam converges significantly faster than SGD (even with a much lower learning rate), we stop training the inference network after the first two epochs.",
"Initial model parameters are sampled from U [ 0 . 1 , 0 . 1] .",
"The learning rate starts decaying by a factor of 2 each epoch after the first epoch at which validation performance does not improve, but this learning rate decay is not triggered for the first eight epochs to ensure adequate training.",
"We use the same hyperparameters/training setup for both PTB and CTB.",
"For experiments on (the subset of) the one billion word corpus, we use a smaller dropout rate of 0.1.",
"The baseline RNNLM also uses the smaller dropout rate.",
"All models are trained with an end-of-sentence token, but for perplexity calculation these tokens are not counted to be comparable to prior work (Dyer et al., 2016; Kuncoro et al., 2017; Buys and Blunsom, 2018).",
"To be more precise, the inference network does not make use of the end-of-sentence token to produce parse trees, but the generative model is trained to generate the end-of-sentence token after the final REDUCE operation.",
"We compare the unsupervised RNNG (URNNG) against several baselines: (1) RNNLM, a standard RNN language model whose size is the same as URNNG's stack LSTM; (2) Parsing Reading Predict Network (PRPN) (Shen et al., 2018), a neural language model that uses gated attention layers to embed soft tree-like structures into a neural network (and among the current state-of-the-art in grammar induction from words on the full cor-pus); (3) RNNG with trivial trees (left branching, right branching, random); (4) supervised RNNG trained on unlabeled, binarized gold trees.",
"13 Note that the supervised RNNG also trains a discriminative parser q ( z | x ) (alongside the generative model p ( x , z ) ) in order to sample parse forests for perplexity evaluation (i.e. importance sam-pling).",
"This discriminative parser has the same ar-13 We use right branching binarizationMatsuzaki et al. (2005) find that differences between various binarization schemes have marginal impact.",
"Our supervised RNNG therefore differs the original RNNG, which trains on non-binarized trees and does not ignore constituent labels.",
"chitecture as URNNG's inference network.",
"For all models, we perform early stopping based on validation perplexity.",
"Table 1 shows perplexity for the different models on PTB/CTB.",
"As a language model URNNG outperforms an RNNLM and is competitive with the supervised RNNG.",
"14 The left branching baseline performs poorly, implying that the strong performance of URNNG/RNNG is not simply due to the additional depth afforded by the tree LSTM composition function (a left branching tree, which always performs REDUCE when possible, is the deepest model).",
"The right branching baseline is essentially equivalent to an RNNLM and hence performs similarly.",
"We found PRPN with default hyperparameters (which obtains a perplexity of 62.0 in the PTB setup from Mikolov et al. (2010)) to not perform well, but tuning hyperparameters improves performance.",
"15 The supervised RNNG performs well as a language model, despite being trained on the joint (rather than marginal) likelihood objective.",
"16 This indicates that explicit 14 For RNNG and URNNG we estimate the log marginal likelihood (and hence, perplexity) with K = 1000 importance-weighted samples, log p ( x ) log (cid:16) 1 K (cid:80) Kk =1 log p ( x , z ( k ) ) q ( z ( k ) | x ) (cid:17) .",
"During evaluation only, we also flatten q ( z | x ) by dividing span scores s ij by a temperature term 2 .",
"0 before feeding it to the CRF.",
"15 Using the code from https://github.com/yikangshen/ PRPN, we tuned model size, initialization, dropout, learning rate, and use of batch normalization.",
"16 RNNG is trained to maximize log p ( x , z ) while URNNG is trained to maximize (a lower bound on) the language modeling objective log p ( x ) .",
"modeling of syntax helps generalization even with richly-parameterized neural models.",
"Encouraged by these observations, we also experiment with a hybrid approach where we train a supervised RNNG first and continue fine-tuning the model (including the inference network) on the URNNG objective (RNNG URNNG in Table 1).",
"17 This approach results in nontrivial perplexity improvements, and suggests that it is potentially possible to improve language models with supervision on parsed data.",
"In Figure 2 we show perplexity by sentence length.",
"We find that a standard language model (RNNLM) is better at modeling short sentences, but underperforms models that explicitly take into account structure (RNNG/URNNG) when the sentence length is greater than 10.",
"Table 2 (top) compares our results against prior work on this version of the PTB, and Table 2 (bot-tom) shows the results on a 1M sentence subset of the one billion word corpus, which is two orders of magnitude larger than PTB.",
"On this larger dataset URNNG still improves upon the RNNLM.",
"We also trained an RNNG (and RNNG URNNG) on this dataset by parsing the training set with the self-attentive parser from Kitaev and Klein (2018).",
"18 These models improve upon the RNNLM but not the URNNG, potentially highlighting the limitations of using predicted trees for supervising RNNGs.",
"Table 1 also shows the F 1 scores for grammar induction.",
"Note that we induce latent trees directly from words on the full dataset.",
"19 For RNNG/URNNG we obtain the highest scoring 17 We fine-tune for 10 epochs and use a smaller learning rate of 0.1 for the generative model.",
"18 To parse the training set we use the benepar en2 model from https://github.com/nikitakit/self-attentive-parser, which obtains an F 1 score of 95.17 on the PTB test set.",
"19 Past work on grammar induction usually train/evaluate on short sentences and also assume access to gold POS tags (Klein and Manning, 2002; Smith and Eisner, 2004; Bod, 2006).",
"However more recent works do train directly words (Jin et al., 2018; Shen et al., 2018; Drozdov et al., 2019).",
"tree from q ( z | x ) through the Viterbi inside (i.e. CKY) algorithm.",
"We calculate unlabeled F 1 using evalb , which ignores punctuation and discards trivial spans (width-one and sentence spans).",
"20 Since we compare F 1 against the original, non-binarized trees (per convention), F 1 scores of models using oracle binarized trees constitute the upper bounds.",
"We confirm the replication study of Htut et al. (2018) and find that PRPN is a strong model for grammar induction.",
"URNNG performs on par with PRPN on English but PRPN does better on Chinese; both outperform right branching baselines.",
"Table 3 further analyzes the learned trees and shows the F 1 score of URNNG trees against 20 Available at https://nlp.cs.nyu.edu/evalb/.",
"We evaluate with COLLINS.prm parameter file and LABELED option equal to 0.",
"We observe that the setup for grammar induction varies widely across different papers: lexicalized vs. unlex-icalized; use of punctuation vs. not; separation of train/test sets; counting sentence-level spans for evaluation vs. ignoring them; use of additional data; length cutoff for training/evaluation; corpus-level F 1 vs. sentence-level F 1 ; and, more.",
"In our survey of twenty or so papers, almost no two papers were identical in their setup.",
"Such variation makes it difficult to meaningfully compare models across papers.",
"Hence, we report grammar induction results mainly for the models and baselines considered in the present work.",
"other trees (left), and the recall of URNNG/PRPN trees against ground truth constituents (right).",
"We find that trees induced by URNNG and PRPN are quite different; URNNG is more sensitive to SBAR and VP, while PRPN is better at identifying NP.",
"While left as future work, this naturally suggests a hybrid approach wherein the intersection of constituents from URNNG and PRPN is used to create a corpus of partially annotated trees, which can be used to guide another model, e.g. via posterior regularization (Ganchev et al., 2010) or semi-supervision (Hwa, 1999).",
"Finally, Table 4 compares our results using the same evaluation setup as in Drozdov et al. (2019), which differs considerably from our setup.",
"Table 5 shows some standard metrics related to the learned generative model/inference network.",
"The reconstruction perplexity based on E q ( z | x ) [log p ( x | z )] is much lower than regular perplexity, and further, the Kullback-Leibler divergence between the conditional prior and the variational posterior, given by E q ( z | x ) (cid:20) log q ( z | x ) p ( z | x < z ) (cid:21) , PTB CTB RNNG URNNG RNNG URNNG PPL 88.7 90.6 193.1 195.7 Recon.",
"is highly nonzero.",
"(See equation (1) for definitions of log p ( x | z ) and log p ( z | x < z ) ).",
"This indicates that the latent space is being used in a meaningful way and that there is no posterior collapse (Bowman et al., 2016).",
"As expected, the entropy of the variational posterior is much lower than the entropy of the conditional prior, but there is still some uncertainty in the posterior.",
"4.4 Syntactic Evaluation We perform a syntactic evaluation of the different models based on the setup from Marvin and Linzen (2018): the model is given two minimally different sentences, one grammatical and one ungrammatical, and must identify the grammatical sentence by assigning it higher probability.",
"21 Table 6 shows the accuracy results.",
"Overall the supervised RNNG significantly outperforms the other models, indicating opportunities for further work in unsupervised modeling.",
"While the URNNG does slightly outperform an RNNLM, the distribution of errors made from both models are similar, and thus it is not clear whether the out-performance is simply due to better perplexity or learning different structural biases.",
"There are several limitations to our approach.",
"For one, the URNNG takes considerably more time/memory to train than a standard language model due to the O ( T 3 ) dynamic program in the inference network, multiple samples to obtain low-variance gradient estimators, and dynamic computation graphs that make efficient batching 21 We modify the publicly available dataset from https:// github.com/BeckyMarvin/LM syneval to only keep sentence pairs that did not have any unknown words with respect to our vocabulary, resulting in 80K sentence pairs for evaluation.",
"nontrivial.",
"The model is sensitive to hyperparameters and required various optimization strategies (e.g. separate optimizers for the inference network and the generative model) to avoid posterior collapse.",
"Finally, the URNNG also seemed to rely heavily on punctuation to identify constituents and we were unable to improve upon a right-branching baseline when training the URNNG on a version of PTB where punctuation is removed.",
"22 5 Related Work There has been much work on incorporating tree structures into deep models for syntax-aware language modeling, both for unconditional (Emami and Jelinek, 2005; Buys and Blunsom, 2015; Dyer et al., 2016) and conditional (Yin and Neubig, 2017; Alvarez-Melis and Jaakkola, 2017; Rabi-novich et al., 2017; Aharoni and Goldberg, 2017; Eriguchi et al., 2017; Wang et al., 2018; Gu et al., 2018) cases.",
"These approaches generally rely on annotated parse trees during training and maximizes the joint likelihood of sentence-tree pairs.",
"Prior work on combining language modeling and unsupervised tree learning typically embed soft, tree-like structures as hidden layers of a deep network (Cho et al., 2014; Chung et al., 2017; Shen et al., 2018, 2019).",
"In contrast, Buys and Blun-som (2018) make Markov assumptions and perform exact marginalization over latent dependency 22 Many prior works that induce trees directly from words often employ additional heuristics based on punctuation (Seginer, 2007; Ponvert et al., 2011; Spitkovsky et al., 2013; Parikh et al., 2014), as punctuation (e.g. comma) is usually a reliable signal for start/end of constituent spans.",
"The URNNG still has to learn to rely on punctuation, similar to recent works such as depth-bounded PCFGs (Jin et al., 2018) and DIORA (Drozdov et al., 2019).",
"In contrast, PRPN (Shen et al., 2018) and Ordered Neurons (Shen et al., 2019) induce trees by directly training on corpus without punctuation.",
"We also reiterate that punctuation is used during training but ignored during evaluation (except in Table 4).",
"trees.",
"Our work is also related to the recent line of work on learning latent trees as part of a deep model through supervision on other tasks, typically via differentiable structured hidden layers (Kim et al., 2017; Bradbury and Socher, 2017; Liu and Lapata, 2018; Tran and Bisk, 2018; Peng et al., 2018; Niculae et al., 2018; Liu et al., 2018), policy gradient-based approaches (Yogatama et al., 2017; Williams et al., 2018; Havrylov et al., 2019), or differentiable relaxations (Choi et al., 2018; Maillard and Clark, 2018).",
"The variational approximation uses amortized inference (Kingma and Welling, 2014; Mnih and Gregor, 2014; Rezende et al., 2014), in which an inference network is used to obtain the variational posterior for each observed x .",
"Since our inference network is structured (i.e., a CRF), it is also related to CRF autoencoders (Ammar et al., 2014) and structured VAEs (Johnson et al., 2016; Krish-nan et al., 2017), which have been used previously for unsupervised (Cai et al., 2017; Drozdov et al., 2019; Li et al., 2019) and semi-supervised (Yin et al., 2018; Corro and Titov, 2019) parsing.",
"It is an open question as to whether explicit modeling of syntax significantly helps neural models.",
"Strubell et al. (2018) find that supervising intermediate attention layers with syntactic heads improves semantic role labeling, while Shi et al. (2018) observe that for text classification, syntactic trees only have marginal impact.",
"Our work suggests that at least for language modeling, incorporating syntax either via explicit supervision or as latent variables does provide useful inductive biases and improves performance.",
"Finally, in modeling child language acquisition, the complex interaction of the parser and the grammatical knowledge being acquired is the object of much investigation (Trueswell and Gleitman, 2007); our work shows that apparently grammatical constraints can emerge from the interaction of a constrained parser and a more general grammar learner, which is an intriguing but underexplored hypothesis for explaining human linguistic biases.",
"We thank the members of the DeepMind language team for helpful feedback.",
"YK is supported by a Google Fellowship.",
"AR is supported by NSF Career 1845664."
] | [
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other"
] |
[
"The human mind is a dynamical system, yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes.",
"This study proposes the continuous-time deconvolutional regressive neural network (CDRNN), a deep neural extension of continuous-time deconvolutional regression (CDR, Shain and Schuler, 2021) that jointly captures time-varying, non-linear, and delayed influences of predictors (e.g. word surprisal) on the response (e.g. reading time).",
"Despite this flexibility, CDRNN is interpretable and able to illuminate patterns in human cognition that are otherwise difficult to study.",
"Behavioral and fMRI experiments reveal detailed and plausible estimates of human language processing dynamics that generalize better than CDR and other baselines, supporting a potential role for CDRNN in studying human language processing.",
"Central questions in psycholinguistics concern the mental processes involved in incremental human language understanding: which representations are computed when, by what mental algorithms (Fra-zier and Fodor, 1978; Just and Carpenter, 1980; Abney and Johnson, 1991; Tanenhaus et al., 1995; Almor, 1999; Gibson, 2000; Coltheart et al., 2001; Hale, 2001; Lewis and Vasishth, 2005; Levy, 2008, inter alia )?",
"Such questions are often studied by caching out a theory of language processing in an experimental stimulus, collecting human responses, and fitting a regression model to test whether measures show the expected effects (e.g. Grodner and Gibson, 2005).",
"Regression techniques have grown in sophistication, from ANOVA (e.g. Pickering and Branigan, 1998) to newer linear mixed-effects approaches (LME, Bates et al., 2015) that enable direct word-by-word analysis of effects in naturalistic human language processing (e.g. Demberg and Keller, 2008; Frank and Bod, 2011).",
"However, these methods struggle to account for delayed effects.",
"Because the human mind operates in real time and experiences computational bottlenecks of various kinds (Bouma and De Voogd, 1974; Just and Carpenter, 1980; Ehrlich and Rayner, 1981; Mollica and Piantadosi, 2017), delayed effects may be pervasive, and, if left uncontrolled, can yield misleading results (Shain and Schuler, 2018).",
"Continuous-time deconvolutional regression (CDR) is a recently proposed technique to address delayed effects in measures of human cognition (Shain and Schuler, 2018, 2021).",
"CDR fits parametric continuous-time impulse response functions (IRFs) that mediate between word features and response measures.",
"An IRF maps the time elapsed between a stimulus and a response to a weight describing the expected influence of the stimulus on the response.",
"CDR models the response as an IRF-weighted sum of preceding stimuli, thus directly accounting for effect latencies.",
"Empirically, CDR reveals fine-grained processing dynamics and generalizes better to human reading and fMRI responses than established alternatives.",
"However, CDR retains a number of simplifying assumptions (e.g. that the IRF is fixed over time) that may not hold of the human language processing system.",
"Deep neural networks (DNNs), widely used in natural language processing (NLP), can relax these strict assumptions.",
"Indeed, psycholinguistic regression analyses and NLP systems share a common structure: both fit a function from word features to some quantity of interest.",
"However, psycholinguistic regression models face an additional constraint: they must be interpretable enough to allow researchers to study relationships between variables in the model.",
"This requirement may be one reason why black box DNNs are not generally used to analyze psycholinguistic data, despite the tremendous gains DNNs have enabled in natural language tasks (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020, inter alia ), in part by better approximating the complex dynamics of human cognition as encoded in natural language (Linzen et al., 2016; Gulordava et al., 2018; Tenney et al., 2019; Hewitt and Manning, 2019; Wilcox et al., 2019; Schrimpf et al., 2020).",
"This study proposes an attempt to leverage the flexibility of DNNs for psycholinguistic data analysis.",
"The continuous-time deconvolutional regressive neural network (CDRNN) is an extension of CDR that reimplements the impulse response function as a DNN describing the expected influence of preceding events (e.g. words) on future responses (e.g. reading times) as a function of their properties and timing.",
"CDRNN retains the deconvolutional design of CDR while relaxing many of its simplifying assumptions (linear-ity, additivity, homosketasticity, stationarity, and context-independence, see Section 2), resulting in a highly flexible model.",
"Nevertheless, CDRNN is interpretable and can shed light on the underlying data generating process.",
"Results on reading and fMRI measures show substantial generalization improvements from CDRNN over baselines, along with detailed insights about the underlying dynamics that cannot easily be obtained from existing methods.",
"1 2 Background Psycholinguists have been aware for decades that processing effects may lag behind the words that trigger them (Morton, 1964; Bouma and De Voogd, 1974; Rayner, 1977; Erlich and Rayner, 1983; Mitchell, 1984; Rayner, 1998; Vasishth and Lewis, 2006; Smith and Levy, 2013), possibly because cognitive buffers may exist to allow higher-level information processing to catch up with the input (Bouma and De Voogd, 1974; Baddeley et al., 1975; Just and Carpenter, 1980; Ehrlich and Rayner, 1981; Mollica and Piantadosi, 2017).",
"They have also recognized the potential for non-linear, interactive, and/or time-varying relationships between word features and language processing (Smith and Levy, 2013; Baayen et al., 2017, 2018).",
"No prior regression method can jointly address these 1 Because of page constraints, additional replication details and synthetic results are provided in an external supplement, available here: https://osf.io/z89vn/ .",
"concerns in non-uniform time series (e.g. words with variable duration) like naturalistic psycholinguistic experiments.",
"Discrete-time methods (e.g. lagged/spillover regression, Sims, 1971; Erlich and Rayner, 1983; Mitchell, 1984) ignore potentially meaningful variation in event duration, even if some (e.g. generalized additive models, or GAMs, Hastie and Tibshirani, 1986; Wood, 2006) permit non-linear and non-stationary (time-varying) feature interactions (Baayen et al., 2017).",
"CDR (Shain and Schuler, 2018, 2021) addresses this limitation by fitting continuous-time IRFs, but assumes that the IRF is stationary (time invariant), that features scale linearly and combine additively, and that the response variance is constant (homoskedastic).",
"By implementing the IRF as a time-varying neural network, CDRNN relaxes all of these assumptions, incorporating the featural flexibility of GAMs while retaining the temporal flexibility of CDR.",
"Previous studies have investigated latency and non-linearity in human sentence processing.",
"For example, Smith and Levy (2013) attach theoretical significance to the functional form of the relationship between word surprisal and processing cost, using GAMs to show that this relationship is linear and arguing on this basis that language processing is highly incremental.",
"This claim is under active debate (Brothers and Kuperberg, 2021), underlining the importance of methods that can investigate questions of functional form.",
"Smith and Levy (2013) also investigate the timecourse of surprisal effects using spillover and find a more delayed surprisal response in self-paced reading (SPR) than in eye-tracking.",
"Shain and Schuler (2021) support the latter finding using CDR, and in addition show evidence of strong inertia effects in SPR, such that participants who have been reading quickly in the recent past also read more quickly now.",
"However, this outcome may be an artifact of the stationarity assumption: CDR may be exploiting its estimates of rate effects in order to capture broad non-linear negative trends (e.g. task adaptation, Prasad and Linzen, 2019) in a stationary model.",
"Similarly, the generally null word frequency estimates reported in Shain and Schuler (2021) may be due in part to the assumption of additive effects: word frequency and surprisal are related, and they may coordinate interactively to determine processing costs (Nor-ris, 2006).",
"Thus, in general, prior findings on the timecourse and functional form of effects in human sentence processing may be influenced by method-h (0) = (cid:20) x t (cid:21) RNN h IRF convolution (cid:80) s RNN + i n g ( ) x 1 RNN + i n g ( ) x 1 RNN + i n g ( ) x 1 RNN + i n g ( ) x 1 Figure 1: CDRNN model.",
"By jointly relaxing these potentially problematic assumptions, CDRNN stands to support more reliable conclusions about human language comprehension, while also possibly enabling new insights into cognitive dynamics.",
"This section presents a high-level description of the model design (for formal definition, see Appendix A).",
"The CDRNN architecture is represented schematically in Figure 1.",
"The primary goal of estimation is to identify the deep neural IRF g ( ) (top) that computes the influence of a preceding event on the predictive distribution over a subsequent response as a function of their distance in time .",
"As shown, the IRF is a feedforward projection of into a matrix that defines a weighted sum over the values of input vector x , which is concatenated with a bias to capture general effects of stimulus timing ( rate ).",
"This matrix multiplication determines the contribution of the stimulus event to the parameters of the predictive distribution (e.g. the mean and variance parameters of a Gaussian predictive distri-bution).",
"Defining the IRF as a function of ensures that the model has a continuous-time definition.",
"To capture non-linear effects of stimulus features, the IRF projection is itself parameterized by a projection of a hidden state h .",
"The dependence on h permits non-linear influences of the properties of the stimulus sequence on the IRF itself.",
"To generate h , the predictors x are concatenated with their timestamps t and submitted to the model as input.",
"Inputs are cast to a hidden state for each preceding event as the sum of three quantities: a feedforward projection h in of each input, a forward-directional RNN projection h RNN of the events up to and including each input, and random effects h Z containing offsets for the relevant random effects level(s) (e.g. for each participant in an experiment).",
"In this study, the recurrent component is treated as optional (gray arrows).",
"Without the RNN, the model is non-stationary (via input t ) but cannot capture contextual influences on the IRF.",
"The summation over IRF outputs at the top of the figure ensures that the model is deconvolutional: each preceding input contributes to the response in some proportion, with that proportion determined by the features, context, and relative timing of that input.",
"Because the IRF depends on a deep neural projection of the current stimulus as well as (op-tionally) the entire sequence of preceding stimuli, it implicitly estimates all interactions between these variables in governing the response.",
"Predictors may thus coordinate in a non-linear, non-additive, and time-varying manner.",
"The CDRNN IRF describes the influence over time of predictors on all parameters of the predictive distribution (in these experiments, the mean and variance parameters of a Gaussian predictive distribution).",
"Such a design (i.e. modeling dependencies on the predictors of all parameters of the predictive distribution) has previously been termed distributional regression (Burkner, 2018).",
"Despite their flexibility and task performance (Section 5), CDRNN models used in this study have few parameters (Table A1) by current deep learning standards because they are relatively shallow and small (Supplement S1).",
"Given (1) an input configuration C containing predictors X , input timestamps t , and response timestamps t (cid:48) , (2) CDRNN parameter vector w , (3) output distribution p , (4) random effects vector z , and (5) response vector y , the model uses gradient descent",
"In addition to random effects shrinkage governed by z and any arbitrary additional regularization penalties L reg (see Supplement S1), models are regularized using dropout (Srivastava et al., 2014) with drop rate d h at the outputs of all feedforward hidden layers.",
"Random effects are also dropped at rate d z , which is intended to encourage the model to find population-level estimates that accurately reflect central tendency.",
"Finally, the recurrent contribution to the CDRNN hidden state ( h RNN above) is dropped at rate d r , which is intended to encourage accurate IRF estimation even when context is unavailable.",
"Because it is a DNN, CDRNN lacks parameters that selectively describe the size and shape of the response to a specific predictor (unlike CDR), and indeed individual parameters (e.g. individual biases or connection weights) are not readily interpretable.",
"Thus, from a scientific perspective, the quantity of general interest is not a distribution over parameters, but rather over the effect of a predictor on the response.",
"The current study proposes to accomplish this using perturbation analysis (e.g. Ribeiro et al., 2016; Petsiuk et al., 2018), manipulating the input configuration and quantifying the influence of this manipulation on the predicted response.",
"2 For example, to obtain an estimate of rate effects (i.e. the base response or deconvolutional inter-cept, see Shain and Schuler, 2021), a reference stimulus can be constructed, and the response to it can be queried at each timepoint over some interval of interest.",
"To obtain CDR-like estimates of predictor-wise IRFs, the reference stimulus can be increased by 1 in the predictor dimension of interest (e.g. word surprisal) and requeried, taking the difference between the obtained response and the reference response to reveal the influence of an extra unit of the predictor.",
"3 This study uses the 2 Perturbation analyses is one of a growing suite of tools for black box interpretation.",
"It is used here because it straightforwardly links properties of the input to changes in the estimated response, providing a highly general method for querying aspects of the the non-linear, non-stationary, non-additive IRF defined by the CDRNN equations.",
"3 Note that 1 is used here to maintain comparability of effect estimates to those generated by methods that assume training set mean of x and t as a reference, since this represents the response of the system to an average stimulus.",
"The model also supports arbitrary additional kinds of queries, including of the curvature of an effect in the IRF over time and of the interaction between two effects at a point in time.",
"Indeed, the IRF can be queried with respect to any combination of values for predictors, t , and , yielding an open-ended space of queries that can be constructed as needed by the researcher.",
"Because the estimates of interest all derive from the model's predictive distribution, uncertainty about them can be measured with Monte Carlo techniques as long as training involves a stochastic component, such as dropout (Srivastava et al., 2014) or batch normalization (Ioffe and Szegedy, 2015).",
"This study estimates uncertainty using Monte Carlo dropout (Gal and Ghahramani, 2016), which recasts training neural networks with dropout as variational Bayesian approximation of deep Gaussian process models (Damianou and Lawrence, 2013).",
"At inference time, an empirical distribution over responses to an input is constructed by resampling the model (i.e. sampling different dropout masks).",
"4 As argued by Shain and Schuler (2021) for CDR, in addition to intervals-based tests, common hypothesis tests (e.g. for the presence of an effect) can be performed in a CDRNN framework via bootstrap model comparison on held out data (e.g. of models with and without the effect of interest).",
"Following Shain and Schuler (2021), CDRNN is applied to naturalistic human language processing data from three experimental modalities: the Natural Stories self-paced reading corpus ( 1M instances, Futrell et al., 2020), the Dundee eye-tracking corpus ( 200K instances, Kennedy",
"linearity of effects (especially CDR), but that 1 has no special meaning in the non-linear setting of CDRNN modeling, and effects can be queried at any offset from any reference.",
"Results here show that deflections move relatively smoothly away from the reference, even at smaller steps than 1, and that IRFs queried at 1 are similar to those obtained from (linear) CDR, indicating that this method of effect estimation is reliable.",
"Note finally that because predictors are underlyingly rescaled by their training set standard deviations (though plotted at the original scale for clarity), 1 here corresponds to 1 standard unit, as was the case with the CDR estimates discussed in Shain and Schuler (2021).",
"4 Initial experiments also explored uncertainty quantifica-tion by implemententing CDRNN as a variational Bayesian DNN.",
"Compared to the methods advocated here, the variational approach was more prone to instability, achieved worse fit, and yielded implausibly narrow credible intervals.",
"et al., 2003), and the Natural Stories fMRI corpus ( 200K instances, Shain et al., 2020), using the train/dev/test splits for these corpora defined in Shain and Schuler (2021).",
"Further details about datasets and preprocessing are given in Supplement S2.",
"For reading data, CDRNN is compared to CDR as well as lagged LME and GAM baselines equipped with four spillover positions for each predictor (values from the current word, plus three preceding words), since LME and GAM are well established analysis methods in psycholinguistics (e.g. Baayen et al., 2007; Demberg and Keller, 2008; Frank and Bod, 2011; Smith and Levy, 2013; Baayen et al., 2017; Goodkind and Bicknell, 2018, inter alia ).",
"Because the distribution of reading times is heavy-tailed (Frank et al., 2013), following Shain and Schuler (2021) models are fitted to both raw and log-transformed reading times.",
"For fMRI data, CDRNN is compared to CDR as well as four existing techniques for analyzing naturalistic fMRI data: pre-convolution with the canonical hemodynamic response function (HRF, Brennan et al., 2012; Willems et al., 2015; Henderson et al., 2015, 2016; Lopopolo et al., 2017), linear interpolation (Shain and Schuler, 2021), binning (Wehbe et al., 2020), and Lanczos interpolation (Huth et al., 2016).",
"Statistical model comparisons use paired permutation tests of test set error (Demsar, 2006).",
"Models use predictors established by prior psycholinguistic research (e.g. Rayner, 1998; Demberg and Keller, 2008; van Schijndel and Schuler, 2013; Staub, 2015; Shain and Schuler, 2018, inter alia ): unigram and 5-gram surprisal , word length (read-ing only), saccade length (eye-tracking only), and previous was fixated (eye-tracking only).",
"Predictor definitions are given in Appendix C. The deconvolutional intercept term rate (Shain and Schuler, 2018, 2021), an estimate of the general influence of observing a stimulus at a point in time, independently of its properties, is implicit in CDRNN (unlike CDR) and is therefore reported in all results.",
"Reading models include random effects by subject, while fMRI models include random effects by subject and by functional region of interest (fROI).",
"Unlike LME, where random effects capture linear differences in effect size between e.g. subjects, random effects in CDRNN capture differences in overall dynamics between subjects, including differences in size, IRF shape, functional form (e.g. linearity), contextual influences on the IRF, and interactions with other effects.",
"Two CDRNN variants are considered in all experiments: the full model (CDRNN-RNN) containing an RNN over the predictor sequence, and a feedforward only model (CDRNN-FF) with the RNN ablated (gray arrows removed in Figure 1).",
"This manipulation is of interest because CDRNN-FF is both more parsimonious (fewer parameters) and faster to train, and may therefore be preferred in the absence of prior expectation that the IRF is sensitive to context.",
"All plots show means and 95% credible intervals.",
"Code and documentation are available at https://github.com/coryshain/cdr .",
"Since CDRNN is designed for scientific modeling, the principal output of interest is the IRF itself and the light it might shed on questions of cognitive dynamics, rather than on performance in some task (predicting reading latencies or fMRI measures are not widely targeted engineering goals).",
"However, predictive performance can help establish the trustworthiness of the IRF estimates.",
"To this end, as a sanity check, this section first evaluates predictive performance on human data relative to existing regression techniques.",
"While results may resemble bake-off comparisons familiar from machine learning (and indeed CDRNN does outperform all baselines), their primary purpose is to establish that the CDRNN estimates are trustworthy, since they describe the phenomenon of interest in a way that generalizes accurately to an unseen sample.",
"Baseline models, including CDR, are as reported Model Train Expl Test Canonical HRF 11.3548 11.8263 11.5661 Linearly interpolated 11.4236 11.9888 11.6654 Averaged 11.3478 11.9280 11.6090 Lanczos interpolated 11.3536 11.9059 11.5871 CDR 11.2774 11.6928 11.5369 CDRNN-FF 10.5648 11.3602 11.3042 CDRNN-RNN 10.8736 11.5631 11.3914 Table 2: fMRI.",
"Table 1 gives mean squared error by dataset of CDRNN vs. baseline models on reading times from both Natural Stories and Dundee.",
"Both versions of CDRNN outperform all baselines on the dev partition of all datasets except for raw (ms) latencies in Natural Stories (SPR), where CDRNN is edged out by CDR 6 but still substantially outperforms the non-CDR baselines.",
"Nonetheless, results indicate that CDRNN estimates of Natural Stories (ms) are similarly reliable to those of CDR, and, as discussed in Section 5.2, CDRNN largely replicates the CDR estimates on Natural Stories while offering advantages for analysis.",
"Although CDR struggles against GAM baselines on Dundee, CDRNN has closed the gap.",
"This is noteworthy in light of speculation in Shain and Schuler (2021) that CDR's poorer performance on Dundee might be due in part to non-linear effects, which GAM can estimate but CDR cannot.",
"CDRNN performance supports this conjecture: once the model can account for non-linearities, it overtakes GAMs.",
"Results from fMRI are shown in Table 2, where both CDRNN variants yield substantial improvements to training, dev, and test set error.",
"These results indicate that the relaxed assumptions afforded by CDRNN are beneficial for describing the fMRI response, which is known to saturate over time (Friston et al., 2000; Wager et al., 2005; Vazquez et al., 2006; Lindquist et al., 2009).",
"Following Shain and Schuler (2021), model error is statistically compared using a paired permu-5 For all datasets, the CDR baseline used here is the variant that was deployed on the test set in Shain and Schuler (2021).",
"6 Note that a major advantage of CDRNN is its ability to model dynamics in response variance, which are not reflected in squared error.",
"For example, although CDRNN-FF achieves worse test set error than CDR on the Natural Stories (ms) task, it affords a 31,040 point log likelihood improvement.",
"tation test that pools across all datasets covered by a given baseline (reading data for LME and GAM, fMRI data for canonical HRF, linearly interpolated, averaged, and Lanczos interpolated, and both for CDR).",
"7 Results are given in Table 3.",
"As shown, both variants of CDRNN significantly improve over all baselines, and CDRNN-RNN significantly improves over CDRNN-FF.",
"Notwithstanding, CDRNN-FF may be preferred in applications: simpler, faster to train, better at recovering synthetic models (Supplement S3), more reliable in noisy domains like fMRI, and close in performance to CDRNN-RNN.",
"Results overall support the reliability of patterns revealed by CDRNN's estimated IRF, which is now used to explore and visualize sentence processing dynamics.",
"CDR-like IRF estimates can be obtained by increasing a predictor by 1 (standard deviation) relative to the reference and observing the change in the response over time.",
"Visualizations using this approach are presented in Figure 2 alongside CDR estimates from Shain and Schuler (2021).",
"In general, CDRNN finds similar patterns to CDR.",
"This suggests both (1) that CDRNN is capable of recovering estimates from a preceding state-of-the-art deconvolutional model for these domains, and (2) that CDR estimates in these domains are not driven by artifacts introduced by its simplifying assumptions, since a model that lacks those assumptions and has a qualitatively different architecture largely recovers them.",
"Nonetheless there are differences.",
"For example, Dundee estimates decay more quickly over time in CDRNN than in CDR, indicating an even less pronounced influence of temporal diffusion in 7 The comparison rescales each pair of error vectors by their joint standard deviation in order to enable comparability across datasets with different error variances.",
"eye-tracking than CDR had previously suggested.",
"Estimates from CDRNN-FF and CDRNN-RNN roughly agree, except that CDRNN-RNN estimates for fMRI are more attenuated.",
"CDR shows little uncertainty in the fMRI domain despite its inherent noise (Shain et al., 2020), while CDRNN more plausibly shows more uncertainty in its estimates for the noisier fMRI data.",
"As noted in Section 2, Shain and Schuler (2021) report negative rate effects in reading i.e., a local decrease in subsequent reading time at each word, especially in SPR.",
"This was interpreted as an inertia effect (faster recent reading engenders faster current reading), but it might also be an artifact of non-linear decreases in latency over time (due to task habituation, e.g. Baayen et al., 2017; Harrington Stack et al., 2018; Prasad and Linzen, 2019) that CDR cannot model.",
"CDRNN estimates nonetheless thus support the prior interpretation of rate effects as inertia, at least in SPR: a model that can flexibly adapt to non-linear habituation trends finds SPR rate estimates that are similar in shape and magnitude to those estimated by CDR.",
"In addition, CDRNN finds a slower response to word surprisal in self-paced reading than in eye-tracking.",
"This result converges with word-discretized timecourses reported in Smith and Levy (2013), who find more extensive spillover of surprisal effects in SPR than in eye-tracking.",
"Results thus reveal important hidden dynamics in the reading response (inertia effects), continuous-time delays in processing effects, and influences of modality the continuous dynamics of sentence processing, all of which are difficult to estimate using existing regression techniques.",
"Greater response latency and more pronounced inertia effects in self-paced reading may be due to the fact that a gross motor task (paging via button presses) is overlaid on the sentence comprehension task.",
"While the motor task is not generally of interest to psycholinguistic theories, controlling for its effects is crucial when using self-paced reading to study sentence comprehension (Mitchell, 1984).",
"CDRNN also allows the analyst to explore other aspects of the IRF, such as functional curvature at a point in time.",
"For example, in the context of reading, Smith and Levy (2013) argue for a linear increase in processing cost as a function of word surprisal.",
"The present study allows this claim to be assessed across modalities by checking the curva-NatStor (SPR) I n s t a n t a n e o u s Dundee NatStor (fMRI) O v er T i m e Figure 3: CDRNN-FF-estimated functional curvature of the 5-gram surprisal response.",
"ture of the 5-gram surprisal response (in raw ms) at a timepoint of interest (0ms for reading and 5s for fMRI).",
"As shown in the top row of Figure 3, reading estimates are consistent with a linear response (the credible interval contains a straight line), as predicted, but are highly non-linear in fMRI, with a rapid peak above the mean (zero-crossing) followed by a sharp dip and plateau, and even an estimated increased response at values below the mean (though estimates at the extremes have high uncertainty).",
"This may be due in part to ceiling effects: blood oxygen levels measured by fMRI are bounded, but reading times are not.",
"While this is again a property of experimental modality rather than sentence comprehension itself, understanding such influences is important for drawing scientific conclusions from experimental data.",
"For example, due to the possibility of saturation, fMRI may not be an ideal modality for testing scientific claims about the functional form of effects, and the linearity assumptions of e.g. CDR and LME may be particularly constraining.",
"The curvature of effects can also be queried over time.",
"If an effect is temporally diffuse but linear, its curvature should be roughly linear at any delay of interest.",
"The second row of Figure 3 shows visualizations to this effect.",
"These plots in fact subsume the kinds of univariate plots shown above: univariate IRFs to 5-gram surprisal like those plotted in Figure 2 are simply slices taken at a predictor value (1 sample standard deviation above the mean), whereas curvature estimates in the first row of Figure 3 are simply slices taken at a time value (0s for reading and 5s for fMRI).",
"Plots are consistent with the linearity hypothesis for reading, but again show strong non-linearities in the fMRI domain that are consistent with saturation effects Delay (s) rate sound power unigram surprisal 5-gram surprisal PCFG surprisal Figure 4: Effect interactions in a CDRNN-FF replication of Shain et al. (2020).",
"In addition to exploring multivariate relationships of a predictor with time, relationships between predictors can also be studied.",
"Such relationships constitute interactions in a CDRNN model, though they are not constrained (cf. interactions in linear models) to be strictly multiplicative indeed, a major advantage of CDRNN is that interactions come for free, along with estimates of their functional form.",
"To explore effect interactions, a CDRNN-FF version of the full model in Shain et al. (2020) is fitted to the fMRI dataset.",
"The model contains more predictors to explore than models considered above, including surprisal computed from a probabilistic context-free grammar ( PCFG surprisal , see Appendix C for details).",
"Univariate IRFs are shown in the top left panel of Figure 4, and pairwise interaction surfaces at a delay of 5s (near the peak response) are shown in the remaining panels.",
"Plots show that the response at any value of the other predictors is roughly flat as a function of sound power (i.e. signal power of the auditory stimulus, middle row).",
"This accords with prior arguments that the cortical language system, whose activity is measured here, does not strongly regis-ter low-level perceptual effects (Fedorenko et al., 2010; Braze et al., 2011).",
"The estimate for unigram surprisal (middle left) shows an unexpected non-linearity: although activity increases with higher surprisal (lower frequency words), it also increases at lower surprisal (higher frequency words), suggesting the existence of high frequency items that nonetheless engender a large response.",
"The interaction between PCFG surprisal and unigram surprisal possibly sheds light on this outcome, since it shows a sharper increase in the PCFG surprisal response in higher frequency (lower unigram surprisal) regions.",
"This may be because the most frequent words in English tend to be function words that play an outsized role in syntactic structure building (e.g. prepositional phrase attachment decisions).",
"In addition, 5-gram surprisal interacts with PCFG surprisal , showing a non-linear increase in response for words that are high on both measures.",
"This is consistent with a unitary predictive mechanism that experiences strong error signals when both string-level (5-gram) and structural (PCFG) cues are poor.",
"All these interactions should be interpreted with caution, since the uncertainty interval covers much weaker degrees of interaction.",
"As discussed in Section 3, CDRNN implements distributional regression and thus also contains an IRF describing the influence of predictors on the variance of the predictive distribution as a function of time.",
"IRFs of the variance can be visualized identically to IRFs of the mean.",
"For example, Figure 5 shows the estimated change in the standard deviation of the predictive distribution over time from observing a stimulus.",
"8 Estimates show stimulus-dependent changes 8 Because standard deviation is a bounded variable and the IRF applies before the constraint function (softplus), the relationship between the standard deviation and the y axis of the plots is not straightforward.",
"Estimates nonetheless clearly indicate the shape and relative contribution to the response in variance across datasets whose shapes are not straightforwardly related to that of the IRFs of the mean (Figure 2).",
"For example, both reading datasets (left and center) generally show mean and standard deviation traveling together, with increases in the mean corresponding to increases in standard deviation.",
"In Dundee, the shapes of these changes resemble each other strongly, whereas in Natural Stories the IRFs of the standard deviation (especially rate ) differ substantially from the IRFs of the mean.",
"By contrast, in fMRI (right), the IRFs of the standard deviation look roughly like inverted HRFs (especially for rate and 5-gram surprisal ), indicating that BOLD variance tends to decrease with larger values of the predictors.",
"While detailed interpretation of these patterns is left to future work, these results demonstrate the utility of CDRNN for analyzing a range of links between predictors and response that are otherwise difficult to study.",
"This study proposed and evaluated CDRNN, a deep neural extension of continuous-time deconvolutional regression that relaxes implausible simplifying assumptions made by widely used regression techniques in psycholinguistics.",
"In so doing, CDRNN provides detailed estimates of human language processing dynamics that are difficult to obtain using other measures.",
"Results showed plausible estimates from human data that generalize better than alternatives and can illuminate hitherto understudied properties of the human sentence processing response.",
"This outcome suggests that CDRNN may play a valuable role in analyzing human experimental data."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"Natural Language Generation (NLG) is a key component in a task-oriented dialogue system, which converts the structured meaning representation (MR) to the natural language.",
"For large-scale conversational systems, where it is common to have over hundreds of intents and thousands of slots, neither template-based approaches nor model-based approaches are scalable.",
"Recently, neural NLGs started leveraging transfer learning and showed promising results in few-shot settings.",
"This paper proposes AUGNLG, a novel data augmentation approach that combines a self-trained neural retrieval model with a few-shot learned NLU model, to automatically create MR-to-Text data from open-domain texts.",
"The proposed system mostly outperforms the state-of-the-art methods on the FEWSHOTWOZ data in both BLEU and Slot Error Rate.",
"We further confirm improved results on the FEWSHOTSGD data and provide comprehensive analysis results on key components of our system.",
"Our code and data are available at https: //github.com/XinnuoXu/AugNLG .",
"Large-scale conversational systems provide a natural interface to achieve various daily-life tasks.",
"Natural Language Generation (NLG) is a key component in such a system to convert the structured meaning representation (MR) to the natural language, as shown in Figure 1.",
"In task-oriented dialogue systems, NLG is typically accomplished by filling out a basic set of developer-provided templates, leading to a conversational system generating unnatural, robotic responses.",
"In order to make the system sound more human-like, model-based NLG approaches, in particular neural models, have recently been gaining an increasing traction (Gao et al., 2018; Wen et al., 2015).",
"However, neither the template-based approaches nor the model-based System MR Intent: request Slot-value pairs: [city =?] Generated Text which city are you interested in?",
"approaches are sufficiently scalable for large-scale conversational systems, where it is common to have over hundreds of intents and thousands of slots.",
"With the rise of neural transfer learning for NLP using pretrained LMs, recently, neural NLGs started to leverage transfer learning and showed some promising results (Radford et al., 2019; Brown et al., 2020; Dai et al., 2019; Edunov et al., 2019).",
"In particular, Peng et al. (2020) proposed FEWSHOTWOZ, the first NLG benchmark test in few-shot learning settings, and achieved a SOTA performance by leveraging existing MR-to-Text data sets via task-specific continued pre-training.",
"Despite the improved result, their approach leaves little room for further improvements as MR-to-Text data are expensive to obtain for new domains, practically circling back to the same scalability problem after exhausting the existing data.",
"In order to go beyond this restriction, this paper proposes AUGNLG, a novel data augmentation approach, that automatically creates MR-to-Text data from open-domain texts by combining a self-trained neural retrieval model with a few-shot learned NLU model.",
"Since our data augmentation approach is orthogonal to the prior transfer learning approaches, one can use our approach in conjunction with other approaches.",
"In experiments, we empirically show that AUGNLG mostly boosts the performance of both the fine-tuned GPT-2 (FT-GPT) (Radford et al., 2019) and SC-GPT (Peng et al., 2020), the continued pretraining approach with existing MR-to-Text data, on the FEWSHOT -Auto-augmented MR-to-Text pairs NLG Pre-training NLG Fine-tuning Plain Text LM Pre-training In-domain MR-to-Text pairs Figure 2: The training procedure for AUGNLG.",
"WOZ task.",
"Furthermore, we construct another fewshot learning testbed, FEWSHOTSGD, out of the Schema-Guided Dialogue (SGD) corpus (Rastogi et al., 2020) and confirm improved results by applying AUGNLG to the FT-GPT.",
"1 Finally, we provide comprehensive analysis results on the key components of our system to gain detailed insights into the relationship between component-wise behavior and various parameters.",
"NLG for Dialogue Response Generation There has been a body of work on neural NLG models, adopting various architectures, such as RNNs (Wen et al., 2015), attention RNNs (Dusek and Jurccek, 2016), SC-LSTM (Wen et al., 2016), T2G2 (Kale and Rastogi, 2020), AdapterCL (Madotto et al., 2020) and associated variants (Tran and Le Nguyen, 2017; Tran et al., 2017).",
"Despite the improved flex-ibility and naturalness over template-based methods, neural approaches require large amounts of annotated data to reach good performance.",
"Data Augmentation Data augmentation has been widely applied to a variety of NLP tasks, including sentence classification (Xie et al., 2020), natural language inference (Hu et al., 2019) and spoken language understanding (Li et al., 2019; Quan and Xiong, 2019; Zhao et al., 2019).",
"Prior approaches for text data utilized back-translation (Sennrich et al., 2016; Edunov et al., 2018), c-BERT word replacement (Jiao et al., 2020), mixed labels and representations (Guo et al., 2019; Chen et al., 2020) and paraphrase data (Gao et al., 2020).",
"However, the range of augmented data will be inherently limited, particularly in few-shot learning settings due to the nature of prior approaches, which only leverages in-domain data.",
"In contrast, we take a rarely explored approach, tapping into a wealth of open-domain text that covers almost all topics.",
"Recently, Du et al. (2021) proposed a self-training method 1 Since SGD accounts for a large portion of the existing MR-to-Text data that SC-GPT utilized in training, we could not apply AUGNLG to SC-GPT for the FEWSHOTSGD task.",
"to augment data for NLU tasks by retrieving sentences from data crawled on the web.",
"However, their method cannot be directly applied to the NLG problem since it does not yield MR annotations.",
"Our approach, in contrast, generates MR-to-Text data by jointly employing a self-trained neural retrieval model with a few-shot learned NLU model.",
"The goal of NLG is to translate an MR A into its natural language response x = (cid:2) x 1 , . . . , x T (cid:3) , where x i is the i th token in the sequence x and T is the sequence length.",
"A is defined as the combination of intent I and slot-value pairs { ( s i , v i ) } Pi =1 : A = {I , ( s 1 , v 1 ) , . . . , ( s P , v P ) } , (1) where the intent stands for the illocutionary type of the system action while slot-value pairs indicate category names and their values to embed in the utterance.",
"For example, in the MR, inform (food = chinese ; price = cheap) , inform is the intent, food and price are two slot keys and chinese and cheap are the corresponding slot values.",
"Given in-domain MR-to-Text data D = { ( A n , x n ) } Nn =1 for training, where N is the number of examples, a statistical neural language model parameterized by is adopted to characterize the conditional probability p ( x |A ) .",
"By adopting the chain rule on auto-regressive generation, the joint probability of x conditioned on A is decomposed as (cid:81) Tt =1 p ( x t | x <t , A ) .",
"The training process, i.e. the learning of , is then defined as maximizing the log-likelihood of the conditional probabilities over the entire training dataset: L ( D ) = | D | (cid:88) n =1 log p ( x n |A n ) .",
"In the few-shot learning setup, the number of training examples N is extremely small (e.g. 50 ), which easily leads to non-fluent generated sentences with many grammar mistakes or missing pieces of information.",
"In order to combat the data sparseness problem, inspired by prior transfer learning approaches, we introduce a three-step pipeline to gradually evolve a general large-scale language model to a domain-specific NLG model (shown in Figure 2): (1) pre-training a base language model with massive amounts of text, (2) NLG-specific continued pre-training with auto-augmented MR-to-Text data, and (3) final fine-tuning with the limited in-domain MR-to-Text ground-truth data.",
"Specifically, in Step (1), we adopt GPT-2 (Rad-ford et al., 2019) as our base language model since GPT-2 has demonstrated a remarkable performance on auto-regressive text generation tasks, which is close to MR-to-Text generation, in a variety of domains.",
"However, GPT-2 is pre-trained on Open-WebText and the language style and topics thereof are quite different from those of daily conversations in a target domain.",
"Furthermore, the generation task in NLG is conditioned on the input MR, as opposed to the unconditioned generation of the underlying GPT-2 pre-training task.",
"Thus, to bring the model a step closer to the final NLG model in the target domain, in Step (2), we continuously pre-train the GPT-2 model on an automatically constructed set of augmented MR-to-Text pairs D (cid:48) = { ( A m , x m ) } Mm =1 , where M is the number of augmented examples, which is much larger than the amount of in-domain ground-truth data.",
"Data augmentation is achieved by retrieving a large amount of relevant text from Reddit (Henderson et al., 2019) with a self-trained neural retrieval model and then synthesizing MRs with a few-shot learned NLU model.",
"The details of data augmentation is described in Section 4.",
"Finally, in Step (3), we fine-tune the NLG model on a limited amount of in-domain ground-truth MR-to-Text pairs D for a final adaptation.",
"The data augmentation procedure aims to construct a large amount of MR-to-Text pairs D (cid:48) from open-domain texts that are relevant to the in-domain ground-truth MR-to-Text pairs D .",
"The augmentation process consists of two stages: (1) retrieving keyword-matching utterances and filtering out domain-irrelevant instances, (2) generating synthetic MR annotations.",
"Figure 3 illustrates the overall pipeline with some examples.",
"For further analysis and studies, we release the data from all intermediate steps for each domain at https://github.com/XinnuoXu/ AugNLG/tree/master/augmented_data .",
"The utterance retrieval and filtering procedure consists of three steps: (1) keyword extraction that collects n-gram keywords from all in-domain utterances X = { x n } Nn =1 ; (2) keyword-based retrieval that searches the open-domain texts for utterances that match any keywords extracted in the previous step, yielding a set of utterances X (cid:48) cand ; (3) self-trained neural classifier that filters out some retrieved utterances that are semantically irrelevant to the target domain.",
"After the filtering, we form an augmented set of utterances X (cid:48) with the unfiltered utterances.",
"Keywords Extraction.",
"To efficiently extract keywords, we first gather all n-gram phrases that appear in X .",
"Since some phrases are too general to be effective, e.g. I cannot , is your , we use TF-IDF scores to measure the specificity of a phrase (see Appendix A for more detail).",
"We first rank the collected n-grams according to their TF-IDF scores and filter out those n-gram phrases with relatively low TF-IDF score.",
"the keywords, we retrieve utterances from the open-domain utterance pool that contains at least one",
"extracted keyword in it.",
"The aim of this step is to source a large amount of domain-relevant utterances X (cid:48) cand based on the surface-level overlap.",
"Self-trained Neural Filtering.",
"Although the keyword-based retrieval is efficient, the retrieved utterances X (cid:48) cand can be quite noisy since an n-gram keyword only matches some part of the utterance, failing to detect the existence of irrelevant pieces in other parts.",
"For example, in Figure 3, even though the utterance With kids movies? contains the keyword with kids , it is irrelevant to the target domain Restaurant given the word movies .",
"Thus, we introduce a self-trained neural classifier to filter out domain-irrelevant utterances from X (cid:48) cand by considering the semantic representation of an entire utterance and yield a domain-relevant set X (cid:48) .",
"The algorithm of the self-training and filtering process is listed in Algorithm 1.",
"We adopt a BERT (Devlin et al., 2019) model with a binary classification layer atop as the base model and then train the classifier with in-domain utterances X and randomly selected open-domain utterances 2 , serving as positive and negative examples ( U + and U ), respectively.",
"After that, the self-training and filtering cycle starts.",
"At each iteration, we make predictions on the utterances in X (cid:48) cand with the classifier 2 All utterances in X (cid:48) cand are excluded from the open-domain utterance pool.",
"To balance the precision and recall, we control the size of the initial negative set such that (cid:12)(cid:12) U (cid:12)(cid:12) = 1 (cid:12)(cid:12) U + (cid:12)(cid:12) , where 1 = 10 .",
"trained in the previous iteration.",
"All utterances with a score over the threshold + , together with the in-domain utterances X , are then taken as a new set of positive examples E + , whereas all utterances with a score less than the threshold are collected as a new set of negative examples E .",
"3 The self-training loop terminates if either the increment of positive examples at the last iteration is less than the threshold or the iterations is over the pre-defined maximum number of iterations.",
"Otherwise, a new classifier is trained on E + and E and the algorithm keeps going on the loop.",
"Once the loop terminated, we label all utterances in X (cid:48) cand with the classifier from the last iteration.",
"Finally, we build a domain-relevant set of augmented utterances X (cid:48) by taking all utterances with a score over the threshold .",
"4 4.2 Synthetic MR Annotation Having built the domain-relevant set of augmented utterances X (cid:48) , we now proceed to synthesize MR labels to produce a complete MR-to-Text dataset D (cid:48) .",
"To this end, we build a few-shot NLU model by fine-tuning a BERT model with in-domain ground-truth data.",
"To put the data in the right format for the NLU task, we take MRs and utterances as labels and model inputs, respectively.",
"Each token is annotated with the slot name if it is a part of the associated slot value and the final hidden state of the special token [CLS] is used to predict the intent (see Figure 5 in Appendix B).",
"Finally, we generate an MR-to-Text dataset D (cid:48) by concatenating the utterances in X (cid:48) with the synthetic MR labels predicted by the few-shot NLU model.",
"Fewshot NLG Data FEWSHOTWOZ is a fewshot NLG benchmark, built upon RNNLG and MultiWOZ (Budzianowski et al., 2018).",
"In each domain, MR-to-Text pairs are grouped according to their delexicalized MRs (i.e. slot values being masked) and a training set is created by taking a pair each from 50 random groups and then the rest are taken as the test set.",
"We also construct a new dataset FEWSHOTSGD by applying the same 3 To guarantee the precision of the positive examples, we use + = 0 .",
"99 and = 0 .",
"5 .",
"Also, we sub-sample negative examples such that (cid:12)(cid:12) E (cid:12)(cid:12) = 2 (cid:12)(cid:12) E + (cid:12)(cid:12) , where 2 = 5 .",
"preparation steps to the SGD corpus.",
"The comparison of FEWSHOTWOZ and FEWSHOTSGD is presented in the top section in Table 1.",
"Comparing to FEWSHOTWOZ, FEWSHOTSGD has (1) more domains, (2) less intents, slots and delexicalized MRs 5 (3) more testing examples for each delexicalized MR, (4) more novel n-grams 6 in test utterances.",
"Augmented Data Since Reddit has shown to provide natural conversational English data, we adopt Reddit (Henderson et al., 2019) as the open-domain utterance pool after filtering for utterances of length between 2 and 40, totalling about 0.7B utterances.",
"The average number of extracted keywords, retrieved utterances, final augmented MR-to-Text pairs and delexicalized MRs over all domains in FEWSHOTWOZ and FEWSHOTSGD are shown in the bottom section of Table 1.",
"The detailed breakdowns of each domain are listed in Table 9 and Table 10 in Appendix C. 5.2 Evaluation Metrics Following Wen et al. (2015) and Peng et al. (2020), we use BLEU score and Slot Error Rate (ERR) for automatic evaluation.",
"BLEU score measures the surface-level similarity between generated responses and human-authored references.",
"Whereas, 5 Note that, the average number of delexicalized MRs in the training set is 33, which means the number of training examples in some domains are less than 50.",
"6 The novelty is calculated by dividing the number of n-grams in the test set that does not appear in the training set by the number of n-grams in the test set.",
"ERR measures the semantic alignment in terms of slot-value insertion and omission.",
"Specifically, ERR = ( p + q ) /M , where M is the total number of slots in the MR and p , q are the number of missing and redundant slots in the surface realisation.",
"Since the SGD dataset does not provide enough information to compute ERR, we report ERR only on FEWSHOTWOZ.",
"We apply our data augmentation approach AUGNLG to two baseline systems,",
"FT-GPT GPT-2 is directly fine-tuned on the in-domain ground-truth MR-to-Text data.",
"We introduce AUGNLG-FT , which further pre-trains GPT-2 on the augmented MR-to-Text data and performs a final fine-tuning on the in-domain data.",
"SC-GPT (Peng et al., 2020) further pre-trains GPT-2 on existing MR-to-Text data borrowed from other NLG corpora and fine-tunes on the in-domain data.",
"We introduce AUGNLG-SC , which pre-trains GPT-2 on both existing MR-to-Text data and automatically augmented data, and finally fine-tunes on the in-domain data.",
"FEWSHOTWOZ Table 2 reports the results on FEWSHOTWOZ.",
"AUGNLG-FT substantially outperforms FT-GPT across all domains in both BLEU and ERR.",
"Similarly, AUGNLG-SC performs better than SC-GPT and achieves the state-of-the-art performance in most domains.",
"Remarkably, AUGNLG-FT achieves a competitive performance with SC-GPT in many domains without leveraging any existing MR-to-Text data.",
"It even outperforms SC-GPT in TV and Attraction domain in both BLEU and ERR.",
"FEWSHOTSGD Table 3 shows the results in FEWSHOTSGD.",
"Due to the higher novelty of the test examples and the smaller amount of training examples (see Avg. # Test Novelty n-gram and # Training Instances in Table 1), FT-GPT performs worse than on FEWSHOTWOZ.",
"This indicates that the few-shot settings on FEWSHOTSGD are even more challenging.",
"But AUGNLG-FT managed to outperform FT-GPT by a large margin via the continued pre-training on the augmented examples.",
"Model Restaurants Hotels Flights Calendar Banks Weather Buses Events FT-GPT 08.98 08.84 12.18 05.27 06.09 10.52 07.77 09.17 AUGNLG-FT 17.83 17.23 17.58 10.45 08.94 13.75 14.26 18.68 Model Homes Media Movies Music Rentalcars Ridesharing Services Travel FT-GPT 03.75 03.17 10.05 05.79 06.79 13.87 09.79 02.08 AUGNLG-FT 12.27 08.62 11.96 12.76 13.32 15.54 16.82 14.35 Table 3: Evaluation results in BLEU on FEWSHOTSGD.",
"Qualitative Evaluation Table 4 compares some generated utterances by different models on FEWSHOTWOZ (examples in FEWSHOTSGD are shown in Table 16 in Appendix E).",
"Both FT-GPT and SC-GPT are prone to omit important slots.",
"Comparing to SC-GPT, FT-GPT tends to over-generate and introduces hallucinations.",
"However, AUGNLG and AUGNLG-SC managed to generate fluent, natural text while precisely reflecting the the input MR. We further examined 70 randomly sampled utterances generated by AUGNLG-SC, whose BLEU scores are lower than those generated by SC-GPT, in the Hotel , Train and Taxi domain to understand some potential factors causing the lower BLEU scores We found that the lower BLEU scores are mainly driven by BLEU penalizing semantically correct paraphrases due to the nature of BLEU only checking surface-level matches.",
"Some examples of such penalization are provided in Table 15 in Appendix E. Only 7 out of the 70 manually checked examples generated by AUGNLG-SC are actually worse than SC-GPT.",
"8 In sum, the results (1) verify the effectiveness of complementing existing transfer learning methods with our novel data augmentation approach; (2) reveal that automatically augmented MR-to-Text data alone can lead to a competitive performance, previously only achieved with existing MR-to-Text data.",
"Since existing MR-to-Text data is not a scalable data source, our approach brings more practical values to real-world applications; (3) indicate that 8 We also examined 70 randomly sampled utterances generated by AUGNLG-SC, whose BLEU scores are equal/higher than those generated by SC-GPT.",
"Among these examples, 35 examples are actually better and 7 examples are worse than the SC-GPT generations.",
"leveraging augmented MR-to-Text data on top of existing MR-to-Text data yields a new SOTA performance on the benchmark test.",
"In this section, we provide comprehensive analysis results on the key components and parameters of our system to gain detailed insights: (1) intrinsic evaluation on augmented data, (2) influence of NLU quality, and (3) performance trends over varying amounts of augmented data.",
"MR coverage (MR Cov.) evaluates the coverage of delexicalized MRs of the test set in the augmented set:",
"where A (cid:48) and A test denote delexicalized MRs in the augmented set and the test set, respectively.",
"Higher MR Cov.",
"values indicate that more delexicalized MRs of the test set appear in the augmented set.",
"Slot coverage (SL Cov.) evaluates the coverage of slot keys of the test set in the augmented set.",
"Language model perplexity (PPL) is the perplexity of augmented utterances calculated by a GPT-2 language model fine-tuned on the test set.",
"Lower PPL values indicate that the distribution of augmented utterances is close to that of the test utterances.",
"Average n-gram novelty (Nvt.) N-gram novelty measures the fraction of the n-grams in the test set Domain: Restaurant Input MR inform(name=marlowe; goodformeal=dinner; area=mission bay) Reference marlowe serves dinner in the mission bay area.",
"where X (cid:48) and X test denote utterances in the augmented set and test set, respectively.",
"Lower Nvt.",
"values indicate that more n-grams of the test set appear in the augmented set.",
"We consider from 1-grams to 4-grams and report the average value.",
"The results of MR Cov.",
"/ SL Cov.",
"on FEWSHOTWOZ and FEWSHOTSGD are shown in Table 5 and Table 6, respectively.",
"SL Cov.",
"achieves 70% in most domains on both datasets while MR Cov.",
"has a wide range of values across domains.",
"Noteworthily, Table 6 strongly correlates with Table 3 Banks and Media domains are worse than other domains in both coverage metrics and NLG performance.",
"On the other hand, Restaurants and Events domains are better than the others in both aspects.",
"Although we do not see the same pattern on FEWSHOTWOZ, it could be attributed to the large variance in the number of delexicalized MRs in each domain (see Table 2 in (Peng et al., 2020)).",
"The results of PPL and Nvt.",
"on FEWSHOTWOZ are shown in Table 7.",
"We compare the augmented data ( AUG ) with the existing MR-to-Text data ( EXIST ).",
"The top section shows that AUG achieves lower PPL values in all seven domains compared to EXIST .",
"The bottom section again demonstrates that AUG achieves lower Nvt.",
"values in most domains.",
"However, in the Train and Taxi domains EXIST attains lower novelty values, which matches the results in Table 2, SC-GPT outperforming AUGNLG-SC in these two domains.",
"9 9 Detailed breakdowns of novelty scores from 1-grams to 4-grams are provided in Table 11 in Appendix C. The Nvt.",
"re-Metrics Data Restaurant Laptop Hotel TV Attraction Train Taxi PPL EXIST 04.14 22.92 04.09 19.53 08.28 09.04 06.74 AUG 03.48 08.46 02.89 05.77 04.73 06.77 06.72 Nvt.",
"Few-shot NLU performance Since few-shot NLU models are a key component of our system, we report their performance in F1 score.",
"For each domain, we evaluate the few-shot NLU model on the Text-to-MR test set, prepared in Section 4.2.",
"The average F1 over all domains on FEWSHOTWOZ and FEWSHOTSGD are 0.77 and 0.68, respectively.",
"A further breakdown over the domains are provided in Table 13 and Table 14 in Appendix D. Influence of NLU Quality The mediocre NLU performance on FEWSHOTSGD leads to the following research question: can better NLU models boost NLG performance?",
"To answer this question, we select four domains from FEWSHOTSGD with relatively low NLU performance: Buses (0.63), Flights (0.74), Movies (0.44), and Ridesharing (0.63).",
"In each domain, we construct a new test set by randomly sampling 500 MR-to-Text pairs from the original test set, and take the rest as the NLU training pool.",
"To obtain NLU models of varying quality, we train a set of models while varying the amount of training data with stratified sampling.",
"The top row in Figure 4 clearly shows that F1 score increases in proportion to the training size, reaching 0.95 in F1 in all four domains.",
"We then annotate the augmented utterances with different sults on FEWSHOTSGD are shown in Table 12 in Appendix C, demonstrating similar trends.",
"NLU models and pre-train the NLG models with the augmented MR-to-Text data updated with new MR labels.",
"Finally, we fine-tune the NLG models on the in-domain training set D and perform evaluation on the newly constructed 500 test set.",
"The bottom row in Figure 4 confirms that there is a general proportional relationship between the performances of NLU and NLG.",
"Lastly, we investigate the relationship between the amount of in-domain ground-truth data and the effect of augmentation.",
"As in the previous section, we build new test sets by randomly taking 500 examples and vary the size of training set to train both NLU and NLG models.",
"Table 8 shows that, in all four domains, the performance difference between AUGNLG-FT and FT-GPT culminates at the smallest training set and gradually diminishes as more training data become available.",
"In this paper, we proposed AUGNLG, a novel data augmentation approach that combines a self-trained retrieval model with a few-shot learned NLU, to automatically create MR-to-Text data from open-domain texts.",
"Experimental results verify the effectiveness of our approach by establishing new SOTA performances on two benchmark tests.",
"More importantly, we showed how our approach complements the previous SOTA approach, which hinges on unscalable data sources, with unlimited open-domain data.",
"Future work includes (1) technical innovations on each component of our system for further performance improvements, (2) exploring self-training on the NLU side too to evolve both the NLU and NLG model at the same time.",
"We would like to thank the first author of Peng et al. (2020), Baolin Peng, for his generous help.",
"We also thank the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"other",
"other"
] |
[
"We present a content-based method for recommending citations in an academic paper draft.",
"We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations.",
"Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process.",
"Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR.",
"We show empirically that, although adding metadata improves the performance on standard metrics, it favors self-citations which are less useful in a citation recommendation setup.",
"We release an online portal for citation recommendation based on our method, 1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"Due to the rapid growth of the scientific literature, conducting a comprehensive literature review has become challenging, despite major advances in digital libraries and information retrieval systems.",
"Citation recommendation can help improve the quality and efficiency of this process by suggesting published scientific documents as likely citations for a query document, e.g., a paper draft to be submitted for ACL 2018.",
"Existing citation recommendation systems rely on various information of the query documents such as author names and publication venue (Ren et al., 2014; Yu et al., Work done while on contract with AI2 Work done while at AI2 1 http://labs.semanticscholar.org/citeomatic/ 2012), or a partial list of citations provided by the author (McNee et al., 2002; Liu et al., 2015; Jia and Saule, 2017) which may not be available, e.g., during the peer review process or in the early stage of a research project.",
"Our method uses a neural model to embed all available documents into a vector space by encoding the textual content of each document.",
"We then select the nearest neighbors of a query document as candidates and rerank the candidates using a second model trained to discriminate between observed and unobserved citations.",
"Unlike previous work, we can embed new documents in the same vector space used to identify candidate citations based on their text content, obviating the need to re-train the models to include new published papers.",
"Further, unlike prior work (Yang et al., 2015; Ren et al., 2014), our model is computationally ef-ficient and scalable during both training and test time.",
"We assess the feasibility of recommending citations when some metadata for the query document is missing, and find that we are able to outperform the best reported results on two datasets while only using papers' textual content (i.e. its title and ab-stract).",
"While adding metadata helps further improve the performance of our method on standard metrics, we found that it introduces a bias for self-citation which might not be desirable in a citation recommendation system.",
"See 5 for details of our experimental results.",
"Our main contributions are: a content-based method for citation recommendation which remains robust when metadata are missing for query documents, large improvements over state of the art results on two citation recommendation datasets despite omitting the metadata, a new dataset of seven million research papers, addressing some of the limitations in 238 d q d 1 d 2 d 3 d 4 d 5 d 6 d 7 d q d 1 d 2 d 3 d 6 d 7 d 4 d 5 d q d 1 cites d 4 d 1 cites d 5 d 3 cites d 7 K=4 Phase 1: candidate selection Phase 2: reranking NNRank d 2 0.7 rerankedlist cited in nearest neighbors: d q NNRank d 6 0.8 d q NNRank d 3 0.5 d q NNRank d 4 0.3 d q NNRank d 7 0.9 d 7 d 6 d 2 d 3 d 4 t o p N = 3 r e c o mm e nd a t i o n s query document document embeddings nearest neighbors of d q : Figure 1: An overview of our Citation Recommendation system.",
"previous datasets used for citation recommendation, and a scalable web-based literature review tool based on this work.",
"2 2 Overview We formulate citation recommendation as a ranking problem.",
"Given a query document d q and a large corpus of published documents, the task is to rank documents which should be referenced in d q higher than other documents.",
"Following previous work on citation recommendation, we use standard metrics (precision, recall, F-measure and mean reciprocal rank) to evaluate our predictions against gold references provided by the authors of query documents.",
"Since the number of published documents in the corpus can be large, it is computationally expensive to score each document as a candidate reference with respect to d q .",
"Instead, we recommend citations in two phases:",
"(i) a fast, recall-oriented candidate selection phase, and",
"(ii) a feature rich, 2 https://github.com/allenai/citeomatic precision-oriented reranking phase.",
"Figure 1 provides an overview of the two phases using a toy example.",
"Phase 1 Candidate Selection: In this phase, our goal is to identify a set of candidate references for d q for further analysis without explicitly iterating over all documents in the corpus.",
"3 Using a trained neural network, we first project all published documents into a vector space such that a document tends to be close to its references.",
"Since the projection of a document is independent of the query document, the entire corpus needs to be embedded only once and can be reused for subsequent queries.",
"Then, we project each query document d q to the same vector space and identify its nearest neighbors as candidate references.",
"See 3 for more details about candidate selection.",
"3 In order to increase the chances that all references are present in the list of candidates, the number of candidates must be significantly larger than the total number of citations of a document, but also significantly smaller than the number of documents in the corpus.",
"Phase 2 Reranking: Phase 1 yields a manageable number of candidates making it feasible to score each candidate d i by feeding the pair ( d q , d i ) into another neural network trained to discriminate between observed and unobserved citation pairs.",
"The candidate documents are sorted by their estimated probability of being cited in d q , and top candidates are returned as recommended citations.",
"See 4 for more details about the reranking model and inference in the candidate selection phase.",
"In this phase, we select a pool of candidate citations for a given query document to be reranked in the next phase.",
"First, we compute a dense embedding of the query document d q using the document embedding model (described next), and select K nearest neighbor documents in the vector space as candidates.",
"4 Following Strohman et al. (2007), we also include the outgoing citations of the K nearest neighbors as candidates.",
"The output of this phase is a list of candidate documents d i and their corresponding scores NNSelect ( d q , d i ) , defined as the cosine similarity between d q and d i in the document embedding space.",
"Document embedding model.",
"We use a supervised neural model to project any document d to a dense embedding based on its textual content.",
"We use a bag-of-word representation of each textual field, e.g., d [ title ] = { content-based', citation', recommendation' } , and compute the feature vector: f d [ title ] = X t d [ title ] w mag t w dir t k w dir t k 2 , (1) where w dir t is a dense direction embedding and w mag t is a scalar magnitude for word type t .",
"5 We then normalize the representation of each field and compute a weighted average of fields to get the document embedding, e d .",
"In our experiments, we use the title and abstract fields of a document d : e d = title f d [ title ] k f d [ title ] k 2 + abstract f d [ abstract ] k f d [ abstract ] k 2 , 4 We tune K as a hyperparameter of our method.",
"5 The magnitude-direction representation is based on Salimans and Kingma (2016) and was found to improve results in preliminary experiments, compared to the standard direction-only word representation.",
"Training.",
"We learn the parameters of the document embedding model (i.e., , w mag , w dir ) using a training set T of triplets h d q , d + , d i where d q is a query document, d + is a document cited in d q , and d is a document not cited in d q .",
"The model is trained to predict a high cosine similarity for the pair ( d q , d + ) and a low cosine similarity for the pair ( d q , d ) using the per-instance triplet loss (Wang et al., 2014): loss = max (cid:0) + s ( d q , d ) s ( d q , d + ) , 0 (cid:1) , (2) where s ( d i , d j ) is defined as the cosine similarity between document embeddings cos-sim ( e d i , e d j ) .",
"We tune the margin as a hyperparameter of the model (see Appendix B for more details).",
"Next, we describe how negative examples are selected.",
"Selecting negative examples.",
"Defining positive examples is straight-forward; we use any ( d q , d + ) pair where a document d q in the training set cites d + .",
"However, a careful choice of negative training examples is critical for model performance.",
"We use three types of negative examples: 1. Random: any document not cited by d q .",
"2. Negative nearest neighbors: documents that are close to d q in the embedding space, but are not cited in it.",
"6 3. Citation-of-citation: documents referenced in positive citations of d q , but are not cited directly in d q .",
"Negative examples belong to at least one of these types that serve different, and complementary purposes.",
"Selecting a paper from the corpus at random as a negative example typically results in easy negative examples.",
"Selecting nearest neighbor documents in the embedding space used for candidate selection enables the re-ranking phase (described in 4) to fix some of the mistakes made in the candidate selection step.",
"Finally, using citations-of-citations as negative examples is based on the assumption that the authors would have included them as positive examples if they were relevant for the query paper.",
"In Appendix A, we describe the number of negative examples of each type used for training.",
"Next, we describe how to rerank the candidate documents.",
"Since the set of approximate neighbors depend on model parameters, we recompute a map from each query document to its K nearest neighbors before each epoch while training the document embedding model.",
"In this phase, we train another model which takes as input a pair of documents ( d q , d i ) and estimates the probability that d i should be cited in d q .",
"Input features.",
"A key point of this work is to assess the feasibility of recommending citations without using metadata, but we describe all features here for completeness and defer this discussion to 5. For each document, we compute dense feature vectors f d [ field ] as defined in Eq.",
"1 for the following fields: title, abstract, authors, venue and keyphrases (if available).",
"For the title and abstract, we identify the subset of word types which appear in both documents (intersection), and compute the sum of their scalar weights as an additional feature, e.g., P t title w t .",
"We also use log number of times the candidate document d i has been cited in the corpus, i.e., log ( d i [ in-citations ] ).",
"Finally, we use the cosine similarity between d q and d i in the embedding space, i.e., cos-sim ( e d q , e d i ) .",
"output layer is defined as: s ( d i , d j ) = FeedForward ( h ) , (3) h = h g title ; g abstract ; g authors ; g venue ; g keyphrases ; cos-sim ( e d q , e d i ); P t title w t ; P t abstract w t ; d i [ in-citations ] i , g field = cos-sim ( f d [ field ] , f d [ field ] ) ,",
"where FeedForward' is a three layer feed-forward neural network with two exponential linear unit layers (Clevert et al., 2015) and one sigmoid layer.",
"';' indicates concatenation.",
"Training.",
"The parameters of the NNRank model are w mag , w dir , w and parameters of the three dense layers in FeedForward'.",
"We reuse the triplet loss in Eq.",
"2 to learn these parameters, but redefine the similarity function s ( d i , d j ) as the sigmoid output described in Eq.",
"3. At test time, we use this model to recommend candidates d i with the highest s ( d q , d i ) scores.",
"In this section, we describe experimental results of our citation recommendation method and compare it to previous work.",
"Datasets.",
"We use the DBLP and PubMed datasets (Ren et al., 2014) to compare with previous work on citation recommendation.",
"The DBLP 241 dataset contains over 50K scientific articles in the computer science domain, with an average of 5 citations per article.",
"The PubMed dataset contains over 45K scientific articles in the medical domains, with an average of 17 citations per article.",
"In both datasets, a document is accompanied by its title, abstract, venue (i.e. journal or conference where the document was published), authors, citations (i.e. other documents in the corpus that are referenced in the given document) and keyphrases (i.e. phrases considered important by automated extraction methods).",
"We replicate the experimental setup of Ren et al. (2014) by excluding papers with fewer than 10 citations and using the standard train, dev and test splits.",
"7 We also introduce OpenCorpus , 8 a new dataset of 7 million scientific articles primarily drawn from the computer science and neuroscience domain.",
"Due to licensing constraints, documents in the corpus do not include the full text of the scientific articles, but include the title, abstract, year, author, venue, keyphrases and citation information.",
"The mutually exclusive training, development, and test splits were selected such that no document in the development or test set has a publication year less than that of any document in the training set.",
"Papers with zero citations were removed from the development and test sets.",
"We describe the key characteristics of OpenCorpus in Table 1. Statistic Value # of documents in corpus 6.9 million # of unique authors 8.3 million # of unique keyphrases 823,677 # of unique venues 23,672 avg.",
"Baselines.",
"We compare our method to two baseline methods for recommending citations: ClusCite and BM25.",
"ClusCite (Ren et al., 2014) clusters nodes in a heterogeneous graph of terms, authors and venues in order to find related documents which should be cited.",
"We use the ClusCite results 7 The dataset characteristics reported here are different from those in Table 3 in (Ren et al., 2014) because we report the size of the filtered datasets while they report the size of the datasets before filtering.",
"as reported in Ren et al. (2014), which compared it to several other citation recommendation methods and found that it obtains state of the art results on the PubMed and DBLP datasets.",
"The BM25 results are based on our implementation of the popular ranking function Okapi BM25 used in many information retrieval systems.",
"See Appendix D for details of our BM25 implementation.",
"Evaluation.",
"We use Mean Reciprocal Rank (MRR) and F1@20 to report the main results in this section.",
"In Appendix F, we also report additional metrics (e.g., precision and recall at 20) which have been used in previous work.",
"We compute F1@20 as the harmonic mean of the corpus-level precision and recall at 20 (P@20 and R@20).",
"Following (Ren et al., 2014), precision and recall at 20 are first computed for each query document then averaged over query documents in the test set to compute the corpus-level P@20 and R@20.",
"Configurations.",
"To find candidates in NNSelect , we use the approximate nearest neighbor search algorithm Annoy 9 , which builds a binary-tree structure that enables searching for nearest neighbors in O (log n ) time.",
"To build this tree, points in a high-dimensional space are split by choosing random hyperplanes.",
"We use 100 trees in our approximate nearest neighbors index, and retrieve documents using the cosine distance metric.",
"We use the hyperopt library 10 to optimize various hyperparameters of our method such as size of hidden layers, regularization strength and learning rate.",
"To ensure reproducibility, we provide a detailed description of the parameters used in both NNSelect and NNRank models, our hyperparameter optimization method and parameter values chosen in Appendix A. Main results.",
"Table 2 reports the F1@20 and MRR results for the two baselines and three variants of our method.",
"Since the OpenCorpus dataset is much bigger, we were not able to train the ClusCite baseline for it.",
"Totti et al. (2016) have also found it difficult to scale up ClusCite to larger datasets.",
"Where available, we report the mean standard deviation based on five trials.",
"The first variant, labeled NNSelect , only uses the candidate selection part of our method (i.e., phase 1) to rank candidates by their cosine 9 https://github.com/spotify/annoy 10 https://github.com/hyperopt/hyperopt 242 Method DBLP PubMed OpenCorpus F1@20 MRR F1@20 MRR F1@20 MRR BM25 0.119 0.425 0.209 0.574 0.058 0.218 ClusCite 0.237 0.548 0.274 0.578 NNSelect 0.282 0.002 0.579 0.007 0.309 0.001 0.699 0.001 0.109 0.221 + NNRank 0.302 0.001 0.672 0.015 0.325 0.001 0.754 0.003 0.126 0.330 + metadata 0.303 0.001 0.689 0.011 0.329 0.001 0.771 0.003 0.125 0.330 Table 2: F1@20 and MRR results for two baselines and three variants of our method.",
"similarity to the query document in the embedding space as illustrated in Fig. 1. Although the document embedding space was designed to efficiently select candidates for further processing in phase 2, recommending citations directly based on the cosine distance in this space outperforms both baselines.",
"The second variant, labeled NNSelect + NNRank , uses the discriminative model (i.e., phase 2) to rerank candidates selected by NNSelect , without encoding metadata (venues, authors, keyphrases).",
"Both the first and second variants show that improved modeling of paper text can significantly outperform previous methods for citation recommendation, without using metadata.",
"The third variant, labeled NNSelect + NNRank + metadata, further encodes the metadata features in the reranking model, and gives the best overall results.",
"On both the DBLP and PubMed datasets, we obtain relative improvements over 20% (for F1@20) and 25% (for MRR) compared to the best reported results of ClusCite.",
"In the rest of this section, we describe controlled experiments aimed at analyzing different aspects of our proposed method.",
"Choice of negative samples.",
"As discussed in 3, we use different types of negative samples to train our models.",
"We experimented with using only a subset of the types, while controlling for the total number of negative samples used, and found that using negative nearest neighbors while training the models is particularly important for the method to work.",
"As illustrated in Table 3, on the PubMed dataset, adding negative nearest neighbors while training the models improves the F1@20 score from 0.306 to 0.329, and improves the MRR score from 0.705 to 0.771.",
"Intuitively, using nearest neighbor negative examples focuses training on the harder cases on which the model is more likely to make mistakes.",
"Valuable features.",
"We experimented with different subsets of the optional features used in NNRank in order to evaluate the contribution of various features.",
"We found intersection features, NNSelect scores, and the number of incoming citations to be the most valuable feature.",
"As illustrated in Table 3, the intersection features improves the F1@20 score from 0.296 to 0.329, and the MRR score from 0.653 to 0.771, on the PubMed dataset.",
"The numerical features ( NNSelect score and incoming citations) improve the F1@20 score from 0.314 to 0.329, and improves the MRR score from 0.735 to 0.771.",
"This shows that, in some applications, feeding engineered features to neural networks can be an effective strategy to improve their performance.",
"Performance across venues We studied the variability of performance of our model for papers from different venues.",
"Figure 3 shows the F1@20 score of NNRank for papers belonging to the top 243 0 0 .",
"ten venues (by their paper count) in the Pubmed corpus.",
"NNRank 's performance is robust across venues.",
"Encoding textual features.",
"We also experimented with using recurrent and convolutional neural network to encode the textual fields of query and candidate documents, instead of using a weighted sum as described in Eq.",
"1. We found that recurrent and convolutional encoders are much slower, and did not observe a significant improvement in the overall performance as measured by the F1@20 and MRR metrics.",
"This result is consistent with previous studies on other tasks, e.g., Iyyer et al. (2015).",
"Number of nearest neighbors.",
"As discussed in 3, the candidate selection step is crucial for the scalability of our method because it reduces the number of computationally expensive pairwise comparisons with the query document at runtime.",
"We did a controlled experiment on the OpenCorpus dataset (largest among the three datasets) to measure the effect of using different numbers of nearest neighbors, and found that both P@20 and R@20 metrics are maximized when NNSelect fetches five nearest neighbors using the approximate nearest neighbors index (and their out-going citations), as illustrated in Table 4. Self-citation bias.",
"We hypothesized that a model trained with the metadata (e.g., authors) could be biased towards self-citations and other well-cited authors.",
"To verify this hypothesis, we compared two NNRank models one with meta-# of neighbors R@20 P@20 Time(ms) 1 0.123 0.079 131 5 0.142 0.080 144 10 0.138 0.069 200 50 0.081 0.040 362 Table 4: OpenCorpus results for NNSelect step with varying number of nearest neighbors on 1,000 validation documents.",
"data, and one without.",
"We measured the mean and max rank of predictions that had at least one author in common with the query document.",
"This experiment was performed with the OpenCorpus dataset.",
"A lower mean rank for NNRank + Metadata indicates that the model trained with metadata tends to favor documents authored by one of the query document's authors.",
"We verified the prevalence of this bias by varying the number of predictions for each model from 1 to 100.",
"Figure 4 shows that the mean and max rank of the model trained with metadata is always lower than those for the model that does not use metadata.",
"Citation recommendation systems can be divided into two categories local and global .",
"A local citation recommendation system takes a few sentences (and an optional placeholder for the candidate citation) as input and recommends citations based on the local context of the input sen-244 tences (Huang et al., 2015; He et al., 2010; Tang and Zhang, 2009; Huang et al., 2012; He et al., 2011).",
"A global citation recommendation system takes the entire scholarly article as input and recommends citations for the paper (McNee et al., 2002; Strohman et al., 2007; Nallapati et al., 2008; Kataria et al., 2010; Ren et al., 2014).",
"We address the global citation recommendation problem in this paper.",
"A key difference of our proposed method compared to previous work is that our method is content-based and works well even in the absence of metadata (e.g. authors, venues, key phrases, seed list of citations).",
"Many citation recommendation systems crucially rely on a query document's metadata.",
"For example, the collaborative filtering based algorithms of McNee et al. (2002); Jia and Saule (2017); Liu et al. (2015) require seed citations for a query document.",
"(Ren et al., 2014; Yu et al., 2012) require authors, venues and key terms of the query documents to infer interest groups and to extract features based on paths in a heterogeneous graph.",
"In contrast, our model performs well solely based on the textual content of the query document.",
"Some previous work (e.g. (Ren et al., 2014; Yu et al., 2012)) have addressed the citation recommendation problem using graph-based methods.",
"But, training graph-based citation recommendation models has been found to be expensive.",
"For example, the training complexity of the ClusCite algorithm (Ren et al., 2014) is cubic in the number of edges in the graph of authors, venues and terms.",
"This can be prohibitively expensive for datasets as large as OpenCorpus .",
"On the other hand our model is a neural network trained via batched stochastic gradient descent that scales very well to large datasets (Bottou, 2010).",
"Another crucial difference between our approach and some prior work in citation prediction is that we build up a document representation using its constituent words only.",
"Prior algorithms (Huang et al., 2015, 2012; Nallapati et al., 2008; Tanner and Charniak, 2015) learn an explicit representation for each training document separately that isn't a deterministic function of the document's words.",
"This makes the model effectively transductive since a never-before-seen document does not have a ready-made representation.",
"Similarly, Huang et al. (2012)'s method needs a candidate document to have at least one in-coming citation to be eligible for citation this disadvantages newly published documents.",
"Liu et al. (2015) form document representations using citation relations, which are not available for unfinished or new documents.",
"In contrast, our method does not need to be re-trained as the corpus of potential candidates grows.",
"As long as the new documents are in the same domain as that of the model's training documents, they can simply be added to the corpus and are immediately available as candidates for future queries.",
"While the citation recommendation task has attracted a lot of research interest, a recent survey paper (Beel et al., 2016) has found three main concerns with existing work:",
"(i) limitations in evaluation due to strongly pruned datasets,",
"(ii) lack of details for re-implementation, and",
"(iii) variations in performance across datasets.",
"For example, the average number of citations per document in the DBLP dataset is 5, but Ren et al. (2014) filtered out documents with fewer than 10 citations from the test set.",
"This drastically reduced the size of the test set.",
"We address these concerns by releasing a new large scale dataset for future citation recommendation systems.",
"In our experiments on the OpenCorpus dataset, we only prune documents with zero outgoing citations.",
"We provide extensive details of our system (see Appendix A) to facilitate reproducibility and release our code 11 .",
"We also show in experiments that our method consistently outperforms previous systems on multiple datasets.",
"Finally, recent work has combined graph node representations and text-based document representations using CCA (Gupta and Varma, 2017).",
"This sort of approach can enhance our text-based document representations if a technique to create graph node representations at test-time is available.",
"In this paper, we present a content-based citation recommendation method which remains robust when metadata is missing for query documents, enabling researchers to do an effective literature search early in their research cycle or during the peer review process, among other scenarios.",
"We show that our method obtains state of the art results on two citation recommendation datasets, even without the use of metadata available to the 11 https://github.com/allenai/citeomatic 245 baseline method.",
"We make our system publicly accessible online.",
"We also introduce a new dataset of seven million scientific articles to facilitate future research on this problem.",
"We would like to thank Oren Etzioni, Luke Zettle-moyer, Doug Downey and Iz Beltagy for participating in discussions and for providing helpful comments on the paper draft; Hsu Han and rest of the Semantic Scholar team at AI2 for creating the OpenCorpus dataset.",
"We also thank Xiang Ren for providing the data used in their experiments on the DBLP and Pubmed datasets.",
"Finally, we thank the anonymous reviewers for insightful comments on the draft."
] | [
"method",
"method",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"other",
"method",
"method",
"objective",
"abstain",
"result",
"result",
"result",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"other",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Automatic image captioning has improved sig-nificantly over the last few years, but the problem is far from being solved, with state of the art models still often producing low quality captions when used in the wild.",
"In this paper, we focus on the task of Quality Estimation (QE) for image captions, which attempts to model the caption quality from a human perspective and without access to ground-truth references, so that it can be applied at prediction time to detect low-quality captions produced on previously unseen images .",
"For this task, we develop a human evaluation process that collects coarse-grained caption annotations from crowdsourced users, which is then used to collect a large scale dataset spanning more than 600k caption quality ratings.",
"We then carefully validate the quality of the collected ratings and establish baseline models for this new QE task.",
"Finally, we further collect fine-grained caption quality annotations from trained raters, and use them to demonstrate that QE models trained over the coarse ratings can effectively detect and filter out low-quality image captions, thereby improving the user experience from captioning systems.",
"Image captioning technology produces automatic image descriptions using natural language with the goal of being consumed by end-users that may not be able to directly access the images.",
"This need arises either because the user has a permanent condition (accessibility for visually impaired people), or due to a temporary situation where the user cannot use the visual modality (such as limited bandwidth, or smart voice-assistant).",
"In any of these situations, exposing the end-users to a generated caption that is incorrect negatively impacts user-trust, as it can have undesirable consequences for how they act next (for example, how they comment on a social-media site based on their misguided understanding).",
"In this paper, we propose to mitigate such risks through Quality Estimation (QE) of image captions.",
"That is, we propose to automatically compute a quality estimation score QE ( image, caption ) for a generated caption, and use it to control the quality of the captions presented to the user.",
"For example, by filtering out captions with a low QE score (below a carefully chosen threshold), only high scoring captions would be served thereby minimizing the risks associated with low-quality captions.",
"We emphasize two aspects of QE that have guided us in our design choices: First, the QE task is distinct from the model selection task: model selection measures output similarity to a fixed, ground-truth annotated dataset during training time (with traditional offline solutions such as CIDEr and SPICE).",
"In contrast, a QE model estimates the caption quality with respect to the input image only and does so on previously unseen samples at prediction time where ground-truth captions are unavailable.",
"Second, a QE model's goal is to assess the caption as a whole and relate it to the image content in a way that QE(image, caption) aligns with human understanding of language and their perception of visual information.",
"To address these aspects we develop an image-caption evaluation process for collecting vast amounts of human judgements.",
"Specifically, we design the process to elicit only the type of human signal that is required for quality estimation human annotators are shown the image and asked to evaluate the caption as a whole by simply answering whether it is good or not.",
"This type of high level feedback trades away the ability to understand in what way the caption is wrong, but its simplicity enables scaling up human evaluations to cover many more images, which promotes the generalization of the QE model to unseen images.",
"The dataset resulting from the evaluation process includes captions generated by various image-captioning model over 16,000 unique images from the Open Image Dataset (Kuznetsova et al., 2018) for a total of 55,000 unique (cid:104) image, caption (cid:105) pairs, over which we collected approximately 600,000 binary human ratings.",
"We denote this dataset as Caption-Quality, provide extensive details on its generation process as well as make it publicly available 1 , available.",
"The following summarizes our contributions:",
"1. We release the Caption-Quality dataset of roughly 65k human rated image-caption pairs, obtained by collecting approximately 600k binary human ratings in total.",
"By analyzing the collected ratings, we show that they encode a stable and consistent signal about the caption.",
"2. We establish baseline results on the QE task and demonstrate that the signal encoded in the collected ratings is learnable, yet, cannot be trivially captured by an image-text simi-lairty model trained over a large scale image-captioning dataset.",
"3. We further test our QE models, trained over the Caption-Quality dataset, and show that they can successfully rank correct-and-helpful captions higher than incorrect or unhelpful ones, even though they were never exposed to such a fine-grained signal.",
"This is done by collecting additional fine-grained caption annotations from trained human raters, over images that are out-of-domain for the QE model.",
"Our paper is most similar to work done on evaluation metrics of image captions, where the main difference is that QE does not have access to the ground truth captions.",
"Quality estimation has more than a decade long history in the Machine Translation (MT) field, from the early work based on feature engineering (Specia et al., 2009; Soricut and Echihabi, 2010), to more recent neural-networkbased approaches (Kreutzer et al., 2015; Kim and Lee, 2016; Kim et al., 2017).",
"The QE track at the WMT conference (Specia et al., 2019) has been running for several years, with multiple participants and notable improvements in model performance over the years.",
"However, there are significant differences in the formulation of the QE task between MT and image captioning, most notably the fact that the MT formulation is 1 https://github.com/ google-research-datasets/image-caption-quality-dataset uni-modal (text-only alignment).",
"As a result, solutions for QE in the MT context tend to focus on feature-engineering that exploits this aspect (Spe-cia et al., 2013; Kreutzer et al., 2015; Martins et al., 2017; Wang et al., 2018).",
"In contrast, QE for Image Captioning is a bi-modal problem (image-and-text alignment), and therefore better suited to approaches based primarily on deep feature representations and multi-modal feature integration, as we present in this paper.",
"Beyond quality estimation modeling, the issue of effectively using quality estimators to improve the accessibility use-case for Blind or Visually Impaired (BVI) people has been previously studied (MacLeod et al., 2017).",
"The main question of their study is how to best inform the BVI user about the uncertainty around the generated captions, experimenting with framing the captions using phrases like I'm not really sure but I think it's $CAPTION or I'm 98% sure that's $CAPTION.",
"The findings are relevant in that BVI users of this technology have difficulties calibrating themselves into trusting or distrusting $CAPTION, mostly because there is no alternative form of reference for the image content.",
"Therefore, if the caption provided to them (even accompanied by I'm not really sure but ...) is in dissonance with the rest of the context (as it may be available in text form, e.g., as part of a tweet thread as in the study cited above), they tend to resolve this dissonance not by believing that the caption is wrong, but by constructing scenarios or explanations that would somehow connect the two sources of information.",
"To mitigate this problem, we propose a thresholding-based approach that simply decides whether to show a caption or not based on a QE model's prediction (See section 6.2).",
"The key contribution of this paper is the Caption-Quality dataset, a large collection of binary human judgments on the quality of machine-generated image captions (in English).",
"Below, we describe the dataset generation process, as well as the rating collection process with which we collect approximately 600,000 binary ratings via crowdsourcing.",
"We then provide an analysis of the ratings which shows that they contain a consistent signal about the captions.",
"Note that in the experiments (sec-tion 6.2), we further verify that indeed this signal captures the quality of the caption as perceived by trained humans annotators.",
"The starting point for our dataset is the Open Images Dataset (OID) (Kuznetsova et al., 2018) from which we randomly sample 16,000 images and then, for legal and privacy concerns, filter out those which contain faces 2 .",
"The choice for OID images is driven by their image copyright status (CC BY) and the fact that they are out-of-domain for popular image captioning datasets such as COCO and Conceptual Captions.",
"To generate a diverse set of captions for annotation, we used several variants of Transformer-based (Vaswani et al., 2017) image-captioning models, trained on the Conceptual Captions dataset (Sharma et al., 2018), which consists of 3.3M training and 15,000 validation images-caption pairs.",
"As previous work indicates (Sharma et al., 2018), for out-of-domain images (OID), captions produced by Conceptual Captions trained models tend to have higher quality compared to captions produced by COCO-trained models.",
"All of the models are trained to minimize the ground-truth caption perplexity; however, they differ on several important aspects (which contributes to caption diversity): the image feature representations, the number of object detection results they use, and the caption decoding procedure.",
"We briefly discuss these differences below; for further details, see (Sharma et al., 2018; Changpinyo et al., 2019).",
"Global Image Representation Our captioning models use one of the following pretrained image encoders: (1) The Inception-ResNet-v2 model (Szegedy et al., 2016), (2) The Picturebook image encoder (Kiros et al., 2018), or, (3) The Graph-RISE model (Juan et al., 2019), a ResNet-101 model (He et al., 2016) trained for an image classification task at ultra-fine granularity levels.",
"Object Representations The identification of objects in an image is done using a Faster R-CNN model, training it to predict both 1,600 object and 400 attribute labels in Visual Genome (Krishna et al., 2017), following the Bottom-Up Top-Down setting (Anderson et al., 2018).",
"In terms of featurization for the identified bounding boxes, we use variants that include a ResNet-101 model pretrained on ImageNet (Russakovsky et al., 2015) 2 Detected using the Google Cloud Vision API, https: //cloud.google.com/vision/ Figure 1: Our caption evaluation interface.",
"and one pre-trained using the Graph-RISE model (Juan et al., 2019).",
"Object Labels In addition to object-level representations, we detect object labels over the entire image, using a ResNet object-detection classifier trained on the JFT dataset (Hinton et al., 2015).",
"The classifier produces a list of detected object-label identifiers, sorted in decreasing order by the classi-fier's confidence score.",
"These identifiers are then mapped to embeddings o j using an object-label embedding layer which is pre-trained to predict label co-occurrences in web documents using a word2vec approach (Mikolov et al., 2013).",
"Traditional approaches for human evaluation of automatically generated text, such as for image captioning (Vinyals et al., 2016) and machine translation (Banchs et al., 2015), approach the task by collecting human ratings across multiple evaluation dimensions, such as correctness, informativeness and fluency.",
"Such fine-grained evaluations are typically used to expose model deficiencies during Set Samples Unique Images Unique Captions Unique Models Train 58354 11027 34532 11 Dev 2392 654 1832 4 Test 4592 1237 3359 4 Table 1: The Caption-Quality dataset statistics development and can also assist during model selection.",
"However, obtaining fine-grained rating on a large scale is a slow and costly process because it requires extensive manual labor by professionally trained human annotators.",
"Furthermore, it is not immediately clear how the resulting multidimensional ratings can be combined to estimate the overall caption quality in a human-like manner.",
"To avoid these complications we develop an evaluation process that asks the human evaluators to rate the generated text not per dimension, but as a whole .",
"The benefits of our approach are threefold: (1) the collected ratings better align with our end goal of quality estimation from a human perspective (2) having a single question accelerates caption evaluation, and (3) it substantially reduces the training and qualification requirements from the raters, which further contributes to the scalability of the evaluation process.",
"Specifically, we formulate the quality of an image-caption as the binomial probability p = P ( GOOD | image, caption ) that can be estimated from the Bernoulli process in which every trial corresponds to a different rater.",
"We then leverage Google's crowdsourcing platform 3 on which we present (image, caption) pairs and ask volunteer raters the following coarse binary question, Is this a good caption for the image? .",
"The raters can then select YES/NO, or skip to the next sample (SKIP) (see Fig. 1).",
"In adopting this approach we take into account the fact that the plat-form's community consists of passionate volunteer raters, who may not have the linguistic background to provide fine-grained annotations.",
"Furthermore, allowing the raters to skip captions reduces the risk of an undecided rater arbitrarily picking YES/NO just to move to the next image.",
"In order to reliably estimate the quality p we collect a high number of 10 ratings per image-caption sample.",
"Once collected, the human ratings are further processed by: (1) filtering out (image, caption) entries that received more than 2 SKIP ratings (practically, the vast majority of images were kept), and (2) estimating p by averaging the 8 to 10 ratings r i for each of the remaining (image, caption) pairs, and rounding to the closest score in { 0 , 18 , . . . , 78 , 1 } , using the equation p = round ( mean ( r i ) 8) / 8 , 3 https://crowdsource.google.com Figure 2: A histogram of the dev, test and train p .",
"where r i is 0 for NO answers and 1 for YES.",
"The resulting dataset, which we call the Caption-Quality v1.0 dataset, is then split into three image-disjoint subsets, used as train, dev and test folds in our experiments.",
"We provide statistics for these subsets in Table 1, as well as histograms of p in Fig.",
"2. Finally, we provide examples from the dev set in Table",
"2. 3.3 Stability Analysis As described above, the interpretation of what a GOOD caption means is left up to the raters, which could lead to unstable or inconsistent human ratings (Graham et al., 2013).",
"In order to verify the stability of the quality ratings p , we study Figure 3: A set of 509 captions were evaluated twice by different sets of 10 raters and 4 weeks apart.",
"the degree of agreement between different sets of 10 raters.",
"We ran an evaluation over the same set of 509 image-captions twice, but 4 weeks apart 4 .",
"An analysis of the difference of scores ( p 1 p 2 ) over these 509 pairs results in an almost zero mean (mean=0.015) as well as low variance (std=0.212).",
"Figure 3 provides a histogram of the differences ( p 1 p 2 ) which clearly shows a concentration of the difference about",
"0. Furthermore, repeating this analysis over a different set of image-captions results in similar statistics.",
"In conclusion, the stability analysis shows that by collecting and averaging 8-10 coarse binary ratings, we obtain consistent and reproducible P(GOOD) estimates p that are well-concentrated on a sample-level .",
"We further collect fine-grained human annotations of image-captions to ascertain that the signal in the Caption-Quality dataset is beneficial for estimating the quality of image captions and filtering out low-quality ones.",
"Specifically, we ask professional human annotators to evaluate image-captions across two specific dimensions: helpfulness and correctness 5 .",
"Fig 4 shows the evaluation interface.",
"Distinguishing between correctness and helpfulness is particularly crucial for quality estimation, as it helps diagnose models that produce abstract 4 The evaluation platform roughly guarantees that the ratings are provided by different subsets of raters.",
"5 We also evaluate along a fluency dimension, but current captioning models tend to produce overall fluent outputs, which makes this dimension non-discriminative.",
"or irrelevant captions which, while correct, do not provide useful image descriptions (specifically, for a person who is unable to see the image).",
"For example, consider the correct yet abstract caption Person in a sport event compared to the more descriptive caption Ice hockey player celebrates his goal against sports team (See Fig 4).",
"Another example of a correct but unhelpful caption is A view of the game from my living room because it conveys more information about the camera position rather than the actual image content.",
"While the previously discussed Fast&Simple evaluation may assign all these captions with similar scores, the fine-grained evaluation is capable of capturing such nuanced differences.",
"We posit that the large-scale annotations obtained by the Fast&Simple approach will enable a model to distinguish between correct-and-helpful captions, and those that are not.",
"We ran the fine-grained evaluation once over 2,700 images, collecting 3 ratings per image.",
"The resulting dataset, denoted Caption-Ext is used for our extrinsic QE evaluations (Sec. 6.2).",
"This section presents a simple bilinear QE model which learns to combine the image and caption features to arrive at a quality estimate QE ( image, caption ) .",
"To construct the bilinear model we rely on expressive image and text representations that are produced by pretrained models that were themselves trained on vast amounts of uni-modal data.",
"Note that aside from building on top of pretrained models, we restrict further modeling to a simple architecture.",
"This was done in order to establish a baseline for our new QE task, as well as to remain focused on providing evidence that the signal in the Caption Quality dataset is both learnable and beneficial for quality estimation of image captions.",
"Our bilinear neural network model relies on three input types: caption, image and object labels.",
"These representations are produced by the following pretrained models: Global Image Embedding For a global image representation, we used the latest Graph-RISE model version (Juan et al., 2019) which produces a compact image embedding i of dimension D i = 64 .",
"Object Labels Embeddings Objects present in the image (e.g. cat, vehicle, flower) can help assess the correctness and helpfulness of a candidate caption, where the intuition is that the caption should likely mention the more salient objects.",
"We use the object label model mentioned in Sec. 3.1, whose resulting embedding sequence is O = ( o 1 , . . . , o | O | ) , where each o j has dimension D o = 256 .",
"Caption Universal Sentence Embedding The caption text is embedded using a pretrained version of the Universal Sentence Encoder (USE) (Cer et al., 2018) into a D s = 512 dimensional vector s .",
"The USE model itself is trained on large amounts of English sources (Wikipedia, web news, discussion forums, etc.) and fine-tuned using supervised labels from the SNLI corpus (Bowman et al., 2015).",
"We have alternatively tried a BERT (Devlin et al., 2019) model as an encoder, but observed it provides no additional gains (Alikhani et al., 2020) Given these features, the bilinear QE model (il-lustrated in Figure 5) processes each individual feature using a dense layer with a leaky-ReLU activation (Xu et al., 2015), and then combines each of the resulting vector pairs using bilinear layers (see below).",
"All bilinear outputs are then concatenated and fed to a dense layer with a sigmoid activation, to produce the quality estimation y .",
"A bilinear layer models the inner product of its two inputs after applying a linear transformation to the second input.",
"This layer is defined as: b ( x, y ; B ) = x T By = (cid:104) x, By (cid:105) (1) where x RD x and y RD y are input features, and B RD x D y is the learned parameter matrix.",
"Linear and bias terms can be added by appending a constant 1 to each of x and y .",
"the interaction between each pair of input-types:",
"1. $B_{o,i} \in \mathbb{R}^{D_o \times D_i}$, applied to each of the object-label embeddings $[o_1, \ldots, o_{|O|}]$ and the image embedding $i$.",
"2. $B_{o,s} \in \mathbb{R}^{D_o \times D_s}$, applied to each of the object-label embeddings $[o_1, \ldots, o_{|O|}]$ and the sentence embedding $s$.",
"3. $B_{i,s} \in \mathbb{R}^{D_i \times D_s}$, for the image embedding $i$ and sentence embedding $s$.",
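To make the architecture concrete, here is a minimal PyTorch sketch of a bilinear QE model in this style. The input dimensions (64, 256, 512) follow the text; the hidden size, the mean-pooling over object labels (the paper instead concatenates all bilinear outputs), and the use of nn.Bilinear are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BilinearQE(nn.Module):
    """Minimal sketch of a bilinear QE model. Input dims (64/256/512) follow
    the text; hidden size, label pooling, and nn.Bilinear are assumptions."""

    def __init__(self, d_i=64, d_o=256, d_s=512, hidden=256):
        super().__init__()
        # One dense + leaky-ReLU projection per input type.
        self.proj_i = nn.Sequential(nn.Linear(d_i, hidden), nn.LeakyReLU())
        self.proj_o = nn.Sequential(nn.Linear(d_o, hidden), nn.LeakyReLU())
        self.proj_s = nn.Sequential(nn.Linear(d_s, hidden), nn.LeakyReLU())
        # One bilinear layer per input-type pair, cf. Eq. (1).
        self.b_oi = nn.Bilinear(hidden, hidden, 1)
        self.b_os = nn.Bilinear(hidden, hidden, 1)
        self.b_is = nn.Bilinear(hidden, hidden, 1)
        self.out = nn.Sequential(nn.Linear(3, 1), nn.Sigmoid())

    def forward(self, image, labels, caption):
        # image: (B, 64), labels: (B, L, 256), caption: (B, 512)
        i = self.proj_i(image)
        o = self.proj_o(labels).mean(dim=1)  # pool over object labels (assumption)
        s = self.proj_s(caption)
        pairs = torch.cat([self.b_oi(o, i), self.b_os(o, s), self.b_is(i, s)], dim=-1)
        return self.out(pairs).squeeze(-1)   # quality estimate in [0, 1]
```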
"Having the large scale Conceptual Captions dataset (Sharma et al., 2018) opens up the option to pretrain a QE model on an image-text similarity task (Cui et al., 2018) before fine-tuning on the Caption-Quality dataset.",
"We exercise this option by setting up a classification task whose goal is to match each image within a mini-batch with its corresponding ground truth caption.",
"Specifically, we feed the bilinear QE model mini-batches of size 256 and train it to detect the ground-truth caption of each image among the other ground-truth captions in the batch (along the lines of noise-contrastive estimation; Gutmann and Hyvärinen, 2010).",
"The pretrained model achieves 62% accuracy over the Conceptual Captions dev set and serves as an image-text similarity baseline.",
"In addition, its parameters serve as a fine-tuning initialization point that is better informed about the relationship between image and text compared to random initialization.",
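The in-batch matching pretraining can be sketched as follows, reusing the model above; treating the model's [0,1] QE score directly as a matching logit is our simplification (a real implementation might drop the output sigmoid during pretraining).

```python
import torch
import torch.nn.functional as F

def in_batch_matching_loss(model, images, labels, captions):
    """Pretraining sketch: each image's ground-truth caption is the positive,
    the other captions in the mini-batch act as negatives (NCE-style)."""
    B = images.size(0)
    # logits[i, j] = model score for (image i, caption j): a (B, B) matrix.
    logits = torch.stack(
        [model(images, labels, captions[j].expand(B, -1)) for j in range(B)],
        dim=1,
    )
    targets = torch.arange(B)  # the matching caption for image i is caption i
    return F.cross_entropy(logits, targets)
```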
"All QE models are trained on the Caption-Quality training set (Section 3).",
"We use Mean Squared Error, $\mathrm{MSE} = \frac{1}{N}\sum_{j=1}^{N} (\hat{y}_j - y_j)^2$, as the loss function, where $\hat{y}_j$ are the predicted scores and $y_j$ the ground-truth human scores.",
"For optimization, we use Adam (Kingma and Ba, 2015) with batch size $B = 256$ and tune the learning rate $lr \in \{1e{-}4, 1e{-}5, 1e{-}6\}$.",
"The dropout rate is set to 0.2 and applied to the inputs of all trainable layers.",
"The following pretrained models are kept fixed during optimization: the image encoder, the USE caption encoder, and the object-label encoder.",
"The number of object labels is tuned over $\{0, 5, 10, 20\}$, while for the pretrained variants it is fixed to 16.",
"Model selection is done by picking the checkpoint that maximizes the dev set Spearman's correlation $S(\hat{y}, y)$.",
"Specifically, compared to MSE (the objective), the Spearman-based selection criterion better matches the intended use of the QE model, where at inference time, only images whose QE scores pass some threshold will be served.",
"Since this threshold can be tuned, the absolute value of the predicted scores $\hat{y}$ is not as critical as obtaining a monotonic relationship between the predicted and ground-truth scores (using $S$ as the loss function is less feasible due to non-differentiability).",
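A sketch of this selection criterion, assuming a hypothetical predict_fn that scores dev examples with a given checkpoint:

```python
from scipy.stats import spearmanr

def select_checkpoint(checkpoints, dev_examples, dev_human_scores, predict_fn):
    """Pick the checkpoint maximizing dev-set Spearman's S. `predict_fn`
    (scoring dev examples with a given checkpoint) is a hypothetical helper."""
    best_ckpt, best_s = None, float("-inf")
    for ckpt in checkpoints:
        preds = predict_fn(ckpt, dev_examples)
        s, _ = spearmanr(preds, dev_human_scores)  # rank correlation, not MSE
        if s > best_s:
            best_ckpt, best_s = ckpt, s
    return best_ckpt, best_s
```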
"We present in Table 3 our dev and test Spearman results based on selecting the best-performing model configurations over the dev set.",
"Rows 1 and 2 show that the bilinear model achieves minor improvements given 20 additional object labels.",
"The poor Spearman scores in row 3, which were obtained without fine-tuning on the Caption-Quality dataset, demonstrate that predicting the human ratings cannot be trivially achieved with an image-text similarity model, even one trained on a dataset as large as Conceptual Captions.",
"On the other hand, after fine-tuning it for the QE task (row 4), both dev and test Spearman scores increase substantially, by 6-7 points over the best non-pretrained variant, which demonstrates the effectiveness of bi-modal pretraining for the QE task.",
"So far we have shown that the signal in Caption-Quality is both consistent and learnable.",
"In this section, we further show that the collected signal is effective for filtering out low-quality image captions.",
"To do so, we evaluate the performance of Caption-Quality trained QE models over the Caption-Ext dataset, a more challenging setting which contains out-of-domain images (non-OID) and where each caption is annotated by three trained raters for its correctness and helpfulness (Sec. 4).",
"Table 3: Spearman's S scores on the Caption-Quality dev and test dataset (higher is better).",

| Model | QE training features | learning rate | S dev | S test | MSE dev | MSE test |
|---|---|---|---|---|---|---|
| Bilinear | image, caption | 1e-5 | 0.49 | 0.47 | 0.055 | 0.056 |
| Bilinear | + 20 object labels | 1e-5 | 0.50 | 0.47 | 0.055 | 0.058 |
| Bilinear (Pretrained) | - | 1e-5 | 0.26 | 0.25 | 0.075 | 0.073 |
| Bilinear (Pretrained) | image, caption, 16 labels | 1e-5 | 0.57 | 0.53 | 0.053 | 0.053 |

"Our analysis reveals that QE models trained over the Caption-Quality dataset generalize well to this harder task, having the ability to distinguish between correct-and-helpful image-captions and those that are not, even though these models were never exposed to such fine-grained signal.",
"Specifically, for a given image, we define a caption as Ext-Good (extrinsically good) if a majority of raters agreed that it is at least partially-correct, and, a majority of raters agreed it is at least somewhat-useful.",
"With this definition, we compute the Ext-Good precision and recall statistics of a QE model $Q$ for each threshold $th \in [0, 1]$ using the following equations: $\mathrm{precision}^Q_{th} = \frac{\sum_s \mathbb{1}_{s \in \mathrm{ExtGood}} \cdot \mathbb{1}_{QE(s) > th}}{\sum_s \mathbb{1}_{QE(s) > th}}$ (2) and $\mathrm{recall}^Q_{th} = \frac{\sum_s \mathbb{1}_{s \in \mathrm{ExtGood}} \cdot \mathbb{1}_{QE(s) > th}}{\sum_s \mathbb{1}_{s \in \mathrm{ExtGood}}}$ (3), where the indicator variable $\mathbb{1}_{s \in \mathrm{ExtGood}}$ is on only when $s$ is Ext-Good, and similarly the indicator variable $\mathbb{1}_{QE(s) > th}$ is on only when the QE score of sample $s$ is higher than the threshold $th$.",
"Figure 6: Precision-Recall curves for the various Bilinear models.",
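Equations (2) and (3) translate directly into code; the following sketch sweeps thresholds to trace the precision-recall curve (function and variable names are ours):

```python
import numpy as np

def precision_recall_at_thresholds(qe_scores, ext_good, thresholds):
    """Implements Eqs. (2)-(3): precision/recall of the served captions
    when only samples with QE score above a threshold are served."""
    qe_scores = np.asarray(qe_scores, dtype=float)
    ext_good = np.asarray(ext_good, dtype=bool)
    curve = []
    for th in thresholds:
        served = qe_scores > th
        tp = np.sum(ext_good & served)            # served AND Ext-Good
        precision = tp / max(served.sum(), 1)     # guard: empty serve set
        recall = tp / max(ext_good.sum(), 1)
        curve.append((th, precision, recall))
    return curve
```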
"Figure 6 shows the precision-recall curves and AUC scores for the same models analyzed in the previous section.",
"A visual inspection of this figure shows that the precision of the pretrained and fine-tuned bilinear model (black) dominates the other models across almost all recall values.",
"Indeed, in terms of AUC, the worst performing model is the image-text similarity baseline (blue; AUC=0.76) which has no access to the Caption-Quality dataset and its human ratings.",
"On the other hand, the pretrained and fine-tuned model (which is also the Spearman-maximizing model) attains the highest AUC score (AUC=0.84).",
"Put differently, to achieve precision=0.8 (i.e., 80% of served captions are both correct and helpful), the image-text similarity model would have to be thresholded to serve only its top 21% scoring image-captions (recall=0.21), while the pretrained and fine-tuned model would serve its top 71% scoring image-captions (recall=0.71, a 3.4x improvement).",
"This analysis clearly demonstrates the usefulness of the Caption-Quality dataset for filtering out image-captions of low quality (where quality is determined by professional human raters).",
"Beyond its relevance for the QE task, we expect that the collected signal in the Caption-Quality dataset will find use in other image captioning tasks, such as (1) fine-grained caption evaluation (that is, caption classifiers that evaluate captions across multiple dimensions), for example by way of pretraining against our dataset, and (2) improving caption generation itself, for example by means of QE-based caption re-ranking, or by using the ratings in a reinforcement learning setup, as has recently been done by Seo et al. (2020).",
"In this paper we discussed how low-quality image-captions can negatively impact end-users and proposed a thresholding solution that relies on quality estimation of image captions, where caption quality is defined from a human perspective.",
"To make this solution feasible we developed a scalable human evaluation process with which we annotated a large number of image-captions with their human estimated quality.",
"We provided supporting evidence that the resulting dataset contains a consistent and reliable signal, as well as reported experimental results over professionally labeled fine-grained caption annotations, which verify that QE models trained over the Caption-Quality dataset are effective at filtering out low-quality image captions.",
"To encourage further research in automatic evaluation of image-captions, we make our large-scale dataset of human judgments available at https://github.com/google-research-datasets/Image-Caption-Quality-Dataset."
] | [
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"method",
"objective",
"result",
"result",
"objective",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"result",
"other",
"abstain"
] |
[
"In classic instruction following, language like I'd like the JetBlue flight maps to actions (e.g., selecting that flight).",
"However, language also conveys information about a user's underlying reward function (e.g., a general preference for JetBlue), which can allow a model to carry out desirable actions in new contexts.",
"We present a model that infers rewards from language pragmatically: reasoning about how speakers choose utterances not only to elicit desired actions, but also to reveal information about their preferences.",
"On a new interactive flightbooking task with natural language, our model more accurately infers rewards and predicts optimal actions in unseen environments, in comparison to past work that first maps language to actions (instruction following) and then maps actions to rewards (inverse reinforcement learning).",
"Language is a natural interface for systems like robots or personal assistants that interact with human users.",
"One way to interpret language in these interactive settings is to train an instruction following agent: a model that learns to map commands like go three steps forward to the door to a sequence of actions in context (e.g., Branavan et al. 2009; Tellex et al. 2011, inter alia).",
"Instructions describe how an agent should act in an immediate context, but to build models that can generalize (carrying out a user's goals in new contexts and learning user preferences over repeated interactions), agents should also infer why actions are taken.",
"Grounding language to reward functions extends the standard instruction following setup in this way, representing the goals and preferences that underlie actions, and allowing agents to autonomously carry out correct actions in new contexts (e.g., Fu et al. 2019).",
"Figure 1: available flights (e.g., JetBlue, Delta) with features such as price, stops, and length, shown alongside the user's true reward function.",
"However, when people interact with systems they often primarily aim to achieve specific tasks,",
"rather than literally describing their preferences in full.",
"How do we infer general goals and preferences from utterances in these settings?",
"Consider a flight booking agent like the one in Figure 1.",
"By inferring the user's reward function (indicating their preference for carrier, price, and other flight features) beyond just selecting the right flight, such a system would be able to autonomously book flights on behalf of the user in other instances.",
"To do so, the system might use the actions the user commands as evidence about what they prefer, recovering rewards from actions using (language-free) techniques like inverse reinforcement learning (IRL; Ng and Russell 2000).",
"For example, the system can select a flight the user might like in a new instance by matching features from their past flight bookings.",
"The key idea of our work is that the way that a user refers to their desired actions with language also reveals important information about their reward: the fact that they said the JetBlue flight and not the expensive flight conveys what matters to them.",
"Intuitively, in settings with repeated interactions, utterances are optimized to communicate information that is generalizable, implicitly helping listeners make useful inferences for acting on a longer horizon.",
"We implement this idea with a pragmatic model of how speakers (humans) generate such language: speakers choose utterances that both elicit reward-maximizing actions in a particular context and faithfully describe the reward.",
"Given an utterance, our model infers that the most likely rewards are the ones that would have made a speaker likely to choose that utterance.",
"To evaluate our model, we construct and release a dataset for mapping language to rewards, FLIGHTPREF , containing natural language utterances from humans with underlying preferences.",
"Humans interact in a multi-turn flight booking game similar to Figure 1, where we provide a user player with a reward function representing flight preferences.",
"The goal of the game is for the user to communicate these preferences in natural language to an assistant player, who is tasked with booking preferred flights for the user.",
"We present this dataset as a challenging benchmark for reward learning from language and interaction.",
"In our experiments, we show that our model can infer reward functions from natural language, improve reward estimates consistently over repeated interactions, and use inferred rewards to accurately select optimal actions in held-out environments.",
"Our full model obtains relative accuracy improvements of 12% when compared to models that only treat language as descriptions of actions.",
"We release our code and dataset at https://github.com/jlin816/rewards-from-language.",
"2 Related Work",
"Instruction following.",
"A long line of work on grounded instruction following has developed various methods for producing actions from language, including approaches that use intermediary structured semantic representations (MacMahon et al., 2006; Tellex et al., 2011; Chen and Mooney, 2011; Matuszek et al., 2013; Artzi and Zettlemoyer, 2013; She et al., 2014; Thomason et al., 2015; Wang et al., 2016; Fried et al., 2018a; Arumugam et al., 2017; Suhr et al., 2018) or map directly to primitive actions (Branavan et al., 2009; Andreas and Klein, 2015; Mei et al., 2016; Bisk et al., 2016; Misra et al., 2017; Guu et al., 2017; Suhr and Artzi, 2018; Anderson et al., 2018; Shridhar et al., 2020).",
"All of these approaches interpret any given utterance (in-struction) solely in the context that elicited the utterance, producing one particular sequence of actions.",
"The method we present extends these approaches, using utterances to infer the rewards that underlie the actions that should be taken across a range of environments: both the context that elicited the utterance, and other unseen environments.",
"Reward learning.",
"The majority of work on reward learning has been in the robotics and reinforcement learning communities and has not incorporated language, rather using techniques such as inverse reinforcement learning (IRL; Ng and Russell 2000; Ratliff et al. 2006; Ziebart et al. 2008; Hadfield-Menell et al. 2017; Jeon et al. 2020) to infer the rewards that underlie human demonstrations of actions.",
"Even works that incorporate language into reward learning also take this primarily action-centric approach: either by using datasets pairing utterances with trajectories and using (language-free) IRL to then recover reward functions from trajectories (MacGlashan et al., 2015; Fu et al., 2019), or learning an instruction-following model guided by a language-conditioned discriminator (Bahdanau et al., 2019).",
"The language in these settings consists of unambiguous commands, giving a complete description of a goal (e.g., go to the red door).",
"In contrast, we are concerned with language used to guide agents in repeated interactions (where language may be a partial or ambiguous mix of instructions and reward descriptions).",
"Pragmatics.",
"A long line of work on pragmatics (Grice, 1975), particularly in the Rational Speech Acts (RSA) framework (Goodman and Frank, 2016), has developed computational models for inferring the behavior or belief that a speaker wishes to induce in a listener.",
"However, the majority of this work has only focused on single-turn interactions, where an utterance conveys an action in a single context, e.g., choosing the correct referent in signaling games (Golland et al., 2010; Frank and Goodman, 2012; Degen et al., 2013; Monroe et al., 2017; McDowell and Goodman, 2019), interpreting implicatures (Goodman and Stuhlmüller, 2013; Bergen et al., 2016), or generating (Fried et al., 2018a; Sumers et al., 2021) or interpreting grounded instructions (Fried et al., 2018b).",
"Figure 2: What reward would make the referenced trajectory optimal?",
"Our work extends this past work by showing that in repeated interactions, listeners can also benefit by reasoning pragmatically about how speakers communicate information about and over longer time horizons.",
"Problem Formulation.",
"We parameterize the user's preference as a reward function $r_\theta$ with parameters $\theta$.",
"In our flight booking domain from Figure 1, $\theta$ is a weight vector which specifies preferences over flight features (carrier, price, etc.).",
"We formalize the general reward inference problem as a sequence of Markov decision processes (MDPs) $M_1, \ldots, M_I$ that share the same reward function $r_\theta$.",
"In each MDP $M_i$, the agent receives an utterance $u_i$ from the user and must execute a trajectory $\tau$.",
"The agent's goal is to infer $\theta$ over the sequence of interactions, which should allow the agent to execute trajectories with high reward in as-yet unseen contexts.",
"The agent maintains an estimate of $\theta$ over the course of interactions.",
"We introduce a model $p(\theta \mid u, M)$ that the agent will use to perform Bayesian updates of a posterior over $\theta$: $p(\theta \mid u_{1:i}, M_{1:i}) \propto p(\theta \mid u_i, M_i)\, p(\theta \mid u_{1:i-1}, M_{1:i-1})$.",
"In the flight domain, we specialize this formulation to study a one-step MDP (contextual bandit).",
"Trajectories consist of a single action, choosing one of the available flights.",
"Over a series of these rounds where the agent books a flight given the user's utterance u i , the agent must infer the user's flight preferences to book flights from other unseen sets of options, without explicit language instruction from the user.",
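For intuition, the recursive posterior update can be sketched over a discretized grid of candidate rewards; likelihood_fn here is a stand-in for the per-round model p(theta | u, M), not the paper's implementation.

```python
import numpy as np

def update_posterior(prior, thetas, utterance, mdp, likelihood_fn):
    """One round of the recursive update over a discretized grid of candidate
    rewards. `likelihood_fn(theta, u, M)` stands in for the per-round model
    p(theta | u, M), up to normalization."""
    lik = np.array([likelihood_fn(th, utterance, mdp) for th in thetas])
    post = prior * lik             # per-round likelihood times previous posterior
    return post / post.sum()       # renormalized p(theta | u_{1:i}, M_{1:i})
```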
"Our model, summarized in Figure 2, defines a rational listener, $L_2$, which predicts a distribution over rewards $\theta$, conditioned on an utterance $u$ and a context $M$.",
"(The terminology we use for listeners and speakers follows Bergen et al. 2016.)",
"The rational listener uses Bayesian reasoning about a speaker model, $S_1$, which produces utterances conditioned on a reward function and context: $p_{L_2}(\theta \mid u, M) \propto p_{S_1}(u \mid \theta, M)\, p(\theta \mid M)$.",
"Key to our model is that the $S_1$ speaker distribution $p_{S_1}(u \mid \theta, M)$ defines how speakers produce language that functions both to elicit correct actions and to describe their underlying reward: $p_{S_1}(u \mid \theta, M) = \alpha\, p_{\mathrm{action}}(u \mid \theta, M) + (1 - \alpha)\, p_{\mathrm{reward}}(u \mid \theta)$, where $\alpha$ controls the speaker's nearsightedness: how much does the speaker care about the listener choosing the correct action in the current context, rather than describing the reward in a context-independent way so that the agent can make good choices in future contexts?",
"Optimizing for action.",
"The behavior-optimizing term $p_{\mathrm{action}}$ specifies that the speaker chooses utterances that elicit reward-maximizing behavior from a listener in the current environment: $p_{\mathrm{action}}(u \mid \theta, M) = \sum_\tau p_{\mathrm{refer}}(u \mid \tau, M)\, p_{\mathrm{opt}}(\tau \mid \theta, M)$, where the optimality model $p_{\mathrm{opt}}(\tau \mid \theta, M)$ specifies the probability that the speaker refers to trajectory $\tau$ if their true reward is $\theta$.",
"We can formulate the optimality model with the Boltzmann distribution common in IRL, where speakers are noisily-rational about which trajectories to refer to: $p_{\mathrm{opt}}(\tau \mid \theta, M) \propto \exp(\beta\, r_\theta(\tau; M))$, with rationality parameter $\beta$.",
"This term specifies that utterances are more likely to refer to trajectories that have high reward according to the speaker's $\theta$, compared to other trajectories in $M$.",
"Then, for a particular trajectory $\tau$, $p_{\mathrm{refer}}(u \mid \tau, M)$ specifies what utterances are likely to refer to that trajectory.",
"In particular, we model that speakers choose utterances that would make a listener execute that trajectory: $p_{\mathrm{refer}}(u \mid \tau, M) \propto p_{L_{\mathrm{base}}}(\tau \mid u, M)$, using a base listener model $L_{\mathrm{base}}$ of the type common in past work on instruction following.",
"We provide details on L base in Section 5.",
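Putting p_refer and p_opt together for the one-step flight setting, a sketch of p_action might look as follows; the p_refer_table and the finite beta are illustrative assumptions (Section 5 notes that beta is effectively taken to infinity), and all names are ours.

```python
import numpy as np

def p_action(u_idx, theta, flight_features, p_refer_table, beta=5.0):
    """Sketch of p_action(u | theta, M): marginalize p_refer against the
    Boltzmann optimality model p_opt over the flights in the context."""
    rewards = np.array([theta @ phi for phi in flight_features])  # r_theta(tau)
    p_opt = np.exp(beta * rewards)
    p_opt /= p_opt.sum()                       # p_opt(tau | theta, M)
    p_refer = p_refer_table[:, u_idx]          # p_refer(u | tau, M)
    return float(p_refer @ p_opt)              # sum over trajectories tau
```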
"Optimizing for reward descriptiveness.",
"Finally, we model $p_{\mathrm{reward}}(u \mid \theta)$, the second term in $p_{S_1}$, with a base speaker model, $S_{\mathrm{base}}$, that maps rewards to reward-descriptive utterances: $p_{S_{\mathrm{base}}}(u \mid \theta)$.",
"We also provide details on S base in Section 5.",
"Our account of pragmatic generation can also be viewed as the graphical model in Figure 3(c), where, importantly, the reward influences the utterance both directly and via the action that the speaker refers to.",
"(In principle, $p_{\mathrm{reward}}$ could also do pragmatic reasoning to optimize a listener's reward belief, but we did not find an improvement from doing so empirically.)",
"We define $p(u \mid \tau, \theta, M)$ to be: $p(u \mid \tau, \theta, M) = \alpha\, p(u \mid \tau, M) + (1 - \alpha)\, p(u \mid \theta, M)$, and assume that utterances are reward-descriptive in a way that is independent of the current context, $p(u \mid \theta, M) = p(u \mid \theta)$.",
"We can confirm this leads us back to $p_{S_1}$ by marginalizing out $\tau$: $p(u \mid \theta, M) = \sum_\tau p(u \mid \tau, \theta, M)\, p(\tau \mid \theta, M) = \sum_\tau \big( \alpha\, p(u \mid \tau, M)\, p(\tau \mid \theta, M) \big) + (1 - \alpha)\, p(u \mid \theta) = \alpha\, p_{\mathrm{action}}(u \mid \theta, M) + (1 - \alpha)\, p_{\mathrm{reward}}(u \mid \theta)$.",
"Using this graphical model, we illustrate how our model differs from prior work in similar settings.",
"Classic reference game pragmatics collapses belief and behavior.",
"In general, RSA allows the speaker to optimize for any utility function, and in the simplest form the utility function optimizes for the listener's belief over world states (Goodman and Frank, 2016).",
"However, in most work on RSA the only relevant world-state belief is belief about behavior, e.g., the referent that should be selected (Figure 3a).",
"Instead, our setting disentangles communication about intended referents in a single context and communication about (reward) beliefs, which influence behavior on longer horizons.",
"Andreas et al. (2017); Sumers et al. (2021) have made the same observation: reference games conflate whether the speaker's objective is to influence beliefs or actions, and modeling the speaker as one or the other produces distinct interpretations of utterances (e.g., speakers that only optimize for correct behavior may do so at the cost of being truthful about the reward).",
"IRL assumes all information about the reward function is modulated by the trajectory.",
"Prior work (MacGlashan et al., 2015; Fu et al., 2019) uses IRL to recover rewards from trajectories (e.g., from datasets pairing utterances with trajectories), and then supervising a model with these induced (utterance, reward) pairs.",
"While prior work has not specifically considered pragmatics (i.e., speaker models), their implicit speaker model amounts to assuming that all information about the reward comes from trajectories, as in Figure 3b.",
"In our experiments we compare against a pragmatic version of this action-centric speaker, which is equivalent to setting $\alpha = 1$ in our model (only using $p_{\mathrm{action}}$).",
"In realistic settings where utterances are not unambiguous commands like go to the red door, it becomes important to model how actions and utterances reveal complementary information about rewards.",
"We design FLIGHTPREF , a task for reward inference from natural language in the flight booking domain.",
"FLIGHTPREF is designed to simulate a simplified interaction with a flight booking agent, where users communicate with the agent via language to book flights from a set of options.",
"Effective agents must not only learn to book the preferred flight given an instruction in the immediate context (instruction following), but also learn the user's preferences over repeated interactions to book preferred flights in unseen contexts.",
"We collect a dataset of natural language in a multi-turn game between a user (the speaker) and an assistant (the listener agent).",
"Each flight is represented by a feature vector $\phi(\tau) \in \mathbb{R}^8$ (e.g., features of carrier, price, etc.).",
"We assume the user has a linear reward function with parameters $\theta \in \mathbb{R}^8$, specifying a reward for a particular flight $r_\theta(\tau) = \theta^\top \phi(\tau)$.",
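A toy example of this linear reward; the feature vectors and preference weights below are made up for illustration.

```python
import numpy as np

# Toy example of the linear reward r(tau) = theta^T phi(tau).
theta = np.array([1.0, -0.5, 0.0, 0.5, -1.0, 0.0, 0.5, 0.0])  # user preferences
flights = {
    "jetblue": np.array([1, 0, 0, 0, 1, 0, 0, 1], dtype=float),
    "delta":   np.array([0, 1, 0, 0, 0, 1, 1, 0], dtype=float),
}
rewards = {name: float(theta @ phi) for name, phi in flights.items()}
best_flight = max(rewards, key=rewards.get)  # the optimal flight under theta
```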
"In the first round of the game, the user and assistant observe a set of three flight options and the user provides an utterance to describe the flight they want (the optimal flight under the reward function), e.g., the flight with the most stops.",
"In each of the subsequent rounds, the user and assistant are presented with a new set of three flights.",
"The assistant can either choose by guessing the user's preferred flight (under the same reward function), or prompt the user for another utterance describing the desired flight in the new set.",
"If the assistant chooses but does so incorrectly, the user is prompted for another utterance describing the correct flight.",
"Both players are penalized if the assistant chooses incorrectly, and earn points if the assistant chooses correctly (with more points for each round the assistant can do so without asking for help).",
"The user is thus incentivized to provide utterances that inform the agent which flight to choose, while enabling long-term success over later rounds.",
"Figure 4: Sample text from the task, exhibiting a diversity of instructive and reward-descriptive language, e.g.: that is the cheapest and has less stops; anything but american; jetblue one; i need a flight with any airline but jet blue, price and number of stops are a bad factor for me also. i prefer delta if affordable and low layovers. can you help me?; even american is undesirable, paying more is important; i like the flight that is $64.",
"To collect data for the task, we recruit Amazon Mechanical Turk workers and randomly pair them to play six games (i.e., six different reward functions) of six rounds each.",
"Each game thus consists of 1-6 utterances describing options for the same reward function in different contexts.",
"One person plays the role of the user and the other acts as the assistant.",
"The user has access to a hidden reward function, which is a discretized, randomly-sampled vector $\theta \in \{-1, -0.5, 0, 0.5, 1\}^8$.",
"In total, we collected 2,568 utterances across 813 games, of which we split off the 91 games with the highest score (where the speaker and listener were able to communicate most effectively) for the evaluation set.",
"More details about the data collection process can be found in Section A of the appendix.",
"A sampling of text is shown in Figure 4.",
"Utterances exhibit a range of phenomena: some users lean towards describing very option-specific features (e.g., i like the flight that is $64).",
"Other users attempt to describe as much of their reward function as possible (e.g., i need a flight with any airline but jetblue, ...); we note that even when they did so, the user's tradeoffs between features remain ambiguous.",
"Many of the utterances are neither fully option-specific nor fully reward-descriptive: instructions like one stop that is short both instruct the agent which flight to select in the present context, while communicating some generalizable (but incomplete) information about the user's preferences.",
"Our pragmatic model (Section 3.1) relies on base listener and speaker models L base and S base .",
"In this section, we describe implementations of these models for the FLIGHTPREF dataset.",
"To train the base models, we use the speaker-side data of (utterance, option set, reward function) tuples from each round.",
"Our base listener and speaker models assume that the utterances are generated conditionally independently given the reward; we capture the dynamics of multiple turns in the posterior reward inference.",
"Both base models learn neural encodings of utterances $u$, actions $\tau$, and rewards $\theta$, and produce distributions by applying softmax functions to inner products between these encodings.",
"We use $\tau^*$ to denote the optimal action in each context, i.e., $\tau^* = \arg\max_\tau r_\theta(\tau)$.",
"Base listener model.",
"The base listener model $L_{\mathrm{base}}$ is defined using inner product similarities between learned representations of actions produced by an MLP encoder, and learned representations of utterances produced by a BERT-base (Devlin et al., 2019) encoder: $p_{L_{\mathrm{base}}}(\tau \mid u, M) \propto \exp(\mathrm{MLP}_{L_{\mathrm{base}}}(\tau) \cdot \mathrm{BERT}_L(u))$, where the distribution is normalized over all actions (flights) available in the context, $\tau' \in M$.",
"We set the rationality parameter $\beta = \infty$ in $p_{\mathrm{opt}}$, as speakers tend to refer primarily to the optimal option in our domain.",
"Base speaker model.",
"The base reward speaker model $S_{\mathrm{base}}$ is defined using an inner product between representations of rewards from an MLP encoder, and utterance representations from a BERT encoder: $p_{S_{\mathrm{base}}}(u \mid \theta) \propto \exp(\mathrm{MLP}_{S_{\mathrm{base}}}(\theta) \cdot \mathrm{BERT}_S(u) / T)$, where $p_{S_{\mathrm{base}}}$ is normalized over a set of utterances taken from the training data (see Section C in the appendix), and $T = 3$ is a temperature parameter.",
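A minimal sketch of the base listener's scoring, assuming encoder modules that return fixed-size vectors of matching dimension (the base speaker is analogous, swapping the action encoder for a reward encoder and adding the temperature):

```python
import torch

def listener_distribution(action_encoder, bert_encoder, flights, utterance_ids):
    """Sketch of L_base: inner-product scores between MLP-encoded flights and
    the BERT-encoded utterance, softmaxed over flights in the context."""
    u = bert_encoder(utterance_ids)                           # (d,)
    acts = torch.stack([action_encoder(f) for f in flights])  # (n_flights, d)
    return torch.softmax(acts @ u, dim=0)                     # p(tau | u, M)
```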
"Training.",
"We fine-tune all model parameters, including the parameters of the initially-pretrained BERT utterance encoders in the listener and speaker, on $(u, \theta, M)$ tuples from the training data using the AdamW optimizer (Kingma and Ba, 2015; Loshchilov and Hutter, 2019).",
"The listener and speaker models are trained separately, without sharing any parameters between the encoders used in the two models.",
"We independently train 5 random seeds of each base model and ensemble them together in evaluation by averaging their output probabilities, which we found improved performance of all models (both our full model and baselines).",
"See Section C in the appendix for details and model hyperparameters.",
"Pragmatic Inference.",
"We follow previous work (Fried et al., 2018a; Monroe et al., 2017) and approximate the $S_1$ distribution by normalizing over a fixed set of utterances: the de-duplicated set of short utterances (less than 8 tokens, making up the majority of utterances) with no digits from the training data.",
"We implement the full pragmatic model p L 2 ( | u, M ) in Pyro (Bingham et al., 2018) and use importance sampling to generate samples from the posterior over rewards.",
"Given our dataset collection procedure (where we uniformly sample rewards), we model a uniform prior over rewards $p(\theta \mid M)$ for the first interaction.",
"We evaluate models in the same repeated turn setup that humans carried out in the task.",
"For each game, models play the role of the listener in that game, updating the reward posterior (Section 3.1) after observing the utterance and option set in each round.",
"Our goal is to estimate rewards that allow the agent to carry out the person's preferences: choosing the optimal option (flight) in unseen contexts (sets of flight options).",
"To that end, we directly compare models on held-out accuracy: on 1,000 randomly-generated sets of three options, how often the model's estimate of the reward, $\hat\theta$, selects the option that is optimal under the true reward.",
"We use the model's reward posterior mean as the estimate, $\hat\theta = \mathbb{E}_p[\theta]$.",
"We additionally provide comparisons of reward L2 distance between the estimated reward and the true reward as a context-independent metric: $\sqrt{\sum_{i=1}^{8} (\hat\theta_i - \theta_i)^2}$, where $\theta$ is the true reward.",
"(Note that when collecting the dataset, we also tested human listeners' ability to generalize, but only had them select an option on a single unseen option set, the next one in the sequence, to make data collection tractable.)",
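Held-out accuracy can be sketched as follows; the uniform feature sampling is a stand-in for the paper's actual option-set generation procedure.

```python
import numpy as np

def heldout_accuracy(theta_hat, theta_true, n_sets=1000, n_flights=3, seed=0):
    """Fraction of random option sets where the estimated and true rewards
    select the same flight."""
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_sets):
        options = rng.uniform(-1, 1, size=(n_flights, len(theta_true)))
        if np.argmax(options @ theta_hat) == np.argmax(options @ theta_true):
            correct += 1
    return correct / n_sets
```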
"For our full action + reward model, we set the nearsightedness parameter $\alpha = 0.5$ for all posterior updates.",
"We compare to an action-only model that uses only $p_{\mathrm{action}}$ (i.e., setting $\alpha = 1.0$).",
"This model is representative of approaches from past work on language-conditioned reward learning (e.g., Mac-Glashan et al. 2015; Fu et al. 2019) that infer rewards purely from the actions that utterances refer to.",
"We also compare to a reward-only model that uses only $p_{\mathrm{reward}}$ (inferring rewards purely from the utterance, without conditioning on actions, i.e., setting $\alpha = 0.0$).",
"For comparison to versions of our approach that remove pragmatic modeling, see Section D.1 in the appendix.",
"In Table 1 we compare all models on held-out accuracy averaged over all rounds in the evaluation set (for each round, having observed all previous rounds in that game).",
"Note that because held-out accuracy is assessed by the proportion of randomly-generated flight sets (out of 1,000) where the true reward function and the inferred reward function pick out the same optimal flight, it is significantly more difficult than achieving high accuracy on a single three-choice instance.",
"Our full action+reward model achieves a held-out accuracy of 59.1%, +6.3% over the action-only model and +1.3% over the reward-only model, indicating that combining both sources of information allows better inference of rewards that enable optimal actions in novel contexts.",
"For reference, an oracle baseline that infers the value of k randomly chosen features perfectly and is uniform on the other features obtains the following held-out accuracies: k = 1 (43%), 2 (51%), 3 (60%), 4 (65%), showing that our model is able to attain similar generalization performance even in the presence of uncertainty (without receiving oracle information about the true value of any feature).",
"We analyze why our model benefits from both components in Section 6.3, and discuss potential for further improvements in Section 6.4.",
"We explore how each model's reward inferences change as more observations are obtained over the course of a game.",
"In Figure 5, we plot held-out accuracy and L2 distance to the true reward as a function of number of observed utterances.",
"Our model outperforms the action-only and reward-only models for all numbers of observed utterances.",
"Inferring evidence from language is most important when there are few observations.",
"While our full action+reward model improves substantially over the action-only model at all points, this improvement generally decreases as more utterances are observed (Figure 5).",
"Conversely, the improvement of the full model over reward-only generally increases.",
"Qualitatively, we observe that this occurs because utterances tend to mention the most extreme features of the reward function, which allow our model to estimate the values of these important features.",
"Figure 6(a): Both the described action (the referenced flight is the one with the highest arrival time) and the explicit reward description in the utterance provide evidence that the user's true reward on arrival time is positive, leading the posterior in our model to (correctly) place more probability mass on positive values of this feature.",
"Figure 6(b): Evidence from actions and from the utterance complement each other: the action-based model captures that rewards that are positive on arrival time make the selected flight optimal, even though it is unmentioned, while the reward-based model captures evidence about the reward from the user's utterance.",
"When there are few observations, inferring reward information from utterances in this way is more informative than using only the option implied by the user's utterance, which does not disambiguate between rewards that select the same option (a commonly discussed problem in IRL; Ziebart et al. (2008)).",
"Inferring evidence from actions is most important when there are more observations.",
"We observe that the action-only model improves more consistently over rounds.",
"Qualitatively, the information that utterances provide about rewards is correlated across multiple rounds: speakers frequently mention salient reward features, whereas actions consistently provide new information about all features.",
"This is particularly pronounced in our domain, due to a relatively small feature and action space.",
"In other more complex domains, actions might provide even more benefits as they provide fine-grained information about reward values and tradeoff boundaries that are more difficult to communicate precisely in language.",
"In this section, we investigate why our model benefits from both the action and reward models.",
"Figure 6 shows the reward posteriors for each model after a single update on a round (starting from a uniform prior).",
"In Figure 6a, we observe how the action-and reward-only models can make correlated updates on an utterance and context where both the action (a flight with a high value on arrival time) and the utterance provide evidence about the arrival time feature.",
"This leads our model's posteriors to aggregate more probability mass on positive values of that feature.",
"In Figure 6b, we show how each model can make inferences about different features for the same context: the action-only model inferring positive values for arrival time given the observed flight, and the reward-only model updating on flight price and stops.",
"Our model posterior aggregates information from both.",
"Some utterances are primarily nearsighted, and others primarily farsighted.",
"Another reason our full model improves is that some utterances are particularly farsighted, mentioning a great deal of explicit information about the reward (which the action-only model cannot take advantage of), while other utterances are more nearsighted, specialized to the particular action, e.g., saying just enough to uniquely identify the optimal flight.",
"Sorting the utterances by difference in accuracy between the action-only and reward-only models confirms that they exhibit qualitatively different phenomena: examples where the reward-only model helps the most are highly reward-descriptive (e.g., if i had a choice, i would never fly with delta and american! get me jetblue or southwest...), while examples where the action-only model helps most have less informative utterances (e.g., the cheaper the better).",
"Our full model is able to handle both kinds of language use.",
"To further analyze the influence of the action and reward component, we evaluate an oracle model that switches between the action-only and reward-only models, choosing the model with highest held-out accuracy in each round.",
"This model outperforms our action+reward model (improving from 59.1 to 62.9% on overall held-out accuracy), suggesting that further improvements could be obtained by integrating evidence from the two models.",
"Doing so optimally is challenging in our setting: when a user says i like the cheap jetblue flight , do they mean to say they like JetBlue generally, or just that they want to choose a desirable flight that happens to be uniquely identified by JetBlue?",
"Future work might explore adaptively switching policies (e.g., using the utterance, or knowledge about the user).",
"While our base models have fairly high performance (e.g., the base listener model L base has an average accuracy of 74% at selecting the optimal choice in each option set that has an utterance in the evaluation data), they naturally have some errors which lead to errors in reward inference.",
"We test the influence of this underlying prediction error by skipping posterior updates on all rounds where the base listener predicts the incorrect option for the true reward function.",
"This change improves held-out accuracy by 6% over the reward-only model after six observations (+4% from the original gap), indicating (1) that the dataset affords future work on improved instruction-following models and (2) that our reward inference procedure benefits from base model improvements.",
"We note that in our task design, the user does not provide a demonstration (i.e., a choice of flight) to the model.",
"However, if it is convenient to obtain demonstrations from users (e.g., a flight booking interface could let the person click on the flight they want in addition to specifying what they want in natural language), demonstrations would effectively serve as an oracle instruction-following model for that context, which could be incorporated into our full reward inference model.",
"We presented a method for using natural language to infer reward functions: representing the goals, preferences, and intents underlying action.",
"Conceptually, our work builds on previous work on language grounding by exploring how language serves a dual purpose.",
"Utterances can refer directly to actions to be taken, as studied in instruction following.",
"Beyond that, they communicate information about why those actions should be taken, and what actions may be desirable in new contexts.",
"To build language-guided agents that can interact with people over longer horizons, it may be useful to model this relationship between language, actions, and rewards.",
"Furthermore, language is ambiguous about both actions and goals.",
"Standard settings for studying pragmatics (e.g., reference games) address how to resolve ambiguity about what object or action the speaker is choosing to refer to.",
"We have explored how these settings can be extended by considering the preferences underlying those choices.",
"We introduced FLIGHTPREF , a new dataset of naturalistic interactions between people in a multi-turn flight booking game.",
"FLIGHTPREF uses held-out accuracy as a metric for evaluating interpretation success beyond selecting the right action in a single environment.",
"Future work can build on the task by 1) learning or evaluating with more complex reward functions (e.g., using deep reward representations); 2) exploring how people communicate about their real preferences and modeling a natural prior (e.g., that people tend to prefer cheaper flights), instead of providing annotators with ground-truth preferences; 3) allowing other ways to handle uncertainty, e.g., leveraging the reward posterior to interactively learn to ask; or 4) extending these approaches to other domains where modeling goals and preferences may be important (e.g., language-conditioned robotics).",
"We thank Eric Wallace, Jerry He, and the other members of the Berkeley NLP group and InterACT Lab for helpful feedback and discussion.",
"This work is supported by a grant from the Office of Naval Research (ONR-YIP)."
] | [
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"We propose a generative model of paraphrase generation that encourages syntactic diversity by conditioning on an explicit syntactic sketch.",
"We introduce Hierarchical Refinement Quantized Variational Autoencoders (HRQ-VAE), a method for learning decompositions of dense encodings as a sequence of discrete latent variables that make iterative refinements of increasing granularity.",
"This hierarchy of codes is learned through end-to-end training, and represents fine-to-coarse grained information about the input.",
"We use HRQ-VAE to encode the syntactic form of an input sentence as a path through the hierarchy, allowing us to more easily predict syntactic sketches at test time.",
"Extensive experiments, including a human evaluation, confirm that HRQ-VAE learns a hierarchical representation of the input space, and generates paraphrases of higher quality than previous systems.",
"Humans use natural language to convey information, mapping an abstract idea to a sentence with a specific surface form.",
"A paraphrase is an alternative surface form of the same underlying semantic content.",
"The ability to automatically identify and generate paraphrases is of significant interest, with applications in data augmentation (Iyyer et al., 2018), query rewriting, (Dong et al., 2017) and duplicate question detection (Shah et al., 2018).",
"While autoregressive models of language (in-cluding paraphrasing systems) predict one token at a time, there is evidence that in humans some degree of planning occurs at a higher level than individual words (Levelt, 1993; Martin et al., 2010).",
"Prior work on paraphrase generation has attempted to include this inductive bias by specifying an alternative surface form as additional model input, either in the form of target parse trees (Iyyer et al., 2018; Chen et al., 2019a; Kumar et al., 2020), exemplars (Meng et al., 2021), or syntactic codes (Shu et al., 2019; Hosking and Lapata, 2021).",
"Figure 1: The generative models underlying our approach; panel (b) shows the generative model (decoder).",
"Most of these approaches suffer from an 'all or nothing' problem: the target surface form must be fully specified during inference.",
"However, predicting the complete syntactic structure is almost as difficult as predicting the sentence itself, negating the benefit of the additional planning step.",
"In this paper, we propose a generative model for paraphrase generation, that combines the diversity introduced by an explicit syntactic target with the tractability of models trained end-to-end.",
"Shown in Figure 1, the model begins by assuming the existence of some semantic content z sem .",
"Conditioned on this semantic information, the model predicts a 'syntactic sketch' in the form of a hierarchical set of discrete codes $q_{1:D}$ that describe the target syntactic structure with increasing granularity.",
"The sketch is combined into an embedding $z_{\mathrm{syn}}$ and fed, along with the original meaning $z_{\mathrm{sem}}$, to a decoder that generates the final output utterance $y$.",
"Choosing a discrete representation for the sketch means it can be predicted from the meaning as a simple classification task, and the hierarchical nature means that the joint probability over the codes admits an autoregressive factorisation, making prediction more tractable.",
"The separation between z sem and z syn is induced by a training scheme introduced in earlier work (Hosking and Lapata, 2021; Huang and Chang, 2021) and inspired by prior work on separated latent spaces (Chen et al., 2019b; Bao et al., 2019), whereby the model must reconstruct a target output from one input with the correct meaning, and another input with the correct syntactic form.",
"To learn the discretized sketches, we propose a variant of Vector-Quantized Variational Autoencoders (VQ-VAE, or VQ) that learns a hierarchy of embeddings within a shared vector space, and represents an input encoding as a path through this hierarchy.",
"Our approach, which we call Hierarchical Refinement Quantized Variational Autoencoders or HRQ-VAE , leads to a decomposition of a dense vector into embeddings of increasing granularity, representing high-level information at the top level before gradually refining the encoding over subsequent levels.",
"Our contributions are summarized as follows: We propose a generative model of natural language generation, HRQ-VAE, that induces a syntactic sketch to account for the diversity exhibited by paraphrases.",
"We present a parameterization of our generative model that is a novel method for learning hierarchical discretized embeddings over a single latent encoding space.",
"These embeddings are trained end-to-end and jointly with the encoder/decoder.",
"We use HRQ-VAE to induce hierarchical sketches for paraphrase generation, demonstrating that the known factorization over codes makes them easier to predict at test time, and leads to higher quality paraphrases.",
"Let y be a sentence, represented as a sequence of tokens.",
"We assume that y contains semantic content, that can be represented by a latent variable z sem .",
"Types of semantic content might include the description of an image, or a question intent.",
"However, the mapping from semantics to surface form is not unique: in general, there is more than one way to express the semantic content.",
"Sentences with the same underlying meaning z sem but different surface form y are paraphrases .",
"Standard approaches to paraphrasing (e.g., Bowman et al. 2016) map directly from z sem to y , and do not account for this diversity of syntactic structure.",
"Following recent work on syntax-guided paraphrasing (Chen et al., 2019a; Hosking and Lapata, 2021), and inspired by evidence that humans plan out utterances at a higher level than individual words (Martin et al., 2010), we introduce an intermediary sketching step, depicted in Figure 1b.",
"We assume that the output sentence y is generated as a function both of the meaning z sem and of a syntactic encoding z syn that describes the structure of the output.",
"Moreover, since natural language displays hierarchical organization in a wide range of ways, including at a syntactic level (constituents may contain other constituents), we also assume that the syntactic encoding $z_{\mathrm{syn}}$ can be decomposed into a hierarchical set of discrete latent variables $q_{1:D}$, and that these $q_d$ are conditioned on the meaning $z_{\mathrm{sem}}$.",
"This contrasts with popular model architectures such as VAE (Bowman et al., 2015) which use a flat internal representation in a dense Euclidean vector space.",
"Intuitively, our generative model corresponds to a process where a person thinks of a message they wish to convey; then, they decide roughly how to say it, and incrementally refine this decision; finally, they combine the meaning with the syntactic sketch to 'spell out' the sequence of words making up the sentence.",
"The graphical model in Figure 1b factorizes as: $p(y, z_{\mathrm{sem}}) = \sum_{q_{1:D}, z_{\mathrm{syn}}} p(y \mid z_{\mathrm{sem}}, z_{\mathrm{syn}})\, p(z_{\mathrm{syn}} \mid q_{1:D})\, p(z_{\mathrm{sem}})\, p(q_1 \mid z_{\mathrm{sem}}) \prod_{d=2}^{D} p(q_d \mid q_{<d}, z_{\mathrm{sem}})$ (1).",
"Although $q_{1:D}$ are conditionally dependent on $z_{\mathrm{sem}}$, we assume that $z_{\mathrm{sem}}$ may be determined from $y$ without needing to explicitly calculate $q_{1:D}$ or $z_{\mathrm{syn}}$.",
"We also assume that the mapping from discrete codes $q_{1:D}$ to $z_{\mathrm{syn}}$ is a deterministic function $f_{q \to z}(\cdot)$.",
"The posterior therefore factorises as: $\phi(z_{\mathrm{sem}}, z_{\mathrm{syn}} \mid y) = \phi(z_{\mathrm{sem}} \mid y)\, \phi(z_{\mathrm{syn}} \mid y)\, \phi(q_1 \mid z_{\mathrm{syn}}) \prod_{d=2}^{D} \phi(q_d \mid q_{<d}, z_{\mathrm{syn}})$ (2).",
"The separation between $z_{\mathrm{sem}}$ and $q_{1:D}$, such that they represent the meaning and form of the input respectively, is induced by the training scheme.",
"During training, the model is trained to reconstruct a target y using z sem derived from an input with the correct meaning (a paraphrase) x sem , and q 1: D from another input with the correct form (a syntactic exemplar) x syn .",
"Hosking and Lapata (2021) showed that the model therefore learns to encode primarily semantic information about the input in z sem , and primarily syntactic information in q 1: D .",
"Exemplars are retrieved from the training data following the process described in Hosking and Lapata (2021), with examples in Appendix C. The setup is shown in Figure 1a; in summary, during training we set $\phi(z_{\mathrm{sem}} \mid y) = \phi(z_{\mathrm{sem}} \mid x_{\mathrm{sem}})$ and $\phi(q_d \mid y, q_{<d}) = \phi(q_d \mid x_{\mathrm{syn}}, q_{<d})$.",
"The final objective is given by: $\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}\big[ -\log p(y \mid z_{\mathrm{sem}}, q_{1:D}) - \log p(q_1 \mid z_{\mathrm{sem}}) - \sum_{d=2}^{D} \log p(q_d \mid q_{<d}, z_{\mathrm{sem}}) \big] + \mathrm{KL}\big[ \phi(z_{\mathrm{sem}} \mid x_{\mathrm{sem}}) \,\|\, p(z_{\mathrm{sem}}) \big]$ (3), where $q_d \sim \phi(q_d \mid x_{\mathrm{syn}})$ and $z_{\mathrm{sem}} \sim \phi(z_{\mathrm{sem}} \mid x_{\mathrm{sem}})$.",
"We assume a Gaussian distribution for $z_{\mathrm{sem}}$, with prior $p(z_{\mathrm{sem}}) = \mathcal{N}(\mathbf{0}, \mathbf{1})$.",
"The encoders ( z sem | x sem ) and ( z syn | x syn ) are Transformers (Vaswani et al., 2017), and we use an autoregressive Transformer decoder for p ( y | z sem , z syn ) .",
"The mapping $f_{q \to z}(\cdot)$ from $q_{1:D}$ to $z_{\mathrm{syn}}$ and the posterior network $\phi(q_d \mid q_{<d}, z_{\mathrm{syn}})$ are more complex, and form a significant part of our contribution.",
"Our choice of parameterization is learned end-to-end, and ensures that the sketches learned are hierarchical both in the shared embedding space and in the information they represent.",
"Let $z_{\mathrm{syn}} \in \mathbb{R}^{D}$ be the output of the encoder network $\phi(z_{\mathrm{syn}} \mid y)$ that we wish to decompose as a sequence of discrete hierarchical codes.",
"Recall that $q_d \in [1, K]$ are discrete latent variables corresponding to the codes at different levels in the hierarchy, $d \in [1, D]$.",
"Each level uses a distinct codebook, $C_d \in \mathbb{R}^{K \times D}$, which maps each discrete code to a continuous embedding $C_d(q_d) \in \mathbb{R}^{D}$.",
"The distribution over codes at each level is a softmax distribution, with the scores $s_d$ given by the (negative squared) distance from each of the codebook embeddings to the residual error between the input and the cumulative embedding from all previous levels: $s_d(q) = -\big( \big[ x - \sum_{d'=1}^{d-1} C_{d'}(q_{d'}) \big] - C_d(q) \big)^2$.",
"Illustrated in Figure 2, these embeddings therefore represent iterative refinements on the quantization of the input.",
"The posterior network ( q d | q <d , z syn ) iteratively decomposes an encoding vector into a path through a hierarchy of clusters whose centroids are the codebook embeddings.",
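A minimal PyTorch sketch of this forward pass follows; hard nearest-neighbour assignment stands in for the Gumbel-softmax relaxation used during training, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class HRQQuantizer(nn.Module):
    """Minimal sketch of hierarchical refinement quantization: at each level,
    pick the codebook entry closest to the current residual."""

    def __init__(self, depth=3, codebook_size=16, dim=64, init_decay=0.5):
        super().__init__()
        # Decaying-norm initialisation: later levels start with smaller norms.
        self.codebooks = nn.ParameterList(
            [nn.Parameter(torch.randn(codebook_size, dim) * init_decay ** d)
             for d in range(depth)]
        )

    def forward(self, z):
        # z: (B, dim) encoder output to decompose into a path of codes.
        residual, z_q, codes = z, torch.zeros_like(z), []
        for C in self.codebooks:
            dists = torch.cdist(residual, C)       # (B, K) distances to centroids
            q = dists.argmin(dim=-1)               # argmax of negative sq. distance
            e = C[q]                               # chosen embeddings, (B, dim)
            z_q, residual = z_q + e, residual - e  # refine the residual
            codes.append(q)
        return codes, z_q
```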
"HRQ-VAE can be viewed as an extension of VQ-VAE (van den Oord et al., 2017), with two significant differences: (1) the codes are hierarchically ordered and the joint distribution $p(q_1, \ldots, q_D)$ admits an autoregressive factorization; and (2) the HRQ-VAE composition function is a sum, compared to concatenation in VQ or a complex neural network in VQ-VAE 2 (Razavi et al., 2019).",
"Under HRQ, latent codes describe a path through the learned hierarchy within a shared encoding space.",
"The form of the posterior ( q d | q <d , z syn ) and the composition function f q z ( ) do not rely on any particular properties of the paraphrasing task; the technique could be applied to any encoding space.",
"Initialisation Decay.",
"Smaller perturbations in encoding space should result in more fine-grained changes in the information they encode.",
"Therefore, we encourage ordering between the levels of hierarchy (such that lower levels encode finer grained information) by initialising the codebook with a decaying scale, such that later embeddings have a smaller norm than those higher in the hierarchy.",
"Specifically, the norm of the embeddings at level d is weighted by a factor ( init ) d 1 .",
"Depth Dropout To encourage the hierarchy within the encoding space to correspond to hierarchical properties of the output, we introduce depth dropout , whereby the hierarchy is truncated at each level during training with some probability p depth .",
"The output of the quantizer is then given by z syn = D (cid:88) d =1 (cid:32) C d ( q d ) d (cid:89) d (cid:48) =1 d (cid:48) (cid:33) , (6) where h Bernoulli (1 p depth ) .",
"This means that the model is sometimes trained to reconstruct the output based only on a partial encoding of the input, and should learn to cluster similar outputs together at each level in the hierarchy.",
"During training the decoder is driven using sketches sampled from the encoder, but at test time exemplars are unavailable and we must predict a distribution over syntactic sketches p ( q 1: D | z sem ) .",
"Modelling the sketches as hierarchical ensures that this distribution admits an autoregressive factorization.",
"We use a simple recurrent network to infer valid codes at each level of hierarchy, using the semantics of the input sentence and the cumulative embedding of the predicted path so far as input, such that q d is sampled from p ( q d | z sem , q <d ) = Softmax ( MLP d ( z sem , z <d )) , where z <d = d 1 (cid:80) d (cid:48) =1 C d (cid:48) ( q d (cid:48) ) .",
"This MLP is trained jointly with the encoder/decoder model, using the outputs of the posterior network ( q d | x syn , q <d ) as targets.",
"To generate paraphrases as test time, we sample from the sketch prediction model p ( q d | z sem , q <d ) using beam search and condition generation on these predicted sketches.",
"We use the Gumbel reparameterisation trick (Jang et al., 2016; Maddison et al., 2017; Snderby et al., 2017) for the discrete codes and the standard Gaussian reparameterisation for the semantic representation.",
"To encourage the model to use the full codebook, we decayed the Gumbel temperature , according to the schedule given in Appendix A. We approximate the expectation in Equation (3) by sampling from the training set and updating via backpropagation (Kingma and Welling, 2014).",
"The full model was trained jointly by optimizing the ELBO in Equation (3).",
"Datasets A paraphrase is an alternative surface form in the same language expressing the same semantic content as the original form' (Madnani and Dorr, 2010), but it is not always clear what counts as the same semantic content'.",
"Our approach requires access to reference paraphrases; we evaluate on three English paraphrasing datasets which have clear grounding for the meaning of each sentence: Paralex (Fader et al., 2013), a dataset of question paraphrase clusters scraped from WikiAn-swers; Quora Question Pairs (QQP) 1 sourced from the community question answering forum Quora; and MSCOCO 2017 (Lin et al., 2014), a set of images that have been captioned by multiple annotators.",
"For the question datasets, each paraphrase is grounded to the (hypothetical) answer they share.",
"We use the splits released by Hosking and Lapata (2021).",
"For MSCOCO, each caption is grounded by the image that it describes.",
"We evaluate on the public validation set, randomly selecting one cap-1 https://www.kaggle.com/c/quora-question-pairs 2492 Figure 3: t-SNE visualisation of the syntactic encodings z syn for 10k examples from Paralex: colours indicate top-level codes q 1 , shapes indicate the second level, and patterns are used to label the third level.",
"Model Configuration Hyperparameters were tuned on the Paralex development set, and reused for the other evaluations.",
"We set the depth of the hierarchy D = 3 , and the codebook size K = 16 .",
"The Transformer encoder and decoder consist of 5 layers each, and we use the vocabulary and token embeddings from BERT-Base (Devlin et al., 2018).",
"We use an initialisation decay factor of init = 0 .",
"5 , and a depth dropout probability p depth = 0 .",
"3 .",
"A full set of hyperparameters is given in Appendix A, and our code is available at https://github.com/tomhosking/hrq-vae .",
"Comparison Systems As baselines, we consider three popular architectures: a vanilla autoencoder (AE) that learns a single dense vector representation of an input sentence; a Gaussian Variational AutoEncoder (VAE, Bowman et al., 2015), which learns a distribution over dense vectors; and a Vector-Quantized Variational AutoEncoder (VQ-VAE, van den Oord et al., 2017), that represents the full input sentence as a set of discrete codes.",
"All three models are trained to generate a sentence from one of its paraphrases in the training data, and are not trained with an autoencoder objective.",
"We implement a simple tf-idf baseline (Jones, 1972), retrieving the question from the training set with the highest cosine similarity to the input.",
"Finally, we include a basic copy baseline as a lower bound, that simply uses the input sentences as the output.",
"We also compare to a range of recent paraphrasing systems.",
"Latent bag-of-words (BoW, Fu et al., 2019) uses an encoder-decoder model with a discrete bag-of-words as the latent encoding.",
"SOW/REAP (Goyal and Durrett, 2020) uses a two stage approach, deriving a set of feasible syntactic rearrangements that is used to guide a second encoder-decoder model.",
"BTmPG (Lin and Wan, 2021) uses multi-round generation to improve diversity and a reverse paraphrasing model to preserve semantic fidelity.",
"We use the results after 10 rounds of paraphrasing.",
"Separator (Hosking and Lapata, 2021) uses separated, non-hierarchical encoding spaces for the meaning and form of an input, and an additional inference model to predict the target syntactic form at test time.",
"All comparison systems were trained and evaluated on our splits of the datasets.",
"As an upper bound, we select a sentence from the evaluation set to use as an oracle syntactic exemplar, conditioning generation on a sketch that is known to represent a valid surface form.",
"Our experiments were designed to test two primary hypotheses: (1) Does HRQ-VAE learn hierarchical decompositions of an encoding space?",
"and (2) Does our choice of generative model enable us to generate high quality and diverse paraphrases?",
"Figure 3 shows a t-SNE (van der Maaten and Hinton, 2008) plot of the syntactic encodings z syn for 10,000 examples from Paralex.",
"The encodings are labelled by their quantization, so that colours indicate top-level codes q 1 , shapes denote q 2 , and patterns q 3 .",
"The first plot shows clear high level structure, with increasingly fine levels of substructure visible as we zoom into each cluster.",
"This confirms that the discrete codes are ordered, with lower levels in the hierarchy encoding more fine grained information.",
"To confirm that intermediate levels of hierarchy represent valid points in the encoding space, we generate paraphrases using oracle sketches, but truncate the sketches at different depths.",
"Masking one level (i.e., using only q 1 , q 2 ) reduces performance by 2 .",
"5 iBLEU points, and two levels by 5 .",
"5 .",
"(iBLEU is an automatic metric for assessing paraphrase quality; see Section 5.2).",
"Although encodings using the full depth are the most informative, partial encodings still lead to good quality output, with a gradual degradation.",
"This implies both that each level in the hierarchy contains useful information, and that the cluster centroids at each level are representative of the individual members of those clusters.",
"Metrics Our primary metric is iBLEU (Sun and Zhou, 2012),",
"that measures the fidelity of generated outputs to reference paraphrases as well as the level of diversity introduced.",
"We use the corpus-level variant.",
"Following the recommendations of Sun and Zhou (2012), we set = 0 .",
"8 , with a sensitivity analysis shown in Appendix A. We also report BLEU ( outputs, references ) as well as Self-BLEU ( outputs, inputs ) .",
"The latter allows us to examine the extent to which models generate paraphrases that differ from the original input.",
"To evaluate the diversity between multiple candidates generated by the same system , we report pairwise-BLEU (Cao and Wan, 2020), P-BLEU = E i (cid:54) = j [ BLEU ( outputs i , outputs j )] .",
"This measures the average similarity between the different candidates, with a lower score indicating more diverse hypotheses.",
"Automatic Evaluation Shown in Table 1, the results of the automatic evaluation highlight the importance of measuring both paraphrase quality and similarity to the input: the Copy baseline is able to achieve high BLEU scores despite simply duplicating the input.",
"The VAE baseline is competitive but tends to have a hi gh Self-BLEU score, indicating that the semantic preservation comes at the cost of low syntactic diversity.",
"HRQ-VAE achieves both higher BLEU scores and higher iBLEU scores than the comparison systems, indicating that it is able to generate 2494 q 1 q 2 q 3 Output Input Two types of fats in body ?",
"0 3 6 What types of fats are in a body?",
"13 7 What types of fats are there in body?",
"2 1 2 How many types of fats are there in the body?",
"3 7 How many types of fats are there in a body?",
"5 3 6 What are the different types of fats in a body?",
"5 7 What are the different types of fats in body?",
"8 7 Types of fats are different from body fat?",
"14 Two types of fats in body?",
"13 0 2 What are the different types of fats in the body?",
"6 What are the different types of fats in a body?",
"3 7 What are two types of fats in a body?",
"5 7 What are the different types of fats in body?",
"8 What are the different types of fats?",
"14 What are the different types of fats in the body?",
"The examples in Table 2 demonstrate that HRQ is able to introduce significant syntactic variation while preserving the original meaning of the input.",
"However, there is still a gap between generation using predicted sketches and oracle' sketches (i.e., when the target syntactic form is known in advance), indicating ample scope for improvement.",
"Worked Example Since the sketches q 1: D are latent variables, interpretation is difficult.",
"However, a detailed inspection of example output reveals some structure.",
"Table 3 shows the model output for a single semantic input drawn from Paralex, across a range of different syntactic sketches.",
"It shows that q 1 is primarily responsible for encoding the question type, with q 1 = 13 leading to what' questions and q 1 = 2 how' questions.",
"q 2 and q 3 encode more fine grained details; for example, all outputs shown with q 3 = 6 use the indefinite article a'.",
"We also examine how using increasingly granular sketches refines the syntactic template of the output.",
"Table 4 shows the model output for a single semantic input, using varying granularities of sketch extracted from the exemplar.",
"When no sketch is specified, the model defaults to a canonical phrasing of the question.",
"When only q 1 is specified, the output becomes a how many' question, and Input Two types of fat in body?",
"Generating Multiple Paraphrases We evaluated the ability of our system to generate multiple diverse paraphrases for a single input, and compared to the other comparison systems capable of producing more than one output.",
"For both HRQ-VAE and Separator, we used beam search to sample from the sketch prediction network as in the top-1 case, and condition generation on the top-3 hypotheses predicted.",
"For BTmPG, we used the paraphrases generated after 3, 6 and 10 rounds.",
"For the VAE, we conditioned generation on 3 different samples from the encoding space.",
"The results in Table 5 show that HRQ-VAE is able to generate multiple high quality paraphrases for a single input, with lower similarity between the candidates than other systems.",
"In addition to automatic evaluation we elicited judgements from crowdworkers on Amazon Mechanical Turk.",
"They were shown a sentence and two paraphrases, each generated by a different system, and asked to select which one was preferred along three dimensions: the dissimilarity of the paraphrase compared to the original sentence; how 2495 Meaning Dissimilarity Fluency 20 0 20 R e l a t i v e p r e f e r e n c e % +36 -16 -24 +4 -33 +9 +27 -3 +22 -24 -5 +8 VAE Latent BoW Separator HRQ-VAE (ours) Figure 4: Results of our human evaluation.",
"well the paraphrase reflected the meaning of the original; and the fluency of the paraphrase (see Appendix B).",
"We evaluated a total of 300 sentences sampled equally from each of the three evaluation datasets, and collected 3 ratings for each sample.",
"We assigned each system a score of +1 when it was selected, 1 when the other system was selected, and took the mean over all samples.",
"Negative scores indicate that a system was selected less often than an alternative.",
"We chose the four best performing models for our evaluation: HRQ-VAE, Separator, Latent BoW, and VAE.",
"Figure 4 shows that although the VAE baseline is the best at preserving question meaning, it is also the worst at introducing variation to the output.",
"HRQ-VAE better preserves the original question intent compared to the other systems while introducing more diversity than the VAE, as well as generating much more fluent output.",
"To confirm that the hierarchical model allows for more expressive sketches, we performed two ablations.",
"We compared to the full model using oracle sketches, so that code prediction performance was not a factor.",
"We set the depth D = 1 and K = 48 , giving equivalent total capacity to the full model ( D = 3 , K = 16 ) but without hierarchy.",
"We also removed the initialisation scaling at lower depths, instead initialising all codebooks with the same scale.",
"Table 6 shows that a non-hierarchical model with the same capacity is much less expressive.",
"We also performed two ablations against the model using predicted sketches; we removed depth dropout, so that the model is always trained on a full encoding.",
"We confirm that learning the code-Variant Paralex QQP MSCOCO HRQ-VAE (oracle) 34.85 33.01 26.07 No initialisation scaling 3.06 2.48 3.02 No hierarchy 8.84 12.72 3.10 HRQ-VAE 24.93 18.42 19.04 No head dropout 0.62 0.74 0.81 Post-hoc k-means 3.30 5.35 2.83 Table 6: Changes in iBLEU score for a range of ablations from our full model.",
"books jointly with the encoder/decoder leads to a stronger model, by first training a model with a continuous Gaussian bottleneck (instead of the HRQ-VAE); then, we recursively apply k -means clustering (Lloyd, 1982), with the clustering at each level taking place over the residual error from all levels so far, analogous to HRQ-VAE.",
"The results of these ablations shown in Table 6 indicate that our approach leads to improvements over all datasets.",
"Hierarchical VAEs VQ-VAEs were initially proposed in computer vision (van den Oord et al., 2017), and were later extended to be hierarchical' (Razavi et al., 2019).",
"However, in vision the term refers to a stacked' version architecture, where the output of one variational layer is passed through a CNN and then another variational layer that can be continuous (Vahdat and Kautz, 2020) or quantized (Williams et al., 2020; Livin et al., 2019; Willetts et al., 2021).",
"Unlike these approaches, we induce a single latent space that has hierarchical properties.",
"Other work has looked at using the properties of hyperbolic geometry to encourage autoencoders to learn hierarchical representations.",
"Mathieu et al. (2019) showed that a model endowed with a Poincar ball geometry was able to recover hierarchical structure in datasets, and Surs et al. (2021) used this property to deal with uncertainty in predicting events in video clips.",
"However, their work was limited to continuous encoding spaces, and the hierarchy discovered was known to exist a priori.",
"Syntax-controlled Paraphrase Generation Prior work on paraphrasing has used retrieval techniques (Barzilay and McKeown, 2001), Residual LSTMs (Prakash et al., 2016), VAEs (Bowman et al., 2016), VQ-VAEs (Roy and Grangier, 2019) and pivot languages (Mallinson et al., 2017).",
"Syntax-controlled paraphrase generation has seen significant recent interest, as a 2496 means to explicitly generate diverse surface forms with the same meaning.",
"However, most previous work has required knowledge of the correct or valid surface forms to be generated (Iyyer et al., 2018; Chen et al., 2019a; Kumar et al., 2020; Meng et al., 2021).",
"It is generally assumed that the input can be rewritten without addressing the problem of predicting which template should be used, which is necessary if the method is to be useful.",
"Hosking and Lapata (2021) proposed learning a simplified representation of the surface form using VQ, that could then be predicted at test time.",
"However, the discrete codes learned by their approach are not independent and do not admit a known factorization, leading to a mismatch between training and inference.",
"We present a generative model of paraphrasing, that uses a hierarchy of discrete latent variables as a rough syntactic sketch.",
"We introduce HRQ-VAE, a method for mapping these hierarchical sketches to a continuous encoding space, and demonstrate that it can indeed learn a hierarchy, with lower levels representing more fine-grained information.",
"We apply HRQ-VAE to the task of paraphrase generation, representing the syntactic form of sentences as paths through a learned hierarchy, that can be predicted during testing.",
"Extensive experiments across multiple datasets and a human evaluation show that our method leads to high quality paraphrases.",
"The generative model we introduce has potential application for any natural language generation task; z sem could be sourced from a sentence in a different language, from a different modality (e.g., images or tabular data) or from a task-specific model (e.g., summarization or machine translation).",
"Furthermore, HRQ-VAE makes no assumptions about the type of space being represented, and could in principle be applied to a semantic space, learning a hierarchy over words or concepts.",
"We thank our anonymous reviewers for their feedback.",
"This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh.",
"Lapata acknowledges the support of the European Research Council (award number 681760, Trans-lating Multiple Modalities into Text)."
] | [
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"method",
"result",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Unsupervised machine translation, which utilizes unpaired monolingual corpora as training data, has achieved comparable performance against supervised machine translation.",
"However, it still suffers from data-scarce domains.",
"To address this issue, this paper presents a novel meta-learning algorithm for unsupervised neural machine translation (UNMT) that trains the model to adapt to another domain by utilizing only a small amount of training data.",
"We assume that domain-general knowledge is a significant factor in handling data-scarce domains.",
"Hence, we extend the meta-learning algorithm, which utilizes knowledge learned from high-resource domains, to boost the performance of low-resource UNMT.",
"Our model surpasses a transfer learning-based approach by up to 2-3 BLEU scores.",
"Extensive experimental results show that our proposed algorithm is pertinent for fast adaptation and consistently outperforms other baselines.",
"Unsupervised neural machine translation (UNMT) leverages unpaired monolingual corpora for its training, without requiring an already labeled, parallel corpus.",
"Recently, the state of the art in UNMT (Conneau and Lample, 2019; Song et al., 2019; Ren et al., 2019) has achieved comparable performances against supervised neural machine translation (NMT) approaches.",
"In contrast to supervised NMT, which uses a parallel corpus, training the UNMT model requires a significant number of monolingual sentences (e.g., 1M-3M sentences).",
"However, the prerequisite limits UNMT's applicability to low-resource domains, especially for equal contributions This work done in NAVER Corp.",
"domain-specific document translation tasks.",
"Since gathering or creating those documents requires domain specific knowledge, the monolingual data themselves are scarce and expensive.",
"In addition, the minority languages (e.g., Uzbek and Nepali) make the problem of data scarcity even worse.",
"Yet, UNMT for low-resource domains is not an actively explored field.",
"One naive approach is to train a model on high-resource domains (e.g., econ-omy and sports) while hoping the model will generalize on an unseen low-resource domain (e.g., medicine).",
"However, recent studies have shown that non-trivial domain mismatch can significantly cause low translation accuracy on supervised NMT tasks (Koehn and Knowles, 2017).",
"Another reasonable approach is transfer learningparticularly, domain adaptationwhich has shown performance improvements in the supervised NMT literature (Freitag and Al-Onaizan, 2016; Zeng et al., 2019).",
"In this approach, the model is first pretrained using data from existing domains and then finetuned on a new domain.",
"However, this approach can suffer from overfitting and catastrophic forgetting due to a small amount of training data and a large domain gap.",
"As an effective method for handling a small amount of training data, meta-learning has shown its superiority in various NLP studies such as dialog generation, machine translation, and natural language understanding (Qian and Yu, 2019; Gu et al., 2018; Dou et al., 2019).",
"In general, the meta-learning approach is strongly affected by the number of different tasks where tasks are defined as languages or domains from the aforementioned studies.",
"However, in practice, the previous studies may struggle to gather data to define tasks because they rely on a supervised model that requires labeled corpora.",
"In this respect, we argue that applying a meta-learning approach to the unsupervised model is more feasible and achievable than the supervised model because it can define multiple different tasks with unlabeled corpora.",
"Therefore, we introduce a new meta-learning approach for UNMT, called MetaUMT, for low-resource domains by defining each task as a domain.",
"The objective of MetaUMT is to find the optimal initialization for the model parameters that can quickly adapt to a new domain even with only a small amount of monolingual data.",
"As shown in Fig. 1",
"(a), we define two different training phases, a meta-train and a meta-test phase, and simulate the domain adaption process to obtain optimally initialized parameters.",
"Specifically, the meta-train phase adapts model parameters to a domain while the meta-test phase optimizes the parameters obtained from the meta-train phase.",
"After obtaining optimally initialized parameters through these two phases, we fine-tune the model using a target domain (i.e., a low-resource domain).",
"Although the initial parameters optimized through MetaUMT are suitable for adapting to a low-resource domain, these parameters may not fully maintain the knowledge of high-resource domains.",
"Concretely, in the meta-test phase, MetaUMT optimizes initial parameters using the adapted parameters; however, it discards meta-train knowledge used to update adapted parameters in the meta-train phase.",
"Therefore, instead of validating the same domain used in the meta-train phase, we intend to inject generalizable knowledge into the initial parameters by utilizing another domain in the meta-test phase.",
"This prevents overfitting from the data scarcity issue.",
"As shown in Fig. 1",
"(b), we propose an improved meta-learning approach called MetaGUMT for low-resource UNMT by explicitly infusing common knowledge across multiple source domains as well as generalizable knowledge from one particular domain to another.",
"In other words, we do not only encourage the model to find the optimally initialized parameters that can quickly adapt to a target domain with low-resource data, but also encourage the model to maintain common knowledge (e.g., general words such as determiners, conjunctions, and pronouns) which is obtainable from multiple source domains.",
"Furthermore, due to a small amount of training data in a low-resource domain, the model can suffer from overfitting; however, we attempt to handle overfitting by leveraging generalizable knowledge that is available from one domain to another.",
"Our proposed meta-learning approach demonstrates consistent improvements over other baseline models.",
"Overall, our contributions can be summarized as follows: We apply a meta-learning approach for UNMT.",
"To the best of our knowledge, this is the first study to use a meta-learning approach for UNMT, where this approach is more suitable to a UNMT task than a supervised one.",
"We empirically demonstrate that our enhanced method, MetaGUMT, shows fast convergence on both pre-training (i.e., meta-learning with source domains) and finetuning (i.e., adapting to a target domain).",
"The model trained with MetaGUMT consistently outperforms all baseline models including MetaUMT.",
"This demonstrates that finding optimally initialized parameters that incorporate high-resource domain knowledge and generalizable knowledge is significant in handling a low-resource domain.",
"Our study leverages two components from the natural language processing (NLP) domain: low-resource NMT and meta-learning.",
"In this section, we discuss previous studies by concentrating on these two main components.",
"Based on the success of attention-based models (Luong et al., 2015; Vaswani et al., 2017), NMT obtains significant improvement in numerous language datasets, even showing promising results (Wu et al.) in different datasets.",
"However, the performance of NMT models depends on the size of the parallel dataset (Koehn and Knowles, 2017).",
"To address this problem, one conventional approach is utilizing monolingual datasets.",
"Recent studies point out the difficulty of gathering parallel data, whereas the monolingual datasets are relatively easy to collect.",
"To facilitate monolingual corpora, several studies apply dual learning (He et al., 2016), back-translation (Sennrich et al., 2016b), and pretraining the model with bilingual corpora (Hu et al., 2019; Wei et al., 2020).",
"Furthermore, as a challenging scenario, recent studies propose the UNMT methods without using any Meta-train",
"parallel corpora (Lample et al., 2018a; Artetxe et al., 2018; Yang et al., 2018).",
"The UNMT models show comparable performances by extending the back-translation method (Conneau et al., 2018) and incorporating methods such as shared Byte Pair Encoding (BPE) (Lample et al., 2018b) and cross-lingual representations (Conneau and Lample, 2019), following those of the supervised NMT.",
"However, since these approaches require plenty of monolingual datasets, they suffer in a low-resource domain.",
"Transferring the knowledge from high-resource domains to a low-resource domain is one alternative way to address this challenge.",
"A few studies concentrate on transferring the knowledge from the rich-resource corpora into the low-resource one.",
"Several models (Chu and Wang, 2018; Hu et al., 2019) show better performances than when trained with the low-resource corpora only.",
"However, these approaches are applicable in specific scenarios where one or both of the source and target domains consist of a parallel corpus.",
"To address the issues, we define a new task as the unsupervised domain adaptation on the low-resource dataset.",
"Our work is more challenging than any other previous studies, since we assume that both the low-resource target domain and the source domain corpora are monolingual.",
"Given a small amount of training data, most of the machine learning models are prone to overfitting, thus failing to find a generalizable solution.",
"To handle this issue, meta-learning approaches seek for how to adapt quickly and accurately to a low-resource task, and show impressive results in various domains (Finn et al., 2017; Javed and White, 2019).",
"The meta-learning approaches aim to find the optimal initialization of the model parameters that adapts the model to a low-resource dataset in a few iterations of training (Finn et al., 2017; Ravi and Larochelle, 2016).",
"Owing to the success of the meta learning, recent studies apply the meta learning to low-resource NMT tasks, including multilingual NMT (Gu et al., 2018) and the domain adaptation (Li et al., 2020).",
"These studies assume that all the training corpora consist of the parallel sentences.",
"However, a recent work (Li et al., 2018) utilizes the meta learning approach to find a generalized model for multiple target tasks.",
"However, it is not focused on adapting a specific target task since its main goal is to handle the target task without using any low-resource data.",
"Our study attempts to address the low-resource UNMT by exploiting meta-learning approaches.",
"Moreover, we present two novel losses that encourage incorporating high-resource knowledge and generalizable knowledge into the model parameters.",
"Our proposed approaches show significant performance improvements in adapting to a low-resource target domain.",
"In this section, we first introduce the notation of the general UNMT models.",
"We then describe the three steps for the UNMT task: initialization, language modeling, and back-translation.",
"On these three steps, we illustrate how each step contributes to improving the performance of UNMT.",
"Notations .",
"We denote S and T as a source and a target monolingual language dataset.",
"x and y represent the source and the target sentences from S and T .",
"We assume the NMT model is parame-Choosing a domain Sampling another domain English Sentences English Sentences English Sentences English Sentences Meta-test phase Metatrain phase (E) Update : (C) Obtain by using one-step gradient : (D) Compute each loss : (B) (A) English Sentences English Sentences English Sentences English Sentences English Sentences English Sentences English Sentences English Sentences English Sentences English Sentences Figure 2: Overall training process of our proposed MetaGUMT.",
"terized by .",
"We also denote M s s and M t t as language models in a source and a target language, respectively, while denoting M s t and M t s as the machine translation models from the source to the target language and vice versa.",
"Initialization .",
"A recent UNMT model (Lample et al., 2018b) is based on a shared encoder and decoder architecture for the source and the target language.",
"Due to the shared encoder and decoder for each language, initializing the model parameters of the shared encoder and decoder is an important step for competitive performances (Conneau et al., 2018; Lample et al., 2018a; Artetxe et al., 2018; Yang et al., 2018).",
"Conneau and Lample (2019) propose the XLM (cross-lingual language model) to initialize parameters, showing significantly improved performances for UNMT.",
"Among various initialization methods, we leverage the XLM as our initialization method.",
"Language modeling .",
"We use a denoising auto-encoder (Vincent et al., 2008) to train the UNMT model, reconstructing an original sentence from a noisy one in a given language.",
"The objective function is defined as follows: L lm = E x S [ log M s s ( x | C ( x ))]+ E y T [ log M t t ( y | C ( y ))] , (1) where C is a noise function described in (Lam-ple et al., 2018b), which randomly drops or swaps words in a given sentence.",
"By reconstructing the sentence from the noisy sentence, the model learns the language modeling in each language.",
"Back-translation .",
"Back-translation helps the model learn the mapping functions between the source and the target language by using only the monolingual sentences.",
"For example, we sample a sentence x and y from source language S and target language T .",
"To make pseudo-pair sentences from the sampled source sentence, we deduce the target sentence from the source sentence, such that y (cid:48) = M s t ( x ) , resulting in the pseudo parallel sentence, i.e., ( x, y (cid:48) ) .",
"Similarly, we obtain ( x (cid:48) , y ) , where x (cid:48) is the translation of a target sentence, i.e., M t s ( y ) .",
"We do not back-propagate when we generate the pseudo-parallel sentence pairs.",
"In short, the back-translation objective function is L bt = E y T [ logM s t (cid:0) y | x (cid:48) (cid:1) ]+ E x S [ logM t s (cid:0) x | y (cid:48) (cid:1) ] .",
"This section first explains our formulation of a low-resource unsupervised machine translation task where we can apply a meta-learning approach.",
"Afterwards, we elaborate our proposed methods, MetaUMT and MetaGUMT.",
"We utilize the meta-learning approach to address a low-resource challenge for unsupervised machine translation.",
"Moreover, we extend MetaUMT into MetaGUMT to explicitly incorporate learned knowledge from multiple domains.",
"Finn et al. (2017) assume multiple different tasks to find the proper initial parameters that can quickly adapt to a new task using only a few training",
"examples.",
"In this paper, we consider tasks in the meta-learning as domains, where D out = {D 0 out , ..., D nout } represents n out-domain datasets (i.e., source domain datasets), and D in indicates an in-domain dataset (i.e., a target domain dataset), which can be the dataset in an arbitrary domain not included in D out .",
"Each domain in both D out and D in is assumed to be composed of unpaired language corpora, and we create D in as a low-resource monolingual dataset 1 .",
"To adapt our model to the low-resource in-domain data, we finetune the UNMT model by minimizing both the losses described in Eqs.",
"(1) and (2) with D in .",
"In order to obtain an optimal initialization of the model parameters, allowing the model to quickly adapt to a new domain with only a small number of monolingual training data, MetaUMT uses two training phases, the meta-train phase and the meta-test phase.",
"During the meta-train phase, the model first learns domain-specific knowledge by updating initial model parameters to temporary model parameters i , i.e., adapted parameters.",
"Then, in the meta-test phase, the model learns the adaptation by optimizing with respect to i .",
"From the domain adaption perspective, two phases simulate the domain adaption process.",
"The model first adapts to a specific domain through the meta-train phase, and this adaption is evaluated in the meta-test phase.",
"Meta-train phase .",
"We obtain i for each ith out-domain dataset by using one-step gradient descent, i.e., i = L s D i out ( ) , (3) where L s D i out is represented as L s D i out = L lm D i out ( ) + L bt D i out ( ) .",
"(4) D iout is the i -th out-domain dataset, and is the learning rate for the meta-train phase.",
"As previously discussed in Section 3, the language modeling and back-translation losses are essential in facilitating the unsupervised machine translation.",
"Hence, L s consists of L lm and L bt , where each loss function is computed with D iout .",
"Meta-test phase .",
"The objective of the meta-test phase is to update using each i learned from the 1 We randomly sample the 5,000 tokens ( 300 sentences) from the in-domain training dataset.",
"meta-train phase by using each L s D iout (cid:48) 2 .",
"We call this update as a meta-update, defined as n (cid:88) i =0 L s D iout (cid:48) ( i ) , (5) where is another learning rate in the meta-test phase.",
"Since Eq.",
"(5) requires the second-order gradient, the equation is simplified with the first-order gradient by replacing the second-order term.",
"Finn et al. (2017) showed that the first-order approximation of the meta-learning maintains the performance while minimizing the computational cost.",
"To handle a data scarcity issue from a meta-learning perspective, it is critical to be able to make the initialized model to adapt to a data-scarce domain.",
"However, since a small amount of training data in the new domain may cause the model to overfit and prevent utilizing high-resource domain knowledge, it is important to incorporate high-resource domain knowledge and generalizable knowledge into the model parameters.",
"To address this issue, we extend the existing meta-learning approach via two novel losses, which we call an aggregated meta-train loss and a cross-domain loss.",
"The former contributes to incorporating high-resource domain knowledge into the model parameters, while the latter encourages our model, after trained using a particular domain, to still generalize well to another domain, i.e., cross-domain generalization.",
"Meta-train phase .",
"As shown in Fig. 2 (C), via Eqs.",
"(3) and (4), we obtain i from each ith out-domain datasets.",
"Since this phase is exactly same with the meta-train phase of MetaUMT, we leave out the details.",
"Meta-test phase .",
"The aggregated meta-train loss, which refers to Fig. 2 (D), is computed using all out-domain datasets, i.e., L ag = n (cid:88) i =0 L s D iout ( ) .",
"This loss term allows the model to learn the source domain knowledge that is potentially applicable to a target domain.",
"Moreover, to alleviate the overfitting after adapting to the low-resource domain, we 2 L s D iout and L s D iout (cid:48) indicate different batch sampled data from same D i .",
"introduce a cross-domain loss, which is in Fig. 2 (D), as L cd = n (cid:88) i =0 L s D icd ( i ) , (7) where L s D icd = L s D iout (cid:48) ( i )+ L s D iother ( i ) , i.e., computing the cross-domain loss with the data from D iout (cid:48) as well as those from other domains D iother .",
"To obtain the optimal initialization for model parameters, we define our total loss function, which is Fig. 2 (E), as the sum of the two of our losses, i.e., ( L cd + L ag ) .",
"In summary, our aggregated meta-train and cross-domain losses encourage our model to accurately and quickly adapt to an unseen target domain.",
"The overall procedure is described in Algorithm A.1.",
"This section first introduces experiment settings and training details.",
"Afterwards, we show empirical results in various scenarios.",
"We conduct our experiments on eight different domains 3 (Appendix T.2).",
"Each domain dataset is publicly available on OPUS 4 (Tiedemann, 2012).",
"We utilize the eight domains for out-domain ( D out ) and in-domain datasets ( D in ).",
"To build the monolingual corpora of in-domain and out-domain datasets, we sample data from the parallel corpus.",
"We made sure to include at most one sentence from each pair of parallel sentences.",
"For instance, we sample the first half of the sentences as unpaired 3 Acquis (Law), EMEA (Medical), IT, Tanzil (Koran), Subtitles, EUbookshop (EUB), Europarl, and GlobalVoices (GV) 4 http://opus.nlpl.eu/ source data and the other half as truly unpaired target data.",
"Consequently, the sampled monolingual corpora contain no translated sentence in each language.",
"Each of the two monolingual corpora contains the equal number of sentences for each language (e.g., English and German).",
"For our low-resource scenarios, we sample 5,000 tokens from a selected in-domain corpus for each language.",
"Note that the out-domain dataset represents the full monolingual corpora.",
"As our base model, we use a Transformer (Vaswani et al., 2017), which is initialized by a masked language model from XLM (Conneau and Lample, 2019) using our out-domain datasets.",
"All the models consist of 6 layers, 1,024 units, and 8 heads.",
"UNMT model is trained with only the in-domain monolingual data, composed of 5,000 words for each language.",
"Supervised neural machine translation model (NMT) is trained with in-domain parallel datasets, which we arrange in parallel with the two in-domain monolingual corpora.",
"Unadapted model is pretrained with only the out-domain datasets and evaluated on the in-domain datasets.",
"Transfer learning model is a finetuned model, which is pretrained with the out-domain datasets and then finetuned with a low-resource in-domain dataset.",
"it utilizes both in-domain and out-domain datasets for finetuning.",
"That is, the training batch is sampled evenly from in-domain and out-of-domain datasets.",
"In order to verify that leveraging the high-resource domains (i.e., the source domains) effects to handle the low-resource domains (i.e., the target domain), we compare the unsupervised and supervised models with ours and other baseline models.",
"As shown in Table 1, the unsupervised model trained on in-domain data suffers from data scarcity because it only uses low-resource in-domain data.",
"Although the unsupervised and supervised models are initialized by XLM, those models show the worst performance in all the cases.",
"This result indicates that when the small size of an in-domain corpus is given, it is appropriate to utilize the out-domain datasets rather than to train only with low-resource data.",
"In addition, the performance of the unadapted model is far behind compared to other models, such as the mixed finetuned model, transfer learning model, MetaUMT, and MetaGUMT.",
"This implies that we need an adequate strategy of leveraging the high-resource domains to improve the performance.",
"We further compare the performance between our proposed approaches (i.e., MetaUMT and MetaGUMT) and the other two finetuning models (i.e., the transfer learning and the mixed finetuned ones).",
"Our methods exhibit the leading performances in both directions of translation ( en de ), and consistently achieve improvements of 23 BLEU score in most of settings.",
"Furthermore, MetaGUMT consistently obtains better BLEU scores and converges faster than MetaUMT.",
"We assert that our proposed losses (i.e., the aggregated meta-train and the cross-domain losses) help the model not only to perform well even on the unseen in-domain dataset but also to accelerate the convergence speed.",
"As shown in Fig. 3 (A), we compare our proposed methods with the transfer learning approach by varying the sizes of an in-domain monolingual corpus.",
"The smaller the size of training data is, the wider the performance gap between the two approaches and the transfer learning model becomes.",
"It means that meta-learning is an effective approach to alleviate the performance degradation, preventing the model from overfitting to the low-resource data.",
"Compared to the transfer learning model, MetaUMT demonstrates a better performance than other methods in various settings.",
"However, MetaGUMT exhibits even better performances consistently in all settings owing to our proposed losses (Eq.",
"(8)).",
"The transfer learning approach shows the worst performance except for the unadapted model, even though it exploits the in-domain corpus after being pretrained with the out-domain datasets.",
"Additionally, we analyze the number of iterations required for a model to converge given an in-domain dataset.",
"As shown in Fig. 3 (B), the meta-learning approaches rapidly converge after only a few iterations, even faster than the transfer learning one does.",
"As the number of in-domain training words increases, the transfer learning approach requires a much larger number of iterations until convergence than our meta-learning approaches.",
"It can be seen that MetaUMT and MetaGUMT rapidly adapt to an unseen domain.",
"Moreover, owing to the encapsulated knowledge from the high-resource do-Parameter Initial Finetuned DD out D in Unseen Meidcal Law Koran EUB IT GV Subtitles Europarl De-En En-De De-En En-De De-En En-De De-En En-De De-En En-De De-En En-De De-En En-De De-En En-De Transfer 30.98 26.96 34.8 30.28 13.72 11.59 12.32 10.01 20.98 17.74 17.4 14.25 10.92 9.18 22.31 16.58 Mixed finetuned ------11.77 9.96 22.84 16.92 MetaUMT 33.0 23.39 27..74 15.4 4.89 0.79 6.78 2.59 9.45 4.68 2.77 1.06 12.95 10.58 23.91 18.7 MetaGUMT 37.37 31.63 42.73 37.3 18.2 13.84 13.72 11.8 24.0 19.24 21.24 17.38 13.45 10.89 24.44 19.31 Table 2: BLEU scores evaluated on out-domain and in-domain data with initial and finetuned , respectively.",
"mains, MetaGUMT converges within a relatively earlier iteration than MetaUMT does.",
"In summary, the meta-learning-based methods quickly converge in the low-resource domain, improving the performances over the transfer learning method in various low-resource settings.",
"This indicates that the meta-learning-based approaches are suitable to alleviate the data deficiency issue in scarce domains.",
"Furthermore, our losses in Eq.",
"(8) enhance the capabilities of aggregating domain general knowledge and finding adequate initialization.",
"An advantage of our meta-learning approaches is that they can find an optimal initialization point from which the model can quickly adapt to a low-resource in-domain dataset.",
"The transfer learning model requires twice more iterations until convergence than ours does.",
"As shown in Fig. 3 (C), MetaUMT and MetaGUMT not only converge quickly but also outperform the other baseline methods.",
"Specifically, compared to MetaUMT, MetaGUMT is effective in achieving an optimized initialization at an earlier iteration.",
"These results indicate that our additional losses (i.e., the cross-domain and aggregated meta-train losses) are ben-eficial in boosting up the ability for finding an optimal initialization point when training the model with the out-domain datasets.",
"We assume that the domain generalization ability and high-resource domain knowledge are helpful for the UNMT model to translate the low-resource domain sentences.",
"First, to identify whether the model encapsulates the high-resource knowledge from multiple sources, we evaluate our model on out-domain datasets (i.e., D out ) with initial .",
"As shown in Table.",
"2, MetaGUMT shows remarkable performances over MetaUMT in all domains, even better than the transfer learning models.",
"In other words, MetaUMT demonstrates poor performances Cross-domain Aggregated meta-train De-En En-De Average (cid:55) (cid:55) 27.09 24.6 25.85 (cid:88) (cid:55) 27.37 24.76 26.06 +0.21 (cid:55) (cid:88) 27.54 24.90 26.22 +0.37 (cid:88) (cid:88) 27.85 25.06 26.46 +0.61 Table 3: Effectiveness of each cross-domain and aggregated meta-train loss.",
"in D out , compared to MetaGUMT.",
"This can be explained as MetaGUMT uses an aggregated meta-train loss such that MetaGUMT is able to encapsulate the high-resource domain knowledge.",
"As shown in Table.",
"1, MetaGUMT achieves superior performances, showing that MetaGUMT is capable of leveraging the encapsulated knowledge when finetuning the low-resource target domain.",
"Secondly, our cross-domain loss encourages the model to have a generalization capability after adapting to the low-resource target domain.",
"As shown in Unseen column in Table.",
"2, MetaGUMT outperforms the other models.",
"It can be seen that our model has the domain generalization ability after the finetuning stage due to the cross-domain loss in the meta-test phase.",
"In UNMT, data unbalancing is often the case in that source language (e.g., English) data are abundant and the target language (e.g., Nepali) data are scarce (Kim et al., 2020).",
"We extend our experiment to the unbalanced scenarios to examine whether our proposed model shows the same tendency.",
"In this scenario, the low-resource target domain dataset consists of monolingual sentences from one side with two times more tokens than the monolingual sentences from the other.",
"As shown in Table.",
"4, MetaGUMT outperforms in all unbalanced data cases.",
"It shows that MetaGUMT is feasible to a practical UNMT scenario where the number of sentences is different in the source and target languages.",
"The only difference against the main experiment setting 5.1 is the condition that the in-domain corpus is unbalanced.",
"We also include the # tokens Mixed MetaUMT MetaGUMT En De En-De De-En En-De De-En En-De De-En 5k 10k 26.04 31.90 28.80 32.65 29.43 34.28 8k 16k 26.09 32.01 27.84 32.93 29.62 34.39 16k 32k 26.44 32.37 27.92 32.96 30.10 34.44 32k 64k 27.39 32.84 28.67 33.52 29.83 34.77 Table 4: Results on the unbalanced monolingual Law domain data during the finetuning stage, where D out is GV, Euorparl, EUB, Subtitles, Medical and Koran.",
"result of the transfer learning model in Table.",
"T.4.",
"We empirically show the effectiveness of the cross-domain and aggregated meta-train losses, as shown in Table 3 5 .",
"First, compared to MetaUMT which does not use any of the two losses, incorporating the cross-domain loss improves the average BLEU score by 0.21.",
"The cross-domain loss acts as a regularization function that prevents the model from overfitting during the finetuning stage.",
"Second, the aggregated meta-train loss, another critical component of our model, allows the model to utilize the high-resource domain knowledge in the finetuning stage.",
"This also improves the average BLEU score by 0.37 from MetaUMT.",
"Lastly, combining both cross-domain and aggregated meta-train losses significantly enhances the result in both directions of translation ( En De ), indicating that they are complementary to each other.",
"We examine how the performances change against the different number of source domains for each approach.",
"As shown in Table.",
"5 6 , MetaGUMT consistently outperforms the transfer, the mixed-finetune, and MetaUMT approaches.",
"As the size of the source domains increases, so does the performance gap between ours and the transferring based models, i.e., transferring and mixed-finetune models.",
"This indicates that the meta-learning based approaches are highly effected by the size of the domains in the meta-train phase, and also, if the number of source domains is large enough to capture the general knowledge, the meta-learning based approaches are suitable to handle the low-resource target task (i.e., machine translation in a low-resource domain).",
"5 The models are pretrained on Subtitles, Law, EUB, Europarl, IT, and GV and then finetuned on the Medical data.",
"6 The 4 case contains the Medical, Law, Koran and EUB domains.",
"5 and 6 additionally utilize one more domain(i.e., IT) and two more domains(i.e.,IT and GV), respectively.",
"# D out MetaGUMT MetaUMT Transfer Mixed En-De De-En En-De De-En En-De De-En En-De De-En 4 5.97 7.47 5.87 7.24 5.75 7.17 5.87 7.22 5 7.58 9.49 7.33 9.01 7.17 8.08 7.20 8.68 6 10.89 13.45 10.58 12.95 9.18 10.92 9.96 11.77 Table 5: Effectiveness of the different number of source domains between meta-learning based approaches and the transfer learning approach, where # D out represents the number of out-domain datasets in the pretraining stage.",
"This paper proposes a novel meta-learning approach for low-resource UNMT, called MetaUMT, which leverages multiple source domains to quickly and effectively adapt the model to the target domain even with a small amount of training data.",
"Moreover, we introduce an improved method called MetaGUMT, which enhances cross-domain generalization and maintains high-resource domain knowledge.",
"We empirically show that our proposed approach consistently outperforms the baseline methods with a nontrivial margin.",
"We believe that our proposed methods can be extended to semi-supervised machine translation as well.",
"In the future, we will further analyze other languages, such as Uzbek and Nepali, instead of languages like English and German.",
"This work was partially supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.",
"2019-0-00075, Artificial Intelligence Graduate School Program (KAIST) and No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques) and by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2019R1A2C4070420) References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho."
] | [
"abstain",
"abstain",
"objective",
"result",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"result",
"objective",
"objective",
"method",
"other",
"other"
] |
[
"Applying existing methods to emotional support conversationwhich provides valuable assistance to people who are in needhas two major limitations:",
"(a) they generally employ a conversation-level emotion label, which is too coarse-grained to capture user's instant mental state;",
"(b) most of them focus on expressing empathy in the response(s) rather than gradually reducing user's distress.",
"To address the problems, we propose a novel model MISC , which firstly infers the user's fine-grained emotional status, and then responds skillfully using a mixture of strategy.",
"Experimental results on the benchmark dataset demonstrate the effectiveness of our method and reveal the benefits of fine-grained emotion understanding as well as mixed-up strategy modeling.",
"Our code and data could be found in https: //github.com/morecry/MISC .",
"Empathy is the ability to perceive what others feel, think in their places and respond properly.",
"It has a broad application scenarios to endow machines with the ability of empathy, including automatic psycho-therapist, intelligent customer service, empathetic conversational agents, and etc (Fitzpatrick et al., 2017; Shin et al., 2019; Ma et al., 2020).",
"In this work, we focus on a special kind of human-computer empathetic conversation, i.e., emotional support conversation (Liu et al., 2021).",
"Distinguishedly, emotional support conversation happens between a seeker and supporter, where the supporter aims to gradually reduce seeker's distress as the conversation goes.",
"This makes existing approaches unsuitable for our setting for at least two reasons.",
"Firstly, existing work on emotional chatting learns to predict user emotion using a conversation-level emotion label, which is Equal Contribution.",
"coarse-grained and static to the conversation context (Rashkin et al., 2019; Lin et al., 2019c; Li et al., 2020a).",
"However, emotion is complex and user emotion intensity will change during the developing of the conversation (Liu et al., 2021).",
"It is thus a necessity to tell seeker's fine-grained mental state at each utterance.",
"Secondly, most of empathetic chatbots are trained to respond emotionally in accordance with the predicted coarse-grained emotion class, without consideration on how to address the seeker's emotional problem (De Graaf et al., 2012; Majumder et al., 2020; Xie and Park, 2021).",
"Hence, they are deficient to apply for emotional support conversation whose goal is to help others work through the challenges they face.",
"To tackle these issues, we propose a novel approach MISC , a.k.a. MI xed S rategy-aware model integrating C OMET for emotional support conversation.",
"As to the first issue, we introduce COMET, a pre-trained generative commonsense reasoning model (Bosselut et al., 2019a), and devise an attention mechanism to selectively adopt the COMET knowledge tuples for fine-grained emotion understanding.",
"As shown in Figure 1, this allows us to capture seeker's instantaneous mental state using different COMET tuples.",
"In addition, we propose to also consider response strategy when generating empathetic responses for the second issue.",
"Instead of modeling response strategy as a one-hot indi-308 cator, we formulate it as a probability distribution over a strategy codebook, and guide the response generation using a mixture of strategies.",
"At last, our MISC produces supportive responses based on both COMET-enhanced mental information and distributed strategy representation.",
"The unique design of mixed strategy not only helps to increase the expressed empathy, but also facilitates to learn the gradual transition in the long response, as the last utterance in Figure 1, which will in turn make the conversation more smooth.",
"To evaluate our model, we conduct extensive experiments on ESConv benchmark (Liu et al., 2021) and compare with 5 state-of-the-art empathetic chatbots.",
"Based on both automatic metrics and manual judgments, we demonstrate that the responses generated by our model MISC are more relevant and empathetic.",
"Besides, additional experimental analysis reveal the importance of response strategy modeling, and sheds light on how to learn a proper response strategy as well as how response strategy could influence the empathy of the chatbot.",
"In brief, our contributions are as follows: (1) We present a Seq2Seq model MISC, which incorporates commonsense knowledge and mixed response strategy into emotional support conversation; (2) We conduct experiments on ESConv dataset, and demonstrate the effectiveness of the proposed MISC by comparing with other SOTA methods.",
"(3) We implement different ways of strategy modeling and give some hints on strategy-aware emotional support conversation.",
"As suggested in Liu et al. (2021), emotion-aware dialogue systems can be categorized into three classes: emotional chatting, empathetic responding and emotional support conversation.",
"Early work target at emotional chatting and rely on emotional signals (Li et al., 2017; Zhou et al., 2018a; Wei et al., 2019; Zhou and Wang, 2018; Song et al., 2019).",
"Later, some researchers shift focus towards eliciting user's specific emotion (Lubis et al., 2018; Li et al., 2020b).",
"Recent work begin to incorporate extra information for deeper emotion understanding and empathetic responding (Lin et al., 2020; Li et al., 2020a; Roller et al., 2021).",
"Li et al. (2021a) and Zhong et al. (2021) exploit ConceptNet to enhance emotion reasoning for response generation.",
"Different from them, our work exploits a generative commonsense model COMET (Bosselut et al., 2019b), which enables us to capture seeker's mental states and facilitates strategy prediction in emotional support conversation.",
"Recently, there is a large body of literature injecting commonsense knowledge into various NLP tasks, including classification (Chen et al., 2019; Paul and Frank, 2019), question answering (Mihaylov and Frank, 2018; Bauer et al., 2018; Lin et al., 2019a), story and language generation (Guan et al., 2019; Ji et al., 2020), and also dialogue systems (Zhou et al., 2018b; Zhang et al., 2020; Li et al., 2021a; Zhong et al., 2021).",
"These dialogue systems often utilize ConceptNet (Speer et al., 2017), aiming to complement conversation utterances with physical knowledge.",
"Distinguished from ConceptNet, ATOMIC (Sap et al., 2019) covers social knowledge including event-centered causes and effects as well as person-related mental states.",
"To this end, ATOMIC is expected beneficial for emotion understanding and contributing to response empathy.",
"In this work, we leverage COMET (Bosselut et al., 2019b), a commonsense reasoning model trained over ATOMIC for emotional support conversation.",
"Conversation strategy can be defined using different notions from different perspectives.",
"A majority of research works is conducted under the notion of dialog acts, where a plethora of dialog act schemes have been created (Mezza et al., 2018; Paul et al., 2019; Yu and Yu, 2021).",
"Dialog acts are empirically validated beneficial in both task-oriented dialogue systems and open-domain social chatbots (Zhao et al., 2017; Xu et al., 2018; Peng et al., 2020; Li et al., 2020c).",
"As to empathetic dialogues, conversation strategy is often defined using the notion of response intention or communication strategy, which is inspired from the theories of empathy in psychology and neuroscience (Lubis et al., 2019; Li et al., 2021b).",
"Whereas Welivita and Pu (2020) define a taxonomy of 15 response intentions through which humans empathize with others, Liu et al. (2021) define a set of 8 support strategies that humans utilize to reduce other's emotional distress.",
"This partially reveals that response strategy is complex, which motivates us to condition on a mixture of strategy when generating supportive responses.",
"In this paper, we use the E motional S upport Conv ersation dataset, ESConv (Liu et al., 2021).",
"Before conversations start, seekers should determine their emotion types, and tell the situation they are dealing with to supporters.",
"Besides, the strategy of every supporter's utterance is marked, which is the most important to our work.",
"In total, there are 8 kinds of strategies, and they are almost evenly distributed.",
"More details are given in Appendix.",
"For general dialogue response generation, the target is to estimate the probability distribution p ( r | c ) of the dataset D = { c ( i ) , r ( i ) } Ni =1 , where c ( i ) = ( u ( i ) 1 , u ( i ) 2 , ..., u ( i ) n i ) consists of a sequence of n i utterances in the dialogue history, and r ( i ) is the target response.",
"For the sake of brevity, we omit the superscript ( i ) when denoting a single example in the remaining part.",
"In the setting of emotional support conversation, the seeker's situation s is considered as an extra input, which describes the seeker's problem in freeform text.",
"We also denote the seeker's last post (ut-terance) as x .",
"Consequently, the target becomes to estimate the probability distribution p ( r | c, s, x ) .",
"The overview of our approach is shown in Figure 2.",
"Based on blenderbot-small (Roller et al., 2021), our model MISC consists three main components: (1) a mental state-enhanced encoder (Bosselut et al., 2019a); (2) a mixed strategy learning module; and (3) a multi-factor-aware decoder.",
"Following common practice, we firstly represent the context using the encoder E :",
"To better understand the seeker's situation, we exploit COMET (Bosselut et al., 2019a), a commonsense knowledge generator to supply mental state information related to the conversation.",
"Concretely, we treat the situation s as an event, and feed it with different relations into COMET: B s = N r (cid:91) j =1 COMET ( rel j , s ) (2) where N r is the number of pre-defined relations in COMET, and rel j stands for the j -th specific relation, such as xAttr and xReact .",
"1 Note that given a certain event-relation pair, COMET is able to generate multiple tails of free-form mental state information, B s is a set of N s mental state blocks, i.e., B s = { b sj } N s j =1 .",
"Similarly, we can obtain the set of mental state blocks B x using the seeker's last post x .",
"Then, all of the free-form blocks will be transformed into dense vectors using our encoder E : H s = [ h s 1 , 1 , h s 2 , 1 , ..., h sN st , 1 ] h sj = E ( b sj ) (3) and the hidden state of each block's first token will be used to represent the corresponding block.",
"Later, due to the noisy of COMET blocks, a lot of them are irrelevant to the context.",
"We creatively take attention method to refine the strongly relevant blocks.",
"That operation could be expressed as Z = softmax ( H s CT ) C H s = LN ( H s + Z ) (4) where LN is the LayerNorm module (Ba et al., 2016).",
"Similarly, we could transform x to H x following the same method as s to H s .",
"At last, we get the conversation-level and utterance-level representation of seeker's mental state H s and H x , which are enhanced with commonsense information.",
"One straightforward way to predict the response strategy is to train a classifier upon the CLS states of the context representation C from Eq.",
"(1): p g = MLP ( C 1 ) (5) where MLP is a multi-layer perceptron, and p g records the probabilities of each strategy to be used.",
"To model the complexity of response strategy as discussed before, we propose to employ the distribution p g and model a mixture of strategies for 1 Please refer to the appendix file for the definitions of all the relations as well as a brief introduction of COMET.",
"response generation.",
"Here, we masterly learn from the idea of VQ-VAE's codebook to represent strat-egy(Oord et al., 2017).",
"The strategy codebook T R m d represent m strategy latent vectors (here m = 8) with the dimension size d .",
"By weighting T using p g , we are able to obtain a comprehensive strategy representation h g h g = p g T (6) Our codebook-based method has two benefits: (1) It is beneficial when long responses are needed to skillfully reduce the seeker's distress, which is common in emotional support conversation.",
"(2) It is flexible to learn.",
"Intuitively, if a strategy has a higher probability in p g , it should take greater effect in guiding the support conversation.",
"In the extreme case where we have a sharp distribution, one single strategy will take over the control.",
"The remaining is to properly utilize the inferred mental states and the strategy representation.",
"To notify the decoder of these information, we modify the backbone's cross attention module as: A c = CROSS ATT ( O , H ) A s = CROSS ATT ( O , H s ) A x = CROSS ATT ( O , H x ) A g = CROSS ATT ( O , h g ) O = LN ( A c + A s + A x + A g + O ) (7) where CROSS ATT stands for the backbone's cross attention module, and O is the hidden states of the decoder, which produces the final response by interacting with multi-factors.",
"Based on blenderbor-small (Roller et al., 2021), we jointly train the model to predict the strategy and produce the response: L r = n r (cid:88) t =1 log ( p ( r t | r j<t , c , s , x )) L g = log ( p ( g | c , s , x )) L = L r + L g (8) where n r is the length of response, g is the true strategy label, L g is the loss of predicting strategy, L r is the loss of predicting response, and L is combined objective to minimize.",
"We evaluate our and the compared approaches on the dataset ESConv (Liu et al., 2021).",
"For preprocessing, we truncate the conversation examples every 10 utterances, and randomly spilt the dataset into train, valid, test with the ratio of 8:1:1.",
"The statistics is given in Table 1.",
"Automatic Metrics .",
"(1) We take the strategy prediction accuracy ACC.",
"as an essential metric.",
"A higher ACC.",
"indicates that the model has a better capability to choose the response strategy.",
"(2) We then acquire the conventional PPL (perplex-ity), B-2 (BLEU-2), B-4 (BLEU-4) (Papineni et al., 2002), R-L (ROUGE-L) (Lin, 2004) and M (Me-teor) (Denkowski and Lavie, 2014) metrics to evaluate the lexical and semantic aspects of the generated responses.",
"(3) For response diversity, we report D-1 (Distinct-1) and D-2 (Distinct-2) numbers, which assesses the ratios of the unique n-grams in the generated responses (Li et al., 2016).",
"Human Judgments .",
"Following See et al. (2019), we also recruit 3 professional annotators with linguistic and psychologist background and ask them to rate the generated responses according to Fluency, Knowledge and Empathy aspects with level of {0,1,2}.",
"For fair comparison, the expert annotators do not know which model the response is from.",
"Note that these 3 writers are paid and the results are proof-checked by 1 additional person.",
"Transformer is a vanilla Seq2Seq model trained based on the MLE loss (Vaswani et al., 2017).",
"MT Transformer is the M ultiT ask transformer which considers emotion prediction as an extra learning task (Rashkin et al., 2018).",
"In specific, we use the conversation-level emotion label provided in ESConv to learn emotion prediction.",
"MoEL softly combines the output states from multiple listeners (decoders) to enhance the response empathy for different emotions (Lin et al., 2019b).",
"MIME considers the polarity-based emotion clusters and emotional mimicry for empathetic response generation (Majumder et al., 2020).",
"BlenderBot-Joint is the SOTA model on ESConv dataset, which prepends a special strategy token before the response utterances (Liu et al., 2021).",
"We implement our approach based on blenderbot-small (Roller et al., 2021) using the default sizes of vocabulary and the hidden states.",
"For the last post x and the situation s , we set the maximum number of the retrieved COMET blocks as 30 and 20 respectively.",
"The inferred COMET blocks will be sent to the encoder with a maximum of 10 words.",
"To be comparable with the SOTA model in Liu et al. (2021), we fine-tune MISC based on the blenderbot-small with the size of 90M parameters by a Tesla-V100 GPU.",
"The batch size of training and evaluating is 20 and 50, respectively.",
"We initialize the learning rate as 2e-5 and change it during training using a linear warmup with 120 warmup steps.",
"We use AdamW as optimizer (Loshchilov and Hutter, 2018) with 1 =0.9, 2 =0.999 and =1e-8.",
"After training 8 epochs, the checkpoint with the lowest perplexity on the validation set is selected for testing.",
"Following (Liu et al., 2021), we also adopt the decoding algorithms of Topp and Topk sampling with p =0.3, k =30, temperature =0.7 and the repetition penalty 1.03.",
"We will release the source code to facilitate future work.",
"As shown in Table 2, the vanilla Transformer performs the worst according to its relatively low PPL, BLEU-n and distinct-n scores.",
"This is not suprising because it does not have any other specific optimization objective to learn the ability of empathy, and it is observed to be deficient for capturing long context as that in the ESConv dataset.",
"The performances of MT Transformer, MoEL and MIME, are also disappointing.",
"Even though they three are equipped with empathetic objectives such as emotion prediction and ensembling listener, they are based on the conversation-level static emotion label, which is not adequate for fine-grained emotion understanding.",
"More importantly, these three empathetic models lack of the ability of strategically consoling the seekers in the setting of emotional support conversation.",
"By comparing with the SOTA model BlenderBot-Joint, we can see that our model MISC is more effective especially in predicting more accurate response strategy.",
"Whereas BlenderBot-Joint predicts one single strategy at the first decoding step, our method MISC models mixed response strategies using a strategy codebook and allows the decoder to learn the smooth transition and exhibit empathy more naturally.",
"The comparison result suggests that it is beneficial to predict the response strategy as an extra task and to take into consideration the strategy complex for emotional support conversation.",
"trained LM blenderbot-small (Rashkin et al., 2018), BlenderBot-Joint and our MISC significantly outperform other models on the Fluency aspect.",
"Notably, our MISC yields the highest Knowledge score, which indicates that the responses produced by our approach contain much more specific information related to the context.",
"We conjecture that our multi-factor-aware decoder successfully learns utilize the mental state knowledge from COMET with the mixture of the predicted strategies.",
"Overall speaking, MISC performs the best on almost every metric.",
"It strongly demonstrates the effectiveness of our approach, and highlights the importance of fine-grained mental state modeling and mixed response strategy incorporation.",
"Our method MISC has two novel designs: considering the fine-grained mental states and incorporating a mixture of response strategy.",
"To investigate more, we conduct extra experiments, and the analysis results give us hints of how to develop better emotional support conversational agents.",
"In order to verify the improvement brought by each added part ( g , s , x ), we drop these three parts from the MISC and check the performance changes.",
"As shown in Table 4, the scores on all the metrics decrease dramatically when the g is albated.",
"Consequently, we suppose the strategy attention is vital for guiding the semantics of the response.",
"In addition, the scores also decline when we remove the the situation s and the seeker's last query x .",
"According to the above experiments, each main part of the MISC is proven effective.",
"In Table 5, an example is present to compare the response generated by the MISC and the other models.",
"Various problems appear in the compared models, such as inconsistency, repetition, contradiction, etc.",
"Intuitively, our model achieves the best performance in contrast.",
"Besides, we present a visualization in Figure 4 to interpret how the MISC organizes the response under the combined effect of the COMET blocks and the mixture of strategies.",
"As discussed before, one limitation of previous approaches is that they solely rely on a conversation-level emotion label, which is too coarse to guide the chatbot respond strategically and help the emotional conversation progress healthily.",
"To remedy this issue, we exploit the commonsense knowledge generator COMET to supplement fine-grained information of seeker's mental state.",
"In order to fairly examine the effects of different emotional information, we discard the COMET blocks and implement a variant of our method MISE, a.k.a. MI xedS rategy-aware model integrating E motion, where an extra emotion classification objective is added to the main architecture, as in Rashkin et al. (2018).",
"Table 6 summarizes the comparison results between our full model MISC and its variant MISE.",
"Obviously, all the metrics 313 Situation Seeker My boyfriend and I recently broke up due to long-distance relationship and the impact",
"To depict the advantage of fine-grained mental state information, we visualize the attended COMET blocks of the example in Table 5.",
"As shown in Figure 4, our chatbot MISC pays much attention of those inferred knowledge that are beneficial for fine-grained emotion understanding and strategy-aware empathetic responding.",
"More specifically, the attended COMET blocks ( xReact , hurt ) and ( xAttr , sad ) permit our chatbot MISC to utter the words it was painful which reflects its understanding of the seeker's feeling.",
"Besides, note that the COMET blocks with white background are retrieved using the situation information s , and the grey ones are collected using the seeker's last post x .",
"Despite of some overlapping, the white and grey attended blocks do contain distinct and crucial mental state knowledge.",
"This partially validates that s and x is complementary to each other, and they two are useful information for emotional support conversation.",
"Meanwhile, the mixture of response strategy also plays a vital role for emotional support conversation.",
"By analyzing the aforementioned case in depth, we find some hints on why our way to model conversation strategy is more preferred in the setting of emotional support conversation.",
"Hint 1: Mixed strategy is beneficial for Smooth Emotional Support .",
"In Figure 4, we visualize the predicted strategy representation and the generated support response in Table 5.",
"After understanding the seeker's situation of break-up and feelings of sadness, our MISC reasons that it might be proper to employ the strategies of Self-disclosure , Reflection of feelings to emotionally reply and effectively console the seeker's.",
"Then, MISC organizes the response by firstly reveals that it has similar experiences and knows the feelings like.",
"Moreover, the chatbot also supplements detailed information of move on from a relationship to suggest that the life will go on.",
"These added-up words could be regarded as using the strategy of Information or Others , which is useful to transit the conversation to the next step smoothly.",
"This case vividly shows how response generation is guided by the mixed strategies, and how skillful of our chatbot MISC is.",
"single strategy .",
"In addition to the case study, we also attempt to quantitatively assess the benefit of the mixed strategy modeling.",
"To do so, we implement another variant of our chatbot Single where the mixed representation is replaced with an one-hot representation.",
"Typically, we pick up the strategy dimension with the largest probability value as the one-hot output.",
"The comparison results are given in Table 7.",
"Although yielding a slightly better distinct-n scores, the single-strategy variant lags far behind according to the lexical and semantic scores.",
"the BlenderBot-Joint.",
"Recall that the SOTA model BlenderBot-Joint (Liu et al., 2021) can also be regarded as a single-strategy model where a special strategy token is firstly decoded at the beginning of the response generation.",
"We then compare their way of strategy modeling with our mixed strategy representation.",
"As shown in Figure 5, the top-k strategy prediction accuracy of our MISC always surpasses that of BlenderBot-Joint, and the top-5 accuracy of our model reaches over 80%.",
"This again proves the success of our strategy modeling.",
"Hint 3: Mixed strategy is suitable for ESC Framework .",
"The emotional support conversations in the dataset ESConv are guided by the ESC Framework, which suggests that emotional support generally follows a certain order of strategy flow.",
"Similar to (Liu et al., 2021), here we also visualize the strategy distributions learned from different models, and compare them with the ground-truth strategy distribution in the original dataset.",
"As shown in Figure 3, we can find: (1) Comparing our Figure 5: The Top-k Strategy Prediction Accuracy.",
"model with the SOTA model BlenderBot-Joint, we can find that our MISC better mimics the skill of strategy adoption in emotional support conversation.",
"(2) At almost all stages of the conversation, our model is less likely to predict the strategy of Others (the grey part), as compared to BlenderBot-Joint.",
"This indicates that the strategy acquired by our model is more discriminative than those by BlenderBot-Joint.",
"(3) Overall speaking, the strategy distribution from our model share very similar patterns as compared to the ground-truth distribution.",
"This implies that our way to model the strategy learning is suitable for the ESC framework.",
"In this paper, we propose MISC, a novel framework for emotional support conversation, which introduces COMET to capture user's instant mental state, and devises a mixed strategy-aware decoder to generate supportive response.",
"Through extensive experiments, we prove the superiority and rationality of our model.",
"In the future, we plan to learn the mixed response strategy in a dynamic way.",
"At last, we discuss the potential ethic impacts of this work: (1) The ESConv dataset is a publicly-available, well-established benchmark for emotional support conversation; (2) Privacy : The origi-315",
"nal providers have filtered the sensitive information such as personally identifiable information (Liu et al., 2021); (3) Nevertheless, due to the limitation of filtering coverage, the conversations might still remain some languages that are emotionally triggering.",
"Note that our work focuses on building emotional support conversational agents.",
"For risky situations such as self-harm-related conversations, we do not claim any treatments or diagnosis.",
"We would like to thank the anonymous reviewers for their constructive comments.",
"This work was supported by National Natural Science Foundation of China (NSFC Grant No. 62122089 & No. 61876196), Beijing Outstanding Young Scientist Program (NO. BJJWZYJH012019100020098), and Intelligent Social Governance Platform, Major Innovation Planning Interdisciplinary Platform for the \"Double-First Class\" Initiative, Renmin University of China.",
"Rui Yan is the corresponding author, and is supported as a young fellow at Beijing Academy of Artificial Intelligence (BAAI)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"method",
"method",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"Recent research investigates factual knowledge stored in large pretrained language models (PLMs).",
"Instead of structural knowledge base (KB) queries, masked sentences such as Paris is the capital of [MASK] are used as probes.",
"The good performance on this analysis task has been interpreted as PLMs becoming potential repositories of factual knowledge.",
"In experiments across ten linguistically diverse languages, we study knowledge contained in static embeddings.",
"We show that, when restricting the output space to a candidate set, simple nearest neighbor matching using static embeddings performs better than PLMs.",
"E.g., static embeddings perform 1.6% points better than BERT while just using 0.3% of energy for training.",
"One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary.",
"In contrast, BERT exploits its more sophisticated, but expensive ability to compose meaningful representations from a much smaller subword vocabulary.",
"Pretrained language models (PLMs) (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019) can be finetuned to a variety of natural language processing (NLP) tasks and then generally yield high performance.",
"Increasingly, these models and their generative variants (e.g., GPT, Brown et al., 2020) are used to solve tasks by simple text generation, without any finetuning.",
"This motivated research on how much knowledge is contained in PLMs: Petroni et al. (2019) used models pretrained with a masked language objective to answer cloze-style templates such as: (Ex1) Paris is the capital of [MASK].",
"Using this methodology, Petroni et al. (2019) showed that PLMs capture some knowledge implicitly.",
"This has been interpreted as suggesting Equal contribution random order.",
"that PLMs are promising as repositories of factual knowledge.",
"In this paper, we present evidence that simple static embeddings like fastText perform as well as PLMs in the context of answering knowledge base (KB) queries.",
"Answering KB queries can be decomposed into two subproblems, typing and ranking .",
"Typing refers to the problem of predicting the correct type of the answer entity; e.g., country is the correct type for [MASK] in (Ex1), a task that PLMs seem to be good at.",
"Ranking consists of finding the entity of the correct type that is the best fit (France in (Ex1)).",
"By restricting the output space to the correct type we disentangle the two subproblems and only evaluate ranking.",
"We do this for three reasons.",
"(i) Ranking is the knowledge-intensive step and thus the key research question.",
"(ii) Typed querying reduces PLMs' dependency on the template.",
"(iii) It allows a direct comparison between static word embeddings and PLMs.",
"Prior work has adopted a similar approach (Xiong et al., 2020; Kassner et al., 2021).",
"to the output embedding for [MASK].",
"For static embeddings, we rank entities (e.g., entities of type country) with respect to similarity to the query entity (e.g., Paris in (Ex1)).",
"In experiments across ten linguistically diverse languages, we show that this simple nearest neighbor matching with fastText embeddings performs comparably to or even better than BERT.",
"For example for English, fastText embeddings perform 1.6% points better than BERT (41.2% vs. 39.6%, see Table 1, column LAMA).",
"This suggests that BERT's core mechanism for answering factual queries is not more effective than simple nearest neighbor matching using fastText embeddings.",
"We believe this means that claims that PLMs are KBs have to be treated with caution.",
"Advantages of BERT are that it composes meaningful representations from a small subword vocabulary and handles typing implicitly (Petroni et al., 2019).",
"In contrast, answering queries without restricting the answer space to a list of candidates is hard to achieve with static word embeddings.",
"On the other hand, static embeddings are cheap to obtain, even for large vocabulary sizes.",
"This has important implications for green NLP.",
"PLMs require tremendous computational resources, whereas static embeddings have only 0.3% of the carbon footprint of BERT (see Table 4).",
"This argues for proponents of resource-hungry deep learning models to try harder to find cheap green baselines or to combine the best of both worlds (cf. Poerner et al., 2020).",
"In summary, our contributions are:",
"i) We propose an experimental setup that allows a direct comparison between PLMs and static word embeddings.",
"We find that static word embeddings show performance similar to BERT on the modified LAMA analysis task across ten languages.",
"ii) We provide evidence that there is a trade-off between composing meaningful representations from subwords and increasing the vocabulary size.",
"Storing information through composition in a network seems to be more expensive and challenging than simply increasing the number of atomic representations.",
"iii) Our findings may point to a general problem: baselines that are simpler and greener are not given enough attention in deep learning.",
"Code and embeddings are available online.",
"1 1 https://github.com/pdufter/staticlama Language Code Family Script Arabic AR Afro-Asiatic Arabic German DE Indo-European Latin English EN Indo-European Latin Spanish ES Indo-European Latin Finnish FI Uralic Latin Hebrew HE Afro-Asiatic Hebrew Japanese JA Japonic Japanese Korean KO Koreanic Korean Turkish TR Turkic Latin Thai TH Tai-Kadai Thai Table 2: Overview of the ten languages in our experiments, including language family and script.",
"We follow the LAMA setup introduced by Petroni et al. (2019).",
"More specifically, we use data from TREx (Elsahar et al., 2018).",
"TREx consists of triples of the form (object, relation, subject).",
"The underlying idea of LAMA is to query knowledge from PLMs using templates without any finetun-ing: the triple (Paris, capital-of, France) is queried with the template Paris is the capital of [MASK].",
"TREx covers 41 relations.",
"Templates for each relation were manually created by Petroni et al. (2019).",
"LAMA has been found to contain many easy-to-guess triples; e.g., it is easy to guess that a person with an Italian sounding name is Italian.",
"LAMA-UHN is a subset of triples that are hard-to-guess created by Poerner et al. (2020).",
"Beyond English, we run experiments on nine additional languages using mLAMA, a multilingual version of TREx (Kassner et al., 2021).",
"For an overview of languages and language families see Table 2.",
"For training static embeddings, we use Wikipedia dumps from October 2020.",
"We describe our proposed setup, which allows to compare PLMs with static embeddings.",
"We use the following two PLMs:",
"(i) BERT for English (BERT-base-cased, Devlin et al. (2019)),",
"(ii) mBERT for all ten languages (the multilingual version BERT-base-multilingual-cased).",
"Petroni et al. (2019) use templates like Paris is the capital of [MASK] and give arg max w V p ( w | t ) as answer where V is the vocabulary of the PLM and p ( w | t ) is the probability that word w gets predicted in the template t .",
"2021) and use typed querying: for each relation, we create a candidate set C and then predict arg max c C p ( c | t ) .",
"For most templates, there is only one valid entity type, e.g., country for (Ex1).",
"We choose as C the set of objects across all triples for a single relation.",
"The candidate set could also be obtained from an entity typing system (e.g., Yaghoobzadeh et al., 2018), but this is beyond the scope of this paper.",
"Variants of typed prediction have been used before (Xiong et al., 2020).",
"We accommodate multi-token objects, i.e., objects that are not contained in the vocabulary, by including multiple [MASK] tokens in the templates.",
"We then compute an object's score as the average of the log probabilities for its individual tokens.",
"Note that we do not perform any finetuning.",
"The vocabulary V of the wordpiece tokenizer is of central importance for static embeddings as well as PLMs.",
"BERT models come with fixed vocabularies.",
"It would be prohibitive to retrain the models with a new vocabulary.",
"It would also be too expensive to increase the vocabulary by a large factor: the embedding matrix is responsible for the majority of the memory consumption of these models.",
"cheap for static embeddings.",
"We thus experiment with different vocabulary sizes for static embeddings.",
"To this end, we train new vocabularies for each language on Wikipedia using the wordpiece tokenizer (Schuster and Nakajima, 2012).",
"Using either newly trained vocabularies or existing BERT vocabularies, we tokenize Wikipedia.",
"We then train fastText embeddings (Bojanowski et al., 2017) with default parameters (http://fasttext.cc).",
"We consider the same candidate set C as for PLMs.",
"Let c C be a candidate that gets split into tokens t 1 , . . . , t k by the wordpiece tokenizer.",
"We then assign to c the embedding vector e c = 1 k k (cid:88) i =1 e t i where e t i is the fastText vector for token t i .",
"We compute the representations for a query q analogously.",
"For a query q (the subject of a triple), we then compute the prediction as: arg max c C cosine-sim ( e q , e c ) , i.e., we perform simple nearest neighbor matching.",
"Note that the static embedding method does not get any signal about the relation.",
"The method's only input is the subject of a triple, and we leave incorporating a relation vector to future work.",
"We compute precision at one for each relation, i.e., 1 / | T | (cid:80) t T 1 { t object = t object } where T is the set of all triples and t object the object predicted using contextualized/static embeddings.",
"Note that T is different for each language.",
"Our final measure (p1) is then the precision at one (macro-)averaged over relations.",
"As a consistency check we provide an Oracle baseline: it always predicts the most frequent object across triples based on the gold candidate sets.",
"In this section, we compare the performance of BERT and fastText, analyze their resource consumption, and give evidence that BERT composes meaningful representations from subwords.",
"Results for English are in Table 1.",
"The table shows that when increasing the vocabulary size, static embeddings and BERT exhibit similar performance on LAMA.",
"The Oracle baseline is mostly outperformed.",
"Only for small vocabulary sizes, fastText is worse.",
"Performance of fastText increases with larger vocabulary sizes and with a vocabulary size of 1000k we observe a 1.6% absolute performance increase of fastText embeddings compared to BERT (41.2% vs. 39.6%).",
"The performance gap between fastText and BERT increases to 2.7% points on LAMA-UHN, indicating that fastText is less vulnerable to misleading clues about the subject.",
"Only providing results on English can be prone to unexpected biases.",
"Thus, we verify our results for nine additional languages.",
"Results are shown in Table 3 and the conclusions are similar: for large enough vocabularies, static embeddings consistently have better performance.",
"For languages outside the Indo-European family, the performance gap between mBERT and fastText is much larger (e.g., 31.7 vs. 17.2 for Arabic) and mBERT is sometimes worse than the Oracle.",
"Our fastText method is quite primitive: it is a type-restricted search for entities similar to what is most prominent in the context (whose central element is the query entity, e.g., Paris in (Ex1)).",
"The fact that fastText outperforms BERT raises the question: Does BERT simply use associations between entities (like fastText) or has it captured factual knowledge beyond this?",
"The entropy of the distribution of predicted objects is 6.5 for BERT vs. 7.3 for fastText.",
"So BERT's predictions are less diverse.",
"Of 151 possible objects on average, BERT predicts (on average) 85, fastText 119.",
"For a given relation, BERT's prediction tend to be dominated by one object, which is often the most frequent correct object possibly because these objects are frequent in Wikipedia/Wikidata.",
"When filtering out triples whose correct answer is the most frequent object, BERT's performance drops to 35.7 whereas fastText's increases to 42.5.",
"See Table 7 in the appendix for full results on diversity.",
"We leave investigating why BERT has these narrower object preferences for future work.",
"BERT's attention mechanism should be able to handle long subjects in contrast to fastText, for which we use simple averaging.",
"Figure 1 shows that fastText's performance indeed drops when the query gets tokenized into multiple tokens.",
"In contrast, BERT's performance remains stable.",
"We conclude that token averaging harms fastText's performance and that the attention mechanism in BERT composes meaningful representations from subwords.",
"We try to induce static embeddings from BERT by feeding object and subject surface forms to BERT without any context and then averaging the hidden representations for each layer.",
"Figure 2 analyzes whether a nearest neighbor matching over this static embedding space extracted from BERT's representations is effective in extracting knowledge from it.",
"We find that performance on LAMA is significantly lower across all hidden layers with the first two layers performing best.",
"That simple averaging does not work as well as contextualization indicates that BERT is great at composing meaningful representations through attention.",
"In future work, it would be interesting to extract better static representations from BERT, for example by extracting the representations of entities in real sentences.",
"Table 4 compares resource consumption of BERT vs. fastText following Strubell et al. (2019).",
"fastText can be efficiently computed on CPUs with a drastically lower power consumption and computation time.",
"Overall, fastText has only 0.3% of the 0 2 4 6 8 10 12 Layer 20 40 p 1 BERT mBERT BERT mBERT Figure 2: Contextualization in BERT.",
"carbon emissions compared to BERT.",
"In a recent study, Zhang et al. (2020) showed that capturing factual knowledge inside PLMs is an especially resource hungry task.",
"These big differences demonstrate that fastText, in addition to performing better than BERT, is the environmentally better model to encode knowl-edge of Wikipedia in an unsupervised fashion.",
"This calls into question the use of large PLMs as knowledge bases, particularly in light of the recent surge of knowledge augmented LMs, e.g., (Lewis et al., 2020; Guu et al., 2020).",
"Petroni et al. (2019) first asked: can PLMs function as KBs?",
"Subsequent analysis focused on different aspects, such as negation (Kassner and Schtze, 2020; Ettinger, 2020), paraphrases (Elazar et al., 2021), easy to guess names (Poerner et al., 2020), finding alternatives to a cloze-style approach (Bouraoui et al., 2020; Heinzerling and Inui, 2020; Jiang et al., 2020) or analyzing different model sizes (Roberts et al., 2020).",
"There is a recent surge of work that tries to improve PLMs' ability to harvest factual knowledge: Zhang et al. (2019), Peters et al. (2019) and Wang et al. (2020) inject factual knowledge into PLMs.",
"Guu et al. (2020), Lewis et al. (2020), Izacard and Grave (2020), Kassner and Schtze (2020) and Petroni et al. (2020) combine PLMs with information retrieval and Bosselut et al. (2019), Liu et al. (2020) and Yu et al. (2020) with knowledge bases.",
"In contrast, we provide evidence that BERT's ability to answer factual queries is not more effective than capturing knowledge with simple traditional static embeddings.",
"This suggests that learning associations between entities and type-restricted similarity search over these associations may be at the core of BERT's ability to answer cloze-style KB queries, a new insight into BERT's working mechanism.",
"We have shown that, when restricting cloze-style questions to a candidate set, static word embeddings outperform BERT.",
"To explain this puzzling superiority of a much simpler model, we put forward a new characterization of factual knowledge learned by BERT: BERT seems to be able to complete cloze-style queries based on similarity assessments on a type-restricted vocabulary much like a nearest neighbor search for static embeddings.",
"However, BERT may still be the better model for the task: we assume perfect typing (for BERT and fastText) and only evaluate ranking.",
"Typing is much harder with static embeddings and BERT has been shown to perform well at guessing the expected entity type based on a template.",
"BERT also works well with small vocabularies, storing most of its knowledge in the parameterization of subword composition.",
"Our results suggest that increasing the vocabulary size and computing more atomic entity representations with fastText is a cheap and environmentally friendly method of storing knowledge.",
"In contrast, learning high quality composition of smaller units requires many more resources.",
"fastText is a simple cheap baseline that outperforms BERT on LAMA, but was not considered in the original research.",
"This may be an example of a general problem: green baselines are often ignored, but should be considered when evaluating resource-hungry deep learning models.",
"A promising way forward would be to combine the best of both worlds, e.g., by building on work that incorporates large vocabularies into PLMs after pretraining.",
"Acknowledgements.",
"This work was supported by the European Research Council (# 740516) and the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A.",
"The authors of this work take full responsibility for its content.",
"The first author was supported by the Bavarian research institute for digital transformation (bidt) through their fellowship program.",
"We thank Yanai Elazar and the anonymous reviewers for valuable comments."
] | [
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"objective",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"result",
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflect-ing the compositional structure of the problem in the network architecture.",
"However, prior work implicitly assumed that the structure of the network modules, describing the abstract reasoning process, provides a faithful explanation of the model's reasoning; that is, that all modules perform their intended behaviour.",
"In this work, we propose and conduct a systematic evaluation of the intermediate outputs of NMNs on NLVR2 and DROP, two datasets which require composing multiple reasoning steps.",
"We find that the intermediate outputs differ from the expected output, illustrating that the network structure does not provide a faithful explanation of model behaviour.",
"To remedy that, we train the model with auxiliary supervision and propose particular choices for module architecture that yield much better faithfulness, at a minimal cost to accuracy.",
"Models that can read text and reason about it in a particular context (such as an image, a paragraph, or a table) have been recently gaining increased attention, leading to the creation of multiple datasets that require reasoning in both the visual and textual domain (Johnson et al., 2016; Suhr et al., 2017; Talmor and Berant, 2018; Yang et al., 2018a; Suhr et al., 2019; Hudson and Manning, 2019; Dua et al., 2019).",
"Consider the example in Figure 1 from NLVR2: a model must understand the compositional sentence in order to then ground dogs in the input, count those that are black and verify that the count of all dogs in the image is equal to the number of black dogs.",
"Both models that assume an intermediate structure (Andreas et al., 2016; Jiang and Bansal, 2019) and models without such structure (Tan and Bansal, 2019; Hu et al., 2019; Min et al., 2019) have been proposed for these reasoning problems.",
"While good performance can be obtained without a structured representation, an advantage of structured approaches is that the reasoning process in such approaches is more interpretable .",
"For example, a structured model can explicitly denote that there are two dogs in the image, but that one of them is not black .",
"Such interpretability improves our sci-entific understanding, aids in model development, and improves overall trust in a model.",
"Neural module networks (NMNs; Andreas et al., 2016) parse an input utterance into an executable program composed of learnable modules that are designed to perform atomic reasoning tasks and can be composed to perform complex reasoning against an unstructured context.",
"NMNs are appealing since their output is interpretable; they provide a logical meaning representation of the utterance and also the outputs of the intermediate steps (modules) to reach the final answer.",
"However, because module parameters are typically learned from end-task supervision only, it is possible that the program will not be a faithful explanation of the behaviour of the model (Ross et al., 2017; Wiegreffe and Pinter, 2019), i.e., the model will solve the task by executing modules according to the program structure, but the modules will not perform the reasoning steps as intended .",
"For example, in Figure 1, a basic NMN predicts the correct answer False , but incorrectly predicts the output of the find [ dogs ] operation.",
"It does not correctly locate one of the dogs in the image because two of the reasoning steps ( find and filter ) are collapsed into one module ( find ).",
"This behavior of the find module is not faithful to its intended reasoning operation; a human reading the program would expect find [ dogs ] to locate all dogs.",
"Such unfaithful module behaviour yields an unfaithful explanation of the model behaviour.",
"Unfaithful behaviour of modules, such as multiple reasoning steps collapsing into one, are undesirable in terms of interpretability; when a model fails to answer some question correctly, it is hard to tell which modules are the sources of error.",
"While recent work (Yang et al., 2018b; Jiang and Bansal, 2019) has shown that one can obtain good performance when using NMNs, the accuracy of individual module outputs was mostly evaluated through qualitative analysis, rather than systematically evaluating the intermediate outputs of each module.",
"We provide three primary contributions regarding faithfulness in NMNs.",
"First, we propose the concept of module-wise faithfulness a systematic evaluation of individual module performance in NMNs that judges whether they have learned their intended operations, and define metrics to quantify this for both visual and textual reasoning ( 3).",
"Empirically, we show on both NLVR2 (Suhr et al., 2019) and DROP (Dua et al., 2019) that training a NMN using end-task supervision, even using gold programs , does not yield module-wise faithfulness, i.e., the modules do not perform their intended reasoning task.",
"Second, we provide strategies for improving module-wise faithfulness in NMNs ( 4).",
"Specifically,",
"(a) we demonstrate how module architecture affects faithfulness ( 4.1),",
"(b) propose supervising module outputs with either a proxy task or heuristically generated data ( 4.2), and",
"(c) show that providing modules with uncon-texualized token representations improves faithfulness ( 4.3).",
"Figure 1 shows an example where our approach ( Faithful-NMN ) results in expected module outputs as compared to the Basic-NMN .",
"Last, we collect human-annotated intermediate outputs for 536 examples in NLVR2 and for 215 examples in DROP to measure the module-wise faithfulness of models, and publicly release them for future work.",
"Our code and data are available at https://github.com/allenai/faithful-nmn .",
"Overview Neural module networks (NMNs; Andreas et al., 2016) are a class of models that map a natural language utterance into an executable program, composed of learnable modules that can be executed against a given context (im-ages, text, etc.), to produce the utterance's denotation (truth value in NLVR2, or a text answer in DROP).",
"Modules are designed to solve atomic reasoning tasks and can be composed to perform complex reasoning.",
"For example, in Figure 1, the utterance All the dogs are black is mapped to the program equal(count(find[ dogs ]), count(filter[ black ](find[ dogs ]))) .",
"The find module is expected to find all dogs in the image and the filter module is expected to output only the black ones from its input.",
"Figure 2 shows two other example programs with the expected output of each module in the program.",
"A NMN has two main components: (1) parser, which maps the utterance into an executable program; and (2) executor, which executes the program against the context to produce the denotation.",
"In our setup, programs are always trees where each tree node is a module.",
"In this work, we focus on the executor, and specifically the faithfulness of module execution.",
"We examine NMNs for both text and images, and describe their modules next.",
"In this task, given two images and a sentence that describes the images, the model should output True iff the sentence correctly describes the images.",
"We base our model, the Visual-NMN, on LXMERT (Tan and Bansal, 2019), which takes as input the sentence x and raw pixels, uses Faster R-CNN (Ren et al., 2015) to propose a set of bounding boxes, B , that cover the objects in the image, and passes the tokens of x and the bounding boxes through a Transformer (Vaswani et al., 2017), encoding the interaction between both modalities.",
"This produces a contextualized representation t R | x | h for each one of the tokens, and a representation v R |B| h for each one of the bounding boxes, for a given hidden dimension h .",
"We provide a full list of modules and their implementation in Appendix A. Broadly, modules take as input representations of utterance tokens through an utterance attention mechanism (Hu et al., 2017), i.e., whenever the parser outputs a module, it also predicts a distribution over the utterance tokens ( p 1 , . . . , p | x | ) , and the module takes as input (cid:80) | x | i =1 p i t i , where t i is the hidden representation of token i .",
"In addition, modules produce as output (and take as input) vectors p [0 , 1] |B| , indicating for each bounding box the probability that it should be output by the module (Mao et al., 2019).",
"For example, in the program filter [ black ]( find [ dog ]), the find module takes the word dog' (using utterance attention , which puts all probability mass on the word dog'), and outputs a probability vector p [0 , 1] |B| , where ideally all bounding boxes corresponding to dogs have high probability.",
"Then, the filter module takes p as input as well as the word black', and is meant to output high probabilities for bounding boxes with black dogs'.",
"For the Visual-NMN we do not use a parser, but rely on a collected set of gold programs (including gold utterance attention ), as described in 5. We will see that despite this advantageous setup, a basic NMN does not produce interpretable outputs.",
"Our Text-NMN is used to answer questions in the DROP dataset and uses the modules as designed for DROP in prior work (Gupta et al., 2020) along with three new modules we define in this work.",
"The modules introduced in Gupta et al. (2020) and used as is in our Text-NMN are find , filter , relocate , count , find-num , find-date , find-max-num , find-min-num , num-compare and date-compare .",
"All these modules are probabilistic and produce, as output, a distribution over the relevant support.",
"For example, find outputs a distribution over the passage tokens and find-num outputs a distribution over the numbers in the passage.",
"We extend their model and introduce additional modules; addition and subtraction to add or subtract passage numbers, and extract-answer which directly predicts an answer span from the representations of passage tokens without any explicit compositional reasoning.",
"We use BERT-base (Devlin et al., 2019) to encode the input question and passage.",
"The Text-NMN does not have access to gold programs, and thus we implement a parser as an encoder-decoder model with attention similar to Krishnamurthy et al. (2017), which takes the utterance as input, and outputs a linearized abstract syntax tree of the predicted program.",
"Neural module networks (NMNs) facilitate interpretability of their predictions via the reasoning steps in the structured program and providing the outputs of those intermediate steps during execution.",
"For example, in Figure 2, all reasoning steps taken by both the Visual-NMN and Text-NMN can be discerned from the program and the intermediate module outputs.",
"However, because module parameters are learned from an end-task, there is no guarantee that the modules will learn to perform their intended reasoning operation.",
"In such a scenario, when modules do not perform their intended reasoning, the program is no longer a faithful explanation of the model behavior since it is not possible to reliably predict the outputs of the intermediate reasoning steps given the program.",
"Work on NMNs thus far (Yang et al., 2018b; Jiang and Bansal, 2019) has overlooked systematically evaluating faithfulness, performing only qualitative analysis of intermediate outputs.",
"We introduce the concept of module-wise faithfulness aimed at evaluating whether each module has correctly learned its intended operation by judging the correctness of its outputs in a trained NMN.",
"For example, in Figure 2 (top), a model would be judged module-wise faithful if the outputs of all the modules, find , relocate , and with relation , are correct i.e. similar to the outputs that a human would expect.",
"We provide gold programs when evaluating faithfulness, to not conflate faithfulness with parser accuracy.",
"Modules in Visual-NMN provide for each bounding box a probability for whether it should be a module output.",
"To evaluate intermediate outputs, we sampled examples from the development set, and annotated gold bounding boxes for each instance of find , filter , with-relation and relocate .",
"The annotator draws the correct bounding-boxes for each module in the gold program, similar to the output in Figure 2 (top).",
"A module of a faithful model should assign high probability to bounding-boxes that are aligned with the annotated bounding boxes and low probabilities to other boxes.",
"Since the annotated bounding boxes do not align perfectly with the model's bounding boxes, our evaluation must first induce an alignment.",
"We consider two bounding boxes as aligned if the intersection-over-union (IOU) between them exceeds a pre-defined threshold T = 0 .",
"5 .",
"Note that it is possible for an annotated bounding box to be aligned with several proposed bounding boxes and vice versa.",
"Next, we consider an annotated bounding box BA as matched w.r.t a module output if BA is aligned with a proposed bounding box BP , and BP is assigned by the module a probability > 0 .",
"5 .",
"Similarly, we consider a proposed bounding box BP as matched if BP is assigned by the module a probability > 0 .",
"5 and is aligned with some annotated bounding box BA .",
"We compute precision and recall for each module type (e.g. find ) in a particular example by considering all instances of the module in that example.",
"We define precision as the ratio between the number of matched proposed bounding boxes and the number of proposed bounding boxes assigned a probability of more than 0.5.",
"We define recall as the ratio between the number of matched annotated bounding boxes and the total number of annotated bounding boxes.",
"1 F 1 is the harmonic mean of precision and recall.",
"Similarly, we compute an overall precision, recall, and F 1 score for an example by considering all instances of all module types in that example.",
"The final score is an average over all examples.",
"Please see Appendix B.2 for further discussion on this averaging.",
"Each module in Text-NMN produces a distribution over passage tokens ( 2.2) which is a soft distributed representation for the selected spans.",
"To measure module-wise faithfulness in Text-NMN, we obtain annotations for the set of spans that should be output by each module in the gold program (as seen in Figure 2 (bottom)) Ideally, all modules ( find , filter , etc.) should predict high probability for tokens that appear in the gold spans and zero probability for other tokens.",
"To measure a module output's correctness, we use a metric akin to cross-entropy loss to measure the deviation of the predicted module output p att from the gold spans S = [ s 1 , . . . , s N ] .",
"Here each span s i = ( t i s , t i e ) is annotated as the start and end tokens.",
"Faithfulness of a module is measured by: I = (cid:80) Ni =1 (cid:32) log (cid:80) t i e j = t i s p j att (cid:33) .",
"Lower cross-entropy corresponds to better faithfulness of a module.",
"Module-wise faithfulness is affected by various factors: the choice of modules and their implementation ( 4.1), use of auxiliary supervision ( 4.2), and the use of contextual utterance embeddings ( 4.3).",
"We discuss ways of improving faithfulness of NMNs across these dimensions.",
"Visual reasoning The count module always appears in NLVR2 as one of the top-level modules (see Figures 1 and 2).",
"2 We now discuss how its architecture affects faithfulness.",
"Consider the program, count(filter[ black ](find[ dogs ])) .",
"Its gold denotation (correct count value) would provide minimal feedback using which the descendant modules in the program tree, such as filter and find , need to learn their intended behavior.",
"However, if count is implemented as an expressive neural network, it might learn to perform tasks designated for find and filter , hurting faithfulness.",
"Thus, an architecture that allows counting, but also encourages descendant modules to learn their intended behaviour through backpropagation, is desirable.",
"We discuss three possible count architectures, which take as input the bounding box probability vector p [0 , 1] |B| and the visual features v R |B| h .",
"Layer-count module is motivated by the count architecture of Hu et al. (2017), which uses a linear projection from image attention, followed by a softmax.",
"This architecture explicitly uses the visual features, v , giving it greater expressivity compared to simpler methods.",
"First we compute p v , the weighted sum of the visual representations, based on their probabilities, and then output a scalar count using: FF 1 ( LayerNorm ( FF 2 ( p v )) , where FF 1 and FF 2 are feed-forward networks, and the activation function of FF 1 is ReLU in order to output positive numbers only.",
"As discussed, since this implementation has access to the visual features of the bounding boxes, it can learn to perform certain tasks itself, without providing proper feedback to descendant modules.",
"We show in 5 this indeed hurts faithfulness.",
"Sum-count module on the other extreme, ignores v , and simply computes the sum (cid:80) |B| i =1 p i .",
"Be-2 Top-level modules are Boolean quantifiers, such as number comparisons like equal (which require count ) or exist .",
"We implement exist using a call to count and greater-equal (see Appendix A), so count always occurs in the program.",
"ing parameter-less, this architecture provides direct feedback to descendant modules on how to change their output to produce better probabilities.",
"However, such a simple functional-form ignores the fact that bounding boxes are overlapping, which might lead to over-counting objects.",
"In addition, we would want count to ignore boxes with low probability.",
"For example, if filter predicts a 5% probability for 20 different bounding boxes, we would not want the output of count to be 1 .",
"0 .",
"Graph-count module (Zhang et al., 2018) is a mid-dle ground between both approaches the nave Sum-Count and the flexible Layer-Count .",
"Like Sum-Count , it does not use visual features, but learns to ignore overlapping and low-confidence bounding boxes while introducing only a minimal number of parameters (less than 300 ).",
"It does so by treating each bounding box as a node in a graph, and then learning to prune edges and cluster nodes based on the amount of overlap between their bounding boxes (see paper for further details).",
"Because this is a light-weight implementation that does not access visual features, proper feedback from the module can propagate to its descendants, encouraging them to produce better predictions.",
"Textual reasoning In the context of Text-NMN (on DROP), we study the effect of several modules on interpretability.",
"First, we introduce an extract-answer module.",
"This module bypasses all compositional reasoning and directly predicts an answer from the input contextualized representations.",
"This has potential to improve performance, in cases where a question describes reasoning that cannot be captured by pre-defined modules, in which case the program can be the extract-answer module only.",
"However, introducing extract-answer adversely affects interpretability and learning of other modules, specifically in the absence of gold programs.",
"First, extract-answer does not provide any interpretability.",
"Second, whenever the parser predicts the extract-answer module, the parameters of the more interpretable modules are not trained.",
"Moreover, the parameters of the encoder are trained to perform reasoning internally in a noninterpretable manner.",
"We study the interpretability vs. performance trade-off by training Text-NMN with and without extract-answer .",
"Second, consider the program find-max-num(find[ touchdown ]) that aims to find the longest touchdown .",
"find-max-num should sort spans by their value and return the maximal one; if we remove find-max-num , the program would reduce to find[ touchdown ] , and the find module would have to select the longest touchdown rather than all touchdowns, following the true denotation.",
"More generally, omitting atomic reasoning modules pushes other modules to compensate and perform complex tasks that were not intended for them, hurting faithfulness.",
"To study this, we train Text-NMN by removing sorting and comparison modules (e.g., find-max-num and num-compare ), and evaluate how this affects module-wise interpretability.",
"As explained, given end-task supervision only, modules may not act as intended, since their parameters are only trained for minimizing the end-task loss.",
"Thus, a straightforward way to improve interpretability is to train modules with additional atomic-task supervision.",
"Visual reasoning For Visual-NMN, we pre-train find and filter modules with explicit intermediate supervision, obtained from the GQA balanced dataset (Hudson and Manning, 2019).",
"Note that this supervision is used only during pre-training we do not assume we have full-supervision for the actual task at hand.",
"GQA questions are annotated by gold programs; we focus on exist questions that use find and filter modules only, such as Are there any red cars? .",
"Given gold annotations from Visual Genome (Kr-ishna et al., 2017), we can compute a label for each of the bounding boxes proposed by Faster-RCNN.",
"We label a proposed bounding box as positive' if its IOU with a gold bounding box is > 0 .",
"75 , and negative' if it is < 0 .",
"25 .",
"We then train on GQA examples, minimizing both the usual denotation loss, as well as an auxiliary loss for each instance of find and filter , which is binary cross entropy for the labeled boxes.",
"This loss rewards high probabilities for positive' bounding boxes and low probabilities for negative' ones.",
"Textual reasoning Prior work (Gupta et al., 2020) proposed heuristic methods to extract supervision for the find-num and find-date modules in DROP.",
"On top of the end-to-end objective, they use an auxiliary objective that encourages these modules to output the gold numbers and dates according to the heuristic supervision.",
"They show that supervising intermediate module outputs helps improve model performance.",
"In this work, we evaluate the effect of such supervision on the faithfulness of both the supervised modules, as well as other modules that are trained jointly.",
"The goal of decomposing reasoning into multiples steps, each focusing on different parts of the utterance, is at odds with the widespread use of contextualized representations such as BERT or LXMERT.",
"While the utterance attention is meant to capture information only from tokens relevant for the module's reasoning, contextualized token representations carry global information.",
"For example, consider the program filter[ red ](find[ car ]) for the phrase red car .",
"Even if find attends only to the token car , its representation might also express the attribute red , so find might learn to find just red cars , rather than all cars , rendering the filter module useless, and harming faithfulness.",
"To avoid such contextualiza-tion in Visual-NMN, we zero out the representations of tokens that are unattended, thus the input to the module is computed (with LXMERT) from the remaining tokens only.",
"We first introduce the datasets used and the experimental setup for measuring faithfulness ( 5.1).",
"We demonstrate that training NMNs using end-task supervision only does not yield module-wise faithfulness both for visual and textual reasoning.",
"We then show that the methods from 4 are crucial for achieving faithfulness and how different design choices affect it ( 5.2).",
"Finally, we qualitatively show examples of improved faithfulness and analyze possible reasons for errors ( 5.3).",
"Please see Appendix C for further detail about the experimental setups.",
"Visual reasoning We automatically generate gold program annotations for 26 , 311 training set examples and for 5 , 772 development set examples from NLVR2.",
"The input to this generation process is the set of crowdsourced question decompositions from the BREAK dataset (Wolfson et al., 2020).",
"See Appendix C.1 for details.",
"For module-wise faithfulness evaluation, 536 examples from the development set were annotated with the gold output for each module by experts.",
"Table 1 : Faithfulness and accuracy on NLVR2.",
"decont. refers to decontextualized word representations.",
"Precision, recall, and F 1 are averages across examples, and thus F 1 is not the harmonic mean of the corresponding precision and recall.",
"Table 2 : Faithfulness and performance scores for various NMNs on DROP.",
"lower is better.",
"min-max is average faithfulness of find-min-num and find-max-num ; find-arg of find-num and find-date .",
"Textual reasoning We train Text-NMN on DROP, which is augmented with program supervision for 4 , 000 training questions collected heuristically as described in Gupta et al. (2020).",
"The model is evaluated on the complete development set of DROP which does not contain any program supervision.",
"Module-wise faithfulness is measured on 215 manually-labeled questions from the development set, which are annotated with gold programs and module outputs (passage spans).",
"Visual reasoning Results are seen in Table 1.",
"Accuracy for LXMERT, when trained and evaluated on the same subset of data, is 71.7%; slightly higher than NMNs, but without providing evidence for the compositional structure of the problem.",
"For faithfulness, we measure an upper-bound on the faithfulness score.",
"Recall that this score measures the similarity between module outputs and annotated outputs.",
"Since module outputs are constrained by the bounding boxes proposed by Faster-RCNN ( 2.1), while annotated boxes are not, perfect faithfulness could only be achieved by a model if there are suitable bounding boxes.",
"Upper Bound shows the maximal faithfulness score conditioned on the proposed bounding boxes.",
"We now compare the performance and faithfulness scores of the different components.",
"When training our NMN with the most flexible count module, ( NMN w/ Layer-count ), an accuracy of 71 .",
"2% is achieved, a slight drop compared to LXMERT but with low faithfulness scores.",
"Using Sum-count drops about 3% of performance, but increases faithfulness.",
"Using Graph-count increases accuracy while faithfulness remains similar.",
"Next, we analyze the effect of decontextualized word representations (abbreviated decont.) and pre-training.",
"First, we observe that NMN w/ Graph-count + decont.",
"increases faithfulness score to 0 .",
"33 F 1 at the expense of accuracy, which drops to 67 .",
"3% .",
"Pre-training ( NMN w/ Graph-count + pretraining ) achieves higher faithfulness scores with a higher accuracy of 69 .",
"6% .",
"Combining the two achieves the best faithfulness ( 0 . 47 F 1 ) with a minimal accuracy drop.",
"We perform a paired permutation test to compare NMN w/ Graph-count + decont.",
"+ pretraining with NMN w/ Layer-count and find that the difference in F 1 is statistically significant ( p < 0 . 001 ).",
"Please see Appendix D.1 for further details.",
"Textual reasoning As seen in Table 2, when trained on DROP using question-program supervision, the model achieves 65 .",
"3 F 1 performance and a faithfulness score of 11 .",
"2 .",
"When adding supervision for intermediate modules ( 4.2), we find that the module-wise faithfulness score improves to 6 .",
"5 .",
"Similar to Visual-NMN, this shows that supervising intermediate modules in a program leads to better faithfulness.",
"To analyze how choice of modules affects faithfulness, we train without sorting and comparison modules ( find-max-num , num-compare , etc.).",
"We find that while performance drops slightly, faithfulness deteriorates significantly to 8 .",
"4 , showing that modules that perform atomic reasoning are crucial for faithfulness.",
"When trained without program supervision, removing extract-answer improves faithfulness ( 9 . 5 6 . 9 ) but at the cost of performance ( 63 . 5 60 . 8 F 1 ).",
"This shows that such a black-box module encourages reasoning in an opaque manner, but can improve performance by overcoming the limitations of pre-defined modules.",
"All improvements in faithfulness are significant as measured using paired permutation tests ( p < 0 . 001 ).",
"Generalization A natural question is whether models that are more faithful also generalize better.",
"We conducted a few experiments to see whether this is true for our models.",
"For NLVR2, we performed (1) an experiment in which programs in training have length at most 7 , and programs at test time have length greater than 7 , (2) an experiment in which programs in training have at most 1 filter module and programs at test time have at least 2 filter modules, and (3) an experiment in which programs in training do not have both filter and with-relation modules in the same program, while each program in test has both modules.",
"We compared three of our models NMN w/ Layer-count , NMN w/ Sum-count , and NMN w/ Graph-count + decont.",
"+ pretraining .",
"We did not observe that faithful models generalize better (in fact, the most unfaithful model tended to achieve the best generalization).",
"To measure if faithful model behavior leads to better generalization in Text-NMN we conducted the following experiment.",
"We selected the subset of data for which we have gold programs and split the data such that questions that require maximum and greater-than operations are present in the training data while questions that require computing minimum and less-than are in the test data.",
"We train and test our model by providing gold-programs under two conditions, in the presence and absence of additional module supervision.",
"We find that providing auxiliary module supervision (that leads to better module faithfulness; see above) also greatly helps in model generalization (perfor-mance increases from 32 . 3 F 1 78 . 3 F 1 ).",
"Model comparisons We analyze outputs of different modules in Figure 3.",
"Figures 3a, 3b show the output of find [ llamas ] when trained with contextualized and decontextualized word representations.",
"With contextualized representations (3a), the find fails to select any of the llamas , presumably because it can observe the word eating , thus effectively searching for eating llamas , which are not in the image.",
"Conversely, the decontextualized model correctly selects the boxes.",
"Figure 3c shows that find outputs meaningless probabilities for most of the bounding boxes when trained with Layer-count , yet the count module produces the correct value (three).",
"Figure 3d shows that find fails to predict all relevant spans when trained without sorting modules in Text-NMN.",
"Error analysis We analyze cases where outputs were unfaithful.",
"First, for visual reasoning, we notice that faithfulness scores are lower for long-tail objects.",
"For example, for dogs , a frequent noun in NLVR2, the execution of find[ dogs ] yields an average faithfulness score of 0.71, while items such as roll of toilet paper , barbell and safety pin receive lower scores (0.22, 0.29 and 0.05 respectively; example for a failure case for safety pin in Fig. 3e).",
"In addition, some objects are harder to annotate with a box ( water , grass , ground ) and therefore receive low scores.",
"The issue of small objects can also explain the low scores of relocate .",
"In the gold box annotations used for evaluation, the average areas for find , filter , with-relation , and relocate (as a fraction of the total image area) are 0 .",
"19 , 0 .",
"19 , 0 .",
"15 , and 0 .",
"07 , respectively.",
"Evidently, relocate is executed with small objects that are harder to annotate ( tongue , spots , top of ), and indeed the upper-bound and model scores for relocate are lowest among the module types.",
"NMNs were originally introduced for visual question answering and applied to datasets with syn-utt:",
"In such prior work, module-wise faithfulness was mostly assessed via qualitative analysis of a few examples (Jiang and Bansal, 2019; Gupta et al., 2020).",
"Yang et al. (2018b) did an evaluation where humans rated the clarity of the reasoning process and also tested whether humans could detect model failures based on module outputs.",
"In contrast, we quantitatively measure each module's predicted output against the annotated gold outputs.",
"A related systematic evaluation of interpretability in VQA was conducted by Trott et al. (2018).",
"They evaluated the interpretability of their VQA counting model, where the interpretability score is given by the semantic similarity between the gold label for a bounding box and the relevant word(s) in the question.",
"However, they studied only counting questions, which were also far less compositional than those in NLVR2 and DROP.",
"Similar to the gold module output annotations that we provide and evaluate against, HOTPOTQA (Yang et al., 2018a) and COQA (Reddy et al., 2019) datasets include supporting facts or rationales for the answers to their questions, which can be used for both supervision and evaluation.",
"In concurrent work, Jacovi and Goldberg (2020) recommend studying faithfulness on a scale rather than as a binary concept.",
"Our evaluation method can be viewed as one example of this approach.",
"We introduce the concept of module-wise faithfulness , a systematic evaluation of faithfulness in neural module networks (NMNs) for visual and textual reasoning.",
"We show that nave training of NMNs does not produce faithful modules and propose several techniques to improve module-wise faithfulness in NMNs.",
"We show how our approach leads to much higher module-wise faithfulness at a low cost to performance.",
"We encourage future work to judge model interpretability using the proposed evaluation and publicly published annotations, and explore techniques for improving faithfulness and interpretability in compositional models.",
"We thank members of UCI NLP, TAU NLP, and the AllenNLP teams as well as Daniel Khashabi for comments on earlier drafts of this paper.",
"We also thank the anonymous reviewers for their comments.",
"This research was partially supported by The Yandex Initiative for Machine Learning, the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800), funding by the ONR under Contract No.",
"N00014-19-1-2620, and by sponsorship from the LwLL DARPA program under Contract No.",
"FA8750-19-2-0201.",
"This work was completed in partial fulfillment for the Ph.D degree of Ben Bogin."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Over the last few years, there has been a move towards data curation for multilingual task-oriented dialogue (ToD) systems that can serve people speaking different languages.",
"However, existing multilingual ToD datasets either have a limited coverage of languages due to the high cost of data curation, or ignore the fact that dialogue entities barely exist in countries speaking these languages.",
"To tackle these limitations, we introduce a novel data curation method that generates GlobalWoZ a large-scale multilingual ToD dataset globalized from an English ToD dataset for three unexplored use cases of multilingual ToD systems.",
"Our method is based on translating dialogue templates and filling them with local entities in the target-language countries.",
"Besides, we extend the coverage of target languages to 20 languages.",
"We will release our dataset and a set of strong baselines to encourage research on multilingual ToD systems for real use cases.",
"1 1 Introduction One of the fundamental objectives in pursuit of artificial intelligence is to enable machines with the ability to intelligently communicate with human in natural languages, with one of the widely-heralded applications being the task-oriented dialogue (ToD) systems (Gupta et al., 2006; Bohus and Rudnicky, 2009).",
"Recently, ToD systems have been successfully deployed to assist users with accomplishing certain domain-specific tasks such as hotel booking, alarm setting or weather query (Eric et al., 2017; Wu et al., 2019; Lin et al., 2020; Zhang et al., 2020), thanks to the joint advent of neural networks and availability of domain-specific data.",
"However, most existing ToD systems are predominately built for English, limiting their service for all of the Bosheng Ding is under the Joint PhD Program between Alibaba and Nanyang Technological University.",
"world's citizens.",
"The reason of this limitation lies in the stark lack of high-quality multilingual ToD datasets due to the high expense and challenges of human annotation (Razumovskaia et al., 2021).",
"One solution to this is annotating conversations in other languages from scratch, e.g., CrossWoZ (Zhu et al., 2020) and BiToD (Lin et al., 2021).",
"However, these methods involve expensive human efforts for dialogue collection in the other languages, resulting in a limited language/domain coverage.",
"The other major line of work focused on translating an existing English ToD dataset into target languages by professional human translators (Upadhyay et al., 2018; Schuster et al., 2019; van der Goot et al., 2021; Li et al., 2021).",
"Despite the increasing language coverage, these methods simply translated English named entities (e.g., location, restaurant name) into the target languages, while ignored the fact that these entities barely exist in countries speaking these languages.",
"This hinders a trained ToD system from supporting the real use cases where a user looks for local entities in a target-language country.",
"For example in Figure 1, a user may look for the British Museum when traveling to London (A.), while look for the Oriental Pearl Tower when traveling to Shanghai (B.).",
"In addition, prior studies (Cheng and Butler, 1989; Kim, 2006) have shown that code-switching phenomena frequently occurs in a dialogue when a speaker cannot express an entity immediately and has to alternate between two languages to convey information more accurately.",
"Such phenomena could be ubiquitous during the cross-lingual and cross-country task-oriented conversations.",
"One of the reasons for code-switching is that there are no exact translations for many local entities in the other languages.",
"Even though we have the translations, they are rarely used by local people.",
"For example in Figure 1 (C.), after obtaining the recommendation from a ToD system, a Chinese speaker traveling to London would rather use the English 1639 London A. Use Case: E&E I'm looking for an attraction to visit in London.",
"entity British Museum than its Chinese translation to search online or ask local people.",
"To verify this code-switching phenomena, we have also conducted a case study (6.1) which shows that searching the information about translated entities online yields a much higher failure rate than searching them in their original languages.",
"Motivated by these observations, we define three unexplored use cases 2 of multilingual ToD where a foreign-language speaker uses ToD in the foreign-language country ( F&F ) or an English country ( F&E ), and an English speaker uses ToD in a foreign-language country ( E&F ).",
"These use cases are different from the traditional E&E use case where an English speaker uses ToD in an English-speaking country.",
"To bridge the aforementioned gap between existing data curation methods and the real use cases, we propose a novel data curation method that global-izes an existing multi-domain ToD dataset beyond English for the three unexplored use cases.",
"Specifically, building on top of MultiWoZ (Budzianowski et al., 2018) an English ToD dataset for dialogue state tracking (DST), we create GlobalWoZ, a new multilingual ToD dataset in three new target-languages via machine translation and crawled ontologies in the target-language countries.",
"Our method only requires minor human efforts to post-edit a few hundred machine-translated dialogue templates in the target languages for evaluation.",
"Besides, as cross-lingual transfer via pre-2 See comparisons of these use cases in Appendix A trained multilingual models (Devlin et al., 2019; Conneau et al., 2020; Liu et al., 2020; Xue et al., 2021) has proven effective in many cross-lingual tasks, we further investigate another question: How do these multilingual models trained on the English ToD dataset transfer knowledge to our globalized dataset?",
"To answer this question, we prepare a few baselines by evaluating popular ToD systems on our created test datasets in a zero-shot cross-lingual transfer setting as well as a few-shot setting.",
"Our contributions include the following: To the best of our knowledge, we provide the first step towards analyzing three unexplored use cases for multilingual ToD systems.",
"We propose a cost-effective method that creates a new multilingual ToD dataset from an existing English dataset.",
"Our dataset consists of high-quality test sets which are first translated by machines and then post-edited by professional translators in three target languages (Chinese, Spanish and Indonesian).",
"We also leverage machine translation to extend the language coverage of test data to another 17 target languages.",
"Our experiments show that current multilingual systems and translate-train methods fail in zero-shot cross-lingual transfer on the dialogue state tracking task.",
"To tackle this problem, we propose several data augmentation methods to train strong baseline models in both zero-shot and few-shot cross-lingual transfer settings.",
"In order to globalize an existing English ToD dataset for the three aforementioned use cases, we propose an approach consisting of four steps as shown in Figure 2: (1) we first extract dialogue templates from the English ToD dataset by replacing English-specific entities with a set of general-purpose placeholders (2.1); (2) we then translate the templates to a target language for both training and test data, with one key distinction that we only post-edit the test data by professional translators to ensure the data quality for evaluation (2.2); (3) next, we collect ontologies (Kiefer et al., 2021) containing the definitions of dialogue acts, local entities and their attributes in the target-language countries (2.3); (4) finally, we tailor the translated templates by automatically substituting the placeholders with entities in the extracted ontologies to construct data for the three use cases (2.4).",
"2.1 Automatic Template Creation We start with MultiWoZ 2.2 (Zang et al., 2020) a high-quality multi-domain English ToD dataset with more accurate human annotations compared to its predecessors MultiWoZ 2.0 (Budzianowski et al., 2018) and MultiWoz 2.1 (Eric et al., 2020).",
"For the sake of reducing human efforts for collecting ToD context in the target languages, we re-use the ToD context written by human in MultiWoZ as the dialogue templates.",
"Specifically as shown in Figure 2, we replace the English entities in MultiWoz by a set of general-purpose placeholders such as [attraction-name0] and [attraction-postcode1] , where each placeholder contains the entity's domain, attribute and ID.",
"To do so, we first build a dictionary with entity-placeholder pairs by parsing the annotations of all dialogues.",
"For example, from a dialogue text I recommend Whale of a time and the post code is cb238el. , we obtain two entity-placeholder pairs from its human annotations, i.e., ( Whale of a time , [attraction-name0] ) and ( cb238el , [attraction-postcode1] ).",
"Next, we identify entities in the dialogue by their word index from the human annotations, replace them with their placeholders in the dictionary, and finally obtain dialogue templates with placeholders.",
"Notably, we skip the entities with their attributes of [choice] and [ref] that represent the number of choices and booking reference number, as these attributes could be used globally.",
"Following Liu et al. (2021) that translates sentences with placeholders, we use a machine translation system 3 to translate dialogue templates with our designed placeholders.",
"As we observe, a placeholder containing an entity domain, attribute and ID (e.g., attraction-name0 ) is useful to provide contextually meaningful information to the translation system, thus usually resulting in a high-quality translation with the placeholder unchanged 4 .",
"This also enables us to easily locate the placeholders in the translation output and replace them with new entities in the target language.",
"To build a high-quality test set for evaluation, we further hire professional translators to post-edit a few hundred machine-translated templates, which produces natural and coherent sentences in the target languages.",
"5 With the goal of selecting representative test templates for post-editing, we first calculate the frequency of all the 4-gram combinations in the MultiWoZ data, and then score each dialogue in the test set by the sum of the frequency of all the 4-gram combinations in the dialogue divided by the dialogue's word length.",
"We use this scoring function to estimate the representiveness of a dialogue in the original dataset.",
"Finally, we select the top 500 high-scoring dialogues in the test set for post-editing.",
"6 We also use the same procedure to create a small high-quality training set for few-shot cross-lingual transfer setting.",
"Meanwhile, we crawl the attribute information of local entities in three cities from public websites (e.g., tripadvisor.com, booking.com) to create three ontologies for the three corresponding target languages respectively.",
"As shown in Table 8 in Appendix E, we select Barcelona for Spanish (an Indo-European language), Shanghai for Mandarin (a Sino-Tibetan language) and Jakarta for Indonesian (an Austronesian language), which cover a set of typologically different language families.",
"Given a translated dialogue template, we can easily sample a random set of entities for a domain of interest from a crawled ontology and assign the entities to the template's placeholders to obtain a 3 We use Google Translate ( https://cloud.google. com/translate ), an off-the-shelf MT system.",
"4 Appendix B has an example of label sequence translation.",
"5 Appendix C shows the bleu scores between MT test data and MTPE test data.",
"6 Appendix D shows the English test data distribution.",
"Repeating this procedure on each dialogue template, we can easily build a high-quality labeled dataset in the target language.",
"Table 9 in Appendix F shows the statistics of our collected entities in the target languages compared with the English data.",
"The number of our collected entities are either larger than or equal to those in the English data except for the train domain; we collected the information about only 100 trains for each languages due to the complexity in collecting relevant information.",
"After the above steps, we assign entities in a target language to the translated templates in the same target language for the F&F case, while assigning target-language entities to the English (source-language) templates for the F&E case.",
"As for the E&F case, we keep the original English context by skipping the translation step and replace the placeholders with local entities in the target language (see Figure 2 for examples).",
"To sum up, our proposed method has three key properties: (1) our method is cost-effective as we only require a limited amount of post-editing efforts for a test set when compared to the expensive crowd-sourced efforts from the other studies; (2) we can easily sample entities from an ontology to create large-scale machine-translated data as a way of data augmentation for training; (3) our method is flexible to update entities in a ToD system whenever an update of ontology is available, e.g., extension of new entities.",
"We refer the readers to Table 10 for the data statistics of GlobalWoZ and Figure 9 for dialogue examples in the appendix.",
"Our experiments focus on the dialogue state tracking (DST), one of the fundamental components in a ToD system that predicts the goals of a user query in multi-turn conversations.",
"We follow the setup in MultiWoZ (Budzianowski et al., 2018) to evaluate ToD systems for DST by the joint goal accuracy which measures the percentage of correctly predicting all goals in a multi-turn conversation.",
"Zero-Shot Cross-lingual Transfer: Unlike prior studies that annotate a full set of high-quality training data for a target language, we investigate the zero-shot cross-lingual transfer setting where we have access to only a high-quality human-annotated English ToD data (referred to as gold standard data hereafter).",
"In addition, we assume that we have access to a machine translation system that translates from English to the target language.",
"We investigate this setting to evaluate how a multilingual ToD system transfers knowledge from a high-resource source language to a low-resource target language.",
"Few-Shot Cross-lingual Transfer: We also investigate few-shot cross-lingual transfer, a more practical setting where we are given a small budget to annotate ToD data for training.",
"Specifically, we include a small set (100 dialogues) of high-quality training data post-edited by professional translators 1642 (2.2) in a target language, and evaluate the efficiency of a multilingual ToD on learning from a few target-language training examples.",
"We prepare a base model for GlobalWoZ in the zero-shot and few-shot cross-lingual transfer settings.",
"We select Transformer-DST (Zeng and Nie, 2020) as our base model as it is one of the state-of-the-art models on both MultiWoZ 2.0 and MultiWoZ 2.1 7 .",
"In our paper, we replace its BERT encoder with an mBERT encoder (Devlin et al., 2019) for our base model and propose a series of training methods for GlobalWoZ.",
"As detailed below, we propose several data augmentation baselines that create different training and validation data for training a base model.",
"Note that all the proposed baselines are model agnostic and the base model can be easily substituted with other popular models (Heck et al., 2020; Lin et al., 2020).",
"For each baseline, we first train a base model on its training data for 20 epochs and use its validation set to select the best model during training.",
"Finally we evaluate the best model of each baseline on the same test set from GlobalWoZ.",
"We will release GlobalWoZ and our pre-trained models to encourage faster adaptation to future research.",
"We refer the readers to Table 11 and Table 12 in Appendix I while reading the subsequent methods for a better understanding.",
"We train a base model on the gold standard English data (E&E) and directly apply the learned model to the test data of the three use cases in GlobalWoZ.",
"With this method, we simulate the condition of having labeled data only in the source language for training, and evaluate how the model transfers knowledge from English to the three use cases.",
"We use Zero-Shot (E&E) to denote this method.",
"We use our data curation method (2) to translate the templates by an MT system but replace the placeholders in the translated templates with machine-translated entities to create a set of pseudo-labeled training data.",
"Next, we train a base model on the translated training data without local entities, and evaluate the model on the three use cases.",
"We denote this method as Translate-Train .",
"7 According to the leaderboards of Multi-domain Dialogue State Tracking on MultiWoZ 2.0 and MultiWoZ 2.1 on paperwithcode.com as of 11/15/2021.",
"By skipping the human post-editing step in our data curation method (2), we leverage a machine translation system to automatically create a large set of pseudo-labeled training data with local entities for the three use cases.",
"In the F&F case, we translate the English templates by the MT system and replace the placeholders in the translated templates with foreign-language entities to create a training dataset.",
"In the F&E case, we replace the placeholders in the translated templates with the original English entities to create a code-switched training dataset.",
"In the E&F case, we use the original English templates and replace the placeholders in the English templates with foreign-language entities to create a code-switch training dataset.",
"With this data augmentation method, we can train a base model on each pseudo-labeled training dataset created for each use case.",
"We denote this method as SUC (Single-Use-Case).",
"We investigate the performance of combining the existing English data and the pseudo-labeled training data created for one of the three use cases (i.e., F&F, F&E, E&F), one at a time, to do bi-use-case training.",
"In the bilingual training, we only combine the gold English data (E&E) with the pseudo-labeled training data in one target language in one use case for joint training.",
"We denote this method as BBUC (Bilingual Bi-Use-Case).",
"In the multilingual training, we combine gold English data (E&E) and pseudo-labeled training data in all languages in one use case for joint training.",
"We denote this method as MBUC (Multilingual Bi-Use-Case).",
"We also propose to combine the existing English data (E&E) and all the pseudo-labeled training data in all target languages for all the use cases (F&F, F&E, E&F).",
"We then train a single model on this combined multilingual training dataset and evaluate the model on test data in all target languages for all three use cases .",
"We denote this method as MMUC (Multilingual Multi-Use-Case).",
"In this section, we show the results of all methods in the zero-shot (5.1) and few-shot (5.2) settings.",
"Table 1 reports the joint goal accuracy of all proposed methods on the three different sets of test data in the F&F, F&E, and E&F use cases 8 .",
"Both Zero-Shot (E&E) and Translate-Train struggle, achieving average accuracy of less than 10 in all use cases.",
"Despite its poor performance, Zero-Shot (E&E) works much better in F&E than F&F, while its results in F&F and E&F are comparable, indicating that a zero-shot model trained in E&E can transfer knowledge about local English entities more effectively than knowledge about English context in downstream use cases.",
"Besides, we also find that Zero-Shot (E&E) performs better on the Spanish or Indonesian context than the Chinese context in F&E.",
"One possible reason is that English is closer to the other Latin-script languages (Spanish and Indonesian) than Chinese.",
"Our proposed data augmentation methods (SUC, BBUC, MBUC) perform much better than non-adapted methods (Zero-Shot (E&E) and Translate-Train) that do not leverage any local entities for training.",
"In particular, it is worth noting that even though Translate-Train and SUC both do training on foreign-language entities in F&F and E&F, there is a huge gap between these two methods, since Translate-Train has only access to the machine-translated entities rather than the real local entities used by SUC.",
"This huge performance gaps not only show that Translate-Train is not an effective method in practical use cases but also prove that having access to local entities is a key to building a multilingual ToD system for practical usage.",
"Comparing our data augmentation methods SUC and BBUC, we find that the base model can benefit from training on additional English data (E&E), especially yielding a clear improvement of up to 5.58 average accuracy points in F&E.",
"Moreover, when we increase the number of languages in the bi-use-case data augmentations (i.e., MBUC), we observe an improvement of around 1 average accuracy points in all three use cases w.r.t. BBUC.",
"These observations encourage a potential future direction that explores better data augmentation methods to create high-quality pseudo-training data.",
"Notice that we can train a single model by MMUC for all use cases rather than training separate mod-8",
"els, one for each use case.",
"In Figure 3, we compare MMUC and MBUC (rows) on the test data in the four use cases (columns).",
"Although MMUC may not achieve the best results in each use case, it achieves the best average result over the four use cases, indicating the potential of using one model to simultaneously handle all the four use cases.",
"In few-shot experiments, we use the same scoring function based on frequency of all 4-gram combinations (2.2) to select 100 additional dialogues from train set for human-post editing, and create high-quality training data for each of the three use cases.",
"To avoid overfitting on this small few-shot dataset, we combine the few-shot data with the existing English data for training a base model (Few-Shot+Zero-Shot (E&E)).",
"Next, we also investigate a model trained with additional synthetic data created by our proposed SUC.",
"In Figure 4, we find that our proposed SUC without additional few-shot 1644 F&F F&E E&F 0 10 20 30 40 50 60 1.28 9.12 1.77 11.72 40.84 17.78 28.96 48.71 36.79 34.16 55.15 36.88 34.70 56.77 37.61 Zero Shot (E&E) Few Shot+Zero Shot (E&E) SUCFew Shot+SUC Few Shot+Zero Shot (E&E)+SUC Figure 4: Few-shot cross-lingual average joint accuracy on DST over three target languages in three use cases.",
"data has already outperformed the model trained with few-shot data and English data (Few-shot + Zero-Shot (E&E)), indicating that the model benefit more from a large amount of pseudo-labeled data than a small set of human-labeled data.",
"If we combine the data created by SUC with the few-shot data or with both few-shot and English data to train the model, we observe improvements over SUC, especially with a clear gain of 8.06 accuracy points in F&E.",
"We refer the readers to Table 14 in the appendix for detailed scores in all target languages.",
"One key research question is to validate whether code-switched use cases with local entities (i.e., F&E, E&F) are practically more useful for information seeking.",
"To answer this question, we compare the failure rate of using local entities and machine-translated entities in information search, which is a proxy to the efficiency of using these two types of entities in conversations.",
"We first randomly select 100 entities (33 attractions, 33 hotels and 34 restaurants) of Cambridge, Shanghai, Barcelona and Jakarta.",
"We translate the English entities into Mandarin, Spanish and Indonesian and the foreign-language entities into English via Google Translate.",
"We then manually search the translated entities on Google to check whether we can find the right information of the original entities.",
"Notice that the failure of the above verification partially come from the translation error made by Google Translate, or the search failure due to the fact that this entity does not have a bilingual version at all.",
"In Table 2, we observe a high failure rate of around 60% for almost all translated directions (except Zh En) Translate Search En Zh En Es En Id Zh En Es En Id En (cid:34) (cid:34) 35 42 36 62 30 31 (cid:34) (cid:37) 61 34 51 18 18 15 (cid:37) (cid:34) 0 24 13 11 50 54 (cid:37) (cid:37) 4 0 0 8 2 0 Failure Case (MTed Entities) 65 58 64 37 70 69 Failure Rate (MTed Entities) 65% 58% 64% 37% 70% 69% Failure Rate (Original Entities) 3% 3% 3% 0% 1% 0% Table 2: The search and translation results of 100 translated entities on Google.",
"due to translation and search failures, significantly exceeding the low failure rate of searching original entities online.",
"Besides, even if we can find the right information of the translated entities, local people may not recognize or use the translated entities for communication, thus this results in in-efficient communication with local people.",
"In previous translation-based work, a multilingual ToD system is usually built based on the translation of English training data (Translate-Train), and is evaluated on translated test data without any local entities (Translate-Test).",
"To verify whether this procedure is reliable to build a multilingual ToD system, we also create a test dataset with translated entities instead of local entities in the target languages.",
"As shown in Figure 5, we find the Translate-Train model performs well on the test data with translated entities, but performs badly on the test data with real local entities.",
"To the best of our knowledge, we provide the first analysis to identify this performance gap between the translated test data and data with real local entities in a more realistic use case 9 .",
"Our work sheds light on the development of a globalized multilingual ToD system in practical use cases.",
"We can tackle 9 Please refer to Appendix L for concrete examples where Translate-Train fails in predicting real local entities.",
"the challenge of localization issues by exploring new data augmentation method.",
"Alternatively we can also explore new methods from the model level by building modular network to update the entities or perform transfer learning to adapt to new case without retraining.",
"We compare the impact of training a model on data with either local contexts or local entities when the model is evaluated on monolingual test data in F&F and E&E.",
"Specifically, when the train set has access to local context only, all the entities in the train set are replaced by entities in non-target languages.",
"Similarly, when the train set has access to local entities only, the contexts in the train set are replaced by context in the non-target languages.",
"Table 3 shows that both local contexts and local entities are essential to building ToD systems in the target language.",
"A further analysis in Table 15 and Table 16 in the appendix shows that training with local entities is more important if the entities and contexts are written in the same type of language script (e.g. Latin script).",
"With our proposed data curation method, it is possible to extend the dataset to cover more languages without spending extra costs if we skip the human post-editing step.",
"Before doing so, one key question is whether the evaluation on the translated data without human post-editing is reliable as a proxy of the model performance.",
"Thus, we conduct the experiments by evaluating the model performance of all baselines (4) on two sets of test data built with local entities: (1) MT test data where translated template is created by machine translation only (2.2); (2) MTPE test data where translated template is first translated by machines and post-edited later by professional translators.",
"As shown in Table 4, the overall reported results on MT test data are higher than those reported on MTPE test data, which is expected because the distribution of the MT test data is more similar to the MT training Use Case F2F F2E Methods MT Test MTPE Test MT Test MTPE Test Zero-Shot (E&E) 1.29 1.28 9.64 9.12 Translate-Train 3.71 3.65 4.17 3.97 SUC 35.78 28.96 56.15 48.71 BBUC 36.31 29.74 57.84 54.29 MBUC 37.89 30.76 58.76 56.28 Spearman's correlation 1.0 1.0 Table 4: Comparison of average joint accuracy on DST reported on MT test data and MTPE test data for use case F&F and F&E Case Method Avg F&F Zero-Shot (E&E) 1.48 SUC 16.12 F&E Zero-Shot (E&E) 9.03 SUC 34.20 E&F Zero-Shot (E&E) 1.97 SUC 23.40 Table 5: Average results of Zero-Shot (E&E) on test data of F&F, F&E and E&F in 20 languages.",
"data.",
"Although there are some differences on individual languages, the conclusions derived from the evaluations on the MT test data remain the same as those derived from the evaluation on the MTPE test data.",
"We also calculate the Spearman rank correlation coefficient between the average results reported on MTPE test data and MT test data in Table 4, which shows a statistically high correlation between the system performance on the MT test data and MTPE test data 10 .",
"Therefore, we show that the MT test data can be used as a proxy to estimate the model performance on the real test data for more languages.",
"Thus we build MT test data for another 17 languages that are supported by Google Translate, Trip Advisor and Booking.com at the same time, as stated in Table 8 and Table 9 in the appendix.",
"Table 5 shows the results of Zero-Shot (E&E) and SUC on the test data of F&F, F&E and E&F in 20 languages.",
"The results show that the model has the best performance in the F&E use case compared with the other two use cases, which is consistent with our findings in Table",
"1. 7 Related Work Over the last few years, the success of ToD systems is largely driven by the joint advent of neural network models (Eric et al., 2017; Wu et al., 2019; Lin et al., 2020) and collections of large-10 Table 17 in the appendix shows detailed scores.",
"scale annotation corpora.",
"These corpora cover a wide range of topics from a single domain (e.g., ATIS (Hemphill et al., 1990), DSTC 2 (Henderson et al., 2014), Frames (El Asri et al., 2017), KVRET (Eric et al., 2017), WoZ 2.0 (Wen et al., 2017), M2M (Schatzmann et al., 2007)) to multiple domains (e.g., MultiWoZ (Budzianowski et al., 2018), SGD (Rastogi et al., 2020)).",
"Most notably among these collections, MultiWoZ is a large-scale multi-domain dataset that focuses on transitions between different domains or scenarios in real conversations (Budzianowski et al., 2018).",
"Due to the high cost of collecting task-oriented dialogues, only a few monolingual or bilingual non-English ToD datasets are available (Zhu et al., 2020; Quan et al., 2020; Lin et al., 2021).",
"While there is an increasing interest in data curation for multilingual ToD systems, a vast majority of existing multilingual ToD datasets do not consider the real use cases when using a ToD system to search for local entities in a country.",
"We fill this gap in this paper to provide the first analysis on three previously unexplored use cases.",
"In this paper, we provide an analysis on three unexplored use cases for multilingual task-oriented dialogue systems.",
"We propose a new data curation method that leverages a machine translation system and local entities in target languages to create a new multilingual TOD dataset, GlobalWoZ.",
"We propose a series of strong baseline methods and conduct extensive experiments on GlobalWoZ to encourage research for multilingual ToD systems.",
"Besides, we extend the coverage of languages on multilingual ToD to 20 languages, marking the one step further towards building a globalized multilingual ToD system for all of the world's citizen.",
"In this section, we would like to address the ethical concerns.",
"All the professional translators in this project have been properly compensated.",
"For Chinese and Spanish, we have followed the standard procurement requirements and engaged three translation companies for quality and price comparison.",
"A small sample of the data had been given to them for MTPE and we then compared their translation results.",
"Following that, we selected the company that produced the best sample translation, and submitted the full translation orders according to the agreed price quotations.",
"For Indonesian, three translation companies were also requested to provide sample MTPE, but our quality check found the quality of these samples to be unsatisfactory.",
"So, no company was engaged, and our in-house Indonesian linguistic resources were used instead.",
"These Indonesian linguists were assigned to work on this project during normal working hours and given proper compensation complying with the local labor laws.",
"This research is partly supported by the Alibaba-NTU Singapore Joint Research Institute, Nanyang Technological University.",
"All the costs for machine translation post-editing are funded by DAMO Academy, Alibaba Group.",
"We would like to thank the help from our Alibaba colleagues, Haiyun Peng, Zifan Xu and Ruidan He, and our NTU-NLP team member, Chengwei Qin in this work as well."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"method",
"objective",
"objective",
"objective",
"objective",
"result",
"objective",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"We explore two solutions to the problem of mistranslating rare words in neural machine translation.",
"First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a con-stant value.",
"Second, we integrate a simple lexical module which is jointly trained with the rest of the model.",
"We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to + 4.3 BLEU, surpassing phrase-based translation in nearly all settings.",
"1 1 Introduction Neural network approaches to machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015a; Gehring et al., 2017) are appealing for their single-model, end-to-end training process, and have demonstrated competitive performance compared to earlier statistical approaches (Koehn et al., 2007; Junczys-Dowmunt et al., 2016).",
"However, there are still many open problems in NMT (Koehn and Knowles, 2017).",
"One particular issue is mistranslation of rare words.",
"For example, consider the Uzbek sentence: Source: Ammo muammolar hali ko'p, deydi amerikalik olim Entoni Fauchi.",
"Reference: But still there are many problems, says American scientist Anthony Fauci.",
"Baseline NMT: But there is still a lot of problems, says James Chan.",
"At the position where the output should be Fauci , the NMT model's top three candidates are Chan , 1 The code for this work can be found at https://github.com/tnq177/improving_lexical_choice_in_nmt Fauci , and Jenner .",
"All three surnames occur in the training data with reference to immunologists: Fauci is the director of the National Institute of Allergy and Infectious Diseases, Margaret (not James) Chan is the former director of the World Health Organization, and Edward Jenner invented smallpox vaccine.",
"But Chan is more frequent in the training data than Fauci , and James is more frequent than either Anthony or Margaret .",
"Because NMT learns word representations in continuous space, it tends to translate words that seem natural in the context, but do not reflect the content of the source sentence (Arthur et al., 2016).",
"This coincides with other observations that NMT's translations are often fluent but lack accuracy (Wang et al., 2017b; Wu et al., 2016).",
"(cid:16) (cid:17) where W e and b e are a vector and a scalar depending only on e , and h is a vector depending only on the source sentence and previous output words.",
"We propose two modifications to this layer.",
"First, we argue that the term W e h , which measures how well e fits into the context h , favors common words disproportionately, and show that it helps to fix the norm of both vectors to a constant.",
"Second, we add a new term representing a more direct connection from the source sentence, which allows the model to better memorize translations of rare words.",
"Below, we describe our models in more detail.",
"Then we evaluate our approaches on eight language pairs, with training data sizes ranging from 100k words to 8M words, and show improvements of up to + 4.3 BLEU, surpassing phrase-based translation in nearly all settings.",
"Finally, we provide some analysis to better understand why our modifications work well.",
"Given a source sequence f = f 1 f 2 f m , the goal of NMT is to find the target sequence e = e 1 e 2 e n that maximizes the objective function:",
"X",
"We use the global attentional model with general scoring function and input feeding by Luong et al. (2015a).",
"We provide only a very brief overview of this model here.",
"It has an encoder, an attention, and a decoder.",
"The encoder converts the words of the source sentence into word embeddings , then into a sequence of hidden states .",
"The decoder generates the target sentence word by word with the help of the attention.",
"At each time step t , the attention calculates a set of attention weights a t ( s ).",
"These attention weights are used to form a weighted average of the encoder hidden states to form a context vector c t .",
"From c t and the hidden state of the decoder are computed the attentional hidden state h t .",
"Finally, the predicted probability distribution of the t 'th target word is: p ( e t | e < t , f ) = softmax( W o h t + b o ) .",
"1",
"The rows of the output layer's weight matrix W o can be thought of as embeddings of the output vocabulary, and sometimes are in fact tied to the embeddings in the input layer, reducing model size while often achieving similar performance (Inan et al., 2017; Press and Wolf, 2017).",
"We verified this claim on some language pairs and found out that this approach usually performs better than without tying, as seen in Table 1. For this reason, we always tie the target embeddings and W o in all of our models.",
"The output word distribution (1) can be written as:",
"(cid:16) (cid:17) where W e is the embedding of e , b e is the e 'th component of the bias b o , and W e , h is the angle between W e and h .",
"We can intuitively interpret the terms as follows.",
"The term k h k has the e ect of sharpening or flattening the distribution, reflect-ing whether the model is more or less certain in a particular context.",
"The cosine similarity cos W e , h measures how well e fits into the context.",
"The bias b e controls how much the word e is generated; it is analogous to the language model in a log-linear translation model (Och and Ney, 2002).",
"Finally, k W e k also controls how much e is generated.",
"Figure 1 shows that it generally correlates with frequency.",
"But because it is multiplied by cos W e , h , it has a stronger e ect on words whose embeddings have direction similar to h , and less e ect or even a negative e ect on words in other directions.",
"We hypothesize that the result is that the model learns k W e k that are disproportionately large.",
"Observe that cos W e , h and even b e both favor the correct output word Fauci , whereas k W e k favors the more frequent, but incorrect, word Chan .",
"The most frequently-mentioned immunologist trumps other immunologists.",
"To solve this issue, we propose to fix the norm of all target word embeddings to some value r .",
"Followingthe weight normalization approach of Salimans and Kingma (2016), we reparameterize W e as r v e k v e k , but keep r fixed.",
"A similar argument could be made for k h t k : because a large k h t k sharpens the distribution, causing frequent words to more strongly dominate rare words, we might want to limit it as well.",
"We compared both approaches on a development set and found that replacing h t in equation (1) with r h t k h t k indeed performs better, as shown in Table 1. 335 2 4 6 8 k W e k ha-entu-en hu-en 0 5 , 000 10 , 000 15 , 000 2 1 0 frequency rank of e b e ha-entu-en hu-en Figure 1: The word embedding norm k W e k generally correlates with the frequency of e , except for the most frequent words.",
"The attentional hidden state h contains information not only about the source word(s) corresponding to the current target word, but also the contexts of those source words and the preceding context of the target word.",
"This could make the model prone to generate a target word that fits the context but doesn't necessarily correspond to the source word(s).",
"Count-based statistical models, by contrast, don't have this problem, because they simply don't model any of this context.",
"Arthur et al. (2016) try to alleviate this issue by integrating a count-based lexicon into an NMT system.",
"However, this lexicon must be trained separately using GIZA ++ (Och and Ney, 2003), and its parameters form a large, sparse array, which can be di cult to store in GPU memory.",
"We propose instead to use a simple feedforward neural network (FFNN) that is trained jointly with the rest of the NMT model to generate a target word based directly on the source word(s).",
"Let f s ( s = 1 , . . . , m ) be the embeddings of the source words.",
"We use the attention weights to form a tokens vocab layers 10 6 10 3 num / size ta-en 0.2 / 0.1 4.0 / 3.4 1 / 512 ur-en 0.2 / 0.2 4.2 / 4.2 1 / 512 ha-en 0.8 / 0.8 10.6 / 10.4 2 / 512 tu-en 0.8 / 1.1 21.1 / 13.3 2 / 512 uz-en 1.5 / 1.9 29.8 / 17.4 2 / 512 hu-en 2.0 / 2.3 27.3 / 15.7 2 / 512 en-vi 2.1 / 2.6 17.0 / 7.7 2 / 512 en-ja (BTEC) 3.6 / 5.0 17.8 / 21.8 4 / 768 en-ja (KFTT) 7.8 / 8.0 48.2 / 49.1 4 / 768 Table 2: Statistics of data and models: e ective number of training source / target tokens, source / target vocabulary sizes, number of hidden layers and number of units per layer.",
"weighted average of the embeddings (not the hidden states, as in the main model) to give an average source-word embedding at each decoding time step t : f t = tanh X s a t ( s ) f s . Then we use a one-hidden-layer FFNN with skip connections (He et al., 2016): h t = tanh( W f t ) + f t and combine its output with the decoder output to get the predictive distribution over output words at time step t : p ( y t | y < t , x ) = softmax( W o h t + b o + W h t + b ) . For the same reasons that were given in Section 3 for normalizing h t and the rows of W ot , we normalize h t and the rows of W as well. Note, however, that we do not tie the rows of W with the word embeddings; in preliminary experiments, we found this to yield worse results. 5 Experiments We conducted experiments testing our normalization approach and our lexical model on eight language pairs using training data sets of various sizes. This section describes the systems tested and our results. 5.1 Data We evaluated our approaches on various language pairs and datasets: 336 Tamil (ta), Urdu (ur), Hausa (ha), Turkish (tu), and Hungarian (hu) to English (en), using data from the LORELEI program. English to Vietnamese (vi), using data from the IWSLT 2015 shared task. 2 To compare our approach with that of Arthur et al. (2016), we also ran on their English to Japanese (ja) KFTT and BTEC datasets. 3 We tokenized the LORELEI datasets using the default Moses tokenizer, except for Urdu-English, where the Urdu side happened to be tokenized using Morfessor FlatCat ( w = 0 . 5). We used the preprocessed English-Vietnamese and English-Japanese datasets as distributed by Luong et al., and Arthur et al., respectively. Statistics about our data sets are shown in Table 2. 5.2 Systems We compared our approaches against two baseline NMT systems: untied , which does not tie the rows of W o to the target word embeddings, and tied , which does. In addition, we compared against two other baseline systems: Moses : The Moses phrase-based translation system (Koehn et al., 2007), trained on the same data as the NMT systems, with the same maximum sentence length of 50. No additional data was used for training the language model. Unlike the NMT systems, Moses used the full vocabulary from the training data; unknown words were copied to the target sentence. Arthur: Our reimplementation of the discrete lexicon approach of Arthur et al. (2016). We only tried their auto lexicon, using GIZA ++ (Och and Ney, 2003), integrated using their bias approach. Note that we also tied embedding as we found it also helped in this case. Against these baselines, we compared our new systems: fixnorm : The normalization approach described in Section 3. fixnorm + lex : The same, with the addition of the lexical translation module from Section 4. 2 https://nlp.stanford.edu/projects/nmt/ 3 http://isw3.naist.jp/~philip-a/emnlp2016/ 5.3 Details Model For all NMT systems, we fed the source sentences to the encoder in reverse order during both training and testing, following Luong et al. (2015a). Information about the number and size of hidden layers is shown in Table 2. The word embedding size is always equal to the hidden layer size. Following common practice, we only trained on sentences of 50 tokens or less. We limited the vocabulary to word types that appear no less than 5 times in the training data and map the rest to UNK . 
For the English-Japanese and English-Vietnamese datasets, we used the vocabulary sizes reported in their respective papers (Arthur et al., 2016; Luong and Manning, 2015). For fixnorm, we tried r ∈ {3, 5, 7} and selected the best value based on development set performance, which was r = 5 except for English-Japanese (BTEC), where r = 7. For fixnorm+lex, because W_e · h_t + W_e^ℓ · h_t^ℓ takes on values in [−2r², 2r²], we reduced our candidate r values by roughly a factor of √2, to r ∈ {2, 3.5, 5}; a radius r = 3.5 seemed to work best for all language pairs.
Training: we trained all NMT systems with Adadelta (Zeiler, 2012). All parameters were initialized uniformly from [−0.01, 0.01]. When a gradient's norm exceeded 5, we normalized it to 5. We also used dropout on non-recurrent connections only (Zaremba et al., 2014), with probability 0.2. We used minibatches of size 32 and trained for 50 epochs, validating on the development set after every epoch, except on English-Japanese, where we validated twice per epoch. We kept the best checkpoint according to its BLEU on the development set.
Inference: we used beam search with a beam size of 12 for translating both the development and test sets. Since NMT often favors short translations (Cho et al., 2014), we followed Wu et al. (2016) in using a modified score s(e | f) in place of log-probability: s(e | f) = log p(e | f) / lp(e), where lp(e) = (5 + |e|)^α / (5 + 1)^α. We set α = 0.8 for all of our experiments. Finally, we applied a postprocessing step to replace each UNK in the target translation with the source word with the highest attention score (Luong et al., 2015b).
Evaluation: for translation into English, we report case-sensitive NIST BLEU against detokenized references. For English-Japanese and English-Vietnamese, we report tokenized, case-sensitive BLEU, following Arthur et al. (2016) and Luong and Manning (2015). We measure statistical significance using bootstrap resampling (Koehn, 2004).
6 Results and Analysis. 6.1 Overall: our results are shown in Table 3. First, we observe, as has often been noted in the literature, that NMT tends to perform worse than PBMT in low-resource settings (the rows of the table are sorted by training data size). Our fixnorm system alone shows large improvements (shown in parentheses) relative to tied, and integrating the lexical module (fixnorm+lex) adds further gains. Our fixnorm+lex models surpass Moses on all tasks except Urdu- and Hausa-English, where they fall 1.6 and 0.7 BLEU short, respectively. The method of Arthur et al. (2016) does improve over the baseline NMT on most language pairs, but not by as much or as consistently as our models, and often not as well as Moses. Unfortunately, we could not replicate their approach for English-Japanese (KFTT) because the lexical table was too large to fit into the computational graph. For English-Japanese (BTEC), we note that, due to the small size of the test set, all systems except for Moses are in fact not significantly different from tied (p > 0.01); on all other tasks, however, our systems significantly improve over tied (p < 0.01).
6.2 Impact on translation: in Table 4, we show examples of typical translation mistakes made by the baseline NMT systems. In the Uzbek example (top), untied and tied have confused 34 with UNK and 700, while in the Turkish one (middle), they incorrectly output other proper names, Afghan and Myanmar, for the proper name Kenya.
Our systems, on the other hand, translate these words correctly. The bottom example is the one introduced in Section 1. We can see that our fixnorm approach does not completely solve the mistranslation issue, since it translates Entoni Fauchi to UNK UNK (which is arguably better than James Chan); fixnorm+lex, on the other hand, gets this right. To better understand how the lexical module helps in this case, we look at the top five translations for the word Fauci in fixnorm+lex, listing for each candidate e the values of cos θ(W_e, h), cos θ(W_e^ℓ, h^ℓ), b_e + b_e^ℓ, and the logit: Fauci 0.522, 0.762, −8.71, 7.0; UNK 0.566, −0.009, −1.25, 5.6; Anthony 0.263, 0.644, −8.70, 2.4; Ahmedova 0.555, 0.173, −8.66, 0.3; Chan 0.546, 0.150, −8.73, −0.2. As we can see, while cos θ(W_e, h) might still be confused between similar words, cos θ(W_e^ℓ, h^ℓ) significantly favors Fauci.
6.3 Alignment and unknown words: both our baseline NMT and fixnorm models suffer from the problem of shifted alignments noted by Koehn and Knowles (2017). As seen in Figures 2a and 2b, the alignments for those two systems seem to shift by one word to the left (on the source side); for example, ni should be aligned to said instead of Telekom, and so on. Although this is not a problem per se, since the decoder can decide to attend to any position in the encoder states as long as the state at that position holds the information the decoder needs, it becomes a real issue when we need to make use of the alignment information, as in unknown word replacement (Luong et al., 2015b). As we can see in Figure 2, because of the alignment shift, both tied and fixnorm incorrectly replace the two unknown words (in bold) with But Deutsche instead of Deutsche Telekom. In contrast, under fixnorm+lex and the model of Arthur et al. (2016), the alignment is corrected, causing the UNKs to be replaced with the correct source words.
6.4 Impact of r: the single most important hyperparameter in our models is r. Informally speaking, r controls how much surface area we have on the hypersphere to allocate to word embeddings. To better understand its impact, we look at the training perplexity and dev BLEU during training with different values of r. Table 6 shows the training perplexity and best tokenized dev BLEU on Turkish-English for fixnorm and fixnorm+lex with different values of r.
[Table 3: Test BLEU of all models; differences in parentheses are relative to tied, with a dagger marking differences that are not significant (p > 0.01). Columns are untied / tied / fixnorm / fixnorm+lex / Moses / Arthur. ta-en: 10.3 / 11.1 / 14.0 (+2.9) / 15.3 (+4.2) / 10.5 (−0.6) / 14.1 (+3.0); ur-en: 7.9 / 10.7 / 12.0 (+1.3) / 13.0 (+2.3) / 14.6 (+3.9) / 12.5 (+1.8); ha-en: 16.0 / 16.6 / 20.0 (+3.4) / 21.5 (+4.9) / 22.2 (+5.6) / 18.7 (+2.1); tu-en: 12.2 / 12.6 / 16.4 (+3.8) / 19.1 (+6.5) / 18.1 (+5.5) / 16.3 (+3.7); uz-en: 14.9 / 15.7 / 18.2 (+2.5) / 19.3 (+3.6) / 17.2 (+1.5) / 17.1 (+1.4); hu-en: 21.6 / 23.0 / 24.0 (+1.0) / 25.3 (+2.3) / 21.3 (−1.7) / 22.7 (−0.3); en-vi: 25.1 / 25.3 / 26.8 (+1.5) / 27.0 (+1.7) / 26.7 (+1.4) / 26.2 (+0.9); en-ja (BTEC): 51.2 / 53.7 / 52.9 (−0.8) / 51.3 (−2.6) / 46.8 (−6.9) / 52.4 (−1.3); en-ja (KFTT): 24.1 / 24.5 / 26.1 (+1.6) / 26.2 (+1.7) / 21.7 (−2.8) / not run. While the method of Arthur et al. (2016) does not always help, fixnorm and fixnorm+lex consistently achieve significant improvements over tied (p < 0.01) except for English-Japanese (BTEC); our models also outperform the method of Arthur et al. on all tasks and outperform Moses on all tasks but Urdu-English and Hausa-English.]
[Table 4: Example translations. Uzbek input: Dushanba kuni Hindistonda kamida 34 kishi halok bo'lgani xabar qilindi. Reference: At least 34 more deaths were reported Monday in India. untied: At least UNK people have died in India on Monday. tied: It was reported that at least 700 people died in Monday. fixnorm: At least 34 people died in India on Monday. fixnorm+lex: At least 34 people have died in India on Monday. Turkish input: Yarın Kenya'da bir yardım konferansı düzenlenecek. Reference: Tomorrow a conference for aid will be conducted in Kenya. untied: Tomorrow there will be an Afghan relief conference. tied: Tomorrow there will be a relief conference in Myanmar. fixnorm: Tomorrow it will be a aid conference in Kenya. fixnorm+lex: Tomorrow there will be a relief conference in Kenya. Uzbek input: Ammo muammolar hali ko'p, deydi amerikalik olim Entoni Fauchi.",
"worse training perplexity, indicating underfitting, whereas if r is too large, the model achieves better training perplexity but decrased dev BLEU, indicating overfitting.",
"One byproduct of lex is the lexicon, which we can extract and examine simply by feeding each source word embedding to the FFNN module and calculating p ( y ) = softmax( W h + b ). In Table 5, we show the top translations for some entries in the lexicons extracted from fixnorm + lex for Hungarian, Turkish, and Hausa-English. As expected, the lexical distribution is sparse, with a few top translations accounting for the most probability mass.",
"Byte-Pair-Encoding (BPE) (Sennrich et al., 2016) is commonly used in NMT to break words into word-pieces, improving the translation of rare words. For this reason, we reran our experiments using BPE on the LORELEI and English-Vietnamese datasets. Additionally, to see if our proposed methods work in high-resource scenarios, we run on the WMT 2014 English-German (en-de) dataset, 4 using newstest2013 as the development set and reporting tokenized, case-sensitive BLEU on newstest2014 and newstest2015 .",
"We validate across di erent numbers of BPE operations; specifically, we try {1k, 2k, 3k} merge operations for ta-en and ur-en due to their small sizes, {10k, 12k, 15k} for the other LORELEI datasets and en-vi, and 32k for en-de. Using BPE results in much smaller vocabulary sizes, so we do not apply a vocabulary cut-o . Instead, we train on",
"4 https://nlp.stanford.edu/projects/nmt/",
"an additional copy of the training data in which all types that appear once are replaced with UNK , and halve the number of epochs accordingly. Our models, training, and evaluation processes are largely the same, except that for en-de, we use a 4-layer decoder and 4-layer bidirectional encoder (2 layers for each direction).",
"Table 7 shows that our proposed methods also significantly improve the translation when used with BPE, for both high and low resource language pairs. With BPE, we are only behind Moses on Urdu-English.",
"The closest work to our lex model is that of Arthur et al. (2016), which we have discussed already in Section 4. Recent work by Liu et al. (2016) has very similar motivation to that of our fixnorm model. They reformulate the output layer in terms of directions and magnitudes, as we do here. Whereas we have focused on the magnitudes, they focus on the directions, modifying the loss function to try to learn a classifier that separates the classes' directions with something like a margin.",
"Wang et al. (2017a) also make the same observation that we do for the fixnorm model, but for the task of face verification.",
"Handling rare words is an important problem for NMT that has been approached in various ways.",
"Some have focused on reducing the number of UNK s by enabling NMT to learn from a larger vocabulary (Jean et al., 2015; Mi et al., 2016); others have focused on replacing UNK s by copying source words (Gulcehre et al., 2016; Gu et al., 2016; Luong et al., 2015b).",
"However, these methods only help with unknown words, not rare words.",
"An approach that addresses both unknown and rare words is to use subword-level information (Sennrich et al., 2016; Chung et al., 2016; Luong and Manning, 2016).",
"Our approach is different in that we try to identify and address the root of the rare word problem.",
"We expect that our models would benefit from more advanced UNK replacement or subword-level techniques as well.",
"Recently, Liu and Kirchho (2018) have shown that their baseline NMT system with BPE already outperforms Moses for low-resource translation.",
"However, in their work, they use the Transformer network (Vaswani et al., 2017), which is quite different from our baseline model.",
"It would be interesting to see if our methods benefit the Trans-341 tied fixnorm fixnorm + lex ta-en 13 15 ( + 2.0) 15.9 ( + 2.9) ur-en 10.5 12.3 ( + 1.8) 13.7 ( + 3.2) ha-en 18 21.7 ( + 3.7) 22.3 ( + 4.3) tu-en 19.3 21 ( + 1.7) 22.2 ( + 2.9) uz-en 18.9 19.8 ( + 0.9) 21 ( + 2.1) hu-en 25.8 27.2 ( + 1.4) 27.9 ( + 2.1) en-vi 26.3 27.3 ( + 1.0) 27.5 ( + 1.2) en-de (newstest2014) 19.7 22.2 ( + 2.5) 20.4 ( + 0.7) en-de (newstest2015) 22.5 25 ( + 2.5) 23.2 ( + 0.7) Table 7: Test BLEU for all BPE-based systems.",
"In this paper, we have presented two simple yet e ective changes to the output layer of a NMT model.",
"Both of these changes improve translation quality substantially on low-resource language pairs.",
"In many of the language pairs we tested, the baseline NMT system performs poorly relative to phrase-based translation, but our system surpasses it (when both are trained on the same data).",
"We conclude that NMT, equipped with the methods demonstrated here, is a more viable choice for low-resource translation than before, and are optimistic that NMT's repertoire will continue to grow.",
"This research was supported in part by University of Southern California subcontract 67108176 under DARPA contract HR0011-15-C-0115.",
"Nguyen was supported in part by a fellowship from the Vietnam Education Foundation.",
"We would like to express our great appreciation to Sharon Hu for letting us use her group's GPU cluster (supported by NSF award 1629914), and to NVIDIA corporation for the donation of a Titan X GPU.",
"We also thank Tomer Levinboim for insightful discussions."
] | [
"objective",
"objective",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"method",
"other",
"method",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Understanding the functional (dis)-similarity of source code is significant for code modeling tasks such as software vulnerability and code clone detection.",
"We present DISCO ( DISsimilarity of COde ), a novel self-supervised model focusing on identifying (dis)similar functionalities of source code.",
"Different from existing works, our approach does not require a huge amount of randomly collected datasets.",
"Rather, we design structure-guided code transformation algorithms to generate synthetic code clones and inject real-world security bugs, augmenting the collected datasets in a targeted way.",
"We propose to pre-train the Transformer model with such automatically generated program contrasts to better identify similar code in the wild and differentiate vulnerable programs from benign ones.",
"To better capture the structural features of source code, we propose a new cloze objective to encode the local tree-based context ( e.g., parents or sibling nodes).",
"We pre-train our model with a much smaller dataset, the size of which is only 5% of the state-of-the-art models' training datasets, to illustrate the effectiveness of our data augmentation and the pre-training approach.",
"The evaluation shows that, even with much less data, DISCO can still outperform the state-of-the-art models in vulnerability and code clone detection tasks.",
"Understanding the functional similar-ity/dissimilarity of source code is at the core of several code modeling tasks such as software vulnerability and code clone detection, which are important for software maintenance (Kim et al., 2017; Li et al., 2016).",
"Existing pre-trained Transformer models (Guo et al., 2021; Feng et al., 2020; Ahmad et al., 2021) show promises for understanding code syntax ( i.e., tokens and structures).",
"However, they still get confused when trying to identify functional (dis)-similarities.",
"For instance, syntax-based models can embed two code fragments with identical functionality but very different tokens and structures as distinct vectors and fail to identify them as semantically similar.",
"Likewise, these models cannot distinguish between two code fragments that differ in functionalities but share a close syntactic resemblance.",
"For example, consider an if statement if(len(buf) < N) checking buffer length before accessing the buffer.",
"Keeping the rest of the program the same, if we simply replace the token < ' with ,' the modification can potentially trigger security vulnerability, e.g., buffer overflow bug 1 .",
"It is challenging for existing pre-training techniques to tell apart such subtle differences in the functionalities.",
"In addition, existing pre-training techniques rely on a huge volume of training corpus that is randomly selected.",
"For fine-tuning tasks like code clone detection or vulnerability detection, such random selection of training data is never tailored to teach the model about code functionalities.",
"To address these limitations, we present DISCO, a self-supervised pre-trained model that jointly learns the general representations of source code and specific functional features for identifying source code similarity/dis-similarity.",
"Similar to state-of-the-art pre-trained Transformer models (Devlin et al., 2019; Liu et al., 2019), we apply the standard masked language model (MLM) to capture the token features of source code.",
"To learn about the structural code properties, we propose a new auxiliary pre-training task that consumes additional inputs of local tree-based contexts ( e.g., parent or sibling nodes in abstract syntax trees) and embeds such structural context, together with the token-based contexts, into each token representation.",
"On top of such well-learned general code representations, we further incorporate prior knowledge of code clones and vulnerable programs into the pre-training to help the model learn the functional (dis)-similarity.",
"We design structure-guided 1 https://en.wikipedia.org/wiki/Buffer_overflow 6300 code transformation heuristics to automatically augment each training sample with one synthetic code clone ( i.e., positive samples) that is structurally different yet functionally identical and one vulnerable contrast ( i.e., hard negative samples) that is syntactically similar but injected with security bugs.",
"During the pre-training, DISCO learns to bring similar programs closer in the vector space and differentiate the benign code from its vulnerable contrast, using a contrastive learning objective.",
"Since we augment the dataset in a more targeted way than existing works and the model explicitly learns to reason about a code w.r.t. its functional equivalent and different counterparts during pre-training, DISCO can learn sufficient knowledge for downstream applications from a limited amount of data, consequently saving computing resources.",
"In particular, we evaluate DISCO for clone detection and vulnerability detection, as the knowledge of similar/dissimilar code fragments is at the core of these tasks.",
"To this end, we pre-train DISCO on a small dataset, with only 865 MB of C code and 992 MB Java code from 100 most popular GitHub repositories, and evaluate the model on four different datasets for vulnerability and code clone detection.",
"Experiments show that our small models outperform baselines that are pre-trained on 20 larger datasets.",
"The ablation study (5.4) also reveals that pre-training our model with 10 larger datasets further improves the performance up to 8.2%, outperforming state-of-the-art models by 1% for identifying code clones and up to 9.6% for vulnerability detection, even if our dataset is still smaller.",
"In summary, our contributions are: 1) We design structure-guided code transformation heuristics to automatically augment training data to integrate prior knowledge of vulnerability and clone detection without human labels.",
"2) We propose a new pre-training task to embed structural context to each token embedding.",
"3) We develop DISCO, a self-supervised pre-training technique that jointly and efficiently learns the textual, structural, and functional properties of code.",
"Even though pre-trained with significantly less data, DISCO matches or outperforms the state-of-the-art models on code clone and vulnerability detection.",
"models (Vaswani et al., 2017) for source code with two categories: encoder-only and encoder-decoder (Ahmad et al., 2021; Wang et al., 2021; Rozire et al., 2021; Phan et al., 2021).",
"Our work focuses on pre-training encoder-only Transformer models to understand code.",
"Existing models are pre-trained with different token level objectives, such as masked language model (MLM) (Kanade et al., 2020; Buratti et al., 2020), next sentence prediction (NSP) (Kanade et al., 2020), replaced token detection, and bi-modal learning between source code and natural languages (Feng et al., 2020).",
"However, these approaches ignore the underlying structural information to fully understand the syntax and semantics of programming languages.",
"Recently, more works aimed to understand the strict-defined structure of source code leveraging abstract syntax tree (AST) (Zgner et al., 2021; Jiang et al., 2021), control/data flow graphs (CFG/DFG) (Guo et al., 2021).",
"DISCO leverages code structures differently from existing works in two ways: (a.) with AST/CFG/DFG, we automatically generate program contrasts to augment the datasets targeting specific downstream tasks.",
"(b.)",
"DISCO takes an additional input of local AST context, and we propose a new cloze task to embed local structural information into each token representation.",
"Self-supervised Contrastive Learning.",
"Self-supervised contrastive learning, originally proposed for computer vision (Chen et al., 2020), has gained much interest in language processing (Giorgi et al., 2021; Wu et al., 2020; Gao et al., 2021).",
"The common practice of self-supervised contrastive learning is building similar counterparts, without human interference, for the original samples and forcing the model to recognize such similarity from a batch of randomly selected samples.",
"Corder (Bui et al., 2021) leverages contrastive learning to understand the similarity between a program and its functionally equivalent code.",
"While Corder approach will help code similarity detection type of applications, their pre-training does not learn to differentiate syntactically very close, but functionally different programs.",
"Such differentiation is crucial for models to work well for bug detection (Ding et al., 2020).",
"ContraCode (Jain et al., 2020) also leverages contrastive learning.",
"However, they generate negative contrast for a program from unrelated code examples, not from variants of the same code.",
"They also do not encode the structural information into the code as we do.",
"Inspired by the empirical findings that hard negative image and text 6301 samples are beneficial for contrastive learning (Gao et al., 2021; Robinson et al., 2021), DISCO learns both from equivalent code as the positive contrast, and functionally different yet syntactically close code as the hard-negative contrast.",
"We generate hard-negative samples by injecting small but crucial bugs in the original code (3.1).",
"Our pre-training aims to identify similar programs that can be structurally different (positive sample) and differentiate the buggy programs (negative sample) that share structural resemblances with the benign ones.",
"Thus, we need a labeled positive and a negative example for each original sample.",
"Manually collecting them is expensive, especially at the scale of pre-training.",
"To this end, we design code transformation heuristics to automatically generate such positive and negative samples so that the transformation can be applied to any amount of programs without human efforts.",
"We first represent a code sample as Abstract Syntax Tree (AST), and build a control/data flow graph from the AST.",
"The code transformation heuristics are then applied to this graph.",
"For every original code sample ( x ), we apply semantic preserving transformation heuristics (3.2) to generate a positive sample ( x + ) and a bug injection heuristics (3.1) to generate a hard-negative code example ( x ).",
"We design the heuristics in a way that makes x + be the functional equivalent or semantic clone of x and x be the buggy/noisy version of x .",
"Noted that not all heuristics are applicable to all code samples; we decide on applicable heuristics based on the flow graph of the original code.",
"Figure 1 shows an example of the code transformation.",
"To generate a hard negative sample ( x ) from a given code ( x ), we define six categories of bug injection heuristics.",
"Here our goal is to maintain maximum token-level similarity to the original code, so that the model can learn to analyze source code beyond token-level similarity.",
"These heuristics are inspired by the buggy code patterns from a wide range of Common Weakness Enumeration (CWE) types (Appendix A.1).",
"While it is challenging to guarantee that x will exhibit vulnerability or security bug, our heuristics will force x to exhibit different functionality than x .",
"Compared with a concurrent work from Allamanis et al. (2021), our methods are significantly different.",
"First, we focus on concrete types of security bugs that have been identified by the security experts, while they mainly target regular bugs.",
"Second, our scope is not only bug detection but clone detection as well, and we apply contrastive learning to differentiate the code functionalities of code clones and vulnerabilities.",
"Misuse of Data Type.",
"Usage of the wrong data type can trigger several security flaws.",
"For instance, using a smaller data type ( e.g., short ) to replace a larger one ( e.g., long ) may result in an overflow bug ( e.g., CVE-2021-38094 (2021)).",
"Such errors are complicated to track since they are usually exhibited in input extremities ( i.e., very large or very small values).",
"For languages allowing implicit typecasting, such an incorrect type may even cause imprecision, resulting in the unpredictable behavior of the code.",
"We intentionally change the data types in x to inject potential bugs, while ensuring the code can still be compiled ( e.g., we will not replace int with char ).",
"Misuse of Pointer.",
"Incorrect pointer usage is a ma-jor security concern.",
"Accessing uninitialized pointers may lead to unpredictable behavior.",
"ANULL pointer or freed pointer could lead to Null Pointer Dereferencing vulnerability ( e.g., CVE-2021-3449 (2021)).",
"To inject such bugs, we randomly remove the initialization expression during pointer declaration, or set some pointers to NULL .",
"Change of Conditional Statements.",
"Programmers usually check necessary preconditions using if-statement before doing any safety-critical operation.",
"For instance, before accessing an array with an index, a programmer may add a condition checking the validity of the index.",
"Lack of such checks can lead to buffer-overflow bugs in code ( e.g., CVE-2020-24020 (2020)).",
"We introduce bugs in the code by removing such small if-statement s.",
"In addition, we also inject bugs by modifying randomly selected arithmetic conditions replace the comparison operator ( < , > , , , == , ! = ) with another operator, to inject potential out-of-bound access, forcing the program to deviate from its original behavior.",
"Misuse of Variables.",
"When there are multiple variables present in a code scope, incorrect use of variables may lead to erroneous behavior of the program.",
"Such errors are known as VARMISUSE bug (Allamanis et al., 2018).",
"We induce code with such bugs by replacing one variable with another.",
"To keep the resultant code compilable, we perform scope analysis on the AST and replace a variable with another variable reachable in the same scope.",
"Misuse of Values.",
"Uninitialized variables or variables with wrong values may alter the program behaviors and consequently cause security flaws ( e.g., CVE-2019-12730 (2019)).",
"We modify the original code by removing the initializer expression of some variables.",
"In addition, to induce the code with divide-by-zero vulnerability, we identify the potential divisor variables from the flow graph and forcefully assign zero values to them immediately before the division.",
"Change of Function Calls.",
"We induce bugs in the code by randomly changing arguments of function calls.",
"For a randomly selected function call, we add, remove, swap, or assign NULL value to arguments, forcing the code to behave unexpectedly.",
"To generate positive samples ( x + ) from a given code, we use three different heuristics.",
"In this case, our goal is to generate functionally equivalent code while inducing maximum textual difference.",
"These heuristics are inspired by code clone literature (Fu-naro et al., 2010; Sheneamer et al., 2018).",
"Variable Renaming.",
"Variable renaming is a typical code cloning strategy and frequently happens during software development (Ain et al., 2019).",
"To generate such a variant of the original code, we either (a.) rename a variable in the code with a random identifier name or (b.) with an abstract name such as VAR_i (Rozire et al., 2021).",
"While choosing random identifier names, we only select available identifiers in the dataset.",
"We ensure that both the definition of the variable and subsequent usage(s) are renamed for any variable renaming.",
"We also ensure that a name is not used to rename more than one variable.",
"we make more tokens different compared with the original code but keep the same syntax and semantics.",
"We do not rename library calls for the code ( e.g., memcpy() in C).",
"Noted that even if tokens like VAR_i and FUNC_i are rare in normal code, the model will not bias towards identifying samples with these tokens as positive samples.",
"The reason is that, as shown in Figure 2, x + , y + and z + all potentially have these abstract tokens, but the model learns to move EMB x closer to EMB x + and further from EMB y + and EMB z + , regardless of the existence of abstract tokens.",
"Statement Permutation.",
"The relative order among the program statements that are independent of each other can be changed without altering the code functionality.",
"More specifically, we focus on the variable declaration or initialization statements.",
"We first conduct the dependency analysis to identify a set of local variables that do not depend on other values for initialization.",
"Then we move their declaration statements to the beginning of the function and permute them.",
"This section presents the model architecture, input representation, and pre-training tasks.",
"DISCO uses a 12-layered Transformer encoder model similar to BERT.",
"We feed the model with both source code text and structure (AST) information (4.1).",
"We pre-train DISCO using three different pre-training tasks (4.2).",
"Figure 2 depicts an example workflow of DISCO.",
"We randomly select tokens in the original sample, mask them and their node types, and then use the embedding of these masks to predict them back.",
"We further extract the sequence embed-dings within a minibatch and contrast them based on the code functionality.",
"Source Code.",
"Given a program ( x ), we apply a lexical analyzer to tokenize it based on the language grammar and flatten the program as a token sequence ( x 1 x 2 ...x m , where x i is i th token in the code).",
"We further train a sentencepiece (Kudo and Richardson, 2018) tokenizer based on such flattened code token sequences with vocabulary size 20,000.",
"We use this tokenizer to divide the source code tokens into subtokens.",
"We prepend the subtoken sequence with a special token [CLS] and append with a special token [SEP] .",
"Finally, DISCO converts the pre-processed code sequence C = { [ CLS ] , c 1 , c 2 , ..., c k , [ SEP ] } to vectors V src = { v src [ CLS ] , v src 1 , v src 2 , ..., v srck , v src [ SEP ] } with a token embedding layer.",
"Local AST Types.",
"For every token in the input code, we extract the node type ( tt ) from the syntax tree.",
"Since such types are all terminal node types ( e.g., keyword, identifier, punctuation), we do not get enough information about the structure only with these types.",
"In order to add more information about the tree, we also extract its parent type ( pt ) for each token.",
"Such parent type provides us with information about the structural context of a token.",
"For instance, when the parent type of an identifier is Function-Declarator , we know that the identifier is a function name.",
"In contrast, when the identifier 's parent is a Binary Expression , it should be a variable.",
"Consequently, we annotate each code subtoken c i with a local AST-type token t = tt # pt .",
"It is worth noting that sub-tokens coming from the same code token will all have the same type.",
"Therefore, we have the AST-type sequence for the code T = { [ CLS ] , t 1 , t 2 , ..., t k , [ SEP ] } , and DISCO converts it as vectors V type = { v type [ CLS ] , v type 1 , v type 2 , ..., v typek , v type [ SEP ] } with a type embedding layer.",
"Appendix Table 7 shows an example of code tokens and their AST types.",
"DISCO generates token representation v i of subtoken c i as a sum of token embedding v srci and type embedding v typei .",
"Thus, V = V src + V type .",
"We aim to pre-train the DISCO to learn the representation of source code based on (a.) token-based context, (b.) AST-based context, and (c.) code functionality.",
"In that spirit, we pre-train DISCO to optimize on three different objectives, i.e., masked language model (MLM), local AST node type-MLM (NT-MLM), and Contrastive Learning (CLR).",
"For a given program x , we first embed the tokens and node-types to vectors V = { v [ CLS ] , v 1 , ..., v [ SEP ] } .",
"We optimize MLM loss ( LMLM ) (4.2.1) and NT-MLM loss ( LNT MLM ) (4.2.2) based on x .",
"These two loss functions learn about the textual and syntactic context of source code.",
"For every code sample x in a minibatch of input, we generate a positive example x + and a hard-negative example x using the heuristics described in Section 3.",
"We optimize CLR loss ( LCLR ) (4.2.3) considering the original code and its positive and hard-negative counterparts.",
"The final loss function to optimize for pre-training DISCO is L ( ) = LMLM ( ) + LNT MLM ( ) + LCLR ( ) 6304 4.2.1 Encoding Token-based Context We apply the standard masked language model to the original code ( x ).",
"Given a source code sequence C , we randomly choose 15% of tokens and replace them with a special token [ MASK ] for 80% of the time and a random token for 10% of the time and leave the rest 10% unchanged.",
"We record the indices of masked token as loc m , replaced token as loc r and unchanged tokens as loc u for node-type MLM.",
"We define the union of these indices as M = loc m loc r loc u .",
"MLM will learn to recover the masked source code { c i | i M } given the Transformer encoder's output h i .",
"We present the loss for MLM as LMLM = (cid:80) i M logP ( c i | h i ) 4.2.2 Encoding AST-based Context Token-based MLM re-builds the token using its surrounding tokens and successfully encodes the contextual information into each token representation.",
"Motivated by MLM, we propose the tree-based context-aware pre-training task, to encode the structural context, such as parent, sibling, and children nodes.",
"As we have shown in Figure 2, we flatten the ASTs as sequences and we expect the flattened trees can preserve the local structure information (i.e., sub-trees containing terminal nodes), and existing work (Chakraborty et al., 2020; Hellen-doorn et al., 2020) has empirically shown such potentials.",
"To this end, we introduce AST node-type masked language model (NT-MLM).",
"Given the corresponding AST-type sequence T of source code C , we mask the AST types { t p | p loc m } with the special token [ MASK ] , and replace the AST types { t q | q loc r } with random tokens.",
"Specifically, by doing this, we make sure that if a source code token is chosen to be masked or replaced, its corresponding AST type will perform the same operation.",
"NT-MLM will learn to recover the masked AST type { t i | i M } given the Transformer encoder's output h i .",
"We present the loss for NT-MLM as LNT MLM = (cid:80) i M logP ( t i | h i ) A recent work, CodeT5 (Wang et al., 2021), proposes to predict token type as well.",
"However, our new objective is different from them in both high-level designs and the detailed implementation.",
"First, their objective only predicts one single token type: identifiers, while our approach predicts all possible AST types.",
"Also, we do not only consider the AST node type of tokens, but also include their AST parents to embed the local sub-tree context (4.1).",
"Second, CodeT5 implements the identifier tagging task as a binary classification (0/1) for each token, while our NT-MLM reconstructs the local ASTs out of hundreds of distinct types.",
"We adopt contrastive learning to focus on the functional characteristics of code.",
"With the structure-guided code transformation algorithms in Section 3, we are able to generate a positive sample ( x + in Figure 2) and a hard negative sample ( x in Figure 2) for each program in the dataset.",
"More specifically, we have a minibatch of N programs, and for each program, we extract the sequence representation from the Transformer outputs h = h [ CLS ] .",
"We will augment every sequence in the minibatch with positive and negative samples, and then the minibatch is extended to N triplets of ( h , h + , h ) .",
"We refer to the contrastive loss with hard negative samples from Gao et al. (2021) and we adapt it to our scope as follows.",
"We use cosine similarity as the sim () function and is the temperature parameter to scale the loss, and we use = 0 .",
"05 .",
"We also consider to pre-train the model with only positive counterparts as a variation.",
"In such a case, the minibatch will contain N pairs of ( h , h + ) and the loss is computed as LCLR = log e sim ( h , h + ) / (cid:80) Nn =1 (cid:16) e sim ( h , h + n ) / (cid:17) 5 Experiments In this section, we will explain our experimental settings and report the results.",
"We evaluate our model on vulnerability and code clone detection.",
"Data.",
"We collect our pre-training corpus from open-source C and Java projects.",
"We rank Github repositories by the number of stars and focus on the most popular ones.",
"After filtering out forks from existing repositories, we collect the dataset for each language from top-100 repositories.",
"We only consider the .java and .c files for Java and C repositories respectively, and we further remove comments and empty lines from these files.",
"The corresponding datasets for Java and C are of size of 992MB and 865MB, respectively.",
"Our datasets are significantly smaller than existing pre-training models (Feng et al., 2020; Ahmad et al., 2021; Guo 6305 et al., 2021).",
"For example, while CodeBERT and GraphCodeBERT are trained on 20GB data, we used an order of magnitude less data.",
"Details of our datasets and the comparison can be found in Appendix Table 5.",
"Models.",
"To study the different design choices, we train four variations of DISCO.",
"(i) MLM+CLR +NT-MLM is trained by all three tasks with hard negative samples.",
"(ii) MLM+CLR .",
"The input of this model only considers the source code sequence and ignores the AST-type sequence.",
"This model helps us understand the impact of NT-MLM.",
"(iii) MLM+CLR + .",
"This variant evaluates the effectiveness of hard negative code samples, by contrasting its performance with MLM+CLR .",
"(iv) MLM .",
"This is the baseline trained with only MLM objective.",
"We provide detailed model configuration in Appendix A.4 to ensure the reproducibility.",
"Baselines.",
"We consider two types of baselines: encoder-only pre-trained Transformers and existing deep-learning tools designed for code clone and vulnerability detection.",
"We do not consider encoder-decoder pre-trained Transformers as baselines, since such generative models always need much more pre-training data and training steps to converge, so it is unfair to compare our model with them.",
"For example, PLBART uses 576G source code for pre-training, while we only use less than 1G.",
"Based on the data size.",
"As future work, we plan to pre-train the model on much larger datasets.",
"VD is the task to identify security bugs: given source code function, the model predicts 0 (benign) or 1 (vulnerable) as binary classification.",
"Dataset and Metrics.",
"We consider two datasets for VD task: REVEAL (Chakraborty et al., 2021) and CodeXGLUE (Lu et al., 2021; Zhou et al., 2019).",
"In the real-world scenario, vulnerable programs are always rare compared to the normal ones, and Chakraborty et al. (2021) have shown such imbalanced ratio brings challenges for deep-learning models to pinpoint the bugs.",
"To imitate the real-world scenario, they collect REVEAL dataset from Chromium (open-source project of Chrome) and Linux Debian Kernel, which keeps the ratio of vulnerable to benign programs to be roughly 1:10.",
"Following Chakraborty et al. (2021), we consider precision, recall and F1 as the metrics.",
"security vulnerabilities.",
"It is less real-world than REVEAL, since it a balanced dataset, but it has been frequently used by existing Transformer-based models to evaluate their tools for VD task.",
"To compare with these baselines, we use CodeXGLUE train/valid/test splits for training and testing.",
"We use accuracy as the metric, following the design of the benchmark.",
"REVEAL.",
"Table 1 shows the results.",
"We compare with four deep-learning-based VD tools.",
"VulDeePecker (Li et al., 2018b) and SySeVR (Li et al., 2018a) apply program slices and sequence-based RNN/CNN to learn the vulnerable patterns.",
"Devign (Zhou et al., 2019) uses graph-based neural networks (GNN) to learn the data dependencies of program.",
"REVEAL (Chakraborty et al., 2021) applies GNN + SMOTE (Chawla et al., 2002) + triplet loss during training to handle the imbalanced distribution.",
"We also consider pre-trained RoBERTa, CodeBERT and GraphCodeBERT, and a 12-Layer Transformer model trained from scratch.",
"*We take this result from Buratti et al. (2020).",
"They did not use CodeXGLUE splits, so the test data can be different with other baselines.",
"In our case, the best DISCO variation with contrastive learning and NT-MLM objective outperforms all the baselines, including the graph-based approaches and models pre-trained with larger datasets.",
"This empirically proves that DISCO can 6306 efficiently understand the code semantics and data dependencies from limited amount of data, helping the identification of the vulnerable patterns.",
"We notice that hard negative samples ( i.e., buggy code contrasts) helps DISCO improve the performance.",
"The reason is that REVEAL contains thousands of (buggy version, fixed version) pairs for the same function.",
"Two functions in such a pair are different by only one or a few tokens.",
"Such real-world challenges align well with our automatically generated buggy code, and pre-training with these examples teaches the model better distinguish the buggy code from the benign ones.",
"We provide an example in Appendix Figure 3 to illustrate this.",
"CodeXGLUE.",
"We consider four pre-trained models: RoBERTa, CodeBERT, GraphCodeBERT and C-BERT.",
"The first three are pre-trained on much larger datasets than ours.",
"However, even trained with small dataset, three variations of DISCO outperforms the baselines.",
"Unlike REVEAL, CodeXGLUE does not have those challenging pairs of functions' buggy and patched version; thus the hard negative contrast in DISCO does not help the model much.",
"Clone detection aims to identify the programs with similar functionality.",
"It also can help detecting security vulnerabilitiesgiven a known vulnerability, we can scan the code base with clone detector and check for similar code snippets.",
"Dataset and Metrics.",
"We consider POJ-104 (Mou et al., 2016) and BigCloneBench (Svajlenko et al., 2014) as the evaluation datasets.",
"We again strictly follow the CodeXGLUE train/dev/test splits for experiments.",
"Following CodeXGLUE's design, we use MAP@R as the metric for POJ-104 and preci-sion/recall/F1 as the metric for BigCloneBench.",
"12-layer Transformer model trained from scratch as baselines.",
"Table 3 shows that, with hard negative contrast and NT-MLM, DISCO outperforms all baselines including CodeBERT, which is pre-trained on much larger datasets.",
"This highlights the significance of learning the code contrasts together with syntactical information to better capture the functional similarities.",
"Interestingly, we notice that DISCO-MLM performs the best among all variations.",
"This indicates that our current positive heuristics might not align with all the clone patterns in this benchmark.",
"As future work, we will propose more code transformation rules to imitate more real-world clone patterns.",
"BigCloneBench.",
"Our best model achieves slightly better precision than the baselines indicating that our designs with contrastive learning and structure information can compensate the loss brought by less data.",
"However, our recall is slightly worse than GraphCodeBERT, since they are pre-trained on large datasets with code graph.",
"We conclude that enlarging our Java pre-training dataset is necessary for code clone detection and we regard this as future work.",
"As shown in Section 5, DISCO trained on a small dataset achieves comparable or even better performance than models pre-trained on large datasets in vulnerability and clone detection (Let's call this version DISCO small ).",
"We further explore the benefits of pre-training using larger data.",
"We pre-train a MEDIUM model, DISCO medium , on our extended datasets with more C-language Github repositories (13G).",
"Note that our medium dataset is still smaller than the large dataset of the baseline models (13G vs. 20G).",
"We evaluate DISCO medium on C-language tasks.",
"The results are shown in Table 4.",
"Increasing the pre-training dataset improves the performance of downstream tasks, outperforming the best baselines' results.",
"In this work, we present DISCO, a self-supervised contrastive learning framework to both learn the general representations of source code and specific characteristics of vulnerability and code clone detections.",
"Our evaluation reveals that DISCO pre-trained with smaller dataset can still outperform the large models' performance and thus prove the effectiveness of our design.",
"We would appreciate the insightful feedback and comments from the anonymous reviewers.",
"This work was partially done when Yangruibo Ding was an intern at IBM Research.",
"This work is also supported in part by NSF grants CCF-2107405, CCF-1845893, IIS-2040961, and IBM.",
"The main goal of DISCO is to generate functionality-aware code embeddings, producing similar representations for code clones and differentiating security bugs from the benign programs.",
"Our data is collected from either the open-source projects, respecting corresponding licences' restrictions, or publicly available benchmarks.",
"Meanwhile, throughout the paper we make sure to summarize the paper's main claims.",
"We also discussed DISCO's limitation and potential future work for clone detection in Section 5.3.",
"We report our model configurations and experiment details in Appendix A.4."
] | [
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"other",
"abstain",
"objective",
"method",
"method",
"result",
"result",
"objective",
"objective",
"objective",
"abstain",
"other",
"method",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"objective",
"abstain",
"other",
"method",
"objective",
"other",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Metaphor is a linguistic device in which a concept is expressed by mentioning another.",
"Identifying metaphorical expressions, therefore, requires a non-compositional understanding of semantics.",
"Multiword Expressions (MWEs), on the other hand, are linguistic phenomena with varying degrees of semantic opacity and their identification poses a challenge to computational models.",
"This work is the first attempt at analysing the interplay of metaphor and MWEs processing through the design of a neural architecture whereby classification of metaphors is enhanced by informing the model of the presence of MWEs.",
"To the best of our knowledge, this is the first MWE-aware metaphor identification system paving the way for further experiments on the complex interactions of these phenomena.",
"The results and analyses show that this proposed architecture reach state-of-the-art on two different established metaphor datasets.",
"Human language is rife with a wide range of techniques that facilitate communication and expand the capacities of thinking and argumentation.",
"One phenomenon of such kind is metaphor.",
"Metaphor is defined as a figure of speech in which the speaker makes an implicit comparison between seemingly unrelated things which nonetheless have certain common characteristics (Shutova, 2010).",
"This is done to convey an idea which is otherwise difficult to express succinctly or simply for rhetorical effect.",
"As an example, in the sentence she devoured his novels , the verb devour is used in a metaphorical sense that implies reading quickly and eagerly.",
"The literal and metaphorical senses share the element of intense desire which in turn helps to decode the meaning of the word in its context.",
"It is clear that a mere literal understanding of semantics would not result in proper understanding of a metaphorical expression and a non-compositional approach would be required (Shutova et al., 2013; Vulchanova et al., 2019).",
"The human brain is equipped with the necessary machinery to decode the intended message behind a metaphorical utterance.",
"This involves mentally linking the seemingly unrelated concepts based on their similarities (Rapp et al., 2004).",
"Verbal MWEs (VMWEs) are another example of non-literal language in which multiple words form a single unit of meaning.",
"These two phenomena share some common ground.",
"Expressions like take the bull by the horns , go places , kick the bucket , or break someone's heart can be categorised as metaphorical VMWEs.",
"Based on this observation we hypothesise that a metaphor classification model can be bolstered by knowledge of VMWEs.",
"In this work we focus on how identification of verbal metaphors can be helped by verbal MWEs.",
"We devise a deep learning model based on attention-guided graph convolutional neural networks (GCNs) that encode syntactic dependencies alongside information about the existence of VMWEs and we test the model on two established metaphor datasets.",
"The tasks of MWE and metaphor identification share some similarities.",
"Many idiomatic MWEs can be considered as lexicalised metaphors.",
"Idioms are where the overlap becomes clear (Ko-rdoni, 2018).",
"It is important to note, however, that not all verbal metaphors are VMWEs.",
"Metaphors that are less conventionalised and appear in creative context (e.g. within a poem or a literary piece) and are not established enough to make it as entries into dictionaries are examples of such cases.",
"However, the distinction between these categories is not always clear, and few precise tests exist for the annotators to tell them apart (Gross, 1982).",
"1 Most state-of-the-art MWE identification models are based on neural architectures (Ramisch et al., 2018; Taslimipoor and Rohanian, 2018) with some employing graph-based methods to make use of structured information such as dependency parse trees (Waszczuk et al., 2019; Rohanian et al., 2019).",
"Top-performing metaphor detection models also use neural methods (Rei et al., 2017; Gao et al., 2018), with some utilising additional data such as sentiment and linguistic information to further improve performance (Mao et al., 2019; Dankers et al., 2019).",
"Graph Convolutional Networks (GCNs) (Kipf and Welling, 2016) are a variation of the classic CNNs that perform the convolution operation on nodes of a graph, making them suitable for capturing nonsequential inter-dependencies in the input.",
"Using the per-sentence formalism (Marcheg-giani and Titov, 2017; Rohanian et al., 2019), GCN can be defined as: GCN = f ( W XTA + b ) (1) where W , X , A , b , and GCN refer to the weight matrix, representation of the input sentence, adjacency matrix, bias term, and the output of the convolution respectively.",
"f is a nonlinearity which is often the relu function.",
"Attention is a mechanism inspired by human visual attention which aims to encode sequences by emphasising their most informative parts through weighting.",
"Self-attention (Cheng et al., 2016), also referred to as intra-attention, is a special case of the attention mechanism which relates different parts of the same sequence and relies only on information from the same sequence.",
"When the sequence is a series of words, this means encoding the sentence by learning correlations between words in the sentence.",
"Self-attention is a powerful method to learn long-range dependencies in a sequence.",
"In this work, we use a particular form of self-attention introduced by Vaswani et al. (2017) in which the weighting is determined by scaled dot product.",
"Given the input representation X , three smaller sized vectors are created.",
"These are Query, 1 See PARSEME annotation guidelines at https://parsemefr.lis-lab.fr/parseme-st-guidelines/1.1/ Key, and Value which are represented with Q , K , and V respectively.",
"N different self-attention mechanisms are activated in parallel.",
"This approach is known as N -headed self-attention, where each head H i = Att ( QW Qi , KW Ki , V ) and the projections W Qi and W Ki are parameter matrices.",
"The outputs from these individual heads are later used in GCN layers (Guo et al., 2019).",
"Central to GCN is the adjacency matrix where the relations between nodes are defined.",
"Converting the graph of relations to an adjacency matrix involves a rule-based hard pruning strategy and potentially results in discarding valuable information due to the sparsity of the matrix.",
"Influenced by Guo et al. (2019), in this work we consider dependency parse information as an undirected graph with adjacency A .",
"To obtain A , we combine matrix A with matrices H 0 , H 1 ,..., HN 1 induced by the N -headed self-attention mechanism defined in Section 3.1.",
"Given an N -headed attention, each A is converted to several A i s where i { 1 , 2 ,",
"..N } and each A i is a linear combination of A and H i .",
"Each A i can be interpreted as a fully connected graph where the relation strength between every two nodes is determined by a weight value.",
"In this case, a higher weight signifies a stronger relation and a value close to zero would signal a lack of connection.",
"These edge-weighted graphs are then fed to separate GCNs.",
"A consolidated representation is finally achieved by a linear combination of the outputs from these N different GCNs.",
"The use of attention within the GCN network is motivated by the assumption that multi-hop paths between distantly related nodes could potentially be captured this way.",
"We stack n layers of attention-guided GCNs using residual connections with n being a hyper-parameter that is tuned independently in each dataset.",
"Graph Attention (GAT) (Velickovic et al., 2017) is a closely related work where the scope of attention is the neighbourhood of each node, whereas we make use of the entire sentence.",
"In order to inform the model of the structural hierarchy within the sentence and encode information about MWEs, our attention-guided GCN component integrates information from two separate sources; namely, the dependency parse information and token-level relations between components of existing MWEs in the sentence.",
"These correspond to adjacencies ADEP and AMWE which are fed each into separate GCNs and the output is a concatenation of the outputs from both components: GCN = concat [ GCN s MWE ; GCN s DEP ] (4) 4 Experiments We describe the datasets used in the experiments and then provide details of the overall system.",
"We apply the systems on two different metaphor datasets: MOH-X, and TroFi, which contain annotations for verb classification.",
"Both of these datasets contain a set of sentences in which a single verb token is labelled as metaphorical or not.",
"There is also an index provided that specifies the location of the target token in the sentence.",
"MOH-X .",
"MOH-X is based on earlier work by Mohammad et al. (2016).",
"It consists of short example' sentences from WordNet (Fellbaum, 1998) 2 with labels for metaphorical verbs along with associated confidence scores.",
"Shutova et al. (2016) created a subset of this dataset, referred to as MOH-X, and added annotations for each verb and its argument.",
"This dataset has 214 unique verbs.",
"TroFi .",
"Similar to MOH-X, TroFi (Birke and Sarkar, 2006) has annotations for target verbs in each sentence.",
"It has a comparatively longer average sentence length with 28 .",
"3 words per sentence compared to MOH-X's 8 .",
"0 .",
"The sentences in TroFi are constructed from the Wall Street Journal Corpus (Charniak et al., 2000).",
"There are only 50 unique target verbs in this dataset.",
"We extract MWEs using the GCN-based system proposed by Rohanian et al. (2019).",
"Since we are focusing on verbal metaphors in this study, we train the system on the PARSEME English dataset 2 Examples are sentences after the gloss that show in-context usage TroFi MOH-X verbal metaphor 1627 315 MWE 257 77 Table 1: Number of predicted MWEs among target verbs.",
"(Ramisch et al., 2018), which is annotated for verbal MWEs.",
"As a result, predicted MWE labels in our target datasets are IOB formatted, where B and I denote the beginning and inside tokens of an MWE and O signifies tokens not belonging to MWEs.",
"We encode the relations between components of MWEs in each sentence using an adjacency matrix.",
"Tokens of a sentence are nodes of the adjacency matrix; edges exist between tokens of an MWE.",
"Relation matrices are then fed to the attention guided system as explained in Section 4.3.",
"The numbers of verbal MWEs in correlation with target verbs in metaphor datasets are shown in Table 1.",
"As can be seen, almost 16% of metaphors in TroFi and 24% of metaphors in MOH-X are automatically labelled as VMWEs.",
"This provides a strong motivation for incorporating this information into the metaphor identification system.",
"For our experiments, we devise two strong baselines and compare them against our proposed model.",
"All three systems are built on top of a pre-trained BERT architecture (Devlin et al., 2019).",
"The starting baseline (BERTBaseline) is vanilla pre-trained BERT with a classification layer added on top.",
"The other two models (BERT+GCN and BERT+MWE-Aware GCN) are created by adding extra layers with trainable parameters on top of the BERT model, augmenting its original structure.",
"3 BERT+GCN is BERT plus an attention-guided GCN that uses dependency parse information.",
"Finally, BERT+MWE-Aware GCN refers to the system that uses BERT along with the added MWE-aware GCN component that utilises both dependency and VMWE information as detailed in Section 3.3.",
"Adam (Kingma and Ba, 2014) is used for optimising the network; the learning rate is controlled with a linear warmup scheduler in which the rate 3 For all the experiments we use the pre-trained BERT model, bert-base-uncased , from the transformers library (Wolf et al., 2019).",
"decreases linearly after increasing during a warmup period.",
"In all the models, given the verb index in the dataset 4 , and before passing the token-level output of the GCN to the softmax layer, we slice the output tensor based on the provided index and only select for the representation of the token of interest and subsequently pass this sliced tensor to the classification layer.",
"We report the results in terms of accuracy, precision, recall and F 1 -score, macro averaged over the measures obtained from 10 fold cross-validation.",
"As can be seen in Table 2, our proposed model outperforms the baselines and also surpasses state-of-the-art in terms of F 1 -score and precision in both datasets.",
"As a whole, the results obtained for the two datasets are more homogeneous across the four metrics compared to previous state-of-the-art.",
"In order to have a fair comparison with the previous state-of-the-art, it is important to consider their architectures.",
"Gao et al. (2018), which our model outperforms in most criteria across the two datasets, is a BiLSTM-based system that uses a combination of ELMo and GLoVe vectors for input representation.",
"The two models by Mao et al. (2019) are more competitive, especially in accuracy and precision for the TroFi dataset.",
"RNN-HG and RNN-MHCA are BiLSTM-based systems grounded in linguistic theories of Selectional Preference Violation (SPV) (Wilks, 1978) and Metaphor Identification Procedure (MIP) (Steen et al., 2007) which are based on the semantic contrast between the metaphorical word and its context or between the literal and contextualised meanings of a target token.",
"These two models also make use of contextualised em-beddings.",
"The larger portion of annotated VMWEs in both datasets are figurative and thus provide a valuable signal to metaphoricity.",
"TroFi proved to be more challenging as sentences can be as long as 118 tokens with several different VMWEs and only a single token of interest which could be labelled as literal.",
"On the other hand, MOH-X is more focused and VMWEs, for the most part, coincide with the target verb.",
"A notable pattern in the results is when the baselines miss a metaphor and the proposed model correctly identifies it due to the presence of a non-compositional VMWE.",
"A typical example is given below where tack together , identified initially as an MWE, signals metaphoricity: 5 (1) He tacked together some verses.",
"There are examples of sentences falsely classi-fied by BERT+GCN as metaphorical which are correctly identified as not by BERT+MWE-Aware GCN.",
"This shows the model has picked up informative cues and general patterns.",
"There are also metaphors missed by BERT+GCN that do not have explicitly tagged VMWEs, but the proposed model is still able to capture them.",
"Example 2 is an instance of such case: (2) The residents of this village adhered to Catholicism.",
"Due to their correlation with metaphoricity, VMWE information equips the model with the ability to identify metaphorical usage, which is reflected in the superior precision scores.",
"However, this correlation is not always definitive, and in certain cases where a VMWE is realised in its literal meaning, the model might incorrectly associate its 5 Target tokens are boldfaced presence with metaphor.",
"The following two sentences from MOH-X are examples of false positives influenced by VMWEs.",
"Here, jam the brake and land in are VMWEs with literal meanings which can be idiomatic in other contexts: (3) The driver jammed the brake pedal to the floor.",
"(4) The ship landed in Pearl Harbor There are only a few such cases in MOH-X, however in TroFi, the problem is exacerbated by longer sentences with multiple target tokens.",
"One possible remedy could be to not attend to all the tokens in each sentence but instead look at a certain window around the target token.",
"We did not explore this idea in this work as it would defeat the purpose of attention-guided GCNs, but are open to considering it in future in such a way that accuracy is improved without hurting the precision scores which are higher in both datasets than previous state-of-the-art.",
"In this work, we presented a neural model to classify metaphorical verbs in their sentential context using information from the dependency parse tree and annotations for verbal multiword expressions.",
"To the best of our knowledge, this is the first MWE-aware metaphor identification system, that demonstrates how the knowledge of MWEs can enhance the performance of a metaphor classification model.",
"Experiments showed that the resulting system sets a new state-of-the-art in several criteria across two benchmark metaphor datasets.",
"The code used in the experiments will be made publicly available 6 .",
"For future work, we plan to add VMWE annotations to the VU Amsterdam Corpus (Steen, 2010) which is the largest metaphor dataset and extend our experiments using that resource.",
"Directionality of edges did not result in improvement in our models in this work, however for future, we plan to develop GCNs that incorporate edge typing, which would enable us to differentiate between different MWE types and dependency relations while comparing them against the current models."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective"
] |
[
"Most existing research on visual question answering (VQA) is limited to information explicitly present in an image or a video.",
"In this paper, we take visual understanding to a higher level where systems are challenged to answer questions that involve mentally simulating the hypothetical consequences of performing specific actions in a given scenario.",
"Towards that end, we formulate a vision-language question answering task based on the CLEVR (Johnson et al., 2017a) dataset.",
"Wethen modify the best existing VQA methods and propose baseline solvers for this task.",
"Finally, we motivate the development of better vision-language models by providing insights about the capability of diverse architectures to perform joint reasoning over image-text modality 1 .",
"In 2014, Michael Jordan, in an interview (Gomes, 2014) said that Deep learning is good at certain problems like image classification and identifying objects in the scene, but it struggles to talk about how those objects relate to each other, or how a person/robot would interact with those objects. For example, humans can deal with inferences about the scene: what if I sit down on that?, what if I put something on top of something? etc. There exists a range of problems that are far beyond the capability of today's machines. \"",
"While this interview was six years ago, and since then there has been a lot of progress in deep learning and its applications to visual understanding. Additionally, a large body of visual question answering (VQA) datasets (Antol et al., 2015; Ren et al., 2015; Hudson and Manning, 2019) have been compiled and many models have been developed",
"corresponding author 1 Dataset setup scripts and code for baselines are made available at https://github.com/shailaja183/clevr_hyp. For additional details about the dataset creation process, refer supplementary material.",
"over them, but the above mentioned inferences about the scene issue stated by Jordan remains largely unaddressed.",
"In most existing VQA datasets, scene understanding is holistic and questions are centered around information explicitly present in the image (i.e. objects, attributes and actions).",
"As a result, advanced object detection and scene graph techniques have been quite successful in achieving good performance over these datasets.",
"However, provided an image, humans can speculate a wide range of implicit information.",
"For example, the purpose of various objects in a scene, speculation about events that might have happened before, consider numerous imaginary situations and predicting possible future outcomes, intentions of a subject to perform particular actions, and many more.",
"Among the above, an ability to imagine taking specific actions and simulating probable results without actually acting or experiencing is an important aspect of human cognition (Figure 1 gives an example of this).",
"Thus, we believe that having autonomous systems equipped with a similar capability will further advance AI research.",
"This is particularly useful for robots performing on-demand tasks in safety-critical situations or navigating through dynamic environments, where they imagine possible outcomes for various situations without executing instructions directly.",
"Motivated by the above, we propose a challenge that attempts to bridge the gap between state-of-the-art AI and human-level cognition.",
"The main contributions of this paper 2 are as follows; We formalize a novel question answering task with respect to a hypothetical state of the world (in a visual form) when some action (described in a textual form) is performed.",
"We create a large-scale dataset for this task, and refer it as CLEVR_HYP i.e. VQA with hypothetical actions performed over images in CLEVR (Johnson et al., 2017a) style.",
"We first evaluate the direct extensions of top VQA and NLQA (Natural language QA) solvers on this dataset.",
"Then, we propose new baselines to solve CLEVR_HYP and report their results.",
"Through analysis and ablations, we provide insights about the capability of diverse architectures to perform joint reasoning over image-text modality.",
"In this section we situate and compare our work with related areas such as implicit text genera-tion/retrieval for a visual, visual question answering (VQA) over synthetic images, question answering (QA) involving hypothetical reasoning, and language-based manipulation in visual domains closest to CLEVR_HYP .",
"Implicit Text Generation for a Visual: VisualComet (Park et al., 2020) and Video2Commonsense (Fang et al., 2020) have made initial attempts to derive implicit information about images/videos contrary to traditional factual descriptions which leverage only visual attributes.",
"VisualComet aims to generate commonsense inferences about events that could have happened before, events that can happen after and people's intents at present for each subject in a given image.",
"They use a vision-language 2 Our work focuses on the capability of neural models to reason about the effects of actions given a visual-linguistic context and not on models that deal with intuitive physics.",
"transformer that takes a sequence of inputs (image, event, place, inference) and train a model to predict inference in a language-model style.",
"Video2Commonsense focuses on generating video descriptions that can incorporate commonsense facts related to intentions, effects, and implicit attributes about actions being performed by a subject.",
"They extract top-ranked commonsense texts from the Atomic dataset and modify training objective to incorporate this information.",
"While both involve a visual-textual component and actions, their key focus is about generating plausible events and commonsense respectively.",
"Whereas, our work is related to performing certain actions and reasoning about its effect on the overall visual scene.",
"Language-based Manipulation in Visual Domain: Learning a mapping from natural language instructions to a sequences of actions to be performed in a visual environment is a common task in robotics (Kanu et al., 2020; Gaddy and Klein, 2019; Shridhar et al., 2020).",
"Another relevant task is vision-and-language navigation (Ander-son et al., 2018; Chen et al., 2019; Nguyen et al., 2019), where an agent navigates in a visual environment to find goal location by following natural language instructions.",
"Both above works include visuals, natural language instructions and a set of actions that can be performed to achieve desired goals.",
"In this way, it is similar to our CLEVR_HYP , but in our case, models require reasoning about the effect of actions performed rather than determining which action to perform.",
"Also, we frame this in a QA style evaluation rather than producing instructions for low-level controls.",
"Manipulation of natural images with language is an emerging research direction in computer vision.",
"(Teney et al., 2020) proposed a method for generating counterfactual of VQA samples using image in-painting and masking.",
"Also, there are works (Dong et al., 2017; Nam et al., 2018; Reed et al., 2016) which use Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) for language conditioned image generation and manipulation.",
"However, both the above tasks are more focused at object and attribute level manipulation rather than at action level.",
"VQA over Synthetic Images: While natural images-based VQA datasets reflect challenges one can encounter in real-life situations, the require-I : 1. TA : Paint the small green ball with cyan color.",
"ment of costlier human annotations and vulnerability to biases are two major drawbacks.",
"Contrary to them, synthetic datasets allow controlled data generation at scale while being flexible to test specific reasoning skills.",
"For the above reasons, following benchmark VQA datasets have incorporated synthetic images; COG (Yang et al., 2018) and Shapes (An-dreas et al., 2016) contain images with rendered 2D shapes; SHRDLU (Winograd, 1971), CLEVR (Johnson et al., 2017a), and CLEVR-dialog (Kot-tur et al., 2019) have rendered scenes with 3D objects; DVQA (Kafle et al., 2018) and FigureQA (Kahou et al., 2017) have synthetically generated charts (bar chart, pie chart, dot-line etc.); VQA-abstract (Antol et al., 2015) and IQA (Gordon et al., 2018) involves question-answering over synthetically rendered clipart-style scenes and interactive environments respectively.",
"Our proposed dataset CLEVR_HYP uses CLEVR (Johnson et al., 2017a) style rendered scenes with 3D objects as a visual component.",
"It is distinct from all other synthetic VQA datasets for two key reasons; first, integration of action domain in synthetic VQA and second, the requirement of mental simulation in order to answer the question.",
"QA involving Hypothetical Reasoning: In the language domain, WIQA (Tandon et al., 2019) dataset tests the model's ability to do what-if reasoning over procedural text as a 3-way classification (the influence between pair of events as positive, negative or no-effect).",
"In vision-language domains, a portion of TQA (Kembhavi et al., 2017) and VCR (Zellers et al., 2019) are relevant.",
"Questions in TQA and VCR involve hypothetical scenarios about multi-modal science contexts and movie scenes respectively.",
"However, none of the above two datasets' key focus is on the model's capability to imagine changes performed over the image.",
"(a benchmark dataset for physical intelligence) (Wagner et al., 2018) has some similarity with ours.",
"It has synthetically rendered table-top scenes, four types of actions (push, rotate, remove and drop) being performed on an object and what-if questions.",
"To our best knowledge, TIWIQ dataset is not publicly available.",
"Based on our understanding from their manuscript, we observe following important distinction with this work.",
"Our questions focus on the impact of actions on the whole image, while in TIWIQ questions are about impact of actions on a specific object in the image.",
"Moreover, we frame CLEVR_HYP as a classification task, contrary to TIWIQ which is a generative task.",
"Our CLEVR_HYP dataset has 175k automatically generated image-action text-question samples which is much larger compared to TIWIQ which has only 1020 samples and manually crafted ground-truths.",
"Figure 2 gives a glimpse of CLEVR_HYP task.",
"We opt for synthetic dataset creation as it allows automated and controlled data generation at scale with minimal biases.",
"More details are described below.",
"1. Image(I): It is a given visual for our task.",
"Each image in the dataset contains 4-10 randomly selected 3D objects rendered using Blender (Blender Online Community, 2019) in CLEVR (Johnson et al., 2017a) style.",
"Objects have 4 attributes listed in the Table 1. Additionally, these objects can be referred using 5 relative spatial relations (left, right, in front, behind and on).",
"We provide scene graphs 3 containing all ground-truth information about a scene, that can be considered as a visual oracle for a given image.",
"2. Action Text (TA ): It is a natural language text describing various actions performed over the current scene.",
"The action can be one of four:",
"(i) Add new object(s) to the scene",
"(ii) Remove object(s) from the scene",
"(iii) Change attributes of the object(s)",
"(iv) Move object(s) within scene (might be in plane i.e. left/right/front/back or out of plane i.e. move one object on top of another object 4 ) To generate action text, we start with manually written templates involving the aforementioned actions.",
"For example, action involving change in the attribute of object(s) to a given value, we have a template of the following kind; Change the < A > of < Z >< C >< M >< S > to < V > ' .",
"Where < A > , < Z > , < C > , < M > , < S > , < V > are placeholders for the attribute, size, color, material, shape and a value of attribute respectively.",
"Each action text in the CLEVR_HYP is associated with a functional program which if executed on an image's scene graph, yields the new scene graph that simulates the effects of actions.",
"Functional programs for action texts 3 are built from the basic functions that correspond to elementary action operations (right part of Figure 4a).",
"For the above mentioned change' attribute action template, the equivalent functional program can be written as; change_attr( < A > ,filter_size( < Z > ,filter 3 Scene graphs and Functional Programs (for action text and question) are not provided at the test-time. 4 For simplicity, we assume that any object can be put on another object regardless of its size, material or shape. _color( < C > , filter_material( < M > filter_ shape( < S > , scene())))), < V > )' .",
"It essentially means, first filter out the objects with desired attributes and then update the value of their current attribute A to value V. 3. Question about Hypothetical Situation (QH ): It is a natural language query that tests various visual reasoning abilities after simulating the effects of actions described in TA .",
"There are 5 possible reasoning types similar to CLEVR;",
"(i) Counting objects fulfilling the condition",
"(ii) Verify existence of certain objects",
"(iii) Query attribute of a particular object",
"(iv) Compare attributes of two objects",
"(v) Integer comparison of two object sets (same, larger or smaller) Similar to action texts, we have templates and corresponding programs for questions.",
"Functional programs for questions 3 are executed on the image's updated scene graph (after incorporating effects of the action text) and yields the ground-truth answer to the question.",
"Functional programs for questions are made of primitive functions shown in left part of the Figure 4a).",
"Paraphrasing: In order to create a challenging dataset from linguistic point of view and to prevent models from overfitting on templated representations, we leverage noun synonyms, object name paraphrasing and sentence-level paraphrasing.",
"For noun synonyms, we use a pre-defined dictionary (such as cubeblock, sphereball and so on).",
"We programmatically generate all possibilities to refer to an object in the image (i.e. object name paraphrasing) and randomly sample one among them.",
"For sentence level paraphrasing, we use Text-To-Text Transfer Transformer (T5) (Raffel et al., 2020) fine-tuned over positive samples from Quora Question Pairs (QQP) dataset (Iyer et al., 2017) for question paraphrasing.",
"We use Fairseq (Ott et al., 2019) for action text paraphrasing which uses round-trip translation and mixture of experts (Shen et al., 2019).",
"Note that we keep the action text and question as separate inputs for the purpose of simplicity and keeping our focus on building solvers that can do mental simulation.",
"One can create a simple template like < QH > if < proper-noun/pronoun > < TA > ?\" or If < proper-noun/pronoun > < TA > ,",
"However, having them together adds further complexity on the solver side as it first has to figure out what actions are performed and what is the question.",
"By providing ground-truth object information (as a visual oracle) and machine-readable form of questions & action texts (oracle for linguistic com-ponents).",
"This information can be used to develop models which can process semi-structured representations of image/text or for the explainability purposes (to precisely know which component of the model is failing).",
"Output: Answer (A) to the Question (QH ), which can be considered as a 27-way classification over attributes (8 colors + 3 shapes + 2 sizes + 2 material), numeric (0-9) and boolean (yes/no).",
"CLEVR_HYP dataset containing 175k image-action text-question samples using the process mentioned",
"mentioned in Figure 4b.",
"For each image, we generate 5 kinds of action texts (one for each add, remove, move in-plane and move out-of-plane and change attribute).",
"For each action text type, we generate 5 questions (one for each count, exist, compare integer, query attribute and compare attribute).",
"Hence, we get 5*5 unique action text-question pairs for each image, covering all actions and reasoning types in a balanced manner as shown in Figure 5a (referred as Original partition).",
"However, it leads to a skewed distribution of answers as observed from 5b.",
"Therefore, we curate a version of the dataset (referred as Balanced partition) consisting of 67.5k samples where all answer choices are equally-likely as well.",
"Additionally, we create two small challenge test sets (1500 image-action text-question samples each)2HopActionText (2HopT A ) and 2HopQues-tion (2HopQ H ) to test generalization capability of the trained models.",
"In 2HopT A , we create action text which requires model to understand two different actions being taken on the scene.",
"For example, Add a small blue metal cylinder to the right of large yellow cube and remove the large cylinder from the scene.' and 'Move the purple object on top of small red cube then change its color to",
"cyan.'.",
"In",
"2HopQ H , we create questions which require model to understand logical combinations of questions using and', or' and not'.",
"For example, How many objects are either red or cylinder?' and Are there any rubber cubes that are not green?'.",
"In Table 2, we provide size of the various partitions and measure the diversity of the dataset in various aspects.",
"For images, we calculate average number of objects present in the scene from the length of scene graph.",
"For balanced partition, the number of images are much less compared to original, but more average number of objects per image.",
"This is most likely due to the need to accommodate integers 4-9 more frequently as ground-truth answers.",
"For textual components, we show average lengths (number of tokens separated by whites-paces) and count unique utterances as a measure of diversity.",
"The original partition of the resulting dataset has 80% and 83% unique action text and questions respectively.",
"For balanced partition, length and unique utterances for action text are nearly same as the original partition but for questions, it decreases.",
"Questions in the original partition have been observed to enforce more strict and specific object references (such as small red metal cubes) compared to balanced partition (small cubes, red metal objects etc.), reducing the average length and uniqueness.",
"It is intuitive for 2Hop partitions to have higher average length and uniqueness for TA and QH respectively.",
"This shows that despite having created this dataset from templates and rendered images with a limited set of attributes, it is still fairly challenging.",
"(i) understand hypothetical actions and questions in complex natural language,",
"(ii) correctly disambiguate the objects of interest and obtain the structured representation (i.e. scene graphs or functional programs) of various modalities if required by the solver,",
"(iii) understand the dynamics of the world based on the various actions performed over it,",
"(iv) perform various kind of reasoning to answer the question.",
"The QA task in CLEVR_HYP dataset can be considered as a 27-class classification problem.",
"Each answer choice is likely to be picked with a probability of 1/27.",
"Therefore, the performance of the random baseline is 3.7%.",
"We performed human evaluation with respect to 500 samples from the CLEVR_HYP dataset.",
"Accuracy of human evaluations on original test, 2Hop AT and 2Hop QH are 98.4%, 96.2% and 96.6% respectively.",
"Pre-trained transformer-based architectures have been observed (Li et al., 2020) to capture a rich hierarchy of language-structures (text-only models) and effectively map entities/words with corresponding image regions (vision-language models).",
"We experiment with various transformer-based models to understand their capability to understand the effects of actions on a visual domain.",
"Baseline 1Machine Comprehension using RoBERTa: To evaluate the hypothetical VQA task through the text-only model, we convert images into the templated text using scene graphs.",
"The templated text contains two kind of sentences; one describing properties of the objects i.e. There is a < Z > < C > < M > < S > \", the other one describing the relative spatial location i.e. The < Z > < C > < M > < S > is < R > the < Z1 > < C1 > < M1 > < S1 > \". For example, There is a small green metal cube.\" and The large yellow rubber sphere is to the left of the small green metal cube\". Then we concatenate templated text with the action text to create a reading comprehension passage. We use state-of-the-art machine comprehension baseline RoBERTa (Liu et al., 2019) finetuned on the RACE dataset (Lai et al., 2017) 5 . Finally, we pre-5 architecture=roberta large, epochs=5, learning rate= 1e 05 , batch size=2, update frequency=2, dropout=0.1, dict an answer to the question using this reading comprehension passage. Baseline 2Visual Question Answering using LXMERT Proposed by (Tan and Bansal, 2019), LXMERT is one of the best transformer based pre-trainable visual-linguistic representations which supports VQA as a downstream task. Typical VQA systems take an image and a language input. Therefore, to evaluate CLEVR_HYP in VQA style, we concatenate action text and question to form a single text input. Since LXMERT is pre-trained on the natural images, we finetune it over CLEVR_HYP dataset 6 and then use it to predict answer. 4.4 Systematically incorporating effects of actions into neural models Baseline 3Text-editing Image Baseline: In this method, we break-down the QA task with mental simulation in two parts; first, learn to generate an updated image (such that it has incorporated the effects of actions) and then perform visual question answering with respect to the updated image. We use the idea from Text Image Residual Gating proposed in (Vo et al., 2019) to implement the first part. However there are two important distinctions; Their focus is on the retrieval from the given database. We modify their objective and develop text-adaptive encoder-decoder with residual connections to generate new image. Also, editing instructions in their CSS dataset (Vo et al., 2019) were quite simple. For example, add red cube' and remove yellow sphere'. In this case, one can add the red cube anywhere in the scene. We modify their architecture to precisely place objects to their optimizer=adam with eps= 1e 06 . 6 epochs=4, learning rate= 5e 05 , batch size=8 Nomenclature I: Image, SG: Scene Graph, TT: Templated Text, TA : Action Text, QH : Hypothetical Question, A: Answer, FP: Functional Program, ': Updated Modality Baseline 1: I SG T T + TA RoBERT a RACEA QH Baseline 3: I I (cid:48) LXMERTCLEV RA TAF P QH Baseline 2: I LXMERTCLEV R _ HY PA TA + QH Baseline 4: I SG SG (cid:48) Symbolic A TAF P QHF P Figure 6: Graphical visualization of baseline models over CLEVR_HYP described above. relative spatial references (on left/right/front/ be-hind). Once we get the updated image, we feed it to the LXMERT (Tan and Bansal, 2019) finetuned over the CLEVR (Johnson et al., 2017a) dataset along with the question and predict the answer. Baseline 4Scene Graph Update Model: Instead of directly manipulating images, in this method, we leverage image scene graphs to convert image-editing problem into graph-editing problem, conditioned on the action text. This is an emerging research direction to deal with changes in the visual modality over time or with new sources of information, as observed from recent parallel works (chang Chen et al., 2020; He et al., 2020). 
We first use Mask R-CNN (He et al., 2017) to get the segmentation mask of the objects and predict attributes (color, material, size, and shape) with an acceptance threshold of 0.9. Segmentation mask of each object along with original image is then passed through ResNet-34 (He et al., 2016) to extract precise 3D coordinates of the object. We get the structured scene graph for the image. Then we use seq2seq with attention model originally proposed in (Johnson et al., 2017b) to generate functional programs (FP) for action text and question. The execution engine executes programs on scene graph, implemented as a neural module network (Andreas et al., 2016) to update the scene representation and answer questions. We learn to update scene graphs according to functional program for the action text using reinforcement learning 7 . The reward function is as-7 finetuning learning rate= 1e 05 , 1M iterations with early sociated with our ground-truth program executor and generates reward if prediction exactly matches with ground-truth execution. Once we get the updated scene representation, we use neural-symbolic model 8 proposed by (Yi et al., 2018) to obtain the final answer. It is notable that (Yi et al., 2018) achieved near-perfect performance on the CLEVR QA task in addition to being fully explainable. 5 Baseline Results In this section, we benchmark models described above on the CLEVR_HYP . The dataset is formulated as a classification task with exactly one correct answer, so we use standard accuracy as evaluation metric. We then analyze their performance according to question and action types. Quantitative results from above experiments can be visualized in top part of the Table 3. Among the methods described above, the scene graph update model has the best overall performance 70.5% on original test data. Text-editing model is best over balanced set, but observed to have the poor generalization capability when two actions or reasoning capabilities have to be performed. CLEVR_HYP requires models to reason about effect of hypothetical actions taken over images. LXMERT is not directly trained for this objective therefore, it struggles to do well on this task. The reason behind the poor performance of text-only baseline is due to its limitation to incorporate detailed spatial locations stopping, batch size=32 8 supervised pretraining learning rate= 7e 04 , num itera-tions=20k, batch size=32 and then finetuning 1e 05 , at most 2M iterations with early stopping, batch size=32 Overall Baseline Performance for Various Test Sets of CLEVR_HYP Original Test Balanced Test 2HopTA Test 2HopQH Test BL1 BL2 BL3 BL4 BL1 BL2 BL3 BL4 BL1 BL2 BL3 BL4 BL1 BL2 BL3 BL4 57.2 63.9 64.7 70.5 55.3 65.2 69.5 68.6 53.3 49.2 55.6 64.4 55.2 52.9 58.7 66.5 Performance break-down by Action Types and Reasoning Types for Baseline 3 and 4 Original Test 2Hop AT Test Original Test 2Hop QH Test BL3 BL4 BL3 BL4 BL3 BL4 BL3 BL4 Add 58.2 65.9 Add+Remove 53.6 63.2 Count 60.2 74.3 And 59.2 67.1 Remove 89.4 88.6 Add+Change 55.4 64.7 Exist 69.6 72.6 Or 58.8 67.4 Change 88.7 91.2 Add+Move 49.7 57.5 CompInt 56.7 67.3 Not 58.1 65.0 Move(in-plane) 61.5 69.4 Remove+Change 82.1 85.5 CompAttr 68.7 70.5 Move(on) 53.3 66.1 Remove+Move 52.6 66.4 QueryAttr 65.4 68.1 Change+Move 53.8 63.3 Table 3: Baseline performance over CLEVR_HYP (BLx represents one of the four Baselines described above). into the templates that we use to convert image into a machine comprehension passage. 
Two of our models (scene graph update and text-editing image) are transparent, making it possible to visualize intermediate changes in the scene after performing actions. We analyse their ability to understand actions and make appropriate changes, as shown in the bottom part of Table 3. For the scene graph method, we compare the ground-truth functional program with the generated program and measure their exact-match accuracy. For the text-editing image method, we generate scene graphs for both images (the original image and the image after text-editing) and compare them. For attributes, we do exact match, whereas for location information we consider matching only on the basis of relative spatial location. Both the scene graph and text-editing models do quite well on 'remove' and 'change' actions, whereas they struggle when new objects are added or existing objects are moved around. The observation is consistent when multiple actions are combined: the combination remove+change can be performed with maximum accuracy, whereas other combinations of actions achieve relatively lower performance. This leads to the conclusion that the effects of different actions are of varying difficulty to understand. Most models demonstrate better performance on counting, existence, and attribute-query questions than on comparison questions. The scene graph update and text-editing methods show a performance drop of 6.1% and 9.1% respectively when multiple actions are performed on the scene. However, there is less of a performance gap for models on 2HopQH compared to the test set, suggesting that models are able to better generalize with respect to multiple reasoning skills than complex actions. 6 Conclusion. We introduce CLEVR_HYP, a dataset to evaluate the ability of VQA systems after hypothetical actions are performed over the given image. We create this dataset by extending the data generation framework of CLEVR (Johnson et al., 2017a), which uses synthetically rendered images and templates for reasoning questions. Our dataset is challenging because, rather than asking models to reason about objects already present in the image, it asks what would happen in an alternative world where changes have occurred. We provide ground-truth representations for images, hypothetical actions, and questions to facilitate the development of models that systematically learn to reason about the underlying process. We create several baseline models to benchmark CLEVR_HYP and report their results. Our analysis shows that the models are able to perform reasonably well (70.5%) on the limited number of actions and reasoning types, but struggle with complex scenarios. While neural models have achieved almost perfect performance on CLEVR, and considering human performance (98%) as an upper bound, there is a lot of room for improvement on CLEVR_HYP. Our future work includes relaxing constraints by allowing a larger variety of actions, attributes, and reasoning types. By extending this approach further to natural images, we aim to contribute to the development of better vision+language models. Acknowledgements. We are thankful to the anonymous reviewers for the constructive feedback. This work is partially supported by the grants NSF 1816039, DARPA W911NF2020006 and ONR N00014-20-1-2332. References. Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian D. Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. 
In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 3674-3683. IEEE Computer Society. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 39-48. IEEE Computer Society. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2425-2433. IEEE Computer Society. Blender Online Community. 2019. Blender - a 3D modelling and rendering package. Blender Foundation. Lichang Chen, Guosheng Lin, S. Wang, and Qingyao Wu. 2020. Graph edit distance reward: Learning to edit scene graph. In Proceedings of the European Conference on Computer Vision (ECCV). Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12538-12547. Hao Dong, Simiao Yu, Chao Wu, and Yike Guo. 2017. Semantic image synthesis via adversarial learning. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 5707-5715. IEEE Computer Society. Zhiyuan Fang, Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. Video2Commonsense: Generating commonsense descriptions to enrich video captioning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 840-860. Association for Computational Linguistics. David Gaddy and Dan Klein. 2019. Pre-learning environment representations for data-efficient neural instruction following. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1946-1956. Association for Computational Linguistics. Lee Gomes. 2014. Machine-learning maestro Michael Jordan on the delusions of big data and other huge engineering efforts. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial networks. arXiv preprint arXiv:1406.2661. Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, and Ali Farhadi. 2018. IQA: Visual question answering in interactive environments. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 4089-4098. IEEE Computer Society. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. 2017. Mask R-CNN. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 2980-2988. IEEE Computer Society. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society. Xuanli He, Quan Hung Tran, Gholamreza Haffari, Walter Chang, Zhe Lin, Trung Bui, Franck Dernoncourt, and Nhan Dam. 2020. Scene graph modification based on natural language commands. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 972-990. 
Association for Computational Linguistics. Drew A. Hudson and Christopher D. Manning. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6700-6709. Shankar Iyer, Nikhil Dandekar, and Kornél Csernai. 2017. First Quora dataset release: Question pairs. data.quora.com. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2017a. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1988-1997. IEEE Computer Society. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2017b. Inferring and executing programs for visual reasoning. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 3008-3017. IEEE Computer Society. Kushal Kafle, Brian L. Price, Scott Cohen, and Christopher Kanan. 2018. DVQA: Understanding data visualizations via question answering. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 5648-5656. IEEE Computer Society. Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Ákos Kádár, Adam Trischler, and Yoshua Bengio. 2017. FigureQA: An annotated figure dataset for visual reasoning. arXiv preprint arXiv:1710.07300. John Kanu, Eadom Dessalene, Xiaomin Lin, Cornelia Fermuller, and Yiannis Aloimonos. 2020. Following instructions by imagining and reaching visual goals. arXiv preprint arXiv:2001.09373. Aniruddha Kembhavi, Min Joon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? Textbook question answering for multimodal machine comprehension. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 5376-5384. IEEE Computer Society. Satwik Kottur, José M. F. Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2019. CLEVR-Dialog: A diagnostic dataset for multi-round reasoning in visual dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 582-595. Association for Computational Linguistics. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794. Association for Computational Linguistics. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2020. What does BERT with vision look at? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5265-5275, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Seonghyeon Nam, Yunji Kim, and Seon Joo Kim. 2018. Text-adaptive generative adversarial networks: Manipulating images with natural language. 
In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 42-51. Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. 2019. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 12527-12537. Computer Vision Foundation / IEEE. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53. Association for Computational Linguistics. J. Park, Chandra Bhagavatula, R. Mottaghi, A. Farhadi, and Yejin Choi. 2020. VisualCOMET: Reasoning about the dynamic context of a still image. In Proceedings of the European Conference on Computer Vision (ECCV). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67. Scott E. Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 1060-1069. JMLR.org. Mengye Ren, Ryan Kiros, and Richard S. Zemel. 2015. Exploring models and data for image question answering. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2953-2961. Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5719-5728. PMLR. Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 10737-10746. IEEE. Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100-5111. Association for Computational Linguistics. Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Peter Clark, and Antoine Bosselut. 2019. WIQA: A dataset for what if... reasoning over procedural text."
] | [
"abstain",
"method",
"method",
"abstain",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"method",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Non-goal oriented dialog agents (i.e. chat-bots) aim to produce varying and engaging conversations with a user; however, they typically exhibit either inconsistent personality across conversations or the average personality of all users.",
"This paper addresses these issues by controlling an agent's persona upon generation via conditioning on prior conversations of a target actor.",
"In doing so, we are able to utilize more abstract patterns within a person's speech and better emulate them in generated responses.",
"This work introduces the GENERATIVECONVERSATIONCONTROL model, an augmented and fine-tuned GPT-2 language model that conditions on past reference conversations to probabilistically model multi-turn conversations in the actor's persona.",
"We introduce an accompanying data collection procedure to obtain 10.3M conversations from 6 months worth of Reddit comments.",
"We demonstrate that scaling model sizes from 117M to 8.3B parameters yields an improvement from 23.14 to 13.14 perplexity on 1.7M held out Reddit conversations.",
"Increasing model scale yielded similar improvements in human evaluations that measure preference of model samples to the held out target distribution in terms of realism (31% increased to 37% preference), style matching (37% to 42%), grammar and content quality (29% to 42%), and conversation coherency (32% to 40%).",
"We find that conditionally modeling past conversations improves perplexity by 0.47 in automatic evaluations.",
"Through human trials we identify positive trends between conditional modeling and style matching and outline steps to further improve persona control.",
"Modeling dialog agents, otherwise known as chat-bots, has been a longstanding goal within artificial intelligence research.",
"Historically, approaches to this task can be divided into one of the two categories: retrieval and generative .",
"The former is posed as a search problem where an appropriate response for a conversation is selected from a large set of candidate replies, whereas the latter autoregressively samples a reply, thereby potentially creating a response that the model may not have seen before.",
"The flexibility and creativity afforded by not prespecifying every possible response is a ma-jor draw for generative based approaches.",
"In recent years, advances in neural methods have shown promise in effectively modeling this task.",
"Early progress first demonstrated potential with recurrent network based models capable of holding simple conversations (Sordoni et al., 2015).",
"Further architecture optimizations and tweaks improved user experiences; however, they largely experienced issues with the agent exhibiting an inconsistent personality and producing uninteresting comments (Li et al., 2015).",
"Some works have attempted to alleviate this through conditioning on various factors of the conversation through methods such as sentiment or speaker embeddings (Li et al., 2016), but the added data annotation makes these methods not scale well to the gargantuan amounts of data needed to train larger models.",
"A persona-based conversation task was introduced by Zhang et al. (2018) where a set of Reddit comments and their replies were accompanied by brief descriptions or factoids about the speakers, such as their hobbies and interests.",
"Recent works Wolf et al. (2019) have shown that leveraging this format with pre-trained transformer-based language models yield state-of-the-art (SOTA) performance in generative conversation modeling.",
"How-Speaker Conversation Turn A They are worried about themes becoming an exploit.",
"ever, in our interactions with these models they produced conversations that adhered to the reference facts, but were devoid of unique personality and instead exhibited a mean average style.",
"Personality, as seen through text, manifests itself not just through content, but also through a person's tone, grammar, and vernacular.",
"As such, a criticism of prior persona-based solutions is that the personas only reflect surface-level characteristics of a person's manner of speaking and can result in banal generations.",
"What does showcase a personality are actual conversation examples from a person.",
"By conditioning on previous, unrelated, conversation turns for a speaker, we generate new replies that utilize more abstract personality traits inherent in the reference examples.",
"We define this as a conditional conversation task.",
"Emulating this abstract notion of style requires large amount of data and sufficiently powerful model.",
"We propose a data collection procedure that heuristically scrapes user data and comment threads from Reddit 1 to produce conversations that vary widely in content, speakers, and reference histories to condition on.",
"This work also introduces the GENERATIVECONVERSATIONCONTROL (GCC) model, an augmented and fine-tuned GPT-2 language model.",
"We take advantage of 1 https://reddit.com/ large transformers' ability to model long contexts and dependencies, and successfully model multiturn and multi-actor conversations that are significantly longer (up to 15 turns) than most prior work.",
"We find that scaling model sizes from 117M to 8.3B parameters yields an improvement from 23.14 to 13.14 perplexity on 1.7M held out Reddit conversations.",
"Similar improvements from model scaling are found in human evaluations that measure sample preference to the held out target distribution in terms of realism (31% increased to 37% prefer-ence), style matching (37% to 42%), grammar and content quality (29% to 42%), and conversation coherency (32% to 40%).",
"i We introduce a new conversational task and demonstrate added value over traditional conversation modeling through both better control and response generation.",
"ii We document the creation of a large, multiturn, multi-actor conversational dataset and the techniques used to clean it and extract conversations and reference material for style.",
"iii We demonstrate that by increasing model size from 117M to 8.3B parameters, human evaluations measuring preference of model generated samples over held out target distribution increase with respect to realism, style matching, grammar, and conversation coherency.",
"Automatic evaluations also showcase similar trends with the largest model leading to significantly lower perplexities.",
"Let c represent a multi-turn conversation of variable-length, and let x j represent a single turn that contains a variable-amount of tokens.",
"Mathematically, this is represented as c = ( x 1 , . . . , x | c | ) , with x j = ( x j, 1 , . . . , x j, | x j | ) .",
"Every token, in every turn, belongs to the same fixed vocabulary (i.e. x j,t V ).",
"Assume that p ( ) represents the true distribution of content.",
"Standard language modeling involves modeling sequences of tokens.",
"After factorizing, the problem is most commonly construed as a next-token prediction problem where p ( x ) is approximated via: p ( x ) = | x | (cid:89) t =1 p ( x t | x <t ) (1) where is optimized over a set of documents, D = { x (1) , . . . , x |D| } , using maximum likelihood estimation: L ( , D ) = |D| (cid:88) i =1 log p ( x ( i ) ) (2) Likewise, to model dialog in the same vein requires just a small alteration.",
"Instead of modeling just a single sequence of tokens, x , the new objective is to model several sequences of tokens that comprise a conversation, c .",
"As such, p ( c ) is approximated via: p ( c ) = | c | (cid:89) j =1 p ( x j | x <j ) = | c | (cid:89) j =1 | x j | (cid:89) t =1 p ( x j,t | x j,<t , x <j ) (3) where is optimized over a set of conversations, D = { c (1) , . . . , c |D| } , using maximum likelihood estimation: L ( , D ) = |D| (cid:88) i =1 log p ( c ( i ) ) (4) 2.2 Conditioning on Prior Conversations To have more control over generation and better insight into the distribution of turns within a conversation, it is better to conditionally model c instead of modeling it unconditionally as in Equation 3. For every turn in a particular conversation, x j c , let r j be a corresponding set of reference history tuples.",
"These tuples contain",
"(i) a prior turn of conversation",
"(ii) a turn of conversation spoken by the same agent as x j in response to the first member of the tuple.",
"In the event that",
"(ii) corresponds to the beginning of a conversation",
"(i) is left blank.",
"We stipulate that the turns of c and the turns of r j are disjoint.",
"This is defined mathematically as: r j = { ( x k 1 , x k ) | author ( x k ) = author ( x j ) x k / c } (5) The intention of including previous replies by the same person is to get a better idea of the personality, tone, vernacular, and content of potential responses when predicting next tokens for the given turn.",
"Likewise, the turns that the agent was replying to in r j are also included to get a better idea as to the transition dynamics of how they interact with other agents.",
"We update our prior equations to reflect this change in modeling objective: p ( c | r ) = | c | (cid:89) j =1 p ( x j | x <j , r j ) = | c | (cid:89) j =1 | x j | (cid:89) t =1 p ( x j,t | x j,<t , x <j , r j ) L ( , D ) = |D| (cid:88) i =1 log p ( c ( i ) | r ( i ) ) (6) 3 Data In order to sufficiently train a model to be able to autoregressively generate turns in a conversation conditioned on prior conversations, we require an ample amount of diverse examples accompanied with plenty of reference material.",
"A suitable source of data for this purpose can be found from comments made on Reddit posts.",
"Thanks to a publicly available archive on pushshift.io , comments are processed from Reddit ranging from October of 2018 to March of 2019 for training, and April of 2019 for validation.",
"The techniques described in this section can naturally be extended to the full range of Reddit data spanning as far back as 2005; however, we choose to focus on just the 6 months in question for the sake of tractability.",
"Comments for a singular post on Reddit naturally exist as a tree structure; however, conversations necessitate a sequence of turns.",
"As such, we obtain conversations by extracting valid paths from the comment graph structure.",
"Paths are extracted sequentially from the longest candidates to shortest, and a candidate path is considered valid if and only if it satisfy the following conditions: 1. The path has a minimum of 5 turns 2. The path has a maximum of 15 turns 3. At least one turn has minimum karma score 2 of 4 within the path 4. All turns in the path have at least 3 words 5. The path shares a maximum of 2 turns with previously extracted paths 6. No turns in the path originate from a not safe for work subreddit These rules were decided upon to ensure that the model is able to learn multi-turn conversations (rules 1 and 2) with appropriate and meaningful comments being made (3, 4, and 6) while ensuring a diverse set of examples (5) are available.",
"Due to memory constraints, comments are only processed on a month to month basis so any conversations that span across months are lost; however, this loss is negligible due to the vast amount of data at hand.",
"Furthermore, this technique possibly results in more relevant references than those collected from prior months as the reference data is temporally local to the conversations in question and reflects users' current personas and interests.",
"After all conversations have been extracted, a reference set of turns (and comments that they were replying to) are collected for every user.",
"We save, at most if available, the top 8 scoring comments for every user.",
"Most users have much more than 8 comments, so an average of 7.1 reference tuples per user are collected, with about half of the tuples containing a parent comment that the user was replying to.",
"All models proposed stem from the GPT-2 model architecture as their base design (Radford et al., 2019).",
"The class of models will be defined as GENERATIVECONVERSATIONCONTROL models, GCC We experiment with the number of layers, l , the hidden size, h , and the number of attention heads, A in the GPT-2 model architecture.",
"Modeling conversations with GCC requires three steps:",
"(i) identify a speaker to emulate and obtain their reference history consisting of comments they made on other Reddit posts,",
"(ii) input the reference history and conversation turns into the model, and",
"(iii) retrieve estimated next-token probabilities only associated with turns in the conversation spoken by the target speaker.",
"Due to supporting multi-actor conversations present in our dataset, special care is needed for presenting this information to the model.",
"In general, this is accomplished by designating a speaker of interest to model in a conversation.",
"As visualized in Figure 1, the designated speaker's reference history tokens are gathered and concatenated together, with a parent comment followed by its associated reply (made by the speaker of interest) followed by another parent comment and so forth.",
"Positional embeddings will signal to the model the order of comments being made; however, additional signal is needed to tell the model which comments are made by the speaker of interest and which are not.",
"This is achieved by token type embeddings that get added to the positional and vocabulary embeddings.",
"All tokens in the reference history that belong to the speaker get the same token type embedding, and all others get a different one.",
"This representation choice allows us to naturally handle multi-actor conversation by only making a distinction between the speaking 2 Karma can be thought of as the net amount of likes and dislikes a comment has, as voted upon by the users of Reddit.",
"user and non speaking users.",
"Reference history sequences larger than 512 are truncated from the end to keep the length within 512 tokens.",
"The conversation turns are similarly represented by concatenating them together in order of occurrence with a special token at the beginning signifying the start of the conversation.",
"For practicality, all turns after the final turn associated with the target speaker are discarded for a given iteration.",
"Each token in the conversation sequence receives a spe-cific token type embedding if it is associated with the speaker of interest, and receives a different type if not.",
"Note, the conversation and reference history have disjoint sets of token type embeddings to differentiate the different types of content.",
"The max length a conversation sequence can be is 512 tokens with extra tokens truncated from the beginning of the conversation to encourage a variety of conversation lengths.",
"In models that have access to the reference history this leads to a total sequence length of 1024 tokens and 512 tokens otherwise.",
"There is flexibility in how to model conversations with reference histories due to the turns in a conversation and reference comments being indirectly related, both content and style-wise.",
"As such, the design choices we consider either encode the references separate from the conversation, or together.",
"Decoder-Only: GCC-DEC The simplest of the three considered models consists of only a transformer for decoding, which is the original configuration for GPT-2 .",
"The input consists of the reference history tokens concatenated with the conversation turn tokens and the corresponding token types.",
"A left-to-right (LR) mask is used across the entire sequence.",
"See Figure 1 for an illustration.",
"Despite it's simplicity we find that this model performs the best.",
"Seq2Seq Baseline: GCC-S2S For this model the reference material with corresponding token types is encoded in a separate transformer using a bidirectional mask.",
"The conversation turns are then decoded with a LR mask using both self-attention and attention against the final hidden states of the encoded reference.",
"This is representative of the typical formulation for attention-based Seq2Seq models (Vaswani et al., 2017).",
"Variational Autoencoder Baseline: GCC-VAE This configuration also encodes the reference history and corresponding token types in a separate transformer using a bidirectional mask.",
"The final hidden state of a special classification token is then linearly transformed into the sufficient statistics of a normal latent state which is then sampled.",
"This latent state is then prepended to the embedded inputs of the conversation turns.",
"The final sequence is then decoded using a LR mask in a separate transformer.",
"We explored this method as latent variables are commonly used to control aspects of style across various areas of study.",
"No Reference Context Baseline: GCC-NRC This version is similar to GCC-DEC except that there are no reference material included when decoding information.",
"This model can be seen as a re-implementation of Olabiyi and Mueller (2019) with the minor differences being that we introduced token types for multi-actor modeling and we did not utilize their random padding strategy.",
"We found this unnecessary as we did not experience overfit-ting due to the large amount of training data available.",
"As such, GCC-NRC will largely serve as our previous SOTA baseline to compare against when demonstrating the advantage of conditioning on prior conversations.",
"It is known that for the language modeling validation perplexity measures using teacher forcing is not the best evaluation of generative capabilities, even if there is correlation between the two.",
"However, it is a commonly used metric for language modeling, and can be parallelized and computed inexpensively without the need for autoregressive sampling of output text.",
"With that in mind, two sets of evaluations were done, the first of which being an architecture search using automatic evaluation with validation perplexity and the second being a qualitative study using Amazon's Mechanical Turk 3 to assess generated samples.",
"All evaluations in this section are done on the validation set (Reddit comments from April, 2019) using perplexity, which is calculated as follows:",
"All models are trained using mixed precision arithmetic, a learning rate that linearly increases from 0 .",
"0 to 1 .",
"5 e 4 over the first 1% of iterations followed by it decaying to 0 .",
"0 over the remaining iterations with a cosine annealing schedule, and the Adam optimization algorithm with default hyper-parameters (Kingma and Ba, 2014).",
"Architecture We evaluate three main architectures under two scenarios: similar total number of encoder and decoder parameters, and similar total number of decoder parameters.",
"As such, a 355M parameter version of GCC-DEC is compared to two versions each of GCC-S2S and GCC-VAE .",
"When present, the encoder and decoder transformers shared the same hidden sizes, number of layers, and number of attention heads.",
"Additionally, all 3 https://www.mturk.com/ Model h l A Params PPL GCC-S2S 768 18 16 375M 22.09 GCC-VAE 768 20 16 362M 22.43 GCC-DEC 1024 24 16 355M 19.10 GCC-S2S 1024 24 16 810M 19.89 GCC-VAE 1024 24 16 711M 20.49 Table 3: Comparison of model architecture perplexity (PPL) trained from scratch for 200K iterations.",
"models were trained from scratch for 200,000 iterations at a global batch size of 256.",
"The results are presented in Table 3. We see that for models with similar parameter counts the GCC-DEC has the advantage, and that under similar decoder sizes having direct access to the reference material (i.e. processing the reference and conversation together in a single decoder) results in superior performance.",
"This indicates that the added complexity from additional encoding is not needed and that concatenating all relevant context is both the simplest, and most effective means of incorporating previous information.",
"Since the parameters are shared and no latent variable bottleneck is used, the model has full access to the information from the references.",
"With this, the self attention operation is able to automatically modify the model's output distribution in complex, non-linear ways without the need for manual architecture design.",
"Pre-training and References We will use GCC-DEC going forward.",
"It is important to see if we can gain additional predictive power using pre-trained models trained on large, diverse, language modeling corpora, or at the very least utilize less computing resources to achieve similar performance.",
"The GCC-DEC trained from scratch in the previous section will be compared against another model of the same size that was pre-trained using Megatron-LM (Shoeybi et al., 2019).",
"The pre-trained GCC-DEC will be fine-tuned for 70,000 iterations at a global batch size of 128.",
"We will also compare against GCC-NRC fine-tuned from the same checkpoint with the same batch size and amount of iterations.",
"The results can be seen in Table 4. We observe that with less data, the pre-trained model quickly eclipses the model trained from scratch and achieves better perplexity, highlighting the need for models with robust linguistic features learned from non-Reddit corpora.",
"Additionally, including refer-Model P.T. Iter.",
"ence history improves performance as well.",
"This difference of 0.47, while smaller than differences between results from different model sizes, is notable due to the large amount of out of sample data that the models were tested on.",
"Model Size Finally, we performed an ablation study on the size of GCC-DEC used.",
"The different size configurations and results can be seen in Table 5. All models fine-tuned from a pre-trained checkpoint for 70,000 iterations at a global batch size of 128.",
"As shown in Shoeybi et al. (2019), perplexity decreases as the size of the model increases.",
"This increase in performance is significant as it has been shown that for conversational models there is a correlation between held-out perplexity measures and human-likeness of sampled turns, especially for models within the same family (Adiwardana et al., 2020).",
"The goal of the human evaluations is to verify the results of the quantitative ablations studies concerning both model size and presence of reference history.",
"This is done by presenting participants on Mechanical Turk with 375 different ground truth conversations of variable lengths (2, 4, and 8 turns) in even proportions.",
"We utilize 3 raters per example in our setting.",
"To filter out spurious raters we explicitly detail in the instructions that payment is contingent on spending at least a certain amount of time on the samples and completing a survey about their Reddit use.",
"If a rater fails to satisfy both these conditions we discard their label.",
"Adopting this simple heuristic for rater quality led to the disqual-ification of 33.2% of our labels.",
"As is common in other work a single conversation is presented with two different realizations for the last turn (Ser-ban et al., 2017).",
"These last turns can be either machine-generated or ground truth depending on the experiment; however, every model generates exactly one reply for each of the ground truth to be used across all experiments.",
"Samples where three new turns are generated can be seen in Table 1 or in Tables 7 13 in the Appendix.",
"When presented with these different realizations, the participant is asked to rate the pair on several qualities such as which is likely to be human generated, which follows the references well, which has good quality, and which exhibits good turn-to-turn coherency.",
"For each of these the rater is asked to decide which in the pair showcases these qualities better.",
"Note that the rater has the option of selecting both of them exhibit the quality of interest, or neither of them do.",
"These were conducted in pairs to provide a frame of reference for the rater.",
"We present the findings as paired results to account for grounding effects.",
"Exact phrasings of these questions, several sample conversations, and details on our Turk setup can be found in Appendix A. We found inter-rater agreement in our studies about 75-80% of the time between 2 of the 3 users who judged samples, and about 10% of the time all 3 agreed unanimously.",
"This is in light of 4 possible choices and 3 raters.",
"It should be noted that our goal is not to make the distribution between model and human statistically different, but rather to make them as close as possible.",
"We have taken several steps to assure the quality of our human evaluations as mentioned in the previous paragraph.",
"Beyond that, any experiment with sufficient statistical power would need a prohibitively expensive number of samples per comparison.",
"The results of this study can be seen in Table 6. We find that in pairwise comparisons bigger models nearly always outperform their smaller counterpart across all tests we ran.",
"For our pairwise tests we only considered pairings between a model and the next largest model size due to the prohibitive cost of computing all pairwise comparisons.",
"For tests against our ground truth we found the results to be rather noisy.",
"Generally, we observed that the models were close to 30-40% in all categories meaning that they were sufficiently similar to the ground truth distribution of data (the neutral options were chosen more frequently).",
"However, we found that Source A Realistic Reference Quality Coherency Source B GCC-NRC (355M) 31% 35% 37% 41% 29% 36% 32% 39% Human GCC-DEC (355M) 32% 34% 38% 40% 31% 33% 32% 36% Human GCC-DEC (774M) 31% 35% 40% 39% 33% 33% 34% 36% Human GCC-DEC (1.2B) 32% 37% 40% 40% 34% 38% 29% 36% Human GCC-DEC (8.3B) 37% 40% 42% 38% 42% 42% 40% 42% Human GCC-DEC (355M) 31% 34% 41% 39% 37% 36% 33% 35% GCC-NRC (355M) GCC-DEC (774M) 33% 33% 39% 40% 34% 29% 34% 36% GCC-DEC (355M) GCC-DEC (1.2B) 31% 31% 40% 38% 33% 32% 38% 38% GCC-DEC (774M) GCC-DEC (8.3B) 41% 37% 39% 43% 38% 38% 42% 39% GCC-DEC (1.2B) Table 6: Experiment results for pairwise comparisons grading if conversation samples seemed human-like (Real-istic), were inline with the reference history (Reference), were interesting and had good grammar (Quality), and if they fit the conversation as a whole (Coherency).",
"our 8.3B parameter model was significantly more polarizing than the rest.",
"The model was capable of generating unique and engaging conversations that, when compared to the ground truth, led to it being explicitly preferred more than other models in all tests.",
"It proved to adhere to the persona more than even the ground truth conversations.",
"In addition to effectively utilizing its references to modulate style as we'd hoped, we also found that its realism, linguistic quality, and coherency was superb.",
"Furthermore, we also tested pairwise comparisons between samples from successive model sizes.",
"On average, the larger model tended to achieve similar or superior performance in all of the categories.",
"All in all, these findings reinforce the results from the quantitative experiments in that larger models better match the target distribution.",
"Reference use From our qualitative study we can clearly see the benefit of using reference history as was alluded to in prior sections.",
"In all four experiments the presence of references leads to better ground truth performance compared to GCC-NRC .",
"In Figure 2 we delve deeper into the results of the ground truth experiments and display labeler preference as a function of conversation length.",
"As can be seen, when the conversation has built up a lot of context, GCC-NRC (355M) moves away from the user style, instead focusing presumably on the style within the conversation.",
"Alternatively, GCC-DEC (355M) adheres more closely to the references instead of the prior conversation context, thus resulting in higher style match for longer conversations.",
"However, this over-adherance to the conversation style does seem to impact conversation quality for longer conversations.",
"It is possible that our inclusion of random reference conversations leads to Figure 2: Test scores compared against the number of dialog turns given as context prior to generating samples for GCC-DEC (355M) and GCC-NRC (355M).",
"this quality degradation.",
"To investigate this future work could consider incorporating information retrieval components to select contextually relevant reference conversations for more accurate personality transfer that does not degrade conversation quality.",
"Transformer Language Models Radford et al. released the first widely used transformer based generative language model, GPT .",
"Follow up work, GPT-2 , showed that language modeling quality improved as model size grew, up to 1.5B parameters (Radford et al., 2019), and that large transformer language models were able to successfully incorporate long term dependencies to model and generate diverse content.",
"Further work with generative transformer language models would go on to push model scale by testing up to 8.3B parameters and 11B parameters in two separate studies (Shoeybi et al., 2019; Raffel et al., 2019).",
"These results have demonstrated performance scaling not only for the original language modeling task, but also on plenty of downstream NLP tasks as well (Radford et al., 2019; Dai et al., 2019; Howard and Ruder, 2018; Liu et al., 2019; Zellers et al., 2019; Yang et al., 2019; Devlin et al., 2018).",
"We demonstrate that this scaling trend applies to the conditional conversation modeling task as well and validate the efficacy of transformer based language models for dialog modeling.",
"Dialog Modeling Generative, non-goal oriented dialog modeling (i.e. chit-chat) has a history of difficulty with modeling long contexts (Serban et al., 2016b), exhibiting a consistent personality (Li et al., 2016), and producing interesting and engaging responses (Li et al., 2015).",
"In general approaches to mitigating these issues have included: tweaking the base recurrent network architecture to introduce persona-based latent variables (that are either learned, amortized, or adversarially generated) (Serban et al., 2017; Bak and Oh, 2019; Chan et al., 2019; Olabiyi et al., 2019), learning speaker embeddings to modulate style (Li et al., 2016), and conditioning on outside information or heuristics to control generation (Young et al., 2018; Joshi et al., 2017; Ghazvininejad et al., 2018).",
"One particular way that inconsistent personalities have been addressed is by conditioning the model on a set of sentences describing the target personality (Zhang et al., 2018; Mazare et al., 2018).",
"As described in the prior section large transformer models have demonstrated success in generating diverse and engaging content.",
"Recent work in conversational modeling has built upon the success of these transformer-based architectures to allow for longer contexts and incorporating multiple turns (Wolf et al., 2019; Olabiyi and Mueller, 2019).",
"Several datasets have been proposed for multiturn conversation modeling (Serban et al., 2016a; Lowe et al., 2015); however, these are limited to relatively short median conversation lengths of 3 and 6-turn respectively.",
"Contexts of these lengths are not able to take full advantage of GPT-2 and other large transformer's modeling capabilities.",
"Addressing this shortcoming and curating a dataset of diverse conversations that cover a wider distribution of conversation lengths from 0 to 15 turn contexts is a central goal of this work.",
"Concurrent work has shown the value of leveraging large amounts of Reddit data to harvest naturally occurring conversations for the purposes of downstream conversational tasks (Zhang et al., 2019).",
"However, this work does not address the issue of stylistic control or the effects of scaling models to large sizes, which are central themes of our work.",
"Other concurrent work has also shown the benefit of learning from large amounts of social media conversations, but it also did not attempt to influence the model output style nor did it scale up the model to 8.3 billion parameters (Adiwardana et al., 2020).",
"When a large conversational model is trained on a diverse collection of multi-turn conversations, it is able to generate quality conversations that are engaging, coherent, and plausibly human.",
"Furthermore, when conditioned on prior conversations, the model is able to utilize a speaker's personality when choosing how to reply in a conversation to allow for greater control and more diverse responses.",
"In the future, we aim to leverage these pre-trained models to advance SOTA on downstream conversational tasks, such as knowledge-grounded conversations or question answering.",
"Recent advancements in learnable information retrieval systems could select contextually relevant references to further strengthen the quality of generated dialogue."
] | [
"abstain",
"method",
"method",
"abstain",
"result",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"other",
"result",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain"
] |
[
"Transformer-based models have achieved state-of-the-art performance on short-input summarization.",
"However, they still struggle with summarizing longer text.",
"In this paper, we present DYLE , a novel dynamic latent extraction approach for abstractive long-input summarization.",
"DYLE jointly trains an extractor and a generator and treats the extracted text snippets as the latent variable, allowing dynamic snippet-level attention weights during decoding.",
"To provide adequate supervision, we propose simple yet effective heuristics for oracle extraction as well as a consistency loss term, which encourages the extractor to approximate the averaged dynamic weights predicted by the generator.",
"We evaluate our method on different long-document and long-dialogue summarization tasks: GovReport, QMSum, and arXiv.",
"Experiment results show that DYLE outperforms all existing methods on GovReport and QMSum, with gains up to 6.1 ROUGE, while yielding strong results on arXiv.",
"Further analysis shows that the proposed dynamic weights provide interpretability of our generation process.",
"1 1 Introduction Transformer-based (Vaswani et al., 2017) pretrained language models (PLMs) such as BART (Lewis et al., 2020a) and T5 (Raffel et al., 2020), have achieved state-of-the-art performance on short text summarization.",
"However, due to the high memory complexity of the full self-attention (Tay et al., 2020a), PLMs still struggle to handle long inputs (Rohde et al., 2021).",
"Model efficiency and summary quality present a pair of challenges (Huang et al., 2021): models need to capture information scattered across the long input while maintaining a low computational cost.",
"Prior models tackled long input summarization mostly in four ways.",
"First, sparse attention (Child et al., 2019; Beltagy et al., 2020; Tay et al., 2020b) is used to reduce the memory complexity of the Transformers so that they can attend to more tokens.",
"Second, extract-then-generate methods extract salient texts from the input and then summarize the extracted texts.",
"Extractors are either independently trained with full supervision (Zhong et al., 2021b) or optimized using reinforcement learning (Williams, 1992; Chen and Bansal, 2018; Bae et al., 2019; Brainskas et al., 2021).",
"Third, models are proposed to divide source text into sections (Gidiotis and Tsoumakas, 2020; Wu et al., 2021; Liu et al., 2021) which are individually summarized and combined to form a full summary.",
"Fourth, hierarchical models (Rohde et al., 2021; Zhu et al., 2020) improve summarization by capturing sentence or discourse level dependencies.",
"We elaborate on these four directions and their limitations in Section 2. We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them (Kiyoumarsi, 2015; Sun et al., 2020).",
"The extract-then-generate framework is based on the assumption that salient information useful for summarization only occupies a small portion of the input, 1687 which is a sensible assumption given the long input length.",
"This approach shortens the source input to a pre-set length, which addresses the main challenge of the model not being able to handle longer input beyond a certain limit.",
"However, previous separately-trained extract-then-generate approaches are limited as they suffer from cascaded errors from the extractor to the generator.",
"Though various reinforcement learning techniques are introduced to bridge the two steps, they have noticeable drawbacks (discussed in Section 3.3), and we argue that the long input makes this approach suboptimal.",
"In this paper, we propose a new approach for long-input summarization: Dynamic Latent Extraction for Abstractive Summarization (DYLE ).",
"DYLE jointly trains the extractor and the generator and keeps the extracted text snippets latent.",
"For an output token, DYLE compute its probability conditioned on each input snippet separately , and its generation probability is computed by marginal-izing over all the input snippets under a learned dynamic weights assigned by the generator conditioned on the previously generated tokens.",
"We optimize the extractor with two surrogate losses.",
"First, we compute the extractive oracle based on the reference summary with a greedy search over the best ROUGE scores.",
"These oracle snippets are used as targets for the extractor learning signal.",
"Moreover, we propose consistency loss to encourage the extractor to approximate its own predicted weights on the snippet to the averaged dynamic weights predicted by the generator.",
"We conducted experiments on three long-input summarization datasets: GovReport (Huang et al., 2021) and arXiv (Cohan et al., 2018) for long-document summarization, and QMSum (Zhong et al., 2021b) for long-dialogue summarization.",
"Our method DYLE largely outperforms existing methods on GovReport and QMSum, while achieving strong results on arXiv.",
"Notably, DYLE yields gains of 4.2/6.1/4.0 of ROUGE-1/2/L points over the previous best method on GovReport.",
"These experiments demonstrate the generalizability of DYLE to multiple long-input summarization tasks.",
"We summarize our contributions as follows: We introduce DYLE , a dynamic latent extraction approach for abstractive long-input summarization.",
"DYLE better captures information in the long input and reduces computational cost; We propose multiple auxiliary optimizations for the effective training of DYLE : 1) extractive oracle as a learning signal for the extractor; 2) consistency loss that bridges extraction and generation; 3) hybrid training methods that make the extraction more robust; Experimental results show that DYLE largely outperforms the state-of-the-art on two long input summarization datasets.",
"We also conducted a detailed analysis that shows dynamic weights improve model interpretability.",
"Sparse attention mechanism The full attention mechanism has a quadratic memory cost.",
"Prior research works have proposed different sparse attention mechanisms to reduce the memory cost.",
"Longformer (Beltagy et al., 2020) uses a dilated sliding window of blocks and global attention patterns.",
"BigBird (Zaheer et al., 2020) employs sliding windows and random blocks.",
"Reformer (Kitaev et al., 2020) uses the locality-sensitive hashing.",
"In addition to optimizing the encoder self-attention, Huang et al. (2021) proposes head-wise positional strides to reduce the cost of the encoder-decoder attention.",
"However, sparse attention diminishes the benefits of pretraining and sacrifices parts of the receptive field.",
"Extract-then-generate method This method extracts salient text snippets from the input, followed by generating an overall summary.",
"Most of these approaches are trained separately (Zhang et al., 2019; Lebanoff et al., 2019; Xu and Durrett, 2019; Bajaj et al., 2021; Zhang et al., 2021b), which suffer from information loss as we pass the extracted snippets to the generator.",
"Some approaches attempt to reduce that loss by bridging the two stages.",
"Chen and Bansal (2018) adopts reinforcement learning (RL) with a sentence-level policy gradient.",
"Bae et al. (2019) proposes summary-level policy gradient.",
"Using RL suffers from various drawbacks on long input texts, which will be elaborated in Section 3.3.",
"DYLE is different as we jointly train an extract-then-generate model for summarization using latent variables.",
"conquer (Gidiotis and Tsoumakas, 2020; Grail et al., 2021; Zhang et al., 2021a).",
"It breaks a long input into multiple parts, which are summarized separately and combined to produce a final summary.",
"However, these models do not capture the contextual dependencies across parts and assume that the input has certain structure.",
"Hierarchical models Various hierarchical models have been proposed to handle the longer inputs.",
"Cohan et al. (2018) models the document discourse structure with a hierarchical encoder and a discourse-aware decoder.",
"HAT-Bart (Rohde et al., 2021) proposes a new Hierarchical Attention Transformer-based architecture that attempts to capture sentence and paragraph-level information.",
"HMNet (Zhu et al., 2020) builds a hierarchical structure that includes discourse-level information and speaker roles.",
"However, these models focus mainly on model performance and not on reducing the memory and computational cost.",
"An overview of our approach is shown in Figure 1. In Section 3.1, we formulate our task and the extractor-generator framework.",
"In Section 3.2, we introduce our parameterization of the extractor for long inputs.",
"In Section 3.3, we introduce generator formulation and the novel consistency loss.",
"The extractor module is both optimized with the consistency loss and the oracle loss, which we elaborate on in Section 3.4.",
"The overall training objective is summarized in Section 3.5.",
"In the long-input summarization task, the input consists of L text snippets, X = ( x 1 , . . . , x L ) , and an optional query q if a query is paired with a summary.",
"In long-input summarization, the number of text snippets, L , could be potentially large.",
"The output is a summary y of length T .",
"For the dialogue summarization task, dialogue utterances by each speaker are used as snippets.",
"For documents, we tokenize the input into sentences and use each sentence as a snippet.",
"The goal is to learn a model that generates a sequence of summary tokens y given the input snippets X and the previously generated tokens y <t : P ( y | q, X ) = T (cid:89) t =1 P ( y t | q, X, y <t ) RoBERTa RoBERTa query query Top-Extracted snippets Document Figure 2: Long-input extractor.",
"The extractor takes the query and the source text as input and outputs a score s i = E ( q, x i ) for each text snippet x i .",
"Here is the extractor parameters.",
"We extract K snippets XK from the document X based on their scores: XK = topK ( E ( q, x i ) , x i X ) (1) After retrieving XK from X , the extractor-generator framework models the output probability by replacing X with XK , i.e., P ( y | q, X ) = P ( y | q, XK ) = T (cid:89) t =1 P ( y t | q, XK , y <t ) (2) Note that the topK operation in Eq.",
"(1) is nondifferentiable, and we do not propagate gradients through topK ; instead, we propose methods to optimize the extractor in Section 3.3 and Section 3.4.",
"An interesting research question is how to design the extractor for long inputs.",
"Limited by GPU memory, it is impractical to concatenate all snippets and encode them with a large pre-trained language model.",
"As shown in Figure 2, we group consecutive snippets into chunks .",
"We concatenate the query q with each chunk and compute the encoded vector for each snippet independently within the chunk it belongs to.",
"We project the encoded vectors to scalar scores s i = E ( q, x i ) using an MLP.",
"The first challenge is that the extraction operation (topK in Eq.",
"(1)) is non-differentiable.",
"One approach is to adopt RL-based optimizations (Chen and Bansal, 2018; Bae et al., 2019), which has two drawbacks.",
"First, reinforcement learning for large action spaces (i.e., extracting K out of L snippets when L is very large) has high variances.",
"Second, current methods mostly use sentence-level ROUGE (Chen and Bansal, 2018) or summary-level ROUGE (Bae et al., 2019) as training rewards.",
"Using sentence-level ROUGE could potentially select sentences with overlapping contents (Narayan et al., 2018), resulting in redundant final summaries.",
"Using a summary-level ROUGE leads to the sparsity of the training signal, and longer input makes this approach harder to train.",
"The second challenge is interpretability: one might want to know whether the generator is leveraging the extracted information at each decoding time step.",
"To address these challenges, we propose a generator that dynamically assigns weights to every extracted snippet at each time step.",
"Different from the extractor scores, which are independent of the decoding time step, the generator assigns different dynamic scores at different time steps.",
"Dynamic weights make the decoding process interpretable and help denoise the extraction by down-weighting irrelevant snippets.",
"It also provides training signals for the extractor using consistency loss .",
"Generator formulation The overview of the generator is shown in Figure 3.",
"For each extracted snippet x , the generator predicts the generation probability P ( y t | q, x, y <t ) on this snippet and a dynamic weight P ( x | q, XK , y <t ) for this snippet.",
"The independent encoding of each extracted snippet saves memory because the snippets do not need to attend to each other.",
"Without loss of generality, we assume that P ( | q, x, y <t ) is computed by first mapping the input ( q, x, y <t ) to a contextualized representation vector h xt .",
"For Transformers (Vaswani et al., 2017) and encoder-decoder with attention models (Bahdanau et al., 2015), h xt is usually the model's output before the final language model head.",
"The generation probability P ( y t | q, x, y <t ) is computed by feeding h xt into the language model head.",
"For the dynamic weight P ( x | q, XK , y <t ) , we adopt a separate MLP to map each h xt to a scalar logit l x , and P ( | q, X, y <t ) is defined as softmax( { l x } x X ) .",
"We compute the generation probability by marginal-izing over all extracted snippets: P ( y | q, XK ) = T (cid:89) t =1 (cid:88) x XKP ( y t | q, x, y <t ) P ( x | q, XK , y <t ) (3) The dynamic weight P ( x | q, XK , y <t ) at each decoding time step t allows us to interpret how the generator utilizes the extracted snippets.",
"For example, a larger weight to a particular snippet indicates the larger importance of the snippet to the current decoding time step.",
"The generation loss is defined as the NLL of the gold summary: L gen = log P ( y | q, XK ) (4) where P ( y | q, XK ) is defined in Eq.",
"(2).",
"Here we do not propagate gradients of L gen to the extractor parameters since topK is non-differentiable.",
"Instead, methods to optimize the extractor are described in Section 3.3 and Section 3.4.",
"Consistency loss We also leverage the dynamic weights to provide a training signal for the extractor.",
"Since the dynamic weight of a snippet can be interpreted as the importance of the snippet at a particular time step, we average the dynamic weights over all the decoding steps and view the averaged weight as the overall importance of the snippet.",
"Based on this intuition, we propose what we term as consistency loss , which measures the distance between the averaged dynamic weights distribution and the extractor distribution.",
"We want these two distributions to be close on an arbitrary subset of X .",
"For simplicity, we take XK as the subset and define the consistency loss as L consist = KL (cid:104) 1 TT (cid:88) t =1 P ( | q, XK , y <t ) || softmax ( E ( q, x i ) , x i XK ) (cid:105) (5) 1690 Note that the consistency loss is superscripted with the extractor's parameters , which means that we do not compute gradients for the generator's parameters .",
"Since we want the distributional distance to be small on an arbitrary subset of X , we do not propagate gradients through the topK operator.",
"For long-input summarization, the extracted snippets XK used during training are important for stable optimization.",
"Instead of using XK defined in Eq.",
"(1), we propose to leverage extractive oracles during training.",
"No extractive oracles are used during test time.",
"Greedy search for extractive oracles Extractive oracles denote a set of selected text snippets whose concatenation maximizes the evaluation metric given the gold summary.",
"We implement the extractive oracle using greedy search.",
"Specifically, we start with an empty set, and we iteratively select a snippet from the input such that the concatenation of that snippet and the already selected snippets maximizes the average of ROUGE-1, ROUGE-2, and ROUGE-L scores given the gold summary.",
"We denote the extractive oracles as X o .",
"Hybrid training We leverage the extractive oracles to define XK used during training.",
"If the number of oracles equals or exceeds K , we define XK as the first K oracle snippets.",
"If the number of oracles is less than K , we define XK as the union of X o and the top snippets ranked by the extractor that is not appearing in X o .",
"Such hybrid training has two benefits.",
"First, compared with XK defined in Eq.",
"(1), it provides higher-quality inputs to the generator.",
"Second, it reduces the reliance on the oracle and improves the generalizability of our model beyond the training set, as other text snippets omitted in the greedy search might help the generation.",
"Oracle loss The extractive oracles X o are used as a supervision signal for the extraction part of our model.",
"The oracle loss L oracle is computed from the cross-entropy loss between all chunks in the extractor selected set and the extractive oracle.",
"Formally, the oracle loss is computed as L oracle = 1 | X o | (cid:88) x X o log e E ( q,x ) (cid:80) x i X e E ( q,x i ) (6) Dataset Query Format Src.",
"The overall training objective of our method is",
"where g , o , and c are hyperparameters to bal-ance the loss components.",
"Gradients are computed for the superscripted parameters.",
"Specifically, the extractor is solely optimized with the consistency loss and the oracle loss, and the generator is solely optimized with the generation loss.",
"QMSum (Zhong et al., 2021b) is a benchmark for query-based multi-domain meeting summarization.",
"It consists of meetings from three domains: AMI (Carletta et al., 2005), ICSI (Janin et al., 2003), and committee meetings of the Welsh Parliament and Parliament of Canada; GovReport (Huang et al., 2021) is a large-scale long document summarization dataset, consisting of about 19.5k U.S. government reports with expert-written abstractive summaries; GovReport is a good benchmark as it contains significantly longer documents (average 9.4k words) and summaries (553 words) than other long document datasets, such as ArXiv, PubMed (Cohan et al., 2018), Bill-Sum (Kornilova and Eidelman, 2019), and Big-Patent (Sharma et al., 2019); arXiv (Cohan et al., 2018) is a dataset of scien-tific articles from arXiv.",
"Abstracts of the articles are used as the target summary.",
"ArXiv is chosen over PubMed (Cohan et al., 2018) as arXiv contains longer articles compared to PubMed.",
"Baselines for Comparisons We compare DYLE with the previous state-of-the-art methods on the aforementioned three datasets.",
"More specifically: 1) For GovReport, we report the performance from the original paper, which uses various encoder self-attention and the proposed HEPOS encoder-decoder attention; 2) For QMSum, we compare with Zhong et al. (2021a), the current SoTA and other baselines mentioned in that work; 3) For arXiv, we include the results from the best performing models in previous works, including ExtSum-LG (Xiao and Carenini, 2019), PEGASUS (Zhang et al., 2020), DANCER (Gidiotis and Tsoumakas, 2020), BigBird (Zaheer et al., 2020), HEPOS + LSH (Huang et al., 2021), HAT-BART (Rohde et al., 2021), Longformer (Beltagy et al., 2020), and SSN-DM (Cui and Hu, 2021).",
"Note that those baselines spans over different strategies to handle long input, such as sparse-attention (HEPOS, BigBird, Longformer), hierarchical attention (HAT-BART), extract-then-generate (Locator + different generators).",
"Pretrained-LM The extractor is initialized with RoBERTa-base (Liu et al., 2019) weights.",
"The generator is initialized with BART-large (Lewis et al., 2020a) weights.",
"We use the Adam optimizer and set the extractor learning rate to 5e-5 and the generator learning rate to 5e-6.",
"Hyperparameters g , o , and c are the coeffi-cients for the generation loss, oracle loss, and the consistency loss respectively.",
"For g and o , we did a 2-step binary search between 0 and 2. For c , we did a 3-step binary search between 0 and 10.",
"For the QMSum dataset, we used g = 1 , o = 1 , c = 1 .",
"For the GovReport dataset, we used g = 0 .",
"5 , o = 1 , c = 1 .",
"For the ArXiv dataset, we used g = 0 .",
"5 , o = 1 , c = 5 .",
"Hardware We apply gradient checkpointing (Chen et al., 2016) to save the GPU memory.",
"Each experiment is run on one NVIDIA Quadro RTX 8000 GPU.",
"The effective batch size is set to 8.",
"The evaluation results are summarized in Table 2, Table 3, and Table 4. For GovReport, DYLE yields",
"gains of 4.15/6.21/4.00 of ROUGE-1/2/L scores compared to the previous best method.",
"Experiments on GovReport show that DYLE is performant over prior sparse attention approaches.",
"On QMSum, DYLE yields the new state-of-the-art ROUGE-1/2/L scores of 34.42/9.71/30.10, outperforms UniLM with DialogLM pretraining.",
"Comparing DYLE with locator-based models on the QMSum dataset shows that DYLE outperforms prior extract-then-generate approaches where the locator is independently trained with intermediate annotated text spans.",
"This shows the effectiveness of DYLE's joint training approach.",
"These results show that DYLE can be applied to both the long document summarization and long dialogue summarization tasks.",
"DYLE 's better performance can be attributed to lowered information loss between the extraction and the generation steps and its ability to handle input of a much longer length.",
"We notice that while DYLE largely outperforms the LSH baseline (Huang et al., 2021) on the GovReport dataset, it underperforms the LSH baseline on arXiv.",
"We posit two reasons.",
"First, the input of the GovReport is much longer than that of arXiv.",
"Most, if not all, of the sentences in the arXiv input article can be processed by the LSH model.",
"Second, the summaries of the arXiv dataset are more abstractive than those of GovReport.",
"It is possible that individually extracted text snippet is not the best linguistic unit for generating output tokens.",
"It is our future work to explore the optimal input unit for an extract-then-generate approach.",
"Nevertheless, DYLE outperforms other extraction-based approaches ( e.g., SSN-DM (Cui and Hu, 2021)) and divide-and-conquer approaches ( e.g., DANCER (Gidiotis and Tsoumakas, 2020)).",
"We conduct ablation studies to investigate the effectiveness of the auxiliary optimizations we introduced.",
"Specifically, we report the full model's performance after removing 1) hybrid training, 2) consistency loss, 3) extractive oracle loss.",
"In our default model, the consistency loss is computed on the combination of the extracted snippets and oracle snippets; in the w/o hybrid experiment, the consistency loss is only computed on the set of oracle snippets; in w/o consistency experiment, the consistency loss is not computed.",
"The results are summarized in Table 5. Note that without the hybrid training optimization, only the extractive oracles will be used to train the generator.",
"When the consistency loss is not calculated, the extractor and the generator can be viewed as being trained independently with the extractive oracles.",
"We see that excluding either of the hybrid training, consistency loss, or oracle loss optimization leads to a performance drop.",
"Training the model without the supervision of the oracle leads to the greatest decrease in model performance, showing the importance of good supervision for the extractor.",
"Removing the consistency loss also decreases the model performance.",
"This shows that the consistency loss allows the extractor to better learn to select salient snippets from the input text and enables DYLE to generalize better to the test set.",
"Analysis of extracted snippets We are interested in the amount of salient information passed to the generator.",
"To investigate this, we report the decomposed precision and recall of ROUGE scores in Table 6. We observe that the extracted snippets have much higher recall than the generated summaries, while the generated summaries have higher precision.",
"This suggests that to improve the overall performance, we can increase the information coverage (i.e., recall) of the extractor and improve the accuracy of the generator in identifying the salient snippets (i.e., precision).",
"approach is more interpretable than sparse attention and two-step extraction-generation pipeline meth-1693",
"Specifically, dynamic weights in the generator shows how the information is used throughout the decoding process.",
"In Figure 4, we visualize the dynamic weights for the extracted snippets assigned by the generator during decoding.",
"In each subfig-ure, we visualize the dynamic weight matrices of the generated summary and a random summary from other samples in the validation set.",
"The x axis and y -axis represent the index of the extracted topK snippets and the decoding time step, respectively.",
"Darker squares denote higher weights.",
"For each generated summary, we observe multiple consecutive high-weight areas, indicating alignments between the extracted snippets and the generated summary.",
"By contrast, weights are uniformly distributed for random summaries.",
"Interestingly, we observe that, on QMSum, fewer sentences are considered when generating the summaries.",
"Our explanation for this observation is that QMSum is a query-based dataset, where the queried information is more concentrated in a few snippets.",
"By contrast, we find that a larger number of snippets are used on the GovReport dataset as seen in Figure 4, as GovReport is a general summarization dataset.",
"Effect of number of extracted snippets To evaluate the effect of number of extracted snippets on model performance, we vary the value of K of topK in Eq.",
"(1) and test it on both the GovReport and QMSum datasets.",
"We observe that the model performance generally increases as the value of K increases.",
"This is expected as more extracted snippets provide the generator with more information to form a final summary.",
"The results are summa-1694 R-1 R-2 R-L GovReport K =25 61.01 28.83 57.82 K =20 59.25 27.46 55.74 K =15 58.55 26.95 54.89 K =10 54.98 24.10 51.25 QMSum K =25 34.42 9.71 30.10 K =20 33.10 8.69 29.62 K =15 31.78 8.36 28.31 K =10 33.30 9.18 29.53 Table 7: Comparing model performance with different values of K on the GovReport and QMSum dataset R-1 R-2 R-L GovReportExtractor Output 61.01 28.83 57.82 Oracle 68.02 39.16 65.29 QMSumExtractor Output 34.42 9.71 30.10 Oracle 39.80 14.74 36.06 Table 8: Feeding extractive oracles to generator.",
"rized in Table 7. Due to the limit of GPU memory, the largest K value we tried is 25.",
"Effect of consistency loss We evaluate the effect of consistency loss on extractor performance.",
"Note that removing the consistency loss means that the extractor and the generator are independently trained.",
"The results are presented in Table 5 as part of the ablation study.",
"Removing the consistency loss leads to worse model performance.",
"We observe that the consistency loss helps the model better learn the importance of the selected text snippets useful for the generation.",
"Extractor performance compared with extractive oracles We feed the extractive oracles to the generator.",
"The results are summarized in Table 8.",
"We observe that extractive oracles contain more salient information than the text snippets extracted by the extractor.",
"Feeding the extractive oracle to the generator indicates the upper bound of the extractor performance.",
"However, we observe that the gap between the performance of using the extractive oracle and using the extractor output is relatively small.",
"Comparison with RAG The generator of our method is related to but differs significantly from Retrieval-Augmented Generation (RAG) (Lewis et al., 2020b).",
"The similarity only lies in the idea of marginalization over a set of text snippets, which is shown to be useful in question answering as well (Ni et al., 2021b).",
"However, unlike our dynamic weights, the weights in RAG remains static during decoding.",
"In our notations, RAG's generation probability can be formulated as: P ( y | q, XK ) = T (cid:89) t =1 P ( y t | q, XK , y <t ) = T (cid:89) t =1 (cid:88) x XKP ( y t | q, x, y <t ) P ( x | q, XK ) (8) The static weight P ( x | q, XK ) in Eq.",
"8 is computed based on q and XK , while our dynamic weight P ( x | q, XK , y <t ) is additionally conditioned on the already generated tokens.",
"Limitations and future directions We acknowledge that joint training of the extractor and the generator cannot eliminate information loss, which might be addressed by combining DYLE and sparse attention to encode longer snippets.",
"Though formulated for long-input summarization, DYLE can be applied to general long-input generation tasks where information is scattered across the input, e.g., open-domain question answering and multi-turn dialogue systems with long dialogue history.",
"In this paper, we propose the first framework that jointly trains an extract-then-generate model with latent extraction.",
"The first-step extraction picks out salient information from the long input, thereby extending the input length that the model can handle.",
"Our novel joint training method addresses the challenge of information loss associated with the prior extract-then-generate approaches.",
"Our model largely outperforms the current state-of-the-art on GovReport and QMSum, while achieving strong results on arXiv.",
"Lastly, DYLE has the advantages of being able to process arbitrarily long input with a lower memory cost and interpretable generator weights.",
"The authors would like to thank Yixin Liu and Ming Zhong for the discussions.",
"We also would like to thank the anonymous reviewers for their helpful comments.",
"This work is supported in part by a grant from Microsoft Research."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"other",
"other",
"abstain",
"abstain",
"objective",
"other",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"objective",
"abstain",
"objective",
"other",
"abstain",
"other",
"other",
"other"
] |
[
"Multilingual sequence labeling is a task of predicting label sequences using a single unified model for multiple languages.",
"Compared with relying on multiple monolingual models, using a multilingual model has the benefit of a smaller model size, easier in online serving, and generalizability to low-resource languages.",
"However, current multilingual models still underperform individual monolingual models significantly due to model capacity limitations.",
"In this paper, we propose to reduce the gap between monolingual models and the unified multilingual model by distilling the structural knowledge of several monolingual models (teachers) to the unified multilingual model (student).",
"We propose two novel KD methods based on structure-level information: (1) approximately minimizes the distance between the student's and the teachers' structure-level probability distributions, (2) aggregates the structure-level knowledge to local distributions and minimizes the distance between two local probability distributions.",
"Our experiments on 4 multilingual tasks with 25 datasets show that our approaches outperform several strong baselines and have stronger zero-shot generalizability than both the baseline model and teacher models.",
"Sequence labeling is an important task in natural language processing.",
"Many tasks such as named entity recognition (NER) and part-of-speech (POS) tagging can be formulated as sequence labeling problems and these tasks can provide extra information to many downstream tasks and products such as searching engine, chat-bot and syntax parsing (Jurafsky and Martin, 2009).",
"Most of the previKewei Tu is the corresponding author.",
"This work was conducted when Xinyu Wang was interning at Alibaba DAMO Academy.",
"ous work on sequence labeling focused on monolingual models, and the work on multilingual sequence labeling mainly focused on cross-lingual transfer learning to improve the performance of low-resource or zero-resource languages (Johnson et al., 2019; Huang et al., 2019a; Rahimi et al., 2019; Huang et al., 2019b; Keung et al., 2019), but their work still trains monolingual models.",
"However, it would be very resource consuming considering if we train monolingual models for all the 7,000+ languages in the world.",
"Besides, there are languages with limited labeled data that are required for training.",
"Therefore it is beneficial to have a single unified multilingual sequence labeling model to handle multiple languages, while less attention is paid to the unified multilingual models due to the significant difference between different languages.",
"Recently, Multilingual BERT (M-BERT) (Devlin et al., 2019) is surprisingly good at zero-shot cross-lingual model transfer on tasks such as NER and POS tagging (Pires et al., 2019).",
"M-BERT bridges multiple languages and makes training a multilingual sequence labeling model with high performance possible (Wu and Dredze, 2019).",
"However, accuracy of the multilingual model is still inferior to monolingual models that utilize different kinds of strong pretrained word representations such as contextual string embeddings (Flair) proposed by Akbik et al. (2018).",
"To diminish the performance gap between monolingual and multilingual models, we propose to utilize knowledge distillation to transfer the knowledge from several monolingual models with strong word representations into a single multilingual model.",
"Knowledge distillation (Bucilua et al., 2006; Hinton et al., 2015) is a technique that first trains a strong teacher model and then trains a weak student model through mimicking the output probabilities (Hinton et al., 2015; Lan et al., 2018; Mirzadeh et al., 2019) or hidden states (Romero et al., 2014; Seunghyun Lee, 2019) of the teacher model.",
"The student model can achieve an accuracy comparable to that of the teacher model and usually has a smaller model size through KD.",
"Inspired by KD applied in neural machine translation (NMT) (Kim and Rush, 2016) and multilingual NMT (Tan et al., 2019), our approach contains a set of monolingual teacher models, one for each language, and a single multilingual student model.",
"Both groups of models are based on BiLSTM-CRF (Lample et al., 2016; Ma and Hovy, 2016), one of the state-of-the-art models in sequence labeling.",
"In BiLSTM-CRF, the CRF layer models the relation between neighbouring labels which leads to better results than simply predicting each label separately based on the BiLSTM outputs.",
"However, the CRF structure models the label sequence globally with the correlations between neighboring labels, which increases the difficulty in distilling the knowledge from the teacher models.",
"In this paper, we propose two novel KD approaches that take structure-level knowledge into consideration for multilingual sequence labeling.",
"To share the structure-level knowledge, we either minimize the difference between the student's and the teachers' distribution of global sequence structure directly through an approximation approach or aggregate the global sequence structure into local posterior distributions and minimize the difference of aggregated local knowledge.",
"Experimental results show that our proposed approach boosts the performance of the multilingual model in 4 tasks with 25 datasets.",
"Furthermore, our approach has better performance in zero-shot transfer compared with the baseline multilingual model and several monolingual teacher models.",
"BiLSTM-CRF (Lample et al., 2016; Ma and Hovy, 2016) is one of the most popular approaches to sequence labeling.",
"Given a sequence of n word tokens x = { x 1 , , x n } and the corresponding sequence of gold labels y = { y 1 , , y n } , we first feed the token representations of x into a BiLSTM to get the contextual token representations r = { r 1 , , r n } .",
"The conditional probability p ( y | x ) is defined by: ( y (cid:48) , y, r i ) = exp( W Ty r i + b y (cid:48) ,y ) (1) p ( y | x ) = n (cid:81) i =1 ( y i 1 , y i , r i ) (cid:80) y (cid:48) Y ( x ) n (cid:81) i =1 ( y (cid:48) i 1 , y (cid:48) i , r i ) (2) where Y ( x ) denotes the set of all possible label sequences for x , is the potential function, W y and b y (cid:48) ,y are parameters and y 0 is defined to be a special start symbol.",
"W Ty r i and b y (cid:48) ,y are usually called emission and transition scores respectively.",
"During training, the negative log-likelihood loss for an input sequence is defined by: LNLL = log p ( y | x ) BiLSTM-Softmax approach to sequence labeling reduces the task to a set of label classification problem by disregarding label transitions and simply feeding the emission scores WT r i into a softmax layer to get the probability distribution of each variable y i .",
"p ( y i | x ) = softmax( WT r i ) (3) The loss function then becomes: LNLL = n (cid:88) i =1 log p ( y i | x ) In spite of its simplicity, this approach ignores correlations between neighboring labels and hence does not adequately model the sequence structure.",
"Consequently, it empirically underperforms the first approach in many applications.",
"A typical approach to KD is training a student network by imitating a teacher's predictions (Hinton et al., 2015).",
"The simplest approach to KD on BiLSTM-Softmax sequence labeling follows Eq.",
"3 and performs token-level distillation through minimizing the cross-entropy loss between the individual label distributions predicted by the teacher model and the student model: L Token = n (cid:88) i =1 |V| (cid:88) j =1 p t ( y i = j | x ) log p s ( y i = j | x ) (4) where p t ( y i = j | x ) and p s ( y i = j | x ) are the label distributions predicted by the teacher model and the student model respectively and |V| is the number of possible labels.",
"The final loss of the student M ono -E m b e d y 1 y 2 y 3 x 3 x 2 x 1 M o n o li n g u a l T e a c h e r M u l t i E m b e d y 1 y 2 y 3 x 3 x 2 x 1 M u l t ili n g u a l S t u d e n t Embed BiLSTM CRF-layer Pos.Mono Pos.",
"model combines the KD loss and the negative log-likelihood loss: L = L Token + (1 ) LNLL where is a hyperparameter.",
"As pointed out in Section 2.1, however, sequence labeling based on Eq.",
"3 has the problem of ignoring structure-level knowledge.",
"In the BiLSTM-CRF approach, we can also apply an Emission distillation through feeding emission scores in Eq.",
"3 and get emission probabilities p ( y i | x ) , then the loss function becomes: L Emission = n (cid:88) i =1 |V| (cid:88) j =1 p t ( y i = j | x ) log p s ( y i = j | x ) (5) 3 Approach In this section, we propose two approaches to learning a single multilingual sequence labeling model (student) by distilling structure-level knowledge from multiple mono-lingual models.",
"The first approach approximately minimizes the difference between structure-level probability distributions predicted by the student and teachers.",
"The second aggregates structure-level knowledge into local posterior distributions and then minimizes the difference between local distributions produced by the student and teachers.",
"Our approaches are illustrated in Figure 1.",
"Both the student and the teachers are BiLSTM-CRF models (Lample et al., 2016; Ma and Hovy, 2016), one of the state-of-the-art models in sequence labeling.",
"A BiLSTM-CRF predicts the distribution of the whole label sequence structure, so token-level distillation is no longer possible and structure-level distillation is required.",
"Inspired by Kim and Rush (2016), we propose to encourage the student to mimic the teachers' global structural probability distribution over all possible label sequences:",
"However, |Y ( x ) | is exponentially large as it represents all possible label sequences.",
"We propose two methods to alleviates this issue through efficient approximations of p t ( y | x ) using the k -best label sequences.",
"Top-K Eq.",
"6 can be seen as computing the expected student log probability with respect to the teacher's structural distribution: L Str = E p t ( y | x ) [log p s ( y | x )] (7) The expectation can be approximated by sampling from the teacher's distribution p t ( y | x ) .",
"However, unbiased sampling from the distribution is difficult.",
"We instead apply a biased approach that regards the k -best label sequences predicted by the ( y k 1 , y k , r k ) LABELSEQ .",
"teacher model as our samples.",
"We use a modified Viterbi algorithm to predict the k -best label sequences T = { y 1 , . . . , y k } .",
"Eq.",
"7 is then approximated as: L Top-K = 1 k (cid:88) y T log p s ( y | x ) (8) This can also be seen as data augmentation through generating k pseudo target label sequences for each input sentence by the teacher.",
"Weighted Top-K The Top-K method is highly biased in that the approximation becomes worse with a larger k .",
"A better method is to associate weights to the k samples to better approximate p t ( y | x ) .",
"Eq.",
"7 is then approximated as: L Top-WK = (cid:88) y T p (cid:48) t ( y | x ) log p s ( y | x ) (9) This can be seen as the student learning weighted pseudo target label sequences produced by the teacher for each input sentence.",
"The Top-K approach is related to the previous work on model compression in neural machine translation (Kim and Rush, 2016) and multilingual neural machine translation (Tan et al., 2019).",
"In neural machine translation, producing k -best label sequences is intractable in general and in practice, beam search decoding has been used to approximate the k -best label sequences.",
"However, for linear-chain CRF model, k -best label sequences can be produced exactly with the modified Viterbi algorithm.",
"The Top-K is approximate with respect to the teacher's structural distribution and still is slow on large k .",
"Our second approach tries to distill structure-level knowledge based on tractable local (token-wise) distributions q ( y k | x ) , which can be exactly computed.",
"where Z is the denominator of Eq.",
"2 that is usually called the partition function and ( y k ) and ( y k ) are calculated in forward and backward pass utilizing the forward-backward algorithm.",
"We assume that ( y n ) = 1 .",
"Given the local probability distribution for each token, we define the KD loss function in a similar manner with the token-level distillation in Eq.",
"5. L Pos.",
"The difference between token-level distillation and posterior distillation is that posterior distillation is based on BiLSTM-CRF and conveys global",
"Algorithm 1 KD for Multilingual Sequence Labeling",
"1: Input : Training corpora D = { D 1 , . . . , D l } with l languages, monolingual models T = { T 1 , . . . , T l } pretrained on the corresponding training corpus, learning rate , multilingual student model M with parameters , total training epochs S , loss interpolation coefficient , interpolation annealing rate .",
"2: Initialize : Randomly initialize multilingual model parameters .",
"Set the current training epoch S = 0 , current loss interpolation = 1 .",
"Create an new empty training dataset D .",
"3:4: for D i D do 5: for ( x ij , y ij ) D i do 6: Teacher model T i reads the input x ij and predicts probability distributions p ij required for KD.",
"7: Append ( x ij , y ij , p ij ) into the new training dataset D .",
"8: end for 9: end for 10:11: while S < S do 12: S = S + 1 .",
"13: for mini-batch ( x , y , p ) sampled from D do 14: Compute the KD loss LKD ( x , p ) .",
"15: Compute the golden target loss LNLL ( x , y ) .",
"16: Compute the final loss L = LKD + (1 ) LNLL .",
"17: Update : = L / .",
"18: if > 0 do 19: Update interpolation factor : = 20: else 21: Update interpolation factor : = 0 22: end if 23: end while structural knowledge in the local probability distribution.",
"Posterior distillation has not been used in the related research of knowledge distillation in neural machine translation because of intractable computation of local distributions.",
"In sequence labeling, however, local distributions in a BiLSTM-CRF can be computed exactly using the forward-backward algorithm.",
"Let D = { D 1 , . . . , D l } denotes a set of training data with l languages.",
"D i denotes the corpus of the i -th language that contains multiple sentence and label sequence pairs D i = { ( x ij , y ij ) } m i j =1 .",
"To train a single multilingual student model from multiple monolingual pretrained teachers, for each input sentence, we first use the teacher model of the corresponding language to predict the pseudo targets ( k -best label sequences or posterior distribution for posterior distillation).",
"Then the student jointly learns from the gold targets and pseudo targets in training by optimizing the following loss function: LALL = LKD + (1 ) LNLL where decreases from 1 to 0 throughout training following Clark et al. (2019), LKD is one of the Eq.",
"5, 8, 9, 13 or an averaging of Eq.",
"9, 13.",
"The overall distillation process is summarized in Algorithm 1.",
"Dataset We use datasets from 4 sequence labeling tasks in our experiment.",
"CoNLL NER: We collect the corpora of 4 languages from the CoNLL 2002 and 2003 shared task (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) WikiAnn NER (Pan et al., 2017): The dataset contains silver standard NER tags that are annotated automatically on 282 languages that exist in Wikipedia.",
"We select the data of 8 languages from different language families or from different language subgroups of Indo-European languages.",
"We randomly choose 5000 sentences from the dataset for each language except English, and choose 10000 sentences for English to reflect the abundance of English corpora in practice.",
"We split the dataset by 8:1:1 for training/development/test.",
"Universal Dependencies (UD) (Nivre et al., 2016): We use universal POS tagging annotations in the UD datasets.",
"We choose 8 languages from different language families or language subgroups and one dataset for each language.",
"Aspect Extraction : The dataset is from an aspect-based sentiment analysis task in SemEval-2016 Task 5 (Pontiki et al., 2016).",
"We choose subtask 1 of the restaurants domain which has the most languages in all domains 1 , and split 10% of the training data as the development data.",
"1 Subtask 1 of the restaurants domain contains 6 languages but we failed to get the French dataset as the dataset is not accessible from the provided crawling toolkit.",
"Model Configurations In our experiment, all the word embeddings are fixed and M-BERT token embeddings are obtained by average pooling.",
"We feed the token embeddings into the BiLSTM-CRF for decoding.",
"The hidden size of the BiLSTM layer is 256 for the monolingual teacher models and 600 or 800 for the multilingual student model depending on the dataset as larger hidden size for the multilingual model results in better performance in our experiment.",
"The settings of teacher and student models are as follows: Monolingual Teachers: Each teacher is trained with a dataset of a specific language.",
"We use M-BERT concatenated with language-specific Flair (Akbik et al., 2018) embeddings and fastText (Bojanowski et al., 2017) word embeddings as token embeddings 2 for all the 2 We use fastText + M-BERT instead if the Flair embedding is not available for a certain language.",
"Multilingual Student: The student model is trained with the datasets of all the languages combined.",
"We only use M-BERT as token embeddings for the multilingual student model.",
"Training For model training, the mini-batch size is set to 2000 tokens.",
"We train all models with SGD optimizer with a learning rate of 0.1 and anneal the learning rate by 0.5 if there is no improvements on the development set for 10 epochs.",
"For all models, we use a single NVIDIA Tesla V100 GPU for training including the student model.",
"We tune the loss interpolation anneal rate in { 0 .",
"5 , 1 .",
"0 } and the k value of Top-K ranging from [1 , 10] .",
"We report results of the following approaches.",
"Baseline represents training the multilingual model with the datasets of all the languages combined and without knowledge distillation.",
"Emission is the KD method based on Eq.",
"5. Top-K , Top-WK and Posterior are our KD methods formulated by Eq.",
"8, Eq.",
"9 and Eq.",
"13 resprectively.",
"",
"Pos.+Top-WK is a mixture of posterior and weighted Top-K distillation.",
"We also report the results of monolingual models as Teachers and multilingual BiLSTM-Softmax model with token-level KD based on Eq.",
"4 as Softmax and Token for reference.",
"Table 2, 3, and 4 show the effectiveness of our approach on 4 tasks over 25 datasets.",
"In all the tables, we report scores averaged over 5 runs.",
"Observation #0.",
"BiLSTM-Softmax models perform inferior to BiLSTM-CRF models in most cases in the multilingual setting: The results show that the BiLSTM-CRF approach is stronger than the BiLSTM-Softmax approach on three of the four tasks, which are consistent with previous work on sequence labeling (Ma and Hovy, 2016; Reimers and Gurevych, 2017; Yang et al., 2018).",
"The token-level KD approach performs almost the same as the BiLSTM-Softmax baseline in most of the tasks except the Aspect Extraction task.",
"Observation #1.",
"Monolingual teacher models outperform multilingual student models: This is probably because the monolingual teacher models are based on both multilingual embeddings M-BERT and strong monolingual embeddings (Flair/fastText).",
"The monolingual embedding may provide additional information that is not available to the multilingual student models.",
"Furthermore, note that the learning problem faced by a multilingual student model is much more difficult than that of a teacher model because a student model has to handle all the languages using roughly the same model size as a teacher model.",
"Observation #2.",
"Emission fails to transfer knowledge: Emission outperforms the baseline NER POS TEACHERS 41.85 56.01 BASELINE 50.86 84.11 EMISSION 50.19 84.17 POSTERIOR 51.43 84.28 POSTERIOR +T OP-K 51.14 84.24 Table 6: Averaged results of zero-shot transfer on another 28 languages of the NER task and 24 languages of the POS tagging task.",
"only on 12 out of 25 datasets.",
"This shows that simply following the standard approach of knowledge distillation from emission scores is not sufficient for the BiLSTM-CRF models.",
"Observation #3.",
"Top-K and Top-WK outperform the baseline: Top-K outperforms the baseline on 15 datasets.",
"It outperforms Emission on average on Wikiann NER and Aspect Extraction and is competitive with Emission in the other two tasks.",
"Top-WK outperforms the baseline on 18 datasets and it outperforms Top-K in all the tasks.",
"Observation #4.",
"Posterior achieves the best performance on most of the tasks: The Posterior approach outperforms the baseline on 21 datasets and only underperforms the baseline by 0.12 on 2 languages in WikiAnn and by 0.01 on one language in UD POS tagging.",
"It outperforms the other methods on average in all the tasks except that is slightly underperforms",
"Pos.+Top-WK in the CoNLL NER task.",
"Observation #5.",
"Top-WK+Posterior stays in between:",
"Pos.+Top-WK outperforms both Top-WK and Posterior only in the CoNLL NER task.",
"In the other three tasks, its performance is above that of Top-WK but below that of Posterior .",
"We use the monolingual teacher models, multilingual baseline models and our Posterior and",
"Pos.+Top-WK models trained on the CoNLL NER datasets to predict NER tags on the test sets of 7 languages in WikiAnn that used in Section 4.2.",
"Table 5 shows the results.",
"For the teacher models, we report the maximum score over all the teachers for English Dutch Spanish German Avg.",
"each language.",
"The results show that multilingual models significantly outperform the teacher models.",
"For languages such as Tamil and Hebrew, which are very different from the languages in the CoNLL datasets, the performance of the teacher models drops dramatically compared with the multilingual models.",
"It shows that the language specific features in teacher models limits their generalizability on new languages.",
"Our multilingual models, Posterior and",
"Pos.+Top-WK outperform the baseline on all the languages.",
"Emission slightly underperforms Baseline , once again showing its ineffectiveness in knowledge distillation.",
"We also conduct experiments on zero-shot transferring over other 28 languages on WikiAnn NER datasets and 24 languages on UD POS tagging datasets.",
"The averaged results are shown in Table",
"6. The NER experiment shows that our approaches outperforms Baseline on 24 out of 28 languages and the Posterior is stronger than",
"Pos.+Top-WK by 0.29 F1 score on average.",
"The POS tagging experiment shows that our approach outperforms Baseline on 20 out of 24 languages.",
"For more details, please refer to the Appendices A. 4.4 KD with Weaker Teachers To show the effectiveness of our approach, we train weaker monolingual teachers using only M-BERT embeddings on four datasets of the CoNLL NER task.",
"We run Posterior distillation and keep the setting of the student model unchanged.",
"In this setting, Posterior not only outperforms the baseline, but also outperforms the teacher model on average.",
"This shows that our approaches still work when the teachers have the same token embeddings as the student.",
"By comparing Table 7 and 2, we can also see that stronger teachers lead to better students.",
"To show how the k value affects the performance of Top-K and Top-WK distillation methods, we compare the models with two distillation methods and different k values on the CoNLL NER task.",
"Figure 2 shows that Top-K drops dramatically when k gets larger while Top-WK performs stably.",
"Therefore 1 2 3 5 7 10 12 15 87 .",
"We compare the training time of different approaches on the CoNLL NER task and report the results in Table 8.",
"Our Top-WK and Posterior approaches take 1.45 and 1.63 times the training time of the Baseline approach.",
"For the memory consumption in training, the GPU memory cost does not vary significantly for all the approaches, while the CPU memory cost for all the KD approaches is about 2 times that of the baseline model, because training models with KD requires storing predictions of the teachers in the CPU memory.",
"Multilingual Sequence Labeling Many important tasks such as NER and POS tagging can be reduced to a sequence labeling problem.",
"Most of the recent work on multilingual NER (Tck-strm, 2012; Fang et al., 2017; Enghoff et al., 2018; Rahimi et al., 2019; Johnson et al., 2019) and POS tagging (Snyder et al., 2009; Plank and Agic, 2018) focuses on transferring the knowledge of a specific language to another (low-resource) language.",
"For example, Johnson et al. (2019) proposed cross-lingual transfer learning for NER focusing on bootstrapping Japanese from English, which has a different character set than Japanese.",
"Pretrained Word Representations Recent progress on pretrained word representations such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) significantly improve the performance of multiple NLP tasks.",
"Multilingual BERT is a pretrained BERT model incorporating 104 languages into a single multilingual model.",
"Pires et al. (2019) showed its ability of generalization and zero-shot transfer learning on NER and POS tagging and Keung et al. (2019) used adversarial learning with M-BERT and significantly improved zero-resource cross-lingual NER.",
"On the tasks of NER and POS tagging, Flair embeddings (Akbik et al., 2018, 2019) is a state-of-the-art method based on character-level language models.",
"Straka et al. (2019) found that concatenating Flair embeddings with BERT embeddings outperforms other mixtures of ELMo, BERT and Flair embeddings in most of the subtasks on the CoNLL 2018 Shared Task (Zeman and Hajic, 2018) datasets on 54 languages, which inspired us to use M-BERT + Flair embeddings as the word representation of teachers.",
"Knowledge Distillation Knowledge distillation has been used to improve the performance of small models with the guidance of big models, with applications in natural language processing (Kim and Rush, 2016; Kuncoro et al., 2016; Tan et al., 2019; Clark et al., 2019; Sun et al., 2019), computer vision (Ba and Caruana, 2014) and speech recognition (Huang et al., 2018).",
"For simple classification problems, there is a variety of work on tasks such as sentiment analysis (Clark et al., 2019), image recognition (Hinton et al., 2015) and cross-lingual text classification (Xu and Yang, 2017).",
"For structured prediction problems, there are lines of work on neural machine translation (Kim and Rush, 2016; Tan et al., 2019), connectionist temporal classification in the field of speech recognition (Huang et al., 2018) and dependency parsing (Kuncoro et al., 2016; Liu et al., 2018).",
"Many recent researches on BERT with knowledge distillation are focused on distilling a large BERT model into a smaller one.",
"(Tsai et al., 2019) distilled a large M-BERT model into a three layer M-BERT model for sequence labeling and achieved a competitively high accuracy with significant speed improvements.",
"(Jiao et al., 2019) proposed TinyBERT for natural language understanding.",
"(Sanh et al., 2019) proposed a distilled version of the BERT model which achieves a 60% faster speed and maintains 97% performance of the larger BERT model.",
"Previous work has discussed and empirically investigated two ways of adapting monolingual pretrained embedding models to monolingual downstream tasks (Peters et al., 2019): either fixing the models and using them for feature extraction, or fine-tuning them in downstream tasks.",
"They found that both settings have comparable performance in most cases.",
"Wu and Dredze (2019) found that fine-tuning M-BERT with the bottom layers fixed provides further performance gains in multilingual setting.",
"In this paper, we mainly focus on the first approach and utilize the pretrained embedding as fixed feature extractor because Flair/M-BERT fine-tuning is too slow for our large-scale experimental design of multilingual KD.",
"Designing a cheap and fast fine-tuning approach for pretrained embedding models might be an interesting direction for future work.",
"In this paper our major contributions are the two structure-level methods to distill the knowledge of monolingual models to a single multilingual model in sequence labeling: Top-K knowledge distillation and posterior distillation.",
"The experimental results show that our approach improves the performance of multilingual models over 4 tasks on 25 datasets.",
"The analysis also shows that our model has stronger zero-shot transfer ability on unseen languages on the NER and POS tagging task.",
"Our code is publicly available at https://github.",
"com/Alibaba-NLP/MultilangStructureKD .",
"This work was supported by the National Natural Science Foundation of China (61976139)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"other",
"abstain",
"other"
] |
[
"Domain classification is the task of mapping spoken language utterances to one of the natural language understanding domains in intelligent personal digital assistants (IPDAs).",
"This is a major component in mainstream IPDAs in industry.",
"Apart from official domains, thousands of third-party domains are also created by external developers to enhance the capability of IPDAs.",
"As more domains are developed rapidly, the question of how to continuously accommodate the new domains still remains challenging.",
"Moreover, existing continual learning approaches do not address the problem of incorporating personalized information dynamically for better domain classification.",
"In this paper, we propose CONDA, a neural network based approach for domain classification that supports incremental learning of new classes.",
"Empirical evaluation shows that CONDA achieves high accuracy and outperforms baselines by a large margin on both incrementally added new domains and existing domains.",
"Domain classification is the task of mapping spoken language utterances to one of the natural language understanding (NLU) domains in intelligent personal digital assistants (IPDAs), such as Amazon Alexa, Google Assistant, and Microsoft Cortana, etc. (Sarikaya, 2017).",
"Here a domain is defined in terms of a specific application or functionality such as weather, calendar or music, which narrows down the scope of NLU.",
"For example, given an utterance Ask Uber to get me a ride from a user, the appropriate domain would be one that invokes the Uber app.",
"Traditionally IPDAs have only supported dozens of well-separated domains, where each is defined in terms of a specific application or functionality such as calendar and weather (Sarikaya et al., 2016; Tur and De Mori, 2011; El-Kahky et al., 2014).",
"In order to increase the domain coverage and extend the capabilities of the IPDAs, mainstream IPDAs released tools to allow third-party developers to build new domains.",
"Amazons Alexa Skills Kit, Googles Actions and Mi-crosofts Cortana Skills Kit are examples of such tools.",
"To handle the influx of new domains, large-scale domain classification methods like SHORTLISTER (Kim et al., 2018b) have been proposed and have achieved good performance.",
"As more new domains are developed rapidly, one of the major challenges in large-scale domain classification is how to quickly accommodate the new domains without losing the learned prediction power on the known ones.",
"A straightforward solution is to simply retraining the whole model whenever new domains are available.",
"However, this is not desirable since retraining is often time consuming.",
"Another approach is to utilize continual learning where we dynamically evolve the model whenever a new domain is available.",
"There is extensive work on the topic of continual learning, however there is very little on incrementally adding new domains to a domain classification system.",
"To mitigate this gap, in this paper we propose the CONDA solution for continuous domain adaptation.",
"Given a new domain, we keep all learned parameters, but only add and update new parameters for the new domain.",
"This enables much faster model updates and faster deployment of new features to customers.",
"To preserve the learned knowledge on existing domains to avoid the notorious catastrophic forgetting problem (Kemker et al., 2018), we propose cosine normalization for output prediction and domain embedding regularization for regularizing the new domain embedding.",
"Also, we summarize the data for existing domains by sampling exemplars, which will be used together with the new domain data for continuous domain adaptation.",
"This is shown to further alleviate the overfitting on the new domain data.",
"Empirical evaluation on real data with 900 domains for initial training and 100 for continuous adaptation shows that CONDA out performs the baselines by a large margin, achieving 95.6% prediction accuracy on average for the 100 new domains and 88.2% accuracy for all seen domains after 100 new domains have been accommodated (only 3.6% lower than the upperbound by retraining the model using all domain data).",
"To summarize, we make the following contributions in this paper: We introduce the problem of continuous domain adaptation for large-scale personalized domain classification.",
"We describe CONDA, a new solution for continuous domain adaptation with Cosine normalization, domain embedding regularization and negative exemplar sampling techniques.",
"Our solution advances the research in continuous domain adaptation.",
"We conduct extensive experiments showing that CONDA achieves good accuracy on both new and existing domains, and outperforms the baselines by a large margin.",
"Domain classification is the task of mapping spoken language utterances to one of the NLU domains in IPDAs.",
"A straightforward solution to tackle this problem is to ask users to explicitly mention the domain name with a specific invocation pattern.",
"For example, for the utterance Ask Uber to get me a ride , the invocation pattern is Ask { domain } to { perform action } .",
"While it makes things much simpler for the domain classifier, this significantly limits natural interaction with IPDAs as users need to remember the domain names as well as the invocation pattern.",
"To address this limitation, name-free domain classification methods were developed for more user friendly interactions, and have been getting more attention recently.",
"We specifically focus on the name-free scenario in this paper.",
"To our knowledge, the state-of-the-art for name-free domain classification is SHORTLISTER (Kim",
"et al., 2018b), which leverages personalized user provided information for better classification performance.",
"Specifically, it contains three main modules.",
"The first module is the LSTM-based encoder to map an utterance to a dimension-fixed vector representation.",
"Given an utterance, each word is first represented as dense vectors using word embeddings, then a bidirectional LSTM (Graves and Schmidhuber, 2005) is be used to encode the full utterance.",
"The second module is the personalized domain summarization module.",
"For each utterance from an IPDA user, a list of domains have been enabled by the user.",
"These enabled domains can be viewed as user-specific personalized information.",
"It has been shown that the domain classification accuracy can be significantly improved by leveraging information about enabled domains (Kim et al., 2018b).",
"To represent the domain enablement information, first each enabled domain is mapped to a fixed-dimensional embedding, then a summarization vector is generated by taking an attention weighted sum (Luong et al., 2015) over the enabled domain embeddings.",
"Once the utterance representation and the enabled domain summarization are calculated, we concatenate the two vectors as the final representation.",
"Then the third module, a feed-forward network, is used to predict the confidence score with a sigmoid function for each domain.",
"As more new domains are developed, a major challenges in large-scale domain classification is quickly accommodating the new domains into the live production domain classification model without having to perform a full retrain.",
"We refer to this problem as Continuous Domain Adaptation (CDA).",
"In this paper, we specifically focus on the case of purely online learning where new domains where added one by one, since in practice we want to quickly integrate a new domain into the system as soon as it becomes available.",
"We formally de-fine the problem below.",
"Definition 1 (Online continuous domain adaptation) Given a collection of k domains S k = { s 1 , s 2 , . . . , s k } , suppose we have a dataset D k defined on S k where each item is a triple ( u, s, E ) with the utterance u U (the set for all possible utterances), the ground-truth domain s S k , and the enabled domains E S k .",
"Denote P ( S k ) as the powerset of S k , a model M k : U P ( S k ) S k has been trained on D k for domain classification with the accuracy M k ( D k ) .",
"At some point, a new domain s k +1 is available with the corresponding dataset D k +1 = { ( u, s k +1 , E ) | E S k +1 } with S k +1 = S k { s k +1 } .",
"Taking advantage of D k +1 , the continuous adaptation for s k +1 is to update M k to M k +1 : U P ( S k +1 ) S k +1 so that the model can make predictions for s k +1 , with the goal of maximizing M k +1 ( D k +1 ) and minimizing M k ( D k ) M k +1 ( D k ) .",
"We introduce CONDA ( Co ntinuous N eural D omain A daptation), a variation of SHORTLISTER that is capable of handling online CDA decribed in Definition 1.",
"Similar to SHORTLISTER , it has three main modules.",
"The first module is the LSTM-based utterance encoder which shares the same architecture as the one used in SHORTLISTER , that maps an input utterance into a dense vector.",
"After the training on the initial k -domain data D k , we freeze all parameters (i.e., the word embedding lookup and the bi-LSTM parameters) of this module from changing for the subsequent online domain adaptation tasks.",
"Usually the value of k is large enough (hundreds or even thousands in real-world, at least 100 in our experiments), thus it is safe to assume that the parameters have been tuned sufficiently well to encode utterances from all existing and future domains.",
"In this work we treat new words in the new domains as unknown and leave the problem of vocabulary expansion as future work.",
"The second module is the personalized domain summarization module which will map the enabled domains of an input utterance to a dense vector representation.",
"It is also similar to the one in SHORTLISTER , except we will evolve the module as we are adding new domains.",
"Specifically, given dataset D k on k domains for initial training, a domain embedding table T k R k d s will be learned where d s is the size of the domain embeddings.",
"When a new domain s k +1 is available, we expand T k to T k +1 R ( k +1) d s by: (1) freezing the learned embeddings for all known domains; (2) adding a new row t k +1 R d s to T k as the domain embedding for s k +1 and updating the new parameters t k +1 using all available training data at hand (i.e., the dataset D k +1 and the negative samples which will be discussed later in this sec-tion).",
"We repeat this procedure whenever a new domain is available.",
"To avoid over-fitting on t k +1 , we introduce a new regularization term into the loss function.",
"We describe the details in Section 3.2.",
"The third module is a two-layer feed-forward network as the classifier.",
"The first layer f (1) : R d u + d s R d h maps the concatenation of the utterance embedding (in size d u ) and domain summarization (in size d s ) into fix-sized hidden representation (in size d h ) using a fully connected layer followed by SELU activation (Klambauer et al., 2017), which is identical to the one in SHORTLISTER .",
"Then the prediction layer f (2) : R d h R k maps the hidden representation to the final domain prediction scores.",
"Unlike SHORTLISTER where the final prediction score is the dot product of the weight vector and the hidden representation, we choose to use the cosine score of the two, referred to as cosine normalization .",
"To support online CDA when a new domain is available, we apply a similar approach to the domain embedding expansion described above to expand the prediction layer.",
"Specifically, denote W (2) k R k d h be the weights for the prediction layer that has been trained on the initial k domains.",
"To adapt the new domain d k +1 , we expand W (2) k to W (2) k +1 R ( k +1) d h by first freezing all learned parameters and adding a new row of learnable parameters w k +1 R d h to W (2) k .",
"As each time we only add one new domain, all training utterances during the update will have the same label.",
"Thus, it's easy to overfit the new data such that catastrophic forgetting occurs.",
"Inspired by (Rebuffi et al., 2017), we also propose a negative sampling procedure to leverage (limited) information on the known domains to alleviate the catastrophic forgetting problem.",
"For the rest of the section, we will first talk about cosine normalization, and then domain embedding regularization, and finally negative sampling.",
"As mentioned above, we use the cosine similarity of the weights and the hidden representation vector instead of the linear dot product in the prediction layer.",
"Formally, let f (2) k : R d h [ 1 , 1] k be the prediction layer for k domains with parameters W (2) k R k d h .",
"Given an input hidden representation h R d h from f (1) , the score for the i -th s 1 = Weather s k+1 = Uber h s 1 = Weather s k+1 = Uber h 2 1",
"To understand why cosine is better in the case of online CDA, let's first see the problem with the dot-product method.",
"Suppose we are accommodating s k +1 with dataset D k +1 , because we train the new parameters w k +1 only on D k +1 where all utterances have the same domain s k +1 , the model can easily get good training performance on M k +1 ( D k +1 ) by simply maximizing the values in w k +1 such that the dot product of the hidden representation with w k +1 is larger than the dot product with any other w i , 1 i k .",
"Effectively this leads to the model predicting domain s k +1 for any given utterance.",
"Using cosine normalization instead as described in Eq.",
"1 removes the incentive to maximize the vector length of w k +1 .",
"Example 1 Suppose M k has been initially trained on D k , and domain s 1 =Weather.",
"Given an utterance u = What's the weather today?, M k correctly classifies u into s 1 .",
"Now a new domain s k +1 =Uber is coming and we evolve M k to M k +1 .",
"As the norm of the weights w k +1 could be much larger than w 1 in the prediction layer, even if the hidden representation h of u is closer to s 1 in direction, M k +1 will classifier u into s k +1 as it has a higher score, shown in Figure 1.a.",
"However if we measure the cosine similarity, M k +1 will classify u correctly because we now care more about the directions of the vectors, and the angle 1 between h and s 1 is smaller (representing higher similarity) than the angle 2 between h and s k +1 , as shown in Figure 1.b.",
"As we use the cosine normalization, all prediction scores are mapped into the range [-1, 1].",
"Therefore it's not proper to use log-Sigmoid loss function as in SHORTLISTER .",
"So accompanying with the cosine normalization, the following hinge loss function has been used instead: (2) L hinge = n (cid:88) i =1 y i max { pos o i , 0 } + n (cid:88) i =1 (1 y i ) max { o i neg , 0 } where n is the number of all domains, o i is the predicted score for each domain, y is a n -dimensional one-hot vector with 1 in the ground-truth label and 0 otherwise.",
"pos and neg are the hinge thresholds for the true and false label predictions respectively.",
"The reason we use hinge loss here is that it can be viewed as another way to alleviate the overfitting on new data, as the restrictions are less by only requiring the prediction for the ground-truth to be above pos and false domain predictions below neg .",
"Our experiments show that this helps the model get better performance on the seen domains.",
"In this section, we introduce the regularizations on the domain embeddings used in the personalized domain summarization module.",
"Recall that given an utterance u with h u as the hidden representation from the encoder and its enabled domains E , personalized domain summarization module first compares u with each s i E (by calculating the dot product of h u and the domain embedding t i of s i ) to get a score a i , then gets the weight c i = exp ( a i ) / (cid:80) a j exp ( a j ) for domain s i , and finally computes the personalized domain summary as (cid:80) e i E c i t i .",
"We observed that after training on the initial dataset D k , the domain embedding vectors tend to roughly cluster around a certain (ran-dom) direction in the vector space.",
"Thus, when we add a new domain embedding s k +1 to this personalization module, the model tends to learn to move this vector to a different part of the vector space such that its easier to distinguish the new domain from all other domains.",
"Moreover, it also increases the (cid:96) 2 norm of the new domain embedding t k +1 to win over all other domains.",
"Example 2 Suppose a similar scenario to Example 1 where we have s 1 = Weather in S k and a new domain s k +1 = Uber.",
"As most utterances in D k +1 have s k +1 as an enabled domain, it's easy for the model to learn to enlarge the norm of the new domain embedding t k +1 as well as make it close to the context of ride sharing, so that t k +1 can dominate the domain summarization.",
"Then coordinating with the new weights w k +1 in the prediction layer f (2) k +1 , the network can easily predict high scores s k +1 and fit the dataset D k +1 .",
"However, when we have utterances belonging to s 1 with s k +1 as an enabled domain, s k +1 may still dominate the summarization which makes the prediction layer tends to cast those utterances to s k +1 .",
"We don't observe this on the initial training on D k because s k +1 was not visible at that time, thus cannot be used as an enabled domain.",
"And it's even worse if s 1 is similar to s k +1 in concept.",
"For example if s 1 = Lyft, in this case the utterances of the two domains are also similar, making the dot product of t k +1 and the hidden representations of the s 1 's utterances even larger.",
"To alleviate this problem, we add a new domain embedding regularization term in the loss function to constrain the new domain embedding vector length and force it to direct to a similar area where the known domains are heading towards, so that the new domain will not dominate the domain summarization.",
"Specifically, (3) L der = k (cid:88) i =1 i max { der cos( t k +1 , t i ) , 0 } + norm 2 (cid:107) t k +1 (cid:107) 2 We call the first part of Eq.",
"3 on the right hand side as the domain similarity loss where we ask the new domain embedding t k +1 to be similar to known domain t i 's controlled by a Cosine-based hinge loss.",
"As we may not need t k +1 to be similar to all seen domains, a coefficient i is used to weight the importance each similarity loss term.",
"In this paper we encourage t k +1 to be more similar to the ones sharing similar concepts (e.g. Uber and Lyft).",
"We assume all training data are available to us, and measure the similarity of two domains by comparing their average of utterance hidden representations.",
"Specifically, denote : U R d u as the LSTM-encoder that will map an utterance to its hidden representation with dimension d u .",
"For each domain s i S k +1 , we first calculate the average utterance representation on D i (cid:101) h i = (cid:88) ( u,s i ,e ) D i ( u ) | D i | (4) Then we set i = dsl max { cos( (cid:101) h i , (cid:101) h k +1 ) , 0 } with dsl as a scaling factor.",
"So far we developed our method by training only on the new data D k +1 , and use regularizations to prevent overfitting.",
"However, in many real applications all of the training data, not only D k +1 , is actually available, but it's not affordable to retrain the full model using all data.",
"Inspired by (Rebuffi et al., 2017), we can select a set of exemplars from the previously trained data to further improve continual adaptation.",
"Suppose we are handling the new domain s k +1 with D k +1 , and all data trained previously is D k on k domains S k .",
"For each known s i S k , we pick N utterances from D i as the exemplars for s i .",
"Denote P i be the exemplar set for s i and P = (cid:83) ki =1 P i be the total exemplar set.",
"To generate each P i , we pick the topN utterances that are closest to the average of the utterance hidden representation.",
"Specifically, following Eq.",
"4, we first get the average representation (cid:101) h i , then P i is defined as follow: P i = P i D i , | P i | = N (cid:88) ( u,s i ,e ) P i cos (cid:16) ( u ) , (cid:101) h i (cid:17) (5) If multiple candidates satisfying Eq.",
"5 for P i , we randomly pick one as P i to break the tie.",
"Once the domain adaptation for s k +1 is done, we similarly generate P k +1 and merge it to P .",
"We repeat this procedure for negative sampling whenever a new domain is coming later.",
"As we add more new domains, the exemplar set P also grows.",
"For some new domain D k +1 , we may have | P |(cid:29) | D k +1 | .",
"In this case, the prediction accuracy on the new domain data could be very low as the model will tend to not making mistakes on P rather than fitting D k +1 .",
"To alleviate this problem, when | P | > | D k +1 | , we select a subset P (cid:48) P with | P (cid:48) | = | D k +1 | , and P (cid:48) will be used as the final exemplar set to train together with D k +1 .",
"To generate P (cid:48) , we just randomly sample a subset from P , since it was observed to be effective in our experiments.",
"Dataset: We use a dataset defined on 1000 domains for our experiments which has 2.53M utterances, and we split them into two parts.",
"The first 80 90 100 10 20 30 40 50 60 70 80 90 100 Accuracy on each new domain -5 10 25 40 55 70 85 100 15913172125293337414549535761656973778185899397 Accumulated accuracy on previously trained new domains A cc u r a cy Number of newdomains Number of new domains -5 5 15 25 35 45 55 65 75 85 95 15913172125293337414549535761656973778185899397 Accumulated accuracy on all previously known domains linear-full-updatelinearcoscos+nscos+dercos+der+nsupperbound Number of new domains",
"part contains 900 domains where we use it for the initial training of the model.",
"It has 2.06M utterances, and we split into training, development and test sets with ratio of 8:1:1.",
"We refer to this dataset as InitTrain.",
"The second part consists of 100 domains and is used for the online domain adaptation.",
"It has 478K utterances and we split into training, development and test sets with the same 8:1:1 ratio.",
"We refer to this dataset as IncTrain.",
"Training Setup: We implement the model in PyTorch (Paszke et al., 2017).",
"All of the experiments are conducted on an Amazon AWS p3.16xlarge 1 cluster with 8 Tesla V100 GPUs.",
"For initial training, we train the model for 20 epochs with learning rate 0.001, batch size 512.",
"For the continuous domain adaptation, we add the new domains in a random order.",
"Each domain data will be trained independently one-by-one for 10 epochs, with learning rate 0.01 and batch size 128.",
"For both training procedures, we use Adam as the optimizer.",
"The development data is used to pick the best model in different epoch runs.",
"We evaluate the classification accuracy on the test set.",
"We first talk about the overall performance.",
"In our experiments we select two baselines.",
"The first one linear-full-update which simply extends 1 https://aws.amazon.com/ec2/instance-types/p3/ SHORTLISTER by adding new parameters for new domains and conducting full model updating.",
"The second linear is similar to the first baseline except that we freeze all trained parameters and only allow new parameter updating.",
"Both the two baselines update the model with D k +1 dataset only.",
"To show the effectiveness of each component of CONDA, we choose four variations.",
"The first one is cos where we apply the Cosine Normalization (CosNorm).",
"The second one cos+der applies CosNorm with the domain embedding regularization.",
"The third one cos+ns uses both CosNorm and negative exemplars.",
"And the last one cos+der+ns is the combination of all three techniques, which is our CONDA model.",
"For hyperparameters, we pick pos = 0 .",
"5 , neg = 0 .",
"3 , der = 0 .",
"1 , dsl = 5 , and norm = 0 .",
"4 .",
"Figure 2 shows the accuracy for new domain adaptations.",
"From the figure, here are the main observations.",
"First, without any constraints, linear-full-update can easily overfits the new data to achieve 100% accuracy as shown in Figure",
"2(a), but it causes catastrophic forgetting such that the accuracy on seen domains is (almost) 0 as shown in Figure",
"2(b) and",
"(c).",
"By freezing the all trained parameters, the catastrophic forgetting problem is a bit alleviated for linear , but the accuracy on the seen domains is still very low as we add more new domains.",
"Second, cos produces much better accuracy on seen domains with a bit lower accuracy on each new domain, showing the effectiveness of the Cosine normalization.",
"Third, as we add more regularizations to the model, we get better accuracy on the seen domains (Figure 2",
"(b) and",
"(c)), at the cost of sacrificing a bit on the new domain accuracy (Figure 2",
"(a)).",
"Also, cos+der+ns (the CONDA model) achieves the best performance, with an average of 95.6% accuracy for each new domain and 88.2% accuracy for all previously seen domains after we add 100 new ones, which is only 3.6% lower than the upperbound (by retraining the model on the whole dataset).",
"These demonstrate the superiority of our method.",
"Using Different Number of Initial Domains: We vary the number of domains for initial training to see if it will have a big impact on the model performance.",
"Specifically, we pick 100 and 500 domains from InitTrain, and use the same IncTrain data for domain adaptation.",
"Figure 3 compares the model performance on these three different number (i.e., 100, 500, 900) of initial training domains.",
"From the figure we can see that the curves share a similar pattern regardless of the number of initial domains, showing that our model is stable to the number of domains used for initial training.",
"Varying the hinge loss thresholds: We vary the classification hinge loss thresholds pos and neg to see how it will affect the performance.",
"Specifically, we fix neg = 0 .",
"3 and vary pos from 0.5 to 1.0, and fix pos = 0 .",
"5 and vary neg from 0 to 0.4, respectively.",
"For both of the them we use 0.1 as the step size.",
"Figure 4 shows the model performance.",
"From the figures, we summarize the following observations.",
"First, as we increase pos , on average the accuracy on each new domain gets better (Figure",
"4(a)), but we loss performance on all seen domains (Figure",
"4(b)).",
"This is in accord with our intuition that a larger pos puts more constraint on the new domain predictions such that it tends to overfit the new data and exacerbates catastrophic forgetting on existing domains.",
"Second, as we increase neg , on average the accuracy on each new domain gets worse (Figure",
"4(c)), but we get better performance on existing domains.",
"This is because a larger neg narrows down the prediction margin between positive and negative domains (similar to decreasing pos ), so that less constraint has been put onto predictions to alleviate overfitting on the new domain data.",
"Varying the domain similarity loss threshold: We vary the threshold der to see how it will affect the model performance.",
"Specifically, we vary der from 0 to 0.5 with step size 0.1, and Figure 5 shows the model performance.",
"As we increase der , the performance on the new domains gets worse, and the drop is significant when der is large.",
"On the other hand, the accumulated accuracy on seen domains increases when we start to increase der , and drops when der is too large.",
"This means we when we start to make the new domain embeddings to be similar to the existing ones, we alleviate the problem that the new domain dominates the domain summarization.",
"Thus the accuracy on existing domains improves at the cost of sacrificing some accuracy on the new domains.",
"However, if we continue to increase der to make it very similar to some of existing domains, the new domain will compete with some existing ones so that we loss accuracy on both new and existing domains.",
"Varying the weights for domain similarity loss: To see how the weighted domain similarity loss will affect the performance, we compare it against the plain version without the utterance similarity weights.",
"Specifically, we set each i = dsl having the same value.",
"And our experiments show that the plain version gets the average accuracy 94.1% on the new domains, which is 1.5% lower than the weighted version, and 88.7% accumulated accuracy on all domains after adding 100 new domains, which is 0.5% higher than the weighted version.",
"This means we can get a bit higher accumulated accuracy at the cost of sacrificing more new domain accuracy.",
"In real applications, the decision to whether use weighted domain similarity loss should be made by trading off the importance of the new and existing domains.",
"Varying the number of used negative exemplars: As we mentioned before, we down-sample the negative exemplar set P to reduce the impact on new domain performance.",
"To see if it's necessary, we compare it against the one without down-sampling.",
"Our experiments show that without down-sampling, the model achieves 87.5% new domain accuracy on average which is 8.1% lower than the down-sampling version, and 87.2% accumulated accuracy on all domains which is 1.0% lower than the down-sampling one.",
"training: We have shown Cosine normalization with hinge loss works better than linear dot product with sigmoid loss (used in SHORTLISTER ) for CDA.",
"Here we compare the two on the regular training setting where we train the model from scratch on a large dataset.",
"Specifically, we compare the initial training performance on 100, 500, and 900 domains which are the same as we used earlier.",
"Table 1 shows the accuracy numbers.",
"From the table we see that Linear works better than Cosine by 0.7-1.0% across different number of domains.",
"Though the difference is not large, this means Linear could be a better option than Cosine when we train the model from scratch.",
"Varying the order of the new domains: To see if the incoming order of the new domains will affect the performance, we generate two different orders apart from the one used in overall evaluation.",
"The first one sorts the new domains on the number of utterances in the decreasing order, and the second in the increasing order.",
"Denote these three orders as random, decreasing, and increas-ing, and we conduct domain adaptation on these orders.",
"Our experiments show that they achieve 95.6%, 95.5%, and 95.6% average accuracy on new domains respectively, and 88.2%, 88.2%, and 88.1% accumulated accuracy on all domains after accommodating all 100 new domains.",
"This indicates that there is no obvious difference on model performance, and our model is insensitive to the order of the new domains.",
"Using more new domains: We also experimented with adding a large number of new domains to see the limit of CONDA.",
"Figure 6 shows the results by continuously adapting 900 new domains one-by-one.",
"From the figure we can see that at the early stage of the new domain adaptation (e.g., first 200 new domains), we get high new domain accuracy with little performance decrease on the existing domains.",
"After that, the new domain performance becomes more unstable with violent oscillation, and the existing domain accuracy decreases more quickly.",
"This suggests that we cannot run the new domain adaptation forever, and 85 87 89 91 93 16111621263136414651566166717681869196 Number of new domains",
"after adapting a certain number of new domains (e.g., 200 new domains), it's more preferable to train the whole model from scratch.",
"Domain Classification: Traditional domain classifiers were built on simple linear models such as Multinomial logistic regression or Support Vector Machines (Tur and De Mori, 2011).",
"They were typically limited to a small number of domains which were designed by specialists to be well-separated.",
"To support large-scale domain classification, (Kim et al., 2018b) proposed SHORTLISTER , a neural-based model.",
"(Kim et al., 2018a) extended SHORTLISTER by using additional contextual information to rerank the predictions of SHORTLISTER .",
"However, none of them can continuously accommodate new domains without full model retrains.",
"Continuous Domain Adaptation: To our knowledge, there is little work on the topic of continuous domain adaptation for NLU and IPDAs.",
"(Kim et al., 2017) proposed an attention-based method for continuous domain adaptation, but it A cc u r a cy 0 10 20 30 40 50 60 70 80 90 100 1 51 101 151 201 251 301 351 401 451 501 551 601 651 701 751 801 851 900 new domains for continual learning new domain accumu.",
"introduced a separate model for each domain and therefore is difficult to scale.",
"Continual Learning: Several techniques have been proposed to mitigate the catastrophic forgetting (Kemker et al., 2018).",
"Regularization methods add constraints to the network to prevent important parameters from changing too much (Kirk-patrick et al., 2017; Zenke et al., 2017).",
"Ensemble methods alleviate catastrophic forgetting by explicitly or implicitly learning multiple classifiers and using them to make the final predictions (Dai et al., 2009; Ren et al., 2017; Fernando et al., 2017).",
"Rehearsal methods use data from existing domains together with the new domain data being accommodated to mitigate the catastrophic forgetting (Robins, 1995; Draelos et al., 2017; Re-buffi et al., 2017).",
"Dual-memory methods introduce new memory for handling the new domain data (Gepperth and Karaoguz, 2016).",
"Among the existing techniques, our model is most related to the regularization methods.",
"However, unlike existing work where the main goal is to regularize the learned parameters, we focus on regularizations on the newly added parameters.",
"Our model also shares similar ideas to (Rebuffi et al., 2017) on the topic of negative exemplar sampling.",
"In this paper, we propose CONDA for continuous domain adaptation.",
"By using various normalization and regularizations, our model achieves high accuracy on both the accommodated new domains and the existing known domains, and outperforms the baselines by a large margin.",
"For future work, we consider extending the model to handle unknown words.",
"Also, we want to find a more principled way to down sample the negative exemplars."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"result"
] |
[
"We investigate what kind of structural knowledge learned in neural network encoders is transferable to processing natural language.",
"We design artificial languages with structural properties that mimic natural language, pretrain encoders on the data, and see how much performance the encoder exhibits on downstream tasks in natural language.",
"Our experimental results show that pretraining with an artificial language with a nesting dependency structure provides some knowledge transferable to natural language.",
"A follow-up probing analysis indicates that its success in the transfer is related to the amount of encoded contextual information and what is transferred is the knowledge of position-aware context dependence of language.",
"Our results provide insights into how neural network encoders process human languages and the source of cross-lingual transferability of recent multilingual language models.",
"Pretrained language models (Devlin et al., 2019; Yang et al., 2019; Raffel et al., 2020) have demonstrated strong empirical performance not only within a language but also across languages.",
"Language models pretrained with a mix of monolingual corpora, such as multilingual BERT, exhibit a decent zero-shot cross-lingual transfer capability, i.e. , a model fine-tuned in a single source language (L1) can solve the task in another language (L2) (Con-neau et al., 2020a; Xue et al., 2021).",
"Surprisingly, the transfer happens without lexical overlaps between L1 and L2 (Karthikeyan K and Roth, 2020; Conneau et al., 2020b) or even without joint pretraining (Artetxe et al., 2020): an encoder only pretrained on L1 can be transferred to L2 without any parameter updates.",
"These results suggest that, whether the encoder is trained on single or multiple languages, it learns some transferable knowledge about language.",
"However, the characteristics of such transferable knowledge are still underexplored.",
"Recent studies with the probing methodology (Hupkes and Zuidema, 2018; Conneau et al., 2018) have revealed that multilingual BERT captures language-independent linguistic structures such as universal dependency relations (Chi et al., 2020) and subjecthood (Papadimitriou et al., 2021), but it remains unknown whether learning such linguistic properties actually contributes to the performance, and whether there exists more abstract knowledge transferred across languages.",
"In this study, we try to shed light on these questions with the framework of the Test for Inductive Bias via Language Model Transfer (Papadimitriou and Jurafsky, 2020), focusing on designing artificial languages with natural-language-like structural properties (Figure 1).",
"We pretrain encoders 7302 with artificial languages and transfer the encoders to natural language tasks with their parameters frozen.",
"This enables us to see how learning the specific structural properties of the artificial language affects the downstream performance.",
"Specifically, we explore whether it is beneficial for the encoder to know the following two characteristics of natural language: word distributions and latent dependency structures.",
"We design artificial languages that represent such characteristics and perform an extensive study with different encoder architectures (LSTM and Transformer) pretraining objectives (causal and masked language modelings).",
"The contribution is summarized as follows: We first start by complementing the study in Papadimitriou and Jurafsky (2020).",
"We then proceed to investigate transfer learning in masked language modeling (Devlin et al., 2019), one of the current dominant pretraining paradigms.",
"We evaluate pretrained Transformer encoders with dependency parsing and confirm that the nesting dependency structure is important to learn the structure of natural language.",
"We hypothesize that the transfer performance of pretrained encoders is related to the way the encoder preserves the input contextual information in the output vectors.",
"We perform a probing experiment and find that the artificial language with the nesting dependency structure trains encoders to encode the information on adjacent tokens into the output vector of each token.",
"We conclude this paper with the hypothesis that a part of transferable knowledge in language models could be explained by the knowledge of position-aware context dependence of language.",
"This framework enables us to 7303 assess if abstract features generalizable to L2 (nat-ural language) are encoded in L1.",
"We train LSTM and Transformer encoders with the sentence-level causal language modeling task and evaluate the encoders in English.",
"We show that an artificial language that models simple statistical dependency within a sentence provides decent transferable knowledge on natural language modeling.",
"Furthermore, we find that the inductive bias of a nesting head-to-tail dependency structure is more useful than a flat one.",
"Multilingual language models trained with masked language modeling objective (Devlin et al., 2019; Doddapaneni et al., 2021) have demonstrated a surprisingly strong cross-lingual transfer capability (Liu et al., 2020), given the model is only trained with a mix of monolingual corpora.",
"This leads to several studies investigating the source of the cross-lingual capability of multilingual models.",
"An early common hypothesis was that the models take advantage of a common word-piece vocabulary across languages (Wu and Dredze, 2019; Pires et al., 2019), which provides cross-lingual alignment signals to learn useful multilingual representations.",
"However, this hypothesis has been questioned by recent studies (Karthikeyan K and Roth, 2020; Conneau et al., 2020b) which show that shared word-pieces only play a minor role in the performance.",
"These studies suggest that the model can exploit abstract structures of languages to learn shared multilingual representations.",
"Another line of research suggests that the learning of transferable knowledge happens even in monolingual pretraining.",
"Artetxe et al. (2020) showed that a Transformer encoder pretrained only on L1 exhibits strong cross-lingual transfer performance simply by aligning the L2 embeddings to the encoder.",
"Papadimitriou and Jurafsky (2020) pretrained LSTM encoders with natural languages and non-linguistic data ( e.g. , code, music, and artificial data) to demonstrate that the encoders achieve reasonable performance in Spanish language modeling.",
"These studies provide additional evidence for the existence of transferable linguistic knowledge learned in the model.",
"Then what is such knowledge?",
"Probing studies (Hupkes and Zuidema, 2018; Conneau et al., 2018) have revealed that the model captures language-independent structures such as universal dependency relations (Chi et al., 2020) and subjecthood (Papadimitriou et al., 2021).",
"However, the probing methodology does not answer whether such linguistic knowledge contributes to the performance in cross-lingual transfer.",
"In this study, we shed light on this question by studying transfer learning from artificial language with the Test for Inductive Bias via Language Model Transfer (TILT) (Papadimitriou and Jurafsky, 2020).",
"Here we explicitly design artificial languages with some structural properties as L1 to investigate their transferability.",
"To study the behavior of language models, several studies have employed a specific type of artificial language : artificial variants of natural languages.",
"A typical experimental framework is as follows: (1) create an artificial language that differs from a natural language in one linguistic property, such as word orders (Sinha et al., 2021b; Dufter and Schtze, 2020; Sinha et al., 2021a), scripts (Karthikeyan K and Roth, 2020; Dufter and Schtze, 2020; Conneau et al., 2020b), or morphology (Ravfogel et al., 2019); (2) train or evaluate the natural/artificial language models and compare the performance to analyze the model's sensitivity to the linguistic property.",
"However, this methodology is limited to studying linguistic properties that are easily editable to create artificial variants and also offers limited control over the experiments.",
"To overcome this problem, White and Cotterell (2021) created artificial languages by defining their own probabilistic context-free grammars (PCFG).",
"As the concurrent work, Chiang and yi Lee (2022) trained Transformer encoders on artificial data with token dependencies in the sequences and showed that they perform reasonably well on the GLUE benchmark (Wang et al., 2019).",
"In this research, we design artificial languages with certain structural properties from scratch to study knowledge transferable to natural language.",
"We first describe the experimental framework used throughout this paper, the Test for Inductive Bias via Language Model Transfer (TILT) introduced by Papadimitriou and Jurafsky (2020).",
"TILT consists of pretraining and transfer steps: 1. Pretrain an encoder with a pretraining task in the source language (L1).",
"We explore pretraining with causal language modeling in 4 and masked language modeling in 5.",
"2. Transfer the encoder to the target language (L2) in a downstream task.",
"As we are interested in structural prior knowledge learned in the encoder, we discard the learned L1 word embeddings and initialize the embedding layer with the L2 vocabulary.",
"(1) 7304 However, this deviates from the token distribution of natural language.",
"We then train the model with the encoder parameters frozen and evaluate the task performance.",
"TILT reveals how transferrable the computation induced to solve the L1 pretraining task is to processing L2.",
"In this study, we are interested in the transferability of certain types of structures to natural language, and thus we primarily use hand-designed artificial languages with the structural properties as L1 and natural language as L2.",
"Artificial languages are designed to mimic a certain property of natural language.",
"After providing a formal definition of artificial language, we introduce several languages used in this paper.",
"A artificial language refers to a set of a vocabulary and algorithms to generate sequential data for pretraining.",
"Each language has a sentence-length distribution p len ( l ) , token vocabulary { w | w V} , and sentence-sampling function f ( l ) : l (cid:55) V l .",
"The training data is generated sentence by sentence as follows: we first sample a sentence length ( l p len ( l ) ) and then sample a sequence of tokens of that length ( [ w 1 , ..., w l ] f ( l ) ).",
"In this study, the token vocabulary V simply consists of integers (or integers with a special symbol) and is not intended to correspond to a vocabulary of any natural language.",
"Also the sentence-length distribution p len ( l ) is fitted with a baseline dataset in each experiment.",
"The focus is how to design the sentence-sampling function f ( l ) .",
"This determines what kind of characteristics we want to encode in the artificial dataset.",
"Words in natural language are distributed in nontrivial fashions.",
"We will study whether prior knowledge of token distribution facilitates learning from natural language.",
"We first present the simplest artificial language that serves as a baseline.",
"Uniform language samples each token in a sentence independently and uniformly.",
"Natural language is empirically known to follow the Zipf's law (Zipf, 1949), i.e. , the relation between the frequency of a word and its rank is given by frequency ( w ) rank ( w ) .",
"The coefficient is typically around 1, although the coefficient shows some variation according to the corpus domain (Zanette and Mon-temurro, 2005).",
"Zipf language captures this property and samples each token w from the following probability distribution assuming = 1 : p ( w ) 1 rank ( w ) .",
"The two languages introduced so far generate tokens in a sentence independently.",
"However, words within a sentence of natural language are known to have statistical dependencies, i.e. , specific cooccurrence patterns (Church and Hanks, 1989).",
"Consider the sentence The cat and dog are fighting over food. The words the and cat would cooccur much more often than by chance because cat (noun) is dependent on the (determinant); so would dog and cat because they are topically related.",
"The words in a sentence are usually coherent according to some syntactic and semantic dependencies.",
"Log-linear language is designed to capture this property.",
"Inspired by the log-linear model in Arora et al. (2016), tokens in a sentence s are drawn from the following probability distribution: p ( w | s ) exp( c s v w ) , (3) where c s is the discourse vector of the sentence and v w is the word vector of the token w .",
"Intuitively, we can imagine that the discourse vector represents the topic of the sentence and determines the unigram distribution over the vocabulary (Blei et al., 2003).",
"Sampling tokens this way, non-trivial cooccurrence patterns within sentences emerge in the language.",
"We speculate that pretraining with the Log-linear language will endow the model with an inductive bias to aggregate the context in a sentence to predict the identity or property of tokens, which is likely to benefit natural language processing.",
"In the experiments, the word vectors v w are initialized with the normal distribution, and the discourse vector c s is also drawn from the normal distribution each time we generate a sentence.",
"We set the dimension of the word and discourse vector to 10 as we empirically find that this makes the entire token distribution close to the Zipfian distribution.",
"Sentences in natural language are known to have latent structures, which are often described in the form of trees (Chomsky, 1957) or dependency graphs (Mel'cuk, 1988).",
"Now we consider how to endow the sampled tokens with such structures.",
"In this study, we adopt a dependency-based latent structure.",
"Words in sentences of natural language often have dependency relations and the existence of a certain word can be predictive of another word ( e.g. , the verb am always cooccurs with I ).",
"We hypothesize that, pretrained on such data, language models may acquire inductive bias towards finding relations between tokens in the input, which is presumably important in processing natural language.",
"Inspired by Papadimitriou and Jurafsky (2020), we design algorithms that generate structured sentences given a set of tokens sampled with any of the strategies described in 3.2.2.",
"The general idea is that half of the tokens (heads) in the vocabulary are all paired with another half of tokens (tails).",
"A pair of head and tail can be represented in right and left brackets with the same integer ( e.g. , <123 , 123> ).",
"The pairs always appear together in a sentence and express simple dependency relations.",
"After determining the sentence length l f ( l ) , we first sample l 2 (rounded to an integer) pairs of head and tail and then arrange them with one of the following structures.",
"Flat Dependency structure simply arranges the tokens randomly while keeping the right order of the brackets ( e.g. , [ <5 , <84 , 5> , <123 , 123> , 84> ]).",
"The dependency arcs are allowed to be crossed and thus often result in a nonprojective dependency structure.",
"Nesting Dependency language, by contrast, does not allow any dependency arcs to be crossed, and the brackets are nested hierarchically ( e.g. , [ <5 , <84 , 84> , 5> , <123 , 123> ]).",
"The sentences are generated from the stack-based algorithm described in Appendix A. These structures are similar to the Parenthesis languages used to study the inductive bias of language models in Papadimitriou and Jurafsky (2020).",
"However, our Dependency languages differ from them in how to represent the head and tail tokens.",
"In the Parenthesis language, the head and 7305 tail are represented with the same token ( e.g. , [ 5 , 84 , 84 , 5 , 123 , 123 ]), which we argue deviates from the dependency structure in natural language, because in natural language, dependency relations usually hold between different words ( e.g. , I and am ).",
"We will show that this difference is in fact crucial and draw a different conclusion from Papadimitriou and Jurafsky (2020) on the importance of the nested structure (4.2).",
"In this section, we complement the study of Papadimitriou and Jurafsky (2020).",
"While they studied the inductive bias learned in LSTM encoders with some artificial languages, here we provide additional studies with the newly introduced Log-linear and Dependency artificial languages, and the Transformer encoder.",
"Task.",
"We study sentence-level causal (left-to-right) language modeling (CLM), where the model needs to predict the next word given the previous context in the sentence.",
"Note that, Papadimitriou and Jurafsky (2020) experiment with language modeling across sentences, but we adopt sentence-level modeling because we would like to focus on the learning of sentence structures here.",
"As we will see in 4.2, we observe the same tendency in regard to the effect of artificial pretraining where we share the setups.",
"The task performance is measured by the average perplexity scores for each token.",
"Model.",
"We study two encoder architectures: LSTM (Hochreiter and Schmidhuber, 1997) and Transformer (Vaswani et al., 2017).",
"These architectures are known to exhibit different abilities in capturing the underlying hierarchical structure of sequential data (Tran et al., 2018).",
"The size of word embeddings is set to 300.",
"For both LSTM and Transformer encoders, the number of layers is set to 3, and the number of parameters is configured to be the same (6.9M parameters) to enable a fair comparison between architectures (for further details, see Appendix B).",
"Pretraining Data.",
"We generate artificial corpora with three unstructured languages, which randomly arrange the tokens sampled from Uniform, Zipf, and Log-linear languages, and four structured languages which combine the Zipf sampling strategy with the structures of Flat Parenthesis, Nesting Parenthesis, Flat Dependency, and Nesting Dependency.",
"We also experiment with natural language corpora.",
"We create training corpora from Wikipedia dumps of English, Japanese, and Spanish.",
"The sentences are tokenized with the Moses tokenizer 1 for English and Spanish and MeCab 2 for Japanese.",
"The sentence lengths of artificial data were sampled from the empirical distribution of the English Wikipedia corpus.",
"The size of the vocabulary | V | is set to 32,000 for both artificial and natural corpora, and out-of-vocabulary words in natural language are replaced with the OOV token.",
"For each corpus, we sample 12.8 M sentences and train the model with one iteration over the corpus.",
"Evaluation Data.",
"We evaluate the pretrained encoders on the Penn Treebank (PTB) corpus (Mar-cus et al., 1993) with preprocessing from Mikolov et al. (2010).",
"Note that, when we train language models with the pretrained encoders, the parameters of the encoder are not updated and only the English word embeddings are learned from scratch (optimization details in Appendix B.2).",
"We provide two baseline models trained on the L2 training corpus from scratch and trained with frozen random weights in the encoder to compare with pretrained encoders.",
"For each configuration, we pretrain three encoders with different random seeds, and for each encoder fine-tuned three models, which results in nine models in total.",
"We summarize the average scores and standard deviations in Figure 2. The Transformer encoder is more flexible than LSTM.",
"We start by discussing overall trends.",
"We observe that the Transformer encoders give lower perplexity scores compared to LSTM regardless of pretraining language.",
"This tendency is in line with the observations on the surprisingly good transferability or pretrained Transformer encoders to other languages (Conneau et al., 2020a), or even other modalities (Lu et al., 2021; Reid et al., 2022).",
"We think that this is because Transformer encoders are better at aggregating and preserving the context information at each time step, as we will see in 6, presumably because the Transformer architecture has self-attention and residual connections.",
"Natural languages are better than the artificial languages.",
"As expected, pretraining with natural languages (English, Spanish and Japanese) provides better encoders for language modeling than the artificial languages both with LSTM and Transformer.",
"However, the performance differences between natural languages seem to be negligible, indicating that there is not much difference in the way the encoders process these different languages, conforming with the observation of cross-lingual transferability of pretrained encoders (Artetxe et al., 2020).",
"The Uniform and Zipf languages degrade the encoders.",
"Looking at the difference among unstructured languages (Figure 2a), Uniform and Zipf languages give higher perplexities than the Random weights baseline particularly with LSTM.",
"In hindsight, it is natural that encoders would be degraded even from random weights when trained with sequences where tokens are drawn independently from each other because the encoders are not incentivized to use contextual information and will even learn to discard the input information.",
"We will demonstrate this with a follow-up probing experiment in 6.",
"contrary, the Log-linear language gives reasonably lower perplexities compared to Random weights (Figure 2a).",
"This indicates that knowing the existence of statistical dependency within a sentence, or learning to predict tokens from the cooccurrence information, is a useful inductive bias even though the cooccurrence statistics is not necessarily in line with L2.",
"nested structure in the Parenthesis languages.",
"Papadimitriou and Jurafsky (2020) showed that LSTM encoders trained on the Flat Parenthesis and Nesting Parenthesis structures do not provide a significant difference in perplexity, and concluded that simple non-hierarchical head-dependent-type relations are important in LSTM language processing.",
"A similar observation can be made in Figure 2b: although the Nesting Parenthesis exhibits the lower average score, there is no significant difference between Flat Parenthesis and Nesting Parenthesis ( 232 . 9 30 . 0 vs. 203 . 8 7 . 7 , p > 0 . 01 in Welch's t-test) with the unstable results of Flat Parenthesis.",
"Also, the trend of the average scores is reversed in Transformer: the Nesting Parenthesis exhibits the higher average score ( 212 . 4 8 . 8 ) than Flat Parenthesis ( 191 . 9 11 . 8 ), which makes it difficult to draw a consistent conclusion from here.",
"However, the Dependency languages suggest that the nested structure is actually important in language modeling.",
"While the Parenthesis language represents dependency relations with two identical tokens ( e.g. , 4543 and 4543 ), our Dependency language represents relations with two different tokens ( e.g. , <4543 and 4543> ).",
"We expect that expressing dependency relations with two different tokens is closer to natural language and thus provides more viable insights into natural language.",
"When we compare the scores of the Dependency languages, Nesting Dependency provides the lower and more stable perplexity than Flat Dependency with LSTM ( 175 . 7 4 . 3 vs. 187 . 2 10 . 7 ) and the significantly lower score with Transformer ( 160 . 6 1 . 6 vs. 175 . 7 4 . 3 , p > 0 . 01 in Welch's t-test).",
"Overall, Nesting Dependency performs best among other artificial languages, indicating our Dependency language is closer to natural language and the nested structure is useful for language modeling.",
"We proceed to investigate transfer learning from artificial languages in one of the most successful pretraining paradigms, masked language modeling (MLM) (Devlin et al., 2019) to see if we can observe similar trends to what we see in the CLM experiment (4).",
"Pretraining.",
"To allow for fast experimentation, we train small Transformer encoders.",
"The size of word embeddings is set to 300 and the encoders have three layers (further details in Appendix C).",
"The pretraining datasets are the same as in 4.1.",
"Downstream Task.",
"We evaluate the pretrained encoders with dependency parsing to see if the structural knowledge learned with artificial language is beneficial to predict the structure of natural language.",
"We use the English EWT dataset from Universal Dependencies (UD) v2.8 (Nivre et al., 2020) 3 .",
"Model.",
"We adopt the biaffine graph-based parser (Dozat and Manning, 2017) with the Transformer encoder.",
"The input word representations are the concatenation of word embeddings and character features computed by a character-level bidirectional LSTM encoder (Ling et al., 2015).",
"For 3 https://universaldependencies.org/ Figure 3: The downstream performance on two syntactic tasks with the English EWT dataset.",
"We provide two baseline models trained from scratch and trained with random encoder weights.",
"For each pretraining language, we again train three encoders and fine-tune three models for each, and take the mean and standard deviation of the nine models.",
"Figure 3 shows the results.",
"The unstructured languages do not provide useful transferable knowledge for dependency parsing.",
"The Uniform, Zipf, and Log-linear encoders perform comparably to or worse than the Random weights baseline.",
"This is in contrast with the causal language modeling task, where the Log-linear language at least outperforms the Random weights baseline (4.2).",
"On the other hand, learning from structured languages seems to be important in dependency parsing.",
"The Dependency encoders outperform the Random weights baseline, and also we can observe that learning from the nesting structure is more effective than the flat structure, and Dependency languages outperform Parenthesis languages, as observed in the CLM in 4.",
"In the previous sections, we have seen that the encoders pretrained with different artificial languages exhibit various degrees of transferability to natural 7308 language.",
"In this section, we try to explain why pretraining with some artificial languages is better or worse for the transfer to natural language from the perspective of the amount of contextual information in the encoder outputs.",
"The intuition is, for example, if a pretrained encoder has learned to discard the input information, we cannot expect the encoder to perform well when transferred to any tasks.",
"Also, existing studies show that neural language models assign more importance to local context when they make predictions (Khandelwal et al., 2018; Lai et al., 2020).",
"Can we observe that encoders pretrained with artificial languages exhibit similar patterns to natural languages regarding how they encode the contextual information?",
"We investigate how much contextual information can be extracted from the outputs of the pretrained encoders by setting up a simple probing task.",
"In this task, the encoder is asked to recover the identity of the contextual words given the contextualized vector of a target word.",
"Specifically, we first randomly generate 100K sequences of integers with the length of 15 25 (close to most frequent sequence lengths in the pretrained corpus) with the vocabulary size 100 and split them into training (90K sequences), validation (5K) and test (5K) sets.",
"Then we simultaneously train several linear classifiers, each of which predicts the ID of the context word at a fixed relative position to the target word in the sequence, on top of a frozen pretrained encoder.",
"For the encoders pretrained with CLM in 4, the target word is the last word in sequences and the classifiers predict the words at the positions of [-9, -4, -3, -2, -1, 0]; for the encoders pretrained with MLM in 5, the target word is the middle word and the classifiers predict the words at [-6, -3, -2, -1, 0, 1, 2, 3, 6].",
"After training, we measure the accuracy of predicting the words at each position on the test set and interpret this as how much information on each contextual word the encoder preserves.",
"(Figure 2a), we observed that the Uniform and Zipf encoders tend to perform worse even than Random weights.",
"Figure 4a and 4d demonstrate that their poor performance is because the encoders are trained to discard the input information.",
"The Uniform and Zipf encoders tend to preserve less contextual information even than Random weights because capturing the contextual information does not lead to solving the pretraining task in these languages.",
"On the other hand, if words are predictable from the context, encoders are encouraged to learn to preserve the contextual information.",
"The Log-linear encoders trained with CLM encode a decent amount of the contextual information (Fig-ure 4a and 4d) and also performed best among the unstructured artificial languages in CLM (Figure 2a).",
"Moreover, encoders trained with natural languages (Figure 4c, 4f and 4i) capture not only the local context well (at distance 0 2 ) but also a modest amount of the farther context (at distance 3 ), which is consistent with the existing observation that LSTM encoders trained with natural language are better at memorizing the inputs than ones trained with randomly sampled data (Liu et al., 2018).",
"In these cases, the downstream performance and the amount of the encoded contextual information seem to be correlated.",
"However, this trend is not as clear when comparing the structured artificial languages.",
"For example, the Nesting Dependency encoders perform the best for the downstream tasks among the structured artificial languages but do not necessarily in the probing task (Figure 4b and 4e).",
"The nesting structure seems to facilitate encoders to remember the local context with MLM.",
"The difference between the Nesting and Flat languages is striking in Figure 4f.",
"The Nesting encoders are consistently better at capturing the local contextual information (at positions 2 2 ) than their flat counterparts, which may explain the better performance of the Nesting encoders in dependency parsing (Figure 3), given that the local contextual information is particularly important to predict the syntactic characteristics of words (Levy and Goldberg, 2014; Ri and Tsuruoka, 2020).",
"In this paper, we studied what kind of structural properties in pretraining data is useful to train encoders for natural language tasks.",
"We have found 7309",
"that to achieve decent results, L1 needs at least statistical dependency in a sentence (4), and having the head-to-tail dependency with the nesting structure is further beneficial (4 and 5).",
"The probing experiment in 6 suggests that the encoders trained with languages with the above characteristics are good at capturing the positions and identities of the context words.",
"From these observations, we suggest a tentative answer to the initial research question: what knowledge in pretrained encoders are transferred across different languages?",
"That is position-aware context dependence of language, in other words, tokens in a sequence can be characterized by its neighbor tokens at specific positions .",
"We think that it can explain the success of transferring the encoder across languages to some extent.",
"To solve natural language tasks, it is often useful to characterize words in a sentence by the words around them.",
"For example, to understand the semantics of a sentence, it would be useful to look for the subject by looking for a noun that precedes the word is ; to parse a sentence, a word can be identified as a noun because it follows the article the .",
"If the encoder computes the output representation of a word in a sentence by aggregating the information from its surrounding words, that should be a useful inductive bias to solve most NLP tasks in any language.",
"Also, it is easy to imagine that the knowledge of position-aware context dependence gives a reasonable prior for solving sequence modeling problems in other domains, which may explain the success of cross-modality transfer of language models (Lu et al., 2021; Reid et al., 2022).",
"Of course, we do not expect that the knowledge of position-aware context dependence explains every aspect of the success of cross-lingual transfer.",
"As future work, we need further investigation for a more fine-grained view of the transferred knowledge.",
"Important questions include how much the model size affects the transferability of the encoder or if there is any difference in the knowledge transferred among different downstream tasks.",
"We thank the anonymous insightful comments and improve the paper.",
"Sumanth Doddapaneni, Gowtham Ramesh, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra.",
"2021.",
"A Primer on Pretrained Multilingual Language Models.",
"ArXiv , abs/2107.00676.",
"Timothy Dozat and Christopher D. Manning.",
"2017.",
"Deep Biaffine Attention for Neural Dependency Parsing.",
"In International Conference on Learning Representations .",
"Philipp Dufter and Hinrich Schtze.",
"2020.",
"Identifying Elements Essential for BERT's Multilinguality.",
"In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing .",
"Sepp Hochreiter and Jrgen Schmidhuber.",
"1997.",
"Long Short-Term Memory.",
"Neural Computation , 9:1735 1780.",
"Dieuwke Hupkes and Willem Zuidema.",
"2018.",
"Visualisation and 'Diagnostic Classifiers' Reveal how Recurrent and Recursive Neural Networks Process Hierarchical Structure (Extended Abstract).",
"In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18 .",
"Stephen Mayhew Karthikeyan K, Zihan Wang and Dan Roth.",
"2020.",
"Cross-Lingual Ability of Multilingual BERT: An Empirical Study.",
"In International Conference on Learning Representations .",
"Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky.",
"2018.",
"Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context.",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics .",
"Yi-An Lai, Garima Lalwani, and Yi Zhang.",
"2020.",
"Context Analysis for Pre-trained Masked Language Models.",
"In Findings of the Association for Computational Linguistics: EMNLP 2020 .",
"Omer Levy and Yoav Goldberg.",
"2014.",
"Dependency-Based Word Embeddings.",
"In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics .",
"Nelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, and Noah A. Smith.",
"2018.",
"LSTMs Exploit Linguistic Attributes of Data."
] | [
"objective",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"method",
"method",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"method",
"other",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Deep reinforcement learning is a promising approach to training a dialog manager, but current methods struggle with the large state and action spaces of multi-domain dialog systems.",
"Building upon Deep Q-learning from Demonstrations (DQfD), an algorithm that scores highly in difficult Atari games, we leverage dialog data to guide the agent to successfully respond to a user's requests.",
"We make progressively fewer assumptions about the data needed, using labeled, reduced-labeled, and even unlabeled data to train expert demonstrators.",
"We introduce Reinforced Fine-tune Learning, an extension to DQfD, enabling us to overcome the domain gap between the datasets and the environment.",
"Experiments in a challenging multi-domain dialog system framework validate our approaches, and get high success rates even when trained on out-of-domain data.",
"The dialog manager (DM) is the brain of a task-oriented dialog system.",
"Given the information it has received or gleaned from a user, it decides how to respond.",
"Typically, this module is composed of an extensive set of hand-crafted rules covering the decision tree of a dialog (Litman and Allen, 1987; Bos et al., 2003).",
"To circumvent the high development cost of writing and maintaining these rules there have been efforts to automatically learn a dialog manager using reinforcement learning (RL; Walker 2000; Young et al. 2013).",
"RL solves problems of optimal control where past predictions affect future states making it well-suited to dialog management, in which a misstep by the agent can throw the whole dialog off course.",
"But using RL to train a dialog manager is not straightforward, and is often hindered by large dialog state spaces and sparse rewards (Gao et al., 2019).",
"Neural network-based deep RL (Mnih et al., 2015) mitigates the problem of large state spaces (Fatemi et al., 2016; Li et al., 2017) but it still struggles when the DM has to choose a response or action across multiple domains (e.g. hotel and flight booking).",
"In addition, deep RL performs poorly without regular feedback or reward on the correctness of its decisions.",
"In a dialog there is no obvious way to automatically quantify the appropriateness of each response, so RL training environments for dialog managers usually wait until conversation-end before assigning a reward based on whether the user's task, or goal , was completed.",
"An established way to deal with these difficul-ties is to guide the dialog manager with expert demonstrations during RL training (Lipton et al., 2018; Gordon-Hall et al., 2020), a high-level illustration of which is shown in Figure 1. This approach, however, requires a rule-based oracle to provide a suitable system response given a dialog state, and does not exploit the knowledge contained in the growing number of dialog datasets (Budzianowski et al., 2018; Rastogi et al., 2019).",
"In this paper, we address two key-questions that arise when training RL dialog agents with expert demonstrations:",
"(i) Can we move away from rule-based experts and use weaker, cheaper demonstrations to guide the RL dialog manager?",
"(ii) Can we exploit information gathered during RL training to improve the demonstrator and bridge the domain gap between dialog data and the RL environment?",
"To answer the first question, we explore three methods based on Deep Q-learning from Demonstrations (DQfD; Hester et al. 2017) that use trained experts derived from progressively weaker data.",
"Our first and strongest expert is a Full Label Expert (FLE) trained on a labeled, in-domain dataset to predict the next system response.",
"Second, we train a Reduced Label Expert (RLE) to predict the type of the next system response, but not its exact nature.",
"Finally our third expert is a No Label Expert (NLE) that does not rely on any annotation at all, but is instead trained on unlabeled user utterance and agent response sentences.",
"We show that all three experts can be used to successfully train RL agents, and two of them even allow us to train without expensive and often hard to come-by fully annotated in-domain dialog datasets.",
"We address our second key question how to improve the experts during RL training by presenting R einf o rced F ine-tune L earning (RoFL), a fine-tuning algorithm inspired by Dataset Aggregation (DAgger; Ross et al. 2011).",
"RoFL bridges the domain gap between dialog data and the RL environment by using the dialog transitions generated during training to update the expert's weights, adapting the previously learned knowledge to the learning environment.",
"Our experiments show that RoFL training improves demonstrations gathered from the employed experts, giving a boost in RL performance and hastening convergence.",
"Our work is closely related to research in using expert demonstrations to guide reinforcement learning dialog managers.",
"Lipton et al. (2018) spike the deep Q-network (DQN; Mnih et al. 2015) replay buffer with a few successful demonstrations from a rule-based dialog manager.",
"Gordon-Hall et al. (2020) extend this approach and apply Deep Q-learning from Demonstrations (DQfD) to dialog, prefilling a portion of the buffer with expert transitions and encouraging the agent to imitate them by adding an auxiliary term to the DQN loss.",
"Demonstrations are not the only way to incorporate external expertise into the dialog manager.",
"One alternative is to use supervised learning to train a neural network policy on an in-domain dialog dataset, and then fine-tune it with policy-gradient RL on a user-simulator (Su et al., 2016; Williams et al., 2017; Liu and Lane, 2017).",
"Liu et al. (2018) fine-tune their RL policy on human rather than simulated users.",
"Another, parallel, approach to RL-based DMs aims to increase the frequency of meaningful rewards.",
"Takanobu et al. (2019) use inverse RL to learn a dense reward based on a dialog corpus, while Lu et al. (2019) decompose the task into subgoals that can be regularly assessed.",
"Weak demonstrations have been used outside of dialog system research to tackle RL environments with large state spaces and sparse rewards.",
"Ay-tar et al. (2018) train an expert to imitate YouTube videos of people playing challenging Atari games and exceed human-level performance.",
"Salimans and Chen (2018) beat their score on Montezuma's Revenge using only a single human demonstration, resetting the environment to different states from the expert trajectory.",
"However we believe our work is the first to explore the use of weak demonstrations for DQfD in a dialog environment.",
"RoFL, our proposed fine-tuning method, is inspired by DAgger (Ross et al., 2011), an iterative imitation learning algorithm that incorporates feedback from an expert to improve the performance of a policy.",
"DAgger requires an on-line expert that can be queried at any time, and which bounds the policy's performance.",
"If the expert is suboptimal the policy will be too.",
"Chang et al. (2015) lift this restriction, allowing the policy to explore the search space around expert trajectories, but their method (LOLS) does not incorporate RL policy updates as we do.",
"Training a dialog manager or agent with reinforcement learning involves exposing it to an environment that assigns a reward to each of its actions.",
"This environment consists of a database that the DM can query, and a user-simulator that mimics a human user trying to achieve a set of goals by talking to the agent.",
"The more user goals the agent satisfies, the higher its reward.",
"Given the current state s t of the dialog, the agent chooses the next system action a t according to a policy , a t = ( s t ) , and receives a reward r t .",
"The expected total reward of taking an action a in state s with respect to is estimated by the Q-function: Q ( s, a ) = E (cid:2) T t (cid:88) k =0 k r t + k | s t = s, a t = a (cid:3) (1) ( s ) = arg max a AQ ( s, a ) (2) where T is the maximum number of turns in the dialog, t is the current turn, and is a discount factor.",
"The policy is trained to find the optimal Q-function Q ( s, a ) with which the expected total reward at each state is maximized.",
"( s ) is the optimal policy obtained by acting greedily in each state according to Q (Sutton and Barto, 2018).",
"Deep Q-network (DQN; Mnih et al. 2015) approximates Q ( s, a ) with a neural network.",
"The agent generates dialogs by interacting with the environment, and stores state-action transitions in a replay buffer in the form ( s t , a t , r t , s t +1 ).",
"Rather than always acting according to its policy , an (cid:15) -greedy strategy is employed in which the agent sometimes takes a random action according to an exploration parameter (cid:15) .",
"Transitions aggregated in the replay buffer are sampled at regular intervals and used as training examples to update the current estimate of Q ( s, a ) via the loss: y t = r t + max a (cid:48) Q ( s t +1 , a (cid:48) ; (cid:48) ) (3) L ( Q ) = ( y t Q ( s t , a t ; )) 2 (4) where (cid:48) are the fixed parameters of a target network which are updated with the current network parameters every steps, a technique which improves the stability of DQN learning.",
"Deep Q-learning from Demonstrations (DQfD; Hester et al. 2017), an extension to DQN, uses expert demonstrations to guide the agent.",
"DQfD, prefills a portion of the replay buffer with transitions generated by the expert.",
"The agent learns to imitate these demonstrations by augmenting L ( Q ) with an auxiliary loss term L aux ( Q ) : L DQfD ( Q ) = L ( Q ) + L aux ( Q ) (5) The term L aux depends on the expert used to provide demonstrations.",
"For each of our three experts we will define a different auxiliary loss.",
"It has been shown that DQfD successfully trains a dialog manager when its demonstrations come from either a rule-based, or strong pre-trained expert (Gordon-Hall et al., 2020).",
"To avoid writing rules, and to exploit the knowledge contained in external datasets, we expand on previous work and adapt DQfD for use with three progressively weaker and cheaper experts.",
"Furthermore, we introduce our RoFL algorithm, describing how we fine-tune the expert during RL training.",
"Full Label Expert We define a Full Label Expert (FLE) as a classifier trained on a human-to-human in-domain dialog dataset to predict, given the conversation state, the next action.",
"For such an expert, the action space of the dataset corresponds to the actions in the RL environment and, as a result, we can use the original DQfD large margin classification term as an auxiliary loss: L aux ( Q ) = max a A [ Q ( s, a ) + (cid:96) ( a E , a )] Q ( s, a E ) (6) where a E is the action the expert took in s , and (cid:96) ( a E , a ) is 0 when the agent's chosen action is the same as the action taken by the expert demonstrator, and a positive constant c otherwise: (cid:96) ( a E , a ) = (cid:40) 0 , if a = a E c, otherwise (7) This FLE approach is similar to the data-driven expert introduced by Gordon-Hall et al. (2020).",
"Reduced Label Expert A Full Label Expert is trained on fully-annotated in-domain data, but this is lacking for many domains, and is expensive to collect and label from scratch (Shah et al., 2018).",
"However, although existing dialog datasets often differ in annotation, many share high-level system labels: inform and request .",
"inform actions denote that the system provides information; request actions that the system asks for it.",
"A system utterance from a hotel-booking dataset, e.g. The Le Grand Hotel costs $48 per night, how many nights do you want to stay?, could be labelled: [ hotel-inform-price , hotel-request-duration ], while a sentence from a taxi-booking dataset, e.g. Please let me know the dropoff location., could be annotated: taxi-request-dropoff .",
"Although Figure 2: Reduced Label Expert (RLE) architecture.",
"the domain and type of information are different, all actions A in either dataset can be broadly partitioned into sets A reduced A according to whether they inform , request , or do both.",
"We introduce a Reduced Label Expert (RLE) to take advantage of this common annotation format across diverse datasets.",
"The RLE is a multi-label classifier that predicts the high-level annotation set A reduced or reduced label of the next system action given the list s NL of the last few utterances in the dialog.",
"The RLE is trained on a dialog dataset stripped down to inform , request , and other (for all other actions) annotations.",
"Its architecture is outlined in Figure 2. The previous user utterances are passed through a recurrent encoder, for example an RNN.",
"The final hidden state of the encoder is then passed through a multi-label classifier which uses the sigmoid function to score each reduced label.",
"Once trained, we use the RLE to guide the dialog manager during DQfD training.",
"First we divide all environment actions into reduced label sets.",
"For example, the inform set would consist of the environment actions that pertain to providing information to the user.",
"Unlike the FLE, the RLE does not predict exact actions, so we uniformly sample an environment action from the predicted reduced label set a E A reduced to use as an expert demonstration when prefilling the replay buffer.",
"For example, if the RLE predicts request the expert might take the action request-hotel-price .",
"In order to use the expert in network updates, we reformulate the (cid:96) term in the DQfD's auxiliary loss to account for the expert's reduced label prediction: (cid:96) ( A rdcd , s t ) = (cid:40) 0 , if ( s t ) A rdcd c, otherwise (8) Figure 3: No Label Expert (NLE) architecture.",
"The agent is penalized by a positive constant term c if the action predicted by its current policy is not in the set of actions licensed by the RLE.",
"No Label Expert While the RLE enables the use of data not annotated for the target dialog environment, it still requires labeled dialog data.",
"This raises the question: can we employ an expert that does not rely on annotations at all?",
"To address this challenge, we propose a No Label Expert (NLE) that uses an unannotated dialog dataset consisting of pairs of sentences ( s u , s a ) , representing user utterances and the corresponding agent responses.",
"The goal of the NLE is to predict whether, for a given pair of sentences, s a is an appropriate response to s u .",
"In this regard, it resembles models used to predict textual inference (Bowman et al., 2015).",
"The NLE architecture is outlined in Figure 3. The previous user utterance and a verbalized system response generated by an NLG component are consecutively passed through a sentence embedder.",
"Their encodings are then concatenated and passed through a network which scores how appropriate the response is given the utterance.",
"The NLE is trained on unannotated human-to-human dialog datasets which are formatted into pairs of user utterances and agent responses.",
"We treat these as positive instances, making the tacit assumption that in the data the agent's reply is always relevant given a user utterance.",
"As a result, the data lacks negative examples of irrelevant agent responses.",
"This can be mitigated by arti-ficially creating negative pairs ( s u , s (cid:48) a ) from the original data by pairing each user utterance s u with random agent sentences s (cid:48) a , drawn uniformly from all agent responses that were not observed for the original s u .",
"Given such a dataset of positive and negative user-agent interactions, we train an NLE that learns to output 1 if a system response corresponds to the last user utterance, and 0 if it does not.",
"Once trained, we use this NLE to guide the DQfD dialog manager.",
"When prefilling the replay buffer with expert demonstrations, we calculate the set A nolabel of all actions a whose verbalization s a leads to an NLE output that exceeds a threshold when taken as a response to the last user utterance s u .",
"We then use a random action from this set a E A nolabel as the expert demonstration and place it in the replay buffer.",
"We use a similar (cid:96) term in the auxiliary loss to the Reduced Label Expert, which penalizes the agent if the action a predicted by its current policy is not in the set of actions licensed by the expert, i.e., if a (cid:54) A nolabel : (cid:96) ( A nolbl , s t ) = (cid:40) 0 , if ( s t ) A nolbl c, otherwise (10) A nolbl = { a | NLE ([ s u ; s a ]) > } (11) where is between 0 and 1 and c is a positive constant penalty factor.",
"Domain Adaptation through Fine-tuning We train our experts on dialog datasets created by humans talking to humans.",
"This data is necessarily drawn from a different distribution to the transition dynamics of an RL environment.",
"In other words, there is a domain gap between the two.",
"We seek to narrow this gap by introducing R einf o rced F ine-tune L earning (RoFL): For d pretraining steps, transitions are generated according to a weak expert policy , where the weak expert has parameters .",
"If a transition's reward exceeds a threshold th , we treat it as in-domain data and add it to a buffer D .",
"Every steps the expert is fine-tuned on the in-domain data gathered so far and its parameters are updated.",
"At the end of pretraining the final fine-tuned expert's weights are frozen and its policy is used to generate demonstration transitions for another d steps.",
"This ensures that the permanent, demonstration portion of the replay buffer is filled with transitions from the fine-tuned expert.",
"RoFL is agnostic to the expert in question and we apply it to each of our methods described above.",
"Algorithm 1: Reinforced Fine-tune Learning Inputs : expert network with pre-trained parameters , fine-tune interval k , a reward threshold th , number of pre-training steps d , target network update rate , training interval Initialize : random Q-network weights , random target network weights (cid:48) , replay buffer B = , fine-tune data set D = for t 1 , 2 , ...d do Get conversational state s t Sample action from expert policy a E ( s t ) Take action a E and observe ( s t +1 , r t ) B B ( s t , a E , r t , s t +1 ) if r t > th then D D ( s t , a E ) if t mod k = 0 then arg min (cid:48) (cid:80) ( s,a E ) D a E log (cid:48) ( s ) endif t mod = 0 then train() endfor t 1 , 2 , ... do Get conversational state s t Sample action from behavior policy a t (cid:15)Q ( s t ) Take action a t and observe ( s t +1 , r t ) B B ( s t , a t , r t , s t +1 ) if t mod = 0 then train() endProcedure train() Sample transitions from B Calculate loss L ( Q ) Perform a gradient step to update if t mod = 0 then (cid:48) 5 Experimental Setup We evaluate our weak experts in ConvLab (Lee et al., 2019), a multi-domain dialog framework based on the MultiWOZ dataset (Budzianowski et al., 2018).",
"In ConvLab, the dialog manager's task is to help a user plan and book a trip around a city, a problem that spans multiple domains ranging from recommending attractions for sightseeing, to booking transportation (taxi and train) and hotel accommodation.",
"ConvLab supports RL training with an environment that includes an agenda-based user-simulator (Schatzmann et al., 2007) and a database.",
"The agent has a binary dialog state that encodes the task-relevant information that the environment has provided so far.",
"This state has 392 elements yielding a state space of size 2 392 .",
"In each state there are 300 actions that the DM can choose between, corresponding to different system responses when verbalized by the Natural Language Generation (NLG) module.",
"These actions are composite and can consist of several individual informs and requests.",
"For example, [ attraction-inform-name , attraction-request-area ] is one action.",
"We train our DMs on the exact dialog-acts produced by the user-simulator, avoiding error propagation from a Natural Language Understanding (NLU) module.",
"We use ConvLab's default template-based NLG module to verbalize system actions when using the RLE and NLE.",
"First, we experiment with experts trained on the in-domain MultiWOZ dataset 1 .",
"For the FLE we train on the full annotations; for the RLE we reduce the annotations to minimal inform , request , other labels; and for the NLE we only use the unannotated text.",
"We also experiment with experts trained on out-of-domain (OOD) data.",
"To this end, we combine two datasets: Microsoft E2E (Li et al., 2018) 10,087 dialogs composed of movie, restaurant and taxi booking domains and Maluuba Frames (El Asri et al., 2017) which is made up of 1,369 dialogs from the flight and hotel booking domains.",
"While three of these domains are also in MultiWOZ, the specifics of the conversations are different.",
"Our Full Label Expert is a feedforward neural network (FFN) with one 150 dimensional hidden layer, ReLU activation function and 0.1 dropout which takes the current dialog state as input.",
"The Reduced Label Expert uses the last utterance in the conversation as context, which is embedded with 300 dimensional pre-trained GloVe embeddings (Pennington et al., 2014), then passed through a uni-directional 128 dimensional hidden layer GRU (Cho et al., 2014) from which the last hidden state is used to make a multi-label prediction.",
"Finally, our No Label Expert uses pre-trained BERT base-uncased (Devlin et al., 2018) to embed and concatenate user and agent utterances into 1536-dimensional input vectors, and employs a feedforward neural network with SELU activations (Klambauer et al., 2017) to predict whether the agent's response is an appropriate answer to the last user utterance.",
"Note that the RLE and NLE both take natural language as input yet use different word embeddings.",
"We conducted preliminary experiments to evaluate the efficacy of BERT and GloVe embeddings for the respective expert training tasks.",
"While we found that the NLE greatly benefited from BERT over GloVe, the RLE performance did not differ between embeddings.",
"Since GloVe vectors yield a significant runtime advantage over the course of RL training, we used GloVe 1 We use MultiWOZ2.0 with ConvLab user annotations for the RLE, while employing slower BERT embeddings for the NLE due to the significantly better performance.",
"For RL training of our DQfD agents, we use a prioritized replay buffer (Schaul et al., 2015) with a maximum buffer size of 100,000 transitions.",
"We follow the DQfD setup of (Gordon-Hall et al., 2020) and apply L2 regularization with a weight of 10 5 and drop the n-step term from the original DQfD loss.",
"All RL networks have a 100 dimensional hidden layer, a dueling network structure, and use the double DQN loss (Wang et al., 2015; Van Hasselt et al., 2016).",
"All our networks are trained with the RAdam optimizer (Liu et al., 2019) with a learning rate of 0.01.",
"For a complete list of hyperparameters used for our experiments refer to the attached Supplemental Material.",
"We slightly alter the RoFL algorithm presented in 4 to account for the fact that ConvLab only rewards the agent based on whether it successfully completed the task at the end of a dialog (inter-mediate steps are uniformly assigned a -1 step penalty).",
"Rather than immediately adding transitions to the fine-tune dataset D , we wait until the end of a conversation and check if its total reward exceeds a threshold th .",
"If it does, we assume that all transitions in that conversation are perfect, and add them to D .",
"For our experiments we empirically determine th , and set it to 70.",
"We train all our RL-based dialog managers for 3 sessions of 2,500,000 steps, and anneal the exploration parameter (cid:15) over the first 500,000 to a final value of 0.01.",
"Results and training graphs in the following section are the average of these 3 sessions.",
"Each session takes under 10 hours on one NVIDIA GeForce RTX 2080 GPU.",
"We compare our approach to supervised and reinforcement learning baselines.",
"Table 1 shows evaluation results over 1,000 dialogs for baseline and DQfD dialog managers using our three proposed experts inside ConvLab's evaluation environment.",
"The Rule baseline is a rule-based DM included in ConvLab.",
"FFN is a supervised learning baseline DM that directly uses the same in-domain classifier introduced in Section 4 to predict the next action.",
"It is trained on MultiWOZ, and achieves 21.53% accuracy on the test set.",
"Deep Q-network (DQN) is an RL agent which uses the hyperparameters described Turns Inform Match Success Rule 5.25 94.00 100 100 FFN 11.67 81.00 52.63 61.00 DQN 18.79 28.50 11.07 11.85 PPO 5.79 65.67 72.51 63.27 RE 5.33 92.33 97.07 98.33 FLE 6.81 89.67 94.12 91.67 RLE 7.64 81.33 89.34 85.03 NLE 7.20 84.67 85.31 86.83 FFN-ft 9.62 83.00 90.79 76.00 FLE+R 6.75 90.00 94.57 92.47 RLE+R 6.38 88.67 90.62 92.93 NLE+R 6.89 89.00 92.68 91.00 Table 1: Evaluation results of baseline systems (top) as well as DQfD with rule-based and our weak expert approaches trained in-domain .",
"in Section 5 except that it does not use demonstrations.",
"We also compare against an agent trained with Proximal Policy Optimization (PPO; Schulman et al. 2017), an actor-critic based RL algorithm widely used across domains.",
"We use the PPO hyperparameters laid out in Takanobu et al. (2019).",
"The middle third of Table 1 summarizes results for DQfD agents trained with rule-based (RE), Full Label (FLE), Reduced Label (RLE), and No Label (NLE) experts.",
"The bottom third shows results for our weak expert methods trained with RoFL (+R).",
"We follow Takanobu et al. (2019) and report evaluation results in terms of average dialog length (Turns), F1-Score of the information provided that was requested by the user, Match Rate of user-goals, and Success Rate the percentage of dialogs in which all information has been provided and all booking information is correct.",
"As expected, the Rule agent written specifi-cally for ConvLab almost perfectly satisfies user goals.",
"FFN is considerably worse, with a 40% lower Success Rate, and half the Match Rate of the rule-based agent.",
"For standard DQN, the en-vironment's large state and action spaces pose a serious challenge, and it barely exceeds 11% Success and Match Rates.",
"PPO achieves a respectable 63% success rate, outperforming the FFN baseline.",
"Crucially, all DQfD agents significantly outperform the FFN, DQN, and PPO baselines, with the RE and FLE approaches coming within 3% and 6% respectively of the Rule agent's performance.",
"In the remainder of this section we will further analyze and compare the performances of DQfD agents with progressively weak demonstrations using in-domain and out-of-domain experts, as well as those trained with and without RoFL.",
"In-Domain Weak Expert DQfD We train in-domain reduced and no label experts on the MultiWOZ dataset.",
"The RLE scores 77 F1 on the reduced label test set, while the NLE manages 71 F1 of predicting whether an agent response belongs to a user utterance on the unannotated test set.",
"As shown in Table 1 (middle), the scores of DQfD agents with in-domain experts follow a clear trend corresponding to the type of demonstration data.",
"After 2.5 million training steps, the FLE with the most informative demonstrations clearly outperforms both RLE and NLE methods, while the latter two perform similarly.",
"Figure 4 shows graphs of the average Success Rates of DQN, PPO, and our proposed DQfD agents over the course of training.",
"DQN struggles to find successful dialog strategies, although its Success Rate slowly inclines and seems to gain some traction towards the end of the maximum training steps.",
"To begin with PPO learns rapidly, faster than RLE and NLE, but its Success Rate plateaus in the 60% range; it seems to learn to end dialogues too early.",
"Both RE and FLE start with performance advantages, due to their high quality expert demonstrations.",
"Over time, RE even approaches the Success Rate of its rule-based expert demonstrator.",
"The FLE consistently outperforms approaches with weaker demonstrations, quickly exceeding the Success Rate of the underlying FFN after an early dip when the agent's exploration parameter (cid:15) is relatively high.",
"The NLE comfortably outperforms the Reduced Label Expert throughout training, with the RLE only overtaking it at the end.",
"We believe that this strong relative performance makes sense if we consider that, during pre-training, the NLE acts according to a more fine-grained action set than the RLE.",
"While the RLE partitions the actions according to their reduced label, these sets are broad and contain many irrelevant responses, whereas Figure 4: Average Success Rates of our methods trained on in-domain data over the course of 2.5 million training steps.",
"the NLE acts randomly according to a smaller, potentially higher-quality, set of actions which have high correspondence scores.",
"Finally, the graphs in Figure 4 indicate that none of the agents fully converge after the training step limit, although RE and FLE plateau.",
"It is possible that after significantly more steps even DQN would converge to the ceiling performance of the Rule DM but all our methods are considerably more sample efficient.",
"RoFL Training Table 1 (bottom) shows evaluation results of DQfD agents trained with RoFL fine-tuning.",
"All weak experts improve with RoFL, especially the RLE which records an 8% jump in Success Rate.",
"We also include the performance of the final fine-tuned FFN classifier, whose improvement over its original incarnation (15% higher Success Rate) demonstrates that fine-tuning helps narrow the domain gap between data and the RL environment.",
"In addition to Table 1, Figure 5 shows DM performance over the course of training.",
"RoFL dramatically improves both the performance and convergence rate of the RLE, indicating a domain gap between the reduced label data and the sets of environment actions.",
"RoFL improves the FLE early in training, but this gain tails off after 1 million steps possibly due to the relative strength of the expert.",
"The trend for NLE-R is more ambiguous, falling behind its standard DQfD counterpart before catching up to its performance.",
"RoFL seems Figure 5: Average Success Rate of RL agents over the course of 2.5 million training steps, with and without RoFL fine-tuning.",
"Out-of-Domain Weak Experts The weakest experts that we evaluate were trained on out-of-domain data.",
"The OOD RLE, trained on Microsoft E2E and Frames, scores 53 F1 on a reduced label MultiWOZ test set, while the OOD NLE, trained on the same datasets, unannotated, only manages 41 F1 on the test set.",
"Results for OOD approaches trained with and without RoFL are shown in Table 2, with training graphs in Figure 6.",
"Even without RoFL, the OOD RLE guides the DQfD agent to performance rates comparable to its in-domain counterpart.",
"This indicates that even reduced labels learned on the OOD data provide the agent with enough clues to correctly satisfy some user goals.",
"With RoFL, the OOD RLE surpasses the Success Rate of the in-domain system, and is only marginally worse than the fine-tuned in-domain expert.",
"This shows that with RoFL we can learn a competitive DM in a challenging multi-domain environment while only using unannotated data from other dialog tasks.",
"RoFL leads to the greatest gain with the OOD NLE.",
"Without fine-tuning, it scores a measly 26% Success Rate (although it should be noted that this is still higher than DQN), compared to 86% when the expert is trained on in-domain sentences.",
"This illustrates the clear difference between the lan-Figure 6: Average Success Rates of Reduced and No Label experts trained on out-of-domain data over the course of 2.5 million training steps, with and without RoFL fine-tuning.",
"guage in the inand out-of-domain data.",
"With RoFL, OOD NLE is able to update its weights to adapt to the language of the environment, outperforming the unaltered expert's Success Rate by 35%.",
"This improvement holds true throughout training, as shown in Figure 6.",
"The graph also shows that OOD NLE+R has not started to converge after 2.5 million training steps; it is likely that with more training it would perform similarly to the in-domain NLE DM.",
"In this paper, we have shown that weak demonstrations can be leveraged to learn an accurate dialog manager with Deep Q-Learning from Demonstrations in a challenging multi-domain environment.",
"We established that expert demonstrators can be trained on labeled, reduced-labeled, and unlabeled data and still guide the RL agent by means of their respective auxiliary losses.",
"Evaluation has shown that all experts exceeded the performance of reinforcement and supervised learning baselines, and in some cases even approached the results of a hand-crafted rule-based dialog manager.",
"Furthermore, we introduced R einf o rced F ine-tune L earning (RoFL) a DAgger-inspired extension to DQfD which allows a pre-trained expert to adapt to an RL environment on-the-fly, bridging the domain-gap.",
"Our experiments show that RoFL training is beneficial across different sources of demonstration data, boosting both the rate of convergence and final system performance.",
"It even enables an expert trained on unannotated out-of-domain data to guide an RL dialog manager in a challenging environment.",
"In future, we want to continue to investigate the possibility of using even weaker demonstrations.",
"Since our No Label Expert is trained on unannotated data, it would be interesting to leverage large and noisy conversational datasets drawn from message boards or movie subtitles, and to see how RoFL training fares with such a significant domain gap between the data and the RL environment.",
"We thank Ignacio Iacobacci and Gerasimos Lam-pouras for their valuable suggestions."
] | [
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"result",
"abstain",
"objective",
"method",
"method",
"result",
"result",
"abstain",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"result",
"other"
] |
[
"In the era of pre-trained language models, Transformers are the de facto choice of model architectures.",
"While recent research has shown promise in entirely convolutional, or CNN, architectures, they have not been explored using the pre-train-fine-tune paradigm.",
"In the context of language models, are convolutional models competitive to Transformers when pre-trained?",
"This paper investigates this research question and presents several interesting findings.",
"Across an extensive set of experiments on 8 datasets/tasks, we find that CNN-based pre-trained models are competitive and outperform their Transformer counterpart in certain scenarios, albeit with caveats.",
"Overall, the findings outlined in this paper suggest that conflating pre-training and architectural advances is misguided and that both advances should be considered independently.",
"We believe our research paves the way for a healthy amount of optimism in alternative architectures.",
"In the modern era of pre-training, there appears to be an unbreakable tie between Transformer architectures (Vaswani et al., 2017) and pre-trained language models.",
"Models such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and T5 (Raffel et al., 2019) have all adopted Transformers as their underlying architecture.",
"As a matter of fact, there are barely any recent pre-trained models not based on Transformers.",
"2015; Chidambaram et al., 2018; Liu et al., 2020; Qiu et al., 2020), modern pre-trained language modeling started with models like ELMo (Peters et al., 2018) and CoVE (McCann et al., 2017) which are based on recurrent (e.g. LSTM (Hochreiter and Schmidhuber, 1997)) architectures.",
"Although they were successful, research using these architectures dwindled as Transformers stole the hearts of the NLP community, having, possibly implicitly, been perceived as a unequivocal advancement over its predecessors.",
"Recent work demonstrates the promise of entirely convolution-based models (Wu et al., 2019; Gehring et al., 2017) and questions the necessity of self-attentive architectures like Transformers.",
"For example, in (Wu et al., 2019), the proposed convolutional seq2seq models outperform Transformers on a series of canonical benchmarks such as machine translation and language modeling.",
"From these findings emerge a rather natural line of questioning should we consider pre-trained models beyond Transformers?",
"Despite early success, the relevance of convolutional models in the era of pre-trained language models remains an open question.",
"To the best of our knowledge, convolutional architectures have not yet been rigorously evaluated under the pre-train-fine-tune paradigm.",
"This is the primary purpose of this work.",
"Concretely, this paper seeks to empirically validate whether pre-trained convolutions are competitive with pre-trained Transformers across a range of tasks.",
"The interaction between pre-training schemes and model architectures is an under-studied topic.",
"Are only Transformers able to capitalize on the benefits of pre-training?",
"If we use a different architectural inductive bias, would there also be a substantial gain unlocked by pre-training?",
"Are pretrained convolutions better in particular scenarios?",
"This paper investigates these questions.",
"There are a number of obvious benefits of convolution-based models.",
"Firstly, convolutions do not suffer from the quadratic memory complexity of self-attention a problem significant enough that it spawned the creation of the entirely new category of efficient Transformer architectures (Tay et al., 2020b, 2021).",
"Secondly, convolutions operate locally and do not rely on positional encodings as an order signal to the model.",
"That said, convolutions also come with a slew of downsides.",
"For example, being unable to access global information means such models are unable to perform a form of cross-attention across multiple sequences.",
"We dive into the details of this more in subsequent sections.",
"In this paper, we present a pre-trained convolutional sequence-to-sequence, or Seq2Seq, model.",
"We train our convolutional model using span-based sequence-to-sequence denoising objectives similar to those employed in T5 (Raffel et al., 2019).",
"We evaluate a variety of convolutional variants (e.g., dilated, lightweight, dynamic (Wu et al., 2019),",
"etc.) under both raw (no pre-training) and pre-train-fine-tune paradigms.",
"Our goal is to understand the true competitiveness of convolutional architectures in the era of pre-training.",
"We show that pre-trained convolutions are competitive against pre-trained Transformers via a set of experiments on a potpourri of NLP tasks, like toxicity detection, sentiment classification, news classification, query understanding and semantic parsing/compositional generalization (Kim and Linzen, 2020).",
"Moreover, we find that pretrained convolutions can outperform, in terms of model quality and training speed, state-of-the-art pre-trained Transformers (Raffel et al., 2019) in certain scenarios.",
"However, to provide a balanced perspective, we also describe scenarios where pretrained convolutions do not perform well and may be deemed unsuitable.",
"Contributions Overall, the main contributions of this paper can be summarized as follows: We perform a comprehensive empirical evaluation of convolutional Seq2Seq models under the pre-train-fine-tune paradigm.",
"To the best of our knowledge, the competitiveness and relevance of pre-trained convolutions still remains an open question.",
"We make several important observations.",
"Specifically, we find that (1) pre-training helps convolutional models just as much as it helps Transformers, and (2) pre-trained convolutions are competitive alternatives in certain scenarios in terms of model quality and training speed.",
"We conduct extensive experiments across 8 datasets spanning a diverse range of tasks and domains.",
"On 7 out of 8 tasks, we find that pre-trained convolutions outperform a recent state-of-the-art transformer (T5 (Raffel et al., 2019)) with and without pre-training.",
"We examine the speed and operation count (FLOPS) of convolutions versus Transformers and find that convolutions are not only faster but also scale better to longer sequence lengths.",
"Pre-training on a large corpus has become the primary method of learning universal language representations to solve different downstream NLP tasks.",
"The first generation of pre-trained models aimed at learning embedding for words, like Skip-Gram (Mikolov et al., 2013) and Glove (Pen-nington et al., 2014), and quickly developed to learning contextualized representation for words, like ELMO (Peters et al., 2018), GPT (Radford et al., 2018), and BERT (Devlin et al., 2018).",
"This, however, is not the only axis in which pre-trained models have evolved.",
"Different objective functions and various tasks, both supervised and unsupervised, have been explored for pre-training.",
"For instance, CoVe (Mc-Cann et al., 2017) uses machine translation as the pre-training task, ELMO (Peters et al., 2018) and GPT (Radford et al., 2018) use language modeling objectives, BERT (Devlin et al., 2018) uses masked language modeling, T5 (Raffel et al., 2019) and MASS (Song et al., 2019) use Seq2Seq masked language modeling, and XLNet (Yang et al., 2019) utilizes permuted language modeling.",
"In addition to this, BART (Lewis et al., 2019) uses a denoising autoencoder setup during pre-training, where the model takes a partially corrupted input and is trained to recover the original, undistorted input.",
"Some models use a contrastive learning setup during pertaining, like replaced token detection, used by ELECTRA (Clark et al., 2020), and sentence order prediction, used by ALBERT (Lan et al., 2019) and StructBERT (Wang et al., 2019).",
"Another axis where pre-trained models in NLP explored different ideas is model architecture.",
"ELMO (Peters et al., 2018) and CoVe (McCann et al., 2017) used LSTMs as the base model.",
"Later, Transformers (Vaswani et al., 2017) became the de facto architecture of pre-trained NLP models.",
"BERT (Devlin et al., 2018), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) use the Transformer encoder, while GPT (Radford et al., 2018), GPT-2 (Radford et al.), and GPT-3 (Brown et al., 2020) use the Transformer decoder as the backbone.",
"Some pre-trained models are also are based on the encoder-decoder transformer architecture, like T5 (Raffel et al., 2019), MASS (Song et al., 2019), and BART (Lewis et al., 2019).",
"In this paper, we investigate another model architecture variation by studying the power of convolutional neural network as the backbone of pre-trained models for NLP.",
"Convolutions have always been an interesting choice for sequence modeling and NLP applications (Kim, 2014; Bai et al., 2018; Kalchbrenner et al., 2016).",
"Convolutions are lightweight and fast and have many interesting use-cases, notably for lightweight classification.",
"In the era when LSTMs were the workhorses of NLP applications, convolutions were positioned nicely on the pareto frontier of the compute-performance curve.",
"They are fast and lightweight, and unlike Transformers, they do not suffer from quadratic complexity.",
"Our work is also well-aligned with the resurgence of interest in convolutions where (Wu et al., 2019) showed that convolutions can outperform self-attention on several sequence transduction tasks.",
"Moreover, the necessity of the self-attention inductive bias in transformers have been also a subject of recent interest.",
"Synthesizer models (Tay et al., 2020a) showed that transformers can still do pretty well without token-token dot product self-attention and a random attention matrix can perform competitively on certain tasks.",
"This section describes the pre-trained Convolution Model.",
"For most of our experiments, we adopt depthwise separable convolutions (Kaiser et al., 2017; Sifre and Mallat, 2014; Chollet, 2017) which have shown to be fast and efficient variants of the standard convolution.",
"This section introduces Lightweight Depthwise Convolutions (Wu et al., 2019) which forms the backbone of our pre-trained convolution model.",
"Depthwise convolutions convolve independently over every channel.",
"Given an input tensor X of dimensions n d , the depthwise convolution, D ( X, W c, : , i, c ) is defined as: O i,c = k (cid:88) j 1 W c,j X i + j (cid:100) k +12 (cid:101) ) , c (1) where W R d k are the learnable parameters of the layer.",
"O i,c is the output at position i and channel c .",
"The overall output is a tensor of n d of identical shape as the input.",
"L ( . ) are depthwise separable convolutions with (1) softmax-normalized kernels and (2) shared output channels and weight tying.",
"Specifically, this is written as: O Li,c = k (cid:88) j 1 softmax ( W c,j ) X i + j (cid:100) k +12 (cid:101) ) , c (2) where c = cHd .",
"In short, parameters are shared every dH output channels.",
"When H = 1 , this is equivalent to sharing all the weights of all channels.",
"Dynamic Convolutions DY ( . ) are a new form of lightweight convolutions introduced by (Wu et al., 2019).",
"The key idea is to learn position-specific kernels for performing lightweight convolutions.",
"This can be written as: DY = L ( X, f ( X i ) h, : , i, c ) , (3) where f ( . ) is a linear transformation with parameters WQ RH k d that learns a position dependent kernel.",
"We adopt span-based sequence-to-sequence pretraining as per (Raffel et al., 2019).",
"Specifically, given an input sequence, we randomly mask spans of lengths L and replace them with a special sen-tinel token.",
"The pre-training task is then to generate the masked tokens as targets.",
"For example: Inputs: The happy cat sat [mask].",
"and Outputs: on the mat.",
"We implement a Seq2Seq (Sutskever et al., 2014) architecture similar to (Wu et al., 2019).",
"The key difference when compared with Transformer architectures is that we replace the multi-headed self-attention with convolutional blocks.",
"Instead of query-key-value transforms, we use gated linear unit projections following (Wu et al., 2019).",
"Each convolution block be written as: X 1 = WIX (cid:12) sigmoid ( WSX ) , X 2 = ConvBlock ( X 1 ) , X 3 = WO ( X 2 ) , where WI , WS , WO are trainable parameters.",
"We experiment with simple lightweight convolutions, dynamic convolutions and dilated convolutions in our experiments.",
"Following (Wu et al., 2019; Gehring et al., 2017), the encoder-decoder attention remains untouched.",
"The convention follows the backbone Transformer model in which we wrap each submodule with layer normalization and residual connectors.",
"Hence, each Conv block is written as: XA = LayerNorm ( Conv ( X )) + X, XB = LayerNorm ( FFN ( XA ) + XA , where Conv is any of the convolution models that we explore in our experiments.",
"FFN(.) is a two layer feed-forward network with ReLU activations in the middle.",
"The model optimizes the token-wise cross-entropy loss and is trained with teacher forcing.",
"Before we delve into our experiments, we establish a set of research questions and agenda we hope this work aims to bring clarity to.",
"RQ2 : Are convolutional models, pre-trained or otherwise, competitive with Transformer models?",
"When do they perform well?",
"RQ3 : What are the benefits (if any) of using pre-trained convolution models over pretrained Transformers?",
"Are convolutions faster alternatives to self-attention based Transformers?",
"RQ4 : What are the failure modes, caveats and reasons to not use pre-trained convolutions?",
"RQ5 : Are certain convolution variants better than others?",
"This section presents our analysis and results.",
"Our evaluation is based on the following datasets",
"and tasks.",
"This is a four-way classification task.",
"Question Classification We use the TREC fine-grained question classification dataset (Li and Roth, 2002).",
"This task involves classifying questions into 46 fine-grained question categories.",
"Toxicity Detection We use the CIVILCOMMENTS (Borkan et al., 2019) and WIKITOXICSUBTYPES dataset (Wulczyn et al., 2017).",
"Given a piece of short text (originating from social media or wikipedia), the goal is to determine if the content is toxic, i.e., a binary classification task.",
"For this task, we evaluate on both accuracy and F1 score.",
"Sentiment Classification This is a binary classification task that determines the polarity of documents, sentences and/or tweets.",
"We use the IMDb reviews dataset (Maas et al., 2011), Stanford Sentiment Treebank (SST-2) (Socher et al., 2013) dataset, along with Twitter Sentiment140 (S140) (Go et al., 2009) dataset.",
"News Classification This is a task of topic categorization for news articles.",
"We use the AGNews dataset (Zhang et al., 2015).",
"ability of models to generalize compositionally outside of the training distribution.",
"To be specific, it needs be able to handle unseen combinations at test time.",
"For this task, we use the COGS dataset (Kim and Linzen, 2020), a task of generating semantic representation of a given English sentence.",
"For example, A cat smiled cat( x 1 ) AND smile.agent( x 2 , x 1 ).",
"All of the datasets, with the exception of the recent COGS dataset (Kim and Linzen, 2020), are Tensorflow datasets 1 .",
"For each dataset, we evaluate all models with and without pre-training (details in subsequent sec-tions).",
"Table 1 reports the statistics of the datasets used in this paper.",
"Our models are largely based on sequence to sequence models, a paradigm that has demonstrated great success made evident by models such as BART (Lewis et al., 2019) and T5(Raffel et al., 2019).",
"We implement our models in Mesh Tensorflow (MTF) (Shazeer et al., 2018), a library for distributed and efficient parallel model training that has similar API to Tensorflow.",
"We train models that are of base size, which corresponds to 12 layers each in the encoder and decoder, along with 3072 dimensions for the feed-forward layers, a model dimension of 768 and a total of 12 heads.",
"Our Transformer models are largely based on T5 (Raffel et al., 2019), which is considered the current state-of-the-art Transformer model for NLP tasks and hence serves as a strong baseline.",
"For the convolution models, our lightweight convolution 1 https://www.tensorflow.org/datasets/ catalog/overview .",
"and dynamic convolution models have a window size 2 of 7 across all layers, the number of unique depth filters is 2 .",
"For dilated models, we use a filter size of [4 , 4 , 7 , 7 , 15 , 15 , 15 , 15 , 31 , 31 , 31] for our 12 layer convolution model.",
"We pre-train both our convolutional and Transformer models for 524K steps with a batch size of 128 .",
"Given the input sequence length of 512 , this corresponds to 65536 tokens per batch.",
"For pre-training, we use the Colossal Cleaned Com-monCrawl Corpus (C4) (Raffel et al., 2019) dataset which has demonstrated impressive results on downstream tasks.",
"We use the span based seq2seq objective as the pre-training objective as mentioned in earlier sections.",
"The span size is set to 3 and a corruption rate of 15% is adopted.",
"We use the Adafactor optimizer (Shazeer and Stern, 2018) with an inverse square root learning rate scheduler.",
"Each pre-training run is performed using 16 TPU-v3 chips and takes approximately 12 hours to complete for models of base size.",
"We fine-tune the pre-trained models using the following set of hyperparameters: We use a constant learning rate which is tuned amongst { 0 .",
"001 , 0 .",
"0005 , 0 .",
"0001 } .",
"The batch size is generally set to 64 but occasionally set to 32 for smaller datasets.",
"Intuitively, sequence length is task dependent but generally approximately the 90th percentile for each task.",
"We fine-tune for a maximum of 100 K steps and report peak validation performance.",
"Fine-tuning uses the same Adafactor optimizer as during training.",
"We perform fine-tuning on similar hardware, i.e., typically 16 TPUv3 chips are used per fine-tuning job.",
"Table 2 reports results on toxicity detection.",
"On both toxicity detection datasets the pre-trained and no-pre-training (raw) setup, the best models are the dilated convolution models and the dynamic convolution models.",
"In fact, all convolutional models 2 We believe that tuning the hyperparameters of the convolution models can result in even better performance.",
"However, we decided to keep these hyperparameters simple for the start.",
"outperform Transformers on both CivilComments and WikiToxic.",
"Before pre-training, convolutions outperform Transformers by approximately 1 .",
"5 ab-solute percentage points.",
"The gap narrows after pretraining where Transformers see a better gain (e.g., +5 . 1% against +4 . 3% ) from pre-training over convolutions on the CivilComments dataset.",
"However, the converse is true on WikiToxic the only case of performance degradation after pre-training.",
"Overall, on this task, convolutions are competitive to Transformers and outperform them.",
"Results on Sentiment Classification (IMDb, SST-2 and S140) can be found in Table 2.",
"On the IMDb reviews dataset, the best non-pre-trained model is the lightweight convolution model, outperforming the Transformer model.",
"The best pre-trained model is the Transformer model.",
"However, all convolutional models come in close with less than a percentage point gap difference with pre-trained Transformers.",
"On the SST-2 and S140 tasks, we observe that the best models are convolution-based, regardless of whether the model is pre-trained or not.",
"The best non-pre-trained model is the Lightweight Convolution model.",
"For pre-trained models, convolutional models also outperform the pre-trained Transformer.",
"On this task, while most models benefit significantly from pre-training, Transformers seem to benefit slightly more from pre-training.",
"Results on news classification seems to follow similar trends as other benchmarks.",
"Convolutional models outperform Transformers both in non-pre-trained and pre-trained setups.",
"The highest gain from pre-training is obtained from the dilated convolution model.",
"We conduct additional experiments on semantic parsing and compositional generalization.",
"The task is framed as a sequence generation task.",
"We use the recently proposed (Kim and Linzen, 2020) dataset.",
"On the in-distribution test set, Transformers and convolutions have identical performance (95%) .",
"On the generalization or out of distribution set, Transformers perform at 77 .",
"5% while convolutions come in at 76 .",
"9 .",
"While convolutions do not exactly outperform Transformers, they come in close enough to be considered competitive.",
"On the seven tasks across a broad range of domains we find that (1) non-pre-trained convolutions are competitive and frequently outperform non-pre-trained Transformers, (2) pre-trained convolutions outperform pre-trained Transformers on six out of seven tasks.",
"This answers RQ2 .",
"We also find that convolutions are able to benefit from pre-training, in a similar fashion to self-attention-based models.",
"Hence, the benefits achieved by pre-training are not exclusive to Transformer models.",
"This answers RQ1 .",
"Amongst the pre-trained convolutional models, we find that dilated convolutions and dynamic convolutions are generally better than lightweight convolutions, thus answering RQ5 .",
"Finally, we observe that relative performance (i.e., rankings) do change with pre-training.",
"This definitely shows that there is some kind of effect from composing architectures with pre-training.",
"The direct implication of this effect is that a model that performs well (relatively) without pre-training will not necessarily perform the best when pretrained (and vice versa).",
"Hence, aside from conflating architectures with pre-training schemes, we do also need to take note that different architectures may behave differently under pre-training.",
"This section expands on the results via a detailed analysis and discussion.",
"We discuss the pros/cons of pretrained convolutions, the impact of pretraining on performance and also recommendations to the broader community.",
"In our experimental section, we observed the potential upsides of convolutional models over well-established pre-trained Transformers and observe that we are able to get quality improvements in certain cases.",
"However, it might be good to further understand the drawbacks of convolutions.",
"One obvious weakness of pre-trained convolutions are their lack of cross-attention inductive bias that comes for free with self-attention in the Transformer encoder.",
"For this reason, it is not a CIVILCOMMENTWIKITOXIC IMDb SST-2 S140 TREC News Model Acc F1 Acc F1 Acc Acc Acc Acc Acc No pre-training Trans.",
"good idea to use pre-trained convolutions for tasks that requires modeling the relationship between two or more sequences.",
"To verify this, we run experiments on SQuAD and MultiNLI and find that convolutions do not come close to Transformers just because of this missing inductive bias.",
"This should be clearly distinguished when examining and evaluating models, as how the early SNLI leaderboard 3 distinguished between models that used cross-attention and models that did not.",
"Our initial evaluations on benchmarks like SQuAD/MNLI (Rajpurkar et al., 2016; Williams et al., 2017) showed that pre-trained convolutions are indeed significantly lackluster.",
"For example, convolutions only achieve 75% accuracy on MultiNLI, while transformers easily achieve 84% accuracy.",
"Likewise, while transformers achieve about 90% F1 on SQuAd, convolutions come in around 70% .",
"This is entirely expected because there is no way the premise/question can interact with the hypothesis/context.",
"( RQ4 ).",
"However, our experiments show that this was only because they lack this cross-attention property.",
"When we augment convolutions with a single layer of cross attention at the encoder, we find that pre-trained convolutions come close (a delta of 3 https://nlp.stanford.edu/projects/ snli/ ( 1%) ) to pre-trained Transformers on datasets such as MultiNLI (Williams et al., 2017), achieving about 83% accuracy.",
"That said, we leave it to the practitioner to decide whether the cross-attention inductive bias is actually important for the problem at hand.",
"We also like to emphasize that the pattern of concatenating sentence pairs is not necessary practical when scaling up since this requires inference on every permutation of sentence pairs.",
"For this reason, dual encoder setups that do fast embedding space look-ups are more practical and feasible in practice (Guo et al., 2020).",
"Given the strong performance of convolutions in a series of encoding tasks, we can expect pre-trained convolutions to do well in a dual encoder setup.",
"We observed a reasonable quality improvement from using convolutions over Transformers.",
"This section discusses the additional benefit.",
"Figure 1 reports training speed of convolution (LightConvs) versus transformers on a sequence to sequence task.",
"The input lengths are varied from { 64 , 128 , 256 , 512 , 1024 , 2048 , 4096 } .",
"We Figure 1: Effect of sequence length on processing speed (examples per second) on a seq2seq masked language modeling task.",
"show that convolutions are not only consistently faster (even at shorter sequences) but scale better than transformers.",
"Convolution scales linearly while transformers are not able to scale to longer sequences.",
"We measure the number of FLOPs of convolutions versus transformers as we increase the sequence length.",
"Figure 2 shows the phenomenon while varying sequence length.",
"In general, across all sequence lengths, convolutions are more efficient in the number of floating point operations.",
"The overall findings that convolutions are faster both in wall clock time and in FLOPs answers RQ3 .",
"Moreover, we find that the FLOP efficiency of convolutions scales better across sequence lengths.",
"While Transformers have dominated the research landscape in NLP, this paper suggests that there are commonly overlooked benefits to convolutions such as model quality, speed, FLOPs and scalabil-ity.",
"Moreover, it is previously unknown to whether convolutions benefit from pre-training.",
"In this paper, we showed that they are competitive on some tasks and also benefit from pre-training in similar fashion to transformer models.",
"However, on the flip side, we also highlighted that they are unable to handle tasks that require cross-attention or when there is a need to model > 1 sentence or documents within the same sequence.",
"We believe that practitioners have good options and it might be worthwhile to explore architectures outside the well-established transformer models.",
"In this paper, we showed that three other (convolutional-based) architectures (e.g., lightweight, dymamic and dilated) also benefit from pre-training to the same extent as transformer models.",
"In the current research landscape, pre-training has always be tightly coupled and associated with transformers architectures.",
"As a result, the success of BERT, transformers and large language models seem to be pretty conflated.",
"While it is true that, to this date, the only model that large-scale pretraining has been applied to are transformer models, we believe there might be potential in other architectures.",
"Based on our empirical findings, we believe there is still significant room for the improving the understanding of the compositional effects of architecture and pre-training.",
"Hence, we believe that the impact of this work extends beyond showing the competitiveness of convolution models in NLP.",
"More concretely, the take home message is that there should be a healthy level of optimism in exploring architectural alternatives.",
"In this paper, we conducted an extensive study of the viability and feasibility of pre-trained convolutions.",
"convolutions.",
"Our experimental results show that convolutions can outperform Transformers in both pretrain and non-pre-trained setups.",
"Our extensive experiments across 8 datasets spanning a diverse range of tasks, show that convolutions are able to benefit from pre-training to the same (or sometimes greater) extent than Transformers.",
"While pre-trained transformers are the de-facto choice of architecture, our results show that they might not be the best in certain scenarios.",
"Additionally, we discussed the caveats, trade-offs pertaining with runtime, scalability, number of FLOPS and model quality.",
"Finally, we discussed the situations or data types that convolutions are not well equipped to handle and make an empirically informed recommendation for practitioners."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"result",
"result",
"method",
"objective",
"method",
"abstain",
"result",
"method",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"result",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"objective",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain"
] |
[
"There has been little work on modeling the morphological well-formedness (MWF) of derivatives, a problem judged to be complex and difficult in linguistics (Bauer, 2019).",
"We present a graph auto-encoder that learns embeddings capturing information about the compatibility of affixes and stems in derivation.",
"The auto-encoder models MWF in English surprisingly well by combining syntactic and semantic information with associative information from the mental lexicon.",
"A central goal of morphology is, as famously put by Aronoff (1976), to tell us what sort of new words a speaker can form.",
"This definition is tightly intertwined with the notion of morphological well-formedness (MWF).",
"While nonexisting morphologically well-formed words such as pro$computer$ism conform to the morphological patterns of a language and could be formed, non-existing morphologically ill-formed words such as pro$and$ism violate the patterns and are deemed impossible (Allen, 1979).",
"More recent research has shown that MWF is a gradient rather than binary property: non-existing words that conform to the morphological patterns of a language differ in how likely they are to be actually created by speakers (Pierrehumbert, 2012).",
"This is particularly true in the case of derivational morphology, which is not obligatory and often serves communicative needs (Bauer, 2019).",
"As a result, the degree of MWF of a non-existing derivative is influenced by a multitude of factors and judged to be hard to predict (Bauer, 2001).",
"In NLP, the lack of reliable ways to estimate the MWF of derivatives poses a bottleneck for generative models, particularly in languages exhibiting a rich derivational morphology; e.g., while inflected read listen kiss reader listener kisser readable listenable kissable",
"forms can be translated by generating morphologically corresponding forms in the target language (Minkov et al., 2007), generating derivatives is still a major challenge for machine translation systems (Sreelekha and Bhattacharyya, 2018).",
"Similar problems exist in the area of automatic language generation (Gatt and Krahmer, 2018).",
"This study takes a first step towards computationally modeling the MWF of English derivatives.",
"We present a derivational graph auto-encoder (DGA) that combines semantic and syntactic information with associative information from the mental lexicon, achieving very good results on MWF prediction and performing on par with a character-based LSTM at a fraction of the number of trainable parameters.",
"The model produces embeddings that capture information about the compatibility of affixes and stems in derivation and can be used as pretrained input to other NLP applications.",
"1 2 Derivational Morphology 2.1 Inflection and Derivation Linguistics divides morphology into inflection and derivation.",
"word forms of a lexeme, e.g., listen , listens , and listened , derivation refers to the different lexemes of a word family, e.g., listen , listener , and listenable .",
"There are several differences between inflection and derivation, some of which are highly relevant for NLP.",
"Firstly, while inflection is obligatory and determined by syntactic needs, the existence of derivatives is mainly driven by communicative goals, allowing to express a varied spectrum of meanings (Acquaviva, 2016).",
"Secondly, derivation can produce a larger number of new words than inflection since it is iterable (Haspelmath and Sims, 2010); derivational affixes can be combined, in some cases even recursively (e.g., post$post$modern$ism ).",
"However, morpho-tactic constraints restrict the ways in which affixes can be attached to stems and other affixes (Hay and Plag, 2004); e.g., the suffix $less can be combined with $ness ( atom$less$ness ) but not with $ity ( atom$less$ity ).",
"The semantic and formal complexity of derivation makes predicting the MWF of derivatives more challenging than the MWF of inflectional forms (Anshen and Aronoff, 1999; Bauer, 2019).",
"Here, we model the MWF of derivatives as the likelihood of their existence in the mental lexicon.",
"How likely a derivative is to exist is influenced by various factors (Bauer, 2001; Pierrehumbert and Granell, 2018).",
"In this study, we concentrate on the role of the structure of the mental lexicon.",
"The mental lexicon can be thought of as a set of associations between meaning m and form f , i.e., words, organized in a network, where links correspond to shared semantic and phonological properties (see Pierrehumbert (2012) for a review).",
"Since we base our study on textual data, we will treat the form of words orthographically rather than phonologically.",
"We will refer to the type of information conveyed by the cognitive structure of the mental lexicon as associative information.",
"Sets of words with similar semantic and formal properties form clusters in the mental lexicon (Ale-gre and Gordon, 1999).",
"The semantic and formal properties reinforced by such clusters create abstractions that can be extended to new words (By-bee, 1995).",
"If the abstraction hinges upon a shared derivational pattern, the effect of such an extension is a new derivative.",
"The extent to which a word conforms to the properties of the cluster influences how likely the abstraction (in our case a derivational pattern) is to be extended to that word.",
"This is what is captured by the notion of MWF.",
"The main goal of this paper is to predict the MWF of morphological derivatives (i.e., how likely is a word to be formed as an extension of a lexical cluster) by directly leveraging associative information.",
"Since links in the mental lexicon reflect semantic and formal similarities of various sorts, many of which are not morphological (Tamariz, 2008), we want to create a distilled model of the mental lexicon that only contains derivational information.",
"One way to achieve this is by means of a derivational projection of the mental lexicon, a network that we call the Derivational Graph (DG).",
"Let L = ( W , Q ) be a graph of the mental lexicon consisting of a set of words W and a set of links between the words Q .",
"Let W a W be a set of words forming a fully interconnected cluster in L due to a shared derivational pattern a .",
"We define S a as the set of stems resulting from stripping off a from the words in W a and R a = { ( s, a ) } s S a as the corresponding set of edges between the stems and the shared derivational pattern.",
"We then define the two-mode derivational projection B of L as the Derivational Graph (DG) where B = ( V , E ) , V = (cid:83) a ( S a { a } ) and E = (cid:83) a R a .",
"Figure 1 gives an example of L and DG (= B ).",
"The DG is a bipartite graph whose nodes consist of stems s S with S = (cid:83) a S a and derivational patterns a A with A = (cid:83) a { a } .",
"The derivational patterns are sequences of affixes such as re$$ize$ate$ion in the case of revitalization .",
"The cognitive plausibility of this setup is supported by findings that affix groups can trigger derivational generalizations in the same way as individual affixes (Stump, 2017, 2019).",
"We define B R |V||V| to be the adjacency matrix of B .",
"The degree of an individual node n is d ( n ) .",
"We further define 1 ( n ) as the set of one-hop neighbors and 2 ( n ) as the set of two-hop neighbors of n .",
"Notice that 1 ( s ) A , 1 ( a ) S , 2 ( s ) S , and 2 ( a ) A for any s and a since the DG is bipartite.",
"The advantage of this setup of DGs is that it abstracts away information not relevant to derivational morphology while still allowing to interpret results in the light of the mental lexicon.",
"The cre-...",
"ation of a derivative corresponds to a new link between a stem and a derivational pattern in the DG, which in turn reflects the inclusion of a new word into a lexical cluster with a shared derivational pattern in the mental lexicon.",
"We base our study on data from the social media platform Reddit.",
"2 Reddit is divided into so-called subreddits (SRs), smaller communities centered around shared interests.",
"SRs have been shown to exhibit community-specific linguistic properties (del Tredici and Fernandez, 2018).",
"We draw upon the Baumgartner Reddit Corpus, a collection of publicly available comments posted on Reddit since 2005.",
"3 The preprocessing of the data is described in Appendix A.1.",
"We examine data in the SRs r/cfb (cfb college football), r/gaming (gam), r/leagueoflegends (lol), r/movies (mov), r/nba (nba), r/nfl (nfl), r/politics (pol), r/science (sci), and r/technology (tec) between 2007 and 2018.",
"These SRs were chosen because they are of comparable size and are among the largest SRs (see Table 1).",
"They reflect three distinct areas of interest, i.e., sports (cfb, nba, nfl), entertainment (gam, lol, mov), and knowledge (pol, sci, tec), thus allowing for a multifaceted view on how topical factors impact MWF: seeing MWF as an emergent property of the mental lexicon entails that communities with different lexica should differ in what derivatives are most likely to be created.",
"Many morphologically complex words are not decomposed into their morphemes during cognitive processing (Sonnenstuhl and Huth, 2002).",
"Based on experimental findings in Hay (2001), we segment a morphologically complex word only if the stem has a higher token frequency than the deriva-2 reddit.com 3 files.pushshift.io/reddit/comments SR n w n t |S| |A| |E| cfb 475,870,562 522,675 10,934 2,261 46,110 nba 898,483,442 801,260 13,576 3,023 64,274 nfl 911,001,246 791,352 13,982 3,016 64,821 gam 1,119,096,999 1,428,149 19,306 4,519 107,126 lol 1,538,655,464 1,444,976 18,375 4,515 104,731 mov 738,365,964 860,263 15,740 3,614 77,925 pol 2,970,509,554 1,576,998 24,175 6,188 143,880 sci 277,568,720 528,223 11,267 3,323 58,290 tec 505,966,695 632,940 11,986 3,280 63,839 Table 1: SR statistics.",
"tive (in a given SR).",
"Segmentation is performed by means of an iterative affix-stripping algorithm introduced in Hofmann et al. (2020) that is based on a representative list of productive prefixes and suffixes in English (Crystal, 1997).",
"The algorithm is sensitive to most morpho-orthographic rules of English (Plag, 2003): when $ness is removed from happi$ness , e.g., the result is happy , not happi .",
"See Appendix A.2.",
"for details.",
"The segmented texts are then used to create DGs as described in Section 2.3.",
"All processing is done separately for each SR, i.e., we create a total of nine different DGs.",
"Figure 2 illustrates the general experimental setup of our study.",
"Let W be a Bernoulli random variable denoting the property of being morphologically well-formed.",
"We want to model P ( W | d, C r ) = P ( W | s, a, C r ) , i.e., the probability that a derivative d consisting of stem s and affix group a is morphologically well-formed according to SR corpus C r .",
"Given the established properties of derivational morphology (see Section 2), a good model of P ( W | d, C r ) should include both semantics and formal structure, P ( W | d, C r ) = P ( W | m s , f s , m a , f a , C r ) , (1) where m s , f s , m a , f a , are meaning and form (here g x s x a z s z a h B B Figure 3: DGA model architecture. The DGA takes as input an adjacency matrix B and additional feature vectors x s and x a and learns embeddings z s and z a . modeled orthographically, see Section 2.2) of the involved stem and affix group, respectively.",
"The models we examine in this study vary in which of these features are used, and how they are used.",
"We model P ( W | d, C r ) by training a graph auto-encoder (Kipf and Welling, 2016, 2017) on the DG B of each SR.",
"The graph auto-encoder attempts to reconstruct the adjacency matrix B (Section 2.3) of the DG by means of an encoder function g and a decoder function h , i.e., its basic structure is B = h ( g ( B )) , (2) where B is the reconstructed version of B .",
"The specific architecture we use (see Figure 3), which we call a Derivational Graph Auto-encoder (DGA), is a variation of the bipartite graph auto-encoder (van den Berg et al., 2018).",
"Encoder.",
"The encoder g takes as one of its inputs the adjacency matrix B of the DG B .",
"This means we model f s and f a , the stem and affix group forms, by means of the associative relationships they create in the mental lexicon.",
"Since a DG has no information about semantic relationships between nodes within S and A , we reintroduce meaning as additional feature vectors x s , x a R n for m s and m a , stem and affix group embeddings that are trained separately on the SR texts.",
"The input to g is thus designed to provide complementary information: associative information ( B ) and semantic information ( x s and x a ).",
"For the encoder to be able to combine the two types of input in a meaningful way, the choice of g is crucial.",
"We model g as a graph convolutional network (Kipf and Welling, 2016, 2017), providing an intuitive way to combine information from the DG with additional information.",
"The graph convolutional network consists of L convolutional layers.",
"Each layer (except for the last one) performs two steps: message passing and activation.",
"During the message passing step (Dai et al., 2016; Gilmer et al., 2017), transformed versions of the embeddings x s and x a are sent along the edges of the DG, weighted, and accumulated.",
"We define 1+ ( s ) = 1 ( s ) { s } as the set of nodes whose transformed embeddings are weighted and accumulated for a particular stem s .",
"1+ ( s ) is extracted from the adjacency matrix B and consists of the one-hop neighbors of s and s itself.",
"The message passing propagation rule (Kipf and Welling, 2016, 2017) can then be written as m ( l ) s = (cid:88) n 1+ ( s ) x ( l 1) n W ( l ) (cid:113) | 1+ ( s ) || 1+ ( n ) | , (3) where W ( l ) is the trainable weight matrix of layer l , x ( l 1) n is the embedding of node n from layer l 1 with x (0) n = x n , and (cid:113) | 1+ ( s ) || 1+ ( n ) | is the weighting factor.",
"The message passing step is performed analogously for affix groups.",
"The matrix form of Equation 3 is given in Appendix A.3.",
"Intuitively, a message passing step takes embeddings of all neighbors of a node and the embedding of the node itself, transforms them, and accumulates them by a normalized sum.",
"Given that the DG B is bipartite, this means for a stem s that the normalized sum contains d ( s ) affix group embeddings and one stem embedding (and analogously for affix groups).",
"The total number of convolutional layers L determines how far the influence of a node can reach.",
"While one convolution allows nodes to receive information from their one-hop neighbors (stems from affix groups they co-occur with and vice versa), two convolutions add information from the two-hop neighbors (stems from stems co-occurring with the same affix group and vice versa), etc. (see Figure 4).",
"During the activation step, the output of the convolutional layer l for a particular stem s is x ( l ) s = ReLU (cid:16) m ( l ) s (cid:17) , (4) where ReLU( ) = max(0 , ) is a rectified linear unit (Nair and Hinton, 2010).",
"The final output of the encoder is z s = m ( L ) s , (5) i.e., there is no activation in the last layer.",
"The activation step is again performed analogously for affix groups.",
"z s and z a are representations of s and a enriched with information about the semantics of nodes in their DG neighborhood.",
"where is the sigmoid and z s and z a are the outputs of the encoder.",
"4 We set P ( W | d, C r ) = h ( z s , z a ) and interpret this as the probability that the corresponding edge in a DG constructed from a corpus drawn from the underlying distribution exists.",
"The resulting matrix B in Equation 2 is then the reconstructed adjacency matrix of DG.",
"Notice that the only trainable parameters of the DGA are the weight matrices W ( l ) .",
"To put the performance of the DGA into perspective, we compare against four baselines, which we present in decreasing order of sophistication.",
"We model P ( W | d, C r ) as P ( W | f s , f a , C r ) using a character-based model (CM), i.e., as opposed to the DGA, f s and f a are modeled directly by means of their orthographic form.",
"This provides the CM with phonological information, a central predictor of MWF (see Section 2.2).",
"CM might also learn semantic information during training, but it is not directly provided with it.",
"Character-based models show competitive results on derivational tasks (Cotterell et al., 2017; Vylomova et al., 2017; Deutsch et al., 2018), a good reason to test their performance on MWF prediction.",
"We use two one-layer bidirectional LSTMs to encode the stem and affix group into a vector o by concatenating the last hidden states from both LSTM directions (cid:126) h s , (cid:126) h s , (cid:126) h a , and (cid:126) h a , o = [ (cid:126) h s (cid:126) h s (cid:126) h a (cid:126) h a ] , (7) 4 Besides the simple dot-product decoder, we also implemented a bilinear decoder with h ( z s , z a ) = ( z (cid:62) s Qz a ) , where Q is a trainable weight matrix.",
"However, the model performed significantly worse.",
"where denotes concatentation.",
"o is then fed into a two layer feed-forward neural network with a ReLU non-linearity after the first layer.",
"5 The activation function after the second layer is .",
"We model P ( W | d, C r ) as P ( W | m s , m a , C r ) using a neural classifier (NC) whose architecture is similar to the auto-encoder setup of the DGA.",
"Similarly to the DGA, m s and m a are modeled by means of stem and affix group embeddings trained separately on the SRs.",
"The first encoder-like part of the NC is a two-layer feed-forward neural network with a ReLU non-linearity after the first layer.",
"The second decoder-like part of the NC is an inner-product layer as in the DGA.",
"Thus, the NC is identical to the DGA except that it does not use associative information from the DG via a graph convolutional network; it only has information about the stem and affix group meanings.",
"We model P ( W | d, C r ) as P ( W | f s , f a , C r ) .",
"Like in the DGA, we model the stem and affix group forms by means of the associative relationships they create in the mental lexicon.",
"Specifically, we predict links without semantic information.",
"In feature-based machine learning, link prediction is performed by defining similarity measures on a graph and ranking node pairs according to these features (Liben-Nowell and Kleinberg, 2003).",
"We apply four common measures, most of which have to be modified to accommodate the properties of bipartite DGs.",
"Here, we only cover the best performing measure, Jaccard similarity (JS).",
"JS is one of the simplest graph-based similarity measures, so it is a natural baseline for answering the question: how far does simple graph-based similarity get you at predicting MWF?",
"See Appendix A.4 for the other three measures.",
"However, since 1 ( s ) 1 ( a ) = for any s and a (the DG is bipartite), we redefine the set of common neighbors of two nodes n and m , ( n, m ) , as 2 ( n ) 1 ( m ) , i.e., the intersection of the two-hop neighbors of n and the one-hop neighbors of",
"5 We also experimented with only one layer, but it performed considerably worse.",
"m , and analogously ( n, m ) as 2 ( n ) 1 ( m ) .",
"Since these are asymmetric definitions, we define JS ( s, a ) = | ( s, a ) | | ( s, a ) | + | ( a, s ) | | ( a, s ) | (9) JS assumes that a stem that is already similar to a lexical cluster in its derivational patterns is more likely to become even more similar to the cluster than a less similar stem.",
"We again model P ( W | d, C r ) as P ( W | f s , f a , C r ) , leaving aside semantic information.",
"However, in contrast to JS, this model implements the classic approach of Fabb (1988), according to which pairwise constraints on affix combinations, or combinations of a stem and an affix, determine the allowable sequences.",
"Taking into account more recent results on morphological gradience, we do not model these selection restrictions with binary rules.",
"Instead, we use transition probabilities, beginning with the POS of the stem s and working outwards to each following suffix a ( s ) or preceding prefix a ( p ) .",
"Using a simple bigram model (BM), we can thus calculate the MWF of a derivative as P ( W | d, C r ) = P ( a ( s ) | s ) P ( a ( p ) | s ) , (10) where P ( a ( s ) | s ) = P ( a ( s ) 1 | s ) (cid:81) ni =2 P ( a ( s ) i | a ( s ) i 1 ) is the probability of the suffix group conditioned on the POS of the stem.",
"P ( a ( p ) | s ) is defined analogously for prefix groups.",
"We train all models on the nine SRs using the same split of E into training ( n ( p ) train = 0 .",
"85 |E| ), validation ( n ( p ) val = 0 .",
"05 |E| ), and test ( n ( p ) test = 0 .",
"1 |E| ) edges.",
"For validation and test, we randomly sample n ( n ) val = n ( p ) val and n ( n ) test = n ( p ) test non-edges ( s, a ) (cid:54) E as negative examples such that both sets are balanced (0.5 positive, 0.5 negative).",
"For training, we sample n ( n ) train = n ( p ) train non-edges ( s, a ) (cid:54) E in every epoch (i.e., the set of sampled non-edges changes in every epoch).",
"Nodes are sampled according to their degree with P ( n ) d ( n ) , a common strategy in bipartite link prediction (Chen et al., 2017).",
"We make sure non-edges sampled in training are not in the validation or test sets.",
"During the test phase, we rank all edges according to their predicted scores.",
"We evaluate the models using average precision (AP) and area under the ROC curve (AUC), two common evaluation measures in link prediction that do not require a decision threshold.",
"AP emphasizes the correctness of the top-ranked edges (Su et al., 2015) more than AUC.",
"DGA, DGA+: We use binary cross entropy as loss function.",
"Hyperparameter tuning is performed on the validation set.",
"We train the DGA for 600 epochs using Adam (Kingma and Ba, 2015) with a learning rate of 0.01.",
"6 We use L = 2 hidden layers in the DGA with a dimension of 100.",
"For regularization, we apply dropout of 0.1 after the input layer and 0.7 after the hidden layers.",
"For x s and x a , we use 100-dimensional GloVe embeddings (Pennington et al., 2014) trained on the segmented text of the individual SRs with a window size of 10.",
"These can be seen as GloVe variants of traditional morpheme embeddings as proposed, e.g., by Qiu et al. (2014), with the sole difference that we use affix groups instead of individual affixes.",
"For training the embeddings, derivatives are segmented into prefix group, stem, and suffix group.",
"In the case of both prefix and suffix groups, we add prefix and suffix group embeddings.",
"Since the window size impacts the information represented by the embeddings, with larger windows tending to capture topical and smaller windows morphosyntactic information (Lison and Kutuzov, 2017), we also train the DGA with 200-dimensional embeddings consisting of concatenated 100-dimensional embeddings trained with window sizes of 10 and 1, respectively (DGA+).",
"7 Since DGA already receives associative informa-6 The number of epochs until convergence lies within the typical range of values for graph convolutional networks.",
"7 We experimented with using vectors trained on isolated pairs of stems and affix groups instead of window-1 vectors trained on the full text, but the performance was comparable.",
"We also implemented the DGA using only window-1 vectors (without concatenating them with window-10 vectors), but it performed considerably worse.",
"tion from the DG and semantic information from the embeddings trained with window size 10, the main advantage of DGA+ should lie in additional syntactic information.",
"CM: We use binary cross entropy as loss function.",
"We train the CM for 20 epochs using Adam with a learning rate of 0.001.",
"Both input character embeddings and hidden states of the bidirectional LSTMs have 100 dimensions.",
"The output of the first feed-forward layer has 50 dimensions.",
"We apply dropout of 0.2 after the embedding layer as well as the first feed-forward layer.",
"NC, NC+: All hyperparameters are identical to the DGA and the DGA+, respectively.",
"JS: Similarity scores are computed on the SR training sets.",
"BM: Transition probabilities are maximum likelihood estimates from the SR training sets.",
"If a stem is assigned several POS tags by the tagger, we take the most frequent one.",
"Table 2 summarizes the number of trainable parameters for the neural models.",
"Notice that CM has more than 10 times as many trainable parameters as DGA+, DGA, NC+, and NC.",
"The overall best performing models are DGA+ and CM (see Table 3).",
"While DGA+ beats CM on all SRs except for lol in AP, CM beats DGA+ on all SRs except for cfb and tec in AUC.",
"Except for CM, DGA+ beats all other models on all SRs in both AP and AUC, i.e., it is always the best or second-best model.",
"DGA beats all models except for DGA+ and CM on all SRs in AP but has lower AUC than NC+ on three SRs.",
"It also outperforms CM on three SRs in AP.",
"NC+ and NC mostly have scores above 0.7, showing that traditional morpheme embeddings also capture information about the compatibility of affixes and stems (albeit to a lesser degree than models with associative or orthographic information).",
"Among the non-neural methods, JS outperforms BM (and the other non-neural link prediction models, see Appendix A.4) in AP, but is beaten by BM in AUC on six SRs.",
"The fact that DGA+ performs on par with CM while using less than 10% of CM's parameters demonstrates the power of incorporating associative information from the mental lexicon in modeling the MWF of derivatives.",
"This result is even more striking since DGA+, as opposed to CM, has no direct access to orthographic (i.e., phonological) information.",
"At the same time, CM's high performance indicates that orthographic information is an important predictor of MWF.",
"To understand better how associative information from the DG increases performance, we examine how DGA+ changes the shape of the vector space by comparing input vs. learned embeddings ( X vs. ZDGA + ), and contrast that with NC+ ( X vs. ZNC + ).",
"A priori, there are two opposing demands the embeddings need to respond to:",
"(i) as holds for bipartite graphs in general (Gao et al., 2018), the two node sets (stems and affix groups) should form two separated clusters in embedding space;",
"(ii) stems associated with the same affix group should form clusters in embedding space that are close to the embedding of the respective affix group.",
"For this analysis, we define ( N , v ) as the mean cosine similarity between the embeddings of a node set N and an individual embedding v , ( N , v ) = 1 |N | (cid:88) n N cos ( u n , v ) , (11) where u n is the embedding of node n .",
"We calculate for the set of stem nodes S and their centroid c S = 1 |S| (cid:80) s S u s as well as the set of affix group",
"nodes A and their centroid c A = 1 |A| (cid:80) a A u a .",
"Table 4 shows that while NC+ makes the embeddings of both S and A more compact (higher similarity in ZNC + than in X ), DGA+ makes S more compact, too, but decreases the compactness of A (lower similarity in ZDGA + than in X ).",
"ZNC + meets",
"(i) to a greater extent than ZDGA + .",
"We then calculate for all sets of stems S a occurring with a common affix group a and their centroids c S a = 1 |S a | (cid:80) s S a u s .",
"We also compute for all S a and the embeddings of the corresponding affix groups u a .",
"As Table 4 shows, both values are much higher in ZDGA + than in X , i.e., DGA+ brings stems with a common affix group a (lexical clusters in the mental lexicon) close to each other while at the same time moving a into the direction of the stems.",
"The embeddings ZNC + exhibit a similar pattern, but more weakly than ZDGA + (see Table 4 and Figure 5).",
"ZDGA + meets",
"(ii) to a greater extent than ZNC + .",
"Thus, DGA+ and NC+ solve the tension between",
"(i) and",
"(ii) differently; the associative information from the mental lexicon allows DGA+ to put a greater emphasis on",
"(ii), leading to higher performance in MWF prediction.",
"Another reason for the higher performance of the models with associative information could be that their embeddings capture differences in derivational patterns between the SR communities.",
"To examine this hypothesis, we map the embeddings ZDGA + of all SRs into a common vector space by means of orthogonal procrustes alignment (Schone-mann, 1966), i.e., we optimize R ( i ) = arg min T (cid:62) T = I || Z ( i ) DGA + T Z (0) DGA + || F (12) for every SR, where Z ( i ) DGA + is the embedding matrix of the SR i , and Z (0) DGA + is the embedding matrix of a randomly chosen SR (which is the same for all projections).",
"We then compute the intersection of stem and affix group nodes from all SRs S = (cid:84) i S ( i ) and A = (cid:84) i A ( i ) , where S ( i ) and A ( i ) are the stem and affix group sets of SR i , respectively.",
"To probe whether differences between SRs are larger or smaller for affix embeddings as compared to stem embeddings, we define ( S ( i ) , S ( j ) ) = (cid:88) s S cos( z ( i ) s , z ( j ) s ) |S | , (13) i.e., the mean cosine similarity between projected embedding pairs z ( i ) s and z ( j ) s from two SRs i and j representing the same stem s in the intersection set S , with z ( i ) s = z ( i ) s R ( i ) .",
"( A ( i ) , A ( j ) ) is defined analogously for affix groups.",
"The mean value for ( A ( i ) , A ( j ) ) ( 0 . 723 0 . 102 ) is lower than that for ( S ( i ) , S ( j ) ) ( 0 . 760 0 . 087 ), i.e., differences between affix group embeddings are more pronounced than between stem embeddings.",
"Topically connected SRs are more similar to each other than SRs of different topic groups,",
"with the differences being larger in ( A ( i ) , A ( j ) ) than in ( S ( i ) , S ( j ) ) (see Figure 6).",
"These results can be related to Section 6.1: affix groups are very close to the stems they associate with in ZDGA + , i.e., if an affix group is used with stems of meaning p in one SR and stems with meaning q in the other SR, then the affix groups also have embeddings close to p and q in the two SRs.",
"Most technical vocabulary, on the other hand, is specific to a SR and does not make it into S .",
"8 A qualitative analysis supports this hypothesis: affix groups with low cosine similarities between SRs associate with highly topical stems; e.g., the affix group $ocracy has a low cosine similarity of -0.189 between the SRs nba and pol, and it occurs with stems such as kobe , jock in nba but left , wealth in pol.",
"Much recent computational research on derivational morphology in NLP has focused on two related problems: predicting the meaning of a derivative given its form, and predicting the form of a derivative given its meaning.",
"The first group of studies models the meaning of derivatives as a function of their morphological structure by training embeddings directly on text segmented into morphemes (Luong et al., 2013; Qiu et al., 2014) or by inferring morpheme embeddings from whole-word vector spaces, e.g., using the vector offset method (Lazaridou et al., 2013; Pado et al., 2016).",
"Formally, given a derived form f d , this line of research tries to find the meaning m d that maximizes P ( m d | f d ) .",
"8 One SR standing out in Figure 6 is lol, a multiplayer online video game, in which many common stems such as fame and range have highly idiosyncratic meanings.",
"of derivatives as a function of their meaning.",
"The meaning is represented by the base word and a semantic tag (Cotterell et al., 2017; Deutsch et al., 2018) or the sentential context (Vylomova et al., 2017).",
"Formally, given a meaning m d , these studies try to find the derived form f d of a word that maximizes P ( f d | m d ) .",
"Our study differs from these two approaches in that we model P ( W | f d , m d ) , i.e., we predict the overall likelihood of a derivative to exist.",
"For future research, it would be interesting to apply derivational embeddings in studies of the second type by using them as pretrained input.",
"Neural link prediction is the task of inferring the existence of unknown connections between nodes in a graph.",
"Advances in deep learning have prompted various neural models for link prediction that learn distributed node representations (Tang et al., 2015; Grover and Leskovec, 2016).",
"Kipf and Welling (2016, 2017) proposed a convolutional graph auto-encoder that allows to include feature vectors for each node.",
"The model was adapted to bipartite graphs by van den Berg et al. (2018).",
"Previous studies on neural link prediction for bipartite graphs have shown that the embeddings of the two node sets should ideally form separated clusters (Gao et al., 2018).",
"Our work demonstrates that relations transcending the two-mode graph structure can lead to a trade-off between clustering and dispersion in embedding space.",
"We have introduced a derivational graph auto-encoder (DGA) that combines syntactic and semantic information with associative information from the mental lexicon to predict morphological well-formedness (MWF), a task that has not been addressed before.",
"The model achieves good results and performs on par with a character-based LSTM at a fraction of the number of trainable parameters (less than 10%).",
"Furthermore, the model learns embeddings capturing information about the compatibility of affixes and stems in derivation.",
"Acknowledgements.",
"Valentin Hofmann was funded by the Arts and Humanities Research Council and the German Academic Scholarship Foundation.",
"This research was also supported by the European Research Council (Grant No. 740516).",
"We thank the reviewers for their helpful and very constructive comments."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"We propose a novel method for hierarchical entity classification that embraces ontological structure at both training and during prediction.",
"At training, our novel multi-level learning-to-rank loss compares positive types against negative siblings according to the type tree.",
"During prediction, we define a coarse-to-fine decoder that restricts viable candidates at each level of the ontology based on already predicted parent type(s).",
"We achieve state-of-the-art across multiple datasets, particularly with respect to strict accuracy.",
"1 1 Introduction Entity typing is the assignment of a semantic label to a span of text, where that span is usually a mention of some entity in the real world.",
"Named entity recognition (NER) is a canonical information extraction task, commonly considered a form of entity typing that assigns spans to one of a handful of types, such as PER , ORG , GPE , and so on.",
"Fine-grained entity typing (FET) seeks to classify spans into types according to more diverse, semantically richer ontologies (Ling and Weld, 2012; Yosef et al., 2012; Gillick et al., 2014; Del Corro et al., 2015; Choi et al., 2018), and has begun to be used in downstream models for entity linking (Gupta et al., 2017; Raiman and Raiman, 2018).",
"Consider the example in Figure 1 from the FET dataset, FIGER (Ling and Weld, 2012).",
"The mention of interest, Hollywood Hills , will be typed with the single label LOC in traditional NER, but may be typed with a set of types { /location , /geography , /geography/mountain } under a fine-grained typing scheme.",
"In these finer-grained typing schemes, types usually form a hierarchy: there are a set of coarse types that lies on 1 Code can be found at https://github.com/ ctongfei/hierarchical-typing .",
"the top levelthese are similar to traditional NER types, e.g. /person ; additionally, there are finer types that are subtypes of these top-level types, e.g. /person/artist or /person/doctor .",
"Most prior work concerning fine-grained entity typing has approached the problem as a multi-label classification problem: given an entity mention together with its context, the classifier seeks to output a set of types, where each type is a node in the hierarchy.",
"Approaches to FET include handcrafted sparse features to various neural architectures (Ren et al., 2016a; Shimaoka et al., 2017; Lin and Ji, 2019, inter alia , see section 2).",
"Perhaps owing to the historical transition from flat NER types, there has been relatively little work in FET that exploits ontological tree structure , where type labels satisfy the hierarchical property : a subtype is valid only if its parent supertype is also valid.",
"We propose a novel method that takes the explicit ontology structure into account, by a multi-level learning to rank approach that ranks the candidate types conditioned on the given entity mention.",
"Intuitively, coarser types are easier whereas finer types are harder to classify: we capture this intuition by allowing distinct margins at each level of the ranking model.",
"FET is usually studied as allowing for sentence-level context in making predictions, notably starting with Ling and Weld (2012) and Gillick et al. (2014), where they created the commonly used FIGER and OntoNotes datasets for FET.",
"While researchers have considered the benefits of document-level (Zhang et al., 2018), and corpus-level (Yaghoobzadeh and Schutze, 2015) context, here we focus on the sentence-level variant for best contrast to prior work.",
"Progress in FET has focused primarily on: Better mention representations: Starting from sparse hand-crafted binary features (Ling and Weld, 2012; Gillick et al., 2014), the community has moved to distributed representations (Yogatama et al., 2015), to pre-trained word embeddings with LSTMs (Ren et al., 2016a,b; Shimaoka et al., 2016; Abhishek et al., 2017; Shimaoka et al., 2017) or CNNs (Murty et al., 2018), with mention-to-context attention (Zhang et al., 2018), then to employing pre-trained language models like ELMo (Peters et al., 2018) to generate ever better representations (Lin and Ji, 2019).",
"Our approach builds upon these developments and uses state-of-the-art mention encoders.",
"as multi-label classification , without using information in the hierarchical structure, but there are a few exceptions.",
"Ren et al. (2016a) proposed an adaptive margin for learning-to-rank so that similar types have a smaller margin; Xu and Barbosa (2018) proposed hierarchical loss normalization that penalizes output that violates the hierarchical property; and Murty et al. (2018) proposed to learn a subtyping relation to constrain the type embeddings in the type space.",
"In contrast to these approaches, our coarse-to-fine decoding approach strictly guarantees that the output does not violate the hierarchical property, leading to better performance.",
"HYENA (Yosef et al., 2012) applied ranking to sibling types in a type hierarchy, but the number of predicted positive types are trained separately with a meta-model, hence does not support neural end-to-end training.",
"Researchers have proposed alternative FET formulations whose types are not formed in a type hierarchy, in particular Ultra-fine entity typing (Choi et al., 2018; Xiong et al., 2019; Onoe and Durrett, 2019), with a very large set of types derived from phrases mined from a corpus.",
"FET in KB (Jin et al., 2019) labels mentions to types in a knowledge base with multiple relations, forming a type graph.",
"Dai et al. (2019) augments the task with entity linking to KBs.",
"We denote a mention as a tuple x = ( w , l , r ) , where w = ( w 1 , , w n ) is the sentential context and the span [ l : r ] marks a mention of interest in sentence w .",
"That is, the mention of interest is ( w l , , w r ) .",
"Given x , a hierarchical entity typing model outputs a set of types Y in the type ontology Y , i.e. Y Y .",
"Type hierarchies take the form of a forest, where each tree is rooted by a top-level supertype (e.g. /person , /location , etc.).",
"We add a dummy parent node ENTITY = / , the supertype of all entity types, to all the top-level types, effectively transforming a type forest to a type tree.",
"In Figure 2, we show 3 type ontologies associated with 3 different datasets (see subsection 5.1), with the dummy ENTITY node augmented.",
"We now introduce some notation for referring to aspects of a type tree.",
"The binary relation type z is a subtype of y is denoted as z < : y .",
"2 The unique parent of a type y in the type tree is denoted y Y , where y is undefined for y = ENTITY .",
"The immediate subtypes of y (children nodes) are denoted Ch ( y ) Y .",
"Siblings of y , those sharing the same immediate parent, are denoted Sb ( y ) Y , where y (cid:60) Sb ( y ) .",
"In the AIDA FET ontology (see Figure 2), the maximum depth of the tree is L = 3 , and each mention can only be typed with at most 1 type from each level.",
"We term this scenario single-path typing, since there can be only 1 path starting from the root ( ENTITY ) of the type tree.",
"This is in contrast multi-path typing, such as in the BBN dataset, where mentions may be labeled with multiple types on the same level of the tree.",
"Additionally, in AIDA, there are mentions labeled such as as /per/police/<unspecified> .",
"In FIGER, we find instances with labeled type /person but not any further subtype.",
"What does it mean when a mention x is labeled with a partial type path , i.e., a type y but none of the subtypes z < : y ?",
"We consider two interpretations: Exclusive: x is of type y , but x is not of any type z < : y .",
"We devise different strategies to deal with these two conditions.",
"Under the exclusive case, we add a dummy OTHER node to every intermediate branch node in the type tree.",
"For any mention x labeled with type y but none of the subtypes z < : y , we add this additional label y /OTHER to the labels of x (see Figure 2: AIDA).",
"For example, if we interpret a partial type path /person 2 Per programming language literature, e.g. the type system F < : that supports subtyping.",
"in FIGER as exclusive , we add another type /person/OTHER to that instance.",
"Under the undefined case, we do not modify the labels in the dataset.",
"We will see this can make a significant difference depending on the way a specific dataset is annotated.",
"Hidden representations for entity mentions in sentence w are generated by leveraging recent advances in language model pre-training, e.g. ELMo (Peters et al., 2018).",
"3 The ELMo representation for each token w i is denoted as w i R d w .",
"Dropout is applied with probability p D to the ELMo vectors.",
"Our mention encoder largely follows Lin and Ji (2019).",
"First a mention representation is derived using the representations of the words in the mention.",
"We apply a max pooling layer atop the mention after a linear transformation: 4 m = MaxPool ( Tw l , , Tw r ) R d w .",
"Then we employ mention-to-context attention first described in Zhang et al. (2018) and later employed by Lin and Ji (2019): a context vector c is generated by attending the sentence with a query vector derived from the mention vector m .",
"We use the multiplicative attention of Luong et al. (2015): a i exp ( m T Qw i ) (2) c = N (cid:88) i = 1 a i w i R d w (3) The final representation for an entity mention is generated via concatenation of the mention and context vector: [ m ; c ] R 2 d w .",
"We learn a type embedding y R d t for each type y Y .",
"To score an instance with representation [ m ; c ] , we pass it through a 2-layer feed-forward network that maps into the same space as the type space R d t , with tanh as the nonlinearity.",
"The final 3 Lin and Ji (2019) found that ELMo performs better than BERT (Devlin et al., 2019) for FET.",
"Our internal experiments also confirm this finding.",
"We hypothesize that this is due to the richer character-level information contained in lowerlevel ELMo representations that are useful for FET.",
"4 Lin and Ji (2019) proposed an attentive pooler with a learned global query vector.",
"We found out that a simple max pooling layer achieves similar performance.",
"score is an inner product between the transformed feature vector and the type embedding: F ( x , y ) = FFNN ([ m ; c ]) y .",
"We introduce our novel hierarchical learning-to-rank loss that (1) allows for natural multi-label classification and (2) takes the hierarchical ontology into account.",
"We start with a multi-class hinge loss that ranks positive types above negative types (Weston and Watkins, 1999): J flat ( x , Y ) = (cid:88) y Y (cid:88) y (cid:48) (cid:60) Y [ F ( x , y ) + F ( x , y (cid:48) )] + (5) where [ x ] + = max { 0 , x } .",
"This is actually learning-to-rank with a ranking SVM (Joachims, 2002): the model learns to rank the positive types y Y higher than those negative types y (cid:48) (cid:60) Y , by imposing a margin between y and y (cid:48) : type y should rank higher than y (cid:48) by .",
"Note that in Equation 5, since it is a linear SVM, the margin hyperparameter could be just set as 1 (the type embeddings are linearly scalable), and we rely on L 2 regularization to constrain the type embeddings.",
"Multi-level Margins However, this method considers all candidate types to be flat instead of hierarchical all types are given the same treatment without any prior on their relative position in the type hierarchy.",
"Intuitively, coarser types (higher in the hierarchy) should be easier to determine (e.g. /person vs /location should be fairly easy for the model), but fine-grained types (e.g. /person/artist/singer ) are harder.",
"We encode this intuition by",
"(i) learning to rank types only on the same level in the type tree;",
"(ii) setting different margin parameters for the ranking model with respect to different levels: (cid:88) y Y (cid:88) y (cid:48) Sb ( y )\\ Y [ lev ( y ) F ( x , y ) + F ( x , y (cid:48) )] + (6) Here lev ( y ) is the level of the type y : for example, lev ( /location ) = 1 , and lev ( /person/artist/singer ) = 3 .",
"In Equation 6, each positive type y is only compared against its negative siblings Sb ( y )\\ Y , and the margin hyperparameter is set to be lev ( y ) , i.e., a margin dependent on which level y is in the tree.",
"Intuitively, we should set 1 > 2 > 3 since our ( 1 \u0000 ) 2 <latexit sha1_base64=\"Yckm0lgEuwguyWYDTSkkfyWZpZs=\">AAACw3icdVFNa9tAEF2rTZq6Sey0x15ETcAtjZFCIDkGSqHHFOokYBkzWo3ixfshdkeNjdAvSY/Jj+q/6doxxUragYXHm3k7b2bSQgpHUfS7Fbx4ubX9aud1+83u3n6ne/D20pnSchxyI429TsGhFBqHJEjidWERVCrxKp19WeavfqJ1wugftChwrOBGi1xwIE9Nup1+fJSALKbwMZmLyfGk24sG0SrC5yBegx5bx8XkoPUryQwvFWriEpwbxVFB4wosCS6xbielwwL4DG5w5KEGhW5crZzX4aFnsjA31j9N4YrdVFSgnFuo1FcqoKl7mluS/8qNSsrPxpXQRUmo+WOjvJQhmXC5hjATFjnJhQfArfBeQz4FC5z8stqHm20cB4kWZd2kpUjRz6ixyY/+8p+9O+mVpvDzaryl+cptuzEDCb+O5pL+I9tUpcbMCFIv9L9Z9FXcKAU6+5QQZZhDKamiOVHt7xk/vd5zcHk8iKNB/P2kd95fX3aHvWcfWJ/F7JSds2/sgg0ZZyW7Y/fsIfgazAIb0GNp0Fpr3rFGBPUfKwDgvg==</latexit> <latexit sha1_base64=\"Yckm0lgEuwguyWYDTSkkfyWZpZs=\">AAACw3icdVFNa9tAEF2rTZq6Sey0x15ETcAtjZFCIDkGSqHHFOokYBkzWo3ixfshdkeNjdAvSY/Jj+q/6doxxUragYXHm3k7b2bSQgpHUfS7Fbx4ubX9aud1+83u3n6ne/D20pnSchxyI429TsGhFBqHJEjidWERVCrxKp19WeavfqJ1wugftChwrOBGi1xwIE9Nup1+fJSALKbwMZmLyfGk24sG0SrC5yBegx5bx8XkoPUryQwvFWriEpwbxVFB4wosCS6xbielwwL4DG5w5KEGhW5crZzX4aFnsjA31j9N4YrdVFSgnFuo1FcqoKl7mluS/8qNSsrPxpXQRUmo+WOjvJQhmXC5hjATFjnJhQfArfBeQz4FC5z8stqHm20cB4kWZd2kpUjRz6ixyY/+8p+9O+mVpvDzaryl+cptuzEDCb+O5pL+I9tUpcbMCFIv9L9Z9FXcKAU6+5QQZZhDKamiOVHt7xk/vd5zcHk8iKNB/P2kd95fX3aHvWcfWJ/F7JSds2/sgg0ZZyW7Y/fsIfgazAIb0GNp0Fpr3rFGBPUfKwDgvg==</latexit> <latexit sha1_base64=\"Yckm0lgEuwguyWYDTSkkfyWZpZs=\">AAACw3icdVFNa9tAEF2rTZq6Sey0x15ETcAtjZFCIDkGSqHHFOokYBkzWo3ixfshdkeNjdAvSY/Jj+q/6doxxUragYXHm3k7b2bSQgpHUfS7Fbx4ubX9aud1+83u3n6ne/D20pnSchxyI429TsGhFBqHJEjidWERVCrxKp19WeavfqJ1wugftChwrOBGi1xwIE9Nup1+fJSALKbwMZmLyfGk24sG0SrC5yBegx5bx8XkoPUryQwvFWriEpwbxVFB4wosCS6xbielwwL4DG5w5KEGhW5crZzX4aFnsjA31j9N4YrdVFSgnFuo1FcqoKl7mluS/8qNSsrPxpXQRUmo+WOjvJQhmXC5hjATFjnJhQfArfBeQz4FC5z8stqHm20cB4kWZd2kpUjRz6ixyY/+8p+9O+mVpvDzaryl+cptuzEDCb+O5pL+I9tUpcbMCFIv9L9Z9FXcKAU6+5QQZZhDKamiOVHt7xk/vd5zcHk8iKNB/P2kd95fX3aHvWcfWJ/F7JSds2/sgg0ZZyW7Y/fsIfgazAIb0GNp0Fpr3rFGBPUfKwDgvg==</latexit> <latexit sha1_base64=\"Yckm0lgEuwguyWYDTSkkfyWZpZs=\">AAACw3icdVFNa9tAEF2rTZq6Sey0x15ETcAtjZFCIDkGSqHHFOokYBkzWo3ixfshdkeNjdAvSY/Jj+q/6doxxUragYXHm3k7b2bSQgpHUfS7Fbx4ubX9aud1+83u3n6ne/D20pnSchxyI429TsGhFBqHJEjidWERVCrxKp19WeavfqJ1wugftChwrOBGi1xwIE9Nup1+fJSALKbwMZmLyfGk24sG0SrC5yBegx5bx8XkoPUryQwvFWriEpwbxVFB4wosCS6xbielwwL4DG5w5KEGhW5crZzX4aFnsjA31j9N4YrdVFSgnFuo1FcqoKl7mluS/8qNSsrPxpXQRUmo+WOjvJQhmXC5hjATFjnJhQfArfBeQz4FC5z8stqHm20cB4kWZd2kpUjRz6ixyY/+8p+9O+mVpvDzaryl+cptuzEDCb+O5pL+I9tUpcbMCFIv9L9Z9FXcKAU6+5QQZZhDKamiOVHt7xk/vd5zcHk8iKNB/P2kd95fX3aHvWcfWJ/F7JSds2/sgg0ZZyW7Y/fsIfgazAIb0GNp0Fpr3rFGBPUfKwDgvg==</latexit> 2 <latexit sha1_base64=\"e6geauY33jiuodw/0nXuPUGwUKo=\">AAACvXicdVHbattAEF0rvaTuLUkf+yJqAqEUI4VA+9ZAX/KYQp0EJGNGq1G8eC9id9TYCH9G30rzXf2bjh1TrKQdWDicmbNzZqaotQqUJL970c6jx0+e7j7rP3/x8tXrvf2Di+AaL3EknXb+qoCAWlkckSKNV7VHMIXGy2L2ZZW//I4+KGe/0aLGsYFrqyolgZjKctD1FPK5mhxP9gbJMFlH/BCkGzAQmzif7Pd+5qWTjUFLUkMIWZrUNG7Bk5Ial/28CViDnME1ZgwtGAzjdu15GR8yU8aV8/wsxWt2W9GCCWFhCq40QNNwP7ci/5XLGqo+jVtl64bQyrtGVaNjcvFqAXGpPErSCwYgvWKvsZyCB0m8pv7hdpsgQaNHvezSWhXIM1rs8tlf/gO706x0Nc9r8Ybma7f9zgykeB3dJf1Htq0qnJsRFCzk3zxylXTGgC3f50QlVtBoamlOtOR7pvev9xBcHA/TZJh+PRmcHm0uuyveinfiSKTiozgVZ+JcjIQUTvwQv8Rt9DnCSEf2rjTqbTRvRCeimz9leN+2</latexit> <latexit 
sha1_base64=\"e6geauY33jiuodw/0nXuPUGwUKo=\">AAACvXicdVHbattAEF0rvaTuLUkf+yJqAqEUI4VA+9ZAX/KYQp0EJGNGq1G8eC9id9TYCH9G30rzXf2bjh1TrKQdWDicmbNzZqaotQqUJL970c6jx0+e7j7rP3/x8tXrvf2Di+AaL3EknXb+qoCAWlkckSKNV7VHMIXGy2L2ZZW//I4+KGe/0aLGsYFrqyolgZjKctD1FPK5mhxP9gbJMFlH/BCkGzAQmzif7Pd+5qWTjUFLUkMIWZrUNG7Bk5Ial/28CViDnME1ZgwtGAzjdu15GR8yU8aV8/wsxWt2W9GCCWFhCq40QNNwP7ci/5XLGqo+jVtl64bQyrtGVaNjcvFqAXGpPErSCwYgvWKvsZyCB0m8pv7hdpsgQaNHvezSWhXIM1rs8tlf/gO706x0Nc9r8Ybma7f9zgykeB3dJf1Htq0qnJsRFCzk3zxylXTGgC3f50QlVtBoamlOtOR7pvev9xBcHA/TZJh+PRmcHm0uuyveinfiSKTiozgVZ+JcjIQUTvwQv8Rt9DnCSEf2rjTqbTRvRCeimz9leN+2</latexit> <latexit sha1_base64=\"e6geauY33jiuodw/0nXuPUGwUKo=\">AAACvXicdVHbattAEF0rvaTuLUkf+yJqAqEUI4VA+9ZAX/KYQp0EJGNGq1G8eC9id9TYCH9G30rzXf2bjh1TrKQdWDicmbNzZqaotQqUJL970c6jx0+e7j7rP3/x8tXrvf2Di+AaL3EknXb+qoCAWlkckSKNV7VHMIXGy2L2ZZW//I4+KGe/0aLGsYFrqyolgZjKctD1FPK5mhxP9gbJMFlH/BCkGzAQmzif7Pd+5qWTjUFLUkMIWZrUNG7Bk5Ial/28CViDnME1ZgwtGAzjdu15GR8yU8aV8/wsxWt2W9GCCWFhCq40QNNwP7ci/5XLGqo+jVtl64bQyrtGVaNjcvFqAXGpPErSCwYgvWKvsZyCB0m8pv7hdpsgQaNHvezSWhXIM1rs8tlf/gO706x0Nc9r8Ybma7f9zgykeB3dJf1Htq0qnJsRFCzk3zxylXTGgC3f50QlVtBoamlOtOR7pvev9xBcHA/TZJh+PRmcHm0uuyveinfiSKTiozgVZ+JcjIQUTvwQv8Rt9DnCSEf2rjTqbTRvRCeimz9leN+2</latexit> <latexit sha1_base64=\"e6geauY33jiuodw/0nXuPUGwUKo=\">AAACvXicdVHbattAEF0rvaTuLUkf+yJqAqEUI4VA+9ZAX/KYQp0EJGNGq1G8eC9id9TYCH9G30rzXf2bjh1TrKQdWDicmbNzZqaotQqUJL970c6jx0+e7j7rP3/x8tXrvf2Di+AaL3EknXb+qoCAWlkckSKNV7VHMIXGy2L2ZZW//I4+KGe/0aLGsYFrqyolgZjKctD1FPK5mhxP9gbJMFlH/BCkGzAQmzif7Pd+5qWTjUFLUkMIWZrUNG7Bk5Ial/28CViDnME1ZgwtGAzjdu15GR8yU8aV8/wsxWt2W9GCCWFhCq40QNNwP7ci/5XLGqo+jVtl64bQyrtGVaNjcvFqAXGpPErSCwYgvWKvsZyCB0m8pv7hdpsgQaNHvezSWhXIM1rs8tlf/gO706x0Nc9r8Ybma7f9zgykeB3dJf1Htq0qnJsRFCzk3zxylXTGgC3f50QlVtBoamlOtOR7pvev9xBcHA/TZJh+PRmcHm0uuyveinfiSKTiozgVZ+JcjIQUTvwQv8Rt9DnCSEf2rjTqbTRvRCeimz9leN+2</latexit> 1 <latexit sha1_base64=\"mg5btP8pvA4PX4Lgss3gJW+5gB0=\">AAACvXicdVHbattAEF0rbZO6l1z62BdREwilGKkU0rcE+tLHFOokIBkzWo3ixXsRu6PGRvgz+lba7+rfdOyYYiXtwMLhzJydMzNFrVWgJPndi3YePX6yu/e0/+z5i5f7B4dHl8E1XuJIOu38dQEBtbI4IkUar2uPYAqNV8Xs0yp/9Q19UM5+pUWNYwM3VlVKAjGV5aDrKeRzNUknB4NkmKwjfgjSDRiITVxMDns/8tLJxqAlqSGELE1qGrfgSUmNy37eBKxBzuAGM4YWDIZxu/a8jI+ZKePKeX6W4jW7rWjBhLAwBVcaoGm4n1uR/8plDVUfx62ydUNo5V2jqtExuXi1gLhUHiXpBQOQXrHXWE7BgyReU/94u02QoNGjXnZprQrkGS12+ewv/47daVa6mue1eEvztdt+ZwZSvI7ukv4j21YVzs0IChbybx65SjpjwJZvc6ISK2g0tTQnWvI90/vXewgu3w/TZJh++TA4P9lcdk+8Fm/EiUjFqTgXn8WFGAkpnPgufopf0VmEkY7sXWnU22heiU5Et38AYynftQ==</latexit> <latexit sha1_base64=\"mg5btP8pvA4PX4Lgss3gJW+5gB0=\">AAACvXicdVHbattAEF0rbZO6l1z62BdREwilGKkU0rcE+tLHFOokIBkzWo3ixXsRu6PGRvgz+lba7+rfdOyYYiXtwMLhzJydMzNFrVWgJPndi3YePX6yu/e0/+z5i5f7B4dHl8E1XuJIOu38dQEBtbI4IkUar2uPYAqNV8Xs0yp/9Q19UM5+pUWNYwM3VlVKAjGV5aDrKeRzNUknB4NkmKwjfgjSDRiITVxMDns/8tLJxqAlqSGELE1qGrfgSUmNy37eBKxBzuAGM4YWDIZxu/a8jI+ZKePKeX6W4jW7rWjBhLAwBVcaoGm4n1uR/8plDVUfx62ydUNo5V2jqtExuXi1gLhUHiXpBQOQXrHXWE7BgyReU/94u02QoNGjXnZprQrkGS12+ewv/47daVa6mue1eEvztdt+ZwZSvI7ukv4j21YVzs0IChbybx65SjpjwJZvc6ISK2g0tTQnWvI90/vXewgu3w/TZJh++TA4P9lcdk+8Fm/EiUjFqTgXn8WFGAkpnPgufopf0VmEkY7sXWnU22heiU5Et38AYynftQ==</latexit> <latexit 
sha1_base64=\"mg5btP8pvA4PX4Lgss3gJW+5gB0=\">AAACvXicdVHbattAEF0rbZO6l1z62BdREwilGKkU0rcE+tLHFOokIBkzWo3ixXsRu6PGRvgz+lba7+rfdOyYYiXtwMLhzJydMzNFrVWgJPndi3YePX6yu/e0/+z5i5f7B4dHl8E1XuJIOu38dQEBtbI4IkUar2uPYAqNV8Xs0yp/9Q19UM5+pUWNYwM3VlVKAjGV5aDrKeRzNUknB4NkmKwjfgjSDRiITVxMDns/8tLJxqAlqSGELE1qGrfgSUmNy37eBKxBzuAGM4YWDIZxu/a8jI+ZKePKeX6W4jW7rWjBhLAwBVcaoGm4n1uR/8plDVUfx62ydUNo5V2jqtExuXi1gLhUHiXpBQOQXrHXWE7BgyReU/94u02QoNGjXnZprQrkGS12+ewv/47daVa6mue1eEvztdt+ZwZSvI7ukv4j21YVzs0IChbybx65SjpjwJZvc6ISK2g0tTQnWvI90/vXewgu3w/TZJh++TA4P9lcdk+8Fm/EiUjFqTgXn8WFGAkpnPgufopf0VmEkY7sXWnU22heiU5Et38AYynftQ==</latexit> <latexit sha1_base64=\"mg5btP8pvA4PX4Lgss3gJW+5gB0=\">AAACvXicdVHbattAEF0rbZO6l1z62BdREwilGKkU0rcE+tLHFOokIBkzWo3ixXsRu6PGRvgz+lba7+rfdOyYYiXtwMLhzJydMzNFrVWgJPndi3YePX6yu/e0/+z5i5f7B4dHl8E1XuJIOu38dQEBtbI4IkUar2uPYAqNV8Xs0yp/9Q19UM5+pUWNYwM3VlVKAjGV5aDrKeRzNUknB4NkmKwjfgjSDRiITVxMDns/8tLJxqAlqSGELE1qGrfgSUmNy37eBKxBzuAGM4YWDIZxu/a8jI+ZKePKeX6W4jW7rWjBhLAwBVcaoGm4n1uR/8plDVUfx62ydUNo5V2jqtExuXi1gLhUHiXpBQOQXrHXWE7BgyReU/94u02QoNGjXnZprQrkGS12+ewv/47daVa6mue1eEvztdt+ZwZSvI7ukv4j21YVzs0IChbybx65SjpjwJZvc6ISK2g0tTQnWvI90/vXewgu3w/TZJh++TA4P9lcdk+8Fm/EiUjFqTgXn8WFGAkpnPgufopf0VmEkY7sXWnU22heiU5Et38AYynftQ==</latexit> ( 1 \u0000 ) 1 <latexit sha1_base64=\"NQaqpUnq6dCEYoNBLkI0fjNzoB0=\">AAACw3icdVHbihNBEO2MtzVeNquPvgyGhSgaZkTQxwURfFzB7C5kQqjpqdk06cvQXaMJw3yJPupH+TdWskEyu1rQcDhVp+tUVV5pFShJfveiW7fv3L13cL//4OGjx4eDoydnwdVe4kQ67fxFDgG1sjghRRovKo9gco3n+fLDJn/+FX1Qzn6hdYUzA5dWlUoCMTUfHI7S1xnoagEvspWap/PBMBkn24hvgnQHhmIXp/Oj3o+scLI2aElqCGGaJhXNGvCkpMa2n9UBK5BLuMQpQwsGw6zZOm/jY2aKuHSen6V4y+4rGjAhrE3OlQZoEa7nNuS/ctOayvezRtmqJrTyqlFZ65hcvFlDXCiPkvSaAUiv2GssF+BBEi+rf7zfJkjQ6FG3XVqrHHlGi11++pd/xe40K13F81r8Rqut235nBlK8ju6S/iPbV+XOLQlyFvJvHrlKOmPAFi8zogJLqDU1tCJq+Z7p9evdBGdvxmkyTj+/HZ6Mdpc9EM/EczESqXgnTsQncSomQopafBc/xa/oY7SMfERXpVFvp3kqOhG1fwAoseC9</latexit> <latexit sha1_base64=\"NQaqpUnq6dCEYoNBLkI0fjNzoB0=\">AAACw3icdVHbihNBEO2MtzVeNquPvgyGhSgaZkTQxwURfFzB7C5kQqjpqdk06cvQXaMJw3yJPupH+TdWskEyu1rQcDhVp+tUVV5pFShJfveiW7fv3L13cL//4OGjx4eDoydnwdVe4kQ67fxFDgG1sjghRRovKo9gco3n+fLDJn/+FX1Qzn6hdYUzA5dWlUoCMTUfHI7S1xnoagEvspWap/PBMBkn24hvgnQHhmIXp/Oj3o+scLI2aElqCGGaJhXNGvCkpMa2n9UBK5BLuMQpQwsGw6zZOm/jY2aKuHSen6V4y+4rGjAhrE3OlQZoEa7nNuS/ctOayvezRtmqJrTyqlFZ65hcvFlDXCiPkvSaAUiv2GssF+BBEi+rf7zfJkjQ6FG3XVqrHHlGi11++pd/xe40K13F81r8Rqut235nBlK8ju6S/iPbV+XOLQlyFvJvHrlKOmPAFi8zogJLqDU1tCJq+Z7p9evdBGdvxmkyTj+/HZ6Mdpc9EM/EczESqXgnTsQncSomQopafBc/xa/oY7SMfERXpVFvp3kqOhG1fwAoseC9</latexit> <latexit sha1_base64=\"NQaqpUnq6dCEYoNBLkI0fjNzoB0=\">AAACw3icdVHbihNBEO2MtzVeNquPvgyGhSgaZkTQxwURfFzB7C5kQqjpqdk06cvQXaMJw3yJPupH+TdWskEyu1rQcDhVp+tUVV5pFShJfveiW7fv3L13cL//4OGjx4eDoydnwdVe4kQ67fxFDgG1sjghRRovKo9gco3n+fLDJn/+FX1Qzn6hdYUzA5dWlUoCMTUfHI7S1xnoagEvspWap/PBMBkn24hvgnQHhmIXp/Oj3o+scLI2aElqCGGaJhXNGvCkpMa2n9UBK5BLuMQpQwsGw6zZOm/jY2aKuHSen6V4y+4rGjAhrE3OlQZoEa7nNuS/ctOayvezRtmqJrTyqlFZ65hcvFlDXCiPkvSaAUiv2GssF+BBEi+rf7zfJkjQ6FG3XVqrHHlGi11++pd/xe40K13F81r8Rqut235nBlK8ju6S/iPbV+XOLQlyFvJvHrlKOmPAFi8zogJLqDU1tCJq+Z7p9evdBGdvxmkyTj+/HZ6Mdpc9EM/EczESqXgnTsQncSomQopafBc/xa/oY7SMfERXpVFvp3kqOhG1fwAoseC9</latexit> <latexit 
sha1_base64=\"NQaqpUnq6dCEYoNBLkI0fjNzoB0=\">AAACw3icdVHbihNBEO2MtzVeNquPvgyGhSgaZkTQxwURfFzB7C5kQqjpqdk06cvQXaMJw3yJPupH+TdWskEyu1rQcDhVp+tUVV5pFShJfveiW7fv3L13cL//4OGjx4eDoydnwdVe4kQ67fxFDgG1sjghRRovKo9gco3n+fLDJn/+FX1Qzn6hdYUzA5dWlUoCMTUfHI7S1xnoagEvspWap/PBMBkn24hvgnQHhmIXp/Oj3o+scLI2aElqCGGaJhXNGvCkpMa2n9UBK5BLuMQpQwsGw6zZOm/jY2aKuHSen6V4y+4rGjAhrE3OlQZoEa7nNuS/ctOayvezRtmqJrTyqlFZ65hcvFlDXCiPkvSaAUiv2GssF+BBEi+rf7zfJkjQ6FG3XVqrHHlGi11++pd/xe40K13F81r8Rqut235nBlK8ju6S/iPbV+XOLQlyFvJvHrlKOmPAFi8zogJLqDU1tCJq+Z7p9evdBGdvxmkyTj+/HZ6Mdpc9EM/EczESqXgnTsQncSomQopafBc/xa/oY7SMfERXpVFvp3kqOhG1fwAoseC9</latexit> Figure 3: Hierarchical learning-to-rank.",
"model should be able to learn a larger margin between easier pairs: we show that this is superior than using a single margin in our experiments.",
"Analogous to the reasoning that in Equation 5 the margin can just be 1, only the relative ratios between 's are important.",
"For simplicity, 5 if the ontology has L levels, we assign l = L l + 1 .",
"Flexible Threshold Equation 6 only ranks positive types higher than negative types so that all children types given a parent type are ranked based on their relevance to the entity mention.",
"What should be the threshold between positive and negative types?",
"We could set the threshold to be 0 (ap-proaching the multi-label classification problem as a set of binary classification problem, see Lin and Ji (2019)), or tune an adaptive, type-specific threshold for each parent type (Zhang et al., 2018).",
"Here, we propose a simpler method.",
"5 We did hyperparameter search on these margin hyperparameters and found that Equation 7 generalized well.",
"parent type /person/artist can be considered as a kind of prior for all types of artists, the model should learn that the positive type singer should have a higher confidence than artist, and in turn, higher than other types of artists like author or actor.",
"Hence the ranker should learn that a positive subtype should rank higher than its parent, and its parent should rank higher than its negative children.",
"Under this formulation, at decoding time, given parent type y , a child subtype z < : y that scores higher than y should be output as a positive label.",
"We translate the ranking relation in Equation 8 into a ranking loss that extends Equation 6.",
"In Equation 6, there is an expected margin between positive types and negative types.",
"Since we inserted the parent in the middle, we divide the margin into and ( 1 ) : being the margin between positive types and the parent; and ( 1 ) is the margin between the parent and the negative types.",
"For a visualization see Figure 3. The hyperparameter [ 0 , 1 ] can be used to tune the precision-recall tradeoff when outputting types: the smaller , the smaller the expected margin there is between positive types and the parent.",
"This intuitively increases precision but decreases recall (only very confident types can be output).",
"Vice versa, increasing decreases precision but increase recall.",
"Therefore we learn 3 sets of ranking relations from Equation 8:",
"(i) positive types should be scored above parent by ;",
"(ii) parent should be scored above any negative sibling types by ( 1 ) ;",
"(iii) positive types should be scored above negative sibling types by .",
"Our final hierarchical ranking loss is formulated as follows.",
"J y (cid:31) y = [ lev ( y ) F ( x , y ) + F ( x , y )] + J y (cid:31) y (cid:48) = (cid:88) y (cid:48) Sb ( y )\\ Y [( 1 ) lev ( y ) F ( x , y ) + F ( x , y (cid:48) )] + J y (cid:31) y (cid:48) = (cid:88) y (cid:48) Sb ( y )\\ Y [ lev ( y ) F ( x , y ) + F ( x , y (cid:48) )] + J hier ( x , Y ) = (cid:88) y Y (cid:0) J y (cid:31) y + J y (cid:31) y (cid:48) + J y (cid:31) y (cid:48) (cid:1) (9) 4.4 Decoding Predicting the types for each entity mention can be performed via iterative searching on the type tree, from the root ENTITY node to coarser types, then to finer-grained types.",
"This ensures that our output does not violate the hierarchical property, i.e., if a subtype is output, its parent must be output.",
"Given instance x we compute the score F ( x , y ) for each type y Y , the searching process starts with the root node ENTITY of the type tree in the queue.",
"For each type y in the node, a child node z < : y (subtypes) is added to the predicted type set if F ( x , z ) > F ( x , y ) , corresponding to the ranking relation in Equation 8 that the model has learned.",
"6 Here we only take the topk element to add to the queue to prevent from over-generating types.",
"This can also be used to enforce the single-path property (setting k = 1) if the dataset is single-path.",
"For each level i in the type hierarchy, we limit the branching factor (allowed children) to be k i .",
"The algorithm is listed in Algorithm 1, where the function TOPK ( S , k , f ) selects the topk elements from S with respect to the function f .",
"Each type y Y in the ontology is assigned a type embedding y R d t .",
"We notice the binary subtyping relation < : Y Y on the types.",
"Trouillon et al. (2016) proposed the relation embedding method ComplEx that works well with anti-symmetric and transitive relations such as subtyping.",
"It has been employed in FET before 6 For the OntoNotes dataset, we introduce another set of per-level hyperparameters lev ( y ) , and the threshold value F ( x , y ) is modified to F ( x , y ) + lev ( y ) , akin to the adaptive threshold in Zhang et al. (2018).",
"This is due to a large type distribution mismatch between the training and dev/test sets in OntoNotes (in dev/test there are a lot of instances with the single type /other but not in the training set).",
"For other datasets they are unused, i.e. just 0. in Murty et al. (2018), ComplEx is added to the loss to regulate the type embeddings.",
"ComplEx operates in the complex space we use the natural isomorphism between real and complex spaces to map the type embedding into complex space (first half of the embedding vector as the real part, and the second half as the imaginary part): : R d t C d t / 2 (10) t = [ Re ( t ) ; Im ( t ) ] (11) We learn a single relation embedding r C d t / 2 for the subtyping relation.",
"Given type y and z , the subtyping statement y < : z is modeled using the following scoring function: r ( y , z ) = Re (cid:16) r (cid:16) ( y ) (cid:12) ( z ) (cid:17)(cid:17) (12) where (cid:12) is element-wise product and x is the complex conjugate of x .",
"If y < : z then r ( y , z ) > 0 ; and vice versa, r ( y , z ) < 0 if y : z .",
"Loss Given instance ( x , Y ) , for each positive type y Y , we learn the following relations: y < : y y : y (cid:48) , y (cid:48) Sb ( y ) y : y (cid:48) , y (cid:48) Sb ( y ) (13) Translating these relation constraints as a binary classification problem (is or is not a subtype) under a primal SVM, we get a hinge loss: J rel ( x , Y ) = (cid:88) y Y (cid:32) [ 1 r ( y , y )] + + (cid:88) y (cid:48) Sb ( y ) Sb ( y ) [ 1 + r ( y , y (cid:48) )] + (cid:33) .",
"(14)",
"This is different from Murty et al. (2018), where a binary cross-entropy loss on randomly sampled ( y , y (cid:48) ) pairs is used.",
"Our experiments showed that the loss in Equation 14 performs better than the cross-entropy version, due to the structure of the training pairs: we use siblings and siblings of parents as negative samples (these are types closer to the positive parent type), hence are training with more competitive negative samples.",
"The AdamW optimizer (Loshchilov and Hutter, 2019) is used to train the model, as it is shown to be superior than the original Adam under L 2 regularization.",
"Hyperparameters (ratio of margin above/below threshold), (weight of subtyping relation constraint), and ( L 2 regularization coefficient) are tuned.",
"At validation time, we tune the maximum branching factors for each level k 1 , , k L .",
"7 These parameters tune the trade-off between the precision and recall for each layer and prevents over-generation (as we observed in some cases).",
"All hyperparameters are tuned so that models achieve maximum micro F 1 scores (see subsection 5.4).",
"AIDA The AIDA Phase 1 practice dataset for hierarchical entity typing comprises of 297 documents from LDC2019E04 / LDC2019E07 , and the evaluation dataset is from LDC2019E42 / LDC2019E77 .",
"We take only the English part of the data, and use the practice dataset as train/dev, and the evaluation dataset as test.",
"The practice dataset comprises of 3 domains, labeled as R103 , R105 , and R107 .",
"Since the evaluation dataset is out-of-domain, we use the smallest domain R105 as dev, and the remaining R103 and R107 as train.",
"The AIDA entity dataset has a 3-level ontology, termed type , subtype , and subsubtype .",
"A mention can only have one label for each level, hence the dataset is single-path , thus the branching factors ( k 1 , k 2 , k 3 ) for the three layers are set to ( 1 , 1 , 1 ) .",
"BBN Weischedel and Brunstein (2005) labeled a portion of the one million word Penn Treebank corpus of Wall Street Journal texts ( LDC95T7 ) using a two-level hierarchy, resulting in the BBN Pronoun Coreference and Entity Type Corpus.",
"We follow the train/test split by Ren et al. (2016b), and follow the train/dev split by Zhang et al. (2018).",
"OntoNotes Gillick et al. (2014) sampled sentences from the OntoNotes corpus and annotated the entities using 89 types.",
"We follow the train/dev/test data split by Shimaoka et al. (2017).",
"FIGER Ling and Weld (2012) sampled a dataset from Wikipdia articles and news reports.",
"Entity mentions in these texts are mapped to a 113-type ontology derived from Freebase (Bollacker et al., 2008).",
"Again, we follow the data split by Shimaoka et al. (2017).",
"To best compare to recent prior work, we follow Lin and Ji (2019) where the ELMo encodings of words are fixed and not updated.",
"We use all 3 layers of ELMo output, so the initial embedding has dimension d w = 3072 .",
"We set the type embedding dimensionality to be d t = 1024 .",
"The initial learning rate is 10 5 and the batch size is 256.",
"Hyperparameter choices are tuned on dev sets, and are listed in Table 1. We employ early stopping: choosing the model that yields the best micro F 1 score on dev sets.",
"Our models are implemented using AllenNLP (Gardner et al., 2018), with implementation for subtyping relation constraints from OpenKE (Han et al., 2018).",
"We compare our approach to major prior work in FET that are capable of multi-path entity typing.",
"9 For AIDA, since there are no prior work on this dataset to our knowledge, we also implemented multi-label classification as set of binary classifier models (similar to Lin and Ji (2019)) as a baseline, with our mention feature extractor.",
"The results are shown in Table 2 as Multi-label.",
"8 The OntoNotes dataset has an additional set of hyperparameters, i.e. the per-level threshold 1 , 2 , 3 = ( 2 . 5 , 3 . 0 , 0 . 0 ) .",
"9 Zhang et al. (2018) included document-level information in their best resultsfor fair comparison, we used their results without document context, as are reported in their ablation tests.",
"We follow prior work and use strict accuracy (Acc), macro F 1 (MaF), and micro F 1 (MiF) scores.",
"Given instance x i , we denote the gold type set as Y i and the predicted type set Y i .",
"The strict accuracy is the ratio of instances where Y i = Y i .",
"Macro F 1 is the average of all F 1 scores between Y i and Y i for all instances, whereas micro F 1 counts total true positives, false negatives and false positives globally.",
"We also investigate per-level accuracies on AIDA.",
"The accuracy on level l is the ratio of instances whose predicted type set and gold type set are identical at level l .",
"If there is no type output at level l , we append with OTHER to create a dummy type at level l : e.g. /person/OTHER/OTHER .",
"Hence accuracy of the last level (in AIDA, level 3) is equal to the strict accuracy.",
"All our results are run under the two conditions regarding partial type paths: exclusive or undefined.",
"The result of the AIDA dataset is shown in Table 2. Our model under the exclusive case outperforms a multi-label classification baseline over all metrics.",
"Of the 187 types specified in the AIDA ontology, the train/dev set only covers 93 types.",
"The test set covers 85 types, of which 63 are seen types.",
"We could perform zero-shot entity typing by initializing a type's embedding using the type name (e.g. /fac/structure/plaza ) together with its description (e.g. An open urban public space, such as a city square ) as is designated in the data annotation manual.",
"We leave this as future work.",
": Result has document-level context information, hence not comparable.",
"Results for the BBN, OntoNotes, and FIGER can be found in Table 3. Across 3 datasets, our method produces the state-of-the-art performance on strict accuracy and micro F 1 scores, and state-of-the-art or comparable ( 0 . 5% ) performance on macro F 1 score, as compared to prior models, e.g. (Lin and Ji, 2019).",
"Especially, our method improves upon the strict accuracy substantially (4% 8%) across these datasets, showing our decoder are better at outputting exact correct type sets.",
"Partial type paths: exclusive or undefined?",
"Interestingly, we found that for AIDA and FIGER, partial type paths should be better considered as exclusive , whereas for BBN and OntoNotes, considering them as undefined leads to better performance.",
"We hypothesize that this comes from how the data is annotatatedthe annotation manual may contain directives as whether to interpret partial type paths as exclusive or undefined, or the data may be non-exhaustively annotated, leading to undefined partial types.",
"We advocate for careful investigation into partial type paths for future experiments and data curation.",
"Ablation Studies We compare our best model with various components of our model removed, to study the gain from each component.",
"From the best of these two settings ( exclusive and undefined ), we report the performance of",
"(i) removing the subtyping constraint as is described in subsection 4.5;",
"(ii) substituting the multi-level margins in Equation 7 with a flat margin, i.e., margins on all levels are set to be 1. These results are shown in Table 2 and Table 3 under our best results, and they show that both multi-level margins and subtyping relation constraints offer orthogonal improvements to our models.",
"Error Analysis We identify common patterns of errors, coupled with typical examples: Confusing types: In BBN, our model outputs /gpe/city when the gold type is /location/region for ... in shipments from the Valley of either hardware or software goods. These types are semantically similar, and our model failed to discriminate between these types.",
"Incomplete types: In FIGER, given instance ... multi-agency investigation headed by the U.S. Immigration and Customs Enforcement 's homeland security investigations unit , the gold types are /government agency and /organization , but our model failed to output /organization .",
"Focusing on only parts of the mention: In AIDA, given instance ... suggested they were the work of Russian special forces assassins out to blacken the image of Kievs pro-Western authorities , our model outputs /org/government whereas the gold type is /per/militarypersonnel .",
"Our model focused on the Russian special forces part, but ignored the assassins part.",
"Better mention representation is required to correct this, possibly by introducing type-aware mention representationwe leave this as future work.",
"We proposed",
"(i) a novel multi-level learning to rank loss function that operates on a type tree, and",
"(ii) an accompanying coarse-to-fine decoder to fully embrace the ontological structure of the types for hierarchical entity typing.",
"Our approach achieved state-of-the-art performance across various datasets, and made substantial improvement (48%) upon strict accuracy.",
"Additionally, we advocate for careful investigation into partial type paths : their interpretation relies on how the data is annotated, and in turn, in-fluences typing performance.",
"We thank our colleague Guanghui Qin and the anonymous reviewers for their insightful suggestions and comments.",
"This research benefited from support by the JHU Human Language Technology Center of Excellence (HLTCOE), and DARPA AIDA.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes.",
"The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government."
] | [
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Abstract AI technologies for Natural Languages have made tremendous progress recently.",
"However, commensurate progress has not been made on Sign Languages, in particular, in recognizing signs as individual words or as complete sentences.",
"We introduce OpenHands 1 , a library where we take four key ideas from the NLP community for low-resource languages and apply them to sign languages for word-level recognition.",
"First, we propose using pose extracted through pretrained models as the standard modality of data in this work to reduce training time and enable efficient inference, and we release standardized pose datasets for different existing sign language datasets.",
"Second, we train and release checkpoints of 4 pose-based isolated sign language recognition models across 6 languages (American, Argentinian, Chinese, Greek, Indian, and Turkish), providing baselines and ready checkpoints for deployment.",
"Third, to address the lack of labelled data, we propose self-supervised pretraining on unlabelled data.",
"We curate and release the largest pose-based pretraining dataset on Indian Sign Language (Indian-SL).",
"Fourth, we compare different pretraining strategies and for the first time establish that pretraining is effective for sign language recognition by demonstrating",
"(a) improved fine-tuning performance especially in low-resource settings, and",
"(b) high crosslingual transfer from Indian-SL to few other sign languages.",
"We open-source all models and datasets in OpenHands with a hope that it makes research in sign languages reproducible and more accessible.",
"According to the World Federation of the Deaf, there are approximately 72 million Deaf people worldwide.",
"More than 80% of them live in developing countries.",
"Collectively, they use more than Equal contribution.",
"300 different sign languages varying across different nations (UN, 2021).",
"Loss of hearing severely limits the ability of the Deaf to communicate and thereby adversely impacts their quality of life.",
"In the current increasingly digital world, systems to ease digital communication between Deaf and hearing people are important accessibility aids.",
"AI has a crucial role to play in enabling this accessibility with automated tools for Sign Language Recognition (SLR).",
"Specifically, transcription of sign language as complete sentences is referred to as Continuous Sign Language Recognition (CSLR), while recognition of individual signs is referred to as Isolated Sign Language Recognition (ISLR).",
"There have been various efforts to build datasets and models for ISLR and CLSR tasks (Adaloglou et al., 2021; Koller, 2020).",
"But these results are often concentrated on a few sign languages (such as the American Sign Language) and are reported across different research communities with few standardized baselines.",
"When compared against textand speech-based NLP research, the progress in AI research for sign languages is significantly lagging.",
"This lag has been recently brought to notice of the wider NLP community (Yin et al., 2021).",
"For most sign languages across the world, the amount of labelled data is very low and hence they can be considered low-resource languages .",
"In the NLP literature, many successful templates have been proposed for such low-resource languages.",
"In this work, we adopt and combine many of these ideas from NLP to sign language research.",
"We implement these ideas and release several datasets and models in an open-source library OpenHands with the following key contributions: 1. Standardizing on pose as the modality: We consider using pose-extractor as an encoder, which processes raw RGB videos and extracts the framewise coordinates for few keypoints.",
"Pose-extractors are useful across sign languages and also other 2114 tasks such as action recognition (Yan et al., 2018; Liu et al., 2020), and can be trained to high accuracy.",
"Further, as we report, pose as a modality makes both training and inference for SLR tasks efficient.",
"We release pose-based versions of existing datasets for 5 sign languages: American, Argentinian, Greek, Indian, and Turkish.",
"2. Standardized comparison of models across languages: The progress in NLP has been earmarked by the release of standard datasets, including multilingual datasets like XGLUE (Liang et al., 2020), on which various models are compared.",
"As a step towards such standardization for ISLR, we train 4 different models spanning sequence models (LSTM and Transformer) and graph-based models (ST-GCN and SL-GCN) on 7 different datasets across 6 sign languages mentioned in Table 1, and compare them against models proposed in the literature.",
"We release all 28 trained models along with scripts for efficient deployment which demonstrably achieve real-time performance on CPUs and GPUs.",
"3. Corpus for self-supervised training: A defining success in NLP has been the use of self-supervised training, for instance masked-language modelling (Devlin et al., 2018), on large corpora of natural language text.",
"To apply this idea to SLR, we need similarly large corpora of sign language data.",
"To this end, we curate 1,129 hours of video data on Indian Sign Language.",
"We pre-process these videos with a custom pipeline and extract keypoints for all frames.",
"We release this corpus which is the first such large-scale sign language corpus for self-supervised training.",
"4. Effectiveness of self-supervised training: Self-supervised training has been demonstrated to be effective for NLP: Pretrained models require small amounts of fine-tuning data (Devlin et al., 2018; Baevski et al., 2020) and multilingual pretraining allows crosslingual generalization (Hu et al., 2020b).",
"To apply this for SLR, we evaluate multiple strategies for self-supervised pretraining of ISLR models and identify those that are effective.",
"With the identified pretraining strategies, we demonstrate the significance of pretraining by showing improved fine-tuning performance, especially in very low-resource settings and also show high crosslingual transfer from Indian SL to other sign languages.",
"This is the first and successful attempt that establishes the effectiveness of self-supervised learning in SLR.",
"We release the pretrained model and the fine-tuned models for 4 different sign languages.",
"Through these datasets, models, and experiments we make several observations.",
"First, in comparing standardized models across different sign languages, we find that graph-based models working on pose modality define state-of-the-art results on most sign languages.",
"RNN-based models lag on accuracy but are significantly faster and thus appropriate for constrained devices.",
"Second, we establish that self-supervised pretraining helps as it improves on equivalent models trained from scratch on labelled ISLR data.",
"The performance gap is particularly high if the labelled data contains fewer samples per label, i.e., for the many sign languages which have limited resources the value of self-supervised pretraining is particularly high.",
"Third, we establish that self-supervision in one sign language (Indian SL) can be crosslingually transferred to improve SLR on other sign languages (Amer-ican, Chinese, and Argentinian).",
"This is particularly encouraging for the long tail of over 300 sign languages that are used across the globe.",
"Fourth, we establish that for real-time applications, pose-based modality is preferable over other modalities such as RGB, use of depth sensors, etc. due to reduced infrastructure requirements (only camera), and higher efficiency in self-supervised pretraining, fine-tuning on ISLR, and inference.",
"We believe such standardization can help accelerate dataset collection and model benchmarking.",
"Fifth, we observe that the trained checkpoints of the pose-based models can be directly integrated with pose estimation models to create a pipeline that can provide real-time inference even on CPUs.",
"Such a pipeline can enable the deployment of these models in realtime video conferencing tools, perhaps even on smartphones.",
"As mentioned all datasets and models are released with permissible licenses in OpenHands with the intention to make SLR research more accessible and standardized.",
"We hope that others contribute datasets and models to the library, especially representing the diversity of sign languages used across the globe.",
"The rest of the paper is organized as follows.",
"In section 2 we present a brief overview of the existing work.",
"In section 3 we describe our efforts in standardizing datasets and models across six different sign languages.",
"In section 4 we explain our pretraining corpus and strategies for self-2115 supervised learning and detail results that establish its effectiveness.",
"In section 5 we describe in brief the functionalities of the OpenHands library.",
"In section 6, we summarize our work and also list potential follow-up work.",
"Significant progress has been made in Isolated Sign Language Recognition (ISLR) due to the release of datasets (Li et al., 2020; Sincan and Keles, 2020; Chai et al., 2014; Huang et al., 2019) and recent deep learning architectures (Adaloglou et al., 2021).",
"This section reviews this work, with a focus on pose-based models.",
"A sign language (SL) is the visual language used by the Deaf and hard-of-hearing (DHH) individuals (and also by those who comunnicate with them), which involves usage of various bodily actions, like hand gestures and facial expressions, called signs to communicate.",
"A sequence of signs constitutes a phrase or sentence in a SL.",
"The signs can be transcribed into sign-words of any specific spoken language usually written completely in capital letters.",
"Each such sign-word is technically called as a gloss and is the standardized basic atomic token of an SL transcript (Schembri and Crasborn, 2010).",
"It is be noted that there is not (always) one-to-one relationships between glosses and spoken language words.",
"The task of converting each visual sign communicated by a signer into a gloss is called isolated sign language recognition (ISLR).",
"The task of converting a continuous sequence of visual signs into serialized glosses is referred as continuous sign language recognition (CSLR).",
"CSLR can either be modeled as an end-to-end task, or as a combination of sign language segmentation and ISLR.",
"The task of converting signs into spoken language text is referred as sign language translation (SLT), which can again either be end-to-end or a combination of CLSR and gloss-sequence to spoken phrase converter.",
"In terms of real-world applications, eventhough CSLR is more practically useful than ISLR, it does not still undermine the value in studying and implementing ISLR.",
"The applications of ISLR include building sign spotting systems (Albanie et al., 2020), building alignment networks (Albanie et al., 2021) to aid in building CSLR datasets (or evaluate CSLR output), building CSLR systems on top of an automatic SL segmentation model (Farag and Brock, 2019) which identifies the frame boundaries for signs in videos to divide them into approximate meaningful segments (glosses), etc.",
"Although SL content is predominantly recorded as RGB (color) videos, it can also be captured using various other modalities like depth maps or point cloud, finger gestures recorded using sensors, skeleton representation of the signer, etc.",
"In this work, we focus on ISLR using pose-skeleton modality.",
"A pose representation, extracted using pose estimation models, provides the spatial coordinates at which the joints (such as elbows and knees), called keypoints, are located in a field or video.",
"This pose information can be represented as a connected graph with nodes representing keypoints and edges may be constructed across nodes to approximately represent the human skeleton.",
"Initial methods for SLR focused on hand gestures from either video frames (Reshna et al., 2020) or sensor data such as from smart gloves (Fels and Hinton, 1993).",
"Given that such sensors are not commonplace and that body posture and face expressions are also of non-trivial importance for understanding signs (Hu et al., 2020a), convolutional network based models have been used for SLR (Rao et al., 2018).",
"ISLR can be considered as a multiclass classification task and generally accuracy metric is used the to evaluate the performance of the models.",
"The ISLR task is related to the more widely studied action recognition task (Zhu et al., 2020).",
"Like in action recognition task, highly accurate pose recognition models like OpenPose (Cao et al., 2018) and MediaPipe Holistic (Grishchenko and Bazarevsky, 2020) are being used for ISLR models (Li et al., 2020; Ko et al., 2018), where frame-wise keypoints are the inputs.",
"Although RGB-based models may outperform pose-based models (Li et al., 2020) narrowly, pose-based models have far fewer parameters and are more efficient for deployment if used with very-fast pose estimation pipelines like MediaPipe BlazePose.",
"In this work, we focus on lightweight pose-based ISLR which encode the pose frames and classify the pose using specific decoders.",
"We briefly discuss the two broad types of such models: sequence-based and graph-based.",
"Sequence-based models process the input pose frames sequentially along time, either in one or both directions.",
"Initially, RNNs were used for pose-based action recognition to learn from temporal features (Du et al., 2015; Zhang et al., 2017; Si et al., 2018).",
"Specifically, sequence of pose frames are input to GRU or LSTM layers, and the output from the final timestep is used for classification.",
"Transformer architectures with encoder-only models like BERT (Vaswani et al., 2017) have also been studied for pose-based ISLR models (De Coster et al., 2020).",
"The input is a sequence of pose frames along with positional embeddings.",
"A special [CLS] token is prepended to the sequence, whose final embedding is used for classification.",
"Graph convolution networks (Kipf and Welling, 2017), which are good at modeling graph data have been used for skeleton action recognition to achieve state-of-the-art results, by considering human skeleton sequences as spatio-temporal graphs (Cheng et al., 2020a; Liu et al., 2020).",
"Spatial-Temporal GCN (ST-GCN) uses human body joint connections for spatial connections and temporal connections across frames to construct a 3d graph, which is processed by a combination of spatial graph convolutions and temporal convolutions to efficiently model the spatio-temporal data (Lin et al., 2020).",
"Many architectural improvements have been proposed over ST-GCN for skeleton action recognition (Zhang et al., 2020; Shi et al., 2019b,a; Cheng et al., 2020b,a; Liu et al., 2020).",
"MS-AAGCN (Shi et al., 2020) uses attention to adaptively learn the graph topology and also proposes STC-attention module to adaptively weight joints, frames and channels.",
"Decoupled GCN (Cheng et al., 2020a) improves the capacity of ST-GCN without adding additional computations and also proposes attention guided drop mechanism called DropGraph as a regularization technique.",
"Sign-Language GCN (SL-GCN) (Jiang et al., 2021) combines STC-attention with Decoupled-GCN and extends it to ISLR achieving state-of-the-art results.",
"Although there are works which use an already trained classifier (on a large dataset) to finetune for smaller datasets and obtain state-of-the-art results in the latter (Albanie et al., 2020), there are currently no works which study the value of pretraining on openly available unlabelled data.",
"On this front, we now survey three broad classes of self-supervised pretraining strategies that we reckon could be applied to SLR.",
"Masking-based pretraining: In NLP, masked language modelling is a pretraining technique where randomly masked tokens in the input are predicted.",
"This approach has been explored for action recognition (Cheng et al., 2021), where certain frames are masked and a regression task estimates coordinates of keypoints.",
"In addition, a direction loss is also proposed to classify the quadrant where the motion vector lies.",
"Contrastive-learning based: Contrastive learning is used to learn feature representations of the input to maximize the agreement between augmented views of the data (Gao et al., 2021; Linguo et al., 2021).",
"For positive examples, different augmentations of the same data item are used, while for negative samples randomly-chosen data items usually from a few last training batches are used.",
"A variant of contrastive loss called InfoNCE (van den Oord et al., 2018) is used to minimize the distance between positive samples.",
"Predictive Coding: Predictive Coding aims to learn data representation by continuously correcting its predictions about data in future timesteps given data in certain input timesteps.",
"Specifically, the training objective is to pick the future timestep's representation from other negative samples which are usually picked from recent previous timesteps of the same video.",
"Similar to contrastive learning, a loss function based on NCE is used (Mikolov et al., 2013; van den Oord et al., 2018).",
"This technique was explored for action recognition in a model called Dense Predictive Coding (DPC) (Han et al., 2019).",
"Instead of predicting at the frame-level, DPC introduces coarse-prediction at the scale of non-overlapping windows.",
"In this section, we describe our efforts to curate standardized pose-based datasets across multiple sign languages and benchmark multiple ISLR models on them.",
"Multiple datasets have been created for the ISLR task across sign languages.",
"However, the amount of data significantly varies across different sign languages, with American and Chinese having the largest datasets currently.",
"With a view to cover a diverse set of languages, we study 7 different 2117 Dataset Language Vocab Signers Videos Hrs Data AUTSL (Sincan and Keles, 2020) Turkish 226 43 38,336 20.5 RGBD CSL (Huang et al., 2019) Chinese 500 50 125,000 108.84 RGBD DEVISIGN (Chai et al., 2014) Chinese 2000 30 24,000 21.87 RGBD GSL (Adaloglou et al., 2021) Greek 310 7 40,785 6.44 RGBD INCLUDE (Sridhar et al., 2020) Indian 263 7 4,287 3.57 RGB LSA64 (Ronchetti et al., 2016) Argentinian 64 10 3,200 1.90 RGB WLASL (Li et al., 2020) American 2000 119 21,083 14 RGB Table 1: The diverse set of existing ISLR datasets which we study in this work through pose-based models datasets across 6 sign languages as summarised in Table 1. For each of these datasets, we generate pose-based data using the Mediapipe pose-estimation pipeline (Grishchenko and Bazarevsky, 2020), which enables real-time inference in comparison with models such as OpenPose (Cao et al., 2018).",
"Mediapipe, in our chosen Holistic mode, returns 3d coordinates for 75 keypoints (exclud-ing the face mesh).",
"Out of these, we select only 27 sparse 2d keypoints which convey maximum information, covering upper-body, hands and face.",
"Thus, each input video is encoded into a vector of size F K D , where F is the number of frames in the video, K is the number of keypoints (27 in our case), and D is the number of coordinates (2 in our case).",
"In addition, we perform several normalizations and augmentations explained in Section 5.",
"On the 7 different datasets we consider, different existing ISLR models have been trained which are mentioned in Table 2 which produce their current state-of-the-art results.",
"For INCLUDE dataset, an XGBoost model is used (Sridhar et al., 2020) with direct input as 135 pose-keypoints obtained using OpenPose.",
"For AUTSL, SL-GCN is used (Jiang et al., 2021) with 27 chosen keypoints as input from HRNet pose estimation model.",
"For GSL, the corresponding model (Parelli et al., 2020) is an attention-based encoder-decoder with 3D hand pose and 2D body pose as input.",
"For WLASL, Temporal-GCN is used (Li et al., 2020) by passing 55 chosen keypoints from OpenPose.",
"For LSA64, 33 chosen keypoints from OpenPose are used as input to an LSTM decoder (Konstantinidis et al., 2018).",
"For DEVISIGN, RGB features are used (Yin et al., 2016) and the task is approached using a clustering-based classic technique called Iterative Reference Driven Metric Learning.",
"For CSL dataset, an I3D CNN is used as encoder with input as RGBD frames and BiLSTM as decoder (Adaloglou et al., 2021).",
"For DEVISIGN_L and CSL datasets, we report RGB model results in the table as there are no existing works using pose-based models.",
"The differences in the above models make it difficult to compare them on effectiveness, especially across diverse datasets.",
"To enable standardized comparison of models, we train pose-based ISLR models on all datasets with similar training setups for consistent benchmarking.",
"These models belong to two groups: sequence-based models and graph-based models.",
"For sequence-based models we consider RNN and Transformer based architectures.",
"For the RNN model , we use a 4-layered bidirectional LSTM of hidden layer dimension 128 which takes as input the framewise pose-representation of 27 keypoints with 2 coordinates each, i.e., a vector of 54 points per frame.",
"We also use a temporal attention layer to weight the most effective frames for classification.",
"For the Transformer model , we use a BERT-based architecture consisting of 5 Transformer-encoder layers with 6 attention heads and hidden dimension size 128, with a maximum sequence length of 256.",
"For the graph-based models we consider ST-GCN (Yan et al., 2018) and SL-GCN (Jiang et al., 2021) models as discussed in section 2. For ST-GCN model , we use 10 spatio-2118 Dataset State-of-the-art model Model available in OpenHands Model (Params) Accuracy LSTM BERT ST-GCN SL-GCN AUTSL Pose-SL-GCN 2 (4.9M) 95.02 77.4 81.0 90.4 91.9 CSL RGBD-I3D (27M) 95.68 75.1 88.8 94.2 94.8 DEVISIGN_L RGB-iRDML 56.85 37.6 48.9 55.8 63.9 GSL Pose-Attention (2.1M) 83.42 86.6 89.5 93.5 95.4 INCLUDE Pose-XGBoost 63.10 86.3 90.4 91.2 93.5 LSA64 Pose-LSTM (1.9M) 93.91 90.2 92.5 94.7 97.8 WLASL2000 Pose-TGCN (5.2M) 23.65 20.6 23.2 21.4 30.6 Average accuracy 67.7 73.5 77.3 81.1 Table 2: Accuracy of different models across datasets.",
"temporal GCN layers with the spatial dimension of the graph consisting of the 27 keypoints.",
"For the SL-GCN model , we use again 10 SL-GCN blocks with the same graph structure and hyperparameters as the ST-GCN model.",
"We train 4 models LSTM, BERT, ST-GCN, and SL-GCN for each of the 7 datasets.",
"We use Py-Torch Lightning to implement the data processing and training pipelines.",
"We use Adam Optimizer to train all the models.",
"We search for optimal hyperparameters using grid search to find the best hyperparams for each model on a standard dataset, and report the best configuration per model.",
"For the LSTM model, we set the batch size as 32 and initial learning rate (LR) as 5 e 3 , while for BERT, we set a batch size 64, and LR of 1 e 4 .",
"For ST-GCN and SL-GCN, we use a batch size of 32 and LR of 0.001.",
"We train all our models on a NVIDIA Tesla V100 GPU.",
"Also for all datasets, we only train on the train-sets given and we use valid-sets to do early stopping, whereas some works (like AUTSL) train on combination of train-set and val-set to report the final test accuracy.",
"We run each experiment around 3 times, and report the best accuracy, eventhough we do not see significant difference in accuracies across the runs.",
"All trained models and the training configurations are open-sourced in OpenHands .",
"Accuracy We report the obtained test-set accuracy of detecting individual signs, for each model against each dataset in Table 2. On all datasets, graph-based models report the state-of-the-art results using pose data.",
"Except for AUTSL 2 , on 6 of 2 SOTA AUTSL model is trained on high quality pose data from HRNet model with more keypoints.",
"the 7 datasets, models we train improve upon the accuracy reported in the existing papers sometimes significantly (e.g., over 10% on GSL).",
"These uniform results across a diverse set of SLs confirm that graph-based models on pose modality data define the SOTA.",
"In summary, the standardized benchmarking of multiple models in terms of accuracy on datasets and, measurements of latency on devices (ex-plained in appendix) informs model selection.",
"Making the trade-off between accuracy and latency, we use the ST-GCN model for the pretrained model we discuss later.",
"Our choice is also informed by the cost of the training step: The more accurate SL-GCN model takes 4 longer to train than ST-GCN.",
"In this section, we describe our efforts in building the largest corpus for self-supervised pretraining and our experiments in different pretraining strategies.",
"Indian SL Corpus for Self-supervised pretraining",
"Wikipedia dumps, OSCAR, etc. have enabled pretraining of large language models.",
"Although there are large amounts of raw sign language videos available on the internet, no existing work has studied how such large volumes of open unlabelled data can be collected and used for SLR tasks.",
"To address this, we create a corpus of Indian SL data by curating videos, pre-process the videos, and release a standardized pose-based dataset compatible with the models discussed in the previous section.",
"We manually search for freely available major sources of Indian SL videos.",
"We restrict our search to a single sign language so as to study the effect of pretraining on same language and crosslingual ISLR tasks.",
"We sort the sources by the number of hours of videos and choose the top 5 sources for download.",
"All of these 5 sources, as listed in Table 3 are YouTube channels, totalling over 1,500 hours before preprocessing.",
"We pass these videos through a processing pipeline as described in Figure 2. We initially dump the pose data for all videos, then process them to remove those which are noisy or contain either no person or more than 1 person.",
"This resulted in 1,129 hours of Indian SL data, as detailed source-wise in Table 3. This is significantly larger than all the training sets in the datasets we studied which is on average 177 hours.",
"We pass these videos through MediaPipe to obtain pose information as described earlier, i.e., 75 keypoints per frame.",
"The resultant Indian SL corpus has more than 100 million pose frames.",
"We convert this to the HDF5 format to enable efficient random access, as is required for training.",
"We open-source this corpus of about 250 GB which is available in OpenHands .",
"We explore the three major pretraining strategies as described in Section 2.3 and explain how and why certain self-supervised settings are effective for ISLR.",
"We pretrain on randomly sampled consecutive input sequences of length 60-120 frames (approximating 2-4 secs with 30fps videos).",
"After pretraining, we fine-tune the models on the respective ISLR dataset with an added classification head.",
"We follow the same hyperparameter settings as described in Motion-Transformer (Cheng et al., 2021), to pretrain a BERT-based model with random masking of 40% of the input frames.",
"When using only the regression loss, we find that pretraining learns to reduce the loss as shown in appendix.",
"However, when fine-tuned on the INCLUDE dataset, we see no major contribution of the pretrained model to increasing the accuracy as shown in Table 4. We posit that while pretraining was able to approximate interpolation for the masked frames based on the surrounding context, it did not learn higher-order features relevant across individual signs.",
"We also experiment with different masking ratios (20% and 30%) as well as different length of random contiguous masking spans (randomly selected between 2-10), and obtain similar results.",
"Inspired from the work by Gao et al. (2021), we consider Shear, Scaling and Rotation augmentations to generate the 2 augmented copies of the input pose sequence and we pretrain the model and observe that it converges on reducing the InfoNCE loss (see appendix for plot).",
"We then fine-tune on INCLUDE and again did not observe any gain over the baseline of training from scratch as seen in Table 4. To understand this, we analyzed the embeddings of data from the pretrained model and observed two facts:",
"(a) Embeddings of different augmentations of a video clip are similar indicating successful pretraining, but",
"(b) Embeddings of different videos from the INCLUDE dataset do not show any clustering based on the class (see visualization in appendix).",
"Again, we posit that 2120 pretraining did not learn higher order semantics that could be helpful for ISLR.",
"4.2.3 Predictive-coding based Our architecture is inspired from Dense Predictive Coding (Han et al., 2019), but using pose modality.",
"The architecture is represented in Figure 3. The pose frames from a video clip will be partitioned into multiple non-overlapping windows with equal number of frames in each window.",
"The encoder f takes each window of pose keypoints as input and embeds into the hidden space z .",
"We use ST-GCN as the encoder.",
"The ST-GCN encoder embeds each input window x i , and the direct output is average pooled across the spatial and temporal dimensions to obtain the output embedding z i for each window.",
"The embeddings are then fed to a Gated Recurrent Unit (GRU) as a temporal sequence and the future timesteps z i are predicted sequentially using the past timestep representations from GRU, with an affine transform layer .",
"We use 4 windows of data as input to predict the embeddings of the next 3 windows, each window spanning 10 frames, which we empirically found to be the best setting.",
"For pretraining, we used a batch size of 128 and for finetuning, we used a batch size of 64.",
"For both pretraining and finetuning, we used Adam optimizer with an initial learning rate of 1e-3.",
"The pretraining was done for 200k iterations on a NVIDIA V100 GPU, taking around 26 hours (on Microsoft platform's Azure NC6s_v3 machine).",
"Time",
"Upon fine-tuning on INCLUDE, DPC provides a significant improvement of 3.5% over the baseline.",
"We include a plot comparing the validation accuracy between baseline and finetuned model in appendix.",
"We posit that Sign Language DPC (SL-DPC) is successful, while previous methods were not, as it learns coarse-grained representations across multiple frames and thereby captures motion semantics of actions in SL.",
"To the best of our knowledge, this is the first comparison of pretraining strategies for SLR.",
"We demonstrated that DPC-based pretraining is effective.",
"We now analyze the effectiveness of such pretraining in two constrained settings -",
"(a) when fine-tuning datasets are small, and",
"(b) when fine-tuning on sign languages different from the sign language used for pretraining.",
"The former captures in-language generalization while the latter crosslingual generalization.",
"The INCLUDE dataset contains an average of 17 samples per class.",
"For this setting, we observed a gain of 3.5% with DPC-based pretraining over training from scratch.",
"How does this performance boost change when we have fewer samples per 2121 class?",
"We present results for 10, 5, and 3 samples per class in Table 5.",
"We observe that as the number of labels decreases the performance boost due to pretraining is higher indicating effective in-language generalization.",
"Does the pretraining on Indian sign language provide a performance boost when fine-tuning on other sign languages?",
"We study this for 3 different sign languages American, Chinese, and Argentinian and report results in Table 5.",
"We see that crosslingual transfer is effective leading to gains of about 6%, 4%, and 2% on the three datasets, similar to the 3% gain on in-language accuracy.",
"The increase in accuracy varies with datasets For Argentianian and Indian datasets which already have 90+% accuracy, there are small improvements.",
"However, WLASL which is scraped from web and has a lot more variations, sees a much higher improvement due to pretraining.",
"Further, we also observe that these gains extend to low-resource settings of fewer labels per sign.",
"For instance on Argentinian SL, with 3 labels, pretraining on Indian SL given an improvement of about 18% in accuracy.",
"To the best of our knowledge this is the first successful demonstration of crosslingual transfer in ISLR.",
"In summary, we discussed different pretraining strategies and found that only SL-DPC learns semantically relevant higher-order features.",
"With DPC-based pretraining we demonstrated both in-language and crosslingual transfer.",
"As mentioned in the main paper, we open-source all our contributions through the OpenHands library.",
"This includes the pose-based datasets for 5 SLs, 4 ISLR models trained on 7 datasets, the pretraining corpus on Indian SL with over 1,100 hours of pose data, pretrained models on this corpus using self-supervised learning, and models fine-tuned for 4 different SLs on top of the pretrained model.",
"We also provide scripts for efficient deployment using MediaPipe pose estimation and our trained ISLR models.",
"In addition, the library provides utilities that are helpful specifically for pose-based data.",
"This includes methods to normalize keypoints by width and height of frames, to normalize all of the pose data to be in the same scale and reference coordinate system by using a constant feature of body as reference, and to fill missing keypoints.",
"The library also includes utilities to create data augmentations such as ShearTransform to displace the joints in a random direction, RotatationTransform to simulate the viewpoint changes of the camera, ScaleTrans-form to simulate different scales of the pose data to account for relative zoomed-in or zoomed-out view of signers, PoseRandomShift to move a significant portion of the video by a time offset so as to make the ISLR models robust to inaccurate segmentation of real-time video, UniformTemporalSubsample to uniformly sample frames from the video instead of considering only the initial frames, in cases where the number of frames in a video clip exceeds a maximum limit, and RandomTemporalSubsample to sample a random fixed contiguous window of required size covering a maximum number of frames.",
"We encourage researchers to contribute datasets, models, and other utilities to make sign language research more accessible.",
"All the aspects of the toolkit are well-documented online 3 for anyone to get started easily.",
"In this work, we make several contributions to make sign language research more accessible.",
"We release pose-based datasets and 4 different ISLR models across 6 sign languages.",
"This evaluation enabled us to identify graph-based methods such as ST-GCN as being accurate and efficient.",
"We release the first large corpus of SL data for self-supervised pretraining.",
"We evaluated different pretraining strategies and found DPC as being effective.",
"We also show that pretraining is effective both for in-language and crosslingual transfer.",
"All our models, datasets, training and deployment scripts are open-sourced in OpenHands .",
"Several directions for future work emerge such as evaluating alternative graph-based models, experimenting with varying sequence lengths of input data, efficiently sampling the data from the raw dataset for pretraining such that the samples are diverse enough, using better pose estimator models and more keypoints, and quantized inference for 2 -4 reduced latency.",
"On the library front, we aim to release updated versions incorporating more SL datasets, better graph-based models, studying the performance on low FPS videos (like 2-4 FPS), effect of pretraining using other high-resource SL datasets, extending to CSLR, and improving deployment features.",
"We would like to thank Aravint Annamalai from IIT Madras for preparing the list of potential YouTube channels that can be crawled and for his help in downloading them.",
"We would like to thank the entire AI4Bharat Sign Language Team 4 for their support and feedback for this work, especially from Rohith Gandhi Ganesan for his insights on code structuring, and Advaith Sridhar for managing the overall project.",
"We would also like to extend our immense gratitude to Microsoft's AI4Accessibility program (via Microsoft Philanthropies India) for granting us the compute required to carry out all the experiments in this work, through Microsoft Azure cloud platform.",
"Our extended gratitude also goes to Zenodo, who helped us with hosting our large datasets (NC and Selvaraj, 2021).",
"Finally, we thank all the content creators and ISLR dataset curators without whose data this work would have been impossible."
] | [
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"result",
"abstain",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"result",
"result",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e. is a grammatical sentence more probable than an ungrammatical sentence).",
"Our work uses ambiguous relative clause attachment to extend such evaluations to cases of multiple simultaneous valid interpretations, where stark grammaticality differences are absent.",
"We compare model performance in English and Spanish to show that non-linguistic biases in RNN LMs advantageously overlap with syntactic structure in English but not Spanish.",
"Thus, English models may appear to acquire human-like syntactic preferences, while models trained on Spanish fail to acquire comparable human-like preferences.",
"We conclude by relating these results to broader concerns about the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models), suggesting that necessary linguistic biases are not present in the training signal at all.",
"Language modeling is widely used as pretraining for many tasks involving language processing (Pe-ters et al., 2018; Radford et al., 2018; Devlin et al., 2019).",
"Since such pretraining affects so many tasks, effective evaluations to assess model quality are critical.",
"Researchers in the vein of the present study, typically take (pretrained) language models and ask whether those models have learned some linguistic phenomenon (e.g., subject-verb agreement).",
"Often the task is operationalized as: do the models match some human baseline (e.g., acceptability judgments, reading times, comprehension questions) measured as humans experience this linguistic phenomenon (e.g., comparing acceptability ratings of sentences with grammatical/ungrammatical agreement).",
"This approach tacitly assumes that the necessary linguistic biases are in the training signal and then asks whether the models learn the same abstract representations as humans given this signal.",
"The present study casts doubt on the notion that the necessary linguistic biases are present in the training signal at all.",
"We utilize the, now common, evaluation technique of checking whether a model assigns higher probability to grammatical sentences compared to ungrammatical sentences (Linzen et al., 2016).",
"However, we extend beyond binary grammaticality.",
"Real world applications demand that our models not only know the difference between valid and invalid sentences; they must also be able to correctly prioritize simultaneous valid interpretations (Lau et al., 2017).",
"In this paper, we investigate whether neural networks can in fact prioritize simultaneous interpretations in a human-like way.",
"In particular, we probe the biases of neural networks for ambiguous relative clause (RC) attachments, such as the following: (1) Andrew had dinner yesterday with the nephew of the teacher that was divorced .",
"In (1), there are two nominals ( nephew and teacher ) that are available for modification by the RC ( that was divorced ).",
"We refer to attachment of the RC to the syntactically higher nominal (i.e. the nephew is divorced) as HIGH and attachment to the lower nominal (i.e. the teacher is divorced) as LOW.",
"As both interpretations are equally semantically plausible when no supporting context is given, we might expect that humans choose between HIGH and LOW at chance.",
"However, it has been widely established that English speakers tend to interpret the relative clause as modifying the lower nominal more often than the higher nominal (i.e. they have a LOW bias; 1 Carreiras and Clifton Jr, 1993; Frazier and Clifton, 1996; Carreiras and Clifton, 1999; Fernandez, 2003).",
"LOW bias is actually typologically much rarer than HIGH bias (Brysbaert and Mitchell, 1996).",
"A proto-typical example of a language with HIGH attachment bias is Spanish (see Carreiras and Clifton Jr, 1993; Carreiras and Clifton, 1999; Fernandez, 2003).",
"A growing body of literature has shown that English linguistic structures conveniently overlap with non-linguistic biases in neural language models leading to performance advantages for models of English, without such models being able to learn comparable structures in non-English-like languages (e.g., Dyer et al., 2019).",
"This, coupled with recent work showing that such models have a strong recency bias (Ravfogel et al., 2019), suggests that one of these attachment types (LOW), will be more easily learned.",
"Therefore, the models might appear to perform in a humanlike fashion on English, while failing on the cross-linguistically more common attachment preference (HIGH) found in Spanish.",
"The present study investigates these concerns by first establishing, via a synthetic language experiment, that recurrent neural network (RNN) language models (LMs) are capable of learning either type of attachment (Section 4).",
"However, we then demonstrate that these models consistently exhibit a LOW preference when trained on actual corpus data in multiple languages (English and Spanish; Sections 57).",
"In comparing English and Spanish, we show that non-linguistic biases in RNN LMs overlap with interpretation biases in English to appear as though the models have acquired English syntax, while failing to acquire minimally different interpretation biases in Spanish.",
"Concretely, English attachment preferences favor the most recent nominal, which aligns with a general preference in RNN LMs for attaching to the most recent nominal.",
"In Spanish, this general recency preference in the models remains despite a HIGH attachment interpretation bias in humans.",
"These results raise broader questions regarding the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models) and point to a deeper inability of RNN LMs to learn aspects of linguistic structure from raw text alone.",
"1 We use bias throughout this paper to refer to inter-pretation bias.",
"We will return to the distinction between production bias and interpretation bias in Section 8.",
"Much recent work has probed RNN LMs for their ability to represent syntactic phenomena.",
"In particular, subject-verb agreement has been explored extensively (e.g., Linzen et al., 2016; Bernardy and Lappin, 2017; Enguehard et al., 2017) with results at human level performance in some cases (Gulor-dava et al., 2018).",
"However, additional studies have found that the models are unable to generalize sequential patterns to longer or shorter sequences that share the same abstract constructions (Trask et al., 2018; van Schijndel et al., 2019).",
"This suggests that the learned syntactic representations are very brittle.",
"Despite this brittleness, RNN LMs have been claimed to exhibit human-like behavior when processing garden path constructions (van Schijndel and Linzen, 2018; Futrell and Levy, 2019; Frank and Hoeks, 2019), reflexive pronouns and negative polarity items (Futrell et al., 2018), and center embedding and syntactic islands (Wilcox et al., 2019, 2018).",
"There are some cases, like coordination islands, where RNN behavior is distinctly non-human (see Wilcox et al., 2018), but in general this literature suggests that RNN LMs encode some type of abstract syntactic representation (e.g., Prasad et al., 2019).",
"Thus far though, the linguistic structures used to probe RNN LMs have often been those with unambiguously ungrammatical counterparts.",
"This extends into the domain of semantics, where downstream evaluation platforms like GLUE and SuperGLUE evaluate LMs for correct vs. incorrect interpretations on tasks targeting language understanding (Wang et al., 2018, 2019).",
"Some recent work has relaxed this binary distinction of correct vs. incorrect or grammatical vs. ungrammatical.",
"Lau et al. (2017) correlate acceptability scores generated from a LM to average human acceptability ratings, suggesting that human-like gradient syntactic knowledge can be captured by such models.",
"Futrell and Levy (2019) also look at gradient acceptability in both RNN LMs and humans, by focusing on alternations of syntactic constituency order (e.g., heavy NP shift, dative al-ternation).",
"Their results suggest that RNN LMs acquire soft constraints on word ordering, like humans.",
"However, the alternations in Futrell and Levy, while varying in their degree of acceptability, maintain the same syntactic relations throughout the alternation (e.g., gave a book to Tom and gave Tom a book both preserve the fact that Tom is the indirect object).",
"Our work expands this line of research by probing how RNN LMs behave when multiple valid interpretations, with crucially different syntactic relations, are available within a single sentence.",
"We find that RNN LMs do not resolve such ambiguity in a human-like way.",
"There are, of course, a number of other modeling approaches that exist in the current literature; the most notable of these being BERT (Devlin et al., 2019).",
"These transformer models have achieved high performance on a variety of natural language processing tasks, however, there are a number of properties that make them less suitable to this work.",
"One immediate consideration is that of training.",
"We are interested in the behavior of a class of models, so we analyze the behavior of several randomly initialized models.",
"We do not know how representative BERT is of models of its same class, and training more BERT variants is immensely time consuming and environmentally detrimental (Strubell et al., 2019).",
"Additionally, we are interested in probability distributions over individual words given the preceding context, something that is not part of BERT's training as it takes whole sentences as input.",
"Finally, the bidirectional nature of many of these models makes their representations difficult to compare to humans.",
"For these reasons, we restrict our analyses to unidirectional RNN LMs.",
"This necessarily reduces the generalizability of our claims.",
"However, we still believe this work has broader implications for probing what aspects of linguistic representations neural networks can acquire using standard training data.",
"In the present study, we compare the attachment preferences of RNN LMs to those established in Fernandez (2003).",
"Fernandez demonstrated that humans have consistent RC attachment biases using both self-paced reading and offline comprehension questions.",
"They tested both English and Spanish monolinguals (along with bilinguals) using parallel stimuli across the two languages, which we adopt in the experiments in this paper.",
"2 Specifically, Fernandez (2003) included 24 items per language, 12 with a singular RC verb ( was ) and 12 with a plural RC verb ( were ).",
"The English and 2 All experimental stimuli and models used are available at https://github.com/forrestdavis/ AmbiAttach Spanish stimuli are translations of each other, so they stand as minimal pairs for attachment preferences.",
"Example stimuli are given below.",
"a. Andrew had dinner yesterday with the nephew of the teachers that was divorced.",
"b. Andrew had dinner yesterday with the nephews of the teacher that was divorced.",
"c. Andre ceno ayer con el sobrino de los maestros que estaba divorciado.",
"d. Andre ceno ayer con los sobrinos del maestro que estaba divorciado.",
"The underlined nominal above marks the attachment point of the relative clause ( that was divorced ).",
"(2-a) and (2-c) exhibit HIGH attachment, while (2-b) and (2-d) exhibit LOW attachment.",
"Fernandez found that English speakers had a LOW bias, preferring (2-b) over (2-a), while Spanish speakers had a HIGH bias, preferring (2-c) over (2-d).",
"We ran two experiments per language, 3 one a direct simulation of the experiment from Fernandez (2003) and the other an extension (EXTENDEDDATA ), using a larger set of experimental stimuli.",
"The direct simulation allowed us to compare the attachment preferences for RNN LMs to the experimental results for humans.",
"The extension allowed us to confirm that any attachment preferences we observed were generalizable properties of these models.",
"Specifically, the EXTENDEDDATA set of stimuli included the English and Spanish stimuli from Carreiras and Clifton Jr (1993) in addition to the stimuli from Fernandez (2003), for a total of 40 sentences.",
"Next, we assigned part-of-speech tags to the English and Spanish LM training data using TreeTagger (Schmid, 1999).",
"We filtered the tokens to the top 40 most frequent plural nouns, generating the singular forms from TreeTagger's lemmatization.",
"We then substituted into the test sentences all combinations of distinct nouns excluding reflexives.",
"Then we appended a relative clause with either a singular or plural verb ( was/were or 3 The vocabulary of the models was constrained to the 50K most frequent words during training.",
"Out-of-vocabulary nominals in the original stimuli were replaced with semantically similar nominals.",
"In English, lid(s) to cover(s) and refill(s) to filler(s).",
"In Spanish, sarc ofago(s) to ata ud(es), recambio(s) to sustituci on(es), fregadero(s) to lavabo(s), ba ul(es) to caja(s), cacerola(s) to platillo(s), and bol grafo(s) to pluma(s) estaba/estaban ).",
"4 Finally, each test stimulus in a pair had a LOW and HIGH attachment version for a total of 249600 sentences.",
"An example of four sentences generated for English given the two nouns building and system is below.",
"(3)",
"a. Everybody ignored the system of the buildings that was",
"b. Everybody ignored the systems of the building that was",
"c. Everybody ignored the system of the buildings that were",
"d. Everybody ignored the systems of the building that were Not all combinations are semantically coherent; however, Gulordava et al. suggest that syntactic operations (e.g., subject-verb agreement) are still possible for RNN LMs with completely meaning-less sentences (Gulordava et al., 2018, p. 2).",
"We analyzed long short-term memory networks (LSTMs; Hochreiter and Schmidhuber, 1997) throughout the present paper.",
"For English, we used the English Wikipedia training data provided by Gulordava et al. (2018).",
"5 For Spanish, we constructed a comparable training corpus from Spanish Wikipedia following the process used by Gulordava et al. (2018).",
"A recent dump of Spanish Wikipedia was downloaded, raw text was extracted using WikiExtractor, 6 and tokenization was done using TreeTagger.",
"A 100-million word subset of the data was extracted, shuffled by sentence, and split into training (80%) and validation (10%) sets.",
"7 For LM training, we included the 50K most frequent words in the vocabulary, replacing the other tokens with (cid:104) UNK (cid:105) '.",
"We used the best English model in Gulordava et al. (2018) and trained 4 additional models with the same architecture 8 but different random initializations.",
"There was no established Spanish model architecture, so we took the best Romance model 4 Since the unidirectional models are tested at the RC verb, we did not need to generate the rest of the sentence after that verb.",
"5 https://github.com/facebookresearch/ colorlessgreenRNNs 6 https://github.com/attardi/ wikiextractor 7 We also created a test partition (10% of our data), which we did not use in this work.",
"8 The models had 2 layers, 650 hidden/embedding units, batch size 128, dropout 0.2, and an initial learning rate of 20.",
"architecture 9 reported in Gulordava et al. (2018) and trained 5 models.",
"All models used in this work were trained for 40 epochs with resultant mean validation perplexities and standard deviations in Table 1.",
"We evaluated the RNN LMs using information-theoretic surprisal (Shannon, 1948; Hale, 2001; Levy, 2008).",
"Surprisal is defined as the inverse log probability assigned to each word ( w i ) in a sentence given the preceding context.",
"The probability is calculated by applying the softmax function to an RNN's output layer.",
"Surprisal has been correlated with human processing difficulty (Smith and Levy, 2013; Frank et al., 2015) allowing us to compare model behavior to human behavior.",
"Each of the experiments done in this work looked at sentences that differed in the grammatical number of the nominals, repeated from Section 3.1 below.",
"(4)",
"a. Andrew had dinner yesterday with the nephew of the teachers that was divorced.",
"b. Andrew had dinner yesterday with the nephews of the teacher that was divorced.",
"(from Fernandez, 2003)",
"In (4-a) the RC verb ( was ) agrees with the HIGH nominal, while in (4-b) it agrees with the LOW nominal.",
"As such, this minimal pair probes the interpretation bias induced by the relativizer ( that ).",
"We measure the surprisal of the RC verb ( was ) in both sentences of the pair.",
"If the model has a preference for LOW attachment, then we expect that the surprisal will be smaller when the number 9 They focused on Italian as a Romance language.",
"The models are the same as English except the batch size is 64.",
"of the final noun agrees with the number of the RC verb (e.g., surprisal (4-b) < surprisal (4-a)).",
"Concretely, for each such pair we take the difference in surprisal of the RC verb in the case of HIGH attachment (4-a) from the surprisal of the RC verb in the case of LOW attachment (4-b).",
"If this difference (surprisal (4-a) surprisal (4-b)) is positive, then the LM has a LOW bias, and if the difference is negative, the LM has a HIGH bias.",
"We begin with a proof of concept.",
"It has been noted that RNN LMs have a strong recency bias (Rav-fogel et al., 2019).",
"As such, it could be possible that only one type of attachment, namely LOW attachment, is learnable.",
"To investigate this possibility, we followed the methodology in McCoy et al. (2018) and constructed a synthetic language to control the distribution of RC attachment in two experiments.",
"Our first experiment targeted the question: if all RC attachment is HIGH, how many RCs have to be observed in training in order for a HIGH bias to generalize to unseen data?",
"Our second experiment targeted the question: what proportion of HIGH and LOW attachment is needed in training to learn a bias?",
"Our synthetic language had RC attachment sentences and filler declarative sentences.",
"The filler sentences follow the phrase structure template given in (5-a), while RC attachment sentences follow the phrase structure template given in (5-b).",
"Material in parentheses was optional and so was not present in all filler stimuli.",
"That is to say, all filler sentences had a subject (abbreviated D N) and a verb (abbreviated V), with the verb being optionally transitive and followed by a direct object (D N).",
"The subject, object, or both could be modified by a prepositional phrase (P D N).",
"The subject and object could be either singular or plural, with the optional auxiliary (Aux) agreeing in number with the subject.",
"There were 30 nouns (N; 60 with plural forms), 2 auxiliaries (Aux; was/were and has/had ), 1 determiner (D; the ), 14 verbs (V), and 4 prepositions (P).",
"An example filler sentence is given in (6-a), and an example RC sentence is given in (6-b).",
"(6)",
"a. The nephew near the children was seen by the players next to the lawyer.",
"b. The gymnast has met the hostage of the women that was eating.",
"We trained RNN LMs on our synthetic language using the same parameters as the English LMs given in Section 3.2, with 120,000 unique sentences in the training corpus.",
"The resultant RNN LMs were tested on 300 sentences with ambiguous RC attachment, and we measured the surprisal at the RC auxiliary verb ( was/were ), following the methodology given in Section 3.3.",
"To determine how many HIGH RCs were needed in training to learn a HIGH bias, we first constrained all the RC attachment in the training data to HIGH attachment.",
"Then, we varied the proportion (in increments of 10 RC sentences at a time) of RC sentences to filler sentences during training.",
"We trained 5 RNNs for each training configuration (i.e. each proportion of RCs).",
"This experiment provided a lower bound on the number of HIGH RCs needed in the training data to overcome any RNN recency bias when all RCs exhibited HIGH attachment.",
"When as little as 0.017% (20 sentences) of the data contained RCs with HIGH attachment, the test difference in surprisal between HIGH and LOW attachment significantly differed from zero ( p < 10 5 , BayesFactor (BF) > 100 ), 10 with a mean difference less than zero ( = 2 . 24 ).",
"These results indicate that the models were able to acquire a HIGH bias with only 20/120000 examples of HIGH RC attachment.",
"In practice, we would like LMs to learn a preference even when the training data contains a mixture of HIGH and LOW attachment.",
"To determine the proportion of RCs that must be HIGH to learn a HIGH bias, we fixed 10% of the training data as unambiguous RC attachment.",
"Within that 10%, we varied the proportion of HIGH and LOW attachment in 10% increments (i.e. 0% HIGH 100% LOW, 10% HIGH 90% LOW, etc).",
"Once again, we trained 5 models on each training configuration and tested those models on 300 test sentences, measuring the surprisal at the RC verb.",
"When 10 To correct for multiple comparisons, a Bonferroni correction with m = 6 was used.",
"Thus, the threshold for statistical significance was p = 0 .",
"0083 .",
"We also computed two-sample Bayes Factors (BF; Rouder et al., 2009) for each statistical analysis using ttestBF from the BayesFactor R package (Morey and Rouder, 2018).",
"A Bayes Factor greater than 10 is significant evidence for the hypothesis, while one greater than 100 is highly significant.",
"the training data had 50-100% HIGH attachment, the models preferred HIGH attachment in all the test sentences.",
"Conversely, when the training data had 0-40% HIGH attachment, the models preferred LOW attachment in all test sentences.",
"Taken together, the results from our synthetic language experiments suggest that HIGH attachment is indeed learnable by RNN LMs.",
"In fact, an equal proportion of HIGH and LOW attachment in the training data is all that is needed for these models to acquire a general preference for HIGH attachment (contra to the recency bias reported in the literature).",
"We turn now to model attachment preferences in English.",
"We trained the models using English Wikipedia.",
"We tested the attachment preferences of the RNN LMs using the original stimuli from Fernandez (2003), and using a larger set of stimuli to have a better sense of model behavior on a wider range of stimuli.",
"For space considerations, we only report here results of the EXTENDEDDATA (the larger set of stimuli), but similar results hold for the Fernandez (2003) stimuli (see Supplemental Materials).",
"In order to compare the model results with the mean human interpretation results reported by Fernandez (2003), we categorically coded the model response to each item for HIGH/LOW attachment preference.",
"If model surprisal for LOW attachment was less than model surprisal for HIGH attachment, the attachment was coded as LOW.",
"See Figure 1 for the comparison between RNNs and humans in English.",
"Statistical robustness for our RNN results was determined using the original distribution of surprisal values.",
"Specifically, a two-tailed t-test was conducted to see if the mean difference in surprisal differed from zero (i.e. the model has some attachment bias).",
"This revealed a highly significant ( p < 10 5 , BF > 100 ) mean difference in surprisal of 0.77.",
"This positive difference indicates that the RNN LMs have a consistent LOW bias, similar to English readers, across models trained with differing random seeds.",
"There are two possible reasons for this patterning: (1) the models have learned a human-like LOW bias, or (2) the models have a recency bias that favors attachment to the lower nominal.",
"These two hypotheses have overlapping predictions in Figure 1: Proportion HIGH vs LOW attachment in English.",
"English.",
"The second hypothesis is perhaps weakened by the results of Section 4, where both attachment types were learnable despite any recency bias.",
"However, we know that other syntactic attachment biases can influence RC attachment in humans (Scheepers, 2003).",
"It could be that other kinds of attachment (such as prepositional phrase attachment) have varying proportions of attachment biases in the training data.",
"Perhaps conflicting attachment biases across multiple constructions force the model to resort to the use of a default' recency bias in cases of ambiguity.",
"To determine whether the behavior of the RNNs is driven by a learned attachment preference or a strong recency bias, we created stimuli 11 using the stimulus template described in Section 3.1 (e.g., (3)).",
"All of these stimuli had only the higher nominal syntactically available for attachment; the lower nominal was blocked by the addition of a relative clause: (7)",
"a. Everybody ignored the boy that the girls hated that was boring.",
"b. *Everybody ignored the boys that the girl hated that was boring.",
"In (7) only (7-a) is grammatical.",
"This follows because boy(s) is the only nominal available for mod-11 As before, some of these stimuli are infelicitous.",
"We do not concern ourselves with this distinction in the present work, given the results in Gulordava et al. (2018).",
"ification.",
"In (7-a), the RC verb was agrees in number with this nominal, while in (7-b), was agrees in number with the now blocked lower nominal girl rather than with boys .",
"For all such sentence pairs, we calculated the difference in surprisal between (7-a) and (7-b).",
"If their behavior is driven by a legitimate syntactic attachment preference, the models should exhibit an overwhelming HIGH bias (i.e. the mean difference should be less than zero).",
"As before, the differences in surprisal were calculated for each pair of experimental items.",
"If the difference was greater than zero, the attachment was coded as LOW.",
"The results categorically coded for HIGH/LOW attachment are given in Figure 2, including the results expected for humans given the pattern in Linzen and Leonard (2018).",
"12 A two-tailed t-test was conducted to see if the mean difference in surprisal differed from zero.",
"The results were statistically significant ( p < 10 5 , BF > 100 ).",
"The mean difference in surprisal was 1.15, however, suggesting that the models still had a LOW bias when the lower nominal was syntactically unavailable for attachment.",
"This is in stark contrast to what one would expect if these models had learned the relationship between syntactic constituents and relative clause attachment.",
"A possible 12 Linzen and Leonard (2018) conducted experiments probing the agreement errors for subject-verb agreement with intervening RCs (and prepositional phrases).",
"Our work is concerned with agreement between an object and its modifying RC.",
"As such, their task serves as an approximate estimate of the errors we would expect for humans.",
"alternative to the recency bias explanation is that RNN LMs might learn that there is a general LOW attachment bias in English and overgeneralize this pattern even in cases where one of the nominals is syntactically unavailable.",
"Our English analyses suggest that RNN LMs either learn a general English LOW attachment preference that they apply in all contexts, or that they have a default' recency bias that prevents them from learning HIGH attachment preferences with more complex, naturalistic training data.",
"In the case of the former, we would expect that models trained on a language whose speakers generally prefer HIGH attachment should be able to learn HIGH attachment.",
"Spanish has a well-attested HIGH bias in humans (Carreiras and Clifton Jr, 1993; Carreiras and Clifton, 1999; Fernandez, 2003) offering a way to distinguish between competing recency bias and over-generalization accounts.",
"That is, if the models can learn a HIGH bias when trained on Spanish data, we should be able to conclude that the general LOW bias in English is being overgeneralized by the RNNs to corner cases where HIGH bias should be preferred.",
"As before, the differences in surprisal were calculated for each pair of experimental items.",
"If the difference was greater than zero, the attachment was coded as LOW.",
"Two sample t-tests were conducted to see if the mean difference in surprisal differed significantly from zero for both the direct simulation of Fernandez (2003) and the EXTENDEDDATA that included the stimuli derived from Carreiras and Clifton Jr (1993).",
"The results categorically coded for HIGH/LOW attachment for the extended stimulus set are given in Figure 3, alongside the human results reported in Fernandez (2003).",
"For the direct simulation, the mean did not differ significantly from 0 (BF < 1 / 3 ).",
"This suggests that there is no attachment bias for the Spanish models for the stimuli from Fernandez (2003), contrary to the human results.",
"For the extended set of stimuli, the results were significant ( p < 10 5 , BF > 100 ) with a mean difference greater than zero ( = 0 . 211 ).",
"Thus, rather than a HIGH bias, as we would expect, the RNN LMs once again had a LOW bias.",
"In this work, we explored the ability of RNN LMs to prioritize multiple simultaneous valid interpretations in a human-like way (as in John met the student of the teacher that was happy ).",
"While both LOW attachment (i.e. the teacher was happy ) and HIGH attachment (i.e. the student was happy ) are equally semantically plausible without a disambiguating context, humans have interpretation preferences for one attachment over the other (e.g., English speakers prefer LOW attachment and Spanish speakers prefer HIGH attachment).",
"Given the recent body of literature suggesting that RNN LMs have learned abstract syntactic representations, we tested the hypothesis that these models acquire human-like attachment preferences.",
"We found that they do not.",
"We first used a synthetic language experiment to demonstrate that RNN LMs are capable of learning a HIGH bias when HIGH attachment is at least as frequent as LOW attachment in the training data.",
"These results suggest that any recency bias in RNN LMs is weak enough to be easily overcome by sufficient evidence of HIGH attachment.",
"In English, the RNNs exhibited a human-like LOW bias, but this preference persisted even in cases where LOW attachment was ungrammatical.",
"To test whether the RNNs were over-learning a general LOW bias of English, we tested whether Spanish RNNs learned the general HIGH bias in that language.",
"Once again, RNN LMs favored LOW attachment over HIGH attachment.",
"The inability of RNN LMs to learn the Spanish HIGH attachment preference suggests that the Spanish data may not contain enough HIGH examples to learn human-like attachment preferences.",
"In post-hoc analyses of the Spanish Wikipedia training corpus and the AnCora Spanish newswire corpus (Taule et al., 2008), we find a consistent production bias towards LOW attachment among the RCs with unambiguous attachment.",
"In Spanish Wikipedia, LOW attachment is 69% more frequent than HIGH attachment, and in Spanish newswire data, LOW attachment is 21% more frequent than HIGH attachment.",
"13 This distributional bias in favor of LOW attachment does not rule out a subsequent HIGH RC bias in the models.",
"It has been established in the psycholinguistic literature that attachment is learned by humans as a general abstract feature of language (see Scheepers, 2003).",
"In other words, human syntactic representations of attachment overlap, with prepositional attachment influencing relative clause attachment, etc.",
"These relationships could coalesce during training and result in an attachment preference that differs from any one structure individually.",
"However, it is clear that whatever attachment biases exist in the data are insufficient for RNNs to learn a human-like attachment preference in Spanish.",
"This provides compelling evidence that standard training data itself may systematically lack aspects of syntax relevant to performing linguistic comprehension tasks.",
"We suspect that there are deep systematic issues leading to this mismatch between the expected distribution of human attachment preferences and the actual distribution of attachment in the Spanish training corpus.",
"Experimental findings from psycholinguistics suggest that this issue could follow from a more general mismatch between language production and language comprehension.",
"In particular, Kehler and Rohde (2015, 2018) have provided empirical evidence that the production and comprehension of these structures are guided by different biases in humans.",
"Production is guided by syntactic and information structural considerations (e.g., topic), while comprehension is influenced by those considerations plus pragmatic and discourse factors (e.g., coherence relations).",
"As such, the biases in language production are a proper subset of those of language comprehension.",
"As it stands now, RNN LMs are typically trained on production data 13 https://github.com/ UniversalDependencies/UD_Spanish-AnCora (that is, the produced text in Wikipedia).",
"14 Thus, they will have access to only a subset of the biases needed to learn human-like attachment preferences.",
"In its strongest form, this hypothesis suggests that no amount of production data (i.e. raw text) will ever be sufficient for these models to generalizably pattern like humans during comprehension tasks.",
"The mismatch between human interpretation biases and production biases suggested by this work invalidates the tacit assumption in much of the natural language processing literature that standard, production-based training data (e.g., web text) are representative of the linguistic biases needed for natural language understanding and generation.",
"There are phenomena, like agreement, that seem to have robust manifestations in a production signal, but the present work demonstrates that there are others, like attachment preferences, that do not.",
"We speculate that the difference may lie in the inherent ambiguity in attachment, while agreement explicitly disambiguates a relation between two syntactic units.",
"This discrepancy is likely the reason that simply adding more data doesn't improve model quality (e.g., van Schijndel et al., 2019; Bisk et al., 2020).",
"Future work needs to be done to understand more fully what biases are present in the data and learned by language models.",
"Although our work raises questions about mismatches between human syntactic knowledge and the linguistic representations acquired by neural language models, it also shows that researchers can fruitfully use sentences with multiple interpretations to probe the linguistic representations acquired by those models.",
"Before now, evaluations have focused on cases of unambiguous grammaticality (i.e. ungrammatical vs. grammatical).",
"By using stimuli with multiple simultaneous valid interpretations, we found that evaluating models on single-interpretation sentences overestimates their ability to comprehend abstract syntax.",
"We would like to thank members of the NLP group and the C.Psyd lab at Cornell University, and the Altmann and Yee labs at University of Connecticut, who gave feedback on an earlier form of this work.",
"We would also like to thank the three anonymous reviewers and Yonatan Belinkov.",
"Special thanks go 14 Some limited work has explored training models with human comprehension data with positive results (Klerke et al., 2016; Barrett et al., 2018).",
"to Dorit Abusch and John Whitman for invaluable suggestions and feedback, and Laure Thompson for comments on an earlier draft."
] | [
"abstain",
"objective",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"method",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other"
] |
[
"The retriever-reader framework is popular for open-domain question answering (ODQA) due to its ability to use explicit knowledge.",
"Although prior work has sought to increase the knowledge coverage by incorporating structured knowledge beyond text, accessing heterogeneous knowledge sources through a unified interface remains an open question.",
"While data-to-text generation has the potential to serve as a universal interface for data and text, its feasibility for downstream tasks remains largely unknown.",
"In this work, we bridge this gap and use the data-to-text method as a means for encoding structured knowledge for ODQA.",
"Specifically, we propose a verbalizer-retriever-reader framework for ODQA over data and text where verbalized tables from Wikipedia and graphs from Wikidata are used as augmented knowledge sources.",
"We show that our U nified D ata and T ext QA , UDT-QA , can effectively benefit from the expanded knowledge index, leading to large gains over text-only baselines.",
"Notably, our approach sets the single-model state-of-the-art on Natural Questions.",
"Furthermore, our analyses indicate that verbalized knowledge is preferred for answer reasoning for both adapted and hot-swap settings.",
"Pretrained language models (Devlin et al., 2019; Brown et al., 2020) have been shown to store certain knowledge (linguistic or factual) implicitly in parameters (Manning et al., 2020; Petroni et al., 2019; Roberts et al., 2020), partially explaining the superior generalization abilities over downstream tasks.",
"However, besides the well-known hallucination issue, the implicit knowledge learned through language modeling objective over text struggles at reflecting up-to-date knowledge from text and structured data for answering open-domain questions.",
"To overcome this, recent work on open domain question answering (ODQA) focuses on the semi-parametric method (Karpukhin et al., 2020; Guu et al., 2020) where the pretrained language models can leverage external explicit knowledge sources for reasoning.",
"For example, in the retriever-reader framework (Min et al., 2021, inter alia ), the reader produces answers by grounding on the relevant evidence from the retriever, the interface to the explicit knowledge source (Wikipedia text passages).",
"In this work, we focus on the semi-parametric approach for ODQA going beyond textual knowledge.",
"Specifically, we are interested in the question: Can we develop a viable unified interface over a realistic heterogeneous knowledge source containing both data and text?",
"Recent retriever-reader models (Oguz et al., 2020; Agarwal et al., 2021) have demonstrated that expanding the textual knowledge source with more structured data is beneficial.",
"However, only knowledge base (KB) is considered in (Agarwal et al., 2021), limiting the applicability of their method to other structured data.",
"In (Oguz et al., 2020), both tables and KB triples are simply linearized as inputs to the reader, but different retrievers are required for individual cases.",
"Here, we propose a verbalizer-retriever-reader semi-parametric framework, UDT-QA , which provides a unification of both representation and model for ODQA over data and text.",
"The key idea is to augment the retriever with a data-to-text verbalizer for accessing heterogeneous knowledge sources, i.e. KB graphs from WikiData, tables and passages from Wikipedia.",
"Given its potential in providing a universal interface for data and text, data-to-text generation is increasingly popular (Gardent et al., 2017; Parikh et al., 2020; Nan et al., 2021) with various methods developed recently for converting structured knowledge into natural language (Wang et al., 2020; Ribeiro et al., 2020; Chen et al., 2020b).",
"Nevertheless, most existing work has focused on intrinsic 1605 evaluations exclusively, i.e. the quality of generated text measured by metrics like BLEU (Papineni et al., 2002), leaving its usefulness on downstream tasks largely unknown.",
"Moreover, it remains unclear whether a single data-to-text model is able to verbalize heterogeneous structured data effectively.",
"To bridge the gap, we develop a novel data-to-text generation paradigm for our framework.",
"We introduce data filtering and beam selection to maximize the faithful coverage of the input information.",
"To remedy the lack of in-domain data, we further propose an iterative training approach to augment the existing data-to-text training set with high quality outputs selected from the target domain.",
"With this verbalizer, we convert all tables from Wikipedia (10x more than (Oguz et al., 2020)) and sub-graphs from Wikidata together with Wikipedia text passages as the knowledge source for ODQA.",
"We first validate our data-to-text method using intrinsic metrics on DART (Nan et al., 2021) and additional faithfulness evaluation on the target ODQA data.",
"We show that our data-to-text approach can effectively improve the target-domain faithful metric without compromising too much on the intrinsic metrics.",
"To further evaluate the end-to-end effectiveness, we experiment with UDT-QA on the ODQA task using a recent state-of-the-art (SOTA) retriever-reader pipeline, including DPR (Karpukhin et al., 2020) and UnitedQA (Cheng et al., 2021b).",
"Consistent with previous work, our results also suggest that extra knowledge source is beneficial for ODQA.",
"Notably, we find that the verbalized knowledge is favored by the reader compared to the raw format (linearization), especially when the structured data size is comparable to text, leading to more pronounced improvements.",
"Overall, UDT-QA shows large improvements over text-only baselines and performs competitively with more complicated methods on both Natural Questions (NQ) (Kwiatkowski et al., 2019) and WebQuestions (WebQ) (Berant et al., 2013).",
"In particular, UDT-QA achieves new SOTA on NQ under the single-model open-book setting.",
"1 2 Overview of UDT-QA In this section, we present the overall pipeline of our UDT-QA framework for ODQA over data and text (Figure 1).",
"The major difference between our approach and the popular retriever-reader ODQA 1 Data and code available at https://github.com/ Mayer123/UDT-QA systems (Min et al., 2021, inter alia ) is the use of a data-to-text verbalizer (3) for converting structured data into natural language text, i.e. virtual documents, as the universal knowledge source.",
"Here, we consider two types of structured knowledge (4.2) tables and KB sub-graphs.",
"After verbalizing the structured knowledge, a subsequent pipeline consisting of a DPR retriever and a UnitedQA-E reader is used for answer inference.",
"Since the retriever and reader are not the main focus of this work, we only briefly describe them below.",
"The DPR retriever (Karpukhin et al., 2020) is a bi-encoder model consisting of a question encoder and a context encoder, which is used for data and text retrieval.",
"Following previous work (Karpukhin et al., 2020; Oguz et al., 2020), we use the uncased BERT-base (Devlin et al., 2019) model as the encoder, where the [CLS] token representation is used as the document/question vector.",
"During training, positive and negative pairs of (question, context) are used to update the model.",
"For inference, the entire document index is encoded with context encoder and the encoded question vector is used to retrieve the top documents with highest dot-product scores.",
"The UnitedQA-E (Cheng et al., 2021b) is an extractive reader based on ELECTRA (Clark et al., 2020) trained with enhanced objectives (Cheng et al., 2021a, 2020) for answer inference.",
"Here, a pair of a question and a support passage is jointly encoded into neural text representations.",
"These representations are used to compute scores of possible answer begin and end positions, which are then used to compute probabilities over possible answer spans.",
"Finally, the answer string probabilities are computed based on the aggregation over all possible answer spans from the entire set of support passages.",
"Here, we formally describe the data-to-text model developed in this paper, including the input format (3.1) and the adaptation for ODQA (3.2).",
"Given a structured data input D , the data-to-text generator G aims to generate a natural language passage P that faithfully describes the information presented in D .",
"In the literature, the structured data input can be in the form of a set of triples (Nan et al., 2021), a few highlighted cells from 1606 Figure 1: An overview of UDT-QA based on the verbalizer-retriever-reader pipeline.",
"a table (Parikh et al., 2020) or a full table (Chen et al., 2020a).",
"Correspondingly, P could a simple surface-form verbalization of D ( e.g. when D is a triple set) or a high-level summarization in case of a full table or a large KB graph.",
"Since we consider (noisy) tables/KB sub-graphs of arbitrary size in this paper, directly feeding the entire input into the generator is not feasible, likely incurring signifi-cant computation challenges.",
"Moreover, it is also desirable to maximize the information coverage of P so that most relevant information in D can be leveraged by the downstream QA retriever and reader.",
"Based on this, we verbalize both tables and KB graphs at a fine-grained level.",
"In this work, we verbalize tables row by row, i.e. input each table row to G individually, where each row is a set of cells r = { c i } ki =1 , and k is the number of cells in the corresponding row.",
"Most relevant to our setting, recent work (Nan et al., 2021) represents each cell in a triple.",
"To form such triples, they manually annotate the tree ontology of column headers and then create triples using table title, headers, cell value and header relations, e.g. ( [TABLECONTEXT], [title], LeBron James ), ( LeBron James, League, NBA ) where LeBron James is the parent cell.",
"Although such triples with fine-grained ordering may help guide the generator, directly applying such a generator to a target domain with no ontology annotation (our case) likely results in degradation.",
"To overcome this, we propose to convert the triple set to pairs, e.g. ( [title], LeBron James ), ( League, NBA ).",
"We find such conversion has little impact on the intrinsic evaluation (5).",
"After all rows are verbalized, we assemble the text outputs back to form the verbalized table.",
"For KB, we follow previous work (Agarwal et al., 2021) and break the KB into small sub-graphs based on subject entity.",
"Here, each sub-graph contains one central entity and its neighbors.",
"Although this conversion would inevitably create undesirable artifacts ( e.g. hurdles for multi-hop reasoning across sub-graphs), this preprocessing allows us to unify the input representations for both table and KB graphs, making it possible for a single verbalizer to convert structured knowledge into text format.",
"Specifically, we convert all KB sub-graphs into the same format as table cell sets above, where the subject entity is treated as the title and all the edges are represented using pairs in the form of ( relation , object ).",
"Then we verbalize each sub-graph with the generator G .",
"Examples of input and output for table rows and KB sub-graphs are shown in Figure",
"1. 3.2 Improved Data-to-Text Model Training A known problem in data-to-text generation is that the model tends to hallucinate or neglect information in the input (Wang et al., 2020; Agarwal et al., 2021).",
"Faithfulness and information coverage is especially important when we apply the verbalized output to knowledge-intensive downstream tasks like ODQA.",
"To address this, we subsample training data T such that the instances are filtered out if they are likely to steer model towards missing information.",
"In particular, we compute ROUGE-1 (Lin, 2004) scores between the input and target of training instances and filter out those whose scores are below a certain threshold.",
"We denote the filtered version as T-F .",
"Examples of the filtered instances can be found in Table 11, as we discuss more in Appendix F, these instances may bias the model towards unwanted behaviors.",
"1607 Another challenge we face is that most data-to-text training examples have succinct structured inputs.",
"In other words, the cells in the structured input are usually single words or short phrases with corresponding short target sentences as well.",
"In our case, a number of tables contain large cells with dozens of words.",
"Models trained with existing data likely have a hard time verbalizing such inputs faithfully.",
"To alleviate this domain-mismatch issue, we propose an iterative training set-up.",
"In the first iteration, we train a generator on T-F .",
"Then we apply the generator to our data.",
"We then find high quality verbalized outputs based on the ROUGE-1 score between the model inputs and model outputs, and sample instances with score higher than a threshold for the next-round training.",
"We sample instances up to the same size of T-F , and denote this set as ID-T (examples shown in Table 11).",
"Finally, we mix the ID-T with T-F and train a second generator for verbalization.",
"Following recent work (Nan et al., 2021), we use the pretrained T5-Large (Raffel et al., 2020) model as our generator.",
"Given paired training examples consisting of a structured data input and a target sentence, we finetune the T5 model to maximize the log-likelihood of generating the corresponding target sentences.",
"Here, we follow the same experimental setup as (Ribeiro et al., 2020).",
"In this section, we describe the data used for experiments and sources of structured knowledge.",
"In this paper, we use DART (Nan et al., 2021) to train our verbalizer (data-to-text) and two ODQA datasets, NQ and WebQ, to train and evaluate our pipeline, with the same split as in (Lee et al., 2019) provided by (Karpukhin et al., 2020).",
"Below we provide a brief description of each dataset and refer readers to their papers for details.",
"DART is a data-to-text dataset containing pairs of (triple-set, sentences) collected from WebNLG (Gardent et al., 2017), E2E (Novikova et al., 2017) and crowdsourcing based on tables found in Wik-iSQL (Zhong et al., 2017) and WikiTableQuestions (Pasupat and Liang, 2015).",
"We collect knowledge-answerable questions from NQ and WebQ in order to evaluate our verbalizer and construct the retrieval training data.",
"Specifically, we find questions in the original NQ training set that can be answered by a table.",
"For each question, we search through tables in its associated HTML page to locate exact answer matches.",
"In total, we collected 14,164 triples of (question, answer, gold table) from NQ train and dev sets as NQ-table-Q .",
"On WebQ, we find questions that can be answered by KB via expanding from question entities and search for their 1-hop neighbors.",
"If an answer entity is matched, we keep this sub-graph.",
"In total, we collected 2,397 triples of (question, answer, sub-graph) from WebQ train and dev set as WebQ-KB-Q .",
"In addition to regular Wikipedia text passages, we consider two types of structured knowledge tables",
"tables from Wikipedia and KB graphs from Wikidata.",
"For tables from Wikipedia, we follow OTT-QA (Chen et al., 2021b) with slight modifica-tions.",
"Chen et al. (2021b) only consider tables in good format, i.e. tables with no empty cell, multi-column or multi-row, and restrict the tables to have at most 20 rows or columns.",
"Instead, we remove such constraints and keep everything with the <table> tag, resulting in a larger and noisier table set.",
"We denote this more realistic set of tables as OTT-tables .",
"Note Oguz et al. (2020) only consider tables from the original NQ HTMLs.",
"In addition to the size difference, OTT-tables are crawled from a more recent Wikipedia dump than the NQ version.",
"To study the impact of knowledge source size, we also process tables from the NQ HTML pages with the heuristic suggested by (Herzig et al., 2021) to de-duplicate tables and filter lengthy cells (>80 words).",
"We denote this set of tables as NQ-tables .",
"To avoid overlap, we remove tables from OTT-tables whose page title are in NQ-tables set.",
"In total, we have a All-tables set with 2.2M tables from OTT-tables and 210K tables from NQ-tables , respectively.",
"For KB graphs, we consider using the English Wikidata (Vrandecic and Krtzsch, 2014) as our KB due to its broad coverage and high quality, not-1608 Intrinsic Eval Extrinsic Eval Training Set # Examples BLEU METEOR TER MoverScore BERTScore BLEURT Ans Cov DART (Nan et al., 2021) 62,659 50.66 0.40 0.43 0.54 0.95 0.44 DART ours ( T ) 62,628 51.05 0.40 0.43 0.54 0.95 0.43 95.4 DART ( T-F ) 55,115 51.04 0.41 0.43 0.54 0.95 0.43 96.0 DART ( T-F + ID-T ) 110,230 50.59 0.41 0.44 0.54 0.95 0.43 98.4 Table 1: Intrinsic and extrinsic evaluations of verbalization approaches on DART test and NQ-table-Q (4.1), respectively.",
"ing its predecessor Freebase is no longer maintained despite its popularity in research.",
"In order to be comparable with recent work (Agarwal et al., 2021), we directly use their partitioned KB graphs from WikiData in our experiments, which is denoted as WD-graphs .",
"In this section, we evaluate our data-to-text model with both intrinsic and extrinsic metrics.",
"Since intrinsic metrics are probably less correlated with the downstream performance, we use them only as a sanity check for generation quality and focus on using an extrinsic metric for selecting models.",
"Intrinsic Evaluation : Since our model is developed mainly on DART, we first conduct the intrinsic evaluation on the DART test set to measure the impact of our improved data-to-text methods, i.e. data filtering and iterative training.",
"Following (Nan et al., 2021), we use the official evaluation metrics including BLEU, METEOR (Banerjee and Lavie, 2005), TER, MoverScore (Zhao et al., 2019), BERTScore (Zhang et al., 2020) and BLEURT (Sel-lam et al., 2020).",
"Table 1 summarizes different data-to-text models on DART test.",
"As we can see, the resulting model trained with our data conversion (row 2) performs on par with the model using the original format (row 1).",
"More interestingly, filtering short samples has almost no impact on the verbalizer performance (row 3).",
"Lastly, iterative training with additional target domain data (row 4) slightly hurts on BLEU and TER and achieves similar performances on other metrics.",
"Overall, our verbalizer with the proposed data conversion and improved training remains very effective on DART.",
"Extrinsic Evaluation : Since we are interested in applying verbalized knowledge for ODQA, the QA model is more likely to predict the correct answer only if the answer still exists after the verbalization.",
"Therefore, we also evaluate each generator using a metric more related with the downstream task performance: answer coverage .",
"Specifically, we compute the answer coverage as the percentage of examples that the answer present in the raw structured knowledge is still preserved in the corresponding verbalized output.",
"First, we compute the answer coverage of different generators discussed in the previous section on NQ-table-Q where tables are known to contain question-triggering content.",
"The scores are reported in the last column of Table",
"1. Due to more lengthy tables in NQ-table-Q , data filtering improves the answer coverage as expected.",
"Moreover, model trained with our iterative training demonstrates substantial improvements in answer coverage, indicating that our approach is highly effective for converting tables into text.",
"Examples for comparing different verbalizer outputs are shown in Table 12 in Appendix F. Later, we use this best generator to verbalize All-tables .",
"We use beam search of size 10 and save all beams.",
"To retain as much input information as possible, a re-ranking stage is carried out over these predictions based on the ROUGE-1 score between the model inputs and model outputs.",
"The highest ranked prediction is then used as the final output.",
"Lastly, we directly apply our best generator (DART T-F + ID-T) for verbalizing KB graphs.",
"To evaluate the performance, we compare our model with the recent method KELM-verbalizer (Agar-wal et al., 2021) using answer coverage on the set WebQ-KB-Q where KB sub-graphs are known to contain answer entities.",
"Although never tuned for KB graph inputs, our model achieves 99.6 on answer coverage, outperforming the KELM-verbalizer (97.8 on answer coverage) by a large margin.",
"This suggests that our data-to-text approach is highly effective for both tables and KB sub-graphs.",
"Here we present our main experiments on ODQA over data and text.",
"For regular Wikipedia text, we use the same index containing 21M passages as 1609 Model NQ WebQ Without Structured Knowledge DPR (Karpukhin et al., 2020) 41.5 35.2 UnitedQA (Cheng et al., 2021b) 51.8 48.0 With Structured Knowledge KEALM (Agarwal et al., 2021) 41.5 43.9 UnitK-QA (Oguz et al., 2020) 54.1 57.8 UDT-QA w/ Raw Single Data 54.7 51.4 UDT-QA w/ Verbalized Single Data 55.2 52.0 UDT-QA w/ Verbalized Hybrid Data 55.1 52.5 Table 2: End-to-end open-domain QA evaluation of UDT-QA in comparison to recent state-of-the-art models on the test sets of NQ and WebQ.",
"in (Karpukhin et al., 2020).",
"To augment text, two settings are considered, i.e. the single data setting and the hybrid data setting.",
"In the single data setting for NQ, we augment the text index with tables from the All-tables set (4.2).",
"For comparison, we also experiment with the raw representations using a simple linearization of tables similar to (Oguz et al., 2020).",
"In single data setting for WebQ, we consider combining text with KB graphs from WD-graphs in the single data setting.",
"Different from (Oguz et al., 2020) where a separate entity-linking based retriever is used for KB, we use a single model over the text index with either linearization of raw KB graphs or our verbalized KB graphs.",
"Hence, in our case, both text and data (tables and KB graphs) can be handled by a unified retriever-reader pipeline.",
"In the hybrid data setting for both NQ and WebQ, we use text, All-tables and WD-graphs for retrieval.",
"The statistics of our document index are shown in Table 7 in Appendix A. We create additional retriever training data from NQ-Table-Q and WebQ-KB-Q in a similar fashion as in the text-only setting, so that DPR can better handle additional knowledge.",
"Following (Oguz et al., 2020), we also use the iterative training setup for retriever training.",
"More training details can be found in Appendix B. To evaluate the effectiveness of our UDT-QA for ODQA, we first include recent state-of-the-art ODQA models using text as the only knowledge source, DPR and UnitedQA.",
"We also compare our UDT-QA with recent models using additional structured knowledge, KEALM and UnitK-QA.",
"Following the literature, we report the exact match (EM) score for evaluation.",
"The results are in Table",
"2. Source Format R20 R100 EM text -80.8 86.1 49.6 + NQ-tables raw 85.2 90.1 51.1 + NQ-tables V 85.5 90.2 51.2 + All-tables raw 85.8 90.7 52.1 + All-tables V 86.0 90.7 52.5 text -78.9 82.3 52.6 + WD-graphs-WebQ raw 83.4 86.1 57.1 + WD-graphs-WebQ V 83.4 85.0 55.7 + WD-graphs raw 82.8 86.1 54.3 + WD-graphs V 82.8 86.7 55.4 Table 3: Impact of document index size over separately trained retriever-reader models (Top for NQ and bottom for WebQ).",
"As we can see, models with additional structured knowledge achieve better performance than text-only models.",
"This indicates that both KB graphs and tables contain complementary knowledge which is either absent in text or harder to be reasoned over.",
"For NQ, although we consider a significantly larger structured knowledge source which is likely to be more challenging, all our models substantially outperform UnitK-QA.",
"As for WebQ, our model achieves competitive performance, although worse than UnitK-QA.",
"We attribute this gap to two possible reasons.",
"First, UnitK-QA uses a separate entity-linking based retriever for KBs which might lead to higher retrieval recall.",
"Second, since WebQ is fully based on FreeBase, using WikiData only in our models likely suffers from mismatch (Pellissier Tanon et al., 2016).",
"Nevertheless, our verbalizer-based models achieve better performances than the corresponding raw format models on both datasets, indicating that the proposed verbalizer is highly effective for tables and KB graphs.",
"In this section, we present analyses over the impact of document index size, the use of additional structured knowledge in a hot-swap setting, comparison to a recent KB-only data-to-text approach in an end-to-end fashion, and manual exam of the verbalized/raw tables for their impact on ODQA.",
"How does the size of document index affect retriever and reader performance?",
"More knowledge is likely to have better coverage of relevant information.",
"On the other hand, larger and noisier index also increases the reasoning complexity.",
"To understand the impact of the increased document index size, we conduct experiments with a restricted setting where only relevant subset of knowledge to the corresponding dataset (a prior) is used for retrieval.",
"Similar to (Oguz et al., 2020), we experiment with the combined document index of text and NQ-tables for NQ.",
"As for WebQ, we keep documents from WD-graphs that contain any of the question entity in WebQ to build WD-graphs-WebQ , and experiment with using text + WD-graphs-WebQ .",
"In addition to EM, we report R20 and R100, evaluating the retrieval accuracy of gold passages in the top-20 and top-100 documents, respectively.",
"The results are reported in Table",
"3. For NQ, in spite of being more challenging, we see that using All-tables yield substantial improvement in both recall and answer exact match compare to using NQ-tables .",
"This indicates that, with proper training, ODQA models are likely to benefit from enriched knowledge.",
"Although the larger raw form index brings in decent improvement (+1 EM) in terms of reader performance (+ All-tables vs+ NQ-tables ), our verbalized knowledge is more friendly for answer reasoning leading to a more notable QA improvement (+1.3 EM).",
"Different from NQ, we observe that on WebQ the restricted setting with WD-graphs-WebQ achieves better results.",
"We hypothesize that this is likely due to the scale of WebQ dataset.",
"The small amount of WebQ training makes the retriever insufficient to handle large-scale document index.",
"We leave the verification of this hypothesis for future work.",
"Does a text-only retriever-reader model benefit more from verbalized knowledge compare to raw format (hot-swap)?",
"Since both retriever and reader are based on pretrained language models, we hypothesize that they would probably benefit more from the verbalized knowledge due to its sim-Source R20 R100 EM KELM 78.2 85.3 51.5 WD-graphs (Ours) 78.5 85.5 52.0 Table 5: Comparison of verbalized knowledge from our verbalizer and KELM for retriever and reader on WebQ test.",
"This can be particularly useful for a hot-swap setting where both retriever and reader have only seen textual knowledge during training.",
"To verify that verbalized knowledge is more amenable, we carry out a hot-swap experiment here.",
"Specifically, we directly use a DPR model trained on NQ text-only data for additionally indexing both NQ-tables and All-tables .",
"Then, the inference retrieval is performed on the augmented document index for an input question, and a text-only United-QA-E reader trained on NQ is applied for answer inference afterwards.",
"The results are summarized in Table",
"4. Similar to the previous fully fine-tuned settings, we see that additional knowledge still provide substantial improvements for text-only retriever using either raw or verbalized knowledge.",
"However, the improvement in recall is not reflected in the later reader performance for the raw format, whereas the hot-swap answer inference performance is notably improved with verbalized knowledge.",
"This observation further validates our hypothesis that verbalized knowledge is more beneficial, especially for reader.",
"How does the proposed verbalizer compare to recent data-to-text models?",
"Lastly, we compare our verbalizer with the recently proposed data-to-text generator for converting KB graphs only, KELM (Agarwal et al., 2021).",
"Since both KELM generator and our verbalizer are based on the same partitioned Wikidata, this evaluation can fully re-flect their corresponding generation impacts on ODQA in an end-to-end fashion.",
"Here, we evaluate using our verbalized WD-graphs and the KELM corpus as additional knowledge on WebQ.",
"In particular, we follow the same procedure to train and evaluate our retriever and reader except that we swap the WD-graphs with KELM corpus in data construction and retrieval.",
"Both retriever and reader performances are reported in Table",
"5. Note that the KELM data-to-text model is customized solely for converting KB graphs and trained with a much larger dataset (about 8M training instances), 1611 Q&A V table Raw table TITLE: List of Star Wars: The Clone Wars episodes Q: star wars ....",
"Nevertheless, consistent with its better extrinsic performance (5), our verbalizer again outperforms the KELM generator in both retrieval and reading, which provides further support for the effectiveness of our approach as a unified interface for ODQA over data and text.",
"What is the impact of verbalized/raw table on ODQA?",
"We manually analyze examples of verbalized and raw tables and the details of annotation can be found in Appendix E. We showcase the examples of verbalized tables and their raw counterpart in Table 6 and discussion their effect on our UDT-QA system.",
"We identify 2 common patterns where raw tables are inferior to verbalized tables, as shown in the first 2 rows of Table",
"6. In the first example, the concatenated numbers in the raw table can be hard to interpret , and we have to carefully align the row with the header, which is very far away.",
"In the second example, the raw infobox can be in ill-format and very long , making it hard to understand.",
"On the other hand, the verbalized row clearly states the answer evidence by connecting the information in the headers with cell values, making it straightforward to find the answer.",
"At the same time, we also notice the limitation of verbalized tables: table structure loss.",
"We found that raw tables are better at answering ranking questions, as the examples shown in row 3&4 of Table",
"6. When asked about the top or bottom ranked subject, the model can directly look for evidence from the starting or the end of the table.",
"On the other hand, when the table is verbalized, the model can not rely on such shortcuts because the boundary of rows is not clear and the original structure of the tables are lost .",
"This also suggests a possible direction for future work: to better incorporate the table structure information in verbalization.",
"Data-to-Text Generating text from structured data has been a popular task in NLP.",
"Many dataset have been proposed for this task such as Wikibio (Lebret et al., 2016), Rotowire (Wiseman et al., 1612 2017), WebNLG (Gardent et al., 2017) and E2E (Novikova et al., 2017), where each dataset focuses on a particular domain.",
"More recently, large-scale datasets that contains open-domain examples have been proposed including DART (Nan et al., 2021), TOTTO (Parikh et al., 2020), WikiTableT (Chen et al., 2021a) and GenWiki (Jin et al., 2020).",
"On the modeling side, finetuning the pretrained models typically achieves promising performance (Ribeiro et al., 2020).",
"Wang et al. (2020) propose customized loss functions to reduce model hallucination during generation.",
"Muti-task learning is used to improve model's robustness towards input variations (Hoyle et al., 2021).",
"Chen et al. (2020b) introduce a generalized format and a pretrained model that can generate text from both table rows and knowledge graphs.",
"Most previous work on data-to-text generation have only conducted internal evaluation, using typical generation metrics such as BLEU and ROUGE, hence the data-to-text is considered the target task.",
"In this paper, we argue that different training strategies and evaluation metrics should be adapted when applying data-to-text models to downstream tasks, i.e. ODQA.",
"Related to our work, Agarwal et al. (2021) convert the entire Wikidata to natural language using a finetuned T5 model (Raffel et al., 2020).",
"In this work, we generalize the data-to-text approach for verbalizing both tables and KB graphs in a unified fashion and study the verbalized knowledge on ODQA.",
"QA with Data and Text As the knowledge required to answer the questions may not be available in textual corpus, previous studies have sought to incorporate knowledge from difference sources such as tables and knowledge bases.",
"Min et al. (2019) use Wikidata to expand seed passages found by the retriever and enhance encoded passage representations in the reader.",
"Li et al. (2021) propose a hybrid framework that takes both text and tables as inputs to produce answers and SQL queries.",
"Recently, Chen et al. (2021b) develop the OTT-QA dataset containing questions that require joint reasoning over both tables and text, where the tables and text come from entire Wikipedia.",
"There is also a line of work that studies model architectures for tables specifically or joint encoding of tables and text (Yin et al., 2020; Herzig et al., 2020; Zayats et al., 2021; Glass et al., 2021).",
"However, their focus is not on open-domain QA tasks.",
"Most similar to our work is (Oguz et al., 2020), where they use both tables and Wikidata/Freebase knowledge graph along with Wikipedia text for ODQA.",
"However, they simply linearized structured data without using any verbalizer, thus may suffer from suboptimal input representation.",
"Also, their tables are only mined from original NQ HTMLs, i.e. a constrained setting.",
"In contrast, we consider tables from full Wikipedia which is a much larger set.",
"Additionally, separate retrieval models are used for tables and KB in (Oguz et al., 2020) whereas we develop a unified model over text and data.",
"In this paper, we demonstrated that a unified verbalizer-retriever-reader framework, UDT-QA , for open-domain QA over data and text.",
"We proposed a novel data-to-text paradigm that can largely improve the verbalization effectiveness for downstream knowledge-intensive applications, i.e. open-domain QA, when attaining good intrinsic performances.",
"With the verbalized knowledge, we achieved a new state-of-the-art result for NQ.",
"Remarkably, we showed that simply augmenting the text index with the verbalized knowledge improve the performance without retraining the model.",
"In addition to our method, there are many recently proposed approaches for open-domain QA that are orthogonal.",
"For example, language models specifically optimized for dense retrieval (Gao and Callan, 2021), pretraining on large-scale QA data (Oguz et al., 2021) and hybrid system that consists of retriever, reranker, extractive reader and generative reader (Fajcik et al., 2021).",
"Incorporating those methods may further improve the performance for open-domain QA, and we leave that exploration for future work.",
"Lastly, instead of only considering a sanitized collection of knowledge sources, it is an interesting future direction to scale up the knowledge to web-scale (Nakano et al., 2021; Pik-tus et al., 2021).",
"We would like to thank Ruohong Zhang for helpful discussions and anonymous reviewers for their valuable suggestions on this paper."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"result",
"method",
"result",
"result",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain",
"other"
] |
[
"Volatility prediction is complex due to the stock market's stochastic nature.",
"Existing research focuses on the textual elements of financial disclosures like earnings calls transcripts to forecast stock volatility and risk, but ignores the rich acoustic features in the company executives' speech.",
"Recently, new multimodal approaches that leverage the verbal and vocal cues of speakers in financial disclosures significantly outperform previous state-of-the-art approaches demonstrating the ben-efits of multimodality and speech.",
"However, the financial realm is still plagued with a severe underrepresentation of various communities spanning diverse demographics, gender, and native speech.",
"While multimodal models are better risk forecasters, it is imperative to also investigate the potential bias that these models may learn from the speech signals of company executives.",
"In this work, we present the first study to discover the gender bias in multimodal volatility prediction due to gender-sensitive audio features and fewer female executives in earnings calls of one of the world's biggest stock indexes, the S&P 500 index.",
"We quantitatively analyze bias as error disparity and investigate the sources of this bias.",
"Our results suggest that multimodal neural financial models accentuate gender-based stereotypes.",
"1 1 Introduction Earnings calls are publicly available, quarterly conference calls where CEOs discuss their company's performance and future prospects with outside analysts and investors (Qin and Yang, 2019; Sawhney et al., 2020b).",
"They consist of two sections: a prepared delivery of performance statistics, analysis and future expectations, and a spontaneous question-answer session to seek additional information not disclosed before (Keith and Stent, 2019).",
"Researchers have studied the Post Earnings Announcement Drift (PEAD) to observe that statements made by upper management affect the way information is digested and acted upon impacting short-term price movements (Ball and Brown, 1968; Bernard and Thomas, 1989; Yang et al., 2020).",
"Audio features contextualize text and connotate speaker's emotional and psychological state (Fish et al., 2017; Jiang and Pell, 2017; Burgoon et al., 2015; Bachorowski, 1999).",
"Hence, when used with textual features, audio features significantly determine the effect of earning calls on the stock market (Qin and Yang, 2019; Yang et al., 2020).",
"Past research has shown that audio features such as speakers' pitch, intensity, etc. vary greatly across genders (Mendoza et al., 1996; Burris et al., 2014; Latinus and Taylor, 2012).",
"Moreover, female executives are highly underrepresented in these earnings calls(Agarwal, 2019; Investments, 2017).",
"The variation in audio features is amplified by deep learning models due to a dearth of female training examples and is manifested as a gender bias.",
"The system learns unneeded correlations between stock volatility and sensitive attributes like gender, accent, etc.",
"It further perpetuates gender-based stereotypes and generalizations like female executives are less con-fident than male executives (Lonkani, 2019), men are assessed as more charismatic than female executives under identical conditions (Novk-Tt et al., 2017), and nurses are female and doctors are male (Saunders and Byrne, 2020).",
"Biased models further perpetuate stereotypes that can harm underrepresented communities, specifically in the financial and corporate world.",
"Novk-Tt et al. (2017) even show that female speakers have to deliver better acoustic-melodic performance to seem as charismatic as men.",
"Taking a step towards fair risk forecasting models, we analyze gender bias by studying the error disparity in the state-of-the-art for multimodal volatility prediction, MDRM (Qin and Yang, 2019).",
"Bias in Finance Public financial data is impacting virtually every aspect of investment decision making (Peric et al., 2016; Brynjolfsson et al., 2011).",
"Prior research shows that NLP methods leveraging social media (Sawhney et al., 2020a), news (Du and Tanaka-Ishii, 2020), and earning calls (Wang and Hua, 2014) can accurately forecast financial risk.",
"Companies and investors use statistical and neural models on multimodal financial data to forecast volatility (Cornett and Saunders, 2003; Trippi and Turban, 1992) and minimize risk.",
"These models although effective, may be tainted by bias due to individual and societal differences, often unintended (Mehrabi et al., 2019).",
"For example, models trained on the audio features extracted from CEO's speech in earnings calls (Qin and Yang, 2019), may be prone to bias given the underrepresentation of several demographics across race, gender, native language, etc. in the financial realm.",
"Bias in AI Bias is prevalent in AI based neural models owing to the lack of diversity in training data (Torralba and Efros, 2011; Tommasi et al., 2017).",
"The design and utilization of AI models trained on gender imbalanced data, pose potential deprivation of opportunities to underrepresented groups such as females(Niethammer, 2020; Dastin, 2018).",
"With over 75% of AI professionals being men, male experiences also dominate algorithmic creation (Forum, 2018).",
"In terms of natural language representation, embeddings such as word2vec and GloVe, trained on news articles may inherit gender stereotypes (Packer et al., 2018; Bolukbasi et al., 2016; Park et al., 2018).",
"Recent studies also show the presence of bias in speech emotion recognition (Li et al., 2019).",
"Bias in AI and Finance With the advent of AI and Big Data, companies are intelligently using data to measure performance (Newman, 2020).",
"But seldom do enterprises check on the imbalance in gathered data.",
"Women still represent fewer than 20% positions in the financial-services C-suite (Chin et al., 2018) and only 5% of Fortune-500 CEOs are women (Suresh and Guttag, 2019).",
"Studies show that models trained on gender imbalanced data reduce the chances for women to get capital investments or loans (Grdeniz et al., 2020).",
"Apart from that, using feature representations in-Figure 1: Model architecture used for training the multimodal audio-text model for evaluating the gender specific performance inspired by (Qin and Yang, 2019) trinsic to different genders can inculcate semantic gender bias (Li et al., 2019; Suresh and Guttag, 2019).",
"Professional studies have found that men tend to self-reference using I', me' and mine' whereas women tend to reference the team, like we', our' and us' (Investments, 2017).",
"Although there is great progress in mitigating bias in text, understanding its presence in multimodal speech based analysis, particularly in real world scenarios like corporate earnings calls analysis remain an understudied yet promising research direction.",
"Another study found that despite having identical credibility, female CEOs are perceived as less capable to attract growth capital (Bigelow et al., 2014).",
"Stock volatility Following Kogan et al. (2009); Sawhney et al. (2020c), for a given stock, with a close price of p i on trading day i , we calculate the average log volatility over n days following the day of the earnings call as:",
"Volatility Prediction Consider each earnings call E , with aligned audio recordings A and text transcripts T .",
"The earnings calls are divided into separate distributions based on the gender of the speaker to analyse the effect of gender on the model performance.",
"Building upon the work of Qin and Yang (2019); Yang et al. (2020), our main focus is to learn a function $f(E_{\{T,A\}}) \rightarrow v_{\tau} \in [0, \infty)$ over $\tau \in \{3, 7, 15, 30\}$ days to evaluate the bias for different time periods.",
"Earnings Call Data We use the dataset 2 created by Qin and Yang (2019) comprising 559 public earnings calls audio recordings with their transcripts for 277 companies in the S&P 500 index spanning over a year of earnings calls.",
"The details of the dataset splits for training have been given in Table 1.",
"For the identification of gender bias in the earnings call acoustics, we first map the speakers from all the earnings calls to their self-reported gender.",
"For this, we perform web scraping from Reuters (pronouns) and Crunchbase, where genders are self-declared, and use the available genders from the Wikidata API.",
"The extracted genders correspond only to male and female; 11.8% of the speakers are female and 88.2% are male, which motivates us to estimate the error disparity in model performance.",
"Evaluating Gender Bias We use the performance error disparity $G = MSE_f - MSE_m$, where $f$ and $m$ stand for female and male, respectively (Saunders and Byrne, 2020).",
"A higher $G$ is indicative of bias in favour of the male distribution.",
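Concretely, the metric is a simple difference of per-group MSEs; the values in the usage line below are purely illustrative, not taken from the paper.

```python
def error_disparity(mse_female: float, mse_male: float) -> float:
    """G = MSE_f - MSE_m (Saunders and Byrne, 2020); G > 0 means the
    model errs more on the female distribution, i.e. favours males."""
    return mse_female - mse_male

print(error_disparity(0.91, 0.53))  # 0.38 (illustrative values only)
```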
"Model Architecture and Training We use the state-of-the-art Multimodal Deep Regression Model (MDRM) of Qin and Yang (2019), as shown in Figure 1.",
"MDRM takes utterance-level audio $A$ and text $T$ embeddings and models them through two contextual BiLSTM layers, followed by late multimodal fusion.",
"The fused text-audio features are fed to another BiLSTM followed by two fully-connected layers.",
"MDRM is trained end-to-end by optimizing the mean square error (MSE) between the predicted and true stock volatility.",
"Training Setup For textual features we use FinBERT embeddings (Araci, 2019; https://github.com/ProsusAI/finBERT) with default parameters, and for audio cues we use the 26-dimensional vectors extracted by Qin and Yang (2019) with Praat (Boersma and Van Heuven, 2001), spanning shimmer, jitter, pitch, intensity, etc. (Dataset: https://github.com/GeminiLn/EarningsCall_Dataset; speaker genders: https://www.thomsonreuters.com/en/profiles.html and https://www.crunchbase.com/discover/people.)",
"Table 2: Modality-specific $G = MSE_F - MSE_M$, i.e., the difference between the MSE for the female and male distributions for 3, 7, 15 and 30 days over 5 runs.

| Model | τ = 3 | τ = 7 | τ = 15 | τ = 30 |
|---|---|---|---|---|
| MDRM(A) | 0.38 | 0.16 | 0.26 | 0.18 |
| MDRM(T) | 0.33 | 0.12 | 0.20 | 0.16 |
| MDRM(AT) | 0.30 | 0.11 | 0.28 | 0.14 |",
"We report the complete list in Table 4.",
"The maximum number of audio clips in any call is 520.",
"Hence, we zero-pad calls that have fewer than 520 clips.",
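A small sketch of this padding step, assuming the 26-dimensional Praat features described next; the helper name and the returned mask are our additions.

```python
import numpy as np

def pad_call(clips, max_clips=520, feat_dim=26):
    """Zero-pad a call's utterance-level audio features to a fixed
    length of 520 clips; the boolean mask marks the real (unpadded) rows."""
    out = np.zeros((max_clips, feat_dim), dtype=np.float32)
    out[: len(clips)] = clips
    mask = np.zeros(max_clips, dtype=bool)
    mask[: len(clips)] = True
    return out, mask
```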
"The model is trained on a TPU v3-8 for 20 epochs using a learning rate of 0.001.",
"The hyperparameters are tuned on the validation set defined by Qin and Yang (2019) following the same preprocessing.",
"We perform 5 end-to-end runs with early stopping over the validation loss to arrive at the decision of training for 20 epochs.",
"Bias in Multimodal Volatility Prediction To evaluate gender bias in MDRM, we analyze the error disparity quantified by $G$ for the individual text and audio modalities and their combination for $\tau$ = 3, 7, 15, 30 days.",
"We tabulate the error disparity in terms of $G$ across modalities in Table 2 and performance in Table 3.",
"We observe that, for all modalities, the error for the male distribution is consistently lower than that for the female distribution, for both short- and long-term durations.",
"Although the audio modality improves model performance significantly, it also has the highest amount of bias, as audio features for males and females vary significantly.",
"Further, the skewed distribution of speakers' gender in the earnings calls amplifies this error disparity.",
"Over-amplification refers to bias that arises in a system during model fitting.",
"The model learns imperfect generalizations between the attributes and the final labels and amplifies them while predicting on the test set.",
"In our case, since female examples are far fewer than their male counterparts, the model discriminates between male and female examples by inferring insufficient information beyond its source base rate, as shown in Table 2.",
"To study this effect, we train the model with different gender ratios of training samples and observe the variation in performance in Figure 2.",
"We note that as the male:female training ratio increases, the test loss is amplified the most in the audio modality, followed by audio+text and text.",
"The test $MSE_{male}$ decreases while $MSE_{female}$ increases.",
"$MSE_{female}$ increases as the percentage of female examples in the training set decreases, since only coarse generalisations of this underrepresented group are learnt and the resulting incorrect inferences barely harm the overall performance.",
"The difference in test loss between the male and female distributions is considerably smaller when the number of samples across genders is equal.",
"Table 4: Comparison of the audio features for the male and female speaker distributions (* marks a statistically significant p-value after Bonferroni correction).

| Category | Audio features |
|---|---|
| Pitch analysis | Mean fundamental frequency (F0), stdev fundamental frequency (F0), number of pulses*, number of periods*, degree of voice breaks, maximum pitch, minimum pitch, voiced frames, voiced-to-unvoiced ratio, voiced-to-total ratio |
| Intensity analysis | Mean intensity, SD energy, maximum intensity, minimum intensity |
| Voice analysis | Local jitter*, local absolute jitter*, relative average perturbation jitter*, period perturbation quotient-5 jitter*, ddp jitter*, local shimmer*, local dB shimmer*, apq3 shimmer*, apq5 shimmer*, apq11 shimmer*, dda shimmer* |
| Harmonicity analysis | Harmonic-to-noise ratio |",
"Through this observation, we note that performance for female examples can be improved by augmentation techniques or cross domain adaptation, which we leave for future work.",
"Semantic Bias occurs in embeddings and representations of audio and textual data which learn unwanted stereotypes.",
"For our case semantic bias occurs as the audio features are significantly different for male and female distributions.",
"We analyze each audio feature for both distributions in Table 4.",
"We find that 13 out of 26 features have a statistically significant difference under the two-tailed t-test ($\alpha = 0.05$) after applying Bonferroni correction (Weisstein, 2004), a multiple-comparison correction used when several statistical tests are performed.",
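The testing procedure can be sketched roughly as below; the paper does not state whether equal variances were assumed, so the Welch variant is an assumption, as are the function and argument names.

```python
from scipy import stats

def significant_features(female_feats, male_feats, names, alpha=0.05):
    """Per-feature two-tailed t-test (Welch) with a Bonferroni-adjusted
    threshold alpha/m for the m = 26 simultaneous comparisons.
    *_feats: arrays of shape (num_speakers, num_features)."""
    m = len(names)
    flagged = []
    for j, name in enumerate(names):
        _, p = stats.ttest_ind(female_feats[:, j], male_feats[:, j],
                               equal_var=False)
        if p < alpha / m:                 # Bonferroni correction
            flagged.append((name, p))
    return flagged
```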
"These differences in the audio features of executives' speech can amplify the error disparity, as models may associate certain features with a specific gender, such as voice-analysis features like shimmer and jitter.",
"Degradation in the performance of speech models could be due to discernible noise and indiscernible sources like demographic bias: age, gender, dialect, culture, etc (Meyer et al., 2020; Hashimoto et al., 2018; Tatman and Kasten, 2017).",
"Studies also show that AI can deploy biases against black people in criminal sentencing (Angwin et al., 2016; Tatman and Kasten, 2017).",
"Although we only account for gender bias in our study, we acknowledge that other kinds of bias could exist due to age, accent, culture, and ethnic and regional disparities in audio cues, as the publicly available earnings calls predominantly come from US companies.",
"Moreover, only publicly available earnings calls have been used, limiting the scope of the data.",
"This also limits the availability of genders in the data to only male and female.",
"In the future, we hope to increase the amount of data to expand our study to more categories and types of sensitive attributes.",
"Earnings calls provide company insights from executives, proving to be high risk-reward opportunities for investors.",
"Recent multimodal approaches that utilize these acoustic and textual features to predict the financial risk achieve state-of-the-art performance, but overlook the gender bias associated with speech.",
"We analyze the gender bias in volatility prediction of earnings calls due to gender sensitive audio features and underrepresentation of women in executive positions.",
"We observe that while adding speech features improves performance, it also perpetuates gender bias, as the audio modality has the highest error disparity.",
"We further probe into the sources of bias, analyze audio feature variations across genders, and perform experiments with varying training data distributions.",
"Our study presents the first analysis of gender bias in multimodal financial forecasting, bridging the gap between fairness in AI, neural financial forecasting, and multimodality."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"objective"
] |
[
"A conventional approach to improving the performance of end-to-end speech translation (E2E-ST) models is to leverage the source transcription via pre-training and joint training with automatic speech recognition (ASR) and neural machine translation (NMT) tasks.",
"However, since the input modalities are different, it is difficult to leverage source language text successfully.",
"In this work, we focus on sequence-level knowledge distillation (SeqKD) from external text-based NMT models.",
"To leverage the full potential of the source language information, we propose backward SeqKD , SeqKD from a target-to-source backward NMT model.",
"To this end, we train a bilingual E2E-ST model to predict paraphrased transcriptions as an auxiliary task with a single decoder.",
"The paraphrases are generated from the translations in bitext via back-translation.",
"We further propose bidirectional SeqKD in which SeqKD from both forward and backward NMT models is combined.",
"Experimental evaluations on both autoregressive and non-autoregressive models show that SeqKD in each direction consistently improves the translation performance, and the effectiveness is complementary regardless of the model capacity.",
"End-to-end speech translation (E2E-ST) (Bérard et al., 2016), which aims to convert source speech directly into text in another language, is an active research area.",
"Because direct ST is a more difficult task than automatic speech recognition (ASR) and machine translation (MT), various techniques have been proposed to ease the training process by using source transcription.",
"Examples include pretraining (Bérard et al., 2018; Wang et al., 2020c; Bansal et al., 2019; Wang et al., 2020d), multi-task learning (Weiss et al., 2017; Bérard et al., 2018; Bahar et al., 2019), knowledge distillation (Liu et al., 2019), meta-learning (Indurthi et al., 2020), two-pass decoding (Anastasopoulos and Chiang, 2018; Sperber et al., 2019), and interactive decoding (Liu et al., 2020; Le et al., 2020).",
"However, as the input modalities of the ST and MT tasks are different, an auxiliary MT task is not always helpful, especially when additional bitext is not available (Bahar et al., 2019).",
"Moreover, because monotonic speech-to-transcription alignments encourage the ASR task to see surface-level local information, an auxiliary ASR task helps the E2E-ST model to extract acoustic representations, not semantic ones, from speech.",
"Sequence-level knowledge distillation (SeqKD) (Kim and Rush, 2016) is another approach to transferring knowledge from one model to another.",
"Recent studies have shown that SeqKD has the effect of reducing the complexity of training data and thus eases the training of student models, e.g., non-autoregressive (NAR) models (Gu et al., 2018; Zhou et al., 2019a; Ren et al., 2020).",
"Paraphrasing, which represents text in a different form but with the same meaning, can also be regarded as SeqKD when using neural paraphrasing via back-translation (Mallinson et al., 2017; Wieting et al., 2017; Federmann et al., 2019).",
"It has been studied to improve the reference diversity for MT system evaluations (Thompson and Post, 2020; Bawden et al., 2020a,b) and the performance of low-resource neural MT (NMT) models (Zhou et al., 2019b; Khayrallah et al., 2020).",
"In this work, due to its simplicity and effectiveness, we focus on SeqKD from text-based NMT models to improve the performance of a bilingual E2E-ST model.",
"In order to fully leverage source language information, we propose backward SeqKD , which targets paraphrased source transcriptions generated from a target-to-source backward NMT model as an auxiliary task.",
"Then, a single ST decoder is trained to predict both source and target language text as in a multilingual setting (Inaguma et al., 2019).",
"This way, the decoder is biased to capture semantic representations from speech, unlike joint training with an auxiliary ASR task.",
"We also propose bidirectional SeqKD , which combines SeqKD from two NMT models in both language directions.",
"Therefore, the E2E-ST models can fully exploit the knowledge embedded in both forward and backward NMT models.",
"Experimental evaluations demonstrate that SeqKD from each direction consistently improves the translation performance of both autoregressive and non-autoregressive E2E-ST models.",
"We also con-firm that bidirectional SeqKD outperforms unidirectional SeqKD and that the effectiveness is maintained in large models.",
"In this section, we propose bidirectional SeqKD from both forward and backward NMT models that leverages machine-generated source paraphrases as another target in addition to the distilled translation to enhance the training of a bilingual E2E-ST model.",
"Let $X$ denote input speech features in a source language, and let $Y^s$ and $Y^t$ denote the corresponding gold transcription and translation, respectively.",
"Let $\mathcal{D}_{st} = \{(X_i, Y^s_i, Y^t_i)\}_{i=1}^{I}$ be an ST dataset of $I$ samples, and let $\mathcal{D}_{asr} = \{(X_i, Y^s_i)\}_{i=1}^{I}$ and $\mathcal{D}_{mt} = \{(Y^s_i, Y^t_i)\}_{i=1}^{I}$ denote the corresponding ASR and MT datasets, respectively.",
"(We drop the subscript $i$ when it is obvious.)",
"We first train a text-based source-to-target forward NMT model $\mathcal{M}_{fwd}$ with $\mathcal{D}_{mt}$.",
"Then, we perform beam search decoding with $\mathcal{M}_{fwd}$ on $\mathcal{D}_{st}$ to create a new dataset $\mathcal{D}^{fwd}_{st} = \{(X_i, Y^s_i, \hat{Y}^t_i)\}_{i=1}^{I}$, where $\hat{Y}^t_i$ is a distilled translation.",
"$\mathcal{D}^{fwd}_{st}$ is used to train the E2E-ST models, referred to as forward SeqKD (or fwd SeqKD).",
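A minimal sketch of how such a distilled dataset could be built; `nmt_model.translate` is a hypothetical interface standing in for beam search decoding with the trained teacher.

```python
def distill_dataset(nmt_model, st_dataset, beam_size=5):
    """Build D_st^fwd: keep (X, Y^s) and replace the gold translation
    with the teacher's beam-search output Y-hat^t."""
    distilled = []
    for speech, src_text, _gold_tgt in st_dataset:       # (X, Y^s, Y^t)
        hyp_tgt = nmt_model.translate(src_text, beam=beam_size)
        distilled.append((speech, src_text, hyp_tgt))    # (X, Y^s, Y-hat^t)
    return distilled
```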
"To exploit semantic information in the source language, we leverage machine-generated paraphrases of source transcriptions.",
"We train a text-based target-to-source backward NMT model $\mathcal{M}_{bwd}$ with $\mathcal{D}_{mt}$ and then generate a new dataset $\mathcal{D}^{bwd}_{st} = \{(X_i, \hat{Y}^s_i, Y^t_i)\}_{i=1}^{I}$, where $\hat{Y}^s_i$ is a paraphrase of $Y^s_i$.",
"We use $\mathcal{D}^{bwd}_{st}$ for training the E2E-ST models.",
"As neural paraphrasing can be regarded as SeqKD from $\mathcal{M}_{bwd}$, we refer to it as backward SeqKD (or bwd SeqKD). (We focus on complete triplets of $(X, Y^s, Y^t)$ only.)",
"In this work, we do not use large paraphrase datasets (Wieting and Gimpel, 2018; Hu et al., 2019) because their availability depends on languages and domains.",
"Moreover, neural paraphrasing is applicable to any source language that lacks a sufficient amount of paired paraphrase data.",
"We also propose combining forward SeqKD with backward SeqKD, referred to as bidirectional SeqKD (or bidir SeqKD), and construct a new dataset $\mathcal{D}^{bidir}_{st} = \{(X_i, \hat{Y}^s_i, \hat{Y}^t_i)\}_{i=1}^{I}$.",
"When using two references per utterance (2ref training) (Gordon and Duh, 2019), we concatenate $\mathcal{D}^{fwd}_{st}$ and $\mathcal{D}^{bwd}_{st}$; the most suitable combination is analyzed in Section 4.3.",
"This way, we can distill the knowledge of both $\mathcal{M}_{fwd}$ and $\mathcal{M}_{bwd}$ into a single E2E-ST model.",
"We train an E2E-ST model with a direct ST objective $\mathcal{L}_{st}(Y^t \text{ or } \hat{Y}^t \mid X)$ and an auxiliary speech-to-source-text objective $\mathcal{L}_{src}(Y^s \text{ or } \hat{Y}^s \mid X)$.",
"We refer to joint training with $\mathcal{L}_{src}(Y^s \mid X)$ as joint ASR and to joint training with $\mathcal{L}_{src}(\hat{Y}^s \mid X)$ as backward SeqKD.",
"Both losses are calculated from the same ST decoder.",
"To bias the model to generate the desired target language, we add language embedding to token embedding at every token position in the decoder (Conneau and Lample, 2019).",
"We then apply bidirectional SeqKD to both autoregressive (AR) and non-autoregressive (NAR) E2E-ST models.",
"We use the speech Transformer architecture in (Karita et al., 2019) with an additional language embedding.",
"The total training objective is formulated with a hyperparameter $\lambda_{src}$ ($\geq 0$) as $\mathcal{L}_{total} = \mathcal{L}_{st} + \lambda_{src} \mathcal{L}_{src}$ (1), where both $\mathcal{L}_{st}$ and $\mathcal{L}_{src}$ are defined as cross-entropy losses.",
"The entire encoder-decoder parameters are shared in both tasks.",
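A sketch of Eq. (1) in PyTorch, with both terms as token-level cross-entropy; the tensor shapes and padding convention are assumptions.

```python
import torch.nn.functional as F

def total_loss(st_logits, st_targets, src_logits, src_targets,
               lambda_src=0.3, pad_id=0):
    """Eq. (1): L_total = L_st + lambda_src * L_src.
    Logits: (batch, seq_len, vocab); targets: (batch, seq_len)."""
    l_st = F.cross_entropy(st_logits.transpose(1, 2), st_targets,
                           ignore_index=pad_id)
    l_src = F.cross_entropy(src_logits.transpose(1, 2), src_targets,
                            ignore_index=pad_id)
    return l_st + lambda_src * l_src
```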
"We adopt Orthros (Inaguma et al., 2021), in which a decoder based on a conditional masked language model (CMLM) (Ghazvininejad et al., 2019) is jointly trained with an additional AR decoder",
"(We found this to be more effective than replacing the start-of-sentence symbol with a language ID (Inaguma et al., 2019; Wang et al., 2020b; Le et al., 2020), as done in previous multilingual E2E-ST studies.)",
"on the shared speech encoder.",
"The training of the NAR decoder is further enhanced with semi-autoregressive training (SMART) (Ghazvininejad et al., 2020).",
"$\mathcal{L}_{st}$ in Eq. (1) is modified as $\mathcal{L}_{st} = \mathcal{L}_{cmlm} + \lambda_{ar} \mathcal{L}_{ar} + \lambda_{lp} \mathcal{L}_{lp}$ (2), where $\mathcal{L}_{cmlm}$, $\mathcal{L}_{ar}$, and $\mathcal{L}_{lp}$ are the losses of the NAR E2E-ST, AR E2E-ST, and length prediction tasks, respectively, and each $\lambda$ is the corresponding tunable loss weight.",
"During inference, the mask-predict algorithm is run for $T$ iterations with a length beam width of $l$ (Ghazvininejad et al., 2019).",
"The best candidate at the last iteration is selected from the NAR decoder based on scores from the AR decoder (Inaguma et al., 2021).",
"Note that we apply $\mathcal{L}_{src}$ to the NAR decoder only.",
"Data We used the MuST-C En-De (408 hours) and En-Fr (492 hours) datasets (Di Gangi et al., 2019).",
"Both language pairs consist of triplets of $(X, Y^s, Y^t)$.",
"We performed the same data preprocessing as (Inaguma et al., 2020) (see details in Appendix A.1).",
"We report case-sensitive detokenized BLEU scores (Papineni et al., 2002) on the tst-COMMON set with the multi-bleu-detok.perl script in Moses (Koehn et al., 2007).",
"Model configuration We used the Transformer (Vaswani et al., 2017) architecture having 12 encoder layers following two CNN blocks and six decoder layers for the ASR and E2E-ST tasks.",
"For the MT models, we used six encoder layers.",
"We built our models with the ESPnet-ST toolkit (Inaguma et al., 2020).",
"See details in Appendix A.2.",
"Training We always initialized the encoder parameters of the E2E-ST model with those of the corresponding pre-trained ASR model (Bérard et al., 2018).",
"We follow the same optimization strategies as in (Inaguma et al., 2021, 2020).",
"When using joint ASR or backward SeqKD, we set $\lambda_{src}$ to 0.3.",
"More details are described in Appendices A.3 and A.4.",
"[Table 2: BLEU scores of AR models on the MuST-C tst-COMMON set; marked baselines are from (Inaguma et al., 2020) and (Wang et al., 2020a), the latter a large model trained with eight language pairs.]",
"Inference For the AR models, we used a beam width of 4.",
"For the NAR models, we set $T \in \{4, 10\}$ and $l = 9$, as in (Inaguma et al., 2021).",
"We first report the paraphrasing quality, shown in Table 1.",
"As confirmed by the BLEU and translation edit rate (TER) scores (Snover et al., 2006), the paraphrased source text was not just a simple copy of the transcription (see examples in Appendix A.5).",
"Autoregressive models The results are shown in Table 2.",
"Pre-training the ST decoder with the forward MT decoder (A2) improved the baseline performance (A1).",
"Joint ASR showed a marginal improvement on En-De but a degraded performance on En-Fr ( A3 ).",
"We attribute this to the fact that the ASR task was more trivial than the ST task and biased the shared decoder to capture surface-level textual information.",
"In contrast, backward SeqKD showed small but consistent improvements in both language directions ( A4 ), and it was as effective as MT pre-training.",
"As the encoder was already pre-trained with the ASR model, paraphrases had an additional positive effect on the BLEU improvement.",
"Forward SeqKD significantly improved the performance, as previously reported in (Inaguma et al., 2021).",
"However, the gains by MT pre-training and joint ASR were diminished.",
"Forward SeqKD alone was more effective than backward SeqKD alone (A4 vs. B1).",
"However, backward SeqKD was still beneficial on top of forward SeqKD (C1, i.e., bidirectional SeqKD), while joint ASR was less so (B3).",
"Table 3: BLEU scores of NAR models on the MuST-C tst-COMMON set.

| Model | T | En-De | En-Fr |
|---|---|---|---|
| Fwd SeqKD | 4 | 21.93 | 30.46 |
| + Joint ASR | 4 | 22.13 | 30.80 |
| Bidir SeqKD | 4 | 22.22 | 31.21 |
| (Inaguma et al., 2021) | 10 | 22.88 | 32.20 |
| Fwd SeqKD (ours) | 10 | 22.96 | 32.42 |
| + Joint ASR | 10 | 23.31 | 32.41 |
| Bidir SeqKD | 10 | 23.41 | 32.64 |",
"We also augmented the target translations by concatenating $\mathcal{D}_{st}$ and $\mathcal{D}^{fwd}_{st}$ (2ref training), which further improved forward SeqKD (B4).",
"Nevertheless, a combination of 2ref training and backward SeqKD (i.e., bidirectional SeqKD with $\mathcal{D}^{fwd}_{st} \cup \mathcal{D}^{bwd}_{st}$) had a complementary effect and showed the best result (C2).",
"It even outperformed larger multilingual models (Wang et al., 2020a) without using additional data in other language pairs.",
"Non-autoregressive models The results are presented in Table 3.",
"Following the standard practice in NAR models (Gu et al., 2018), we always used forward SeqKD.",
"We did not use 2ref training for the NAR models because it increases the multimodality.",
"Joint ASR improved the performance of all NAR models, except on En-Fr with the number of iterations $T = 10$.",
"However, bidirectional SeqKD with $\mathcal{D}^{bidir}_{st}$ further improved the performance consistently, regardless of $T$.",
"Since NAR models assume conditional independence for every token, they prefer monotonic input-output alignments with lower alignment complexity in theory.",
"However, paraphrasing collapses the monotonicity of the ASR task and increases the alignment complexity, making the auxiliary speech-to-source text task non-trivial.",
"Nevertheless, BLEU scores were improved by adding backward SeqKD.",
"This was probably because the complexity of transcriptions in the training data was reduced at the cost of the alignment complexity, which was more effective for the NAR models.",
"We analyze the performance of bidirectional SeqKD through a lens of complexity in the training data following (Zhou et al., 2019a).",
"We aligned the words in every source and target sentence pair with fast_align (Dyer et al., 2013; https://github.com/clab/fast_align).",
"Table 4: Corpus-level conditional entropy (higher = more complex); the upper block is the forward (source-to-target) direction and the lower block the reverse direction.

| Condition | En-De | En-Fr |
|---|---|---|
| C(D_st) (Real) | 0.70 | 0.65 |
| C(D_st^fwd) (Fwd SeqKD) | 0.52 | 0.47 |
| C(D_st^bwd) (Bwd SeqKD) | 0.54 | 0.47 |
| C(D_st^bidir) (Bidir SeqKD) | 0.63 | 0.61 |
| C(D_st) (Real), reverse | 0.40 | 0.54 |
| C(D_st^fwd) (Fwd SeqKD), reverse | 0.28 | 0.36 |
| C(D_st^bwd) (Bwd SeqKD), reverse | 0.25 | 0.31 |
| C(D_st^bidir) (Bidir SeqKD), reverse | 0.37 | 0.49 |",
"Table 5: Faithfulness to the training data distribution (lower = more faithful); blocks as in Table 4.

| Condition | En-De | En-Fr |
|---|---|---|
| F(D_st^fwd) (Fwd SeqKD) | 12.61 | 11.65 |
| F(D_st^bwd) (Bwd SeqKD) | 9.31 | 8.67 |
| F(D_st^bidir) (Bidir SeqKD) | 11.42 | 10.72 |
| F(D_st^fwd) (Fwd SeqKD), reverse | 9.58 | 8.48 |
| F(D_st^bwd) (Bwd SeqKD), reverse | 12.97 | 10.70 |
| F(D_st^bidir) (Bidir SeqKD), reverse | 11.23 | 9.98 |",
"Then, we calculated the corpus-level conditional entropy $C(\mathcal{D})$ and faithfulness $F(\mathcal{D})$ for both the forward and backward language directions to evaluate the multimodality.",
"In short, conditional entropy measures the uncertainty of translation, and faithfulness is defined as a Kullback-Leibler divergence measuring how close the distilled data distribution is to the real data distribution.",
"See the mathematical definition in Appendix A.6.",
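As a rough illustration of the entropy side (the exact normalisation is in the paper's Appendix A.6), conditional entropy can be estimated from the alignment links:

```python
import math
from collections import Counter

def conditional_entropy(aligned_pairs):
    """Estimate H(target | source) from word-alignment links, a sketch
    of the corpus-level C(D) of Zhou et al. (2019a).
    aligned_pairs: iterable of (source_word, target_word) tuples."""
    joint = Counter(aligned_pairs)
    src = Counter(s for s, _ in aligned_pairs)
    total = sum(joint.values())
    h = 0.0
    for (s, t), c in joint.items():
        h -= (c / total) * math.log(c / src[s])   # -p(s,t) * log p(t|s)
    return h
```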
"The results of entropy and faithfulness are shown in Tables 4 and 5, respectively.",
"Consistent with (Zhou et al., 2019a), the entropy of target translations was reduced by forward SeqKD, indicating target translations were converted into a more deterministic and simplified form.",
"Interestingly, the entropy of the original translations was also reduced by backward SeqKD.",
"In other words, backward SeqKD modified the transcriptions so that the target translations could be predicted more easily.",
"This would help E2E-ST models learn relationships between source and target languages from speech because E2E-ST models are not conditioned on text in another language explicitly.",
"Therefore, we presume that the encoder representations were enhanced by backward SeqKD.",
"Table 6: Ablation study of dataset concatenation on the MuST-C tst-COMMON set.

| Training data | Target1 | Target2 | En-De BLEU | En-Fr BLEU |
|---|---|---|---|---|
| D_st ∪ D_st^fwd (B4 + Joint ASR) | (Y^s, Y^t) | (Y^s, Ŷ^t) | 25.00 | 35.05 |
| D_st ∪ D_st^bidir | (Y^s, Y^t) | (Ŷ^s, Ŷ^t) | 25.21 | 35.17 |
| D_st^bwd ∪ D_st^bidir | (Ŷ^s, Y^t) | (Ŷ^s, Ŷ^t) | 25.01 | 35.22 |
| D_st^fwd ∪ D_st^bwd (C2) | (Y^s, Ŷ^t) | (Ŷ^s, Y^t) | 25.28 | 35.29 |",
"Using machine-generated sequences in both languages increased the entropy, probably due to error accumulation.",
"However, E2E-ST models do not suffer from it because they are conditioned on the source speech.",
"We also confirmed similar trends in the reverse language direction.",
"Regarding faithfulness, distilled target sequences degraded faithfulness as expected.",
"However, an interesting finding was that the faithfulness of bidirectional SeqKD was better than that of forward SeqKD, meaning that the former reflected the true word alignment distribution more faithfully than the latter.",
"Although lexical choice might be degraded by targeting distilled text in both languages (Ding et al., 2021), mixing the original and distilled text by 2ref training would recover it.",
"We conduct an ablation study to verify the analysis in the previous section.",
"In Table 4, we observed that it was better to have the original reference in the target sequence of either the source or target language.",
"For example, to reduce the entropy of German text in the training set, it was best to condition the distilled German translation on the original English transcription, and vice versa.",
"Therefore, we hypothesize that the best way to reduce the entropy in both the source and target languages during 2ref training is to combine $(Y^s, \hat{Y}^t)$ and $(\hat{Y}^s, Y^t)$ for each sample.",
"We compared four ways of leveraging the source text: the gold transcription $Y^s$ only, the distilled paraphrase $\hat{Y}^s$ only, and combinations of both.",
"The results are shown in Table 6.",
"We confirmed that the model trained with the original reference in either language for every target achieved the best BLEU score, which verifies our hypothesis.",
"Finally, we investigate the effectiveness of bidirectional SeqKD with 2ref training when increasing the model capacity, in Table 7. (In these experiments, both the gold translation $Y^t$ and the distilled translation $\hat{Y}^t$ were always used as target sequences.)",
"The purpose of this experiment is to verify our expectation that large models can better model the complex target distributions in multi-referenced training.",
"In addition to simply increasing the model dimensions, we also investigate Conformer (Gulati et al., 2020), a Transformer encoder augmented by a convolution module.",
"We confirmed that bidirectional SeqKD always outperformed forward SeqKD in both language directions regardless of model configurations.",
"We also found that the Conformer encoder significantly boosted the translation performance of forward SeqKD, and the gains from bidirectional SeqKD still carried over.",
"To fully leverage knowledge in both source and target language directions for bilingual E2E-ST models, we have proposed bidirectional SeqKD, in which both forward SeqKD from a source-to-target NMT model and backward SeqKD from a target-to-source NMT model are combined.",
"Backward SeqKD is performed by targeting source paraphrases generated via back-translation from the original translations in bitext.",
"Then, the E2E-ST model is enhanced by training to generate both source and target language text with a single decoder.",
"We experimentally confirmed that SeqKD from each direction boosted the translation performance of both autoregressive and non-autoregressive E2E-ST models, and the effectiveness was additive.",
"Multi-referenced training with the original and distilled text gave further gains.",
"We also showed that bidirectional SeqKD was effective regardless of model sizes.",
"The authors thank the anonymous reviewers for useful suggestions and Siddharth Dalmia, Brian Yan, and Pengcheng Guo for helpful discussions."
] | [
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other"
] |
[
"Hierarchical text classification is an important yet challenging task due to the complex structure of the label hierarchy.",
"Existing methods ignore the semantic relationship between text and labels, so they cannot make full use of the hierarchical information.",
"To this end, we formulate the text-label semantics relationship as a semantic matching problem and thus propose a hierarchy-aware label semantics matching network (HiMatch).",
"First, we project text semantics and label semantics into a joint embedding space.",
"We then introduce a joint embedding loss and a matching learning loss to model the matching relationship between the text semantics and the label semantics.",
"Our model captures the text-label semantics matching relationship among coarse-grained labels and fine-grained labels in a hierarchy-aware manner.",
"The experimental results on various benchmark datasets verify that our model achieves state-of-the-art results.",
"Hierarchical text classification (HTC) is widely used in Natural Language Processing (NLP), for example for news categorization (Lewis et al., 2004) and scientific paper classification (Kowsari et al., 2017).",
"HTC is a particular multi-label text classification problem, which introduces hierarchies to organize label structure.",
"As depicted in Figure 1, HTC models predict multiple labels in a given label hierarchy, which generally construct one or multiple paths from coarse-grained labels to fine-grained labels in a top-down manner (Aixin Sun and Ee-Peng Lim, 2001).",
"Generally speaking, fine-grained labels are the most appropriate labels for describing the input text.",
"Coarse-grained labels are generally the parent nodes of coarseor fine-grained labels, expressing a more general concept.",
"The key challenges of HTC are to model the large-scale, imbalanced, and structured label hierarchy (Mao et al., 2019).",
"Existing work in HTC has introduced various methods to use hierarchical information in a holistic way.",
"To capture the holistic label correlation features, some researchers proposed a hierarchy-aware global model to exploit the prior probability of label dependencies through Graph Convolution Networks (GCN) and TreeLSTM (Zhou et al., 2020).",
"Some researchers also introduced more label correlation features such as label semantic similarity and label co-occurrence (Lu et al., 2020).",
"They followed the traditional way of transforming HTC into multiple binary classifiers, one for every label (Fürnkranz et al., 2008).",
"However, they ignored the interaction between text semantics and label semantics (Fürnkranz et al., 2008; Wang et al., 2019), which is highly useful for classification (Chen et al., 2020).",
"Hence, their models may not be sufficient to model complex label dependencies and provide comparable text-label classification scores (Wang et al., 2019).",
"A natural strategy for modeling the interaction between text semantics and label semantics is to introduce a text-label joint embedding by label attention (Xiao et al., 2019) or autoencoders (Yeh et al., 2017).",
"Label attention-based methods adopted a self-attention mechanism to identify label-specific information (Xiao et al., 2019).",
"Autoencoder-based methods extended the vanilla Canonical Correlated Autoencoder (Yeh et al., 2017) to a ranking-based autoencoder architecture to produce comparable text-label scores (Wang et al., 2019).",
"However, these methods assume all the labels are independent without fully considering the correlation between coarse-grained labels and fine-grained labels, which cannot be simply transferred to HTC models (Zhou et al., 2020).",
"In this paper, we formulate the interaction between text and label as a semantic matching problem and propose a Hi erarchy-aware Label Semantics Match ing Network (HiMatch).",
"The principal idea is that the text representations should be semantically similar to the target label representations (especially fine-grained labels), while they should be semantically far away from the incorrect label representations.",
"First, we adopt a text encoder and a label encoder (shown in Figure 2) to extract textual semantics and label semantics, respectively.",
"Second, inspired by the methods of learning common embeddings (Wang et al., 2019), we project both textual semantics and label semantics into a text-label joint embedding space where correlations between text and labels are exploited.",
"In this joint embedding space, we introduce a joint embedding loss between text semantics and target label semantics to learn a text-label joint embedding.",
"After that, we apply a matching learning loss to capture text-label matching relationships in a hierarchy-aware manner.",
"In this way, the fine-grained labels are semantically closest to the text semantics, followed by the coarse-grained labels, while the incorrect labels should be semantically far away from the text semantics.",
"Hence, we propose a hierarchy-aware matching learning method to capture different matching relationships through different penalty margins on semantic distances.",
"Finally, we employ the textual representations guided by the joint embedding loss and matching learning loss to perform the hierarchical text classification.",
"1. By considering the text-label semantics matching relationship, we are the first to formulate HTC as a semantic matching problem rather than merely multiple binary classification tasks.",
"2. We propose a hierarchy-aware label semantics matching network (HiMatch), in which we introduce a joint embedding loss and a matching learning loss to learn the text-label semantics matching relationship in a hierarchy-aware manner.",
"Hierarchical text classification is a particular multilabel text classification problem, where the classification results are assigned to one or more nodes of a taxonomic hierarchy.",
"Existing state-of-the-art methods focus on encoding hierarchy constraint in a global view such as directed graph and tree structure.",
"Zhou et al. (2020) proposed a hierarchy-aware global model to exploit the prior probability of label dependencies.",
"Lu et al. (2020) introduced three kinds of label knowledge graphs, i.e., taxonomy graph, semantic similarity graph, and co-occurrence graph to benefit hierarchical text classification.",
"They regarded hierarchical text classification as multiple binary classification tasks (Fürnkranz et al., 2008).",
"The limitation is that these models did not consider the interaction of label semantics and text semantics.",
"Therefore, they failed to capture complex label dependencies and can not provide comparable text-label classification scores (Wang et al., 2019), which leads to restricted performance (Chen et al., 2020).",
"Hence, it is crucial to exploit the relationship between text and label semantics, and help the model distinguish target labels from incorrect labels in a comparable and hierarchy-aware manner.",
"We perform matching learning in a joint embedding of text and label to solve these problems in this work.",
"To determine the correlation between text and labels, researchers have proposed various methods to exploit a text-label joint embedding, such as label attention (Xiao et al., 2019) or autoencoders (Yeh et al., 2017).",
"In the field of multi-label text classification, Xiao et al. (2019) proposed a Label-Specific Attention Network (LSAN) to learn a text-label joint embedding by label semantic and document semantic.",
"Wang et al. (2019) extended vanilla Canonical Correlated AutoEncoder (Yeh et al., 2017) to a ranking-based autoencoder architecture to produce comparable label scores.",
"However, they did not fully consider label semantics and holistic label correlations among fine-grained labels, coarse-grained labels, and incorrect labels.",
"In addition, we can not simply transfer these multi-label classification methods to HTC due to the constraint of hierarchy (Zhou et al., 2020).",
"In this section, we will describe the details about our Hierarchy-aware Label Semantics Matching Network.",
"Figure 2 shows the overall architecture of our proposed model.",
"In the HTC task, given the input sequence $x_{seq} = \{x_1, ..., x_n\}$, the model predicts the labels $y = \{y_1, ..., y_k\}$, where $n$ is the number of words and $k$ is the size of the label set.",
"A label with a probability higher than a fixed threshold (0.5) is regarded as a prediction result.",
"The sequence of token embeddings is first fed into a bidirectional GRU layer to extract contextual features $H = \{h_1, ..., h_n\}$.",
"Then, CNN layers with top-k max-pooling are adopted to generate key n-gram features $T \in \mathbb{R}^{k \times d_{cnn}}$, where $d_{cnn}$ is the output dimension of the CNN layer.",
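A minimal PyTorch sketch of this encoder stack; the kernel size and dimensions are illustrative defaults rather than the paper's exact configuration (Section 4 reports kernel sizes [2, 3, 4]).

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """BiGRU for context, then a CNN with top-k max-pooling over
    positions to keep the k strongest n-gram features."""
    def __init__(self, emb_dim=300, hid=100, d_cnn=100, k=4, kernel=3):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hid, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hid, d_cnn, kernel_size=kernel, padding=1)
        self.k = k

    def forward(self, x):                 # x: (batch, n, emb_dim)
        h, _ = self.gru(x)                # H: (batch, n, 2*hid)
        c = torch.relu(self.conv(h.transpose(1, 2)))  # (batch, d_cnn, n)
        topk, _ = c.topk(self.k, dim=2)   # top-k max-pooling over positions
        return topk.transpose(1, 2)       # T: (batch, k, d_cnn)

T = TextEncoder()(torch.randn(2, 50, 300))   # -> (2, 4, 100)
```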
"Following the previous work (Zhou et al., 2020), we further introduce a hierarchy-aware text feature propagation module to encode label hierarchy information.",
"We define the hierarchical label structure as a directed graph $G = (V^t, \vec{E}, \overleftarrow{E})$, where $V^t$ is the set of hierarchy structure nodes.",
"$\vec{E}$ is built from the top-down hierarchy paths, representing the prior statistical probabilities from parent nodes to child nodes.",
"$\overleftarrow{E}$ is built from the bottom-up hierarchy paths, representing the connections from child nodes to parent nodes.",
"Both graph adjacency matrices $\vec{E}$ and $\overleftarrow{E}$ are of size $\mathbb{R}^{k \times k}$, where $k$ is the size of the label set.",
"The text feature propagation module first projects the text features $T$ to node inputs $V^t$ by a linear transformation $W_{proj} \in \mathbb{R}^{(k \cdot d_{cnn}) \times d_t}$, where $d_t$ is the dimension of a hierarchy structure node derived from the text features.",
"Then a Graph Convolutional Network (GCN) is adopted to explicitly combine the text semantics with the prior hierarchical information $\vec{E}$ and $\overleftarrow{E}$: $S^t = \sigma(\vec{E} V^t W_{g1} + \overleftarrow{E} V^t W_{g2})$ (1), where $\sigma$ is the ReLU activation function and $W_{g1}, W_{g2} \in \mathbb{R}^{d_t \times d_t}$ are the weight matrices of the GCN.",
"$S^t$ is the text representation aware of the prior hierarchy paths.",
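Eq. (1), and its label-side counterpart Eq. (2) below, amount to one bidirectional GCN layer; a sketch with randomly filled adjacency matrices for shape-checking:

```python
import torch
import torch.nn as nn

class HierarchyGCN(nn.Module):
    """S = ReLU(E_td V W1 + E_bu V W2) over k x k top-down / bottom-up
    adjacency matrices, as in Eqs. (1)-(2)."""
    def __init__(self, dim, e_td, e_bu):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)
        self.w2 = nn.Linear(dim, dim, bias=False)
        self.register_buffer("e_td", e_td)   # parent -> child priors
        self.register_buffer("e_bu", e_bu)   # child -> parent links

    def forward(self, v):                    # v: (k, dim) node inputs
        return torch.relu(self.e_td @ self.w1(v) + self.e_bu @ self.w2(v))

k, d = 103, 300                              # e.g. RCV1-V2 label count
S = HierarchyGCN(d, torch.rand(k, k), torch.rand(k, k))(torch.randn(k, d))
```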
"In the HTC task, the hierarchical label structure can likewise be regarded as a directed graph $G = (V^l, \vec{E}, \overleftarrow{E})$, where $V^l$ is the set of hierarchy structure nodes with label representations.",
"This graph shares the same structure $\vec{E}$ and $\overleftarrow{E}$ with the graph in the text encoder.",
"Given the total label set $y = \{y_1, ..., y_k\}$ as input, we first create label embeddings $V^l$ of dimension $d_l$ by averaging pre-trained label embeddings.",
"A GCN is then utilized as the label encoder: $S^l = \sigma(\vec{E} V^l W_{g3} + \overleftarrow{E} V^l W_{g4})$ (2), where $\sigma$ is the ReLU activation function and $W_{g3}, W_{g4} \in \mathbb{R}^{d_l \times d_l}$ are the weight matrices of the GCN.",
"$S^l$ is the label representation aware of the prior hierarchy paths.",
"Note that the weight matrices and input representations of the label encoder differ from those of the text encoder.",
"In this section, we will introduce the methods of learning a text-label joint embedding and hierarchy-aware matching relationship.",
"For joint embedding learning, we first project the text semantics $S^t$ and label semantics $S^l$ into a common latent space as $\Phi^t = \mathrm{FFN}_t(S^t)$ (3) and $\Phi^l = \mathrm{FFN}_l(S^l)$ (4), where $\mathrm{FFN}_t$ and $\mathrm{FFN}_l$ are independent two-layer feedforward neural networks.",
"$\Phi^t, \Phi^l \in \mathbb{R}^{d_{\Phi}}$ represent the text semantics and label semantics in the joint embedding space, respectively, where $d_{\Phi}$ is the dimension of the joint embedding.",
"To align the two independent semantic representations in the latent space, we employ a mean squared loss between the text semantics and the target label semantics: $\mathcal{L}_{joint} = \sum_{p \in P(y)} \|\Phi^t - \Phi^l_p\|_2^2$ (5), where $P(y)$ is the set of target labels.",
"$\mathcal{L}_{joint}$ aims to minimize the joint-embedding distance between the input text and its target labels.",
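Eq. (5) in code, assuming a single text vector and a matrix of label vectors in the joint space; batching is omitted for brevity.

```python
import torch

def joint_embedding_loss(phi_text, phi_labels, target_idx):
    """Eq. (5): sum of squared L2 distances between the text semantics
    and each target label's semantics in the joint embedding.
    phi_text: (d,); phi_labels: (num_labels, d); target_idx: P(y)."""
    diff = phi_text.unsqueeze(0) - phi_labels[target_idx]
    return (diff ** 2).sum(dim=1).sum()
```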
"Based on the text-label joint embedding loss, the model only captures the correlations between text semantics and target labels semantics, while correlations among different granular labels are ignored.",
"In the HTC task, it is expected that the matching relationship between text semantics and fine-grained labels should be the closest, followed by coarse-grained labels.",
"Text semantics and incorrect labels semantics should not be related.",
"Insight of these, we propose a hierarchy-aware matching loss L match to incorporate the correlations among text semantics and different labels semantics.",
"$\mathcal{L}_{match}$ penalizes a small semantic distance between the text semantics and incorrect label semantics with a margin $\gamma$: $\mathcal{L}_{match} = \max(0, D(\Phi^t, \Phi^l_p) - D(\Phi^t, \Phi^l_n) + \gamma)$ (6), where $\Phi^l_p$ denotes target label semantics and $\Phi^l_n$ denotes incorrect label semantics.",
"We use the L2-normalized Euclidean distance as the metric $D$, and $\gamma$ is the margin constant of the margin-based triplet loss.",
"We take the average of the losses over all label pairs as the margin loss.",
"Hierarchy-aware Margin Due to the large label sets in the HTC task, it is time-consuming to calculate every label's matching loss.",
"Therefore, we propose hierarchy-aware sampling to alleviate the problem.",
"Specifically, for every fine-grained label we sample all its parent labels (coarse-grained labels), one sibling label, and one random incorrect label to obtain its negative label set $n \in N(y)$.",
"It is also unreasonable to assign the same margin for different label pairs since the label semantics similarity is quite different in a large structured label hierarchy.",
"Our basic idea is that the semantics relationship should be closer if two labels are closer in the hierarchical structure.",
"Firstly, the text semantics should match fine-grained labels the most, which is exploited in joint embedding learning.",
"Then we regard the pair with the smallest semantic distance ($d_1$) as the positive pair and regard the other text-label matching pairs as negative pairs.",
"As depicted in Figure 3, compared with the positive pair, the semantic matching distance between the text and coarse-grained target labels ($d_2$) should be larger.",
"The incorrect sibling labels have a certain semantic relationship with the target labels.",
"Hence, the semantic matching distance between the text and the incorrect sibling labels of fine-grained labels ($d_3$) should be larger still, while the semantic matching distance between the text and other incorrect labels ($d_4$) should be the largest.",
"We introduce hierarchy-aware penalty margins $\gamma_1, \gamma_2, \gamma_3, \gamma_4$ to model this comparable relationship.",
"The penalty margin is smaller where we expect the semantic matching distance to be smaller.",
"We neglect $\gamma_1$ because the matching relationship between text semantics and fine-grained labels is already exploited in joint embedding learning.",
"$\gamma_2, \gamma_3, \gamma_4$ are penalty margins relative to the matching relationship between text semantics and fine-grained label semantics.",
"We introduce two hyperparameters $\alpha, \beta$ to measure the different matching relationships of $\gamma$: $\gamma_2 = \alpha\gamma$, $\gamma_3 = \beta\gamma$, $\gamma_4 = \gamma$ (7), where $0 < \alpha < \beta < 1$.",
"The proposed loss captures the relative semantics similarity rankings among target labels and incorrect labels in a hierarchy-aware manner.",
"We find that the model overfits more easily if we perform classification learning directly in the text-label joint embedding.",
"Hence, we use the text semantics representation S t guided by joint embedding loss and matching learning loss to perform classification learning.",
"S t is fed into a fully connected layer to get the label probability y for prediction.",
"The overall objective function combines a cross-entropy classification loss, the joint embedding loss, and the hierarchy-aware matching loss: $\mathcal{L} = \mathcal{L}_{cls}(y, \hat{y}) + \lambda_1 \mathcal{L}_{joint} + \lambda_2 \mathcal{L}_{match}$ (8), where $y$ and $\hat{y}$ are the ground-truth labels and output probabilities, respectively.",
"Table 1: Statistics of the three datasets for hierarchical multi-label text classification.

| Dataset | \|L\| | Depth | Avg(\|L_i\|) | Train | Val | Test |
|---|---|---|---|---|---|---|
| RCV1-V2 | 103 | 4 | 3.24 | 20833 | 2316 | 781265 |
| WOS | 141 | 2 | 2 | 30070 | 7518 | 9397 |
| EURLEX-57K | 4271 | 5 | 5 | 45000 | 6000 | 6000 |",
"$\lambda_1, \lambda_2$ are hyperparameters balancing the joint embedding loss and the matching learning loss.",
"We minimize the above function by gradient descent during training.",
"Datasets To evaluate the effectiveness of our model, we conduct experiments on three widely-studied datasets for hierarchical multi-label text classification.",
"Statistics of these datasets are listed in Table 1. RCV1-V2 (Lewis et al., 2004) is a news categorization corpus, and WOS (Kowsari et al., 2017) includes abstracts of papers published on the Web of Science.",
"EURLEX57K is a large hierarchical multi-label text classification (LMTC) dataset that contains 57k English EU legislative documents, and is tagged with about 4.3k labels from the European Vocabulary (Chalkidis et al., 2019).",
"The label sets are split into zero-shot labels, few-shot labels, and frequent labels.",
"Few-shot labels are labels whose frequencies in the training set are less than or equal to 50.",
"Frequent labels are labels whose frequencies in the training set are more than 50.",
"The label setting is the same as previous work (Lu et al., 2020).",
"In EURLEX57K, the corpora are only tagged with fine-grained labels, and the parent labels of fine-grained labels are not tagged as the target labels.",
"Evaluation Metric On RCV1-V2 and WOS datasets, we measure the experimental results by Micro-F1 and Macro-F1.",
"Micro-F1 takes the overall precision and recall of all the instances into account, while Macro-F1 equals the average F1-score of labels.",
"We report the results of two ranking metrics on large hierarchical multi-label text classification dataset EURLEX-57K, including Re-call@5 and nDCG@5.",
"The ranking metrics are preferable for EURLEX-57K since it does not introduce a significant bias towards frequent labels (Lu et al., 2020).",
"Word embeddings are initialized with pre-trained GloVe vectors (Pennington et al., 2014).",
"We then use a one-layer BiGRU with hidden dimension 100, and set up the CNNs with 100 filters of kernel sizes [2, 3, 4].",
"The dimension of the text propagation feature and graph convolution weight matrix are both 300.",
"The hidden size of joint embedding is 200.",
"The matching margin $\gamma$ is set to 0.2 on the RCV1-V2 and WOS datasets, and to 0.5 on the EURLEX-57K dataset.",
"We set the hierarchy-aware penalty hyperparameters $\alpha$ and $\beta$ to 0.01 and 0.5, respectively.",
"The loss-balancing factors $\lambda_1, \lambda_2$ are set to 1. For fair comparison with previous work (Lu et al., 2020; Chalkidis et al., 2019) on the EURLEX-57K dataset, we first omit the CNN layer and the text feature propagation module.",
"Secondly, to adapt to the zero-shot settings, the prediction is generated by the dot product similarity between text semantics and label semantics.",
"Our model is optimized by Adam with a learning rate of 1e-4.",
"For the pretrained language model BERT (Devlin et al., 2018), we use the top-level representation $h_{CLS}$ of BERT's special CLS token to perform classification.",
"To combine our model with BERT, we replace the text encoder of HiMatch with BERT, and the label representations are initiated by pretrained BERT embedding.",
"The batch size is set to 16, and the learning rate is 2e-5.",
"Comparison Models On RCV1-V2 and WOS datasets, we compare our model with three types of strong baselines: 1) Text classification baselines: TextRCNN (Lai et al., 2015), TextRCNN with label attention (TextRCNN-LA) (Zhou et al., 2020), and SGM (Yang et al., 2018).",
"2) Hierarchy-aware models: HE-AGCRCNN (Peng et al., 2019), HMCN (Mao et al., 2019), Htrans (Banerjee et al., 2019), HiLAP-RL (Mao et al., 2019) which introduced reinforcement learning to simulate the assignment process, HiAGM (Zhou et al., 2020) which exploited the prior probability of label dependecies through Graph Convolution Network and TreeLSTM.",
"3) Pretrained language model: a more powerful pretrained language model BERT (Devlin et al., 2018) than tradition text classification models when fine-tuned on downstream tasks.",
"On EURLEX-57K dataset, we compare our model with strong baselines with/without zero-shot settings such as BIGRU-ATT, BIGRU-LWAN (Chalkidis et al., 2019) which introduced label-wise attention.",
"The models starting with ZERO make predictions by calculating similarity scores between text and label semantics for zero-shot settings.",
"AGRU-KAMG (Lu et al., 2020) is a state-of-the-art model which introduced various label knowledge.",
"Table 2, 3 and 4 report the performance of our approaches against other methods.",
"HiAGM is an effective baseline on RCV1-V2 and WOS due to the introduction of holistic label information.",
"However, they ignored the semantic relationship between text and labels.",
"Our model achieves the best results by capturing the matching relationships among text and labels in a hierarchy-aware manner, which achieves stronger performances especially on Macro-F1.",
"The improvements show that our model can make better use of structural information to help imbalanced HTC classification.",
"The pretrained language model BERT is an effective method when fine-tuned on downstream tasks.",
"Compared with the results obtained when regarding HTC as multiple binary classifiers, our results show that the full use of the structured label hierarchy brings large improvements to the BERT model on the RCV1-V2 and WOS datasets.",
"Table 4: Experimental results compared with other state-of-the-art models on the EURLEX-57K dataset.

| Model | Frequent R@5 | Frequent nDCG@5 | Few R@5 | Few nDCG@5 | Zero R@5 | Zero nDCG@5 | Overall R@5 | Overall nDCG@5 |
|---|---|---|---|---|---|---|---|---|
| BIGRU-ATT (Chalkidis et al., 2019) | 0.740 | 0.813 | 0.596 | 0.580 | 0.051 | 0.027 | 0.675 | 0.789 |
| BIGRU-LWAN (Chalkidis et al., 2019) | 0.755 | 0.819 | 0.661 | 0.618 | 0.029 | 0.019 | 0.692 | 0.796 |
| ZERO-CNN-LWAN (Chalkidis et al., 2019) | 0.683 | 0.745 | 0.494 | 0.454 | 0.321 | 0.264 | 0.617 | 0.717 |
| ZERO-BIGRU-LWAN (Chalkidis et al., 2019) | 0.716 | 0.780 | 0.560 | 0.510 | 0.438 | 0.345 | 0.648 | 0.752 |
| AGRU-KAMG (Lu et al., 2020) | 0.731 | 0.795 | 0.563 | 0.518 | 0.528 | 0.414 | 0.661 | 0.766 |
| HiMatch | 0.769 | 0.830 | 0.697 | 0.648 | 0.399 | 0.372 | 0.705 | 0.807 |",
"On EURLEX57K dataset, our model achieves the best results on different matrics except for zero-shot labels.",
"The largest improvements come from few-shot labels.",
"AGRU-KAMG achieves the best results on zero-shot labels by fusing various knowledge such as label semantics similarities and label co-occurrence.",
"However, our model performs semantics matching among seen labels based on training corpora, which is not designed for a specific zero-shot learning task.",
"In this section, we investigate to study the independent effect of each component in our proposed model.",
"Firstly, we validate the influence of two proposed losses, and the hierarchy-aware sampling.",
"The results are reported in Table 5, and show that F1 decreases when removing either the joint embedding loss or the matching learning loss.",
"Joint embedding loss has a great influence since label semantics matching relies on the joint embedding.",
"Besides, in the hierarchy-aware margin subsection, we perform hierarchy-aware sampling by sampling coarse-grained labels, incorrect sibling labels, and other incorrect labels as negative label sets.",
"When we remove hierarchy-aware sampling and replace it with random sampling, the results will decrease, which shows the effectiveness of hierarchy-aware sampling.",
"To study the influence of the hyperparameters $\gamma$, $\alpha$, and $\beta$, we conduct seven experiments on the RCV1-V2 dataset.",
"The results are reported in Table 6. The first experiment is the best hyperparameters of our model.",
"Then we fine-tune the matching learning margin $\gamma$ in experiments two and three.",
"Table 5: Ablation study on the RCV1-V2 dataset.

| Ablation model | Micro-F1 | Macro-F1 |
|---|---|---|
| TextRCNN | 81.57 | 59.25 |
| HiMatch | 84.73 | 64.11 |
| w/o Joint Embedding Loss | 84.49 | 62.57 |
| w/o Matching Learning Loss | 84.46 | 63.58 |
| w/o Hierarchy-aware Sampling | 84.67 | 63.45 |",
"We find that a proper margin $\gamma = 0.2$ is beneficial for matching learning, compared with a larger or smaller margin.",
"Furthermore, we validate the effectiveness of the hierarchy-aware margin.",
"In experiment four, the performance will decrease if we violate the hierarchical structure by setting a large penalty margin for coarse-grained labels, and setting a small penalty margin for incorrect sibling labels.",
"In experiment five, the performance has a relatively larger decrease if we set $\alpha = 1$ and $\beta = 1$, which ignores the hierarchical structure completely.",
"We speculate that the penalty margin that violates the hierarchical structure will affect the results, since the semantics relationship should be closer if the labels are closer in the hierarchical structure.",
"Moreover, we validate the effectiveness of different penalty margins among different granular labels.",
"In experiments six and seven, the results degrade if we ignore the relationship between coarse-grained target labels and incorrect sibling labels by setting the same margin for $\gamma_2$ and $\gamma_3$.",
"Therefore, it is necessary to set a small penalty margin for coarse-grained target labels, and a larger penalty margin for incorrect sibling labels.",
"We plot the t-SNE projection of the text representations and label representations in the joint embedding in Figure 4.",
"Figure 4(a) shows a part of the hierarchical label structure in RCV1-V2.",
"Labels C171 and C172 are fine-grained labels, and label C17 is their coarse-grained parent.",
"GWELF and E61 are other labels with semantics different from C17, C171 and C172.",
"In Figure 4(b), with the joint embedding loss, we can see that the text representations are close to their corresponding label representations.",
"Furthermore, the text representations of labels C171 and C172 are close to the label representation of their coarse-grained label C17.",
"However, the text representations of different labels may overlap, since the matching relationships among different labels are ignored.",
"In Figure",
"c), by introducing both joint embedding loss and matching learning loss, the text representations of different labels are more separable.",
"Other unrelated text representations and label representations such as labels GWELF, E61 are far away from C17, C171, C172.",
"Besides, the text representations of semantically similar labels (C171 and C172) are far away relatively compared with Figure",
"b).",
"The T-SNE visualization shows that our model can capture the semantics relationship among texts, coarse-grained labels, fine-grained labels and unrelated labels.",
"We analyze the performance with different label granularity based on their hierarchical levels.",
"We compute level-based Micro-F1 and Macro-F1 scores of the RCV1-V2 dataset on TextRCNN, HiAGM, and our model in Figure 5. On RCV1-V2 dataset, both the second and third hierarchical levels contain fine-grained labels (leaf nodes).",
"The second level has the largest number of labels and contains confusing labels with similar concepts, so its Micro-F1 is relatively low.",
"Both the second and third levels contain some long-tailed labels, so their Macro-F1 are relatively low.",
"Figure 5 shows that our model achieves a better performance than other models on all levels, especially among deep levels.",
"The results demonstrate that our model has a better ability to capture the hierarchical label semantic, especially on fine-grained labels with a complex hierarchical structure.",
"In this part, we compare the computational complexity between HiAGM and our model.",
"For time complexity, the training time of HiMatch is 1.11 times that of HiAGM with batch size 64.",
"For space complexity during training, HiMatch has 37.4M parameters, while HiAGM has 27.8M.",
"The increase mainly comes from the label encoder with large label sets.",
"However, during testing, the time and space complexity of HiMatch is the same as HiAGM.",
"The reason is that only the classification results are needed, and we can remove the joint embedding.",
"HiMatch achieves new state-of-the-art results, and we believe that the increase of computational complexity is acceptable.",
"Here we present a novel hierarchical text classification model called HiMatch that can capture semantic relationships among texts and labels at different abstraction levels.",
"Instead of treating HTC as multiple binary classification tasks, we consider the text-label semantics matching relationship and formulate it as a semantic matching problem.",
"We learn a joint semantic embedding between text and labels.",
"Finally, we propose a hierarchy-aware matching strategy to model different matching relationships among coarse-grained labels, fine-grained labels and incorrect labels.",
"In future work, we plan to extend our model to the zero-shot learning scenario.",
"We thank the anonymous reviewers for their helpful feedbacks.",
"The work described in this paper was partially funded by the National Natural Science Foundation of China (Grant No. 61502174, and 61872148), the Natural Science Foundation of Guangdong Province (Grant No. 2017A030313355, 2019A1515010768 and 2021A1515011496), the Guangzhou Science and Technology Planning Project (Grant No. 201704030051, and 201902010020), the Key R&D Program of Guangdong Province (No. 2018B010107002) and the Fundamental Research Funds for the Central Universities."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"objective",
"objective",
"other",
"other"
] |
[
"Predicting missing facts in a knowledge graph (KG) is crucial as modern KGs are far from complete.",
"Due to labor-intensive human labeling, this phenomenon deteriorates when handling knowledge represented in various languages.",
"In this paper, we explore multilingual KG completion, which leverages limited seed alignment as a bridge, to embrace the collective knowledge from multiple languages.",
"However, language alignment used in prior works is still not fully exploited: (1) alignment pairs are treated equally to maximally push parallel entities to be close, which ignores KG capacity inconsistency; (2) seed alignment is scarce and new alignment identification is usually in a noisily unsupervised manner.",
"To tackle these issues, we propose a novel self-supervised adaptive graph alignment (SS-AGA) method.",
"Specifically, SS-AGA fuses all KGs as a whole graph by regarding alignment as a new edge type.",
"As such, information propagation and noise influence across KGs can be adaptively controlled via relation-aware attention weights.",
"Meanwhile, SS-AGA features a new pair generator that dynamically captures potential alignment pairs in a self-supervised paradigm.",
"Extensive experiments on both the public multilingual DBPedia KG and newly-created industrial multilingual E-commerce KG empirically demonstrate the effectiveness of SS-AGA 1 .",
"Knowledge graphs (KGs) like Freebase (Bollacker et al., 2008) and DBPedia (Lehmann et al., 2015) are essential for various knowledge-driven applications such as question answering (Yasunaga et al., 2021) and commonsense reasoning (Lin et al., 2021).",
"A KG contains structured and semantic information among entities and relations, where prior Part of work was done during internship at Amazon; Corresponding author.",
"knowledge can be instantiated as factual triples (head entity, relation, tail entity), e.g., ( Apple Inc. , Founded by , Steven Jobs ).",
"As new facts are continually emerging, modern KGs are still far from being complete due to the high cost of human annotation, which spurs on the Knowledge Graph Completion (KGC) task to automatically predict missing triples to complete the knowledge graph.",
"The KG incompletion circumstance is exacerbated in the multilingual setting, as human annotations are rare and difficult to gather, especially for low-resource languages.",
"Unfortunately, most efforts for KGC have been devoted to learning each monolingual KG separately (Peng et al., 2021; Xu et al., 2021; Liang et al., 2021; Cao et al., 2021; Lovelace et al., 2021), which usually underperform in low-resource language KGs that suffer from the sparseness (Chen et al., 2017, 2020; Sun et al., 2020).",
"In contrast, KGs from multiple languages are not naturally isolated, which usually share some real-world entities and relations.",
"The transferable knowledge can be treated as a bridge to align different KGs, which not only facilitates the knowledge propagation to low-resource KGs but also alleviates 474 costly manual labeling for all languages.",
"In this paper, we explore multilingual KG completion (MKGC) (Chen et al., 2020) with limited seed alignment across languages.",
"To mitigate language gaps, some efforts have been initiated on multilingual KG embedding methods, which leverage a KG embedding module (e.g., TransE (Bordes et al., 2013)) to encode each language-specific KG independently and then employ an alignment loss to force pairs of aligned entities to be close maximally (Chen et al., 2020; Zhang et al., 2019; Sun et al., 2020).",
"However, such approaches mainly involve two limitations: (1) the KG inconsistency issue among different languages is neglected due to the equal treatment for parallel entities; (2) the scarcity of seed alignment hinders the efficient knowledge transfer across languages.",
"Concretely, prior methods treat all alignment pairs equally by forcing all parallel entities to be maximally close to each other (Chen et al., 2018; Sun et al., 2018; Chen et al., 2017).",
"This ignores potentially negative effects from the KG inconsistency due to the language diversity.",
"For example, as shown in Figure 1, the support English KG in DBP-5L (Chen et al., 2020) has much more enriched knowledge (80K facts) than the Greek one (13K facts).",
"In order to complete the query ( Apple Inc. , Founded by , ?) in the resource-poor Japanese KG (28K facts), we can transfer more knowledge from resource-rich English KG through the alignment link of Steven Jobs than that of the low-data Greek.",
"However, if roughly pushing Steven Jobs to be equally close to that English KG and Greek KG, the learned embeddings for Steven Jobs will be similar even though they have different structures, KG capacity, coverage and quality.",
"As such, it will bring in irrelevant information regarding this query and may cause the model to get the wrong answer.",
"Thus, we encourage the model to automatically distinguish the underlying inconsistency and transfer knowledge from suitable support KGs 2 for better language-specific KGC performance.",
"One the other hand, seed alignment is critical for cross-lingual transfer (Chen et al., 2020; Sun et al., 2020), while acquisition of such parallel entities across languages is costly and often noisy.",
"To mitigate such issue, some recent works (Chen et al., 2018, 2020) propose to generate new alignment pairs based on the entity embedding similarity dur-2 We regard the remaining KGs as the support KGs when conducting the KGC task in the target one.",
"ing the training process.",
"The generated new pairs can increase the inter-connectivity between KGs to facilitate knowledge transfer.",
"However, simple usage of correlations between entities without any supervision may increase the noise during training, and inhibit the effectiveness of realistic language alignment in KGs (Sun et al., 2020).",
"Motivated by these observations, we propose a S elfS upervised A daptive G raph A lignment ( SS-AGA ) framework for MKGC.",
"To tackle the knowledge inconsistency issue, SS-AGA regards alignment as a new edge type between parallel entities instead of a loss constrain, which fuses KGs from different languages as a whole graph.",
"Based on such unified modeling, we propose a novel GNN encoder with a relation-aware attention mechanism, which aggregates local neighborhood information with learnable attention weights and differs the influence received from multiple alignment pairs for the same entity as shown in Figure",
"1(b).",
"To alleviate the scarcity of seed alignment, SS-AGA exploits a new pair generator that iteratively identifies new alignment pairs in a self-supervised manner.",
"This is achieved by masking some seed alignment in the fused KG before GNN encoding and teaching the generation module to recover them.",
"Empirically, SS-AGA outperforms popular baselines in both public and industrial datasets.",
"For the public dataset, we use the multilingual DBPedia KG (Chen et al., 2020) and for the industrial dataset, we create a multilingual E-commerce Product KG called E-PKG.",
"Our contributions are as follows: (1) We handle the knowledge inconsistency issue for MKGC by treating entity alignment as a new edge type and introducing a relation-aware attention mechanism to control the knowledge propagation; (2) We propose a new alignment pair generation mechanism with self-supervision to alleviate the scarcity of seed alignment; (3) We constructed a new industrial-level multilingual E-commerce KG dataset; (4) Extensive experiments verify the effectiveness of SS-AGA in both public and industrial datasets.",
"A knowledge graph G = ( E , R , T ) consists of a set of entities E , relations R , and relational facts T = { ( e h , r, e t ) } , where e h , e t E are head and tail entities, and r R is a relation.",
"Entities and relations are represented by their text descriptions.",
"The 475 KGC Decoder () GNN '()*+ () e /0 1 e /0 2 e 3 0 1 e 4 0 1 6 7 1 e / 0 1 3 4 6 4 e 3 0 2 e 4 0 2 6 / e 9 0 2 KG 2 KG 1 e 6 0 2 e / 0 2 9 / 3 ;<=> e 3 0 1 e 4 0 1 e 6 0 1 e / 0 1 3 4 6 4 e 3 0 2 e 4 0 2 6 / e 9 0 2 KG 2 KG 1 e 6 0 2 e / 0 2 9 / 3",
"KG completion task seeks to impute the missing head or tail entity of a triple given the relation and the other entity.",
"Without loss of generality, we hereafter discuss the case of predicting missing tails, which we also refer to as a query q = ( e h , r, ? e t ) .",
"Multilingual KG completion (MKGC) utilizes KGs across multiple languages to achieve more accurate KG completion task on each individual KG (Chen et al., 2020).",
"Formally, we are given M different language-specific KGs as G 1 , G 2 , , GM , and only limited entity alignment pairs G i G j { ( e i , e j ) : e i E i , e j E j } between G i and G j .",
"We also call G i G j the seed alignment pairs to distinguish it from the new or pseudo alignment.",
"Each KGG i has their own relation set R i .",
"We denote the union of relation sets from all KGs as a unified relation set R = R 1 R 2 R M .",
"MKGC is related to but different from the entity alignment (EA) task (Cao et al., 2019; Sun et al., 2020).",
"In MKGC, seed alignment is not direct supervision while the auxiliary input features, all used in the training stage for cross-lingual transfer to boost the KGC results.",
"KG embedding models aim to learn latent low-dimensional representations for entities { e } e E and relations { r } r R .",
"A naive implementation is an embedding lookup table (Bordes et al., 2013; Sun et al., 2019).",
"Recently, Graph Neural Networks (GNN) have been explored to aggregate neighborhood information in KGs, where each triple is no longer considered independent of each other (Hao et al., 2019).",
"Mathematically, these methods employ a GNN-based encoder g that embeds entities considering the neighborhood information, { e } e E = g ( G ) .",
"We introduce SS-AGA for MKGC, consisting of two alternating training components",
"(a) and",
"(b) in Figure 2:",
"(a) A new alignment pair generation module for alleviating the limited seed alignment in G fuse .",
"Specifically, we mask some seed alignment in the fuse KG to obtain G Maskedfuse and train the generator g a ( ) to recover them.",
"Then, the trained generator will propose new edges based on the learned entity embeddings, which will be incorporated to G fuse as (cid:101) G fuse for MKG embedding model g k ( ) in the next iteration;",
"(b) A novel relation-aware MKG embedding model g k ( ) for addressing the knowledge inconsistency across multilingual KGs.",
"Specifically, we fuse different KGs as a 476 whole graph G fuse by treating alignment as a new edge type.",
"Then g k ( ) computes the contextualized embeddings for each node with learnable relation-aware attention weights that differ the influence received from multiple alignment pairs.",
"Finally, a KGC decoder f ( ) computes the triple scores.",
"As mentioned before, the knowledge transfer is inefficient in existing MKGC methods, as they encode each KG separately and transfer knowledge by forcing aligned entities to share the same embedding.",
"To handle the knowledge inconsistency, we first fuse all KGs as a whole, which relaxes the entity alignment to relational facts.",
"We then design an attention-based relation-aware GNN to learn the contextualized MKG embeddings for entities, which can differ the influence from multiple alignment sources with learnable attention weights.",
"Afterwards, we apply a KGC decoder on the contextualized embedding to get the triple scores for relational facts.",
"More specifically, we create the fused KG by preserving triples within each KG and converting each cross-KG alignment pair ( e i , e j ) to two relational facts ( e i , r align , e j ) and ( e j , r align , e i ) with the alignment edge as a newly introduced relation r align .",
"In this way, we enable direct message passing among entities from different KGs, where the attention weight can be learned automatically from data to differ the influence from multiple alignment pairs.",
"We denote the fused knowledge graph as G fuse = ( E fuse , R fuse , T fuse ) , where E fuse = (cid:83) Mi =1 E i , R fuse !",
"= ( (cid:83) Mi =1 R i ) { r align } and T fuse = ( (cid:83) Mi =1 T i ) ( (cid:83) i,j { ( e h , r align , e t ) : ( e h , e t ) or ( e t , e h ) G i G j } ) .",
"Given the fused KGG fuse , we propose an attention-based relation-aware GNN encoder g k ( ) to learn contextualized embeddings for entities following a multi-layer message passing architecture.",
"At the l -th layer of GNN, we first compute the relation-aware message delivered by the entity e i in a relational fact ( e i , r, e j ) as follows: h li ( r ) = Msg (cid:16) h li , r (cid:17) := W lv Concat( h li , r ) , where h li is the latent representation of e i at the l -th layer, Concat( , ) is the vector concatenation function, and W lv is a transformation matrix.",
"Then, we propose a relation-aware scaled dot product attention mechanism to characterize the importance of each entity's neighbor e i to itself e j , which is computed as follows: Att (cid:16) h li ( r ) , h lj (cid:17) = exp( rij ) (cid:80) ( e i ,r ) N ( e j ) exp (cid:16) ri j (cid:17) rij = (cid:16) W lk h li ( r ) (cid:17) T (cid:16) W lq h lj (cid:17) 1 d r , (1) where d is the dimension of the entity embeddings, W lk , W lq are two transformation matrices, and r is a learnable relation factor.",
"Different from the traditional attention mechanism (Velickovic et al., 2018; Bai et al., 2019), we introduce r to characterize the general significance of each relation r .",
"It is essential as not all the relationships contribute equally to the query entity.",
"We also remark that the neighborhood is bidirectional, i.e. N ( e j ) := { ( e i , r ) : ( e i , r, e j ) T fuse or ( e j , r, e i ) T fuse } as the tail entity will also influence the head entity.",
"We then update the hidden representation of entities by aggregating the message from their neighborhoods based on the attention score: h l +1 j = h lj + (cid:88) ( e i ,r ) N ( e j ) Att (cid:16) h li ( r ) , h lj (cid:17) h li ( r ) , where ( ) is a non-linear activation function, and the residual connection is used to improve the stability of GNN (He et al., 2015).",
"Finally, we stack L layers to aggregate information from multi-hop neighbors and obtain the contextualized embedding for each entity e j as: e j = h Lj .",
"Given the contextualized entity embeddings, the KGC decoder computes the triple score for each relational fact: f ( e h , r , e t ) .",
"The learning object is to minimize the following hinge loss: JK = (cid:88) ( eh,r,et ) T m ( eh ,r,et ) / T m m =1 ,...,M (cid:2) f (cid:0) e h , r , e t (cid:1) f ( e h , r , e t ) + (cid:3) + , (2) where > 0 is a positive margin, f is the KGC decoder, ( e h , r, e t ) is a negative sampled triple obtained by replacing either head or tail entity of the true triple ( e h , r, e t ) randomly by other entities in the same language-specific KG.",
"Remark 1. Our method views cross-KG alignment as a relation r align in the fused KG.",
"The knowledge transfer cross KGs is essentially conducted via the learnable attention weight r align ij , where e i and e j are connected through the relation r align .",
"Thanks to the power of GNN, r align ij differs the influence from multiple alignment sources, as 477 opposed to some existing models that simply force pairs of entities to be close to each other through a pre-defined alignment loss.",
"In this way, we properly conduct knowledge transfer among KGs with aware of their knowledge inconsistency.",
"Scalability issue.",
"Since we fuse all the M KGs as a whole, and duplicate edges for head entities, the scale of the graph G fuse would become very large.",
"We therefore employ a k -hop graph sampler that samples the k -hop neighbors for each node and compute their contextualized embeddings.",
"In multilingual KGs, we are only provided with limited seed alignment pairs to facilitate knowledge transfer, as they are expensive to obtain and even sometimes noisy (Sun et al., 2020).",
"To tackle such challenge, we propose a self-supervised new alignment pair generator.",
"In each iteration, the generator identifies new alignment pairs which will be fed into the GNN encoder g k ( ) to produce the contextualized entity embeddings in the next iteration.",
"The training of the generator is conducted in a self-supervised manner, where the generator is required to recover masked alignment pairs.",
"New Pair Generation (NPG) relies on two sets of entity embeddings: the structural embeddings and the textual embeddings.",
"The structural embeddings are obtained by another GNN encoder g a : { e a } e E fuse = g a ( G fuse ) , which shares the same architecture with g k ( ) in the relation-aware MKG Embedding model (Section 3.1).",
"The reason we employ two GNN encoders is that the set of embeddings that generate the best alignment results may differ from those that can best achieve the KG completion task.",
"The textual embeddings are obtained by entities' text description and mBERT: e text = mBERT( e ) .",
"mBERT is a multilingual pre-trained language model (Devlin et al., 2019) and is particularly attractive to the new alignment pair generation due to the following merits: (1) it captures rich semantic information of the text; (2) the pre-trained BERT embeddings are also aligned across different languages (Devlin et al., 2019; Sun et al., 2020).",
"We then model the pairwise similarity score between entity e i and e j as the maximum of the co-sine similarities of their structural embeddings and textual embeddings: sim( e i , e j ) = max (cid:0) cos (cid:0) e ai , e aj (cid:1) , cos (cid:0) e text i , e text j (cid:1)(cid:1) .",
"Then we introduce new alignment pairs if a pair of unaligned entities in two KGs are mutual nearest neighbors according to the cross-domain similarity local scaling (CSLS) measure (Conneau et al., 2018) as shown below, CSLS( e i , e j ) = 2sim( e i , e j ) s ( e i ) s ( e j ) subject to s ( e i ) = 1 K (cid:88) e i N ( e i ) sim ( e i , e i ) , where K is the number of each node's k-nearest neighbors.",
"CSLS is able to capture the sturctural similarity between pairs of entities.",
"The generated pairs are then utilized to update the graph structure of G fuse to (cid:101) G fuse in the next iteration, to alleviate the challenge of limited seed alignment.",
"Self-Supervised Learning (SSL) Similar to many existing works (Chen et al., 2020; Sun et al., 2020), the aforementioned NPG paradigm is unsupervised and may bring in unexpected noises.",
"Inspired by masked language modeling (Devlin et al., 2019) which captures contextual dependencies between tokens, we propose a self-supervised learning procedure to guide and denoise the new pair generation.",
"Specifically, we randomly mask out some alignment relational facts, T masked { ( e h , r, e t ) T fuse : r = r align } , and let the generator to recover them.",
"Such masked alignment recovery in KGs can automatically identify the underlying correlations for alignment neighbors and encourage the NPG to generate high-quality alignment pairs that are real existences but hide due to the limited seed alignment.",
"Given the fused KG with masked alignment G Maskedfuse = {E fuse , R fuse , T fuse / T masked } , the GNN encoder g a embeds the entities as { (cid:101) e } e E fuse = g a ( G Maskedfuse ) .",
"The GNN g a is then trained via minimizing the following hinge loss JA , JG i G j A = (cid:88) ( eh,et ) pij ( eh ,et ) nij (cid:2) (cid:101) e ah (cid:101) e at 2 (cid:101) e ah (cid:101) e at 2 + a (cid:3) + JA = (cid:88) 1 i<j MJG i G j A , (3) where pij = { ( e h E i , e t E j ) : ( e h , r align , e t ) T masked } is the masked alignment set, nij = { ( e h E i , e t E j ) : ( e h , e t ) / G i G j } is the unaligned entity pair set, and a > 0 is a positive margin.",
"( e h , e t ) is randomly sampled by replacing one of the entities in the positive entity pairs.",
"The overall loss function is the combination of the KG completion loss Eq.",
"(2) and the self-supervised alignment loss Eq.",
"(3) as shown below J = JK + JA , (4) where > 0 is a positive hyperparameter to bal-ance between the two losses.",
"We summarize the training process in Algorithm 1 of the Appendix.",
"We conduct experiments over two real-world datasets.",
"(i) DBP-5L (Chen et al., 2020) contains five language-specific KGs from DBpedia (Lehmann et al., 2015), i.e., English (EN), French (FR), Spanish (ES), Japanese (JA), Greek (EL).",
"As the original dataset only contains structural information, we additionally crawled the text information for these entities and relations based on the given URLs.",
"(ii) E-PKG is a new industrial multilingual E-commerce product KG dataset, which describes phone-related product information from an E-commerce platform across six different languages: English (EN), German (DE), French (FR), Japanese (JA), Spanish (ES), Italian (IT).",
"The statistics are shown in Table 1. The # Aligned Links for a specific KGG i denotes the number of alignment pairs where one of the aligned entities belong to that KG.",
"It is possible for an entity to have multiple alignment pairs across different KG sources.",
"For both datasets, we randomly split the facts in each KG into three parts: 60 % for training, 30 % for validation, and 10 % for testing.",
"Please refer to Appendix A for the details of E-PKG construction.",
"In the testing phase, given each query ( e h , r, ? e t ) we compute the plausibility scores f ( e h , r, (cid:101) e t ) for triples formed by each possible tail entity (cid:101) e t in the test candidate set and rank them.",
"We report the mean reciprocal ranks (MRR), accuracy (Hits@1) and the proportion of correct answers ranked within the top 10 (Hits@10) for testing.",
"We also adopt the filtered setting following previous works based on the premise that the candidate space has excluded the triples that have been seen in the training set (Wang et al., 2014a; Yang et al., 2015a).",
"Monolingual Baselines.",
"(i) TransE (Bordes et al., 2013) models relations as translations in the Euclidean space;",
"(ii) RotatE (Sun et al., 2019) models relations as rotations in the complex space;",
"(iii) DisMult (Yang et al., 2015b) uses a simple bilinear formulation;",
"(iv) KG-BERT (Yao et al., 2020) employs pre-trained language models for knowledge graph completion based on text information of relations and entities.",
"Multilingual Baselines.",
"(i) KEnS (Chen et al., 2020) embeds all KGs in a unified space and exploits an ensemble technique to conduct knowledge transfer;",
"(ii) CG-MuA (Zhu et al., 2020) is a GNN-based KG alignment model with collective aggregation.",
"We revise its loss function to conduct MKGC.",
"(iii) AlignKGC (Singh et al., 2021) jointly trains the KGC loss with entity and relation alignment losses.",
"For fair comparison, we use mBERT (De-vlin et al., 2019) to obtain initial embeddings of entities and relations from their text for all methods.",
"We do not employ any pretrained tasks such as EA to obtain these initial text embeddings as in (Singh et al., 2021).",
"The main results are shown in Table 2 and Table 3. Firstly, by comparing multilingual and monolingual KG models, we can observe that multilingual methods can achieve better performance.",
"This indicates that the intuition behind utilizing multiple KG sources to conduct KG completion is indeed beneficial, compared with inferring each KG independently.",
"Notably, multilingual models tend 479 Method Metric EL JA ES FR EN Monolingual Baselines TransE H@1 13.1 21.1 13.5 17.5 7.3 H@10 43.7 48.5 45.0 48.8 29.3 MRR 24.3 25.3 24.4 27.6 16.9 RotatE H@1 14.5 26.4 21.2 23.2 12.3 H@10 36.2 60.2 53.9 55.5 30.4 MRR 26.2 39.8 33.8 35.1 20.7 DisMult H@1 8.9 9.3 7.4 6.1 8.8 H@10 11.3 27.5 22.4 23.8 30.0 MRR 9.8 15.8 13.2 14.5 18.3 KG-BERT H@1 17.3 26.9 21.9 23.5 12.9 H@10 40.1 59.8 54.1 55.9 31.9 MRR 27.3 38.7 34.0 35.4 21.0 Multilingual Baselines KenS H@1 28.1 32.1 23.6 25.5 15.1 H@10 56.9 65.3 60.1 62.9 39.8 MRR ---CG-MuA H@1 21.5 27.3 22.3 24.2 13.1 H@10 44.8 61.1 55.4 57.1 33.5 MRR 32.8 40.1 34.3 36.1 22.2 AlignKGC H@1 27.6 31.6 24.2 24.1 15.5 H@10 56.3 64.3 60.9 62.3 39.2 MRR 33.8 41.6 35.1 37.4 22.3 SS-AGA H@1 30.8 34.6 25.5 27.1 16.3 H@10 58.6 66.9 61.9 65.5 41.3 MRR 35.3 42.9 36.6 38.4 23.1 Table 2: Main results on DBP-5L.",
"to bring larger performance gains for those low-resource KGs such as Greek in DBP-5L, which is expected as low-resource KGs are far from complete and efficient external knowledge transfer can bring in potential benefits.",
"Among multilingual models, our proposed method SS-AGA can achieve better performance in most cases across different metrics, languages, and datasets, which verifies the effectiveness of SS-AGA.",
"To evaluate the effectiveness of our model design, we conduct ablation study by proposing the following model variants:",
"(i) GNN applies the GNN encoder without relation modeling to each KG independently, and directly forces all alignment pairs to be close to each other as in prior works (Chen et al., 2020; Zhu et al., 2020);",
"(ii) R-GNN is the proposed relation-aware MKG embedding model (Section 3.1), which utilizes all seed alignment to construct G fused and differs the influence from other KGs by the relation-aware attention mechanism;",
"(iii) R-GNN + NPG conducts additional new pair generation for R-GNN;",
"(iv) R-GNN + NPG + SSL is our proposed full model SS-AGA, which leverages SSL to guide the NPG process.",
"We also investigate the effect of whether to share or not share the Method Metric EN DE FR JA ES IT Monolingual Baselines TransE H@1 23.2 21.2 20.8 25.1 17.2 22.0 H@10 67.5 65.5 66.9 72.7 58.4 63.8 MRR 39.4 37.4 37.5 43.6 33.0 37.8 RotatE H@1 24.2 22.3 22.1 26.3 18.3 22.5 H@10 66.8 64.3 67.1 71.9 58.9 64.0 MRR 40.0 38.2 38.0 41.8 33.7 38.1 DisMult H@1 23.8 21.4 20.7 25.9 17.9 22.8 H@10 60.1 54.5 53.5 62.6 46.2 51.8 MRR 37.2 35.4 35.1 38.0 30.9 34.8 KG-BERT H@1 24.3 21.8 22.3 26.9 18.7 22.9 H@10 66.4 64.7 67.2 72.4 58.8 63.7 MRR 39.6 38.4 38.3 44.1 33.2 37.2 Multilingual Baselines KenS H@1 26.2 24.3 25.4 33.5 21.3 25.1 H@10 69.5 65.8 68.2 73.6 59.5 64.6 MRR ---CG-MuA H@1 24.8 22.9 23.0 30.4 19.2 23.9 H@10 67.9 64.9 67.5 72.9 58.8 63.8 MRR 40.2 38.7 39.1 45.9 33.8 37.6 AlignKGC H@1 25.6 22.1 22.8 31.2 19.4 24.2 H@10 68.3 65.1 67.2 72.3 59.1 63.4 MRR 40.5 38.5 38.8 46.2 34.2 37.3 SS-AGA H@1 26.7 24.6 25.9 33.9 21.0 24.9 H@10 69.8 66.3 68.7 74.1 60.1 63.8 MRR 41.5 39.4 40.2 48.3 36.3 38.4 Table 3: Main results on E-PKG.",
"for the SSL and KGC loss, respectively.",
"We report the average Hits@1, Hits@10 and MRR over DBP-5L as shown in Table 4. As we can see, applying a GNN encoder to each KG independently would cause the performance drop as all aligned entities are being equally forced to be close to each other.",
"Removing the new pair generation process would also cause a performance degradation due to the sparsity of seed alignment, which shows that iteratively proposing new alignment is indeed helpful.",
"If the generation process is further equipped with supervision, the performance would be enhanced, which verifies the effectiveness of the self-supervised alignment loss.",
"Finally, sharing the parameters of two GNN encoders would harm the performance.",
"Though MKGC and entity alignment are two close-related tasks that can potentially benefit each other, the set of embeddings that produce the best alignment result do not necessarily yield the best performance on the MKGC task.",
"We next study the effect of seed alignment number as depicted in Figure 3. Firstly, we can observe that SS-AGA consistently outperforms other multilingual models on varying alignment ratios.",
"Secondly, 480 0.2 0.4 0.6 0.8 1.0 Japanese KG 59 60 61 62 63 64 65 66 67 H i t s @ 10 SG-KGEKEnSCG_MuAlign 0.2 0.4 0.6 0.8 1.0 Greek KG 42.5 45.0 47.5 50.0 52.5 55.0 57.5 H i t s @ 10 SG-KGEKEnSCG_MuAlign 0.2 0.4 0.6 0.8 1.0 French KG 54 56 58 60 62 64 66 H i t s @ 10 SG-KGEKEnSCG_MuAlign 0.2 0.4 0.6 0.8 1.0 Spanish KG 54 56 58 60 62 H i t s @ 10 SG-KGEKEnSCG_MuAlign 0.2 0.4 0.6 0.8 1.0 English KG 30 32 34 36 38 40 42 H i t s @ 10 SG-KGEKEnSCG_MuAlign Figure 3: Hits @ 10 with respect to different sampling ratio of seed alignment pairs.",
"for low-resources KGs such as Japanese and Greek KGs, we can observe a sharp performance drop when decreasing the alignment ratio compared with those popular KGs such as English KG.",
"This indicates that the knowledge transfer among different KGs is especially beneficial for those low-resources KGs, as popular KGs already contain relatively rich knowledge.",
"However, such transfer process is heavily dependent on the seed alignment, which yields the necessity of new alignment generation process.",
"To interpret the knowledge transfer across different KGs, we visualize the normalized average attention weight for each KG w.r.t. the attention score computed in Eq.",
"(1) from different KG sources.",
"We can see that for those popular KGs, they will receive the highest attention score from themselves such as English and French KGs.",
"Although Japanese KG is low-resource, from the main results table 2, we can see that the gap improvement brought by multilingual methods is relatively small compared to another low-resource Greek KG.",
"This indicates that Japanese KG may contain more reliable facts to facilitate missing triple predictions.",
"However, for Greek KG, we can observe that the attention weights from other languages take the majority, which means that the performance boost in Greek KG is largely attributed to the efficient knowledge transfer from other KG sources.",
"Knowledge graph embeddings (Bordes et al., 2013; Sun et al., 2019; Con, 2018) achieve the state-of-the-art",
"state-of-the-art performance for KGC, which learn the latent low-dimensional representations of entities and relations.",
"They measure triple plausibility based on varying score functions such as translation-based TransE (Bordes et al., 2013), TransH (Wang et al., 2014b); rotation-based RotatE (Sun et al., 2019) and language-model-based KG-BERT (Yao et al., 2020).",
"Recently, GNN-based methods (Li et al., 2019; Zhang et al., 2020; Javari et al., 2020) have been proposed to capture node neighborhood information for the KGC tasks.",
"GNN is a class of neural networks that operate on graph-structured data by passing local messages (Kipf and Welling, 2017; Velickovic et al., 2018; Xu et al., 2019; Bai et al., 2019; Huang et al., 2020, 2021; Wang et al., 2021).",
"Specifically, they use GNN as an encoder to generate contextualized representation of entities by passing local messages (Kipf and Welling, 2017; Velickovic et al., 2018; Xu et al., 2019; Bai et al., 2019; Huang et al., 2020, 2021).",
"Then, existing score functions are employed to generate triple scores which outperform the aforementioned methods that treat each triple independently only with the scoring function.",
"Multilingual KG embeddings are extensions of monolingual KG embeddings that consider knowledge transfer across KGs with the use of limited seed alignment (Sun et al., 2020; Singh et al., 2021).",
"Earlier work proposes different ways to reconcile KG embeddings for the entity alignment (EA) task: MTransE (Chen et al., 2017) learns a transformation matrix between pairs of KGs.",
"MuGNN (Cao et al., 2019) reconciles structural differences via rule grounding.",
"CG-MuA utilizes collective aggregation of confident neighborhood (Zhu et al., 2020).",
"Others incorporate attribute information such as entity text (Zhang et al., 2019; Chen et al., 2018).",
"To tackle the sparsity of seed alignment, BootEA (Sun et al., 2018) iteratively proposes new aligned pairs via bootstrapping.",
"Zhu et al. (2017) utilizes parameter sharing to improve alignment performance.",
"While they focus 481 T a r g e t KG Support KG Figure 4: Average attention weight learned in DBP-5L.",
"on the EA task rather than the MKGC task that we tackle here, such techniques can be leveraged to conduct knowledge transfer among KGs.",
"Recently, Chen et al. (2020) propose an ensemble-based approach for the MKGC task.",
"In this paper, we view alignment as a new edge type and employ a relation-aware GNN to get the contextualized representation of entities.",
"As such, the influence of the aligned entities is captured by the learnable attention weight, instead of assuming each alignment pair to have the same impact.",
"We also propose a self-supervised learning task to propose new alignment pairs during each training epoch to overcome the sparsity issue of seed alignment pairs.",
"In this paper, we propose SS-AGA for multilingual knowledge graph completion (MKGC).",
"It addresses the knowledge inconsistency issue by fusing all KGs and utilizing a GNN encoder to learn entity embeddings with learnable attention weights that differs the influence from multiple alignment sources.",
"It features a new pair generation conducted in a self-supervised learning manner to tackle the limited seed alignment issue.",
"Extensive results on two real-world datasets including a newly-created E-commerce dataset verified the effectiveness of SS-AGA.",
"Our current approach may fail to fully exploit the benefit of entity and relation texts.",
"In the future, we plan to study more effective ways to combine text data with graph data for better model performance.",
"We are also interested in studying MKGC where there no alignment pairs are given, which is a very practical setting and our current model is not able to deal with.",
"Our paper proposed SS-AGA, a novel multilingual knowledge graph completion model for predicting missing triples in KGs considering their",
"knowledge transfer.",
"SS-AGA neither introduces any social/ethical bias to the model nor amplifies any bias in the data.",
"We the created multilingual E-commerce product KG dataset by masking all customers'/sellers' identity and privacy.",
"We only collect information related to products without any personal information leakage.",
"Our model is built upon public libraries in Pytorch.",
"We do not foresee any direct social consequences or ethical issues."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain"
] |
[
"This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task.",
"We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowingswords from one language that are introduced into another without orthographic adaptationand use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform.",
"The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task.",
"Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model.",
"Lexical borrowing is the process of bringing words from one language into another (Haugen, 1950).",
"Borrowings are a common source of out-of-vocabulary (OOV) words, and the task of detecting borrowings has proven to be useful both for lexicographic purposes and for NLP downstream tasks such as parsing (Alex, 2008a), text-to-speech synthesis (Leidig et al., 2014), and machine translation (Tsvetkov and Dyer, 2016).",
"Recent work has approached the problem of extracting lexical borrowings in European languages such as German (Alex, 2008b; Garley and Hocken-maier, 2012; Leidig et al., 2014), Italian (Furiassi and Hofland, 2007), French (Alex, 2008a; Ches-ley, 2010), Finnish (Mansikkaniemi and Kurimo, 2012), Norwegian (Andersen, 2012; Losnegaard and Lyse, 2012), and Spanish (Serigos, 2017), with a particular focus on English lexical borrowings (often called anglicisms ).",
"Computational approaches to mixed-language data have traditionally framed the task of identifying the language of a word as a tagging problem, where every word in the sequence receives a language tag (Lignos and Marcus, 2013; Molina et al., 2016; Solorio et al., 2014).",
"As lexical borrowings can be single (e.g. app , online , smartphone ) or multi-token (e.g. machine learning ), they are a natural fit for chunking-style approaches.",
"lvarez Mellado (2020b) introduced chunking-based models for borrowing detection in Spanish media which were later improved (lvarez Mellado, 2020a), producing an F1 score of 86.41.",
"However, both the dataset and modeling approach used by lvarez Mellado (2020a) had significant limitations.",
"The dataset focused exclusively on a single source of news and consisted only of headlines.",
"The number and variety of borrowings were limited, and there was a significant overlap in borrowings between the training set and the test set, which prevented assessment of whether the modeling approach was actually capable of generalizing to previously unseen borrowings.",
"Additionally, the best results were obtained by a CRF model, and more sophisticated approaches were not explored.",
"The contributions of this paper are a new corpus of Spanish annotated with unassimilated lexical borrowings and a detailed analysis of the performance of several sequence-labeling models trained on this corpus.",
"The models include a CRF, Transformer-based models, and a BiLSTM-CRF with different word and subword embeddings (in-cluding contextualized embeddings, BPE embeddings, character embeddings, and embeddings pretrained on codeswitched data).",
"The corpus contains 370,000 tokens and is larger and more topic-varied than previous resources.",
"The test set was designed to be as difficult as possible; it covers sources and dates not seen in the training set, includes a high number of OOV words (92% of the borrowings in the test set are OOV) and is very borrowing-dense 3868 Media Topics Set(s) ElDiario.es General newspaper Train, Dev.",
"The dataset we present is publicly available 1 and has been released under a CC BY-NC-SA-4.0 license.",
"This dataset was used for the ADoBo shared task on automatic detection of borrowings at IberLEF 2021 (lvarez Mellado et al., 2021).",
"Linguistic borrowing can be defined as the transference of linguistic elements between two languages.",
"Borrowing and codeswitching have been described as a continuum (Clyne et al., 2003).",
"Lexical borrowing involves the incorporation of single lexical units from one language into another language and is usually accompanied by morphological and phonological modification to conform with the patterns of the recipient language (Onysko, 2007; Poplack et al., 1988).",
"On the other hand, codeswitches are by definition not integrated into a recipient language, unlike established loanwords (Poplack, 2012).",
"While codeswitches require a substantial level of fluency, comply with grammatical restrictions in both languages, and are produced by bilingual speakers in bilingual discourses, lexical borrowings are words used by monolingual individuals that eventually 1 https://github.com/lirondos/coalas Set Tokens ENG OTHER Unique Training 231,126 1,493 28 380 Development 82,578 306 49 316 Test 58,997 1,239 46 987 Total 372,701 3,038 123 1,683 Table 2: Corpus splits with counts become lexicalized and assimilated as part of the recipient language lexicon until the knowledge of foreign disappears (Lipski, 2005).",
"Our dataset consists of Spanish newswire annotated for unassimilated lexical borrowings.",
"All of the sources used are European Spanish online publications (newspapers, blogs, and news sites) published in Spain and written in European Spanish.",
"Data was collected separately for the training, development, and test sets to ensure minimal overlap in borrowings, topics, and time periods.",
"The training set consists of a collection of articles appearing between August and December 2020 in elDiario.es , a progressive online newspaper based in Spain.",
"The development set contains sentences in articles from January 2021 from the same source.",
"The data in the test set consisted of annotated sentences extracted in February and March 2021 from a diverse collection of online Spanish media that covers specialized topics rich in lexical borrowings and usually not covered by elDiario.es , such as sports, gossip or videogames (see Table 1).",
"To focus annotation efforts for the training set on articles likely to contain unassimilated borrowings, the articles to be annotated were selected by first using a baseline model and were then human-annotated.",
"To detect potential borrowings, the CRF model and data from lvarez Mellado (2020b) was used along with a dictionary look-up pipeline.",
"Articles that contained more than 5 borrowing candidates were selected for annotation.",
"The main goal of data selection for the development and test sets was to create borrowing-dense, OOV-rich datasets, allowing for better assessment of generalization.",
"To that end, the annotation was based on sentences instead of full articles.",
"If a sentence contained a word either flagged as a borrowing by the CRF model, contained in a wordlist of English, or simply not present in the training set, it was selected for annotation.",
"This data selection approach ensured a high number of borrowings and 3869 OOV words, both borrowings and non-borrowings.",
"While the training set contains 6 borrowings per 1,000 tokens, the test set contains 20 borrowings per 1,000 tokens.",
"Additionally, 90% of the unique borrowings in the development set were OOV (not present in training).",
"92% of the borrowings in the test set did not appear in training (see Table 2).",
"The corpus was annotated with BIO encoding using Doccano (Nakayama et al., 2018) by a native speaker of Spanish with a background in linguistic annotation (see Appendix C).",
"The annotation guidelines (provided in Appendix B) were based on those of lvarez Mellado (2020a) but were expanded to account for a wider diversity of topics.",
"Following Serigos's observations and lvarez Mel-lado's work, English lexical borrowings were labeled ENG , other borrowings were labeled OTHER .",
"Here is an example from the training set: 2 Benching [ ENG ], estar en el banquillo de tu crush [ ENG ] mientras otro juega de titular.",
"In order to assess the quality of the guidelines and the annotation, a sample of 9,110 tokens from 450 sentences (60% from the test set, 20% from training, 20% from development) was divided among a group of 9 linguists for double annotation.",
"The mean inter-annotation agreement computed by Cohen's kappa was 0.91, which is above the 0.8 threshold of reliable annotation (Artstein and Poesio, 2008).",
"Like all resources, this resource has significant limitations that its users should be aware of.",
"The corpus consists exclusively of news published in Spain and written in European Spanish.",
"This fact by no means implies the assumption that European Spanish represents the whole of the Spanish language.",
"The notion of assimilation is usage-based and community-dependant, and thus the dataset we present and the annotation guidelines that were followed were designed to capture a very specific phenomena at a given time and in a given place: unassimilated borrowings in the Spanish press.",
"In order to establish whether a given word has been assimilated or not, the annotation guidelines rely on lexicographic sources such as the prescriptivist Diccionario de la Lengua Espaola (Real Academia Espaola, 2020) by the Royal 2 Benching: being on your crush's bench while someone else plays in the starting lineup.",
"Spanish Academy, a dictionary that aims to cover world-wide Spanish but whose Spain-centric criteria has been previously pointed out (Blanch, 1995; Fernndez Gordillo, 2014).",
"In addition, prior work has suggested that Spanish from Spain may have a higher tendency of anglicism-usage than other Spanish dialects (McClelland, 2021).",
"Consequently, we limit the scope of the dataset to European Spanish not because we consider that this variety represents the whole of the Spanish-speaking community, but because we consider that the approach we have taken here may not account adequately for the whole diversity in borrowing assimilation within the Spanish-speaking world.",
"Our annotation is licensed with a permissive CC BY-NC-SA-4.0 license.",
"With one exception, the sources included in our dataset release their content under Creative Commons licenses that allow for reusing and redistributing the material for non commercial purposes.",
"This was a major point when deciding which news sites would be included in the dataset.",
"The exception is the source Microsiervos , whose content we use with explicit permission from the copyright holder.",
"Our annotation is stand-off annotation that does not create a derivative work under Creative Commons licenses, so ND licenses are not a problem for our resource.",
"Table 9 in Appendix A lists the URL and license type for each source.",
"The corpus was used to evaluate four types of models for borrowing extraction: (1) a CRF model, (2) two Transformer-based models, (3) a BiLSTM-CRF model with different types of unadapted embeddings (word, BPE, and character embeddings) and (4) a BiLSTM-CRF model with previously fine-3870",
"tuned Transformer-based embeddings pretrained on codeswitched data.",
"By unadapted embeddings, we mean embeddings that have not been fine-tuned for the task of anglicism detection or a related task (e.g. codeswitching).",
"Evaluation for all models required extracted spans to match the annotation exactly in span and type to be correct.",
"Evaluation was performed with SeqScore (Palen-Michel et al., 2021), using conlleval -style repair for invalid label sequences.",
"All models were trained using an AMD 2990WX CPU and a single RTX 2080 Ti GPU.",
"As baseline model, we evaluated a CRF model with handcrafted features from lvarez Mellado (2020b).",
"The model was built using pycrfsuite (Peng and Korobov, 2014), a Python wrapper for crfsuite (Okazaki, 2007) that implements CRF for labeling sequential data.",
"The model also uses the Token and Span utilities from spaCy library (Honnibal and Montani, 2017).",
"The following handcrafted binary features from lvarez Mellado (2020b) were used for the model: Bias: active on all tokens to set per-class bias Token: the string of the token Uppercase: active if the token is all uppercase Titlecase: active if only the first character of the token is capitalized Character trigram: an active feature for every trigram contained in the token Quotation: active if the token is any type of quotation mark ( ' \" ) Suffix: last three characters of the token POS tag: part-of-speech tag of the token provided by spaCy utilities Word shape: shape representation of the token provided by spaCy utilities Word embedding: provided by Spanish word2vec 300 dimensional embeddings by Cardellino (2019), one feature per dimension URL: active if the token could be validated as a URL according to spaCy utilities Email: active if the token could be validated as an email address by spaCy utilities Twitter: active if the token could be validated as a possible Twitter special token: #hashtag or @username A window of two tokens in each direction was used for feature extraction.",
"Optimization was performed using L-BFGS, with the following hyper-parameter values chosen following the best results from lvarez Mellado (2020b) were set: c1 = 0 .",
"05 , c2 = 0 .",
"01 .",
"As shown in Table 3, the CRF produced an overall F1 score of 66.15 on the development set (P: 74.13, R: 59.72) and an overall F1 of 55.44 (P: 77.89, R: 43.04) on the test set.",
"The CRF results on our dataset are far below the F1 of 86.41 reported by lvarez Mellado (2020b), showing the impact that a topically-diverse, OOV-rich dataset can have, especially on test set recall.",
"These results demonstrate that we have created a more difficult task and motivate using more sophisticated models.",
"model trained for Spanish (Caete et al., 2020) mBERT: multilingual BERT, trained on Wikipedia in 104 languages (Devlin et al., 2019)",
"Both models were run using the Transformers library by HuggingFace (Wolf et al., 2020).",
"The same default hyperparameters were used for both models: 3 epochs, batch size 16, and maximum sequence length 256.",
"Except where otherwise specified, we report results for 10 runs that use different random seeds for initialization.",
"We perform statistical significance testing using the Wilcoxon rank-sum test.",
"(F1: 82.02) performed better than BETO (F1: 80.71), and the difference was statistically significant ( p = 0 . 027 ).",
"Both models performed better on the test set than on the development set, despite the difference in topics between them, suggesting good generalization.",
"This is a remarkable difference with the CRF results, where the CRF performed substantially worse on the test set than on the development set.",
"The limited number of OTHER examples explains the high deviations in the results for this label.",
"We explored several possibilities for a BiLSTM-CRF model fed with different types of word and subword embeddings.",
"The purpose was to assess whether the combination of different embeddings that encode different linguistic information could outperform the Transformer-based models in Section 3.2.",
"All of our BiLSTM-CRF models were built using Flair (Akbik et al., 2018) with default hyperparameters (hidden size = 256, learning rate = 0.1, mini batch size = 32, max number of epochs = 150) and embeddings provided by Flair .",
"We first ran exploratory experiments on the development set with different types of embeddings using Flair tuning functionalities.",
"We explored the following embeddings: Transformer embeddings (mBERT and BETO), fastText embeddings (Bo-janowski et al., 2017), one-hot embeddings, byte pair embeddings (Heinzerling and Strube, 2018), and character embeddings (Lample et al., 2016).",
"The best results were obtained by a combination of mBERT embeddings and character embeddings (F1: 74.00), followed by a combination of BETO embeddings and character embeddings (F1: 72.09).",
"These results show that using contextualized embeddings unsurprisingly outperforms non-contextualized embeddings for this task, and that subword representation is important for the task of extracting borrowings that have not been adapted orthographically.",
"The finding regarding the importance of subwords is consistent with previous work; feature ablation experiments for borrowing detection have shown that character trigram features contributed the most to the results obtained by a CRF model (lvarez Mellado, 2020b).",
"The worst result (F1: 39.21) was produced by a model fed with one-hot vectors, and the second-worst result was produced by a model fed exclusively with character embeddings.",
"While it performed poorly (F1: 41.65), this fully unlexicalized model outperformed one-hot embeddings, reinforcing the importance of subword information for the task of unassimilated borrowing extraction.",
"In light of the preliminary embedding experiments and our earlier experiments with Transformer-based models, we fed our BiLSTM-CRF model with different combinations of contextualized word embeddings (including English BERT embeddings from Devlin et al.), byte-pair embeddings and character embeddings.",
"Table 5 shows development set results from different combinations of embeddings.",
"The best overall F1 on the development set was obtained by the combination of BETO embeddings, BERT embeddings and byte-pair embeddings.",
"The model fed with BETO embeddings, BERT embeddings, byte-pair embeddings and character embeddings ranked second.",
"Several things stand out from the results in Table 5.",
"The BETO+BERT embedding combination consistently works better than mBERT embeddings, and BPE embeddings contribute to better results.",
"Character embeddings, however, seem to produce little effect at first glance.",
"Given the same model, adding character embeddings produced little changes in F1 or even slightly hurt the results.",
"Although character embeddings seem to make little difference in overall F1, recall was consistently higher in models that included character embeddings, and in fact, the model with BETO+BERT embeddings, BPE embeddings and character embeddings produced the highest recall overall (77.46).",
"This is an interesting finding, as our results from Sections 3.1 and 3.2 as well as prior work (lvarez Mellado, 2020b) identified recall as weak for borrowing detection models.",
"The two best-performing models from Table 5 (BETO+BERT embeddings, BPE embeddings and optionally character embeddings) were evaluated on the test set.",
"Table 6 gives results per type on the development and test sets for these two models.",
"For both models, results on the test set were better (F1: 82.92, F1: 83.63) than on the development set (F1: 81.21, F1: 81.05).",
"Although the best F1 score on the development set was obtained with no character embeddings, when run on the test set the model with character embeddings obtained the best score; however, these differences did not show to be statistically significant.",
"Recall, on the other hand, 3872 Word embedding BPE embedding Char embedding Precision Recall F1 mBERT -82 .",
"was higher when run with character embeddings (R: 78.34) than when run without them (R: 76.89), and the difference was statistically significant ( p = 0 . 019 ).",
"This finding again corroborates the positive impact that character information can have in recall when dealing with previously unseen borrowings.",
"Finally, we decided to explore whether detecting unassimilated lexical borrowings could be framed as transfer learning from language identification in codeswitching.",
"As before, we ran a BiLSTM-CRF model using Flair , but instead of using the unadapted Transformer embeddings, we used codeswitch embeddings (Sarker, 2020), fine-tuned Transformer-based embeddings pretrained for language identification on the Spanish-English section of the LinCE dataset (Aguilar et al., 2020).",
"Table 7 gives results for these models.",
"The two best-performing models were the BiLSTM-CRF with codeswitch and BPE embeddings (F1: 84.06) and the BiLSTM-CRF model with codeswitch, BPE and character embeddings (F1: 84.22).",
"The differences between these two models did not show to be statistically significant, but the difference with the best-performing model with unadapted embeddings from Section 3.3 (F1: 83.63) was statistically significant ( p = 0 . 018 ).",
"These two best-performing models however obtained worse results on the development set than those obtained by the best-performing models from Section 3.3.",
"Adding BPE embeddings showed to improve F1 score by around 1 point compared to either feeding the model with only codeswitch (F1: 82.83) or only codeswitch and character embeddings (F1: 83.13), and the differences were statistically significant in both cases ( p = 0 . 024 , p = 0 . 018 ).",
"It should be noted that this transfer learning approach is indirectly using more data than just the training data from our initial corpus, as the codeswitch-based BiLSTM-CRF models benefit from the labeled data seen during pretraining for the language-identification task.",
"We compared the different results produced by the best performing model of each type on the test set: (1) the mBERT model, (2) the BiLSTM-CRF with BERT+BETO, BPE and character embeddings and (3) the BiLSTM-CRF model with codeswitch, BPE and character embeddings.",
"We divide the error analysis into two sections.",
"We first analyze errors that were made by all three models, with the aim of discovering which instances of the dataset were 3873 Embeddings Development Test Precision Recall F1 Precision Recall F1 Codeswitch ALL 80 .",
"challenging for all models.",
"We then analyze unique answers (both correct and incorrect) per model, with the aim of gaining insight on what are the unique characteristics of each model in comparison with other models.",
"There were 137 tokens in the test set that were incorrectly labeled as O by all three models.",
"103 of these were of type ENG , 34 were of type OTHER .",
"These errors can be classified as follows Borrowings in upper case (12), which tend to be mistaken by models with proper nouns: Anlisis de empresa basados en Big Data [ ENG ].",
"3 Borrowings in sentence-initial position (9), which were titlecased and therefore consistently mislabeled as O : Youtuber [ ENG ], mujer y afroamericana: Candace Owen podra ser la alternativa a Trump.",
"4 Sentence-initial borrowings are particularly tricky, as models tend to confuse these with foreign named entities.",
"In fact, prior work on anglicism detection based on dictionary lookup (Serigos, 2017) stated that borrowings in sentence-initial position were rare in Spanish and consequently chose to ignore all foreign words in sentence-initial position under the assumption that they could be considered named entities.",
"However, these examples (and the 3 Business analytics based on Big Data 4 Youtuber, woman and African-American: Candace Owen could be the alternative to Trump difficulty they pose for models) prove that sentence-initial borrowings are not rare and therefore should not be overlooked.",
"Borrowings that also happen to be words in Spanish (8), such as the word primer , that is a borrowing found in makeup articles ( un primer hidratante , a hydrating primer) but also happens to be a fully Spanish adjective meaning first ( primer premio , first prize).",
"Borrowings like these are still treated as fully unassimilated borrowings by speakers, even when the form is exactly the same as an already-existing Spanish word and were a common source of mislabeling, especially partial mismatches in multitoken borrowings: red (which exists in Spanish meaning net) in red carpet , tractor in tractor pulling or total in total look .",
"Borrowings that could pass as Spanish words (58): most of the misslabeled borrowings were words that do not exist in Spanish but that could orthographically pass for a Spanish word.",
"That is the case of words like burpees (hypothetically, a conjugated form of the non-existing verb burpear ), gimbal , mules , bromance or nude .",
"Other borrowings (50): a high number of mislabeled borrowings were borrowings that were orthographically implausible in Spanish, such as trenchs , multipads , hypes , riff , scrunchie or mint .",
"The fact that none of our models were able to correctly classify these orthographically implausible examples leaves the door open to further exploration of character-based models and investigating character-level perplexity as a source of information.",
"4.1.2 Non-borrowings labeled as borrowings 29 tokens were incorrectly labeled as borrowings by all three models.",
"These errors can be classified in the following groups: Metalinguistic usage and reported speech: a foreign word or sentence that appears in the text to refer to something someone said or wrote.",
"Escribir icon pack [ ENG ] en el buscador.",
"5 Lower-cased proper nouns: such as websites.",
"Acceder a la pgina flywithkarolg [ ENG ] 6 Computer commands: the test set included blog posts about technology, which mentioned computer commands (such as sudo apt-get update ) that were consistently mistaken by our models as borrowings.",
"La serie 10.000 ships [ ENG ] cuenta la odisea de la princesa Nymeria.",
"7 Acronyms and acronym expansions: El entrenamiento HITT ( high intensity interval training [ ENG ]) 8 Assimilated borrowings: certain borrowings that are already considered by RAE's dictionary as fully assimilated were labeled by all models as anglicisms.",
"These may seem like an extreme caseafter all, computer commands do contain English words but they are a good example of the real data that a borrowing-detection system may encounter.",
"Foreign words within proper nouns: lower-cased foreign words that were part of multitoken proper nouns.",
"Three tokens of type OTHER were marked by all models as ENG .",
"There were no ENG borrowings that were labeled as OTHER by all three models.",
"Haba buffet [ ENG ] libre.",
"10 4.2 Unique answers per model We now summarize the unique mistakes and correct answers made per model, with the aim of understanding what data points were handled uniquely well or badly by each model.",
"There were 46 tokens that were incorrectly labeled as borrowings only by the mBERT model.",
"These include foreign words used in reported speech or acronym expansion (21), proper names (11) and already assimilated borrowings (7).",
"There were 27 tokens that were correctly labeled only by the mBERT model.",
"The mBERT model was particularly good at detecting the full span of multitoken borrowings as in no knead bread , total white , wide leg or kettlebell swings (which were only partially detected by other models) and at detecting borrowings that could pass for Spanish words (such as fashionista , samples , vocoder ).",
"In addition, the mBERT model also correctly labeled as O 12 tokens that the other two models mistook as borrowings, including morphologically adapted anglicisms, such as craftear (Spanish infinitive of the verb to craft ), crackear (from to crack ) or lookazo (augmentative of the noun look ).",
"There were 23 tokens that were incorrectly labeled as borrowings solely by this model, the most common types being assimilated borrowings (such as fan , clon ) and Spanish words ( fiestones ) (9 each).",
"32 tokens were correctly labeled as borrowings only by this model.",
"These include borrowings that could pass for Spanish words ( camel , canvas ).",
"In addition, this model also correctly labeled as O 6 tokens that the other two mistook as borrowings, including old borrowings that are considered today as fully assimilated (such as films or sake ) or the usage of post as a prefix of Latin origin (as in post-produccin ), which other models mistook with the English word post .",
"The codeswitch-based system incorrectly labeled 18 tokens as borrowings, including proper nouns (7), such as Baby Spice , and fully asimilated borrowings (5), such as jersey , relax or tutorial .",
"This model correctly labeled 27 tokens that were mistakenly ignored by other models, including multitoken borrowings ( dark and gritty , red carpet ) and other borrowings that were non-compliant with Spanish orthographic rules but that were however ignored by other models ( messy , athleisure , multi-touch , workaholic ).",
"The codeswitch-based model also correctly labeled as O 16 tokens that the other two models labeled as borrowings, including acronym expansions, lower-cased proper names and orthographically unorthodox Spanish words, such as the ideophone tiki-taka or shavales (a non-standard writing form of the word chavales , guys).",
"Table 8 provides a summary of our results.",
"As we have seen, the diversity of topics and the presence of OOV words in the dataset can have a remarkable impact on results.",
"The CRF modelwhich in previous work had reported an F1 score of 86saw its performance drop to a 55 when dealing with our dataset, despite the fact that both datasets consisted of journalistic European Spanish texts.",
"On the other hand, neural models (Transformer-based and BiLSTM-CRF) performed better.",
"All of them performed better on the test set than on the development set, which shows good generalization ability.",
"The BiLSTM-CRF model fed with different combinations of Transformer-based word embeddings and subword embeddings outperformed multilingual BERT and Spanish monolingual BETO.",
"The model fed with codeswitch, BPE, and character embeddings ranked first and was significantly better than the result obtained by the model fed with BETO+BERT, BPE, and character embeddings.",
"Our error analysis shows that recall was a weak point for all models we examined.",
"Among false negatives, upper-case borrowings (such as Big Data ) and borrowings in sentence-initial position (in titlecase) were frequent, as they tend to be mistaken with named entities.",
"This finding suggests that borrowings with capitalized initial should not be overlooked.",
"Similarly, words that exist both in English and Spanish (like primer or red ) are not rare and were also a common source of error.",
"Adding character embeddings produced a statistically significant improvement in recall, which opens the door to future work.",
"Concurrently with the work presented on this paper, De la Rosa (2021) explored using supplementary training on intermediate labeled-data tasks (such as POS, NER, codeswitching and language identification) along with multilingual Transformer-based models to the task of detecting borrowings.",
"Alternatively, Jiang et al. (2021) used data augmentation to train a CRF model for the same task.",
"We have introduced a new corpus of Spanish newswire annotated with unassimilated lexical borrowings.",
"The test set has a high number of OOV borrowings92% of unique borrowings in the test set were not seen during trainingand is more borrowing-dense and varied than resources previously available.",
"We have used the dataset to explore several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) for the task of extracting lexical borrowings in a high-OOV setting.",
"Results show that a BiLSTM-CRF model fed with Transformer-based embeddings pretrained on codeswitched data along subword embeddings produced the best results (F1: 84.22, 84.06), followed by a combination of contextualized word embeddings and subword embeddings (F1: 83.63).",
"These models outperformed prior models for this task (CRF F1: 55.44) and multilingual Transformer-based models (mBERT F1: 82.02).",
"In this paper we have introduced an annotated dataset and models for detecting unassimilated borrowings in Spanish.",
"The dataset is openly-licensed, and detailed annotation guidelines are provided (Appendix B).",
"Appendix C includes a data statement that provides information regarding the curation rationale, annotator demographics, text characteristics, etc. of the dataset we have presented.",
"We hope these resources will contribute to bringing more attention to borrowing extraction, a task that has been little explored in the field of NLP but that can be of great help to lexicographers and linguists studying language change.",
"However, the resources we have presented should not be considered a full depiction of either the process of borrowing or the Spanish language in general.",
"We have identified four important considerations that any future systems that build off this research should be aware of.",
"The process of borrowing.",
"Borrowing is a complex phenomenon that can manifest at all linguistic levels (phonological, morphological, lexical, syntactic, semantic, pragmatic).",
"This work is exclusively concerned with lexical borrowings.",
"Furthermore, in this work we have taken a synchronic approach to borrowing: we deal with borrowings that are considered as such in a given dialect and at a given point in time.",
"The process of borrowing assimilation is a diachronic process, and the notion of what is perceived as unassimilated can vary across time and varieties.",
"As a result, our dataset and models may not be suitable to account for partially assimilated borrowings or even for unassimilated borrowings in a different time period.",
"Language variety.",
"The dataset we have presented is exclusively composed of European Spanish journalistic texts.",
"In addition, the guidelines we have described were designed to capture a very specific phenomena: unassimilated borrowings in the Spanish press.",
"In fact, the annotation guidelines rely on sources such as Diccionario de la Lengua Espaola , a lexicographic source whose Spain-centric criteria has been previously pointed out (Blanch, 1995; Fernndez Gordillo, 2014).",
"Consequently, the scope of our work is restricted to unassimilated borrowings in journalistic European Spanish.",
"Our dataset and models may not translate adequately to other Spanish-speaking areas or genres.",
"The preeminence of written language.",
"In our work, the notion of what a borrowing is is heavily influenced by how a word is written.",
"According to our guidelines, a word like meeting will be considered unassimilated, while the Spanish form mitin will be considered assimilated.",
"These preferences in writing may indirectly reveal how well-established a loanword is or how foreign it is perceived by the speaker.",
"But it is questionable that these two forms necessarily represent a difference in pronunciation or linguistic status in the speaker's mental lexicon.",
"How a word is written can be helpful for the purpose of detecting novel anglicisms in written text, but ideally one would not establish a definition of borrowing solely based on lexicographic, corpus-derived or orthotypographic cues.",
"These are all valuable pieces of information, but they only represent an indirect evidence of the status that the word holds in the lexicon.",
"After all, speakers will identify a word as an anglicism (and use it as such), regardless of whether the word is written in a text or used in speech.",
"On the other hand, the lexicographic fact that a word came from another language may not be enough as a criterion to establish the notion of borrowing.",
"Speakers use words all the time without necessarily knowing where they came from or how long ago they were incorporated into the language.",
"The origin of the word may just be a piece of trivia that is totally irrelevant or unknown to the speaker at the time of speaking, so the etymological origin of the word might not be enough to account for the difference among borrowings.",
"In fact, what lies at the core of the unassimilated versus assimilated distinction is the awareness of speakers when they use a certain word (Poplack et al., 1988).",
"The notion of what a borrowing is lies within the brain of the speaker, and in this work we are only indirectly observing that status through written form.",
"Therefore our definition of borrowing and assimilation cannot be regarded as perfect or universal.",
"Ideas about linguistic purity.",
"The purpose of this project is to analyze the usage of borrowings in the Spanish press.",
"This project does not seek to promote or stigmatise the usage of borrowings, or those who use them.",
"The motivation behind our research is not to defend an alleged linguistic purity, but to study the phenomenon of lexical borrowing from a descriptive and data-driven point of view.",
"The authors would like to thank Carlota de Benito Moreno, Jorge Diz Pico, Nacho Esteban Fernndez, Gloria Gil, Clara Manrique, Roco Morn Gonzlez, Aarn Prez Bernabeu, Monserrat Rius and Miguel Snchez Ibez for their assessment of the annotation quality."
] | [
"objective",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other"
] |
[
"Prompt-based tuning for pre-trained language models (PLMs) has shown its effectiveness in few-shot learning.",
"Typically, prompt-based tuning wraps the input text into a cloze question.",
"To make predictions, the model maps the output words to labels via a verbalizer , which is either manually designed or automatically built.",
"However, manual verbalizers heavily depend on domain-specific prior knowledge and human efforts, while finding appropriate label words automatically still remains challenging.",
"In this work, we propose the prototypical verbalizer (ProtoVerb) which is built directly from training data.",
"Specifically, ProtoVerb learns prototype vectors as verbalizers by contrastive learning.",
"In this way, the prototypes summarize training instances and are able to enclose rich class-level semantics.",
"We conduct experiments on both topic classification and entity typing tasks, and the results demonstrate that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce.",
"More surprisingly, ProtoVerb consistently boosts prompt-based tuning even on untuned PLMs, indicating an elegant non-tuning way to utilize PLMs.",
"Our codes are avaliable at https: //github.com/thunlp/OpenPrompt .",
"The massive-scale pre-trained language models (PLMs) (Han et al., 2021a) have been proven to be backbones for solving a variety of NLP tasks (Kowsari et al., 2019; Rajpurkar et al., 2016).",
"To further adapt these PLMs to downstream tasks such as classification, traditional approaches fine-tune the language models through an extra classifier (Howard and Ruder, 2018).",
"However, when task-specific data is limited (Bragg et al., 2021), training the extra classifier effectively is challenging due to the gap between pre-training tasks (e.g., Corresponding author: Z.Liu (liuzy@tsinghua.edu.cn) A [MASK] news : Tokyo Olympic Daily Preview, July 26th.",
"masked language modeling) and fine-tuning tasks (e.g., classification and regression).",
"This gap impedes the fast adaptation of PLMs to downstream tasks.",
"Recently, prompt-based tuning (Schick and Schtze, 2021; Liu et al., 2021) has risen to be a powerful way for few-shot learning by bridging the gap between the pre-training stage and downstream task stage.",
"In prompt-based tuning, the input texts are wrapped with task-specific templates to re-formalize the original task as a cloze-style task.",
"For example, in topic classification task, we can use template <text> This topic is about [MASK] , where <text> is the placeholder for input sentences.",
"The PLMs are asked to infer the words to fill in [MASK] and the words are further mapped to corresponding labels through a verbalizer (e.g. sports for label Sports).",
"Verbaliz-7014 ers are of great importance in prompt-based tuning (Gao et al., 2021) since they are the bridges between model outputs and the final predictions.",
"How to build effective verbalizers for prompt-based tuningespecially for many-class classification, is a critical issue in prompt-based tuning.",
"Typically, most current works adopt three kinds of verbalizers: manual verbalizers, search-based verbalizers, and soft verbalizers.",
"We show them by an example in Figure 1.",
"Human-designed manual verbalizers pick some label words (e.g. label names) to depict classes.",
"These verbalizers are powerful across multiple tasks (Schick and Schtze, 2021).",
"Despite their success, a major drawback roots in the strong assumption that we own precise understandings of downstream tasks and are able to sum up each class with several words.",
"Without task-specific prior knowledge, selecting appropriate label words is non-trivial.",
"Further, they also need intensive human labors when facing many classes.",
"To mitigate these issues, search-based verbalizers aim at finding suitable label words from vocabulary with algorithms (Schick et al., 2020; Shin et al., 2020; Gao et al., 2021) and soft verbalizers use trainable tokens which are optimized during tuning (Hambardzumyan et al., 2021; Zhang et al., 2021).",
"However, it is challenging to search or optimize adequately in a large vocabulary or embedding space under a low-data regime, making automatic verbalizers suboptimal compared with manual ones.",
"Intuitively, class proxies in verbalizers should encapsulate class-level semantic features, which are expressed implicitly by instances.",
"To obtain these semantic representatives with few data, one promising approach is computing central points of class instances, namely prototypes , as approximation.",
"To this end, we manage to estimate prototype vectors for each class to serve as verbalizer.",
"Summarized from instances, prototypes are supposed to establish concepts similar with human-designed labels.",
"In this work, we introduce prototypes into this problem and propose prototypical verbalizer (Pro-toVerb), which learns class prototypes from training data to build verbalizers automatically.",
"For prototype learning, inspired by the idea of PCL (Li et al., 2021), ProtoVerb trains the prototype vectors by contrastive learning with the InfoNCE estimator (Oord et al., 2018).",
"Specifically, our optimization objective includes two components: The first part is an instance-instance loss to cluster intra-class instances and separate inter-class instances; The second part is an instance-prototype loss which enforces the prototypes to be center points of classes.",
"Compared with other verbalizer construction methods, ProtoVerb learns continuous vectors straight from training instances efficiently, which makes it a plug-in-and-play algorithm with high flexibility.",
"To verify the effectiveness of ProtoVerb, we conduct extensive experiments on topic classification and entity typing tasks.",
"We study two different settings where ProtoVerb can work: (1) When manual verbalizers are available, ProtoVerb can play as an extra verbalizer in the inference stage.",
"Results show that ProtoVerb consistently improves the classification performance with low cost, and even untuned PLMs benefit largely.",
"(2) Consider a realistic setting where only a limited number of samples are provided with no manual verbalizers, ProtoVerb also produces verbalizers of high quality.",
"Experimental results demonstrate that ProtoVerb significantly outperforms existing search-based and soft verbalizers.",
"Despite the success of PLMs (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2019) in massive NLP tasks, few-shot fine-tuning of PLMs was suboptimal due to the gap between pre-training and downstream tasks.",
"Inspired by the in context learning proposed by GPT-3 (Brown et al., 2020), stimulating model knowledge with a few prompts has recently received much attention.",
"A series of prompt-based work on knowledge probing (Trinh and Le, 2018; Petroni et al., 2019; Davison et al., 2019), text classification (Schick and Schtze, 2021; Gao et al., 2021), relation extraction (Han et al., 2021b), and entity typing (Ding et al., 2021a) emerge and achieve impressive progress.",
"Typically, a piece of prompt contains a template and a verbalizer.",
"Early prompts employ human-picked prompts which demand human knowledge and manual efforts.",
"To alleviate this issue, later works explore automatic designing and optimizing prompts (Liu et al., 2021; Gao et al., 2021; Zhang et al., 2021).",
"Recently research works further propose continuous prompts to replace the discrete phrases (Lester et al., 2021; Li and Liang, 2021).",
"However, the designation of verbalizers, an important part of prompts, is less ex-7015 plored.",
"Verbalizers bridge between model outputs and labels and make great impact on prompt-based tuning (Gao et al., 2021).",
"With task-specific knowledge, human-picked words are widely used and proved effective (Schick and Schtze, 2021).",
"The major drawback of manual verbalizers is the assumption that we possess sufficient knowledge of downstream tasks, which is not always satisfied.",
"To avoid intensive human labor and expert knowledge dependency in manual verbalizers, some works explore search-based verbalizers (Schick et al., 2020; Gao et al., 2021; Shin et al., 2020) that identify label words automatically with training data.",
"However, with a large vocabulary and few examples, it is non-trivial to find suitable words.",
"Another line of researches focuses on soft verbalizers (Ham-bardzumyan et al., 2021; Zhang et al., 2021), which insert continuous embeddings as soft labels.",
"The label embeddings are optimized along with model tuning.",
"Similarly, soft verbalizers require abundant data for sufficient optimization, which can not be satisfied with the few-shot setting.",
"In contrast, our approach learns prototype vectors from scratch, hence is more effective for few-shot tuning.",
"In few-shot learning, prototype-based metric-learning methods have been promising approaches for their simplicity and effectiveness.",
"Prototypical Networks (ProtoNet) (Snell et al., 2017) is the pioneering work that introduces prototypes into deep learning.",
"Specifically, ProtoNet calculates prototype vectors by taking the average of instance vectors and makes predictions by metric-based comparisons between prototypes and query instances.",
"A set of following works concentrates on the advancement of prototype estimation (Li et al., 2021; Gao et al., 2019; Ding et al., 2021c).",
"Among them, PCL (Li et al., 2021) achieves remarkable results on self-supervised few-shot learning by using prototypes as latent variables and inspires us in designing training objectives.",
"The success of prototype-based models indicates that prototypes, which are representative embeddings of instances from the same classes, encapsulate some class-level semantic features.",
"Inspired by the intrinsic similarity of prototypes and verbalizers, we find it natural and elegant to introduce prototypes into verbalizer construction for prompt-based tuning.",
"Given a pre-trained language model M , our goal is to tune it for specific downstream tasks.",
"Take N way K shot few-shot text classification as an example, the support set for class n D n = { x n 1 , , x nK } contains K sentences.",
"We aim to predict the label y Y for each sentence, where Y is the label set with N distinct classes.",
"For a sentence concatenated with special tokens x = { [CLS] , t 1 , , t T , [SEP] } , language model M encodes it into hidden representations { h [CLS] , h 1 , , h T , h [SEP] } .",
"Conventional fine-tuning trains an extra classifier F over the [CLS] embedding h [CLS] and output the probability distribution on label set Y .",
"The vanilla prompt-based tuning converts the downstream task to a cloze-style mask language modeling problem.",
"For example, to formulate the text classification task, we can modify the original input x with a template T ( ) = A [MASK] news: to get the prompt input T ( x ) = A [MASK] news: x .",
"With T ( x ) , M produces the hidden vector at the [MASK] position h [MASK] .",
"To calculate the probability distribution over the label set, a manual verbalizer stores a set of label words V and the score for label y is PM ( y | x ) = g ( PM ( [MASK] = v |T ( x )) | v V y ) , (2) where V y is the label words of y and g ( ) is to aggregate multiple scores.",
"In previous sections, we introduce the general pipeline of prompt-based tuning.",
"As manually defining or automatically searching for appropriate verbalizers can be challenging, here we propose to learn prototypes directly from training instances.",
"Inspired by PCL (Li et al., 2021), the prototypes are trained with contrastive learning.",
"As shown in 7016 A [MASK] news : Stocks Fall as Oil Hits High.",
"Figure 2, we first get the hidden states of [MASK] tokens to represent instances, then project them to another embedding space for prototype learning.",
"The prototypes are used as verbalizers for prediction.",
"Next, we will introduce the learning and inference stages of ProtoVerb in detail.",
"Given a piece of training text x wrapped with a template, we take the last layer's hidden state of the [MASK] token h [MASK] as the initial representation of the text.",
"With an encoder E ( ) parameterized by , the instance representation of x is v = E ( x ) = Wh [MASK] .",
"In practice, we simply adopt a linear encoder with weight W .",
"To measure the similarity between instances, we adopt cosine similarity function S ( ) , where S ( v i , v j ) = v i || v i || v j || v j || .",
"With the instance representation and similarity function, we discuss how to define our training objective.",
"Denote C = { c 1 , , c N } as the set of prototype vectors.",
"Intuitively, there are two goals we need to achieve by optimization: (1) For instance-instance pairs, intra-class pairs should get higher similarity scores than inter-class pairs.",
"(2) For instance-prototype pairs, the similarity scores between prototype c n and instances of class n should be higher than c n and other instances.",
"To realize these two goals, we define the objective function based on the InfoNCE estimator (Oord et al., 2018), which is widely adopted in contrastive learning.",
"For the instance-instance objective, we minimize the following loss function L ins = 1 N 2 K 2 (cid:88) n (cid:88) i,j log exp S ( v ni , v nj ) (cid:80) n ,j exp S ( v ni , v n j ) , (5) where ( v ni , v nj ) are instance pairs of the same class.",
"This loss function maximizes intra-class similarity and minimizes inter-class similarity between instances.",
"Similarly, the instance-prototype loss function is defined as L proto = 1 N 2 K (cid:88) i,n log exp S ( v ni , c n ) (cid:80) n exp S ( v ni , c n ) , (6) and v ni is of class n .",
"This objective forces each prototype to lie at the center point of its instances.",
"Overall, combining the instance-instance loss and instance-prototype loss, our final training objective is L = L ins + L proto .",
"During inference, following the same metric, we calculate the similarity scores of query and prototypes.",
"The probability score for class k is PM ( y k | x ) = exp S ( v , c k ) (cid:80) k exp S ( v , c k ) .",
"(8) Then we make prediction by arg max function (cid:101) y = arg max k PM ( y k | x ) .",
"(9) When there are other verbalizers (e.g. manual verbalizers), we first process the logits from different verbalizers with a standard scaler (minus mean then divide by standard deviation).",
"Then we take the mean value of the scores to get the final score.",
"We conduct extensive few-shot learning experiments to illustrate the effectiveness of ProtoVerb.",
"In this section, we first introduce the experimental settings in use.",
"Then we present and discuss the experiment results.",
"Verbalizers in many-class classification tasks are difficult to get precise definitions.",
"Hence we adopt three topic classification datasets: AG's News, Yahoo (Zhang et al., 2015), and DBPedia (Lehmann et al., 2015) and one entity typing dataset: FewNERD (Ding et al., 2021d) as benchmarks, and their statistics are summarized in Table 1.",
"To focus on the verbalizer and alleviate the influence of templates, we adopt multiple fixed manual templates.",
"For topic classification, following (Hu et al., 2021), we use four templates on each dataset.",
"For entity typing, we use three templates from (Ding et al., 2021a).",
"Details about the templates can be found in Appendix A. Dataset Task #Class #Test AG's News TC 4 7,600 DBPedia TC 14 70,000 Yahoo TC 10 60,000 FewNERD ET 66 96,901 Table 1: Dataset statistics.",
"Under the few-shot setting, we randomly sample k = 1 , 2 , 4 , 8 , 16 instances in each class from the training set and test the model on the entire test set.",
"As for the evaluation metric, we use accuracy in all experiments.",
"For the different usages of ProtoVerb, we consider two specific settings: (1) ProtoVerb as a single verbalizer ( 5.5).",
"When manual verbalizers are not available, we can tune the model with ProtoVerb.",
"Under this setting, we want to evaluate the performance of ProtoVerb compared with other automatic verbalizer construction methods.",
"(2) ProtoVerb as an extra verbalizer ( 5.6).",
"Naturally, we suppose that there exists a manual verbalizer and we append ProtoVerb to strengthen the performance.",
"Under this setting, ProtoVerb is a plug-in-and-play component and does not participate in the tuning process.",
"We compare ProtoVerb with manual verbalizers and other verbalizer ensembles.",
"All our models and baselines are implemented with PyTorch (Paszke et al., 2019) framework, Hugging-face transformers (Wolf et al., 2020), and OpenPrompt toolkit (Ding et al., 2021b).",
"We optimize PLMs with AdamW optimizer (Loshchilov and Hutter, 2019).",
"For prototype learning, we set the prototype dimension to 128 and optimize the loss function with Adam optimizer (Kingma and Ba, 2015).",
"For topic classification, we use RoBERTa-large (Liu et al., 2019) as our PLM backbone and tune the model for 5 epochs.",
"The batchsize is 2 and the learning rate is 3e-5.",
"For entity typing, we tune a BERT-base (Devlin et al., 2019) model for 30 epochs and set the batchsize to 16.",
"The learning rate here is 5e-5.",
"The vanilla prompt-based tuning method fuses the input text with a task-specific template and maps the model outputs to labels through a verbalizer.",
"For fair comparisons, all our baselines and proposed models are built on this pipeline and they merely differ from the verbalizers in use.",
"Manual verbalizers (ManualVerb) are defined by human with domain knowledge.",
"Here we simply employ the verbalizers provided by OpenPrompt (Ding et al., 2021b).",
"Search-based verbalizers (SearchVerb) search for suitable words from vocabulary automatically.",
"We adopt the implementation in PETAL (Schick et al., 2020), which finds the words that maximize the likelihood of the training data.",
"To combine SearchVerb with ManualVerb, we merge their verbalizer words together.",
"Soft verbalizers (SoftVerb) introduce trainable tokens as verbalizers in prompt-based tuning.",
"We follow the approach in WARP (Hambardzumyan et al., 2021) that applies soft tokens as a linear decoding layer, and the token embeddings are learned along with model tuning.",
"Note that the templates in WARP are also trainable, but here we only use its soft verbalizers.",
"In single verbalizer experiments, we initialize the token embeddings randomly for fairness.",
"And in extra verbalizer experiments, they are initialized with label names.",
"Table 2 presents the performance of different verbalizers.",
"Overall, ManualVerb is the most powerful verbalizer, which is reasonable because it is picked by human with domain knowledge.",
"ProtoVerb outperforms SearchVerb and SoftVerb remarkably and consistently, especially when only 1 or 2 instances per class are given.",
"The poor performances of the two baselines under extreme data scarcity corroborate the issues we claim in 1.",
"As the training data become sufficient, ProtoVerb gets comparable or even exceeding scores compared with ManualVerb, showing that ProtoVerb is able to learn prototypes that well represent the classes.",
"At the same time, the gaps between ManualVerb and other verbalizers narrow, which also indicates that we can summarize data across various ways.",
"Across tasks, ProtoVerb gets better results on topic classification than entity typing.",
"A possible reason is that FewNERD is a fine-grained entity typing dataset, in which the differences across classes are subtle.",
"For example, it is hard for ProtoVerb K Method AG DB Yahoo Few 0 ManualVerb 75.13 67.06 43.11 20.00 1 Fine-tuning 25.45 10.80 10.59 7.48 ManualVerb 76.67 85.47 50.22 41.68 SearchVerb+ 51.82 81.31 43.24 35.64 SoftVerb+ 76.34 85.85 49.11 37.66 ProtoVerb+ 77.71 88.16 50.08 43.20 w/o tuning 76.28 78.32 45.01 29.51 2 Fine-tuning 25.78 49.01 11.26 19.03 ManualVerb 81.06 93.61 58.65 46.44 SearchVerb+ 77.56 91.79 52.46 42.13 SoftVerb+ 79.95 93.68 55.73 42.17 ProtoVerb+ 84.09 94.77 59.33 48.69 w/o tuning 82.13 86.11 50.34 34.44 4 Fine-tuning 28.14 94.08 26.02 20.98 ManualVerb 84.73 95.83 61.41 52.54 SearchVerb+ 81.25 95.16 58.98 50.61 SoftVerb+ 84.22 94.90 59.01 49.45 ProtoVerb+ 85.71 96.74 66.14 54.16 w/o tuning 83.05 89.56 55.59 35.55 8 Fine-tuning 72.78 96.83 54.76 49.77 ManualVerb 85.85 96.46 64.12 56.59 SearchVerb+ 85.68 97.57 65.32 56.58 SoftVerb+ 86.54 97.40 63.48 54.30 ProtoVerb+ 87.25 97.64 66.61 58.30 w/o tuning 83.79 92.61 59.42 34.37 16 Fine-tuning 84.14 97.25 64.27 52.66 ManualVerb 84.74 96.05 58.77 61.17 SearchVerb+ 85.30 95.08 59.34 61.70 SoftVerb+ 85.65 96.34 58.68 59.23 ProtoVerb+ 87.98 97.22 65.65 62.55 w/o tuning 84.78 93.46 60.89 33.96 Table 3: Results for multiple verbalizer experiments.",
"to discriminate between person-artist/author and person-director with only a few instances.",
"However, ProtoVerb can also catch up with ManualVerb with enough samples.",
"Table 3 shows the experiment results when we ensemble manual verbalizers with automatic verbalizers.",
"The ensembled versions are denoted as SearchVerb+, SoftVerb+, and ProtoVerb+ respectively.",
"From the table, we have the following observations: (1) Basically, prompt-based tuning outperforms fine-tuning by a large margin with few 7019 samples (1 2 per class).",
"When sufficient training data is available, fine-tuning models will produce comparable results.",
"(2) Overall, ProtoVerb+ certainly improves the performance of prompt-based tuning under most cases, which demonstrates the effectiveness of ProtoVerb+.",
"At the same time, SearchVerb+ and SoftVerb+ seldom show enhancement compared with ManualVerb.",
"As ProtoVerb+ does not introduce any external knowledge, this illustrates that ProtoVerb+ provides a better way to utilize training data.",
"Finally, we also present the results of applying ProtoVerb+ on untuned PLMs.",
"It is worth noting that even for untuned models, ProtoVerb+ also boosts them considerably on all tasks.",
"For example on DBPedia, showing only one instance per class to PLMs with ProtoVerb+ leads to 11.26% absolute accuracy improvement.",
"On topic classification, when more training samples are given, untuned PLMs achieve competitive scores.",
"This observation indicates a new cost-efficient way to leverage training data, which we highlight as valuable for future study of none-tuning methods for PLMs.",
"Compared to the in context learning in GPT-3 (Brown et al., 2020), ProtoVerb+ is not limited by input length and can deal with arbitrary number of samples.",
"We further study this fixed model scenario in 6.1.",
"In this section, we discuss several analytical top-ics for further understandings of ProtoVerb.",
"For simplicity, we conduct experiments on AG's News dataset.",
"In 5.6, we see ProtoVerb is still powerful with fixed PLMs.",
"For further comparisons, we conduct experiments to quantitatively evaluate verbalizers when PLMs are fixed.",
"Figure 3 gives the results.",
"To clarify, using ManualVerb on fixed PLMs equals the zero-shot setting, which we plot with a dashed line.",
"Meanwhile, different from 5.6, ProtoVerb here is a single verbalizer.",
"From the figure we can conclude that (1) Similar with 5.5, ProtoVerb outperforms SoftVerb and SearchVerb by a large margin under low-shot settings.",
"Notably, ProtoVerb exceeds ManualVerb with only 2 shots per class, illustrating the experessive power of prototypes.",
"(2) SoftVerb is also better than SearchVerb under this setting, demonstrating that tunable verbalizers could exploit training data better with PLMs fixed.",
"To validate the effect of each part in the loss function, we conduct an ablation study on AG's News dataset.",
"For comparison, we consider two variants of prototype calculation methods: (1) ProtoVerb with L proto only.",
"(2) Following ProtoNet (Snell et al., 2017), take the average of instance embeddings for prototype embeddings.",
"Table 4 shows the results.",
"Compared to taking the mean embedding 7020 Class K = 1 K = 16 World Qaida, Syria, Iraq, Nusra, TPP Taliban, Iraq, Afghan, militants, rebellion Sports Steelers, Raptors, Knicks, Dodgers ball, ESPN, baseball, Fifa, Sports Business cash, earnings, Securities, NYSE Dow, dividend, investing, markets Tech LTE, Tel, Huawei, Mbps, VPN Vault, IBM, Qualcomm, Technologies Table 6: Words that are most similar with prototypes of each class on AG's News.",
"vectors directly, optimizing the embedding vectors of prototypes using our loss functions leads to better performances and stability.",
"Adding L ins is also beneficial, meaning that L ins helps ProtoVerb in learning instance embeddings.",
"Noisy data are commonly seen as threats in real-world datasets for few-shot learning systems.",
"For automatic verbalizers, noisy data are more harmful because of the effect on both the quality of verbalizers and the training process.",
"In this section, we evaluate the robustness of different automatic verbalizers against noisy samples on AG's News.",
"For training stability, we set K = 8 , 16 .",
"Table 5 presents the accuracy drop when there are 1, 2, or 3 samples having wrong labels.",
"It is clearly seen that a limited number of noisy samples will hinder the performance greatly, showing the vulnerability of automatic verbalizers.",
"Meanwhile, we can also find that ProtoVerb is more robust than baseline methods when facing noisy samples.",
"Since ProtoVerb learns continuous prototype vectors, their meanings are implicit.",
"Here we manage to investigate which words are most similar to the learned prototypes.",
"Due to word embeddings and prototype vectors lying in different embedding spaces, we can not directly calculate their similarity.",
"Hence we use the vocabulary as the input texts (one word at a time) to get the top-scored word for each class.",
"On AG's News dataset, we collect some most similar words for each class and list them in Table 6.",
"To investigate the property of prototypes learned with different numbers of samples, we present words for K = 1 and K = 16 .",
"With the table, we see that: (1) Even when only one example is available, the learned prototypes are meaningful.",
"Most of the similar words are proper nouns and entity names closely related to class topics.",
"For example, Steelers, Raptors, Knicks, and Dodgers are all baseball or basketball teams that appear frequently in sports news.",
"We attribute this to prompt mechanism that allows PLMs to extract the most conclusive information and fill the [MASK] with it.",
"Then the relevant words are also included.",
"(2) With more training instances, prototypes show diverse interests.",
"Despite entity names, more con-ceptual words show up on the list, such as ball and Sports for class Sports.",
"We interpret this as the summarization and abstraction ability of prototypes.",
"Given many instances, prototypes are enforced to capture their common features, hence some abstract concepts are found automatically.",
"In this way, ProtoVerb encapsulates class-level, rather than entity-level, semantics, which leads to better performance on unseen data.",
"To give further analyses for the inner workings of prototypes, we measure the similarity between ProtoVerb and ManualVerb to see whether ProtoVerb is able to learn abstract concepts as humans do.",
"On AG's News dataset, we calculate the similarity scores between prototypes and manual verbalizers and normalize the scores using the softmax function across the four classes.",
"In Figure 4 we plot the scores with various shots.",
"It is clearly seen that the similarity of prototypes and corresponding verbal-7021 izers are above average (0.25).",
"As shot increases, the scores also gradually grow, which illustrates that prototypes can capture the conceptual information better from more instances.",
"This observation matches our findings in 6.4.",
"Among the four classes, Business and Sports get higher scores than World and Tech.",
"A reasonable guess is that World and Tech news includes diverse sub-topics that are hard to summarize.",
"In this paper, we propose a novel approach for automatic verbalizer construction in prompt-based tuning.",
"The proposed ProtoVerb learns class prototypes from training instances using contrastive learning.",
"We explore the performance of ProtoVerb on few-shot topic classification and entity typing tasks.",
"As a single verbalizer, ProtoVerb outperforms state-of-the-art automatic verbalizers considerably.",
"Working together with manual verbalizers, ProtoVerb can also consistently improve prompt-based tuning with minor effort.",
"The results validate the effectiveness of ProtoVerb.",
"Our analysis further reveals the intrinsic properties of prototypes.",
"For future work, we will focus on extending ProtoVerb for effective non-tuning algorithms of PLMs and prompt-tuning with soft templates.",
"Moreover, we are finding proper ways to combine label words and prototypes for verbalizer construction.",
"This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), Institute for Guo Qiang at Tsinghua University, Beijing Academy of Artificial Intelligence (BAAI), and International Innovation Center of Tsinghua University, Shanghai, China.",
"Ganqu Cui and Shengding Hu conducted the experiments.",
"Ganqu Cui, Shengding Hu, Ning Ding and Zhiyuan Liu wrote the paper.",
"Longtao Huang provided valuable advices to the research.",
"Our work explores how to stimulate large PLMs with few samples.",
"We conduct experiments under the few-shot setting, where requires less training time and fewer resources than normal full-data setting.",
"Also, we open up our codes and hyperpa-rameters to facilitate future reproduction without repeated energy cost."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"objective",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain"
] |
[
"One of the difficulties of neural machine translation (NMT) is the recall and appropriate translation of low-frequency words or phrases.",
"In this paper, we propose a simple, fast, and effective method for recalling previously seen translation examples and incorporating them into the NMT decoding process.",
"Specifically, for an input sentence, we use a search engine to retrieve sentence pairs whose source sides are similar with the input sentence, and then collect n -grams that are both in the retrieved target sentences and aligned with words that match in the source sentences, which we call translation pieces.",
"We compute pseudo-probabilities for each retrieved sentence based on similarities between the input sentence and the retrieved source sentences, and use these to weight the retrieved translation pieces.",
"Finally, an existing NMT model is used to translate the input sentence, with an additional bonus given to outputs that contain the collected translation pieces.",
"We show our method improves NMT translation results up to 6 BLEU points on three narrow domain translation tasks where repetitiveness of the target sentences is particularly salient.",
"It also causes little increase in the translation time, and compares favorably to another alternative retrieval-based method with respect to accuracy, speed, and simplicity of implementation.",
"Neural machine translation (NMT) (Bahdanau et al., 2014; Sennrich et al., 2016a; Wang et al., 2017b) is now the state-of-the-art in machine translation, due to its ability to be trained end-to-end on large parallel corpora and capture complex parameterized functions that generalize across a variety of syntactic and semantic phenomena.",
"However, it has also been noted that compared to alternatives such as phrase-based translation (Koehn et al., 2003), NMT has trouble with low-frequency words or phrases (Arthur et al., 2016; Kaiser et al., 2017), and also generalizing across domains (Koehn and Knowles, 2017).",
"A number of methods have been proposed to ameliorate these problems, including methods that incorporate symbolic knowledge such as discrete translation lexicons (Arthur et al., 2016; He et al., 2016; Chatterjee et al., 2017) and phrase tables (Zhang et al., 2017; Tang et al., 2016; Dahlmann et al., 2017), adjust model structures to be more conducive to generalization (Nguyen and Chiang, 2017), or incorporate additional information about domain (Wang et al., 2017a) or topic (Zhang et al., 2016) in translation models.",
"In particular, one paradigm of interest is recent work that augments NMT using retrieval -based models, retrieving sentence pairs from the training corpus that are most similar to the sentence that we want to translate, and then using these to bias the NMT model.",
"1 These methods reminiscent of translation memory (Utiyama et al., 2011) or example-based translation (Nagao, 1984; Grefen-stette, 1999) are effective because they augment the parametric NMT model with a non-parametric translation memory that allows for increased capacity to measure features of the target technical terms or domain-specific words.",
"Currently there are two main approaches to doing so.",
"Li et al. (2016) and Farajian et al. (2017) use the retrieved sentence pairs to fine tune the parameters of the NMT model which is pre-trained on the whole training corpus.",
"Gu et al. (2017) uses the retrieved sentence pairs as additional inputs to the NMT model to help NMT in translating the input sen-1 Note that there are existing retrieval-based methods for phrase-based and hierarchical phrase-based translation (Lopez, 2007; Germann, 2015).",
"However, these methods do not improve translation quality but rather aim to improve the efficiency of the translation models.",
"tence.",
"While both of these paradigms have been proven effective, they both add significant complexity and computational/memory cost to the decoding process, and also to the training procedure.",
"The first requires the running of several training iterations and rolling back of the model, which is costly at test time, and the second requires entirely changing the model structure which requires training the model separately, and also increases test-time computational cost by adding additional encoders.",
"In this paper, we propose a simple and efficient model for using retrieved sentence pairs to guide an existing NMT model at test time.",
"Specifically, the model collects n -grams occurring in the retrieved target sentences that also match words that overlap between the input and retrieved source sentences, which we will refer to as translation pieces (e.g., in Figure 1, the blue part of the retrieved target sentence is collected as translation pieces for the input sentence).",
"The method then calculates a pseudo-probability score for each of the retrieved example sentence pairs and weights the translation pieces according to this value.",
"Finally, we up-weight NMT outputs that contain the collected translation pieces.",
"Unlike the previous methods, this requires no change of the underlying NMT model and no updating of the NMT parameters, making it both simple and efficient to apply at test time.",
"We show our method improved NMT translation results up to 6 BLEU points on three translation tasks and caused little increase in the translation time.",
"Further, we find that accuracies are comparable with the model of Gu et al. (2017), despite being significantly simpler to implement and faster at test time.",
"Our baseline NMT model is similar to the attentional model of Bahdanau et al. (2014), which includes an encoder, a decoder and an attention (alignment) model.",
"Given a source sentence X = { x 1 , ..., x L } , the encoder learns an annotation h i = h ~h i ; h i i for x i using a bi-directional recurrent neural network.",
"The decoder generates the target translation from left to right.",
"The probability of generating next word y t is, 2 PNMT (cid:0) y t | y t 1 1 , X (cid:1) = softmax ( g ( y t 1 , z t , c t )) (1) where z t is a decoding state for time step t , computed by, z t = f ( z t 1 , y t 1 , c t ) (2) c t is a source representation for time t , calculated as, c t = LX i =1 t,i h i (3) where t,i scores how well the inputs around position i and the output at position t match, computed as, t,i = exp ( a ( z t 1 , h i )) LP j =1 exp ( a ( z t 1 , h j )) (4) The standard decoding algorithm for NMT is beam search.",
"2 g , f and a in Equation 1, 2 and 4 are nonlinear, potentially multi-layered, functions.",
"log PNMT ( Y | X ) = | Y | X t =1 log PNMT (cid:0) y t | y t 1 1 , X (cid:1) (5) Finally, the translation score is normalized by sentence length to avoid too short outputs.",
"log SNMT ( Y | X ) = log PNMT ( Y | X ) | Y | (6) 3 Guiding NMT with Translation Pieces This section describes our approach, which mainly consists of two parts:",
"1. retrieving candidate translation pieces from a parallel corpus for the new source sentence that we want to translate, and then",
"2. using the collected translation pieces to guide an existing NMT model while translating this new sentence.",
"At training time, we first prepare the parallel corpus that will form our database used in the retrieval of the translation pieces.",
"Conceivably, it could be possible to use a different corpus for translation piece retrieval and NMT training, for example when using a separate corpus for domain adaptation, but for simplicity in this work we use the same corpus that was used in NMT training.",
"As pre-processing, we use an off-the-shelf word aligner to learn word alignments for the parallel training corpus.",
"At test time we are given an input sentence X .",
"For this X , we first use the off-the-shelf search engine Lucene to search the word-aligned parallel training corpus and retrieve M source sentences { X m : 1 m M } that are similar to X .",
"Y m indicates the target sentence that corresponds to source sentence X m and A m is word alignments between X m and Y m .",
"For each retrieved source sentence X m , we compute its edit distance with X as d ( X, X m ) using dynamic programming.",
"We record the unedited words in X m as W m , and also note the words in the target sentence Y m that correspond to source words in W m , which we can presume are words that will be more likely to appear in the translated sentence for X .",
"According to Algorithm 1, we collect n -grams (up to 4 -grams) from n -grams G mX Vorschriften fur die Eignung Yes die Eignung von Yes von Um@@ schlags@@ anlagen No Um@@ schlags@@ anlagen No Table 1: Examples of the collected translation pieces.",
"the retrieved target sentence Y m as possible translation pieces G mX for X , using word-level alignments to select n -grams that are related to X and discard n -grams that are not related to X .",
"The final translation pieces GX collected for X are computed as, 3 GX = M [ m =1 G mX (7) Table 1 shows a few n -gram examples contained in the retrieved target sentence in Figure 1 and whether they are included in G mX or not.",
"Because the retrieved source sentence in Figure 1 is highly similar with the input sentence, the translation pieces collected from its target side are highly likely to be correct translation pieces of the input sentence.",
"However, when a retrieved source sentence is not very similar with the input sentence (e.g. only one or two words match), the translation pieces collected from its target side will be less likely to be correct translation pieces for the input sentence.",
"We compute a score for each u GX to measure how likely it is a correct translation piece for X based on sentence similarity between the retrieved source sentences and the input sentence as following, S u, X, M [ m =1 { ( X m , G mX ) } !",
"3 Note that the extracted translation pieces are target phrases, but the target words contained in one extracted translation piece may be aligned to discontiguous source words, which is different from how phrase-based translation extracts phrase-based translation rules.",
"In the next phase, we use our NMT system to translate the input sentence.",
"Inspired by Stahlberg et al. (2017) which rewards n -grams from syntactic translation lattices during NMT decoding, we add an additional reward for n -grams that occur in the collected translation pieces.",
"That is, as shown in Figure 2, at each time step t , we update the probabilities over the output vocabulary and increase the probabilities of those that result in matched n -grams according to log SNMT updated (cid:0) y t | y t 1 1 , X (cid:1) = log PNMT (cid:0) y t | y t 1 1 , X (cid:1) + 4 X n =1 y tt n +1 , X, M [ m =1 { ( X m , G mX ) } !",
", (10) where can be tuned on the development set and ( ) is computed as Equation 8 if y tt n +1 GX , otherwise ( ) = 0 .",
"DX to store translation pieces GX and their scores for each input sentence X .",
"At each time step t , we update the output layer probabilities by checking DX .",
"However, it is inefficient to traverse all target words in the vocabulary and check whether they belong to GX or not, because the vocabulary size is large.",
"Instead, we only traverse target words that belong to GX and update the corresponding output probabilities as shown in Algorithm",
"2. Here, LX is a list that stores 1 -grams contained in GX .",
"4 As we can see, our method only up-weights NMT outputs that match the retrieved translation pieces in the NMT output layer.",
"In contrast, Li et al. (2016) and Farajian et al. (2017) use the retrieved sentence pairs to run additional training iterations and fine tune the NMT parameters for each input sentence; Gu et al. (2017) runs the NMT model for each retrieved sentence pair to obtain the NMT encoding and decoding information of the retrieved sentences as key-value memory to guide NMT for translating the new input sentence.",
"Compared to their methods, our method adds little computational/memory cost and is simple to implement.",
"Following Gu et al. (2017), we use version 3.0 of the JRC-Acquis corpus for our translation experiments.",
"The JRC-Acquis corpus contains the total body of European Union (EU) law applicable in the EU Member States.",
"It can be used as a narrow domain to test the effectiveness of our proposed method.",
"We did translation experiments on three 4 Note that our method does not introduce new states during decoding, because the output layer probabilities are simply updated based on history words and the next word.",
"directions: English-to-German (en-de), English-to-French (en-fr) and English-to-Spanish (en-es).",
"We cleaned the data by removing repeated sentences and used the train-truecaser.perl script from Moses (Koehn et al., 2007) to truecase the corpus.",
"Then we selected 2000 sentence pairs as development and test sets, respectively.",
"The rest was used as the training set.",
"We removed sentences longer than 80 and 100 from the training and development/test sets respectively.",
"The final numbers of sentence pairs contained in the training, development and test sets are shown in Table",
"3. 5 We applied byte pair encoding (Sennrich et al., 2016b) and set the vocabulary size to be 20K.",
"For translation piece collection, we use GIZA++ (Och and Ney, 2003) and the grow-diag-final-and heuristic (Koehn et al., 2003) to obtain symmetric word alignments for the training set.",
"We trained an attentional NMT model as our baseline system.",
"The settings for NMT are shown in Table",
"4. We also compared our method with the search engine guided NMT model (SGNMT, Gu et al. (2017)) in Section 4.5.",
"For each input sentence, we retrieved 100 sentence pairs from the training set using Lucene as our preliminary setting.",
"We analyze the influence of the retrieval size in Section 4.4.",
"The weights of translation pieces used in Equation 10 are tuned on the development set for different language pairs, resulting in weights of 1.5 for en-de and en-fr, and a weight of 1 for en-es.",
"Table 2 shows the main experimental results.",
"We can see that our method outperformed the baseline NMT system up to 6 BLEU points.",
"As large BLEU gains in neural MT can also often be attributed to changes in output length, we examined the length (Table 5) and found that it did not influence the translation length significantly.",
"In addition, it is of interest whether how well the retrieved sentences match the input influences the search results.",
"We measure the similarity between a test sentence X and the training corpus D train by computing the sentence similarities between X and the retrieved source sentences as simi ( X, D train ) = max 1 m M simi ( X, X m ) .",
"Our analysis demonstrated that, expectedly, the performance of our method is highly influenced by the similarity between the test set and the training set.",
"We divided sentences in the test set into two 1329 whole half-H half-L en-de 0.56 0.80 0.32 en-fr 0.57 0.81 0.33 en-es 0.57 0.81 0.32 Table 6: Similarities between the training set and the whole/divided test sets.",
"parts: half has higher similarities with the training corpus (half-H) and half has lower similarities with the training corpus (half-L).",
"Table 6 shows the similarity between the training corpus and the whole/divided test sets.",
"Table 7 shows translation results for the whole/divided test sets.",
"As we can see, NMT generally achieved better BLEU scores for half-H and our method improved BLEU scores for half-H much more significantly than for half-L, which shows our method can be quite useful for narrow domains where similar sentences can be found.",
"We also tried our method on WMT 2017 English-to-German News translation task.",
"However, we did not achieve significant improvements over the baseline attentional NMT model, likely because the test set and the training set for the WMT task have a relatively low similarity as shown in Table 8 and hence few useful translation pieces can be retrieved for our method.",
"In contrast, the JRC-Acquis corpus provides test sentences that have much higher similarities with the training set, i.e., much more and longer translation pieces exist.",
"To demonstrate how the retrieved translation pieces help NMT to generate appropriate outputs, Figure 3 shows an input sentence with reference, the retrieved sentence pair with the highest sentence similarity and outputs by different systems for this input sentence with detailed scores: log NMT probabilities for each target word in T 1 and T 2 ; scores for matched translation pieces contained in T 1 and T 2 .",
"As we can see, NMT as-WMT JRC-Acquis Similarity Sent Percent Sent Percent [0 , 0 .",
"signs higher probabilities to the incorrect translation T 1 , even though the retrieved sentence pair whose source side is very similar with the input sentence was used for NMT training.",
"However, T 2 contains more and longer translation pieces with higher scores.",
"The five translation pieces contained only in T 2 are collected from the retrieved sentence pair shown in Figure 3, which has high sentence similarity with the input sentence.",
"The three translation pieces contained only in T 1 are also translation pieces collected for the input sentence, but have lower scores, because they are collected from sentence pairs with lower similarities with the input sentence.",
"This shows that computing scores for translation pieces based on sentence similarities is important for the performance of our method.",
"If we assign score 1 to all translation pieces contained in GX , i.e., use 1/0 reward for translation pieces and non-translation pieces, then the performance of our method decreased significantly as shown in Table 9, but still outperformed the NMT baseline significantly.",
"The basic idea of our method is rewarding n grams that occur in the training set during NMT decoding.",
"We found our method is especially useful to help the translation for infrequent n -grams.",
"First, we count how many times a target n -gram u occurs in the training set D train as, Occur ( u ) = |{ Y : h X, Y i D train u uniq ( Y ) }| (13) where uniq ( Y ) is the set of uniq n -grams (up to 4 -grams) contained in Y .",
"Given system outputs (cid:8) Z k : 1 k K (cid:9) for the test set (cid:8) X k : 1 k K (cid:9) with reference (cid:8) Y k : 1 k K (cid:9) , we count the number of correctly translated n -grams that occur times in the training set as, Count = KX k =1 (cid:12) (cid:12)(cid:12) (cid:16) , Z k , Y k (cid:17) (cid:12) (cid:12)(cid:12) (14) where (cid:16) , Z k , Y k (cid:17) = n u : u (cid:16) uniq (cid:16) Z k (cid:17) uniq (cid:16) Y k (cid:17)(cid:17) Occur ( u ) = o (15) Table 10 shows Count for different system outputs.",
"As we can see, our method helped little for the translation of n -grams that do not occur 1331 en-de en-fr en-es Base NMT decoding 0.215 0.224 0.227 Search engine retrieval 0.016 0.017 0.016 TP collection 0.521 0.522 0.520 Our NMT decoding 0.306 0.287 0.289 Table 11: Translation time (seconds).",
"in the training set, which is reasonable because we only reward n -grams that occur in the training set.",
"However, our method helped significantly for the translation of n -grams that do occur in the training set but are infrequent (occur less than 5 times).",
"As the frequency of n -grams increases, the improvement caused by our method decreased.",
"We analyze that the reason why our method is especially helpful for infrequent n -grams is that NMT is trained on the whole training corpus for maximum likelihood and tends to generate more frequent n -grams while our method computes scores for the collected translation pieces based on sentence similarities and does not prefer more frequent n -grams.",
"Our method only collects translation pieces to help NMT for translating a new sentence and does not influence the training process of NMT.",
"Therefore, our method does not increase the NMT training time.",
"Table 11 shows the average time needed for translating one input sentence in the development set in our experiments.",
"The search engine retrieval and translation piece (TP) collection time is computed on a 3.47GHz Intel Xeon X5690 machine using one CPU.",
"The NMT decoding time is computed using one GPU GeForce GTX 1080.",
"As we can see, the search engine retrieval time is negligible and the increase of NMT decoding time caused by our method is also small.",
"However, 0.2 0.22 0.24 0.26 0.28 0.3 0 1 2 5 10 20 50 100 Figure 5: NMT decoding time (seconds) with different search engine retrieval sizes.",
"collecting translation pieces needed considerable time, although our implementation was in Python and could potentially be significantly faster in a more efficient programming language.",
"The translation piece collection step mainly consists of two parts: computing the edit distances between the input sentence and the retrieved source sentences using dynamic programming with time complexity O (cid:0) n 2 (cid:1) ; collecting translation pieces using Algorithm 1 with time complexity O (4 n ) .",
"We changed the size of sentence pairs retrieved by the search engine and analyze its influence on translation performance and time.",
"Figure 4, 5 and 6 show the translation piece collection time, the NMT decoding time and translation BLEU scores with different search engine retrieval sizes for the en-fr task.",
"As we can see, as the number of retrieved sentences decreased, the time needed by translation piece collection decreased significantly, the translation performance decreased much less significantly and the NMT decoding time is further reduced.",
"In our experiments, 10 is a good setting for the retrieval size, which gave significant BLEU score improvements and caused little increase in the total translation time compared to the NMT baseline.",
"We compared our method with the search engine guided NMT (SGNMT) model (Gu et al., 2017).",
"We got their preprocessed datasets and tested our method on their datasets, in order to fairly compare our method with their reported BLEU scores.",
"6 Table 12 shows the results of their method and our method with the same settings for the baseline NMT system.",
"As we can see, our method generally outperformed their method on the three translation tasks.",
"Considering the computational complexity, their method also performs search engine retrieval for each input sentence and computes the edit distance between the input sentence and the retrieved source sentences as our method.",
"In addition, their method runs the NMT model for each retrieved sentence pair to obtain the NMT encoding and decoding information of the retrieved sentences as key-value memory to guide the NMT model for translating the real input sentence, which changes the NMT model structure and increases both the training-time and test-time computational cost.",
"Specifically, at test time, running the NMT model for one retrieved sentence pair costs the same time as translating the retrieved source sentence with beam size",
"1. Therefore, as the number of the retrieved sentence pairs increases to the beam size of the baseline NMT model, their method doubles the translation time.",
"This paper presents a simple and effective method that retrieves translation pieces to guide NMT for narrow domains.",
"We first exploit a search engine to retrieve sentence pairs whose source sides are similar with the input sentence, from which we 6 Only BLEU scores are reported in their paper.",
"collect and weight translation pieces for the input sentence based on word-level alignments and sentence similarities.",
"Then we use an existing NMT model to translate this input sentence and give an additional bonus to outputs that contain the collected translation pieces.",
"We show our method improved NMT translation results up to 6 BLEU points on three narrow domain translation tasks, caused little increase in the translation time, and compared favorably to another alternative retrieval-based method with respect to accuracy, speed, and simplicity of implementation.",
"We thank Jiatao Gu for providing their preprocessed datasets in Section 4.5."
] | [
"abstain",
"objective",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"result",
"other"
] |
[
"Variational autoencoders (VAE) combined with hierarchical RNNs have emerged as a powerful framework for conversation modeling.",
"However, they suffer from the notorious degeneration problem, where the decoders learn to ignore latent variables and reduce to vanilla RNNs.",
"We empirically show that this degeneracy occurs mostly due to two reasons.",
"First, the expressive power of hierarchical RNN decoders is often high enough to model the data using only its decoding distributions without relying on the latent variables.",
"Second, the conditional VAE structure whose generation process is conditioned on a context, makes the range of training targets very sparse; that is, the RNN decoders can easily overfit to the training data ignoring the latent variables.",
"To solve the degeneration problem, we propose a novel model named Variational Hierarchical Conversation RNNs (VHCR), involving two key ideas of (1) using a hierarchical structure of latent variables, and (2) exploiting an utterance drop regularization.",
"With evaluations on two datasets of Cornell Movie Dialog and Ubuntu Dialog Corpus, we show that our VHCR successfully utilizes latent variables and outperforms state-of-the-art models for conversation generation.",
"Moreover, it can perform several new utterance control tasks, thanks to its hierarchical latent structure.",
"Conversation modeling has been a long interest of natural language research.",
"Recent approaches for data-driven conversation modeling mostly build upon recurrent neural networks (RNNs) (Vinyals and Le, 2015; Sordoni et al., 2015b; Shang et al., 2015; Li et al., 2017; Serban et al., 2016).",
"Serban et al. (2016) use a hierarchical RNN structure to model the context of conversation.",
"Serban et al. (2017) further exploit an utterance latent variable in the hierarchical RNNs by incorporating the variational autoencoder (VAE) framework (Kingma and Welling, 2014; Rezende et al., 2014).",
"VAEs enable us to train a latent variable model for natural language modeling, which grants us several advantages.",
"First, latent variables can learn an interpretable holistic representation, such as topics, tones, or high-level syntactic properties.",
"Second, latent variables can model inherently abundant variability of natural language by encoding its global and long-term structure, which is hard to be captured by shallow generative processes ( e.g . vanilla RNNs) where the only source of stochasticity comes from the sampling of output words.",
"In spite of such appealing properties of latent variable models for natural language modeling, VAEs suffer from the notorious degeneration problem (Bowman et al., 2016; Chen et al., 2017) that occurs when a VAE is combined with a powerful decoder such as autoregressive RNNs.",
"This issue makes VAEs ignore latent variables, and eventually behave as vanilla RNNs.",
"Chen et al. (2017) also note this degeneration issue by showing that a VAE with a RNN decoder prefers to model the data using its decoding distribution rather than using latent variables, from bits-back coding perspective.",
"To resolve this issue, several heuristics have been proposed to weaken the decoder, enforcing the models to use latent variables.",
"For example, Bowman et al. (2016) propose some heuristics, including KL annealing and word drop regularization.",
"However, these heuristics cannot be a complete solution; for example, we observe that they fail to prevent the degeneracy in VHRED (Serban et al., 2017), a conditional VAE model equipped with hierarchical RNNs for conversation modeling.",
"The objective of this work is to propose a novel VAE model that significantly alleviates the degen-1792 eration problem.",
"Our analysis reveals that the causes of the degeneracy are two-fold.",
"First, the hierarchical structure of autoregressive RNNs is powerful enough to predict a sequence of utterances without the need of latent variables, even with the word drop regularization.",
"Second, we newly discover that the conditional VAE structure where an utterance is generated conditioned on context, i.e .",
"a previous sequence of utterances, induces severe data sparsity.",
"Even with a large-scale training corpus, there only exist very few target utterances when conditioned on the context.",
"Hence, the hierarchical RNNs can easily memorize the context-to-utterance relations without relying on latent variables.",
"We propose a novel model named Variational Hierarchical Conversation RNN (VHCR), which involves two novel features to alleviate this problem.",
"First, we introduce a global conversational latent variable along with local utterance latent variables to build a hierarchical latent structure.",
"Second, we propose a new regularization technique called utterance drop .",
"We show that our hierarchical latent structure is not only crucial for facilitating the use of latent variables in conversation modeling, but also delivers several additional advantages, including gaining control over the global context in which the conversation takes place.",
"(1) We reveal that the existing conditional VAE model with hierarchical RNNs for conversation modeling ( e.g .",
"(Serban et al., 2017)) still suffers from the degeneration problem, and this problem is caused by data sparsity per context that arises from the conditional VAE structure, as well as the use of powerful hierarchical RNN decoders.",
"(2) We propose a novel variational hierarchical conversation RNN (VHCR), which has two distinctive features: a hierarchical latent structure and a new regularization of utterance drop.",
"To the best of our knowledge, our VHCR is the first VAE conversation model that exploits the hierarchical latent structure.",
"(3) With evaluations on two benchmark datasets of Cornell Movie Dialog (Danescu-Niculescu-Mizil and Lee, 2011) and Ubuntu Dialog Corpus (Lowe et al., 2015), we show that our model improves the conversation performance in multiple metrics over state-of-the-art methods, including HRED (Serban et al., 2016), and VHRED (Ser-ban et al., 2017) with existing degeneracy solutions such as the word drop (Bowman et al., 2016), and the bag-of-words loss (Zhao et al., 2017).",
"Conversation Modeling .",
"One popular approach for conversation modeling is to use RNN-based encoders and decoders, such as (Vinyals and Le, 2015; Sordoni et al., 2015b; Shang et al., 2015).",
"Hierarchical recurrent encoder-decoder (HRED) models (Sordoni et al., 2015a; Serban et al., 2016, 2017) consist of utterance encoder and decoder, and a context RNN which runs over utterance representations to model long-term temporal structure of conversation.",
"Recently, latent variable models such as VAEs have been adopted in language modeling (Bow-man et al., 2016; Zhang et al., 2016; Serban et al., 2017).",
"The VHRED model (Serban et al., 2017) integrates the VAE with the HRED to model Twitter and Ubuntu IRC conversations by introducing an utterance latent variable.",
"This makes a conditional VAE where the generation process is conditioned on the context of conversation.",
"Zhao et al. (2017) further make use of discourse act labels to capture the diversity of conversations.",
"Degeneracy of Variational Autoencoders .",
"For sequence modeling, VAEs are often merged with the RNN encoder-decoder structure (Bowman et al., 2016; Serban et al., 2017; Zhao et al., 2017) where the encoder predicts the posterior distribution of a latent variable z , and the decoder models the output distributions conditioned on z .",
"However, Bowman et al. (2016) report that a VAE with a RNN decoder easily degenerates; that is, it learns to ignore the latent variable z and falls back to a vanilla RNN.",
"They propose two techniques to alleviate this issue: KL annealing and word drop .",
"Chen et al. (2017) interpret this degeneracy in the context of bits-back coding and show that a VAE equipped with autoregressive models such as RNNs often ignores the latent variable to minimize the code length needed for describing data.",
"They propose to constrain the decoder to selectively encode the information of interest in the latent variable.",
"However, their empirical results are limited to an image domain.",
"Zhao et al. (2017) use an auxiliary bag-of-words loss on the latent variable to force the model to use z .",
"That is, they train an auxiliary network that predicts bag-of-words representation of the target utterance based on z .",
"Yet this loss works in an opposite di-1793 rection to the original objective of VAEs that minimizes the minimum description length.",
"Thus, it may be in danger of forcibly moving the information that is better modeled in the decoder to the latent variable.",
"We assume that the training set consists of N i.i.d samples of conversations { c 1 , c 2 , ..., c N } where each c i is a sequence of utterances ( i.e . sentences) { x i 1 , x i 2 , ..., x in i } .",
"Our objective is to learn the parameters of a generative network using Maximum Likelihood Estimation (MLE): arg max X i log p ( c i ) (1) We first briefly review the VAE, and explain the degeneracy issue before presenting our model.",
"We follow the notion of Kingma and Welling (2014).",
"A datapoint x is generated from a latent variable z , which is sampled from some prior distribution p ( z ) , typically a standard Gaussian distribution N ( z | 0 , I ) .",
"We assume parametric families for conditional distribution p ( x | z ) .",
"Since it is intractable to compute the log-marginal likelihood log p ( x ) , we approximate the intractable true posterior p ( z | x ) with a recognition model q ( z | x ) to maximize the variational lower-bound : log p ( x ) L ( , ; x ) (2) = E q ( z | x ) [ log q ( z | x ) + log p ( x , z )] = DKL ( q ( z | x ) k p ( z ))+ E q ( z | x ) [log p ( x | z )] Eq.",
"2 is decomposed into two terms: KL divergence term and reconstruction term.",
"Here, KL divergence measures the amount of information encoded in the latent variable z .",
"In the extreme where KL divergence is zero, the model completely ignores z , i.e. it degenerates.",
"The expectation term can be stochastically approximated by sampling z from the variational posterior q ( z | x ) .",
"The gradients to the recognition model can be ef-ficiently estimated using the reparameterization trick (Kingma and Welling, 2014).",
"Serban et al. (2017) propose Variational Hierarchical Recurrent Encoder Decoder (VHRED) model",
"for conversation modeling.",
"It integrates an utterance latent variable z utt t into the HRED structure (Sordoni et al., 2015a) which consists of three RNN components: encoder RNN , context RNN , and decoder RNN .",
"Given a previous sequence of utterances x 1 , ... x t 1 in a conversation, the VHRED generates the next utterance x t as: h enc t 1 = f enc ( x t 1 ) (3) h cxt t = f cxt ( h cxt t 1 , h enc t 1 ) (4) p ( z utt t | x <t ) = N ( z | t , t I ) (5) where t = MLP ( h cxt t ) (6) t = Softplus(MLP ( h cxt t )) (7) p ( x t | x <t ) = f dec ( x | h cxt t , z utt t ) (8) At time step t , the encoder RNN f enc takes the previous utterance x t 1 and produces an encoder vector h enc t 1 (Eq. 3).",
"The context RNN f cxt models the context of the conversation by updating its hidden states using the encoder vector (Eq. 4).",
"The context h cxt t defines the conditional prior p ( z utt t | x <t ) , which is a factorized Gaussian distribution whose mean t and diagonal variance t are given by feed-forward neural networks (Eq. 5-7).",
"Finally the decoder RNN f dec generates the utterance x t , conditioned on the context vector h cxt t and the latent variable z utt t (Eq. 8).",
"We make two important notes: (1) the context RNN can be viewed as a high-level decoder, and together with the decoder RNN, they comprise a hierarchical RNN decoder.",
"(2) VHRED follows a conditional VAE structure where each utterance x t is generated conditioned on the context h cxt t (Eq. 5-8).",
"The variational posterior is a factorized Gaussian distribution where the mean and the diagonal variance are predicted from the target utterance and the context as follows: q ( z utt t | x t ) = N ( z | 0 t , 0 t I ) (9) where 0 t = MLP ( x t , h cxt t ) (10) 0 t = Softplus(MLP ( x t , h cxt t )) (11) 3.3 The Degeneration Problem A known problem of a VAE that incorporates an autoregressive RNN decoder is the degeneracy that ignores the latent variable z .",
"In other words, the KL divergence term in Eq.",
"2 goes to zero and the decoder fails to learn any dependency between the latent variable and the data.",
"Eventually, the model behaves as a vanilla RNN.",
"This problem is 1794 0 5 10 15 20 25 30 35 40 45 50 55 Epoch 0.0 0.1 0.2 0.3 0.4 0.5 0.6 KL d i v e r g e n c e KL divergence 0.0 0.2 0.4 0.6 0.8 1.0 KL m u l t i p li e r KL multiplier Figure 1 : Degeneration of VHRED.",
"The KL divergence term continuously decreases as training proceeds, meaning that the decoder ignores the latent variable z utt .",
"We train the VHRED on Cornell Movie Dialog Corpus with word drop and KL annealing.",
"first reported in the sentence VAE (Bowman et al., 2016), in which following two heuristics are proposed to alleviate the problem by weakening the decoder.",
"First, the KL annealing scales the KL divergence term of Eq.",
"2 using a KL multiplier , which gradually increases from 0 to 1 during training: L ( , ; x ) = D KL ( q ( z | x ) k p ( z )) (12) + E q ( z | x ) [log p ( x | z )] This helps the optimization process to avoid local optima of zero KL divergence in early training.",
"Second, the word drop regularization randomly replaces some conditioned-on word tokens in the RNN decoder with the generic unknown word token (UNK) during training.",
"Normally, the RNN decoder predicts each next word in an autoregressive manner, conditioned on the previous sequence of ground truth (GT) words.",
"By randomly replacing a GT word with an UNK token, the word drop regularization weakens the autoregressive power of the decoder and forces it to rely on the latent variable to predict the next word.",
"The word drop probability is normally set to 0.25, since using a higher probability may degrade the model performance (Bowman et al., 2016).",
"However, we observe that these tricks do not solve the degeneracy for the VHRED in conversation modeling.",
"An example in Fig. 1 shows that the VHRED learns to ignore the utterance latent variable as the KL divergence term falls to zero.",
"Figure 2 : The average ratio E [ 2 t ] / Var ( t ) when the decoder is only conditioned on z utt t .",
"The ratio drops to zero as training proceeds, indicating that the conditional priors p ( z utt t | x <t ) degenerate to separate point masses.",
"The decoder RNN of the VHRED in Eq.",
"8 conditions on two information sources: deterministic h cxt t and stochastic z utt .",
"In order to check whether the presence of deterministic source h cxt t causes the degeneration, we drop the deterministic h cxt t and condition the decoder only on the stochastic utterance latent variable z utt : p ( x t | x <t ) = f dec ( x | z utt t ) (13) While this model achieves higher values of KL divergence than original VHRED, as training proceeds it again degenerates with the KL divergence term reaching zero (Fig. 2).",
"To gain an insight of the degeneracy, we examine how the conditional prior p ( z utt t | x <t ) (Eq. 5) of the utterance latent variable changes during training, using the model above (Eq. 13).",
"Fig. 2 plots the ratios of E [ 2 t ] / Var ( t ) , where E [ 2 t ] indicates the within variance of the priors, and Var ( t ) is the between variance of the priors.",
"Note that traditionally this ratio is closely related to Analysis of Variance (ANOVA) (Lomax and Hahs-Vaughn, 2013).",
"The ratio gradually falls to zero, implying that the priors degenerate to separate point masses as training proceeds.",
"Moreover, we find that the degeneracy of priors coincide with the degeneracy of KL divergence, as shown in (Fig. 2).",
"This is intuitively natural: if the prior is already narrow enough to specify the target utterance, there is little pressure to encode any more information in the variational posterior for reconstruction of the target utterance.",
"Figure 3 : Graphical representation of the Variational Hierarchical Conversation RNN (VHCR).",
"The global latent variable z conv provides a global context in which the conversation takes place.",
"This empirical observation implies that the fundamental reason behind the degeneration may originate from combination of two factors: (1) strong expressive power of the hierarchical RNN decoder and (2) training data sparsity caused by the conditional VAE structure.",
"The VHRED is trained to predict a next target utterance x t conditioned on the context h cxt t which encodes information about previous utterances { x 1 , . . . , x t 1 } .",
"However, conditioning on the context makes the range of training target x t very sparse; even in a large-scale conversation corpus such as Ubuntu Dialog (Lowe et al., 2015), there exist one or very few target utterances per context.",
"Therefore, hierarchical RNNs, given their autoregressive power, can easily overfit to training data without using the latent variable.",
"Consequently, the VHRED will not encode any information in the latent variable, i.e. it degenerates.",
"It explains why the word drop fails to prevent the degeneracy in the VHRED.",
"The word drop only regularizes the decoder RNN; however, the context RNN is also powerful enough to predict a next utterance in a given context even with the weakened decoder RNN.",
"Indeed we observe that using a larger word drop probability such as 0.5 or 0.75 only slows down, but fails to stop the KL divergence from vanishing.",
"As discussed, we argue that the two main causes of degeneration are",
"i) the expressiveness of the hierarchical RNN decoders, and",
"ii) the conditional VAE structure that induces data sparsity.",
"This finding hints us that in order to train a nondegenerate latent variable model, we need to design a model that provides an appropriate way to regularize the hierarchical RNN decoders and alleviate data sparsity per context.",
"At the same time, the model should be capable of modeling complex structure of conversation.",
"Based on these insights, we propose a novel VAE structure named Variational Hierarchical Conversation RNN (VHCR), whose graphical model is illustrated in Fig. 3.",
"Below we first describe the model, and discuss its unique features.",
"We introduce a global conversation latent variable z conv which is responsible for generating a sequence of utterances of a conversation c = { x 1 , . . . , x n } : p ( c | z conv ) = p ( x 1 , . . . , x n | z conv ) (14) Overall, the VHCR builds upon the hierarchical RNNs, following the VHRED (Serban et al., 2017).",
"One key update is to form a hierarchical latent structure, by using the global latent variable z conv per conversation, along with local the latent variable z utt t injected at each utterance (Fig. 3): h enc t = f enc ( x t ) (15) h cxt t = ( MLP ( z conv ) , if t = 0 f cxt ( h cxt t 1 , h enc t 1 , z conv ) , otherwise p ( x t | x <t , z utt t , z conv ) = f dec ( x | h cxt t , z utt t , z conv ) p ( z conv ) = N ( z | 0 , I ) (16) p ( z utt t | x <t , z conv ) = N ( z | t , t I ) (17) where t = MLP ( h cxt t , z conv ) (18) t = Softplus(MLP ( h cxt t , z conv )) .",
"(19)",
"q ( z conv | x 1 , ..., x n ) = N ( z | conv , conv I ) (20) where h conv = f conv ( h enc 1 , ..., h enc n ) (21) conv = MLP ( h conv ) (22) conv = Softplus ( MLP ( h conv )) .",
"(23)",
"The posteriors for local variables z utt t are then conditioned on z conv : q ( z utt t | x 1 , ..., x n , z conv ) = N ( z | 0 t , 0 t I ) (24) where 0 t = MLP ( x t , h cxt t , z conv ) (25) 0 t = Softplus ( MLP ( x t , h cxt t , z conv )) .",
"Our solution of VHCR to the degeneration problem is based on two ideas.",
"The first idea is to build a hierarchical latent structure of z conv for 1796 5 10 15 20 25 30 Epoch 0.0 0.1 0.2 0.3 0.4 0.5 0.6 KL d i v e r g e n c e VHRED + w.d VHCR + u.d VHRED + u.d Figure 4 : The comparison of KL divergences.",
"The VHCR with the utterance drop shows high and stable KL divergence, indicating the active use of latent variables.",
"w.d and u.d denote the word drop and the utterance drop, respectively.",
"a conversation and z utt t for each utterance.",
"As z conv is independent of the conditional structure, it does not suffer from the data sparsity problem.",
"However, the expressive power of hierarchical RNN decoders makes the model still prone to ignore latent variables z conv and z utt t .",
"Therefore, our second idea is to apply an utterance drop regularization to effectively regularize the hierarchical RNNs, in order to facilitate the use of latent variables.",
"That is, at each time step, the utterance encoder vector h enc t is randomly replaced with a generic unknown vector h unk with a probability p .",
"This regularization weakens the autoregressive power of hierarchical RNNs and as well alleviates the data sparsity problem, since it induces noise into the context vector h cxt t which conditions the decoder RNN.",
"The difference with the word drop (Bowman et al., 2016) is that our utterance drop depresses the hierarchical RNN decoders as a whole, while the word drop only weakens the lower-level decoder RNNs.",
"Fig. 4 confirms that with the utterance drop with a probability of 0 .",
"25 , the VHCR effectively learns to use latent variables, achieving a significant degree of KL divergence.",
"Is the hierarchical latent structure of the VHCR crucial for effective utilization of latent variables?",
"We investigate this question by applying the utterance drop on the VHRED which lacks any hierarchical latent structure.",
"We observe that the KL divergence still vanishes (Fig. 4), even though the utterance drop injects considerable noise in the context h cxt t .",
"We argue that the utterance drop weakens the context RNN, thus it consequently fail to predict a reasonable prior distribution for z utt (Eq. 5-7).",
"If the prior is far away from the region of z utt that can generate a correct target utterance, encoding information about the target in the variational posterior will incur a large KL divergence penalty.",
"If the penalty outweighs the gain of the reconstruction term in Eq.",
"2, then the model would learn to ignore z utt , in order to maximize the variational lower-bound in Eq.",
"2.",
"On the other hand, the global variable z conv allows the VHCR to predict a reasonable prior for local variable z utt t even in the presence of the utterance drop regularization.",
"That is, z conv can act as a guide for z utt by encoding the information for local variables.",
"This reduces the KL divergence penalty induced by encoding information in z utt to an affordable degree at the cost of KL divergence caused by using z conv .",
"This trade-off is indeed a fundamental strength of hierarchical models that provide parsimonious representation; if there exists any shared information among the local variables, it is coded in the global latent variable reducing the code length by effectively reusing the information.",
"The remaining local variability is handled properly by the decoding distribution and local latent variables.",
"The global variable z conv provides other bene-fits by representing a latent global structure of a conversation, such as a topic, a length, and a tone of the conversation.",
"Moreover, it allows us to control such global properties, which is impossible for models without hierarchical latent structure.",
"We first describe our experimental setting, such as datasets and baselines (section 4.1).",
"We then report quantitative comparisons using three different metrics (section 4.24.4).",
"Finally, we present qualitative analyses, including several utterance control tasks that are enabled by the hierarchal latent structure of our VHCR (section 4.5).",
"We defer implementation details and additional experiment results to the supplementary file.",
"Datasets .",
"We evaluate the performance of conversation generation using two benchmark datasets: 1) Cornell Movie Dialog Corpus (Danescu-1797 Model NLL Recon.",
"Table 1 : Results of Negative Log-likelihood.",
"The inequalities denote the variational bounds.",
"w.d and u.d., and bow denote the word drop, the utterance drop, and the auxiliary bag-of-words loss respectively.",
"Niculescu-Mizil and Lee, 2011), containing 220,579 conversations from 617 movies.",
"2) Ubuntu Dialog Corpus (Lowe et al., 2015), containing about 1 million multi-turn conversations from Ubuntu IRC channels.",
"In both datasets, we truncate utterances longer than 30 words.",
"Baselines .",
"We compare our approach with four baselines.",
"They are combinations of two state-of-the-art models of conversation generation with different solutions to the degeneracy.",
"(i) Hierarchical recurrent encoder-decoder (HRED) (Serban et al., 2016),",
"(ii) Variational HRED (VHRED) (Ser-ban et al., 2017),",
"(iii) VHRED with the word drop (Bowman et al., 2016), and",
"(iv) VHRED with the bag-of-words (bow) loss (Zhao et al., 2017).",
"Performance Measures .",
"Automatic evaluation of conversational systems is still a challenging problem (Liu et al., 2016).",
"Based on literature, we report three quantitative metrics:",
"i) the negative log-likelihood (the variational bound for variational models),",
"ii) embedding-based metrics (Serban et al., 2017), and",
"iii) human evaluation via Amazon Mechanical Turk (AMT).",
"Table 1 summarizes the per-word negative log-likelihood (NLL) evaluated on the test sets of two datasets.",
"For variational models, we instead present the variational bound of the negative log-likelihood in Eq.",
"2, which consists of the reconstruction error term and the KL divergence term.",
"The KL divergence term can measure how much each model utilizes the latent variables.",
"We observe that the NLL is the lowest by the HRED.",
"Variational models show higher NLLs, because they are regularized methods that are forced to rely more on latent variables.",
"Independent of NLL values, we later show that the latent variable models often show better generalization performance in terms of embedding-based metrics and human evaluation.",
"In the VHRED, the KL divergence term gradually vanishes even with the word drop regularization; thus, early stopping is necessary to obtain a meaningful KL divergence.",
"The VHRED with the bag-of-words loss (bow) achieves the highest KL divergence, however, at the cost of high NLL values.",
"That is, the variational lower-bound minimizes the minimum description length, to which the bow loss works in an opposite direction by forcing latent variables to encode bag-of-words representation of utterances.",
"Our VHCR achieves stable KL divergence without any auxiliary objective, and the NLL is lower than the VHRED + bow model.",
"Table 2 summarizes how global and latent variable are used in the VHCR.",
"We observe that VHCR encodes a significant amount of information in the global variable z conv as well as in the local variable z utt , indicating that the VHCR successfully exploits its hierarchical latent structure.",
"The embedding-based metrics (Serban et al., 2017; Rus and Lintean, 2012) measure the textual similarity between the words in the model response and the ground truth.",
"We represent words using Word2Vec embeddings trained on the Google News Corpus 1 .",
"The average metric projects each utterance to a vector by taking the mean over word embeddings in the utterance, and computes the cosine similarity between the model response vector and the ground truth vector.",
"The extrema metric is similar to the average metric, only except that it takes the extremum of each di-1 https://code.google.com/archive/p/word2vec/.",
"Table 3 : Results of embedding-based metrics.",
"1-turn and 3-turn responses of models per context.",
"mension, instead of the mean.",
"The greedy metric first finds the best non-exclusive word alignment between the model response and the ground truth, and then computes the mean over the cosine similarity between the aligned words.",
"Table 3 compares the different methods with three embedding-based metrics.",
"Each model generates a single response (1-turn) or consecutive three responses (3-turn) for a given context.",
"For 3-turn cases, we report the average of metrics measured for three turns.",
"We use the greedy decoding for all the models.",
"Our VHCR achieves the best results in most metrics.",
"The HRED is the worst on the Cornell Movie dataset, but outperforms the VHRED and VHRED + bow on the Ubuntu Dialog dataset.",
"Although the VHRED + bow shows the highest KL divergence, its performance is similar to that of VHRED, and worse than that of the VHCR model.",
"It suggests that a higher KL divergence does not necessarily lead to better performance; it is more important for the models to balance the modeling powers of the decoder and the latent variables.",
"The VHCR uses a more sophisticated hierarchical latent structure, which better reflects the structure of natural language conversations.",
"Table 4 reports human evaluation results via Amazon Mechanical Turk (AMT).",
"The VHCR outperforms the baselines in both datasets; yet the performance improvement in Cornell Movie Dialog are less significant compared to that of Ubuntu.",
"We empirically find that Cornell Movie dataset is small in size, but very diverse and complex in content and style, and the models often fail to generate sensible responses for the context.",
"The performance gap with the HRED is the smallest, suggesting that the VAE models without hierarchical latent structure have overfitted to Cornell Movie dataset.",
"Comparison of Predicted Responses .",
"Table 5 compares the generated responses of algorithms.",
"Overall, the VHCR creates more consistent responses within the context of a given conversation.",
"This is supposedly due to the global latent variable z conv that provides a more direct and effective way to handle the global context of a conversation.",
"The context RNN of the baseline models can handle long-term context to some extent, but not as much as the VHCR.",
"Interpolation on z conv .",
"We present examples of one advantage by the hierarchical latent structure of the VHCR, which cannot be done by the other existing models.",
"Table 6 shows how the generated responses vary according to the interpolation on z conv .",
"We randomly sample two z conv from a standard Gaussian prior as references ( i.e . the top and the bottom row of Table 6), and interpolate points between them.",
"We generate 3-turn conversations conditioned on given z conv .",
"We see that z conv controls the overall tone and content of conversations; for example, the tone of the response is friendly in the first sample, but gradually becomes hostile as z conv changes.",
"Generation on a Fixed z conv .",
"We also study how fixing a global conversation latent variable z conv affects the conversation generation.",
"Table 7 shows an example, where we randomly fix a reference z conv from the prior, and generate multiple examples of 3-turn conversation using randomly sampled local variables z utt .",
"We observe that z conv heavily affects the form of the first utterance; in the examples, the first utterances all start with a where phrase.",
"At the same time, responses show 1799 Cornell Ubuntu Opponent Wins Losses Ties Wins Losses Ties VHCR vs HRED 28 .",
"Table 4 : Results of human evaluation via AMT.",
"Human turkers are asked to choose which response is more appropriate in a given context, without knowing which algorithms generate which responses.",
"For each pair of models, we carry out three evaluation batches, each of which consists of 100 random test samples evaluated by five unique humans.",
"We report mean preferences with 90% confidence interval.",
"Table 5 : Qualitative comparison of generated responses.",
"Top two rows show the samples from Cornell Movie Dialog, while the bottom two rows are from Ubuntu Dialog.",
"Table 6 : An example of interpolated 3-turn responses over z conv on Cornell Movie Dialog.",
"variations according to different local variables z utt .",
"These examples show that the hierarchical latent structure of VHCR allows both global and fine-grained control over generated conversations.",
"We introduced the variational hierarchical conversation RNN (VHCR) for conversation modeling.",
"We noted that the degeneration problem in existing VAE models such as the VHRED is persistent, and proposed a hierarchical latent variable model with the utterance drop regularization.",
"Our VHCR obtained higher and more stable KL divergences than various versions of VHRED models without using any auxiliary objective.",
"The empir-where is she?",
"Table 7 : An example of 3-turn responses conditioned on sampled z utt for a single fixed z conv .",
"ical results showed that the VHCR better reflected the structure of natural conversations, and outperformed previous models.",
"Moreover, the hierarchical latent structure allowed both global and fine-grained control over the conversation generation.",
"This work was supported by Kakao and Kakao Brain corporations, and Creative-Pioneering Researchers Program through Seoul National University.",
"Gunhee Kim is the corresponding author."
] | [
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Natural language allows us to refer to novel composite concepts by combining expressions denoting their parts according to systematic rules, a property known as compositionality .",
"In this paper, we study whether the language emerging in deep multi-agent simulations possesses a similar ability to refer to novel primitive combinations, and whether it accomplishes this feat by strategies akin to human-language compositionality.",
"Equipped with new ways to measure compositionality in emergent languages inspired by disentanglement in representation learning, we establish three main results.",
"First, given sufficiently large input spaces, the emergent language will naturally develop the ability to refer to novel composite concepts.",
"Second, there is no correlation between the degree of compositionality of an emergent language and its ability to generalize.",
"Third, while compositionality is not necessary for generalization, it provides an advantage in terms of language transmission: The more compositional a language is, the more easily it will be picked up by new learners, even when the latter differ in architecture from the original agents.",
"We conclude that compositionality does not arise from simple generalization pressure, but if an emergent language does chance upon it, it will be more likely to survive and thrive.",
"Most concepts we need to express are composite in some way.",
"Language gives us the prodigious ability to assemble messages referring to novel composite concepts by systematically combining expressions denoting their parts.",
"As interest raises in developing deep neural agents evolving a communication code to better accomplish cooperative tasks, the question arises of how the emergent code can be Contributed equally.",
"endowed with the same desirable compositionality property (Kottur et al., 2017; Lazaridou et al., 2018; Mordatch and Abbeel, 2018; Cogswell et al., 2019; Li and Bowling, 2019).",
"This in turn requires measures of how compositional an emergent language is (Andreas, 2019).",
"Compositionality is a core notion in linguistics (Partee, 2004), but linguists' definitions assume full knowledge of primitive expressions and their combination rules, which we lack when analyzing emergent languages (Nefdt, 2020).",
"Also, these definitions are categorical, whereas to compare emergent languages we need to quantify degrees of compositionality.",
"Some researchers equate compositionality with the ability to correctly refer to unseen composite inputs (e.g., Kottur et al., 2017; Cogswell et al., 2019).",
"This approach measures the generalization ability of a language, but it does not provide any insights on how this ability comes about.",
"Indeed, one of our main results below is that emergent languages can attain perfect generalization without abiding to intuitive notions of compositionality.",
"Topographic similarity has become the standard way to quantify the compositionality of emergent languages (e.g., Brighton and Kirby, 2006; Lazaridou et al., 2018; Li and Bowling, 2019).",
"This metric measures whether the distance between two meanings correlates with the distance between the messages expressing them.",
"While more informative than generalization, topographic similarity is still rather agnostic about the nature of composition.",
"For example, when using, as is standard practice, Levenshtein distance to measure message distance, an emergent language transparently concatenating symbols in a fixed order and one mixing deletion and insertion operations on free-ordered symbols can have the same topographic similarity.",
"We introduce here two more opinionated measures of compositionality that capture some intuitive properties of what we would expect to happen in a compositional emergent language.",
"One possibility we consider is that order-independent juxtapositions of primitive forms could denote the corresponding union of meanings, as in English noun conjunctions: cats and dogs , dogs and cats .",
"The second still relies on juxtaposition, but exploits order to denote different classes of meanings, as in English adjective-noun phrases: red triangle , blue square .",
"Both strategies result in disentangled messages, where each primitive symbol (or sym-bol+position pair) univocally refers to a distinct primitive meaning independently of context.",
"We consequently take inspiration from work on disentanglement in representation learning (Suter et al., 2019) to craft measures that quantify whether an emergent language follows one of the proposed composition strategies.",
"Equipped with these metrics, we proceed to ask the following questions.",
"First, are neural agents able to generalize to unseen input combinations in a simple communication game?",
"We find that generalizing languages reliably emerge when the input domain is sufficiently large.",
"This somewhat expected result is important nevertheless, as failure-to-generalize claims in the recent literature are often based on very small input spaces.",
"Second, we unveil a complex interplay between compositionality and generalization.",
"On the one hand, there is no correlation between our compositionality metrics and the ability to generalize, as emergent languages successfully refer to novel composite concepts in inscrutablly entangled ways.",
"(Order-dependent) compositionality, however, if not necessary, turns out to be a sufficient condition for generalization.",
"Finally, more compositional languages are easier to learn for new agents, including agents that are architecturally different from the ones that evolved the language.",
"This suggests that, while composition might not be a natural outcome of the need to generalize, it is a highly desirable one, as compositional languages will more easily be adopted by a large community of different agents.",
"We return to the implications of our findings in the discussion.",
"We designed a variant of Lewis' signaling game (Lewis, 1969).",
"The game proceeds as follows: 1. Sender network receives one input i and chooses a sequence of symbols from its vocabulary V = { s 1 , s 2 ..., s c voc } of size c voc to construct a message m of fixed length c len .",
"2. Receiver network consumes m and outputs i .",
"3. Agents are successful if i = i , that is, Receiver reconstructs Sender's input.",
"Each input i of the reconstruction game is comprised of i att attributes, each with i val possible values.",
"We let i att range from 2 to 4 and i val from 4 to 100 .",
"We represent each attribute as a i val one-hot vector.",
"An input i is given by the concatenation of its attributes.",
"For a given ( i att , i val ), the number of input samples | I | = i i att val .",
"This environment, which can be seen as an extension of that of Kottur et al. (2017), is one of the simplest possible settings to study the emergence of reference to composite concepts (here, combinations of multiple attributes).",
"Attributes can be seen as describing object properties such as color and shape, with their values specifying those properties for particular objects ( red , round ).",
"Alternatively, they could be seen as slots in an abstract semantic tree (e.g., agent and action), with the values specifying their fillers (e.g., dog , barking ).",
"In the name of maximally simplifying the setup and easing interpretability, unlike Kottur et al. (2017), we consider a single-step game.",
"We moreover focus on input reconstruction instead of discrimination of a target input among distractors as the latter option adds furtherx complications: for example, languages in that setup have been shown to be sensitive to the number and distribution of the distractors (Lazari-dou et al., 2018).",
"For a fixed | I | , we endow Sender with large enough channel capacity | C | = c c len voc ( c voc { 5 , 10 , 50 , 100 } and c len { 3 , 4 , 6 , 8 } ) to express the whole input space (i.e., | C | | I | ).",
"Unless explicitly mentioned, we run 10 different initializations per setting.",
"See Appendix 8.1 for details about the range of tested settings.",
"The game is implemented in EGG (Kharitonov et al., 2019).",
"1 2.2 Agent architecture Both agents are implemented as single-layer GRU cells (Cho et al., 2014) with hidden states of size 500.",
"2 Sender encodes i in a message m of fixed 1 Code can be found at https://github.com/ facebookresearch/EGG/tree/master/egg/zoo/compo_vs_generalization .",
"length c len as follows.",
"First, a linear layer maps the input vector into the initial hidden state of Sender.",
"Next, the message is generated symbol-by-symbol by sampling from a Categorical distribution over the vocabulary c voc , parameterized by a linear mapping from Sender's hidden state.",
"The generated symbols are fed back to the cell.",
"At test time, instead of sampling, symbols are selected greedily.",
"Receiver consumes the entire message m .",
"Further, we pass its hidden state through a linear layer and consider the resulting vector as a concatenation of i att probability vectors over i val values each.",
"As a loss, we use the average cross-entropy between these distributions and Sender's input.",
"Popular approaches for training with discrete communication include Gumbel-Softmax (Mad-dison et al., 2016; Jang et al., 2016), REINFORCE (Williams, 1992), and a hybrid in which the Receiver gradients are calculated via back-propagation and those of Sender via REINFORCE (Schulman et al., 2015).",
"We use the latter, as recent work (e.g., Chaabouni et al., 2019) found it to converge more robustly.",
"We apply standard tricks to improve convergence:",
"(a) running mean baseline to reduce the variance of the gradient estimates (Williams, 1992), and",
"(b) a term in the loss that favors higher entropy of Sender's output, thus promoting exploration.",
"The obtained gradients are passed to the Adam optimizer (Kingma and Ba, 2014) with learning rate 0 .",
"001 .",
"Topographic similarity (topsim) (Brighton and Kirby, 2006) is commonly used in language emergence studies as a quantitative proxy for compositionality (e.g., Lazaridou et al., 2018; Li and Bowling, 2019).",
"Given a distance function in the input space (in our case, attribute value overlap, as attributes are unordered, and values categorical) and a distance function in message space (in our case, following standard practice, minimum edit distance between messages), topsim is the (Spearman) correlation between pairwise input distances and the corresponding message distances.",
"The measure can detect a tendency for messages with similar meanings to be similar in form, but it is relatively results with LSTMs, that were slower to converge.",
"agnostic about the type of similarity (as long as it is captured by minimum edit distance).",
"We complement topsim with two measures that probe for more specific types of compositionality, that we believe capture what deep-agent emergent-language researchers seek for, when interested in compositional languages.",
"In most scenarios currently considered in this line of research, the composite inputs agents must refer to are sets or sequences of primitive elements: for example, the values of a set of attributes, as in our experiment.",
"In this restricted setup, a compositional language is a language where symbols independently referring to primitive input elements can be juxtaposed to jointly refer to the input ensembles.",
"Consider a language with a symbol r referring to input element color:red and another symbol l referring to weight:light , where r and l can be juxtaposed (possibly, in accordance with the syntactic rules of the language) to refer to the input set { color:red , weight:light } .",
"This language is intuitively compositional.",
"On the other hand, a language where both r and l refer to these two input elements, but only when used together, whereas other symbol combinations would refer to color:red and weight:light in other contexts, is intuitively not compositional.",
"Natural languages support forms of compositionality beyond the simple juxtaposition of context-independent symbols to denote ensembles of input elements we are considering here (e.g., constructions that denote the application of functions to arguments).",
"However, we believe that the proposed intuition is adequate for the current state of affairs in language emergence research.",
"The view of compositionality we just sketched is closely related to the idea of disentanglement in representation learning.",
"Disentangled representations are expected to enable a consequent model to generalize on new domains and tasks (Bengio et al., 2013).",
"Even if this claim has been challenged (Bozkurt et al., 2019; Locatello et al., 2019), several interesting metrics have been proposed to quantify disentanglement, as reviewed in Suter et al. (2019).",
"We build in particular upon the Information Gap disentanglement measure of Chen et al. (2018), evaluating how well representations capture independence in the input sets.",
"metric measures whether symbols in specific positions tend to univocally refer to the values of a specific attribute.",
"This order-dependent strategy is commonly encountered in natural language structures (and it is a pre-condition for sophisticated syntactic structures to emerge).",
"Consider English adjective-noun phrases with a fully intersective interpretation, such as yellow triangle .",
"Here, the words in the first slot will refer to adjectival meanings, those in the second to nominal meanings.",
"In our simple environment, it might be the case that the first symbol is used to discriminate among values of an attribute, and the second to discriminate among values of another attribute.",
"Let's denote s j the j th symbol of a message and a j 1 the attribute that has the highest mutual information with s j : a j 1 = arg max a I ( s j ; a ) .",
"In turn, a j 2 is the second highest informative attribute, a j 2 = arg max a (cid:54) = a j 1 I ( s j ; a ) .",
"Denoting H ( s j ) the entropy of j -th position (used as a normalizing term), we define posdis as: posdis = 1 /c len c len (cid:88) j =1 I ( s j ; a j 1 ) I ( s j ; a j 2 ) H ( s j ) (1) We ignore positions with zero entropy.",
"Eq.",
"1 captures the intuition that, for a language to be compositional given our inputs, each position of the message should only be informative about a single attribute.",
"However, unlike the related measure proposed by Resnick et al. (2019), it does not require knowing which set of positions encodes a particular attribute, which makes it computationally simpler (only linear in c len ).",
"Posdis assumes that a language uses positional information to disambiguate symbols.",
"However, we can easily imagine a language where symbols univocally refer to distinct input elements independently of where they occur, making order irrelevant.",
"3 Hence, we also introduce bag-of-symbols disentanglement (bosdis) .",
"The latter maintains the requirement for symbols to univocally refer to distinct meanings, but captures the intuition of a permutation-invariant language, where only symbol counts are informative.",
"Denoting by n j a counter of the j -th symbol in a message, bosdis is given by: bosdis = 1 /c voc c voc (cid:88) j =1 I ( n j ; a j 1 ) I ( n j ; a j 2 ) H ( n j ) (2) In all experiments, the proposed measures topsim , posdis and bosdis are calculated on the train set.",
"3 This is not unlike what happens in order-insensitive constructions such as English conjunctions: dogs and cats , cats and dogs .",
"In Appendix 8.2, we illustrate how the three metrics behave differently on three miniature languages.",
"Across the languages of all converging runs in our simulations, their Spearman correlations are: topsim / posdis : 0 .",
"08 ; topsim / bosdis : 0 .",
"38 ; posdis / bosdis : 0 .",
"31 .",
"These correlations, while not extremely high, are statistically significant ( p < 0 . 01 ), which is reassuring as all metrics attempt to capture compositionality.",
"It is also in line with reasonable expectations that the most opinionated posdis measure is the one that behaves most differently from topsim .",
"In our setup, generalization can be straightforwardly measured by splitting all possible distinct inputs so that the test set only contains inputs with attribute combinations that were not observed at training.",
"Generalization is then simply quantified by test accuracy.",
"In intuitive terms, at training time the agents are exposed to blue triangles and red circles , but blue circles only appear at test time.",
"This requires Sender to generate new messages, and Receiver to correctly infer their meaning.",
"If a blue circle is accurately reconstructed, then agents do generalize.",
"For all the considered settings, we split the possible distinct inputs into 90% train and 10% test items.",
"This implies that the absolute training/test set sizes increase with input dimension (this issue is further discussed in Appendix 8.4).",
"Finally, we only evaluate generalization for runs that successfully converged, where convergence is operationalized as > 99 .",
"9% training-set accuracy.",
"Fig. 1 shows that emergent languages are able to almost perfectly generalize to unseen combinations as long as input size | I | is sufficiently large (input size/test accuracy Spearman = 0 . 86 , p 0 ).",
"The figure also shows that the way in which a large input space is obtained (manipulating i att or i val ) does not matter (no significant accuracy difference between the bracketed runs, according to a set of t-tests with p > 0 . 01 ).",
"Moreover, the correlation is robust to varying agents' capacity (Appendix 8.3; see Resnick et al. (2019) for a thorough study of how agent capacity impacts generalization and compositionality).",
"Importantly, the effect is not simply a product of larger input sizes coming with 25 ( 2 , 5 ) 100 ( 2 , 10 ) 125 ( 3 , 5 ) 256 ( 4 , 4 ) 256 ( 2 , 16 ) 625 ( 4 , 5 ) 625 ( 2 , 25 ) 1000 ( 3 , 10 ) 2500 ( 2 , 50 ) 10000 ( 4 , 10 ) 10000 ( 2 , 100 ) input size 0.0 0.2 0.4 0.6 0.8 1.0 t e s t a cc u r a c y Figure 1: Average accuracy on unseen combinations as a function of input size of successful runs.",
"larger training corpora, as we replicate it in Appendix 8.4 while keeping the number of distinct training examples fixed, but varying input combinatorial variety .",
"What matters is that, in the training data, specific attribute values tend to occur with a large range of values from other attributes, providing a cue about the composite nature of the input.",
"That languages capable to generalize will only emerge when the input is varied enough might seem obvious, and it has been shown before in mathematical simulations of language emergence (Nowak et al., 2000), as well as in studies of deep network inductive biases (Zhao et al., 2018).",
"However, our result suggests an important caveat when interpreting experiments based on small input environments that report failures in the generalization abilities of deep networks (e.g., Kottur et al., 2017; Lake and Baroni, 2018).",
"Before assuming that special architectures or training methods are needed for generalization to emerge, such experiments should be repeated with much larger/varied input spaces, where it is harder for agents to develop ad-hoc strategies overfitting the training data and failing to generalize.",
"We also considered the relation between channel capacity | C | and language emergence.",
"Note that | C | | I | is a prerequisite for successful communication, and a perfectly compositional language could already generalize at the lower | C | = | I | bound.",
"Indeed, limiting channel capacity has been proposed as an important constraint for the emergence of compositionality (Nowak and Krakauer, 1999).",
"However, we find that, when | I | is sufficiently large to support generalization, our deep agents need | C | > | I | in order to even converge at training time.",
"The minimum | C | / | I | ratio across all converging runs for each configuration with | I | 625 (the settings where we witness generalizing languages) is on average 5.9 (s.d.: 4.4).",
"Concretely, this implies that none of our successful languages is as compact as a minimal fully-compositional solution would afford.",
"Appendix 8.5 reports experiments focusing, more specifically, on the relation between channel capacity and generalization, showing that it is essential for | C | to be above a large threshold to reach near-perfect accuracy, and further increasing | C | beyond that does not hamper generalization.",
"Having established that emergent languages can generalize to new composite concepts, we test whether languages that generalize better are also more compositional.",
"Since bosdis and topsim correlate with | C | (Appendix 8.6), we compute Spearman correlations between test accuracy and compositionality metrics across all converging runs of each ( i att , i val , c len , c voc ) configuration separately.",
"Surprisingly, in just 4 out of 141 distinct settings the correlation is significant ( p < 0 . 01 ) for at least 1 measure.",
"4 We further analyze the ( i att = 2 , i val = 100 , c len = 3 , c voc = 100 ) setting, as it has a large number of generalizing runs, and it is representative of the general absence of correlation we also observe elsewhere.",
"Fig. 2 confirms that even non-compositional languages (w.r.t. any definition of compositionality) can generalize well.",
"Indeed, for very high test accuracy ( > 98% ), we witness a large spread of posdis (between 0 . 02 and 0 . 72 ), bosdis (between 0 . 03 and 0 . 4 ) and topsim (between 0 . 11 and 0 . 64 ).",
"In other words, deep agents are able to communicate about new attribute combinations while using non-compositional languages.",
"We note moreover that even the most compositional languages according to any metric are far from the theoretical maximum ( = 1 for all metrics).",
"We observe however that the top-left quadrants of Fig. 2 panels are empty.",
"In other words, it never happens that a highly compositional language has low accuracy.",
"To verify this more thoroughly, for each compositionality measure , we select those languages, among all converging runs in all con-4 3 , 3 and 1 (different) significant settings for topsim , posdis and bosdis , respectively.",
"figurations , that have > 0 .",
"5 , and compute the proportion of them that reaches high test accuracy ( > 0 . 80 ).",
"We find that this ratio equates 0 .",
"90 , 0 .",
"50 , and 0 .",
"11 for posdis , bosdis , and topsim respectively.",
"That is, while compositionality is not a necessary condition for generalization, it appears that the strongest form of compositionality, namely posdis , is at least sufficient for generalization.",
"This provides some evidence that compositionality is still a desirable feature, as further discussed in Section 6.",
"We gain further insights on what it means to generalize without full compositionality by taking a deeper look at the language shown in red in Fig. 2, that has near-perfect generalization accuracy ( > 99%), and whose posdis score (0.70), while near the relative best, is still far from the theoretical maximum (we focus on posdis since it is the easiest compositional strategy to qualitatively char-acterize).",
"As its behavior is partially interpretable, this mediumposdis language offered us clearer insights than more strongly entangled cases.",
"We partially analyze one of the latter in Appendix 8.7.",
"Note that, with ( i att = 2 , i val = 100 ), a ( c len = 2 , c voc = 100 ) channel should suffice for a perfectly positionally disentangled strategy.",
"Why does the analyzed language use ( c len = 3 ) instead?",
"Looking at its mutual information profile (Appendix Table 5), we observe that positions 2 and 3 ( pos2 and pos3 ) are respectively denoting attributes 2 and 1 ( att2 and att1 ): pos3 has high mutual information with att1 and low mutual information with att2 ; the opposite holds for pos2 .",
"The remaining position, pos1 , could then be simply redundant with respect to the others, or encode noise ignored by Receiver.",
"However, this is not quite the case, as the language settled instead for a form of leaky disentanglement.",
"The two disentangled positions do most of the job, but the third, more entangled one, is still necessary for perfect communication.",
"To see this, consider the ablations in Table 1. Look first at the top block, where the trained Receiver of the relevant run is fed messages with the symbol in one original position preserved, the others shuffled.",
"Confirming that communication is largely happening by disentangled means, preserving pos2 alone suffices to have Receiver guessing a large majority of att2 values, and keeping pos3 unchanged is enough to guess almost 90% of att1 values correctly.",
"Conversely, preserving pos1 alone causes a complete drop in accuracy for both attributes.",
"However, neither pos2 nor pos3 are sufficient on their own to perfectly predict the corresponding attributes.",
"Indeed, the results in the bottom block of the table (one symbol shuffled while the others stay in their original position) confirm that pos1 carries useful complementary information: when fixing the latter and either one of the other positions , we achieve 100% accuracy for the relevant attribute ( att2 for pos1 + pos2 and att1 for pos1 + pos3 ), respectively.",
"In sum, pos2 and pos3 largely specialized as predictors of att2 and att1 , respectively.",
"However, they both have a margin of ambiguity (in pos2 and pos3 there are 96 and 98 symbols effectively used, respectively, whereas a perfect 1-to-1 strategy would require 100).",
"When the symbols in these positions do not suffice, pos1 , that can refer to both attributes, serves a disambiguating role.",
"We quantified this complementary function as follows.",
"We define the cue validity of s p (symbol in position p ) w.r.t an attribute a as CV ( s p , a ) = max a P ( a | s p ) , where a iterates over all possible values of a .",
"CV ( s pos 1 , att 2) is significantly higher in those (train/test) messages where CV ( s pos 2 , att 2) is below average.",
"Similarly, CV ( s pos 1 , att 1) is significantly higher in messages where CV ( s pos 3 , att 1) is below average ( p 0 in both cases).",
"We might add that, while there is a huge difference between our simple emergent codes and natural languages, the latter are not perfectly disentangled either, as they feature extensive lexical ambiguity, typically resolved in a phrasal context (Piantadosi et al., 2012).",
"The need to generalize to new composite inputs does not appear to constitute a sufficient pressure to develop a compositional language.",
"Given that compositionality is ubiquitous in natural language, we conjecture that it has other beneficial properties, making it advantageous once agents chanced upon it.",
"Compositional codes are certainly easier to read out by humans (as shown by our own difficulty in qualitatively analyzing highly entangled languages), and we might hypothesize that this ease-of-decoding is shared by computational agents.",
"A long tradition of subject studies and computational simulations has shown that the need to transmit a language across multiple generations or to populations of new learners results in the language being more compositional (e.g., Kirby, 2001; Kirby et al., 2015; Verhoef et al., 2016; Cornish et al., 2017; Cogswell et al., 2019; Guo et al., 2019; Li and Bowling, 2019).",
"Our next experiments are closely related to this earlier work, but we adopt the opposite perspective.",
"Instead of asking whether the pressure to transmit a language will make it more compositional, we test whether languages that have already emerged as compositional, being easier to decode, are more readily transmitted to new learners.",
"5 Specifically, we run 30 games in the largest input setting ( i att = 2 , i val = 100 ), varying the channel parameters.",
"We select the pairs of agents that achieved a high level of generalization accuracy ( 0.80).",
"Next, following the paradigm of Li and Bowling (2019), we freeze Sender, and train a new 5 Li and Bowling (2019) established this for hand-crafted languages; we extend the result to spontaneously emerging ones.",
"Receiver from scratch.",
"We repeat this process 3 times per game, initializing new Receivers with different random seeds.",
"Once the newly formed pair of agents is successful on the training set, we measure its test accuracy.",
"We also report speed of learning, measured by area under the epochs vs. training accuracy curve.",
"We experiment with three Receiver architectures.",
"The first two, GRU (500) and GRU (50), are GRUs with hidden layer sizes of 500 (identical to the original Receiver) and 50, respectively.",
"The third is a two-layer Feed-Forward Network (FFN) with a ReLu non-linearity and hidden size 500.",
"The latter Receiver takes the flattened one-hot representation of the message as its input.",
"This setup allows probing ease of language transmission across models of different complexity.",
"We leave the study of language propagation across multiple generations of speakers to future work.",
"Results in the same setting studied in Section 5 are presented in Table 2 (experiments with other setups are in Appendix 8.8).",
"Both learning speed and generalization accuracy of new Receivers are strongly positively correlated with degree of compositionality .",
"The observed correlations reach values almost as high as 0 .",
"90 for learning speed and 0 .",
"80 for generalization, supporting our hypothesis that, when emergent languages are compositional, they are simpler to understand for new agents, including smaller ones (GRU (50)), and those with a different architecture (FFN).",
"The natural emergence of generalization There has been much discussion on the generalization capabilities of neural networks, particularly in linguistic tasks where humans rely on compositionality (e.g., Fodor and Lepore, 2002; Marcus,",
"2003; van der Velde et al., 2004; Brakel and Frank, 2009; Kottur et al., 2017; Lake and Baroni, 2018; Andreas, 2019; Hupkes et al., 2019; Resnick et al., 2019).",
"In our setting, the emergence of generalization is very strongly correlated with variety of the input environment.",
"While this result should be replicated in different conditions, it suggests that it is dangerous to study the generalization abilities of neural networks in thought experiment setups where they are only exposed to a small pool of carefully-crafted examples.",
"Before concluding that garden-variety neural networks do not generalize, the simple strategy of exposing them to a richer input should always be tried.",
"Indeed, even studies of the origin of human language conjecture that the latter did not develop sophisticated generalization mechanisms until pressures from an increasingly complex environment forced it to evolve in that direction (Bickerton, 2014; Hurford, 2014).",
"Generalization without compositionality Our most important result is that there is virtually no correlation between whether emergent languages are able to generalize to novel composite inputs and the presence of compositionality in their messages (Andreas (2019) noted in passing the emergence of non-compositional generalizing languages, but did not explore this phenomenon systematically).",
"Supporting generalization to new composite inputs is seen as one of the core purposes of compositionality in natural language (e.g., Pagin and Westerstahl, 2010).",
"While there is no doubt that compositional languages do support generalization, we also found other systems spontaneously arising that generalize without being compositional, at least according to our intuitive measures of compositionality.",
"This has implications for the ongoing debate on the origins of compositionality in natural language, (e.g., Townsend et al., 2018, and references there), as it suggests that the need to generalize alone might not constitute a sufficient pressure to develop a fully compositional language.",
"Our result might also speak to those linguists who are exploring the non-fully-compositional corners of natural language (e.g., Goldberg, 2019).",
"A thorough investigation of neural network codes that can generalize while being partially entangled might shed light on similar phenomena in human languages.",
"Finally, and perhaps most importantly, recent interest in compositionality among AI researchers stems from the assumption that compositionality is crucial to achieve good generalization through language (e.g., Lake and Baroni, 2018; Lazaridou et al., 2018; Baan et al., 2019).",
"Our results suggest that the pursuit of generalization might be separated from that of compositionality, a point also recently made by Kharitonov and Baroni (2020) through hand-crafted simulations.",
"What is compositionality good for?",
"We observed that positional disentanglement, while not necessary, is sufficient for generalization.",
"If agents develop a compositional language, they are then very likely to be able to use it correctly to refer to novel inputs.",
"This supports the intuition that compositional languages are easier to fully understand.",
"Indeed, when training new agents on emerged languages that generalize, it is much more likely that the new agents will learn them fast and thoroughly (i.e., they will be able to understand expressions referring to novel inputs) if the languages are already compositional according to our measures.",
"That language transmission increases pressure for structured representations is an established fact (e.g., Kirby et al., 2015; Cornish et al., 2017).",
"Here, we reversed the arrow of causality and showed that, if compositionality emerges (due to chance during initial language development), it will make a language easier to transmit to new agents.",
"Compositionality might act like a dominant genetic feature: it might arise by a random mutation but, once present, it will survive and thrive, as it guarantees that languages possessing it will generalize and will be easier to learn.",
"From an AI perspective, this suggests that trying to enforce compositionality during language emergence will increase the odds of developing languages that are quickly usable by wide communities of artificial agents, that might be endowed with different architectures.",
"From the linguistic perspective, our results suggest an alternative view of the relation between compositionality and language transmissionone in which the former might arise by chance or due to other factors, but then makes the resulting language much easier to be spread.",
"Compositionality and disentanglement Language is a way to represent meaning through discrete symbols.",
"It is thus worth exploring the link between the area of language emergence and that of representation learning (Bengio et al., 2013).",
"We took this route, borrowing ideas from research on disentangled representations to craft our compositionality measures.",
"We focused in particular on the intuition that, if emergent languages must denote ensembles of primitive input elements, they are compositional when they use symbols to univocally denote input elements independently of each other.",
"While the new measures we proposed are not highly correlated with topographic similarity, in most of our experiments they did not behave significantly differently from the latter.",
"On the one hand, given that topographic similarity is an established way to quantify compositionality, this serves as a sanity check on the new measures.",
"On the other, we are disappointed that we did not find more significant differences between the three measures.",
"Interestingly one of the ways in which they did differ is that, when a language is positionally disentangled, (and, to a lesser extent, bag-of-symbols disentangled), it is very likely that the language will be able to generalizea guarantee we don't have from less informative topographic similarity.",
"The representation learning literature is not only proposing disentanglement measures, but also ways to encourage emergence of disentanglement in learned representations.",
"As we argued that compositionality has, after all, desirable properties, future work could adapt methods for learning disentangled representations (e.g., Higgins et al., 2017; Kim and Mnih, 2018) to let (more) compositional languages emerge.",
"We thank the reviewers for feedback that helped us to make the paper clearer."
] | [
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Not all documents are equally important.",
"Language processing is increasingly finding use as a supplement for questionnaires to assess psychological attributes of consenting individuals, but most approaches neglect to consider whether all documents of an individual are equally informative.",
"In this paper, we present a novel model that uses message-level attention to learn the relative weight of users' social media posts for assessing their five factor personality traits.",
"We demonstrate that models with message-level attention outperform those with word-level attention, and ultimately yield state-of-the-art accuracies for all five traits by using both word and message attention in combination with past approaches (an average increase in Pearson r of 2.5%).",
"In addition, examination of the high-signal posts identified by our model provides insight into the relationship between language and personality, helping to inform future work.",
"Most language-based methods for human attribute prediction assume all documents generated by a person are equally informative.",
"However, this is not necessarily true.",
"Figure 1 gives examples of high and low signal messages for predicting extraversion one's tendency to be energized by social interaction.",
"The high signal messages contain words relating to social interaction ( hangin out , chillin ), whereas the low signal messages, while still containing social-related words, have little clear relevance to extraversion.",
"The former examples would ideally be weighted higher by a personality prediction model than the latter.",
"This paper applies the idea of modeling document relevance to the task of personality prediction.",
"Inferring an individual's personality traits is a fundamental task in psychology (McCrae and Costa Jr,",
"1997; Mischel et al., 2007), with social scientific applications ranging from public health (Fried-man and Kern, 2014) and marketing (Matz et al., 2017) to personalized medicine (Chapman et al., 2011), mental health care (Bagby et al., 1995), and even providing useful information for downstream NLP tasks (Preotiuc-Pietro et al., 2015; Lynn et al., 2017).",
"Recently, researchers from both NLP and psychology have turned toward more accurately assessing personality and other human attributes via language (Mairesse et al., 2007; Schwartz et al., 2013; Park et al., 2015; Kulkarni et al., 2018).",
"The idea behind language-based assessments (Park et al., 2015) is that language use patterns can supplement and, in part, replace traditional and expensive questionnaire-based human assessments.",
"Here, we present a hierarchical neural sequence model over both the words and messages of the user and correspondingly applies attention to each level.",
"The document-level attention learns the relative importance of each social media post for predicting personality.",
"2. An empirical demonstration that shows models with message-level attention outperform those without.",
"3. State-of-the-art performance for language-based assessment of personality.",
"4. Insight into the relationship between message-level language use and personality.",
"Our goal is to encode user messages into a representation that can be used to predict the personality of the user.",
"We can use a two-step process to produce such a representation: First encode the sequences of words in each message to form message-level representations and then encode the message-level representations to form a user-level representation.",
"Social media users write hundreds or even thousands of messages; while the messages, and the words within them, contain valuable clues to their personality, not all of it is equally valuable.",
"An ideal representation of user text, therefore, should pay particular attention to personality-revealing portions of a user's text.",
"Hierarchical attention is a natural fit for this problem.",
"At the message level, a word-attention model can learn to emphasize personality related words in the message representation, while at the user-level, a message attention model can learn to emphasize personality-related messages in the overall user representation.",
"We instantiate this idea using a hierarchical sequence architecture shown in Figure",
"2. Given a set of n messages from a user u , the first step of the model is to produce an encoding for each message m i .",
"Each word w ij in message m i is fed through a Gated Recurrent Unit (GRU) (Cho et al., 2014) to produce a hidden state: h ij = GRU ( w ij ) (1) We then apply an attention mechanism over the sequence of hidden states [ h i 1 , h i 2 , ..., h il ] : d ij = tanh ( W word h ij + b word ) (2) ij = exp ( d ij (cid:62) d word ) lk =0 exp ( d ik (cid:62) d word ) (3) s i = l (cid:88) k =0 i k h i k (4) where d word is a learned context vector for word-level attention, b word is a bias term, and ij is a Figure 2: Diagram of our proposed model for personality prediction.",
"normalized attention weight for h ij .",
"s i is thus a weighted combination of the hidden states representing { w i 1 , w i 2 , ..., w il } .",
"Once we have these message representations, the next step is to encode each sequence of messages into a user representation.",
"Each message representation s i is passed through another encoder, also using Gated Recurrent Units: h i = GRU ( s i ) (5) As before, the hidden states are then passed through another message-level attention mechanism: e i = tanh ( W message h i + b message ) (6) i = exp ( e (cid:62) i e message ) nk =0 exp ( e (cid:62) k e message ) (7) u = n (cid:88) k =0 k h k (8) As before, e message is a learned context vector for message-level attention.",
"The representation for a user u is thus a weighted combination of the hidden states representing that person's messages.",
"Once the user representation has been produced, u is further passed through some fully-connected layers before being used for prediction at the final layer.",
"In this way, important words and messages don't get lost to noise and are instead carried through to later portions of the model, where they can have a greater impact on the final prediction.",
"Our model is similar in structure and motivation to the Hierarchical Attention Network proposed by Yang et al. (2016).",
"However, our work focuses on a different level of analysis: whereas Yang et al. (2016) encode words sentences documents , our work seeks to encode words documents users .",
"This idea of applying attention at a document level when modeling user-level attributes is, to the best of our knowledge, entirely novel.",
"We hypothesize that where attention is applied is crucial and that message-level attention is of particular importance for modeling personality.",
"We draw our data from consenting users of a Facebook application (Kosinski et al., 2013), which allowed users to take various psychological assessments and voluntarily share their data with researchers.",
"Following the work of Schwartz et al. (2013) and Park et al. (2015), the current state of the art on this dataset, we filtered the users to those who shared their Facebook status posts, wrote at least 1,000 words across those statuses, provided their age and gender, and were less than 65 years old.",
"All users completed psychological measures, ranging from 20 to 100 items, that assessed their Big Five personality traits (Costa and McCrae, 1992): conscientiousness, agreeableness, neuroticism, openness to experience, and extraversion.",
"Each of the five dimensions is represented by a normalized, continuous score representing the degree to which that trait is exhibited.",
"We refer to these as personality scores .",
"The Big Five personality traits are described more fully in Section",
"4. Overall, our dataset contains Facebook statuses and personality scores for 68,687 users.",
"To allow for direct comparisons, we use the same test set ( n =1,943) as Park et al. (2015).",
"Each of these test users completed a longer 100-item questionnaire, ensuring higher-quality scores.",
"We sample an additional 4,998 for use as a development set, and leave the remaining 61,746 for training.",
"On average, users in our dataset are 23 years old and 63% are female.",
"Users had an average of 3,619 words and 165 messages, all posted to Facebook between 2009 and 2011.",
"Ethical Research Statement.",
"All participants consented to sharing their status updates and personality questionnaire results for research purposes, and the study has been approved by an academic institutional review board.",
"Discovery of the Big Five personality traits began nearly a century ago with some of the first data-driven, statistical latent variable modeling techniques (Thurstone, 1934).",
"The goal in this decades-long pursuit was not very different from that of producing latent vector embeddings of words: 1 to use latent factor analysis to reveal underlying, stable dimensional vectors that distinguish people.",
"However, rather than finding latent semantic dimensions of words, the models (run by hand at first) focused on how individuals answered questions about themselves.",
"For example, modern questions include: How much do you agree with these statements? (1) I am the life of the party; (2) I have difficulty understanding abstract ideas; (3) I like order; (4) I worry about things (Goldberg et al., 2006).",
"The idea behind this data-driven approach was that if such latent dimensions could be found to be stable across time and differing populations, that suggests they are fundamental to what makes each of us different.",
"Such work continued for decades, documented across thousands of studies to eventually arrive at the acceptance of five such factors being fundamental and consistent across time and populations (Costa and McCrae, 1992).",
"Those fundamental human factors, the target of our human language predictive task, are described below.",
"The big five often goes by the acronym OCEAN, standing for o penness to experience , c onscientiousness, e xtraversion, a greeableness, and n euroticism .",
"High scores for openness to experience are correlated with philosophical and free thought, as well as an interest in the arts, music, and cinema (Schwartz et al., 2013; Kern et al., 2014).",
"Those who score low here may be more practical, realistic, or close-minded (Costa and McCrae, 1992).",
"Individuals with high conscientiousness tend to be well organized and have a lot of self-discipline, which may be expressed through discussions of work or school-related responsibilities (Yarkoni, 2010; Kern et al., 2014).",
"Those who score low 1 In fact Thurstone referred to the latent variables as vec-tors of the mind.",
"on this dimension may appear impulsive, disorganized, or unreliable.",
"Those with high extraversion are likely to talk about friends, social situations, and interpersonal interaction.",
"On the other hand, those with low extraversion may be more independent and may focus more on solo activities (e.g. watching television) (Costa and McCrae, 1992; Park et al., 2015).",
"Agreeableness is associated with being friendly and good-natured, while those who score low may be selfish or rude.",
"Swearing is highly correlated with low agreeableness (Yarkoni, 2010; Schwartz et al., 2013).",
"High neuroticism is strongly linked to anxiety and depression, while low neuroticism is linked to emotional stability.",
"2 This dimension may be expressed through feelings such as fear, sadness, or frustration (Costa and McCrae, 1992; Kern et al., 2014).",
"Each user was represented as a sequence of their messages, from most to least recent, which were themselves represented as a sequence of word embeddings.",
"To do so, we pre-trained 200-dimensional word2vec embeddings (Mikolov et al., 2013) over all messages belonging to the training set users.",
"The vocabulary was limited to words that appear in at least 50 messages.",
"Words that occurred fewer times were replaced by an out-of-vocabulary token.",
"The Language Detection Library (Shuyo, 2010) was used to filter out non-English texts.",
"3 5.2 Baseline Models Ridge Regression (N-Grams/Topics).",
"We compare against Park et al. (2015), which is the current state of the art on this dataset and, to the best of our knowledge, demonstrated the best published regression predictions over a Big Five personality factors from language alone.",
"Their model uses a combination of n-gram features and LDA-based topics extracted from the training data.",
"These features then undergo dimensionality reduction in the 2 Some versions of the Big Five flip this dimension and call it emotional stability.",
"3 Even without this step, the models tended to artificially exclude non-English texts by assigning them very low attention weights.",
"form of univariate feature selection and randomized principal component analysis, resulting in a total of 5106 features.",
"These features are then used to train ridge regression models, one per personality dimension, for prediction.",
"Because we use the same test set users as Park et al. (2015), we compare directly against their reported results.",
"Ridge Regression (Embeddings).",
"In addition to the n-gram and topic-based ridge models of Park et al. (2015), we train ridge regression models using the word embeddings described in Section 5.1.",
"These embeddings are averaged first per-message and then per-user, creating a 200-dimensional embedding per user to input to the model.",
"DAN.",
"We modify the model proposed in Section 2 to use a Deep Averaging Network (Iyyer et al., 2015), rather than a GRU, at the word and/or message level.",
"This takes the average across all word (or message) embeddings to produce a message-(or user-) level representation.",
"DAN + Attn.",
"Identical to the DAN variant except takes the weighted (rather than unweighted) average using learned attention weights.",
"than word or message attention.",
"Transformer (TN).",
"This variant of our proposed model uses a two-layer transformer (Vaswani et al., 2017) with double-headed attention, rather than a GRU, at the message or word level.",
"BERT.",
"Whereas our proposed model learns message-level representations, we instead experiment with using pre-trained BERT embeddings (Devlin et al., 2019) as our message representations.",
"These 768-dimension message embeddings are produced by averaging across all BERT token embeddings for each message (Matero et al., 2019).",
"All models were implemented using Py-Torch (Paszke et al., 2017), with the exception of Ridge Regression which used scikit-learn (Pe-dregosa et al., 2011).",
"One model was trained for each of the five personality dimensions.",
"All deep learning models use two feed-forward layers with 512 hidden units each, followed by a final prediction layer.",
"The GRU layers have a hidden size of 200 to match the number of embedding dimensions.",
"Similarly, we learn a projection down to 200 dimensions for our BERT embeddings.",
"All hyperparameters (dropout and learning rate word-to-message message-to-user OPE CON EXT AGR NEUDAN DAN .579 .516 .509 .474 .516 SN SN .601 .506 .512 .431 .523 DAN + Attn DAN + Attn .615 .506 .530 .499 .528 DAN + Attn SN + Attn .605 .510 .535 .501 .560 SN + Attn DAN + Attn .625 .497 .539 .519 .532 SN + Attn SN + Attn .626 .521 .552 .509 .541 TN (Attn) SN + Attn .544 .474 .513 .483 .526 Table 1: Comparison of Disattenuated Pearson R of different models for personality prediction on the test set users ( n =1943), using different architectures to aggregate from word to message level and message to user level.",
"for deep models; alpha for ridge) were tuned over the development set for a single personality dimension ( OPE ), with the best parameters being used to train models for the remaining dimensions.",
"The deep models were trained using a batch size of 64.",
"Training lasted for a maximum of 20 epochs, with most models stopping after around 10 epochs due to early stopping with a patience of two epochs.",
"To reduce memory requirements during training, each user's post history was chunked into sequences of at most 500 messages each.",
"For example, a user with 1250 messages total would be divided into three instances with 500, 500, and 250 messages.",
"This was only done for the training set; the testing and tuning sets used all messages at once.",
"Our evaluation aims to answer the following:",
"1. How successful are attention-based models at predicting personality?",
"2. What is the distribution of high signal versus low signal messages?",
"3. What is the relative importance of message-level attention over word-level attention?",
"Table 1 compares the performance of our proposed model, SN+Attn , against variations using different architectures to aggregate from the word to",
"message level and message to user level.",
"Model performance is given as the disattenuated Pearson correlation coefficient 4 between the predicted and questionnaire-based personality scores.",
"Overall the models with attention outperform those without.",
"Perhaps surprisingly, the SN+Attn at the message level typically outperformed the DAN+Attn , which may be due to the messages forming a sort of personal narrative, containing repeated themes and follow-ups to previous messages.",
"The SN+Attn also tended to outperform the DAN+Attn at the word level.",
"Our proposed model, using SN+Attn at both word and message level, is best for three out of five dimensions.",
"Table 2 shows the performance when using pre-trained BERT embeddings (Devlin et al., 2019) as our message representations, rather than learning them as part of the model.",
"As before, we see that message-level attention is generally beneficial, and additionally we find that the BERT-based models outperform our proposed model in 3 out of 5 cases.",
"Table 3 compares our proposed model against the state-of-the-art.",
"Unsurprisingly, Ridge (Embeddings) is the worst-performing model overall.",
"Although Park et al. (2015) also used ridge 4 Disattenuated Pearson correlation helps account for the error of the measurement instrument (Murphy and Davidshofer, 1988; Kosinski et al., 2013).",
"Following Lynn et al. (2018), we use reliabilities: r xx = 0 .",
"70 and r yy = 0 .",
"77 .",
"regression, their models used significantly more features ( d =5106 (dimensionally reduced, supervised, from an original of over d > 50 , 000 ) compared to our d =200).",
"Finally, we find that by averaging the z-scored predictions of our proposed model and Ridge (N-Grams/Topics) , we obtain the overall best performance, outperforming current state-of-the-art.",
"This suggests that the models are able to learn complementary information.",
"These results show that neural models with attention are better able to predict personality than those without.",
"Because some messages are of more relevance than others, attention allows the model to better separate the signal from noise.",
"In addition, combining the predictions of the best attention-based model, SN+Attn , with those from Park et al. (2015), the previous best, advances the state-of-the-art results over all 5 factors by a signficant margin ( p < . 05 from a paired t-test on error) and an average increase of .025, demonstrating the complementary value in these methods.",
"Results suggest not all text is equally informative when it comes to personality prediction, which is why attention helps.",
"Figure 3 shows the distribution of standardized message-level attention weights, obtained from our proposed model, for 100 randomly-sampled test set users.",
"Sampled users had 742 messages on average.",
"The figure shows that any single user's messages encompass a range of relative importance.",
"OPE skews negative, indicating that most messages of a user are of little relevance with a few being very relevant, while NEU was slightly more likely to mark messages as relevant but with less variance.",
"By incorporating that concept of message (and word) importance via attention, we can produce better user-level representations from which to predict personality.",
"proposed model incorporates attention at two different levels of analysis: word and message level.",
"We examine each attention mechanism's impact on the overall performance of the model.",
"Table 4 shows ablation results for word and message attentions.",
"As expected, adding any attention results in improvements over the No Attn model.",
"In addition, using only message-level attention generally outperforms using only word-level attention.",
"This may be because message-level attention oc-Figure 4: Performance of our model when keeping only the top n percent highest or lowest weighted messages.",
"curs later in the model, where its impacts are less likely to get washed out by downstream layers.",
"While adding message attention provides the single largest boost, in 3 out of 5 cases combining it with word attention results in additional gains.",
"This may be because the word-level attention helped the model to better encode longer messages: the average message length for the top 5% highest-weighted messages were, on average, 4.4 tokens longer for Word+Msg than for Msg Only .",
"The inclusion of message-level attention appears to have little direct impact on the word-level attention.",
"On examination, Word+Msg and Word Only typically assigned roughly the same word-level attention weights to the same sentences.",
"This suggests the strength of adding message-level attention is in learning how best to weight messages, rather than how to better represent each individual message.",
"We further explore the impact of the learned message-level attention weights.",
"Figure 4 shows our proposed model's performance when evaluated over the top n percent highest or lowest weighted messages, as learned by our model.",
"We see that performance is much better when using high-attention messages than low-attention ones in all cases but CON , which we saw in Table 4 did not benefit much from message-level attention.",
"Another note of interest is that AGR plateaus very quickly for high attention messages, which suggests that high-signal messages are rare but extremely predictive.",
"The high-signal text identified by our attention-based models potentially provides additional, qualitative value for researchers interested in the relationship",
"relationship between language and personality.",
"Bag-of-words approaches to language modeling can identify attribute-relevant words (e.g. word clouds), but this can be limiting as it lacks the context in which the words appear.",
"By contrast, a personality researcher interested in how high extraversion, for example, manifests itself in one's language use can use our learned attention weights to identify whole messages that may warrant further study.",
"Table 5 shows examples of messages that received high and low attention weights from the SN+Attn model for users at the extreme ends of each personality dimension.",
"Overall, the high-attention messages are thematically relevant to the target personality dimension.",
"For example, the messages for conscientiousness focus on work and school responsibilities, while those for extraversion discuss social interactions.",
"The high-attention words, highlighted in green, are also consistent with each personality dimension.",
"For example, openness to experience highlights philosophical words ( weird , nothingness , trippy ) while agreeableness favors swear words ( shit ).",
"In contrast, the low-attention messages have little relevance.",
"To test whether our high-signal text might be of qualitative value to researchers, we asked two experts on personality (psychologists with past research in the area) to view 100 paired messages sets (20 per dimension) and select which set was more informative of the individual's personality.",
"Each paired set consisted of 5 messages within the top third of message weights and 5 in the bottom third for a given user.",
"To reduce the frequency of long messages, we only selected messages whose length was at most 20 characters above or below that user's average message length.",
"The users themselves were randomly sampled from those in the top or bottom 10th percentile of each dimension and who had at least 20 messages total.",
"Note that personality psychologists, though experts in how HighOPE trippy day ahead ....",
"personality manifests in behaviors like language, are not trained necessarily to identify it from micro-blog posts.",
"The goal here is not to simply validate the attention, but to shed some light on where message attention helps and whether it is consistent with expectations from personality theory.",
"Table 6 shows the percentage of instances where each expert identified the high-attention set as most informative, and their inter-rater agreement.",
"Judges showed a preference towards the high-attention messages for OPE and AGR , while CON and NEU were no better than chance.",
"These findings are somewhat consistent with Table 4, which showed that OPE and AGR benefited from message-level attention more than CON .",
"Not only were EXT judgements no better than chance, but there was virtually no agreement among experts.",
"This suggests that for some personality dimensions, individual messages have more or less relevance for personality, while for other dimensions there is little difference between messages (or at least it is difficult for both experts and our approach to capture differences).",
"In general, our proposed model seems to identify text that is informative of one's personality, both in terms of individual words and the overarching themes of the message as a whole, though this is easier for some dimensions than others.",
"Modeling document relevance is useful, then, not just as a means to boost performance but as a tool to aid those seeking to better understand language.",
"Personality modeling from language is becoming increasingly important for many social scientific",
"applications.",
"For example, Preotiuc-Pietro et al. (2015) found personality features to be highly predictive of depression and PTSD.",
"Lynn et al. (2017) demonstrated that the performance of document classification models can be improved by adapting to a variety of human factors, including personality.",
"Personality has also been shown to be useful for deception detection (Fornaciari et al., 2013) and recommendation systems (Roshchina et al., 2011).",
"Most research on personality modeling focuses on the Big Five, or Five-Factor Model (Costa and McCrae, 1992).",
"Personality is traditionally measured using questionnaires, but cost and scalability issues make computational methods preferable.",
"Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) features are popular for personality modeling (Yarkoni, 2010; Schwartz et al., 2013; Gjurkovic and Snajder, 2018), as they readily provide insight into the type of language that correlates with certain personality dimensions.",
"However, using predefined lexica is limiting; Schwartz et al. (2013) and Park et al. (2015) showed significantly improved prediction when using topics and n-grams extracted from their training set.",
"When working with a very limited amount of data, Arnoux et al. (2017) found pre-trained word embeddings to be effective.",
"Deep learning approaches to personality prediction are limited.",
"Majumder et al. (2017) used a convolutional neural network (CNN) with max pooling, alongside traditional document features (e.g. word count).",
"Their best results were obtained when they filtered out sentences that did not contain strong emotion words (as determined via lexica) during preprocessing.",
"This supports our intuition that some messages contain stronger signal than others, though our approach allows the model to identify such cases.",
"Yu and Markov (2017) also used CNNs with maxand average-pooling to predict personality over Facebook statuses.",
"They experimented with fully-connected neural networks and bidirectional recurrent neural networks, but ultimately CNNs performed best.",
"Both Majumder et al. (2017) and Yu and Markov (2017) used datasets that were significantly smaller than ours ( n =2467 and n =9917, respectively) and their problems were framed as binary classification rather than regression 5 .",
"Language-based personality prediction is an important task with many applications in social science and natural language processing.",
"We presented a hierarchical sequence model with messageand word-level attention that learns to differentiate high-and low-signal messages.",
"Our approach, which novelly models the idea that all messages are not equally valuable for psychological regression tasks, achieves new state-of-the-art results for personality prediction and provides insight into the relationship between language and personality.",
"Our analysis demonstrates that the level of abstraction at which attention is applied can have a significant impact on a model's overall performance.",
"Finally, this work highlights the critical role of document relevance as we progress with further human-centered natural language processing.",
"This work is supported in part by the National Science Foundation under Grant IIS-1815358.",
"Data set used in grateful collaboration with Michal Kosinski and David Stillwell.",
"We thank Google for supporting this research through the Google Cloud Platform credits.",
"Thanks also to social and personality psychologists Sandra Matz and David Yaden for their help with the expert evaluation task."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other"
] |
[
"The goal of argumentation mining is to automatically extract argumentation structures from argumentative texts.",
"Most existing methods determine argumentative relations by exhaustively enumerating all possible pairs of argument components, which suffer from low ef-ficiency and class imbalance.",
"Moreover, due to the complex nature of argumentation, there is, so far, no universal method that can address both tree and non-tree structured argumentation.",
"Towards these issues, we propose a neural transition-based model for argumentation mining, which incrementally builds an argumentation graph by generating a sequence of actions, avoiding inefficient enumeration operations.",
"Furthermore, our model can handle both tree and non-tree structured argumentation without introducing any structural constraints.",
"Experimental results show that our model achieves the best performance on two public datasets of different structures.",
"Argumentation mining (AM) aims to identify the argumentation structures in text, which has received widespread attention in recent years (Lawrence and Reed, 2019).",
"It has been shown beneficial in a broad range of fields, such as information retrieval (Carstens and Toni, 2015; Stab et al., 2018), automated essay scoring (Wachsmuth et al., 2016; Ke et al., 2018), and legal decision support (Palau and Moens, 2009; Walker et al., 2018).",
"Given a piece of paragraph-level argumentative text, an AM system first detects argument components (ACs), which are segments of text with argumentative meaning, and then extracts the argumentative relations (ARs) between ACs to obtain an argumentation graph, where the nodes and edges represent ACs and ARs, Equal Contribution Corresponding Author [ Either State or Federal law should provide some penalty for passing an NSF or closed account check that includes a presumption of guilt. ] AC1 [ The check either bounced or it did not. ] AC2 [ There's not a lot of grey area here. ] AC3 [ That said, either State or Federal law should also allow the drafter of the check a safe harbor wherein they ] AC4 [ Michigan has such a procedure and it seems relatively equitable. ] AC5 AM AC2 Fact AC3 Value AC1 Policy Reason Argumentative Text Argumentation Graph Reason Reason AC4 Policy AC5 Value Reason Figure 1: An example of argumentation mining from the CDCP dataset (Park and Cardie, 2018).",
"respectively.",
"An example of AM is shown in Figure 1, where the text is segmented into five ACs, and there are four ARs.",
"In this instance, the types of AC2 and AC3 are Fact (non-experiential objective proposition) and Value (proposition containing value judgments), respectively.",
"In addition, there is an AR from AC2 to AC3, i.e., The check either bounced or it did not. is the reason of There's not a lot of grey area here. , for the latter is a value judgment based on the fact of the former.",
"Generally, AM involves several subtasks, including 1) Argument component segmentation (ACS), which separates argumentative text from non-argumentative text; 2) Argument component type classification (ACTC), which determines the types of ACs (e.g., Policy , Fact , Value , etc.); 3) Argumentative relation identification (ARI), which identifies ARs between ACs; 4) Argumentative relation type classification (ARTC), which determines the types of ARs (e.g., Reason and Evidence ).",
"Most previous works assume that subtask 1) ACS has been completed, that is, ACs have been segmented, and focus on other subtasks (Potash et al., 2017; Kuribayashi et al., 2019; Chakrabarty et al., 2019).",
"In this paper, we also make such an assumption, and perform ACTC and ARI on this basis.",
"Among all the subtasks of AM, ARI is the most challenging because it requires understanding complex semantic interactions between ACs.",
"Most previous works exhaustively enumerate all possible pairs of ACs (i.e., all ACs are matched to each other by Cartesian products) to determine the ARs between them (Kuribayashi et al., 2019; Morio et al., 2020).",
"However, these approaches are of low ef-ficiency and can cause class imbalance, since the majority of AC pairs have no relation.",
"Besides, due to different annotation schemes, there are mainly two kinds of structures of argumentation graphs, tree (Stab and Gurevych, 2014; Peldszus, 2014) and non-tree (Park and Cardie, 2018).",
"Briefly, in tree structures, each AC has at most one outgoing AR, but there is no such restriction in non-tree structures (Figure 1).",
"However, studies on these two kinds of structures are usually conducted separately.",
"To date, there is no universal method that can address both tree and non-tree structured argumentation without any corpus-specific constraints.",
"Towards these issues, we present a neural transition-based model for AM, which can classify the types of ACs and identify ARs simultaneously.",
"Our model predicts a sequence of actions to incrementally construct a directed argumentation graph, often with O ( n ) parsing complexity.",
"This allows our model to avoid inefficient enumeration operations and reduce the number of potential AC pairs that need evaluating, thus alleviating the class imbalance problem and achieving speedup.",
"Also, our transition-based model does not introduce any corpus-specific structural constraints, and thus can handle both tree and non-tree structured argumentation, yielding promising generalization ability.",
"Furthermore, we enhance our transition-based model with pre-trained BERT (Devlin et al., 2019), and use LSTM (Hochreiter and Schmidhuber, 1997) to represent the parser state of our model.",
"Extensive experiments on two public datasets with different structures show that our transition-based model outperforms previous methods, and achieves state-of-the-art results.",
"Further analysis reveals that our model is of low parsing complexity and has a strong structure adaptive ability.",
"In computational AM, there are mainly two types of approaches to model argumentation structures, that is, tree and non-tree.",
"Most previous works assume that the argumentation graphs can be viewed as tree or forest structures, which makes the problem computationally easier because many tree-based structural constraints can be applied.",
"Under the theory of Van Eemeren et al. (2004), Palau and Moens (2009) modeled argumentation in the legal text as tree structures and used handcrafted context-free grammar to identify these structures.",
"Presented by Stab and Gurevych (2014, 2017), the tree structured Persuasive Essay (PE) dataset has been utilized in a number of studies in AM.",
"Following this dataset, Persing and Ng (2016) and Stab and Gurevych (2017) leveraged the Integer Linear Programming (ILP) framework to jointly predict ARs and AC types, in which several structural constraints are defined to ensure the tree structures.",
"The arg-microtext (MT) dataset, created by Peldszus (2014), is another tree structured dataset.",
"Studies on this dataset usually apply decoding mechanisms based on tree structures, such as Minimum Spanning Trees (MST) (Peldszus and Stede, 2015) and ILP (Afantenos et al., 2018).",
"Regarding neural network-based methods, Eger et al. (2017) studied AM as a dependency parsing and a sequence labeling problem with multiple neural networks.",
"Potash et al. (2017) introduced the sequence-to-sequence based Pointer Networks (Vinyals et al., 2015) to AM, and used the output of encoder and decoder to identify AC types and the presence of ARs, respectively.",
"Kuribayashi et al. (2019) proposed an argumentation structure parsing model based on span representation, which used ELMo (Peters et al., 2018) to obtain representations for ACs.",
"Those studies described in Section 2.1 are all based upon the assumption that the argumentation forms tree structures.",
"However, this assumption is somewhat idealistic since argumentation structures in real-life scenarios may not be such well-formed.",
"Hence, some studies have focused on non-tree structured AM, and these studies typically use the Consumer Debt Collection Practices (CDCP) (Park and Cardie, 2018) dataset.",
"Regarding this dataset, Niculae et al. (2017) presented a structured learning approach based on factor graphs, which can also handle the tree structured PE dataset.",
"However, the factor graph needs to be specifically designed according to the types of argumentation structures.",
"Galassi et al. (2018) adopted residual networks for AM on the CDCP dataset.",
"Recently, Morio et al. (2020) proposed a model devoted to non-tree structured AM, with a task-specific parameterization module to encode ACs and a biaffine attention module to capture ARs.",
"To the best of our knowledge, until now there is no universal method that can address both tree and non-tree structured argumentation without any corpus-specific design.",
"Thus, in this work, we fill this gap by proposing a neural transition-based model that can identify both tree and non-tree argumentation structures without introducing any prior structural assumptions.",
"Transition-based methods are commonly used in dependency parsing (Chen and Manning, 2014; Gomez-Rodrguez et al., 2018), and has also been successfully applied to other NLP tasks with promising performance, such as discourse parsing (Yu et al., 2018), information extraction (Zhang et al., 2019), word segmentation (Zhang et al., 2016) and mention recognition (Wang et al., 2018).",
"Following previous works (Potash et al., 2017; Kuribayashi et al., 2019), we assume subtask 1) ACS has been completed, i.e., the spans of ACs are given.",
"Then, we aim at jointly classifying AC types (ACTC) and determining the presence of ARs (ARI).",
"The reason why we do not jointly conduct AR type classification (ARTC) is that performing ARTC along with ACTC and ARI jointly will hurt the overall performance.",
"More details on this issue will be discussed in Section 6.4.",
"Formally, we assume a piece of argumentation related paragraph P = ( w 1 , w 2 , . . . , w m ) consisting of m tokens and a set X = ( x 1 , x 2 , . . . , x n ) consisting of n AC spans are given.",
"Each AC span x i is a tuple containing the beginning token index ( 1 , 1 ) ( 1 , 1 ) with precondition action prediction 0 1 stack 0 1 1 2 1 2 3 4 1 2 3 4 BERT 1 2 buffer action list ( 1 , 1 ) AC type classifier Figure 2: The architecture of our model.",
"b i and the ending token index e i of this AC, i.e., x i = ( b i , e i ) .",
"The goal is to classify the types of ACs and identify the ARs, and finally obtain a directed argumentation graph with ACs and ARs representing nodes and edges, respectively.",
"We present a neural transition-based model for AM, which can jointly learn ACTC and ARI.",
"Our model generates a sequence of actions in terms of the parser state to incrementally build an argumentation graph.",
"We utilize BERT and LSTM to represent our parser state, which contains a stack to store processed ACs, a buffer to store unprocessed ACs, a delay set D to record ACs that need to be removed subsequently, and an action list to record historical actions.",
"Then, the learning problem is framed as: given the parser state of current step t : ( t , t , D t , t ) , predict an action to determine the parser state of the next step, and simultaneously identify ARs according to the predicted action.",
"Figure 2 shows the architecture of our model.",
"In the following, we first introduce our transition system, then describe the parser state representation.",
"Our transition system contains six types of actions.",
"Different actions will change the state in different ways, which are also summarized in Table 1: SHIFT (SH): When t is not empty and 1 is not in D t , pop 0 from t and move it to the top of t .",
"DELETE-DELAY (DE d ).",
"When t is not empty and 1 is in D t , remove 1 from t and D t , and keep t unchanged.",
"DELETE (DE).",
"When t is empty, remove 1 from t and keep t and D t unchanged.",
"RIGHT-ARC (RA).",
"When t is empty, remove 0 from t and assign an AR from 0 to 1 .",
"RIGHT-ARC-DELAY (RA d ).",
"When t is not empty, pop 0 from t and move it to the top of t .",
"Then assign an AR from 0 to 1 and add 0 into D t for delayed deletion.",
"This strategy can help extract more ARs related to 0 .",
"LEFT-ARC (LA).",
"Remove 1 from t and assign an AR from 1 to 0 .",
"Table 2 illustrates the golden transition sequence of the text in Figure 1.",
"This example text contains five ACs and four ARs.",
"At the initial state, all ACs are in buffer .",
"Then, a series of actions change the parser state according to Table 1, and extract ARs simultaneously.",
"This procedure stops when meeting the terminal state, that is, buffer is empty and stack only contains one element.",
"We employ BERT to obtain the representation of each AC and use LSTM to encode the long-term dependencies of stack , buffer and action list .",
"Representation of ACs.",
"We feed the input paragraph P = ( w 1 , w 2 , . . . , w m ) into BERT to get the contextual representation matrix H R m d b , where d b is the vector dimension of the last layer of BERT.",
"In this way, paragraph P can be represented as H = ( h 1 , h 2 , . . . , h m ) , where h i is the contextual representation of the i -th token of P .",
"Then, we use the AC spans set X = ( x 1 , x 2 , . . . , x n ) to produce a contextual representation of each AC from H by mean pooling over the representations of words in each AC span.",
"Specifically, for the i -th AC with span x i = ( b i , e i ) , the contextual representation of this AC could be obtained by: u i = 1 e i b i + 1 e i (cid:88) j = b i h j (1) where u i R d b .",
"In addition, following previous works (Potash et al., 2017; Kuribayashi et al., 2019), we also combine some extra features with u i to represent ACs, including the bag-of-words (BoW) vector, position and paragraph type embedding of each AC 1 .",
"We denote these features of the i -th AC as i .",
"Then, the i -th AC is represented by 1 Details of these features are described in Appendix A. the concatenation of u i and i : c i = [ u i ; i ] (2) Hence, the ACs in paragraph P can be represented as C = ( c 1 , c 2 , . . . , c n ) .",
"Representation of Parser State.",
"Our transition-based model utilizes the parser state to predict a sequence of actions.",
"At each step t , we denote our parser state as ( t , t , D t , t ) .",
"t and t are stack and buffer , which store the representations of processed and unprocessed ACs, respectively.",
"D t is the delay set that records ACs that need to be removed from stack subsequently.",
"t is the action list that stores the actions generated so far.",
"At the beginning, all ACs are in the buffer , i.e., the initial parser state is ([ ] , [ c 1 , c 2 , . . . , c n ] , , [ ]) .",
"Then, a series of predicted actions will iteratively change the parser state.",
"Specifically, at step t , we have t = ( 0 , 1 , . . . ) , t = ( 0 , 1 , . . . ) , where i and i indicate the representations of ACs in the stack and the buffer at the current state.",
"In addition, we also have t = ( . . . , t 2 , t 1 ) where i denotes the distributed representation of the i -th action obtained by a looking-up table E a .",
"In order to capture the context information in the stack t , we feed it into a bidirectional LSTM: S t = [ s 0 , s 1 , . . . ] = BiLSTM s ([ 0 , 1 , . . . ]) (3) where S t R | t | 2 d l is the output of LSTM from both directions, | t | is the size of stack , and d l is the hidden size of LSTM.",
"Similarly, we can obtain the contextual representation of t by: B t = [ b 0 , b 1 , . . . ] = BiLSTM b ([ 0 , 1 , . . . ]) (4) where B t R | t | 2 d l , | t | is the size of buffer .",
"Besides, in order to incorporate the historical action information into our model, we apply a unidirectional LSTM to process the action list : A t = [ . . . , a t 2 , a t 1 ] = LSTM a ([ . . . , t 2 , t 1 ]) (5) where A t R | t | d l , | t | is the size of action list .",
"Furthermore, since the relative distance between the pair ( 0 , 1 ) is a strong feature for determining their relations, we represent it as an embedding e d through another looking-up table E d .",
"where s 0 and s 1 denote the first and second elements of S t , b 0 is the first element of the B t , and a t 1 indicates the last action representation of A t",
"To predict the current action at step t , we first apply a multi-layer perceptron (MLP) with ReLU activation to squeeze the state representation r t to a lower-dimensional vector z t , and then compute the action probability by a softmax output layer:",
"where W denotes a learnable parameter matrix, b is the bias term, t is the predicted action for step t .",
"A ( S ) represents the set of valid candidate actions that may be taken according to the preconditions.",
"For efficient decoding, we greedily take the candidate action with the highest probability.",
"With the predicted action sequence, we could identify ARs according to Table 1.",
"Note that, the univocal supervision over actions for one input paragraph is built based on the gold labels of ARs.",
"We jointly train an AC type classifier over the AC representations: p ( y i | C ) = softmax (MLP c ( c i )) , where y i is the predicted type for the i -th AC.",
"Finally, combining this task with action prediction, the training objective of our model can be obtained: J ( ) = (cid:88) t log p ( t | z t ) + (cid:88) i log p ( y i | C ) + 2 || || 2 (9) where is the coefficient of L 2 -norm regularization, and denotes all the parameters in this model.",
"We conduct experiments on two datasets: Persuasive Essays (PE) (Stab and Gurevych, 2017) and Consumer Debt Collection Practices (CDCP) (Nic-ulae et al., 2017; Park and Cardie, 2018).",
"The PE dataset contains 402 essays (1,833 para-graphs), in which 80 essays (369 paragraphs) are held out for testing.",
"There are three types of ACs in this dataset: Major-Claim , Claim , and Premise .",
"Also, each AC in PE dataset has at most one outgoing AR.",
"That is, the argumentation graph of one paragraph can be either directed trees or forests.",
"We extend each AC by including its argumentative marker in the same manner as Kuribayashi et al. (2019).",
"The CDCP dataset consists of 731 paragraphs, and 150 of them are reserved for testing.",
"It provides five types of ACs (propositions): Reference , Fact , Testimony , Value , and Policy .",
"Unlike PE dataset, each AC in CDCP dataset can have two or more outgoing ARs, thus forming non-tree structures.",
"For PE dataset, we randomly choose 10% of the training set as the validation set, which is consistent with the work of Kuribayashi et al. (2019).",
"For CDCP dataset, we randomly choose 15% of the training set for validation.",
"Following Potash et al. (2017), for ACTC, we employ F 1 score for each AC type and their macro averaged score to measure the performance.",
"Similarly, for ARI, we present F 1 scores for the presence/absence of links between ACs and their macro averaged score.",
"All experiments are performed 5 times with different random seeds, and the scores are averaged.",
"We finetune uncased BERT Base 2 in our model.",
"AdamW optimizer (Loshchilov and Hutter, 2019) is adopted for parameter optimization, and the initial learning rates for the BERT layer and other layers are set to 1e-5 and 1e-3, respectively.",
"All LSTMs are 1 layer with the hidden size of 256, and the hidden size of MLP is 512.",
"Besides, the dropout rate (Srivastava et al., 2014) is set to 0.5, and the batch size is set to 32.",
"All parameters of our model are unfixed and can be learned during training.",
"We train the model 50 epochs with early stopping strategy, and choose model parameters with the best performance (average of macro F 1 scores of ACTC and ARI) on the validation set.",
"Our model is implemented in PyTorch (Paszke et al., 2019) on a NVIDIA Tesla V100 GPU.",
"In order to evaluate our proposed BERT-Trans model, we compare it with several baselines.",
"Joint-ILP (Stab and Gurevych, 2017) jointly optimizes AC types and ARs by Integer Linear Programming",
"Programming (ILP).",
"St-SVM-full is structured SVM with full factor graph, which performs best on PE dataset in the work of Niculae et al. (2017).",
"Joint-PN (Potash et al., 2017) applies Pointer Network with attention mechanism to AM, which can jointly address both ACTC and ARI.",
"Span-LSTM (Kuribayashi et al., 2019) employs LSTM-minus-based span representation with pre-trained ELMo embedding for AM, which is the current state-of-the-art method on PE dataset.",
"For CDCP dataset, we compare our model with the following baselines: Deep-Res-LG (Galassi et al., 2018) applies residual network model with link-guided training procedure, to perform ACTC and ARI.",
"St-SVM-strict is structured SVM with strict factor graph, which performs best on CDCP dataset in the work of (Niculae et al., 2017).",
"TSP-PLBA (Morio et al., 2020) uses task-specific parameterization to encode ACs and biaffine attention to capture ARs with ELMo based features, which is the current state-of-the-art method on CDCP dataset.",
"Furthermore, in order to show the effectiveness of our proposed transition system, we implemented two additional baselines: Span-LSTM-Trans incorporates the span representation method used in Span-LSTM and our transition system on PE dataset.",
"For a fair comparison, features and ELMo used to represent ACs are consistent with that of Span-LSTM.",
"ELMo-Trans replaces BERT in our proposed model with ELMo on CDCP dataset for a fair comparison with TSP-PLBA.",
"The overall performance of our proposed model and the baselines are shown in Table 3 and Table 4. Our model achieves the best performance on both datasets.",
"On PE dataset, our model outperforms the current sota model Span-LSTM by at least 1.1% and 1.4% in macro F 1 score over ACTC and ARI, respectively.",
"On CDCP dataset, compared with TSP-PLBA, our model obtains at least 3.6% higher Method ACTC ARI Macro MC Claim Premise Macro Rel No-Rel Joint-ILP 82.6 89.1 68.2 90.3 75.1 58.5 91.8 St-SVM-full 77.6 78.2 64.5 90.2 -60.1 Joint-PN 84.9 89.4 73.2 92.1 76.7 60.8 92.5 Span-LSTM 87.3 --81.1 -Span-LSTM-Trans 87.5 93.8 76.4 92.2 82.0 69.8 94.1 BERT-Trans (Ours) 88.4 93.2 78.8 93.1 82.5 70.6 94.3 Table 3: Comparison results with baselines on PE dataset (%).",
"We also show the results where our BERT-based AC representation is replaced by the ELMo-based method, that is, Span-LSTM-Trans on PE dataset and ELMo-Trans on CDCP dataset.",
"We found that, without employing pre-trained BERT, Span-LSTM-Trans and ELMo-Trans still outperform Span-LSTM and TSP-PLBA over ARI, respectively, which demonstrates the effectiveness of our proposed transition system.",
"It can also be observed that our BERT-based AC representation method can further improve the model performance.",
"Some of the baselines improve overall performance by imposing structural constraints when predicting or decoding.",
"For example, Joint-PN only predicts one outgoing AR for each AC to partially enforce the predicted argumentation graphs as tree structures.",
"Similarly, to ensure tree structures, Span-LSTM applies MST algorithm based on the probabilities calculated by the model.",
"However, these two methods can only deal with tree structured argumentation.",
"The method proposed by Niculae et al. (2017), which is based on factor graph, can handle both tree and no-tree structured argumentative text (St-SVM-full and St-SVM-strict), but the factor graph need to be specifically designed for datasets of different structures.",
"Differently, our proposed model can handle datasets of both tree and non-tree structures without introducing any corpus-specific structural constraints and also outperforms all the structured baselines.",
"We conduct ablation experiments on the PE dataset to further investigate the impacts of each component in BERT-Trans.",
"The results are shown in Table 5. It can be observed that applying LSTM to encode buffer , stack , and action list contributes about 2.0% macro F 1 score of ARI, showing the necessity of capturing non-local dependencies in parser state.",
"Also, incorporating buffer into parser state can improve the macro F 1 score of ARI by about 1.8%, for buffer can provide crucial information about subsequent ACs to be processed.",
"Besides, the macro F 1 score of ARI drops heavily without action list (-1.6%), indicating that the historical action information has a significant impact on predicting the next action.",
"Without the distance information between the top two ACs of the stack , the macro F 1 score of ARI decreases by 0.7%.",
"The model components described above mainly affect ARI by modifying the parsing procedure, but have little impact on ACTC.",
"However, BoW feature has a significant influence on both two tasks, and removing it causes 2.5% and 1.9% decreases in macro F 1 score of ACTC and ARI, respectively.",
"Most previous models parse argumentation graphs by exhaustively enumerating all possible pairs of ACs, that is, all ACs are connected by Cartesian products, which lead to O ( n 2 ) parsing complexity.",
"Differently, our transition-based model can incrementally parse an argumentation graph by predicting a sequence of actions, often with linear parsing complexity.",
"Concretely, given a paragraph with n ACs, our system can parse it with O ( n ) actions.",
"Parsing complexity of our transition system can be determined by the number of actions performed with respect to the number of ACs in a paragraph.",
"Specifically, we measure the length of the action sequence predicted by our model for every paragraph from the test sets of PE dataset and CDCP dataset and depict the relation between them and the number of ACs.",
"As shown in Figure 3, the number of predicted actions is linearly related to the number of ACs in both two datasets, proving that our system can construct an argumentation graph with O ( n ) complexity.",
"In addition, we also compared our model with the current state-of-the-art model on PE dataset, i.e., Span-LSTM, in terms of training time, and our model is around two times faster.",
"Following Kuribayashi et al. (2019), we also try to add the task of AR type classification (ARTC) to our model for joint learning on PE dataset.",
"However, as shown in Table 6, jointly learning ARTC together with ACTC and ARI degrades the overall performance, while learning ARTC separately actually yields better performance.",
"Such an observation is consistent with the joint learning results Method Joint Tasks Macro F 1 ACTC ARI ARTC BERT-Trans(Ours) ALL 86.8 81.8 78.4 ACTC+ARI 88.4 82.5 ARTC -81.0 Span-LSTM ALL 85.7 80.7 79.0 ACTC+ARI 87.3 81.1 ARTC -79.6 Table 6: Joint learning results on PE dataset (%).",
"of Span-LSTM in Kuribayashi et al. (2019).",
"The reason may be that the class labels are usually very unbalanced for ARTC (around 1:10 in PE dataset and 1:25 in CDCP dataset), such that the high uncertainty can seriously affect the overall learning.",
"Thus, we mainly focus on joint learning of ACTC and ARI.",
"We also argue that learning ARTC individually is better than jointly learning it with other subtasks.",
"Besides, our model outperforms Span-LSTM over ACTC and ARI even when joint learning all three subtasks.",
"To validate the structure adaptive ability of our model on both tree and non-tree structures, we analyze the structure type of the predicted argumentation graphs on the test set of both PE and CDCP datasets in Figure 4. It can be seen that for non-tree structured CDCP dataset, even though there are few non-tree structured paragraphs in the test set of CDCP (only 16%), our model is still able to identify 29.2% of them.",
"This is an acceptable performance considering the poor results of ARI on the CDCP dataset due to the complex non-tree structures.",
"For tree structured PE dataset, our model predicts all the paragraphs as tree structures, showing a strong structure adaptive ability.",
"In contrast, most previous models like Joint-PN and Span-LSTM can only predict tree structures.",
"In this paper, we propose a neural transition-based model for argumentation mining, which can incrementally construct an argumentation graph by predicting a sequence of actions.",
"Our proposed model can handle both tree and non-tree structures, and often with linear parsing complexity.",
"The experimental results on two public datasets demonstrate the effectiveness of our model.",
"One potential drawback of our model is the greedy decoding for action prediction.",
"For future work, we plan to optimize the decoding process by using methods like beam search to further boost the performance.",
"This work was partially supported by National Natural Science Foundation of China (61632011, 61876053, 62006062), Guangdong Province Covid-19 Pandemic Control Research Funding (2020KZDZX1224), Shenzhen Foundational Research Funding (JCYJ20180507183527919, JCYJ20180507183608379), and the Joint Lab of China Merchants Securities and HITSZ."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"method",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"method",
"other"
] |
[
"When collecting annotations and labeled data from humans, a standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012).",
"Metrics such as Krippendorff's alpha or Cohen's kappa are typically required to be above a threshold of 0.6 (Landis and Koch, 1977).",
"These absolute thresholds are unreasonable for crowdsourced data from annotators with high cultural and training variances, especially on subjective topics .",
"We present a new alternative to interpreting IRR that is more empirical and contextualized.",
"It is based upon benchmarking IRR against baseline measures in a replication, one of which is a novel cross-replication reliability (xRR) measure based on Cohen's (1960) kappa.",
"We call this approach the xRR framework.",
"We opensource a replication dataset of 4 million human judgements of facial expressions and analyze it with the proposed framework.",
"We argue this framework can be used to measure the quality of crowdsourced datasets.",
"Much content analysis and linguistics research is based on data generated by human beings (hence-forth, annotators or raters) asked to make some kind of judgment.",
"These judgments involve systematic interpretation of textual, visual, or audible matter (e.g. newspaper articles, television programs, advertisements, public speeches, and other multimodal data).",
"When relying on human observers, researchers must worry about the quality of the data specifically, their reliability (Krippendorff, 2004).",
"Are the annotations collected reproducible, or are they the result of human idiosyncrasies?",
"Respectable scholarly journals typically require reporting quantitative evidence for the inter-rater reliability (IRR) of the data (Hallgren, 2012).",
"Cohen's kappa (Cohen, 1960) or Krippendorff's alpha (Hayes and Krippendorff, 2007) is expected to be Figure 1 : Agreement measures for categorical data (Landis and Koch, 1977) above a certain threshold to be worthy of publication, typically 0.6 (Landis and Koch, 1977).",
"Similar IRR requirements for human annotations data have been followed across many fields.",
"In this paper we refer to this absolute interpretation of IRR as the Landis-Koch approach (Fig. 1).",
"This approach has been foundational in guiding the development of widely used and shared datasets and resources.",
"Meanwhile, the landscape of human annotations collection has witnessed a tectonic shift in recent years.",
"Driven by the data-hungry success of machine learning (LeCun et al., 2015; Schaekermann et al., 2020), there has been an explosive growth in the use of crowdsourcing for building datasets and benchmarks (Snow et al., 2008; Kochhar et al., 2010).",
"We identify three paradigm shifts in the scope of and methodologies for data collection that make the Landis-Koch approach not as useful in today's settings.",
"A rise in annotator diversity In the pre-crowdsourcing era lab settings, data were typically annotated by two graduate students following detailed guidelines and working with balanced corpora.",
"Over the past two decades, however, the bulk of data are annotated by crowd workers with high cultural and training variances.",
"A rise in task diversity There has been an increasing amount of subjective tasks with genuine ambiguity: judging toxicity of online discussions (Aroyo et al., 2019), in which the IRR values range between 0.2 and 0.4; judging emotions expressed by faces (Cowen and Keltner, 2017), in which more than 80% of the IRR values are below 0.6; and A/B testing of user satisfaction or preference evaluations (Kohavi and Longbotham, 2017), where IRR values are typically between 0.3 and 0.5.",
"A rise in imbalanced datasets Datasets are no longer balanced intentionally.",
"Many high-stakes human judgements concern rare events with substantial tail risks: event security, disease diagnostics, financial fraud, etc.",
"In all of these cases, a single rare event can be the source of considerable cost.",
"High class imbalance has led to many complaints of IRR interpretability (Byrt et al., 1993; Feinstein and Cicchetti, 1990; Cicchetti and Feinstein, 1990).",
"Each of these changes individually has a profound impact on data reliability.",
"Together, they have caused a shift from data-from-the-lab to data-from-the-wild, for which the Landis-Koch approach to interpreting IRR is admittedly too rigid and too stringent.",
"Meanwhile, we have seen a drop in the reliance on reliability.",
"Machine learning, crowdsourcing, and data research papers and tracks have abandoned the use and reporting of IRR for human labeled data, despite calls for it (Paritosh, 2012).",
"The most cited recent datasets and benchmarks used by the community such as SQuAD (Rajpurkar et al., 2016), ImageNet (Deng et al., 2009), Freebase (Bollacker et al., 2008), have never reported IRR values.",
"This would have been unthinkable twenty years ago.",
"More importantly, this is happening against the backdrop of a reproducibility crisis in artificial intelligence (Hutson, 2018).",
"With the decline of the usage of IRR, we have seen a rise of ad hoc, misguided quality metrics that took its place, including",
"1) agreement-%,",
"2) accuracy relative to consensus,",
"3) accuracy relative to ground truth.",
"This is dangerous, as IRR is still our best bet for ensuring data reliability.",
"How can we ensure its continued importance in this new era of data collection?",
"This paper is an attempt to address this problem by proposing an empirical alternative to interpreting IRR.",
"Instead of relying on an absolute scale, we benchmark an experiment's IRR against two baseline measures, to be found in a replication.",
"Replication here is defined as re-annotating the same set of items with a slight change in the experimental setup, e.g., annotator population, annotation guidelines, etc.",
"By fixing the underlying corpus, we can ensure the baseline measures are sensitive to the experiment on hand.",
"The first baseline measure is the annotator reliability in the replication.",
"The second measure is the annotator reliability between the replications.",
"In Section 3, we present a novel way of measuring this.",
"We call it cross-kappa ( x ).",
"It is an extension of Cohen's (1960) kappa and is designed to measure annotator agreement between two replications in a chance-corrected manner.",
"We present in Appendix A the International Replication (IRep) dataset, 1 a large-scale crowdsourced dataset of four million judgements of human facial expressions in videos.",
"The dataset consists of three replications in Mexico City, Budapest, and Kuala Lumpur.",
"2 Our analysis in Section 4 shows this empirical approach enables meaningful interpretation of IRR.",
"In Section 5, we argue xRR is a sensible way of measuring the goodness of crowdsourced datasets, where high reliability is unattainable.",
"While we only illustrate comparing annotator populations in this paper, the methodology behind the xRR framework is general and can apply to similarly replicated datasets, e.g., via change of annotation guidelines.",
"To position our research, we present a brief summary of the literature in two areas: metrics for measuring annotator agreement and their shortcomings (Section 2.1), comparing replications of an experiment (Section 2.2).",
"Artstein and Poesio (2008) present a comprehensive survey of the literature on IRR metrics used in linguistics.",
"Popping (1988) compare an astounding 43 measures for nominal data (mostly applicable to reliability of data generated by only two observers).",
"Since then, Cohen's (1960) kappa and its variants (Carletta et al., 1997; Cohen, 1968) have become the de facto standard for measuring agreement in computational linguistics.",
"One of the strongest criticisms of kappa is its lack of interpretability when facing class imbalance.",
"This problem is known as the kappa paradox (Feinstein and Cicchetti, 1990; Byrt et al., 1993; 1 https://github.com/google-research-datasets/replication-dataset 2 On this task, raters received average hourly wages of $12, $20, and $14 USD in Mexico City, Budapest, and Kuala Lumpur respectively. See Appendix A for annotation setup. Warrens, 2010), or the base rates' problem (Ue-bersax, 1987).",
"Bruckner and Yoder (2006) show class imbalance imposes practical limits on kappa and suggest one to interpret kappa in relation to the class imbalance of the underyling data.",
"Others have proposed measures that are more robust against class imbalance (Gwet, 2008; Spitznagel and Helzer, 1985; Stewart and Rey, 1988).",
"Pontius Jr and Millones (2011) even suggest abandoning the use of kappa altogether.",
"Replications are often being compared, but it is done at the level of per-item mean scores.",
"Cowen and Keltner (2017) measure the correlation between the mean scores of two geographical rater pools.",
"They use Spearman's (1904) correction for attenuation (discussed later in this paper) with split-half reliability.",
"Snow et al. (2008) measure the Pearson correlations between the score of a single expert and the mean score of a group of non-experts, and vice versa.",
"In this comparison the authors do not correct for correlation attenuation, hence the reported correlations may be strongly biased.",
"Bias aside, correlation is not suitable for tasks with non-interval data or task with missing data.",
"In this paper, we propose a general methodology for measuring rater agreement between replications with the same kind of generality, flexibility, and ease of use as IRR.",
"Data reliability can be assessed when a set of items are annotated multiple times.",
"When this is done by a single rater, intra-rater reliability assesses a per-son's agreement with oneself.",
"When this is done by two or more raters, inter-rater reliability (IRR) assesses the agreement between raters in an experiment.",
"We propose to extend IRR to measure a similar notion of rater-rater agreement, but where the raters are taken from two different experiments.",
"We call it cross-replication reliability (xRR).",
"These replications can be a result of re-labeling the same items with a different rater pool, annotation template, or on a different platform, etc.",
"We begin with a general definition of Cohen's (1960) kappa.",
"We extend it to cross-kappa ( x ) to measure cross-replication reliability.",
"We then use this foundation to define normalized x to measure similarity between two replications.",
"The class of IRR measures is quite diverse, covering many different experimental scenarios, e.g., different numbers of raters, rating scales, agreement definitions, assumptions about rater interchangeability, etc.",
"Out of all such coefficients, Cohen's (1960) kappa has a distinct property that makes it most suitable for the task on hand.",
"Unlike Scott's pi (Scott, 1955), Fleiss's kappa (Fleiss, 1971), Krip-pendorf's alpha (Krippendorff, 2004), and many others, Cohen's (1960) kappa allows for two different marginal distributions .",
"This stems from Cohen's belief that two raters do not necessarily share the same marginal distribution, hence they should not be treated interchangeably .",
"When we compare replications, e.g., two rater populations, we are deliberately changing some underlying conditions of the experiment, hence it is safer to assume the marginal distributions will not be the same.",
"Within either replication, however, we rely on the rater interchangeability assumption.",
"We think this more accurately reflects the current practice in crowdsourcing, where each rater contributes a limited number of responses in an experiment, and hence raters are operationally interchangeable.",
"Cohen's (1960) kappa was invented to compare two raters classifying n items into a fixed number of categories.",
"Since its publication, it has been generalized to accommodate multiple raters (Light, 1971; Berry and Mielke Jr, 1988), and to cover different types of annotation scales: ordinal (Co-hen, 1968), interval (Berry and Mielke Jr, 1988; Janson and Olsson, 2001), multivariate (Berry and Mielke Jr, 1988), and any arbitrary distance function (Artstein and Poesio, 2008).",
"In this paper we focus on Janson and Olsson's (2001) generalization, which the authors denote with the lowercase Greek letter iota ( ).",
"It extends kappa to accommodate interval data with multiple raters, and is expressed in terms of pairwise disagreement: = 1 d o d e .",
"d o in this formula represents the observed portion of disagreement and is defined as:",
"where n is the number of items, b the number of annotators, i the item index, r and s the annotator",
"D ( x ri , x si ) = ( x ri x si ) 2",
"for categorical data.",
"Note we are dropping Janson and Olsson's multivariate reference in D () and focusing on the univariate case.",
"d e in the denominator represents the expected portion of disagreement and is defined as: d e = (cid:34) n 2 (cid:18) b 2 (cid:19)(cid:35) 1 (cid:88) r<s n (cid:88) i n (cid:88) j D ( x ri , x sj ) .",
"Janson and Olsson's expression in Eq.",
"1 is based on Berry and Mielke Jr (1988).",
"While the latter use absolute distance for interval data, the former use squared distance instead.",
"We follow Janson and Olsson's approach because squared distance leads to desirable properties and familiar interpretation of coefficients (Fleiss and Cohen, 1973; Krippendorff, 1970).",
"Squared distance is also used in alpha (Krippendorff, 2004).",
"Berry and Mielke Jr (1988) show if b = 2 and the scale is categorical, in Eq.",
"1 reduces to Cohen's (1960) kappa.",
"For other rating scales such as ratio, rank, readers should refer to Krippendorff (2004) for additional distance functions.",
"The equations for d o and d e are unaffected by the choice of D () .",
"Here we present x as a novel reliability coefficient for measuring the rater agreement between two replications.",
"In Janson and Olsson's generalized kappa above, the disagreement is measured within pairs of annotations taken from the same experiment.",
"In order to extend it to measure cross-replication agreement, we construct annotation pairs such that the two annotations are taken from different replications.",
"We do not consider annotation pairs from the same replication.",
"We define cross-kappa , x ( X, Y ) , as a reliability coefficient between replications X and Y : x ( X, Y ) = 1 d o ( X, Y ) d e ( X, Y ) , (6) where d o ( X, Y ) = 1 nRS n (cid:88) i =1 R (cid:88) r =1 S (cid:88) s =1 D ( x ri , y si ) , (7) and d e ( X, Y ) = 1 n 2 RS n (cid:88) i =1 n (cid:88) j =1 R (cid:88) r =1 S (cid:88) s =1 D ( x ri , y sj ) , (8) where x and y denote annotations from replications X and Y respectively, n is the number of items, R and S the numbers of annotations per item in replications X and Y respectively.",
"In this definition, the observed disagreement is obtained by averaging disagreement observed in nRS pairs of annotations, where each pair contains two annotations on the same item taken from two different replications.",
"Expected disagreement is obtained by averaging over all possible n 2 RS cross-replication annotation pairs.",
"When each replication has only 1 annotation per item, and the data is categorical, it is easy to show x reduces to Cohen's (1960) kappa.",
"x is a kappa-like measure, and will have properties similar to kappa's.",
"x is bounded between 0 and 1 in theory, though in practice it may be slightly negative for small sample sizes.",
"x = 0 means there is no discernible agreement between raters from two replications, beyond what would be expected by chance.",
"x = 1 means all raters between two replications are in perfect agreement with each other, which also implies perfect agreement within either replication.",
"As presented, the two replications can have different numbers of annotations per item.",
"However, within either replication, the number of annotations per item is assumed to be fixed.",
"We recognize this may not always be the case.",
"In practice, items within an experiment can receive varying numbers of annotations (i.e., missing data).",
"We now show how to calculate x with missing data.",
"When computing IRR with missing data, weights can be used to account for varying numbers of annotations within each item.",
"Janson and Olsson (2004) propose a weighting scheme for iota in Eq.",
"1.",
"Instead, we follow the tradition of Krippendorff (2004) in weighting each annotation equally in computing d o and d e .",
"That amounts to the following scheme.",
"In d o , we first normalize within each item, then we take a weighted average over all items, with weights proportional to the combined numbers of annotations per item.",
"In d e , no weighting is required.",
"Since R and S can now vary from item to item, we index them using R ( ) and S ( ) to denote that they are functions of the underlying items.",
"We rewrite d o and d e as: d o ( X, Y ) = n (cid:88) i =1 R ( i ) + S ( i ) R + SR ( i ) (cid:88) r =1 S ( i ) (cid:88) s =1 D ( x ri , y si ) R ( i ) S ( i ) (9) and d e ( X, Y ) = 1 R S n (cid:88) i =1 n (cid:88) j =1 R ( i ) (cid:88) r =1 S ( j ) (cid:88) s =1 D ( x ri , y sj ) , (10) with R = n (cid:88) i R ( i ) , S = n (cid:88) j S ( j ) , (11) where R is the total number of annotations in replications X , R ( i ) the number annotations on item i in replication X , r = 1 , 2 , . . . , R ( i ) (on item i in replication X ); and similarly for S , S ( j ) , and s with respect to replication Y .",
"(cid:80)",
"R ( i ) r =1 and (cid:80) S ( j ) s =1 in Eq.",
"9 and 10 are inner summations, where i and j are indexes from the outer summations.",
"Without missing data, R ( i ) = R for all i , and S ( j ) = S for all j , then R = nR , S = nS , reducing Eq.",
"9 and 10 to Eq.",
"7 and 8.",
"xRR is modeled closely after IRR in order to serve as its baseline.",
"As IRR measures the agreement between raters, so does xRR.",
"In other words, x is really a measure of rater agreement , not a measure of experimental similarity per se.",
"This distinction is important.",
"If we want to measure how well we replicate an experiment, we need to measure its disagreement with the replication in relationship to their own internal disagreements.",
"The departure between inter-experiment and intra-experiment disagreements is important in measuring experimental similarity.",
"This calls for a normalization that considers x in relation to IRR.",
"First, we take inspirations from Spearman's correction for attenuation (Spearman, 1904): xy = r xy reliability x (cid:112) reliability y , (12) where r xy is the observed Pearson product-moment correlation between x and y (variables observed with measurement errors), xy is an estimate of their true, unobserved correlation (in the absence of measurement errors), and reliability x and reliability y are the reliabilities of x and y respectively.",
"Eq.",
"12 is Spearman's attempt to correct for the negative bias in r xy caused by the observation errors in x and y .",
"3 Eq.",
"12 is relevant here because of the close connection between Cohen's (1960) kappa and the Pearson correlation, r xy .",
"In the dichotomous case, if the two marginal distributions are the same, Cohen's (1960) kappa is equivalent to the Pearson correlation (Cohen, 1960, 1968).",
"In the multi-category case, Cohen (1968) generalizes this equivalence to weighted kappa, under the conditions of equal marginals and a specific quadratic weighting scheme.",
"Based on this strong connection, we propose replacing r xy in Eq.",
"12 with x and define normalized x as: normalized x = x ( X, Y ) IRRX IRRY .",
"Defined this way, one would expect normalized x to behave like xy .",
"That is indeed the case.",
"When we apply both measures to the IRep dataset, we obtain a Pearson correlation of 0.99 between them (see Section 4.5).",
"This leads to two insights.",
"First, we can interpret normalized x like a disattenuated correlation, xy (see (Muchinsky, 1996) for a rigorous interpretation).",
"Second, normalized x approximates the true correlation between two experiments' item-level mean scores.",
"Despite their affinity, xy is not a substitute for normalized x for measuring experimental similarity.",
"Normalized x is more general as it can accommodate non-interval scales and missing data.",
"By connecting normalized x to xy , we can also learn a lot about x itself.",
"To the extent that normalized x approximates xy , we can rewrite Eq.",
"13 as: x ( X, Y ) xy (cid:112) IRRX (cid:112) IRRY .",
"(14)",
"This formulation shows x behaves like a product of xy and the geometric mean of the two IRRs.",
"This has important consequences, as we can deduce the following.",
"1) Holding constant the mean scores, and hence xy , the lower the IRRs, the lower the x .",
"Intra-experiment disagreement inflates inter-experiment disagreement.",
"2) In theory xy 1 .",
"0 , 4 hence x is capped by the greater of the two IRRs.",
"I.e., Intra-experiment agreement presents a ceiling to inter-experiment agreement.",
"3) If x and y are identically distributed, e.g., in a perfect replication, xy = 1 and x ( X, Y ) = IRRX = IRRY .",
"Thus, when a low reliability experiment is replicated perfectly, x will be as low, whereas normalized x will be 1.",
"This explains why normalized x is more suitable for measuring experimental similarity.",
"In this section, we propose x as a measure of rater agreement between two replications, and normalized x is as an experimental similarity metric.",
"In the next section, we apply them in conjunction with IRR to illustrate how we can gain deeper insights into experiment reliabilities by triangulating these measures.",
"As a standalone measure, IRR captures the reliability of an experiment by encapsulating many of its facets: class imbalance, item difficulty, guideline clarity, rater qualification, task ambiguity, etc.",
"As such, it is difficult to compare the IRR of different experiments, or to interpret their individual values, because IRR is tangled with all the aforementioned design parameters.",
"For example, we cannot attribute a low IRR to rater qualification without first isolating other design parameters.",
"This is the problem we try to solve with xRR by contextualizing IRR with meaningful baselines via a replication.",
"We will demonstrate this by applying this technique to the IRep Dataset (Appendix A).",
"We focus on a subset of 5 emotions for illustration purposes, with the rest of the reliability values provided in Appendix B. In our analysis, IRR is measured with Cohen's (1960) kappa and xRR with x .",
"We will refer to them interchangeably.",
"First we illustrate in Fig. 2 that different emotions within the same city can have very different IRR.",
"For instance, the labels awe and love in Mexico City have an IRR of 0.1208 and 0.597 respectively (Table 1).",
"Awe and love are completely different 4 Spearman's correction can occasionally produce a correlation above 1.0 (Muchinsky, 1996).",
"emotions with different levels of class imbalance and ambiguity, and without controlling for these differences, the gap in their reliabilities is not unexpected.",
"That is exactly the problem about comparing IRRs such comparisons are not meaningful.",
"We need something directly comparable to awe in order to interpret its low IRR.",
"If we do not compare emotions, and just consider awe using the Landis-Koch scale, that would not be helpful either.",
"We would not be able to tell if its low IRR is a result of poor guidelines, general ambiguity in emotion detection, or ambiguity specific to awe .",
"It's more meaningful to compare replications of awe itself.",
"Figure 2 : Histograms of 31 emotion labels' IRR in 3 cities.",
"The x-axis denotes buckets of IRR values.",
"The y-axis denotes the number of emotion labels in each of those buckets.",
"There is a lot of variation between emotion labels within each city.",
"While the aforementioned variation in IRR between emotions is expected, IRR of the same emotion can vary greatly between replications as well.",
"Fig. 3 shows two contrasting examples.",
"On the one hand, the IRR of love is consistent across replications.",
"On the other hand, the IRR of contemplation varies a lot.",
"We know the IRR variation in contemplation is strictly attributed to rater pool differences because the samples, platforms and annotation templates are the same across experiments.",
"Such variation in IRR will be missed entirely by sampling based approaches for error-bars (e.g. standard error, boot-strap), which assume a fixed rater population.",
"As shown, replication can facilitate comparisons of IRR by producing meaningful baselines.",
"However, IRR is an internal property of a dataset, it does not allow us to compare two datasets directly.",
"To Figure 3 : IRR values for label love (left) and contemplation (right) across the 3 cities.",
"There are different degrees of IRR variability in the two emotion labels.",
"that end, we can apply x to quantify the rater agreement between two datasets, as IRR quantifies the rater agreement within a dataset.",
"Interestingly, not only is x useful for comparing two datasets, but it also serves as another baseline for interpreting their IRRs.",
"IRR is a step toward ensuring reproducibility, so naturally we wonder how much of the observed IRR is tied to the specific experiment and how much of it generalizes?",
"This is of particular concern when raters are sampled in a clustered manner, e.g., crowd workers from the same geographical region, grad students sharing the same office.",
"We rarely make sure raters are diverse and representative of the larger population.",
"High IRR can be the result of a homogeneous rater group, limiting the generality of the results.",
"In the context of the IRep dataset, that two cities having similar IRRs does not imply their raters agree with each other at a comparable level, or at all.",
"We will demonstrate this with two contrasting examples.",
"Figure 4 : IRR values of sadness in Mexico City and Budapest and their x value.",
"Both cities have as much internal agreement as cross-replication agreement.",
"Figure 5 : IRR of contentment in Kuala Lumpur and Mexico City and their x .",
"Both cities have high internal agreement, but no discernible cross-replication agreement.",
"Mexico City and Budapest both have a moderate IRR for sadness , 0.5147 and 0.5175 respectively, and their x is nearly the same at 0.4709 (Fig. 4).",
"This gives us confidence that the high IRR of sadness generalizes beyond the specific rater pools.",
"In contrast, on contentment Mexico City and Kuala Lumpur have comparable levels of IRR, 0.4494 and 0.6363 respectively, but their x is an abysmal -0.0344 5 (Fig. 5).",
"In other words, the rater agreement on contentment is limited to within-pool observations only.",
"This serves as an important reminder that IRR is a property of a specific experimental setup and may or may not generalize beyond that.",
"x allows us to ensure the internal agreement has external validity.",
"x is a step towards comparing two replications, but it is not a good standalone measure of replication similarity.",
"To do that, we must also account for both replications' internal agreements, e.g., via normalized x in Eq.",
"13.",
"Fig. 6 shows an example.",
"Mexico City and Budapest have a low x of 0.0817 on awe .",
"On the surface, this low agreement may seem attributable to differences between the rater pools.",
"However, there is a similarly low IRR in either city: 0.1208 in Mexico City, and 0.117 in Budapest.",
"After accounting for IRR, normalized x is much higher at 0.6872 (Table 2), indicating a decent replication similarity between the two cities.",
"Figure 6 : IRR of awe in Mexico City and Budapest and their xRR.",
"The low xRR is primarily a reflection of their low IRRs.",
"Table 2 : x and normalized x (in parentheses) of 5 emotion labels in 3 replication pairs.",
"5 Negative xRR value due to estimation error.",
"We apply Spearman's correction for attenuation in Eq.",
"12 to all 31 emotion labels in 3 replication pairs.",
"The resulting xy is plotted against the corresponding normalized x in Fig. 7.",
"Both measures are strongly correlated with a Pearson correlation of 0.99.",
"This justifies interpreting normalized x as a disattenuated correlation like xy .",
"The IRep dataset is replicated and is conducive to xRR analysis.",
"However, in practice most datasets are not replicated.",
"Is xRR still useful?",
"We present a specific use case of xRR in this section and argue that it is worth replicating a crowdsourced dataset in order to evaluate its quality.",
"Given a set of items, it is possible that annotations of the highest attainable quality still fail to meet the Landis-Koch requirements.",
"Task subjectivity and class imbalance together impose a practical limit on kappa (Bruckner and Yoder, 2006).",
"In these situations, the experimenter can forgo a data collection effort for reliability reasons.",
"Alternatively, the experimenter may believe that data of sufficiently high quality can still have scientific merits, regardless of reliability.",
"If so, what guidance can we use to ensure the highest quality data , especially when collecting data via crowdsourcing?",
"This paper is heavily motivated by this question.",
"xRR allows us to interpret IRR not on an absolute scale, but against a replication, a reference of sorts.",
"By judging a crowdsourced dataset against a reference, we can decide if its meets a certain quality bar, albeit a relative one.",
"In the IRep dataset, all replications are of equal importance.",
"However, in practice, we can often define a trusted source as our target.",
"This trusted source can consist of linguists, medical experts, calibrated crowd workers, or the experimenters themselves.",
"They should have enough expertise knowledge and an adequate understanding of the task.",
"The critical criterion in choosing a target is its ability to remove common quality concerns such as rater qualification and guideline effectiveness.",
"By replicating a random subset of a crowdsourced dataset with trusted annotators, 6 one can compare the two IRRs and make sure they are at a similar level.",
"If the crowd IRR is much higher, that may be an indication of collusion, or a set of overly simplistic guidelines that have deviated from the experiment fidelity (Sameki et al., 2015).",
"If the crowd IRR is much lower, it may just be a reflection of annotator diversity, or it can mean under-defined guidelines, unequal annotator qualifications, etc.",
"Further investigation is needed to ensure the discrepancy is reasonable and appropriate.",
"Suppose the two IRRs are similar, that is not to say that both datasets are similar.",
"Both groups of annotators can have high internal agreement amongst themselves, but the two groups can agree on different sets of items.",
"If our goal is to collect crowdsourced data that closely mirror the target, then we have to measure their mutual agreement, in addition to comparing their internal agreements.",
"Recall from Section 3.5 that if an experiment is replicated perfectly, x should be identical to the two IRRs.",
"Or more concisely, normalized x should be equal to 1.",
"Thus a high normalized x can assure us that the crowdsourced annotators are functioning as an extension of the trusted annotators, based on which we form our expectations.",
"At a glance, this approach seems similar to the common practice of measuring the accuracy of crowdsourced data against the ground truth (Resnik et al., 2006; Hripcsak and Wilcox, 2002).",
"However, they are actually fundamentally different approaches.",
"x is rooted in the reliability literature that does not rely on the existence of a correct 6 2 or more ratings per item are needed to measure the IRR.",
"answer.",
"The authors argue this is an unrealistic assumption for many crowdsourcing tasks, where the input involves some subjective judgement.",
"Accuracy itself is also a flawed metric for annotations data due to its inability to handle data uncertainty.",
"For instance, when the reliability of the gold data is less than perfect, accuracy can never reach 1.0.",
"Furthermore, accuracy is not chance-corrected, so it tends to inflate with class imbalance.",
"The aforementioned technique can also measure the quality of a dataset extension.",
"The main challenge in extending an existing dataset is to ensure the new data is consistent with the old.",
"The state-of-the-art method in computer vision is frequency matching ensuring the same proportion of yes/no votes in each image class.",
"Recht et al. (2019) extended ImageNet 7 using this technique, concluding there is a 11% 14% drop in accuracy across a broad range of models.",
"While frequency matching controls the distribution of some statistics, the impact of the new platform is uncontrolled for.",
"Engstrom et al. (2020) pointed out a bias in this sampling technique.",
"Overall, it is difficult to assess how well we are extending a dataset.",
"To that end, xRR can be of help.",
"A high normalized x and a comparable IRR in the new data can give us confidence in the uniformity and continuity in the data collection.",
"There has been a tectonic shift in the scope of and methodologies for annotations data collection due to the rise of crowdsourcing and machine learning.",
"In many of these tasks, a high reliability is often difficult to attain, even under favorable circumstances.",
"The rigid Landis-Koch scale has resulted in a decrease in the usage and reporting of IRR in most widely used datasets and benchmarks.",
"Instead of abandoning IRR, we should adapt it to new ways of measuring data quality.",
"The xRR framework presents a first-principled way of doing so.",
"It is a more empirical approach that utilizes a replication as a reference point.",
"It is based on two metrics x for measuring cross-replication rater agreement and normalized x for measuring replication similarity.",
"7 http://www.image-net.org/",
"be used to guide our crowdsourcing data collection efforts.",
"This is the beginning of a long line of inquiry.",
"We outline future work and limitations below: Confidence intervals for x Confidence intervals for x and normalized x are required for hypothesis testing.",
"Though one can use the block-bootstrap for an empirical estimate, large sample behavior of these metrics needs to be studied.",
"Sensitivity of x with high class-imbalance The xRR framework sidesteps the effect of class-imbalance by comparing replications on the same item set.",
"Further analysis needs to confirm the sensitivity of x metrics in high class-imbalance.",
"Optimization of x computation Our method requires constructing many pairs of observations: n 2 RS .",
"This may get prohibitively expensive, when the number of items is large.",
"Using algebraic sim-plification and dynamic programming, this can be made much more efficient.",
"Alternative normalizations of x We provided one particular normalization technique, but it may not suit all applications.",
"For example, when comparing crowd annotations to expert annotations, one can consider, x / IRR expert .",
"Alternative xRR coefficients Our proposed xRR coefficient, x , is based on Cohen's (1960) kappa for its assumption about rater noninterchangeability.",
"It may be useful to consider Krippendorff's alpha and other agreement statistics as alternatives for other assumptions and statistical characteristics.",
"We hope this paper and dataset will spark research on these questions and increase reporting of reliability measures for human annotated data.",
"We like to thank Gautam Prasad and Alan Cowen for their work on collecting and sharing the IRep dataset and opensourcing it."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"This paper presents the first unsupervised approach to lexical semantic change that makes use of contextualised word representations.",
"We propose a novel method that exploits the BERT neural language model to obtain representations of word usages, clusters these representations into usage types, and measures change along time with three proposed metrics.",
"We create a new evaluation dataset and show that the model representations and the detected semantic shifts are positively correlated with human judgements.",
"Our extensive qualitative analysis demonstrates that our method captures a variety of synchronic and diachronic linguistic phenomena.",
"We expect our work to inspire further research in this direction.",
"In the fourteenth century the words boy and girl referred respectively to a male servant and a young person of either sex (Oxford English Dictionary).",
"By the fifteenth century a narrower usage had emerged for girl , designating exclusively female individuals, whereas by the sixteenth century boy had lost its servile connotation and was more broadly used to refer to any male child, becoming the mas-culine counterpart of girl (Bybee, 2015).",
"Word meaning is indeed in constant mutation and, since correct understanding of the meaning of individual words underpins general machine reading comprehension, it has become increasingly relevant for computational linguists to detect and characterise lexical semantic changee.g., in the form of laws of semantic change (Dubossarsky et al., 2015; Xu and Kemp, 2015; Hamilton et al., 2016)with the aid of quantitative and reproducible evaluation procedures (Schlechtweg et al., 2018).",
"Most recent studies have focused on shift detection , the task of deciding whether and to what extent the concept evoked by a word has changed between time periods (e.g., Gulordava and Baroni, 2011; Kim et al., 2014; Kulkarni et al., 2015; Del Tredici et al., 2019; Hamilton et al., 2016; Bamler and Mandt, 2017; Rosenfeld and Erk, 2018).",
"This line of work relies mainly on distributional semantic models, which produce one abstract representation for every word form.",
"However, aggregating all senses of a word into a single representation is particularly problematic for semantic change as word meaning hardly ever shifts directly from one sense to another, but rather typically goes through polysemous stages (Hopper et al., 1991).",
"This limitation has motivated recent work on word sense induction across time periods (Lau et al., 2012; Cook et al., 2014; Mitra et al., 2014; Frermann and Lapata, 2016; Rudolph and Blei, 2018; Hu et al., 2019).",
"Word senses, however, have shortcomings themselves as they are a discretisation of word meaning, which is continuous in nature and modulated by context to convey ad-hoc interpretations (Brugman, 1988; Kilgarriff, 1997; Paradis, 2011).",
"In this work, we propose a usage-based approach to lexical semantic change, where sentential context modulates lexical meaning on the fly (Lud-low, 2014).",
"We present a novel method that (1) exploits a pre-trained neural language model (BERT; Devlin et al., 2019) to obtain contextualised representations for every occurrence of a word of interest, (2) clusters these representations into usage types , and (3) measures change along time.",
"More concretely, we make the following contributions: We present the first unsupervised approach to lexical semantic change that makes use of state-of-the-art contextualised word representations.",
"We propose several metrics to measure semantic change with this type of representation.",
"Our code is available at https://github.com/ glnmario/cwr4lsc .",
"similarity judgements on more than 3K word usage pairs across different time periods, available at https://doi.org/10.5281/zenodo.3773250",
"We show that both the model representations and the detected semantic shifts are positively correlated with human intuitions.",
"Through in-depth qualitative analysis, we show that the proposed approach captures synchronic phenomena such as word senses and syntactic functions, literal and metaphorical usage, as well as diachronic linguistic processes related to narrowing and broadening of meaning across time.",
"Overall, our study demonstrates the potential of using contextualised word representations for modelling and analysing lexical semantic change and opens the door to further work in this direction.",
"Semantic change modelling Lexical semantic change models build on the assumption that meaning change results in the modification of a word's linguistic distribution.",
"In particular, with the exception of a few methods based on word frequencies and parts of speech (Michel et al., 2011; Kulkarni et al., 2015), lexical semantic change detection has been addressed following two main approaches: form-based and sense-based (for an overview, see Kutuzov et al., 2018; Tang, 2018).",
"In form-based approaches independent models are trained on the time intervals of a diachronic corpus and the distance between representations of the same word in different intervals is used as a semantic change score (Gulordava and Baroni, 2011; Kulkarni et al., 2015).",
"Representational coherence between word vectors across different periods can be guaranteed by incremental training procedures (Kim et al., 2014) as well as by post hoc alignment of semantic spaces (Hamilton et al., 2016).",
"More recent methods capture diachronic word usage by learning dynamic word embeddings that vary as a function of time (Bamler and Mandt, 2017; Rosenfeld and Erk, 2018; Rudolph and Blei, 2018).",
"Form-based models depend on a strong sim-plification: that a single representation is sufficient to model the different usages of a word.",
"Time-dependent representations are also created in sense-based approaches: in this case word meaning is encoded as a distribution over word senses.",
"Several Bayesian models of sense change have been proposed (Wijaya and Yeniterzi, 2011; Lau et al., 2012, 2014; Cook et al., 2014).",
"Among these is the recent SCAN model (Frermann and Lapata, 2016), which represents (1) the meaning of a word in a time interval as a multinomial distribution over word senses and (2) word senses as probability distributions over the vocabulary.",
"The main limitation of sense-based models is that they rely on a bag-of-words representation of context.",
"Furthermore, many of these models keep the number of senses constant across time intervals and require this number to be manually set in advance.",
"Unsupervised approaches have been proposed that do not rely on a fixed number of senses.",
"For example, the method for novel sense identification by Mitra et al. (2015) represents senses as clusters of short dependency-labelled contexts.",
"Like ours, this method analyses word forms within the grammatical structures they appear.",
"However, it requires syntactically parsed diachronic corpora and focuses exclusively on nouns.",
"None of these restrictions limit our proposed approach, which leverages neural contextualised word representations.",
"Contextualised word representations Several approaches to context-sensitive word representations have been proposed in the past.",
"Schutze (1998) introduced a clustering-based disambiguation algorithm for word usage vectors, Erk and Pado (2008) proposed creating multiple vectors for the same word and Erk and Pado (2010) proposed to directly learn usage-specific representations based on the set of exemplary contexts within which the target word occurs.",
"Recently, neural contextualised word representations have gained widespread use in NLP, thanks to deep learning models which learn usage-dependent representations while optimising tasks such as machine translation (CoVe; McCann et al., 2017) and language modelling (Dai and Le, 2015, ULMFiT; Howard and Ruder, 2018, ELMo; Peters et al., 2018, GPT; Radford et al., 2018, 2019, BERT; Devlin et al., 2019).",
"State-of-the-art language models typically use stacked attention layers (Vaswani et al., 2017), they are pre-trained on a very large amount of textual data, and they can be fine-tuned for specific downstream tasks (Howard and Ruder, 2018; Radford et al., 2019; Devlin et al., 2019).",
"Contextualised representations have been shown to encode lexical meaning dynamically, reaching high accuracy on, e.g., the binary usage similarity judgements of the WiC evaluation set (Pilehvar and Camacho-Collados, 2019), performing on a par with state-of-the-art word sense disambiguation models (Wiedemann et al., 2019), and proving useful for the supervised derivation of time-specific sense representation (Hu et al., 2019).",
"In this work, we investigate the potential of contextualised word representations to detect and analyse lexical semantic change, without any lexicographic supervision.",
"We introduce a usage-based approach to lexical semantic change analysis which relies on contextualised representations of unique word occurrences ( usage representations ).",
"First, given a diachronic corpus and a list of words of interest, we use the BERT language model (Devlin et al., 2019) to compute usage representations for each occurrence of these words.",
"Then, we cluster all the usage representations collected for a given word into an automatically determined number of partitions ( usage types ) and organise them along the temporal axis.",
"Finally, we propose three metrics to quantify the degree of change undergone by a word.",
"We produce usage representations using the BERT language model (Devlin et al., 2019), a multilayer bidirectional Transformer encoder trained on masked token prediction and next sentence prediction, on the BooksCorpus (800M words) (Zhu et al., 2015) and on English text passages extracted from Wikipedia (2,500M words).",
"There are two versions of BERT.",
"For space and time efficiency, we use the smaller base-uncased version, with 12 layers, 768 hidden dimensions, and 110M parameters.",
"1 3.2 Usage Representations Given a word of interest w and a context of occurrence s = ( v 1 , ..., v i , ..., v n ) with w = v i , we extract the activations of all of BERT's hidden layers for sentence position i and sum them dimension-wise.",
"We use addition because neither concatenation nor selecting a subset of the layers produced notable differences in the relative geometric distance between word representations.",
"The set of N usage representations for w in a given corpus can be expressed as the usage matrix U w = ( w 1 , . . . , w N ) .",
"For each usage representation in the usage matrix U w , we store the context of 1 We rely on Hugging Face 's implementation of BERT (available at https://github.com/huggingface/ transformers ).",
"occurrence (a 128-token window around the target word) as well as a temporal label t w indicating the time interval of the usage.",
"Once we have obtained a word-specific matrix of usage vectors U w , we standardise it and cluster its entries using K -Means.",
"2 This step partitions usage representations into clusters of similar usages of the same word, or usage types (see Figure 1a), and thus it is directly related to automatic word sense discrimination (Schutze, 1998; Pantel and Lin, 2002; Manandhar et al., 2010; Navigli and Vannella, 2013, among others).",
"For each word independently, we automatically select the number of clusters K that maximises the silhouette score (Rousseeuw, 1987), a metric of cluster quality which favours intra-cluster coherence and penalises inter-cluster similarity, without the need for gold labels.",
"For each value of K , we execute 10 iterations of Expectation Maximization to alleviate the influence of different initialisation values (Arthur and Vassilvitskii, 2007).",
"The final clustering for a given K is the one that yields the minimal distortion value across the 10 runs, i.e., the minimal sum of squared distances of each data point from its closest centroid.",
"We experiment with K [2 , 10] .",
"We choose the range [2 , 10] heuristically: we forgo K = 1 as K -Means and the silhouette score are ill-defined for this case, while keeping the number of possible clusters manageable computationally.",
"This excludes the possibility that a word has a single usage type.",
"Alternatively, we could use a measure of intra-cluster dispersion for K = 1 , and consider a word monosemous if its dispersion value is below a threshold d (if the dispersion is higher than d , we would discard K = 1 2 Other clustering methods are also possible.",
"For this first study, we choose the widely used K -Means ( scikit-learn ).",
"and use the silhouette score to find the best K 2 ).",
"There also exist clustering methods that select the optimal K automatically, e.g. DBSCAN or Affin-ity Propagation (Martinc et al., 2020).",
"They nevertheless require method-specific parameter choices which indirectly determine the number of clusters.",
"By counting the number of occurrences of each usage type k in a given time interval t (we refer to this count as freq ( k, t ) ), we obtain frequency distributions f tw for each interval under scrutiny: f tw NK w : f tw [ k ] = freq ( k, t ) k [1 , K w ] (1) When normalised, frequency distributions can be interpreted as probability distributions over usage types u tw : u tw [ k ] = 1 N t f tw [ k ] .",
"Figure 1b illustrates the result of this process.",
"We propose three metrics for the automatic quan-tification of lexical semantic change using contextualised word representations.",
"The first two ( entropy difference and Jensen-Shannon divergence ) are known metrics for comparing probability distributions.",
"In our approach, we apply them to measure variations in the relative prominence of coexisting usage types.",
"We conjecture that these kinds of metric can help detect semantic change processes that, e.g., lead to broadening or narrowing (i.e., to increase or decrease, respectively, in the number or relative distribution of usage types).",
"The third metric ( average pairwise distance ) only requires a usage matrix U w and the temporal labels t w (Section 3.2).",
"Since it does not rely on usage type distributions, it is not sensitive to possible errors stemming from the clustering process.",
"Entropy difference (ED) We propose measuring the uncertainty (e.g., due to polysemy) in the interpretation of a word w in interval t using the normalised entropy of its usage distribution u tw : ( u tw ) = log K w (cid:32) K w (cid:89) k =1 u tw [ k ] u tw [ k ] (cid:33) (2) To quantify how uncertainty over possible interpretations varies across time intervals, we compute the difference in entropy between the two usage type distributions in these intervals: ED( u tw , u t (cid:48) w ) = ( u t (cid:48) w ) ( u tw ) .",
"We expect high ED values to signal the broadening of a word's interpretation and negative values to indicate narrowing.",
"Jensen-Shannon divergence (JSD) The second metric takes into account not only variations in the size of usage type clusters but also which clusters have grown or shrunk.",
"It is the Jensen-Shannon divergence (Lin, 1991) between usage type distributions: JSD( u tw , u t (cid:48) w ) = H (cid:18) 1 2 (cid:16) u tw + u t (cid:48) w (cid:17)(cid:19) 1 2 (cid:16) H (cid:0) u tw (cid:1) H (cid:16) u t (cid:48) w (cid:17)(cid:17) (3) where H is the Boltzmann-Gibbs-Shannon entropy.",
"Very dissimilar usage distributions yield high JSD whereas low JSD values indicate that the proportions of usage types barely change across periods.",
"Average pairwise distance (APD) While the previous two metrics rely on usage type distributions, it is also possible to quantify change bypassing the clustering step into usage types, e.g. by calculating the average pairwise distance between usage representations in different periods t and t (cid:48) : APD( U tw , U t (cid:48) w ) = 1 N t N t (cid:48) (cid:88) x i U tw , x j U t (cid:48) w d ( x i , x j ) (4) where U tw is a usage matrix constructed with occurrences of w only in interval t .",
"We experiment with cosine, Euclidean, and Canberra distance.",
"Generalisation to multiple time intervals The presented metrics quantify semantic change across pairs of temporal intervals ( t, t (cid:48) ).",
"When more than two intervals are available, we measure change across all contiguous intervals ( m ( U tw , U t +1 w ) , where m is one of the metrics), and collect these values into vectors.",
"We then transform each vector into a scalar change score by computing the vector's mean and maximum values.",
"3 Whereas the mean is indicative of semantic change across the entire period under consideration, the max pinpoints the pair of successive intervals where the strongest shift has occurred.",
"3 The Jensen-Shannon divergence can also be measured with respect to T > 2 probability distributions (R e and Azad, 2014): JSD (cid:0) u 1 w , . . . , u Tw (cid:1) = H (cid:16) 1 T (cid:80) Ti =1 u iw (cid:17) 1 T (cid:80) Ti =1 H (cid:0) u iw (cid:1) .",
"However, this definition of the JSD is insensitive to the order of the temporal intervals and yields lower correlation with human semantic change ratings (cfr. Section 5.2) than the pairwise metrics.",
"We examine word usages in a large diachronic corpus of English, the Corpus of Historical American English (COHA, Davies, 2012), which covers two centuries (18102009) of language use and includes a variety of genres, from fiction to newspapers and popular magazines, among others.",
"In this study, we focus on texts written between 1910 and 2009, for which a minimum of 21M words per decade is available, and discard previous decades, where data are less balanced per decade.",
"We use the 100 words annotated with semantic shift scores by Gulordava and Baroni (2011) as our target words.",
"These scores are human judgements collected by asking five annotators to quantify the degree of semantic change undertaken by each word (shown out of context) from the 1960's to the 1990's.",
"We exclude extracellular as in COHA this word only appears in three decades; all other words appear in at least 8 decades, with a minimum and maximum frequency of 191 and 108,796, respectively.",
"We refer to the resulting set of 99 words and corresponding shift scores as the GEMS dataset' or the GEMS words', as appropriate.",
"We collect a contextualised representation for each occurrence of these words in the second century of COHA, using BERT as described in Section 3.2.",
"This results in a large set of usage representations, 1.3M in total, which we cluster into usage types using K -Means and silhouette coefficients (Section 3.3).",
"We use these usage representations and usage types in the evaluation and the analyses offered in the remaining of the paper.",
"Before using our proposed method to analyse language change, we assess how its key components compare with human judgements.",
"We test whether the clustering into usage types reflects human similarity judgements (Section 5.1) and to what extent the degree of change computed with our metrics correlates with shift scores provided by humans (Section 5.2).",
"The clustering of contextualised representations into usage types is one of the main steps in our method (see Section 3.3).",
"It relies on the similarity values between pairs of usage representations created by the language model.",
"To quantitatively evaluate the quality of these similarity values (and thus, by extension, the quality of usage representations and usage types), we compare them to similarity judgements by human raters.",
"New dataset of similarity judgements We create a new evaluation dataset, following the annotation approach of Erk et al. (2009, 2013) for rating pairs of usages of the same word.",
"Since we need to collect human judgements for pairs of usages, annotating the entire GEMS dataset would be extremely costly and time consuming.",
"Therefore, to limit the scope of the annotation, we select a subset of words.",
"For each shift score value s in the GEMS dataset, we sample a word uniformly at random from the words annotated with s .",
"This results in 16 words.",
"To ensure that our selection of usages is suf-ficiently varied, for each of these words, we sample five usages from each of their usage types (the number of usage types is word-specific) along different time intervals, one usage per 20-year period over the century.",
"All possible pairwise combinations are generated for each target word, resulting in a total of 3,285 usage pairs.",
"We use the crowdsourcing platform Figure Eight 4 to collect five similarity judgements for each of these usage pairs.",
"Annotators are shown pairs of usages of the same word: each usage shows the target word in its sentence, together with the previous and the following sentences (67 tokens on average).",
"Annotators are asked to assign a similarity score on a 4-point scale, ranging from unrelated to identical , as defined by Brown (2008) and used e.g., by Schlechtweg et al. (2018).",
"5 A total of 380 annotators participated in the task.",
"The inter-rater agreement, measured as the average pairwise Spearman's correlation between common annotation subsets, is 0.59.",
"This is in line with previous approaches such as Schlechtweg et al. (2018), who report agreement scores between 0.57 and 0.68.",
"Results To obtain a single human similarity judgement per usage pair, we average the scores given by five annotators.",
"We encode all averaged human similarity judgements for a given word in a square matrix.",
"We then compute similarity scores over pairs of usage vectors output by BERT 6 to 4 https://www.figure-eight.com , recently acquired by Appen ( https://appen.com ).",
"6 For this evaluation, BERT is given the same variable-size context as the human annotators.",
"Vector similarity values are computed as the inverse of Euclidean distance, because K -means relies on this metric for cluster assignments.",
"obtain analogous matrices per word and measure Spearman's rank correlation between the human-and the machine-generated matrices using the Mantel test (Mantel, 1967).",
"We observe a significant ( p < 0 . 05 ) positive correlation for 10 out of 16 words, with coefficients ranging from 0.13 to 0.45.",
"7 This is an encouraging result, which indicates that BERT's word representations and similarity scores (as well as our clustering methods which build on them) correlate, to a substantial extent, with human similarity judgements.",
"We take this to provide a promising empirical basis for our approach.",
"We now quantitatively assess the semantic change scores yielded by the metrics described in Section 3.4 when applied to BERT usage representations and the usage types created with our approach.",
"We do so by comparing them to the human shift scores in the GEMS dataset.",
"For consistency with this dataset, which quantifies change from the 1960's to the 1990's as explained in Section 4, we only consider these four decades when calculating our scores.",
"Using each of the metrics on representations from these time intervals, we assign a semantic change score to all the GEMS words.",
"We then compute Spearman's rank correlation between the automatically generated change scores and the gold standard shift values.",
"Results Table 1 shows the Spearman's correlation coefficients obtained using our metrics, together with a frequency baseline (the difference between the normalised frequency of a word in the 1960's and in the 1990's).",
"The three proposed metrics yield significant positive correlations.",
"This is again a very encouraging result regarding the potential of contextualised word representations for capturing lexical semantic change.",
"As a reference, we report the correlation coefficients with respect to GEMS shift scores documented by the authors of two alternative approaches: the count-based model by Gulordava and Baroni (2011) themselves (trained on two time slices from the Google Books corpus with texts from the 1960's and the 1990's) and the sense-based SCAN model by Frermann and Lapata (2016) (trained on the DATE corpus with texts from the 1960's through the 1990's).",
"8 7 Scores per target word are given in Appendix A.2.",
"For all our metrics, the max across the four time intervalsi.e., identifying the pair of successive intervals where the strongest shift has occurred (cfr. end of Section 3.4)is the best performing aggregation strategy.",
"Table 1 only shows values obtained with max and Euclidean distance for APD, as they are the best-performing options.",
"It is interesting to observe that APD can prove as informative as JSD and ED, although it does not depend on the clustering of word occurrences into usage types.",
"Yet, computing usage types offers a powerful tool for analysing lexical change, as we will see in the next section.",
"In this section, we provide an in-depth qualitative analysis of the linguistic properties that define usage types and the kinds of lexical semantic change we observe.",
"More quantitative methods (such as taking the top n words with highest JSD, APD and ED and checking, e.g., how many cases of broadening each metric captures) are difficult to operationalise (Tang et al., 2016) because there exist no well-established formal notions of semantic change types in the linguistic literature.",
"To carry out this analysis, for each GEMS word, we identify the most representative usages in a given usage type cluster by selecting the five closest vectors to the cluster centroid, and take the five corresponding sentences as usage examples.",
"However, to allow for direct comparison, Frermann and Lapata (2016) computed Spearman correlation for that work (see their footnote 7), which is the value we report.",
"goal is to assess the interpretability and internal coherence of the obtained usage clusters.",
"We observe that usage types can discriminate between underlying senses of polysemous (and homonymous) words, between literal and figura-tive usages, and between usages that fulfil different syntactic roles; plus they can single out phrasal collocations as well as named entities.",
"Polysemy and homonymy Distinctions often occur between underlying senses of polysemous and homonymous words.",
"For example, the vectors collected for the polysemous word curious are grouped together into two usage types, depending on whether curious is used to describe something that excites attention as odd, novel, or unexpected (a wonderful and curious and unbelievable story') or rather to describe someone who is marked by a desire to investigate and learn ( curious and amazed and innocent').",
"The same happens for the homonymous usages of the word coach , for instance, which can denote vehicles as well as instructors (see Figure 2a for a diachronic view of the usage types).",
"Metaphor and metonymy In several cases, literal and metaphorical usages are also separated.",
"For example, occurrences of curtain are clustered into four usage types (Figure 2c): two of these correspond to a literal interpretation of the word as a hanging piece of cloth ( curtain less windows', pulled the curtain closed') whereas the other two indicate metaphorical interpretations of curtain as any barrier that excludes the free exchange of information or communication (the curtain on the legal war is being raised').",
"Similarly, we obtain two usage types for sphere : one for literal usages that denote a round solid figure (the sphere of the moon'), and the other for metaphorical interpretations of the word as an area of knowledge or activity (a certain sphere of autonomy') as well as metonymical usages that refer to the planet Earth (land and peoples on the top half of the sphere ').",
"Syntactic roles and argument structure Further distinctions are observed between word usages that fulfil a different syntactic functionality: not only is part-of-speech ambiguity detected (e.g., the cost -tapered average tariff' vs. cost less to make') but contextualised representations also capture regularities in syntactic argument structures.",
"For example, usages of refuse are clustered into nominal usages (society's emotional refuse ', the amount of refuse '), verbal transitive and intransitive usages (fall, give up, refuse , kick'), as well as verbal usages with infinitive complementation ( refuse to go', refuse for the present to sign a treaty').",
"Collocations and named entities Specific clusters are also assigned to lexical items that are parts of phrasal collocations (e.g., iron curtain ') or of named entities (alexander graham bell ' vs. bell like whistle').",
"Other distinctions Some distinctions are interpretable but unexpected.",
"As an example, the word doubt does not show the default noun-verb separation but rather a distinction between usages in affirmative contexts (there is still doubt ', the ben-efit of the doubt ') and in negative contexts (there is not a bit of doubt ', beyond a reasonable doubt ').",
"Observed errors For some words, we find that usages which appear to be identical are separated into different usage types.",
"In a handful of cases, this seems due to the setup we have used for experimentation, which sets the minimum number of clusters to 2 (see Section 3.3).",
"This leads to distinct usage types for words such as maybe , for which a single type is expected.",
"In other cases, a given interpretation is not identified as an independent type, and its usages appear in different clusters.",
"This holds, for example, for the word tenure , whose usages in phrases such as tenure -track faculty po-sition' are present in two distinct usage types (see Figure 2b).",
"Finally, we see that in some cases a usage type ends up including two interpretations which arguably should have been distinguished.",
"For example, two of the usage types identified for address are interpretable and coherent: one includes usages in the sense of formal speech and the other one includes verbal usages.",
"The third usage type, however, includes a mix of nominal usages of the word as in disrespectful manners or address ' as well as in network address '.",
"Here we consider usage types diachronically.",
"Different kinds of change, driven by cultural and technological innovation as well as by historical events, emerge from a qualitative inspection of usage distributions along the temporal dimension.",
"We describe the most prominent kindsnarrowing and broadening, including metaphorisationand discuss the extent to which our metrics are able to detect them.",
"Narrowing Examination of the dynamics of usage distributions allows us to see that for a few words certain usage types disappear or become less common over time (i.e., the interpretation of the word becomes narrower', less varied).",
"This is the case, for example, for coach , where the frequency decrease of a usage type is gradual and caused by technological evolution (see Figure 2a).",
"Negative mean ED (see Section 3.4) reliably indicates this kind of narrowing.",
"Indeed coach is assigned one of the lowest ED score among the GEMS words.",
"In contrast, ED fails to detect the obsolescence of a usage type when new usage types emerge simultaneously (since this may lead to no entropy reduction).",
"This is the case, e.g., of tenure .",
"The usage type capturing tenure of a landed property becomes obsolete; however, we obtain a positive mean ED caused by the appearance of a new usage type (the third type in Figure 2b).",
"Broadening For a substantial amount of words, we observe the emergence of new usage types (i.e., a broadening' of their use).",
"This may be due to technological advances as well as to specific historical events.",
"As an example, Figure 2d shows how, starting from the 1950's and as a result of technological innovation, the word disk starts to be used to denote also optical disks while beforehand it referred only to generic flat circular objects.",
"A special kind of broadening is metaphorisation.",
"As mentioned in Section 6.1, the usage types for the word curtain include metaphorical interpretations.",
"Figure 2c allows us to see when the metaphorical meaning related to the historically charged expression iron curtain is acquired.",
"This novel usage type is related to a specific historical period: it emerges between the 1930's and the 1940's, reaches its peak in the 1950's, and remains stably low in frequency starting from the 1970's.",
"The metrics that best capture broadening are JSD and APDe.g., disk is assigned a high semantic change score by both metrics.",
"Yet, sometimes these metrics generate different score rankings.",
"For example, curtain yields a rather low APD score due to the low relative frequency of the novel usage (Figure 2c).",
"In contrast, even though the novel usage type is not very prominent in some decades, JSD can still discriminate it and measure its development.",
"On the other hand, the word address , for which we also observe broadening, is assigned a low score by JSD due to the errors in its usage type assignments pointed out in Section 6.1.",
"As APD does not rely on usage types, it is not affected by this issue and does indeed assign a high change score to the word.",
"Finally, although our metrics help us identify the broadening of a word's meaning, they cannot capture the type of broadening (i.e., the nature of the emerging interpretations).",
"Detecting metaphorisa-tion, for example, may require inter-cluster comparisons to identify a metaphor's source and target usage types, which we leave to future work.",
"We have introduced a novel approach to the analysis of lexical semantic change.",
"To our knowledge, this is the first work that tackles this problem using neural contextualised word representations and no lexicographic supervision.",
"We have shown that the representations and the detected semantic shifts are aligned to human interpretation, and presented a new dataset of human similarity judgements which can be used to measure said alignment.",
"Finally, through extensive qualitative analysis, we have demonstrated that our method allows us to capture a variety of synchronic and diachronic linguistic phenomena.",
"Our approach offers several advantages over previous methods: (1) it does not rely on a fixed number of word senses, (2) it captures morphosyntac-tic properties of word usage, and (3) it offers a more effective interpretation of lexical meaning by enabling the inspection of particular example sentences.",
"In recent work, we have experimented with alternative ways of obtaining usage representations (using a different language model, fine-tuning, and various layer selection strategies) and we have obtained very promising results in detecting semantic change across four languages (Kutuzov and Giulianelli, 2020).",
"In the future, we plan to investigate whether usage representations can provide an even finer grained account of lexical meaning and its dynamics, e.g., to automatically discriminate between different types of meaning change.",
"We expect our work to inspire further analyses of variation and change which exploit the expressiveness of contextualised word representations.",
"This paper builds upon the preliminary work presented by Giulianelli (2019).",
"We would like to thank Lisa Beinborn for providing useful feedback as well as the three anonymous ACL reviewers for their helpful comments.",
"This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455)."
] | [
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"objective",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Neural architecture search (NAS) has advanced significantly in recent years but most NAS systems restrict search to learning architectures of a recurrent or convolutional cell.",
"In this paper, we extend the search space of NAS.",
"In particular, we present a general approach to learn both intra-cell and inter-cell architectures (call it ESS).",
"For a better search result, we design a joint learning method to perform intra-cell and inter-cell NAS simultaneously.",
"We implement our model in a differentiable architecture search system.",
"For recurrent neural language modeling, it outperforms a strong baseline significantly on the PTB and WikiText data, with a new state-of-the-art on PTB.",
"Moreover, the learned architectures show good transferability to other systems.",
"E.g., they improve state-of-the-art systems on the CoNLL and WNUT named entity recognition (NER) tasks and CoNLL chunking task, indicating a promising line of research on large-scale pre-learned architectures.",
"Neural models have shown remarkable performance improvements in a wide range of natural language processing (NLP) tasks.",
"Systems of this kind can broadly be characterized as following a neural network design: we model the problem via a pre-defined neural architecture, and the resulting network is treated as a black-box family of functions for which we find parameters that can generalize well on test data.",
"This paradigm leads to many successful NLP systems based on well-designed architectures.",
"The earliest of these makes use of recurrent neural networks (RNNs) for representation learning (Bahdanau et al., 2015; Wu et al., 2016), Corresponding author.",
"whereas recent systems have successfully incorporated fully attentive models into language generation and understanding (Vaswani et al., 2017).",
"In designing such models, careful engineering of the architecture plays a key role for the state-of-the-art though it is in general extremely difficult to find a good network structure.",
"The next obvious step is toward automatic architecture design.",
"A popular method to do this is neural architecture search (NAS).",
"In NAS, the common practice is that we first define a search space of neural networks, and then find the most promising candidate in the space by some criteria.",
"Previous efforts to make NAS more accurate have focused on improving search and network evaluation algorithms.",
"But the search space is still restricted to a particular scope of neural networks.",
"For example, most NAS methods are applied to learn the topology in a recurrent or convolutional cell, but the connections between cells are still made in a heuristic manner as usual (Zoph and Le, 2017; Elsken et al., 2019).",
"Note that the organization of these sub-networks remains important as to the nature of architecture design.",
"For example, the first-order connectivity of cells is essential to capture the recurrent dynamics in RNNs.",
"More recently, it has been found that additional connections of RNN cells improve LSTM models by accessing longer history on language modeling tasks (Melis et al., 2019).",
"Similar results appear in Transformer systems.",
"Dense connections of distant layers help in learning a deep Transformer encoder for machine translation (Shen et al., 2018).",
"A natural question that arises is: can we learn the connectivity of sub-networks for better architecture design?",
"sub-networks that are designed in either a handcrafted or automatic way (Figure 1).",
"We call this the Extended Search Space method for NAS (or ESS for short).",
"Here, we choose differentiable architecture search as the basis of this work because it is efficient and gradient-friendly.",
"We present a general model of differentiable architecture search to handle arbitrary search space of NAS, which offers a unified framework of describing intra-cell NAS and inter-cell NAS.",
"Also, we develop a joint approach to learning both high-level and low-level connections simultaneously.",
"This enables the interaction between intra-cell NAS and inter-cell NAS, and thus the ability of learning the full architecture of a neural network.",
"Our ESS method is simple for implementation.",
"We experiment with it in an RNN-based system for language modeling.",
"On the PTB and WikiText data, it outperforms a strong baseline significantly by 4.5 and 2.4 perplexity scores.",
"Moreover, we test the transferability of the learned architecture on other tasks.",
"Again, it shows promising improvements on both NER and chunking benchmarks, and yields new state-of-the-art results on NER tasks.",
"This indicates a promising line of research on large-scale pre-learned architectures.",
"More interestingly, it is observed that the inter-cell NAS is helpful in modeling rare words.",
"For example, it yields a bigger improvement on the rare entity recognition task (WNUT) than that on the standard NER task (CoNLL).",
"NAS is a promising method toward AutoML (Hut-ter et al., 2018), and has been recently applied to NLP tasks (So et al., 2019; Jiang et al., 2019; Li and Talwalkar, 2019).",
"Several research teams have investigated search strategies for NAS.",
"The very early approaches adopted evolutionary algorithms to model the problem (Angeline et al., 1994; Stanley and Miikkulainen, 2002), while Bayesian and reinforcement learning methods made big progresses in computer vision and NLP later (Bergstra et al., 2013; Baker et al., 2017; Zoph and Le, 2017).",
"More recently, gradient-based methods were successfully applied to language modeling and image classification based on RNNs and CNNs (Liu et al., 2019a).",
"In particular, differentiable architecture search has been of great interest to the community because of its efficiency and compatibility to off-the-shelf tools of gradient-based optimization.",
"Despite of great success, previous studies restricted themselves to a small search space of neural networks.",
"For example, most NAS systems were designed to find an architecture of recurrent or convolutional cell, but the remaining parts of the network are handcrafted (Zhong et al., 2018; Brock et al., 2018; Elsken et al., 2019).",
"For a larger search space, Zoph et al. (2018) optimized the normal cell (i.e., the cell that preserves the dimensionality of the input) and reduction cell (i.e., the cell that reduces the spatial dimension) simultaneously and explored a larger region of the space than the single-cell search.",
"But it is still rare to see studies on the issue of search space though it is an important fac-tor to NAS.",
"On the other hand, it has been proven that the additional connections between cells help in RNN or Transformer-based models (He et al., 2016; Huang et al., 2017; Wang et al., 2018, 2019).",
"These results motivate us to take a step toward the automatic design of inter-cell connections and thus search in a larger space of neural architectures.",
"In this work we use RNNs for description.",
"We choose RNNs because of their effectiveness at preserving past inputs for sequential data processing tasks.",
"Note that although we will restrict ourselves to RNNs for our experiments, the method and discussion here can be applied to other types of models.",
"For a sequence of input vectors { x 1 , ..., x T } , an RNN makes a cell on top of every input vector.",
"The RNN cell receives information from previous cells and input vectors.",
"The output at time step t is defined to be: h t = ( h t 1 , x t ) (1) where ( ) is the function of the cell.",
"h t 1 is the representation vector of previous cells, and x t is the representation vector of the inputs up to time step t .",
"More formally, we define h t 1 and x t as functions of cell states and model inputs, like this h t 1 = f ( h [0 ,t 1] ; x [1 ,t 1] ) (2) x t = g ( x [1 ,t ] ; h [0 ,t 1] ) (3) where h [0 ,t 1] = { h 0 , ..., h t 1 } and x [1 ,t 1] = { x 1 , ..., x t 1 } .",
"f ( ) models the way that we pass information from previous cells to the next.",
"Likewise, g ( ) models the case of input vectors.",
"These functions offer a general method to model connections between cells.",
"For example, one can obtain a vanilla recurrent model by setting h t 1 = h t 1 and x t = x t , while more intra-cell connections can be considered if sophisticated functions are adopted for f ( ) and g ( ) .",
"While previous work focuses on searching for the desirable architecture design of ( ) , we take f ( ) and g ( ) into account and describe a more general case here.",
"We separate two sub-problems out from NAS for conceptually cleaner description: Intra-Cell NAS .",
"It learns the architecture of a cell (i.e., ( ) ).",
"Inter-Cell NAS .",
"It learns the way of connecting the current cell with previous cells and input vectors (i.e., f ( ) and g ( ) ).",
"For search algorithms, we follow the method of differentiable architecture search (DARTS).",
"It is gradient-based and runs orders of magnitude faster than earlier methods (Zoph et al., 2018; Real et al., 2019).",
"DARTS represents networks as a directed acyclic graph (DAG) and search for the appropriate architecture on it.",
"For a DAG, the edge o i,j ( ) F ( , ) ... ... S ... ... S Figure 2: Formalizing intra and inter-cell NAS as learning function F ( ) .",
"between node pair ( i, j ) performs an operation to transform the input (i.e., tail) to the output (i.e., head).",
"Like Liu et al. (2019a)'s method and others, we choose operations from a list of activation functions, e.g., sigmoid, identity and etc 1 .",
"A node represents the intermediate states of the networks.",
"For node i , it weights vectors from all predecessor nodes ( j < i ) and simply sums over them.",
"Let s i be the state of node i .",
"We define s i to be: s i = (cid:88) j<i (cid:88) k i,jk o i,jk ( s j W j ) (4) where W j is the parameter matrix of the linear transformation, and i,jk is the weight indicating the importance of o i,jk ( ) .",
"Here the subscript k means the operation index.",
"i,jk is obtained by softmax normalization over edges between nodes i and j : i,jk = exp( w i,jk ) / (cid:80) k (cid:48) exp( w i,jk (cid:48) ) .",
"In this way, the induction of discrete networks is reduced to learning continuous variables { i,jk } at the end of the search process.",
"This enables the use of efficient gradient descent methods.",
"Such a model encodes an exponentially large number of networks in a graph, and the optimal architecture is generated by selecting the edges with the largest weights.",
"The common approach to DARTS constraints the output of the generated network to be the last node that averages the outputs of all preceding nodes.",
"Let s n be the last node of the network.",
"We have s n = 1 n 1 n 1 (cid:88) i =1 s i (5) Given the input vectors, the network found by DARTS generates the result at the final node s n .",
"Here we present a method to fit this model into intra and inter-cell NAS.",
"We re-formalize the function for which we find good architectures as F ( ; ) .",
"and are two groups of the input vectors.",
"We create DAGs on them individually.",
"This gives us two DAGs with s and s as the last nodes.",
"Then, we make the final output by a Hadamard product of s and s , like this, F ( ; ) = s (cid:12) s (6) See Figure 2 for the network of an example F ( ; ) .",
"This method transforms the NAS problem into two learning tasks.",
"The design of two separate networks allows the model to group related inputs together, rather than putting everything into a magic system of NAS.",
"For example, for the inter-cell function f ( ) , it is natural to learn the pre-cell connection from h [0 ,t 1] , and learn the im-pact of the model inputs from x [1 ,t 1] .",
"It is worth noting that the Hadamard product of s and s is doing something very similar to the gating mechanism which has been widely used in NLP (Dauphin et al., 2017; Bradbury et al., 2017; Gehring et al., 2017).",
"For example, one can learn s as a gate and control how much s is used for final output.",
"Table 1 gives the design of and for the functions used in this work.",
"Another note on F ( ; ) .",
"The grouping reduces a big problem into two cheap tasks.",
"It is particularly important for building affordable NAS systems because computational cost increases exponentially as more input nodes are involved.",
"Our method instead has a linear time complexity if we adopt a reasonable constraint on group size, leading to a Function ( ) { h t 1 , x t } 1 f ( ) h [0 ,t 1] x [1 ,t 1] g ( ) x [1 ,t ] h [0 ,t 1] Table 1: and for different functions possibility of exploring a much larger space during the architecture search process.",
"The search of intra-cell architectures is trivial.",
"Since = 1 and s = 1 (see Table 1), we are basically performing NAS on a single group of input vectors h t 1 and x t .",
"We follow Liu et al. (2019a)'s work and force the input of networks to be a single layer network of h t 1 and x t .",
"This can be described as e 1 = tanh ( h t 1 W ( h ) + x t W ( x ) ) (7) where W ( h ) and W ( x ) are parameters of the transformation, and tanh is the non-linear transformation.",
"e 1 is the input node of the graph.",
"See Figure 3 for intra-cell NAS of an RNN models.",
"To learn h t 1 and x t , we can run the DARTS system as described above.",
"However, Eqs.",
"(2-3) define a model with a varying number of parameters for different time steps, in which our architecture search method is not straightforwardly applicable.",
"Apart from this, a long sequence of RNN cells makes the search intractable.",
"where m is a hyper-parameter that determines how much history is considered.",
"Eq.",
"(8) indicates a model that learns a network on x [ t m,t 1] (i.e., = x [ t m,t 1] ).",
"Then, the output of the learned network (i.e., s ) is used as a gate to control the information that we pass from the previous cell to the current cell (i.e., = { h t 1 } ).",
"Likewise, Eq.",
"(9) defines a gate on h [ t m,t 1] and controls the information flow from x t to the current cell.",
"Learning f (cid:48) ( ) and g (cid:48) ( ) fits our method well due to the fixed number of input vectors.",
"Note that f (cid:48) ( ) has m input vectors x [ t m,t 1] for learning the gate network.",
"Unlike what we do in intra-cell NAS, we do not concatenate them into a single input vector.",
"Instead, we create a node for every input vector, that is, the input vector e i = x t i links with node s i .",
"We restrict s i to only receive inputs from e i for better processing of each input.",
"This can be seen as a pruned network for the model described in Eq.",
"(4).",
"See Figure 3 for an illustration of inter-cell NAS.",
"Our model is flexible.",
"For architecture search, we can run intra-cell NAS, or inter-cell NAS, or both of them as needed.",
"However, we found that simply joining intra-cell and inter-cell architectures might not be desirable because both methods were restricted to a particular region of the search space, and the simple combination of them could not guarantee the global optimum.",
"This necessitates the inclusion of interactions between intra-cell and inter-cell architectures into the search process.",
"Generally, the optimal inter-cell architecture depends on the intra-cell architecture used in search, and vice versa.",
"A simple method that considers this issue is to learn two models in a joint manner.",
"Here, we design a joint search method to make use of the interaction between intra-cell NAS and inter-cell NAS.",
"Figure 4 shows the algorithm.",
"It runs for a number of rounds.",
"In each round, we first learn an optimal intra-cell architecture by fixing the inter-cell architecture, and then learn a new inter-cell architecture by fixing the optimal intra-cell architecture that we find just now.",
"Obviously, a single run of intra-cell (or inter-cell) NAS is a special case of our joint search method.",
"For example, one can turn off the inter-cell NAS part (lines 4-5 in Figure 4) and learn intra-cell architectures solely.",
"In a sense, the joint NAS method extends the search space of individual intra-cell (or inter-cell) NAS.",
"Both intra-cell and inter-cell NAS shift to a new region of the parameter space in a new round.",
"This implicitly explores a larger number of underlying models.",
"As shown in our experiments, joint NAS learns intra-cell architectures unlike those of the individual intra-cell NAS, which leads to better performance in language modeling and other tasks.",
"We experimented with our ESS method on Penn Treebank and WikiText language modeling tasks and applied the learned architecture to NER and chunking tasks to test its transferability.",
"For language modeling task, the monolingual and evaluation data came from two sources.",
"Penn Treebank (PTB).",
"We followed the standard preprocessed version of PTB (Mikolov et al., 2010).",
"It consisted of 929k training words, 73k validation words and 82k test words.",
"The vocabulary size was set to 10k.",
"WikiText-103 (WT-103).",
"We also used WikiText-103 (Merity et al., 2017) data to search for a more universal architecture for NLP tasks.",
"This dataset contained a larger training set of 103 million words and 0.2 million words in the validation and test sets.",
"NER and chunking tasks were also used to test the transferability of the pre-learned architecture.",
"We transferred the intra and inter-cell networks learned on WikiText-103 to the CoNLL-2003 (En-glish), the WNUT-2017 NER tasks and the CoNLL-2000 tasks.",
"The CoNLL-2003 task focused on the newswire text, while the WNUT-2017 contained a wider range of English text which is more difficult to model.",
"Our ESS method consisted of two components, including recurrent neural architecture search and architecture evaluation.",
"During the search process, we ran our ESS method to search for the intra-cell and inter-cell architectures jointly.",
"In the second stage, the learned architecture was trained and evaluated on the test dataset.",
"For architecture search on language modeling tasks, we applied 5 activation functions as the candidate operations, including drop, identity, sigmoid, tanh and relu.",
"On the PTB modeling task, 8 nodes were equipped in the recurrent cell.",
"For the inter-cell architecture, it received 3 input vectors from the previous cells and consisted of the same number of the intermediate nodes.",
"By default, we trained our ESS models for 50 rounds.",
"We set batch = 256 and used 300 hidden units for the intra-cell model.",
"The learning rate was set as 3 10 3 for the intra-cell architecture and 1 10 3 for the inter-cell architecture.",
"The BPTT (Werbos, 1990) length was 35.",
"For the search process on WikiText-103, we developed a more complex model to encode the representation.",
"There were 12 nodes in each cell and 5 nodes in the inter-cell networks.",
"The batch size was 128 and the number of hidden units was 300 which was the same with that on the PTB task.",
"We set the intra-cell and inter-cell learning rate to 1 10 3 and 1 10 4 .",
"A larger window size ( = 70 ) for BPTT was applied for the WikiText-103.",
"All experiments were run on a single NVIDIA 1080Ti.",
"After the search process, we trained the learned architectures on the same data.",
"To make it comparable with previous work, we copied the setup in Merity et al. (2018b).",
"For PTB, the size of hidden layers was set as 850 and the training epoch was 3,000.",
"While for the WikiText-103, we enlarged the number of hidden units to 2,500 and trained the model for 30 epochs.",
"Additionally, we transferred the learned architecture to NER and chunking tasks with the setting in Akbik et al. (2019).",
"We only modified the batch size to 24 and hidden size to 512.",
"Here we report the perplexity scores, number of parameters and search cost on the PTB and WikiText-103 datasets (Table 2).",
"First of all, the joint ESS method improves the performance on language modeling tasks significantly.",
"Moreover, it does not introduce many parameters.",
"Our ESS method achieves state-of-the-art result on the PTB task.",
"It outperforms the manually designed Mogrifier-LSTM by 4.5 perplexity scores on the test set.",
"On 10/1 9/2 8/3 7/4 6/5 5/6 4/7 3/8 2/9 1/10 59 .",
"the WikiText task, it still yields a +2.4 perplexity scores improvement over the strong NAS baseline (DARTS) method.",
"These results indicate that ESS is robust and can learn better architectures by enlarging the scope of search space.",
"Also, we find that searching for the appropriate connections among cells plays a more important role in improving the model performance.",
"We observe that the intra-cell NAS (DARTS) system underperforms the inter-cell counterpart with the same number of parameters.",
"It is because the well-designed intra-cell architectures (e.g., Mogrifier-LSTM) are actually competitive with the NAS structures.",
"However, the fragile connections among different cells greatly restrict the representation space.",
"The additional inter-cell connections are able to encode much richer context.",
"Nevertheless, our ESS method does not defeat the manual designed Transformer-XL model on the WikiText-103 dataset, even though ESS works better than other RNN-based NAS methods.",
"This is partially due to the better ability of Transformer-XL to capture the language representation.",
"Note that RNNs are not good at modeling the long-distance dependence even if more history states are considered.",
"It is a good try to apply ESS to Transformer but this is out of the scope of this work.",
"To modulate the complexity of the intra and inter-cell, we study the system behaviors under different numbers of intermediate nodes (Figure 5).",
"Fixing the number of model parameters, we compare these systems under different numbers of the intra and inter-cell nodes.",
"Due to the limited space, we show the result on the PTB in the following sensitivity analysis.",
"We observe that an appropriate choice of node number (8 nodes for intra-cell and 3 nodes for inter-cell) brings a consistent improvement.",
"More interestingly, we find that too many nodes for inter-cell architecture do not improve the model representation ability.",
"This is reasonable 0.5K 2K 3.5K 5K 400 550 700 850 # of Training Steps P e r p l e x it y joint intra 0.5K 2K 3.5K 5K 0.00 0.15 0.30 0.45 0.60 # of Training Steps MAD intra inter Figure 6: Perplexity on the validation data (PTB) and Mean Absolute Deviation (MAD) between edge weights and uniform distribution vs. number of training steps.",
"because more inter-cell nodes refer to considering more history in our system.",
"But for language modeling, the current state is more likely to be relevant to most recent words.",
"Too many inputs to the gate networks raise difficulties in modeling.",
"We observe that our ESS method leads to a model that is easier to train.",
"The left part in Figure 6 plots the validation perplexity at different training steps.",
"The loss curve of joint ESS significantly goes down as the training proceeds.",
"More interestingly, our joint learning method makes the model achieve a lower perplexity than the intra-cell NAS system.",
"This indicates better networks can be obtained in the search process.",
"Additionally, the convergence can be observed from the right part in Figure 6.",
"Here we apply Mean Absolute Deviation (MAD) to define the distance between edge weights and initial uniform distribution.",
"It is obvious that both the intra and inter-cell architectures change little at the final searching steps.",
"In order to figure out the advantage of inter-cell connections, we detail the model contribution on each word on the validation data.",
"Specifically, we compute the difference in word loss function (i.e., !",
"(b) An intra-cell architecture found without using inter-cell connections Figure 7: Comparison of intra-cell architectures found by using and not using additional inter-cell connections Models F1 LSTM-CRF (Lample et al., 2016) 90.94 LSTM-CRF + ELMo (Peters et al., 2018) 92.22 LSTM-CRF + Flair (Akbik et al., 2019) 93.18 GCDT + BERTLARGE (Liu et al., 2019b) 93.47 CNN Large + ELMo (Baevski et al., 2019) 93.50 DARTS + Flair (Jiang et al., 2019) 93.13 I-DARTS + Flair (Jiang et al., 2019) 93.47 ESS 91.78 ESS + Flair 93.62 Table 4: F1 scores on CoNLL-2003 NER task.",
"log perplexity) between methods with and without inter-cell NAS.",
"The words with eight best improvements are shown in the left column of Table 3.",
"We observe that the rare words in the training set obtain more significant improvements.",
"In contrast, the most frequent words lead to very modest decrease in loss (right column of Table 3).",
"This is because the connections between multiple cells enable learning rare word representations from more histories.",
"While for common words, they can obtain this information from rich contexts.",
"More inputs from previous cells do not bring much useful information.",
"Additionally, we visualize the learned intra-cell architecture in Figure",
"7(a).",
"The networks are jointly learned with the inter-cell architecture.",
"Compared with the results of intra-cell NAS (Fig-ure",
"7(b)), the learned network is more shallow.",
"The inter-cell architectures have deeper networks.",
"This in turn reduces the need for intra-cell capacity.",
"Thus a very deep intra-cell architecture might not be necessary if we learn the whole model jointly.",
"After architecture search, we test the transferability of the learned architecture.",
"In order to apply the model to other tasks, we directly use the architecture searched on WikiText-103 and train the param-Models F1 Cross-BiLSTM-CNN (Aguilar et al., 2018) 45.55 Flair (Akbik et al., 2019) 50.20 DARTS + Flair 50.34 ESS 48.85 ESS + Flair 52.18 Table 5: F1 scores on WNUT-2017 NER task.",
"eters with the in-domain data.",
"In our experiments, we adapt the model to CoNLL-2003, WNUT-2017 NER tasks and CoNLL-2000 chunking task.",
"For the two NER tasks, it achieves new state-of-the-art F1 scores (Table 4 and Table 5).",
"ELMo, Flair and BERTLARGE refer to the pre-trained language models.",
"We apply these word embeddings to the learned architecture during model training process.",
"For the chunking task, the learned architecture also shows greater performance than other NAS methods (Table 6).",
"Moreover, we find that our pre-learned neural networks yield bigger improvements on the WNUT-2017 task.",
"The difference of the two NER tasks lies in that the WNUT-2017 task is a long-tail emerging entities recognition task.",
"It focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.",
"As we discuss in the previous part of the section, the additional inter-cell NAS is good at learning the representations of rare words.",
"Therefore, it makes sense to have a bigger improvement on WNUT-2017.",
"In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 53595368, Hong Kong, China.",
"Association for Computational Linguistics.",
"Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben-gio.",
"2015.",
"Neural machine translation by jointly learning to align and translate.",
"In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings .",
"James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher.",
"2017.",
"Quasi-recurrent neural networks.",
"In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings .",
"Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier.",
"2017.",
"Language modeling with gated convolutional networks.",
"In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017 , pages 933941.",
"Thomas Elsken, Jan Hendrik Metzen, and Frank Hut-ter.",
"2019.",
"Efficient multi-objective neural architecture search via lamarckian evolution.",
"In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019 .",
"Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin.",
"2017.",
"Convolutional sequence to sequence learning.",
"We have proposed the Extended Search Space (ESS) method of NAS.",
"It learns intra-cell and inter-cell architectures simultaneously.",
"Moreover, we present a general model of differentiable architecture search to handle the arbitrary search space.",
"Meanwhile, the high-level and low-level sub-networks can be learned in a joint fashion.",
"Experiments on two language modeling tasks show that ESS yields improvements of 4.5 and 2.4 perplexity scores over a strong RNN-based baseline.",
"More interestingly, it is observed that transferring the pre-learned architectures to other tasks also obtains a promising performance improvement.",
"This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005), the National Key R&D Program of China (No. 2019QY1801) and the Opening Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research.",
"The authors would like to thank anonymous reviewers for their comments."
] | [
"abstain",
"objective",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Despite the success of language models using neural networks, it remains unclear to what extent neural models have the generalization ability to perform inferences.",
"In this paper, we introduce a method for evaluating whether neural models can learn systematicity of monotonicity inference in natural language, namely, the regularity for performing arbitrary inferences with generalization on composition.",
"We consider four aspects of monotonicity inferences and test whether the models can systematically interpret lexical and logical phenomena on different training/test splits.",
"A series of experiments show that three neural models systematically draw inferences on unseen combinations of lexical and logical phenomena when the syntactic structures of the sentences are similar between the training and test sets.",
"However, the performance of the models significantly decreases when the structures are slightly changed in the test set while retaining all vocabularies and constituents already appearing in the training set.",
"This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.",
"Natural language inference (NLI), a task whereby a system judges whether given a set of premises P semantically entails a hypothesis H (Dagan et al., 2013; Bowman et al., 2015), is a fundamental task for natural language understanding.",
"As with other NLP tasks, recent studies have shown a remarkable impact of deep neural networks in NLI (Williams et al., 2018; Wang et al., 2019; Devlin et al., 2019).",
"However, it remains unclear to what extent DNN-based models are capable of learning the compositional generalization underlying NLI from given labeled training instances.",
"Systematicity of inference (or inferential systematicity ) (Fodor and Pylyshyn, 1988; Aydede, 1997) in natural language has been intensively studied in the field of formal semantics.",
"From among the various aspects of inferential systematicity, in the context of NLI, we focus on monotonicity (van Benthem, 1983; Icard and Moss, 2014) and its productivity .",
"Consider the following premisehypothesis pairs (1)(3), which have the target label entailment : (1) P : Some [ puppies ] ran .",
"H : Some dogs ran .",
"(2) P : No [ cats ] ran .",
"H : No small cats ran .",
"(3) P : Some [ puppies which chased no [ cats ]] ran .",
"H : Some dogs which chased no small cats ran .",
"As in (1), for example, quantifiers such as some exhibit upward monotone (shown as [... ]), and replacing a phrase in an upward-entailing context in a sentence with a more general phrase (re-placing puppies in P with dogs as in H ) yields a sentence inferable from the original sentence.",
"In contrast, as in ( 2), quantifiers such as no exhibit downward monotone (shown as [... ]), and replacing a phrase in a downward-entailing context with a more specific phrase (replacing cats in P with small cats as in H ) yields a sentence inferable from the original sentence.",
"Such primitive inference patterns combine recursively as in (3).",
"This manner of monotonicity and its productivity produces a potentially infinite number of inferential patterns.",
"Therefore, NLI models must be capable of systematically interpreting such primitive patterns and reasoning over unseen combinations of patterns.",
"Although many studies have addressed this issue by modeling logical reasoning in formal semantics (Abzianidze, 2015; Mineshima et al., 2015; Hu et al., 2019) and testing DNN-based models on monotonicity inference (Yanaka et al., 2019a,b; Richardson et al., 6106 Systematicity Train 1 : Fix a quantifier and feed various predicate replacements.",
"2020), the ability of DNN-based models to generalize to unseen combinations of patterns is still underexplored.",
"Given this background, we investigate the systematic generalization ability of DNN-based models on four aspects of monotonicity:",
"(i) systematicity of predicate replacements (i.e., replacements with a more general or specific phrase),",
"(ii) systematicity of embedding quantifiers,",
"(iii) productivity, and",
"(iv) localism (see Section 2.2).",
"To this aim, we introduce a new evaluation protocol where we",
"(i) synthesize training instances from sampled sentences and",
"(ii) systematically control which patterns are shown to the models in the training phase and which are left unseen.",
"The rationale behind this protocol is twofold.",
"First, patterns of monotonicity inference are highly systematic, so we can create training data with arbitrary combinations of patterns, as in examples (1)(3).",
"Second, evaluating the performance of the models trained with well-known NLI datasets such as MultiNLI (Williams et al., 2018) might severely underestimate the ability of the models because such datasets tend to contain only a limited number of training instances that exhibit the inferential patterns of interest.",
"Furthermore, using such datasets would prevent us from identifying which combinations of patterns the models can infer from which patterns in the training data.",
"This paper makes two primary contributions.",
"First, we introduce an evaluation protocol 1 using 1 The evaluation code will be publicly available at https://github.com/verypluming/systematicity.",
"the systematic control of the training/test split under various combinations of semantic properties to evaluate whether models learn inferential systematicity in natural language.",
"Second, we apply our evaluation protocol to three NLI models and present evidence suggesting that, while all models generalize to unseen combinations of lexical and logical phenomena, their generalization ability is limited to cases where sentence structures are nearly the same as those in the training set.",
"Figure 1 illustrates the basic idea of our evaluation protocol on monotonicity inference.",
"We use synthesized monotonicity inference datasets, where NLI models should capture both",
"(i) monotonicity directions (upward/downward) of various quantifiers and",
"(ii) the types of various predicate replacements in their arguments.",
"To build such datasets, we first generate a set of premises GQ d by a context-free grammar G with depth d (i.e., the maximum number of applications of recursive rules), given a set of quantifiers Q .",
"Then, by applying GQ d to elements of a set of functions for predicate replacements (or replacement functions for short) R that rephrase a constituent in the input premise and return a hypothesis, we obtain a set DQ , R d of premisehypothesis pairs defined as DQ , R d = { ( P, H ) | P GQ d , r R ( r ( P ) = H ) } .",
"For example, the premise Some puppies ran is generated from the quantifier some in Q and 6107 the production rule S Q , N , IV , and thus it is an element of GQ 1 .",
"By applying this premise to a replacement function that replaces the word in the premise with its hypernym (e.g., puppy dog ), we provide the premisehypothesis pair Some puppies ran Some dogs ran in Fig. 1. We can control which patterns are shown to the models during training and which are left unseen by systematically splitting DQ , R d into training and test sets.",
"As shown on the left side of Figure 1, we consider how to test the systematic capacity of models with unseen combinations of quantifiers and predicate replacements.",
"To expose models to primitive patterns regarding Q and R , we fix an arbitrary element q from Q and feed various predicate replacements into the models from the training set of inferences D { q } , R d generated from combinations of the fixed quantifier and all predicate replacements.",
"Also, we select an arbitrary element r from R and feed various quantifiers into the models from the training set of inferences DQ , { r } d generated from combinations of all quantifiers and the fixed predicate replacement.",
"We then test the models on the set of inferences generated from unseen combinations of quantifiers and predicate replacements.",
"That is, we test them on the set of inferences D { q } , { r } d generated from the complements { q } , { r } of { q } , { r } .",
"If models capture inferential systematicity in combinations of quantifiers and predicate replacements, they can correctly perform all inferences in D { q } , { r } d on an arbitrary split based on q, r .",
"Similarly, as shown on the right side of Figure 1, we can test the productive capacity of models with unseen depths by changing the training/test split based on d .",
"For example, by training models on DQ , R d and testing them on DQ , R d +1 , we can evaluate whether models generalize to one deeper depth.",
"By testing models with an arbitrary train-ing/test split of DQ , R d based on semantic properties of monotonicity inference (i.e., quantifiers, predicate replacements, and depths), we can evaluate whether models systematically interpret them.",
"To test NLI models from multiple perspectives of inferential systematicity in monotonicity inferences, we focus on four aspects:",
"(i) systematicity of predicate replacements,",
"(ii) systematicity of embedding quantifiers,",
"(iii) productivity, and",
"(iv) localism.",
"For each aspect, we use a set DQ , R d of premisehypothesis pairs.",
"Let Q = Q Q be the union of a set of selected upward quantifiers Q and a set of selected downward quantifiers Q such that | Q | = | Q | = n .",
"Let R be a set of replacement functions { r 1 , . . . , r m } , and d be the embedding depth, with 1 d s .",
"(4) is an example of an element of DQ , R 1 , containing the quantifier some in the subject position and the predicate replacement using the hypernym relation dogs animals in its upward-entailing context without embedding.",
"(4) P : Some dogs ran H : Some animals ran I. Systematicity of predicate replacements The following describes how we test the extent to which models generalize to unseen combinations of quantifiers and predicate replacements.",
"Here, we expose models to all primitive patterns of predicate replacements like (4) and (5) and all primitive patterns of quantifiers like (6) and (7).",
"We then test whether the models can systematically capture the difference between upward quantifiers (e.g., several ) and downward quantifiers (e.g., no ) as well as the different types of predicate replacements (e.g., the lexical relation dogs animals and the adjective deletion small dogs dogs ) and correctly interpret unseen combinations of quantifiers and predicate replacements like (8) and (9).",
"(5) P : Some small dogs ran H : Some dogs ran (6) P : Several dogs ran H : Several animals ran (7) P : No animals ran H : No dogs ran (8) P : Several small dogs ran H : Several dogs ran (9) P : No dogs ran H : No small dogs ran Here, we consider a set of inferences DQ , R 1 whose depth is 1. We move from harder to easier tasks by gradually changing the training/test split according to combinations of quantifiers and predicate replacements.",
"First, we expose models to primitive patterns of Q and R with the minimum training set.",
"Thus, we define the initial training set S 1 and test set T 1 as follows: ( S 1 , T 1 ) = ( D { q } , R 1 DQ , { r } 1 , D { q } , { r } 1 ) where q is arbitrarily selected from Q , and r is arbitrarily selected from R .",
"Next, we gradually add the set of inferences generated from combinations of an upward downward quantifier pair and all predicate replacements to the training set.",
"In the examples above, we add (8) and (9) to the training set to simplify the task.",
"We assume a set Q of a pair of upward/downward quantifiers, namely, { ( q , q ) | ( q , q ) Q Q , q , q = q } .",
"We consider 6108 a set perm ( Q ) consisting of permutations of Q .",
"For each p perm ( Q ) , we gradually add a set of inferences generated from p ( i ) to the training set S i with 1 < i n 1 .",
"Then, we provide a test set T i generated from the complement Q i of Q i = { x | y ( x, y ) Q i or y ( y, x ) Q i } and { r } where Q i = { p (1) , . . . , p ( i ) } .",
"This protocol is summarized as S i +1 = S i D { q i ,q i } , R 1 , T i = DQ i , { r } 1 with 1 < i n 1 where ( q i , q i ) = p ( i ) .",
"To evaluate the extent to which the generalization ability of models is robust for different syntactic structures, we use an additional test set T i = DQ i , { r } 1 generated using three production rules.",
"The first is the case where one adverb is added at the beginning of the sentence, as in example (10).",
"(10)",
"P adv : Slowly, several small dogs ran H adv : Slowly, several dogs ran The second is the case where a three-word prepositional phrase is added at the beginning of the sentence, as in example (11).",
"(11)",
"P prep : Near the shore, several small dogs ran H prep : Near the shore, several dogs ran The third is the case where the replacement is performed in the object position, as in example (12).",
"(12)",
"P obj : Some tiger touched several small dogs H obj : Some tiger touched several dogs We train and test models | perm ( Q ) | times, then take the average accuracy as the final evaluation result.",
"II.",
"Systematicity of embedding quantifiers To properly interpret embedding monotonicity, models should detect both",
"(i) the monotonicity direction of each quantifier and",
"(ii) the type of predicate replacements in the embedded argument.",
"The following describes how we test whether models generalize to unseen combinations of embedding quantifiers.",
"We expose models to all primitive combination patterns of quantifiers and predicate replacements like (4)(9) with a set of non-embedding monotonicity inferences DQ , R 1 and some embedding patterns like (13), where Q 1 and Q 2 are chosen from a selected set of upward or downward quantifiers such as some or no .",
"We then test the models with an inference with an unseen quantifier several in (14) to evaluate whether models can systematically interpret embedding quantifiers.",
"(13)",
"P : Q 1 animals that chased Q 2 dogs ran H : Q 1 animals that chased Q 2 animals ran (14) P : Several animals that chased several dogs ran H : Several animals that chased several animals ran We move from harder to easier tasks of learning embedding quantifiers by gradually changing the training/test split of a set of inferences DQ , R 2 whose depth is 2, i.e., inferences involving one embedded clause.",
"We assume a set Q of a pair of upward and downward quantifiers as Q { ( q , q ) | ( q , q ) Q Q } , and consider a set perm ( Q ) consisting of permutations of Q .",
"For each p perm ( Q ) , we gradually add a set of inferences D 2 generated from p ( i ) to the training set S i with 1 i n 1 .",
"We test models trained with S i on a test set T i generated from the complement Q i of Q i = { x | y ( x, y ) Q i or y ( y, x ) Q i } where Q i = { p (1) , . . . , p ( i ) } , summarized as S 0 = DQ , R 1 , S i = S i 1 D { q i ,q i } , R 2 , T i = DQ i , R 2 with 1 i n 1 where ( q i , q i ) = p ( i ) .",
"We train and test models | perm ( Q ) | times, then take the average accuracy as the final evaluation result.",
"III.",
"Productivity Productivity (or recursiveness ) is a concept related to systematicity, which refers to the capacity to grasp an indefinite number of natural language sentences or thoughts with generalization on composition.",
"The following describes how we test whether models generalize to unseen deeper depths in embedding monotonicity (see also the right side of Figure 1).",
"For example, we expose models to all primitive non-embedding/single-embedding patterns like (15) and (16) and then test them with deeper embedding patterns like (17).",
"(17)",
"P : Some animals which chased some cats which followed some dogs ran H : Some animals which chased some cats which followed some animals ran To evaluate models on the set of inferences involving embedded clauses with depths exceeding those in the training set, we train models with d { 1 ,...,i +1 } D d , where we refer to DQ , R d as D d for short, and test the models on d { i +2 ,...,s } D d with 1 i s 2 .",
"IV.",
"Localism According to the principle of compositionality, the meaning of a complex expression derives from the meanings of its constituents and how they are combined.",
"One important concern is how local the composition operations should be ( Pagin and Westersthl, 2010).",
"We therefore test whether models trained with inferences involving embedded monotonicity locally perform inferences composed of smaller constituents.",
"Specifically, we train models with examples like (17) and then test the models with examples like (15) and (16).",
"We train models with D d and test the models on k { 1 ,...,d } D k with 3 d s .",
"To prepare the datasets shown in Table 1, we first generate premise sentences involving quantifiers from a set of context-free grammar (CFG) rules and lexical entries, shown in Table 6 in the Appendix.",
"We select 10 words from among nouns, intransitive verbs, and transitive verbs as lexical entries.",
"A set of quantifiers Q consists of eight elements; we use a set of four downward quantifiers Q = { no, at most three, less than three, few } and a set of four upward quantifiers Q = { some, at Function Example r 1 : hyponym dogs animals r 2 : adjective small dogs dogs r 3 : preposition dogs in the park dogs r 4 : relative clause dogs which ate dinner dogs r 5 : adverb ran quickly ran r 6 : disjunction ran ran or walked r 7 : conjunction ran and barked ran Table 2: Examples of replacement functions.",
"least three, more than three, a few }, which have the same monotonicity directions in the first and second arguments.",
"We thus consider n = | Q | = | Q | =4 in the protocol in Section 2.2.",
"The ratio of each monotonicity direction (upward/downward) of generated sentences is set to 1 : 1 .",
"We then generate hypothesis sentences by applying replacement functions to premise sentences according to the polarities of constituents.",
"The set of replacement functions R is composed of the seven types of lexical replacements and phrasal additions in Table 2. We remove unnatural premisehypothesis pairs in which the same words or phrases appear more than once.",
"For embedding monotonicity, we consider inferences involving four types of replacement functions in the first argument of the quantifier in Table 2: hyponyms, adjectives, prepositions, and relative clauses.",
"We generate sentences up to the depth d = 5 .",
"There are various types of embedding monotonicity, including relative clauses, conditionals, and negated clauses.",
"In this paper, we consider three types of embedded clauses: peripheral-embedding clauses and two kinds of center-embedding clauses, shown in Table 6 in the Appendix.",
"The number of generated sentences exponentially increases with the depth of embedded clauses.",
"Thus, we limit the number of inference examples to 320,000, split into 300,000 examples for the training set and 20,000 examples for the test set.",
"We guarantee that all combinations of quantifiers are included in the set of inference examples for each depth.",
"Gold labels for generated premisehypothesis pairs are automatically determined according to the polarity of the argument position (upward/downward) and the type of predicate replacements (with more gen-eral/specific phrases).",
"The ratio of each gold label (entailment/non-entailment) in the training and test sets is set to 1 : 1 .",
"To double-check the gold label, we translate each premisehypothesis pair into a logical formula (see the Appendix for more details).",
"The logical formulas are obtained by combining lambda terms in accordance with meaning composition rules specified in the CFG rules in the standard way (Blackburn and Bos, 2005).",
"We prove the entailment relation using the theorem prover Vampire 2 , checking whether a proof is found in time for each entailment pair.",
"For all pairs, the output of the prover matched with the entailment relation automatically determined by monotonicity calculus.",
"We consider three DNN-based NLI models.",
"The first architecture employs long short-term memory (LSTM) networks ( Hochreiter and Schmidhuber, 1997).",
"We set the number of layers to three with no attention.",
"Each premise and hypothesis is processed as a sequence of words using a recurrent neural network with LSTM cells, and the final hidden state of each serves as its representation.",
"The second architecture employs multiplicative tree-structured LSTM (TreeLSTM) networks (Tran and Cheng, 2018), which are expected to be more sensitive to hierarchical syntactic structures.",
"Each premise and hypothesis is processed as a tree structure by bottom-up combinations of constituent nodes using the same shared compositional function, input word information, and between-word relational information.",
"We parse all premisehypothesis pairs with the dependency parser using the spaCy li-2 https://github.com/vprover/vampire brary 3 and obtain tree structures.",
"For each experimental setting, we randomly sample 100 tree structures and check their correctness.",
"In LSTM and TreeLSTM, the dimension of hidden units is 200, and we initialize the word embeddings with 300-dimensional GloVe vectors (Pennington et al., 2014).",
"Both models are optimized with Adam (Kingma and Ba, 2015), and no dropout is applied.",
"The third architecture is a Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2019).",
"We used the base-uncased model pre-trained on Wikipedia and BookCorpus from the pytorch-pretrained-bert library 4 , fine-tuned for the NLI task using our dataset.",
"In fine-tuning BERT, no dropout is applied, and we choose hyperparameters that are commonly used for MultiNLI.",
"We train all models over 25 epochs or until convergence, and select the best-performing model based on its performance on the validation set.",
"We perform five runs per model and report the average and standard deviation of their scores.",
"I. Systematicity of predicate replacements Figure 2 shows the performance on unseen combinations of quantifiers and predicate replacements.",
"In the minimal training set S 1 , the accuracy of LSTM and TreeLSTM was almost the same as chance, but that of BERT was around 75%, suggesting that only BERT generalized to unseen combinations of quantifiers and predicate replacements.",
"When we train BERT with the training set S 2 , which contains inference examples generated from combinations of one pair of up-ward/downward quantifiers and all predicate replacements, the accuracy was 100%.",
"This indicates that by being taught two kinds of quantifiers in the training data, BERT could distinguish between upward and downward for the other quantifiers.",
"The accuracy of LSTM and TreeLSTM increased with increasing the training set size, but did not reach 100%.",
"This indicates that LSTM and TreeLSTM also generalize to inferences involving similar quantifiers to some extent, but their generalization ability is imperfect.",
"When testing models with inferences where adverbs or prepositional phrases are added to the be-3 https://spacy.io/ 4 https://github.com/huggingface/pytorch-pretrained-bert 6111 Figure 2: Results for systematicity of predicate replacements.",
"ginning of the sentence, the accuracy of all models significantly decreased.",
"This decrease becomes larger as the syntactic structures of the sentences in the test set become increasingly different from those in the training set.",
"Contrary to our expectations, the models fail to maintain accuracy on test sets whose difference from the training set is the structure with the adverb at the beginning of a sentence.",
"Of course, we could augment datasets involving that structure, but doing so would require feeding all combinations of inference pairs into the models.",
"These results indicate that the models tend to estimate the entailment label from the beginning of a premisehypothesis sentence pair, and that inferential systematicity to draw inferences involving quantifiers and predicate replacements is not completely generalized at the level of arbitrary constituents.",
"II.",
"Systematicity of embedding quantifiers Figure 3 shows the performance of all models on unseen combinations of embedding quantifiers.",
"Even when adding the training set of inferences involving one embedded clause and two quantifiers step-by-step, no model showed improved performance.",
"The accuracy of BERT slightly exceeded chance, but the accuracy of LSTM and TreeLSTM was nearly the same as or lower than chance.",
"These results suggest that all the models fail to generalize to unseen combinations of embedding quantifiers even when they involve similar upward/downward quantifiers.",
"III.",
"Productivity Table 3 shows the performance on unseen depths of embedded clauses.",
"The accuracy on D 1 and D 2 was nearly 100%, indicating that all models almost completely generalize to inferences containing previously seen depths.",
"When Figure 3: Results for systematicity of embedding quantifiers.",
"D 1 + D 2 were used as the training set, the accuracy of all models on D 3 exceeded chance.",
"Similarly, when D 1 + D 2 + D 3 were used as the training set, the accuracy of all models on D 4 exceeded chance.",
"This indicates that all models partially generalize to inferences containing embedded clauses one level deeper than the training set.",
"However, standard deviations of BERT and LSTM were around 10, suggesting that these models did not consistently generalize to inferences containing embedded clauses one level deeper than the training set.",
"While the distribution of monotonicity directions (upward/downward) in the training and test sets was uniform, the accuracy of LSTM and BERT tended to be smaller for downward inferences than for upward inferences.",
"This also indicates that these models fail to properly compute monotonicity directions of constituents from syntactic structures.",
"The standard deviation of TreeLSTM was smaller, indicating that TreeLSTM robustly learns inference patterns containing embedded clauses one level deeper than the training set.",
"However, the performance of all models trained with D 1 + D 2 on D 4 and D 5 significantly decreased.",
"Also, performance decreased for all models trained with D 1 + D 2 + D 3 on D 5 .",
"Specifically, there was significantly decreased performance of all models, including TreeLSTM, on inferences containing embedded clauses two or more levels deeper than those in the training set.",
"These results indicate that all models fail to develop productivity on inferences involving embedding monotonicity.",
"IV.",
"Localism Table 4 shows the performance of all models on localism of embedding monotonicity.",
"When the models were trained with D 3 , D 4 or D 5 , all performed at around chance on the test set of non-embedding inferences D 1 and the test set of inferences involving one embedded clause D 2 .",
"These results indicate that even if models are trained with a set of inferences containing complex syntactic structures, the models fail to locally interpret their constituents.",
"Performance of data augmentation Prior studies (Yanaka et al., 2019b; Richardson et al., 2020) have shown that given BERT initially trained with Train Dev/Test BERT LSTM TreeLSTM MNLI D 1 46.9 0.4 47.2 1.1 43.4 0.3 D 2 46.2 0.6 48.3 1.0 49.5 0.4 D 3 46.8 0.8 48.9 0.7 41.0 0.4 D 4 48.5 0.8 50.6 0.5 48.5 0.2 D 5 48.9 0.6 49.3 0.7 48.8 0.5 MNLI-test 84.6 0.2 64.7 0.3 70.4 0.1 D 1 + D 2 D 1 100.0 0.0 100.0 0.1 100.0 0.1 + MNLI D 2 100.0 0.0 89.3 9.0 99.8 0.1 D 3 67.8 12.5 66.7 13.5 76.3 4.1 D 4 46.8 3.7 47.1 14.6 50.7 7.8 D 5 41.2 4.3 46.7 11.2 47.5 3.7 MNLI-test 84.4 0.2 39.7 0.5 63.0 0.2 D 1 + D 2 + D 3 D 1 100.0 0.0 100.0 0.0 100.0 0.0 + MNLI D 2 100.0 0.0 97.1 5.0 99.8 0.0 D 3 100.0 0.0 89.2 5.1 98.3 1.1 D 4 70.9 7.9 73.4 10.9 76.1 5.6 D 5 42.4 4.2 47.8 3.9 57.0 4.3 MNLI-test 84.0 0.1 39.7 0.4 62.8 0.2 Table 5: Results for productivity where models were trained with our synthesized dataset mixed with MultiNLI (MNLI).",
"MultiNLI, further training with synthesized instances of logical inference improves performance on the same types of logical inference while maintaining the initial performance on MultiNLI.",
"To investigate whether the results of our study are transferable to current work on MultiNLI, we trained models with our synthesized dataset mixed with MultiNLI, and checked",
"(i) whether our synthesized dataset degrades the original performance of models on MultiNLI 5 and",
"(ii) whether MultiNLI degrades the ability to generalize to unseen depths of embedded clauses.",
"Table 5 shows that training BERT on our synthetic data D 1 + D 2 and MultiNLI increases the accuracy on our test sets D 1 (46.9 to 100.0), D 2 (46.2 to 100.0), and D 3 (46.8 to 67.8) while preserving accuracy on MultiNLI (84.6 to 84.4).",
"This indicates that training BERT with our synthetic data does not degrade performance on commonly used corpora like MultiNLI while improving the performance on monotonicity, which suggests that our data-synthesis approach can be combined with naturalistic datasets.",
"For TreeLSTM and LSTM, however, adding our synthetic dataset decreases accuracy on MultiNLI.",
"One possible reason for this is that a pre-training based model like BERT can mitigate catastrophic forgetting in various types of datasets.",
"Regarding the ability to generalize to unseen depths of embedded clauses, the accuracy of all 5 Following the previous work (Richardson et al., 2020), we used the MultiNLI mismatched development set for MNLI-test.",
"models on our synthetic test set containing embedded clauses one level deeper than the training set exceeds chance, but the improvement becomes smaller with the addition of MultiNLI.",
"In particular, with the addition of MultiNLI, the models tend to change wrong predictions in cases where a hypothesis contains a phrase not occurring in a premise but the premise entails the hypothesis.",
"Such inference patterns are contrary to the heuristics in MultiNLI (McCoy et al., 2019).",
"This indicates that there may be some trade-offs in terms of performance between inference patterns in the training set and those in the test set.",
"The question of whether neural networks are capable of processing compositionality has been widely discussed (Fodor and Pylyshyn, 1988; Marcus, 2003).",
"Recent empirical studies illustrate the importance and difficulty of evaluating the capability of neural models.",
"Generation tasks using artificial datasets have been proposed for testing whether models compositionally interpret training data from the underlying grammar of the data (Lake and Baroni, 2017; Hupkes et al., 2018; Saxton et al., 2019; Loula et al., 2018; Hupkes et al., 2019; Bernardy, 2018).",
"However, these conclusions are controversial, and it remains unclear whether the failure of models on these tasks stems from their inability to deal with compositionality.",
"Previous studies using logical inference tasks have also reported both positive and negative results.",
"Assessment results on propositional logic (Evans et al., 2018), first-order logic (Mul and Zuidema, 2019), and natural logic (Bowman et al., 2015) show that neural networks can generalize to unseen words and lengths.",
"In contrast, Geiger et al. (2019) obtained negative results by testing models under fair conditions of natural logic.",
"Our study suggests that these con-flicting results come from an absence of perspective on combinations of semantic properties.",
"Regarding assessment of the behavior of modern language models, Linzen et al. (2016), Tran et al. (2018), and Goldberg (2019) investigated their syntactic capabilities by testing such models on subjectverb agreement tasks.",
"Many studies of NLI tasks (Liu et al., 2019; Glockner et al., 2018; Poliak et al., 2018; Tsuchiya, 2018; McCoy et al., 2019; Rozen et al., 2019; Ross and Pavlick, 2019) have provided evaluation methodologies and found that current NLI models often fail on particular inference types, or that they learn undesired heuristics from the training set.",
"In particular, recent works (Yanaka et al., 2019a,b; Richardson et al., 2020) have evaluated models on monotonicity, but did not focus on the ability to generalize to unseen combinations of patterns.",
"Monotonicity covers various systematic inferential patterns, and thus is an adequate semantic phenomenon for assessing inferential systematicity in natural language.",
"Another benefit of focusing on monotonicity is that it provides hard problem settings against heuristics (McCoy et al., 2019), which fail to perform downward-entailing inferences where the hypothesis is longer than the premise.",
"We introduced a method for evaluating whether DNN-based models can learn systematicity of monotonicity inference under four aspects.",
"A series of experiments showed that the capability of three models to capture systematicity of predicate replacements was limited to cases where the positions of the constituents were similar between the training and test sets.",
"For embedding monotonicity, no models consistently drew inferences involving embedded clauses whose depths were two levels deeper than those in the training set.",
"This suggests that models fail to capture inferential systematicity of monotonicity and its productivity.",
"We also found that BERT trained with our synthetic dataset mixed with MultiNLI maintained performance on MultiNLI while improving the performance on monotonicity.",
"This indicates that though current DNN-based models do not systematically interpret monotonicity inference, some models might have sufficient ability to memorize different types of reasoning.",
"We hope that our work will be useful in future research for realizing more advanced models that are capable of appropriately performing arbitrary inferences.",
"We thank the three anonymous reviewers for their helpful comments and suggestions.",
"We are also grateful to Benjamin Heinzerling and So-suke Kobayashi for helpful discussions.",
"This work was partially supported by JSPS KAKENHI Grant Numbers JP20K19868 and JP18H03284, Japan."
] | [
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Conventional wisdom in pruning Transformer-based language models is that pruning reduces the model expressiveness and thus is more likely to underfit rather than overfit.",
"However, under the trending pretrain-and-finetune paradigm, we postulate a counter-traditional hypothesis, that is: pruning increases the risk of overfitting when performed at the fine-tuning phase.",
"In this paper, we aim to address the overfitting problem and improve pruning performance via progressive knowledge distillation with error-bound properties.",
"We show for the first time that reducing the risk of overfitting can help the effectiveness of pruning under the pretrain-and-finetune paradigm.",
"Ablation studies and experiments on the GLUE benchmark show that our method outperforms the leading competitors across different tasks.",
"Recently, the emergence of Transformer-based language models (using pretrain-and-finetune paradigm) such as BERT (Devlin et al., 2019) and GPT-3 (Brown et al., 2020) have revolutionized and established state-of-the-art (SOTA) records (be-yond human-level) on various natural language (NLP) processing tasks.",
"These models are first pre-trained in a self-supervised fashion on a large corpus and fine-tuned for specific downstream tasks (Wang et al., 2018).",
"While effective and prevalent, they suffer from redundant computation due to the heavy model size, which hinders their popularity on resource-constrained devices, e.g., mobile phones, smart cameras, and autonomous driving (Chen et al., 2021; Qi et al., 2021; Yin et al., 2021a,b; Li et al., 2021; Choi and Baek, 2020).",
"Various weight pruning approaches (zeroing out certain weights and then optimizing the rest) have been proposed to reduce the footprint requirements of Transformers (Zhu and Gupta, 2018; Blalock These authors contributed equally Dense model D Sparse model Pruning D Task knowledge Discardedknowledge",
"et al., 2020; Gordon et al., 2020; Xu et al., 2021; Huang et al., 2021; Peng et al., 2021).",
"Conventional wisdom in pruning states that pruning reduces the overfitting risk since the compressed model structures are less complex, have fewer parameters and are believed to be less prone to overfit (Ying, 2019; Wang et al., 2021; Tian et al., 2020; Gerum et al., 2020).",
"However, under the pretrain-and-finetune paradigm, most pruning methods understate the overfitting problem.",
"In this paper, we postulate a counter-traditional hypothesis, that is: model pruning increases the risk of overfitting if pruning is performed at the fine-tuning phase.",
"As shown in Figure 1b, the pretrain-and-finetune paradigm contains two types of knowledge, the general-purpose language knowledge learned during pre-training ( L ) and the task-specific knowledge from the downstream task data ( D ).",
"Compared to conventional pruning that only discards task-specific knowledge (Figure 1a), pruning under pretrain-and-finetune (Figure 1b) discards extra knowledge (red area) learned in pretraining phase.",
"Thus, to recover both the extra discarded general-purpose knowledge and the discarded task-specific knowledge, pruning under 190 0 1000 2000 3000 Training steps 0.4 0.6 0.8 1.0 A cc u r a c y dev set training set",
"pretrain-and-finetune increases the amount of information a model needs, which results in relative data deficiency, leading to a higher risk of overfitting.",
"To empirically verify the overfitting problem, we visualize the training and evaluation performance on a real-world task data of MRPC (Devlin et al., 2019) in Figure 2. From Figure 2",
"(b), it is observed that the evaluation accuracy on the training dataset remains improved while it keeps the same for the validation set through the training process.",
"From Figure 2",
"(c), the difference in performance becomes more significant when the pruning rate becomes higher and the performance on the validation set even becomes worse after 2,000 training steps.",
"All these observations verify our hypothesis.",
"The main question this paper attempts to answer is: how to reduce the risk of overfitting of pre-trained language models caused by pruning?",
"However, answering this question is challenging.",
"First, under the pretrain-and-finetune paradigm, both the general-purpose language knowledge and the task-specific knowledge are learned.",
"It is nontrivial to keep the model parameters related to both knowledge when pruning.",
"Second, the amount of data for downstream tasks can be small, such as the data with privacy.",
"Thus, the overfitting problem can easily arise, especially in the face of high pruning rate requirements.",
"A little recent progress has been made on addressing overfitting associated with model compression.",
"However, their results are not remarkable and most of them focus on the vision domain (Bai et al., 2020; Shen et al., 2021).",
"To address these challenges, we propose SPD, a sparse progressive distillation method, for pruning pre-trained language models.",
"We prune and optimize the weight duplicates of the backbone of the teacher model (a.k.a., student modules).",
"Each student module shares the same architecture (e.g., the number of weights, the dimension of each weight) as the duplicate.",
"We replace the corresponding layer(s) of the duplicated teacher model with the pruned sparse student module(s) in a progressive way and name the new model as a grafted model.",
"We validate our proposed method through the ablation studies and the GLUE benchmark.",
"Experimental results show that our method outperforms the existing approaches.",
"We summarize our contributions as follows: We postulate, analyze, and empirically verify a counter-traditional hypothesis: pruning increases the risk of overfitting under the pretrain-and-finetune paradigm.",
"We propose a sparse progressive pruning method and show for the first time that reducing the risk of overfitting can help the effectiveness of pruning.",
"Moreover, we theoretically analyze that our pruning method can obtain a sub-network from the student model that has similar accuracy as the teacher.",
"Last but not least, we study and minimize the interference between different hyperparameter strategies, including pruning rate, learning rate, and grafting probability, to further improve performance.",
"To summarize, our contribution is determining the overfitting problem of pruning under the pretrain-and-finetune paradigm and proposing the sparse progressive distillation method to address it.",
"We demonstrate the benefits of the proposed framework through the ablation studies.",
"We validate our method on eight datasets from the GLUE benchmark.",
"To test if our method is applicable across 191 Teacher Input Output Pruning ( \" ) Grafted model Pruning ( $ ) zero non-zero Input Output update Pruning( % ) Pruning( & ) Input Output Input Output Input Output Input Output Input Output Input Output p i =0.25 p i =0.50 p i =0.25 p i =1 p i =1 p i =0.25 p i =0.50 Grafted model Student modules Grafted model Grafted model Grafted model Grafted model Grafted model Final grafted model !",
"tasks, we include the tasks of both single sentence and sentence-pair classification.",
"Experimental results show that our method outperforms the leading competitors by a large margin.",
"Network Pruning.",
"Common wisdom has shown that weight parameters of deep learning models can be reduced without sacrificing accuracy loss, such as magnitude-based pruning and lottery ticket hypothesis (Frankle and Carbin, 2019).",
"(Zhu and Gupta, 2018) compared small-dense models and large-sparse models with the same parameters and showed that the latter outperforms the former, showing the large-sparse models have better expressive power than their small-dense counterparts.",
"However, under the pretrain-and-finetune paradigm, pruning leads to overfitting as discussed.",
"Knowledge Distillation (KD).",
"As a common method in reducing the number of parameters, the main idea of KD is that the small student model mimics the behaviour of the large teacher model and achieves a comparable performance (Hinton et al., 2015; Mirzadeh et al., 2020).",
"(Sanh et al., 2019; Jiao et al., 2020; Sun et al., 2020) utilized KD to learn universal language representations from large corpus.",
"However, current SOTA knowledge distillation methods are not able to achieve a high model compression rate (less than 10% remaining weights) while achieving an insignificant performance decrease.",
"Progressive Learning.",
"The key idea of progressive learning is that student learns to update module by module with the teacher.",
"(Shen et al., 2021) utilized a dual-stage distillation scheme where student modules are progressively grafted onto the teacher network, it targets the few-shot scenario and uses only a few unlabeled samples to achieve comparable results on CIFAR-10 and CIFAR-100.",
"(Xu et al., 2020) gradually increased the probability of replacing each teacher module with their corresponding student module and trained the student to reproduce the behavior of the teacher.",
"However, the performance on Transformer-based models of the aforementioned first method is unknown while the second method has an obvious performance drop with a low sparsity (50%).",
"The teacher model and the grafted model (shown in Figure 3) are denoted as f S and f G , respectively.",
"Both models have N + 1 layers (i.e., the first N layers are encoder layers, and the ( N + 1) -th layer is the output layer).",
"Denote f T i ( ) , f G i ( ) as the behaviour function induced from the i -th encoder of the teacher model, and the grafted model, respectively.",
"As shown in Figure 4, we utilize layer-wise knowledge distillation (KD), where we aim to bridge the gap between f T i ( ) and f G i ( ) .",
"The grafted model is trained to mimic the behavior of the teacher model.",
"where X denotes the training dataset, i is coefficient of i -th layer loss, LD is the distillation loss",
"of the layer pair, x i is the input of the i -th layer.",
"During KD, each student module mimics the behavior of the corresponding teacher layer.",
"Similar to (Jiao et al., 2020), we take the advantage 192 Teacher encoder 1 Teacher encoder 2 Teacher encoder 3 Teacher encoder N Input Input Embedding Output Layer Output Teacher model !",
"of abundant knowledge in self-attention distribution, hidden states of each Transformer layer, and the final output layer's soft logits of teacher model to help train the student model.",
"Specifically, we design the KD loss as follows LKD = (cid:40) L hidn + L attn 1 i N L pred i = N + 1 (2) where L hidn = MSE( H Ti , H Si ) ( 1 i N ) indicates the difference between hidden states, L attn = MSE( A Ti , A Si ) indicates the difference between attention matrices.",
"MSE( ) is the mean square error loss function and i is the index of Transformer layer.",
"L pred = -softmax( z T ) log _softmax( z S / temp ) indicates the difference of soft cross-entropy loss, where z T and z S are the soft logits of teacher and student model, respectively.",
"T is the temperature hyper-parameter.",
"We further reduce the number of non-zero parameters in the weight matrix while maintaining accuracy.",
"We denote { W j } j = i j =1 as the collection of weights in the first i layers, j as the sparsity of the j -th layer.",
"Then, the loss function of sparse knowledge distillation becomes L = (cid:88) x X N +1 (cid:88) i =1 i LKD ( f Ti ( x , { W j } j = i j =1 ) , f Gi ( x , { W j } j = i j =1 )) s.t. sparsity ( W j ) j for j = 1 , ..., N (3) After training, we find the sparse weight matrix W j using W j = S j ( W j ) for j = 1 , ..., N, (4) where S j ( ) denotes the Euclidean projection onto the set S j = { W j | sparsity ( W j ) j } .",
"Our pruning method is similar to finding matching subnetworks using the lottery ticket hypothesis (Frankle and Carbin, 2019; Pensia et al., 2020) methodology.",
"We analyze the self-attention (ex-cluding activation).",
"Some non-linear activation functions has been analyzed in (Pensia et al., 2020).",
"Feed-forward layer.",
"Consider a feed-forward network f ( x ) = w x , and g ( x ) = ( (cid:80) ni =1 w i ) x .",
"Lueker et al. (Lueker, 1998) and Pensia et al. (Pen-sia et al., 2020) show that existing a subset of w i , such that the corresponding value of g ( x ) is very close to f ( x ) .",
"Corollary: When w 1 , ..., w n belongs to i.i.d. uniform distribution over [-1,1], where n C log 2 , min { 1 , } .",
"Then, with probability at least 1 , we have G spd { 1 , 2 , ..., n } , w [ 0 . 5 , 0 . 5] , s.t (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) w (cid:88) i G spd w i (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) (5) Analysis on self-attention.",
"Consider a model f ( x ) with only one self-attention, when the token size of input x is 1, softmax ( Q KT d k ) = 1 , we have Z = V , where V = w V x.",
"Consider f G ( x ) = (cid:16)(cid:80) d i =1 w G i (cid:17) x and a pruning sparsity , base on Corollary , when d C log 4 / , there exists a pattern of w Gi , such that, with probability 1 , w [ 1 , 1] , i { 0 , 1 } , s.t. (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) w ( (cid:88) i [1 ,d ] w Gi I ( i )) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) < (7) where I ( i ) is the indicator to determine whether w Gi will be remained.",
"with a self-attention, then",
"k",
"Base on Corollary , when d C log 4 / , there exists a pattern of w Gi , such that, with probability 1 , w c i.",
"To avoid overfitting in the training process for the sparse Transformer model, we further graft student modules (scion) onto the teacher model duplicates (rootstock).",
"For the i -th student module, we use an independent Bernoulli random variable I ( i ) to indicate whether it will be grafted on the rootstock.",
"To be more specific, I ( i ) has a probability of p (grafting probability) to be set as 1 (i.e., student module substitutes the corresponding teacher layer).",
"Otherwise, the latter will keep weight matrices unchanged.",
"Once the target pruning rate is achieved, we apply linear increasing probability to graft student modules which enable the student modules to orchestrate with each other.",
"Different from the model compression methods that update all model parameters at once, such as TinyBERT (Jiao et al., 2020) and DistilBERT (Sanh et al., 2019), SPD only updates the student modules on the grafted model.",
"It reduces the complexity of network optimization, which mitigates the overfitting problem and enables the student modules to learn deeper knowledge from the teacher model.",
"The overview is described in Algorithm 1. We will further demonstrate the effectiveness of progressive student module grafting in 4.2.",
"Datasets.",
"We evaluate SPD on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) and report the metrics, i.e., accuracy scores for SST-2, QNLI, RTE, and WNLI, Matthews Correlation Coefficient (MCC) for CoLA, F1 scores for QQP and MRPC, Spearman correlations for STS-B.",
"Baselines.",
"We first use 50% sparsity (a widely adopted sparsity ratio among SOTA), and compare SPD against two types of baselines nonprogressive and progressive.",
"For the former, we select BERT-PKD (Sun et al., 2019), DistilBERT (Sanh et al., 2019), MiniLM (Wang et al., 2020), TinyBERT (Jiao et al., 2020), SparseBERT (Xu et al., 2021) and E.T. (Chen et al., 2021), while for the latter, we choose Theseus (Xu et al., 2020).",
"We further compare SPD against other existing works under higher sparsity, e.g., TinyBERT (Jiao et al., 2020), SparseBERT (Xu et al., 2021) and RPP (Guo et al., 2019).",
"SPD Settings.",
"We use official BERTBASE , uncased model as the pre-train model and the fine-tuned pre-train model as our teacher.",
"Both BERTBASE and teacher model have the same architecture (i.e., 12 encoder layers (L = 12; embedding dimension d model = 768; self-attention heads H = 12)).",
"We finetune BERTBASE using best performance from { 2 e 5 , 3 e 5 , 4 e 5 , 5 e 5 } as the learning rate.",
"For SPD model training, the number of pruning epochs, linear increasing module grafting epochs, finetuning epochs vary from [10, 30], [5, 194 20], [5, 10], respectively.",
"For pruning, we use AdamW (Loshchilov and Hutter, 2018) as the optimizer and run the experiments with an initial grafting probability from {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}.",
"The probability with the best performance will be adopted.",
"After pruning, we adjust the slope of the grafting probability curve so that the grafting probability equals 1 at the end of module grafting.",
"For module grafting and finetuning, an AdamW optimizer is used with learning rate chosen from { 3 e 5 , 1 e 4 , 3 .",
"2 e 4 , 5 e 4 , 6 .",
"4 e 4 }.",
"The model training and evaluation are performed with CUDA 11.1 on Quadro RTX6000 GPU and Intel(R) Xeon(R) Gold 6244 @ 3.60GHz CPU.",
"Accuracy vs. Sparsity.",
"We do experiments on eight GLUE benchmark tasks (Table 1).",
"For non-progressive baselines, SPD exceeds all of them on QNLI, SST-2, CoLA, STS-B, and MRPC.",
"For RTE, TinyBERT 6 has a 1.6% higher accuracy than SPD.",
"However, TinyBERT 6 used augmented data while SPD does not use data augmentation to generate the results in Table 1. On average, SPD has 6.3%, 5.6%, 1.2%, 1.7%, 3.7% improvement in performance than BERT 6 -PKD, DistilBERT, TinyBERT 6 , SparseBERT, E.T. respectively.",
"Furthermore, on CoLA, SPA achieves up to 25.9% higher performance compared to all nonprogressive baselines.",
"For the progressive baseline, we compare SPD with BERT-of-Theseus.",
"Experimental results show that SPD exceeds the latter on all tasks.",
"SPD has a 3.9% increase on average.",
"Among all the tasks, CoLA and RTE have 20.2% and 5.9% gain respectively.",
"For the comparison with sparse and non-progressive baseline, SPD has an improvement of 16.8%, 5.5%, 3.2%, 2.7%, 2.0%, 1.9%, 1.6%, 1.6% on CoLA, RTE, MNLI, QNLI, QQP, MRPC, STS-B, SST-2, respectively.",
"On all listed tasks, SPD even outperforms the teacher model except for RTE.",
"On RTE, SPD retains exactly the full accuracy of the teacher model.",
"On average, the proposed SPD achieves a 1.1% higher accuracy/score than the teacher model.",
"We conclude the reason for the outstanding performance from three respects: 1) There is redundancy in the original dense BERT model.",
"Thus, pruning the model with a low pruning rate (e.g., 50%) will not lead to a significant performance drop.",
"2) SPD decreases the overfitting risk which helps the student model learn better.",
"3) The interference between different hyperparameter strategies is mitigated, which enables SPD to obtain a better student model.",
"We also compare SPD with other baselines (i.e., 4-layer TinyBERT (Jiao et al., 2020), RPP (Guo et al., 2019), and SparseBERT (Xu et al., 2021)) under higher pruning rates.",
"Results are summarized in Table 2. For the fairness of comparison, we remove data augmentation from the above methods.",
"We mainly compare the aforementioned baselines with very high sparsity (e.g., 90%, 95%) SPD.",
"For the comparison with TinyBERT 4 , both SPD ( 90% sparsity) and SPD ( 95% sparsity) win.",
"SPD ( 90% sparsity) has 63.4% and 9% higher evaluation score than TinyBERT 4 on CoLA and MRPC, respectively.",
"For the setting of 95% sparsity, SPD outperforms TinyBERT 4 with 41 .",
"3% and 7 .",
"6% higher performance, respectively.",
"Compared to RPP, both SPD ( 90% sparsity) and SPD ( 95% sparsity) show higher performance on MRPC, with 9 .",
"8% and 8 .",
"3% higher F1 score, respectively.",
"For SparseBERT, SPD exceeds it on all tasks in Table 2. Especially on CoLA, SPD ( 90% sparsity) and SPD ( 95% sparsity) have 2.69 and 2.33 higher Mcc score on CoLA, respectively.",
"SparseBERT has competitive performance with SOTA when using data augmentation.",
"The reason for the performance drop for SparseBERT may because its deficiency of ability in mitigating overfitting problems.",
"Overfitting Mitigation.",
"We explore the effectiveness of SPD to mitigate the overfitting problem.",
"Depending on whether progressive, grafting, or KD is used, we compare 4 strategies:",
"(a) no progressive, no KD;",
"(b) progressive, no KD;",
"(c) no progressive, KD;",
"(d) progressive, KD (ours).",
"We evaluate these strategies on both training and validation sets of MRPC.",
"The results are summarized in Figure 5.",
"From",
"(a) to",
"(d), the gap between the evaluation results of the training set and the dev set is reduced, which strongly suggests that the strategy adopted by SPD, i.e., progressive + KD, outperforms other strategies in mitigating the overfitting problem.",
"Figure 5",
"(a),",
"(b), and",
"(c) indicate that compared to progressive only, KD has a bigger im-pact on mitigating overfitting, as the performance gap between the training set and the dev set decreases more from",
"(a) to",
"(c) than from",
"(a) to",
"(b).",
"From Figure 5",
"(a),",
"(b) and",
"(c), we also observe that compared to no progressive, no KD, either using progressive (Figure 5",
"(b)) or KD (Figure 5",
"(c)) is very obvious to help mitigate the overfitting prob-195 Model #Param MNLI QQP QNLI SST-2 CoLA STS-B MRPC RTE Avg.",
"lem.",
"Figures 5",
"(b),",
"(c) and",
"(d) indicate that the combination of progressive and KD brings more benefits than only using progressive or KD as Figure 5",
"(d) has the smallest performance gap between the training set and the dev set.",
"Combined with Table 1 and Table 2, Figure 5 shows that SPD mitigates overfitting and leads to higher performance.",
"In this section, we justify the three schedulers used in our method (i.e., grafting probability, pruning rate, and learning rate), and study the sensitivity of our method with respect to each of them.",
"Study on Components of SPD.",
"The proposed SPD consists of three components (i.e., sparse, knowledge distillation, and progressive module grafting).",
"We conduct experiments to study the importance of each component on GLUE benchmark tasks with the sparsity of 50% and results are shown in Table 3.",
"Compared to both sparse + KD and sparse + progressive, SPD achieves gains on performance among all tasks.",
"Effects of Grafting Probability Strategy.",
"In our method, we set the grafting probability greater than 0 during pruning, to allow student modules to learn deeper knowledge from the teacher model.",
"To verify the benefit of this design, we change the grafting probability to zero and compare it with our 196 Model #Param MNLI QQP QNLI SST-2 CoLA STS-B MRPC RTE Avg.",
"method.",
"The result on RTE is shown in Figure 6.",
"Pruning with grafting (the red curve) shows better performance than pruning without grafting, which justifies the existence of grafting during pruning.",
"In addition, we study the sensitivity of our method to grafting probability (Figure 7).",
"It is observed that p 0 = 0.6 achieves the best performance, and the progressive design is better than the non-progressive.",
"rate scheduler, we compare the strategies with different pruning ending steps.",
"The results are shown in Figure 8.",
"It is observed that the pruning during when grafting probability p = p 0 has a higher F1 score than other strategies on MRPC.",
"Effects of Optimizer Strategy.",
"We also compare our strategy with the strategy that only has one learning rate scheduler.",
"The results (Figure 9) indicate that our strategy (i.e., two independent optimizers) is better.",
"We also evaluate different learning rates with the pruning rate of 0.9 and the grafting probability of 0.8.",
"In this paper, we postulate a counter-traditional hypothesis that pruning increases the risk of overfitting under the pretrain-and-finetune paradigm.",
"We analyze and empirically verify this hypothesis, and propose a sparse progressive pruning method 197 to address the overfitting problem.",
"We theoretically analyze that our pruning method can obtain a subnetwork from the student model that has a similar accuracy as the teacher.",
"We study and minimize the interference between different hyperparameter strategies, including pruning rate, learning rate, and grafting probability.",
"A number of ablation studies and experimental results on eight tasks from the GLUE benchmark demonstrate the superiority of our method over the leading competitors.",
"This research was supported in part by National Science Foundation (NSF) CRII Award No. 2000722 and NSF CAREER Award No. 2046102.",
"Sanguthevar Rajasekaran has been supported in part by the NSF RAISE Award No. 1743418 and NSF EAGER Award No. 1843025.",
"In addition, it used the Extreme Science and Engineering Discovery Environment (XSEDE) through allocations TG-CCR200004."
] | [
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"objective",
"result",
"objective",
"objective",
"result",
"result",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"method",
"objective",
"other",
"other",
"other"
] |
[
"Copy module has been widely equipped in the recent abstractive summarization models, which facilitates the decoder to extract words from the source into the summary.",
"Generally, the encoder-decoder attention is served as the copy distribution, while how to guarantee that important words in the source are copied remains a challenge.",
"In this work, we propose a Transformer-based model to enhance the copy mechanism.",
"Specifically, we identify the importance of each source word based on the degree centrality with a directed graph built by the self-attention layer in the Transformer.",
"We use the centrality of each source word to guide the copy process explicitly.",
"Experimental results show that the self-attention graph provides useful guidance for the copy distribution.",
"Our proposed models significantly outperform the baseline methods on the CNN/Daily Mail dataset and the Gigaword dataset.",
"The explosion of information has expedited the rapid development of text summarization technology, which can help us to grasp the key points from miscellaneous information quickly.",
"There are broadly two types of summarization methods: extractive and abstractive.",
"Extractive approaches select the original text segments in the input to form a summary, while abstractive approaches create novel sentences based on natural language generation techniques.",
"In the past few years, recurrent neural networks (RNNs) based architectures (Chopra et al., 2016; Gu et al., 2016; Nallapati et al., 2016, 2017; See et al., 2017; Zhou et al., 2017; Li et al., 2018b,a; Zhu et al., 2019) have obtained state-of-the-art results for text summarization.",
"Benefit from long-term dependency and high scalability, transformer-based networks have shown superiority over RNNs Equal contribution.",
"Source : two u.s. senators are blocking 11 of president barack obama 's nominees for senior administration posts at the pentagon and justice department in protest over a proposal to house guantanamo detainees at the fort leavenworth prison in their midwestern home state of kansas Reference : us senators bar obama nominees protest guantanamo Transformer : 1 us senators block pentago justice nominees Transformer + Copy : us senators block 11 from pentago justice posts Transformer + Guided Copy : us senators block obama nominees over guantanamo Top Words from Self-attention : nominees, obama, senators, pentagon, guantanamo Table 1: Yellow shades represent overlap with reference.",
"on many NLP tasks, including machine translation (Vaswani et al., 2017; Dehghani et al., 2019), sentence classification (Devlin et al., 2019; Cohan et al., 2019), and text summarization (Song et al., 2019; Zhang et al., 2019).",
"One of the most successful frameworks for the summarization task is Pointer-Generator Network (See et al., 2017) that combines extractive and abstractive techniques with a pointer (Vinyals et al., 2015) enabling the model to copy words from the source text directly.",
"Although, copy mechanism has been widely used in summarization task, how to guarantee that important tokens in the source are copied remains a challenge.",
"In our experiments, we find that the transformer-based summarization model with the copy mechanism may miss some important words.",
"As shown in Table 1, words like nominees and obama are ignored by the standard copy mechanism.",
"To tackle this problem, we intend to get some clues about the importance of words from the self-attention graph.",
"We propose a Self-Attention Guided Copy mechanism (SAGCopy) that aims to encourage the summarizer to copy important source words.",
"Self-attention layer in the Transformer (Vaswani et al., 2017) builds a directed graph whose vertices represent the source words and edges are defined in terms of the relevance score between each pair of source words by dot-product attention (Vaswani et al., 2017) between the query Q and the key K .",
"We calculate the centrality of each source words based on the adjacency matrices.",
"A straightforward method is using TextRank (Mihalcea and Tarau, 2004) algorithm that assumes a word receiving more relevance score from others are more likely to be important.",
"This measure is known as the indegree centrality.",
"We also adopt another measure assuming that a word sends out more relevance score to others is likely to be more critical, namely outdegree centrality, to calculate the source word centrality.",
"We utilize the centrality score as guidance for copy distribution.",
"Specifically, we extend the dot-product attention to a centrality-aware function.",
"Furthermore, we introduce an auxiliary loss computed by the divergence between the copy distribution and the centrality distribution, which aims to encourage the model to focus on important words.",
"Our contribution is threefold: We present a guided copy mechanism based on source word centrality that is obtained by the indegree or outdegree centrality measures.",
"We propose a centrality-aware attention and a guidance loss to encourage the model to pay attention to important source words.",
"We achieve state-of-the-art on the public text summarization dataset.",
"Neural network based models (Rush et al., 2015; Nallapati et al., 2016; Chopra et al., 2016; Nallapati et al., 2017; Zhou et al., 2017; Tan et al., 2017; Gehrmann et al., 2018; Zhu et al., 2019; Li et al., 2020b,a) achieve promising results for the abstractive text summarization.",
"Copy mechanism (Gul-cehre et al., 2016; Gu et al., 2016; See et al., 2017; Zhou et al., 2018) enables the summarizers with the ability to copy from the source into the target via pointing (Vinyals et al., 2015).",
"Recently, pre-training based methods (Devlin et al., 2019; Radford et al., 2018) have attracted growing attention and achieved state-of-the-art performances in many NLP tasks, and pre-training encoder-decoder Transformers (Song et al., 2019; Dong et al., 2019; Lewis et al., 2019; Xiao et al., 2020; Bao et al., 2020) show great successes for the summarization task.",
"In this work, we explore the copy module upon the Transformer-based summarization model.",
"We first introduce the copy mechanism.",
"In Pointer-Generator Networks (PGNet) (See et al., 2017), the source text x are fed into a bidirectional LSTM (BiLSTM) encoder, producing a sequence of encoding hidden state h : h i = BiLSTM( x i , h i 1 ) (1) On each step t , a unidirectional LSTM decoder receives the word embedding of the previous word to produce decoder state s : s t = LSTM( s t 1 , y t 1 , c t ) (2) where c t is a context vector generated based on the attention distribution (Bahdanau et al., 2015): e t,i = v T tanh ( W h h i + W s s t ) , (3) t = softmax( e t ) (4) c t = (cid:88) i t,i h i (5) The vocabulary distribution P vocab over all words in the target vocabulary is calculated as follows: P vocab ( w ) = softmax( W a s t + V a c t ) (6) By incorporating a generating-copying switch p gen [0 , 1] , the final probability distribution of the ground-truth target word y t is: P ( y t ) = p gen P vocab ( y t ) + (1 p gen ) P copy ( y t ) (7) p gen = sigmoid( w Ta c t + u Ta s t + v Ta y t 1 ) (8) Copy distribution P copy determines where to attend in time step t .",
"In the most previous work, encoder-decoder attention weight t is serves as the copy distribution (See et al., 2017): P copy ( w ) = (cid:88) i : x i = w t,i (9) The loss function L is the average negative log likelihood of the ground-truth target word y t for each timestep t : L = 1 T (cid:88) T t =0 log P ( y t ) (10) N N Source Text Encoder Decoder Target Text Attention Graph Vocabulary Distribution Final Distribution p gen p gen Copy Distribution ( 1-p gen ) Figure 1: The framework of our proposed model.",
"In this section, we present our approach to enhance the copy mechanism.",
"First, we briefly describe the Transformer model with the copy mechanism.",
"Then, we introduce two methods to calculate the centrality scores for the source words based on the encoder self-attention layer.",
"Finally, we incorporate the centrality score into the copy distribution and the loss function.",
"The framework of our model is shown in Figure 1.",
"Scaled dot-product attention (Vaswani et al., 2017) is widely used in self-attention networks: Attention( Q, K, V ) = softmax( QKT d k ) V (11) where d k is the number of columns of query matrix Q , key matrix K and value matrix V .",
"We take the encoder-decoder attentions in the last decoder layer as the copy distribution: t,i = softmax(( W s s t ) TW h h i d k ) (12) Note that for the multi-head attention, we obtain the copy distributions with the sum of multiple heads.",
"We introduce two approaches, i.e., indegree centrality and outdegree centrality, to calculate the centrality score for each source word based on the last encoder self-attention layer of the Transformer.",
"Centrality approaches are proposed to investigates the importance of nodes in social networks (Freeman, 1978; Bonacich, 1987; Borgatti and Everett, 2006; Kiss and Bichler, 2008; Li et al., 2011).",
"Degree centrality is one of the simplest centrality measures that can be distinguished as indegree centrality and outdegree centrality (Free-man, 1978), which are determined based on the edges coming into and leaving a node, respectively.",
"Indegree centrality of a word is proportional to the number of relevance scores incoming from other words, which can be measured by the sum of the indegree scores or by graph-based extractive summarization methods (Mihalcea and Tarau, 2004; Erkan and Radev, 2004; Zheng and Lapata, 2019).",
"Outdegree centrality of a word is proportional to the number of relevance scores outgoing to other words, which can be computed by the sum of the outdegree scores.",
"Formally, let G = ( V, D ) be a directed graph representing self-attention, where vertices V is the word set and edge D i,j is represented by the encoder self-attention weight from the word x i to the word x j , where (cid:80) i D i,j = 1 .",
"Next, we introduce the approaches to calculate the word centrality with the graph G .",
"We first construct a transition probability matrix T as follows: T i,j = D i,j / (cid:88) j D i,j .",
"(13)",
"A basic indegree centrality is defined as: score i = (cid:88) j T j,i (14) Alternatively, TextRank (Mihalcea and Tarau, 2004) that is inspired by PageRank algorithm (Page et al., 1999) calculates indegree centrality of the source words iteratively based on the Markov chain: score i = (cid:88) j T j,i score j (15) where score i is indegree centrality score for vertex V i with initial score set to 1 / | V | .",
"We can get a stationary indegree centrality distribution by computing score = T score iteratively, and we take at most three iterations in our implementation.",
"Outdegree centrality measures how much a word i contributes to other words in the directed graph: score i = (cid:88) j D i,j (16) Next, we incorporate the source word centrality score into the decoding process.",
"The motivation is that word centrality indicates the salience of the source words, which can provide the copy prior knowledge that can guide the copy module to focus on important source words.",
"We use word centrality score as an extra input to calculate the copy distribution as follows: t,i = softmax(( W s s t ) T ( W h h i + w p score i ) d k ) (17) where score i is the indegree or outdegree centrality score for the i -th word in source text.",
"The above implementation can be referred to as centrality-aware dot-product attention.",
"Moreover, we expect that important source words can draw enough encoder-decoder attention.",
"Thus, we adopt a centrality-aware auxiliary loss to encourage the consistency between the overall copy distribution and the word centrality distribution based on the Kullback-Leibler (KL) divergence: L = 1 T (cid:88) t log P ( y t ) + KL( 1 T (cid:88) t t , score ) (18) 5 Experiments 5.1 Experimental Setting We evaluate our model in CNN/Daily Mail dataset (Hermann et al., 2015) and Gigaword dataset (Rush et al., 2015).",
"Our experiments are conducted with 4 NVIDIA P40 GPU.",
"We adopt 6 layer encoder and 6 layers decoder with 12 attention heads, and h model = 768.",
"Byte Pair Encoding (BPE) (Sennrich et al., 2016) word segmentation is used for data pre-processing.",
"We warm-start the model parameter with MASS pre-trained base model 1 and trains about 10 epoches for convergence.",
"During decoding, we use beam search with a beam size of 5.",
"We compare our proposed Self-Attention Guided Copy ( SAGCopy ) model with the following comparative models.",
"Lead-3 uses the first three sentences of the article as its summary.",
"PGNet (See et al., 2017) is the Pointer-Generator Network.",
"Bottom-Up (Gehrmann et al., 2018) is a sequence-to-sequence model augmented with a bottom-up content selector.",
"MASS (Song et al., 2019) is a sequence-to-sequence pre-trained model based on the Transformer.",
"ABS (Rush et al., 2015) relies on an CNN encoder and a NNLM decoder.",
"ABS+ (Rush et al., 2015) enhances the ABS model with extractive summarization features.",
"SEASS (Zhou et al., 2017) controls the information flow from the encoder to the decoder with the selective encoding strategy.",
"SeqCopyNet (Zhou et al., 2018) extends the copy mechanism that can copy sequences from the source.",
"We adopt ROUGE (RG) F 1 score (Lin, 2004) as the evaluation metric.",
"As shown in Table 2 and Table 3, SAGCopy with both outdegree and indegree centrality based guidance significantly outperform the baseline models, which prove the effectiveness of self-attention guided copy mechanism.",
"The basic indegree centrality (indegree-1) is more favorable, considering the ROUGE score and computation complexity.",
"Besides ROUGE evaluation, we further investigate the guidance from the view of the loss function.",
"For each sample in the Gigaword test set, we measure the KL divergence between the centrality score and the copy distribution, and we calculate the ROUGE-1 and ROUGE-2 scores.",
"Figure 2 demonstrates that lower KL divergence yields a 1 https://github.com/microsoft/MASS Models RG-1 RG-2 RG-L Lead-3* 40.34 17.70 36.57 PGNet* 39.53 17.28 36.38 Bottom-Up* 41.22 18.68 38.34 MASS 41.38 19.11 38.42 MASS+Copy 41.71 19.41 38.66 SAGCopy Outdegree 42.53 19.92 39.44 SAGCopy Indegree-1 42.30 19.75 39.23 SAGCopy Indegree-2 42.56 19.89 39.40 SAGCopy Indegree-3 42.34 19.72 39.29 Table 2: ROUGE F 1 scores on the CNN/Daily Mail dataset.",
"higher ROUGE score, showing that our loss function is reasonable.",
"Additionally, we visualize the self-attention weights learned from our model in Figure 3, which demonstrates the guidance process.",
"We conduct human evaluations to measure the quantify of the summaries for importance and readability .",
"We randomly selected 100 samples from the Gigaword test set.",
"The annotators are required to give a comparison between two model summaries that are presented anonymously.",
"The results in Table 4 show that SAGCopy significantly outperforms MASS+Copy in terms of Importance and is comparative in terms of Readability .",
"In this paper, we propose the SAGCopy summarization model that acquires guidance signals for the copy mechanism from the encoder self-attention graph.",
"We first calculate the centrality score for each source word.",
"Then, we incorporate the importance score into the copy module.",
"The experimental results show the effectiveness of our model.",
"For future work, we intend to apply our method to other Transformer-based summarization models.",
"This work is partially supported by Beijing Academy of Artificial Intelligence (BAAI)."
] | [
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"method",
"other"
] |
[
"Text simplification reduces the language complexity of professional content for accessibility purposes.",
"End-to-end neural network models have been widely adopted to directly generate the simplified version of input text, usually functioning as a blackbox.",
"We show that text simplification can be decomposed into a compact pipeline of tasks to ensure the transparency and explainability of the process.",
"The first two steps in this pipeline are often neglected:",
"1) to predict whether a given piece of text needs to be simplified, and",
"2) if yes, to identify complex parts of the text.",
"The two tasks can be solved separately using either lexical or deep learning methods, or solved jointly.",
"Simply applying explainable complexity prediction as a preliminary step, the out-of-sample text simplification performance of the state-of-the-art, black-box simplification models can be improved by a large margin.",
"Text simplification aims to reduce the language complexity of highly specialized textual content so that it is accessible for readers who lack adequate literacy skills, such as children, people with low education, people who have reading disorders or dyslexia, and non-native speakers of the language.",
"Mismatch between language complexity and literacy skills is identified as a critical source of bias and inequality in the consumers of systems built upon processing and analyzing professional text content.",
"Research has found that it requires on average 18 years of education for a reader to properly understand the clinical trial descriptions on ClinicalTrials.gov, and this introduces a potential self-selection bias to those trials (Wu et al., 2016).",
"Text simplification has considerable potential to improve the fairness and transparency of text information systems.",
"Indeed, the Simple English Wikipedia ( simple.wikipedia.org ) has been constructed to disseminate Wikipedia articles to kids and English learners.",
"In healthcare, consumer vocabulary are used to replace professional medical terms to better explain medical concepts to the public (Abrahamsson et al., 2014).",
"In education, natural language processing and simplified text generation technologies are believed to have the potential to improve student outcomes and bring equal opportunities for learners of all levels in teaching, learning and assessment (Mayfield et al., 2019).",
"Ironically, the definition of text simplification in literature has never been transparent.",
"The term may refer to reducing the complexity of text at various linguistic levels, ranging all the way through replacing individual words in the text to generating a simplified document completely through a computer agent.",
"In particular, lexical simplification (Devlin, 1999) is concerned with replacing complex words or phrases with simpler alternatives; syntactic simplification (Siddharthan, 2006) alters the syntactic structure of the sentence; semantic simplification (Kandula et al., 2010) paraphrases portions of the text into simpler and clearer variants.",
"More recent approaches simplify texts in an end-to-end fashion, employing machine translation models in a monolingual setting regardless of the type of simplifications (Zhang and Lapata, 2017; Guo et al., 2018; Van den Bercken et al., 2019).",
"Nevertheless, these models are limited on the one hand due to the absence of large-scale parallel (complex simple) monolingual training data, and on the other hand due to the lack of interpretibility of their black-box procedures (Alva-Manchego et al., 2017).",
"Given the ambiguity in problem definition, there also lacks consensus on how to measure the goodness of text simplification systems, and automatic evaluation measures are perceived ineffective and sometimes detrimental to the specific procedure, in particular when they favor shorter but not necessarily simpler sentences (Napoles et al., 2011).",
"While end-to-end simplification models demonstrate superior performance on benchmark datasets, their success is often compromised in out-of-sample, real-world scenarios (D'Amour et al., 2020).",
"Our work is motivated by the aspiration that increasing the transparency and explainability of a machine learning procedure may help its generalization into unseen scenarios (Doshi-Velez and Kim, 2018).",
"We show that the general problem of text simplification can be formally decomposed into a compact and transparent pipeline of modular tasks.",
"We present a systematic analysis of the first two steps in this pipeline, which are commonly overlooked:",
"1) to predict whether a given piece of text needs to be simplified at all , and",
"2) to identify which part of the text needs to be simplified .",
"The second task can also be interpreted as an explanation of the first task: why a piece of text is considered complex.",
"These two tasks can be solved separately, using either lexical or deep learning methods, or they can be solved jointly through an end-to-end, explainable predictor.",
"Based on the formal definitions, we propose general evaluation metrics for both tasks and empirically compare a diverse portfolio of methods using multiple datasets from different domains, including news, Wikipedia, and scientific papers.",
"We demonstrate that by simply applying explainable complexity prediction as a preliminary step, the out-of-sample text simplification performance of the state-of-the-art, black-box models can be improved by a large margin.",
"Text simplification at word level has been done through",
"1) lexicon based approaches, which match words to lexicons of complex/simple words (Deleger and Zweigenbaum, 2009; Elhadad and Sutaria, 2007),",
"2) threshold based approaches, which apply a threshold over word lengths or certain statistics (Leroy et al., 2013),",
"3) human driven approaches, which solicit the user's input on which words need simplification (Rello et al., 2013), and",
"4) classification methods, which train machine learning models to distinguish complex words from simple words (Shardlow, 2013).",
"Complex word identification is also the main topic of SemEval 2016 Task 11 (Paetzold and Specia, 2016), aiming to determine whether a non-native English speaker can understand the meaning of a word in a given sentence.",
"Significant differences exist between simple and complex words, and the latter on average are shorter, less ambiguous, less frequent, and more technical in nature.",
"Interestingly, the frequency of a word is identified as a reliable indicator of its simplicity (Leroy et al., 2013).",
"While the above techniques have been widely employed for complex word identification, the results reported in the literature are rather controversial and it is not clear to what extent one technique outperforms the other in the absence of standardized high quality parallel corpora for text simplification (Paetzold, 2015).",
"Pre-constructed lexicons are often limited and do not generalize to different domains.",
"It is intriguing that classification methods reported in the literature are not any better than a simplify-all baseline (Shardlow, 2014).",
"Traditionally, measuring the level of reading difficulty is done through lexicon and rule-based metrics such as the age of acquisition lexicon (AoA) (Kuperman et al., 2012) and the Flesch-Kincaid Grade Level (Kincaid et al., 1975).",
"A machine learning based approach in (Schumacher et al., 2016) extracts lexical, syntactic, and discourse features and train logistic regression classifiers to predict the relative complexity of a single sentence in a pairwise setting.",
"The most predictive features are simple representations based on AoA norms.",
"The perceived difficulty of a sentence is highly in-fluenced by properties of the surrounding passage.",
"Similar methods are used for fine-grained classification of text readability (Aluisio et al., 2010) and complexity (Stajner and Hulpus , , 2020).",
"Simplification rules are learnt by finding words from a complex sentence that correspond to different words in a simple sentence (Alva-Manchego et al., 2017).",
"Identifying simplification operations such as copies, deletions, and substitutions for words from parallel complex vs. simple corpora helps understand how human experts simplify text (Alva-Manchego et al., 2017).",
"Machine translation has been employed to learn phrase-level alignments for sentence simplification (Wubben et al., 2012).",
"Lexical and phrasal paraphrase rules are extracted in (Pavlick and Callison-Burch, 2016).",
"These methods are often evaluated by comparing their output to gold-standard, human-generated simplifications, using standard metrics (e.g., token-level precision, recall, F1), machine translation metrics (e.g., BLEU (Papineni et al., 2002) ), text simplification metrics (e.g. SARI (Xu et al., 2016) which rewards copying words from the original sentence), and readability metrics (among which Flesch-Kincaid Grade Level (Kincaid et al., 1975) and Flesch Reading Ease (Kincaid et al., 1975) are most commonly used).",
"It is desirable that the output of the computational models is ultimately validated by human judges (Shardlow, 2014).",
"Neural encoder-decoder models are used to learn simplification rewrites from monolingual corpora of complex and simple sentences (Scarton and Spe-cia, 2018; Van den Bercken et al., 2019; Zhang and Lapata, 2017; Guo et al., 2018).",
"On one hand, these models often obtain superior performance on particular evaluation metrics, as the neural network directly optimizes these metrics in training.",
"On the other hand, it is hard to interpret what exactly are learned in the hidden layers, and without this transparency it is difficult to adapt these models to new data, constraints, or domains.",
"For example, these end-to-end simplification models tend not to distinguish whether the input text should or should not be simplified at all, making the whole process less transparent.",
"When the input is already simple, the models tend to oversimplify it and deviate from its original meaning (see Section 5.3).",
"Various approaches are proposed in the literature to address the explainability and interpretability of machine learning agents.",
"The task of providing explanations for black-box models has been tackled either at a local level by explaining individual predictions of a classifier (Ribeiro et al., 2016), or at a global level by providing explanations for the model behavior as a whole (Letham et al., 2015).",
"More recently, differential explanations are proposed to describe how the logic of a model varies across different subspaces of interest (Lakkaraju et al., 2019).",
"Layer-wise relevance propagation (Arras et al., 2017) is used to trace backwards text classification decisions to individual words, which are assigned scores to reflect their separate contribution to the overall prediction.",
"LIME (Ribeiro et al., 2016) is a model-agnostic explanation technique which can approximate any machine learning model locally with another sparse linear interpretable model.",
"SHAP (Lundberg and Lee, 2017) evaluates Shapley values as the average marginal contribution of a feature value across all possible coalitions by considering all possible combinations of inputs and all possible predictions for an instance.",
"Explainable classification can also be solved simultaneously through a neural network, using hard attentions to select individual words into the rationale behind a classification decision (Lei et al., 2016).",
"Extractive adversarial networks employs a three-player adversarial game which addresses high recall of the rationale (Carton et al., 2018).",
"The model consists of a generator which extracts an attention mask for each token in the input text, a predictor that cooperates with the generator and makes prediction from the rationale (words attended to), and an adversarial predictor that makes predictions from the remaining words in the inverse rationale.",
"The minimax game between the two predictors and the generator is designed to ensure all predictive signals are included into the rationale.",
"No prior work has addressed the explainability of text complexity prediction.",
"We fill in this gap.",
"We propose a unified view of text simplification which is decomposed into several carefully designed sub-problems.",
"These sub-problems generalize over many approaches, and they are logically dependent on and integratable with one another so that they can be organized into a compact pipeline.",
"The first conceptual block in the pipeline (Fig-ure",
"1) is concerned with explainable prediction of the complexity of text.",
"It consists of two sub-tasks:",
"1) prediction : classifying a given piece of text into two categories, needing simplification or not; and",
"2) explanation : highlighting the part of the text that needs to be simplified.",
"The second conceptual block is concerned with simplification generation, the goal of which is to generate a new, simplified version of the text that needs to be simplified.",
"This step could be achieved through completely manual effort, or a computer-assisted approach (e.g., by suggesting alternative words and expressions), or a completely automated method (e.g., by self-translating into a simplified version).",
"The second building block is piped into a step of human judg-ment, where the generated simplification is tested, approved, and evaluated by human practitioners.",
"One could argue that for an automated simplification generation system the first block (complexity prediction) is not necessary.",
"We show that it is not the case.",
"Indeed, it is unlikely that every piece of text needs to be simplified in reality, and instead the system should first decide whether a sentence needs to be simplified or not.",
"Unfortunately such a step is often neglected by existing end-to-end simplifiers, thus their performance is often biased towards the complex sentences that are selected into their training datasets at the first place and doesn't generalize well to simple inputs.",
"Empirically, when these models are applied to out-of-sample text which shouldn't be simplified at all, they tend to oversimplify the input and result in a deviation from its original meaning (see Section 5.3).",
"One could also argue that an explanation component (1B) is not mandatory in certain text simplification practices, in particular in an end-to-end neural generative model that does not explicitly identify the complex parts of the input sentence.",
"In reality, however, it is often necessary to highlight the differences between the original sentence and the simplified sentence (which is essentially a variation of 1B) to facilitate the validation and evaluation of these black-boxes.",
"More generally, the explainability/interpretability of a machine learning model has been widely believed to be an indispensable factor to its fidelity and fairness when applied to the real world (Lakkaraju et al., 2019).",
"Since the major motivation of text simplification is to improve the fairness and transparency of text information systems, it is critical to explain the rationale behind the simplification decisions, even if they are made through a black-box model.",
"Without loss of generality, we can formally define the sub-tasks 1A, 1B, and 2in the pipeline: Definition 3.1.",
"(Complexity Prediction).",
"Let text d D be a sequence of tokens w 1 w 2 ...w n .",
"The task of complexity prediction is to find a function f : D { 0 , 1 } such that f ( d ) = 1 if d needs to be simplified, and f ( d ) = 0 otherwise.",
"Definition 3.2.",
"(Complexity Explanation).",
"Let d be a sequence of tokens w 1 w 2 ...w n and f ( d ) = 1 .",
"The task of complexity explanation/highlighting is to find a function h : D { 0 , 1 } n s.t. h ( d ) = c 1 c 2 ...c n , where c i = 1 means w i will be highlighted as a complex portion of d and c i = 0 otherwise.",
"We denote d | h ( d ) as the highlighted part of d and d | h ( d ) as the unhighlighted part of d.",
"Definition 3.3.",
"(Simplification Generation).",
"Let d be a sequence of tokens w 1 w 2 ...w n and f ( d ) = 1 .",
"The task of simplification generation is to find a function g : D D (cid:48) s.t. g ( d, f ( d ) , h ( d )) = d (cid:48) , where d (cid:48) = w (cid:48) 1 w (cid:48) 2 ...w (cid:48) m and f ( d (cid:48) ) = 0 , subject to the constraint that d (cid:48) preserves the meaning of d .",
"In this paper, we focus on an empirical analysis of the first two sub-tasks of explainable prediction of text complexity (1A and 1B), which are the preliminaries of any reasonable text simplification practice.",
"We leave aside the detailed analysis of simplification generation (2-) for now, as there are many viable designs of g ( ) in practice, spanning the spectrum between completely manual and completely automated.",
"Since this step is not the focus of this paper, we intend to leave the definition of simplification generation highly general.",
"Note that the definitions of complexity prediction and complexity explanation can be naturally extended to a continuous output, where f ( ) predicts the complexity level of d and h ( ) predicts the complexity weight of w i .",
"The continuous output would align the problem more closely to readability measures (Kincaid et al., 1975).",
"In this paper, we stick to the binary output because a binary action (to simplify or not) is almost always necessary in reality even if a numerical score is available.",
"Note that the definition of complexity explanation is general enough for existing approaches.",
"In lexical simplification where certain words in a complex vocabulary V are identified to explain the complexity of a sentence, it is equivalent to highlighting every appearance of these words in d , or w i V, c i = 1 .",
"In automated simplification where there is a self-translation function g ( d ) = d (cid:48) , h ( d ) can be simply instantiated as a function that returns a sequence alignment of d and d (cid:48) .",
"Such reformulation helps us define unified evaluation metrics for complexity explanation (see Section 4).",
"It is also important to note that the dependency between the components, especially complexity prediction and explanation, does not restrict them to be done in isolation.",
"These sub-tasks can be done either separately, or jointly with an end-to-end approach as long as the outputs of f, h, g are all obtained (so that transparency and explainability are preserved).",
"In Section 4, we include both separate models and end-to-end models for explanatory complexity predication in one shot.",
"With the pipeline formulation, we are able to compare a wide range of methods and metrics for the sub-tasks of text simplification.",
"We aim to understand how difficult they are in real-world settings and which method performs the best for which task.",
"We examine a wide portfolio of deep and shallow binary classifiers to distinguish complex sentences from simple ones.",
"Among the shallow models we use Naive Bayes (NB), Logistic Regression (LR), Support Vector Machines (SVM) and Random Forests (RF) classifiers trained with unigrams, bigrams and trigrams as features.",
"We also train the classifiers using the lexical and syntactic features proposed in (Schumacher et al., 2016) combined with the n -gram features (denoted as enriched fea-tures).",
"We include neural network models such as word and char-level Long Short-Term Memory Network (LSTM) and Convolutional Neural Networks (CNN).",
"We also employ a set of state-of-the-art pre-trained neural language models, fine-tuned for complexity prediction; we introduce them below.",
"ULMFiT (Howard and Ruder, 2018) a language model on a large general corpus such as WikiText-103 and then fine-tunes it on the target task using slanted triangular rates, and gradual unfreezing.",
"We use the publicly available implementation 1 of the model with two fine-tuning epochs for each dataset and the model quickly adapts to a new task.",
"BERT (Devlin et al., 2019) trains deep bidirectional language representations and has greatly advanced the state-of-the-art for many natural language processing tasks.",
"The model is pre-trained on the English Wikipedia as well as the Google Book Corpus.",
"Due to computational constraints, we use the 12 layer BERT base pre-trained model and fine-tune it on our three datasets.",
"We select the best hyperparameters based on each validation set.",
"XLNeT (Yang et al., 2019) overcomes the limitations of BERT (mainly the use of masks) with a permutation-based objective which considers bidirectional contextual information from all positions without data corruption.",
"We use the 12 layer XLNeT base pre-trained model on the English Wikipedia, the Books corpus (similar to BERT), Giga5, ClueWeb 2012-B, and Common Crawl.",
"We evaluate the performance of complexity prediction models using classification accuracy on balanced training, validation, and testing datasets.",
"We use LIME in combination with LR and LSTM classifiers, SHAP on top of LR, and the extractive adversarial networks which jointly conducts complexity prediction and explanation.",
"We feed each test complex sentence as input to these explanatory models and compare their performance at identifying tokens (words and punctuation) that need to be removed or replaced from the input sentence.",
"We compare these explanatory models with three baseline methods:",
"1) Random highlighting : randomly draw the size and the positions of tokens to highlight;",
"2) Lexicon based highlighting : highlight words that appear in the Age-of-Acquisition (AoA) lexicon (Kuperman et al., 2012), which contains ratings for 30,121 English content words (nouns, verbs, and adjectives) indicating the age at which a word is acquired; and",
"3) Feature highlighting : highlight the most important features of the best performing LR models for complexity prediction.",
"Evaluation of explanatory machine learning is an open problem.",
"In the context of complexity explanation, when the ground truth of highlighted tokens ( y c ( d ) = c 1 c 2 ...c n , c i { 0 , 1 } ) in each complex sentence d is available, we can compare the output of complexity explanation h ( d ) with y c ( d ) .",
"Such per-token annotations are usually not available in scale.",
"To overcome this, given a complex sentence d and its simplified version d (cid:48) , we assume that all tokens w i in d which are absent in d (cid:48) are candidate words for deletion or substitution during the text simplification process and should therefore be highlighted in complexity explanation (i.e., c i = 1 ).",
"In particular, we use the following evaluation metrics for complexity explanation:",
"1) Tokenwise Precision (P) , which measures the proportion of highlighted tokens in d that are truly removed in d (cid:48) ; 2) Tokenwise Recall (R) , which measures the proportion of tokens removed in d (cid:48) that are actually highlighted in d ; 3) Tokenwise F1 , the harmonic mean of P and R ; 4) word-level Edit distance (ED) (Levenshtein, 1966): between the unhighlighted part of d and the simplified document d (cid:48) .",
"Intuitively, a more successful complexity explanation would highlight most of the tokens that need to be simplified, thus the remaining parts in the complex sentences will be closer to the simplified version, achieving a lower edit distance (we also explore ED with a higher penalty cost for the substitution operation, namely values of 1, 1.5 and 2); and 5) Translation Edit Rate (TER) (Snover et al., 2006), which measures the minimum number of edits needed to change a hypothesis (the unhighlighted part of d ) so that it exactly matches the closest references (the simplified document d (cid:48) ).",
"Note these metrics are all proxies of the real editing process from d to d (cid:48) .",
"When token-level edit history is available (e.g., through track changes), it is better to compare the highlighted evaluation with these true changes made.",
"We compute all the metrics at sentence level and macro-average them.",
"We use three different datasets (Table",
"1) which cover different domains and application scenarios of text simplification.",
"Our first dataset is Newsela (Xu et al., 2015), a corpus of news articles simplified by professional news editors.",
"In our experiments we use the parallel Newsela corpus with the training, validation, and test splits made available in (Zhang and Lapata, 2017).",
"Second, we use the WikiLarge corpus introduced in (Zhang and Lap-ata, 2017).",
"The training subset of WikiLarge is created by assembling datasets of parallel aligned Wikipedia Simple Wikipedia sentence pairs available in the literature (Kauchak, 2013).",
"While this training set is obtained through automatic alignment procedures which can be noisy, the validation and test subsets of WikiLarge contain complex sentences with simplifications provided by Amazon Mechanical Turk workers (Xu et al., 2016); we in-crease the size of validation and test on top of the splits made available in (Zhang and Lapata, 2017).",
"Third, we use the dataset released by the Biendata competition 2 , which asks participants to match research papers from various scientific disciplines with press releases that describe them.",
"Arguably, rewriting scientific papers into press releases has mixed objectives that are not simply text simplification.",
"We include this task to test the generalizability of our explainable pipeline (over various definitions of simplification).",
"We use alignments at title level.",
"On average, a complex sentence in Newsela, WikiLarge, Biendata contains 23.07, 25.14, 13.43 tokens, and the corresponding simplified version is shorter, with 12.75, 18.56, 10.10 tokens.",
"The original datasets contain aligned complex-simple sentence pairs instead of classification labels for complexity prediction.",
"We infer ground-truth complexity labels for each sentence such that: label 1 is assigned to every sentence for which there is an aligned simpler version not identical to itself (the sentence is complex and needs to be simpli-fied); label 0 is assigned to all simple counterparts of complex sentences, as well as to those sentences that have corresponding simple versions identical to themselves (i.e., these sentences do not need to be simplified).",
"For complex sentences that have label 1, we further identify which tokens are not present in corresponding simple versions.",
"For all shallow and deep classifiers we find the best hyperparameters using random search on validation, with early stopping.",
"We use grid search on validation to fine-tune hyperparameters of the pre-trained models, such as maximum sequence 2 https://www.biendata.com/competition/ hackathon , retrieved on 5/31/2021.",
"length, batch size, learning rate, and number of epochs.",
"For ULMFit on Newsela, we set batch size to 128 and learning rate to 1e-3.",
"For BERT on WikiLarge, batch size is 32, learning rate is 2e-5, and maximum sequence length is 128.",
"For XLNeT on Biendata, batch size is 32, learning rate is 2e-5, and maximum sequence length is 32.",
"We use grid search on validation to fine-tune the complexity explanation models, including the extractive adversarial network.",
"For LR and LIME we determine the maximum number of words to highlight based on TER score on validation (please see Table 2); for SHAP we highlight all features with positive assigned weights, all based on TER.",
"For extractive adversarial networks batch size is set to 256, learning rate is 1e-4, and adversarial weight loss equals 1; in addition, sparsity weight is 1 for Newsela and Biendata, and 0.6 for WikiLarge; lastly, coherence weight is 0.05 for Newsela, 0.012 for WikiLarge, and 0.0001 for Biendata.",
"In Table 3, we evaluate how well the representative shallow, deep, and pre-trained classification models can determine whether a sentence needs to be simplified at all.",
"We test for statistical significance of the best classification results compared to all other models using a two-tailed z-test.",
"In general, the best performing models can achieve around 80% accuracy on two datasets (Newsela and WikiLarge) and a very high performance on the Biendata ( > 95% ).",
"This difference presents the difficulty of complexity prediction in different domains, and distinguishing highly specialized scientific content from public facing press releases is relatively easy (Biendata).",
"Deep classification models in general outperform shallow ones, however with carefully designed handcrafted features and proper hyperpa-rameter optimization shallow models tend to approach to the results of the deep classifiers.",
"Overall models pre-trained on large datasets and fine-tuned for text simplification yield superior classifi-Table 3: Accuracy of representative shallow , deep, and pre-trained models for complexity prediction.",
"BOLD : best performing models.",
"cation performance.",
"For Newsela the best performing classification model is ULMFiT (accuracy = 80.83%, recall = 76.87%), which significantly (p < 0.01) surpasses all other classifiers except for XLNeT and CNN (char-level).",
"On WikiLarge, BERT presents the highest accuracy ( 81 . 45% , p < 0 . 01 ), and recall = 83.30%.",
"On Biendata, XLNeT yields the highest accuracy ( 95 . 48% , p < 0 . 01 ) with recall = 94.93%, although the numerical difference to other pre-trained language models is small.",
"This is consistent with recent findings in other natural language processing tasks (Cohan et al., 2019).",
"We evaluate how well complexity classification can be explained, or how accurately the complex parts of a sentence can be highlighted.",
"Results (Table",
"4) show that highlighting words in the AoA lexicon or LR features are rather strong baselines, indicating that most complexity of a sentence still comes from word usage.",
"Highlighting more LR features leads to a slight drop in precision and a better recall.",
"Although LSTM and LR perform comparably on complexity classification, using LIME to explain LSTM presents better recall, F1, and TER (at similar precision) compared to using LIME to explain LR.",
"The LIME & LSTM combination is reasonably strong on all datasets, as is SHAP & LR.",
"TER is a reliable indicator of the difficulty of the remainder (unhighlighted part) of the complex sentence.",
"ED with a substitution penalty of 1.5 efficiently captures the variations among the explanations.",
"On Newsela and Bien-Table 4: Results for complexity explanation.",
"P, R and F1 the higher the better; TER and ED 1.5 the lower the better.",
"BOLD & Underlined: best & second best.",
"data, the extractive adversarial networks yield solid performances (especially TER and ED 1.5), indicating that jointly making predictions and generating explanations reinforces each other.",
"Table 5 provides examples of highlighted complex sentences by each explanatory model.",
"One may question whether explainable prediction of text complexity is still a necessary preliminary step in the pipeline if a strong, end-to-end simplification generator is used.",
"We show that it is.",
"We consider the scenario where a pre-trained, end-to-end text simplification model is blindly applied to texts regardless of their complexity level, compared to only simplifying those considered complex by the best performing complexity predictor in Table 3.",
"Such a comparison demonstrates whether adding complexity prediction as a preliminary step is beneficial to a text simplification process when a state-of-the-art, end-to-end simplifier is already in place.",
"From literature we select the current best text simplification models on WikiLarge and Newsela which have released pre-trained models: ACCESS (Martin et al., 2020), a controllable sequence-to-sequence simplification model that reported the highest performance (41.87 SARI) on WikiLarge.",
"Dynamic Multi-Level Multi-Task Learning for Sentence Simplification (DMLMTL) (Guo et al., 2018), which reported the highest performance (33.22 SARI) on Newsela.",
"We apply the author-released, pre-trained ACCESS and DMLMTL on all sentences from the validation and testing sets of all three datasets.",
"We do not use the training examples as the pre-trained models may have already seen them.",
"Presumably, a smart model should not further simplify an input sentence if it is already simple enough.",
"However, to our surprise, a majority of the out-of-sample simple sentences are still changed by both models (above 90% by DMLMTL and above 70% by ACCESS, please see Table 6).",
"We further quantify the difference with vs. without complexity prediction as a preliminary step.",
"Intuitively, without complexity prediction, an already simple sentence is likely to be overly simplified and result in a loss in text simplification metrics.",
"In contrast, an imperfect complexity predictor may mistaken a complex sentence as simple, which misses the opportunity of simplification and results in a loss as well.",
"The empirical question is which loss is higher.",
"From Table 7, we see that after directly adding a complexity prediction step before either of the state-of-the-art simplification models, there is a considerable drop of errors in three text simplification metrics: Edit Distance (ED), TER, and Frechet Embedding Distance (FED) that measures the difference of a simplified text and the ground-truth in a semantic space (de Masson d'Autume et al., 2019).",
"For ED alone, the improvements are between 30% to 50%.",
"This result is very encouraging: considering that the complexity predictors are only 80% accurate and the complexity predictor and the simplification models don't depend on each other, there is considerable room to optimize this gain.",
"Indeed, the benefit is higher on Biendata where the complexity predictor is more accurate.",
"Qualitatively, one could frequently observe syntactic, semantic, and logical mistakes in the model-simplified version of simple sentences.",
"We give a few examples below.",
"In Ethiopia, HIV disclosure is low In Ethiopia , HIV is low (ACCESS) Mustafa Shahbaz , 26 , was shopping for books about science .",
"Mustafa Shahbaz , 26 years old , was a group of books about science .",
"(ACCESS) New biomarkers for the diagnosis of Alzheimer's New biomarkers are diagnosed with Alzheimer (ACCESS) Table 5: Explanations of complexity predictions (in red).",
"Healthy diet linked to lower risk of chronic lung disease Healthy diet linked to lung disease (DMLMTL) Dramatic changes needed in farming practices to keep pace with climate change changes needed to cause climate change (DMLMTL) Social workers can help patients recover from mild traumatic brain injuries Social workers can cause better problems .",
"(DMLMTL)",
"All these qualitative and quantitative results suggest that the state-of-the-art black-box models tend to oversimplify and distort the meanings of out-of-sample input that is already simple.",
"Evidently, the lack of transparency and explainability has limited the application of these end-to-end black-box models in reality, especially to out-of-sample data, context, and domains.",
"The pitfall can be avoided with the proposed pipeline and simply with explainable complexity prediction as a preliminary step.",
"Even though this explainable preliminary does not necessarily reflect how a black-box simplification model thinks, adding it to the model is able to yield better out-of-sample performance.",
"We formally decompose the ambiguous notion of text simplification into a compact, transparent, and logically dependent pipeline of sub-tasks, where explainable prediction of text complexity is identified as the preliminary step.",
"We conduct a systematic analysis of its two sub-tasks, namely complexity prediction and complexity explanation, and show that they can be either solved separately or jointly through an extractive adversarial network.",
"While pre-trained neural language models achieve significantly better performance on complexity prediction, an extractive adversarial network that solves the two tasks jointly presents promising advantage in complexity explanation.",
"Using complexity prediction as a preliminary step reduces the error of the state-of-the-art text simplification models by a large margin.",
"Future work should integrate rationale extractor into the pre-trained neural language models and extend it for simplification generation.",
"This work is in part supported by the National Science Foundation under grant numbers 1633370 and 1620319 and by the National Library of Medicine under grant number 2R01LM010681-05."
] | [
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Semantic parsing is challenging due to the structure gap and the semantic gap between utterances and logical forms.",
"In this paper, we propose an unsupervised semantic parsing method Synchronous Semantic Decoding (SSD), which can simultaneously resolve the semantic gap and the structure gap by jointly leveraging paraphrasing and grammar-constrained decoding.",
"Specifically, we reformulate semantic parsing as a constrained paraphrasing problem: given an utterance, our model synchronously generates its canonical utterance 1 and meaning representation.",
"During synchronous decoding: the utterance paraphrasing is constrained by the structure of the logical form, therefore the canonical utterance can be paraphrased controlledly; the semantic decoding is guided by the semantics of the canonical utterance, therefore its logical form can be generated unsupervisedly.",
"Experimental results show that SSD is a promising approach and can achieve competitive unsupervised semantic parsing performance on multiple datasets.",
"Semantic parsing aims to translate natural language utterances to their formal meaning representations, such as lambda calculus (Zettlemoyer and Collins, 2005; Wong and Mooney, 2007), FunQL (Kate et al., 2005; Lu et al., 2008), and SQL queries.",
"Currently, most neural semantic parsers (Dong and Lapata, 2016; Chen et al., 2018b; Zhao et al., 2020; Shao et al., 2020) model semantic parsing as a sequence to sequence translation task via encoder-decoder framework.",
"1 Canonical utterances are pseudo-language representations of logical forms, which have the synchronous structure of logical",
"forms.(Berant and Liang, 2014; Xiao et al., 2016; Su and Yan, 2017; Cao et al., 2020)",
"Semantic parsing is a challenging task due to the structure gap and the semantic gap between natural language utterances and logical forms.",
"For structure gap, because utterances are usually word sequences and logical forms are usually trees/graphs constrained by specific grammars, a semantic parser needs to learn the complex structure transformation rules between them.",
"For semantic gap, because the flexibility of natural languages, the same meaning can be expressed using very different utterances, a semantic parser needs be able to map various expressions to their semantic form.",
"To address the structure gap and the semantic gap, current semantic parsers usually rely on a large amount of labeled data, often resulting in data bottleneck problem.",
"Previous studies have found that the structure gap and the semantic gap can be alleviated by leveraging external resources, therefore the re-liance on data can be reduced.",
"For structure gap, previous studies found that constrained decoding can effectively constrain the output structure by injecting grammars of logical forms and facts in knowledge bases during inference.",
"For example, the grammar-based neural semantic parsers (Xiao et al., 2016; Yin and Neubig, 2017) and the constrained decoding algorithm (Krishnamurthy et al., 2017).",
"For semantic gap, previous studies have found that paraphrasing is an effective technique for resolving the diversity of natural expressions.",
"Using paraphrasing, semantic parsers can handle the different expressions of the same meaning, therefore can reduce the requirement of labeled data.",
"For example, supervised methods (Be-rant and Liang, 2014; Su and Yan, 2017) use the paraphrasing scores between canonical utterances and sentences to re-rank logical forms; Two-stage (Cao et al., 2020) rewrites utterances to canonical utterances which can be easily parsed.",
"The main drawback of these studies is that they use constrained decoding and paraphrasing independently and separately, therefore they can only alleviate either semantic gap or structure gap.",
"In this paper, we propose an unsupervised semantic parsing method Synchronous Semantic Decoding (SSD) , which can simultaneously resolve the structure gap and the semantic gap by jointly leveraging paraphrasing and grammar-constrained decoding.",
"Specifically, we model semantic parsing as a constrained paraphrasing task: given an utterance, we synchronously decode its canonical utterance and its logical form using a general paraphrase model, where the canonical utterance and the logical form share the same underlying structure.",
"Based on the synchronous decoding, the canonical utterance generation can be constrained by the structure of logical form, and the logical form generation can be guided by the semantics of canonical form.",
"By modeling the interdependency between canonical utterance and logical form, and exploiting them through synchronous decoding, our method can perform effective unsupervised semantic parsing using only pretrained general paraphrasing model no annotated data for semantic parsing is needed.",
"We conduct experiments on GEO and OVERNIGHT .",
"Experimental results show that our method is promising, which can achieve competitive unsupervised semantic parsing performance, and can be further improved with external resources.",
"The main contributions of this paper are: We propose an unsupervised semantic parsing method Synchronous Semantic Decoding , which can simultaneously resolve the semantic gap and the structure gap by jointly leveraging paraphrasing and grammar-constrained semantic decoding.",
"We design two effective synchronous semantic decoding algorithms rule-level inference and word-level inference, which can generate paraphrases under the grammar constraints and synchronously decode meaning representations.",
"We now present overview of our synchronous semantic decoding algorithm, which can jointly leverage paraphrasing and grammar-constrained decoding for unsupervised semantic parsing.",
"Given an utterance, SSD reformulates semantic parsing as a constrained paraphrasing problem, and synchronously generates its canonical utterance and logical form.",
"For example in Fig. 2, given How many rivers run through Texas, SSD generates What is the number of river traverse State0 as its canonical form and Answer(Count(River(Traverse 2( State0)))) as its logical form.",
"During synchronous decoding: the utterance paraphrase generation is constrained by the grammar of logical forms, therefore the canonical utterance can be generated controlledly; the logical form is generated synchronously with the canonical utterance via synchronous grammar.",
"Logical form generation is controlled by the semantic constraints from paraphrasing and structure constraints from grammars and database schemas.",
"Therefore the logical form can be generated unsupervisedly.",
"To this end, SSD needs to address two challenges.",
"Firstly, we need to design paraphrasing-based decoding algorithms which can effectively impose grammar constraints on inference.",
"Secondly, current paraphrasing models are trained on natural language sentences, which are different from the unnatural canonical utterances.",
"Therefore SSD needs to resolve this style bias for effective canonical utterance generation.",
"chronous semantic decoding: rule-level inference and word-level inference.",
"Then we resolve the style bias of paraphrase model via adaptive fine-tuning and utterance reranking, where adaptive fine-tuning can adjust the paraphrase model to generate canonical utterances, and utterance reranking resolves the style bias by focusing more on semantic coherence.",
"In Sections 3-5, we provide the details of our implementation.",
"Given an utterance x , we turn semantic parsing into a constrained paraphrasing task.",
"Concretely, we use synchronous context-free grammar as our synchronous grammar, which provides a one-to-one mapping from a logical form y to its canonical utterance c y .",
"The parsing task y = arg max y Y p parse ( y | x ) is then transferred to y = arg max y Y p paraphrase ( c y | x ) .",
"Instead of directly parsing utterance into its logical form, SSD generates its canonical utterance and obtains its logical form based on the one-to-one mapping relation.",
"In following we first introduce the grammar constraints in decoding, and then present two inference algorithms for generating paraphrases under the grammar constraints.",
"Synchronous context-free grammar(SCFG) is employed as our synchronous grammar, which is widely used to convert a meaning representation into an unique canonical utterance (Wang et al., 2015; Jia and Liang, 2016).",
"An SCFG consists of a set of production rules: N (cid:104) , (cid:105) , where N is a non-terminal, and and are sequence of terminal and non-terminal symbols.",
"Each non-terminal symbol in is aligned to the same non-terminal symbol in , and vice versa.",
"Therefore, an SCFG defines a set of joint derivations of aligned pairs of utterances and logical forms.",
"SCFGs can provide useful constraints for semantic decoding by restricting the decoding space and exploiting the semantic knowledge: Grammar Constraints The grammars ensure the generated utterances/logical forms are grammar-legal.",
"In this way the search space can be greatly reduced.",
"For example, when expanding the non-terminal $r in Fig 2 we don't need to consider the words run and flow, because they are not in the candidate grammar rules.",
"Semantic Constraints Like the type checking in Wang et al. (2015), the constraints of knowledge base schema can be integrated to further re-fine the grammar.",
"The semantic constraints ensure the generated utterances/logical forms will be semantically valid.",
"One strategy to generate paraphrase under the grammar constraint is taking the grammar rule as the decoding unit.",
"Grammar-based decoders have been proposed to output sequences of grammar rules instead of words(Yin and Neubig, 2017).",
"Like them, our rule-level inference method takes the grammar rule as the decoding unit.",
"Figure 3",
"(a) shows an example of our rule level inference method.",
"i n W h a t t h a t s t a t e i s c it y0 l o ca t e d Answer(State($s)) Answer(State(Loc_1(city 0 ))) Answer(State(Loc_1(lake 0 ))) Answer(State(Loc_1(largest(city))))",
"(a) Rule-Level Inference $e $ s root What is $e Answer ($e) state $s State ($s) that $c located in Loc _1($c) $ c city0 City 0 l o ca t e d i n W h a t t h a t s t a t e i s l a r g e s t c it y c it y0 l a k e 0 t h e l o ca t e d located Answer(State($s)) Answer(State(Loc_1(city 0 ))) Answer(State(Loc_1(lake 0 ))) Answer(State(Loc_1(largest(city)))) Answer($e) $e $ s root What is $e Answer ($e) state $s State ($s) that $c located in Loc _1($c) $ c city0 City 0",
"(b) Word-Level Inference Figure 3: From the utterance which state is city0 in , two inference methods generate its canonical utterance what is state that city0 located in and its logical form Answer(State(Loc 1(City0))) .",
"The ways they handle non-terminal $c which is not at the end of utterance-side production rule are represented by purple lines.",
"We use beam search during the inference.",
"The inference details are described in Algorithm 1. 3.2.2 Word-Level Inference Except for rule-level inference, we also propose a word-level inference algorithm, which generates paraphrases word by word under the SCFG constraints.",
"Answer($e)",
"When the non-terminal in the utterance-side production rule is at the end of the rule (e.g., $e (cid:104) state $s , State( $s ) (cid:105) ), denoting the utterance-side production rule as r = [ w 1 , w 2 , ..., w L r , N ] , we can simply expand non-terminals in canonical utterances by this rule, and generate the canonical utterances from left to right with probabilities computed by:",
"(1) Otherwise, we generate the next production rules to expand this rule (i.e., rule with purple line), until",
"Firstly, we construct a deterministic automaton using LR(1) parser (Knuth, 1965) from the CFG in utterance side.",
"The automaton can transit from one state to another in response to an input.",
"The inputs of the automaton are words and the states of it are utterance/logical form segments.",
"LR(1) parser peeks ahead one lookahead input symbol, and the state transition table describes the acceptable inputs and the next states.",
"Then, in each decoding step we generate a word with a new state which is transited from previous state.",
"An example is shown in Figure 3",
"(b).",
"Only the acceptable words in the current state can be generated, and the end-of-sentence symbol can only be generated when reaching the final state.",
"Beam search is also used in this inference.",
"The above decoding algorithms only rely on a paraphrase generation model , which generates canonical utterance and logical form synchronously for semantic parsing.",
"We can directly use general paraphrase generation models such as GPT-2(Radford et al., 2019), T5(Raffel et al., 2020) for SSD.",
"However, as described in above, there exists a style bias between natural language sentences and canonical utterances, which hurts the performance of unsupervised semantic paring.",
"In this section, we describe how to alleviate this bias via adaptive fine-tuning.",
"Given a text generation model, after pretraining it using paraphrase corpus, we fine-tune it using synthesized (cid:104) sentence , canonical utterance (cid:105) pairs.",
"Previous studies have shown that the pretraining on synthesized data can significantly improve the performance of semantic parsing (Xu et al., 2020a; Marzoev et al., 2020; Yu et al., 2020; Xu et al., 2020b).",
"Specifically, we design three data synthesis algorithms: 1) CUs We sample CUs from SCFGs, and preserve executable ones.",
"As we do not have the paired sentences, we only fine-tune the language model of the PLMs on CUs.",
"2) Self Paras We use the trained paraphrase model to get the natural language paraphrases of the sampled canonical utterances to form (cid:104) sentence , canonical utterance (cid:105) pairs.",
"3) External Paras We also use external paraphrase methods such as back translation to get the pairs.",
"Adaptive fine-tuning resolves the style bias problem by fitting a better paraphrase model.",
"In this section, we propose an utterance reranking algorithm to further alleviate the style bias by reranking and selecting the best canonical form.",
"Given the utterance x and topN parsing results ( y n , c n ) , n = 1 , 2 , ..., N , we rerank all candidates by focusing on semantic similarities between x and c n , so that canonical utterances can be effectively selected.",
"Reranking for semantic parsing has been exploited in many previous studies (Berant and Liang, 2014; Yin and Neubig, 2019).",
"These works employ reranking for canonical utterances selection.",
"Differently, our re-ranker does not need labeled data.",
"Formally, we measure two similarities between x and c n and the final reranking score is calculated by: score ( x, c ) = log p ( c | x ) + s rec ( x, c ) + s asso ( x, c ) (2) Reconstruction Score The reconstruction score measures the coherence and adequacy of the canonical utterances, using the probability of reproducing the original input sentence x from c with the trained paraphrasing model: s rec ( x, c ) = log p pr ( x | c ) Association Score The association score measures whether x and c contain words that are likely to be paraphrases.",
"We calculate it as: s asso ( x, c ) = log | c | (cid:89) i =1 | x | (cid:88) j =0 p ( c i | x j ) a ( j | i ) + log | x | (cid:89) j =1 | c | (cid:88) i =0 p ( x j | c i ) a ( i | j ) (3) in which, p ( c i | x j ) means the paraphrase probability from x j to c i , and a ( j | i ) means the alignment probability.",
"The paraphrase probability and alignment are trained and inferred as the translation model in SMT IBM model 2. 6 Experiments 6.1 Experimental Settings Datasets We conduct experiments on three datasets: OVERNIGHT ( -DCS), GEO (FunQL), and GEOGRANNO , which use different meaning representations and on different domains.",
"Our implementations are public available 2 .",
"OVERNIGHT This is a multi-domain dataset, which contains natural language paraphrases paired with lambda DCS logical forms across eight domains.",
"We use the same train/test splits as Wang et al. (2015).",
"GEO (FunQL) This is a semantic parsing benchmark about U.S. geography (Zelle and Mooney, 1996) using the variable-free semantic representation FunQL (Kate et al., 2005).",
"We extend the FunQL grammar to SCFG for this dataset.",
"We follow the standard 600/280 train/test splits.",
"GEOGRANNO This is another version of GEO (Herzig and Berant, 2019), in which lambda DCS logical forms paired with canonical utterances are produced from SCFG.",
"Instead of paraphrasing sentences, crowd workers are required to select the correct canonical utterance from candidate list.",
"We follow the split (train/valid/test 487/59/278) in original paper.",
"Paraphrase Model We obtain the paraphrase model by training T5 and GPT2.0 on WikiAnswer Paraphrase 3 , we train 10 epochs with learning rate as 1e-5.",
"Follow Li et al. (2019), we sample 500K pairs of sentences in WikiAnswer corpus as training set and 6K as dev set.",
"We generate adaptive fine-tuning datasets proportional to their labeled datasets, and back-translation(from English 2 https://github.com/lingowu/ssd 3 http://knowitall.cs.washington.edu/ paralex Bas.",
"to Chinese then translate back) is used to obtain external paraphrases data.",
"On average, we sample 423 CUs per domain, and synthesize 847 instances per domain in Self Paras and 1252 in External Paras.",
"Unsupervised settings In unsupervised settings, we do not use any annotated semantic parsing data.",
"The paraphrase generation models are fixed after the paraphrasing pre-training and the adaptive fine-tuning.",
"The models are employed to generate canonical utterances and MRs synchronously via rule-level or word-level inference.",
"In rule-level inference, the leftmost non-terminators are eliminated by cyclically expanded and the maximum depth K is set to 5, the beam size is set to 20.",
"SSD uses T5 as the pre-trained language model in all the proposed components, including adaptive fine-tuning, reranking and the two decoding constraints.",
"Ablation experiments are conducted over all components with rule-level inference.",
"nonparallel data) Cao et al. (2020) have shown that external nonparallel data (including nonparallel natural language utterances and canonical utterances) can be used to build unsupervised semantic parsers.",
"For fair comparison, we also conduct unsupervised experiments with external unparallel data.",
"Specifically, we enhance the original SSD using the SAMPLES methods (Cao et al., 2020): we label each input sentences with the most possible outputs in the nonparallel corpus and use these samples as peusdo training data we denote this setting as SSD-SAMPLES .",
"Supervised settings Our SSD method can be further enhanced using annotated training instances.",
"Specifically, given the annotated (cid:104) utterance, logical form (cid:105) instances, we first transform logical form to its canonical form, then use them to further fine-tune our paraphrase models after unsupervised pre-training.",
"Baselines We compare our method with the following unsupervised baselines: 1) Cross-domain Zero Shot(Herzig and Berant, 2018), which trains on other source domains and then generalizes to target domains in OVERNIGHT and 2) GENOVERNIGHT (Wang et al., 2015) in which models are trained on synthesized (cid:104) CU, MR (cid:105) pairs; 3) We also implement SEQ 2S EQ baseline on the synthesized data as SYNTH-SEQ 2S EQ .",
"4) SYNTHPARASEQ 2S EQ is trained on the synthesized data and (cid:104) CU paraphrase, MR (cid:105) pairs, the paraphrases are obtained in the same way in Section 4.",
"The overall results of different baselines and our method are shown in Table 1 and Table 3 (We also demonstrate several cases in Appendix).",
"For our Bas.",
"1. By synchronously decoding canonical utterances and meaning representations, SSD achieves competitive unsupervised semantic parsing performance.",
"In all datasets, our method outperforms other baselines in the unsupervised settings.",
"These results demonstrate that unsupervised semantic parsers can be effectively built by simultaneously exploit semantic and structural constraints, without the need of labeled data.",
"2. Our model can achieve competitive performance on different datasets with different settings.",
"In supervised settings, our model can achieve competitive performance with SOTA.",
"With nonparallel data, our model can outperform Two-stage.",
"On GEO (FunQL) our model also obtains a significant improvement compared with baselines, which also verifies that our method is not limited to specific datasets (i.e., OVERNIGHT and GEOGRANNO , which are constructed with SCFG and paraphrasing.) 3. Both rule-level inference and word-level inference can effectively generate paraphrases under the grammar constraints.",
"The rule-level inference can achieve better performance, we believe this is because rule-level inference is more compact than word-level inference, therefore the rule-level inference can search wider space and benefit beam search more.",
"Effect of Decoding Constraints To analyze the effect of decoding constraints, we conduct ablation experiments with different constraint settings and the results are shown in Table 2: SEMANTIC denotes removing the semantic constraint, -GRAMMAR denotes all constraints are removed at the same time, the decoding is unrestricted.",
"We can see that the constrained decoding is critical for our paraphrasing-based semantic parsing, and both grammar constraints and semantic constraints contribute to the improvement.",
"Effect of Adaptive Fine-tuning To analyze the effect of adaptive fine-tuning, we show the results with different settings by ablating a fine-tuning corpus at a time (see Table 2).",
"We can see that adaptive fine-tuning can significantly improve the performance.",
"And the paraphrase generation model can be effectively fine-tuned only using CUs or Self Paras, which can be easily constructed.",
"Effect of Reranking To analyze the effect of reranking, we compare the settings with/without reranking and its upper bound Oracle, which can always select the correct logical form if it is within the beam.",
"Experimental results show that reranking can improve the semantic parsing performance.",
"Moreover, there is still a large margin between our method and Oracle, i.e., the unsupervised semantic parsing can be significantly promoted by designing better reranking algorithms.",
"Effect of Adding Labeled Data To investigate the effect of adding labeled data, we test our method by varying the size of the labeled data on OVERNIGHT from 0% to 100%.",
"In Fig. 4, we can see that our method can outperform baselines using the same labeled data.",
"And a small amount of data can produce a good performance using our method.",
"Effect of Pretrained Language Models To analyze the effect of PLMs, we show the results with different PLM settings: instead of T5 we use GPT-2 or randomly initialized transformers to construct paraphrasing models.",
"Experimental results show that powerful PLMs can improve the performance.",
"Powered by the language generation models to do semantic parsing, our method can benefit from the rapid development of PLMs.",
"Data Scarcity in Semantic Parsing.",
"Witnessed the labeled data bottleneck problem, many techniques have been proposed to reduce the demand for labeled logical forms.",
"Many weakly supervised learning are proposed (Artzi and Zettlemoyer, 2013; Berant et al., 2013; Reddy et al., 2014; Agrawal et al., 2019; Chen et al., 2020), such as denotation-base learning (Pasupat and Liang, 2016; Goldman et al., 2018), iterative searching (Dasigi et al., 2019).",
"Semi-supervised semantic parsing is also proposed(Chen et al., 2018a).",
"Such as variational auto-encoding (Yin et al., 2018), dual learning framework for semantic parsing (Cao et al., 2019), dual information maximization method (Ye et al., 2019), and back-translation (Sun et al., 2019).",
"One other strategy is to generate data for semantic parsing, e.g., Wang et al. (2015) construct a semantic parsing dataset from grammar rules and crowdsourcing paraphrase.",
"Guo et al. (2018) produce pseudo-labeled data.",
"Jia and Liang (2016) create new recombinant training examples with SCFG.",
"The domain transfer techniques are also used to reduce the cost of data collecting for the unseen domain (Su and Yan, 2017; Herzig and Berant, 2018; Lu et al., 2019; Zhong et al., 2020).",
"Goldwasser et al. (2011); Poon and Domingos (2009); Schmitt et al. (2020) leverage external resources or techniques for unsupervised learning.",
"Constrained Decoding.",
"After neural parsers model semantic parsing as a sentence to logical form translation task (Yih et al., 2015; Krishna-murthy et al., 2017; Iyyer et al., 2017; Jie and Lu, 2018; Lindemann et al., 2020), many constrained decoding algorithms are also proposed, such as type constraint-based illegal token filtering (Kr-ishnamurthy et al., 2017); Lisp interpreter-based method (Liang et al., 2017); type constraints for generating valid actions (Iyyer et al., 2017).",
"Paraphrasing in Semantic Parsing.",
"Paraphrase models have been widely used in semantic parsing.",
"ParaSempre (Berant and Liang, 2014) use paraphrase model to rerank candidate logical forms.",
"Wang et al. (2015) employ SCFG grammar rules to produce MR and canonical utterance pairs, and construct OVERNIGHT dataset by paraphrasing utterances.",
"Dong et al. (2017) use paraphrasing to expand the expressions of query sentences.",
"Compared with these methods, we combine paraphrasing with grammar-constrained decoding, therefore SSD can further reduce the requirement of labeled data and achieve unsupervised semantic parsing.",
"We propose an unsupervised semantic parsing method Synchronous Semantic Decoding, which leverages paraphrasing and grammar-constrained decoding to simultaneously resolve the semantic gap and the structure gap.",
"Specifically, we design two synchronous semantic decoding algorithms for paraphrasing under grammar constraints, and exploit adaptive fine-tuning and utterance reranking to alleviate the style bias in semantic parsing.",
"Experimental results show that our approach can achieve competitive performance in unsupervised settings.",
"We sincerely thank the reviewers for their insightful comments and valuable suggestions.",
"Moreover, this work is supported by the National Key Research and Development Program of China(No. 2020AAA0106400), the National Natural Science Foundation of China under Grants no. 61906182 and 62076233, and in part by the Youth Innovation Promotion Association CAS(2018141)."
] | [
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"result",
"other",
"other"
] |
[
"This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog.",
"Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog.",
"Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages.",
"Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions.",
"Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs.",
"Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.",
"Reading comprehension that challenges machine's ability to understand a document through question answering has gained lots of interests.",
"Most of the previous works for reading comprehension have focused on either children's stories (Richardson et al., 2013; Hill et al., 2016) or newswire (Hermann et al., 2015; Onishi et al., 2016).",
"Few approaches have attempted comprehension on small talks, although they are evaluated on toy examples not suitable to project real-life performance (Weston et al., 2015).",
"It is apparent that the main stream of reading comprehension has not been on the genre of multiparty dialog although it is the most common and natural means of human communication.",
"The volume of data accumulating from group chat or messaging continues to outpace data accumulation from other writing sources.",
"The combination of available and rapidly developing analytic options, a marked need for dialogue processing, and the disproportionate generation of data from conversations through text platforms inspires us to create a corpus consisting of multiparty dialogs and develop learning models that make robust inference on their contexts.",
"Passage completion is a popular method of evaluating reading comprehension that is adapted by several standardized tests (e.g., SAT, TOEFL, GRE).",
"Given a document and a passage containing factual descriptions about the contexts in the document, the task replaces keywords in the passage with blanks and asks the reader to fill in the blanks.",
"This task is particularly challenging when the document is in a form of dialog because it needs to match contexts between colloquial (dialog) and formal (passage) writings.",
"Moreover, a context that can be described in a short passage, say a sentence, tends to be expressed across multiple utterances in dialog, which requires discourse-level processing to make the full interpretation of the context.",
"This paper introduces a new corpus for passage completion on multiparty dialog (Section 3), and a deep learning architecture that produces robust results for understanding dialog contexts (Section 4).",
"Our experiments show that models trained by this architecture significantly outperform the previous state-of-the-art model using bidirectional LSTM, especially on longer dialogs (Section 5).",
"Our analysis highlights the comprehension of our models for matching utterances in dialogs to words in passages (Section 6).",
"To the best of our knowledge, this is the first time that the sentence completion task is thoroughly examined with a challenging dataset on multiparty dialog using deep learning models.",
"Hermann et al. (2015) introduced the CNN/Daily Mail dataset where documents and passages were news articles and their summaries respectively, and evaluated neural models with three types of readers.",
"Chen et al. (2016) proposed the entity centric model and the bidirectional LSTM model using attention, and conducted a thorough analysis on this dataset.",
"Trischler et al. (2016) presented the EpiReader that combined a reasoner with an extractor for encoding documents and passages using both CNN and RNN.",
"Dhingra et al. (2017) proposed the gated-attention reader that incorporated attention on multiplicative interactions between documents and passages.",
"At last, Cui et al. (2017) introduced the attention-over-attention reader that placed document-to-passage attention over passage-to-document attention.",
"Hill et al. (2016) released the Children Book Test dataset where documents were children's book stories and passages were excerpts from those stories.",
"Paperno et al. (2016) introduced the LAMBADA dataset comprising novels from the Book corpus.",
"Onishi et al. (2016) introduced the Who-did-What dataset consisting of articles from the LDC English Gigaword newswire corpus.",
"All corpora described above provide queries, that are passages where certain words are masked by blanks, for the evaluation of passage completion.",
"More datasets are available for another type of a reading comprehension task, that is multiple choice question answering, such as MCTest (Richardson et al., 2013), TriviaQA (Joshi et al., 2017), RACE (Lai et al., 2017), and SQuAD (Rajpurkar et al., 2016).",
"Unlike the other corpora where documents and passages are written in a similar writing style, they are multiparty dialogs and plot summaries in our corpus, which have very different writing styles.",
"This raises another level of difficulty to match contexts between documents and queries for the task of passage completion.",
"The Character Mining project provides transcripts of the TV show Friends for ten seasons in the JSON format.",
"1 Each season contains 24 episodes, each episode is split into 13 scenes, where each scene comprises a sequence of 21 utterances.",
"Chen et al. (2017) annotated the first two seasons of the show for an entity linking task, where personal mentions (e.g., she , mom , Rachel ) were identified by their corresponding characters.",
"Jurczyk and Choi (2017) collected plot summaries of all episodes for the first eight seasons to evaluate a document retrieval task that returned a ranked list of relevant documents given any sentence in the plot summaries.",
"For the creation of our corpus, we collect more plot summaries for the last two seasons of Friends from the fan sites suggested by Jurczyk and Choi (2017), generate passages for each dialog using the plot summaries and crowdsourced descriptions (Section 3.1), then annotate mentions of all characters in both the dialogs and the passages for passage completion (Section 3.2).",
"An episode consists of multiple scenes, which may or may not be coherent.",
"In our corpus, each scene is considered a separate dialog.",
"The lengths of the scenes vary from 1 to 256 utterances; we select only scenes whose lengths are between 5 and 25 utterances as suggested by the previous works (Chen and Choi, 2016; Jurczyk and Choi, 2017), which notably improves the readability for crowd workers, resulting higher quality annotation.",
"The plot summaries collected from the fan sites are associated with episodes, not scenes.",
"To break down the episode-level summaries into scene-level, they are segmented into sentences by the tokenizer in NLP4J.",
"2 Each sentence in the plot summaries 1 nlp.mathcs.emory.edu/character-mining 2 https://github.com/emorynlp/nlp4j 2040",
"(a) A dialog from Friends : Season 8, Episode 12, Scene 2.",
"Table 1: An example dialog and its passages and queries from our corpus.",
"All mentions are encoded by their entity IDs.",
"The queries are generated by replacing each unique entity in every passage with the variable x (Section 3.2).",
"@ent01: Joey, @ent02: Rachel, @ent03: Ross, @ent04: Neuman, @ent05: Paul.",
"is then queried to Elasticsearch that has indexed the selected scenes, and the scene with the highest relevance is retrieved.",
"Finally, the retrieved scene along with the queried sentence are sent to a crowd worker who is asked to determine whether or not they are relevant, and perform anaphora resolution to replace all pronouns in the sentence with the corresponding character names.",
"The sentence that is checked for the relevancy and processed by the anaphora resolution is considered a passage.",
"Out of 6,014 sentences collected from the plot summaries, 2,994 of them got turned into passages; in other words, about a half of the sentences could not be paired with relevant scenes by Elasticsearch.",
"In addition to these pseudo-generated passages, two more sets of passages are created.",
"For the first set, crowd workers are asked to generate new passages including factual descriptions different from the ones that are pseudo-generated.",
"This produced additional 615 passages; however, passages in this set could be biased toward the dominant characters.",
"To increase the diversity of the character entities in the passages, crowd workers are asked to generate the second set of passages that include factual descriptions related to only non-dominant characters.",
"A total of 1,037 passages are generated in this set, which makes passage completion even more challenging since the chance of the dominant characters being the answers becomes much lower with this second set.",
"Figure 1 shows the overview of passage generation.",
"Note that Amazon Mechanical Turk is used for all crowdsourcing.",
"For all dialogs and their passages, mentions are first detected automatically by the named entity recognizer in NLP4J (Choi, 2016) using the PERSON entity, then manually corrected.",
"For each passage including multiple mentions, a query is created for every mention by replacing it with the variable x : Rachel misses dating, so Joey offers to take Rachel out.",
"x misses dating, so Joey offers to take Rachel out.",
"Rachel misses dating, so x offers to take Rachel out.",
"Rachel misses dating, so Joey offers to take x out.",
"Following Hermann et al. (2015), all mentions implying the same character are encoded by the same entity ID.",
"A different set of entity IDs are randomly generated for each dialog; for the above example, Joey and Rachel may be encoded by @ent01 and @ent02 in this dialog (Table 1), although they can be encoded by different entity IDs in other dialogs.",
"This random encoding prevents learning models from overfitting to certain types of entities.",
"On the other hand, the same set of entity IDs are applied to the passages associated with the dialog.",
"One issue still remains that characters in this dataset are often mentioned by several aliases (e.g., nicknames, honorifics) such that it is not trivial to cluster mentions implying the same character using simple string matching.",
"Thus, an entity dictionary is created for each character whose key is the name of the character and the value is a list of aliases for the character, manually inspected throughout the entire show.",
"This entity dictionary is then used to link mentions in both the dialogs and the passages to their character entities.",
"Table 2 shows the overall statistics of our corpus.",
"It is relatively smaller than the other corpora (Sec-tion 2).",
"However, it is the largest, if not the only, corpus for the evaluation of passage completion on multiparty dialog that still gives enough instances to develop meaningful models using deep learning.",
"This section presents our deep learning architecture that integrates rich feature extraction from convolutional neural networks (CNN) into robust sequence modeling in recurrent neural networks (RNN) (Sec-tion 4.1).",
"The combination of CNN and RNN has been adapted by several NLP tasks such as text summarization (Cheng and Lapata, 2016), essay scoring (Dong et al., 2017), sentiment analysis (Wang et al., 2016), or even reading comprehension (Dhin-gra et al., 2017).",
"Unlike previous works that feed a sequence of sentences encoded by CNN to RNN, a sequence of utterances is encoded by CNN in our model, where each utterance is spoken by a distinct speaker and contains one or more sentences that are coherent in topics.",
"Our best model is optimized by both the utterance (Section 4.2) and the dialog (Section 4.3) level attentions, showing significant improvement over the pure CNN+RNN model.",
"Each utterance comes with a speaker label encoded by the entity ID in our corpus (Table 1).",
"This entity ID is treated as the first word of the utterance in our models.",
"Before training, random embeddings are generated for all entity IDs and the variable x with the same dimension d as word embeddings.",
"All utterances and queries are zero-padded to their maximum lengths m and n , respectively.",
"Given a query and a dialog comprising k -number of utterances, the query matrix Q R n d and the utterance matrix U i R m d are created using the 2042 word, entity, and variable embeddings i [1 , k ] .",
"For each U i , 2D convolutions are performed for 2-5 grams, where each convolution takes f -number of filters and the output of every filter is max-pooled, resulting a vector of the size f .",
"These vectors are concatenated to create the utterance embedding ~u i R 1 4 f , then the utterance embeddings are stacked to generate the dialog matrix D R k 4 f .",
"This dialog matrix is fed into a bidirectional LSTM consisting of two networks, LSTM d and LSTM d , that process the sequence of utterance embeddings in both directions.",
"In parallel, Q is fed into another bidirectional LSTM with LSTM q and LSTM q that process the sequence of word embeddings in Q .",
"Each LSTM returns two vectors from the last hidden states of LSTM and LSTM : ~h d = LSTM d (D) ~h d = LSTM d (D) ~h q = LSTM q (Q) ~h q = LSTM q (Q) All the outputs of LSTMs are concatenated and fed into the softmax layer that predicts the most likely entity for x in the query, where each dimension of the output layer represents a separate entity: O = softmax( ~h d ~h d ~h q ~h q ) predict(U 1 , . . . , U k , Q) = argmax(O) Figure 2 demonstrates our CNN+LSTM model that shows significant advantage over the pure bidirectional LSTM model as dialogs get longer.",
"Inspired by Yin et al. (2016), attention is applied to every word pair in the utterances and the query.",
"First, the similarity matrix S i R m n is created for each utterance matrix U i by measuring the similarity score between every word in U i and Q : S i [ r, c ] = sim(U i [ r, :] , Q[ c, :]) sim( x, y ) = 1 / (1+ k x y k ) The similarity matrix is then multiplied by the attention matrix A R n d learned during the training.",
"The output of this multiplication produces another utterance embedding U 0 i R m d , which is channeled to the original utterance embedding U i and generates the 3D matrix V i R 2 m d (Figure 3): U 0 i = S i A V i = U i (cid:11) U 0 i V i is fed into the CNN in Section 4.1 instead of U i and constructs the dialog matrix D .",
"The utterance-level attention is for the optimization of local contents through word similarities between the query and the utterances.",
"To give a global view to the model, dialog-level attention is applied to the query matrix Q and the dialog matrix D .",
"First, 1D convolutions are applied to each row in Q and D , generating another query matrix Q 0 R n e and dialog matrix D 0 R m e , where e is the number of filters used for the convolutions.",
"Q 0 is then multiplied to D 0 T , resulting another similarity matrix P R n m .",
"Furthermore, the sum of each row in P is concatenated to create ~p c R n 1 , and the sum of each column in P is also concatenated to create ~p r R 1 m : P = Q 0 D 0 T ~p c [ r ] = P mj =1 P [ r, j ] ~p r [ c ] = P nj =1 P [ j, c ] ~p Tc is multiplied to Q 0 and ~p r is multiplied to D 0 , producing the attention embeddings ~a q R 1 e 2043 Model Development Set Evaluation Set Org.",
"and ~a d R 1 e , respectively.",
"Finally, these attention embeddings are concatenated with the outputs of the LSTMs in Section 4.1 then fed into the softmax layer to make the prediction: ~a q = ~p Tc Q 0 ~a d = ~p r D 0 O = softmax( ~h d ~h d ~h q ~h q ~a d ~a q ) predict(U 1 , . . . , U k , Q) = argmax(O) Similar attentions have been proposed by Yin et al. (2016) and evaluated on NLP tasks such as answer selection, paraphrase identification, and textual entailment; however, they have not been adapted to passage completion.",
"It is worth mentioning that we have tried many other kinds of attention mechanisms and empirically found that the combination of these two attentions yields the best result for the passage completion task.",
"The Glove 100-dimensional pre-trained word embeddings (Pennington et al., 2014) are used for all experiments ( d = 100 ).",
"The maximum lengths of utterances and queries are m = 92 and n = 126 , and the maximum number of utterances is k = 25 .",
"For the 2/1D convolutions in Sections 4.1 and 4.3, f = e = 50 filters are used, and the ReLu activation is applied to all convolutional layers.",
"The dimension of the LSTM outputs ~h is 32, and the tanh activation is applied to all hidden states of LSTMs.",
"Finally, the Adam optimizer with the learning rate of 0.001 is used to learn the weights of all models.",
"Table 4 shows the dataset split for our experiments that roughly gives 80/10/10% for training/development/evaluation sets.",
"Most utterances in our corpus are relatively short except for a few ones so that padding all utterances to their maximum length is practically inefficient.",
"Thus, pruning is used for those long utterances.",
"For any utterance containing more than 80 words, that is about 1% of the entire dataset, stopwords are removed.",
"If the utterance still has over 80 words, all words whose document frequencies are among the top 5% in the training set are removed.",
"If the length is still greater than 80, all words whose document frequencies are among the top 30% in the training set are removed.",
"By doing so, we reduce down the maximum length of utterances from 1,066 to 92, which dramatically speeds up the modeling without compromising the accuracy.",
"The average number of utterances per dialog is 15.8 in our corpus, which is relatively short.",
"To demonstrate the model robustness for longer dialogs, three more datasets are created in which all dialogs have the fixed lengths of 25, 50, and 100 by borrowing utterances from their consecutive scenes.",
"The same sets of queries are used although models need to search through much longer dialogs in order to answer the queries for these new datasets.",
"The three pseudo-generated datasets as well as the original dataset are used for all our experiments.",
"Human performance is examined on the evaluation set of the original length using crowdsourcing.",
"The workers are presented with passages and the corresponding dialogs and asked to choose the answer from the list of entities that appear in the dialog.",
"For fair comparisons, the encoded input where the character names are replaced with the entity IDs are used for this evaluation as well, to minimize the bias from external knowledge.",
"Three models are used to establish comprehensible baseline results:",
"Majority This model picks the dominant entity in the dialog as the answer for each query.",
"Entity Centric This is our reimplementation of Chen et al. (2016)'s entity centric model.",
"Our entity centric model was evaluated on the CNN/Daily Mail dataset and showed a comparable result to the previous work.",
"Bi-LSTM This is the bidirectional LSTM model introduced by Chen et al. (2016), which outperforms their entity centric model by a large margin.",
"We use their implementation of this model; 3 the input to this model is a list of words across all utterances within the dialog.",
"All hyperparameters are tuned using the development set.",
"Table 3 shows the results from all models.",
"The human performance on the evaluation set is only 1.6+% higher than the best performing model, which on part shows the difficulty of the task.",
"It should be noted that character anonymization process makes it harder to for people to find the answer.",
"However, it also possible that some participants of the evaluation may enter the answer randomly (i.e the results may not truly reflect human perfor-mance).",
"Notice that the performance of the majority model on our dataset is similar to the ones in the CNN/Daily Mail dataset, which validates the level of difficulty in our corpus.",
"As expected, the entity centric model sets its performance in between the majority model and the other deep learning models.",
"For all of our models and Bi-LSTM, experiments are run three times with different random seeds and the accuracies are averaged.",
"The accuracy of Bi-LSTM reported on the CNN dataset is 72.4, which is similar to its performance on our dataset.",
"Our models coupled with both the utterance-level and the dialog-level attentions (CNN+LSTM+UA+DA) outperform all the other models except for the one on the development set of the original dataset.",
"Our models show significant advantage over Bi-LSTM as the length of the dialog gets larger.",
"3 github.com/danqi/rc-cnn-dailymail",
"peaks of CNN+LSTM+UA+DA and Bi-LSTM, respectively.",
"Although the accuracies between these models are very similar, our model converges in fewer epochs.",
"Figure 6 shows the learning curves from both models in 3 trials on the length-100 dataset.",
"Our models take fewer epochs to converge and the variance of performance across trials is smaller, implying that our models are not as sensitive to the hyperparameter tuning as Bi-LSTM.",
"Figure 7 depicts the dialog-level attention matrix, that is P in Section 4.3, for the example in Table 1.",
"The x -axis and y -axis denote utterances and words in the query, respectively.",
"Each cell represents the attention value between a word in the query and an utterance.",
"From this visualization, we see that query words such as misses , take , good , and time 2045 have the most attention from utterances as they are the keywords to find the answer entity.",
"The utterances 14, 15 and 17 that give out the answer also get relatively high attention from the query words.",
"This illustrates the effectiveness of the dialog-level attention in our model.",
"Table 5 shows the confusion matrix between Bi-LSTM and CNN+LSTM +UA+DA on the original dataset.",
"During the error analysis, it is noticed that Bi-LSTM is better at capturing exact string matches or paraphrases.",
"As shown by the first two examples in Table 6, it is clear that those queries can be answered by capturing just the snippets of the dialogs.",
"In the first example, x makes up his mind about something in the query matches @ent06 sets his mind on something in the dialog.",
"In the second example, query phrase the closet that x and @ent03 were in also has the exact string matchthe closet @ent18 and @ent03 were in in the dialog.",
"Although these cues are usually parts of sentences in long utterances, since Bi-LSTM is based on only words, it still is able to locate them correctly.",
"On the other hand, our model encodes each utterance and then feeds encoded vectors to LSTMs, so the high level representation of the cues are mixed with other information, which hinders the model's ability to find the exact string matches.",
"Our model is better at answering queries that require inference from multiple utterances.",
"As shown by the last two examples in Table 6, the cues to the answers distribute across several utterances and there is no obvious match of words or phrases.",
"In the third example, the model needs to infer that in the sentence (She reaches over to look at the label on the box), she refers to @ent18 and connect this information with the later utterance by @ent18 This is addressed to Mrs. @ent16 down-stairs in order to answer the query.",
"In the last example, finding the correct answer requires the model to interpret that the utterances What the hell was that?! and (They both scream and jump away.) reflect the outcome of startles , which is the verb in the query.",
"As dialogs become longer in the padded datasets, because of the utterance encoding procedure, our model's ability's ability to locate relevant part of dialog is not influenced as much, whereas it becomes much more difficult for Bi-LSTM to find the matches.",
"It is worth mentioning that besides the models presented in Section 4, the attention-over-attention reader was also experimented with our dataset, which outperformed various neural systems by a large margin on both the CNN news dataset and the Children Book Test dataset (Cui et al., 2017).",
"We first reimplemented their model and experimented on the CNN dataset and achieved similar results as reported in the previous paper.",
"We then experimented this model on our original length dataset.",
"However, even after an extensive hyperparameter turning on the development set, this model did not achieve results comparable to those of either Bi-LSTM or our models, so we did not make a further analysis on this model.",
"We introduce a new corpus consisting of multiparty dialogs and crowdsourced annotation for the task of passage completion.",
"To the best of our knowledge, this is the first corpus that can challenge deep learning models for passage completion on this genre.",
"We also present a deep learning architecture combining convolutional and recurrent neural networks, coupled with utterance-level and dialog-level attentions.",
"Models trained by our architecture significantly outperform the one trained by the pure bidirectional LSTM, especially on longer 2046 Model Query Dialog Bi-LSTM @ent12 says that once x makes up his mind Because you know as well as I do that once @ent06 about something, @ent06 will have xxx with it.",
"Association for Computational Linguistics, Berlin, Germany, pages 484494.",
"http://www.aclweb.org/anthology/P16-1046.",
"Jinho D. Choi.",
"2016.",
"Dynamic Feature Induction: The Last Gist to the State-of-the-Art.",
"In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies .",
"NAACL'16.",
"dialogs.",
"Our analysis demonstrates the comprehension of our model using the attention matrix.",
"For the future work, we will expand the annotation for more entity types and automatically link mentions with respect to their entities using an entity linker.",
"All our resources including the annotated corpus and source codes of the models are available at: https://github.com/emorynlp/ reading-comprehension ."
] | [
"objective",
"abstain",
"result",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"objective",
"objective",
"method",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other"
] |
[
"In common law, the outcome of a new case is determined mostly by precedent cases, rather than by existing statutes.",
"However, how exactly does the precedent influence the outcome of a new case?",
"Answering this question is crucial for guaranteeing fair and consistent judicial decision-making.",
"We are the first to approach this question computationally by comparing two longstanding jurisprudential views; Halsbury's, who believes that the arguments of the precedent are the main determinant of the outcome, and Goodhart's, who believes that what matters most is the prece-dent's facts.",
"We base our study on the corpus of legal cases from the European Court of Human Rights (ECtHR), which allows us to access not only the case itself, but also cases cited in the judges' arguments (i.e. the precedent cases).",
"Taking an information-theoretic view, and modeling the question as a case outcome classification task, we find that the prece-dent's arguments share 0 .",
"38 nats of information with the case's outcome, whereas prece-dent's facts only share 0 .",
"18 nats of information (i.e., 58 % less); suggesting Halsbury's view may be more accurate in this specific court.",
"We found however in a qualitative analysis that there are specific statues where Goodhart's view dominates, and present some evidence these are the ones where the legal concept at hand is less straightforward.",
"Legal systems around the world can be divided into two major categories (Joutsen, 2019): civil law systems, which rely predominantly on the rules written down in statutes, and common law systems, which rely predominantly on past judicial decisions, known as the precedent.",
"Within common law systems, jurisprudential scholars have pondered over the nature of precedent in law for at least a century (Halsbury, 1907).",
"Is it the judges' argumentation in the precedent, or is it the Figure 1: The text of ECtHR cases can be divided into facts , arguments and outcome .",
"claimants' specific individual circumstances that are the deciding factor in what becomes the law?",
"Here, we present a new information-theoretical methodology that helps answer this question.",
"In common law countries, statutes establish the general idea of the law, but the actual scope of the law is determined by the courts during a trial.",
"To keep case outcomes consistent and predictable in subsequent cases, judges are forced to apply the reasoning developed in prior cases with similar facts (precedent), to the facts of the new case under the doctrine of stare decisis (Duxbury, 2008; Lamond, 2016; Black, 2019).",
"This is done by identifying the ratio decidendi (the reasons for the decision) as opposed to the obiter dicta (that which is said in passing).",
"The distinction between ratio and obiter is an important one, since ratio is binding, whereas obiter is not.",
"This means that courts will only strive to remain consistent in upholding ratio, but can freely depart from the obiter.",
"But what does the ratio consist of?",
"There is no accepted overarching theory of precedent (Duxbury, 2008), but there are two tests of ratio.",
"On the one hand, Lord Halsbury (1907) claims that what is binding is the judge's reasoning and arguments.",
"For instance, by using a high degree of abstraction, judges can analogise physical and psychological pain.",
"A different view has been put forward by Goodhart (1930), who argues it is the analogy of the facts of the precedent and the case at hand, without the need for reasoning (e.g. comparing the pain caused by a knife to that caused by another instrument, requiring a far lower degree of abstraction).",
"These give rise to the two wellknown legal tests for ratio: Halsbury's test and Goodhart's test .",
"In this paper, we are the first to approach this problem from a data-driven perspective, using the European Court of Human Rights ( ECtHR ) 1 case law; see Figure",
"1. We build a citation network over this corpus in order to have access to many prece-dents' full text.",
"Training our model on either the facts or the arguments of the precedent, we can put Halsbury's and Goodhart's views to the test.",
"We cast this problem as an information-theoretic study by measuring the mutual information (Shannon and Weaver, 1962) between the case outcome and either the precedent facts or arguments.",
"We find that precedent arguments and case outcome share information to the degree of 0 .",
"38 nats, whereas facts and case outcome only share information to the degree of 0 .",
"18 nats (i.e., 58 % less).",
"We therefore observe thatat least for ECtHRHalsbury's view of the precedent is more accurate than that of Goodhart.",
"Despite the importance of the precedent in civil law, its operationalization remains shrouded in philosophical debate centred around how the precedent actually forms the binding law.",
"Jurisprudentially, we can think of this as searching for the ratio decidendi in the judgement, i.e. separating the ratio decidendi from the obiter dicta, or binding law from merely circumstantial statements.",
"It is the nature of ratio that distinguishes Halsbury's view from Goodhart's.",
"The case argument contains the judge's explanation of why the case is decided the way it is.",
"It incorporates knowledge of the precedent, facts of the case 1 European Court of Human Rights (ECtHR) is the court that adjudicates on cases dealing with the European Convention of Human Rights (ECHR).",
"and any new reasoning the judge might develop for the case itself.",
"We consider the intuitive position that a legal test is formulated by the argument that the judge put forward when deciding the case.",
"A legal test is by its nature part of the ratio and, thus, would be binding on all subsequent cases.",
"This is the position endorsed by Lord Halsbury (1907).",
"Under this conception of the ratio, it is the arguments that matter, becoming the law; the facts of the case are of secondary importance.",
"If a judge acts as Halsbury suggests they should extract the logic of the implicit legal test of the precedent, and attempt to largely ignore the specific facts of the case.",
"Halsbury's view remains the conventional view of the precedent to this day (Lamond, 2005).",
"In contrast, Goodhart (1930) observes that many cases do not contain extensive reasoning, or any reasoning at all; judges seem to decide the outcome without these.",
"Therefore, he claims that the facts of the case together with its outcome must form the ratio; otherwise, a hypothetical new case with the same facts as any given precedent could lead to a different outcome.",
"Duxbury (2008) observes that judges, when in disagreement with the precedent, concentrate on the facts of a previous case more than one would expect if Halsbury's hypothesis were fully correct.",
"Halsbury would predict that they should talk about the facts of previous cases as little as possible, and seek the most direct route to ratio in the form of argument, but they evidently do not.",
"A potential explanation is that, when disagreement arises, it is easier for judges to claim that the facts are substantially different, than to challenge the logic of the precedent, i.e. to overrule that case.",
"Overruling a previous judgement is a rare and significant legal event (Dunn, 2003; Spriggs and Hansford, 2001) because it threatens the stability of the legal system.",
"By concentrating on facts rather than running the risk of overruling, the judge can avoid this problem, including the threat of overruling her own previous judgement.",
"In support of this view, inspection of the argumentative part of the judgement reveals judges do not usually formulate legal tests of the kind Halsbury implies (Lamond, 2005).",
"Neither do judges usually search the precedent for such legal tests (Alexander and Sherwin, 2008).",
"Goodhart's position suggests that the precedent operates less as an enactment of rules, but more as reasoning by Figure 2: Our formulation of Halsbury's and Goodhart's tests as a classification task.",
"analogy; hence it is the good alignment between the facts of the two cases that leads to consistent outcomes.",
"Notation.",
"We denote the set of cases as C , writing each of its element as c .",
"The set of cases that form the precedent for case c are denoted P c C .",
"We will consider three main random variables in this work.",
"First, we consider O , a random variable that ranges over a binary outcome space O = { 0 , 1 } K , where K is the number of Articles.",
"An instance o O tells us which Articles have been violated.",
"Since o is a vector of binary outcomes for all Articles, we can index it as o k to get the outcome of a specific k th Article and we analogously index the random variable O k .",
"We will denote o c the outcome of a specific case c .",
"2 Next, we consider F , a random variable that ranges over the space of facts.",
"We denote the space of all facts as F = , where is a set of sub-word units and is its Kleene closure.",
"We denote an instance of F as f .",
"We will further denote the facts of a specific case c as f c .",
"Finally, we consider A , a random variable that ranges over the space of Arguments.",
"Analogously to facts, the space of all Arguments is A = .",
"An element of A is denoted 2 We note here that we overload the subscript notation in this paper.",
"Operationalising Halsbury and Goodhart.",
"In this work, we intend to measure the use of Halsbury's and Goodhart's views in practice, which we operationalise information-theoretically following the methodology proposed by Pimentel et al. (2019).",
"To test the hypothesis, we construct two collections of random variables, which we denote H and G .",
"We define an instance h c of random variable H as the union of arguments and outcomes for all precedent cases of c , i.e. (cid:83) c (cid:48) P c { a c (cid:48) , o c (cid:48) } .",
"We will denote the instance h when referring to it in the abstract (without referring to a particular case).",
"We analogously define instances of random variable G as g c = (cid:83) c (cid:48) P c { f c (cid:48) , o c (cid:48) } .",
"While the set-theoretic notation may seem tedious, it encompasses the essence of the distinction between Halsbury's and Goodhart's view: Each view hypothe-sises a different group of random variables should contain more information about the outcome O of a given case.",
"In terms of mutual information, we are interested in comparing the following: MI( O ; H | F ) , MI( O ; G | F ) (1) If MI( O ; H | F ) > MI( O ; G | F ) , then Halsbury's view should be more widely used in practice.",
"Conversely if the opposite is true, i.e. MI( O ; G | F ) > MI( O ; H | F ) , then Goodhart's view should be the one more widely used.",
"The MI is calculated by subtracting the outcome entropy conditioned on the case facts and either H or G from the outcome entropy conditioned on the facts alone.",
"Therefore, to compute the MI we need to compute the Halsbury's and Goodhart's conditional entropies first: H( O | H, F ) (2) = (cid:88) o,h,f p ( o, h, f ) log p ( o | h, f ) H( O | G, F ) (3) = (cid:88) o,g,f p ( o, g, f ) log p ( o | g, f ) as well as the entropy conditioned on the facts of the current case alone: H( O | F ) = (cid:88) o,f p ( o, f ) log p ( o | f ) (4) The conditional entropies above reflect the uncertainty (measured in nats) 3 of an event, given the knowledge of another random variable.",
"For instance, if G completely determines O , then H( O | G ) is 0 ; there is no uncertainty left.",
"Conversely, if the variables are independent, then H( O ) = H( O | G ) , where H( O ) denotes the unconditional entropy of the outcomes O .",
"We now note a common decomposition of mutual information that will help with the approximation: MI( O ; H | F ) = H( O | F ) H( O | H, F ) (5) MI( O ; G | F ) = H( O | F ) H( O | G, F ) (6) In this work, we consider the conditional probabilities p ( o | ) as the independent product of each Article's probability, i.e. (cid:81) Kk =1 p ( o k | ) .",
"Information-theoretically, then, they are related through the following equation: H( O | ) = K (cid:88) k =1 H( O k | ) (7) Following Williams et al. (2020), we further calculate the uncertainty coefficient (Theil, 1970) of each of these mutual informations.",
"These coeffi-cients are easier to interpret, representing the percentage of uncertainty reduced by the knowledge of a random variable: U( O | H ; F ) = MI( O ; H | F ) H( O | F ) (8) U( O | G ; F ) = MI( O ; G | F ) H( O | F ) (9) 4 Experimental Setup We choose to work with the ECtHR corpus for three reasons.",
"First, it can be treated as operating under precedential law, in the vein of common law countries.",
"This is not a given, as the ECtHR is an international court of highest appeal without a formal doctrine of stare decisis (Jacob, 2014), but there is nevertheless strong evidence that it is precedential.",
"This evidence comes from the court's own guidelines (ECtHR, 2014), but can also be found in the writings of a former judge of the ECtHR (Zupancic, 2016) and of legal scholars (Lupu and Voeten, 2010).",
"Second, there is existing research on the neural modeling of ECtHR case law we can build upon (Aletras et al., 2016; Chalkidis et al., 2019, 2020).",
"Third, the documents of the ECtHR 3 Nats are computed with ln , while bits use log 2 .",
"case law, unlike those of most other courts, textually separate the facts from the arguments , which is crucial for our experiments.",
"Case facts are descriptions of what had happened to the claimant before they went to the court; they include domestic proceedings of their case before it was appealed to the ECtHR as a form of a last resort.",
"They do not contain any reference to European Convention of Human Rights (ECHR) Articles or ECtHR case law.",
"Arguments on the other hand contain judges' discussion of ECHR articles and ECtHR case law in relation to the facts.",
"The ECtHR corpus has been scraped from the HUDOC 4 database and contains 11 , 000 cases reported in English (Chalkidis et al., 2019).",
"5 Judges decide for each Article of ECHR whether it has been violated with respect to the claimant's circumstances.",
"In the ECtHR corpus, each case therefore comes with a pre-extracted decision in form of a set of violated ECHR Article numbers.",
"We refer to this set as the outcome of a case.",
"Out of 30 Articles, 18 are from the Convention itself (Articles 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 18 , 34 , 38 , 41 , 46 ), while the rest ( 1 . 1 , 1 . 2 , 1 . 3 , 4 . 2 , 4 . 4 , 6 . 1 , 6 . 3 , 7 . 1 , 7 . 2 , 7 . 3 , 7 . 4 , 12 . 1 ) comes from the Protocols to the Convention.",
"For our experiment, we need a sub-corpus where each case has at least one outgoing citation where the full text is contained in our corpus.",
"In practice, there will be other outgoing citations we cannot resolve, for instance because the document is not in English or HUDOC happens not to contain them.",
"We also need our citations to be de-duplicated.",
"We create such a sub-corpus, which contains 9 , 585 documents (i.e., citing documents), with 94 , 167 in-corpus links (tokens) to 7 , 113 cases (types) and 65 , 495 out-of-corpus links to 22 , 328 types (cited documents).",
"We start from the original ECtHR split of 9 , 000 training, 1 , 000 validation and 1 , 000 test cases, and after citation filtering arrive at 7 , 627 training, 976 validation and 982 test cases.",
"For every citation, we extract the text under headings with regular expressions such as THE FACTS and THE LAW, labelling it as facts and arguments, respectively.",
"The mutual information values that we intend to analyse need to be approximated.",
"We follow Pi-4 HUDOC: https://hudoc.echr.coe.int/eng .",
"mentel et",
"al.'s (2019; 2021) methodology for this, approximating them as the difference between two cross-entropies: MI( O ; H | F ) H ( O | F ) H ( O | H, F ) MI( O ; G | F ) H ( O | F ) H ( O | G, F ) Indeed, although several estimates for the mutual information exist, McAllester and Stratos (2020) argues that estimating it as this difference is the most statistically justified way.",
"These conditional entropies are themselves approximated through their sample estimate.",
"For instance, we compute: H ( O | G, F ) 1 |C| (cid:88) c C log p ( o c | g c , f c ) (10) which is exact as |C| .",
"We note that the cross-entropy is an upper bound on the entropy, which uses a model p ( o | ) for its estimate.",
"The better this model, the tighter our estimates will be.",
"The only thing left to do now, is to obtain these probability estimates.",
"We thus model Halsbury's view as a classification task (see Figure 2) estimating the probability: p ( o | h, f ) = K (cid:89) k =1 p ( o k | h, f ) (11) We analogously model Goodhart's view as: p ( o | g, f ) = K (cid:89) k =1 p ( o k | g, f ) (12) Finally, we model the p of the model conditioned only on the facts of the case at hand as: p ( o | f ) = K (cid:89) k =1 p ( o k | f ) (13) These models can be approximated using deep neural networks as introduced in the next section.",
"We train deep neural networks on our training sets, using a cross-entropy loss function and a sub-gradient descent method.",
"Given the trained models, we can then answer if it is Halsbury's view or Goodhart's that is more widely used by the ECtHR judiciary.",
"All experiments are conducted using a LONGFORMER classifier (Beltagy et al., 2020).",
"6 The 6 Our code is available here: https://github.com/ valvoda/Precedent.",
"LONGFORMER is built on the same TRANSFORMER (Vaswani et al., 2017) architecture as BERT (Devlin et al., 2019), but allows for up to 4 , 096 tokens, using an attention mechanism which scales linearly, instead of quadratically.",
"We choose this architecture in particular as it achieves state-of-the-art performance in tasks similar to ours, e.g. on the IMDB sentiment classification (Maas et al., 2011) and Hyperpartisan news detection (Kiesel et al., 2019).",
"h = LONGFORMER ( g, f ) p ( o | g, f ) = ( W (1) ReLU( W (2) h ))",
"where h R d 1 is a high dimensional representation, W (1) RK d 2 and W (2) R d 2 d 1 are learnable parameters in linear projections, and is the sigmoid function.",
"Eq.",
"(14) will thus output a K -dimensional vector with the probabilities for all articles, by indexing this vector we retrieve the probabilities of the individual articles applying.",
"Due to resource limitations we set the models' hidden size to 50 and batch size to 16 , and also truncate individual cases to 512 tokens.",
"For the models p ( o k | g, f ) and p ( o k | h, f ) , which are trained on the combination of f and either h or g , we concatenate cases to the maximum length of 1 , 024 tokens (as exemplified in Figure 2).",
"While we do not fully utilise the 4 , 096 word limit of the LONGFORMER , we are able to process twice as many tokens as standard BERT without pooling; memory limitations prevent us from using the full 4 , 096 tokens, though.",
"Our LONGFORMER models are implemented using the Pytorch (Paszke et al., 2019) and Huggin-face (Wolf et al., 2020) Python libraries.",
"We train all our models on 4 Nvidia P 100 16 GiB GPU's for a maximum of 6 hours using LONGFORMER -base model.",
"Our results are reported in terms of the models cross entropy.",
"Our experimental results are contained in Table",
"1. We first note that both our mutual information estimates are statistically larger than zero, i.e. Goodhart's and Halsbury's cross-entropies are statistically smaller than that of the Facts.",
"7 The question we asked ourselves at the outset, though, concerns whether the data supports Halsbury's or Goodhart's view.",
"We find that our estimate of MI( O ; H | F ) is significantly larger at 0 .",
"31 nats than our estimate of MI( O ; G | F ) at 0 .",
"18 nats.",
"These results suggest that the information contributed by the precedent arguments give us nearly 58 % more information about the outcome of the case than the information contained in the facts of the precedent.",
"In terms of the uncertainty coefficient, the outcome entropy is reduced by 6% for facts and by 10% for arguments.",
"We therefore observe that Halsbury's view is more widely used in the domain of ECtHR than Goodhart's.",
"A more nuanced story can be told if we inspect the individual Articles even though the small number of cases per Article does not allow for conclusive significance tests.",
"The core rights of the Convention are contained in (Articles 2 18 ).",
"8 Figure 3 shows that for some of the core Articles, we see the opposite effect from what we observed 7 We measure significance using the two tailed paired permutation tests with p < 0 .",
"05 after Benjamini and Hochberg's (1995) correction.",
"8 The Convention Section 1 is the first section of the ECHR and elevates some of the Universal Declaration of Human Rights principles into actionable rights of European citizens (Schindler, 1962).",
"for the entirety of Articles, namely that facts outperform arguments, in particular for Articles 2 , 4 , 9 , 11 , 13 , 18 .",
"We hypothesise that the reason for this is either that the judges have not yet developed a functional legal method for these Articles, that the relevant precedent has been placed late in the list of precedents (and thus was truncated away by our method-ology), or that the complexity of the arguments requires a reasoning ability our models are simply not capable of.",
"We consider each hypothesis separately below.",
"For some Articles, it is more difficult to develop a legal method than for others because the logic of the argument is elusive for some reason.",
"This holds, for instance, for Articles encoding a vague concept such as right to life, cf.",
"the discussion below.",
"If a case deals with such an Article, the argument of a potential precedent will be less useful to determine the outcome.",
"We hypothesise that in such a case the judges will be more willing to depart from the logic of past cases, which they might perceive as unsatisfactory in search of a better legal reasoning.",
"However, judges strive to maintain consistency between decisions as their authority is based on this consistency.",
"Under these conditions, a judge might take the approach of trying to find precedent cases that match the current case in terms of facts even if not in terms of logic.",
"Case law dealing with such Articles would therefore be more likely to follow Goodhart's view.",
"To support or disprove this hypothesis would require an in-depth legal analysis far beyond the scope of this paper; one would need to robustly argue why judges find it relatively more difficult to develop legal reasoning for certain articles.",
"However, looking at the Articles where our data indicate that Goodhart's view is the one more widely used, it seems to us that they indeed concern legal concepts that are more slippery than others, which we categorised as follows.",
"We can contrast Articles 2 and 4 , where judges follow Goodhart's view, to Article 3 , for which judges follow Halsbury's view instead, see Table",
"2. All three Articles are concerned with the fundamental respect of human life, and we therefore consider them together as the corporal Articles.",
"Article 2 : Right to Life prohibits the intentional Goodhart Halsbury Art H ( O k | F ) MI U MI U 2 0.065 0.014 21.97% 0.010 15.27% 3 0.272 0.028 10.15% 0.047 17.23% 4 0.028 0.020 71.26% 0.011 39.27% 5 0.275 0.019 7.05% 0.021 7.53% 6 0.493 0.042 8.50% 0.089 17.95% 7 0.024 -0.003 -12.01% -0.000 -1.52% 8 0.298 0.063 21.15% 0.084 28.33% 9 0.022 0.005 23.14% -0.003 -15.74% 10 0.173 0.003 1.92% 0.034 19.90% 11 0.074 0.018 24.29% -0.004 -5.66% 12 0.006 -0.001 -11.09% 0.003 46.60% 13 0.235 -0.000 -0.10% -0.006 -2.38% 14 0.071 -0.005 -7.30% -0.005 -7.28% 18 0.031 -0.003 -10.00% -0.007 -24.01% Table 2: The cross-entropy H , mutual information MI and uncertainty coefficient U results of each of the core ECHR Articles.",
"deprivation of life, save for circumstances where it is a penalty for a crime, in defence, during an arrest, or riot suppression .",
"In the context of the criminal code of Europe, this is a very restricted prohibition.",
"Every country already encodes these rules.",
"On the other hand, it raises the difficult issues of beginning and end of life.",
"Is Article 2 for or against abortion (Cosentino, 2015)?",
"What is its stance on euthanasia (Hendriks, 2019)?",
"Developing a legal test for Article 2 seems very hard indeed.",
"Similarly, Article 4 : Prohibition of slavery and forced labour , excludes work forced in detention, compulsory military service, any service during emergency or normal civic obligations.",
"Due to the large number of exceptions to the general rule it seems very hard to establish what exactly this Article does prohibit.",
"Let us compare these to Article 3 : Prohibition of torture , where Halsbury's view prevails.",
"This Article simply states that no one shall be subjected to torture or inhuman or degrading treatment or punishment .",
"No exceptions are given.",
"It seems much easier to develop a legal test for Article 3 than for Articles 2 and 4 .",
"The judges are free to establish what constitutes torture; whereas when it comes to Articles 2 and 4 , they are facing many restrictionsboth legal and political.",
"Articles 8 , 9 , 10 , 11 and 12 as the Articles broadly",
"concerning belief, family and religion.",
"The two outliers here are Articles 9 and 11 .",
"Article 9 provides the freedom of thought, conscience and religion , Article 11 provides the freedom of assembly and association .",
"For both Articles, Goodhart's test outperforms Halsbury's.",
"Just like above, the nature of Articles 9 and 11 seems more complicated compared to Article 8 , which is similar, but narrower in scope: Right to respect for private and family life , Article 10 : Freedom of expression and Article 12 : Right to marry .",
"We would argue that since Articles 8 and 12 provide a right as opposed to a freedom , they define more narrowly the obligation on the part of the State.",
"Compared to the freedom of thought and association (Articles 9 and 11 ), the right to marry and the right to privacy (Article 8 and 12 ) seem to be more concrete and testable obligations.",
"We can further view Article 10 : freedom of expression , as dealing with an action brought about by the exercise of Article 9 : freedom of thought .",
"While similar in concept, regulating speech seems far easier in practice than regulating thought.",
"Finally, an inspection of the ECHR guidelines to Article 11 reveals that judges seem to be often torn between Articles 10 and 11 .",
"9 This is because much of the cases dealing with Article 11 concern themselves with disentangling what constitutes an expression during an assembly and conversely which assembly is a form of an expression.",
"Many cases deal with the question of religious gathering as an assembly.",
"This is obviously not an easy position for a judge to divine a legal test for, and perhaps a good reason for turning to the facts of the precedent cases for consistency instead.",
"There is a group of Articles in the last quarter of Figure 3 ( 13 , 14 , 18 ) for which neither Goodhart's nor Halsbury's view seem to hold.",
"We speculate that the reason for this is that these Articles never appear alone, and instead always appear in conjunction with another Article, and also that they appear late in the list of precedents, so get truncated with our methodology.",
"Articles 13 : Right to an effective remedy , 14 : Prohibition of discrimination and 18 : Limitation on use of restrictions on rights , are designed to 9 Article 11 guidance: https://www.echr.coe.",
"ensure that states provide remedy for their wrongdoing, equal access to the rights, and do not use the restrictions in Articles for Human Rights abuse.",
"To claim any one of these Articles, the claimant will also have to claim a violation of one of the primary Articles as their core grievance for which they seek the remedy or equal treatment, for instance Article 3 : Prohibition of torture .",
"This means that any case dealing with Articles 13 , 14 and 18 is likely to focus on the violation of that primary right.",
"While there might be a precedent present for the secondary Articles, the probability is high that our models will not have the chance to train on them because they appear late and because our method truncates text due to computational complexity reasons.",
"This could explain why for these Articles, all our models trained on the precedent cases under-perform when compared to the models trained on the facts of the case alone.",
"Another possible explanation for the different behaviour between Articles could lie within the limitations of the neural architecture.",
"There could be a model bias for facts in precedent since they are more similar to the facts at hand as opposed to the arguments.",
"If this is the case our results understate the value of arguments.",
"While this is a concern, the overall results of our paper would not change even if we could remove this bias since we find arguments more important than facts despite this potential handicap.",
"On a more nuanced level, Articles 2 and 4 above might require a higher level of reasoning than their Article 3 counterpart.",
"So while the judges might have developed a satisfying legal test for them, our models simply aren't able to learn it.",
"For example for Article 7 : No punishment without law , our precedent models fail to learn any additional information from the precedent facts or arguments.",
"This might simply be the result of an insufficient representation of Article 7 in training cases, or of its appearance truncated out of the input.",
"However it also raises the question of what a TRANSFORMER model can learn.",
"The nascent field of BERTology has explored exactly this question (Rogers et al., 2020; Pimentel et al., 2020).",
"In particular the work of Niven and Kao (2019), examining BERT performance on the English Argument Reasoning Comprehension Task (Habernal et al., 2018), suggest that instead of BERT being able to reason, it is merely very good at utilising the artefacts in the data when compared to previous approaches.",
"As Bender and Koller (2020) contend a system can't ever learn meaning from form alone.",
"According to their view, description of the case facts alone will never fully capture the reality of the world the claimant inhabits.",
"On the other hand, there is some evidence towards transformers being able to reason over simple sentences Clark et al. (2020).",
"While this is encouraging, legal documents are far more complicated than the simple sentences considered in the study above.",
"Either way, the models' ability to reason in the way a human lawyer would is certainly limited and could explain the diminished performance for the more complicated Articles.",
"In this section, we contextualise our work with relation to the related research on legal AI.",
"Computational approaches to solving legal problems go back at least as far as the late 1950's (Kort, 1957; Nagel, 1963).",
"Early research has focused on crafting rule-based systems for case outcome prediction, achieving human-like performance by the early 2000's (Ashley, 2017).",
"These systems however proved too brittle to keep up with the ever-changing legal landscape and never transitioned from research into industry.",
"More recently, a new wave of deep learning methods has reinvigorated the research interest in legal AI.",
"The majority of this new work has been conducted on statutory legal systems which do not rely on the doctrine of precedent to nearly the same extent as their common law counterparts.",
"For instance, in Chinese law the use of neural models for case outcome classification has already been investigated extensively (Hu et al., 2018; Zhong et al., 2018; Xu et al., 2020).",
"In the precedential legal domain, smaller corpora of annotated cases have been investigated over the years (Grover et al., 2003; Valvoda et al., 2018).",
"However, large-scale corpora necessary for deep learning architectures have become available only recently.",
"The Caselaw Access Project 10 introduced a large dataset of American case law in 2018.",
"Aletras et al. (2016) have introduced the ECtHR corpus, and Chalkidis et al. (2019) have run deep neural networks on it in order to predict outcome.",
"Similarly, the Canadian Supreme Court Case corpus has been used in infor-10 Caselaw Access Project:, https://case.law mation retrieval for the first time by Rabelo et al. (2020).",
"This improved access to a high quality common law datasets has opened up a potential for new work in the field of legal AI.",
"Particularly similar to our work is the study done by Sim et al. (2016), who have considered the influence of petitioners and responders (amicus) briefs on the US Supreme Court decision and opinions.",
"In this paper, we have shifted the focus of legal AI research from practical tasks such as precedent retrieval or outcome prediction, to a theoretical question: which aspect of the precedent is most important in forming the law?",
"To this end, we trained a similar neural modeling approach as Chalkidis et al. (2019) to predict the outcome of a case on the ECtHR dataset, and inspected the difference in the mutual information between our operationalisa-tions of Halsbury's and Goodhart's view.",
"We have used a method inspired by Pimentel et al. (2019) to approximate the MI .",
"We observe that out of the two archetypal views on precedent, that of Halsbury and Goodhart, the former has a better empirical support in the domain of ECtHR case law.",
"This study has demonstrated a novel method of approaching jurisprudential questions using the information-theoretic toolkit.",
"We hope that future work can leverage our methodology towards answering other questions of legal philosophy.",
"However, our results are not only of an interest in the context of legal theory, but they can also inform a development of better legal models in practice.",
"Since most precedential reasoning is conducted using the arguments in the precedent, outcome prediction models should take advantage of the case arguments, instead of relying solely on the facts.",
"While our work is not concerned with a legal application, it is important to note that the results presented here are qualified by the limitations of contemporary NLP models' ability to process language.",
"It should therefore serve as no indication that judges could (or should) be replaced by models or techniques discussed in this paper.",
"We are grateful to Prof. Ken Satoh for all the fruitful discussions leading towards this paper.",
"We further thank the National Institute of Informatics (NII) Japan and Huawei research UK for their fi-nancial support enabling this research."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"result",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"In order to better understand the rationale behind model behavior, recent works have exploited providing interpretation to support the inference prediction.",
"However, existing methods tend to provide human-unfriendly interpretation, and are prone to sub-optimal performance due to one-side promotion, i.e. either inference promotion with interpretation or vice versa.",
"In this paper, we propose a multi-level M utual P romotion mechanism for self-evolved I nference and sentence-level I nterpretation (MPII).",
"Specifically, from the model-level, we propose a Step-wise Integration Mechanism to jointly perform and deeply integrate inference and interpretation in an autoregressive manner.",
"From the optimization-level, we propose an Adversarial Fidelity Regularization to improve the fidelity between inference and interpretation with the Adversarial Mutual Information training strategy.",
"Extensive experiments on NLI and CQA tasks reveal that the proposed MPII approach can significantly outperform baseline models for both the inference performance and the interpretation quality.",
"1 1 Introduction Recently, the interpretability of neural networks has been of increasing concern.",
"In order to break the black-box of neural networks, many works explore the interpretability of neural networks through providing interpretations to support their inference results (Ribeiro et al., 2016; Chen et al., 2018; Liu et al., 2019; Thorne et al., 2019; Kumar and Talukdar, 2020).",
"Although prior works have made some progress towards interpretable NLP, they tend to provide interpretations that lack human-readability.",
"Existing interpretable models usually extract promiWork was done during internship at Microsoft Research 1 Our code is available at https://github.com/ theNamek/MPII.git Premise: People walk through a store.",
"nent features or select input key words as explanations, such as attention distribution (Xu et al., 2015), heatmap (Samek et al., 2017), alignment rationale (Jiang et al., 2021), gradients (Li et al., 2016), magnitude of hidden states (Linzen et al., 2016), etc.",
"Considering readability and comprehensibility for humans, some works turn to generate token-level explanations (Liu et al., 2019; Thorne et al., 2019), which are nevertheless prone to cause ambiguity.",
"Figure 1 shows some prevalent forms of interpretations in NLI task.",
"Obviously, human language interpretations seem more acceptable than those chaotic maps, whether it is heatmap or alignment map.",
"As for the token-level interpretation, several discrete tokens without any logical links are vague and ambiguous.",
"Moreover, Thorne et al. (2019) observed that token-level methods tend to predict common tokens (e.g. people, man, dog) 7074 rather than keywords.",
"Intuitively, human language sentence-level interpretations containing reasoning logic are the best form for human to understand.",
"With annotated natural language interpretation datasets available (Camburu et al., 2018; Rajani et al., 2019), methods of generating sentence-level interpretation have been explored recently.",
"Cam-buru et al. (2018) proposed to first generate interpretation and then predict the label only based on the generated interpretation.",
"Kumar and Talukdar (2020) proposed to first generate sentence-level interpretations with deep pre-trained language models (such as BERT and GPT), then fed those interpretations as extra knowledge to help improve inference performance.",
"We notice that these methods only include one-side promotion: utilizing information contained in interpretation to improve inference, while ignoring the other-side promotion: using inference logic to enhance interpretation.",
"As claimed in Kumar and Talukdar (2020) that their one-side promotion improves predictions' faithfulness to generated interpretations, then the other-side should be able to improve interpretation's faithfulness to inference process.",
"This has aroused our thinking: Can we deeply fuse these two relevant tasks with ingenious combination skills and achieve mutual promotion for inference and interpretation?",
"In this paper, we propose a multi-level Mutual Promotion mechanism for self-evolved Inference and sentence-level Interpretation (MPII).",
"Specifically, from the model-level, we propose a Stepwise Integration Mechanism (SIM) to iteratively update the inference prediction and generate an interpretation token at each decoding step, and deeply integrate hidden representations of the prediction and the token with two fusion modules.",
"In this way, the model learns to refine the inference conclusion as the interpretation proceeds, and the inference procedure can in turn guide the generation of interpretation at each decoding step.",
"From the optimization-level, we propose an Adversarial Fidelity Regularization (AFiRe) to improve the fidelity between inference and interpretation with the Adversarial Mutual Information (AMI) method (Pan et al., 2020), which extends the maximum mutual information optimization objective with the idea of generative adversarial network (Goodfel-low et al., 2014).",
"With this training framework, the model is trained against a smart backward network that learns to reward the inference prediction and interpretation of fidelity, which ensures faithfulness and makes the derived interpretation depict the true profile of how the model works (Jiang et al., 2021).",
"To verify the effectiveness of MPII, we conduct extensive experiments on two inference tasks: Natural Language Inference (NLI) task and Commonsense Question Answering (CQA) task.",
"Experiment results reveal that compared with baseline models, our method can achieve mutual promotion on both model inference performance and sentence-level interpretation quality.",
"Meanwhile, through providing simultaneous inference prediction and human-comprehensible interpretation with deep integration mechanism and adversarial training strategy, our model can perform inference and interpretation of fidelity and generate more robust explanations.",
"Main contributions of this work include: Different from the previous works that only include one-side promotion, we mutually promote the inference and sentence-level interpretation from both the model-level and the optimization-level.",
"We propose a Stepwise Integration Mechanism to tightly fuse latent prediction and interpretation information at every decoding step, and an Adversarial Fidelity Regularization to further improve the fidelity with the adversarial training strategy.",
"Experiment results show that our method achieves significant improvement in both inference accuracy and interpretation quality compared with baseline models.",
"In this section, we introduce Stepwise Integration Mechanism (SIM) and Adversarial Fidelity Regularization (AFiRe) in details.",
"Utilizing the autoregressive nature of Transformer decoder, SIM enables deep interaction at every decoding step between inference and interpretation.",
"With the adversarial training strategy, AFiRe enables further integration of latent semantic information between inference and interpretation, and also improves the quality of explanation sentences by bringing them closer to human expressions.",
"Transformer model (Vaswani et al., 2017) has been firmly established as the dominant approach in text generation tasks, we therefore adopt the Transformer model as backbone.",
"Given a sequence of tokens as input X = { x 0 , x 1 , ..., x m } (e.g. for NLI: 7075 Output Embedding Masked Multi-Head Attention Add & Norm Feed Forward Add & Norm Multi-Head Attention Add & Norm Linear Softmax MLP Argmax Gate2 Softmax Argmax Gate1 Positonal Encoding Prediction Explanation Input Embedding Multi-Head Attention Add & Norm Feed Forward Add & Norm Positonal Encoding Inputs MLP MLP Figure 2: The overall architecture of our model. Both prediction label and explanation token are generated at every decoding step. Two fusion gates are attached to enable deep interaction of their hidden representations. X = { [ CLS ] + Premise + [ SEP ] + Hypothesis } , for CQA: X = { [ CLS ] + Question + [ SEP ] + Answers } ), Transformer encoder produces a sequence of continuous vectors H enc .",
"Conditioned on H enc , on each decoding step, Transformer decoder takes the embedding of words generated by previous steps as input and predicts the word for current step.",
"With ground truth prediction L and explanation E from human-annotated dataset, the interpretable model is required to generate prediction L and explanation sentence E = { e 0 , e 1 , ..., e n } simultaneously.",
"Prevalent interpretable models share the same encoder and separately adopt a MLP and a decoder to generate predictions and explanations.",
"We analogously adopt the standard Transformer encoder, but apply Stepwise Integration Mechanism to deeply integrate standard MLP and Transformer decoder at every decoding step to simultaneously produce predictions and explanations.",
"As depicted in Figure 2, at decoding step t , decoder takes the last generated token e t 1 and the predicted label l t 1 at previous step as input.",
"At the first decoding step, we pass the encoder hidden state corresponding to [CLS] token into MLP to get the l 0 .",
"We project the label l t 1 with Multi-Layer Perceptrons (MLP) and obtain v pt 1 , which represents the previous step prediction information.",
"We then fuse the prediction information v pt 1 and the explanation token e t 1 with gate mechanism.",
"The gate probability at t step is computed by: p t = ReLU( W 1 [ Emb l t 1 ; Emb e t 1 ] + b 1 ) (1) p t = ( W 2 p t + b 2 ) (2) where ; means concatenation, W 1 , W 2 , b 1 and b 2 are trainable parameters.",
"ReLU( ) here denotes the ReLU activation function (Nair and Hinton, 2010), ( ) represents the sigmoid function.",
"We fuse the prediction and interpretation information as below: Emb t = p t Emb l t 1 + (1 p t ) Emb e t 1 (3) where Emb t contains the information of prediction and the overall explanation sub-sequence generated in all previous steps.",
"We utilize the stack of masked self-attention layers f sa used in Transformer decoder to compute the decoder hidden states: { h 0 , h 1 ,..., h t } = f sa ( { Emb 0 , Emb 1 ,..., Emb t } ) (4) 7076 The attention vector referring to the source sequence is computed with multi-head attention: v t = f mha ( H enc , h t ) (5) where H enc represents the encoder hidden states, f mha denotes the multi-head attention module.",
"The v t is further passed into a fully connected layer followed with softmax function to obtain the vocabulary distribution of generated explanation token e t at t step: e t = argmax ( softmax ( Wv t + b )) (6) where W and b are both trainable parameters.",
"The gate mechanism is then used to integrate the explanation information to update the prediction information: p t = (MLP 1 ([ Emb l t 1 ; MLP 2 ( v t )])) (7) where the two MLP( ) use different parameters.",
"We apply the residual connection (He et al., 2016) here, which is easier to optimize in the scenario of many decoding steps.",
"This is similar to the gate mechanism used in Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) that learns to remember important information obtained on each decoding step.",
"At the last decoding step, the model deduces the eventual decision: L = argmax ( softmax ( Emb l n )) (9) where n is the length of the generated explanation E .",
"With this setting, both prediction and explanation are updated at every decoding step.",
"The step-by-step explanation helps the model to do better inference, and the stepwise inference in turn guides the generation of better explanation.",
"From the level of optimization objective, we further introduce the Adversarial Fidelity Regularization (AFiRe) to improve the fidelity of inference and interpretation.",
"We leverage the Adversarial Mutual Information (AMI) method (Pan et al., 2020) to extend the maximum mutual information objective among input, inference prediction and the generated explanation with the idea of generative adversarial network (Goodfellow et al., 2014).",
"Compared to the maximum likelihood estimation (MLE) objective, maximum mutual information (MMI) objective encourages the model to generate the prediction and explanation that are more faithful to the input (Kinney and Atwal, 2014; Stratos, 2019).",
"The mutual information I ( X, L, E ) among the input X , inference label L and explanation E is formulated as: I ( X, L, E ) = EP ( X,L,E ) (cid:20) log P ( X, L, E ) P ( X ) P ( L, E ) (cid:21) = H ( X ) H ( X | L, E ) where H denotes the entropy.",
"Because of the intractability of directly estimating the mutual information in high-dimensional space, we approximate the optimization objective with a Variational Information Maximization lower bound (Chen et al., 2016b; Zhang et al., 2018; Poole et al., 2019): I(X, L, E) = H(X) + EP ( X,L,E ) [log P ( X | L, E )] = H(X) + EP ( X,L,E ) [log Q ( X | L, E )] + EP ( L,E ) [ KL ( P ( X | L, E ) || Q ( X | L, E ))] H(X) + EP ( X ) EP ( L,E | X ) [log Q ( X | L, E )] where KL ( || ) denotes the Kullback-Leibler (KL) divergence between two distributions.",
"P ( L, E | X ) and Q ( X | L, E ) denote the forward network (gen-erating L, E conditioned on X ) and the backward network (generating X conditioned on L, E ) respectively.",
"Since the entropy term H ( X ) associates with the training data and does not involve the parameters we optimize, the objective of MMI is equivalent as: max , E ( L ,E ) P ( L ,E | X ) (cid:2) log Q ( X | L , E ) (cid:3) where and are the parameters of the forward and backward network respectively.",
"L and E represent the synthetic prediction label and explanation generated by the forward network.",
"With the MMI optimization objective, the backward network is trained with only the synthetic label and explanation produced by the forward network, and prone to sub-optimal performance if the synthetic text is uninformative.",
"Since the the backward network provides a reward for optimizing the forward network, a biased backward network may provide unreliable reward scores and mislead the forward network optimization.",
"To remedy this problem, we leverage the Adversarial Mutual Information (AMI) method (Pan et al., 2020) to extend MMI with the idea of generative adversarial network (Goodfellow et al., 2014).",
"Specifically, we first bring the min-max adversarial game into training procedure and add an additional objective term Q ( X | L, E ) to maximize the negative likelihood of Q when feeding it with the real data: min max E ( L ,E ) P ( L ,E | X ) (cid:2) log Q ( X | L ,E ) (cid:3) Q ( X | L,E ) With this interactive training strategy and regularizing the backward network with both the synthetic data and real data, the forward network will be trained against a smarter backward network that only rewards prediction and explanation of fidelity.",
"Besides, we add an objective term P ( L , E | X ) of maximize the negative likelihood of P to balance the positive samples as teacher-forcing algorithm (Li et al., 2017).",
"The final optimization objective is formulated as: min max P ( L , E | X ) + MutualInformation (cid:122) (cid:125)(cid:124) (cid:123) (cid:124) (cid:123)(cid:122) (cid:125) AdversarialTraining E ( L ,E ) P ( L ,E | X ) (cid:2) log Q ( X | L ,E ) (cid:3) Q ( X | L,E ) As depicted in Fig 3, to encourage the forward network to learn a stronger connection between generated explanations and model predictions, we also add Q ( X | L, E ) as negative samples for backward network.",
"This explicitly encourages the backward network to be capable of punishing the P when it generates unfaithful explanations.",
"We intend to verify the mutual promotion effect of SIM and AFiRe on the inference ability and interpretablity of model.",
"We choose two tasks requiring inference ability: Natural Language Inference (NLI) and Commonsense Question Answering (CQA).",
"We use six datasets as our testbeds: SNLI (Bow-man et al., 2015), e-SNLI (Camburu et al., 2018), CQA (Talmor et al., 2019), CoS-E (Rajani et al., 2019), MultiNLI (Williams et al., 2018), and SICK-E (Marelli et al., 2014).",
"SNLI is a standard benchmark for NLI task, while e-SNLI extends it with human-annotated natural language explanations for each sentence pair.",
"CoS-E 2 dataset extends CQA dataset with natural language explanations for each QA sample.",
"MultiNLI is another large-scale NLI corpus, which includes a diverse range of genres.",
"SICK-e (Sentences Involving Compositional Knowledge for entailment) provides sentence pairs that are rich in the lexical, syntactic and semantic phenomena.",
"The latter two datasets are used for out-of-domain evaluation.",
"NLI: We use e-INFERSENT and Transformer as two baseline models for NLI task.",
"The e-INFERSENT model adds a LSTM decoder into INFERSENT (Conneau et al., 2017) for explanations.",
"The classification module and the explanation generation module are separated but share the same encoder.",
"The Transformer model (Vaswani et al., 2017) adds a MLP layer for making predictions.",
"With this baseline, we aim to test whether vanilla transformer without further interaction can achieve good results.",
"CQA: We use CAGE (Rajani et al., 2019) as the baseline model for CQA task.",
"CAGE adopts the explain-then-predict approach, which firstly fine-tunes a deep pretrained language model GPT (Rad-ford et al., 2019) to generate explanations, then use a classifier to predict the inference label with the generated explanation and source text as the input.",
"To evaluate inference performance, we report Task-specific Accuracy (NLI Accuracy and CQA Ac-curacy).",
"To evaluate the quality of generated interpretation, we report BLEU (similarity between generation and ground truth), PPL (fluency of generated sentences), and Inter Repetition (diversity of generated explanations).",
"Table 1 shows automatic evaluation results on the SNLI and CQA datasets with the annotated explanation from the e-SNLI and CoS-E datasets.",
"Compared with the baseline models, our MPII method can achieve significant performance improvement for both the inference and interpretation on two tasks.",
"It indicates that the inference and interpretation process can be mutually promoted with our proposed method.",
"With the ablation study, we notice a performance degradation of the inference and interpretation if we remove either of them, demonstrating the faithfulness between the generated explanation and the model's prediction.",
"Inference Promotion: We can achieve 11.73 and 2.06 absolute inference accuracy improvements compared to the baselines for the NLI and CQA task, respectively.",
"For the NLI task, with our MPII framework, the Transformer baseline model can improve over 5 absolute accuracy score.",
"The ablation study shows the contribution comes from not only the mutual interaction of inference and interpretation in the Stepwise Integration Mechanism (SIM), but also the adversarial mutual information training objective introduced in the Adversarial Fidelity Regularization (AFiRe).",
"Moreover, with parameters initialized with the pretrained BART model, the accuracy can be further improved by a 4.53 absolute score.",
"For the CQA task, we observe that better performance is still achieved compared Methods MultiNLI SICK-E Transformer 55.92 53.21 Transformer + MPII (w/o AFiRe) 56.42 53.84 Transformer + MPII 58.73 56.54 Table 2: Out-of-domain NLI evaluation results on MultiNLI and SICK-E datasets.",
"with the CAGE baseline model.",
"If we remove the AFiRe, a significant inference degradation would be witnessed.",
"It also indicates the effectiveness of AFiRe for utilizing interpretability to improve the inference ability.",
"Interpretation Promotion: The quality of generated interpretation can also be significantly improved with our mutual promotion method on both NLI and CQA tasks.",
"For the NLI task, combined with our MPII, the Transformer baseline model can provide more accurate, fluent and diverse interpretation with much better results in all metrics.",
"Similar with the inference results, the ablation study shows that both SIM and AFiRe contribute to the performance improvement.",
"With the pretrained BART model, we further improve the BLEU and Inter-Rep performance and get comparable PPL compared with the e-INFERSENT model.",
"For the CQA task, our method performs better in terms of BLEU score and the diversity of generated explanations.",
"We notice that the BLEU scores are pretty low for CQA task, which may stem from the free form of expression for explanations in the dataset, i.e. several different explanations share the same 7079 a park typically does not have a peer or boardwalk Explanation Generation Process 0.00 0.25 0.50 0.75 1.00 P r ob o f T h r ee P r e d i c ti on L a b e l Premise: a couple standing on what looks like a peer or boardwalk Hypothesis: a couple hugging each other at the park Label: Contradiction NeutralContradictionEntailment Figure 4: Visualization for mutual promotion evolution of inference and interpretation.",
"commonsense knowledge.",
"We observe that most of the explanations generated by our method are reasonable enough to interpret the predictions even though the BLEU scores are low.",
"Our method also achieves a smaller Inter-Rep score, which shows that our model can provide more diverse explanations to reveal the inference process of making predictions.",
"As shown in Table 2, we evaluate our method with the Transformer baseline model on two out-of-domain datasets: MultiNLI and SICK-E.",
"The results show that our mutual promotion method enables the Transformer model to be more robust, and achieves about 3 absolute accuracy improvement on both of the out-of-domain datasets without fine-tuning.",
"It is because with our MPII method, the model can generate more reliable and domain-related interpretation, which helps to make more accurate inference prediction.",
"The ablation results demonstrate both the adversarial mutual information training strategy in AFiRe and deep integration in SIM is very effective to improve the model's generalization and robustness.",
"We propose a model-based evaluation metric Critic-Score to evaluate the fidelity between model's inference predictions and interpretations.",
"Inspired by Shen et al. (2017), which applied a trained model to automatically evaluate the text style transfer accuracy in the absence of parallel dataset, we pre-train a well-performed discriminator model to evaluate the fidelity between the predicted label and the generated explanation.",
"The discriminator is a binary classifier f : ( X, L, E ) (cid:55) Yes / No , which shares similar architecture with the backward network in our Adversarial Fidelity Regularization (Section 2.3).",
"The training dataset is constructed based on the e-SNLI and CoS-E corpus.",
"Given a sample X i , L i , E i on e-SNLI that serves as a positive sample, we build the negative sample as X i , L i , E j , where explanation E j = E i is selected from another e-SNLI sample that shares either the same premise or hypothesis.",
"With this dataset, the discriminator model is trained to learn the intrinsic fidelity between the label and its corresponding explanation.",
"The trained discriminator achieves 97% accuracy on its test set and is able to serve as a quantitative way of evaluating fidelity.",
"As shown in Table 3, with our proposed mutual promotion method, the Transformer model can achieve significant improvement on Critic-Score between prediction and explanation.",
"The ablation results confirm both the deep interaction design in Stepwise Integration Mechanism and the adversarial training strategy in Adversarial Mutual Information can contribute to the improvement of fidelity and faithfulness.",
"Mutual Promotion Visualization: Figure 4 demonstrates the evolution of the inference prediction as the interpretation proceeds.",
"The input of the model is [CLS] a couple standing on what looks like a peer or boardwalk [SEP] a couple hugging each other at the park, of which the ground truth label is contradiction.",
"We observe that the model draws an initial conclusion that the entailment relationship between the premise and the hypothesis is not entailment, and is not able to tell whether it is neutral or contradiction.",
"As the deliberation proceeds, our model comes to judge that it is con-tradiction with the generated interpretation a park does not have a peer or boardwalk.",
"From the clear split of the red and blue lines when does and not are generated, we can see that the prediction is very sensitive to explanation, which demonstrates the faithfulness (Kumar and Talukdar, 2020).",
"Semantic Similarity Evaluation of Interpretation: To better evaluate the quality of generated explanations, we also measure the cosine similarity between generated explanations and human annotated explanations.",
"The results are presented in Fig 5.",
"The cosine similarity of our method conFigure 5: The distribution of cosine similarity with average sentence embedding between human annotation and generated interpretation.",
"centrates on 0.9 and achieves higher scores than CAGE, which demonstrates the effectiveness of our MPII for generating better interpretation that are closer to human expression.",
"Case Study Table 4 presents examples produced by different models.",
"For the first example, e-INFERSENT fails to make correct prediction and provide reasonable explanation.",
"In contrast, our MPII not only predict the entailment relation correctly, but also produce faithful explanations to 7081 Methods Fidelity-C Fidelity-W LAcc Fluency NLITask e-INFERSENT 3.16 2.74 3.34 4.23 Transformer 3.30 3.21 3.65 3.68 Transformer+MPII(w/oAFiRe) 4.01 4.12 4.33 4.36 Transformer+MPII 4.17 4.38 4.57 4.51 CQATask CAGE(GPT,ETP) 3.71 3.18 3.52 4.25 BART+MPII(w/oAFiRe) 4.26 4.13 4.05 4.21 BART+MPII 4.37 4.39 4.22 4.30 Table 6: Human evaluation results on Fidelity-C(fidelity between correct prediction and corresponding interpre-tation), Fidelity-W(fidelity between wrong prediction and corresponding interpretation), LAcc(accuracy of selecting correct lables when only given the generated interpretations), Fluency(fluency of interpretation).",
"interpret predictions.",
"For the second example, our MPII and MPII with AFiRe removed still capture the entailment relation well, and explain that at the beach and at restaurant can not be done at the same time.",
"As we can see, these explanations generated by our method are also fluent.",
"Table 5 shows the randomly selected examples generated by different models in the CQA task.",
"For the first example, CAGE makes wrong prediction, and generates explanation that obviously conflicts with common knowledge.",
"In contrast, our method can make correct predictions and generate more reasonable explanations.",
"Similarly for the second example, CAGE seems to directly copy words from the question that do not actually contain meaningful information.",
"Our MPII still explains well, but fails to explain properly with AFiRe removed, even if the explanation contains the correct answer, which reveals the importance of AFiRe for promotion of interpretation.",
"Human evaluation: We conduct human evaluation to further evaluate the effectiveness of MPII.",
"We randomly selected 300 examples from the test set of e-SNLI, and asked 4 well-educated annotators to rate every sample with 4 metrics on a 1-5 Likert scale in a strictly blind fashion (Stent et al., 2005).",
"As shown in Table 6, analogous to automatic evaluation results (Section 3.4), our MPII can generate interpretations with best quality and fidelity to corresponding inference predictions, whether correct or wrong.",
"With the great success of natual language inference, many recent works explore the interpretability of neural networks through providing interpretation to support their inference results (Ribeiro et al., 2016; Chen et al., 2018; Liu et al., 2019;",
"Thorne et al., 2019; Kumar and Talukdar, 2020).",
"Three forms of interpretation are provided by these works: (1) feature-based interpretation (Chen et al., 2016a, 2018; Ribeiro et al., 2016, 2018; Li et al., 2016; Nguyen, 2018; Feng et al., 2018; Gururan-gan et al., 2018) such as attention distribution (Xu et al., 2015), heatmap (Samek et al., 2017), alignment rationale (Jiang et al., 2021), gradients (Li et al., 2016), magnitude of hidden states (Linzen et al., 2016),",
"etc.; (2) token-level interpretation that relatively easy to comprehend but prone to ambiguity (Ribeiro et al., 2016; Liu et al., 2019; Thorne et al., 2019), and (3) sentence-level interpretation which has the best human-readability Cam-buru et al. (2018); Talmor et al. (2019); Kumar and Talukdar (2020).",
"Different from the previous work which only include one-side promotion, we proposed the mutual promotion mechanism that can improve the performance of both inference and sentence-level interpretation.",
"In this work, we propose to mutually promote model inference ability and interpretability from multi-levels.",
"From the model-level, we propose Stepwise Integration Mechanism to enable the model to refine the prediction conclusion as the explaining proceeds and also to guide the generation of better explanation with the inference procedure of reaching prediction conclusion.",
"From the optimization-level, we propose an Adversarial Fidelity Regularization, which leverages the Adversarial Mutual Information method to improve the fidelity between the inference and interpretation, which further guarantees faithfulness.",
"Experiment results show the effectiveness of our proposed method on both NLI and CQA tasks.",
"Future work will involve extending our approaches into other tasks of NLP.",
"We hope that our work can encourage further research in this direction.",
"We would like to acknowledge Chuhan Wu for the helpful discussion.",
"We also want to thank Jiale Xu for his kindness and help."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"other",
"other"
] |
[
"Residual networks are an Euler discretization of solutions to Ordinary Differential Equations (ODE).",
"This paper explores a deeper relationship between Transformer and numerical ODE methods.",
"We first show that a residual block of layers in Transformer can be described as a higher-order solution to ODE.",
"Inspired by this, we design a new architecture, ODE Transformer , which is analogous to the Runge-Kutta method that is well motivated in ODE.",
"As a natural extension to Transformer, ODE Transformer is easy to implement and efficient to use.",
"Experimental results on the large-scale machine translation, abstractive summarization, and grammar error correction tasks demonstrate the high genericity of ODE Transformer.",
"It can gain large improvements in model performance over strong baselines (e.g., 30.77 and 44.11 BLEU scores on the WMT'14 English-German and English-French benchmarks) at a slight cost in inference efficiency.",
"Residual networks have been used with a great success as a standard method of easing information flow in multi-layer neural models (He et al., 2016; Vaswani et al., 2017).",
"Given an input y t , models of this kind define the output of a layer t to be: y t +1 = y t + F ( y t , t ) (1) where F ( , ) is the function of the layer and t is its parameter.",
"Interestingly, recent work in machine learning (Weinan, 2017; Lu et al., 2018; Haber et al., 2018; Chang et al., 2018; Ruthotto and Haber, 2019) points out that Eq.",
"(1) is an Euler discretization of the Ordinary Differential Equation (ODE), like this: d y ( t ) d t = F ( y ( t ) , ( t )) (2) Corresponding author.",
"where y ( t ) and ( t ) are continuous with respect to t .",
"In this way, we can call Eq.",
"(1) an ODE block .",
"This finding offers a new way of explaining residual networks in the view of numerical algorithms.",
"Then, one can think of a multi-layer network as applying the Euler method (i.e., Eq.",
"(1)) to solve Eq.",
"(2) subject to the initial conditions y (0) = y 0 and (0) = 0 .",
"The solution of Eq.",
"(2) has a sufficiently low error bound (call it a stable solution ) only if ( t ) changes slow along t (Haber and Ruthotto, 2017; Chen et al., 2018).",
"But this assumption does not always hold for state-of-the-art natural language processing (NLP) systems, in which models are non-linear and over-parameterized.",
"For example, language modeling and machine translation systems learn quite different parameters for different layers, especially when the layers are close to the model input (Vaswani et al., 2017; Dai et al., 2019).",
"Also, truncation errors are nonnegligible for the Euler method because it is a first-order approximation to the true solution (He et al., 2019).",
"These problems make the situation worse, when more layers are stacked and errors are propagated through the neural network.",
"It might explain why recent 8335 Machine Translation (MT) systems cannot benefit from extremely deep models (Wang et al., 2019; Liu et al., 2020a; Wei et al., 2020; Li et al., 2020).",
"This paper continues the line of research on the ODE-inspired method.",
"The basic idea is to use a high-order method for more accurate numerical solutions to the ODE.",
"This leads to a larger ODE block that generates a sequence of intermediate approximations to the solution.",
"We find that the larger ODE block is sufficient to take the role of several ODE blocks with first-order solutions.",
"The benefit is obvious: the use of fewer ODE blocks lowers the risk of introducing errors in block switching, and the high-order method reduces the approximation error in each ODE block.",
"See Figure 1 for a comparison of different models.",
"Our method is parameter-efficient because ( t ) is re-used within the same ODE block.",
"As another bonus\", the model can be improved by learning coefficients of different intermediate approximations in a block. We evaluate our method in strong Transformer systems, covering both the wide (and big) model and the deep model. For machine translation tasks, ODE Transformer achieves 30.77 and 44.11 BLEU scores on the WMT'14 En-De and En-Fr test sets, setting a new state-of-the-art on the WMT'14 En-Fr task. It also significantly outperforms baselines on abstractive summarization and grammar error correction tasks. 2 Transformer and ODEs We start with a description of Transformer, followed by its relationship with ODEs. We choose Transformer for our discussion and experiments because it is one of the state-of-the-art models in recent sentence generation tasks. 2.1 Transformer Transformer is an example of the encoder-decoder paradigm (Vaswani et al., 2017). The encoder is a stack of identical layers. Each layer consists of a self-attention block and a feedforward network (FFN) block. Both of them equip with a residual connection and a layer normalization unit. Note that the term block is used in many different ways.",
"In this paper, the term refers to any neural network that is enhanced by the residual connection (occasionally call it a residual block ).",
"Following the Pre-norm architecture (Wang et al., 2019), we define a block as y t +1 = y t + G ( LN ( y t ) , t ) (3) where LN ( ) is the layer normalization function, 1 and G ( ) is either the self-attention or feedforward network.",
"The decoder shares a similar architecture, having an additional encoder-decoder attention block sandwiched between the self-attention and FFN blocks.",
"An ordinary differential equation is an equation involving a function y ( t ) of a variable t and its derivatives.",
"A simple form of ODE is an equation that defines the first-order derivative of y ( t ) , like d y ( t ) d t = f ( y ( t ) , t ) (4) where f ( y ( t ) , t ) defines a time-dependent vector field if we know its value at all points of y and all instants of time t .",
"Eq.",
"(4) covers a broad range of problems, in that the change of a variable is determined by its current value and a time variable t .",
"This formulation also works with Pre-norm Transformer blocks.",
"For notational simplicity, we re-define G ( LN ( y t ) , t ) as a new function F ( y t , t ) : F ( y t , t ) = G ( LN ( y t ) , t )) (5) We then relax y t and t to continuous functions y ( t ) and ( t ) , and rewrite Eq.",
"where t is the change of t , and is general called step size .",
"Obviously, we have t = 1 in Transformer.",
"But we can adjust step size t using a limit, and have lim t 0 y ( t + t ) y ( t ) t = F ( y ( t ) , ( t )) (7) Given the fact that lim t 0 y ( t + t ) y ( t ) t = d y ( t ) d t , Eq.",
"(7) is an instance of Eq.",
"(4).",
"The only difference lies in that we introduce ( t ) into the righthand side of Eq.",
"(4).",
"Then, we say that a Pre-norm Transformer block describes an ODE.",
"It has been found that Eq.",
"(3) shares the same form as the Euler method of solving the ODE described in Eq.",
"(7) (Haber and Ruthotto, 2017).",
"This establishes a relationship between Transformer and ODEs, in that, given F ( , ) and learned parameters { t } , the forward pass of a multi-block Transformer is a process of running the Euler method for several steps.",
"In numerical methods of ODEs, we want to ensure the precise solutions to the ODEs in a minimum number of computation steps.",
"But the Euler method is not precise because it is a first-order method, and naturally with local truncation errors.",
"The global error might be larger if we run it for a number of times.",
"2 This is obviously the case for Transformer, especially when the multi-layer neural network arises a higher risk of instability in solving the ODEs (Haber and Ruthotto, 2017).",
"Here we use the Runge-Kutta methods for a higher order solution to ODEs (Runge, 1895; Kutta, 1901; Butcher, 1996; Ascher and Petzold, 1998).",
"They are a classic family of iterative methods with different orders of precision.",
"3 More formally, the explicit Runge-Kutta methods of an n -step solution is defined to be: y t +1 = y t + n (cid:88) i =1 i F i (8) F 1 = hf ( y t , t ) (9) F i = hf ( y t + i 1 (cid:88) j =1 ij F j , t + i h ) (10) where h is the step size and could be simply 1 in most cases.",
"F i is an intermediate approximation to the solution at step t + i h .",
", and are coefficients which can be determined by the Taylor series of y t +1 (Butcher, 1963).",
"Eq.",
"(10) describes a sequence of solution approximations { F 1 , ..., F n } over n steps { t + 1 h, ..., t + n h } .",
"These approximations are then interpolated to form the final solution, as in Eq.",
"(8).",
"The Runge-Kutta methods are straightforwardly applicable to the design of a Transformer block.",
"All we need is to replace the function f (see Eq.",
"(10)) with the function F (see Eq.",
"(5)).",
"The advantage is that the function F is re-used in a block.",
"Also, the model parameter t can be shared within the block.",
"4 In this way, one can omit t + i h in Eq.",
"2 The global error is what we would ordinarily call the error: the difference between y ( t ) and the true solution.",
"The local error is the error introduced in a single step: the difference between y ( t ) and the solution obtained by assuming that y ( t 1) is the true solution 3 A p -order numerical method means that the global truncation error is proportional to p power of the step size.",
"4 Although we could distinguish the parameters at different steps in a block, we found that it did not help and made the model difficult to learn.",
"This makes the system more parameter-efficient.",
"As would be shown in our experiments, the high-order Runge-Kutta methods can learn strong NMT systems with significantly smaller models.",
"The Runge-Kutta methods are general.",
"For example, the Euler method is a first-order instance of them.",
"For a second-order Runge-Kutta (RK2) block, we have y t +1 = y t + 1 2( F 1 + F 2 ) (12) F 1 = F ( y t , t ) (13) F 2 = F ( y t + F 1 , t ) (14) This is also known as the improved Euler method.",
"Likewise, we can define a fourth-order Runge-Kutta (RK4) block to be: y t +1 = y t + 1 6( F 1 + 2 F 2 + 2 F 3 + F 4 ) (15) F 1 = F ( y t , t ) (16) F 2 = F ( y t + 1 2 F 1 , t ) (17) F 3 = F ( y t + 1 2 F 2 , t ) (18) F 4 = F ( y t + F 3 , t ) (19) See Figure 2 for a comparison of different Runge-Kutta blocks.",
"It should be noted that the method presented here can be interpreted from the perspective of representation refinement (Greff et al., 2017).",
"It provides a way for a function to update the function itself.",
"For example, Universal Transformer refines the representation of the input sequence using the same function and the same parameters in a block-wise manner (Dehghani et al., 2019).",
"Here we show that inner block refinements can be modeled with good theoretical support.",
"In our preliminary experiments, the RK2 and RK4 methods yielded promising BLEU improvements when the model was shallow.",
"But it was found that the improvements did not persist for deeper models.",
"To figure out why this happened, let us review the Runge-Kutta methods from the angle of training.",
"Take the RK2 method as an example.",
"We rewrite Eq.",
"(12) by substituting F 1 and F 2 , as follow y t +1 = y t + 1 2 F ( y t , t ) + 1 2 F ( y t + F ( y t , t ) , t ) (20) Let E be the loss of training, L be the number blocks of the model, and y L be the model output.",
"The gradient of E at y t is E y t = E y L 1 2 L t L 1 (cid:89) k = t (1 + g k ) (21) where g k = (cid:16) 1 + F ( y k , k ) y k (cid:17) (cid:16) 1 + F ( y k + F ( y k , k ) , k ) y k + F ( y k , k ) (cid:17) (22) Seen from Eq.",
"(21), E y t is proportional to the factor 1 2 L t .",
"This leads to a higher risk of gradient vanishing when L is larger.",
"The problem somehow attributes to the small coefficients of F i , that is, 1 = 2 = 12 .",
"A natural idea is to empirically set i = 1 to eliminate the product factor of less than 1 in gradient computation, although this is not theoretically grounded in standard Runge-Kutta methods.",
"We rewrite Eq.",
"(20) with the new coefficients, as follows y t +1 = y t + F ( y t , t ) + F ( y t + F ( y t , t ) , t ) (23) Then, we have the gradient, like this E y t = E y L L 1 (cid:89) k = t g k (24) This model is easy to optimize because E yL can be passed to lower-level blocks with no scales.",
"Note that, the methods here are instances of parameter sharing (Dehghani et al., 2019; Lan et al., 2020).",
"For example, in each ODE block, we use the same function F with the same parameter t for all intermediate steps.",
"Setting i = 1 is a further step towards this because F i is passed to the following computations with the same scale.",
"Here we call it implicit parameter sharing.",
"Another way of scaling F i to further improve ODE functions is to learn the coefficients automatically on the training data.",
"The simplest method is to initialize i = 1 and independently optimize each scale.",
"It helps the system learn the way of flowing F i in a block.",
"Based on it, scaling F i by a weighted gate mechanism (Srivastava et al., 2015) empirically achieves the best performance (see Section 4).",
"Take RK2-block as an instance, the concatenation of F 1 and F 2 is transformed to a scalar (0 , 1) through a sigmoid gate, then the block output y t +1 is y t +1 = y t + g F 1 + (1 g ) F 2 (25) g = sigmoid ([ F 1 , F 2 ] W + b ) (26) where [ , ] denotes the concatenation operation and W, b are learnable parameters.",
"We call it RK2-block (learnable i ), and the architecture is shown in Figure 2",
"(d).",
"This kind of formulation offers a more flexible way to decide which part contributes more and is also easy to be optimized.",
"Moreover, we also summarize the comparison of various scaling functions in Appendix C. 8338 Model Layers WMT En-De WMT En-Fr #Param Steps BLEU SBLEU #Param Steps BLEU SBLEU Transformer (Vaswani et al., 2017) 6-6 213M 100K 28.40 222M 300K 41.00 MacaronNet (Lu et al., 2019) 6-6 -30.20 --Depth growing (Wu et al., 2019) 8-8 270M 800K 29.92 -43.27 Transformer-DLCL (Wang et al., 2019) 30-6 137M 50K 29.30 28.6 --Multiscale Collaborative (Wei et al., 2020) 18-6 512M 300K 30.56 --ADMIN (Liu et al., 2020a) 60-12 262M 250K 30.01 29.5 250K 43.80 41.8 SDT (Li et al., 2020) 48-6 192M 50K 30.21 29.0 198M 100K 43.28 41.5 BERT-fused model (Zhu et al., 2020) 6-6 -30.75 -43.78 Base and Deep Models Residual-block 6-6 61M 50K 27.89 26.8 69M 100K 41.05 39.1 RK2-block 6-6 61M 50K 28.67 27.5 69M 100K 42.08 40.1 RK2-block (learnable i ) 6-6 61M 50K 28.89 27.7 69M 100K 42.31 40.3 RK4-block 6-6 61M 50K 29.03 27.9 69M 100K 42.56 40.6 Residual-block 24-6 118M 50K 29.43 28.3 123M 100K 42.67 40.6 RK2-block 24-6 118M 50K 29.85 28.7 123M 100K 43.04 41.1 RK2-block (learnable i ) 24-6 118M 50K 30.29 29.2 123M 100K 43.48 41.5 RK4-block 24-6 118M 50K 29.80 28.8 123M 100K 43.28 41.3 Wide Models Residual-block-Big 6-6 211M 100K 29.21 28.1 221M 100K 42.89 40.9 RK2-block 6-6 211M 100K 30.11 29.0 221M 100K 43.34 41.3 RK2-block (learnable i ) 6-6 211M 100K 30.53 29.4 221M 100K 43.59 41.6 RK4-block 6-6 211M 100K 30.39 29.3 221M 100K 43.55 41.6 Residual-block-Big 12-6 286M 100K 29.91 28.9 297M 100K 43.22 41.2 RK2-block 12-6 286M 100K 30.58 29.4 297M 100K 43.88 42.0 RK2-block (learnable i ) 12-6 286M 100K 30.77 29.6 297M 100K 44.11 42.2 RK4-block 12-6 286M 100K 30.55 29.4 297M 100K 43.81 41.9 Table 1: Comparison with the state-of-the-arts on the WMT En-De and WMT En-Fr tasks.",
"ODE Transformer is efficient to use.",
"As we only apply the ODE design schema to the encoder side, it only brings minor impacts on the inference speed due to the autoregressive decoding schema.",
"Another concern here is memory consumption.",
"ODE Transformer consumes more memory than the baseline in the same depth since we need to store the intermediate approximations in the forward pass.",
"But the additional consumption is less than that of the baseline who has the same computation cost, which is acceptable for most scenarios.",
"We give a quantitative analysis in Section 5.",
"We evaluated the ODE Transformer on three sequence generation tasks: machine translation, abstractive summarization and grammar error correction.",
"The datasets we used are elaborated in the following section, and more details of experimental setups could be found in Appendix A and B. 4.1 Datasets Machine Translation We report results on three WMT benchmarks.",
"For the WMT'14 English-German (En-De) task, the training data consisted of approximately 4 .",
"5 M tokenized sentence pairs, as in (Vaswani et al., 2017).",
"All sentences were segmented into sequences of sub-word units (Sen-nrich et al., 2016) with 32 K merge operations using a shared vocabulary.",
"We selected newstest2013 as the validation data and newstest2014 as the test data.",
"For the WMT'14 English-French (En-Fr) task, we used the dataset provided within Fairseq, i.e., 36 M training sentence pairs from WMT'14.",
"newstest2012+newstest2013 was the validation data and newstest2014 was the test data.",
"For the WMT'16 English-Romanian (En-Ro) task, we replicated the setup of (Mehta et al., 2020), which used 600 K/ 2 K/ 2 K sentence pairs for training, evaluation and inference, respectively.",
"Abstractive Summarization We also tested the models' ability to process long sequences on the CNN-DailyMail summarization task (Nallapati et al., 2016; Hermann et al., 2015).",
"The prepro-8339 Model Params Epochs BLEU Transformer in Mehta et al. (2020) 62M 170 34.30 DeLight (Mehta et al., 2020) 53M 170 34.70 Int Transformer (Lin et al., 2020) -32.60 Transformer (Our impl.) 69M 20 33.49 RK2-block (learnable i ) 69M 20 34.94 RK2-block-Big (learnable i ) 226M 20 35.28 Table 2: Results on the WMT En-Ro task.",
"cessed method was the same as in (Ott et al., 2019).",
"We used a shared BPE with 30 K operations, resulting in a vocabulary of 32 , 580 entries.",
"The evaluation metric was F1-Rouge (Lin, 2004) (Rouge-1, Rouge-2 and Rouge-L).",
"Grammar Error Correction We used the following datasets as the training data, including National University of Singapore Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013), Lang-8 Corpus of Learner English (Lang-8) (Tajiri et al., 2012), FCE dataset (Yannakoudakis et al., 2011), and Write & Improve + LOCNESS Corpus (Bryant et al., 2019).",
"We borrowed the setup from Chollampatt and Ng (2018) and used the provided preprocessed script.",
"The word-level dropout technique was also applied to prevent the overfitting problem.",
"Language Modeling The truncation error analysis is conducted on the Penn Treebank (Mikolov et al., 2011), which is a widely-used language model dataset.",
"It contains 88 K, 3 , 370 and 3 , 761 sentences for training, validation and test.",
"The vocabulary size was 10 K. We set the layer depth of the language model to 1 or 2 to make a fair comparison.",
"Assume the layer depth is 1 , then the loss between the block output and the ground-truth could be regarded as the truncation error.",
"It alleviates the influence of the error accumulation across different layers.",
"Results of En-De and En-Fr Table 1 compares ODE Transformer with several state-of-the-art systems.",
"Both RK2-block and RK4-block outperform the baselines by a large margin with different model capacities.",
"For example, RK2-block obtains a +1 .",
"00 BLEU improvement with the base configu-ration when the depth is 6 .",
"RK4-block yields a gain of 0 .",
"17 BLEU points on top of RK2-block.",
"This observation empirically validates the conjecture that high-order ODE functions are more efficient.",
"When we switch to deep models, our method is more parameter efficient.",
"E.g., RK2-block is comparable with a strong 48-layer system (Li et al., 2020) with half of the encoder depth.",
"Similarly, wide models can also benefit from the enlarging layer depth (Wei et al., 2020; Li et al., 2020).",
"RK2-block achieves BLEU scores of 30 .",
"77 and 44 .",
"11 on the En-De and the En-Fr tasks, significantly surpassing the standard Big model by 1 .",
"32 and 0 .",
"70 BLEU points.",
"This sets a new state-of-the-art on these tasks with fewer parameters.",
"Results of En-Ro Table 2 exhibits model parameters, total training steps and BLEU scores of several strong systems on the En-Ro task.",
"Again, ODE Transformer outperforms these baselines.",
"As stated in (Mehta et al., 2020), they trained the model up to 170 epochs and obtained a BLEU score of 34 .",
"70 through the DeLight model.",
"However, the observation here is quite different.",
"The validation PPL begins to increase after 20 epochs.",
"Thus, our baseline is slightly inferior to theirs, but matches the result reported in Lin et al. (2020).",
"ODE blocks achieve even better performance with DeLight within much less training cost.",
"For a bigger model (line 6), it obtains a BLEU score of 35 .",
"28 .",
"Parameter Efficiency Table 3 summaries the results of several efficient Transformer variants, including Lite Transformer (Wu et al., 2020), DeLight (Mehta et al., 2020) and a light version of the Evolved Transformer (So et al., 2019).",
"As expected, ODE Transformer is promising for smaller models.",
"It is comparable in BLEU with DeLight but having 9 M fewer parameters.",
"Under the same model capacity, it outperforms DeLight by 0 .",
"64 BLEU points.",
"It may offer a new choice for deploying NMT systems on edge devices.",
"Results of Summarization and Correction We also evaluated the ODE Transformer on another two sequence generation tasks.",
"Table 4 shows that both RK2-block and RK4-block outperform the 8340 Model Summarization Correction RG-1 RG-2 RG-L Prec.",
"baselines by a margin.",
"Similarly, RK4-block is superior to RK2-block when the model is shallow.",
"More results and case studies could be found in Appendix C. 5 Analysis Here we investigate some interesting issues.",
"For simplicity, we call RK2-block with coefficients initialized by 1 as RK2-block-v1, and learnable coefficients (Eq.",
"(25) ) as RK2-block-v2.",
"Quantization of the Truncation Error In fact, we cannot obtain the true solution of each block output in NMT, because we mainly experimented on the encoder side.",
"Instead, we tested our system on the language modeling task, where the perplexity between the single-layer model output and the ground truth could be regarded as the truncation error with no error propagations.",
"Table 5 shows the perplexities on the Penn Treebank dataset (Mikolov et al., 2011).",
"All ODE Transformer variants reduce the errors significantly.",
"RK4-order achieves the lowest PPL on both settings.",
"In addition, RK2-block can even obtain a lower PPL than a 2-layer residual-block.",
"The observation here again verifies larger ODE blocks behave superior to the standard residual block.",
"Inference Speed and Memory Consumption Table 6 shows the comparison of inference speed and memory consumption discussed in Section 3.3.",
"Experimental results demonstrate the proposed ODE design schema results in acceptable inference speeds.",
"And it is also memory-friendly through the memory comparison between the baseline and the RK variants in both base and big configurations.",
"BLEU against Encoder Depth Figure 3 (left) depicts BLEU scores of several ODE Transformer variants and the baseline under different encoder depths.",
"All ODE Transformer variants are significantly superior to the baseline when depth 24 .",
"RK2-block-v2 almost achieves the best perfor-Model 1-Layer 2-Layer Residual-Block 142.33 136.07 RK2-block 131.80 123.12 RK2-block ( i = 1 ) 132.67 123.90 RK2-block (learnable i ) 128.48 121.02 RK4-block 126.89 119.46 Table 5: Comparison of PPL on systems with different ODE blocks.",
"mance over all depths, especially when the model becomes deeper.",
"Interestingly, Figure 3 confirms again that ODE Transformer is parameter efficient, e.g., a 6-layer RK2-block is comparable with the 18-layer baseline system.",
"Another finding here is RK4-block performs well on shallow models, but it is inferior to RK2-block when the depth is going deep.",
"This is because original coefficients may cause the optimization problem in the backward propagation in deep models (see Section 3.2).",
"Also, Figure 3 (right) plots BLEU as a function of the model size when the hidden size is 256 .",
"The RK2 method significantly surpasses the baseline using much fewer parameters.",
"Ablation Study on Different F ( , ) As stated in Section 3, the F ( , ) function can either be SAN, FFN or both of them (SAN+FFN).",
"As shown in Figure 4, high-order ODE works better with FFN than SAN.",
"An explanation might be that the FFN component has more parameters than the SAN component.",
"5 The model that treats FFN and SAN as a single ODE block behaves the best.",
"Training and Validation Perplexity Figure 5 plots the training and validation PPL curves of RK blocks and the baseline enhanced by RPR (Shaw et al., 2018).",
"RK2-block obtains lower training and validation PPLs in both configurations (base and wide models).",
"Visualization of the Gradient Norm We also collect the gradient information of several well-trained systems during training.",
"Figure 6 plots the gradient norm of RK2-block-v2, RK4-block and the standard residual-block (baseline).",
"As we can see that Pre-Norm residual block is able to make the training stable (Wang et al., 2019).",
"Both RK2-block-v2 and RK4-block provide richer signals due to the implicit parameter sharing among intermediate approximations.",
"The two learning curves appear to be nearly the same, which is consistent with the results in Table 1.",
"Comparison of Different ODE Design Schemas Then, we take a comprehensive analysis of several ODE design schemas.",
"As stated in Lu et al. (2018)'s work, several models in computer vision, such as LeapfrogNet (He et al., 2019), PolyNet (Zhang et al., 2017) and MultistepNet (Lu et al., 2018), can also be interpreted from the ODE perspective.",
"The related ODE functions are summarized in Table 7.",
"We re-implemented these methods using the same codebase for fair comparisons.",
"We conducted experiments following the base configu-ration on the En-De task.",
"At the time t , Multistep Euler methods require previous states, e.g. y t 1 , to generate the current approximation, instead of iterative refinements based on the current-time state.",
"So these methods are heavier than ODE Transformer.",
"Note that DLCL (Wang et al., 2019) can also be regarded as a 2 6 10 14 18 22 4 .",
"multistep Euler method, which is more competitive in deep Transformer.",
"But there is just a modest improvement upon the shallow baseline.",
"Theoretically, the Backward Euler method is slightly better than the Forward Euler method in numerical analysis, but the improvement is marginal.",
"Note that our ODE Transformer achieves consistent BLEU improvements over the aforementioned methods.",
"The reason is that such iterative refinements provide more efficient and effective parameter learning.",
"Deep Transformer models Recently, deep Transformer has witnessed tremendous success in machine translation, especially on WMT news tasks (Li et al., 2019; Zhang et al., 2020; Zhou et al., 2021; Tran et al., 2021).",
"A straightforward way is to shorten the path from upper-level layers to lower-level layers thus to alleviate the gradient vanishing or exploding problems (Bapna et al., 2018; Wang et al., 2019; Wu et al., 2019; Wei et al., 2020).",
"For deeper models, the training cost is nonnegligible.",
"To speed up the training, an alternative way is to train a shallow model first and progressively increase the model depth (Li et al., 2020; Dong et al., 2020).",
"Apart from the model architecture improvements, another way of easing the optimization is to utilize carefully designed parameter initialization strategies (Zhang et al., 2019; Xu et al., 2020; Huang et al., 2020; Liu et al., 2020a).",
"With the model capacity going larger, one can use Layer-8342 Model Information Flow Related ODEs BLEU Leapfrog (He et al., 2019) y t +1 = y t 1 + 2 F ( y t , t ) Multistep Euler 28.07 Multistep (Lu et al., 2018) y t +1 = k n y t + (1 k n ) y t 1 + F ( y t , t ) Multistep Euler 28.17 DLCL (Wang et al., 2019) y t +1 = y 0 + (cid:80) tl =0 W l F ( y l , l ) Multistep Euler 27.78 PolyNet (Zhang et al., 2017) y t +1 = y t + F ( y t , t ) + F ( F ( y t , t ) , t ) Backward Euler 28.15 RK2-block y t +1 = y t + 12 F ( y t , t ) + 12 F ( y t + F ( y t , t ) , t ) Improved Euler 28.67 RK2-block ( i = 1 ) y t +1 = y t + F ( y t , t ) + F ( y t + F ( y t , t ) , t ) RK 2nd-order 28.77 RK2-block (learnable i ) y t +1 = y t + 1 F ( y t , t ) + 2 F ( y t + F ( y t , t ) , t ) RK 2nd-order 28.86 RK4-block y t +1 = y t + 16 F 1 + 26 F 2 + 26 F 3 + 16 F 4 RK 4th-order 29.03 Table 7: Comparison of several ODE-inspired design schemas on the En-De task.",
"Drop (Fan et al., 2020) or Skipping Sublayers (Li et al., 2021) to prevent deep models from the overfitting problem.",
"Note that ODE Transformer is orthogonal to the aforementioned methods, and we will test it on these methods in future work.",
"Ordinary Differential Equations The relationship between ResNet and ODEs was first proposed by Weinan (2017).",
"This shows a brand-new perspective on the design of effective deep architectures.",
"Moreover, the success of Neural ODENet (Chen et al., 2018) has attracted researchers.",
"Some insightful architectures (Zhang et al., 2017; Lars-son et al., 2017; Lu et al., 2018; He et al., 2019; Zhu and Fu, 2018; Lu et al., 2019; Sander et al., 2021) can also be interpreted from the ODE perspective.",
"But, in NLP, it is still rare to see studies on designing models from the ODE perspective.",
"Zhang et al. (2021) proposed continuous self-attention models using the same merit with neural ODE.",
"Perhaps the most relevant work with us is an (2021)'s work.",
"They redesigned the Transformer architecture from a multi-particle dynamic system view in terms of efficiency.",
"Unlike them, we show that the stacked first-order ODE blocks may cause error accumulation, thus hindering the model performance.",
"We address this issue by introducing high-order blocks, and demonstrate significant performance improvements on three sequence generation tasks, which is complementary to Baier-Reinio and De Sterck (2020)'s work.",
"This paper explores the relationship between Transformer and ODEs.",
"We propose ODE Transformer to help the model benefit from high-order ODE solutions.",
"Experimental results on the three representative sentence generations tasks (i.e., machine translation, abstractive summarization, and grammatical error correction) show the effectiveness and efficiency of ODE Transformer.",
"It achieves 30 .",
"77 and 44 .",
"11 BLEU scores on the WMT'14 En-De and En-Fr benchmarks, setting a new state-of-the-art result on the En-Fr.",
"Note that our code is publicly available at https://github.",
"com/libeineu/ODE-Transformer .",
"This work was supported in part by the National Science Foundation of China (Nos. 61732005 and 61876035), the National Key R&D Project of China (No. 2019QY1801), the China HTRD Center Project (No. 2020AAA0107904) and Yunnan Provincial Major Science and Technology Special Plan Projects (Nos. 201902D08001905 and 202103AA080015).",
"The authors would like to thank anonymous reviewers for their valuable comments.",
"And thank Yufan Jiang for his helpful advice to improve the paper."
] | [
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other"
] |
[
"Leonardo Rocha 2 , Marcos Andre Goncalves 1 1 Universidade Federal de Minas Gerais Brazil 2 Universidade Federal de Sao Joao del Rei Brazil { frviegas, washingtoncunha } @dcc.ufmg.br { christianreis, mgoncalv } @dcc.ufmg.br { antoniopereira, lcrocha } @ufsj.edu.br",
"Abstract Hierarchical Topic modeling (HTM) exploits latent topics and relationships among them as a powerful tool for data analysis and exploration.",
"Despite advantages over traditional topic modeling, HTM poses its own challenges, such as (1) topic incoherence, (2) unreasonable (hierarchical) structure, and (3) issues related to the definition of the ideal number of topics and depth of the hierarchy.",
"In this paper, we advance the state-of-the-art on HTM by means of the design and evaluation of CluHTM , a novel non-probabilistic hierarchical matrix factorization aimed at solving the specific issues of HTM.",
"CluHTM's novel contributions include:",
"(i) the exploration of richer text representation that encapsulates both, global (dataset level) and local semantic information when combined, these pieces of information help to solve the topic incoherence problem as well as issues related to the unreasonable structure ;",
"(ii) the exploitation of a stability analysis metric for defining the number of topics and the shape the hierarchical structure.",
"In our evaluation, considering twelve datasets and seven state-of-the-art baselines, CluHTM outperformed the baselines in the vast majority of the cases, with gains of around 500% over the strongest state-of-the-art baselines.",
"We also provide qualitative and quantitative statistical analyses of why our solution works so well.",
"Topic Modeling (TM) is the task of automatically extracting latent topics (e.g., a concept or a theme) from a collection of textual documents.",
"Such topics are usually defined as a probability distribution over a fixed vocabulary (a set of words) that refers to some subject and describes the latent topic as a whole.",
"Topics might be related to each other, and if they are defined at different semantic granularity levels (more general or more specific), this naturally induces a hierarchical structure.",
"Although traditional TM strategies are of great importance to extract latent topics, the relationships among them are also extremely valuable for data analysis and exploration.",
"In this context, Hierarchical Topic Modeling (HTM) aims to achieve to induce latent topics from text data while preserving the inherent hierarchical structure (Teh et al., 2006).",
"Relevant scenarios have been shown to enjoy the usefulness of HTM, such as",
"(i) hierarchical categorization of Web pages (Ming et al., 2010),",
"(ii) extracting aspects hierarchies in reviews (Kim et al., 2013) and",
"(iii) discovering research topics hierarchies in academic repositories (Paisley et al., 2014).",
"Despite its practical importance and potential advantages over traditional TM, HTM poses its own challenges, the main ones being:",
"(i) topic incoherence and",
"(ii) unreasonable hierarchical structure .",
"Topic Incoherence has to do with the need to learn meaningful topics.",
"That is, the top words that represent a topic have to be semantically consistent with each other.",
"Unreasonable structure is related to the extracted hierarchical topic structure.",
"Topics near the root should be more general, while topics close to the leaves should be more specific.",
"Furthermore, child topics must be coherent with their corresponding parent topics, guaranteeing a reasonable hierarchical structure.",
"Finally,",
"(iii) the number of topics in each hierarchy level is usually unknown and cannot be previously set to a predefined value since it directly depends on the latent topical distribution of the data.",
"Both supervised and unsupervised approaches have been applied to HTM.",
"Supervised methods use prior knowledge to build the hierarchical tree structure, such as labeled data or linking relationships among documents (Wang et al., 2015).",
"Those strategies are unfeasible when there is no explicit taxonomy or hierarchical scheme to associate with documents or when such an association (a.k.a., labeling) is very cumbersome or costly to obtain.",
"Unsupervised HTM (uHTM) deals with such limitations.",
"uHTM methods do not rely on previous knowledge (such as taxonomies or labeled hierarchies), having the additional challenge of discovering the hierarchy of topics based solely on the data at hand.",
"HTM solutions can also be roughly grouped into non-probabilistic and probabilistic models.",
"In probabilistic strategies, textual data is considered to be ruled by an unknown probability distribution that governs the relationships between documents and topics, hierarchically.",
"The major drawback in this type of approach has to do with the number of parameters in the model, which rapidly grows with the number of documents.",
"This leads to learning inefficiencies and proneness to over-fitting, mainly for short textual data (Tang et al., 2014).",
"To overcome these drawbacks, non-probabilistic models aim at extracting hierarchical topic models through matrix factorization techniques instead of learning probability distributions.",
"Such strategies also pose challenges.",
"They are usually limited to just local information (i.e., data limitation) as they go deeper into the hierarchy when extracting the latent topics.",
"That is, as one moves more in-depth in the hierarchical structure representing the latent topics, the available data rapidly reduces in size, directly impacting the quality of extracted topics (in terms of both coherence and structure reasonableness).",
"Probabilistic models mitigate this phenomenon as they rely on global information when handling the probability distributions(Xu et al., 2018).",
"Because of that, the current main HTM methods are built based on probabilistic methods (Griffiths et al., 2004; Mimno et al., 2007).",
"In this paper, we aim at exploring the best properties of both non-probabilistic and probabilistic strategies while mitigating their main drawbacks.",
"Up to our knowledge, the only work to explore this research venue is (Liu et al., 2018).",
"In that work, the authors explore NMF for solving HTM tasks by enforcing three optimization constraints during matrix factorization: global independence, local independence, and information consistency.",
"Those constraints allow their strategy, named HSOC, to produce hierarchical topics that somehow preserve topic coherence and reasonable hierarchical structures.",
"However, as we shall see in our experiments, HSOC is still not capable of extracting coherent topics when applied to short text data, which is currently prominent on the Web, especially on social network environments.",
"We here propose a distinct approach, taking a data engineering perspective, instead of focusing on the optimization process.",
"More specifically, we explore a matrix factorization solution properly designed to explore global information (akin to probabilistic models) when learning hierarchical topics while ensuring proper topic coherence and structure reasonableness.",
"This strategy allows us to build a data-efficient HTM strategy, less prone to over-fitting that also enjoys the desired properties of topic coherence and reasonable (hierarchical) structure.",
"We do so by applying a matrix factorization method over a richer text representation that encapsulates both, global and semantic information when extracting the hierarchical topics.",
"Recent non-probabilistic methods (Shi et al., 2018; Viegas et al., 2019) have produced top-notch results on traditional TM tasks by taking advantage of semantic similarities obtained from distances between words within an embedding space (Mikolov et al., 2013; Pennington et al., 2014).",
"Our critical insight for HTM was to note that the richer (semantic) representation offered by distributional word embeddings can be readily explored as a global 1 source of information in more profound levels of the hierarchical structure of topics.",
"This insight gives us an essential building block to overcome the challenges of matrix factorization strategies for HTM without the need for additional optimization constraints.",
"In (Viegas et al., 2019), the authors exploit the nearest words of a given pre-trained word embedding to generate meta-words, aka Cluwords , able of expanding and enhancing the document representation in terms of syntactic and semantic information.",
"Such an improved representation is capable of mitigating the drawbacks of using the projected space of word embeddings as well as extracting cohesive topics when applying nonnegative matrix factorization for topic modeling.",
"Motivated by this finding, we here advance the state-of-the-art in HTM, by designing, developing and evaluating an unsupervised non-probabilistic HTM method that exploits CluWords as a key building block for TM when capturing the latent hierarchical structure of topics.",
"We focus on the NMF method for uncovering the latent hierarchy as it is the most effective matrix factorization method for our purposes.",
"Finally, the last aspect needed 1 Distances in the embeddings space are global as they do consider the whole vocabulary and interactions among words in specific contexts.",
"to be addressed for the successful use of NMF for HTM is the definition of the appropriate number of topics k to be extracted.",
"Choosing just a few topics will produce overly broad results while choosing too many will result in over-clustering the data into many redundant, highly-similar topics.",
"Thus, our proposed method uses a stability analysis concept to automatically select the best number of topics for each level of the hierarchy.",
"As we shall see, our approach outperforms HSOC and hLDA (current state-of-the-art) for both small and large text datasets, often by large margins.",
"To summarize, our main contributions are:",
"(i) a novel non-probabilistic HTM strategy CluHTM based on NMF and CluWords that excels on HTM tasks (in both short and large text data) while ensuring topic coherence and reasonable topic hierarchies;",
"(ii) the exploitation in an original way of a cross-level stability analysis metric for defining the number of topics and ultimately the shape' of the hierarchical structure; as far as we know this metric has never been applied with this goal ;",
"(iii) an extensive empirical analysis of our proposal considering twelve datasets and seven state-of-the-art baselines.",
"In our experimental evaluation, CluHTM outperformed the baselines in the vast majority of the cases (In case of NPMI, in all cases), with gains of 500% when compared to hLDA and 549% when compared to HSOC, some of the strongest baselines; and finally,",
"(iv) qualitative and quantitative statistical analyses of the individual components of our solution.",
"Hierarchical Topic Modeling (HTM) can be roughly grouped into supervised and unsupervised methods.",
"Considering the supervised HTM strategies, we here highlight some relevant supervised extensions to the traditional Latent Dirichlet Allocation (LDA) (Blei et al., 2003), a widely used strategy for the topic modeling (TM).",
"LDA assumes a Dirichlet probability distribution over textual data to estimate the probabilities of words for each topic.",
"In (Mcauliffe and Blei, 2008), the authors propose SLDA , a supervised extension of LDA that provides a statistical model for labeled documents.",
"SLDA allows connecting each document to a regression variable to find latent topics that will best predict the response variables for future unlabeled documents.",
"Based on SLDA , Hierarchical Supervised LDA (HSLDA) (Perotte et al., 2011) incorporates the hierarchy of multi-label and pre-labeled data into a single model, thus providing extended prediction capabilities w.r.t., the latent hierarchical topics.",
"The Supervised Nested LDA (SNLDA) (Resnik et al., 2015), also based on SLDA , implements a generative probabilistic strategy where topics are sampled from a probability distribution.",
"SNLDA extends SLDA by assuming that the topics are organized into a tree structure.",
"Although our focus is on unsupervised solutions, we include SLDA , HSLDA and SNLDA as baselines in our experimental evaluation.",
"We now turn our attention to unsupervised HTM strategies, in which a hierarchical structure is learned during topic extraction.",
"In (Mimno et al., 2007) the authors propose Hierarchical Pachinko Allocation Model (hPAM), an extension of Pachinko Allocation (PAM) (Li and McCallum, 2006).",
"In PAM, documents are a mix of distributions over an individual topic set, using a directed acyclic graph to represent the co-occurrences of topics.",
"Each node in such a graph represents a Dirichlet distribution.",
"At the highest level of PAM, there is only a single node, where the lowest levels represent a distribution between nodes of the next higher level.",
"In hPAM, each node is associated with a distribution over the vocabulary of documents.",
"In (Griffiths et al., 2004), the authors propose the hLDA algorithm, which is also an expansion of LDA, being considered state-of-the-art in HTM.",
"In hLDA, in addition to using the text Dirichlet distribution, the nested Chinese Restaurant Process (nCRP) is used to generate a hierarchical tree.",
"NCRP needs two parameters: the tree level and a parameter.",
"At each node of the tree, a document can belong to a path or create a new tree path with probability controlled by .",
"More recently, in (Xu et al., 2018), the authors propose the unsupervised HTM strategy named a knowledge-based hierarchical topic model ( KHTM ).",
"This method is based on hLDA and, as such, models a generative process whose parameter estimation strategy is based on Gibbs sampling.",
"KHTM is able to uncover prior knowledge (such as the semantic correlation among words), organizing them into a hierarchy, consisting of knowledge sets (k-sets).",
"More specifically, the method first generates, through hLDA, an initial set of topics.",
"After comparing pairs of topics, those topics with similarity higher than (a.k.a., k-sets) are then filtered so that the first 20 words of each topic are kept, and the remaining are just discarded.",
"Those extracted k-sets are then used as an extra weight when extracting the final topics.",
"Probably the most similar work to ours is the HSOC strategy, proposed in (Liu et al., 2018), which proposes to use NMF for solving HTM tasks.",
"In order to mitigate the main drawbacks of NMF in the HTM setting 2 , HSOC relies on three optimization constraints to properly drive the matrix factorization operations when uncovering the hierarchical topic structure.",
"Such constraints are global independence, local independence, and information consistency, and allow HSOC to derive hierarchical topics that somehow preserve topic coherence and reasonable hierarchical structures.",
"As it can be observed, almost all models, supervised or unsupervised, are based on LDA.",
"As discussed in Section 1, though matrix factorization strategies normally present better results than Dirichlet strategies in TM tasks, for HTM, the situation is quite different.",
"In fact, matrix factorization methods face difficult challenges in HTM, mainly regarding data size as ones go deeper into the hierarchy.",
"More specifically, at every hierarchical level, a matrix factorization needs to be applied to increasingly smaller data sets, ultimately leading to insufficient data at lower hierarchy levels.",
"These approaches also do not exploit semantics nor any external enrichment, relying only on the statistical information extracted from the dataset.",
"Contrarily, here we propose a new HTM approach, called CluHTM , which exploits externally built word embedding models to incorporate global semantic information into the hierarchical topic tree creation.",
"This brings some important advantages to our proposal in terms of effectiveness, topic coherence, and hierarchy reasonableness altogether.",
"Cluwords (Viegas et al., 2019) combine the traditional Bag of Words (BoW) statistical representation with semantic information related to the words present in the documents.",
"The semantic context is obtained employing a pre-trained word representation, such as Fasttext (Mikolov et al., 2018).",
"Figure 1 presents the process of transforming each original word into a Cluword 2 Namely, the incoherence of topics and unreasonable hierarchical structure caused by the lack of a learned probability distribution that governs the document/topics relationships (cluster of words) representation.",
"First, the strategy uses the information about the dataset, as well as pre-trained word embedding (i.e. Fasttext) to build semantic relationships between a word and its neighbors (described in Section 3.1.1).",
"Next, statistical information on words (e.g., term frequency, document frequency) is extracted from the dataset.",
"Then, both semantic and statistical information are combined to measure the importance of each Cluword as explained in Section 3.1.2.",
"Cluwords enjoy the best of two worlds: it conjugates statistical information on the dataset, which has demonstrated to be very effective, efficient and robust in text applications, enriched with semantic contextual information captured by distributional word embeddings adapted to the dataset by the clusterization process described next.",
"Let W be the set of vectors representing each word t in the dataset vocabulary (represented as V ).",
"Each word t V has a corresponding vector u W .",
"The CluWords representation is defined as in Figure 1. The semantic matrix in the Figure 1 is defined as C R |V||V| , where each dimension has the size of the vocabulary ( |V| ), t (cid:48) represents the rows of C while t represents the columns.",
"Finally, each index C t (cid:48) ,t is computed according to Eq.",
"1. C t (cid:48) ,t = (cid:26) ( u t (cid:48) , u t ) if ( u t (cid:48) , u t ) 0 otherwise , (1) where ( u t (cid:48) , u t ) is the cosine similarity and is a similarity threshold that acts as a regularizer for the representation.",
"Larger values of lead sparser representations.",
"In this notation each column t of the semantic matrix C will be forming a CluWord t and each value of the matrix C t (cid:48) ,t may receive the cosine similarity between the vectors u t (cid:48) and u t in the embedding space W if it is greater than or equal to .",
"In Figure 1, the CluWords representation is defined as the product between the statistical matrix (a.k.a. term-frequency matrix) and semantic matrix C .",
"The statistical matrix ( T F ) can be represented as a T F R |D||V| , where each position T F d,t relates to the frequency of a word t in document d .",
"Thus, given a CluWord (CW) t for a document d , its data representation corresponds to CW d,t = T F d C ,t , where T F d has the term-frequencies of document d , and C ,t is the semantic scores for the CluWord t , according to Eq.",
"1. The TFIDF weighting for a CluWord t in a document d is defined as CW d,t = CW d,t idf t .",
"The IDF component is defined as idf t = log (cid:16) |D| (cid:80) 1 d |D| t,d (cid:17) , where D is the number of documents and t,d is the average of semantic weights of the semantic matrix C for the CluWord t ( C ,t ) that occurs in the vocabulary V d .",
"The average t,d is defined as t,d = 1 | V d,t (cid:48) | (cid:80) t (cid:48) ( V d C ,t ) C t (cid:48) ,t .",
"The Stability measure is motivated by the term-centering approach generally taken in topic modeling strategies, where topics are usually summarized as a truncated set of top words (Greene et al., 2014).",
"The intuition behind this strategy is, given some K topics, to measure whether running multiple random samplings for a topic modeling strategy results in Stability, in terms of p top words extracted from the topics.",
"Given a range of topics [ K min , K max ] , and some topic modeling strategy (on our case, Non-negative Factorization Matrix method), the strategy proceeds as follows.",
"First, it learns a topic model considering the complete data set representation D , which will be used as a reference point ( WD ) for analyzing the Stability afforded by the K topics.",
"Note that the p top words represent each topic.",
"Subsequently, S samples of the data are randomly drawn from D without replacement, forming a subset of D (cid:48) documents.",
"Then, |S| topic models are generated, one for each subsampling ( WS i ).",
"To measure the quality of K topics, the Stability computes the mean agreement among each pair of ( WD , WS i ).",
"The goal is to find the best match between the p top words of the compared topics.",
"The agreement is defined as agree ( W x , W y ) = 1 p (cid:80) pi =1 AJ ( w xi , ( w xi )) , where AJ ( ) is the average Jaccard coefficient used to compare the similarity among the words w and ( ) is the optimal permutation of the words in WS i that can be found in O ( p 3 ) time by solving the minimal weight bipartite matching problem using the Hungarian method (Kuhn, 2010).",
"CluHTM is an iterative method able to automatically define the best number of topics in each hierarchy, given a range of possible number of topics [ K min , K max ] .",
"CluHTM explores Cluwords and Non-negative Matrix Factorization (NMF) (Lee and Seung, 2001), one of the main non-probabilistic strategies.",
"Finally, the Stability method (described in Section 3) is used to select NMF k parameters (a.k.a number of topics).",
"CluHTM has five inputs (Algorithm 1),",
"(i) D max corresponds to the depth down to which we want to extract the hierarchical structure.",
"(ii) K min and K max control the range of some topics, such range will be used in all levels of the hierarchy;",
"(iii) T is the input text data; and",
"(iv) W is the pre-trained word embedding vector space used in the CluWords generation.",
"The output is the hierarchical structure H of p top words for each topic.",
"The method starts by getting the root topic (line 2-3 of Algorithm 1), which is composed of all documents in T .",
"Since the method is iterative, each iteration is controlled by a queue schema to build a hierarchical structure.",
"Thus, at each iteration (line 3), the algorithm produces the CluWords representation for the documents T (cid:48) (line 5), chooses the number of topics, exploiting the Stability measure (line 6), and runs the NMF method (line 7) to extract the p words for each topic in O (line 8).",
"Then, in the loop of line 9, each topic is stored in the queue, as well as the respective documents of each topic.",
"Summarizing, our solution exploits global semantic information (captured by CluWords) within local factorizations, limited by a stability criterion that defines the shape' of the hierarchical structure.",
"Though simple (and original), the combination of these ideas is extremely powerful for solving the HTM task, as we will see next.",
"The primary goal of our solution is to effectively perform hierarchical topic modeling so that more coherent topics can be extracted.",
"To evaluate topic model coherence, we consider 12 real-world datasets as reference.",
"All of them were obtained from previous works in the literature.",
"For all datasets, we performed stopwords removal (using the standard SMART list) and removed words such as adverbs, using the VADER lexicon dictionary (Hutto and Gilbert, 2014), as the vast majority of the essential words for identifying topics are nouns and verbs.",
"These procedures improved both the efficiency and effectiveness of all analyzed strategies.",
"Table 1 provides a summary of the reference datasets, reporting the number of features (words) and documents, as well as the mean number of words per document (density) and the corresponding references.",
"We consider three classes of topic quality metrics based on three criteria:",
"(a) coherence,",
"(b) mutual information, and",
"(c) semantic representation.",
"In this paper, we focus on these three criteria since they are the most used metrics in the literature (Shi et al., 2018).",
"We consider three topic lengths (5, 10 and 20 words) for each parameter in our evaluation, since different lengths may bring different challenges.",
"Regarding the metrics, coherence captures easiness of interpretation by co-occurrence.",
"Words that frequently co-occur in similar contexts in a corpus are easier to correlate since they usually define a more well-defined concept or topic.",
"We employ an improved version of regular coherence (Nikolenko, 2016), called Coherence, defined as c ( t,W t ) = (cid:88) w 1 ,w 2 Wt logd ( w 1 ,w 2 ) + d ( w 1 ) , (2) where d ( w 1) denotes the number of occurrences of w 1 , d ( w 1 , w 2) is the number of documents that contain both w 1 and w 2 together, and is a smoothing factor used for preventing log (0) .",
"Another class of topic quality metrics is based on the notion of pairwise pointwise mutual information (PMI) between the top words in a topic.",
"It captures how much one gains in the information given the occurrence of the other word, taking dependencies between words into consideration.",
"Following a recent work (Nikolenko, 2016), we here compute a normalized version of PMI (NPMI) where, for a given ordered set of top words W t = ( w 1 , ..., w N ) in a topic: NPMI t = (cid:88) i<j log p ( w i ,w j ) p ( w i ) p ( w j ) logp ( w i , w j ) .",
"Finally, the third class of metrics is based on the distributed word representations introduced in (Nikolenko, 2016).",
"The intuition is that, in a well-defined topic, the words should be semantically similar, or at least related, to be easily interpreted by humans.",
"In a d -dimensional vector space model in which every vocabulary word w W has been assigned to a vector v w R d , the vectors corresponding to the top words in a topic should be close to each other.",
"In (Nikolenko, 2016), the authors define topic quality as the average distance between the top words in the topic, as follows: W 2 V L 1 = 1 | W t | ( | W t | 1) (cid:88) w 1 (cid:54) = w 2 W t d cos ( v w 1 , v w 2 ) .",
"Generally speaking, let d ( w 1 , w 2 ) be a distance function in R d .",
"In this case, larger d ( w 1 , w 2 ) corresponds to worse topics (with words not as localized as in topics with smaller average distances).",
"In (Nikolenko, 2016), the authors suggest four different distance metrics, with cosine distance achieving the best results.",
"We here also employ the cosine distance, defined as d cos ( x, y ) = 1 x T y .",
"We compare our approach described in Section 4, with seven hierarchical topic model strategies marked in bold in Section 2. For the input parameters of CluHTM (Algorithm 1), we set K min = 5 , K max =25, R = 10 and D max = 3 .",
"We define K min through empirical experiments, and the K max was defined according to the number of topics exploited in (Viegas et al., 2019).",
"For the baseline methods, we adopt the parameters suggested by their own works.",
"We assess the statistical significance of our results employing a paired t-test with 95% confidence and Holm-Bonferroni correction to account for multiple tests.",
"We start by comparing CluHTM against four state-of-the-art uHTM baselines considering the twelve reference datasets.",
"Three hierarchical levels for each strategy are used in this comparison.",
"In Figures 2, 4 and 3 we contrast the results of our proposed CluHTM and the reference strategies, considering the NPMI, W2V-L1, and Coherence metrics.",
"Considering NPMI, the most important metric to evaluate the quality of topics (Nikolenko, 2016), we can see in Figure 2 that our strategy outperforms all baselines in all datasets by large margins, with gains over 500% against some of the strongest ones.",
"Some of these results are the highest in terms of NMPI ever reported for several of these datasets.",
"Considering the Coherence scores (Figure 3), our strategy achieves the single best results in 2 out of 12 datasets, with gains up to 58% and 92% against the most robust baseline (hPAM), tying in 8 out 12 and losing two times for hLDA and hPAM.",
"Similar results can be observed for the W2V-L1 metric (Figure 4) CluHTM ties in 10 out of 12 results, with one win and one loss for KHTM.",
"As we will see, even with very few losses in these metrics, our method proves to be Dataset CluHTM SLDA SNLDA HSLDA Coherence 20News 62 .",
"We now turn our attention to the effectiveness of our proposal when compared to the supervised HTM strategies.",
"We consider the 20News and ACM datasets for which have a ground truth for supervised strategies.",
"Table 2 presents the results considering Coherence, W2V-L1, and NPMI.",
"The statistical significance tests ensure that the best results, marked in (cid:78) , are superior to others.",
"The statistically equivalent results are marked in while statistically significant losses are marked in (cid:72) .",
"Once again, in Table 2, our proposed strategy achieves the best results in 4 out of 6 cases, tying with SNLDA and HSLDA in ACM and loosing only to SLDA in 20News, both considering the W2V-L1 metric.",
"It is important to remind that, differently from these supervised baselines, our method does not use any privileged class information to build the hierarchical structure nor to extract topics.",
"We provide a comparative table with all experimental results 5 , including the results for each extracted level of the hierarchical structure.",
"We summarize our findings regarding the behavior of all analyzed strategies in the 12 datasets, counting the number of times each strategy figured out as a top performer 6 .",
"The summarized results can be seen in Table 3. Our proposal is in considerable advantage over the other explored baselines, being 5 see Appendix, Section Supplementary Results for detailed results 6 If two approaches are statistically tied as top performers in the same dataset, both will be counted.",
"the strategy of choice in the vast majority of cases.",
"Overall, considering a universe of 36 experimental results (the combination of 3 evaluation metrics over 12 datasets), we obtained the best results (33 best performances), with the most robust baseline hPAM coming far away, with just 17 top performances.",
"Another interesting observation is that, in terms of NPMI, CluHTM wins in all cases.",
"Details of this analysis are summarized in the Appendix.",
"One important open question remains to be answered: To what extent the characteristics of the dataset impact the quality of the topics generated by our strategy?",
"To answer this question, we provide a quantitative analysis regarding the hierarchical topic modeling effectiveness, measured by the NPMI score.",
"We start our analysis by quantifying the effects of the parameters of interest (i.e., factors).",
"Those factors might affect the performance of the system under study, while also determining whether the observed variations are due to significant effects (e.g., measurement errors, the inherent variability of the process being analyzed (Jain, 1991)).",
"To this end, we adopt a full factorial design , which uses all the possible combinations of the levels of the factors in each complete experiment.",
"The first factor is the dataset.",
"The idea is to analyze the impact of textual properties such as dataset size, density, dimensionality, etc.",
"Thus, each level of this factor is a dataset in Table 1. The second factor is the HTM strategies evaluated in the previous Section.",
"In this factor, we intend to assess the impact of the extracted topics, as well as the hierarchical structure.",
"Each level of this factor is an evaluated HTM strategy.",
"All the possible combination between these two factors will be measured by the average of NMPI among topics of the hierarchical structure.",
"factor.",
"From the effects, we can observe that the CluHTM impact in the NPMI value is 99 .",
"38% higher than the overall average.",
"We can also see that hLDA has an NPMI score higher than the overall average ( 18 . 67% ) and HSOC has an NPMI score of approximately 64 .",
"44% smaller than overall NMPI.",
"Concerning the datasets' effects, the full factorial design experiment tells us that they have a small impact on the variation concerning the obtained average NPMI scores.",
"We can also observe that the dataset with the most variation of NPMI is InfoVis-Vast, with a score of 29 .",
"97% smaller than the overall NPMI.",
"We perform a ANOVA test to assess whether the studied factors are indeed statistically significant and conclude, with 99% confidence according to the F-test, that the choice of algorithm (factor B) explains approximately 90% of the obtained NPMI values.",
"We can also conclude that the investigated properties of the textual data (factor A), as well as the experimental errors, have a small influence on the experimental results.",
"Summarizing, we can conclude that the characteristics of the datasets have a lower impact on the results and that the impact of CluHTM is consistent across all of them.",
"The ANOVA test details are presented in Table 5. 6 Conclusion We advanced the state-of-the-art in hierarchical topic modeling (HTM) by designing, implementing and evaluation a novel unsupervised non-probabilistic method CluHTM.",
"Our new method exploits a more elaborate (global) semantic data representation CluWords as well as an original application of a stability measure to define the shape of the hierarchy.",
"CLUHTM excelled in terms of effectiveness, being around two times more effective than the strongest state-of-the-art baselines, considering all tested datasets and evaluation metrics.",
"The overall gains over some of these strongest baselines are higher than 500% in some datasets.",
"We also showed that CluHTM results are consistent across most datasets, independently of the data characteristics and idiosyncrasies.",
"As future work, we intend to apply CluHTM in other representative applications on the Web, such as hierarchical classification by devising a supervised version of CluHTM.",
"We also intend to incorporate some type of attention mechanism into our methods to better understand which Cluwords are more important to define certain topics.",
"This work is partially supported by CAPES, CNPq, Finep, Fapemig, Mundiale, Astrein, projects InWeb and MASWeb."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"method",
"result",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other"
] |
[
"One of the main bottlenecks in developing discourse dependency parsers is the lack of annotated training data.",
"A potential solution is to utilize abundant unlabeled data by using unsupervised techniques, but there is so far little research in unsupervised discourse dependency parsing.",
"Fortunately, unsupervised syntactic dependency parsing has been studied for decades, which could potentially be adapted for discourse parsing.",
"In this paper, we propose a simple yet effective method to adapt unsupervised syntactic dependency parsing methodology for unsupervised discourse dependency parsing.",
"We apply the method to adapt two state-of-the-art unsupervised syntactic dependency parsing methods.",
"Experimental results demonstrate that our adaptation is effective.",
"Moreover, we extend the adapted methods to the semi-supervised and supervised setting and surprisingly, we find that they outperform previous methods specially designed for supervised discourse parsing.",
"Further analysis shows our adaptations result in superiority not only in parsing accuracy but also in time and space efficiency.",
"Discourse parsing, aiming to find how the text spans in a document relate to each other, benefits various down-stream tasks, such as machine translation evaluation (Guzman et al., 2014; Joty et al., 2014), summarization (Marcu, 2000; Hirao et al., 2013), sentiment analysis (Bhatia et al., 2015; Huber and Carenini, 2020) and automated essay scoring (Miltsakaki and Kukich, 2004; Burstein et al., 2013).",
"Researchers have made impressive progress on discourse parsing from the constituency perspective, which presents discourse structures as constituency trees (Ji and Eisenstein, 2014; Feng and Hirst, 2014; Joty et al., 2015; Nishida and * Corresponding author. Nakayama, 2020).",
"However, as demonstrated by Morey et al. (2018), discourse structure can also be formulated as a dependency structure.",
"Besides that, there might exist ambiguous parsing in terms of the constituency perspective (Morey et al., 2018).",
"All of these suggest that dependency discourse parsing is a different promising approach for discourse parsing.",
"One of the main bottlenecks in developing discourse dependency parsing methods is the lack of annotated training data since the labeling effort is labor-intensive and time-consuming, and needs well-trained experts with linguistic knowledge (Marcu et al., 1999).",
"This problem can be tackled by employing unsupervised and semi-supervised methods that can utilize unlabeled data.",
"However, while unsupervised methodology has been studied for decades in syntactic dependency parsing, there is little attention paid to the counterpart in discourse dependency parsing.",
"Considering the similarity between syntactic and discourse dependency parsing, it is natural to suggest such methodology can be adapted from the former to the latter.",
"In this paper, we propose a simple yet effective adaptation method that can be readily applied to different unsupervised syntactic dependency parsing approaches.",
"Adaptation from syntactic dependency parsing to discourse dependency parsing has two challenges.",
"First, unlike syntactic parsing which has a finite vocabulary, in discourse parsing, the number of elementary discourse units (EDUs) is unlimited.",
"This makes it difficult if not impossible to directly apply syntactic approaches requiring enumeration of words or word categories to discourse parsing.",
"Second, in a discourse dependency parse tree, the dependencies within a sentence or a paragraph often form a complete subtree.",
"There is no correspondence to this constraint in syntactic parsing approaches.",
"To address these two challenges, we cluster the EDUs to produce clusters resembling Part-Of-Speech (POS) tags in syntactic parsing and we introduce the Hierarchical Eisner algorithm that finds the optimal parse tree conforming to the constraint.",
"We applied our adaptation method to two state-of-the-art unsupervised syntactic dependency parsing models: Neural Conditional Random Field Autoencoder (NCRFAE, Li and Tu (2020)) and Variational Variant of Discriminative Neural Dependency Model with Valences (V-DNDMV, Han et al. (2019)).",
"In our experiments, the adapted models performs better than the baseline on both RST Discourse Treebank (RST-DT, Carlson et al. (2001)) and SciDTB (Yang and Li, 2018) in the unsupervised setting.",
"When we extend the two models to the semi-supervised and supervised setting, we find they can outperform previous methods specially designed for supervised discourse parsing.",
"Further analysis indicates that the Hierarchical Eisner algorithm shows superiority not only in parsing accuracy but also in time and space efficiency.",
"Its empirical time and space complexity is close to O ( n 2 ) with n being the number of EDUs, while the unconstrained algorithm adopted by most previous work has a complexity of O ( n 3 ) .",
"The code and trained models can be found at: https://github.",
"com/Ehaschia/DiscourseDependencyParsing .",
"Unsupervised syntactic dependency parsing Unsupervised syntactic dependency parsing is the task to find syntactic dependency relations between words in sentences without guidance from annotations.",
"The most popular approaches to this task are Dependency Model with Valences (DMV, Klein and Manning (2004)), a generative model learning the grammar from POS tags for dependency predictions, and its extensions.",
"Jiang et al. (2016) employ neural networks to capture the similarities between POS tags ignored by vanilla DMV and Han et al. (2019) further amend the former with discriminative information obtained from an additional encoding network.",
"Besides, there are also some discriminative approaches modeling the conditional probability or score of the dependency tree given the sentence, such as the CRF autoencoder method proposed by Cai et al. (2017).",
"Discourse dependency parsing There is limited work focusing on discourse dependency parsing.",
"Li et al. (2014) proposes an algorithm to convert constituency RST tree to dependency structure.",
"In their algorithm, each non-terminal is assigned with a head EDU, which is the head EDU of its leftmost nucleus child.",
"Then, a dependency relation is created for each non-terminal from its head to its dependent, in a procedure similar to those designed for syntactic parsing.",
"Hirao et al. (2013) proposes another method that differs from the previous one in the processing of multinuclear relations.",
"Yoshida et al. (2014) proposes a dependency parser built around a Maximum Spanning Tree decoder and trains on dependency trees converted from RST-DT.",
"Their parser achieved better performance on the summarization task than a similar constituency-based parser.",
"Morey et al. (2018) reviews the RST discourse parsing from the dependency perspective.",
"They adapt the the best discourse constituency parsing models until 2018 to the dependency task.",
"Yang and Li (2018) constructs a discourse dependency treebank SciDTB for scientific abstracts.",
"To the best of our knowledge, we are the first to investigate unsupervised and semi-supervised discourse dependency parsing.",
"Unsupervised Constituent Discourse Parsing Kobayashi et al. (2019) propose two unsupervised methods that build unlabeled constituent discourse trees by using the CKY dynamic programming algorithm.",
"Their methods build the optimal tree in terms of a similarity (dissimilarity) score function that is defined for merging (splitting) text spans into larger (smaller) ones.",
"Nishida et al. (2020) use Viterbi EM with a margin-based criterion to train a span-based neural unsupervised constituency discourse parser.",
"The performance of these unsupervised methods is close to that of previous supervised parsers.",
"We propose an adaptation method that can be readily integrated with different unsupervised syntactic dependency parsing approaches.",
"First, we cluster the element discourse units (EDU) to produce clusters resembling POS tags or words used in syntactic parsing.",
"This is necessary because many unsupervised syntactic parsers require enumeration of words or word categories, typically in modeling multinomial distributions as we shall see in Section 4.",
"While EDUs, which are sequences of words, cannot be enumerated, its clusters can.",
"During parsing, we apply the Hierarchical Eisner algorithm used for parse tree, a novel modified ver-Figure 1: [ THE FINANCIAL ACCOUNTING STANDARDS BOARD'S coming rule on disclosure ] e 1 [ involving financial instruments ] e 2 [ will be effective for financial statements with fiscal years ] e 3 [ ending after June 15, 1990. ] e 4 [ The date was misstated in Friday's edition . ] e 5 [ (See: FASB Plans Rule on Financial Risk of Instruments ] e 6 [ WSJ Oct. 27, 1989) ] e 7 sion of the classic Eisner algorithm, used for parse tree to produce discourse dependency parse trees that conform to the constraint that every sentence or paragraph should correspond to a complete subtree.",
"Given an input document represented as an EDU sequence x 1 , x 2 , . . . , x n , we can use word embedding or context sensitive word embedding to get the vector representation x i of the i -th EDU x i",
"Specifically, we use BERT (Devlin et al., 2019) to encode each word.",
"Let w i be the encoding of the i -th word in the document.",
"For an EDU x i spanning from word position b to e , we follow Toshniwal et al. (2020) and concatenate the encoding of the endpoints to form its representation: x i = [ w b ; w e ] .",
"With the representations of all EDUs from the whole training corpus obtained, we use K-Means (Lloyd, 1982) to cluster them.",
"Let c i be the cluster label of x i .",
"The Eisner algorithm (Eisner, 1996) is a dynamic programming algorithm widely used to find the optimal syntactic dependency parse tree.",
"The basic idea of it is to parse the left and right dependents of an token independently and combine them at a later stage.",
"Algorithm 1 shows the pseudo-code of the Eisner algorithm.",
"Here C i j represents a complete span , which consists of a head token i and all of its descendants on one side, and I i j represent an incomplete span , which consists of a head i and its partial descendants on one side and can be extended by adding more descendants to that side.",
"Discourse dependency parse trees, however, Algorithm 1 Eisner Algorithm 1: Inputs: score matrix s R n n 2: Initialize: C = {} , I = {} , C i i = 0 , i = 1 , . . . , n 3: for l = 1 , ..., n do (cid:46) span length 4: for i = 1 , ...n l do (cid:46) span start index 5: j = i + l (cid:46) span end index 6: I i j = max i k j ( s ij + C i k + C k +1 j ) 7: I i j = max i k j ( s ji + C i k + C k +1 j ) 8: C i j = max i k j ( I i k + C k j ) 9: C i j = max i k j ( C k i + I j k ) 10: end for 11: end for Ratio Train Dev.",
"demonstrate structural characteristics not taken into account by the Eisner algorithm.",
"Specifically, a document has a hierarchical structure which divides the document into paragraphs, each paragraph into sentences, and finally each sentence into EDUs, and the discourse parse tree should be consistent with this hierarchical structure.",
"Equivalently, in a discourse parse tree, every sentence or paragraph should be exactly covered by a complete subtree, like Figure 1.",
"We empirically find that this constraint is satisfied by most of the gold discourse parses in the RST Discourse Treebank (RST-DT, Carlson et al. (2001)) and SciDTB (Yang and Li, 2018) datasets (Table 1).",
"We therefore propose the Hierarchical Eisner algorithm, a novel modification to the Eisner algorithm that incorporates the constraint.",
"Our new algorithm has almost the same state transition formulas as the Eisner algorithm except for a few changes brought by the hierarchical constraint.",
"Concretely, our algorithm finds the optimal parse tree in a bottom-up way and divides the process into 3 steps: intra-sentence parsing, intra-paragraph parsing, and intra-document parsing.",
"In the intra-sentence parsing step, we run the original Eisner algorithm, except that we need not to form a tree.",
"Then in the Algorithm 2 Modification to Algorithm 1 6: I i j = max i k j ( s ij + C i k + C k +1 j ) 7: I i j = max i k j ( s ji + C i k + C k +1 j ) 8: C i j = max i k j j E ( I i k + C k j ) (cid:46) Here E is a set of the index of the end boundary of sentences.",
"intra-paragraph step, we combine all intra-sentence spans in the paragraph.",
"Under the constraint that there can only be one EDU in every sentence whose head is not belong to this sentence.",
"To achieve that, we modify the state transition equations (step 6-9 in Algorithm 1) to prune invalid arcs.",
"Figure 2 shows some cases during merge across sentence spans.",
"Case 1 are valid because the constraint is satisfied.",
"Case 2 is invalid because the head of EDU e 6 can not be e 4 or e 5 hence the constraint is violated.",
"From these cases, we can find that for incomplete span I i k and complete span C k j across sentences, we only merge them when j is at the end boundary of a sentence as Algorithm 2 shows.",
"After the intra-paragraph step, we move to the intra-document step to combine paragraph-level spans following the same procedure as in the intra-paragraph step and form the final document-level tree.",
"Our method has lower time complexity than the original Eisner algorithm.",
"Suppose a document has k p paragraphs, each paragraph has k s sentences and each sentence has k e EDUs.",
"The time complexity of the original Eisner algorithm is O ( k 3 p k 3 s k 3 e ) while the time complexity of our Hierarchical Eisner algorithm is O ( k 2 p k 3 s k 3 e ) .",
"We adapt two current state-of-the-art models in unsupervised syntactic dependency parsing for discourse parsing.",
"One is Neural CRF Autoencoder (NCRFAE, Li and Tu (2020); Cai et al. (2017)), a discriminative model, and the other is : Variational Variant of DNDMV (V-DNDMV, Han et al. (2019)), a generative model.",
"A CRF autoencoder (Ammar et al., 2014) consists of an encoder and a decoder.",
"The encoder predicts a hidden structure, such as a discourse dependency tree in our task, from the input and the decoder tries to reconstruct the input from the hidden structure.",
"In a neuralized CRF autoencoder, we employ neural networks as the encoder and/or decoder.",
"We use the widely used biaffine dependency parser (Dozat and Manning, 2017) as the encoder to compute the hidden structure distribution P ( y | x ) , parameterized with .",
"Here y represents the hidden structure and x is input document.",
"We feed the input document x into a Bi-LSTM network to produce the contextual representation of each EDU segmentation r i , and then feed r i to two MLP networks to produce two continuous vectors v ( head ) i and v ( dep ) i , representing i -th EDU segmentation being used as dependency head and dependent respectively.",
"A biaffine function is used to compute the score matrix s .",
"Each matrix element s ij , the score for a dependency arc pointing from x i to x j , is computed as follows: s ij = v ( head ) (cid:62) i Wv ( dep ) i + b (1) where W is the parameter matrix and b is the bias.",
"Following Dozat and Manning (2017) we formulate P ( y | x ) as a head selection problem process that selects the dependency head of each EDU independently: P ( y | x ) = (cid:89) i P ( h i | x ) (2) where h i is the index of the head of EDU x i and P ( h i | x ) is computed by softmax function with score s ij : P ( h i = j | x ) = e s ji (cid:80) nk =1 e s ki (3) The decoder parameterized with computes P ( x | y ) , the probability of the reconstructed document x given the parse tree y .",
"Following Cai et al. (2017) and Li and Tu (2020), we independently predict each EDU x i from its head specified by y .",
"Since EDUs cannot be enumerated, we reformulate the process as predicting the EDU cluster c i given its dependency head cluster c h i .",
"Our decoder simply specifies a categorical distribution P ( c i | c h i ) for each possible EDU cluster and compute the reconstruction probability as follows: P ( x | y ) = (cid:89) i P ( c i | c h i ) (4) We achieve the final reconstruction distribution by cascading the encoder and decoder distribution: P , ( x , y | x ) = P ( x | y ) P ( y | x ) (5) The best parsing is obtained by maximizing P , ( x , y | x ) : y = arg max y P , ( x , y | x ) (6) We consider the general case of training the CRF autoencoder with dataset D containing both labelled data L and unlabelled data U .",
"Purely supervised or unsupervised learning can be seen as special cases of this setting.",
"The loss function L ( D ) consists of a labelled loss L l ( L ) and an unlabelled loss L u ( U ) : L ( D ) = L l ( L ) + (1 ) L u ( U ) (7) where is the hyperparameter weighting the importance of the two parts.",
"For the labelled data, where the gold parse trees y are known, labelled loss is: L l ( L ) = (cid:88) x L log P , ( x , y | x ) (8) For the unlabelled data where the gold parses are unknown, the unlabelled loss is: L u ( U ) = (cid:88) x U max y Y ( x ) log P , ( x , y | x ) (9) We optimize the encoder parameter and decoder parameter together with gradient descent methods.",
"V-DNDMV is a variational autoencoder model composed of both an encoder and a decoder.",
"The encoder is a Bi-LSTM that takes the input document and produces parameters of a Gaussian distribution from which a continuous vector s summarizing the document sampled.",
"The decoder models the joint probability of the document and its discourse dependency tree condition on s with a generative grammar.",
"The grammar is defined on a finite set of discrete symbols, so in our adapted model, input documents are represented by EDU clusters instead of EDUs that are infinite in number.",
"There are three types of grammar rules, each associated with a set of probabilistic distributions: ROOT , CHILD and DECISION .",
"To generate a document, we firstly sample from the ROOT distribution PROOT ( chd | s ) to determine the cluster label of the head EDU of the document and then recursively decide whether to generate a new child EDU cluster and what child EDU cluster to generate by sampling from the DECISION distribution PDECISION ( dec | h, dir, val, s ) and CHILD distribution PCHILD ( chd | h, dir, val, s ) .",
"dir denotes the generation direction (i.e, left or right), val is a binary variable denoting whether the current EDU already has a child in the direction dir or not.",
"dec is a binary variable indicating whether to continue generating a child EDU, and h and chd denote the parent and child EDU cluster respectively.",
"We use neural networks to calculate these distributions.",
"The input of the networks is the continuous vector or matrix representations of grammar rule components such as h, chd, val and dir as well as document vector s produced by the encoder.",
"The training objective for learning the model is the probability of the training data.",
"The intermediate continuous vector s and the hidden variable representing the dependency tree are both marginalized.",
"Since the marginalized probability cannot be calculated exactly, V-DNDMV maximizes the Evidence Lower Bound (ELBO), a lower bound of the marginalized probability.",
"ELBO consists of the conditional likelihood of the training data and an regularisation term given by the KL divergence between P ( s | x ) and P ( s ) (which is a standard Gaussian).",
"The conditional likelihood is shown as follows: L () = 1 NN (cid:88) i =1 (cid:88) y ( i ) Y ( x ( i ) ) log P ( x ( i ) , y ( i ) | s ( i ) ) (10) Here N is the number of training samples, y is the dependency tree and Y ( x ) is the set of all possible dependency tree in x .",
"is the parameters of the neural networks.",
"We can rewrite the conditional probability as following: P ( x , y | s ) = (cid:89) r ( x , y ) P ( r | s ) (11) where r is the grammar rule involved in generating x along with y .",
"We optimize ELBO using the expectation-maximization (EM) algorithm, alternating the E-step and the M-step.",
"In the E-step, we fix rule parameters and use our Hierarchical Eisner algorithm to compute the expectation of possible dependency tree y , which gives the expected count of rules used in the training samples.",
"In the M-step, expected count of rules computed in the E-step is used to train the prediction neural networks with gradient descent methods.",
"The regularisation term is also optimized using gradient descent methods in the M-step.",
"After training, the parsing result y of a new test case x is obtained as: y = arg max y Y ( x ) P ( x , y | s ) (12) 5 Experiment 5.1 Setting Data We evaluate the performance of our models on the RST Discourse Treebank * (RST-DT, Carlson et al. (2001)) and SciDTB (Yang and Li, 2018).",
"RST-DT consists of Wall Street Journal articles manually annotated with RST structures (Mann and Thompson, 1988).",
"We use the method proposed by Li et al. (2014) to convert the RST structure samples into dependency structures.",
"SciDTB consists of scientific abstracts from ACL Anthology annotated with dependency structures.",
"Hyper-parameter For our NCRFAE model, we adopt the hyper-parameters of Li and Tu (2020).",
"For our V-NDNMV model we adopt the hyper-parameters of Han et al. (2019).",
"We use Adam (Kingma and Ba, 2015) to optimize our objective functions.",
"Experimental details are provided in Appendix A. 5.2 Main Result We compared our methods with the following baselines: Right Branching (RB) is a rule based method.",
"Given a sequence of elements (i.e., EDUs or sub-trees), RB generates a left to right chain structure, like x 1 x 2 , x 2 x 3 .",
"In order to develop a strong baseline, we include the hierarchical constraint introduced in Section 3.2 in this procedure.",
"That is, we first build sentence-level discourse trees using the right branching method based on sentence segmentation.",
"Then we build paragraph-level trees using the right branching method to form a left to right chain of sentence-level subtrees.",
"Finally we obtain document-level trees in the same way.",
"Since this method has three stages, we call it RB RB RB .",
"This simple procedure forms a strong baseline in terms of performance.",
"As Nishida and Nakayama (2020) reports, the unlabeled F1 score of constituent structures of RB RB RB reaches 79.9 on RST-DT.",
"Correspondingly, the performance of the supervised method proposed by (Joty et al., 2015) is 82.5.",
"NISHIDA20 is a neural model for unsupervised discourse constituency parsing proposed by Nishida and Nakayama (2020).",
"This model runs a CKY parser that uses a Bi-LSTM model to learn representations of text spans, complemented with lexical, syntactic and structural features.",
"We convert its result to dependency structure using the same conversation method of Li et al. (2014).",
"To make a fair comparison, we use RB RB RB to initialize their model instead of RB RB RB as in their paper, where RB means using predicted syntactic structures for initialization at the sentence level.",
"Compared with baselines , our two adapted models NCRFAE and V-DNDMV both achieve better performance on the two datasets.",
"Results also show that the generative model V-DNDMV is better than the discriminatve model NCRFAE in the unsupervised setting.",
"We also investigate the semi-supervised setting SciDTB RST-DT RB RB RB 52.5 43.9 NISHIDA20 -41.9 Adapted V-DNDMV 54.4 44.2 Adapted NCRFAE 53.3 44.0 Table 2: Unsupervised discourse dependency parsing results on RST-DT and SciDTB.",
"on the SciDTB dataset of our adapted models with varied ratios of labeled/unlabeled data.",
"Experimental results are shown in Figure 3, which indicate that NCRFAE outperforms V-DNDMV for all the ratios.",
"Even when trained with only a few labeled data (0.01 of labeled data in SciDTB, only about 7 samples), the discriminative model already outperforms the generative model significantly.",
"Besides that, we also find our semi-supervised methods reach higher UAS scores than their supervised versions (trained with labeled data only) for all the labeled/unlabeled data ratios.",
"Inspired by the promising results in the semi-supervised setting, we also investigate the performance of our adapted NCRFAE and V-DNDMV in the fully supervised setting.",
"The results are shown in Table 3.",
"We evaluate our models on the RST-DT and SciDTB datasets and compare them with eight models.",
"NIVRE04 (Nivre et al., 2004) and WANG17 (Wang et al., 2017) are two transition-based models for dependency parsing.",
"Yang and Li (2018) adapts them to discourse dependency parsing.",
"FENG14 (Feng and Hirst, 2014), JI14 We correct their evaluation metrics, so the result is different from the original paper (Li et al., 2014).",
"(Ji and Eisenstein, 2014), JOTY15 (Joty et al., 2015) and BRAUD17 (Braud et al., 2017) are methods for discourse constituent parsing and they are adapted for discourse dependency parsing by Morey et al. (2018).",
"LI14 (Li et al., 2014) and MOREY18 (Morey et al., 2018) are graph-based and transition-based methods specially designed for discourse dependency parsing, respectively.",
"These models are statistical or simple neural models, and they do not use pretrained language models (like BERT, ELMo (Peters et al., 2018)) to extract features.",
"As Table 3 shows, the performance of our NCRFAE is significantly better than the baseline models.",
"Especially, the UAS and LAS of NCRFAE are 8.9 points and 11.5 points higher than the best baseline models on the SciDTB dataset, respectively.",
"Besides that, we find that V-DNDMV also beats baselines on the SciDTB dataset and reaches comparable results on RST-DT.",
"We also test our approaches without using BERT and find that they still outperform the baselines.",
"For example, the performance of NCRFAE with GloVe (Pennington et al., 2014) on Scidtb averaged over 5 runs is: UAS: 73.9 LAS: 55.5.",
"These results again give evidence for our success in adapting unsupervised syntactic dependency parsing methods for discourse dependency parsing as the adapted methods not only work in the unsupervised setting, but also reach state-of-the-art in the supervised setting.",
"As for the performance gap between V-DNDMV and NCRFAE, we believe that the main reason is their different abilities to extract contextual features from the input text for the parsing task.",
"As a generative model, the decoder of V-DNDMV follows Figure 4: Analysis of time and space cost in running our hierarchical Eisner and traditional Eisner algorithm on RST-DT dataset against document length.Left: time cost.",
"a strong assumption that each token in the input text is generated independently, which prevents the contextual features from being directly used.",
"Instead, contextual features are mixed with other information in the document representation which acts as the condition of the generation process in the model.",
"NCRFAE, on the other hand, employs a discriminative parser to leverage contextual features for dependency structure prediction directly.",
"Thus, as long as there is sufficient labeled data, NCRFAE can achieve much better results than V-DNDMV.",
"We have observed a similar phenomenon in syntactic parsing.",
"Significance test We investigate the significance of the performance improvement in every setting.",
"For unsupervised parsing, we perform a t-test between the strongest baseline RB RB RB and V-DNDMV.",
"The t-value and p-value calculated on 10 runs are 2.86 and 0.00104, which shows the significance of the improvement.",
"For the semi-supervised results, we also perform significance tests between the semi-supervised and supervised-only results.",
"The results show that our semi-supervised method significantly outperforms the supervised-only method.",
"For example, on the 0.5:0.5 setting, the t-value is 2.13 and the p-value is 0.04767.",
"For the fully supervised setting, due to a lack of code from previous work, it is currently difficult for us to carry out a significance analysis.",
"Instead, we show that our models are very stable and consistently outperform the baselines by running our models for 10-times.",
"For example, our NCRFAE UAS score is 78.950.29 on the Scidtb dataset.",
"In the left part of Figure 4 we show the curves of the time cost of the hierarchical and traditional Eisner algorithms against the RST-DT document length.",
"The experiments are run on servers equipped with NVIDIA Titan V GPUs.",
"We can observe clearly that the curve of the Hierarchical Eisner algorithm always stays far below that of the Eisner algorithm, which verifies our theoretical analysis on the time complexity of the hierarchical Eisner algorithm in section 3.2.",
"The right part of Figure 4 demonstrates a similar phenomenon where we illustrate the memory usage of the hierarchical and traditional Eisner algorithms against the training document length in the same computing environment.",
"From the curves of these two figures we can conclude that our Hierarchical Eisner algorithm has advantage over the traditional one in both time and space efficiencies.",
"Besides the superiority in computational effi-ciency, our experiments also indicate that our Hierarchical Eisner algorithm can achieve better performance than the traditional one.",
"With other conditions fixed, the UAS produced by Hierarchical Eisner is 79.1 in the task of supervised discourse parsing on the SciDTB dataset while the corresponding result of the Eisner algorithm is 78.6.",
"To explore the suitable number of clusters of EDUs, we evaluate our NCRFAE model with different cluster numbers from 10 to 100.",
"As table 4 shows, there is an upward trend while the number of clusters increases from 10 to 50.",
"After reaching the peak, the UAS decreases as the number of cluster continues to increase.",
"We thus choose 50 for our experiments.",
"In order to inspect if there exist any coherent relations between the clusters of EDUs obtained for",
"adaptation in discourse parsing and the labels of dependency arcs, similar to that between POS tags and syntactic dependency labels, we compute the co-appearance distribution of cluster labels and dependency arc labels.",
"In Figure 5, we show the probabilities of the clusters being used as heads p head ( c k | r m ) and children p child ( c k | r m ) given different dependency types respectively.",
"Here c k and r m represent different type of clusters and relations.",
"We cluster EDUs to 10 clusters and only show a subset of them.",
"Detailed heat-map can be found in Appendix B. By observing the two heat-maps, we notice obvious trends that for each dependency arc label, the co-appearance probabilities are concentrated at certain cluster labels.",
"For example, when the cluster is used as dependency heads, more than 60% of the co-appearance probability for arc label COMPARISON and SAME-UNIT is concentrated at cluster type 9 and 6 respectively; when the cluster is used as dependency children, cluster type 1 receives more than 40% of the co-appearance probability for certain arc labels.",
"The property displayed by the adaptation clusters is very similar to that of POS tags, which justifies our clustering strategy adopted for discourse parsing.",
"To further quantify the coherence between the adaptation clusters and dependency arcs, we evaluate the mutual information between two discrete random variables in the training set of SciDTB: one is the tuple consists of two cluster labels for a pair of EDUs in the training sample, representing dependency head and child respectively; and the other is the binary random variable indicating whether there exists a dependency arc between a EDU pair in the training data.",
"Besides our adaptation clusters, we also evaluate this metric for two other clustering strategies, random clustering and NICE proposed by He et al. (2018), for comparison and show the results in Table",
"5. We see that measured by mutual information, clusters produced by our clustering strategy is much more coherent with dependencies than the other strategies.",
"In this paper, we propose a method to adapt unsupervised syntactic parsing methods for discourse dependency parsing.",
"First, we cluster the element discourse units (EDU) to produce clusters resembling POS tags.",
"Second, we modify the Eisner algorithm used for finding the optimal parse tree with hierarchical constraint.",
"We apply the adaptations to two unsupervised syntactic dependency parsing methods.",
"Experimental results show that our method successfully adapts the two models for discourse dependency parsing, which demonstrate advantages in both parsing accuracy and running efficiency.",
"This work was supported by the National Natural Science Foundation of China (61976139)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"objective",
"other"
] |
[
"In many natural language processing (NLP) tasks the same input (e.g. source sentence) can have multiple possible outputs (e.g. transla-tions).",
"To analyze how this ambiguity (also known as intrinsic uncertainty ) shapes the distribution learned by neural sequence models we measure sentence-level uncertainty by computing the degree of overlap between references in multi-reference test sets from two different NLP tasks: machine translation (MT) and grammatical error correction (GEC).",
"At both the sentenceand the task-level, intrinsic uncertainty has major implications for various aspects of search such as the inductive biases in beam search and the complexity of exact search.",
"In particular, we show that well-known pathologies such as a high number of beam search errors, the inadequacy of the mode, and the drop in system performance with large beam sizes apply to tasks with high level of ambiguity such as MT but not to less uncertain tasks such as GEC.",
"Furthermore, we propose a novel exact n -best search algorithm for neural sequence models, and show that intrinsic uncertainty affects model uncertainty as the model tends to overly spread out the probability mass for uncertain tasks and sentences.",
"With the advent of deep learning, many applications of machine learning have converged on a similar set of methods and models.",
"For example, the Transformer (Vaswani et al., 2017) sequence-to-sequence architecture is ubiquitous in various fields of natural language processing (NLP) such as machine translation (MT), grammatical error correction (GEC), speech recognition (Karita et al., 2019), etc., and has also been applied successfully to other tasks such as computer vision (Dosovitskiy et al., 2021).",
"Recent large pre-trained NLP models Research done during internship at Google Research, now at Meta AI.",
"such as BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020), T5 (Raffel et al., 2020), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019) are all based on the Transformer, with relatively minor changes to the architecture itself.",
"We show that despite this architectural uniformity the learned distribution over sequences has strikingly different characteristics for different NLP tasks.",
"Inspired by Ott et al. (2018) we identify intrinsic uncertainty the nature of some NLP tasks to allow multiple viable outputs for a given input 1 to be a major factor that shapes the search space of Transformer models and determines its tractability.",
"In machine translation (MT) a task known to have high intrinsic uncertainty (Pad et al., 2009; Dreyer and Marcu, 2012; Ott et al., 2018) Transformer models suffer from a high number of beam search errors (Stahlberg and Byrne, 2019), an inadequacy of the mode (Eikema and Aziz, 2020), and translation performance degradation with large beam sizes (Koehn and Knowles, 2017) (also known as the beam search curse).",
"In contrast, for the correction of writing errors in text (grammatical error correction GEC) (Brockett et al., 2006), a task with a lower level of uncertainty (Bryant and Ng, 2015), none of these pathologies are evident.",
"This pattern holds even at the sequence-level: input sentences with high uncertainty tend to result in more search errors and a less tractable search space.",
"To study the influence of uncertainty on sequences around the mode, we propose an exact n -best search algorithm for neural sequence models.",
"We show that the probability mass covered by the n -best candidates differs markedly between certain and uncertain tasks and sentences, which shows that intrinsic uncertainty also affects the spread of probability mass and thus the model uncertainty.",
"We confirm recent work showing that beam search has drawbacks as a decoding scheme for MT. Nevertheless, 1 This is sometimes referred to as aleatoric uncertainty in the literature (Der Kiureghian and Ditlevsen, 2009).",
"it is effective for GEC, a problem where modes are adequate, search errors are rare, and the n -best lists cover a large fraction of the probability mass. 2 Measuring Intrinsic Uncertainty Intrinsic uncertainty refers to the inherent nature of some NLP tasks to allow for more than one feasible output for a given input.",
"For example, intrinsic uncertainty in MT stems from the fact that there are often several semantically equivalent translations for the same source sentence, or that the translation into a highly inflected language is sometimes under-specified (Ott et al., 2018).",
"Studies have shown that even for tasks like GEC, annotators do not always agree (Tetreault and Chodorow, 2008; Rozovskaya and Roth, 2010; Bryant and Ng, 2015), but the level of intrinsic uncertainty is arguably lower than for MT because there is a limited number of ways to correct an ungrammatical sentence.",
"We propose a simple way to measure sentence-level output uncertainty by making use of multi-reference test sets.",
"For an n -way annotated sentence with references y 1 , ..., y n we define the uncertainty u as the average relative edit distance between two references: u := 1 1 n (cid:80) ni =1 | y i | (cid:124) (cid:123)(cid:122) (cid:125) Avg.",
"(1) where d edit ( , ) denotes the Levenshtein distance.",
"Fig. 1 presents this uncertainty score for one MT test set and two GEC test sets.",
"MT-ende is the of-ficial WMT19 English-German test set (Barrault et al., 2019) paired with the additional human-annotated newstest2019 AR references provided by Freitag et al. (2020).",
"2 GEC-conll14 uses the 10 references published by Bryant and Ng (2015) for the CoNLL-2014 shared task on GEC (Ng et al., 2014), and GEC-jfleg is a 4-reference GEC test set that represents a broad range of language proficiency levels (Napoles et al., 2017).",
"Our uncertainty measure reflects our intuition that MT is a significantly more uncertain task than GEC.",
"3 For both tasks the uncertainty increases with the sentence length as longer sentences typically have more feasible mappings than shorter ones.",
"We use the edit distance rather than task-specific metrics like BLEU (Papineni et al., 2002) or BLEURT (Sellam et al., 2020) since they are designed to be robust against uncertainty effects such as reordering or semantically equivalent references, precisely the kinds of effects we aim to capture with u .",
"We follow Bryant and Ng (2015) by not using inter-annotator agreement statistics like Cohen's (Co-hen, 1960) since they are more appropriate for the classification into single, well-defined categories.",
"Neural sequence-to-sequence models define a probability distribution P ( y | x ) over target sequences y given a source sequence x :",
"Sequences are typically computed over a subword (Sennrich et al., 2016; Kudo and Richardson, 2018) vocabulary V and end with a special end-of-sentence symbol </s>:",
"where V is the Kleene closure over V which includes the empty sequence (cid:15) .",
"Since sequence models are usually trained to maximize the probability of the sequences in the training set, a common strategy to use such a model for inference is to search for the most likely output sequence y , also known as the mode of the model distribution: 4 y = arg max y P ( y | x ) .",
"2 The AR references are created from scratch, unlike the other paraphrasing references by Freitag et al. (2020).",
"3 The mean u value differs significantly between GEC and MT in each length bucket ( t -test p -value of less than 0 . 0001 ).",
"4 In a Bayesian framework this is often referred to as maximum a posteriori (MAP) inference.",
"Eq.",
"4 is usually approximated using beam search.",
"For analysis purposes, Stahlberg and Byrne (2019) proposed an exact depth-first search (DFS) algorithm that is guaranteed to find the mode.",
"In addition to our investigations into the mode we also examine the cumulative probability mass that is covered by the n best hypotheses.",
"If a hypothesis set covers a large fraction of the entire probability mass it approximates the full model distribution well.",
"Approximating the full model distribution is useful for various methods such as minimum risk training (Shen et al., 2016), reinforcement learning (Williams, 1992; Ranzato et al., 2015), minimum Bayes risk decoding (Kumar and Byrne, 2004; Stahlberg et al., 2017; Eikema and Aziz, 2020), etc.",
"Ott et al. (2018) argued that the fraction of probability mass which is covered by a fixed number of candidates reflects the model uncertainty on the sequence level.",
"We show that this model uncertainty is in line with our notion of intrinsic uncertainty that we measure with u (Sec. 2).",
"To that end, we propose a generalization of the exact search algorithm of Stahlberg and Byrne (2019) that is able to find the n global best hypotheses rather than the single best one.",
"Similarly to the single-best algorithm, we use the monotonicity of neural sequence model scores: j [2 , | y | ] : log P ( y j 1 1 | x ) > log P ( y j 1 | x ) .",
"complete (i.e. ending with the end-of-sentence symbol </s>) hypothesis score during search, and use it to safely prune entire subspaces using Eq.",
"5.",
"In contrast, we keep track of the n -th best complete hypothesis score by keeping the n best complete hypotheses in a priority queue.",
"Our exact n -best search algorithm is listed in Algorithm 1.",
"Note that we recover the DFS scheme of Stahlberg and Byrne (2019) with n = 1 .",
"We trained four Transformer neural machine translation (NMT) models (Table 1) English-German ( MT-ende ), German-English ( MT-deen ), Finnish-English ( MT-fien ), and Lithuanian-English ( MT-lten ) on the WMT19 (Barrault et al., 2019) training sets as provided by TensorFlow Datasets.",
"5 We selected these language pairs to experiment with different training set sizes (Table 2).",
"The MT training sets were filtered using language ID and simple length-based heuristics, and split into subwords using joint 32K SentencePiece (Kudo and Richardson, 2018) models.",
"For training our GEC model we used the hyper-parameters from Table 1 and followed the three-stage training recipe of Stahlberg and Kumar (2021) using the 32K SentencePiece model from Raffel et al. (2020).",
"All our models were trained until convergence on the development set using the LAMB (You et al., 2020) optimizer in JAX (Bradbury et al., 2018) by minimizing cross-entropy without label smoothing.",
"Our NMT models are evaluated on the WMT19 test sets (Barrault 5 https://www.tensorflow.org/datasets/ catalog/wmt19_translate 8636 System ende deen fien lten Xia et al. (2019) 44.9 42.8 31.9 35.6 Our baselines 39.6 39.7 27.7 26.9 Table 3: BLEU scores of our NMT baselines and one of the best systems in the WMT19 evaluation campaign MSRA.MADL (Xia et al., 2019).",
"et al., 2019) using SacreBLEU (Post, 2018).",
"Our GEC model is evaluated on the CoNLL14 (Ng et al., 2014, GEC-conll14) test set using F 0 .",
"5 -scores computed with the M2 scorer (Dahlmeier and Ng, 2012) and on the JFLEG test set (Napoles et al., 2017, GEC-jfleg) using GLEU (Napoles et al., 2015).",
"In this work our focus is to analyze the impact of intrinsic uncertainty on search.",
"Thus we keep our setup simple, reproducible, and computationally economical rather than obtain new state-of-the-art results.",
"Nevertheless, Tables 3 and 4 show that our baselines are not unreasonably far off from the best results in the literature given that the systems we compare with are often highly engineered and use many more parameters.",
"Xia et al. (2019) used various techniques like back-translation, en-sembling, dual learning, MASS pre-training, architecture search, larger models, etc. to improve their systems, and Rothe et al. (2021) used a 11B parameters T5 (Raffel et al., 2020) model.",
"Even though alternative decision rules like MBR have recently received some attention in the NMT literature (Eikema and Aziz, 2020; Mller and Sen-nrich, 2021), mode-seeking decoding schemes such as beam search or Nucleus sampling (Holtzman et al., 2020) are by far the most common choices.",
"In this section we explore how uncertainty changes the mode and the ability of beam search to find it.",
"A well-known pathology of NMT models is the beam search curse (Koehn and Knowles, 2017): Increasing the beam size improves the predictive log-probabilities of the hypotheses, but it leads to worse translation quality due to the NMT model error of preferring short translations.",
"We replicate -20% -15% -10% -5% 0% 5% 1 10 100 1000 R e l .",
"this result in Fig. 2: BLEU scores for MT initially improve over greedy search at smaller beam sizes but after reaching a peak at beam size of 4, we observe a dramatic drop in BLEU.",
"The trajectory of the blue curves (GEC) is markedly different: the performance does not drop for large beams but saturates instead.",
"The beam search curse affects tasks with high intrinsic uncertainty like MT but spares more certain tasks like GEC although both tasks use the same neural Transformer architecture.",
"To determine why the beam size affects NMT and GEC so differently we ran the exact decoding algorithm of Stahlberg and Byrne (2019) to find the global best hypotheses and counted search errors, i.e. the number of sentences in the test set for which beam search does not find the global best sequence.",
"Our results confirm the findings of Stahlberg and Byrne (2019) that increasing the beam sizes leads to fewer NMT search errors (Fig. 3).",
"Among our MT language pairs, English-German (MT-ende) suffers the most from the beam search curse and the proportion of search errors in the test set.",
"This is possibly because translation from English to German typically results in a longer sequence and thus more uncertainty.",
"GEC differs significantly from NMT in the total number of search errors.",
"For MT, even with a very large beam size of 500, beam search does not find the mode for more than 20% of the sentences in any language pair.",
"In contrast for GEC, we do not observe any search errors for beam sizes larger than 10.",
"This suggests that task uncertainty determines the tractability of the search space and particularly the search for the mode.",
"Uncertainty also determines the computational costs of exact search.",
"To abstract away from hardware and implementation details, we measure the time complexity of exact search by counting the 8637 0% 10% 20% 30% 40% 50% 60% 70% 1 10 100 1000 S e a r c h e rr o r s Beam size MT-endeMT-ltenMTfi en MT-deenGEC-j fl eg GEC-conll14 Figure 3: Number of beam search errors.",
"number of explored states, i.e. the number of for-ward passes through the model, which is identical to the number of recursive calls of Algorithm 1.",
"6 Fig. 4 plots the fraction of sentences in the test set for which the exact search explores a certain maximum number of states to terminate.",
"For example, exact search returned the mode for around 50% of the MT sentences after exploring no more than 1000 states.",
"With the same computational budget, however, it was able to find the mode for nearly 100% of the GEC sentences (blue curves).",
"For some of the MT sentences, exact search needed to explore around 100K states, or even more in the case of Lithuanian-English (orange curve).",
"Sentence-level uncertainty In the previous paragraph we showed that MT, a task with high intrinsic uncertainty, suffers from more beam search errors and a less tractable search space than GEC, a task with relatively low intrinsic uncertainty.",
"Figs.",
"5 and 6 demonstrate that this pattern is not only present at the task-level but also at the sentence-level.",
"First, the bar charts show that there is a general trend towards more search errors and more explored states 6 For example, the number of explored states in standard beam search is the beam size times the target sequence length.",
"for longer sentences.",
"Longer input sentences often result in higher entropy distributions (i.e. more uncertainty) since there are usually more ways to map a long sentence than a short one.",
"We also see a pattern within each group, i.e. within a reference length interval, that shows that sentences with higher uncertainty u result in more search errors and a longer exact search runtime even when compared to other sentences with similar lengths.",
"Table 5 lists the test set level correlation coefficients.",
"We argued in Sec. 4 that the ability to approximate the entire search space with a fixed set of candidates can be useful in training (Shen et al., 2016; Williams, 1992; Ranzato et al., 2015) and decoding (Kumar and Byrne, 2004; Eikema and Aziz, 2020), and proposed an exact n -best search algorithm.",
"However, finding the exact n -best hypotheses is computationally much more expensive than finding the single-best hypothesis (mode).",
"Therefore, to keep the runtime under control, we stopped n -best decoding after 1M explored states.",
"Fig. 7 shows that the 1M threshold is not reached for n = 1 for any sentence: it was always possible to find and verify the mode.",
"We can guarantee that the n = 100 best candidates returned by our algorithm are indeed the global best ones for around 90% of the MT-deen sentences (right end of the green curve in Fig. 7).",
"The blue curves in Fig. 7 suggest that as before the GEC search space is much more tractable given that our exact n -best search algorithm was able to find the 100 global best hypotheses for all GEC sentences before reaching 1M explored states.",
"Indeed, Fig. 8 shows that exact 100-best search terminated with fewer than 10K explored states for almost all GEC sentences while the pruning criterion in Eq.",
"5 is much less effective for the NMT search space (green curves in Fig. 8).",
"The cumulative probability mass of the set returned by exact n -best search is an upper bound for the cumulative probability mass of any hypothesis set with a cardinality of n .",
"Despite the high number of search errors (Fig. 3), the probability mass covered by the n -best beam search hypotheses is very close to this upper bound.",
"Fig. 9 shows that for n = 100 that difference is less than 0.001 for all setups except MT-fien.",
"Since the difference in probability mass is negligible we ran our subsequent investigations of probability mass with beam search instead of exact search to save computational costs.",
"Fig. 10 visualizes the difference between NMT and GEC in terms of the probability mass covered by the beam search hypotheses.",
"We confirm the finding of Ott et al. (2018); Eikema and Aziz (2020) that the NMT distribution is rather flat: even 1000 MT candidates cover only 20% of the probability 0% 5% 10% 15% 20% 25% 1 10 100 # U n fi n i s h e d s e n t e n c e s n MT-endeMT-ltenMTfi en MT-deen GEC-conll14GEC-j fl eg Figure 7: Number of sentences for which exact n -best search did not terminate before 1M explored states.",
"mass on average.",
"In GEC, however, the model assigns twice as much probability (40%) to the single best hypothesis on average (left end of the 8639 0 0.001 0.002 0.003 0.004 1 10 100 C u m u l a t i v e p r o b . m a ss d i e r e n c e Number of candidates GEC-j fl eg GEC-conll14MT-deen MT-endeMTfi en MT-lten Figure 9: Difference in cumulative probability mass between the global n best hypothesis set returned by exact n -best search and the n -best list returned by beam search with different beam sizes. 0 0.2 0.4 0.6 0.8 1 1 10 100 1000 C u m u l a t i v e p r o b . m a ss Number of candidates GEC-conll14GEC-j fl eg MTfi en MT-ltenMT-deenMT-ende Figure 10: Average probability mass covered by the n best list from beam search for beam sizes between 1 and 1000. between u and... GEC MT conll14 jfleg ende Greedy search errors 0.18 0.19 0.24 #Explored DFS states 0.20 0.18 0.19 Cumul. prob. mass -0.23 -0.51 -0.53 Table 5: Spearman's rank correlation coefficient between the uncertainty u and the number of greedy search errors, the number of explored DFS states, and the 100-best cumulative probability mass. All correlations are significant with a p -value of less than 0.00001. blue curves in Fig. 10).",
"Fig. 11 provides even more insight: A beam size of 1000 covers 40% of the probability mass for nearly all sentences in the GEC test sets.",
"Even more practical beam sizes of 10 cover more than half of the probability mass for around 75% of the GEC-conll14 sentences.",
"The same plot looks very different for MT (Fig. 12): Covering half the probability mass is only possible for a tiny fraction of the MT sentences.",
"we reported that the effects caused by intrinsic uncertainty on the ability to find the mode are visible",
"at both the taskand the sentence-levels.",
"Similarly, we can track down our observations about how uncertainty determines the probability mass of n -best lists at the sentence level.",
"Fig. 13 shows that the cumulative probability mass in the n -best list decreases for longer sentences as the mappings of long sentences are more uncertain.",
"Again, the trend within a group in Fig. 13 suggests that even among sentences with similar lengths, n -best lists for uncertain sentences (higher u ) accumulate less probability mass.",
"We make analogous observations for NMT (Fig. 14), although the total n -best probability mass is much smaller than for GEC.",
"Ambiguity is one of the core challenges in MT, a fact that is supported (inter alia) by the long history of designing evaluation metrics that are robust against it (Papineni et al., 2002; Banerjee and Lavie, 2005; Sellam et al., 2020).",
"In this work we examine the impact of ambiguity on the NMT search space, and show how it is related to various well-8640 0 0.2 0.4 0.6 0.8 1 [0,10] (10,20] (20,30] (30,) C u m u l .",
"known issues of NMT models like the beam search curse (Koehn and Knowles, 2017), a pathology that has also been linked to the local normalization in sequence models (Sountsov and Sarawagi, 2016; Murray and Chiang, 2018) or poor model calibration (Kumar and Sarawagi, 2019).",
"Our work is heavily inspired by Ott et al. (2018) who analyzed different kinds of uncertainty in NMT.",
"In particular, they found that NMT spreads out the probability mass over a large number of candidates, and connected the beam search curse with uncertainty.",
"We confirm their results and extend their line of research along the following directions: We introduce a measure for uncertainty in multi-reference test sets, and show that the negative effects of uncertainty are visible even on the sentence level.",
"Second, we propose an exact n best search algorithm and demonstrate how it can be used to analyze the spread of probability mass.",
"Third, we focus not only on MT but also on GEC.",
"Stahlberg and Byrne (2019) showed that beam search errors often obscure the length deficiency of the NMT modes, and reducing search errors by using large beams exposes this model error.",
"In this work, we found that these mechanics are limited to NMT: GEC does not suffer from the beam search curse since search errors are rare and modes are not too short.",
"Eikema and Aziz (2020) suggested that picking a hypothesis based solely on probability is erratic because NMT spreads out the probability mass over a large set of hypotheses with similar probabilities.",
"Therefore, alternative approaches that in addition to the probabilities incorporate MT-specific metrics such as BLEU (Papineni et al., 2002) or BLEURT (Sellam et al., 2020) have recently been in focus of research, including minimum Bayes risk decoding (Eikema and Aziz, 2020, 2021; Mller and Sennrich, 2021), Monte-Carlo tree search (Leblond et al., 2021), and energy-based (Bhattacharyya et al., 2021) or discriminatively trained (Lee et al., 2021) rerankers.",
"Our work on how uncertainty determines the spread of probability mass is relevant to those approaches.",
"We identified a major culprit behind various inference-related issues in sequence-to-sequence models such as the intractability of the search space, degenerate large beam or exact search outputs and the large spread in probability mass over the output space.",
"This factor is intrinsic uncertainty the existence of multiple ways to correctly map an input sequence.",
"We measured the intrinsic uncertainty of input sentences as the degree of agreement between multiple references and showed that ambiguous sentences typically result in a higher number of beam search errors and an exceedingly flat output distribution.",
"We also find that known NMT pathologies such as the beam search curse or inadequate modes do not extend to less ambiguous tasks like GEC despite using the same neural architecture."
] | [
"abstain",
"method",
"abstain",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"result",
"method",
"other",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"other",
"objective",
"objective",
"method",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"result",
"objective"
] |
[
"Lexica distinguishing all morphologically related forms of each lexeme are crucial to many language technologies, yet building them is expensive.",
"We propose Frugal Paradigm Completion , an approach that predicts all related forms in a morphological paradigm from as few manually provided forms as possible.",
"It induces typological information during training which it uses to determine the best sources at test time.",
"We evaluate our language-agnostic approach on 7 diverse languages.",
"Compared to popular alternative approaches, our Frugal Paradigm Completion approach reduces manual labor by 16-63% and is the most robust to typological variation.",
"From syntactic parsing (Seeker and Kuhn, 2013) to text-to-speech (Zen et al., 2016; Wan et al., 2019), many linguistic technologies rely on accurate lexica decorated with morphological information.",
"Yet, building such lexica requires much human effort (Buckwalter, 2002; Tadic and Fulgosi, 2003; Forsberg et al., 2006; Sagot, 2010; Eskan-der et al., 2013).",
"We present a language-agnostic method for minimizing the manual labor required to add new paradigms to an existing lexicon.",
"Formally, let each lexicon entry, or realization, be a triple ( P , C , f ).",
"P marks membership in some paradigm P of morphologically related words, C defines a cell in P as a bundle of morphosyntactic features, and f is the form realizing C in P .",
"Hence, paradigm SING can be expressed (in the UniMorph schema (Kirov et al., 2018)) as a set of realizations: {( SING , NFIN , sing ), ( SING",
", 3. SG .",
"PRES , sings ), . . . }.",
"sible to be manually realized, e.g., {( FLY , NFIN , fly ), ( FLY , PST , flew )} such that the forms realizing the remaining cells can be predicted, i.e., flies , flying , flown .",
"Here, sources are manually provided realizations.",
"Targets are realizations whose forms must be predicted from sources.",
"Our work differs from traditional paradigm completion (Durrett and DeNero, 2013) in that sources are not given blindly, but the system must strategically select which sources it wants to be given at test time.",
"Paradigm completion from one source is typically non-deterministic due to multiple inflection classes realizing different exponents in some cells, e.g., suffixing + ed generates the past tense for WALK , but not for SING or FLY which are members of different classes.",
"Hence, many works discuss paradigm completion in the context of (implicit) inflection class disambiguation (Ackerman et al., 2009; Montermini and Bonami, 2013; Beniamine et al., 2018).",
"Finkel and Stump (2007) propose three approaches to select the fewest sources required to deterministically identify class.",
"Yet, neural sequence models can often complete paradigms accurately from less sources without fully disambiguating inflection class (Kann and Schtze, 2016; Aharoni and Goldberg, 2017; Wu and Cotterell, 2019).",
"See Elsner et al. (2019) for an overview of the application of neural sequence models to morphological theory.",
"We propose Frugal Paradigm Completion (FPC), inspired by work on inflection class disambiguation and neural sequence modeling.",
"We train a source selection agent (SSA) to induce typological knowledge regarding the distribution of complexity in paradigms and use this to request informative source cells to be realized by an oracle .",
"Sources are fed to a predictor to generate target forms.",
"For each paradigm, SSA iteratively requests sources until the oracle confirms all cells have been realized correctly.",
"We introduce a novel metric, auto-rate , to quantify the manual labour (performed by the oracle) needed to complete each paradigm.",
"Using this metric, we demonstrate that FPC reduces labor by 63% over predicting targets from lemmata, and 47% over predicting them from the smallest set of sources that fully disambiguates inflection class.",
"We propose a new typology for discussing the organization of complexity in paradigms which helps explain why strategies perform better or worse on certain languages while FPC, being sensitive to typological variation, performs robustly.",
"After discussing related paradigm completion approaches in Section 2, we describe FPC in Section",
"3. Section 4 covers all data and experimental set up details.",
"We discuss results in Section 5 and analyze FPC's behavior in Section 6.",
"Lemma-based Paradigm Completion The standard paradigm completion approach does not select sources, but assumes one source: the lemma (Dreyer and Eisner, 2011), whose distinction is ultimately arbitrary.",
"Yet many have shown that more informative sources can be chosen (Finkel and Stump, 2007; Cotterell et al., 2017b; Kann and Schtze, 2018).",
"Most Informative Source For each target form to be predicted, Kann and Schtze (2018) select the source most likely to predict that form.",
"Unlike FPC, they do not attempt to minimize the number of unique sources that must be manually realized.",
"Static Principal Parts To minimize sources required to fully disambiguate inflection class, Finkel and Stump (2007); Stump and Finkel (2013) propose three approaches: static , dynamic , and adaptive .",
"In the static approach, the same sources must be used for every paradigm (these sources are referred to as principal parts in a much older pedagogical tradition dating back to ancient Rome with Varro's de lingua latina (Grinstead, 1916; Ahern, 1990)).",
"Cotterell et al. (2017b) train a model on static sources and attain near 100% accuracy in Latin verb paradigm completion.",
"However, they do not consider that one paradigm may require fewer sources than another, nor that paradigm completion may require fewer sources than inflection class disambiguation.",
"Dynamic Principal Parts Finkel and Stump (2007)'s dynamic approach selects a minimal set of sources necessary to fully disambiguate inflection class which can be unique to that inflection class.",
"While efficient, this is impractical in that it requires oracular knowledge of class prior to seeing any forms.",
"Adaptive Principal Parts Finkel and Stump (2007)'s adaptive approach, like our FPC method, chooses the same first source cell for each paradigm P .",
"Subsequent sources are selected conditional on the set of inflection classes P could belong to given the sources realized so far.",
"Hence, the number of sources required per paradigm is upper bounded by the static approach and lower bounded by the dynamic.",
"Our FPC approach is a neural update, inspired by their adaptive approach.",
"While their implementation tracks viable inflection classes explicitly with rules operating on oracularly segmented af-fixes, we use sequence models operating on whole words to remove reliance on oracular segmentation and leverage stem-internal phonology known to correlate with inflection class (Aronoff, 1992; Dressler and Thornton, 1996; Dawdy-Hesterberg and Pierrehumbert, 2014).",
"This section describes the interactions of the three FPC components.",
"As illustrated in Figure 1, the predictor takes a source cell and its realizing form as input, e.g.,",
"3. SG .",
"PRES : sings , or cell 2 : form 2 in the figure.",
"The predictor is composed of as many sub-predictors as there are cells in the paradigm, each of which is trained to predict the entire paradigm from one source cell's realization.",
"Cell 2 in the paradigm is grayed out in the figure, as this was provided as input so it does not have to be predicted.",
"The predicted paradigm is evaluated by the oracle.",
"If there are no errors, we are done.",
"Otherwise, based on previous sources, SSA chooses a new cell to be realized by the oracle and gives it to the predictor as the next source.",
"Because cell 3 is chosen in the figure, sub-predictor 3 will be used to predict the paradigm going forward, and cells 2 and 3 will both be grayed out.",
"The process continues like this until all cells have been correctly predicted by at least one sub-predictor.",
"Crucially, during inference, each test paradigm is empty, i.e., no realization has been seen during training and no source is available to inflect from sub-predictor 1 sub-predictor n ... sub-predictor 2 predictor cell 1: form 1 cell n : form n cell 2: form 2 paradigm ... oracle done error source selection agent (SSA) cell3: form 3 cell 2: form 2 input cell 3 oracle Figure 1: Schematic representation of the flow of Frugal Paradigm Completion at inference time.",
"a-priori .",
"Our setup aims to minimize the number of sources which the SSA must request from the oracle (typically a human in the loop at inference time) to predict the remaining paradigm slots correctly.",
"The predictor outputs a target form given its cell, a source form and the source form's cell as input.",
"To train the predictor, for each possible source cell, we train a sub-predictor to predict every possible target form in every paradigm in the training data given the realization of that source cell in that paradigm.",
"Details of all sequence model architectures are provided in Section",
"4. 3.2 Source Selection Agent SSA's choice of a cell for a given paradigm depends on all previously selected cells for that paradigm and their corresponding forms.",
"This allows SSA to learn, e.g., that given a previous English PST source, PST .",
"PTCP should only be requested as a subsequent source if the PST form did not take the regular ed suffix.",
"Otherwise, PST .",
"PTCP is likely to be regular and unlikely to contribute new information.",
"To induce such knowledge, we train SSA on an oracle policy of ideal source selections extracted from the train set (Ross et al., 2011; Ross and Bag-nell, 2014; Welleck et al., 2019).",
"1 To extract the oracle policy, we divide the training lexicon into two folds and train one predictor on each, allowing us to cross-validate each predictor on its held out fold.",
"For each training paradigm, we test which target forms can be correctly predicted by which source cells' sub-predictors.",
"As shown for SING 1 While we borrow the term oracle policy from Imitation Learning (Ross et al., 2011; Ross and Bagnell, 2014; Welleck et al., 2019), we mimic the oracle policy with simple sequence learning.",
"Our analysis suggests even this may be more machinery than necessary.",
"in Figure 2, we use this information to extract minimum set covers, i.e., the fewest source cells such that the union of the subsets they predict correctly covers the entire paradigm.",
"These covers constitute the oracle policy used to train SSA.",
"The minimum set cover problem is NP-complete (Lund and Yannakakis, 1994; Kuhn et al., 2005), but we approximate it in O ( log e | P | ) by iteratively selecting the cell whose subset most enlarges the union.",
"We break ties by averaging predictiveness (Equation 1) over both folds, where fold F contains | F | paradigms; P m , | P m | cells; and Acc ( P m , C trg , C src ) returns 1 if using C src 's realization as a source correctly predicts the form realizing cell C trg in paradigm P m .",
"predictiveness ( C src , F ) = (cid:80) | F | m =1 (cid:80) | P m | j =1 Acc ( P m , C j , C src ) (cid:80) | F | m =1 | P m | (1) At this stage, paradigm covers are dynamic in that no single cell need be shared by all covers.",
"Yet, when selecting the first source, SSA has no previous sources to condition on, making it impossible to predict the first cell.",
"Thus, we get adaptive minimum set covers by designating the start cell to be that which occurs in the most dynamic covers.",
"Then we re-approximate all covers such that each includes this cell.",
"2 Finally, we rank cells within each cover by the total number of covers in which they appear.",
"For each cell in each cover, we train SSA to predict said cell from all higher ranked cells and their realizing forms (holding out 2% of them for development).",
"2 We train and test on a single part-of-speech for each language, so each paradigm should contain the start cell.",
"For defective paradigms lacking said cell, we back off to the most frequent cell that exists in the paradigm.",
"The oracle represents a human-in-the-loop during inference, providing requested source realizations to the predictor and informing SSA when a paradigm is complete and accurate (Figure 1).",
"In our implementation, the oracle does not specify which individual predictions are incorrect, but it thus must resolve any discrepancies when two sub-predictors disagree after the fact.",
"We do not attempt to model the additional cost this incurs, as it is unclear how to combine it with the presumably more expensive cost of correcting errors, which we model instead.",
"This is worth re-visiting in future work.",
"We evaluate 4 paradigm completion approaches on 7 languages.",
"Here we discuss implementation, data and evaluation details.",
"All sequence models in all implementations of any paradigm completion approach use the Transformer architecture (Vaswani et al., 2017).",
"Here we describe the formatting of input and outputs as well as our hyperparameters.",
"Input and Output Formats Following Kann and Schtze (2016), input sequences combine characters and morphosyntactic features.",
"The following is a sample input and output for a single source FPC sub-predictor specializing in the cell NFIN : Input: f l y out_ V .",
"For any inflected-form-predicting sequence model whose input is not limited to realizations of a single cellas in, e.g., the static principal parts approachsource cell features are prepended to the input as such:",
"For multi-source sequence models, the features of each source are inserted into the input and the target features are listed after the first source.",
"We experimented with several different multi-source representations and the Transformer performed fairly similarly with all of them.",
"Input: in_ NFIN f l y out_ V .",
"PTCP out_ PST in_ PST f l e w Output: f l o w n The FPC's SSA predicts not a form, but a cell, conditional on any previously realized sources.",
"To predict the first source, it is given nothing and will thus deterministically select the best starting cell as determined by the oracle policy (see Section 3.2).",
"To predict any subsequent source, it conditions on the realizations of all previously requested sources for that paradigm.",
"The following exempli-fies SSA inputs and outputs when predicting the second source for paradigm FLY : Input: in_ NFIN f l y Output: in_ V .",
"Wu et al. (2018) and others have achieved improvements by embedding morphosyntactic features separately and concatenating them to the encoder output prior to feeding it to the decoder.",
"Our error analysis, however, suggests Transformers handle Kann and Schtze (2016)-style input well.",
"More sophisticated feature handling may not be necessary, but should be investigated in future work.",
"Hyperparameters We train all Transformer models for 100 epochs in batches of 64 with 0.1 dropout probability.",
"The final model is restored from the epoch with the highest dev accuracy.",
"We stop early if there is no improvement for 20 Train Dev Test Arabic nouns paradigms 1006 100 100 instances 24160 2260 2352 German verbs paradigms 1031 100 100 instances 27762 2690 2692 English verbs paradigms 2908 200 201 instances 14522 1000 1001 Russian nouns paradigms 3289 100 100 instances 37423 1133 1137 Latin nouns paradigms 1630 100 100 instances 19150 1180 1174 Hungarian nouns paradigms 1405 100 100 instances 47689 3400 3383 Irish nouns paradigms 549 100 100 instances 6460 1197 1195 Table 1: Number of paradigms and instances by split for every language and POS considered.",
"epochs.",
"The only exception is during FPC cross-validation where sub-predictor models are trained for only 50 epochs with early stopping after 5 epochs without improvement.",
"This is just to reduce computational cost as it is sufficient to induce an oracle policy.",
"The final sub-predictor models however (those used at inference time, not those used to induce the oracle policy), are trained on the full training data set using the full 100 epochs with 20 epochs patience for early stopping.",
"As for Transformer-specific hyperparameters, using the original notation of Vaswani et al. (2017), we set N = 4 , d model = 128 , d ff = 512 , and h = 8 , scaling down the hyperparameters recommended for machine translation as our task is less expensive (Aharoni and Goldberg, 2017; Wu et al., 2018).",
"For every language and part of speech (POS) considered, we extract train, dev and test sets from UniMorph (Kirov et al., 2018).",
"Each split contains full paradigms, though the cells realized in each may vary due to defectiveness (Corbett, 2005; Sims, 2015).",
"We filter many gold errors by removing paradigms for which no realization can be attested in actual text.",
"We use Universal Dependencies (UD) (Nivre et al., 2016) to check for attestations.",
"We also filter overabundant realizations (multiple forms realizing one cell), keeping only the most frequent form, as attested in UD.",
"While some languages allow for overabundance (Thornton, 2010, 2011), in UniMorph, this often indicates a gold error.",
"We randomly divide paradigms into splits such that train is maximally large and dev and test contain at least 100 paradigms and 1,000 realizations.",
"Exact quantities are displayed in Table 1.",
"Arabic, German, English, and Russian were used for development, while Irish, Hungarian, and Latin were only evaluated after fixing hyperparameters.",
"The languages considered represent 3 families and 4 diverse Indo-European branches.",
"They exhibit multiple non-canonical behaviors (Corbett, 2005) and present diverse challenges from non-concatenative morphology to complex inflection class systems.",
"Paradigm completion is usually evaluated via exact match accuracy on held out target forms (Cot-terell et al., 2016, 2017a, 2018; McCarthy et al., 2019).",
"Yet we use as many sources as are necessary to reach 100% accuracy in predicting the remaining slots, so accuracy is not a meaningful metric for the FPC.",
"Some theoretical works focus on the sources required to unambiguously complete a paradigm given some implicit knowledge of viable inflection classes (Finkel and Stump, 2007; Ackerman and Malouf, 2013).",
"Yet these tend not to propose actual paradigm completion models or evaluate their decisions in ambiguous cases.",
"To evaluate our system and bridge these traditions, we propose auto-rate : auto-rate = (cid:80) ni =1 auto ( P i ) (cid:80) ni =1 | P i | , (2) where auto ( P ) denotes the number of realizations correctly predicted while not having been provided as sources for paradigm P by the oracle.",
"Intuitively, auto-rate is like accuracy but it counts oracularly provided sources as additional errors since both errors and sources require labor, i.e., sources require manual input and errors, post-correction.",
"We also report manual cells per paradigm , i.e., sources plus errors.",
"Of course, FPC resolves all errors eventually, but other systems can make errors requiring post-correction.",
"We compare the FPC method to three baselines.",
"One is a variant of FPC using a random SSA.",
"This allows us to distinguish the benefit of a smart SSA from that of simply receiving additional feedback from an oracle in the loop.",
"Each time a source must be selected, random SSA chooses randomly without replacement.",
"Its performance is averaged over two runs.",
"The lemma approach baseline predicts all paradigm forms from one designated source: the lemma.",
"Finally, for the static approach baseline, we considered two static approach implementations.",
"The single-source implementation predicts each target from the source that is, in theory, its best predictor (Kann and Schtze, 2018).",
"The multi-source implementation concatenates these sources, predicting each target from the concatenated input.",
"As results are nearly identical for either implementation, we report results only for single-sourcewith the exception of Latin, as explained presently.",
"For some languages, there is little theoretical or pedagogical literature to help identify the best sources for the static approach.",
"Our single-source static approach for Arabic nouns predicts singular and dual forms from SG ; NDEF ; NOM and plurals from PL ; NDEF ; NOM .",
"In theory, any non-plural plus any plural should be sufficient (Brustad et al., 2005; Habash, 2010).",
"For German verbs, we predict present and imperative forms from NFIN and past forms from IND ; PST ;1 SG (Grebe et al., 1966).",
"We predict English present forms from NFIN ; PST and V .",
"PTCP ; PST predict themselves.",
"For Russian nouns, Zaliznyak (1980) argues for five sources, yet Parker (2016) demonstrates that three are usually sufficient.",
"We follow the latter, predicting all nominative or accusative forms from ACC ; SG , all other singulars from INS ; SG , and all other plurals from GEN ; PL .",
"In preliminary experiments, we found this to match the accuracy of the five source approach, thus achieving a higher auto-rate.",
"For Latin, we could not evaluate a single-source static implementation as it is unclear which source cell best predicts each target.",
"The multi-source static approach for Latin nouns predicts all forms from NOM ; SG and GEN ; SG (following the classical grammatical analyses of Varro, Priscian and the Roman ars grammatica ).",
"For Irish and Hungarian, we do not evaluate a static approach as we lack the requisite linguistic knowledge to determine the best sources.",
"As shown in Table 2, FPC always ties or beats the next best approach, while the next best approach varies by language.",
"On average, FPC reduces labor by 63% over the lemma approach, 47% over static, 16% over random agent, and 13% over the next best approach.",
"Its success is mainly due to (1) making predictions from fewer sources than are required for fully disambiguating inflection class and (2) receiving feedback after each source.",
"Surprisingly, training a sophisticated SSA does not improve much over using a random agent.",
"We argue this is due to an unexpectedly large margin of error in the agent's source selection task.",
"Despite the complexity of source selection strategies required for inflection class disambiguation, FPC uses lexical frequencies to expect regularity and stem-internal clues to anticipate irregular classes, requiring a median of just one source per paradigm for all languages except under-resourced Irish.",
"Furthermore, inspection of the source selection minimum set covers reveals that it is often the case that a paradigm can be completed correctly from any single source.",
"This is surprising in light of the precise strategies required for completely deterministic paradigm completion in Finkel and Stump (2007)'s framework and in light of Albright (2002)'s case for the privileged status of a single form per paradigm, though in our framework with full words and full paradigms for training, it seems that many sources can often serve as good enough singleton principal parts.",
"This supports Bonami and Beniamine (2016) proposal of gradient principal part analyses.",
"Here, we discuss patterns relating SSA's first and second sources chosen (Figures 3a-b and 4a-b) to the inter-predictability of cells represented by heat maps (3c and 4c).",
"Maps display the average accuracies with which each target (column) can be predicted from each source (row).",
"We analyze spe-cific SSA choices and predictor errors in Arabic and Latin.",
"The maps (for all languages, see the Appendix) suggest complexity can be distributed within paradigms in systematically distinct ways.",
"Ackerman and Malouf (2013) propose integrative (I-) complexity, using average conditional entropy to describe paradigmatic organization, but this has been criticized for obscuring differences in the predictability of sub-paradigm regions (Cot-terell et al., 2019; Elsner et al., 2019).",
"To remedy this, we propose a typology for measuring the extent to which I-complexity is realized via different organizational strategies, which is useful for discussing source selection strategies.",
"Our typology describes paradigms in terms of mutual predictability , the correlation of a map and its transpose, and entropy predictiveness , the negative correlation of cells' average predictiveness (see Equation 1) and average predictability, defined here in comparable terms as: predictability ( C trg , F ) = (cid:80) | F | m =1 (cid:80) | P m | j =1 Acc ( P m , C trg , C j ) (cid:80) | F | m =1 | P m | (3) Intuitively, a paradigm is mutually predictable if the fact that cell A predicts cell B means that B is likely to predict A .",
"Such paradigms often feature regions of mutually predictable cells (as in 3c), such that an optimal strategy avoids picking multiple sources from one region.",
"For entropy predictive paradigms, if A is generally more difficult to predict than B , A is likely to be a better predictor of the remaining cells (following the information theoretic logic that surprisal is informative (Shannon, 1948; Jaeger, 2010)).",
"For such paradigms, the optimal strategy selects the source which would have been the most difficult target to predict.",
"Unlike Sims (2020)'s graph-theoretic typology for describing inflection class structure, our typology is a two-dimensional description of how the optimal paradigm completion strategy is affected by underlying class structure.",
"In this sense, our typology is complementary to hers and future work might investigate the relationship between traits in her typology and mutual predictability or entropy predictiveness.",
"Furthermore, our typology might be updated to consider the impact of type frequency (Sims and Parker, 2016) in a framework where distributional data is available.",
"Figure 5 demonstrates that cross-linguistic variation is vast with respect to our typology, as some languages even exhibit negative entropy predictiveness or mutual predictability.",
"This partly explains why non-FPC approaches perform erratically: if paradigmatic organization varies by language, source selection strategies must be able to adapt to the data.",
"Arabic nouns are mutually predictable (Figure 5).",
"Any singular or dual form can predict another.",
"Plural forms also predict each other.",
"Yet, in general, plurals are less predictive/able (Figure 3c) due to several inflection classes varying in the plural.",
"The sound plurals take suffixes while broken plural classes are realized via non-concatenative processes.",
"For example, (cid:73)(cid:46)(cid:187) (cid:64)(cid:80) rAkb , rider from root (cid:72) (cid:46) (cid:188) (cid:80) r k b , takes the broken plural pattern _ _ A _ , becoming (cid:72) (cid:46) (cid:65)(cid:191)(cid:80) rkAb .",
"Yet, having heard only singular realizations, a human might posit a sound plural, i.e., (cid:9)(cid:224)(cid:241)(cid:74) (cid:46) (cid:187) (cid:64)(cid:80) rAkbwn , realizing the",
"SSA learns an ideal strategy, requesting a singular source (Figure 3a) and then a plural (3b).",
"Interestingly, 6 of 18 sound feminine plurals (most frequent single class) require multiple sources and 8 of 28 broken plurals do not.",
"Thus, the predictor does not default to regularity, but uses stem-internal phonology to anticipate irregularity.",
"Most errors made from the first source posit a viable broken plural, just not the right one.",
"In future work, modeling semantics can fix such errors, e.g., knowing that (cid:73) (cid:46) (cid:187) (cid:64)(cid:80) rAkb is animate makes plural",
"For future work, we can pre-train on raw corpora to give our model access to such information (Devlin et al., 2019).",
"Indeed Erdmann and Habash (2018) found distributional information to benefit inflectional paradigm clustering in Arabic.",
"Though the benefits should generalize as semantics correlates with inflection class in many languages (Wurzel, 1989; Aronoff, 1992; Harris, 1992; Noyer, 1992; Carstairs-McCarthy, 1994; Corbett and Fraser, 2000; Kastner, 2019).",
"Latin is not mutually predictable with moderate entropy predictiveness.",
"SSA's choices are, at first, opaque, but Table 3 shows that ACC ; PL nar-rows the inflection class to variants of one declension.",
"Remaining ambiguity mostly involves 3 rd declension nominative and vocative realizations, which can usually be predicted from the preferred second source cell, VOC ; SG .",
"44 of 100 test paradigms were 3 rd declension, which required multiple sources at the highest rate (16 of 44; 2 nd masculine declension was next highest at 3 of 15).",
"There was no correlation between declension and second source chosen, yet high auto-rate suggests SSA's choices may not need to condition on previously realized source forms, but only their cells.",
"from a single source, we found paradigms requiring three sources that might be completable from two using a multi-source FPC implementation.",
"For example, greges , flocks realizes GREX .",
"ACC ; PL , but the predictor mistakenly posits gregium for GEN ; PL from this source, guessing the wrong 3 rd declension variant.",
"While second source VOC ; SG grex corrects this, it obscures the underlying stem, as x can be an allophone of g or c .",
"Thus, we still get an error, grecum .",
"A multi-source predictor could avoid forgetting the underlying allophone g after seeing the second source.",
"3 That said, multi-source FPC is not as simple as multi-source static.",
"Heuristic sampling of training instances based on the oracle policy yields predictors that only attend to one source or make bad predictions when only given one.",
"This is worth exploring further in future work as there is more evidence of paradigms that are difficult to handle without jointly encoding sources in the linguistic literature (Corbett, 2005; Bonami and Beniamine, 2016).",
"We presented Frugal Paradigm Completion, which reduces the manual labor required to expand a morphological lexicon by 16-63% over competitive approaches across 7 languages.",
"We demonstrated that typologically distinct morphological systems require unique treatment and benefit from our SSA, that learns its strategy from data.",
"We found that inducing this strategy is not as challenging as previously suggested (Finkel and Stump, 2007).",
"Thus, SSA might be replaced with a less costly architecture while our model might be improved by conditioning on semantics and jointly decoding from a variable number of sources.",
"We are indebted to helpful conversations with Shijie Wu, Ryan Cotterell, Katharina Kann, Andrea Sims, and Olivier Bonami.",
"We would also like to acknowledge insightful feedback from Google's Word Graph and TTS teams, as well as four anonymous reviewers.",
"3 See rexregis , king or paxpacis , peace , which are technically conditioned on preceding vowel quality, though there are probably not enough training examples for the model to learn that."
] | [
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"other",
"other",
"other"
] |
[
"The current state-of-the-art model HiAGM for hierarchical text classification has two limitations.",
"First, it correlates each text sample with all labels in the dataset which contains irrelevant information.",
"Second, it does not consider any statistical constraint on the label representations learned by the structure encoder, while constraints for representation learning are proved to be helpful in previous work.",
"In this paper, we propose HTCInfoMax to address these issues by introducing information maximization which includes two modules: text-label mutual information maximization and label prior matching.",
"The first module can model the interaction between each text sample and its ground truth labels explicitly which filters out irrelevant information.",
"The second one encourages the structure encoder to learn better representations with desired characteristics for all labels which can better handle label imbalance in hierarchical text classification.",
"Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed HTCInfoMax.",
"Hierarchical text classification (HTC) is a particular subtask of multi-label text classification (Li et al., 2020).",
"Many datasets have been proposed to study HTC for decades, such as RCV1 (Lewis et al., 2004) and NYTimes (Sandhaus, 2008), which categorize a news into several categories/labels.",
"And all the labels in each dataset are usually organized as a tree or a directed acyclic graph.",
"Thus, there is a label taxonomic hierarchy existing in each dataset.",
"The goal of HTC is to predict multiple labels in a given label hierarchy for a given text.",
"There are two groups of existing methods for HTC: local approaches and global approaches.",
"Local approaches usually build a classifier for each label/node (Banerjee et al., 2019), or for each par-ent node, or for each level of the label hierar-chy(Wehrmann et al., 2018; Huang et al., 2019; Chang et al., 2020).",
"Global approaches just build one classifier to simultaneously predict multiple labels of a given text.",
"The earlier global approaches ignore the hierarchical structure of labels and assume there is no dependency among labels which leads to flat models such as (Johnson and Zhang, 2015).",
"Later on, more and more works try to make use of the label taxonomic hierarchy to improve the performance by employing different strategies such as recursively regularized Graph-CNN (Peng et al., 2018), reinforcement learning (Mao et al., 2019), attentional capsule network (Peng et al., 2019), meta-learning (Wu et al., 2019) and structure encoder (Zhou et al., 2020).",
"Many attention-based models are also proposed to learn more refined text features for text classification tasks such as (You et al., 2019; Deng et al., 2020).",
"Among these methods, HiAGM proposed by Zhou et al. (2020) is the state-of-the-art model for HTC which designs a structure encoder that integrates the label prior hierarchy knowledge to learn label representations, and then proposes a model HiAGM with two variants (one is HiAGM-LA, the other is HiAGM-TP) based on the structure encoder to capture the interactions between text features and label representations.",
"However, there are some limitations of HiAGM.",
"Firstly, it utilizes the same label hierarchy information for every text sample which cannot distinguish the relevant and irrelevant labels to a specific text sample.",
"Although HiAGM-LA can implicitly relate each text to its corresponding labels by soft attention weights, there are still irrelevant and noisy information.",
"Secondly, for HiAGM-LA, there is no statistical constraint on the label embeddings generated by the structure encoder, while statistical constrains for representation learning are proved to be helpful by Hjelm et al. (2019).",
"To address the two limitations of HiAGM-LA, we propose HTCInfoMax which introduces information maximization consisting of two new modules which are text-label mutual information maximization and label prior matching on top of HiAGM-LA.",
"Specifically, the first new module makes a connection between each text sample and its corresponding labels explicitly by maximizing the mutual information between them, and thus can filter out irrelevant label information for a specific text sample.",
"The label prior matching module can impose some constraints on the learned representation of each label to force the structure encoder to learn better representations with desirable properties for all labels and thus also improve the quality of representations for low-frequency labels, which helps handle label imbalance issue better.",
"In summary, our main contributions are:",
"1) We propose a novel global model HTCInfoMax for HTC by introducing information maximization which includes two modules: text-label mutual information maximization and label prior matching.",
"2) To our best knowledge, this is the first work to utilize text-label mutual information maximization for HTC which enables each text to capture its corresponding labels' information in an effective way.",
"3) Also, to our best knowledge, this is the first work to introduce label prior matching for HTC which encourages the structure encoder to learn desired label representations for all labels which can better handle inherent label imbalance issue in HTC.",
"4) Experimental results demonstrate the effectiveness of our proposed model for HTC.",
"5) We release our code to enable replication, available at https://github.",
"com/RingBDStack/HTCInfoMax .",
"The overall architecture of our model is shown in Figure 1.",
"The major part of HTCInfoMax is the \"In-formation Maximization\" part shown in the dashed box which has two new modules: text-label mutual information maximization and label prior matching, which will be introduced in the following sections.",
"We keep the remaining part such as text encoder, structure encoder and the predictor be the same as in HiAGM-LA (Zhou et al., 2020).",
"Good text representation is critical for predicting its corresponding labels, thus fusing label information into text feature can help improve the prediction performance.",
"The HiAGM-LA utilizes multi-label attention to bridge the text feature of each sample with all labels' information implicitly, which can somehow help each text obtain some label information.",
"However, irrelevant label information is also injected into the text feature by using soft attention weights.",
"Therefore, we design a text-label mutual information maximization module to help remove irrelevant label information for each text as well as help each text capture its corresponding labels' information.",
"In this way, the learned representation for each text incorporates useful label information which is helpful for predicting its labels.",
"To implement the text-label mutual information maximization, we first select the ground truth labels for each text sample in the training process, and then apply a discriminator to estimate the mutual information between text and its labels, which is also known as negative sampling estimation.",
"Let PT and PY denote the distribution of text feature outputted by the text encoder and the distribution of label representation produced by the structure encoder respectively.",
"And the joint distribution of text and label is denoted as PTY = PY | TPT .",
"Then the positive samples are the pairs of text t and its corresponding labels y which is denoted as ( t , y ) , in other words, these positive samples are drawn from the joint distribution of text and label.",
"For the negative samples, we pair y with another text sample t (cid:48) in the same batch which is denoted as ( t (cid:48) , y ) , the negative samples can be deemed as drawn from the product of marginal distribution of text PT and label PY .",
"Both positive and negative samples are fed to the discriminator DMI to do classification and to estimate the mutual information I ( T ; Y ) between text and label shown in Eq.",
"(1).",
"DMI ( t , y ) and DMI ( t (cid:48) , y ) represents the probability score assigned to the positive and negative sample by the discriminator respectively.",
"The goal of the text-label mutual information maximization module is to maximize I ( T ; Y ) , thus the loss from this module is shown in Eq.",
"(2).",
"This module is inspired by Deep InfoMax (DIM) (Hjelm et al., 2019) which utilizes local and global mutual information maximization to help the encoder learn high-level representation for an image.",
"The structure of the discriminator DMI in this module can be found in the Appendix A.1.",
"There is an inherent label imbalance issue in HTC, thus the learned label embeddings by the model for low-frequency labels are not good because of underfitting caused by less training examples.",
"The label prior matching imposes some statistical constrains on the learned representation of each label which can help the structure encoder learn better label representations with desirable characteristics for all labels.",
"This also improves the quality of representations for low-frequency labels, which helps handle the label imbalance situation better in terms of improvement of Macro-F1 score.",
"To implement the label prior matching mechanism, we use a method similar to adversarial training in adversarial autoencoders (Makhzani et al., 2015) but without a generator to force the learned label representation to match a prior distribution.",
"We denote the prior as Q and the distribution of label representation learned by the structure encoder as P .",
"Specifically, a discriminator network D pr is employed to distinguish the representation/sample drawn from the prior (i.e., real sample which is denoted as y ) from the label embedding produced by the structure encoder (i.e., fake sample which is denoted as y ).",
"For each label, we utilize D pr to calculate its corresponding prior matching loss l pr , which is shown in Eq.",
"(3).",
"This loss aims at pushing the distribution P of learned representation for a label towards its prior distribution Q .",
"The final label prior matching loss is the average of losses from all the labels which is shown in Eq.",
"(4), N is the number of labels.",
"This idea is inspired by DIM which matches the representation of an image to a prior, but different from DIM, it trains the structure encoder to learn desired representations for all labels by imposing the constraints on each label's representation.",
"An uniform distribution on the interval [0,",
"1) is adopted as the label prior distribution Q in the label prior matching module.",
"The reason for choosing the uniform distribution is that it works well as a prior in DIM for generating image representations.",
"And the improvement of Macro-F1 score in the experimental results of hierarchical text classification further verifies the suitability of using the uniform distribution as the label prior.",
"The detailed structure of the discriminator D pr can be found in the Appendix A.2.",
"A loss weight estimator is adopted to learn the weights for text-label mutual information loss and label prior matching loss by using learned text features t and all labels' representation y , shown in Eq.",
"(5), and both W 1 and W 2 are trainable parameters.",
"And the loss from the predictor is the traditional binary cross-entropy loss L c (Zhou et al., 2020).",
"Then the final objective function of HTCInfoMax is the combination of all the three losses as follows: L = L c + F LMI + (1 F ) L pr .",
"Following HiAGM (Zhou et al., 2020), we use RCV1-V2 (Lewis et al., 2004) and Web of Science (WOS) (Kowsari et al., 2017) benchmark datasets to evaluate our model and adopt the same split of RCV1-V2 and WOS as HiAGM.",
"The statistics of the two datasets are shown in Table 1.",
"Standard evaluation metrics including Micro-F1 (Mi-F1) and Macro-F1 (Ma-F1) score are employed to evaluate our model.",
"In label imbalance situation, Ma-F1 can better evaluate model's performance in the perspective of not focusing on frequent labels in a certain degree.",
"In order to make a fair comparison between our model and HiAGM, we use the same parameter settings as HiAGM and follow its implementation details which can be seen in (Zhou et al., 2020).",
"The experimental results of our model are shown in Table 2, each score is the average result of 8 runs.",
"The results of HiAGM are referred from (Zhou et al., 2020).",
"There are two variants of HiAGM which are HiAGM-LA and HiAGM-TP.",
"As stated before, our model is built on top of HiAGM-LA to address its limitations.",
"From Table 2, one can see that our model outperforms the HiAGM-LA model with either GCN or TreeLSTM as structure encoder on two datasets, which demonstrates that the introduced information maximization in our model can address the limitations of HiAGM-LA and improve the performance.",
"This is because the label prior matching can drive the structure encoder to learn good and desired label representations that encode more useful and informative information of labels, and the text-label mutual information maximization module helps learn better representation of each text for prediction by fusing the above learned good representations of its ground truth labels while ignoring irrelevant labels' information.",
"It is also worth nothing that the improvement of Ma-F1 on the RCV1-V2 dataset is bigger compared with that on WOS, which indicates that our model can work better on dataset with a more complicated label hierarchy as RCV1-V2 has a deeper label hierarchical structure than WOS.",
"Although our model does not outperform all the results of HiAGM-TP, it reaches the similar performance.",
"This indicates that information maximization is an alternative effective way to fuse the text feature and label information together to boost the performance.",
"In addition, apart from generating text representations, our model can also generate refined label representations via information maximization which can be utilized for inference, while HiAGM-TP cannot produce such label embeddings for usage in the inference phase because it directly feeds the text feature into the structure encoder to obtain final text representation for prediction.",
"In other words, HiAGM-TP encodes text and label information into only one feature space.",
"However, obtaining separate text features and label features such as the ones generated by our model can help encode more semantic information of labels, which may be helpful for HTC especially when there is a large label hierarchy in the dataset.",
"We do not report the results of other baselines such as HFT(M) (Shimura et al., 2018), SGM (Yang et al., 2018), HiLAP-RL (Mao et al., 2019), etc. as they can be found in (Zhou et al., 2020), and our model performs better than these baselines.",
"To demonstrate the effectiveness of the two modules of information maximization, we conduct an ablation study and the results are shown in Table 3.",
"Every score in Table 3 is the average result of 8 runs.",
"From Table 3, one can see that HTCInfoMax outperforms the variant without text-label mutual information maximization module (i.e., HTCInfo-Models RCV1-V2 WOS Mi-F1 Ma-F1 Mi-F1 Ma-F1 HTCInfoMaxw/oMI 83.42 61.79 85.46 79.94 HTCInfoMaxw/oLabelPrior 82.75 60.57 84.74 79.01 HTCInfoMax 83.51 62.71 85.58 80.05 Table 3: Ablation study results on RCV1-V2 and WOS datasets.",
"Max w/o MI) by 0.09, 0.92 points on RCV1-V2 and 0.12, 0.11 points on WOS in terms of Mi-F1 and Ma-F1 respectively, which indicates that the text-label mutual information maximization module can make each text capture its corresponding labels' information and thus improves the Mi-F1 and Ma-F1 score at the same time.",
"When compared with the other variant (i.e., HTCInfoMax w/o LabelPrior), the improvements of the two metrics can also be observed but Ma-F1 has larger improvements by 2.14 and 1.04 points on RCV1-V2 and WOS respectively compared with Mi-F1.",
"This demonstrates that label prior matching helps regularize the label feature space and forces the structure encoder to learn better representations with desired properties for all labels.",
"Thus the representations of imbal-anced labels are also well learned, which helps mitigate the issue of underfitting of low-frequency labels, and thus improves the Ma-F1 score more and better to handle the label imbalance issue.",
"We propose HTCInfoMax to address the limitations of HiAGM by introducing information maximization which includes two modules: text-label mutual information maximization and label prior matching.",
"The label prior matching can drive the model to learn better representations for all labels, while the other module further fuses such learned label representations into text to learn better text representations containing effective label information for prediction.",
"The experimental results demonstrate the effectiveness of HTCInfoMax.",
"The corresponding author is Hao Peng.",
"The authors of this paper were supported by NSFC through grants U20B2053, 62002007, 62073012 and 61876128, S&T Program of Hebei through grant 20310101D, and in part by NSF under grants III-1763325, III-1909323, and SaTC-1930941.",
"We thank the reviewers for their constructive feedback."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"While significant progress has been made on the task of Legal Judgment Prediction (LJP) in recent years, the incorrect predictions made by SOTA LJP models can be attributed in part to their failure to (1) locate the key event information that determines the judgment, and (2) exploit the cross-task consistency constraints that exist among the subtasks of LJP.",
"To address these weaknesses, we propose EPM , an Event-based Prediction Model with constraints, which surpasses existing SOTA models in performance on a standard LJP dataset.",
"Legal Judgment Prediction (LJP) is a crucial task in the legal judgment decision making process.",
"Given the facts of a legal case, the goal is to predict the court's outcome.",
"So far, English LJP has focused on predicting law articles (Chalkidis et al., 2019a, 2021) and court decisions (Malik et al., 2021) while French LJP (Sulea et al., 2017b) has focused on predicting court rulings.",
"In this paper, we examine LJP in the context of Chinese via the widely used CAIL dataset (Zhong et al., 2018; Xu et al., 2020), which involves three subtasks: predicting (1) law articles, (2) charges and (3) terms of penalty, as shown in Figure",
"1. While state-of-the-art (SOTA) LJP models have several fundamental limitations (Binns, 2019), one of the technical issues they face concerns their failure to locate the key event information that determines the judgment results.",
"Consider Figure 2, where the fact statement of a robbery case involves the illegal break-in description.",
"Existing models wrongly predict that the law article is about illegal search since many words describe the break-in process even though the main point is about robbery.",
"How can we address this problem?",
"Recall that in the continental judicial system, a law article consists of two parts: (1) the event pattern, which stipulates the behavior that violates the law, and (2) Law Article Fact Statement Article 263: [Crime ofRobbery] Anyonewho robs public or private property is guilty of the crime of robbery.",
"the judgment, which describes the corresponding penalties.",
"In the law article related to Robbery in Figure 1, the event pattern is Anyone robs public or private property and the judgment is be sentenced to imprisonment of not less than three years and not more than ten years .",
"The event pattern and the corresponding judgment defined by each law article can be viewed as a causal pair: if an event pattern is detected, the corresponding judgment can be inferred from the causal pair.",
"In other words, it is the event information described in the case facts on which the reasoning judgment for the case is based.",
"If we use the fine-grained key event information extracted from the facts to match the event pattern defined in the law articles, the law articles that are applicable to the case could be retrieved accurately and the penalty could be inferred with the judgment in the law article.",
"For example, if we could compress the fact statement in Figure 1 into the fine-grained event in Table 1, we could easily match it with the event pattern defined in Article 263 (see Figure 1).",
"Then the penalty defined in this article can be used as the predicted judgment.",
"by (1) extracting the fine-grained key event of the case and then (2) predict the judgment based on the extracted event information (instead of the whole fact statement).",
"To this end, we propose a hierarchical event definition referring to the hierarchy of law articles.",
"Since there is no public LJP dataset that is annotated with event information, we manually annotate a legal event dataset on the top of CAIL (a public LJP dataset widely used by SOTA methods) (Xiao et al., 2018).",
"Nevertheless, event extraction is challenging.",
"So, to guide the learning process, we design output constraints on event extraction (e.g., what role types are compulsory for a given trigger type) and employ them in our model.",
"Another weakness associated with SOTA methods concerns their failure to exploit the consistency constraints among the three LJP subtasks.",
"Specifically, each law article imposes constraints on what charge and term penalty are possible.",
"However, SOTA methods typically frame LJP as a multi-task learning problem in which the three tasks are jointly learned in a model via a shared representation, without guaranteeing that the aforementioned cross-task constraints are satisfied.",
"To address this problem, we introduce consistency constraints.",
"In sum, our contributions are three-fold.",
"First, we present the first study on leveraging event extraction from case facts to solve LJP tasks.",
"Second, we define a hierarchical event structure for legal cases and collect a new LJP dataset with event annotations.",
"Finally, we propose a model that learns LJP and event extraction jointly subject to two kinds of constraints.",
"Experiments show that our model surpasses the existing SOTA models in performance.",
"Legal judgment prediction.",
"LJP has been investigated in the context of different jurisdictions, such as China (Luo et al., 2017; Zhong et al., 2018; Yue et al., 2021; Feng et al., 2021), the U.S. (Katz Argument Role Who is the criminal? Mike Criminal Who is the victim Jessica Victim What happened? robbed Trigger-Rob What were robbed? gold ring Property What is the price of swag? 1,535 RMB Quantity Judgment Results : Article 263, Robbery, three-year imprisonment Table 1: An example of the judging process of a real case based on event information. et al., 2017), Europe (Chalkidis et al., 2019a, 2021), French (Sulea et al., 2017b,a), India (Malik et al., 2021; Paul et al., 2020).",
"While early works relied on rule-based approaches (Kort, 1957; Segal, 1984; Nagel, 1963), later approaches use classification techniques (Aletras et al., 2016; Liu et al., 2015; Sulea et al., 2017a,b; Katz et al., 2017).",
"More recently, neural models are learned to predict judgment results jointly by sharing parameters in a unified framework (Zhong et al., 2018; Xu et al., 2020; Dong and Niu, 2021; Yang et al., 2019; Feng et al., 2019), applying pre-trained language models (Chalkidis et al., 2020, 2021; Xiao et al., 2021; Niklaus et al., 2021; Zhong et al., 2019), exploiting label-attention mechanisms (Wang et al., 2018, 2019), or injecting legal knowledge (Hu et al., 2018; Gan et al., 2021; Zhong et al., 2020).",
"Unlike our work, these works do not explore the use of case events for LJP.",
"Though existing works exploit dependency between subtasks (Zhong et al., 2018; Yang et al., 2019), they merely utilize the subtasks' prediction results as auxiliary features to influence each other and therefore may still predict inconsistent results.",
"In contrast, our cross-task consistency constraints can guarantee that the predictions are consistent.",
"Event extraction in legal domain.",
"Some works have defined legal events and built models to automatically extract legal events from fact statements using these definitions (Shen et al., 2020; Li et al., 2019; Chen et al., 2020).",
"However, we cannot use these event-annotated legal datasets for two reasons.",
"First, the legal documents in these datasets do not contain legal judgment predictions, so we cannot use them to jointly extract events and make legal judgment predictions.",
"Second, there is a key difference between our work and previous work in terms of how legal events (i.e., the trigger types and argument roles) are defined: while existing works define legal events solely from the perspective of event extraction, we define legal events so that the trigger types and argument roles are useful for LJP.",
"Dataset.",
"We employ as our dataset CAIL (Xiao et al., 2018), a large-scale publicly available Chinese legal document dataset that has been widely used.",
"In CAIL, each judgment document consists of a fact statement and judgment results (law articles, charges and term of penalty).",
"We follow prior works (Xu et al., 2020; Yang et al., 2019) for preprocessing CAIL (see Appendix H).",
"CAIL is composed of two subdatasets: CAIL-big and CAIL-small, and their statistics are shown in Table",
"2. LJP on CAIL is by no means trivial: there are 127, 140 and 11 categories for article, charge and penalty respectively on CAIL-big.",
"Task definition.",
"Given a fact statement, LJP on CAIL involves three prediction subtasks t a , t c , t p T , which correspond to law article, charge and term of penalty respectively.",
"Following previous works (Xu et al., 2020; Yang et al., 2019), we formalize each subtask t T as a multi-class classification problem and predict for each t the corresponding result y t Y t , where Y t is the label set of t .",
"We begin by designing a multi-task legal judgment prediction model, which we will use as a baseline and augment with event extraction and constraints in subsequent sections.",
"The framework of our model is shown in Figure",
"3. Token representation layer.",
"Given a fact statement represented as a character sequence D = { x 1 , x 2 , ...x l f } , we first encode each character by passing them into a pretrained legal BERT encoder (Zhong et al., 2019).",
"where H f = { h 1 , h 2 , ... h l f } is the hidden vector sequence of the fact statement and l f is the length of the fact statement.",
"robbed a gold ring.",
"Incorporating law article semantics.",
"Using the aforementioned context representation to predict judgment essentially treats each law article as an atomic label, leaving its semantic information unexploited.",
"Inspired by previous work (Chalkidis et al., 2019b; Rios and Kavuluru, 2018), we employ an attention mechanism to incorporate article semantics into the model.",
"Specifically, we match h with all candidate law articles.",
"To do so, we first use the same encoder to encode the character sequence of each law article and obtain the hidden vector sequence H a = { h 1 , h 2 , ... h l a } , where l a is the length of a law article text.",
"Then we apply a max-pooling layer to H a to get the context representation c .",
"Next, we use the context representation of the fact statement, h , to query all candidate articles in order to mine the most relevant semantics in the article texts.",
"Specifically, we first obtain the relevance scores between h and the j -th article c j : j = h TW c c j (3) where W c is a trainable matrix.",
"where c contains the integrated article semantics.",
"Legal judgment prediction layer.",
"To predict legal judgment, we input h and c into three task-specific classifiers as follows: y t = softmax ( W t [ h ; c ] + b t ) (5) where W t and b t are the learnable parameters and y t is the prediction distribution of task t .",
"Training.",
"For each legal judgment prediction task t , we employ cross-entropy as the loss function to measure the distance between the predicted y t and the ground-truth y t .",
"where hyperparameters determine the trade-off between all subtask losses.",
"The model is trained to minimize L () .",
"In this section, we propose a novel method to leverage event extraction to improve LJP.",
"Event definition.",
"Each law article stipulates what event violates this article, so it is easy to define legal events based on law articles.",
"The Chinese law articles have been organized in a hierarchical manner.",
"For example, robbery-related and theft-related articles belong to Property Infringement , which is the general name of robbery-related and theft-related articles.",
"We define legal events following this hierarchy.",
"As shown in Figure 4, Property Infringement is treated as a superordinate event type, whereas Robbery and Theft are treated as subordinate event types.",
"This hierarchy can express the connections between different legal events.",
"Trigger and role definitions.",
"An event trigger is a word that realizes the occurrence of an event and has a type.",
"There is a one-to-one correspondence between event type and trigger type.",
"For example, the Robbery event has the trigger type Trigger-Rob .",
"Next, we define the roles for each event such that they reflect the key elements of the event that would be useful for making legal judgments.",
"For example, the Criminal and Victim roles specify the parties involved in a case, whereas the Quantity role measures the value of loot, based on which term penalty is derived.",
"We define the roles in a hierarchical manner.",
"As seen in Figure 4, the Party arguments are the people involved in the cases, and its subordinate roles include Criminal and Victim .",
"1 5.2 Dataset Collection To investigate the use of event extraction for LJP, we manually create an event-annotated LJP dataset since no such dataset is publicly available.",
"Step 1: judgment document collection.",
"We construct LJP-E, our event-annotated dataset, based on CAIL.",
"Specifically, we first analyze the performance of the SOTA models (Xu et al., 2020; Zhong et al., 2018) on the validation portion of CAIL-small and identify the 15 law articles for which they achieved poor performance, and then select a subset of the cases that can be judged by these 15 law articles for annotation.",
"This subset consists of 1367 documents (957 as training set, 136 as validation set and 274 as test set).",
"We henceforth refer to this set of judgment documents as D o .",
"Step 2: event trigger and argument role annotation.",
"Next, we hire two annotators to manually produce event triggers and argument roles for each case in D o after giving them a three-hour tutorial on how to annotate events.",
"The annotators are native speakers of Chinese who are graduate students in NLP with significant experience with working on legal problems (none of them are the authors).",
"The annotation process.",
"Given the fact statement and the gold law article of a case, each annotator is asked to independently highlight the salient words in the fact statement that reflect the core event of the case and correlate well with the event pattern of the law article.",
"Then each of them is asked to (1) select a trigger word and assign it a subordinate trigger type, and (2) assign a subordinate role type to each of its arguments from a predefined role list.",
"The trigger type and role type inventories were designed by the authors after having read a large number of fact statements and corresponding articles.",
"Inter-annotator agreement numbers can be found in Appendix E. After the above steps, each case in D o is annotated with a trigger, its type, its arguments and roles.",
"The average number of arguments per event is 4.13.",
"1 Details of the event and role definitions together with their explanations can be found in Appendix A and B. 651 There are 16 distinct subordinate roles and 15 distinct subordinate trigger types.",
"2 Each disagreement between the annotators is resolved via discussion.",
"To make use of the event annotations, we augment our baseline model with a hierarchical event extraction layer that detects event triggers and arguments and determines trigger types and arguments roles (see Figure 3).",
"The resulting model simultaneously learns event extraction and LJP.",
"We formalize event extraction as a token labeling problem.",
"Given the hidden vectors of the fact tokens H l f = { h 1 , h 2 , ... h l f } , we assign each token a subordinate trigger (if it is part of a trigger) or a subordinate role type (if it is part of an argument).",
"The hierarchical event extraction layer consists of two modules: (1) a superordinate module that attends each hidden vector to all superordinate types/roles for obtaining their correlations, and (2) a subordinate module that computes the subordinate type/role probability distribution based on hierarchical information, as described below.",
"For a specific superordinate type/role j , we represent its semantic features with a trainable vector p j .",
"We adopt a fully-connected layer to calculate the correlation score between hidden vector h i and superordinate type/role p j .",
"where u ij represents the correlation score and [; ] denotes the concatenation of two vectors.",
"Then, we apply a softmax operation to get the superordinate type/role feature for each token x i .",
"ij = exp ( u ij ) (cid:80) k =1 exp ( u ik ) (9) o i = (cid:88) j =1 ij p j (10) where o i is the integrated superordinate type/role feature, which provides superordinate-oriented information useful for predicting subordinate types/ roles.",
"Next, we concatenate each h i with o i as the input feature for the trigger type and argument role classifier and estimate the probability that token x i belongs to subordinate type/role r j as follows: s r ( x i , y r j ( i ) ) = exp ( q Tj [ h i ; o i ]) (cid:80) k =1 exp ( q Tk [ h i ; o i ]] (11) 2 Statistics of LJP-E can be found in Appendix D. where q j is the trainable vector of r j .",
"After obtaining the type/role probability distribution of x i , we apply a CRF (Lafferty et al., 2001) to produce the sequence of types/roles with the highest score, where the score of a sequence of types/roles y r = { y r (1) , y r (2) ... } is computed as: score ( D, y r ) = l f (cid:88) i =1 T y r ( i 1) , y r ( i ) + l f (cid:88) i =1 s r ( x i , y r ( i ) ) (12) Here, T is the score of transitioning from one tag to another tag.",
"Instead of predicting LJP based on the context fact representation h , we replace it with the detected event features.",
"Specifically, we input the extracted trigger word and arguments with their type/role embeddings into the three task-specific classifiers.",
"Denote an extracted span as H s = { h 1 , h 2 , ... h l s } .",
"We apply a max-pooling layer to each span and concatenate with the corresponding subordinate type/role embedding q : g i = [ maxpooling ( H s ); q ] (13) where g i denotes the representation of span i , which contains both semantics and subordinate type/role features.",
"Based on g i , we can calculate the context span representation g as follows: g = maxpooling ( g 1 , g 2 ... ) (14) which is used to replace the context fact representation h in Equation",
"2. Training.",
"L () = t a L t a + t c L t c + t p L t p + r L r (16)",
"To improve model performance, we explore two types of constraints, as described below.",
"Absolute constraint.",
"For a legal event, the trigger must appear exactly once and certain roles are compulsory (e.g., subordinate role Criminal should appear at least once).",
"If the trigger is missing, we impose the following penalty: P t = (cid:88) r T G l f (cid:88) i =1 s r ( x i , y r ( i ) ) max i, r T G [ s r ( x i , y r ( i ) )] + | 1 max i, r T G [ s r ( x i , y r ( i ) )] | (17) where T G is the trigger inventory.",
"3 If a required role r is missing, we impose the following penalty: P r = | 1 max i [ s r ( x i , y r ( i ) )] | (18) Event-Based consistency constraint.",
"If a trigger type is detected, all and only its related roles should be detected.",
"For example, if a Illegal Doctoring event is detected, no roles related to Illegal Logging should be predicted.",
"If a trigger r is predicted, we impose the following penalty: P e = (cid:88) r R + | 1 max i [ s r ( x i , y r ( i ) )] | + (cid:88) r R l f (cid:88) i =1 s r ( x i , y r ( i ) ) (19) where R + is the set of roles that should occur if r is predicted, and R is the set of roles that cannot occur.",
"We sum all the penalty terms and incorporate them into the total loss as follows: L () = t a L t a + t c L t c + t p L t p + r L r + p (cid:88) P i (20) 6.2 Cross-Task Consistency Constraints While the multi-task learning setup employed by our model allows subtasks of LJP to benefit each other via the shared representation layer, it fails to exploit the dependency explicitly that exist among them.",
"Below we exploit two such dependencies, one between law article and charge and the other between law article and term of penalty.",
"Each law article states the allowable charges and range of term of penalty.",
"Hence, we can utilize these dependencies to constrain (and hopefully improve) the prediction of charge and term of penalty 3 An explanation of the penalty functions in Equation 17, 18 and 19 can be found in Appendix K using the predicted law article.",
"More specifically, we make the model learn how to predict charge and term of penalty based on the predicted article during training by modifying the cross entropy loss as follows.",
"If the law article is predicted correctly by the model, then when calculating L t c (i.e., the cross-entropy loss associated with the charge prediction task), we mask each term in the loss corresponding to a charge that is not allowed according to the predicted article: L t c = (cid:88) (cid:88) mask y t c log y t c (21) where mask is equal to 0 if the charge is not allowed according to the predicted article and 1 otherwise.",
"However, if the article is predicted incorrectly by the model, L t c is the standard cross entropy loss.",
"Intuitively, through masking, the model is forced to predict a charge that is allowed according to the predicted article.",
"During testing, since we do not know whether the law article is predicted correctly or not, we always mask the charge probability distribution according to the predicted article.",
"We adopt the same strategy to compute L t p when enforcing the consistency constraint between law article prediction and term of penalty prediction.",
"We train our model EPM using the pre-training and fine-tuning strategy.",
"Specifically, we pre-train EPM without event components on the training portion of CAIL (Table 2), and then fine-tune EPM on the training portion of LJP-E, our event-annotated LJP dataset, to learn from the event annotations.",
"As for the encoder, the maximum fact length is set to 512.",
"For training, we utilize the Adam optimizer with learning rate of 10 4 and the batch size is 32.",
"The warmup step is 3000.",
"For the hyperparameters, , in the loss function, the best setting is {0.5, 0.5, 0.4, 0.2, 0.1} for { t a , t c , t p , r , p }.",
"Models are trained for a maximum of 20 epochs.",
"4 LJP results are reported in terms of Accuracy (Acc), Macro-Precision (MP), Macro-Recall (MR) and Macro-F1 (F1).",
"We compare EPM with SOTA models on the test portion of our annotated dataset LJP-E in Table",
"3. and the official test portion of the CAIL dataset in 4 Details of experimental setup can be found in Appendix I 653 Law Article Charge Term of Penalty Acc % MP % MR % F1 % Acc % MP % MR % F1 % Acc % MP % MR % F1 % 1 MLAC 83.75 71.49 71.79 70.05 73.20 52.82 55.93 52.19 23.57 17.92 17.22 16.38 2 TOPJUDGE 86.46 75.51 75.07 73.97 75.16 56.04 58.96 55.34 23.82 18.59 18.43 17.63 3 MBPFN 86.72 86.72 75.60 74.28 73.95 73.95 56.48 54.00 27.53 27.53 17.97 19.65 4 LADAN 89.92 78.13 78.01 77.06 79.12 58.54 61.87 58.35 26.06 20.86 18.03 16.58 5 NeurJudge 87.87 81.17 82.68 80.41 76.04 61.95 60.07 59.46 27.88 20.99 16.81 18.51 6 EPM 93.85 91.15 89.14 89.37 79.37 61.14 63.15 60.79 28.51 28.27 23.58 23.23 7 w/ gold 97.05 95.63 93.42 93.82 86.98 70.45 73.08 70.92 33.16 30.28 23.52 24.11 8 TOPJUDGE+Event 88.84 78.03 79.72 77.39 77.24 60.70 60.57 57.40 27.49 22.38 18.17 18.10 Table 3: Comparisons with the SOTA models on LJP-E.",
"Table 4 and 5.",
"Since LJP-E only contains the 15 case types of CAIL, when applying EPM on the CAIL test set we use the pretrained version of EPM (i.e., without fine-tuning) to predict samples that do not belong to the 15 types and use the fine-tuned version of EPM to predict samples that belong to one of the 15 types.",
"In order to determine whether a sample belongs to one of the 15 types, we train a binary classifier using legal BERT on the training set of CAIL.",
"We refer to this model as the Switch .",
"5 We compare EPM with four SOTA neural models: (1) MLAC (Luo et al., 2017), which jointly 5 Details of the Switch can be found in Appendix J. modeled charge prediction and the relevant article extraction task in a unified framework.",
"Here, we add a fully-connected layer in order to predict the term of penalty; (2) TOPJUDGE (Zhong et al., 2018), which formalized the subtasks of LJP in a joint framework as a directed acyclic graph in which the subtasks share parameters; (3) MPBFN (Yang et al., 2019), which proposed a multi-perspective forward and backward prediction framework to make the sharing of parameters by different subtasks effectively, as well as a number embedding method for term of penalty prediction; (4) LADAN (Xu et al., 2020), which developed a 654 graph networks to learn the subtle differences between law articles in order to extract compelling discriminative features from fact statements; and (5) NeurJudge (Yue et al., 2021), which utilized the results of intermediate subtasks to separate the fact statement into different circumstances and exploits them to make the predictions of other subtasks.",
"As shown in Tables 3, 4 and 5, EPM (row 6) achieves the best results, substantially outperforming not only MLAC but also TOPJUDGE, MPBFN, LADAN and NeurJudge, which further leverage extensions like number embedding and graph networks , particularly on law article prediction.",
"Next, we conduct two oracle experiments involving EPM .",
"First, we use gold rather than predicted event annotations to make predictions for the three subtasks.",
"6 The results, which are show in row 7 of Table 3, show that considerably better results can be obtained when gold event annotations are used.",
"These results suggest that existing LJP results can be substantially improved by improving event extraction.",
"Next, we assume that the Switch is perfect when obtaining the EPM results on CAIL.",
"Perhaps not surprisingly, results, which are shown in row 7 of Table 4 and 5, are better w.r.t. all subtasks.",
"Further, we apply EPM to MLAC , TOPJUDGE , MPBFN , LADAN and NeurJudge on CAIL, the five SOTA models following the same scheme (i.e., use fine-tuned EPM to classify when the Switch says the sample belongs to the 15 types and use the SOTA model to classify otherwise), showing the results in rows 8 to 12 in Table 4 and 5.",
"We see that EPM can also improve the performance of the four SOTA models, yielding new SOTA results.",
"Finally, we examine whether modifying a SOTA model, TOPJUDGE , by having it jointly perform event extraction and the LJP tasks can improve its performance.",
"To do so, we replace its CNN encoder by an LSTM and feed the LJP classifiers with the extracted events rather than the case facts in the same way as in EPM .",
"We can see that TOPJUDGE+Event outperforms TOPJUDGE , which shows the usefulness of event information.",
"However, TOPJUDGE+Event underperforms TOP-JUDGE+EPM .",
"This suggests that better LJP results can be achieved by treating TOPJUDGE as a black box (by exploiting event information using the Switch ) rather than a glass box (by modifying 6 Event extraction results in EPM are as follows: 53.75% (R), 47.52% (P), and 50.37% (F1) for trigger detection and 55.69% (R), 49.88% (P), and 52.59% (F1) for role prediction.",
"the model to learn from event annotations).",
"We conduct experiments on the ablated versions of EPM .",
"Ablation results on LJP-E and CAIL are shown in Tables 6 and 7,",
"8. Event extraction.",
"To test the usefulness of event extraction, we delete all event components from EPM .",
"Results are shown in row",
"2. As we can see, performance degrades substantially on all three subtasks in terms of both Acc and F1.",
"Event-based constraints.",
"Next, we evaluate the usefulness of the two event-based constraints (Sec-tion 6.1) on the outputs of event extraction.",
"Removing the absolute constraint ( w/o CSTR1 , row 3) or the event-based consistency constraint ( w/o CSTR2 , row 4) generally yields worse results in terms of both Acc and F1.",
"In particular, removing the consistency constraint generally provides bigger deterioration than the absolute constraint.",
"Cross-task consistency constraints.",
"We also evaluate the cross-task consistency constraints.",
"Removing the article-charge constraint ( w/o DEP1 , row 5) or the article-term constraint ( w/o DEP2 , row 6) negatively impacts performance, with the largest negative impact observed on charge prediction.",
"While these constraints are intended to employ the predicted law article results to improve charge prediction and term prediction, we see that law article performance also deteriorates.",
"Superordinate types.",
"So far, we have assumed that hierarchical event extraction would be beneficial to LJP.",
"To better understand whether the hierarchy is indeed useful, we evaluate a version of EPM without using superordinate features.",
"In other words, the model predicts the subordinate types/roles directly.",
"Results are shown in row",
"7. Comparing rows 1 and 7, we see that Acc and F1 scores drop across all subtasks when superordinate features are not used, indicating their usefulness.",
"Event extraction as an auxiliary task.",
"In EPM , we use the predicted event features as inputs for the three LJP task classifiers.",
"Another way to exploit event information would be to treat event extraction as an auxiliary task in the model by having it share encoders with the LJP tasks.",
"Results of treating event extraction as an auxiliary task are shown in row",
"8. As we can see, these results are worse than those of EPM (row 1), which means that EPM 's way of exploiting event information is better, but they are better than those when event information is 655 Law Article Charge Term of Penalty Acc % MP % MR % F1 % Acc % MP % MR % F1 % Acc % MP % MR % F1 % 1 EPM 93.85 91.15 89.14 89.37 79.37 61.14 63.15 60.79 28.51 28.27 23.58 23.23 2 w/o event 86.17 76.49 75.66 75.19 73.21 56.94 56.46 55.46 26.25 18.78 15.71 15.06 3 w/o CSTR1 88.44 83.22 80.77 80.00 74.21 57.06 59.37 56.54 27.85 18.01 18.16 16.87 4 w/o CSTR2 86.96 77.95 76.17 76.84 74.69 57.29 56.98 55.99 27.77 19.18 17.55 16.99 5 w/o DEP1 91.15 86.29 85.23 84.96 73.62 56.84 58.69 56.71 23.35 16.29 16.06 15.23 6 w/o DEP2 91.89 90.98 87.41 88.30 78.38 60.48 61.43 59.31 23.31 17.30 14.12 14.49 7 w/o hierarchy 87.21 79.20 76.52 76.38 73.95 58.23 59.50 57.36 23.59 19.42 16.77 16.32 8 w/ auxiliary 92.62 85.24 83.95 83.90 76.18 55.08 59.05 56.39 25.31 21.06 16.58 15.88 Table 6: Ablation results on LJP-E.",
"not use (row 2), which means that using predicted events for LJP is still better than not using them.",
"Next, we perform a qualitative analysis of EPM to better understand the role played by event information and constraints.",
"In CAIL, the data distribution of term penalty for the same law article is skewed towards larger penalty values, thus causing EPM to inherit this bias in its prediction of term penalty when cross-task consistency constraints are not used.",
"However, when constraints are used, EPM was forced to only predict those term penalties that are allowed by the predicted law article and was thus more robust to the skewed data distribution.",
"As for events, the use of event information prevents EPM from focusing on certain words in a case fact that could trigger the prediction of wrong law articles.",
"A detailed analysis can be found in Appendix F. 8 Conclusion We proposed the first model that uses event extraction and hand-crafted constraints to improve LJP, achieving SOTA results.",
"To facilitate future research, we make our codes and annotations publicly available at https://github.com/WAPAY/EPM.",
"We thank the three anonymous reviewers for their comments on an earlier draft of this paper.",
"This work was supported in part by the National Natural Science Foundation of China (No. 61802167), the US National Science Foundation (Grant IIS1528037).",
"Chuanyi Li is the corresponding author.",
"Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of the funding agencies.",
"Automatic legal judgment prediction is a sensitive research area.",
"The proposed EPM model is a preliminary work and is not ready to be productized.",
"The goal of designing the EPM model is to surpass the performances of existing SOAT approaches which are not ready to be productized either.",
"Legal cases contain personal privacy information.",
"Therefore, we use a public dataset that has been anonymized, i.e., CAIL, which is collected from the official website for disclosing legal case information.",
"The private information of the cases has been anonymized when the cases are published on the official website."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"other",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"This paper is concerned with semantic parsing for English as a second language (ESL).",
"Motivated by the theoretical emphasis on the learning challenges that occur at the syntax-semantics interface during second language acquisition, we formulate the task based on the divergence between literal and intended meanings.",
"We combine the complementary strengths of English Resource Grammar, a linguistically-precise hand-crafted deep grammar, and TLE, an existing manually annotated ESL UD-TreeBank with a novel reranking model.",
"Experiments demonstrate that in comparison to human annotations, our method can obtain a very promising SemBanking quality.",
"By means of the newly created corpus, we evaluate state-of-the-art semantic parsing as well as grammatical error correction models.",
"The evaluation profiles the performance of neural NLP techniques for handling ESL data and suggests some research directions.",
"There are more people around the world learning English as a second language (ESL) than there are native speakers of English with this gap continually and steadily expanding (Crystal, 2012).",
"Accordingly, an extremely large volume of nonnative English texts are generated every day.",
"We need an automatic machinery to annotate such large-scale atypical data with in-depth linguistic analysis.",
"High-performance automatic annotation of learner texts, from an engineering point of view, enables it possible to derive high-quality information by structuring the specific type of data, and from a scientific point of view, facilitates quantitative studies for Second Language Acquisition (SLA), which is complementary to hands-on experiences in interpreting interlanguage phenomNow works at Alibaba Group.",
"ena (Gass, 2013).",
"This direction has been recently explored by the NLP community (Nagata and Sakaguchi, 2016; Berzak et al., 2016a; Lin et al., 2018).",
"Different from standard English, ESL may preserve many features of learners' first languages 1 .",
"The difference between learner texts and benchmark training data, e.g. Penn TreeBank (PTB; Marcus et al., 1993), is more related to linguistic competence, rather than performance (Chom-sky, 2014).",
"This makes processing ESL different from almost all the existing discussions on domain adaptation in NLP.",
"Despite the ubiquity and importance of interlan-guages at both the scientific and engineering levels, it is only partially understood how NLP models perform on them.",
"In this paper, we present, to the best of our knowledge, the first study on Semantic Parsing for English as a Second Language.",
"Motivated by the Interface Hypothesis (Sorace, 2011) in SLA, we emphasize on the divergence between literal and intended meanings.",
"To obtain reliable semantic analyses in order to represent the two types of meanings, we propose to combine English Resource Grammar (Flickinger, 2000), which is a wide-coverage, linguistically-precise, hand-crafted grammar and TLE, which is a manually annotated syntactic treebank for ESL in the Universal Dependency (UD; Berzak et al., 2016b) framework.",
"In particular, we introduce a reranking model which utilizes the partial constraints provided by gold syntactic annotations to disambiguate among the grammar-licensed candidate analyses.",
"Experiments on DeepBank (Flickinger et al., 2012) demonstrates the effectiveness of our proposed model.",
"By means of the newly created corpus, we study semantic parsing for ESL, taking Elementary De-1 Henceforth, the first and second language are referred to as L1 and L2, respectively.",
"pendency Structure (EDS; Oepen and Lnning, 2006) as the target representation.",
"We probe the semantic parsing of multiple state-of-the-art neural parsers for literal meaning and intended meaning, and investigate how grammatical error correction (GEC) can contribute to the parsing.",
"In addition, we give a detailed analysis of the effect from grammatical errors.",
"Results reveal three facts: 1) semantic parsing is sensitive to non-canonical expressions, and the distribution as well as types of grammatical errors have an effect on parsing performance; 2) Factorization-based parser is the most effective and robust parser to process learner English; and 3) automatic GEC has a positive, but limited influence on the parsing of intended meaning.",
"Early work regarding the collection of learner corpora mainly concentrates on tagging alleged errors (Rozovskaya and Roth, 2010; Nagata et al., 2011).",
"The past decade has seen a tendency to directly annotate the linguistic properties in learner sentences (Dickinson and Ragheb, 2009; Daz-Negrillo et al., 2010; Rastelli, 2013).",
"The lack of precisely annotated data has limited the systematic analysis of interlanguages.",
"There are several attempts to set up annotation schemes for different linguistic layers of learner languages, such as POS tags and syntactic information (Hirschmann et al., 2007; Daz-Negrillo et al., 2010; Rosen et al., 2014; Nagata and Sak-aguchi, 2016; Berzak et al., 2016b).",
"But it is challenging to elucidate the exact definition of syn-tax for learner languages.",
"Ragheb and Dickinson (2012) defines multiple layers (morphologi-cal dependencies, distributional dependencies, and subcategorization) based on different evidence to capture non-canonical properties.",
"Similarly, motivated by the Interface Hypothesis (Sorace, 2011), we employ a principled method to create parallel semantic representations for learner English by discriminating between the literal and intended meanings.",
"With regard to the semantic analysis for learner languages, Lin et al. (2018) takes the first step in this direction.",
"Based on a parallel semantic role labeling (SRL) corpus, they prove the importance of syntactic information to SRL for learner Chinese.",
"In this paper, we provide a much deeper semantic analysis for learner English.",
"There is a classic distinction between two aspects of meaning: the literal meaning (conventional meaning or sentence meaning) versus the intended meaning (speaker meaning or interpretation).",
"The former puts an emphasis on the linguistic code features appearing in the sentence, while the latter is derived from the author's intention.",
"When we consider an interlanguage, the divergence between literal and intended meanings is much larger due to various cross-lingual influences.",
"It is reasonable to consider both aspects to develop a principled method to process outputs from L2 learners.",
"Contemporary research on SLA has extensively argued and empirically supported the claim that linguistic properties pertaining to the interface between syntax and other linguistic modules are vulnerable in L2 and integrating linguistic phenomena relevant to such interfaces imposes much difficulty to L2 learners (Sorace, 2006; White, 2011).",
"According to this view, the interaction or mapping between syntactic and semantic representations is less likely to be acquired completely than structures within one single module, either syntactic or semantic.",
"With respect to outputs of L2 learners, mismatches between syntactic structures and intended meanings are frequently observable.",
"Figure 1 presents an example from the TLE corpus.",
"Although discussion is misused, the whole fragment is grammatical and thus interpretable according to syntactic analysis.",
"However, the literal meaning along with a sound syntactic analysis is far from the intended meaning that a native speaker can infer from intraand inter-sentence contexts.",
"It is quite obvious that discussion should be regarded as a verb coordinating with give .",
"The application scenarios of both literal and intended meanings are practiced in accordance with their different emphases.",
"For example, extracting literal meanings according to the morphosyntactic forms are more useful for text quality assessment tasks in computer-assisted language learning, such as content-based automatic essay scoring.",
"On the contrary, the intended meaning-centric representations help figure out logical relationships and may benefit text mining applications like relation extraction.",
"In order to comprehensively study the issue, we consider both literal and intended meanings.",
"To conduct quantitative research, we create two versions of high-quality silver data and provide a two-sided evaluation for the semantic parsing on learner English.",
"English Resource Semantics (ERS; Flickinger et al., 2016) is an important resource of semantic representations produced by the English Resource Grammar (ERG; Flickinger, 1999), a broad-coverage, linguistically motivated precision Head-Driven Phrase Structure Grammar (HPSG; Pollard and Sag, 1994) of English (Flickinger, 2000, 2011).",
"It provides rich semantic representations including the semantic roles and other detailed information such as the scope of quantifiers and scopal operators including negation, as well as semantic representations of linguistically complex phenomena such as time and date expressions, conditionals, and comparatives (Flickinger et al., 2014).",
"ERS helps to reveal much deeper semantic analysis than other shallow target structures such as the predicate-argument relations in the semantic role labeling (SRL) task.",
"Moreover, it can be derived into several different forms, like the logical-form-based representation Minimal Recursion Semantics (MRS) and the graph-shaped structure Elementary Dependency Structures (EDS).",
"We resort to this resource to build an informative analysis for learner English and choose EDS as the target structure.",
"Figure 2 shows the two kinds of semantic analysis of our running example.",
"As there is no gold semantics-annotated corpus for learner English and building such a corpus from scratch is tedious and time-consuming, we exploit ERG to establish a large-scale sembanking with informative semantic representations.",
"To be specific, for each input sentence S , we generate K -best semantic graphs G 1 , G 2 , ..., GK with an ERG-based processor, i.e. ACE 2 .",
"The created grammar-licensed analyses contain both a derivation tree recording the used grammar rules and lexical entries, and the associated semantic representation constructed compositionally via this derivation (Bender et al., 2015).",
"The elaborate grammar rules enable sembanking reusable, automatically derivable and task-independent, and it can benefit many NLP systems by incorporating domain-specific knowledge and reasoning.",
"Previous work has proved that high-quality syntax makes a large impact on semantic parsing tasks such as SRL (Hermann and Blunsom, 2013; He et al., 2017; Qian et al., 2017).",
"The exploratory work in Lin et al. (2018) draws the same conclusion in an L2 situation.",
"We assume that the incorporation of syntactic trees helps improve the quality of our evaluation data.",
"We conduct a reranking procedure on the K best candidates derived under the ERG framework with the aid of gold Universal Dependencies (UD; Berzak et al., 2016b) trees and select the graph which best fits into the gold syntactic tree (repre-sented as T ).",
"Our reranking model can be formulated into: G = arg max 1 (cid:54) i (cid:54) KSCORE ( G i , T ) where SCORE ( G i , T ) is a numerical measurement of the matching between G i and T .",
"Here, we define it as follows: SCORE ( G i , T ) = WTF ( f G i , f T ) where W refers to the parameter matrix and F is the function to calculate the coherency between feature vectors f G i and f T , which can resort to neural encoders or feature engineering.",
"Here, we use feature engineering which outperformed Graph Neural Network (GNN; Scarselli et al., 2008) in the pilot study to encode the discrete properties in the graph and the UD tree.",
"During the training process, there is a gold semantic graph G g for S .",
"By going through all the K graphs, we can pick out graph G p with the highest score SCORE ( G p , T ) .",
"Our goal is to ensure SCORE ( G g , T ) (cid:62) SCORE ( G p , T ) , which can be achieved with the help of the averaged structured perceptron learning algorithm.",
"To evaluate the capability of our proposed reranking model, we randomly extract 10,000 and 2,476 sentences from DeepBank (Flickinger et al., 2012) as the training and validation data respectively.",
"The gold UD analyses are derived from the original PTB (Marcus et al., 1993) annotations.",
"With regard to evaluation metrics, we use SMATCH (Cai and Knight, 2013) and Elementary Dependency Matching ( EDM ; Dridan and Oepen, 2011).",
"Results are shown in Table",
"1. The first three rows demonstrates that the parsing performance has been greatly improved after reranking, proving the power of the proposed model.",
"The larger K is set to, the greater the improvement will be, since the search space has been expanded.",
"Results of Oracle provide the upper bound.",
"The high numerical value demonstrates the potential of reranking method.",
"The results also prove that syntactic information does facilitate the semantic analysis, which is in line with previous studies.",
"3.3.5 The Data The Treebank of Learner English (TLE; Berzak et al., 2016a) is a collection of 5,124 ESL sentences, manually annotated with POS tags and dependency trees according to Universal Dependencies (UD; Nivre et al., 2016) framework.",
"Both original sentences which contain grammatical errors and corrected sentences which are revised by native speakers are provided to constitute a parallel corpus.",
"The corrected sentences are reconstructed based on a target hypothesis.",
"Following the idea of parallel semantic representations, we produce two versions of silver semantic annotation for learner English.",
"The first version of annotation is obtained by processing the original sentences in TLE with the sembanking-reranking pipeline.",
"Henceforth, this will be called L-silver .",
"It concentrates on the morphosyntactic features encoded in the sentences.",
"Then we process the corrected sentences in the same way and call the produced semantic graphs I-silver , henceforth.",
"In this case, we give priority to the intended meaning.",
"During the process of building the corpus, a part of the sentences from TLE are excluded.",
"With the elaborate semantic representations, ERG fails to analyse sentences which are too long or contain particular unknown words/constructions within a certain time limit.",
"The coverage of ACE on original sentences and corrected sentences from TLE is 55.39% and 79.63%, respectively.",
"In addition, a further reduction of coverage is caused by the inconsistent tokenization between the ERG-licensed analysis and the TLE annotation, such as the different treatment of apostrophes.",
"Ultimately, 52.50% original sentence and 73.54% corrected sentences are processed, forming the final data.",
"This may introduce bias, and how to include the rest part of sentences is left for future research.",
"The parallel meaning representations focus on different linguistic layers.",
"Previous studies on the relevance of the two kinds of meanings are mostly based on psycholinguistic methods.",
"We propose to measure the similarity in a quantitative manner with a corpus-based approach.",
"The literal and intended meanings are represented as the semantic graphs in L-silver and I-silver , respectively.",
"Since the sentences are parallel, we can compare the graph structures directly.",
"We use SMATCH (Cai and Knight, 2013) as the evaluation metric which provides the token-wise evaluation along with effective explorations of variable alignments.",
"The numerical results are displayed in Table",
"2. The modest SMATCH scores indicate the existence of great divergence between the literal and intended meaning representations.",
"Existing work in data-driven semantic graph parsing can be roughly divided into four types, namely composition-, factorization-, transition-and translation-based ones (Koller et al., 2019).",
"According to experimental results obtained on benchmark datasets with various target structures including Abstract Meaning Representa-tion(AMR; Langkilde and Knight, 1998; Ba-narescu et al., 2013), Elementary Dependency Structures (EDS; Oepen and Lnning, 2006), Semantic Dependency Parsing (SDP) as well as Universal Conceptual Cognitive Annotatio (UCCA; Abend and Rappoport, 2013), the composition-and factorization-based approaches are the leading approaches obtained by now (Lindemann et al., 2019; Zhang et al., 2019).",
"In this paper, we use these two kinds of parsers (composition-and factorization-based parsers) described in Chen et al. (2019) as state-of-the-art representatives.",
"Following the principle of compositionality, a semantic graph can be viewed as the result of a derivation process, in which a set of lexical and syntactico-semantic rules are iteratively applied and evaluated.",
"The core engine of the composition-based parser is a graph rewriting system that explicitly explores the syntactico-semantic recursive derivations that are governed by a Synchronous Hyperedge Replacement Grammar ( SHRG ; Chen et al., 2018b).",
"The parser constructs DMRS graphs by explicitly modeling such derivations.",
"It utilizes a constituent parser to build a syntactic derivation, and then selects semantic HRG rules associated to syntactic CFG rules to generate a graph.",
"When multiple rules are applicable for a single phrase, a neural network is used to rank them.",
"We use the parser in Chen et al. (2019) based on both the lexicalized grammar and the constructional grammar (refer to Chen et al. (2018b) for the distinction).",
"Henceforth, they are called lexicalized and constructional composition-based parsers respectively.",
"Figure 3 shows an example of the SHRG -based syntactico-semantic derivation from the constructional composition-based parser.",
"The derivation can be viewed as a syntactic tree enriched with semantic interpretation rules that are defined by an HRG .",
"Each phrase in the syntactic tree is assigned with a sub-graph of the final semantic structure.",
"Moreover, some particular nodes in a sub-graph are marked as communication channels to other meaning parts in the same sentence.",
"In HRG , these nodes are summarized as a hyperedge.",
"Two subgraphs are glued according to a construction rule following the graph substitution principle of HRG .",
"The factorization-based parser explicitly models the target semantic structures by defining a score function that is able to evaluate the goodness of any candidate graph.",
"It needs to know how to find the highest-scoring graph from a large set of discussion about pron udef pronoun BV ARG1 ARG2 BV R-INDEX ARG1 then and NP discussion about pron udef pronoun BV ARG1 ARG2 BV NP discussion about it and then ARG1 CONJ and then = R-INDEX NP CONJ NP Figure 3: An SHRG -based syntactico-semantic derivation from the composition-based parser.",
"The parser works with a two-stage pipeline structure, for concept identification and relation detection, as illustrated in Figure 4.",
"In the first phase, sequence labeling models are used to predict nodes, and in the second phase, we utilize the dependency model introduced by Dozat and Manning (2018) to link nodes.",
"The two models in both stages use a multi-layer BiLSTM to encode tokens.",
"In the first stage, another softmax layer is utilized to predict concept-related labels, while in the second stage, the dependency model is utilized to calculate a score for selecting token pairs.",
"We experiment with three different parsers introduced in last section, i.e., lexicalized and constructional composition-based parsers and the factorization-based parser.",
"We train these parsers on DeepBank version 1.1, corresponding to ERG 1214, and use the standard data split.",
"In order to examine the robustness of parsing models, we test on both L1 and L2 sentences.",
"Detailed results are shown in Table 3.",
"The parsing performances are depicted by SMATCH scores with regard to nodes, edges and the overall view.",
"Comparing different models, we can see that the factorization-based approach performs better on all setups, which is consistent with previous studies (Koller et al., 2019).",
"The gap between results on DeepBank and the other two datasets demonstrates the existence of cross-domain effect, which has been observed in plenty of NLP tasks, including but not limited to semantic parsing (Chen et al., 2018a; Lindemann et al., 2019; Blitzer and Pereira, 2007; Ben-David et al., 2010; Elsahar and Gall, 2019).",
"Furthermore, it is clear that there is a drop from L1 to L2 data.",
"The gap is marked in the last row, the average of which is about 4 points, indicating the insufficiency of using standard models to parse learner texts.",
"Still, the factorization-based model yields a lit-tle bit more robust results on non-native data.",
"We hold that the poor performance of composition-based model is caused by the explicit syntactico-semantic derivation process.",
"Since the interface between syntax and semantics of learner languages is somewhat unclear, directly applying rewriting rules extracted from L1 data may be partly misleading.",
"It is crucial to understand whether and to what extent parsers are indeed robust to learner errors.",
"We re-analyse the results from two aspects.",
"First, we modify the original SMATCH evaluation metric and enable it to be sensitive to distances from errors.",
"Then we make a distinction among typical error types proposed in CoNLL-2014 Shared Task (Ng et al., 2014).",
"Results show that standard parsers can not handle learner errors well enough and their behaviors vary among different Data LEX CXG FAC Node Edge All Node Edge All Node Edge All DeepBank 94.05 92.96 93.50 95.83 92.87 94.34 96.85 95.19 96.01 L1 88.41 86.44 87.41 90.32 86.04 88.14 92.28 89.12 90.91 L2 84.38 82.23 83.29 86.47 81.70 84.04 88.68 84.45 86.91 4.03 4.21 4.12 3.85 4.34 4.10 3.60 4.67 4.00 Table 3: SMATCH scores of semantic parsing on different test data.",
"It should be noticed that only several points in a sentence are occupied by errors while most of the structure is still well-formed.",
"The scores of L2 in Table 3 may be not able to exactly reflect the robustness of models.",
"Therefore, we modify the original SMATCH evaluation metric by paying additional attention to erroneous points.",
"The original metric can be formulated into an Integer Linear Programming (ILP) problem.",
"Suppose there are gold and predicted graphs G g ( m variables) and G p ( n variables).",
"Semantic relations in graphs are represented as triples which can illustrate both the concepts (represented as ( variable , concept , relation )) and edges (represented as ( variable1 , variable2 , relation )).",
"We define v ij = 1 iff the i th variable in G g is mapped to the j th variable in G p in the current alignment, v ij = 0 otherwise.",
"We have t kl = 1 iff the k th triple ( x , y , relation1 ) in G g and the l th triple ( w , z , relation2 ) in G p are matched, which means v xw = 1 , v yz = 1 and relation1=relation2 .",
"In the original metric, t kl takes the value of 1 or 0 and all triple pairs are treated equally.",
"In order to focus on the erroneous points, we put various weights on different triple pairs depending on their distance from errors.",
"Then the optimization problem can be stated as: max (cid:88) kl k t kl s.t. (cid:80) j v ij 1 , i = 1 , 2 , 3 . . . , m (cid:80) i v ij 1 , j = 1 , 2 , 3 . . . , n t r xy r wz v xw , t r xy r wz v yz , r xy r wz R Here, r xy means the triple describing the relationship between x and y , and R means the set of all triple pairs.",
"k refers to the weight of the k th triple in G g .",
"If we want to explore the performance on erroneous points, triples related to these points will be assigned a larger weight.",
"If we want to find out the performance on good part, we can just set the weight of triples involved with errors to zero.",
"Table 4 compares the error-oriented and error-ignored results.",
"We can see that although the average gap in Table 3 is about 4 points, the actual performance pertaining to the ill-formed part is much lower.",
"Especially, the F-score of nodes drops heavily.",
"The gray line in Figure 6 illustrates the tendency of scores changing with the distance from abnormal points.",
"It clearly shows that farther nodes suffer less.",
"Moreover, we explore the relationship between learner errors (LEs) and parsing errors (PEs).",
"We find that 21.40% PEs are caused by LEs and 66.80% LEs cause at least one PE.",
"It indicates that parsing models are really struggling with learner errors.",
"Furthermore, we look into the produced graphs with regard to different error types.",
"We refer to the list of error types introduced in the CoNLL-2014 Shared Task (Ng et al., 2014).",
"Detailed results are illustrated in Figure",
"5. This dia-40 60 80 WOadv WOinc Others Wform Pform Nn ArtOrDet FAC CXG LEX Figure 5: Overall SMATCH scores with regard to different grammatical error types.",
"gram reflects a clear comparison among different error types.",
"The four best-performing types are ArtOrDet , Nn , Pform and Wform , referring to errors of article or determinier, noun number, pronoun form and general word form, respectively.",
"We can see that most of them are related to morphological variations and can be disambiguated at the word level.",
"In contrast, WOadv and WOinc , meaning incorrect adjective/adverb order and other word order errors, are much more complex.",
"They are involved with reorganizations of the sentence structure and hence more difficult to handle.",
"Factorization-based model is more robust to these hard cases than composition-based models since it is grounded on graph structures and can reduce the influence from broken sequential syntax.",
"Previous evaluation indicates the difficulty to adopt a standard semantic parsing model to handle competence errors.",
"Motivated by this fact, we are concerned with whether it is feasible to automatically normalize the texts first.",
"Specifically, our strategy is correcting the grammatical errors contained in the input sentences, and then parsing the revised texts into semantic structures with standard models.",
"The first step can resort to Grammatical Error Correction (GEC), the task of correcting different kinds of errors in text.",
"It has attracted a lot of attention and considerable effort has been made to promote the performance on specific benchmark data.",
"We utilize two off-the-shelf GEC models.",
"One is a multilayer convolutional encoder-decoder neural network proposed in Chollampatt and Ng (2018).",
"We choose the basic model introduced in the paper.",
"The other model copies the unchanged words from the source sentence to the target sentence using a pretrained copy-augmented architecture with a denoising auto-encoder (Zhao et al., 2019).",
"It achieves the state-of-the-art performance without extra pseudo data.",
"Performances of the two GEC models on CoNLL-2014 test set are shown in Table",
"5. We train the factorization-based model on DeepBank and examine the performance on L2 and L1 sentences as well as the revised sentences by two GEC models.",
"The produced graphs are compared with I-silver which represents the intended meaning.",
"We notice that during the computation of SMATCH , some disagreements of nodes result from the discrepancy of morphological variation or different collocations between the input and the standard sentence.",
"Hence the node score may be underestimated.",
"Therefore, we relax the standards of matching nodes.",
"We establish a paraphrase table based on the statistical machine translation between a parallel learner corpus 3 .",
"As long as the labels of two aligned nodes have the same stem or they form a paraphrase pair in our table, then the two nodes can be considered match-ing.",
"We call the new evaluation metric as node-relaxed SMATCH .",
"Table 6 summarizes the results.",
"The gap between the first and the last rows demonstrates that it may be difficult to automatically infer the intended meaning based on the literal representation.",
"GEC does help us to understand the learner English, but it seems to be a small step on the progress bar.",
"Although the second GEC model (Zhao et al., 2019) outperforms the first model (Chollampatt and Ng, 2018) a lot on benchmark data (Table 5), its superiority on semantic parsing is not so obvious.",
"There is still a long way to go before automatically capturing the intended mean-3 https://sites.google.com/site/ naistlang8corpora/ Test Data Standard Error-oriented Node-relaxed Node Edge All Node Edge All Node Edge All L2 sentence 83.91 84.86 84.39 45.91 78.93 66.73 57.18 78.45 70.59 Chollampatt and Ng (2018) 84.98 85.13 85.06 53.06 80.06 70.08 62.05 79.26 72.90 Zhao et al. (2019) 86.10 85.85 85.98 58.73 80.56 72.49 65.39 80.09 74.66 L1 sentence 92.28 89.60 90.92 86.08 85.72 85.85 89.64 85.48 87.02 Table 6: Results of SMATCH scores compared to I-silver .",
"In order to figure out to what extent grammatical errors influence the good part and hence the whole sentence structure, we draw curves concerning distance from errors, which is displayed in Figure",
"6. The red line compares the two kinds of silver representations, which indicates the deviation from the intended meaning due to ungrammatical expressions.",
"It appears as a smooth curve which goes steadily up.",
"The overall trend indicates that the damage to farther parts from errors is less extensive.",
"We assume that the propagation process is limited by the syntactic architecture.",
"However, the situation of automatically predicted graphs by neural models is slightly different.",
"It is depicted by the blue line in Figure 6 and the gradient is much smaller.",
"We suggest it results from the great power of neural models to encode contextual information.",
"In the L2 circumstance, while such characteristic enables the encoder to capture long-distance dependencies, it also expands the scope of errors' influence.",
"In this paper, we formulate the ESL semantic parsing task based on the divergence on literal and intended meanings.",
"We establish parallel meaning representations by combining the complementary strengths of knowledge-intensive ERG-licensed analysis and dependency tree annotations through a new reranking model.",
"For literal meaning, we probe the semantic parsing of multiple state-of-the-art neural parsers and give detailed analysis of effects from grammatical errors.",
"For intended meaning, we investigate how grammatical errors affect the understanding of sentences as well as how grammatical error correction (GEC) can contribute to the parsing.",
"Results reveal three facts: 1) semantic parsing is sensitive to non-canonical expressions, and the parsing performance varies with regard to the distribution as well as types of grammatical errors; 2) Factorization-based parser is the most promising parser to process learner English; and 3) GEC has a positive, but limited influence on the parsing of intended meaning.",
"This paper shows a pilot study on the semantic parsing for learner language.",
"Future research may involve tailoring existing parsers to learner data, combining literal and intended meanings in a uni-fied framework, evaluating GEC models in terms of speakers' intention and parsing for other languages.",
"This work is supported in part by the National Hi-Tech R&D Program of China (No. 2018YFB1005100).",
"We thank the anonymous reviewers and the area chair for their useful feedback and suggestions, Yufei Chen and Yajie Ye for providing the parsers and Ben Roberts for proofreading.",
"Weiwei Sun is the corresponding author."
] | [
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"Generating a concise summary from a large collection of arguments on a given topic is an intriguing yet understudied problem.",
"We propose to represent such summaries as a small set of talking points, termed key points , each scored according to its salience.",
"We show, by analyzing a large dataset of crowd-contributed arguments, that a small number of key points per topic is typically sufficient for covering the vast majority of the arguments.",
"Furthermore, we found that a domain expert can often predict these key points in advance.",
"We study the task of argument-to-key point mapping, and introduce a novel large-scale dataset for this task.",
"We report empirical results for an extensive set of experiments with this dataset, showing promising performance.",
"Governments, businesses and individuals, all need to make decisions on a daily basis: Should cannabis be legalized? , Should we develop this product? , Should I become a vegetarian? .",
"When making an important decision, the process typically comprises several steps: first, we gather as much information as we can about the pros and cons of the proposal under consideration.",
"We may then summarize the collected information as a short list of the main arguments for each side.",
"Lastly, we aim to weigh the pro and con arguments against each other to make the final decision.",
"Where can we find relevant arguments for a given topic?",
"In recent years, significant progress was made in the field of argument mining , automatic identification and extraction of argumentative structures in text (Lawrence and Reed, 2020).",
"Specifically, several works focused on topic-related argument mining from the Web or other massive corpora (Levy et al., 2017, 2018; Wachsmuth et al., All authors equally contributed to this work. 2017; Stab et al., 2018a,b; Ein-Dor et al., 2020).",
"Policy makers in governments or businesses may also conduct surveys to collect from large audiences arguments supporting or contesting some proposal.",
"Each of the above methods may result in hundreds or thousands of arguments per topic, making it impossible for the decision maker to read and digest such large amounts of information.",
"Several works aimed to alleviate this problem by clustering together related arguments, based on different notions of relatedness, such as similarity (Reimers et al., 2019), frames (Ajjour et al., 2019), and argument facets (Misra et al., 2016).",
"These works, however, did not attempt to create a concise textual summary from the resulting clusters.",
"In this work we propose to summarize the arguments supporting each side of the debate by mapping them to a short list of talking points, termed key points .",
"The salience of each key point can be represented by the number of its matching arguments.",
"An example for such summary is shown in Table 1. Key points may be viewed as high-level arguments.",
"They should be general enough to match a significant portion of the arguments, yet informative enough to make a useful summary.",
"The proposed method raises a fundamental question: can a small number of key points effectively summarize massive amount of arguments collected from a large population?",
"In this work we give a positive answer to this question, based on extensive analysis over 28 controversial topics and 7,000 crowd-contributed pro and con arguments for these topics.",
"Furthermore, we found that, given a controversial topic, a domain expert can compose a short, comprehensive list of key points even without looking at the arguments themselves.",
"Motivated by the above findings, we assume in this work that the key points for each topic are given, and focus on the task of automatically map-Homeschooling should be banned #Args Pro Mainstream schools are essential to develop social skills.",
"ping arguments to these key points.",
"This setting may be viewed as an intermediate step towards fully automatic argument summarization, but also as a valuable setting by itself: argument-to-key point mapping allows measuring the distribution of key points in a massive collection of arguments.",
"It also allows interactive exploration of large argument collections, where key points serve as queries for retrieving matching arguments.",
"In addition, it can be used for novelty detection identifying unexpected arguments that do not match presupposed key points.",
"We develop the ArgKP dataset for the argument-to-keypoint mapping task, comprising about 24,000 (argument, key point) pairs labeled as matching/non matching.",
"1 To the best of our knowledge, this is the first dataset for this task.",
"As discussed in the next section in more detail, our dataset is also much larger and far more comprehensive than datasets developed for related tasks such as mapping posts or comments in online debates to reasons or arguments (Hasan and Ng, 2014; Boltuzic and Snajder, 2014).",
"We report empirical results for an extensive set of supervised and unsupervised configurations, achieving promising results.",
"The main contributions of this work are: 1. We demonstrate, through extensive data annotation and analysis over a variety of topics, the feasibility and effectiveness of summarizing a large set of arguments collected from a large audience by mapping them to a small set of key points.",
"1 The dataset is available at https://www.research.",
"ibm.com/haifa/dept/vst/debating_data.shtml 2. We develop the first large-scale dataset for the task of argument-to-key point mapping.",
"3. We perform empirical evaluation and analysis of a variety of classification methods for the above task.",
"The starting point for the current work is a collection of pro and con arguments for a given topic.",
"As previously mentioned, these arguments may be collected from a large audience by conducting a survey, or mined automatically from text.",
"Some of the previous work on argument mining focused on specific domains such as legal documents (Moens et al., 2007; Wyner et al., 2010), student essays (Stab and Gurevych, 2017; Persing and Ng, 2016), and user comments on proposed regulations (Park and Cardie, 2014).",
"Mining arguments and argument components for a given topic (also known as context ) has been a prominent line of research in argument mining.",
"Levy et al. (2014) introduced the task of context-dependent claim detection in a collection of Wikipedia articles, and Rinott et al. (2015) did the same for context-dependent evidence detection.",
"More recently, several works focused on topic-related argument mining from the Web or other massive corpora (Levy et al., 2017, 2018; Wachsmuth et al., 2017; Stab et al., 2018a,b; Ein-Dor et al., 2020).",
"Stance classification of extracted arguments can be performed as a separate step (Bar-Haim et al., 2017) or jointly with argument detection, as a three-way classification (pro argument/con argu-ment/none), as done by Stab et al. (2018b).",
"Several works have focused on identifying pairs of similar arguments, or clustering similar arguments together.",
"Ajjour et al. (2019) addressed the task of splitting a set of arguments into a set of nonoverlapping frames such as Economics , Environment and Politics .",
"Reimers et al. (2019) classified argument pairs as similar/dissimilar.",
"Misra et al. (2016) aimed to detect argument pairs that are assumed to share the same argument facet , which is similar to our notion of key points .",
"However, they did not attempt to explicitly identify or generate these facets, which remained implicit, but rather focused on detecting similarity between argument pairs.",
"In contrast to these works, we directly map arguments to key points.",
"Egan et al. (2016) proposed to summarize argumentative discussions through the extraction of salient points, where each point is a verb and its syntactic arguments.",
"Applying their unsupervised method to online political debates showed significant improvement over a baseline extractive summarizer, according to human evaluation.",
"While the current work also aims to summarize argumentative content via concise points, our goal is not to extract these points but to accurately map arguments to given points.",
"Our main challenge is to identify the various ways in which the meaning of a point is conveyed in different arguments.",
"The method employed by Egan et al. only matches arguments with the same signature the same verb, subject and object dependency nodes, hence its ability to capture such variability is limited.",
"The line of work that seems most similar to ours is of Hasan and Ng (2014), Boltuzic and Snajder (2014) and Naderi (2016).",
"Hasan and Ng classified posts and individual sentences from online debates into a closed set of reasons , composed manually for each topic.",
"Boltuzic and Snajder mapped comments from one debating website ( ProCon.org ) to arguments taken from another debating website ( iDebate.org ).",
"Naderi (2016) addressed a similar task: she used part of the Boltuzic and Snajder corpus as training data for an SVM classifier, which was then tested on sentences and paragraphs from same-sex marriage debates in the Canadian Parliament, annotated with the same set of arguments.",
"Our work differs from these works in several respects.",
"First, we deal with crowd-contributed arguments, taken from the dataset of Gretz et al. (2020) while these works dealt with posts or comments in debate forums, and parliamentary debates.",
"Second, the dataset developed in this work is far more extensive, covering 28 topics and over 6,500 arguments 2 , as compared to 2-4 topics in the datasets of Boltuzic and Snajder and Hasan and Ng, respectively.",
"This allows us to perform a comprehensive analysis on the feasibility and effectiveness of argument-to-key point mapping over a variety of topics, which has not been possible with previous datasets.",
"Lastly, while Hasan and Ng only perform within-topic classification, where the classifier is trained and tested on the same topic, we address the far more challenging task of cross-topic classification.",
"Boltuzic and Snajder experimented with both within-topic and cross-topic classification, however they used a limited amount of data for training and testing: two topics, with less than 200 comments per topic.",
"Finally, we point out the similarity between the argument/key point relation and the text/hypothesis relation in textual entailment , also known as natural language inference (NLI) (Dagan et al., 2013).",
"Indeed, Boltuzic and Snajder (2014) used textual entailment as part of their experiments, following the earlier work of Cabrio and Villata (2013), who used textual entailment to detect support/attack relations between arguments.",
"As a source of arguments for this work we have used the publicly available IBM-Rank-30k dataset (Gretz et al., 2020).",
"This dataset contains around 30K crowd-sourced arguments, annotated for polarity and point-wise quality.",
"The arguments were collected with strict length limitations, accompanied by extensive quality control measures.",
"Out of the 71 controversial topics in this dataset, we selected the subset of 28 topics for which a corresponding motion exists in the Debatabase repository of the iDebate website 3 .",
"This requirement guaranteed that the selected topics were of high general interest.",
"We filtered arguments of low quality (below 0.5) and unclear polarity (below 0.6), to ensure sufficient argument quality in the downstream analysis.",
"We randomly sampled 250 arguments per topic 2 As detailed in the next section, a few hundreds of arguments out of the initial 7,000 were filtered in the process of constructing the dataset.",
"from the set of arguments that passed these filters",
"(7,000 arguments in total for the 28 topics).",
"Debatabase lists several pro and con points per motion, where each point is typically 1-2 paragraphs long.",
"The headline of each point is a concise sentence that summarizes the point.",
"Initially, we intended to use these point headlines as our key points.",
"However, we found them to be unsuitable for our purpose, due to a large variance in their level of specificity, and their low coverage of the crowd's arguments, as observed in our preliminary analysis.",
"To overcome this issue, we let a domain expert who is a professional debater write the key points from scratch.",
"The expert debater received the list of topics and was asked to generate a maximum of 7 key points for each side of the topic, without being exposed to the list of arguments per topic.",
"The maximal number of key points was set according to the typical number of pro and con points in Debatabase motions.",
"1. Given a debate topic, generate a list of possible key points in a constrained time frame of 10 minuets per side.",
"2. Unify related key points that can be expressed as a single key point.",
"3. Out of the created key points, select a maximum of 7 per side that are estimated to be the most immediate ones, hence the most likely to be chosen by crowd workers.",
"The process was completed within two working days.",
"A total of 378 key points were generated, an average of 6.75 per side per topic.",
"Using the Figure Eight crowd labeling platform 4 , we created gold labels for associating the arguments selected as described in Section 3.1 with key points.",
"For each argument, given in the context of its debatable topic, annotators were presented with the key points created for this topic in the relevant stance.",
"They were guided to mark all of the key points this argument can be associated with, and if none are relevant, to select the 'None' option.",
"Each argument was labeled by 8 annotators.",
"Quality Measures: to ensure the quality of the",
"1. Test questions.",
"Annotators were asked to determine the stance of each argument towards the topic.",
"Similarly to Toledo et al. (2019), this question functioned as a hidden text question 5 .",
"All judgments of annotators failing in more than 10% of the stance questions were discarded.",
"2. Annotator score.",
"This score, measuring inter annotator agreement, as defined by Toledo et al. (2019), was calculated for each annotator, and all judgments of annotators with annotator < 0 .",
"3 were ignored.",
"This score averages all pair-wise Cohen's Kappa (Landis and Koch, 1997) for a given annotator, for any annotator sharing at least 50 judgments with at least 5 other annotators.",
"3. Selected group of trusted annotators.",
"As in Gretz et al. (2020), the task was only available to a group of annotators which had performed well in previous tasks by our team.",
"As described above, the annotation of each key point with respect to a given argument was performed independently, and each annotator could select multiple key points to be associated with each given argument.",
"For the purpose of calculating inter-annotator agreement, we considered (argument, key point) pairs, annotated with a binary label denoting whether the argument was matched to the key point.",
"Fleiss' Kappa for this task was 0 .",
"44 (Fleiss, 1971), and Cohen's Kappa was 0 .",
"5 (averaging Annotator scores).",
"These scores correspond to moderate agreement and are comparable to agreement levels previously reported for other annotation tasks in computational argumentation (Boltuzic and Snajder, 2014; Ein-Dor et al., 2020).",
"As for the stance selection question, 98% of the judgments were correct, indicating overall high annotation quality.",
"Data Cleansing : In addition to the above measures, the following annotations were removed from the data:",
"(i) Annotations in which the answer to the stance selection question was wrong;",
"(ii) Annotations in which key point choice was illegal the 'None' option and one of the key points were 5 Unlike Toledo et al., the results were analyzed after the task was completed, and the annotators were not aware of their success/failure.",
"both selected.",
"However, the rate of these errors, for each of the annotators, was rather low ( < 10% and < 5% , respectively).",
"Arguments left with less than 7 valid judgments after applying the above quality measures and data cleansing were removed from the dataset.",
"6 , 568 labeled arguments remain in the dataset.",
"Next, we consolidate the individual annotations as follows.",
"We say that an argument a is mapped to a key point k if at least 60% of the annotators mapped a to k .",
"Recall that an argument can be mapped to more than one key point.",
"Similarly, we say that a has no key point if at least 60% of the annotators mapped a to None (which is equivalent to not selecting any key point for the argument).",
"Otherwise, we say that a is ambiguous , i.e., the annotations were indecisive.",
"Table 2 shows examples for arguments and their matching key points in our dataset.",
"The distribution of the arguments in the dataset over the above categories is shown in Table 3. Remarkably, our key points, composed independently of the arguments, were able to cover 72.5% of them, with 5% of the arguments mapped to more than one key point.",
"We further investigated the differences between arguments in each category, by comparing their average quality score (taken from the IBM-Rank-30k dataset), number of tokens and number of sentences.",
"The results are shown as additional columns in Table 3. Interestingly, arguments that have no key point tend to be shorter and have lower quality score, comparing to arguments mapped to a single key point; arguments mapped to more than one key point are the longest and have the highest quality.",
"Figure 1 examines the impact of the number of key points on argument coverage.",
"For each topic and stance, we order the key points according to the number of their matched arguments, and add them incrementally.",
"The results indicate that arguments are not trivially mapped to only one or two key points, but a combination of several key points is required to achieve high coverage.",
"The marginal contribution decays for the sixth and seventh key points, suggesting that seven key points indeed suffice for this task.",
"22.8% of the arguments are ambiguous .",
"Annotations for these arguments are split over several possible key points, none reaching the 60% threshold.",
"For instance, the argument homeschooling Figure 1: Argument coverage per number of key points. enables parents with fringe views to push their agenda on their children without allowing exposure to alternative viewpoints. , had two key points with annotator votes higher than 40%, but below 60%: 1. Homeschools cannot be regulated / standardized.",
"Such cases suggest that many arguments are somewhat covered by the key points, but if the judgment is not clear-cut, the different intuitions of the annotators may result in no label receiving the required majority.",
"The ArgKP dataset includes (argument, key point) pairs with binary labels indicating whether the argument is matched to the key point.",
"The dataset was created from the labeled data as follows.",
"We define the label score of a pair as the fraction of annotations that classified the pair as matching .",
"Pairs with label score 0 .",
"6 were labeled as positive (matching).",
"Pairs with label score 0 .",
"15 were labeled as negative (non-matching).",
"Pairs with label score in between these thresholds were removed.",
"We further cleansed our data by discarding key points having less than three matching arguments.",
"This led to the removal of 135 out of the 378 key points and 14,679 out of 38,772 pairs obtained from the previous step.",
"The final dataset has 24,093 labeled (argument, key point) pairs, of which 4,998 pairs (20.7%) are positive.",
"It has 6,515 arguments (232.67 per topic), and 243 key points (8.67 key points per topic).",
"For each pair, the dataset also specifies the topic and the stance of the argument towards the topic.",
"We assessed the quality of the resulting dataset by having an expert annotator 6 mapping 100 ran-6 A professional debater who was not involved in the development of the dataset.",
"domly sampled arguments to key points, and comparing the annotations to the gold labels for all the corresponding pairs in the dataset.",
"We obtained a remarkably high Cohen's Kappa of 0.82 (almost perfect agreement), validating the high quality of the dataset.",
"We perform the task of matching arguments to key points in two steps.",
"In the Match Scoring step (Sec-tion 4.1.1), we generate a score for each argument and key point.",
"Then, in the Match Classification step (Section 4.1.2), we use these scores to classify the pairs as matching or non-matching.",
"We perform 4-fold cross-validation over the ArgKP dataset.",
"Each fold comprises 7 test topics, 17 train topics and 4 development topics.",
"We experimented with both unsupervised and supervised methods for computing a match score for a given (argument, key point) pair.",
"We also explored transfer learning from the related task of natural language inference (NLI).",
"Word Embedding.",
"We examined averaged word embeddings using GloVe (Pennington et al., 2014) and BERT (Devlin et al., 2019).",
"GloVe is a context independent model that computes a single embedding for each word.",
"BERT is a contextualized embedding model that takes the entire sentence into account.",
"We also experimented with other embedding methods that under-performed BERT and thus their results are not reported here: Universal Sentence Encoder (Cer et al., 2018) and In-ferSent (Conneau et al., 2017).",
"Again, we use cosine similarity to compute the match score.",
"Supervised Methods.",
"We fine tuned the BERT-base-uncased and BERT-large-uncased models (De-vlin et al., 2019) to predict matches between argument and key point pairs.",
"We added a linear fully connected layer of size 1 followed by a sigmoid layer to the special [CLS] token in the BERT model, and trained it for three epochs with a learning rate of 2e-5 and a binary cross entropy loss.",
"NLI Transfer Learning.",
"We also experimented with transfer learning from NLI to our task of argument-to-key point match classification.",
"This was motivated by the similarity between these tasks (as discussed in Section 2.2), as well as the availability of large-scale NLI labeled datasets.",
"We considered the Stanford (SNLI) and the Multi-Genre (MNLI) datasets (Bowman et al., 2015; Williams et al., 2018), each comprising hundreds of thousands of labeled premise-hypothesis pairs.",
"Pairs labeled as ENTAILMENT were considered positive instances, while the rest of the pairs, labeled as NEUTRAL or CONTRADICTION were considered negative.",
"We trained BERT-base and BERT-large models on each of these datasets, following the procedure described above.",
"In the match classification step we select the matching key points for each argument, based on their respective matching scores.",
"The classification can be done locally, treating each pair individually, or globally, by examining all possible key points for each argument.",
"We compared the following policies for selecting matching key points for a given argument.",
"Threshold.",
"For each fold, we find the threshold on the match score that maximizes the F1 score for the positive (matching) class.",
"Pairs whose score exceeds the learned threshold are considered matched.",
"Best Match (BM).",
"Using a threshold is not optimal for our data, where most arguments have at most one matched key point.",
"A natural solution is to select the best matching key point.",
"For each argument, we consider all key points for the same topic and stance as candidates and predict only the candidate with the highest match score as matched to the argument and the rest as unmatched.",
"Note that this is the only fully unsupervised selection policy, as it does not require labeled data for learning a threshold.",
"BM+Threshold.",
"The BM policy always assigns exactly one key point for each argument, while 27.5% of the arguments in our data are not matched to any key point.",
"To address this, we combine the two former policies.",
"The top matching key point is considered a match only if its match score exceeds the learned threshold.",
"Dual Threshold.",
"In order to account for arguments with more than one matching key point, two thresholds are learned.",
"If two key points exceed the lower threshold and at least one of them exceeds the upper threshold, both will be matched.",
"Otherwise, it works the same as the BM+Threshold policy using only the lower threshold.",
"This allows for zero to two matches per argument.",
"Thresholds are learned from the development set for supervised match scoring methods, and from both train and development set for unsupervised match scoring methods.",
"Table 4 compares the various match scoring methods, all using the Threshold key point selection policy.",
"Results are obtained by micro-averaging over the argument-key point pairs in each fold, and averaging over the different folds.",
"We consider Precision, Recall and F1 of the positive class, as well as the overall accuracy.",
"We also list for reference the majority class baseline that always predicts no match, and the random baseline, which randomly predicts the positive class according to its probability in the training data.",
"The unsupervised models fail to capture the relation between the argument and the key points.",
"Tf-Idf and Glove perform the worst, showing that simple lexical similarity is insufficient for this task.",
"BERT embedding does better but still reaches a relatively low F1 score of 0.4.",
"In contrast to the unsupervised models, supervised models are shown to perform well.",
"BERT with fine tuning leads to a substantial improvement, reaching F1 score of 0.657 with the BERT-base model, and 0.684 with the BERT-large model.",
"BERT Models trained on NLI data are considerably better than the unsupervised methods, with the best model reaching F1 of 0.526, yet their performance is still far below the supervised models trained on our ArgKP dataset.",
"This may reflect both the similarities and the differences between NLI and the current task.",
"We have also experimented with combining these two types of data in cascade: BERT was first trained on a large NLI dataset (SNLI, MNLI or their union), and was then fine-tuned on the smaller ArgKP data.",
"However, it did not improve the supervised results.",
"Error Analysis.",
"By analyzing the top errors of the supervised classifier (BERT-large), we found several systematic patterns of errors.",
"In most cases, non-matching arguments and key points received a high match score in one of the following cases: They share some key phrases.",
"is very expensive and it would also need to be subsidized. and Subsidizing vocational education is expensive.",
"They share a large portion of the sentence, but not the main point, for example: Women should be able to fight if they are strong enough and Women should be able to serve in combat if they choose to.",
"They are at least partially related, but labeled as non-matching due to a better fitting key point for the same argument.",
"For example: We should subsidize space exploration because it increases the knowledge of the universe we are in and Space exploration improves science/technology can be considered matched, but were labeled as unmatched due to the key point Space exploration unravels information about the universe .",
"Using the Best Match policy helps in these cases.",
"For arguments and key points that were labeled as matched but received a low match score, the relation was in many cases implied or required some further knowledge, for example: Journalism is an essential part of democracy and freedom of expression and should not be subsidized by the state. and government intervention has the risk of inserting bias/harming objectivity.",
"Table 5 compares different key point selection policies, all using the best performing match scoring method: BERT-large fine-tuned on ArgKP.",
"We report the results over the whole dataset (all arguments), as well as the subsets of arguments having none, single or multiple matching key points according to the labeled data.",
"In the case of no matches, we report accuracy, as recall and F1 scores are undefined.",
"When considering all the arguments, the Dual Threshold policy achieves the best F1 score of 0.73.",
"The Threshold method performs well for arguments with no matches or multiple matches.",
"When there is exactly one match (the common case in our data), it has lower precision.",
"The Best Match policy performs well when there is a single match, but is not able to cope with arguments that have no matches or have multiple matches.",
"The BM+Threshold method combines the two and is useful when there are no matching key points or a single matching key point, but still has lower recall when there are multiple matching key points.",
"The Dual Threshold method improves the recall and therefore the F1 score for multiple matches while maintaining good performance for arguments with single or no matches.",
"Figure 2 shows Precision-Recall trade-off for the various policies, using the different possible thresholds, computed for one of the folds.",
"For each policy, we specify the best F1 score, as well as the F1 score obtained for the selected threshold, which was optimized over the development set.",
"Figure 2: Precision/Recall trade-off for different key point selection policies.",
"The Threshold policy allows controlling recall, up to one (where the threshold is zero), at the price of low precision.",
"The BM+Threshold policy generates the highest precision, but low recall, since at most one candidate is selected.",
"Note that when the threshold is zero, the BM+Threshold policy is equivalent to the BM policy.",
"The Dual Threshold policy offers the best trade-off, for mid-range precision and recall.",
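Such curves can be traced by sweeping the decision threshold over the scored pairs. A sketch for the plain Threshold policy using scikit-learn, with toy arrays standing in for a fold's scores (the other policies would wrap the selection function sketched earlier):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# scores / labels over one fold's argument-key point pairs (toy values)
scores = np.array([0.92, 0.81, 0.35, 0.10, 0.70, 0.55])
labels = np.array([1, 1, 0, 0, 1, 0])

precision, recall, thresholds = precision_recall_curve(labels, scores)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-9, None)
best = int(np.argmax(f1))
print(f"best F1 {f1[best]:.3f} at threshold "
      f"{thresholds[min(best, len(thresholds) - 1)]:.2f}")
```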
"This work addressed the practical problem of summarizing a large collection of arguments on a given topic.",
"We proposed to represent such summaries as a set of key points scored according to their relative salience.",
"Such summary aims to provide both textual and quantitative views of the argument data in a concise form.",
"We demonstrated the feasibility and effectiveness of the proposed approach through extensive data annotation and analysis.",
"We showed that a domain expert can quickly come up with a short list of pro and con key points per topic, that would capture the gist of crowd-contributed arguments, even without being exposed to the arguments themselves.",
"We studied the problem of automatically matching arguments to key points, and developed the first large-scale dataset for this task, which we make publicly available.",
"Our experimental results demonstrate that the problem is far from trivial, and cannot be effectively solved using unsupervised methods based on word or sentence-level embedding.",
"However, by using state of the art supervised learning methods for match scoring, together with an appropriate key point selection policy for match classification, we were able to achieve promising results on this task.",
"The natural next step for this work is the challenging task of automatic key point generation.",
"In addition, we plan to apply the methods presented in this work also to automatically-mined arguments.",
"Finally, detecting the more implicit relations between the argument and the key point, as seen in our error analysis, is another intriguing direction for future work.",
"We would like to thank the anonymous reviewers for their helpful comments and suggestions."
] | [
"abstain",
"objective",
"objective",
"result",
"objective",
"result",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"objective",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"objective",
"method",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"method",
"result",
"other"
] |
[
"One great challenge in neural sequence labeling is the data sparsity problem for rare entity words and phrases.",
"Most of test set entities appear only few times and are even unseen in training corpus, yielding large number of out-of-vocabulary (OOV) and low-frequency (LF) entities during evaluation.",
"In this work, we propose approaches to address this problem.",
"For OOV entities, we introduce local context reconstruction to implicitly incorporate contextual information into their representations.",
"For LF entities, we present delexicalized entity identification to explicitly extract their frequency-agnostic and entity-type-specific representations.",
"Extensive experiments on multiple benchmark datasets show that our model has significantly outperformed all previous methods and achieved new start-of-the-art results.",
"Notably, our methods surpass the model fine-tuned on pre-trained language models without external resource.",
"In the context of natural language processing (NLP), the goal of sequence labeling is to assign a categorical label to each entity word or phrase in a text sequence.",
"It is a fundamental area that underlies a range of applications including slot filling and named entity recognition.",
"Traditional methods use statistical models.",
"Recent approaches have been based on neural networks (Collobert et al., 2011; Mesnil et al., 2014; Ma and Hovy, 2016; Strubell et al., 2017; Li et al., 2018; Devlin et al., 2018; Liu et al., 2019a; Luo et al., 2020; Xin et al., 2018) and they have made great progresses in various sequence labeling tasks.",
"However, a great challenge to neural-network-based approaches is from the data sparsity problem (Augenstein et al., 2017).",
"Specifically, in the context of sequence labeling, the majority of entities in the test dataset may occur in the training corpus only a few times or be absent from it entirely (e.g., in Table 1, 1611 test set entities, about 65%, have frequency 0 and are thus OOV).",
"In this paper, we refer to this phenomenon specifically as the rare entity problem .",
"It is different from other types of data sparsity problems such as the lack of training data for low-resource language (Lin et al., 2018), as this rare entity problem is more related to a mismatch of entity distributions between training and test, rather than the size of training data.",
"We present an example of the problem in Table 1. It shows that less than 5% of test set entities are frequently observed in the training set, and about 65% of test set entities are absent from the training set.",
"The rare entities can be categorized into two types: out-of-vocabulary (OOV) for those test set entities that are not observed in the training set, and low frequency (LF) for those entities with low frequency (e.g., fewer than 10) occurrences in the training set.",
"Without proper processing, rare entities can incur the following risks when building a neural network.",
"Firstly, OOV terms may act as noise for inference, as they lack lexical information from training set (Bazzi, 2002).",
"Secondly, it is hard to obtain high-quality representations on LF entities (Gong et al., 2018).",
"Lastly, high occurrences of OOV and LF entities expose distribution discrepancy between training and test, which mostly leads to poor performances during test.",
"In general, there are two existing strategies attempting to mitigate the above issues: external resource and transfer learning .",
"The external resource approach, for example (Huang et al., 2015; Li et al., 2018), uses external knowledge such as part-of-speech tags for NER or additional information from intent detection for slot filling.",
"However, external knowledge such as part-of-speech tag is not always available for practical applications and open source taggers such as (Manning et al., 2014) may perform poorly for cross-domain annotations.",
"Character or n-gram features are mainly designed to deal with morphologically similar OOV words.",
"The transfer learning approach, such as using ELMo (Peters et al., 2018) and BERT (De-vlin et al., 2018), fine-tunes pre-trained models on the downstream task (Liu et al., 2019a).",
"Nevertheless, it is not directly addressing problems such as entity distribution discrepancy between training and test.",
"Moreover, our proposed methods surpass these methods without resorting to external resources or large pre-trained language models.",
"This paper proposes novel techniques that enable sequence labeling models to achieve state-of-the-art performances without using external resource nor transfer learning.",
"These are local context reconstruction (LCR), which is applied on OOV entities, and delexicalized entity identification (DEI), which is applied on LF entities.",
"Local context reconstruction enables OOV entities to be related to their contexts.",
"One key point is applying variational autoencoder to model this reconstruction process that is typically a one-to-many generation process.",
"Delexicalized entity identification aims at extracting frequency-agnostic and entity-type-specific representation, therefore reducing the reliance on high-frequency occurrence of entities 1 .",
"It uses a novel adversarial training technique to achieve this goal.",
"Both methods use an effective random entity masking strategy.",
"We evaluate the methods on sequence labeling tasks on several benchmark datasets.",
"Extensive experiments show that the proposed methods significantly outperform previous models by a large margin.",
"Detailed analysis indicates that the proposed methods indeed alleviate the rare entity problem.",
"1 This paper refers to slots in slot filling tasks as entities for brevity, although their definitions are not equivalent.",
"Notably, without using any external knowledge nor pre-trained models, the proposed methods surpass the model that uses fine-tuned BERT.",
"Given an input sequence $X = [x_1, x_2, \ldots, x_N]$ with N tokens, the sequence labeling task aims at learning a functional mapping to obtain a target label sequence $Y = [y_1, y_2, \ldots, y_N]$ of equal length.",
"In the following, we briefly introduce a typical method for sequence labeling and review related techniques we use in deriving our model.",
"Recurrent neural network (RNN) (Hochreiter and Schmidhuber, 1997) has been widely used for sequence labeling.",
"The majority of high performance models use bidirectional RNN (Schuster and Pali-wal, 1997) to encode input sequence X and conditional random field (CRF) (Lafferty et al., 2001) as a decoder to output Y .",
"The bidirectional RNN firstly embeds the observation $x_i$ at each position i into a continuous space $\mathbf{x}_i$.",
"It then applies forward and backward operations on the whole sequence time-recursively as $\overrightarrow{h}_i = \overrightarrow{f}(\mathbf{x}_i, \overrightarrow{h}_{i-1})$ and $\overleftarrow{h}_i = \overleftarrow{f}(\mathbf{x}_i, \overleftarrow{h}_{i+1})$. (1)",
"CRF computes the probability of a label sequence Y given X as",
"$\log p(Y|X) \propto \sum_i \left( g_i[y_i] + G[y_i, y_{i+1}] \right), \quad g_i = W(\overrightarrow{h}_i \oplus \overleftarrow{h}_i)$, (2)",
"where $\oplus$ denotes the concatenation operation.",
"G and W are learnable matrices.",
"The sequence with the maximum score is the output of the model, typically obtained using the Viterbi algorithm.",
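Under this parameterization, the unnormalized score of a candidate label sequence is a sum of emission and transition terms. A minimal PyTorch sketch, assuming precomputed emission scores (names mirror the symbols in Eq. (2)):

```python
import torch

def sequence_score(g: torch.Tensor, G: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Unnormalized score of Eq. (2) for one label sequence.

    g: (N, L) emission scores, g_i = W(h_fwd concat h_bwd), precomputed
    G: (L, L) label-transition matrix
    y: (N,) label indices of the candidate sequence
    """
    emission = g[torch.arange(y.size(0)), y].sum()
    transition = G[y[:-1], y[1:]].sum()
    return emission + transition
```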
"We use the bidirectional RNN + CRF model, in particular Bi-LSTM+CRF (Huang et al., 2015), as the baseline model in our framework; it is shown in the bottom part of Figure 1.",
"2.2 Variational Autoencoder The above model, together with other encoder-decoder models (Sutskever et al., 2014; Bahdanau et al., 2014), learns deterministic and discriminative functional mappings.",
"The variational autoencoder (VAE) (Kingma and Welling, 2015; Rezende et al., 2014; Bowman et al., 2015), on the other hand, is stochastic and generative.",
"Using VAE, we may assume a sequence $x = [x_1, x_2, \ldots, x_N]$ is generated stochastically from a latent global variable z with a joint probability of $p(x, z) = p(x|z)\, p(z)$, (3)",
"where p(z) is the prior probability of z, generally a simple Gaussian distribution, to keep the model from generating x deterministically.",
"p ( x | z ) represents a generation density, usually modeled with a conditional language model with initial state of z .",
"Maximum likelihood training of a model for Eq. (3) involves a computationally intractable integration over z.",
"To circumvent this, VAE uses variational inference with a variational distribution of z given by a Gaussian density $q(z|x) = \mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$, with mean vector $\mu$ and diagonal covariance $\mathrm{diag}(\sigma^2)$ parameterized by neural networks.",
"VAE also uses the reparameterization trick to obtain the latent variable z as follows: $z = \mu + \sigma \odot \epsilon$, (4) where $\epsilon$ is sampled from a standard Gaussian distribution and $\odot$ denotes the element-wise product.",
"The evidence lower bound (ELBO) of the likelihood p(x) is obtained using Jensen's inequality $\mathbb{E}_{q(z|x)} \log \frac{p(x,z)}{q(z|x)} \le \log p(x)$ as follows: $\mathcal{L}_{vae}(x) = -\mathrm{KL}(q(z|x) \,\|\, p(z)) - \mathrm{CE}(q(z|x) \,|\, p(x|z))$, (5) where $\mathrm{KL}(q \| p)$ and $\mathrm{CE}(q | p)$ respectively denote the Kullback-Leibler divergence and the cross-entropy between distributions q and p.",
"ELBO can be optimized by alternating between optimizations of parameters of q ( z | x ) and p ( x | z ) .",
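The two VAE ingredients used below, the reparameterization trick of Eq. (4) and the closed-form KL term of Eq. (5), can be sketched as follows (PyTorch, assuming a diagonal Gaussian posterior and a standard normal prior):

```python
import torch

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # z = mu + sigma * eps with eps ~ N(0, I), Eq. (4)
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # closed-form KL(N(mu, diag(sigma^2)) || N(0, I)) per example
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=-1)
```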
"We apply VAE for local context reconstruction from slot/entity tags in Figure 1. This is a generation process that is inherently one-to-many.",
"We observe that VAE is superior to the deterministic model (Bahdanau et al., 2014) in learning representations of rare entities.",
"Adversarial training (Goodfellow et al., 2014), originally proposed to improve robustness to noise in image, is later extended to NLP tasks such as text classification (Miyato et al., 2015, 2016) and learning word representation (Gong et al., 2018).",
"We apply adversarial training to learn better representations of low frequency entities via delexicalized entity identification in Figure 1. It has a discriminator to differentiate representations from the original low-frequency entities and the representations of the delexicalized entities.",
"Training aims at obtaining representations that can fool the discriminator, therefore achieving frequency-agnostics and entity-type-specificity.",
"We illustrate the overall framework of the proposed model in Figure 1. Its baseline sequence labeling module is described in Section 2.1.",
"We describe the details of local context reconstruction in Sec. 3.1 and delexicalized entity identification in Sec. 3.2, together with an example to illustrate them in Figure 2.",
"We denote the parameters in Sec. 2.1 as $\theta_{rnn}$ and $\theta_{emb}$, respectively, for its RNN and its word embedding matrix.",
"The parameters in Sec. 3.1 and Sec. 3.2 are denoted as $\theta_{lcr}$ and $\theta_{D}$, respectively.",
"Contrary to the conventional methods that explicitly provide abundant lexical features from external knowledge, we implicitly enrich word representations with contextual information by training them to reconstruct their local contexts.",
"Masking Every entity word $x_i$ in the sequence X, defined as a word not associated with the non-entity label O, is firstly randomly masked with the OOV symbol [UNK] as follows: $x^u_i = \begin{cases} [\mathrm{UNK}] & \text{if } y_i \ne O \text{ and } \epsilon > p \\ x_i & \text{otherwise} \end{cases}$, (6) where the constant p is a threshold and $\epsilon$ is uniformly sampled between 0 and 1.",
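A sketch of this masking step, with the threshold p as a hyperparameter and 'O' marking non-entity labels as in Eq. (6):

```python
import random

def mask_entities(tokens, labels, p=0.5, unk="[UNK]"):
    """Randomly replace entity words with [UNK], following Eq. (6).

    labels use 'O' for non-entity tokens; p is the masking threshold.
    """
    return [unk if y != "O" and random.random() > p else x
            for x, y in zip(tokens, labels)]
```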
"Forward Reconstruction In the forward reconstruction process, the forward pass of Eq. (1) is firstly applied on the sequence $X^u = [x^u_1, x^u_2, \ldots, x^u_N]$ to obtain hidden states $\overrightarrow{h}^u_i$.",
"Then, a forward span representation $m^f_{jk}$ of the local context between positions j and k is obtained using the RNN-minus feature (Wang and Chang, 2016) as $m^f_{jk} = \overrightarrow{h}^u_k - \overrightarrow{h}^u_j$. (7)",
"To apply VAE to reconstruct the local context, the mean $\mu^f_{jk}$ and log-variance $\log \sigma^f_{jk}$ are firstly computed from the above representation as $\mu^f_{jk} = W^{\mu}_1 \tanh(W_0 m^f_{jk})$ and $\log \sigma^f_{jk} = W^{\sigma}_1 \tanh(W_0 m^f_{jk})$, (8) where the W are all learnable matrices.",
"Then, the reparameterization trick in Eq. (4) is applied on $\mu^f_{jk}$ and $\sigma^f_{jk} = \exp(\log \sigma^f_{jk})$ to obtain a global latent variable $z^f_{jk}$ for the local context.",
"To generate the i-th word in the local context sequence $[x_{j+1}, x_{j+2}, \ldots, x_{k-1}]$, we first apply an RNN-decoder with its initial hidden state given by the latent variable $z^f_{jk}$ and its first observation given by the embedding of the [SOS] symbol, to recursively obtain hidden states $r^f_i = f(x_i, r^f_{i-1})$. (9)",
"This RNN-decoder shares its parameters with the forward-pass RNN-encoder in Eq. (1).",
"We then use softmax to compute the distribution of the word at position i as $P^{vae}_i = \mathrm{Softmax}(W^f_g r^f_i)$, (10) where $W^f_g$ is a learnable matrix.",
"Lastly, we compute the KL divergence and cross-entropy for a length-L local context sequence in Eq. (5) as follows: $\mathrm{KL}^f_{jk} = \sum_d \kappa(\mu^f_{jk}[d], \sigma^f_{jk}[d])$, $\mathrm{CE}^f_{jk} = -\frac{1}{L} \sum_i \log(P^{vae}_i[x_i])$, $\mathcal{L}^{vae}_{jk} = -\mathrm{KL}^f_{jk} - \mathrm{CE}^f_{jk}$, (11) where d denotes the hidden dimension index and the closed-form KL divergence is defined as $\kappa(\mu, \sigma) = \frac{\mu^2 + \sigma^2 - (1 + \log \sigma^2)}{2}$.",
"Backward Reconstruction Same as the forward reconstruction, the backward reconstruction is applied on non-adjacent successive entities.",
"The backward pass of Eq. (1) is firstly applied on the entity-masked sequence $X^u$.",
"Once the backward span representation $m^b_{kj}$ of the local context between positions k and j is obtained as $m^b_{kj} = \overleftarrow{h}^u_j - \overleftarrow{h}^u_k$, the same procedures as in the forward reconstruction are conducted, except using the backward RNN-encoder $\overleftarrow{f}(\cdot)$ in lieu of the forward RNN-encoder in Eq. (1).",
"For low-frequency entities, the delexicalized entity identification aims at obtaining frequency-agnostic and entity-type-specific representations.",
"Delexicalization We first randomly substitute entity words in the input sequence X with their corresponding labels as $x^d_i = \begin{cases} y_i & \text{if } y_i \ne O \text{ and } \epsilon > p \\ x_i & \text{otherwise} \end{cases}$, (14) where p is a threshold and $\epsilon$ is uniformly sampled from [0, 1].",
"We refer to this as delexicalization (Wen et al., 2015), but insert randomness into it.",
"Representation for Identification To obtain a representation for identifying whether an entity has been delexicalized to its label, we first apply the forward and backward RNN-encoders in Eq. (1) on the sequence $X^d = [x^d_1, x^d_2, \ldots, x^d_N]$ and obtain hidden states $\overrightarrow{h}^d_i$ and $\overleftarrow{h}^d_i$ for each position i.",
"Their concatenation is $h^d_i = \overrightarrow{h}^d_i \oplus \overleftarrow{h}^d_i$.",
"For position i in the original sequence without delexicalization, the concatenated hidden state is $h_i = \overrightarrow{h}_i \oplus \overleftarrow{h}_i$.",
"For an entity with a span from position j to k, its representation $e^d_{jk}$ is obtained by the following average pooling: $e^d_{jk} = \frac{1}{k-j+1} \sum_{i=j}^{k} h^d_i$. (15)",
"Discriminator A multi-layer perceptron (MLP) based discriminator with parameters $\theta_D$ is employed to output a confidence score in [0, 1], indicating the probability of the delexicalization of an entity, i.e., $p^d_{jk} = \sigma(v^T_d \tanh(W_d e^d_{jk}))$ and $p_{jk} = \sigma(v^T_d \tanh(W_d e_{jk}))$, (16) where the parameters $v_d$ and $W_d$ are learnable and $\sigma(x)$ is the sigmoid function $\frac{1}{1+\exp(-x)}$.",
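A sketch of such a discriminator in PyTorch; layer names mirror the symbols in Eq. (16) but are otherwise illustrative:

```python
import torch
import torch.nn as nn

class DelexDiscriminator(nn.Module):
    """MLP discriminator of Eq. (16): P(an entity span was delexicalized)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W_d = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v_d = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # e: (batch, hidden_dim) span representations from Eq. (15)
        return torch.sigmoid(self.v_d(torch.tanh(self.W_d(e)))).squeeze(-1)
```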
"The adversarial training objective aims at fooling a strong discriminator D while $\theta_{rnn}$ is optimized, leading to frequency-agnostic representations.",
"Notice that the model has three modules with their own objectives.",
"We update their parameters jointly using Algorithm 1. The algorithm first improves discriminator D to identify delexicalized items.",
"It then updates $\theta_{lcr}$ and $\theta_{rnn}$ by jointly optimizing $J_{vae}$ and $J_{at}$, improving $\theta_{rnn}$'s ability to fool the discriminator.",
"As the VAE optimization of $J_{vae}$ suffers from the posterior collapse problem, we adopt the KL cost annealing strategy and word dropout techniques (Bowman et al., 2015).",
"Finally, the algorithm updates both $\theta_{rnn}$ and $\theta_{emb}$ in Bi-LSTM+CRF by gradient ascent according to Eq. (2).",
"Note that $\theta_{lcr}$ shares the same parameters with $\theta_{rnn}$ and $\theta_{emb}$.",
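An abstracted sketch of one such joint update; the four loss callables are placeholders for the discriminator loss, $J_{vae}$ (with annealed KL weight), $J_{at}$, and the negative CRF log-likelihood of Eq. (2), none of which are spelled out in this excerpt:

```python
import torch

def train_step(batch,
               disc_opt: torch.optim.Optimizer,
               model_opt: torch.optim.Optimizer,
               d_loss_fn, vae_loss_fn, adv_loss_fn, crf_nll_fn,
               kl_weight: float):
    """One joint update in the spirit of Algorithm 1 (all losses are callables)."""
    # 1) improve the discriminator on delexicalized vs. original spans
    d_loss = d_loss_fn(batch)
    disc_opt.zero_grad(); d_loss.backward(); disc_opt.step()

    # 2) update encoder parameters with the annealed VAE objective plus the
    #    adversarial objective that rewards fooling the discriminator,
    # 3) and the negative CRF log-likelihood of Eq. (2)
    loss = kl_weight * vae_loss_fn(batch) + adv_loss_fn(batch) + crf_nll_fn(batch)
    model_opt.zero_grad(); loss.backward(); model_opt.step()
```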
"This section compares the proposed model against state-of-the-art models on benchmark datasets.",
"Slot Filling We use available ATIS dataset (Tur et al., 2010) and SNIPS dataset (Coucke et al.,",
"NER We use the public CoNLL-03 dataset (Sang and Meulder, 2003) as in (Huang et al., 2015; Lample et al., 2016; Liu et al., 2019a).",
"The dataset is tagged with four named entity types, including PER, LOC, ORG, and MISC.",
"Baselines We compare the proposed model with five types of methods: 1) the strong baseline of Lample et al. (2016), which uses character embeddings to improve the sequence tagger; 2) recent state-of-the-art models for slot filling (Qin et al., 2019; Liu et al., 2019b) that utilize multi-task learning to incorporate additional information from intent detection; 3) recent state-of-the-art models for NER, including Liu et al. (2018) and Liu et al. (2019a); 4) a Bi-LSTM + CRF model augmented with external resources (i.e., POS tagging using the Stanford Parser 2); and 5) a Bi-LSTM + CRF model with word embeddings from fine-tuned BERT-LARGE (Devlin et al., 2018).",
"Results are reported in F1 scores.",
"We follow most of the baseline performances reported in (Lample et al., 2016; Liu et al., 2019b; Qin et al., 2019; Liu et al., 2019a) and rerun the open source toolkit NCRFpp 3 , LM-LSTM-CRF 4 , and GCDT 5 on slot filling tasks 6 .",
"Implementation Details We use the same configuration setting for all datasets.",
"The hidden dimensions are set to 500.",
"We apply dropout to hidden states with a rate of 0.3.",
"2 https://nlp.stanford.edu/software/lex-parser.shtml.",
"3 https://github.com/jiesutd/NCRFpp.",
"4 https://github.com/LiyuanLucasLiu/LM-LSTM-CRF.",
"5 https://github.com/Adaxry/GCDT.",
"6 A few results are not available for comparison, as Qin et al. (2019) and Liu et al. (2019b) are multi-task models for joint intent detection and slot filling.",
"L2 regularization is set to $1 \times 10^{-6}$ to avoid overfitting.",
"Following (Liu et al., 2018, 2019a,b), we adopt the cased, 300d Glove (Pennington et al., 2014) to initialize word embeddings.",
"We utilize Adam algorithm (Kingma and Ba, 2015) to optimize the models and adopt the suggested hyper-parameters.",
"The main results of the proposed model on ATIS and CoNLL-03 are illustrated in Table 2. The proposed model outperforms all other models on all tasks by a substantial margin.",
"On slot filling tasks, the model obtains averaged improvements of 0.15 points on ATIS and 1.53 points on SNIPS over CM-Net and Stack-propagation, without using extra information from jointly modeling slots and intents as in these models.",
"In comparison to the prior state-of-the-art model GCDT, the improvements are 0.03 points on ATIS, 2.17 points on SNIPS and 0.71 points on CoNLL-03.",
"Compared with strong baseline (Lample et al., 2016) that utilizes char embedding to improve Bi-LSTM + CRF, the gains are even larger.",
"The model obtains improvements of 0.84 points on ATIS, 3.49 points on SNIPS and 1.73 points on CoNLL-03, over Bi-LSTM + CRF and LM-LSTM-CRF.",
"Finally, we have tried improving the baseline Bi-LSTM+CRF in our model with external resources of lexical information, including part-of-speech tags, chunk tags and character embeddings.",
"However, their F1 scores are consistently below the proposed model by an average of 1.47 points.",
"We also replace the word embeddings in Bi-LSTM+CRF with those from fine-tuned BERT-LARGE, but its results are worse than the proposed model by 0.07 points, 1.05 points and 0.14 points, respectively, on ATIS, SNIPS, and CoNLL-03.",
"It is noteworthy that the substantial improvements by the model are obtained without using external resources nor large pre-trained models.",
"Keys to its success are local context reconstruction and delexicalized entity identification.",
"This section reports our analysis of these modules.",
"Local Context Reconstruction (LCR) We first examine the impact brought by the LCR process.",
"In Table 3, we show that removing LCR (w/o LCR) hurts performance significantly on SNIPS.",
"We then study whether reconstructing the local context in LCR using a traditional deterministic encoder-decoder can be equally effective as using VAE.",
"We make a good-faith attempt of using an LSTM-based language model (Sundermeyer et al., 2012) to generate the local context directly from the local context representation (w/o VAE, w/ LSTM-LM).",
"This does improve results over that without LCR at all, indicating the information from reconstructing local context is indeed useful.",
"However, its F1 score is still far worse than that of using VAE.",
"This confirms that VAE is superior to deterministic model in dealing with the inherently one-to-many generation of local context from entities.",
"Lastly, we examine the impact of OOV masking and observe that the F1 score without it (w/o OOV masking) drops about 1.6 points below the full model.",
"We attribute this improvement from OOV masking to mitigating the entity distribution discrepancy between training and test.",
"Delexicalized Entity Identification (DEI) Removing delexicalized entity identification (w/o DEI) performs worse than the full model, with a large drop of 1.38 points on SNIPS.",
"These results show that both local context reconstruction and delexicalized entity identification contribute greatly to the improved performance by the proposed model.",
"Because both LCR and DEI share the same RNN-encoder as the baseline Bi-LSTM, the information from reconstructing local context and fooling the discriminator of delexicalization is useful for the Bi-LSTM to better predict sequence labels.",
"In this section, we compare models specifically by the numbers of OOV and LF entities they can recall correctly.",
"Such comparison reveals the capability of each model in handling rare entities.",
"Results are presented in Table 4.",
"Without using any external resource or pre-trained models, the proposed model recalls 3.66% more OOV entities and 3.96% more LF entities than LM-LSTM-CRF.",
"This gain is similar when comparing against Bi-LSTM+CRF.",
"Furthermore, the proposed model also recalls more rare entities than GCDT, a recent state-of-the-art model in NER.",
"Separately using LCR or DEI improves performance over baseline Bi-LSTM+CRF.",
"Their gains are complementary as results show that jointly applying LCR and DEI obtains the best performance.",
"These results demonstrate convincingly the capability of local context reconstruction and delexicalized entity identification in handling rare entities.",
"Importantly, results in the last two rows reveal that large further improvements can potentially be achieved, since 15.34% of OOV entities and 13.35% of LF entities are still not recalled.",
"We visualize the learned representation of Eq. (15) using t-SNE (Maaten and Hinton, 2008) in Figure 3.",
"It shows 2-dimensional projections of 800 randomly sampled entities from the CoNLL-03 dataset.",
"Figure 3 clearly shows separability of entities by their entity types, but no separation between low-frequency and frequent entities.",
"This observation is consistent with the mini-max objective in Eq. (17) to learn entity-type-specific and frequency-agnostic representations.",
"This section investigates the proposed model on data scarcity.",
"On ATIS, the amount of training data is reduced in steps of 20%, down to 20% of the original size.",
"This setting is challenging, and few previous works have experimented with it.",
"Results in Figure 3 show that the proposed model consistently outperforms other models, especially in low-resource conditions.",
"Furthermore, reductions of performance from the proposed model are much smaller, in comparison to other models.",
"For instance, at a percentage of 40%, the proposed model only loses 1.17% of its best F1 score, whereas GCDT loses 3.62% of its F1 score.",
"This suggests that the proposed model is more robust to low resource than other models.",
"Neural sequence labeling has been an active field in NLP, and we briefly review recently proposed approaches related to our work.",
"Slot Filling and NER Neural sequence labeling has been applied to slot filling (Mesnil et al., 2014; Zhang and Wang, 2016; Liu and Lane, 2016; Qin et al., 2019) and NER (Huang et al., 2015; Strubell et al., 2017; Liu et al., 2018; Devlin et al., 2018; Liu et al., 2019a).",
"For slot filling, multi-task learning for joint slot filling and intent detection has been dominating in the recent literature, for example (Liu and Lane, 2016).",
"The recent work in (Liu et al., 2019b) employs a collaborative memory network to further model the semantic correlations among words, slots and intents jointly.",
"For NER, recent works use explicit architecture to incorporate information such as global context (Liu et al., 2019a) or conduct optimal architecture searches (Jiang et al., 2019).",
"The best performing models have been using pre-training models on large corpus (Baevski et al., 2019) or incorporating fine-tuning on existing pre-trained models (Liu et al., 2019a) such as BERT (Devlin et al., 2018).",
"External Resource This approach to handle rare entities includes feature engineering methods such as incorporating extra knowledge from part-of-speech tags (Huang et al., 2015) or character embeddings (Li et al., 2018).",
"Extra knowledge also includes tags from public tagger (Manning et al., 2014).",
"Multi-task learning has been effective in incorporating additional label information through multiple objectives.",
"Joint slot filling and intent detection have been used in (Zhang and Wang, 2016; Qin et al., 2019; Zhang et al., 2019).",
"Transfer Learning This approach refers to methods that transfer knowledge from high-resources to low-resources (Zhou et al., 2019) or use models pretrained on large corpus to benefit downstream tasks (Devlin et al., 2018; Liu et al., 2019a).",
"The most recent work in (Zhou et al., 2019) applies adversarial training that uses a resource-adversarial discriminator to improve performances on low-resource data.",
"We have presented local context reconstruction for OOV entities and delexicalized entity identification for low-frequency entities to address the rare entity problem .",
"We adopt variational autoencoder to learn a stochastic reconstructor for the reconstruction and adversarial training to extract frequency-agnostic and entity-type-specific features.",
"Extensive experiments have been conducted on both slot filling and NER tasks on three benchmark datasets, showing that sequence labeling using the proposed methods achieve new state-of-the-art performances.",
"Importantly, without using external knowledge nor fine tuning of large pretrained models, our methods enable a sequence labeling model to outperform models fine-tuned on BERT.",
"Our analysis also indicates large potential of further performance improvements by exploiting OOV and LF entities.",
"This work was done while the first author did internship at Ant Financial.",
"We thank anonymous reviewers for valuable suggestions."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"result",
"abstain",
"other",
"other"
] |
[
"Previous research shows that eye-tracking data contains information about the lexical and syntactic properties of text, which can be used to improve natural language processing models.",
"In this work, we leverage eye movement features from three corpora with recorded gaze information to augment a state-of-the-art neural model for named entity recognition (NER) with gaze embeddings.",
"These corpora were manually annotated with named entity labels.",
"Moreover, we show how gaze features, generalized on word type level, eliminate the need for recorded eye-tracking data at test time.",
"The gaze-augmented models for NER using token-level and type-level features outperform the baselines.",
"We present the benefits of eyetracking features by evaluating the NER models on both individual datasets as well as in cross-domain settings.",
"The field of natural language processing includes studies of tasks of different granularity and depths of semantics: from lower level tasks such as tokenization and part-of-speech tagging up to higher level tasks of information extraction such as named entity recognition, relation extraction, and semantic role labeling (Collobert et al., 2011).",
"As NLP systems become increasingly prevalent in society, how to take advantage of information passively collected from human readers, e.g. eye movement signals, is becoming more interesting to researchers.",
"Previous research in this area has shown promising results: Eye-tracking data has been used to improve tasks such as part-of-speech tagging (Barrett et al., 2016), sentiment analysis (Mishra et al., 2017), prediction of multiword expressions (Rohanian et al., 2017), and word embedding evaluation (Sgaard, 2016).",
"However, most of these studies focus on either relatively lower-level tasks (e.g. part-of-speech tagging and multiword expressions) or relatively global properties in the text (e.g. sentiment analy-sis).",
"In this paper, we test a hypothesis on a different level: Can eye movement signals also help improve higher-level semantic tasks such as extracting information from text?",
"The answer to this question is not obvious.",
"On one hand, the quality improvement attributed to eye movement signals on lower-level tasks implies that such signals do contain linguistic information.",
"On the other hand, it is not clear whether these signals can also provide significant improvement for tasks dealing with higher-level semantics.",
"Moreover, even if eye movement patterns contain signals related to higher-level tasks, as implied by a recent psycholinguistic study (Tokunaga et al., 2017), noisy as these signals are, it is not straightforward whether they would help, if not hurt, the quality of the models.",
"In this paper, we provide the first study of the impact of gaze features to automatic named entity recognition from text.",
"We test the hypothesis that eye-tracking data is beneficial for entity recognition in a state-of-the-art neural named entity tagger augmented with embedding layers of gaze features.",
"Our contributions in the current work can be summarized as follows:",
"1. First, we manually annotate three eyetracking corpora with named entity labels to train a neural NER system with gaze features.",
"This collection of corpora facilitates future research in related topics.",
"The annotations are publicly available.",
"2. Beyond that, we present a neural architecture for NER, which in addition to textual information, incorporates embedding layers to encode eye movement information.",
"3. Finally, we show how gaze features generalized to word types eliminate the need for recorded eye-tracking data at test time.",
"This makes the use of eye-tracking data in NLP applications more feasible, since recorded eye-tracking data for each token in context is no longer required at prediction time.",
"Moreover, type-aggregated features appear to be particularly useful for cross-domain systems.",
"Our hypotheses are evaluated not only on the available eye-tracking corpora, but also on an external benchmark dataset, for which gaze information does not exist.",
"The benefits of eye movement data for machine learning have been assessed in various domains, including NLP and computer vision.",
"Eye-trackers provide millisecond-accurate records on where humans look when they are reading, and they are becoming cheaper and more easily available by the day (San Agustin et al., 2009; Sewell and Ko-mogortsev, 2010).",
"Although eye-tracking data is still being recorded in controlled experiment environments, this will likely change in the near future.",
"Recent approaches have shown substantial improvements in recording gaze data while reading by using cameras of mobile devices (Gomez-Poveda and Gaudioso, 2016; Papoutsaki et al., 2016).",
"Hence, eye-tracking data will probably be more accessible and available in much larger volumes in due time, which will facilitate the creation of sizable datasets enormously.",
"Tokunaga et al. (2017) recently analyzed eyetracking signals during the annotation of named entities to find effective features for NER.",
"Their work proves that humans take into account a broad context to identify named entities, including predicate-argument structure.",
"This further strengthens our intuition to use eye movement information to improve existing NER systems.",
"And going even a step further, it opens the possibility for real-time entity annotation based on the reader's eye movements.",
"The benefit of eye movement data is backed up by extensive psycholinguistic studies.",
"For example, when humans read a text they do not focus on every single word.",
"The number of fixations and the fixation duration on a word depends on a number of linguistic factors (Clifton et al., 2007; Demberg and Keller, 2008).",
"First, readers are more likely to fixate on open-class words that are not predictable from context (Rayner, 1998).",
"Reading patterns are a reliable indicator of syntactical categories (Barrett and Sgaard, 2015a).",
"Second, word frequency and word familiarity influence how long readers look at a word.",
"The frequency effect was first noted by Rayner (1977) and has been reported in various studies since, e.g. Just and Carpenter (1980) and Cop et al. (2017).",
"Moreover, although two words may have the same frequency value, they may differ in familiarity (especially for infrequent words).",
"Effects of word familiarity on fixation time have also been demonstrated in a number of recent studies (Juhasz and Rayner, 2003; Williams and Morris, 2004).",
"Additionally, the positive effect of fixation information in various NLP tasks has recently been shown by Barrett et al. (2018), where an attention mechanism is trained on fixation duration.",
"State-of-the-art NER Non-linear neural networks with distributed word representations as input have become increasingly successful for any sequence labeling task in NLP (Huang et al., 2015; Chiu and Nichols, 2016; Ma and Hovy, 2016).",
"The same applies to named entity recognition: State-of-the-art systems are combinations of neural networks such as LSTMs or CNNs and conditional random fields (CRFs) (Strauss et al., 2016).",
"Lample et al. (2016) developed such a neural architecture for NER, which we employ in this work and enhance with eye movement features.",
"Their model successfully combines word-level and character-level embeddings, which we augment with embedding layers for eye-tracking features.",
"For our experiments, we resort to three eyetracking data resources: the Dundee corpus (Kennedy et al., 2003), the GECO corpus (Cop et al., 2017) and the ZuCo corpus (Hollenstein et al., 2018).",
"For the purpose of information extraction, it is important that the readers process longer fragments of text, i.e. complete sentences instead of single words, which is the case in all three datasets.",
"Table 1 shows an overview of the domain and size of these datasets.",
"In total, they comprise 142,441 tokens with gaze information.",
"Table 1 also shows the differences in mean fixation times between the datasets, i.e. fixation duration (the average duration of a single fixation on a word) and gaze duration.",
"Table 1: Descriptive statistics of the eye-tracking corpora, including domain, size and mean fixation and gaze duration per token. Columns: Dundee / GECO / ZuCo / Total. domain(s): news articles / literature / movie reviews, Wikipedia articles; number of sentences: 2367 / 5424 / 700 / 8491; mean sentence length: 24.75 / 12.65 / 22.12 / 19.84; number of words: 58598 / 68606 / 15237 / 142441; unique word types: 9131 / 5283 / 4408 / 13937; mean word length: 4.29 / 3.76 / 4.44 / 4.16; fixation duration (ms): 202 / 214 / 226 / 214; gaze duration (ms): 237 / 232 / 265 / 244.7.",
"Dundee Corpus The gaze data of the Dundee corpus (Kennedy et al., 2003) was recorded with a Dr. Bouis Oculometer Eyetracker .",
"The English section of this corpus comprises 58,598 tokens in 2,367 sentences.",
"It contains eye movement information of ten native English speakers as they read the same 20 newspaper articles from The Independent .",
"The text was presented to the readers on a screen five lines at a time.",
"This data has been widely used in psycholinguistic research to analyze the reading behavior of subjects while reading sentences in context under relatively naturalistic conditions.",
"GECO Corpus The Ghent Eye-Tracking Corpus (Cop et al., 2017) is a more recent dataset, which was created for the analysis of eye movements of monolingual and bilingual subjects during reading.",
"The data was recorded with an EyeLink 1000 system.",
"The text was presented one paragraph at a time.",
"The subjects read the entire novel The Mysterious Affair at Styles by Agatha Christie (1920) containing 68,606 tokens in 5,424 sentences.",
"We use only the monolingual data recorded from the 14 native English speakers for this work to maintain consistency across corpora.",
"ZuCo Corpus The Zurich Cognitive Language Processing Corpus (Hollenstein et al., 2018) is a combined eye-tracking and EEG dataset.",
"The gaze data was also recorded with an EyeLink 1000 system.",
"The full corpus contains 1,100 English sentences read by 12 adult native speakers.",
"The sentences were presented at the same position on the screen one at a time.",
"For the present work, we only use the eye movement data of the first two reading tasks of this corpus (700 sentences, 15,237 tokens), since these tasks encouraged natural reading.",
"The reading material included sentences from movie reviews from the Stanford Sentiment Treebank (Socher et al., 2013) and the Wikipedia dataset by Culotta et al. (2006).",
"For the purposes of this work, all datasets were manually annotated with named entity labels for three categories: PERSON, ORGANIZATION and LOCATION.",
"The annotations are available at https://github.com/ DS3Lab/ner-at-first-sight .",
"The datasets were annotated by two NLP experts.",
"The IOB tagging scheme was used for the labeling.",
"We followed the ACE Annotation Guidelines (Linguistic Data Consortium, 2005).",
"All conflicts in labelling were resolved by adjudication between both annotators.",
"An inter-annotator reliability analysis on 10,000 tokens (511 sentences) sampled from all three datasets yielded an agreement of 83.5% on the entity labels (kappa = 0.68).",
"Table 3: Gaze features extracted from the Dundee, GECO and ZuCo corpora. Basic: n fixations (total number of fixations on a word w); fixation probability (the probability that a word w will be fixated); mean fixation duration (mean of all fixation durations for a word w). Early: first fixation duration (duration of the first fixation on a word w); first pass duration (sum of all fixation durations during the first pass). Late: total fixation duration (sum of all fixation durations for a word w); n re-fixations (number of times a word w is fixated after the first fixation); re-read probability (the probability that a word w will be read more than once). Context: total regression-from duration (combined duration of the regressions that began at word w); w-2 / w-1 / w+1 / w+2 fixation probability (fixation probability of the two words before and after w); w-2 / w-1 / w+1 / w+2 fixation duration (fixation duration of the two words before and after w).",
"Table 2 shows the number of annotated entities in each dataset.",
"The distribution of entities between the corpora is highly unbalanced: Dundee and ZuCo, the datasets containing more heterogeneous texts and thus, have a higher ratio of unique entity occurrences, versus GECO, a homogeneous corpus consisting of a single novel, where the named entities are very repetitive.",
"The gaze data of all three corpora was recorded for multiple readers by conducting experiments in a controlled environment using specialized equipment.",
"It is important to consider that, while we extract the same features for all corpora, there are certainly practical aspects that differ across the datasets.",
"The following factors are expected to influence reading: experiment procedures; text presentation; recording hardware, software and quality; sampling rates; initial calibration and filtering; as well as human factors such as head movements and lack of attention.",
"Therefore, separate normalization for each dataset should better preserve the signal within each corpus and for the same reason the type-aggregation was computed on the normalized feature values.",
"This is especially relevant for the type-aggregated features and the cross-corpus experiments described below.",
"In order to add gaze information to the neural network, we have selected as many features as available from those present in all three corpora.",
"Previous research shows benefits in combining multiple eye-tracking features of different stages of the human reading process (Barrett et al., 2016; Tokunaga et al., 2017).",
"The extracted features closely follow Barrett et al. (2016).",
"As described above, psycholinguistic research has shown how fixation duration and probability differ between word classes and syntactic comprehension processes.",
"Thus, the features focus on representing these nuances as broadly as possible, covering the complete reading time of a word at different stages.",
"Table 3 shows the eye movement features incorporated into the experiments.",
"We split the 17 features into 4 distinct groups (analogous to Barrett et al. (2016)), which define the different stages of the reading process:",
"1. BASIC measures capture the overall number of fixations on a word or the probability that a word will be fixated (namely, the number of subjects who fixated the word divided by the total number of subjects).",
"2. EARLY gaze measures capture lexical access and early syntactic processing and are based on the first time a word is fixated.",
"3. LATE measures reflect the late syntactic processing and general disambiguation.",
"These features are significant for words which were fixated more than once.",
"4. CONTEXT features capture the gaze measures of the surrounding tokens.",
"These features consider the fixation probability and duration up to two tokens to the left and right of the current token.",
"Additionally, regressions starting at the current word are also considered to be meaningful for the syntactic processing of full sentences.",
"The eye movement measurements were averaged over all native-speaking readers of each dataset to obtain more robust estimates.",
"The small size of eye-tracking datasets often limits the potential for training data-intensive algorithms and causes overfitting in benchmark evaluation (Xu et al., 2015).",
"It also leads to sparse samples of gaze measurements.",
"Hence, given the limited number of observations available, we normalize the data by splitting the feature values into quantiles to avoid sparsity issues.",
"The best results were achieved with 24 bins.",
"This normalization is conducted separately for each corpus.",
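Quantile binning of this kind can be sketched with pandas; the 24-bin setting follows the text, while the function and column names are illustrative:

```python
import pandas as pd

def bin_gaze_features(df: pd.DataFrame, feature_cols, n_bins: int = 24) -> pd.DataFrame:
    """Discretize each gaze feature into quantile bins, separately per corpus."""
    binned = df.copy()
    for col in feature_cols:
        # duplicates="drop" guards against sparse features with many tied values
        binned[col] = pd.qcut(df[col], q=n_bins, labels=False, duplicates="drop")
    return binned

# e.g. bin_gaze_features(dundee_df, ["n_fixations", "mean_fixation_duration"])
```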
"Moreover, special care had to be taken regarding tokenization, since the recorded eye-tracking data considers only whitespace separation.",
"For example, the string John's would constitute a single token for eye-tracking feature extraction, but would be split into John and 's for NER, with the former token holding the label PERSON and the latter no label at all.",
"Our strategy to address this issue was to assign the same values of the gaze features of the originating token to split tokens.",
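A sketch of this alignment, assuming the NLP tokens concatenate back to the whitespace-separated tokens in order; all names are illustrative:

```python
def align_gaze_to_subtokens(ws_tokens, gaze_values, nlp_tokens):
    """Copy a whitespace token's gaze values to every NLP sub-token split from it."""
    aligned, i, consumed = [], 0, ""
    for tok in nlp_tokens:
        aligned.append(gaze_values[i])  # sub-token inherits originating values
        consumed += tok
        if consumed == ws_tokens[i]:    # whole whitespace token consumed
            i, consumed = i + 1, ""
    return aligned

# align_gaze_to_subtokens(["John's"], [vec], ["John", "'s"]) -> [vec, vec]
```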
"Barrett and Sgaard (2015b) showed that type-level aggregation of gaze features results in larger improvements for part-of-speech tagging.",
"Following their line of work, we also conducted experiments with type aggregation for NER.",
"This implies that the eye-tracking feature values were averaged for each word type over all occurrences in the training data.",
"For instance, the sum of the features of all n occurrences of the token island are averaged over the number of occurrences n .",
"As a result, for each corpus as well as for the aggregated corpora, a lexicon of lower-cased word types with their averaged eye-tracking feature values was compiled.",
"Thus, as input for the network, either the type-level aggregates for each individual corpus can be used or the values from the combined lexicon, which increases the number of word types with known gaze feature values.",
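A sketch of building such a type lexicon by averaging normalized feature vectors per lower-cased word type (names are illustrative):

```python
from collections import defaultdict
import numpy as np

def build_type_lexicon(tokens, feature_vectors):
    """Average normalized gaze feature vectors per lower-cased word type."""
    buckets = defaultdict(list)
    for tok, feats in zip(tokens, feature_vectors):
        buckets[tok.lower()].append(np.asarray(feats, dtype=float))
    return {wtype: np.mean(vecs, axis=0) for wtype, vecs in buckets.items()}

# At test time: lexicon.get(token.lower(), UNKNOWN_PLACEHOLDER)
```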
"The goal of type aggregation is twofold.",
"First, it eliminates the requirement of eye-tracking features when applying the models at test time, since the larger the lexicon, the more tokens in the unseen data receive type-aggregated eye-tracking feature values.",
"For those tokens not in the lexicon, we assign a placeholder for unknown feature values.",
"Second, type-aggregated features can be used on any dataset and show that improvements can be achieved with aggregated gaze data without requiring large quantities of recorded data.",
"The experiments in this work were executed using an enhanced version of the system presented by Lample et al. (2016).",
"This hybrid approach is based on bidirectional LSTMs and conditional random fields and relies mainly on two sources of information: character-level and word-level representations.",
"For the experiments, the originally proposed values for all parameters were maintained.",
"Specifically, the bidirectional LSTMs for character-based embeddings are trained on the corpus at hand with dimensions set to 25.",
"The lookup table for the word embeddings was initialized with the pre-trained GloVe vectors of 100 dimensions (Pennington et al., 2014).",
"The model uses a single layer for the forward and backward LSTMs.",
"All models were trained with a dropout rate at 0.5.",
"Moreover, all digits were replaced with zeros.",
"The original model 1 was modified to include the gaze features as additional embedding layers to the network.",
"The character-level representation, i.e. the output of a bidirectional LSTM, is concatenated with the word-level representation from a word lookup table.",
"1 https://github.com/glample/tagger",
"Figure 1: Main architecture of the network.",
"In the augmented model with eye-tracking information, the embedding for each discrete gaze feature is also concatenated to the input.",
"The dimension of the gaze feature embeddings is equal to the number of quantiles.",
"This architecture is shown in Figure 1.",
"Word length and word frequency are known to correlate and interact with gaze features (Tomanek et al., 2010), which is why we selected a base model that allows us to combine the eye-tracking features with word- and character-level information.",
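A sketch of the augmented input layer in PyTorch; the dimensions follow the text (100d GloVe words, 25d per char-LSTM direction, one embedding per gaze feature with dimension equal to the number of quantile bins), while the class itself and the reserved unknown index are assumptions:

```python
import torch
import torch.nn as nn

class GazeAugmentedInput(nn.Module):
    """Concatenate word, character and gaze-feature embeddings."""

    def __init__(self, vocab_size: int, n_gaze_features: int = 17, n_bins: int = 24):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, 100)  # GloVe-initialized in practice
        # one embedding table per gaze feature; +1 index reserved for unknown values
        self.gaze_embs = nn.ModuleList(
            [nn.Embedding(n_bins + 1, n_bins) for _ in range(n_gaze_features)])

    def forward(self, word_ids, char_repr, gaze_bins):
        # word_ids: (N,); char_repr: (N, 50) from the char-BiLSTM; gaze_bins: (N, 17)
        gaze = [emb(gaze_bins[:, i]) for i, emb in enumerate(self.gaze_embs)]
        return torch.cat([self.word_emb(word_ids), char_repr, *gaze], dim=-1)
```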
"Our main finding is that our models enhanced with gaze features consistently outperform the baseline.",
"As our baseline, we trained and evaluated the original models with the neural architecture and parameters proposed by Lample et al. (2016) on the GECO, Dundee, and ZuCo corpora, and compared them to the models that were enriched with eye-tracking measures.",
"The best improvements on F 1 score over the baseline models are significant under one-sided t-tests (p < 0.05).",
"All models were trained with 10-fold cross validation (80% training set, 10% development set, 10% test set) and early stopping was performed after 20 epochs of no improvement on the development set to reduce training time.",
"First, the performance on the individual datasets is tested, together with the performance of one combined dataset consisting of all three corpora (consisting of 142,441 tokens).",
"In addition, we evaluate the effects of the type-aggregated features using individual type lexicons for each datasets, and combining the three type lexicons of each corpus.",
"Finally, we experiment with cross-corpus scenarios to evaluate the potential of eye-tracking features in NER for domain adaptation.",
"Both settings were also tested on an external corpus without eye-tracking features, namely the CoNLL-2003 dataset (Sang and De Meulder, 2003).",
"First, we analyzed how augmenting the named entity recognition system with eye-tracking features affects the results on the individual datasets.",
"Table 4 shows the improvements achieved by adding all 17 gaze features to the neural architecture, and training models on all three corpora, and on the combined dataset containing all sentences from the Dundee, GECO and ZuCo corpora.",
"Noticeably, adding token-level gaze features improves the results on all datasets individually and combined, even on the GECO corpus, which yields a high baseline due to the homogeneity of the contained named entities (see Table 2).",
"Furthermore, Table 4 also presents the results of the NER models making use of the type-aggregated features instead of token-level gaze features.",
"There are two different experiments for these type-level features: Using the features of the word types occurring in the corpus only, or using the aggregated features of all word types in the three corpora (as describe above).",
"As can be seen, the performance of the different gaze fea-P R F Dundee baseline 79.29 78.56 78.86 with gaze 79.55 79.27 79.35 type individual 81.05 79.37 80.17 * type combined 80.27 79.26 79.67 Geco baseline 96.68 97.24 96.95 with gaze 98.08 97.94 98.01 * type individual 97.72 97.42 97.57* type combined 97.76 97.16 97.46* ZuCo baseline 84.52 81.66 82.92 with gaze 86.19 84.28 85.12 * type individual 84.21 82.61 83.30 type combined 83.26 83.37 83.31 All baseline 86.92 86.58 86.72 with gaze 88.72 89.39 89.03* type combined 89.04 89.52 89.26 * Table 4: Precision (P), recall (R) and F 1 -score (F) for all models trained on individual datasets (best results in bold; * indicates statistically significant improvements on F 1 -score).",
"ture levels varies between datasets, but both the original token-level features as well as the individual and combined type-level features achieve improvements over the baselines of all datasets.",
"To sum up, the largest improvement with eyetracking features is achieved when combining all corpora into one larger dataset, where an additional 4% is gained in F 1 -score by using type-aggregated features.",
"Evidently, a larger mixed-domain dataset benefits from the type aggregation, while the original token-level gaze features achieve the best results on the individual datasets.",
"Moreover, the additional gain when training on all datasets is due to the higher signal-to-noise ratio of type-aggregated features from multiple datasets.",
"Evaluation on CoNLL-2003 Going on step further, we evaluate the type-aggregated gaze features on an external corpus with no eye movement information available.",
"The CoNLL-2003 corpus (Sang and De Meulder, 2003) has been CoNLL-2003 P R F baseline 93.89 94.16 94.03 type combined 94.38 94.32 94.35 * Table 5: Precision (P), recall (R) and F 1 -score (F) for using type-aggregated gaze features on the CoNLL-2003 dataset (* marks statistically significant improve-ment).",
"widely used as a benchmark dataset for NER in different shared tasks.",
"The English part of this corpus consists of Reuters news stories and contains 302,811 tokens in 22,137 sentences.",
"We use this dataset as an additional corpus without gaze information.",
"Only the type-aggregated features (based on the combined eye-tracking corpora) are added to each word.",
"Merely 76% of the tokens in the CoNLL-2003 corpus also appear in the eyetracking corpora described above and thus receive type-aggregated feature values.",
"The rest of the tokens without aggregated gaze information available receive a placeholder for the unknown feature values.",
"Note that to avoid overfitting we do not train on the official train/test split of the CoNLL-2003 dataset, but perform 10-fold cross validation.",
"Applying the same experiment setting, we train the augmented NER model with gaze features on the CoNLL-2003 data and compare it to a baseline model without any eye-tracking features.",
"We achieve a minor, but nonetheless significant improvement (shown in Table 5), which strongly supports the generalizability effect of the type-aggregated features on unseen data.",
"In a second evaluation scenario, we test the potential of eye-tracking features for NER across corpora.",
"The goal is to leverage eye-tracking features for domain adaptation.",
"To show the robustness of our approach across domains, we train the models with token-level and type-level features on 100% of corpus A and a development set of 20% of corpus B and test on the remaining 80% of the corpus B, alternating only the development and the test set for each fold.",
"Table 6 shows the results of this cross-corpus evaluation.",
"The impact of the eye-tracking features varies between the different combinations of datasets.",
"However, the inclusion of eye-tracking features improves the results for all combinations, except for the models trained on the ZuCo corpus Dundee GECO ZuCo P R F P R F P R F baseline 74.20 70.71 72.40 75.36 75.62 75.44 Dundee token 75.68 71.54 73.55* 78.85 74.51 77.02 type 76.44 77.09 76.75 * 78.33 76.49 77.35 baseline 58.91 34.91 43.80 68.88 42.49 52.38 GECO token 59.61 35.62 44.53 69.18 44.22 53.81 type 58.39 35.99 44.44 67.69 42.36 52.01 baseline 65.85 54.01 59.34 83.00 78.11 80.48 ZuCo token 72.62 50.76 59.70 82.92 75.35 78.91 type 69.21 53.05 59.95 83.68 74.57 78.85 Table 6: Cross-corpus results: Precision (P), recall (R) and F 1 -score (F) for all models trained on one dataset and tested on another (rows = training dataset; columns = test dataset; best results in bold; * indicates statistically significant improvements).",
"and tested on the GECO corpus.",
"Presumably, this is due to the combination of the small training data size of the ZuCo corpus and the homogeneity of the named entities in the GECO corpus.",
"Evaluation on CoNLL-2003 Analogous to the individual dataset evaluation, we also test the potential of eye-tracking features in a cross-dataset scenario on an external benchmark dataset.",
"Again, we use the CoNLL-2003 corpus for this purpose.",
"We train a model on the Dundee, GECO and ZuCo corpora using type-aggregated eye-tracking features and test this model on the ConLL-2003 data.",
"Table 7 shows that compared to a baseline without gaze features, the results improve by 3% F 1 -score.",
"These results underpin our hypothesis of the possibility of generalizing eye-tracking features on word type level, such that no recorded gaze data is required at test time.",
"The models evaluated in the previous section show that eye-tracking data contain valuable semantic information that can be leveraged effectively by NER systems.",
"While the individual datasets are Figure 2: Results per class for the models trained on all gaze datasets combined.",
"still limited in size, the largest improvement is observed in the models making use of all the available data.",
"At a closer look, the model leveraging gaze data yield a considerably higher increase in recall when comparing to the baselines.",
"In addition, a classwise analysis shows that the entity type benefiting the most from the gaze features over all models is ORGANIZATION, which is the most difficult class to predict.",
"Figure 2 illustrates this with the results per class of the models trained on all three gaze corpora jointly.",
"In the individual dataset evaluation setting, the combined type-level feature aggregation from all datasets does not yield the best results, since each sentence in these corpora already has accurate eyetracking features on toke-level.",
"Thus, it is understandable that in this scenario the original gaze features and the gaze features aggregated only on the individual datasets result in better models.",
"However, when evaluating the NER models in a cross-corpus scenario, the type-aggregated features lead to significant improvements.",
"Type aggregation evidently reduces the fine-grained nuances contained in eye-tracking information and eliminates the possibility of disambiguation between homographic tokens.",
"Nevertheless, this type of disambiguation is not crucial for named entities, which mainly consist of proper nouns and the same entities tend to appear in the same context.",
"Especially noteworthy is the gain in the models tested on the CoNLL-2003 benchmark corpus, which shows that aggregated eyetracking features from other datasets can be applied to any unseen sentence and show improvements, even though more than 20% of the tokens have unknown gaze feature values.",
"While the high number of unknown values is certainly a limitation of our approach, it shows at once the possibility of not requiring original gaze features at prediction time.",
"Thus, the trained NER models can be applied robustly on unseen data.",
"We presented the first study of augmenting a NER system with eye-tracking information.",
"Our results highlight the benefits of leveraging cognitive cues such as eye movements to improve entity recognition models.",
"The manually annotated named entity labels for the three eye-tracking corpora are freely available.",
"We augmented a neural NER architecture with gaze features.",
"Experiments were performed using a wide range of features relevant to the human reading process and the results show significant improvements over the baseline for all corpora individually.",
"In addition, the type-aggregated gaze features are effective in cross-domain settings, even on an external benchmark corpus.",
"The results of these type-aggregated features are a step towards leveraging eye-tracking data for information extraction at training time, without requiring real-time recorded eye-tracking data at prediction time."
] | [
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"Despite recent progress in abstractive summarization, systems still suffer from faithfulness errors.",
"While prior work has proposed models that improve faithfulness, it is unclear whether the improvement comes from an increased level of extractiveness of the model outputs as one naive way to improve faithfulness is to make summarization models more extractive.",
"In this work, we present a framework for evaluating the effective faithfulness of summarization systems, by generating a faithfulness-abstractiveness trade-off curve that serves as a control at different operating points on the abstractiveness spectrum.",
"We then show that the baseline system as well as recently proposed methods for improving faithfulness, fail to consistently improve over the control at the same level of abstractiveness.",
"Finally, we learn a selector to identify the most faithful and abstractive summary for a given document, and show that this system can attain higher faithfulness scores in human evaluations while being more abstractive than the baseline system on two datasets.",
"Moreover, we show that our system is able to achieve a better faithfulness-abstractiveness trade-off than the control at the same level of abstractiveness.",
"Generating abstractive summaries of documents has been a long-standing goal of summarization.",
"While there has been tremendous progress towards this goal (Kryscinski et al., 2018; Dong et al., 2019; Zhang et al., 2019; Lewis et al., 2020), abstractive summarization systems still suffer from faithfulness errors (Cao et al., 2018), generating information that is not present in the original text.",
"This has led to an increased research in faithfulness evaluation of summarization systems (Falke et al., 2019; Kryscinski et al., 2020; Durmus et al., 2020) as Equal contribution.",
"Corresponding author for queries: faisal@cs.columbia.edu.",
"well as methods to improve faithfulness of generated summaries (Kang and Hashimoto, 2020; Chen et al., 2021).",
"Intuitively, one straightforward way of improving faithfulness of generated summaries is to copy a larger amount of content from the source article (i.e. more extraction).",
"Thus, any methods that increase the level of extractiveness, whether intentionally or not, would improve faithfulness.",
"Without reported extractiveness, it is unclear whether prior improvements mainly arise from increased extractiveness.",
"We argue that in order to make progress in abstractive summarization, it is important to tease apart faithfulness improvements due to increased extractiveness versus improvements due to improved abstraction.",
"In order to tease this apart, we develop a framework for evaluating progress in faithfulness, by considering the effective faithfulness , i.e. the improvement in faithfulness over a baseline system ( control ) operating at the same level of extractiveness.",
"In particular, we split the training examples into different groups by the extractiveness of the summary, and train the control models on each group.",
"Each of these models corresponds to a specific tradeoff between abstractiveness and faithfulness, forming a trade-off curve indicating how much faithfulness can be improved solely by increasing extractiveness.",
"Systems that improve effective faithfulness should lie above this curve.",
"Using this framework, we show that the improved faithfulness of recently proposed methods comes mainly from an increased extractiveness.",
"We then conduct further analysis to explore whether it is possible to have a system that can be both more abstractive and more faithful than the baseline system.",
"We train a selector on a small set of human-annotated data that, given a set of output summaries with varying levels of extractiveness, picks the most abstractive output that is faithful to the source.",
"Our proposed system is both more abstractive and more faithful than the baseline.",
"Moreover, we show that 1410 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 Extractiveness 0.4 0.5 0.6 0.7 0.8 0.9 M e t r i c S c o r e M1 M2 M3 M4 M5 M1 M2 M3 M4 M5 M1 M2 M3 M4 M5 Entailment FactCC DAE Figure 1: Extractiveness of generated outputs versus automated metric scores for Entailment, FactCC and DAE on the Gigaword dataset.",
"our system is able to improve the effective faithfulness , achieving a better trade-off than the control at the same point on the abstractiveness spectrum.",
"To summarize, our contributions are as follows:",
"1. We present a framework to evaluate the progress in improving effective faithfulness of models considering the control at the same level of extractiveness.",
"2. We illustrate the importance of considering effective faithfulness by showing that recently proposed methods for improving faithfulness are able to attain higher faithfulness scores than the baseline, but do not consistently improve over the control curve , indicating that most of their improvements come from generating more extractive outputs, on average.",
"3. We propose a selector that picks the most abstractive and faithful summary from a set of possible summaries, and show that this method gets higher effective faithfulness compared to the existing methods.",
"We conduct our study on two English abstractive summarization datasets, one from the news domain,",
"and one from a non-news domain.",
"For the news do-main dataset, we decided against using the popular CNN/Dailymail dataset since its reference summaries tend to be very extractive (Kedzie et al., 2018; Bommasani and Cardie, 2020), making it a poor choice for studying faithfulness in abstractive summarization.",
"Similarly, we also decided against using XSum, another popular news summarization dataset, since almost 77% of the gold reference summaries contain hallucinations (Maynez et al., 2020).",
"Instead, we opted for Gigaword and Wikihow, which are datasets with substantial abstraction without as much hallucination problems as XSum.",
"Gigaword reference summaries have substantially less hallucinations than XSum (Kang and Hashimoto, 2020), and WikiHow summaries tend to be of a higher quality since they are written and curated by humans (Koupaee and Wang, 2018; Ladhak et al., 2020).",
"Wikihow (Koupaee and Wang, 2018) is a dataset of how-to articles covering a diverse set of topics, collected from the wikihow.com website.",
"Each article contains several paragraphs detailing step by step instructions for a procedural task.",
"There are about 12 M such paragraphs in the dataset, paired with a one sentence summary.",
"extracted from news articles that were collected as part of the Gigaword corpus (Graff et al., 2003).",
"The model is tasked with generating the headline of the article given the first sentence.",
"We follow the process detailed by Grusky et al. (2018), and use extractive fragment coverage and extractive fragment density as the measures of extractiveness of a given summary.",
"Henceforth we will refer to these as coverage and density respectively.",
"Coverage is the percentage of words in a summary that are from the source article.",
"Density is the average length of the text spans copied from the document that are contained in the summary.",
"A summary that copies larger chunks of text from the source article will have a higher density.",
"Recent studies of faithfulness evaluation have proposed model-based automated metrics to detect whether a given summary is faithful to the source article.",
"For example, Falke et al. (2019) (Entail-ment) have studied using pretrained entailment based methods to assess the probability of the generated output being entailed by the source article.",
"Kryscinski et al. (2020) (FactCC) augment hallucinated summaries by applying rule-based transformations to the document sentences and train a BERT-based model to classify whether the generated output is faithful.",
"Goyal and Durrett (2021) (DAE) have collected fine-grained annotations to study word-, dependencyand sentence-level faithfulness and use these annotations to train a factuality detection model.",
"Figure 1 shows the relationship between the average coverage of the generated outputs (extrac-tiveness) vs. average metric scores (faithfulness) assigned to various abstractive summarization models trained on Gigaword.",
"1 We observe that there is a positive correlation between extractiveness and faithfulness scores, as models whose generated summaries have a higher average coverage tend to also get higher scores for each of the faithfulness metrics.",
"This correlation between exractiveness and faithfulness makes it unclear whether a model gets higher factuality scores simply because it is more extractive or it is capable of generating faithful summaries at the original level of extractiveness.",
"This highlights the need for accounting for extractiveness in order to compare faithfulness across different abstractive summarization systems.",
"Given that extractiveness is confounded with faithfulness, we propose a framework for evaluating effective faithfulness , which takes into account the extractiveness of a system.",
"In order to do this, we first need to determine the faithfulness of a system operating at a given level of extractiveness.",
"We call this the Faithfulness-Abstractiveness Tradeoff and we describe it further in 4.1.",
"The effective faithfulness of a system is then simply the relative difference between the faithfulness score assigned to the system, and the score of a system operating with the same average extractiveness according to the trade-off curve.",
"In order to understand the effectiveness of a proposed system for improving faithfulness, we need to be able to account for its extractiveness.",
"We finetune pre-trained BART models (Lewis et al., 2020) for different levels of extractiveness, without any explicit recourse for improving faithfulness.",
"We then use these systems to create a faithfulness-abstractiveness trade-off curve that can serve as a control to measure the effective faithfulness of summarization systems.",
"Models that improve effective faithfulness should lie above the faithfulness-abstractiveness trade-off curve .",
"2 In particular, we sub-sample the training data into extractiveness quartiles by computing the coverage of the references with respect to the source articles.",
"We then fine-tune BART on each of these quartiles to obtain quartile models with varying level of extractiveness.",
"In addition, we also finetune BART on all of the data, which we call the baseline .",
"We collect faithfulness annotations for summaries generated by each of these models for a random sample of 200 articles.",
"We collect three annotations per example on Amazon Mechanical Turk asking whether an output is faithful or unfaithful with respect to the corresponding source article.",
"We then compute the percentage of annotators that selects \"faithful\", and use this as the faithfulness 2 Human evaluation data and trade-off curves can be found at https://github.com/fladhak/effective-faithfulness.",
"Article Once you decide what to outsource, look for the right contractors.",
"Start by asking for referrals from your own professional network.",
"Talk to other business owners and professionals about how and where they outsource.",
"You can also check professional associations or trade groups field in which you are trying to outsource work.",
"Use other social media platforms such as Facebook or Twitter to advertise what you are looking for.",
"Alternately, you can connect with contractors and freelancers on sites such as eLance, Guru and oDesk.",
"These websites allow business owners to place an ad that describes what kind of work they need to have done, and contractors respond with their qualifications and rates.",
"[TRUNCATED] ...",
"Baseline Search for contractors and freelancers to outsource the work.",
"Q1 Conduct an initial search for qualified contractors and freelancers.",
"Q2 Search for qualified contractors and freelancers to work on your project.",
"Q3 Search for contractors and freelancers to do the work.",
"Table 2 shows the coverage and faithfulness scores for the baseline and the quartile models, where Q1 is the most abstractive and Q4 is the most extractive quartile.",
"4 We observe that the models that are fine-tuned on more extractive quartiles produce outputs with significantly higher coverage and faithfulness scores.",
"The baseline model generates relatively extractive outputs with coverage closest to Q3 on both Gigaword and Wikihow.",
"Furthermore, we observe that the baseline model has a higher coverage than the model fine-tuned on Q3 but it has lower faithfulness score for Gigaword.",
"Table 1 shows an article from the Wikihow dataset and corresponding output summaries generated by the baseline and each of the quartile models.",
"We observe that the generated summaries are very similar in meaning; however, the output generated by the Q1 model includes a higher number of novel words (i.e. lower coverage) compared to the other models while staying faithful to the article.",
"Conversely, Q4 model has a coverage of 1 in this example; all the words generated by this model are from the source article.",
"On average, the Q1 model generates outputs that are more abstractive and less faithful while Q4 generates outputs that are more extractive and more faithful.",
"We first aim to understand whether it is possible to mitigate the faithfulness-abstractiveness tradeoff by designing several oracle experiments where we have access to human judgments.",
"baseline + faithfulness (bf).",
"We use the output from the baseline model if it is faithful (i.e. at least two out of three annotators agree that the output is faithful).",
"If the baseline output is not faithful, we select the output from the quartile model that is more extractive than the baseline to see whether we can have a similar coverage as the baseline but preserve faithfulness.",
"baseline + faithfulness-extractiveness (bfe).",
"This oracle system behaves similar to the one described above when the baseline output is unfaithful.",
"However, rather than always selecting the base-1413 Dataset Cov.",
"line output when it is faithful, we pick the output from the quartile model that is more abstractive than the baseline whenever it is also faithful according to human judgement.",
"quartile + faithfulness-extractiveness (qfe).",
"Amongst the outputs of all four quartile models, we pick the most faithful output with the highest level of abstractiveness to understand whether it is possible to generate abstractive output while remaining faithful.",
"Analysis.",
"Table 3 shows the coverage and faithfulness of the baseline and each of these oracles for Gigaword and Wikihow.",
"We observe that it is possible to be more faithful than the baseline at a similar level of abstractiveness (bf).",
"Furthermore, we can be more abstractive than the baseline while being more faithful (bfe).",
"Selecting the most faithful and abstractive output from the quartile models achieves a really high faithfulness score ( 98%) while having significantly less coverage than the baseline.",
"This oracle analysis suggests that it should be possible to build models that can mitigate the faithfulness-abstractiveness trade-off by controlling the level of extractiveness.",
"Given this, we further explore whether we can learn a selector that is capable of doing this selection automatically to mitigate the faithfulness-abstractiveness tradeoff.",
"Kang and Hashimoto (2020) have proposed a method to adaptively remove high loss examples to optimize the distinguishability of samples from the model and the reference.",
"They have shown that the samples generated by this Loss Truncation model achieves higher factuality ratings compared to the baseline methods.",
"We study this method to understand where it lies in terms of faithfulness-abstractiveness trade-off and whether it can achieve a improved effective faithfulness over the control .",
"Goyal and Durrett (2020) have proposed a factuality evaluation metric (DAE) that evaluates whether each dependency arc in the generated output is consistent with the input.",
"They show that their proposed metric works better than existing factuality metrics, while also being able to localize the parts of the generated output that are non-factual.",
"Goyal and Durrett (2021) take advantage of DAE's ability to localize factuality errors, and train a summarization model only on the subset of tokens that is deemed factual according to the DAE metric.",
"We follow their methodology to train summarization models, and assess them using our evaluation framework.",
"We aim to understand whether we can build a model that achieves a better effective faithfulness than Loss Truncation.",
"We propose a selector that can identify the most abstractive but faithful output to improve this trade-off.",
"We first generate four possible candidate summaries using the quartile models for each example in the validation set.",
"This results in outputs with varying levels of extractiveness.",
"For our selector, we fine-tune a FactCC model (Kryscinski et al., 2020) on the data we collected to generate the trade-off curve, using 10-fold cross validation, to assign faithfulness scores to the generated summaries (in the test folds).",
"5 In addition, we learn a threshold for the faithfulness score that maximizes the area under the ROC curve (Selector-ROC) (also using 10-fold cross validation).",
"For each example in the test fold, we select the most abstractive candidate (amongst the four possible candidates from the quartile models) that is considered faithful according to the fintuned FactCC model (i.e. the faithfulness score is above the tuned threshold).",
"Instead of maximizing for the area under the ROC curve, we can also tune the faithfulness threshold to maximize F scores ( Selector-F ).",
"Using F score with < 1 allows us to assign a higher weight to the precision of our selector which would result in outputs with higher coverage and faithfulness.",
"5 We collected annotations for 200 articles for each of the quartile models.",
"We find that the fine-tuning FactCC is important since the pre-trained FactCC model is trained on a different dataset and does not transfer well to our setttings.",
"This is consistent with the findings of Goyal and Durrett (2021).",
"Table 4 shows the coverage and faithfulness results for the baseline, Loss Truncation, DAE, and the selectors.",
"We observe that as we use smaller values for for Selector-F , we get more extractive and more faithful outputs.",
"This allows us to have a trade-off between faithfulness and abstractiveness.",
"Moreover, with both Selector-ROC and Selector-F , we produce output with less coverage but higher faithfulness scores than the baseline.",
"For Wikihow, Selector-ROC produces outputs with lower coverage but similar faithfulness scores to Loss Truncation.",
"We can further obtain a higher faithfulness score at a similar coverage level as DAE and Loss truncation with Selector-F with = 0 .",
"1 .",
"For Gigaword, Select-ROC produces output with significantly lower coverage than Loss Truncation and DAE.",
"Selector-F produces output with similar coverage to Loss Truncation with a higher faithfulness score ( = 0 . 1 ).",
"It is important to understand whether models improve faithfulness by simply being more extractive or if they are able to improve effective faithfulness .",
"In order to understand this, we measure whether the models get improvement in faithfulness over the control operating at the same level of extractiveness.",
"In Figure 2, we plot the faithfulness-abstractiveness curve with the faithfulness and abstractiveness of the quartile models.",
"If a model lies above this curve, it improves the effective faithfulness .",
"If the model is below this curve, it is not able to improve the effective faithfulness and it has a worse tradeoff than the control operating at the same level of extractiveness.",
"For both Gigaword and Wikihow, Selector-ROC lies above the curve improving this trade-off.",
"However, both the baseline and Loss Truncation models get worse trade-off than the control operating at the same level of extractiveness.",
"Similarly, we can obtain several models that lie above the curve for both Gigaword and Wikihow using Selector-F .",
"The selector approach allows us to get better effective faithfulness at different points in the abstractiveness-extractiveness spectrum.",
"The DAE based model is able to improve effective faithfulness on the Wikihow dataset, but not on the Gigaword dataset, indicating that the improvements are not consistent across datasets.",
"Table 5 shows example summaries generated by the baseline, Loss Truncation, DAE and the Selector-ROC models.",
"We observe that selector model is able to generate summaries that are faithful to the original article while having more novel words and phrases in the generated summaries.",
"There has been a lot of recent work in abstractive summarization showing that state-of-the-art systems suffer from generating inconsistent information with respect to the source article, despite their improved success in producing fluent summaries",
"(a) Selector-ROC and the baseline trade-off on Gigaword .",
"(b) Selector-F and the baseline trade-off on Gigaword .",
"(Falke et al., 2019; Lux et al., 2020; Wilber et al., 2021).",
"Since word-overlap based metrics such as ROUGE have low correlation with human scores of faithfulness (Kryscinski et al., 2019; Fabbri et al., 2020), there has been significant effort to develop automated metrics that can detect such errors (Zhou et al., 2021; Gabriel et al., 2021; Pagnoni et al., 2021a).",
"For example, Falke et al. (2019), Maynez et al. (2020) and Goyal and Durrett (2020) have proposed to assess faithfulness using entailment models, where a faithful summary should be assigned a high entailment score with respect to the original article.",
"Kryscinski et al. (2020) presented FactCC, a weakly-supervised BERT-based entailment model, by augmenting the dataset with artificial faithfulness errors.",
"Durmus et al. (2020) and Wang et al. (2020) proposed question-answering based evaluation frameworks by automatically generating questions from the generated summary, and comparing the corresponding answers from both the source and the generated summary in order assess information consistency.",
"Furthermore, several benchmarks have been proposed to evaluate the strengths and weaknesses of these evaluation metric (Gabriel et al., 2021; Pagnoni et al., 2021b).",
"Previous studies in faithfulness evaluation, however, has not accounted for the effect of extractiveness of the output summaries.",
"As we show in this study, the extractiveness of the output is correlated with the faithfulness scores assigned by these automated metrics.",
"Therefore, it is not clear whether the models with higher scores are better at abstraction, or extract more from the source article.",
"We suggest that we need to account for this confounding factor in order to assess the real progress in building models that are better at abstraction.",
"We note that there is concurrent work that also argues for accounting for extractiveness in assessing the faithfulness of models (Dreyer et al., 2021), however, unlike our work, they do they do not propose any mitigation for the faithfulness-abstractiveness trade-off.",
"world scenarios, as such recent work has studied methods to improve the faithfulness of abstractive summarization systems (Matsumaru et al., 2020; Zhao et al., 2020; Dong et al., 2020; Goyal and Durrett, 2021; Xu et al., 2020; Chen et al., 2021; Zhu et al., 2021).",
"For example, Goyal and Durrett (2021) train summarization systems by modifying the training objective to maximize the likelihood of the subset of summary tokens that are considered faithful according to their factuality detection model.",
"Zhao et al. (2020) specifically target hallucination of quantities in generated summaries, and train a verification model that they use to re-rank summaries such that summaries containing quantities consistent with the source article are up-ranked.",
"Although these methods have shown improvements over the compared baselines, unlike our work, they do not measure the effective faithfulness taking extractiveness of the generated outputs into account.",
"Recent studies that propose methods to improve faithfulness evaluate progress by conducting human evaluation on generated summaries and check",
"whether the faithfulness scores are higher for their proposed system as compared to their baselines.",
"We show that there is a strong relationship between the extractiveness and faithfulness of generated outputs (i.e., more extractive outputs tend to be more faithful), and therefore we cannot simply disregard extractiveness in faithfulness evaluation.",
"We propose that we should instead be measuring effective faithfulness and introduce a framework that takes into account the faithfulness-abstractiveness trade-off curve that is generated by training control models at different points in the abstractiveness spectrum.",
"We demonstrate the importance of measuring effective faithfulness by showing that recently proposed methods that improve faithfulness over the baseline fails to consistently improve over a simple control operating at the same level of abstractiveness.",
"We argue that measuring effective faithfulness is important since our goal is to build abstractive, faithful summarization systems.",
"If the objective was to optimize for faithfulness alone, we could do so by simply building more extractive systems (such as the Q4 model we trained above).",
"Limitations.",
"Note that this method relies on some diversity in the extractiveness of reference summaries, since we rely on sub-sampling to train models for the control .",
"It is less likely to be effective for datasets with very little variation in the extractiveness of the generated summaries.",
"However, in general, we see significantly more faithfulness problems for datasets with higher diversity of abstractiveness.",
"Therefore, we suggest to account for the faithfulness-abstractiveness trade-off for such datasets in future work.",
"This work was partly supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9117, Sam-sung Advanced Institute of Technology (Next Generation Deep Learning: From Pattern Recognition to AI), and a collaborative grant from Amazon to the Columbia Center for Artificial Intelligence entitled Extremely Abstractive Summarization.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the funding agencies.",
"We further thank the anonymous reviewers and the Stanford NLP group for their helpful feedback."
] | [
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"result",
"objective",
"result",
"objective",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Interpretability or explainability is an emerging research field in NLP .",
"From a user-centric point of view, the goal is to build models that provide proper justification for their decisions, similar to those of humans, by requiring the models to satisfy additional constraints.",
"To this end, we introduce a new application on legal text where, contrary to mainstream literature targeting word-level rationales, we conceive rationales as selected paragraphs in multi-paragraph structured court cases.",
"We also release a new dataset comprising European Court of Human Rights cases, including annotations for paragraph-level rationales.",
"We use this dataset to study the effect of already proposed rationale constraints, i.e., sparsity , continuity , and comprehensiveness , formulated as regularizers.",
"Our findings indicate that some of these constraints are not beneficial in paragraph-level rationale extraction, while others need re-formulation to better handle the multi-label nature of the task we consider.",
"We also introduce a new constraint, singularity , which further improves the quality of rationales, even compared with noisy rationale supervision.",
"Experimental results indicate that the newly introduced task is very challenging and there is a large scope for further research.",
"Model interpretability (or explainability ) is an emerging field of research in NLP (Lipton, 2018; Jacovi and Goldberg, 2020).",
"From a model-centric point of view, the main focus is to demystify a model's inner workings, for example targeting self-attention mechanisms (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019), and more recently Transformer-based language models (Clark et al., 2019; Kovaleva et al., 2019; Rogers et al., 2020).",
"From a user-centric point of view, the main focus is to build models that learn to provide proper Correspondence to: ihalk.aueb.gr justification for their decisions, similar to those of humans, (Zaidan et al., 2007; Lei et al., 2016; Chang et al., 2019; Yu et al., 2019) by requiring the models to satisfy additional constraints.",
"Here we follow a user-centric approach to rationale extraction , where the model learns to select a subset of the input that justifies its decision.",
"To this end, we introduce a new application on legal text where, contrary to mainstream literature targeting word-level rationales, we conceive rationales as automatically selected paragraphs in multi-paragraph structured court cases.",
"While previous related work targets mostly binary text classification tasks (DeYoung et al., 2020), our task is a highly skewed multi-label text classification task.",
"Given a set of paragraphs that refer to the facts of each case (henceforth facts ) in judgments of the European Court of Human Rights ( EC t HR ), the model aims to predict the allegedly violated articles of the European Convention of Human Rights ( ECHR ).",
"We adopt a rationalization by construction methodology (Lei et al., 2016; Chang et al., 2019; Yu et al., 2019), where the model is regularized to satisfy additional constraints that reward the model, if its decisions are based on concise rationales it selects, as opposed to inferring explanations from the model's decisions in a post-hoc manner (Ribeiro et al., 2016; Alvarez-Melis and Jaakkola, 2017; Murdoch et al., 2018).",
"Legal judgment prediction has been studied in the past for cases ruled by the European Court of Human Rights (Aletras et al., 2016; Medvedeva et al., 2018; Chalkidis et al., 2019) and for Chinese criminal court cases (Luo et al., 2017; Hu et al., 2018; Zhong et al., 2018), but there is no precedent of work investigating the justification of the models' decisions.",
"Similarly to other domains (e.g., finan-cial, biomedical), explainability is a key feature in the legal domain, which may potentially improve the trustworthiness of systems that abide by the principle of the right to explanation (Goodman and Figure 1: A depiction of the EC t HR process: The applicant(s) request a hearing from EC t HR regarding specific accusations (alleged violations of ECHR articles) against the defendant state(s), based on facts.",
"Flaxman, 2017).",
"We investigate the explainability of the decisions of state-of-the-art models, comparing the paragraphs they select to those of legal professionals, both litigants and lawyers, in alleged violation prediction .",
"In the latter task, introduced in this paper, the goal is to predict the accusations (allegations) made by the applicants.",
"The accusations can be usually predicted given only the facts of each case.",
"By contrast, in the previously studied legal judgment prediction task, the goal is to predict the court's decision; this is much more difficult and vastly relies on case law (precedent cases).",
"Although the new task (alleged violation prediction) is simpler than legal judgment prediction, models that address it (and their rationales) can still be useful in the judicial process (Fig. 1).",
"For example, they can help applicants (plaintiffs) identify alleged violations that are supported by the facts of a case.",
"They can help judges identify more quickly facts that support the alleged violations, contributing towards more informed judicial decision making (Zhong et al., 2020).",
"They can also help legal experts identify previous cases related to particular allegations, helping analyze case law (Katz, 2012).",
"Our contributions are the following: We introduce rationale extraction for alleged violation prediction in EC t HR cases, a more tractable task compared to legal judgment prediction.",
"This is a multi-label classification task that requires paragraph-level rationales, unlike previous work on word-level rationales for binary classification.",
"We study the effect of previously proposed rationale constraints, i.e., sparsity , continuity (Lei et al., 2016), and comprehensiveness (Yu et al., 2019), formulated as regularizers.",
"We show that continuity is not beneficial and requisite in paragraph-level rationale-extraction, while comprehensiveness needs to be re-formulated for the multi-label nature of the task we consider.",
"We also introduce a new constraint, singularity , which further improves the rationales, even compared with silver (noisy) rationale supervision.",
"We release a new dataset for alleged article violation prediction, comprising 11k EC t HR cases in English, with silver rationales obtained from references in court decisions, and gold rationales provided by ECHR -experienced lawyers.",
"1 To the best of our knowledge, this is also the first work on rationale extraction that fine-tunes end-to-end pre-trained Transformer-based models.",
"2 2 Related Work Legal judgment prediction: Initial work on legal judgment prediction in English used linear models with features based on bags of words and topics, applying them to EC t HR cases (Aletras et al., 2016; Medvedeva et al., 2018).",
"More recently, we experimented with neural methods (Chalkidis et al., 2019) , showing that hierarchical RNN s (Yang et al., 2016), and a hierarchical variation of BERT (Devlin et al., 2019) that encodes paragraphs, outperform linear classifiers with bag-of-word representations.",
"In all previous work, legal judgment prediction is tackled in an over-simplified experimental setup 1 Our dataset is publicly available at https:// huggingface.co/datasets/ecthr_cases , see usage example in Appendix E. 2 Others fine-tuned such models only partially (Jain et al., 2020), i.e., top two layers, or not at all (DeYoung et al., 2020).",
"where only textual information from the cases themselves is considered, ignoring many other important factors that judges consider, more importantly general legal argument and past case law.",
"Also, Aletras et al. (2016), Medvedeva et al. (2018), Chalkidis et al. (2019) treat EC t HR judgment prediction as a binary classification task per case (any article violation or not), while the EC t HR actually considers and rules on the violation of individual articles of the European Convention of Human Rights ( ECHR ).",
"In previous work (Chalkidis et al., 2019), we also attempted to predict which particular articles were violated, assuming, however, that the Court considers all the ECHR articles in each case, which is not true.",
"In reality, the Court considers only alleged violations of particular articles, argued by applicants.",
"Establishing which articles are allegedly violated is an important preliminary task when preparing an EC t HR application.",
"Instead of oversimplifying the overall judgment prediction task, we focus on the preliminary task and use it as a test-bed for generating paragraph-level rationales in a multi-label text classification task for the first time.",
"Legal judgment prediction has also been studied in Chinese criminal cases (Luo et al., 2017; Hu et al., 2018; Zhong et al., 2018).",
"Similarly to the literature on legal judgment prediction for EC t HR cases, the aforementioned approaches ignore the crucial aspect of justifying the models' predictions.",
"Given the gravity that legal outcomes have for individuals, explainability is essential to increase the trust of both legal professionals and laypersons on system decisions and promote the use of supportive tools (Barfield, 2020).",
"To the best of our knowledge, our work is the first step towards this direction for the legal domain, but is also applicable in other domains (e.g., biomedical), where justifications of automated decisions are essential.",
"Rationale extraction by construction: Contrary to earlier work that required supervision in the form of human-annotated rationales (Zaidan et al., 2007; Zhang et al., 2016), Lei et al. (2016) introduced a self-supervised methodology to extract rationales (that supported aspect-based sentiment analysis pre-dictions), i.e., gold rationale annotations were used only for evaluation.",
"Furthermore, models were designed to produce rationales by construction , contrary to work studying saliency maps (generated by a model without explainability constraints) using gradients or perturbations at inference time (Ribeiro et al., 2016; Alvarez-Melis and Jaakkola, 2017; Murdoch et al., 2018).",
"Lei et al. (2016) aimed to produce short coherent rationales that could replace the original full texts, maintaining the model's predictive performance.",
"The rationales were extracted by generating binary masks indicating which words should be selected; and two additional loss regularizers were introduced, which penalize long rationales and sparse masks (that would select non-consecutive words).",
"Yu et al. (2019) proposed another constraint to ensure that the rationales would contain all the relevant information.",
"They formulated this constraint through a minimax game, where two players, one using the predicted binary mask and another using the complement of this mask, aim to correctly classify the text.",
"If the first player fails to outperform the second, the model is penalized.",
"Chang et al. (2019) use a Generative Adversarial Network ( GAN ) (Goodfellow et al., 2014), where a generator producing factual rationales competes with a generator producing counterfactual rationales to trick a discriminator.",
"The GAN was not designed to perform classification.",
"Given a text and a label it produces a rationale supporting (or not) the label.",
"Jain et al. (2020) decoupled the model's predictor from the rationale extractor to produce inherently faithful explanations, ensuring that the predictor considers only the rationales and not other parts of the text.",
"Faithfulness refers to how accurately an explanation reflects the true reasoning of a model (Lipton, 2018; Jacovi and Goldberg, 2020).",
"All the aforementioned work conceives rationales as selections of words, targeting binary classification tasks even when this is inappropriate.",
"For instance, DeYoung et al. (2020) and Jain et al. (2020) over-simplified the task of the multi-passage reading comprehension (MultiRC) dataset (Khashabi et al., 2018) turning it into a binary classification task with word-level rationales, while sentence-level rationales seem more suitable.",
"Responsible AI: Our work complies with the EC t HR data policy.",
"By no means do we aim to build a robot' lawyer or judge, and we acknowledge the possible harmful impact (Angwin et al., 2016; Dressel and Farid, 2018) of irresponsible deployment.",
"Instead, we aim to support fair and explainable AI -assisted judicial decision making and empirical legal studies.",
"We consider our work as part of ongoing critical research on responsible AI (Elish et al., 2021) that aims to provide explainable and fair systems to support human experts.",
"The court ( EC t HR ) hears allegations regarding breaches in human rights provisions of the European Convention of Human Rights ( ECHR ) by European states (Fig. 1).",
"3 The court rules on a subset of all ECHR articles, which are predefined (alleged) by the applicants ( plaintiffs ).",
"Our dataset comprises 11k EC t HR cases and can be viewed as an enriched version of the EC t HR dataset of Chalkidis et al. (2019), which did not provide ground truth for alleged article violations (articles discussed) and rationales.",
"The new dataset includes the following: Facts: Each judgment includes a list of paragraphs that represent the facts of the case, i.e., they describe the main events that are relevant to the case, in numbered paragraphs.",
"We hereafter call these paragraphs facts for simplicity.",
"Note that the facts are presented in chronological order.",
"Not all facts have the same impact or hold crucial information with respect to alleged article violations and the court's assessment; i.e., facts may refer to information that is trivial or otherwise irrelevant to the legally crucial allegations against defendant states.",
"Allegedly violated articles: Judges rule on specific accusations (allegations) made by the applicants (Harris, 2018).",
"In EC t HR cases, the judges discuss and rule on the violation, or not, of specific articles of the Convention.",
"The articles to be discussed (and ruled on) are put forward (as alleged article violations) by the applicants and are included in the dataset as ground truth; we identify 40 violable articles in total.",
"4 In our experiments, however, the models are not aware of the allegations.",
"They predict the Convention articles that will be discussed (the allegations) based on the case's facts, and they also produce rationales for their predictions.",
"Models of this kind could be used by potential applicants to help them formulate future allegations (articles they could claim to have been 3 The Convention is available at https://www.echr. coe.int/Documents/Convention_ENG.pdf . 4 The rest of the articles are procedural, i.e., the number of judges, criteria for office, election of judges, etc. violated), as already noted, but here we mainly use the task as a test-bed for rationale extraction.",
"Violated articles: The court decides which allegedly violated articles have indeed been violated.",
"These decisions are also included in our dataset and could be used for full legal judgment prediction experiments (Chalkidis et al., 2019).",
"However, they are not used in the experiments of this work.",
"Silver allegation rationales: Each decision of the EC t HR includes references to facts of the case (e.g., See paragraphs 2 and 4 .) and case law (e.g., See Draci vs. Russia (2010) .).",
"We identified references to each case's facts and retrieved the corresponding paragraphs using regular expressions.",
"These are included in the dataset as silver allegation rationales, on the grounds that the judges refer to these paragraphs when ruling on the allegations.",
"Gold allegation rationales: A legal expert with experience in EC t HR cases annotated a subset of 50 test cases to identify the relevant facts (paragraphs) of the case that support the allegations (alleged article violations).",
"In other words, each identified fact justifies (hints) one or more alleged violations.",
"5 Task definition: In this work, we investigate alleged violation prediction , a multi-label text classification task where, given the facts of a EC t HR case, a model predicts which of the 40 violable ECHR articles were allegedly violated according to the applicant(s).",
"4 The model also needs to identify the facts that most prominently support its decision.",
"We first describe a baseline model that we use as our starting point.",
"It adopts the framework proposed by Lei et al. (2016), which generates rationales by construction: a text encoder sub-network reads the text; a rationale extraction sub-network produces a binary mask indicating the most important words of the text; and a prediction sub-network classifies a hard-masked version of the text.",
"We then discuss additional constraints that have been proposed to improve word-level rationales, which can be added to the baseline as regularizers.",
"We argue that one of them is not beneficial for paragraph-level rationales.",
"We also consider variants of previous constraints that better suit multi-label classification tasks and introduce a new one.",
"Our baseline is a hierarchical variation of BERT (Devlin et al., 2019) with hard attention, dubbed HIERBERT-HA .",
"6 Each case (document) D is viewed as a list of facts (paragraphs) D = [ P 1 , . . . , PN ] .",
"Each paragraph is a list of tokens P i = [ w 1 , . . . , w L i ] .",
"We first pass each paragraph independently through a shared BERT encoder (Fig. 2) to extract context-unaware paragraph representations P [ CLS ] i , using the [ CLS ] embedding of BERT .",
"Then, a shallow encoder with two Transformer layers (Vaswani et al., 2017) produces contextualized paragraph embeddings, which are in turn projected to two separate spaces by two different fully-connected layers, K and Q , with SELU activations (Klambauer et al., 2017).",
"K produces the paragraph encoding P Ki , to be used for classification; and Q produces the paragraph encoding PQ i , to be used for rationale extraction.",
"The rationale extraction sub-network passes each P Qi encoding independently through a fully-connected layer with a sigmoid activation to produce soft attention scores a i [0 , 1] .",
"The attention scores are then binarized using a 0.5 threshold, leading to hard attention scores z i ( z i = 1 iff a i > 0 . 5 ).",
"The hard-masked document representation DM is obtained by hard-masking paragraphs and max-pooling: DM = maxpool (cid:0) [ z 1 PK 1 , . . . , z N PKN ] (cid:1) DM is then fed to a dense layer with sigmoid activations, which produces a probability estimate per label, (cid:98) Y = [ y 1 , . . . , y | A | ] , in our case per article of the Convention, where | A | is the size of the label set.",
"For comparison, we also experiment with a model that masks no facts, dubbed HIERBERT-ALL .",
"The thresholding that produces the hard (binary) masks z i is not differentiable.",
"To address this problem, Lei et al. (2016) used reinforcement learning (Williams, 1992), while Bastings et al. (2019) proposed a differentiable mechanism relying on the re-parameterization trick (Louizos and Welling, 2017).",
"We follow a simpler trick, originally proposed by Chang et al. (2019), where during backpropagation the thresholding is detached from the computation graph, allowing the gradients to bypass the thresholding and reach directly the soft attentions a i .",
"6 In previous work, we proposed a hierarchical variation of BERT with self-attention (Chalkidis et al., 2019).",
"In parallel work, Yang et al. (2020) proposed a similar Transformer-based Hierarchical Encoder (SMITH) for long document matching.",
"Sparsity: Modifying the word-level sparsity constraint of Lei et al. (2016) for our paragraph-level rationales, we also hypothesize that good rationales include a small number of facts (paragraphs) that sufficiently justify the allegations; the other facts are trivial or secondary.",
"For instance, an introductory fact like The applicant was born in 1984 and lives in Switzerland. does not support any allegation, while a fact like The applicant contended that he had been beaten by police officers immediately after his arrest and later during police questioning. suggests a violation of Article 3 Prohibition of Torture.",
"Hence, we use a sparsity loss to control the number of selected facts: L s = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) T 1 NN (cid:88) i =1 z i (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) (1) where T is a predefined threshold specifying the desired percentage of selected facts per case.",
"We can estimate T from silver rationales (Table 1).",
"Continuity: In their work on word-level rationales, Lei et al. (2016) also required the selected words to be contiguous , to obtain more coherent rationales.",
"In other words, the transitions between selected ( z i = 1 ) and not selected ( z i = 0 ) words in the hard mask should be minimized.",
"This is achieved by adding the following continuity loss: L c = 1 N 1 N (cid:88) i =2 | z i z i 1 | (2) In paragraph-level rationale extraction, where entire paragraphs are masked, the continuity loss forces the model to select contiguous paragraphs.",
"In EC t HR cases, however, the facts are self-contained and internally coherent paragraphs (or single sentences).",
"Hence, we hypothesize that the continuity loss is not beneficial in our case.",
"Nonetheless, we empirically investigate its effect.",
"Comprehensiveness: We also adapt the comprehensiveness loss of Yu et al. (2019), which was introduced to force the hard mask Z = [ z 1 , . . . , z N ] to (ideally) keep all the words (in our case, paragraphs about facts) of the document D that support the correct decision Y .",
"In our task, Y = [ y 1 , . . . , y | A | ] is a binary vector indicating the Convention articles the court discussed (gold allegations) in the case of D .",
"Intuitively, the complement Z c of Z , i.e., the hard mask that selects the words (in our case, facts) that Z does not select, should not select sufficient information to predict Y .",
"Given D , let DM , D cM be the representations of D obtained with Z, Z c , respectively; let (cid:98) Y , (cid:98) Y c be the corresponding probability estimates; let L p , L cp be the classification loss, typically total binary cross-entropy, measuring how far (cid:98) Y , (cid:98) Y c are from Y .",
"In its original form, the comprehensiveness loss requires L cp to exceed L p by a margin h .",
"While this formulation may be adequate in binary classification tasks, in multi-label classification it is very hard to pre-select a reasonable margin, given that cross-entropy is unbounded, that the distribution of true labels (articles discussed) is highly skewed, and that some labels are easier to predict than others.",
"To make the selection of h more intuitive, we propose a reformulation of L g that operates on class probabilities rather than classification losses.",
"The right-hand side of Eq.",
"3 becomes: 1 | A | | A | (cid:88) i =1 y i ( y ic y i + h )+(1 y i )( y i y ic + h ) (4) The margin h is now easier to grasp and tune.",
"It encourages the same gap between the probabilities predicted with Z and Z c across all labels (articles).",
"We also experiment with a third variant of comprehensiveness, which does not compare the probabilities we obtain with Z and Z c , comparing instead the two latent document representations: L g = | cos( DM , D cM ) | (5) where cos denotes cosine similarity.",
"This variant forces DM and D cM to be as dissimilar as possible, without requiring a preset margin.",
"Singularity: A limitation of the comprehensiveness loss (any variant) is that it only requires the mask Z to be better than its complement Z c .",
"This does not guarantee that Z is better than every other mask.",
"Consider a case where the gold rationale identifies three articles and Z selects only two of them.",
"The model may produce better predictions with Z than with Z c , and DM may be very different than D cM in Eq.",
"5, but Z is still not the best mask.",
"To address this limitation, we introduce the singularity loss L r , which requires Z to be better than a mask Z r , randomly generated per training instance and epoch, that selects as many facts as the sparsity threshold T allows: L r = L g ( Z, Z r ) (6) = 1 cos( Z r , Z ) Here L g ( Z, Z r ) is any variant of L g , but now using Z r instead of Z c ; and regulates the effect of L g ( Z, Z r ) by considering the cosine distance between Z r and Z .",
"The more Z and Z r overlap, the less we care if Z performs better than Z r .",
"The total loss of our model is computed as follows.",
"Again L p is the classification loss; L cp , L rp are the classification losses when using Z c , Z r , respectively; and all s are tunable hyper-parameters.",
"We include L cp in Eq.",
"7, because otherwise the network would have no incentive to make D cM and (cid:98) Y c competitive in prediction; and similarly for L rp .",
"Rationales supervision: For completeness we also experimented with a variant that utilizes silver rationales for noisy rationale supervision (Zaidan et al., 2007).",
"In this case the total loss becomes: L = L p + ns MAE ( Z, Z s ) (8) where MAE is the mean absolute error between the predicted mask, Z , and the silver mask, Z s , and ns weighs the effect of MAE in the total loss.",
"For all methods, we conducted grid-search to tune the hyper-parameters .",
"We used the Adam optimizer (Kingma and Ba, 2015) across all experiments with a fixed learning rate of 2e 5 .",
"7 All methods rely on LEGAL-BERT-SMALL (Chalkidis et al., 2020), a variant of BERT (Devlin et al., 2019), with 6 layers, 512 hidden units and 8 attention heads, pre-trained on legal corpora.",
"Based on this model, we were able to use up to 50 paragraphs of 256 words each in a single 32 GB GPU .",
"In preliminary experiments, we found that the proposed model relying on a shared paragraph encoder, i.e., one that passes the same context-aware paragraph representations P [ CLS ] i to both the Q and K sub-networks, as in Fig. 2, has comparable performance and better rationale quality, compared to a model with two independent paragraph encoders, as the one used in the literature (Lei et al., 2016; Yu et al., 2019; Jain et al., 2020).",
"8 For all experiments, we report the average and standard deviation across five runs.",
"We evaluate:",
"(a) classification performance,",
"(b) faithfulness (Section 2), and",
"(c) rationale quality , while respecting a given sparsity threshold ( T ).",
"Classification performance: Given the label skewness, we evaluate classification performance using micro-F1 , i.e., for each Convention article, we compute its F1, and micro-average over articles.",
"Faithfulness: Recall that faithfulness refers to how accurately an explanation reflects the true reasoning of a model.",
"To measure faithfulness, we report sufficiency and comprehensiveness (DeYoung et al., 2020).",
"Sufficiency measures the difference between the predicted probabilities for the gold (positive) labels when the model is fed with the whole text ( (cid:99) Y + f ) and when the model is fed only with the predicted rationales ( (cid:99) Y + ).",
"Comprehensiveness (not to be confused with the homonymous loss of Eq. 35) measures the difference between the predicted probabilities for the gold (positive) labels obtained when the model is fed with the full text ( (cid:99) Y + f ) and when it is fed with the complement of the predicted rationales ( (cid:99) Y + c ).",
"We also compare classification performance (again using micro-F1 ) in both cases, i.e., when considering masked inputs (using Z ) and complementary inputs (using Z c ).",
"Rationale quality: Faithful explanations (of system reasoning) are not always appropriate for users (Jacovi and Goldberg, 2020), thus we also evaluate rationale quality from a user perspective.",
"The latter 7 In preliminary experiments, we tuned the baseline model on development data as a stand-alone classifier and found that the optimal learning rate was 2e 5 , searching in the set { 2e 5 , 3e 5 , 4e 5 , 5e 5 }.",
"The optimal drop-out rate was 0.",
"8 See Appendix B for additional details and results.",
"can be performed in two ways.",
"Objective evaluation compares predicted rationales with gold annotations, typically via Recall, Precision, F1 (com-paring system-selected to human-selected facts in our case).",
"In subjective evaluation, human annotators review the extracted rationales.",
"We opt for an objective evaluation, mainly due to lack of resources.",
"As rationale sparsity (number of selected paragraphs) differs across methods, which affects Recall, Precision, F1 , we evaluate rationale quality with mean R-Precision ( mRP ) (Manning et al., 2009).",
"That is, for each case, the model ranks the paragraphs it selects by decreasing confidence, and we compute Precision@ k , where k is the number of paragraphs in the gold rationale; we then average over test cases.",
"For completeness, we also report F1 (comparing predicted and gold rationale paragraphs), although it is less fair, because of the different sparsity of different methods, as noted.",
"Table 2 reports the classification performance of HIERBERT-ALL (no masking, no rationales), across ECHR articles.",
"F1 is 72.5% or greater for most of the articles with 1,000 or more training instances.",
"The scores are higher for articles 2, 3, 5, 6, because (according to the legal expert who provided the gold allegation rationales),",
"(i) there is a sufficient number of cases regarding these articles, and",
"(ii) the interpretation and application of these articles is more fact-dependent than those of other articles, such as articles 10 or 11 (Harris, 2018).",
"On the other hand, although there is a fair amount of training instances for articles 13, 14, 34, and 46, these articles are triggered in a variety of ways, many of which turn on legal procedural technicalities.",
"Instead of tuning simultaneously all the hyper-parameters of Eq.",
"7, we adopt a greedy, but more intuitive strategy: we tune one at a time, fix its value, and proceed to the next; s that have not been tuned are set to zero, i.e., the corresponding regularizer is not used yet.",
"We begin by tuning s , aiming to achieve a desirable level of sparsity without harming classification performance.",
"We set the sparsity threshold of L s (Eq. 1) to T = 0 .",
"3 (select approx. 30% of the facts), which is the average sparsity of the silver rationales (Table 1).",
"We found s = 0 .",
"1 achieves the best overall results on development data, thus we use this value for the rest of the experiments.",
"9 To check our hypothesis that continuity ( L c ) is not beneficial in our task, we tuned c on development data, confirming that the best overall results are obtained for c = 0 .",
"9 Thus we omit L c in the rest of the experiments.",
"Next, we tuned and compared the variants of the comprehensiveness loss L g (Table 4).",
"Targeting the label probabilities (Eq. 4) instead of the losses (Eq. 3) leads to lower rationale quality.",
"Targeting the document representations (Eq. 5) has the best rationale quality results, retaining (as with all versions of L g ) the original classification performance (micro-F1) of Table 2.",
"Hence, we keep the L g variant of Eq.",
"5 in the remaining experiments of this section, with the corresponding g value ( 1e 3 ).",
"L g classification sparsity rationale quality variant micro-F1 (aim: 30%) F1 mRP Eq.",
"3 73.0 0.5 31.4 1.9 35.4 5.8 38.4 5.9 Eq.",
"4 73.1 0.7 31.9 1.4 30.3 3.0 32.6 2.6 Eq.",
"5 72.8 0.8 31.8 1.3 38.3 2.3 41.2 2.1 Table 4: Development results for variants of L g ( comprehensiveness ) and varying g values (omitted).",
"9 Consult Appendix D for more detailed results.",
"L r classification sparsity rationale quality variant micro-F1 (aim: 30%) F1 mRP Eq.",
"3, 6 73.4 0.8 32.8 2.8 36.9 3.6 39.0 3.9 Eq.",
"4, 6 72.5 0.7 32.0 1.0 39.7 3.1 42.6 3.8 Eq.",
"5, 6 72.8 0.3 31.5 0.9 33.0 2.7 35.5 2.6 Table 5: Development results for variants of L c ( singularity ) and varying r values (omitted).",
"Concerning the singularity loss L r (Table 5), targeting the label probabilities (Eq. 4, 6) provides the best rationale quality, comparing to all the methods considered.",
"Interestingly Eq.",
"5, which performed best in L g (Table 4), does not perform well in L r , which uses L g (Eq. 6).",
"We suspect that in L r , where we use a random mask Z r that may overlap with Z , requiring the two document representations DM , D rM to be dissimilar (when using Eq. 5, 6) may be a harsh regularizer with negative effects.",
"Table 3 presents results on test data.",
"The models that use the hard attention mechanism and are regularized to extract rationales under certain constraints ( HIERBERT-HA + L ) have comparable classification performance to HIERBERT-ALL .",
"Furthermore, although paragraph embeddings are contextualized and probably have some information leak for all methods, our proposed extensions in rationale constraints better approximate faithfulness, while also respecting sparsity.",
"Our proposed extensions lead to low sufficiency (lower is better, ), i.e., there is only a slight deterioration in label probabilities when we use the predicted rationale instead of the whole input.",
"They also lead to high comprehensiveness (higher is better, ); we see a 20% deterioration in label probabilities when using the complement of the rationale instead of the whole input.",
"Interestingly, our variant with the singularity loss (Eq. 4, 6) is more faithful than the model that uses supervision on silver rationales (Eq. 8).",
"that have both silver and gold allegation rationales.",
"Average silver/gold rationale sparsity (%) in brackets.",
"We now consider rationale quality, focusing on HIERBERT-HA variants without rationale supervision.",
"Similarly to our findings on development data (Tables 4, 5), we observe (Table 6) that using",
"(a) our version of comprehensiveness loss (Eq. 5) or",
"(b) our singularity loss (Eq. 4, 6) achieves better results compared to former methods, and",
"(b) has the best results.",
"The singularity loss is better in both settings (silver or gold test rationales), even compared to a model that uses supervision on silver rationales.",
"The random masking of the singularity loss, which guides the model to learn to extract masks that perform better than any other mask, proved to be particularly beneficial in rationale quality.",
"Similar observations are derived given the results on the full test set considering silver rationales.",
"10 In general, however, we observe that the rationales extracted by all models are far from human rationals, as indicated by the poor results ( mRP, F1 ) on both silver and gold rationales.",
"Hence, there is ample scope for further research.",
"Quality of silver rationales: Comparing silver rationales with gold ones, annotated by the legal expert, we find that silver rationales are not complete, i.e., they are usually fewer than the gold ones.",
"They also include additional facts that have not been annotated by the expert.",
"According the expert, these facts do not support allegations, but are included for technical reasons (e.g., The national court did not accept the applicant's allegations. ).",
"Nonetheless, ranking methods by their rationale quality measured on silver rationales produces the same ranking as when measuring on gold rationales in the common subset of cases (Table 6).",
"Hence, it may be possible to use silver rationales, which are available for the full dataset, to rank systems participating in EC t HR rationale generation challenges.",
"Model bias: Low mRP with respect to gold rationales means that the models rely partially on non causal reasoning, i.e., they select secondary facts that do not justify allegations according to the legal expert.",
"In other words, the models are sensitive to specific language, e.g., they misuse (are easily fooled by) references to health issues and medical examinations as support for Article 3 alleged violations, or references to appeals in higher courts as support for Article 5, even when there is no concrete evidence.",
"11 Manually inspecting the predicted rationales, we did not identify bias on demographics.",
"Although such spurious features may be buried in the contextualized paragraph encodings ( P [ CLS ] i ).",
"In general, de-biasing models could benefit rationale extraction and we aim to investigate this direction in future work (Huang et al., 2020).",
"Plausibility: Plausibility refers to how convincing the interpretation is to humans (Jacovi and Goldberg, 2020).",
"While the legal expert annotated all relevant facts with respect to allegations, according to his manual review, allegations can also be justified by sub-selections (parts) of rationales.",
"Thus, although a method may fail to extract all the available rationales, the provided (incomplete) set of rationales may still be a convincing explanation.",
"To properly estimate plausibility across methods, one has to perform a subjective human evaluation which we did not conduct due to lack of resources.",
"We introduced a new application of rationale extraction in a new legal text classification task concerning alleged violations on EC t HR cases.",
"We also released a dataset for this task to foster further research.",
"Moreover, we compared various rationale constraints in the form of regularizers and introduced a new one ( singularity ) improving faithfulness and rationale quality in a paragraph-level setup comparing both with silver and gold rationales.",
"In the future, we plan to investigate more constraints that may better fit paragraph-level rationale extraction and explore techniques to de-bias models and improve rationale quality.",
"Paragraph-level rationale extraction can be also conceived as self-supervised extractive summarization to denoise long documents, a direction we plan to explore in the challenging task of case law retrieval (Locke and Zuccon, 2018).",
"We would like to thank the anonymous reviewers (esp. reviewer #2) for their constructive detailed comments.",
"Nikolaos Aletras is supported by EP-SRC grant EP/V055712/1, part of the European Commission CHIST-ERA programme, call 2019 XAI: Explainable Machine Learning-based Artificial Intelligence."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"objective",
"objective",
"objective",
"abstain",
"result",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"Social media is a breeding ground for threat narratives and related conspiracy theories.",
"In these, an outside group threatens the integrity of an inside group, leading to the emergence of sharply defined group identities: Insider s agents with whom the authors identify and Outsider s agents who threaten the insiders.",
"Inferring the members of these groups constitutes a challenging new NLP task:",
"(i) Information is distributed over many poorly-constructed posts;",
"(ii) Threats and threat agents are highly contextual, with the same post potentially having multiple agents assigned to membership in either group;",
"(iii) An agent's identity is often implicit and transitive; and",
"(iv) Phrases used to imply Outsider status often do not follow common negative sentiment patterns.",
"To address these challenges, we define a novel Insider Outsider classification task.",
"Because we are not aware of any appropriate existing datasets or attendant models, we introduce a labeled dataset (CT5K) and design a model (NP2IO) to address this task.",
"NP2IO leverages pretrained language modeling to classify Insider s and Outsider s.",
"NP2IO is shown to be robust, generalizing to noun phrases not seen during training, and exceeding the performance of non-trivial baseline models by 20% .",
"Narrative models often succinctly represented as a network of characters, their roles, their interactions ( syuzhet ) and associated time-sequencing information ( fabula ) have been a subject of considerable interest in computational linguistics and narrative theory.",
"Stories rest on the generative backbone of narrative frameworks (Bailey, 1999; Beatty, 2016).",
"While the details might vary from one story to another, this variation can be compressed into a limited set of domain-dependent narrative roles and functions (Dundes, 1962).",
"group identities are an emergent phenomenon resulting from distributed social discourse.",
"Currently, this phenomenon is most readily apparent on social media platforms, with their large piazzas and niche enclaves.",
"Here, multiple threat-centric narratives emerge and, often, over time are linked together into complex conspiracy theories (Tangher-lini et al., 2020).",
"Conspiracy theories, and their constituent threat narratives (legend, rumor, personal experience narrative) share a signature semantic structure: an implicitly accepted Insider group; a diverse group of threatening Outsider s; specific threats from the Outsider directed at the Insider s; details of how and why Outsider s are threatening; and a set of strategies proposed for the Insider s to counter these threats (Tangherlini, 2018).",
"Indeed, the Insider / Outsider groups are fundamental in most studies of belief narrative, and have been exhaustively studied in social theory and more specifically, in the context of conspiracy theories (Bodner et al., 2020; Barkun, 2013).",
"On social media, these narratives are negotiated one post at a time, expressing only short pieces of the immanent narrative whole",
"(Clover, 1986).",
"This gives rise to a new type of computational linguistic problem: Given a large enough corpus of social media text data, can one automatically distill semantically-labeled narratives",
"(potentially several overlapping ones)",
"that underlie the fragmentary conversational threads?",
"Recent work",
"(Shahsavari et al., 2020b; Tangherlini et al., 2020; Shahsavari et al., 2020a; Holur et al., 2021)",
"has shown considerable promise that such scalable automated algorithms can be designed.",
"An automated pipeline of interlocking machine learning modules decomposes the posts into actors, actants and their inter-actant relationships to create narrative networks via aggregation.",
"These network representations are interpretable on inspection , allowing for the easy identification of the various signature semantic structures: Insider s, 4975 Figure 1: A pair of inferred text segments labeled by NP2IO showing Insider-Outsider context-sensitivity: Colored spans are used to highlight noun phrases that are inferred",
"Outsider s, strategies for dealing with Outsider s and their attendant threats and, in the case of conspiracy theories, causal chains of events that support that theory.",
"By itself, this unsupervised platform does not understand the different narrative parts .",
"Since the submodules are not trained to look for specific semantic abstractions inherent in conspiracy theories, the platform cannot automatically generate a semantically tagged narrative for downstream NLP tasks.",
"It cannot, for example, generate a list across narratives of the various outside threats and attendant inside strategies being recommended on a social media forum, nor can it address why these threats and strategies are being discussed.",
"As a fundamental first step bringing in supervised information to enable automated narrative structure discovery, we introduce the Insider Outsider classification task: To classify the noun phrases in a post as Insider , Outsider or neither .",
"A working conceptualization of what we consider Insider s and Outsider s is provided in the following insets.",
"As with most NLP tasks, we do not provide formal definitions of and rules to determine these groups.",
"Instead we let a deep learning model learn the representations needed to capture these notions computationally by training on data annotated with human-generated labels.",
"Insiders: Some combination of actors and their associated pronouns, who display full agency",
"(people, organizations, govern-ment), partial agency",
"(policies, laws, rules, current events)",
"or no agency",
"(things, places, circumstances), with whom the author identifies",
"(including themselves).",
"These are often ascribed beneficial status; Outsiders: A set of actors whom the author opposes and, in many cases, perceives as threatening the author and the insiders with disruption or harm.",
"For our purposes, these agents need not have full agency : Diseases and natural disasters, for example, would be universal outsiders, and any man-made object/policy that works against the Insider s would be included in this group.",
"different categories is inspired by social categorization, identification and comparison in the well-established Social Identity Theory",
"(SIT)",
"(Tajfel et al., 1979; Tajfel, 1974)",
"and rests on established perspectives from Narrative Theory",
"(Dundes, 1962; Labov and Waletzky, 1967; Nicolaisen, 1987).",
"Following are some of the reasons why this classification task is challenging and why the concepts of Insider s/ Outsider s are not sufficiently captured by existing labeled datasets used in Sentiment 4976 Analysis",
"1. Commonly-held Beliefs and Worldviews: Comprehensively incorporating shared values, crucial to the classification of Insider s and Outsider s, is a task with varied complexity.",
"Some beliefs are easily enumerated: most humans share a perception of a nearly universal set of threats",
"(virus, bomb, cancer, dictatorship)",
"or threatening actions",
"(kills millions of people, tries to mind-control every-one)",
"or benevolent actions",
"(donating to a charitable cause, curing disease, freeing people).",
"Similarly, humans perceive themselves and their close family units as close, homogeneous groups with shared values, and therefore I, us, my children and my family are usually Insider s.",
"In contrast, they and them are most often Outsider s.",
"Abstract beliefs pose a greater challenge as the actions that encode them can be varied and subtle.",
"For example, in the post: The microchips in vaccines track us, the noun phrase microchips is in the Outsider category as it violates the Insider s' right to privacy by track[ing] us.",
"Thus, greater attention needs to be paid in labeling datasets, highlighting ideas such as the right to freedom, religious beliefs, and notions of equality.",
"2. Contextuality and Transitivity: People express their opinions of Insider/Outsider affiliation by adding contextual clues that are embedded in the language of social media posts.",
"For example, a post We should build cell phone towers suggests that cell phone towers are helpful to Insider s, whereas a post We should build cell phone towers and show people how it fries their brains suggests, in contrast, that cell phone towers are harmful to Insiders and belong, therefore, to the class of Outsider s.",
"Insider/Outsider affiliations are also implied in a transitive fashion within a post.",
"For example, consider two posts:",
"(i)",
"Bill Gates is developing a vaccine. Vaccines kill people. and",
"(ii)",
"Bill Gates is developing a vaccine. Vaccines can eradicate the pandemic.",
"In the first case, the vaccine's toxic quality and attendant Outsider status would transfer to Bill Gates, making him an Outsider as well; in the second post, vaccine's beneficial qualities would transfer to him, now making Bill Gates an Insider .",
"3. Model Requirement under Biased Data Conditions: Designing effective classifiers that do not inherit bias from the training data especially data in which particular groups or individuals are derided or dehumanized is a challenging but necessary task.",
"Because conspiracy theories evolve, building on earlier versions, and result in certain communities and individuals being othered, our models must learn the phrases, contexts, and transitivity used to ascribe group membership, here either Insider s or Outsider s and not memorize the communities and/or individuals being targeted.",
"Figure 1 illustrates an example where we probed our model to explore whether such a requirement is indeed satisfied.",
"The first text conforms to the bias in our data, where tech, Bill Gates, and vaccines are primarily Outsiders .",
"The second text switches the context by changing the phrases.",
"Our classifier is able to correctly label these same entities, now presented in a different context, as Insider s!",
"We believe that such subtle learning is possible because of the use of pretrained language models.",
"We provide several such examples in Table 3 and Figure 3 and also evaluate our model for Zero-shot learning in Table 1 and Figure 6.",
"Recent NLP efforts have examined the effectiveness of using pretrained Language Models (LM) such as BERT, DistilBERT, RoBERTa, and XLM to address downstream classification tasks through fine-tuning (Sanh et al., 2020; Liu et al., 2019; Lample and Conneau, 2019).",
"Pretraining establishes the contextual dependencies of language prior to addressing a more specialized task, enabling rapid and efficient transfer learning.",
"A crucial benefit of pretraining is that, in comparison to training a model from scratch, fewer labeled samples are necessary.",
"By fine-tuning a pretrained LM, one can subsequently achieve competitive or better performance on an NLP task.",
"As discussed in Section 2, since our model is required to be contextual and transitive , both of which are qualities that rely on the context embedded in language, we utilize a similar architecture.",
"In recent work involving span-based classification tasks, token-classification heads have proven to be very useful for tasks such as, Parts-of-Speech (POS) Tagging, Named Entity Recognition (NER) and variations of Sentiment Analysis (SA) (Yang et al., 2019; Vlad et al., 2019; Yin et al., 2020).",
"Since the Insider Outsider classification task is also set up as a noun phrase labeling task, our architecture uses a similar token-classification head on top of the pretrained LM backbone.",
"Current SA datasets' definitions of positive negative and neutral sentiments can be thought of as a particularized form of the Insider-Outsider classification task.",
"For example, among the popular datasets used for SA, Rotten Tomatoes, Yelp reviews (Socher et al., 2013) and others (Dong et al., 2014; Pontiki et al., 2014) implicitly associate a sentiment's origin to the post's author (source) (a single Insider ) and its intended target to a movie or restaurant (a single Outsider if the sentiment is negative or an Insider if positive ).",
"The post itself generally contains information about the target and particular aspects that the Insider found necessary to highlight.Inmorerecent SA work, such as Aspect-Based Sentiment Analysis (ABSA) (Gao et al., 2021; Li et al., 2019; Wang et al., 2021; Dai et al., 2021), researchers have developed models to extract sentiments positive, negative, neutral associated with particular aspects of a target entity .",
"One of the subtasks of ABSA, aspect-level sentiment classification (ALSC), has a form that is particularly close to the Insider Outsider classification.",
"Interpreted in the context of our task, the author of the post is an Insider although now there can potentially be multiple targets or aspects that need to be classified as Insider s and Outsider s.",
"Still, the constructed tasks in ABSA appear to not align well with the goal of Insider Outsider classification: 1) Datasets are not transitive : Individual posts appear to have only one agent that needs classification, or a set of agents, each with their own separate sets of descriptors; 2) The ALSC data is often at the sentence-level as opposed to post-level, limiting the context-space for inference.",
"Despite these obvious differences, we quantitatively verify our intuitions in Section 7.1, and show that ABSA models do not generalize to our dataset.",
"Closely related to ABSA is Stance Classification (SC) (also known as Stance Detection / Iden-tification), the task of identifying the stance of the text author ( in favor of , against or neutral ) toward a target (an entity, concept, event, idea, opinion, claim, topic, etc.)(Walker et al., 2012; Zhang et al., 2017; Kk and Can, 2021).",
"Unlike ABSA, the target in SC does not need to be embedded as a span within the context.",
"For example, a perfect SC model given an input for classification of context: This house would abolish the monarchy.",
"and target: Hereditary succession , would predict the Negative label (Bar-Haim et al., 2017; Du et al., 2017).",
"While SC appears to require a higher level of abstraction and, as a result, a model of higher complexity and better generalization power than those typically used for ABSA, current implementations of SC are limited by the finite set of queried targets; in other words, SC models currently do not generalize to unseen abstract targets.",
"Yet, in real-time social media, potential targets and agents exhibit a continuous process of emergence, combination and dissipation.",
"We seek to classify these shifting targets using the transitive property of language, and would like the language to provide clues about the class of one span relative to another.",
"Ultimately, while SC models are a valuable step in the direction of better semantic understanding, they are ill-suited to our task.",
"Parallel to this work in SA, there are complementary efforts in consensus threat detection on social media (Wester et al., 2016; Kandias et al., 2013; Park et al., 2018), a task that broadly attempts to classify longer segments of text such as comments on YouTube or tweets on Twitter as more general threats.",
"The nuanced instruction to the labelers of the data is to identify whether the author of the post is an Outsider from the labeler's perspective as an Insider .",
"Once again, we observe that this task aligns with the Insider Outsider paradigm, but does not exhaust it, and the underlying models cannot accomplish our task.",
"The sets of Insider s and Outsider s comprise a higher-order belief system that cannot be adequately captured with the current working definitions of sentiment nor the currently available datasets.",
"This problem presents a primary motivation for creating a new dataset .",
"For example, the post: Microchips are telling the government where we are, does not directly feature a form of prototypical sentiment associated with microchips, the government and we, yet clearly insinuates an invasion on our right to privacy making clear the Insider s (we) and Outsider s (microchips, the government) in the post.",
"To construct our novel dataset C onspiracy T heory-5000 ( CT5K ) we designed crawlers to extract a corpus of social media posts generated by the underlying narrative framework of vaccine hesitancy (Details of the crawlers are documented in Appendix A.1).",
"Vaccine hesitancy is a remarkably resilient belief fueled by conspiracy theories that overlaps with multiple other narratives including ones addressing depopulation, government 4978 overreach and the deep state, limits on freedom of choice and Satanism.",
"The belief's evolution on social media has already enabled researchers to take the first steps in modeling critical parts of the underlying generative models that drive antivaccination conversations on the internet (Tangher-lini et al., 2016; Bandari et al., 2017).",
"Moreover, vaccine hesitancy is especially relevant in the context of the ongoing COVID-19 pandemic (Burki,",
"2020).Onthe crawled corpus, we extract the noun-chunks from each post using SpaCy's noun chunk extraction module and dependency parsers (Hon-nibal and Johnson, 2015).",
"A noun chunk is a subtree of the dependency parse tree, the headword of which is a noun.",
"The result is a set of post-phrase pairs, ( p , n ) , where p is a post and n is one of the noun phrases extracted from the post.",
"Amazon Mechanical Turk (AMT) (see Appendix A.2 for labeler instructions) was used to label the post-phrase pairs.",
"For each pair, the labeler was asked, given the context , whether the writer of the post p perceives the noun phrase n to be an Insider , Outsider or neither (N/A).",
"The labeler then provides a label c C , where C = { Insider , Outsider , N/A } (hence |C| = 3 ).",
"The triplets of post-phrase pairs along with their labels form the dataset D = (cid:8)(cid:0) ( p i , n i ) , c i (cid:1)(cid:9) |D| i =1 .",
"Note that a single post can appear in multiple triplets, because multiple different noun phrases can be extracted and labeled from a single post.",
"The overall class distribution and a few conditional class distributions across the labeled samples for several particular noun phrases are provided in Figure 5 in the Appendix B. Manual inspection of the labeled samples (( p , n ) , c ) suggests that the quality of the dataset is good ( < 10% misclassified by random sam-pling).",
"The now-labeled CT5K dataset (Holur et al., 2022) 1 ( |D| = 5000 samples) is split into training ( 90% ), and 10% testing sets.",
"10% of the training set is held out for validation.",
"The final training set is 20 -fold augmented by BERT-driven multi-token insertion (Ma, 2019).",
"The N ounP hrase-toI nsider O utsider (NP2IO) model 2 adopts a token classification architecture comprising a BERT-like pre-trained backbone and a softmax classifier on top of the backbone.",
"Token-1 See: Data and Model Checkpoints 2 Code Repository: NP2IO level labels are induced from the span-level labels for the fine-tuning over CT5K, and the span-level labeling of noun phrases is done through majority vote during inference.",
"An outline of the fine-tuning pipeline is provided in Figure 2.",
"Given a labeled example (( p , n ) , c ) , the model labels each token t i in the post p = [ t 1 , . . . , t N ] , where N is the number of tokens in the post p .",
"The BERT-like backbone embeds each token t i into a contextual representation i R d (for example, d = 768 for BERT-base or RoBERTa-base).",
"The embedding is then passed to the softmax classification layer i Softmax( WT i + b ) (1) where i |C| is the Insider Outsider classification prediction probability vector of the i th token, and W R d |C| and b R |C| are the parameters of the classifier.",
"The ground truth class label c accounts for all occurrences of the noun phrase n in the post p .",
"We use this span-level label to induce the token-level label and facilitate the computation of the fine-tuning loss.",
"Concretely, consider the spans where the noun phrase n occurs in the post p : S n = { s 1 , . . . , s M } , where s j S n denotes the span of the j th occurrence of n , and M is the number of occurrences of n in p .",
"Each span is a sequence of one or more tokens.",
"The set of tokens appearing in one of these labeled spans is: T n = { t p | s S n s.t. t s } .",
"We define the fine-tuning loss L of the labeled example (( p , n ) , c ) as the cross-entropy (CE) loss computed over T n using c as the label for each token in it,",
"i : t i T n where ( i ) c denotes the prediction probability for the class c C of the i th token.",
"The fine-tuning is done with mini-batch gradient descent for the classification layer and a number of self-attention layers in the backbone.",
"The number of fine-tuned self-attention layers is a hyperparameter.",
"The scope of hyperparameter tuning is provided in Table 4.",
"During fine-tuning, we extend the label of a noun phrase to all of its constituent tokens; during inference, conversely, we summarize constituent token labels to classify the noun phrases by a majority vote.",
"For a pair of post and noun-phrase ( p , n ) , assuming the definition of { t i } Ni =1 , { i } Ni =1 and T n from the Section 5.1, the Insider Outsider label prediction c is given by c = arg max k (cid:88) i : t i T n 1 { k =(arg max ( i ) ) } .",
"Now c can be compared to c with a number of classification evaluation metrics.",
"Visual display of individual inference results such as those in Figure 1 are supported by displaCy (Honnibal and Mon-tani, 2017).",
"In this section, we list baselines that we compare to our model's performance ordered by increasing parameter complexity.",
"Random Model (RND): Given a sample from the testing set { p , n } , c is randomly selected with uniform distribution from C = { Insider , Outsider , N/A } .",
"post-phrase pair ( p , n ) , give a fixed classification prediction: c = Insider (DET-I), c = Outsider (DET-O) or c = N/A (DET-NA).",
"Nave Bayes Model (NB / NB-L): Given a training set, the nave Bayes classifier estimates the likelihood of each class conditioned on a noun chunk PC , N ( c | n ) assuming its indepen-dence w.r.t. the surrounding context.",
"That is, a noun phrase predicted more frequently in the training-set as an Insider will be predicted as an Insider during the inference, regardless of the context.",
"For noun phrases not encountered during training, the uniform prior distribution over C is used for the prediction.",
"The noun chunk may be lemmatized (by word) during training and testing to shrink the conditioned event space.",
"We abbreviate the nave Bayes model without lemmatization as NB, and the one with lemmatization as NB-L.",
"GloVe+CBOW+XGBoost (CBOW 1/2/5): This baseline takes into account the context of a post but uses global word embeddings, instead of contextual-embeddings.",
"A window length w is fixed such that for each noun phrase, we extract the w words before and w words after the noun phrase, creating a set of context words, S w .",
"Stopwords are filtered, and the remaining con-4980 text words are lemmatized and encoded via 300 dimensional GloVe (Pennington et al., 2014).",
"The Continuous Bag of Words (CBOW) model (Mikolov et al., 2013) averages the representative GloVe vectors in S w to create an aggregate contextual vector for the noun phrase.",
"XGBoost (Chen and Guestrin, 2016) is used to classify the aggregated contextual vector.",
"The same model is applied on the test set to generate labels.",
"We consider window lengths of 1 , 2 and 5 (CBOW-1, CBOW-2 and CBOW-5 respectively).",
"7 Results and Evaluation Comparison of NP2IO to baselines is provided in Table 1. The random (RND) and deterministic (DET-I, DET-O, DET-NA) models perform poorly.",
"We present these results to get a better sense of the unbalanced nature of the labels in the CT5K dataset (see Figure 5).",
"The nave Bayes model (NB) and its lemmatized form (NB-L) outperform the trivial baselines.",
"However, they perform worse than the two contextual models, GloVe+CBOW+XGBoost and NP2IO.",
"This fact validates a crucial property of our dataset: Despite the bias in the gold standard labels for particular noun phrases such as I,they and mi-crochip see Figure 5 in Appendix B context dependence plays a crucial role in Insider-Outsider classification.",
"Furthermore, NP2IO outperforms GloVe+CBOW+XGBoost (CBOW-1, CBOW-2, CBOW-5) summarily.",
"While both types of models employ context-dependence to classify noun phrases, NP2IO does so more effectively.",
"The fine-tuning loss convergence plot for the optimal performing NP2IO model is presented in Figure 4 in Appendix B and model checkpoints are uploaded in the data repository.",
"Given the limitations of current ABSA datasets for our task (see Section 2 and Section 3), we computationally show that CT5K is indeed a different dataset, particularly in comparison to other classical ones in Table 2. For this experiment, we train near-state-of-the-art ABSA models with RoBERTa-base backbone (Dai et al., 2021) on three popular ABSA datasets Laptop reviews and Restaurant reviews from SemEval 2014 task 4 (Pontiki et al., 2014), and Tweets (Dong et al., 2014).",
"Each trained model is then evaluated on all three datasets as well as the test set of CT5K.",
"The Insider class in CT5K is mapped to the positive sentiment and the Outsider class to the negative sentiment.",
"The F1-macro scores of the models trained and tested among the three ABSA datasets are much higher than the scores when testing on the CT5K dataset.",
"Clearly, models that are successful with typical ABSA datasets do not effectively generalize to CT5K, suggesting that our dataset is different.",
"A challenge for any model, such as NP2IO, is zero-shot performance, when it encounters noun phrases never tagged during training.",
"Answering this question offers a means for validating the context-dependence requirement, mentioned in Section 2. This evaluation is conducted on a subset of the entire testing set: A sample of the subset { p , n } is such that the word-lemmatized, stopword-removed form of n does not exist in the set of word-lemmatized, stopword-removed noun phrases seen during training.",
"We extract 30% of test samples to be in this set.",
"The results are presented in Table 1. As expected, the performance of the nave Bayes models (NB, NB-L) degrades severely to random.",
"The performance of the contextual models CBOW-1/2/5, and NP2IO stay strong, suggesting effective context sensitivity in inferring the correct labels for these models.",
"A visualization of the zero-shot capabilities of NP2IO on unseen noun phrases is presented in Figure 6 in Appendix B. 7.3 Does NP2IO Memorize?",
"We construct a set of adversarial samples to evaluate the extent to which NP2IO accurately classifies a noun phrase that has a highly-biased label distribution in CT5K.",
"We consider 3 noun phrases in particular: microchip, government, and chemi-cal.",
"Each of these has been largely labeled as Outsider s.",
"The adversarial samples for each phrase, in contrast, are manually aggregated ( 5 seed posts augmented 20 times each) to suggest that the phrase is an Insider (see Table 5 in Appendix B for the seed posts).",
"We compute the recall of NP2IO in detecting these Insider labels (results in Table 3).",
"NP2IO is moderately robust against adversarial attacks: In other words, highly-skewed distributions of labels for noun phrases in our dataset do not appear to imbue a similar drastic bias into our model.",
"We presented a challenging Insider Outsider classification task, a novel framework necessary for addressing burgeoning misinformation and the proliferation of threat narratives on social media.",
"We compiled a labeled CT5K dataset of conspiracy-theoretic posts from multiple social media platforms and presented a competitive NP2IO model that outperforms non-trivial baselines.",
"We have demonstrated that NP2IO is contextual and transitive via its zero-shot performance, adversarial studies and qualitative studies.",
"We have also shown that the CT5K dataset consists of underlying information that is different from existing ABSA datasets.",
"Given NP2IO's ability to identify Insider s and Outsider s in a text segment, we can extend the inference engine to an entire set of interrelated samples in order to extract, visualize and interpret the underlying narrative (see Figure 3).",
"This marks a first and significant step in teasing out narratives from fragmentary social media records, with many of its essential semantic parts such as, Insider / Outsider tagged in an automated fashion.",
"As extensive evaluations of the NP2IO model show, our engine has learned the causal phrases used to designate the labels.",
"We believe an immediate future work can identify such causal phrases, yet another step toward semantic understanding of the parts of a narrative.",
"Broadly, work similar to this promises to expedite the development of models that rely on a computational foundation of structured information, and that are better at explaining causal chains of inference, a particularly important feature in the tackling of misinforma-4982 Figure 3: An actor-actant subnarrative network constructed from social media posts: Selected posts from anti-vaccination forums such as qresearch on 4chan were decomposed into relationship tuples using a state-of-the-art relationship extraction pipeline from previous work (Tangherlini et al., 2020) and these relationships are overlayed with the inferences from NP2IO.",
"tion.",
"Indeed, NP2IO's success has answered the question: Which side are you on?",
"What remains to be synthesized from language is: Why?"
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"other",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"result",
"objective",
"abstain",
"result",
"method",
"other",
"abstain",
"abstain",
"abstain"
] |
[
"Clinical notes are text documents that are created by clinicians for each patient encounter.",
"They are typically accompanied by medical codes, which describe the diagnosis and treatment.",
"Annotating these codes is labor intensive and error prone; furthermore, the connection between the codes and the text is not annotated, obscuring the reasons and details behind specific diagnoses and treatments.",
"We present an attentional convolutional network that predicts medical codes from clinical text.",
"Our method aggregates information across the document using a convolutional neural network, and uses an attention mechanism to select the most relevant segments for each of the thousands of possible codes.",
"The method is accurate, achieving precision @8 of 0.71 and a Micro-F1 of 0.54, which are both better than the prior state of the art.",
"Furthermore, through an interpretability evaluation by a physician, we show that the attention mechanism identi-fies meaningful explanations for each code assignment.",
"Clinical notes are free text narratives generated by clinicians during patient encounters.",
"They are typically accompanied by a set of metadata codes from the International Classification of Diseases (ICD), which present a standardized way of indicating diagnoses and procedures that were performed during the encounter.",
"ICD codes have a variety of uses, ranging from billing to predictive modeling of patient state (Choi et al., 2016; Ranganath et al., 2015; Denny et al., 2010; Avati et al., 2017).",
"Because manual coding is time-consuming and error-prone, automatic coding has been studied since at least the 1990s (de Lima et al., 1998).",
"The task is dicult for two main reasons.",
"First, the label space is very high-dimensional, with over 15,000 codes in the ICD-9 taxonomy, and over 140,000 codes combined in the newer ICD-10-CM and ICD-10-PCS taxonomies (World Health Organization, 2016).",
"Second, clinical text includes irrelevant information, misspellings and non-standard abbreviations, and a large medical vocabulary.",
"These features combine to make the prediction of ICD codes from clinical notes an especially dicult task, for computers and human coders alike (Birman-Deych et al., 2005).",
"In this application paper, we develop convolutional neural network (CNN)-based methods for automatic ICD code assignment based on text discharge summaries from intensive care unit (ICU) stays.",
"To better adapt to the multi-label setting, we employ a per-label attention mechanism, which allows our model to learn distinct document representations for each label.",
"We call our method C onvolutional A ttention for M ultiL abel classification (CAML).",
"Our model design is motivated by the conjecture that important information correlated with a code's presence may be contained in short snippets of text which could be anywhere in the document, and that these snippets likely dier for dierent labels.",
"To cope with the large label space, we exploit the textual descriptions of each code to guide our model towards appropriate parameters: in the absence of many labeled examples for a given code, its parameters should be similar to those of codes with similar textual descriptions.",
"We evaluate our approach on two versions of MIMIC (Johnson et al., 2016), an open dataset of ICU medical records.",
"Each record includes a variety of narrative notes describing a patient's stay, including diagnoses and procedures.",
"Our approach substantially outperforms previous results on medical code prediction on both MIMIC-II and MIMIC-III datasets.",
"We consider applications of this work in a decision support setting.",
"Interpretability is important for any decision support system, especially in the 1101 934.1 : Foreign body in main bronchus CAML (HI) ...line placed bronchoscopy performed showing large mucus plug on the left on transfer to...",
"are not given.",
"An I' marking indicates a snippet evaluated as informative, and HI' indicates that it is highly informative; see 4 for more details.",
"medical domain.",
"The system should be able to explain why it predicted each code; even if the codes are manually annotated, it is desirable to explain what parts of the text are most relevant to each code.",
"These considerations further motivate our per-label attention mechanism, which assigns importance values to -grams in the input document, and which can therefore provide explanations for each code, in the form of extracted snippets of text from the input document.",
"We perform a human evaluation of the quality of the explanations provided by the attention mechanism, asking a physician to rate the informativeness of a set of automatically generated explanations.",
"1 2 Method We treat ICD-9 code prediction as a multilabel text classification problem (McCallum, 1999).",
"2 Let represent the set of ICD-9 codes; the labeling problem for instance is to determine , {0 , 1} for all .",
"We train a neural network which passes text through a convolutional layer to compute a base representation of the text of each document (Kim, 2014), and makes | | binary classifi-1 Our code, data splits, and pre-trained models are available at github.com/jamesmullenbach/ caml-mimic .",
"cation decisions.",
"Rather than aggregating across this representation with a pooling operation, we apply an attention mechanism to select the parts of the document that are most relevant for each possible code.",
"These attention weights are then applied to the base representation, and the result is passed through an output layer, using a sigmoid transformation to compute the likelihood of each code.",
"We employ a regularizer to encourage each code's parameters to be similar to those of codes with similar textual descriptions.",
"We now describe each of these elements in more detail.",
"At the base layer of the model, we have dimensional pre-trained embeddings for each word in the document, which are horizontally concatenated into the matrix = [ 1 , 2 , , ] , where is the length of the document.",
"Adjacent word embeddings are combined using a convolutional filter , where is the filter width, the size of the input embedding, and the size of the filter output.",
"At each step , we compute = ( + 1 + ) , (1) where denotes the convolution operator, is an element-wise nonlinear transformation, and is the bias.",
"We additionally pad each side of 1102 the input with zeros so that the resulting matrix has dimension .",
"After convolution, the document is represented by the matrix .",
"It is typical to reduce this matrix to a vector by applying pooling across the length of document, by selecting the maximum or average value at each row (Kim, 2014).",
"However, our goal is to assign multiple labels (i.e., medical codes) for each document, and dierent parts of the base representation may be relevant for dierent labels.",
"For this reason, we apply a per-label attention mechanism.",
"An additional benefit is that it selects the -grams from the text that are most relevant to each predicted label.",
"Formally, for each label , we compute the matrix-vector product, , where is a vector parameter for label .",
"We then pass the resulting vector through a softmax operator, obtaining a distribution over locations in the document, = SoftMax ( ) , (2) where SoftMax ( ) = exp( ) exp( ) , and exp( ) is the element-wise exponentiation of the vector .",
"The attention vector is then used to compute vector representations for each label, = =1 , .",
"As a baseline model, we instead use max-pooling to compute a single vector for all labels,",
"Given the vector document representation , we compute a probability for label using another linear layer and a sigmoid transformation:",
"where is a vector of prediction weights, and is a scalar oset.",
"The overall model is illustrated in Figure",
"1. Figure 1: CAML architecture with per-label attention shown for one label.",
"The training procedure minimizes the binary cross-entropy loss,",
"Due to the dimensionality of the label space, many codes are rarely observed in the labeled data.",
"To improve performance on these codes, we use text descriptions of each code from the World Health Organization (2016).",
"Examples can be found in Table 1, next to the code numbers.",
"We use these descriptions to build a secondary module in our network that learns to embed them as vectors.",
"These vectors are then used as the target of regularization on the model parameters .",
"If code is rarely observed in the training data, this regularizer will encourage its parameters to be similar to those of other codes with similar descriptions.",
"The code embedding module consists of a max-pooling CNN architecture.",
"Let be a max-pooled vector, obtained by passing the description for code into the module.",
"Let be the number of true labels in a training example.",
"We add the following regularizing objective to our loss , ( , ) = BCE + 1 =1 2 , (7) 1103 where is a tradeo hyperparameter that calibrates the performance of the two objectives.",
"We call this model variant Description Regularized-CAML (DR-CAML).",
"This section evaluates the accuracy of code prediction, comparing our models against several competitive baselines.",
"MIMIC-III (Johnson et al., 2016) is an open-access dataset of text and structured records from a hospital ICU.",
"Following previous work, we focus on discharge summaries, which condense information about a stay into a single document.",
"In MIMIC-III, some admissions have addenda to their summary, which we concatenate to form one document.",
"Each admission is tagged by human coders with a set of ICD-9 codes, describing both diagnoses and procedures which occurred during the patient's stay.",
"There are 8,921 unique ICD-9 codes present in our datasets, including 6,918 diagnosis codes and 2,003 procedure codes.",
"Some patients have multiple admissions and therefore multiple discharge summaries; we split the data by patient ID, so that no patient appears in both the training and test sets.",
"In this full-label setting, we use a set of 47,724 discharge summaries from 36,998 patients for training, with 1,632 summaries and 3,372 summaries for validation and testing, respectively.",
"Secondary evaluations For comparison with prior work, we also follow Shi et al. (2017) and train and evaluate on a label set consisting of the 50 most frequent labels.",
"In this setting, we filter each dataset down to the instances that have at least one of the top 50 most frequent codes, and subset the training data to equal the size of the training set of Shi et al. (2017), resulting in 8,067 summaries for training, 1,574 for validation, and 1,730 for testing.",
"We also run experiments with the MIMIC-II dataset, to compare with prior work by Baumel et al. (2018) and Perotte et al. (2013).",
"We use the train/test split of Perotte et al. (2013), which consists of 20,533 training examples and 2,282 testing examples.",
"Detailed statistics for the three settings are summarized in Table",
"2. Preprocessing We remove tokens that contain no alphabetic characters (e.g., removing 500 but keeping 250mg), lowercase all tokens, and replace tokens that appear in fewer than three training documents with an UNK' token.",
"We pretrain word embeddings of size = 100 using the word2vec CBOW method (Mikolov et al., 2013) on the preprocessed text from all discharge summaries.",
"All documents are truncated to a maximum length of 2500 tokens.",
"We compare against the following baselines:",
"a single-layer one-dimensional convolutional neural network (Kim, 2014); a bag-of-words logistic regression model; a bidirectional gated recurrent unit (Bi-GRU).",
"3 For the CNN and Bi-GRU, we initialize the embedding weights using the same pretrained word2vec vectors that we use for the CAML models.",
"All neural models are implemented using PyTorch 4 .",
"The logistic regression model consists of | | binary one-vs-rest classifiers acting on unigram bag-of-words features for all labels present in the training data.",
"If a label is not present in the training data, the model will never predict it in the held-out data.",
"Parameter tuning We tune the hyperparameters of the CAML model and the neural baselines using the Spearmint Bayesian optimization package (Snoek et al., 2012; Swersky et al., 2013).",
"5 We allow Spearmint to sample parameter values for the L2 penalty on the model weights and learning rate , as well as filter size , number of filters , and dropout probability for the convolutional models, and number of hidden layers of dimension for the Bi-GRU, using precision @8 on the MIMIC-III full-label validation set as the performance measure.",
"We use these parameters for DR-CAML as well, and port the optimized parameters to the MIMIC-II full-label and MIMIC-III 50-label models, and manually fine-tune the learning rate in these settings.",
"We select for DR-CAML based on pilot experiments on the validation sets.",
"Hyperparameter tuning is summarized in Table",
"3. Convolutional models are trained with dropout after the 3 Our pilot experiments found that GRU was stronger than long short-term memory (LSTM) for this task.",
"embedding layer.",
"We use a fixed batch size of 16 for all models and datasets.",
"Models are trained with early stopping on the validation set; training terminates after the precision@8 does not improve for 10 epochs, and the model at the time of the highest precision@8 is used on the test set.",
"To facilitate comparison with both future and prior work, we report a variety of metrics, focusing on the micro-averaged and macro-averaged F1 and area under the ROC curve (AUC).",
"Micro-averaged values are calculated by treating each (text, code) pair as a separate prediction.",
"Macro-averaged values, while less frequently reported in the multilabel classification literature, are calculated by averaging metrics computed per-label.",
"For recall, the metrics are distinguished as follows: Micro-R = | | =1 TP | | =1 TP + FN (8) Macro-R = 1 | | | | =1 TP TP + FN , (9) where TP denotes true positive examples and FN denotes false negative examples.",
"Precision is computed analogously.",
"The macro-averaged metrics place much more emphasis on rare label prediction.",
"We also report precision at (denoted as P@n'), which is the fraction of the highest-scored labels that are present in the ground truth.",
"This is motivated by the potential use case as a decision support application, in which a user is presented with a fixed number of predicted codes to review.",
"In such a case, it is more suitable to select a model with high precision than high recall.",
"We choose = 5 and = 8 to compare with prior work (Vani et al., 2017; Prakash et al., 2017).",
"For the MIMIC-III full label setting, we also compute precision@15, which roughly corresponds to the average number of codes in MIMIC-III discharge summaries (Table 2).",
"Our main quantitative evaluation involves predicting the full set of ICD-9 codes based on the text of the MIMIC-III discharge summaries.",
"These results are shown in Table",
"4. The CAML model gives the strongest results on all metrics.",
"Attention yields substantial improvements over the vanilla convolutional neural network (CNN).",
"The recurrent Bi-GRU architecture is comparable to the vanilla CNN, and the logistic regression baseline is substantially worse than all neural architectures.",
"The best-performing CNN model has 9.86M tunable parameters, compared with 6.14M tunable parameters for CAML.",
"This is due to the hyperparameter search preferring a larger number of filters for the CNN.",
"Finally, we observe that the DR-CAML performs worse on most metrics than CAML, with a tuned regularization coecient of = 0 .",
"01 .",
"Among prior work, only Scheurwegs et al. (2017) evaluate on the full ICD-9 code set for MIMIC-III.",
"Their reported results distinguished between diagnosis codes and procedure codes.",
"The CAML models are stronger on both sets.",
"Additionally, our method does not make use of any external information or structured data, while 1105 AUC F1 P@n Model Macro Micro Macro Micro Diag Proc 8 15 Scheurwegs et.",
"Scheurwegs et al. use structured data and various medical ontologies in their text representation.",
"We feel that precision@8 is the most informative of the metrics, as it measures the ability of the system to return a small high-confidence subset of codes.",
"Even with a space of thousands of labels, our models achieve relatively high precision: of the eight most confident predictions, on average 5.5 are correct.",
"It is also apparent how dicult it is to achieve high Macro-F1 scores, due to the metric's emphasis on rare-label performance.",
"To put these results in context, a hypothetical system that performs perfectly on the 500 most common labels, and ignores all others, would achieve a Macro-F1 of 0.052 and a Micro-F1 of 0.842.",
"Secondary evaluations To compare with prior published work, we also evaluate on the 50 most common codes in MIMIC-III (Table 5), and on MIMIC-II (Table 6).",
"We report DR-CAML results on the 50-label setting of MIMIC-III with = 10 , and on MIMIC-II with = 0 .",
"1 , which were determined by grid search on a validation set.",
"The other hyperparameters were left at the settings for the main MIMIC-III evaluation, as described in Table",
"3. In the 50-label setting of MIMIC-III, we see strong improvement over prior work in all reported metrics, as well as against the baselines, with the exception of precision@5, on which the CNN baseline performs best.",
"We hypothesize that this is because the relatively large value of = 10 for CAML leads to a larger network that is more suited to larger datasets; tuning CAML's hyperparameters on this dataset would be expected to improve performance on all metrics.",
"Baumel et al. (2018) additionally report a micro-F1 score of 0.407 by training on MIMIC-III, and evaluating on MIMIC-II.",
"Our model achieves better performance using only the (smaller) MIMIC-II training set, leaving this alternative training protocol for future work.",
"We now evaluate the explanations generated by CAML's attention mechanism, in comparison with three alternative heuristics.",
"A physician was presented with explanations from four methods, using a random sample of 100 predicted codes from the MIMIC-III full-label test set.",
"The most important -gram from each method was extracted, along with a window of five words on either side for context.",
"We select = 4 in this setting to emulate a span of attention over words likely to be given by a human reader.",
"Examples can be found in Table",
"1. Observe that the snippets may overlap in multiple words.",
"We prompted the evaluator to select all text snippets which he felt adequately explained the presence of a given code, provided the code and its description, with the option to distinguish snippets as highly informative should they be found particularly informative over others.",
"CAML The attention mechanism allows us to extract -grams from the text that are most influ-ential in the prediction of each label, by taking the argmax of the SoftMax output .",
"Max-pooling CNN We select the -grams that provide the maximum value selected by max-pooling at least once and weighting by the final layer weights.",
"Defining an argmax vector which 1106 AUC F1 Model Macro Micro Macro Micro P@5 C-MemNN (Prakash et al., 2017) 0.833 0.42 Shi et al. (2017) 0.900 0.532 Logistic Regression 0.829 0.864 0.477 0.533 0.546 CNN 0.876 0.907 0.576 * 0.625 0.620 Bi-GRU 0.828 0.868 0.484 0.549 0.591 CAML 0.875 0.909 0.532 0.614 0.609 DR-CAML 0.884 * 0.916 0.576 * 0.633 0.618 Table 5: Results on MIMIC-III, 50 labels.",
"results from the max-pooling step as = arg max {1 , , +1} ( ) , (10) we can compute the importance of position for label , = = , .",
"(11)",
"Logistic regression The informativeness of each -gram with respect to label is scored by the sum of the coecients of the weight matrix for , over the words in the -gram.",
"The top-scoring -gram is then returned as the explanation.",
"Code descriptions Finally, we calculate a word similarity metric between each stemmed -gram and the stemmed ICD-9 code description.",
"We compute the idf-weighted cosine similarity, with idf weights calculated on the corpus consisting of all notes and relevant code descriptions.",
"We then select the argmax over -grams in the document, breaking ties by selecting the first occurrence.",
"We remove those note-label pairs for which no -gram has a score greater than 0, which gives an unfair advantage to this baseline.",
"The results of the interpretability evaluation are presented in Table 7.",
"Our model selects the greatest number of highly informative explanations, and selects more informative explanations than both the CNN baseline and the logistic regression model.",
"While the cosine similarity metric also performs well, the examples in Table 1 demonstrate the strengths of CAML in extracting text snippets in line with more intuitive explanations for the presence of a code.",
"As noted above, there exist some cases, which we exclude, where the cosine similarity method is unable to provide any explanation, because no -grams in a note have a nonzero similarity for a given label description.",
"This occurs for about 12 % of all note-label pairs in the test set.",
"Attentional Convolution for NLP CNNs have been successfully applied to tasks such as sentiment classification (Kim, 2014) and language modeling (Dauphin et al., 2017).",
"Our work combines convolution with attention (Bahdanau et al., 2015; Yang et al., 2016) to select the most relevant parts of the discharge summary.",
"Other recent work has combined convolution and attention (e.g., Allamanis et al., 2016; Yin et al., 2016; dos Santos et al., 2016; Yin and Schtze, 2017).",
"Our attention mechanism is most similar to those of Yang et al. (2016) and Allamanis et al. (2016), in that we use context vectors to compute attention over specific locations in the text.",
"Our work diers in that we compute separate attention weights for each label in our label space, which is better tuned to our goal of selecting locations in a document which are most important for predicting specific labels.",
"Automatic ICD coding ICD coding is a longstanding task in the medical informatics community, which has been approached with machine learning and handcrafted methods (Scheurwegs et al., 2015).",
"Many recent approaches, like ours, use unstructured text data as the only source of information (e.g., Kavuluru et al., 2015; Subotin and Davis, 2014), though some incorporates struc-1107 AUC F1 Model Macro Micro Macro Micro P@8 Flat SVM (Perotte et al., 2013) 0.293 HA-GRU (Baumel et al., 2018) 0.366 Logistic Regression 0.690 0.934 0.025 0.314 0.425 CNN 0.742 0.941 0.030 0.332 0.388 Bi-GRU 0.780 0.954 0.024 0.359 0.420 CAML 0.820 0.966 * 0.048 0.442 0.523 * DR-CAML 0.826 0.966 * 0.049 0.457 * 0.515 Table 6: Results on MIMIC-II full, 5031 labels.",
"tured data as well (e.g., Scheurwegs et al., 2017; Wang et al., 2016).",
"Most previous methods have either evaluated only on a strict subset of the full ICD label space (Wang et al., 2016), relied on datasets that focus on a subset of medical scenarios (Zhang et al., 2017), or evaluated on data that are not publicly available, making direct comparison dicult (Subotin and Davis, 2016).",
"A recent shared task for ICD-10 coding focused on coding of death certificates in English and French (Nvol et al., 2017).",
"This dataset also contains shorter documents than those we consider, with an average of 18 tokens per certificate in the French corpus.",
"We use the open-access MIMIC datasets containing de-identified, general-purpose records of intensive care unit stays at a single hospital.",
"Perotte et al. (2013) use flat and hierarchical SVMs; the former treats each code as an individual prediction, while the latter trains on child codes only if the parent code is present, and predicts on child codes only if the parent code was positively predicted.",
"Scheurwegs et al. (2017) use a feature selection approach to ICD-9 and ICD-10 classification, incorporating structured and unstructured text information from EHRs.",
"They evaluate over various medical specialties and on the MIMIC-III dataset.",
"We compare directly to their results on the full label set of MIMIC-III.",
"Other recent approaches have employed neural network architectures.",
"Baumel et al. (2018) apply recurrent networks with hierarchical sentence and word attention (the HA-GRU) to classify ICD9 diagnosis codes while providing insights into the model decision process.",
"Similarly, Shi et al. (2017) applied character-aware LSTMs to generate sentence representations from specific subsections of discharge summaries, and apply attention to form a soft matching between the representations and the top 50 codes.",
"Prakash et al. (2017) use memory networks that draw from discharge summaries as well as Wikipedia, to predict top-50 and top-100 codes.",
"Another recent neural architecture is the Grounded Recurrent Neural Network (Vani et al., 2017), which employs a modified GRU with dimensions dedicated to predicting the presence of individual labels.",
"We compare directly with published results from all of these papers, except Vani et al. (2017), who evaluate on only a 5000 code subset of ICD-9.",
"Empirically, the CAML architecture proposed in this paper yields stronger results across all experimental conditions.",
"We attribute these improvements to the attention mechanism, which focuses on the most critical features for each code, rather than applying a uniform pooling operation for all codes.",
"We also observed that convolution-based models are at least as effective, and significantly more computationally ef-ficient, than recurrent neural networks such as the Bi-GRU.",
"of this work is that the code predictions be explainable from features of the text.",
"Prior work has also em-1108 phasized explainability.",
"Lei et al. (2016) model rationales through a latent variable, which tags each word as relevant to the document label.",
"Li et al. (2016) compute the salience of individual words by the derivative of the label score with respect to the word embedding.",
"Ribeiro et al. (2016) use submodular optimization to select a subset of features that closely approximate a specific classification decision (this work is also notable for extensive human evaluations).",
"In comparison to these approaches, we employ a relatively simple attentional architecture; this simplicity is motivated by the challenge of scaling to multi-label classification with thousands of possible labels.",
"Other prior work has emphasized the use of attention for highlighting salient features of the text (e.g., Rush et al., 2015; Rocktschel et al., 2016), although these papers did not perform human evaluations of the interpretability of the features selected by the attention mechanism.",
"We present CAML, a convolutional neural network for multi-label document classification, which employs an attention mechanism to adaptively pool the convolution output for each label, learning to identify highly-predictive locations for each label.",
"CAML yields strong improvements over previous metrics on several formulations of the ICD-9 code prediction task, while providing satisfactory explanations for its predictions.",
"Although we focus on a clinical setting, CAML is extensible without modification to other multi-label document tagging tasks, including ICD-10 coding.",
"We see a number of directions for future work.",
"From the linguistic side, we plan to integrate the document structure of discharge summaries in MIMIC-III, and to better handle non-standard writing and other sources of out-of-vocabulary tokens.",
"From the application perspective, we plan to build models that leverage hierarchy of ICD codes (Choi et al., 2016), and to attempt the more dicult task of predicting diagnosis and treatment codes for future visits from discharge summaries.",
"Acknowledgments Helpful feedback was provided by the anonymous reviewers, and by the members of the Georgia Tech Computational Linguistics lab.",
"The project was partially supported by project HDTRA1-15-1-0019 from the Defense Threat Reduction Agency, by the National Science Foundation under awards IIS-1418511 and CCF-1533768, by the National Institutes of Health under awards 1R01MD011682-01 and R56HL138415, by Children's Healthcare of Atlanta, and by UCB."
] | [
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"The performance of Part-of-Speech tagging varies significantly across the treebanks of the Universal Dependencies project.",
"This work points out that these variations may result from divergences between the annotation of train and test sets.",
"We show how the annotation variation principle, introduced by Dickinson and Meurers (2003) to automatically detect errors in gold standard, can be used to identify inconsistencies between annotations; we also evaluate their impact on prediction performance.",
"The performance of Part-of-Speech (PoS) taggers significantly degrades when they are applied to test sentences that depart from training data.",
"To illustrate this claim, Table 1 reports the error rate achieved by our in-house PoS tagger on the different combinations of train and test sets of the French treebanks of the Universal Dependencies (UD) project (Nivre et al., 2018).",
"1 It shows that depending on the train and test sets considered, the performance can vary by a factor of more than 25.",
"Many studies (Foster, 2010; Plank et al., 2014) attribute this drop in accuracy to covariate shift (Shimodaira, 2000), characterizing the differences between domains by a change in the marginal distribution p ( x ) of the input (e.g. increase of out-of-vocabulary words, missing capitalization, different usage of punctuation, etc), while assuming that the conditional label distribution remains unaffected.",
"This work adopts a different point of view : we believe that the variation in tagging performance is due to a dataset shift (Candela et al., 2009), i.e. a change in the joint distribution of the features and labels.",
"We assume that this change mainly results",
"from incoherences in the annotations between corpora or even within the same corpus.",
"Indeed, ensuring inter-annotator agreement in PoS tagging is known to be a difficult task as annotation guidelines are not always interpreted in a consistent manner (Marcus et al., 1993).",
"For instance, Manning (2011) shows that many errors in the WSJ corpus are just mistakes rather than uncertainties or difficulties in the task; Table 2 reports some of these annotation divergences that can be found in UD project.",
"The situation is naturally worse in cross-corpora settings, in which treebanks are annotated by different laboratories or groups.",
"The contribution of this paper is threefold : we show that, as already pointed out by de Marneffe et al. (2017), the variation principle of Boyd et al. (2008) can be used to flag potential annotation discrepancies in the UD project.",
"Building on this principle, we introduce, to evaluate the annotation consistency of a corpus, several methods and metrics that can be used, during the annotation to improve the quality of the corpus.",
"we generalize the conclusions of Manning (2011), highlighting how error rates in PoS tagging are stemming from the poor quality of annotations and inconsistencies in the resources; we also systematically quantify the impact of annotation variation on PoS tagging performance for a large number of languages and corpora.",
"we show that the evaluation of PoS taggers in cross-corpora settings (typically in domain adaptation experiments) is hindered by systematic annotation discrepancies between the corpora and quantify the impact of this divergence on PoS tagger evaluation.",
"Our observations stress the fact that comparing inand out-domain scores as many test FTB GSD ParTUT SRCMF Sequoia Spoken PUD train FTB 2.8% 7.0% 6.5% 45.4% 5.4% 18.7% 12.9% GSD 6.7% 3.7% 7.2% 45.5% 5.4% 16.3% 10.2% ParTUT 11.2% 10.9% 5.9% 55.7% 11.3% 22.9% 15.8% SRCMF 38.8% 37.8% 36.2% 7.5% 37.4% 34.7% 36.1% Sequoia 7.5% 7.5% 8.4% 48.0% 4.0% 19.3% 13.6% Spoken 32.1% 30.3% 25.7% 51.8% 29.5% 7.9% 30.1% Table 1: Error rate (%) achieved by a PoS tagger trained and tested on all possible combinations of the French train and test sets of the UD project.",
"works do (e.g. to evaluate the quality of a domain adaptation method or the measure the difficulty of the domain adaptation task) can be flawed and that this metrics has to be corrected to take into account the annotation divergences that exists between corpora.",
"The rest of this paper is organized as follows.",
"We first present the corpora and the tools used in our experiments ( 2).",
"We then describe the annotation variation principle of Dickinson and Meurers (2003) ( 3) and its application to the treebanks of the Universal Dependencies project ( 4).",
"We eventually assess the impact of annotation variations on prediction performance ( 5 and 6).",
"The code and annotations of all experiments are available on the first author website.",
"2 For the sake of clarity, we have only reported our observations for the English treebanks of the UD project and, sometimes, for the French treebanks (because it has seven treebanks).",
"Similar results have however been observed for other languages and corpora.",
"Data All experiments presented in this work use the Universal Dependencies (UD) 2.3 dataset (Nivre et al., 2018) that aims at developing cross-linguistically consistent treebank annotations for a wide array of languages.",
"This version of the UD project contains 129 treebanks covering 76 languages.",
"Among those, 97 treebanks define a train set that contains between 19 sentences and 68,495 sentences and a test set that contains between 34 and 10,148 sentences.",
"For 21 languages, several test sets are available : there are, for instance, 7 test sets for French,",
"6 for English, 5 for Czech and 4 for Swedish, Chinese, Japanese, Russian and Italian.",
"Overall, it is possible to train and test 290 taggers (i.e. there are 290 possible combinations of a train and a test set of the same language), 191 of these conditions (i.e. pairs of a train set and a test set) correspond to a cross-corpus setting and can be considered for domain adaptation experiments.",
"Many of these corpora 3 result from an automatic transformation (with, for some of them, manual corrections) from existing dependency or constituent treebanks (Bosco et al., 2013; Lipenkova and Soucek, 2014).",
"Because most treebanks have been annotated and/or converted independently by different groups, 4 the risk of inconsistencies and errors in the application of annotation guidelines is increased.",
"There may indeed be several sources of inconsistencies in the gold annotations : in addition to the divergences in the theoretical linguistic principles that governed the design of the original annotation guidelines, inconsistencies may also result from automatic (pre-)processing, human post-editing, or human annotation.",
"Actually, several studies have recently pointed out that treebanks for the same language are not consistently annotated (Vilares and Gmez-Rodrguez, 2017; Aufrant et al., 2017).",
"In a closely related context, Wisniewski et al. (2014) have also shown that, in spite of common annotation guidelines, one of the main bottleneck in cross-lingual transfer between UD corpora is the difference in the annotation conventions across treebanks and languages.",
"3. For PoS, only 23 treebanks have been manually annotated natively with the Universal PoS tagset.",
"PoS tagger In all our experiments, we use a history-based model (Black et al., 1992) with a LaSO-like training method (Daum III and Marcu, 2005).",
"This model reduces PoS tagging to a sequence of multi-class classification problems : the PoS of the words in the sentence are predicted one after the other using an averaged perceptron.",
"We consider the standard feature set for PoS tagging (Zhang and Nivre, 2011) : current word, two previous and following words, the previous two predicted labels, etc.",
"This standard' feature set has been designed for English and has not been adapted to the other languages considered in our experiments.",
"Our PoS tagger achieves an average precision of 91.10% over all UD treebanks, a result comparable to the performance of UDPipe 1.2 (Straka and Strakov, 2017), the baseline of CoNLL'17 Shared Task Multilingual Parsing from Raw Text to Universal Dependencies ' that achieves an average precision of 91.22%.",
"When not otherwise specified, all PoS tagging scores reported below are averaged over 10 runs (i.e. independent training of a model and evaluation of the test performance).",
"with different annotations, one of these two label sequences may be inconsistently annotated.",
"Our work relies on this principle to identify discrepancies in the PoS annotation of treebanks.",
"We call repeat a sequence of words that appears in, at least, two sentences and suspicious repeat a repeat that is annotated in at least two different ways.",
"Identifying suspicious repeats requires, first, to find all sequences of words that appear in two different sentences; this is an instance of the maximal repeat problem : a maximal repeat , is a substring that occurs at least in two different sentences and cannot be extended to the left or to right to a longer common substring.",
"Extracting maximal repeats allows us to find all sequence of words common to at least two sentences without extracting all their substrings.",
"This problem can be solved efficiently using Generalized Suffix Tree (GST) (Gus-field, 1997) : if the corpus contains n words, extracting all the maximal repeats takes O ( n ) to build the GST and O ( n ) to list all the repeats.",
"PoS annotations for these repeats can then be easily extracted and the ones that are identical can be filtered out to gather all suspicious repeats in a set of corpora.",
"A detailed description of our implementation can be found in (Wisniewski, 2018).",
"truly ambiguous.",
"We consider two heuristics to filter out suspicious repeats.",
"First with the size heuristic , we assume that longer suspicious repeats are more likely to result from annotation errors than shorter ones.",
"For instance, Table 2 displays suspicious repeats with at least 10 words that all stem from an annotation error.",
"Second, with the disjoint heuristic , we assume that actual ambiguities will be reflected in intra-corpus suspicious repeats, whereas errors will likely correspond to cases where differences in labelings are observed in different corpora.",
"Formally, the disjoint heuristic flags repeats m occurring in at least two corpora A and B , and such that the set of labelings of m observed in A are disjoint from the set of labelings observed in B .",
"For instance, in French, la porte can either be a determiner and a noun (e.g. in the sentence la porte est ferme the door is closed) or a pronoun followed by a verb (e.g. in the sentence je la porte I carry her).",
"Observing these two possible labelings in at least two corpora is a good sign of an actual ambiguity.",
"The disjoint heuristic allows us to detect that this suspicious repeat is an actual ambiguity.",
"To reiterate, the intuition beyond the disjoint heuristic is that for ambiguities, the two possible annotations will appear in, at least, one of the two corpora.",
"Conversely, systematic divergences in labeling observed across corpora are likely to be errors : for instance, in English, depending on the treebank, cardinal points are labeled as either proper nouns or as nouns.",
"In this case, the set of labelings of the repeats in the first corpus is disjoint from the set of labeling in the second corpus and the the disjoint heuristic captures the annotation inconsistency.",
"Analyzing filtering heuristics To further analyze these two heuristics, we have manually annotated the suspicious repeats between the train set of the English EWT corpus and the test set of the English PUD corpus.",
"For each suspicious repeat, we record whether it is an annotation error or an actual ambiguity.",
"Examples of annotations are given in Table",
"3. Results are in Table",
"4. It appears that, for the heuristics considered, a large part of the suspicious repeats correspond to annotation discrepancies rather than ambiguities.",
"In many cases, these discrepancies result from systematic divergences in the interpretation of the UD guidelines.",
"5 For instance, the contraction n't is always labeled as a particle in the train set of the EWT corpus, but either as particle or an adverb in the PUD corpus.",
"Most of these systematic differences involve distinction between nouns and proper nouns, auxiliaries and verbs and adjectives and verbs (for past participles).",
"We will first show how the annotation variation principle allows us to characterize the noise and/or the difficulty of PoS tagging.",
"Table 5 reports the number of repeats and suspicious repeats in the English corpora of the UD project.",
"These numbers have been calculated by applying the method described in the previous section to the concatenation of train, development and test sets of each treebanks.",
"To calibrate these measures, we conducted",
"5. Discrepancies are not only due to improper interpretations of the guidelines, but also sometimes to actual ambiguities in the annotation rules.",
"the same experiments with the Wall Street Journal (Marcus et al., 1993), 6 the iconic corpus of PoS tagging for which a thorough manual analysis of the annotation quality is described in (Manning, 2011).",
"The observations reported in Table 5 show that the number of repeats varies greatly from one corpus to another, which is not surprising considering the wide array of genres covered by the treebanks that includes sentences written by journalists or learner of English (the genres with the largest number of repeats) or sentences generated by users on social media (that contain far less repeated parts).",
"These observations also show that the percentage of repeats that are not consistently annotated is slightly larger in the UD treebanks than in the WSJ, a corpus in which a manual inspection of the corpus reveals that many variations are mistakes' rather than representing uncertainties or difficulties in the PoS prediction (Manning, 2011).",
"More interestingly, Table 6 shows the percen-Treebank # sent.",
"6. The Penn Treebank tagset has been manually converted to the Universal PoS tagset using the mapping of (Petrov et al., 2012) generalized to the extended UD PoS tagset.",
"tage of repeats that are not consistently annotated for all possible combinations of a train and a test sets (ignoring sequences of words that do not appear at least once in both corpora).",
"It appears that in all cases there are (sometimes significantly) more variations in annotations in cross-treebank settings than in situations where the train and the test sets belong to the same treebank.",
"This observation suggests that there may be systematic differences in the annotations of different treebanks which could make the domain adaptation setting artificially more difficult.",
"To characterize the difference between two treebanks, we measure the error rate of a binary classifier deciding from which corpus an annotated sentence is coming from.",
"7 Intuitively, the higher this error rate, the more difficult it is to distinguish sentences of the two corpora and the more similar the treebanks are.",
"More formally, it can be shown (Ben-David et al., 2010) that this error rate is an estimation of the H -divergence (Kifer et al., 2004), a metric introduced in machine learning theory to quantify the impact of a change in domains by measuring the divergence between the distributions of examples sampled from two datasets.",
"In our experiments, we use a Naive Bayes classifier 8 and three sets of features to describe a sentence pair and their annotation : words , in which each example is represented by the bag of its 1-gram and 2-gram of words; labels , in which examples are represented in the same way, but this time, considering PoS; and combi which uses the same representation after the words of all the treebanks have been concatenated with their PoS.",
"The first set aims at capturing a potential covariate shift, the last two target divergence in annotations.",
"To reduce the impact of the strong between-class imbalance, 9 in all our experiments we sub-sample the largest set to ensure that the two datasets we try to distinguish always have the same number of examples.",
"All scores in this experiment are averaged over 20 train-test splits.",
"7. More precisely, the classifier analyses pairs of sentences and predicts whether they belong to th same corpus or not.8.We used the implementation provided by (Pedregosa et al., 2011) without tuning any hyper-parameters.",
"Experiments with a logistic regression show similar results.",
"Table 7 reports the results achieved with the different features sets averaged over all combinations of a train and a test set of the same language and gives the percentage of conditions for which each feature set achieved the best results; Figure 1 details these results for the English and French treebanks.",
"Results for other languages show similar patterns.",
"These results suggest that, in many cases, it is possible to accurately identify from which treebank a sentence and its annotation are coming, although these raw numbers are difficult to interpret as prediction performances are averaged over many different experimental conditions.",
"In more than 50% of the cases, combining words to their PoS results in the best performance, which is consistent to the qualitative study reported in Section 3 : some words appear in two corpora with different PoS allowing to distinguish these corpora.",
"This observation strongly suggests that divergence in annotations across corpora are often genuine.",
"To study annotation divergence in the UD project, we propose to analyze suspicious repeats (i.e. sequence of repeated words with different anno-tations).",
"We start by extracting all the suspicious repeats that can be found when considering all the possible combinations of a train set and a test features median % best words 78.2 31.0 labels 70.9 13.5 combi 78.8 55.5 Table 7: Precision (%) achieved over all cross-treebank conditions by a classifier identifying to which treebank a sentence belongs to.",
"or development set of a given language.",
"These matches are then filtered using the heuristics described in",
"3. There are, overall, 357 , 301 matches in the UD project, 69 , 157 of which involve 3 words or more and 14 , 142 5 words or more; the disjoint heuristic selects 122 , 634 of these matches (see Table 8 in A).",
"To highlight the connection between prediction errors and annotation divergence, we compute, for each possible combination of a train and a test set (considering all languages in the UD project), the correlation between the error rate achieved on a corpus B when training our PoS on a corpus A and the number of suspicious repeats between A and B normalized by the number of tokens in A and B .",
"The Spearman correlation coefficient between these two values is 0 .",
"72 indicating a correlation generally qualified as strong' following the interpretation proposed by (Cohen, 1988) : the more there are sequences of words with different annotations in the train and test sets, the worse the tagging performance, which shows that annotation inconsistencies play an important role in explaining the poor performance of PoS tagger on some conditions.",
"For a more precise picture, we also estimate the number of suspicious repeats that contain a prediction error.",
"Using the disjoint heuristics to filter suspicious repeats, it appears that 70.2% (resp. 73.0%) of the suspicious repeats for English (resp. French) contain a prediction error.",
"As expected, these numbers fall to 51.7% (resp. 49.9%) when the suspicious repeats are not filtered and therefore contain more ambiguous words.",
"Figure 2 displays a similar trend when the suspicious repeats are filtered by their length; similar results are observed for all other languages.",
"These observations suggest that annotation variations often results in prediction errors, espe-French FTB GSD PUD ParTUT SRCMFSequoiaSpoken test FTBGSDP a r TUTS e q u o i a Sp o k e n t r a i n 60.0% 69.0% 80.1% 77.2% 98.3% 67.8% 97.9% 82.0% 57.9% 69.4% 75.3% 98.7% 69.4% 97.3% 82.8% 73.3% 82.9% 68.5% 98.5% 72.5% 97.5% 80.1% 68.4% 79.5% 74.3% 95.7% 48.0% 94.9% 97.1% 95.2% 99.2% 97.8% 95.3% 93.4% 67.9% features words 0.5 0.6 0.7 0.8 0.9 FTB GSD PUD ParTUT SRCMFSequoiaSpoken test FTBGSDP a r TUTS e q u o i a Sp o k e n t r a i n 54.2% 66.6% 80.8% 75.8% 99.1% 67.1% 97.9% 75.1% 61.1% 77.8% 72.3% 99.6% 66.7% 98.1% 75.4% 71.0% 85.8% 63.7% 99.5% 72.8% 98.2% 71.0% 64.3% 81.1% 70.8% 98.2% 51.5% 95.8% 97.3% 97.5% 98.8% 94.0% 89.1% 94.8% 61.3% features labels 0.6 0.7 0.8 0.9 FTB GSD PUD ParTUT SRCMFSequoiaSpoken test FTBGSDP a r TUTS e q u o i a Sp o k e n t r a i n 60.0% 71.9% 85.6% 79.5% 98.2% 70.7% 98.4% 85.1% 58.8% 81.4% 73.6% 98.6% 69.2% 97.7% 87.1% 76.3% 87.3% 68.0% 98.7% 75.5% 98.1% 81.6% 69.8% 86.8% 73.1% 95.2% 49.0% 95.5% 97.7% 94.9% 99.3% 97.3% 96.4% 93.1% 68.2% features combi 0.5 0.6 0.7 0.8 0.9 English ESL EWT GUM LinES PUD ParTUT test EWTGUML i n ESP a r TUT t r a i n 73.0% 62.6% 73.4% 72.3% 79.1% 76.1% 79.3% 76.0% 67.2% 75.0% 71.7% 71.3% 78.1% 79.3% 78.4% 60.4% 77.9% 73.5% 85.0% 83.8% 75.7% 78.9% 74.0% 65.6% features words 0.65 0.70 0.75 0.80 0.85 ESL EWT GUM LinES PUD ParTUT test EWTGUML i n ESP a r TUT t r a i n 87.1% 59.4% 66.7% 65.9% 73.9% 78.2% 85.5% 69.3% 56.2% 67.9% 69.1% 75.3% 84.0% 73.7% 71.1% 51.1% 75.1% 73.1% 86.5% 81.1% 73.0% 74.7% 67.8% 63.5% features labels 0.54 0.60 0.66 0.72 0.78 0.84 ESL EWT GUM LinES PUD ParTUT test EWTGUML i n ESP a r TUT t r a i n 87.0% 62.7% 74.1% 74.0% 77.6% 75.6% 89.2% 77.2% 66.6% 76.5% 72.7% 74.3% 87.8% 80.4% 77.9% 58.8% 78.3% 73.8% 91.3% 85.4% 78.8% 81.9% 78.6% 65.2% features combi 0.60 0.66 0.72 0.78 0.84 0.90 Figure 1: Precision of a classifier identifying to which French (top) or English (bottom) treebank a sentence belongs to.",
"cially when there are good reasons to assume that the variation actually stems from an inconsistency.",
"To evaluate the impact of annotation errors on prediction performance, we propose, for each combination of a train and a test set, to train a PoS tagger and compare full , the error rate achieved on the full test set to ignoring the error rate achieved ignoring errors that occur in a suspicious repeat.",
"More precisely, ignoring is defined as : ignoring = # { err } # { err in suspicious repeats } # { words } (1) where # { err in suspicious repeats } in the number of errors in the suspicious repeats that have survived filtering.",
"Intuitively ignoring can be seen as an oracle' score corresponding to a tagger that would always predict the labels of suspicious repeat correctly.",
"In the following, We will consider three different filters : the disjoint heuristic, keeping only suspicious repeats with more than three words and keeping all of them.",
"Figure 3 reports these errors rates for French and English.",
"Results for other languages show similar results.",
"As expected, ignoring errors in suspicious repeats significantly improve prediction performance.",
"It even appears that ignoring is often on par with the score achieved on in-domain sets.",
"Overall, in more than 43% (resp. 25%) of all the conditions the error rate ignoring errors in suspicious repeats filtered with the disjoint heuristic (resp. minimum heuristic) is lower than the error rate achieved on in-domain data.",
"These values are naturally over-estimated as, in these experiments, we remove all potential annotation errors as well as words and structures that are ambiguous and therefore are more difficult to label.",
"They can however be considered as lower-bound on the predic-ud lines partut test 0 2 4 6 8 10 e rr o rr a t e ( % ) en(UD) full ignoring ( minsize ) ignoring ( disjoint ) ignoring ( all ) ud lines partut test 0 2 4 6 8 10 12 14 16 en(LINES) full ignoring ( minsize ) ignoring ( disjoint ) ignoring ( all ) ud lines partut test 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 en(PARTUT) full ignoring ( minsize ) ignoring ( disjoint ) ignoring ( all ) ud ftb partut sequoia test 0 2 4 6 8 10 e rr o rr a t e ( % ) fr(PARTUT) full ignoring ( minsize ) ignoring ( disjoint ) ignoring ( all ) ud ftb partut sequoia test 0 1 2 3 4 5 6 7 8 fr(SEQUOIA) full ignoring ( minsize ) ignoring ( disjoint ) ignoring ( all ) ud ftb partut sequoia test 0 1 2 3 4 5 6 fr(UD) full ignoring ( minsize ) ignoring ( disjoint ) ignoring ( all ) ud ftb partut sequoia test 0 1 2 3 4 5 6 7 fr(FTB) full ignoring ( minsize ) ignoring ( disjoint ) ignoring ( all ) Figure 3: Error rate achieved by a PoS tagger on the different English treebanks of the UD project when errors in suspicious repeats are ignored.",
"To assess their quality, we have manually checked all the suspicious repeats between the train set of French UD and the test set of the French FTB correcting inconsistencies and errors (almost 2,000 PoS were modified).",
"10 When trained on the original UD corpus, the PoS tagger achieved an error rate of 6.78% on the FTB corpus (4.51% on in-domain data).",
"After correcting inconsistencies, the out-domain error rate falls down to 5.11%.",
"This value is close to the error rate ignoring suspicious repeats containing three and more words, showing the validity of the heuristics we have considered.",
"In this work, we have shown that, for PoS tagging, many prediction errors in cross-corpora settings (which is a typical domain adaptation scenario) stem from divergence between annotations.",
"We have also described a method to quantify this divergence.",
"We have only considered here corpora from the UD project and PoS annotation, but we consider that our method is very generic and can be easily applied to other corpora or tasks (e.g. to-kenization, dependency parsing, etc.) that we will address in future work.",
"We also plan to see how the different experiments we have made to identify annotation errors and inconsistencies can be used during the annotation process to reduce the workload 10.",
"The corrected' corpora will be made available upon publication.",
"In this experiment, the impact of annotation errors is under-estimated as we have only corrected errors that appear in a suspicious repeat without trying to generalize' these corrections to words that appear only in one corpus.",
"This work has been partly funded by the French Agence Nationale de la Recherche under Par-SiTi (ANR-16-CE33-0021) and MultiSem projects (ANR-16-CE33-0013)."
] | [
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"abstain",
"abstain",
"other"
] |
[
"In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text.",
"In particular, we propose to learn neural rewards to model cross-sentence ordering as a means to approximate desired discourse structure.",
"Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with cross-entropy or with reinforcement learning with commonly used scores as rewards.",
"Defining an ideal loss for training text generation models remains an open research question.",
"Many existing approaches based on variants of recurrent neural networks (Hochreiter and Schmid-huber, 1997; Cho et al., 2014) are trained using cross-entropy loss (Bahdanau et al., 2015; Vinyals et al., 2015; Xu et al., 2015; Rush et al., 2015), often augmented with additional terms for topic coverage or task-specific supervision (Kiddon et al., 2016; Yang et al., 2017).",
"Training with cross-entropy, however, does not always correlate well with achieving high scores on commonly used evaluation measures such as ROUGE (Lin, 2004), BLEU (Papineni et al., 2002), or CIDEr (Vedantam et al., 2015).",
"Another current line of research therefore explores training generation models that directly optimize the target evaluation measure (Wu et al., 2016; Ran-zato et al., 2015; Paulus et al., 2018; Rennie et al., 2017) using reinforcement learning methods such as the REINFORCE algorithm (Williams, 1992).",
"Importantly, most automatic measures are based on local n -gram patterns, providing only a limited and myopic perspective of overall text quality.",
"As a result, while models trained to directly optimize these measures can yield improvements on the same measures, they may not lead to better quality in terms of overall coherence or discourse structure.",
"Indeed, recent studies have reported cases where commonly used measures do not align well with desired aspects of generation quality (Rennie et al., 2017; Li et al., 2016).",
"The challenge, however, is to define a global score that can measure the complex aspects of text quality beyond local n -gram patterns.",
"In this paper, we investigate learning neural rewards and their use in a reinforcement learning regime with a specific focus on learning more discourse-aware and coherent text generation.",
"Our approach shares the spirit of the work of Lowe et al. (2017), where neural scores were learned to approximate human judgments of dialogue quality.",
"The key difference is that our rewards can be fully automatically constructed without requiring human judgments and can be trained in an unsupervised manner.",
"More specifically, we propose a neural reward learning scheme that is trained to capture cross-sentence ordering structure as a means to approximate the desired discourse structure in documents.",
"The learned teacher computes rewards for the 173 underlying text generator (see Figure 1), which is trained using self-critical reinforcement learning (Rennie et al., 2017).",
"We also present a new method for distributing sentence-level rewards for more accurate credit assignment.",
"We test our approach on the task of generating cooking recipes, and evaluate using automatic overlap metrics that measure discourse structure.",
"We also provide human judgments that yield comprehensive insights into the model behavior induced by the learned neural rewards.",
"Empirical results demonstrate that a generator trained with the discourse-aware rewards produces text that is more coherent and less repetitive than models trained with cross-entropy or reinforcement learning with other commonly used scores.",
"Recent work in image captioning (Rennie et al., 2017), machine translation (Wu et al., 2016), and summarization (Paulus et al., 2018) has investigated using policy gradient methods to fine-tune neural generation models using automatic measures such as CIDEr as the reward.",
"However, because most existing automatic measures focus on local n -gram patterns, fine-tuning on those measures may yield deteriorated text despite increased automatic scores, especially for tasks that require long coherent generation ( 6.1).",
"Since writing out a scoring term that quantifies the quality of discourse coherence is an open research question, we take inspiration from previous research that learns the overall ordering structure of a document as an approximation of the discourse structure (Barzilay and Lapata, 2005, 2008; Barzilay and Lee, 2004; Li and Hovy, 2014), and propose two neural teachers that can learn to score an ordered sequence of sentences.",
"The scores from these neural teachers are then used to formulate rewards ( 4.2) that guide coherent long text generation systems in a policy gradient reinforcement learning setup.",
"Notably, the neural teachers are trained offline on gold sequences in an unsupervised manner prior to training the generator.",
"They are not trained jointly with the generator and their parameters are fixed during policy learning.",
"The first teacher explored is motivated by work on deep semantic similarity models (Huang et al., 2013), which approximated the similarity between queries and documents in information retrieval tasks.",
"We extend this approach to modeling temporal patterns by training a sentence encoder to minimize the similarity between a sequence encoded in its forward order, and the same sequence encoded in the reverse order (see Figure 2).",
"To focus the teacher on discourse structure, we design the encoder to capture sentence order , instead of word order .",
"Words in each sentence s j are encoded using a bag of words: s j = L j X i =1 x ij (1) where x ij is a word embedding and s j is a sentence embedding.",
"Each s j is passed to a gated recurrent unit (GRU) and the final output of the hidden unit is used as the representation for the full document: h j = GRU ( s j , h j 1 ) (2) f ( S ) = h n (3) where f ( S ) is the representation of the sentences of the document and h n is the final output vector of the GRU.",
"To capture properties of temporal coherence among document sentences, the teacher is trained to minimize L abs , the cosine similarity between the sentence embedding from reading the sentences in the forward order, S and from reading the sentences in the reverse order, S : L abs = h f ( S ) , f ( S ) i k f ( S ) kk f ( S ) k (4) 174 Intuitively, by parametrizing only relations between sentences (with the GRU layer) and not those between words , the teacher only captures sentence ordering properties.",
"When training the neural generator ( 4), we use this learned teacher to generate a reward that judges the generated sequence's ordering similarity to the gold sequence.",
"While the absolute ordering teacher evaluates the temporal coherence of the entire generation, we may want our teacher to be able to judge finer-grained patterns between sentences.",
"In recipes, for example, where sentences correspond to process steps, the teacher should capture implicit script knowledge (Schank and Abelson, 1975) among groups of sentences.",
"Consequently, the teacher should reward sentences individually for how they fit with surrounding sentences.",
"In many current approaches for using policy gradient methods to optimize a model with respect to a global score, each sentence receives the same reward.",
"This framework assumes each sentence is equally responsible for the reward gathered by the full sequence, allowing potentially appropriate subsequences to be incorrectly penalized.",
"We design the relative order teacher to address this issue.",
"The relative order teacher is trained in the same way as the absolute order model.",
"A bag of words embedding is computed for each sentence in the gold sequence.",
"Subsequences of the gold document that have sentences are selected where ( min , max ).",
"For a subsequence beginning at sentence j , the model computes: f ( S j : j + ) = GRU ( s j + , h j + 1 ) (5) where f ( S j : j + ) is the encoded representation of sentences { s j , ...s j + } and h j 1 would be initialized as a vector of zeros.",
"The relative ordering teacher is trained to minimize L rel , the cosine similarity between gold orders of subsequences: L rel = h f ( S j : j + ) , f ( S j : j + ) i k f ( S j : j + ) kk f ( S j : j + ) k (6) where the arrow above S signifies the order in which the sentences are processed.",
"The relative ordering teacher learns to identify local sentence patterns among ordered sentences, thereby learning how to reward sequences that are temporally coherent.",
"In the task of recipe generation, the model is given a title of a recipe such as Cheese Sandwich and a list of ingredients (e.g., cheese, bread, etc.) and must generate the full multi-sentence recipe text.",
"Similar to data to document generation tasks, the model must generate a full long-form text from sparse input signal, filling in missing information on its own (Wiseman et al., 2017).",
"Using the same notation as Kiddon et al. (2016), we are given a set of recipe title words { g 1 , ..., g n } (e.g., { cheese, sandwich } ) and a list of ingredients E = { i 1 , ..., i | E | } where each i can be a singleor multi-word ingredient phrase (e.g., onions or onions, chopped).",
"In the following paragraphs, all W variables are projections matrices and all b variables are bias vectors.",
"We use a modification of the baseline encoder of Kiddon et al. (2016).",
"First, the title words are encoded as a bag of embeddings, g .",
"Second, each ingredient phrase i is encoded as a bag of embeddings vector, e i .",
"The ingredient embeddings are inputs to a bidirectional gated recurrent unit, which yields an output vector e .",
"The final encoder output is the concatenation of these two representations, h e = [ g , e ] .",
"The decoder is a separate gated recurrent unit that receives h e from the encoder to initialize its hidden state h d 0 and must generate a full recipe word by word.",
"At each time step, the model receives an input token embedding, x t , as well as the output from the encoder h e : a t = ( W 1 h dt 1 + W 2 x t + b 1 ) (7) z t = a t h e (8) x t = [ x t , z t ] (9) where x t is the input to the recurrent unit at every time step.",
"The recipe generator is pretrained to minimize the negative loglikelihood of predicting the next token in the recipe: L mle = TX t =1 log P ( x t | x 0 , ..., x t 1 , h e ) (10) 175 Fried Chicken Chicken Flour Spices .",
"Training a recipe generation model using maximum likelihood estimation produces generations that are locally coherent, but lack understanding of domain knowledge.",
"By using a teacher that rewards the model for capturing cooking recipe discourse semantics, the model learns a policy that produces generations that better model the underlying recipe process.",
"We learn a policy using the self-critical approach of Rennie et al. (2017).",
"In self-critical sequence training, outlined in Figure 3, the model learns by being rewarded for sampling sequences that receive more reward than a greedily decoded sequence.",
"For each training example, a sequence y is generated by sampling from the model's distribution P ( y t | y 0 , ..., y t 1 , h e ) at each time step t .",
"Once the sequence is generated, the teacher produces a reward r ( y t ) for each token in the sequence.",
"A second sequence y is generated by argmax decoding from P ( y t | y 0 , ..., y t 1 , h e ) at each time step t .",
"The model is trained to minimize: L rl = TX t =1 ( r ( y t ) r ( y t )) log P ( y t | y 0 , ..., y t 1 , h e ) (11) where r ( y t ) is the reward produced by the teacher for tokens of the greedily decoded sequence.",
"Because r ( y ) can be viewed as a baseline reward that sampled sequences should receive more than, the model learns to generate sequences that receive more reward from the teacher than the best sequence that can be greedily decoded from the current policy.",
"This approach allows the model to explore sequences that yield higher reward than the current best policy.",
"As we decode a sequence y = { y 0 ..., y t } , we track a sentence index that is the number of sentence delimiter tokens (e.g., .) generated by the model.",
"The model then implicitly decodes a set of generated sentences, S 0 = { s 0 , ..., s n } .",
"These sentences are provided to the teachers defined in Section 2, which compute a score for the generated sequence.",
"We explain the procedure for producing a token reward r ( y t ) from these scores below.",
"(12) where S is the forward-ordered corresponding gold sequence and S is the reverse-ordered gold sequence.",
"Both terms in the reward computation are variations of the loss function on which the absolute order teacher was trained (Equation (4)).",
"This reward compares the generated sequence to both sentence orders of the gold sequence, and rewards generations that are more similar to the forward order of the gold sequence.",
"Because the cosine similarity terms in Equation (12) are bounded in [ 1 , 1] , the model receives additional reward for generating sequences that are different from the reverse-ordered gold sequence.",
"Relative Order Similarly, the relative order reward is generated by the relative order teacher ( 2.3), which evaluates subsequences of sentences, rather than the whole sequence.",
"For a sentence s j , the reward is computed as: r rel ( s j ) = 1 L max X = min h f ( S 0 j : j ) , f ( S j : j ) i k f ( S 0 j : j ) kk f ( S j : j ) k h f ( S 0 j : j ) , f ( S j : j ) i k f ( S 0 j : j ) kk f ( S j : j ) k ! (13) 176 where min and max define the window of sentences to include in the computation of the reward. Similar to the absolute order teacher, the relative order teacher produces scores bounded in [ 1 , 1] , giving the model additional reward for generating sequences that are different from the reverse-ordered gold subsequences. Credit Assignment When rewarding tokens with the absolute ordering teacher, each generated token receives the same sequence-level reward from the absolute order teacher: r ( y t ) = r abs ( y ) (14) The relative order teacher, meanwhile, computes rewards for sentences based on their imitation of nearby sentences in the gold recipe. Rather than combining all rewards from the teacher to compute a full sequence reward, sentences should only be rewarded for their own quality. Each token in a sentence corresponds to a position in the full sequence. When relative order rewards are computed by the teacher, the correct sentence reward is indexed for each token. Consequently, when training with a relative order teacher, words only receive rewards for the sentences they belong to: r ( y t ) = | S | X j =1 1 ( y t s j ) r rel ( s j ) (15) where | S | is the number of sentences in the generated recipe, and 1 is an indicator variable identifying word y t belonging to sentence s j . 4.3 Mixed Training As the model learns parameters to optimize the amount of reward it receives from the teacher, it is not explicity encouraged to produce fluent generations. The model quickly learns to generate simple sequences that exploit the teacher for high rewards despite being incoherent recipes (e.g., Figure 4). Consequently, it is possible that generated sequences are no longer readable (Pasunuru and Bansal, 2017; Paulus et al., 2018). Title: Chili Grits Ingredients: boiling water, butter, shredded cheddar cheese, jalapenos, eggs, chicken cream of soup, salt Generated Recipe: Here . Figure 4: Recipe generated from a self-critical model with no mixed training To remedy this effect, the model optimizes a mixed objective that balances learning the discourse-focused policy while maintaining the generator's language model: L mix = L rl + (1 ) L mle (16) where L mle is the objective from Equation (10), L rl is the objective from either Equation (11), and is a hyperparameter in [0, 1]. 5 Experimental Setup 5.1 Datasets We use the Now You're Cooking dataset with the same training/test/development splits from Kiddon et al. (2016). For training, we use 109567 recipes with 1000 recipes set aside for both development and test. 5.2 Training Teacher Models The teachers are trained before the recipe generator and their parameters are fixed during generation. We tune hyperparameters on the development set. To train the relative order teacher, we sample 20 subsequences from each recipe of min = 3 to max = 6 sentences. Additional details are provided in Appendix A.2. Recipe Generator We pretrain a recipe generator using a variant of the encoder-decoder baseline from Kiddon et al. (2016). Comprehensive hyperparameter details can be found in Appendix A.3. 
Policy Learning We train a different model for three different teacher-provided rewards: absolute ordering (AO), relative ordering (RO) and a joint reward of relative ordering and BLEU-4 (RO + B4), where the full-sequence BLEU-4 reward and the sentence-level relative ordering reward are summed at each time step. The best model for the absolute and relative ordering rewards are the ones that receive the highest average reward on the development set. The best model for the mixed reward was chosen as the one that achieved the highest average geometric mean of BLEU-4 reward and average relative ordering reward for each generated sequence y in the development set: r = r b 4 ( y ) TTX t =1 r RO ( y t ) (17) where r b 4 is the BLEU-4 score of the whole generated sequence, and r RO is computed using Equa-177 Model BLEU-1 BLEU-4 R-L AB1 AB4 AR-L SCB1 SCB4 SCR-L Cross-entropy (MLE) 26.86 4.74 28.86 31.23 4.83 28.51 51.92 26.35 50.21 BLEU-4 (Rennie et al., 2017) 7.75 1.38 13.93 5.69 0.84 10.37 10.76 5.05 20.87 CIDEr (Rennie et al., 2017) 12.67 1.90 21.20 14.61 1.79 21.70 26.07 12.30 41.65 ROUGE-L (Paulus et al., 2018) 29.00 4.86 29.10 33.49 4.73 28.11 56.86 27.83 51.26 BLEU-1 ( = 0 . 97 ) 31.16 5.60 29.53 32.28 5.09 29.34 52.63 25.43 51.58 BLEU-4 ( = 0 . 99 ) 30.56 5.42 29.16 32.53 4.99 28.99 53.48 26.35 51.02 CIDEr ( = 0 . 97 ) 29.60 5.10 28.79 33.93 4.81 28.41 57.00 27.55 50.57 ROUGE-L ( = 0 . 97 ) 26.88 4.66 29.49 31.85 5.01 29.25 53.84 26.77 51.88 Absolute Ordering (AO) 23.70 4.25 28.43 28.22 4.44 27.88 47.93 24.47 50.15 Relative Ordering (RO) 27.75 4.88 29.60 34.37 5.60 29.36 58.31 29.14 53.08 Relative Ordering + BLEU-4 29.58 5.26 29.78 35.13 5.55 29.33 59.13 29.19 52.46 Table 1: Evaluation results for generated sequences by models and baselines. We bold the top performing result. The second to fourth columns list word-level scores. Columns AB1, AB4, and AR-L list action-level scores ( 6.1). Columns SCB1, SCB4, and SCR-L list state change level scores ( 6.1). tion (15). Our best models use = 0 . 97 when training with the mixed objective from Equation (16). 5.3 Baselines As baselines, we report results for a model trained only with cross-entropy loss (MLE) and for reimplemented versions of models from Rennie et al. (2017) and Paulus et al. (2018). These baselines achieved state of the art results in image captioning and document summarization tasks. We found, however, that their high (1 and 0.9984, respectively) led to low fluency, resulting in reduced performance on word-level scores. To control for this effect, we trained additional versions of each baseline with different values for and report the best performing configurations (see Table 1). 6 Results 6.1 Overlap Metrics Scores We compute the example-level BLEU-1, BLEU-4, and ROUGE-L (R-L) scores for all recipes in the test set. A generated recipe, however, must be coherent at both the word-level , linking words and phrases sensibly, and the world-level , describing events that are grounded in real-world actions. Because n -gram scores do not evaluate if a generated recipe models this latent process, we also report these scores on the action and state change sequence described in the recipe. These words depict a simulated world where actions are taken and state changes are induced. A generated recipe should follow the sequence of actions taken in the gold recipe, and induce the same state changes as those in the gold recipe. We use the state change lexicon from Bosselut et al. 
(2018) to map recipe words to ordered sequences of actions and state changes. Each entry in the lexicon contains an action in the cooking domain as well as the state changes that result from that action in the set of { LOCATION , COMPOSITION , COOKEDNESS , TEMPERATURE , SHAPE , CLEANLINESS } . Action sequences are formed by mapping lemmas of words in generated sequences to entries in the lexicon. We compare these event sequences to the gold event sequences using the same scores as for words BLEU-1, BLEU-4, and ROUGE-L. Intuitively, these scores can be seen as evaluating the following: whether the generated recipe depicts the same actions (AB1), subsequences of consecutive actions (AB4), and full action sequence (AR-L) as the gold recipe. State change sequences are more coarse-grained than action sequences, and are formed by mapping actions to their state changes in the lexicon from Bosselut et al. (2018). These scores evaluate whether the generated recipe implies the same induced state changes (SCB1), subsequences of consecutive state changes (SCB4), and global state change order (SCR-L) as the gold recipe. Results Our results in Table 1 show that models optimized on word overlap metrics achieve the greatest improvements for those scores. Optimizing scores such as BLEU-1 encourages the model to output words and phrases that overlap often with reference sequences, but that may not describe main events in the recipe process. When examining models trained using a neural teacher, we see that the model optimized with 178 MLE RO + B4 Tie Fluency 0.330 0.447 0.223 Ingredient Use 0.350 0.440 0.210 Title Completion 0.347 0.430 0.223 Action Order 0.377 0.453 0.170 BLEU-1 RO + B4 Tie Fluency 0.387 0.373 0.240 Ingredient Use 0.327 0.363 0.310 Title Completion 0.353 0.377 0.270 Action Order 0.410 0.403 0.187 Table 2: Human evaluation measuring proportion of winners. Upper table compares MLE baseline with RO + B4 model. Lower table compares BLEU-1 baseline with RO + B4 model. the absolute ordering reward performs worse than most baselines for every word-level score. The relative ordering model, however, raises every word-level score above the cross-entropy baseline, indicating the importance of fine-grained credit assignment at the sentence-level. The model trained with mixed rewards from the teacher and BLEU-4 achieves even higher scores, showing the benefits of training with diverse rewards. When evaluating these metrics for the action and state change sequence, the models trained with feedback from the relative ordering teacher show large improvement over the baselines, indicating that the models exhibit more understanding of the latent process underlying the task. While optimizing word-level scores teaches the generator to output common sequences of words, the relative ordering reward teaches the model to focus on learning co-occurrences between recipe events. 6.2 Human Evaluation We perform a human evaluation on 100 recipes sampled from the test set to evaluate our model on four aspects of recipe quality: fluency, ingredient use, title completion, and action ordering. For each example, three judges from Amazon Mechanical Turk are shown a pair of recipes, each generated by a different model and asked to select the recipe that is better according to the criteria above. For ingredient use, judges select the recipe that uses more of the ingredients correctly. For title completion, we ask judges to select the recipe that best completes the dish described in the recipe title. 
Finally, for action ordering, judges choose the recipe that better links subtasks in the recipes. MLE RO + B4 Tie Fluency 0.317 0.425 0.258 Ingredient Use 0.342 0.458 0.200 Title Completion 0.358 0.450 0.192 Action Order 0.367 0.483 0.150 BLEU-1 RO + B4 Tie Fluency 0.391 0.383 0.225 Ingredient Use 0.267 0.392 0.342 Title Completion 0.325 0.418 0.258 Action Order 0.433 0.442 0.125 Table 3: Proportion of winners for long generated recipes. Upper table compares MLE baseline with RO + B4 model. Lower table compares BLEU-1 baseline with mixed RO + B4 model. Models We use the Relative Ordering + BLEU-4 model (RO + B4) and compared to two baselines, the cross-entropy model (MLE), and the BLEU-1 model, which achieved the best scores on several word-level metrics ( 6.1). Results We report results in Table 2. Our model outperforms the cross-entropy baseline, consistently being preferred on aggregate for every question. Workers preferred the BLEU-1 baseline for the fluency and action order questions, while preferring recipes generated by the teacher-trained model for the ingredient use and title ordering questions. Upon further analysis, we see that the strength of the BLEU-1 model depends on the length of the original reference sequence. In Table 3, we show evaluation scores for recipes where the gold recipe was longer than 100 words. Our model's performance rises compared to the BLEU-1 model for every question, showing that modeling discourse structure as learned reward improves global coherence in long text. 6.3 Insights Qualitative Analysis In Table 4, we see the effect that the neural teacher has on the recipe generator. The teacher rewards behavior that more closely imitates the actions in the gold recipe. In the first example, the generator learns to complete the actions of placing the mixture into the a greased casserole and then baking it, which the MLE model misses. The teacher also discourages repetitive phrases, as they provide no increase in reward during training. One weakness of our teacher models, however, is that they encourage common temporal patterns, such as in the third 179 Title Royale Casserole Ings Broccoli, chopped; butter; flour; salt; black pepper; nutmeg; milk; cheese shredded swiss; lemon juice; cooked cubed turkey; mushrooms; grated parmesan cheese; biscuits, refrigerated MLE Cook broccoli according to package directions. Drain. Melt butter in skillet. Add flour, stirring constantly. Cook, stirring constantly, until thickened. Add milk, salt, pepper, and nutmeg. Cook over low heat, stirring constantly, until thickened. Add cheese, salt, pepper and pimento. Cook and stir till thickened. Add cheese, stirring until melted. Add broccoli and mushrooms . Cook over low heat for 5 minutes. Add cheese. Cook and stir until cheese melts. Add cheese. Cook and stir until cheese melts. Add cheese. Cook and stir until cheese melts. Add cheese and stir until melted. Serve over hot noodles. RO Melt butter in saucepan. Add flour and cook, stirring constantly, until thickened. Add milk, salt, pepper, and nutmeg. Cook, stirring constantly, until thickened. Add cheese, salt and pepper. Cook over low heat, stirring constantly, until mixture is thickened and smooth. Remove from heat. Stir in cheese. Spoon into greased casserole . Top with cheese. Bake in 350 f oven for 30 minutes. Serves 6. Gold Preheat oven to 375. Melt butter in saucepan. Blend in flour, salt, pepper, and nutmeg; cook 1-2 minutes. Gradually add milk; cook, stirring, until slightly thickened. Stir in frozen vegetables. 
Remove from heat; stir in cheese until melted. Add lemon juice, turkey, mushrooms and broccoli. Pour mixture into a lightly greased baking dish; sprinkle with parmesan and top with biscuits. Bake 20 minutes, or until biscuits are golden brown. Title Wine Sauerkraut Ings butter; dry white wine; 16 oz drained sauerkraut MLE In a saucepan, combine soup , wine, and butter. Heat, uncovered, in microwave oven 2 minutes or until mixture is heated through. RO Melt butter in skillet. Add sauerkraut and wine; heat to boiling. Reduce heat; cover and simmer 15 minutes. Add wine and heat to boiling; reduce heat. Cover and cook 10 minutes or until mixture is tender. Serve on rice. Gold Melt butter in 10-inch skillet over low heat; add sauerkraut. Cover and cook, stirring occasionally, 30 minutes; add wine. Heat to boiling; reduce heat. Cover and simmer until liquid is absorbed, about 45 minutes. Follow directions except simmer until liquid is absorbed, about 1 hour. Title Strawberry Glazed Pie Ings fresh strawberries; granulated sugar; salt; cornstarch; lemon juice; baked, cooled pie shell MLE Combine sugar, cornstarch and salt in top of a double boiler. Cook and stir over medium heat until mixture thickens and boils. Boil and stir 1 minute. Remove from heat and stir in lemon juice. Cool. Add strawberries and lemon juice . Pour into baked pie shell. Top with whipped cream. Serve immediately. RO Combine sugar, cornstarch, and salt in a saucepan; cook over medium heat, stirring constantly, until mixture boils and thickens. Remove from heat; stir in lemon juice. Cool. Add strawberries; mix well. Pour into baked pie shell. Bake in preheated 325-degree oven for 10 minutes . Cool. Garnish with whipped cream. Gold Wash, drain thoroughly, and hull strawberries. Arrange about 3 cups of whole berries over bottom of baked pastry shell. Crush remaining berries in a saucepan. In a bowl, mix sugar, salt and cornstarch; stir into crushed berries. Heat slowly, stirring constantly, until mixture comes to a boil and thickens. Remove from heat and stir in lemon juice. Cool, then spoon over berries in pie shell chill until glaze is set. Garnish with whipped cream. Table 4: Example recipe generations from our model and comparative baselines. Boxed spans indicate recipe events missed by another model's generation. Red spans indicate superfluous events. The Ings row lists the ingredients (separated by semicolons) provided to make the dish in the title. example in Table 4, where the generator mentions baking the pie . The model recognizes pies are generally supposed to be baked, even if it is not appropriate for that particular recipe. Teacher Feedback Frequency We design the reward functions in Eq. 12 and Eq. 13 to require two passes through the teacher, one comparing the generated sequence to the forward gold sequence, and one comparing it to the reverse gold sequence. With no teacher comparison to the reverse-ordered sequence, the generator learns to exploit the teacher for reward with very simple sequences such as Serve. and Here's direction.",
"When comparing with both orders, however, this effect is dampened, hinting at the importance of ensembling feedback from multiple sources for robust reward production.",
"Another solution to this effect was mixing policy learning and maximum likelihood learning (Eq.",
"16) as the underlying language model of the generator did not deteriorate.",
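A minimal sketch of the sentence-level credit assignment of Equation (15) and the mixed objective of Equation (16); the tensor layout (a per-token sentence index) is an illustrative assumption.

```python
import torch

def distribute_sentence_rewards(sent_rewards, sent_index):
    """Credit assignment (Eq. 15): each token receives the relative-order
    reward of the sentence it belongs to.

    sent_rewards: (num_sentences,) rewards r_rel(s_j).
    sent_index:   (T,) sentence id of each generated token.
    """
    return sent_rewards[sent_index]

def mixed_loss(l_rl, l_mle, gamma=0.97):
    """L_mix (Eq. 16): interpolate policy and maximum-likelihood losses."""
    return gamma * l_rl + (1.0 - gamma) * l_mle

rewards = distribute_sentence_rewards(torch.tensor([0.2, -0.1]),
                                      torch.tensor([0, 0, 1, 1, 1]))
print(rewards)  # tensor([ 0.2, 0.2, -0.1, -0.1, -0.1])
```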
"Impact of max and Two hyperparameters to tune when training with teacher models are the mixed loss coefficient , which balances MLE learning with policy learning, and [ min , max ], the number of sentences to consider when computing the relative order reward. We fix min = 3 , and vary max [3 , 6] and { 0 . 95 , 0 . 97 , 0 . 98 } . Figure 5 shows the importance of tuning . A low will not allow the teacher to guide the model's learning, while a high causes the lan-180 0.95 0.97 0.98 3 4 5 6 m a x Action BLEU-1 0.95 0.97 0.98 3 4 5 6 m a x Action BLEU-4 0.95 0.97 0.98 3 4 5 6 m a x State Change BLEU-1 0.95 0.97 0.98 3 4 5 6 m a x State Change BLEU-4 0.3300.3350.3400.3450.3500.355 0.0480.0500.0520.0540.0560.0580.060 0.550.560.570.580.59 0.2750.2800.2850.2900.2950.300 Figure 5: Action and State Change BLEU Metrics for different initializations of max and guage model to deteriorate. Interestingly, a higher max leads to better performance on global coherence scores, implying that relative order rewards conditioned on more sentences allow the model to learn longer-range context co-occurrences. 7 Related Work The field of neural text generation has received considerable attention in tasks such as image captioning (Vinyals et al., 2015; Xu et al., 2015), summarization (Rush et al., 2015; See et al., 2017), machine translation (Bahdanau et al., 2015), and recipe generation (Kiddon et al., 2016). While these works have focused on developing new neural architectures that introduce structural biases for easier learning , our work uses a simple architecture and focuses on improving the optimization of the learner (i.e., better teaching ). The importance of better teaching for RNN generators was outlined in Bengio et al. (2015), which showed that exposure bias from a misaligned train and test setup limited the capabilities of sequence-to-sequence models. This limitation had been addressed in previous work by augmenting training data with examples generated by pretrained models to make models robust to their own errors (Daume III et al., 2009; Ross et al., 2011). More recent work on training RNNs for generation has used sequence scores such as ROUGE (Paulus et al., 2018), CIDEr (Rennie et al., 2017; Pasunuru and Bansal, 2017), BLEU (Ranzato et al., 2015) and mixtures of them (Liu et al., 2017) as a global reward to train a policy with the REINFORCE algorithm (Williams, 1992). In contrast, our work uses a neural teacher to reward a model for capturing discourse semantics. Most similar to our work is work on using neural and embedding rewards to improve dialogue (Li et al., 2016), image captioning (Ren et al., 2017), simplification (Zhang and Lapata, 2017), and paraphrase generation (Li et al., 2017). While these works use single-sentence similarity rewards for short generation tasks, our work designs teachers to reward long-range ordering patterns. Finally, our teachers can be seen as rewarding generators that approximate script patterns in recipes. Previous work in learning script knowledge (Schank and Abelson, 1975) has focused on extracting scripts from long texts (Chambers and Jurafsky, 2009; Pichotta and Mooney, 2016), with some of that work focusing on recipes (Kiddon et al., 2015; Mori et al., 2014, 2012). Our teachers implicitly learn this script knowledge and reward recipe generators for exhibiting it. 8 Conclusion We introduce the absolute ordering and relative ordering teachers, two neural networks that score a sequence's adherence to discourse structure in long text. 
The teachers are used to compute rewards for a self-critical reinforcement learning framework, allowing a recipe generator to be rewarded for capturing temporal semantics of the cooking domain. Empirical results demonstrate that our teacher-trained generator better models the latent event sequences of cooking recipes, and a human evaluation shows that this improvement is mainly due to maintaining semantic coherence in longer recipes. Acknowledgments This research was supported in part by NSF (IIS-1524371), DARPA under the CwC program through the ARO (W911NF-15-1-0543) and Sam-sung Research. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference for Learning Representations . Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics . 181 Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics 34(1). Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In HLT-NAACL . Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems . Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, and Yejin Choi. 2018. Simulating action dynamics with neural process networks. Proceedings of the 6th International Conference for Learning Representations . Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2 . Association for Computational Linguistics. Kyunghyun Cho, Bart van Merrienboer, Caglar Gul-cehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing . Hal Daume III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction . Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8). Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management . ACM. Chloe Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi. 2015. Mise en place: Unsupervised interpretation of instructional recipes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing . Chloe Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing . Jiwei Li and Eduard H Hovy. 2014. A model of coherence based on distributed sentence representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing . Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. 
Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing . Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2017. Paraphrase generation with deep reinforcement learning. arXiv preprint arXiv:1711.00279 . Chin-Yew Lin. 2004. ROUGE: a package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop . Barcelona, Spain, volume 8. Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2017. Improved image captioning via policy gradient optimization of spider. Proceedings of the 2017 IEEE International Conference on Computer Vision . Ryan Lowe, Michael Noseworthy, Iulian Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics . Shinsuke Mori, Hirokuni Maeta, Yoko Yamakata, and Tetsuro Sasada. 2014. Flow graph corpus from recipe texts. In Proceedings of the Ninth International Conference on Language Resources and Evaluation . Shinsuke Mori, Tetsuro Sasada, Yoko Yamakata, and Koichiro Yoshino. 2012. A machine learning approach to recipe text processing. In Proceedings of the 1st Cooking with Computer Workshop . Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics . Association for Computational Linguistics. Ramakanth Pasunuru and Mohit Bansal. 2017. Reinforced video captioning with entailment rewards. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing . Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference for Learning Representations . Karl Pichotta and Raymond J. Mooney. 2016. Using sentence-level lstm language models for script inference. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics . Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. In Proceedings of the 4th International Conference for Learning Representations . 182 Zhou Ren, Xiaoyu Wang, Ning Zhang, Xutao Lv, and Li-Jia Li. 2017. Deep reinforcement learning-based image captioning with embedding reward. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition . Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition . Stephane Ross, Geoffrey J Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artificial Intelligence and Statistics . Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing . Roger C Schank and Robert P Abelson. 1975. Scripts, plans, and knowledge . Yale University. Abigale See, Peter J. Liu, and Christopher Manning. 2017. Gettothepoint: Summarization with pointer-generatornetworks. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics . Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition . Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the 2015 IEEE Conference on Computer Cision and Pattern Recognition . Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8(3-4). Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing . Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation."
] | [
"objective",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Recent neural network-driven semantic role labeling (SRL) systems have shown impressive improvements in F1 scores.",
"These improvements are due to expressive input representations, which, at least at the surface, are orthogonal to knowledge-rich constrained decoding mechanisms that helped linear SRL models.",
"Introducing the benefits of structure to inform neural models presents a methodological challenge.",
"In this paper, we present a structured tuning framework to improve models using softened constraints only at training time.",
"Our framework leverages the expressiveness of neural networks and provides supervision with structured loss components.",
"We start with a strong baseline (RoBERTa) to validate the impact of our approach, and show that our framework outperforms the baseline by learning to comply with declarative constraints.",
"Additionally, our experiments with smaller training sizes show that we can achieve consistent improvements under low-resource scenarios.",
"Semantic Role Labeling (SRL, Palmer et al., 2010) is the task of labeling semantic arguments of predicates in sentences to identify who does what to whom.",
"Such representations can come in handy in tasks involving text understanding, such as coreference resolution (Ponzetto and Strube, 2006) and reading comprehension (e.g., Berant et al., 2014; Zhang et al., 2020).",
"This paper focuses on the question of how knowledge can influence modern semantic role labeling models.",
"Linguistic knowledge can help SRL models in several ways.",
"For example, syntax can drive feature design (e.g., Punyakanok et al., 2005; Toutanova et al., 2005; Kshirsagar et al., 2015; Johansson and Nugues, 2008, and others), and can also be embedded into neural network architectures (Strubell et al., 2018).",
"In addition to such influences on input representations, knowledge about the nature of semantic roles can inform structured decoding algorithms used to construct the outputs.",
"The SRL literature is witness to a rich array of techniques for structured inference, including integer linear programs (e.g., Punyakanok et al., 2005, 2008), bespoke inference algorithms (e.g., Tackstrom et al., 2015), A* decoding (e.g., He et al., 2017), greedy heuristics (e.g., Ouchi et al., 2018), or simple Viterbi decoding to ensure that token tags are BIO-consistent.",
"By virtue of being constrained by the definition of the task, global inference promises semantically meaningful outputs, and could provide valuable signal when models are being trained.",
"However, beyond Viterbi decoding, it may impose prohibitive computational costs, thus ruling out using inference during training.",
"Indeed, optimal inference may be intractable, and inference-driven training may require ignoring certain constraints that render inference difficult.",
"While global inference was a mainstay of SRL models until recently, today's end-to-end trained neural architectures have shown remarkable successes without needing decoding.",
"These successes can be attributed to the expressive input and internal representations learned by neural networks.",
"The only structured component used with such models, if at all, involves sequential dependencies between labels that admit efficient decoding.",
"In this paper, we ask: Can we train neural network models for semantic roles in the presence of general output constraints, without paying the high computational cost of inference?",
"We propose a structured tuning approach that exposes a neural SRL model to differentiable constraints during the finetuning step.",
"To do so, we first write the output space constraints as logic rules.",
"Next, we relax such statements into differentiable forms that serve as regularizers to inform the model at training time.",
"Finally, during inference, our structure-tuned models are free to make their own judgments about labels without any inference algorithms beyond a simple linear sequence decoder.",
"We evaluate our structured tuning on the CoNLL-05 (Carreras and M`arquez, 2005) and CoNLL-12 English SRL (Pradhan et al., 2013) shared task datasets, and show that by learning to comply with declarative constraints, trained models can make more consistent and more accurate predictions.",
"We instantiate our framework on top of a strong baseline system based on the RoBERTa (Liu et al., 2019) encoder, which by itself performs on par with previous best SRL models that are not en-sembled.",
"We evaluate the impact of three different types of constraints.",
"Our experiments on the CoNLL-05 data show that our constrained models outperform the baseline system by 0 .",
"2 F1 on the WSJ section and 1 .",
"2 F1 on the Brown test set.",
"Even with the larger and cleaner CoNLL-12 data, our constrained models show improvements without introducing any additional trainable parameters.",
"Finally, we also evaluate the effectiveness of our approach on low training data scenarios, and show that constraints can be more impactful when we do not have large training sets.",
"In summary, our contributions are:",
"1. We present a structured tuning framework for SRL which uses soft constraints to improve models without introducing additional trainable parameters.",
"1 2. Our framework outperforms strong baseline systems, and shows especially large improvements in low data regimes.",
"In this section, we will introduce our structured tuning framework for semantic role labeling.",
"In 2.1, we will briefly cover the baseline system.",
"To that, we will add three constraints, all treated as combinatorial constraints requiring inference algorithms in past work: Unique Core Roles in 2.3, Exclusively Overlapping Roles in 2.4, and Frame Core Roles in 2.5.",
"For each constraint, we will discuss how to use its softened version during training.",
"We should point out that the specific constraints chosen serve as a proof-of-concept for the general methodology of tuning with declarative knowledge.",
"For simplicity, for all our experiments, we use the ground truth predicates and their senses.",
"We use RoBERTa (Liu et al., 2019) base version to develop our baseline SRL system.",
"The large number of parameters not only allows it to make fast and accurate predictions, but also offers the capacity to learn from the rich output structure, including the constraints from the subsequent sections.",
"Our base system is a standard BIO tagger, briefly outlined below.",
"Given a sentence s , the goal is to assign a label of the form B-X , I-X or O for each word i being an argument with label X for a predicate at word u .",
"These unary decisions are scored as follows: e = map ( RoBERTa ( s )) (1) v u , a i = f v ( e u ) , f a ( e i ) (2) u,i = f va ([ v u , a i ]) (3) y u,i = g ( u,i ) (4) Here, map converts the wordpiece embeddings e to whole word embeddings by summation, f v and f a are linear transformations of the predicate and argument embeddings respectively, f va is a two-layer ReLU with concatenated inputs, and finally g is a linear layer followed by softmax activation that predicts a probability distribution over labels for each word i when u is a predicate.",
"In addition, we also have a standard first-order sequence model over label sequences for each predicate in the form of a CRF layer that is Viterbi decoded.",
"We use the standard cross-entropy loss to train the model.",
"Before looking at the specifics of individual constraints, let us first look at a broad overview of our methodology.",
"We will see concrete examples in the subsequent sections.",
"Output space constraints serve as prior domain knowledge for the SRL task.",
"We will design our constraints as invariants at the training stage.",
"To do so, we will first define constraints as statements in logic.",
"Then we will systematically relax these Boolean statements into differentiable forms using concepts borrowed from the study of triangular norms (t-norms, Klement et al., 2013).",
"Finally, we will treat these relaxations as regularizers in addition to the standard cross-entropy loss.",
"where the leftand the right-hand sides L ( x ) , R ( x ) respectivelycan be either disjunctive or conjunctive expressions.",
"The literals that constitute these expressions are associated with classification neurons, i.e. , the predicted output probabilities are soft versions of these literals.",
"What we want is that model predictions satisfy our constraints.",
"To teach a model to do so, we transform conditional statements into regularizers, such that during training, the model receives a penalty if the rule is not satisfied for an example.",
"2 To soften logic, we use the conversions shown in Table 1 that combine the product and Godel t-norms.",
"We use this combination because it offers cleaner derivatives make learning easier.",
"A similar combination of t-norms was also used in prior work (Minervini and Riedel, 2018).",
"Finally, we will transform the derived losses into log space to be consistent with cross-entropy loss.",
"Li et al. (2019) outlines this relationship between the cross-entropy loss and constraint-derived regularizers in more detail.",
"Our first constraint captures the idea that, in a frame, there can be at most one core participant of a given type.",
"Operationally, this means that for every predicate in an input sentence s , there can be no more than one occurrence of each core argument (i.e, A core = { A0 , A1 , A2 , A3 , A4 , A5 } ).",
"In 2 Constraint-derived regularizers are dependent on examples, but not necessarily labeled ones.",
"For simplicity, in this paper, we work with sentences from the labeled corpus.",
"However, the methodology described here can be extended to use unlabeled examples as well.",
"which says, for a predicate u , if a model tags the i -th word as the beginning of the core argument span, then it should not predict that any other token is the beginning of the same label.",
"In the above rule, the literal BX is associated with the predicted probability for the label B-X 3 .",
"This association is the cornerstone for deriving constraint-driven regularizers.",
"Using the conversion in Table 1 and taking the natural log of the resulting expression, we can convert the implication in (6) as l ( u, i, X ) : max (cid:18) log BX ( u, i ) min j s,j (cid:54) = i log (1 BX ( u, j )) (cid:19) .",
"Our constraint is universally applied to all words and predicates ( i.e. , i, u respectively) in the given sentence s .",
"Whenever there is a pair of predicted labels for tokens i, j that violate the rule (6), our loss will yield a positive penalty.",
"Error Measurement u To measure the violation rate of this constraint, we will report the percentages of propositions that have duplicate core arguments.",
"We will refer to this error rate as u .",
"We adopt this constraint from Punyakanok et al. (2008) and related work.",
"In any sentence, an argument for one predicate can either be contained in or entirely outside another argument for any other predicate.",
"We illustrate the intuition of this constraint in Table 2, assuming core argument spans are unique and tags are BIO-consistent.",
"Based on Table 2, we design a constraint that says: if an argument has boundary [ i, j ] , then no other argument span can cross the boundary at j .",
"3 We will use BX ( u, i ) to represent both the literal that the token i is labeled with B-X for predicate u and also the probability for this event.",
"We follow a similar convention for the I-X labels.",
"u, i, j s such that j > i, and X A , P ( u, i, j, X ) (cid:94) v s, Y A ( u, X ) (cid:54) =( v, Y ) Q ( v, i, j, Y ) (8) where P ( u, i, j, X ) = BX ( u, i ) IX ( u, j ) IX ( u, j + 1) Q ( v, i, j, Y ) = Q 1 ( v, i, j, Y ) Q 2 ( v, i, j, ) Q 1 ( v, i, j, Y ) = BY ( v, j ) IY ( v, j + 1) Q 2 ( v, i, j, Y ) = BY ( v, i ) IY ( v, i ) IY ( v, j ) IY ( v, j + 1) Here, the term P ( u, i, j, X ) denotes the indicator for the argument span [ i, j ] having the label X for a predicate u and corresponds to the first row of Table",
"2. The terms Q 1 ( v, i, j, Y ) and Q 2 ( v, i, j, Y ) each correspond to prohibitions of the type described in the second and third rows respectively.",
"As before, the literals BX , etc are relaxed as model probabilities to define the loss.",
"By combining the Godel and product t-norms, we translate Rule (8) into: LO ( s ) = (cid:88) ( u,i,j ) s j>i, X A l ( u, i, j, X ) .",
"(9) where, l ( u, i, j, X ) = max (cid:0) 0 , log P ( u, i, j, X ) min v s, Y A ( u, X ) (cid:54) =( v, Y ) log Q ( v, i, j, Y ) (cid:1) P ( u, i, j, X ) = min ( BX ( u, i ) , IX ( u, j ) , 1 IX ( u, j + 1)) Q ( v, i, j, Y ) = min ( Q 1 ( v, i, j, Y ) , Q 2 ( v, i, j, Y )) Q 1 ( v, i, j, Y ) = 1 min ( BY ( v, j ) , IY ( v, j + 1)) Q 2 ( v, i, j, Y ) = max ( BY ( v, i ) , IY ( v, i ) , 1 IY ( v, j ) , 1 IY ( v, j + 1)) Again, our constraint applies to all predicted probabilities.",
"However, doing so requires scanning over 6 axes defined by ( u, v, i, j, X , Y ) , which is computationally expensive.",
"To get around this, we observe that, since we have a conditional statement, the higher the probability of P ( u, i, j, X ) , the more likely it yields non-zero penalty.",
"These cases are precisely the ones we hope the constraint helps.",
"Thus, for faster training and ease of implementation, we modify Equation 8 by squeezing the ( i, j ) dimensions using top-k to redefine LO above as: T ( u, X ) = arg top-k ( i,j ) s P ( u, i, j, X ) (10) LO ( s ) = (cid:88) u s, X A (cid:88) ( i,j ) T ( v, X ) l ( u, i, j, X ) .",
"(11) where T denotes the set of the top-k span boundaries for predicate u and argument label X .",
"This change results in a constraint defined by u , v , X , Y and the k elements of T .",
"Error Measurement o We will refer to the error of the overlap constraint as o , which describes the total number of non-exclusively overlapped pairs of arguments.",
"In practice, we found that models rarely make such observed mistakes.",
"In 3, we will see that using this constraint during training helps models generalize better with other constraints.",
"In 4, we will analyze the impact of the parameter k in the optimization described above.",
"The task of semantic role labeling is defined using the PropBank frame definitions.",
"That is, for any predicate lemma of a given sense, PropBank de-fines which core arguments it can take and what they mean.",
"The definitions allow for natural constraints that can teach models to avoid predicting core arguments outside of the predefined set.",
"where S ( u ) denotes the set of senses for a predicate u , and R ( u, k ) denotes the set of acceptable core arguments when the predicate u has sense k .",
"As noted in 2.2, literals in the above statement can to be associated with classification neurons.",
"Thus the Sense ( u, k ) corresponds to either model prediction or ground truth.",
"Since our focus is to validate the approach of using relaxed constraints for SRL, we will use the latter.",
"This constraint can be also converted into regularizer following previous examples, giving us a loss term LF ( s ) .",
"Error Measurement f We will use f to denote the violation rate.",
"It represents the percentage of propositions that have predicted core arguments outside the role sets of PropBank frames.",
"Loss Our final loss is defined as: LE ( s ) + ULU ( s ) + OLO ( s ) + FLF ( s ) (12) Here, LE ( s ) is the standard cross entropy loss over the BIO labels, and the 's are hyperparameters.",
"In this section, we study the question: In what scenarios can we inform an end-to-end trained neural model with declarative knowledge?",
"To this end, we experiment with the CoNLL-05 and CoNLL-12 datasets, using standard splits and the official evaluation script for measuring performance.",
"To empirically verify our framework in various data regimes, we consider scenarios ranging from where only limited training data is available, to ones where large amounts of clean data are available.",
"Our baseline (described in 2.1) is based on RoBERTa.",
"We used the pre-trained base version released by Wolf et al. (2019).",
"Before the final linear layer, we added a dropout layer (Srivastava et al., 2014) with probability 0 .",
"5 .",
"To capture the sequential dependencies between labels, we added a standard CRF layer.",
"At testing time, Viterbi decoding with hard transition constraints was employed across all settings.",
"In all experiments, we used the gold predicate and gold frame senses.",
"Model training proceeded in two stages:",
"1. We use the finetuned the pre-trained RoBERTa model on SRL with only cross-entropy loss for 30 epochs with learning rate 3 10 5 .",
"2. Then we continued finetuning with the combined loss in Equation 12 for another 5 epochs with a lowered learning rate of 1 10 5 .",
"During both stages, learning rates were warmed up linearly for the first 10% updates.",
"For fair comparison, we finetuned our baseline twice (as with the constrained models); we found that it consistently outperformed the singly finetuned baseline in terms of both error rates and role F1.",
"We grid-searched the 's by incrementally adding regularizers.",
"The combination of 's with good balance between F1 and error 's on the dev set were selected for testing.",
"We refer readers to the appendix for the values of 's.",
"For models trained on the CoNLL-05 data, we report performance on the dev set, and the WSJ and Brown test sets.",
"For CoNLL-12 models, we report performance on the dev and the test splits.",
"Creating SRL datasets requires expert annotation, which is expensive.",
"While there are some efforts on semi-automatic annotation targeting low-resource languages (e.g., Akbik et al., 2016), achieving high neural network performance with small or unlabeled datasets remains a challenge (e.g., Furstenau and Lapata, 2009, 2012; Titov and Klementiev, 2012; Gormley et al., 2014; Abend et al., 2009).",
"In this paper, we study the scenario where we have small amounts of fully labeled training data.",
"We sample 3% of the training data and an equivalent amount of development examples.",
"The same training/dev subsets are used across all models.",
"Table 3 reports the performances of using 3% training data from CoNLL-05 and CoNLL-12 (top and bottom respectively).",
"We compare our strong baseline model with structure-tuned models using all three constraints.",
"Note that for all these evaluations, while we use subsamples of the dev set for model selection, the evaluations are reported using the full dev and test sets.",
"We see that training with constraints greatly improves precision with low training data, while recall reduces.",
"This trade-off is accompanied by a reduction in the violation rates u and f .",
"As noted in 2.4, models rarely predict label sequences that violate the exclusively overlapping roles constraint.",
"As a result, the error rate o (the number of violations) only slightly fluctuates.",
"Table 4 reports the performance of models trained with our framework using the full training set of the CoNLL-05 dataset which consists of 35 k sentences with 91 k propositions.",
"Again, we compare RoBERTa (twice finetuned) with our structure-tuned models.",
"We see that the constrained models CoNLL-05 (3%, 1.1k) Dev P R F1 F1 u o f RoBERTa 2 67.79 72.69 70.15 14.56 23 6.19 +U,F,O 70.40 71.91 71.15 1.0 8.56 20 5.82 WSJ P R F1 F1 u o f RoBERTa 2 70.48 74.96 72.65 13.35 37 NA +U,F,O 72.60 74.13 73.36 0.7 7.46 49 NA Brown P R F1 F1 u o f RoBERTa 2 62.16 66.93 64.45 12.94 6 NA +U,F,O 64.31 65.64 64.97 0.5 5.47 6 NA CoNLL-12 (3%, 2.7k) Dev P R F1 F1 u o f RoBERTa 2 74.39 76.88 75.62 7.43 294 3.23 +U,F,O 75.99 76.80 76.39 0.8 4.37 245 3.01 Test P R F1 F1 u o f RoBERTa 2 74.79 77.17 75.96 6.92 156 2.67 +U,F,O 76.31 76.88 76.59 0.6 4.12 171 2.41 Table 3: Results on low training data ( 3 % of CoNLL-05 and CoNLL-12).",
"consistently outperform baselines on the dev, WSJ, and Brown sets.",
"With all three constraints, the constrained model reaches 88 F1 on the WSJ.",
"It also generalizes well on new domain by outperforming the baseline by 1 .",
"2 points on the Brown test set.",
"This suggests that even with large training data, direct label supervision might not be enough for neural models to pick up the rich output space structure.",
"Our framework helps neural networks, even as strong as RoBERTa, to make more correct predictions from differentiable constraints.",
"Surprisingly, the development ground truth has a 2 .",
"34% error rate on the frame role constraint, and 0 .",
"40% on the unique role constraint.",
"Similar percentages of unique role errors also appear in WSJ and Brown test sets.",
"For o , the oracle has no violations on the CoNLL-05 dataset.",
"The exclusively overlapping constraint ( i.e. o ) is omitted as we found models rarely make such prediction errors.",
"After adding constraints, the error rate of our model approached the lower bound.",
"Note that our framework focuses on the learning stage without any specialized decoding algorithms in the prediction phase except the Viterbi algorithm to guarantee that there will be no BIO violations.",
"What about even larger and cleaner data?",
"The ideal scenario, of course, is when we have the luxury of massive and clean data to power neural network training.",
"In Table 5, we present results on CoNLL-12 which is about 3 times as large as CoNLL-05.",
"It consists of 90 k sentences and 253 k propositions.",
"The dataset is also less noisy with respect to the constraints.",
"For instance, the oracle development set has no violations for both the unique core and the exclusively overlapping constraints.",
"We see that, while adding constraints reduced error rates of u and f , the improvements on label consistency do not affect F1 much.",
"As a result, our best constrained model performes on a par with the baseline on the dev set, and is slightly better than the baseline (by 0 . 1 ) on the test set.",
"Thus we believe when we have the luxury of data, learning with constraints would become optional.",
"This observation is in line with recent results in Li and Srikumar (2019) and Li et al. (2019).",
"baseline?",
"To investigate whether the seemingly saturated performance is from data or from the model, we also evaluate our framework on the original BERT (Devlin et al., 2019) which is relatively less powerful.",
"We follow the same model setup for experiments and report the performances in Table 5 and Table",
"9. We see that compared to RoBERTa, BERT obtains similar F1 gains on the test set, suggesting performance ceiling is due to the train size.",
"In 3, we saw that constraints not just improve model performance, but also make outputs more structurally consistent.",
"In this section, we will show the results of an ablation study that adds one constraint at a time.",
"Then, we will examine the sources of improved F-score by looking at individual labels, and also the effect of the top-k relaxation for the constraint O .",
"Furthermore, we will examine the robustness of our method against randomness involved during training.",
"We will end this section with a discussion about the ability of constrained neural models to handle structured outputs.",
"Constraint Ablations We present the ablation analysis on our constraints in Table 6.",
"We see that as models become more constrained, precision improves.",
"Furthermore, one class of constraints do not necessarily reduce the violation rate for the others.",
"Combining all three constraints offers a balance between precision, recall, and constraint violation.",
"One interesting observation that adding the O constraints improve F-scores even though the o values were already close to zero.",
"As noted in 2.4, our constraints apply to the predicted scores of all labels for a given argument, while the actual decoded label sequence is just the highest scoring sequence using the Viterbi algorithm.",
"Seen this way, our regularizers increase the decision margins on affected labels.",
"As a result, the model predicts scores that help Viterbi decoding, and, also generalizes better to new domains i.e. , the Brown set.",
"Sources of Improvement Table 7 shows label-wise F1 scores for each argument.",
"Under low training data conditions, our constrained models gained improvements primarily from the frequent labels, e.g., A0 A2 .",
"On CoNLL-05 dataset, we found the location modifier ( AM-LOC ) posed challenges to our constrained models which significantly performed worse than the baseline.",
"Another challenge is the negation modifier ( AM-NEG ), where our models underperformed on both datasets, particularly with small training data.",
"When using the CoNLL-12 training set, our models performed on par with the baseline even on frequent labels, confirming that the performance of soft-structured learning is nearly saturated on the larger, cleaner dataset.",
"Impact of Topk Beam Size As noted in 2.4, we used the topk strategy to implement the constraint O .",
"As a result, there is a certain chance for predicted label sequences to have non-exclusive overlap without our regularizer penalizing them.",
"What we want instead is a good balance between coverage and runtime cost.",
"To this end, we analyze the CoNLL-12 development set using the baseline trained on 3% of CoNLL-12 data.",
"Specifically, we count the examples which have such overlap but the regularization loss is 0 .",
"001 .",
"In Table 8, we CoNLL-05 3% CoNLL-05 100% CoNLL-12 3% CoNLL-12 100% RoBERTa 2 +U,F,O RoBERTa 2 +U,F,O RoBERTa 2 +U,F,O RoBERTa 2 +U,F,O A0 81.28 82.11 93.43 93.52 84.99 85.73 92.78 92.81 A1 72.12 73.59 89.23 89.80 78.36 79.67 89.88 89.75 A2 46.50 47.52 79.53 79.73 68.24 69.20 84.93 84.90 A3 39.58 42.11 81.45 81.86 33.26 34.47 72.96 73.24 A4 51.61 51.56 74.60 75.59 56.29 58.38 80.80 80.33 AM-ADV 44.07 47.56 66.67 66.91 55.26 54.93 66.37 66.92 AM-DIR 16.39 18.92 55.26 55.56 36.51 35.81 64.92 64.95 AM-DIS 71.07 70.84 80.20 80.50 76.35 76.40 82.86 82.71 AM-LOC 53.08 51.60 69.02 66.50 59.74 59.94 72.74 73.21 AM-MNR 44.30 44.18 68.63 69.87 56.14 55.67 70.89 71.13 AM-MOD 91.88 91.60 98.27 98.60 95.50 95.76 97.88 98.04 AM-NEG 91.18 88.35 94.06 93.60 93.29 93.05 95.93 95.83 AM-TMP 74.05 74.13 88.24 88.08 79.00 78.78 87.58 87.56 Overall 70.48 71.55 87.33 87.61 76.66 77.45 87.60 87.58 Table 7: Label-wise F1 scores for the CoNLL-05 and CoNLL-12 development sets.",
"see that k = 4 yields good coverage.",
"Robustness to random initialization We observed that model performance with structured tuning is generally robust to random initialization.",
"As an illustration, we show the performance of models trained on the full CoNLL-12 dataset with different random initializations in Table",
"9. CoNLL-12 (100%, 90k) Test F1 Seed1 Seed2 Seed3 avg F1 BERT 2 85.88 85.91 86.13 +U,F,O 86.09 86.07 86.19 0.1 Test F1 Seed1 Seed2 Seed3 avg F1 RoBERTa 2 86.47 86.33 86.45 +U,F,O 86.61 86.48 86.57 0.1 Table 9: F1 scores models trained on the CoNLL-12 data with different random seeds.",
"Can Constrained Networks Handle Structured Prediction?",
"Larger, cleaner data may presumably be better for training constrained neural models.",
"But it is not that simple.",
"We will approach the above question by looking at how good the transformer models are at dealing with two classes of constraints, namely: 1) structural constraints that rely only on available decisions (constraint U ), 2) constraints involving external knowledge (con-straint F ).",
"For the former, we expected neural models to perform very well since the constraint U represents a simple local pattern.",
"From Tables 4 and 5, we see that the constrained models indeed reduced violations u substantially.",
"However, when the training data is limited, i.e. , comparing CoNLL-05 3% and 100% , the constrained models, while reducing the number of errors, still make many invalid predictions.",
"We conjecture this is because networks learn with constraints mostly by memorization.",
"Thus the ability to generalize learned patterns on unseen examples relies on training size.",
"The constraint F requires external knowledge from the PropBank frames.",
"We see that even with large training data, constrained models were only able to reduce error rate f by a small margin.",
"In our development experiments, having larger F tends to strongly sacrifice argument F1, yet still does not to improve development error rate substantially.",
"Without additional training signal in the form of such background knowledge, constrained inference becomes a necessity, even with strong neural network models.",
"Semantic Role Labeling & Constraints The SRL task is inherently knowledge rich; the outputs are defined in terms of an external ontology of frames.",
"The work presented here can be generalized to several different flavors of the task, and indeed, constraints could be used to model the interplay between them.",
"For example, we could revisit the analysis of Yi et al. (2007), who showed that the PropBank A2 label takes on multiple meanings, but by mapping them to VerbNet, they can be disambiguated.",
"Such mappings naturally define constraints that link semantic ontologies.",
"Constraints have long been a cornerstone in the SRL models.",
"Several early linear models for SRL (e.g. Punyakanok et al., 2004, 2008; Surdeanu et al., 2007) modeled inference for PropBank SRL using integer linear programming.",
"Riedel and Meza-Ruiz (2008) used Markov Logic Networks to learn and predict semantic roles with declarative constraints.",
"The work of (Tackstrom et al., 2015) showed that certain SRL constraints admit efficient decoding, leading to a neural model that used this framework (FitzGerald et al., 2015).",
"Learning with constraints has also been widely adopted in semi-supervised SRL (e.g., Furstenau and Lapata, 2012).",
"With the increasing influence of neural networks in NLP, however, the role of declarative constraints seem to have decreased in favor of fully end-to-end training (e.g., He et al., 2017; Strubell et al., 2018, and others).",
"In this paper, we show that even in the world of neural networks with contextual embeddings, there is still room for systematically introducing knowledge in the form of constraints, without sacrificing the benefits of end-to-end learning.",
"Structured Losses Chang et al. (2012) and Ganchev et al. (2010) developed models for structured learning with declarative constraints.",
"Our work is in the same spirit of training models that attempts to maintain output consistency.",
"There are some recent works on the design of models and loss functions by relaxing Boolean formulas.",
"Kimmig et al. (2012) used the ukasiewicz t-norm for probabilistic soft logic.",
"Li and Srikumar (2019) augment the neural network architecture itself using such soft logic.",
"Xu et al. (2018) present a general framework for loss design that does not rely on soft logic.",
"Introducing extra regularization terms to a downstream task have been shown to be beneficial in terms of both output structure consistency and prediction accuracy (e.g., Minervini and Riedel, 2018; Hsu et al., 2018; Mehta et al., 2018; Du et al., 2019; Li et al., 2019).",
"Final words In this work, we have presented a framework that seeks to predict structurally consistent outputs without extensive model redesign, or any expensive decoding at prediction time.",
"Our experiments on the semantic role labeling task show that such an approach can be especially helpful in scenarios where we do not have the luxury of massive annotated datasets.",
"We thank members of the NLP group at the University of Utah for their valuable insights and suggestions; and reviewers for pointers to related works, corrections, and helpful comments.",
"We also acknowledge the support of NSF Cyberlearning-1822877, SaTC-1801446, U.S. DARPA KAIROS Program No.",
"FA8750-19-2-1004, DARPA Communicating with Computers DARPA 15-18-CwC-FP-032, HDTRA1-16-1-0002, and gifts from Google and NVIDIA.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
] | [
"abstain",
"abstain",
"abstain",
"result",
"method",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"result",
"method",
"method",
"result",
"abstain",
"abstain",
"result",
"result",
"objective",
"result",
"result",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"other",
"other",
"other",
"other",
"other"
] |
[
"Sequential information, a.k.a., orders, is assumed to be essential for processing a sequence with recurrent neural network or convolutional neural network based encoders.",
"However, is it possible to encode natural languages without orders?",
"Given a bag of words from a disordered sentence, humans may still be able to understand what those words mean by reordering or reconstructing them.",
"Inspired by such an intuition, in this paper, we perform a study to investigate how order information takes effects in natural language learning.",
"By running comprehensive comparisons, we quantitatively compare the ability of several representative neural models to organize sentences from a bag of words under three typical scenarios, and summarize some empirical findings and challenges, which can shed light on future research on this line of work.",
"Though significant progress has been made, it is still mysterious how humans are able to understand, organize, and generate natural languages.",
"In the field of natural language processing, many efforts have been made to enhance computational models.",
"Recently, recurrent neural networks (Mikolov et al., 2010) and encoder-decoder architectures (Sutskever et al., 2014) with long short-term memory (Hochreiter and Schmidhuber, 1997) and gated recurrent unit (Chung et al., 2014) have demonstrated state-of-the-art performance in sequence modeling and generation.",
"Nowadays, the encoder-decoder architectures have become a widely used approach for sequence-to-sequence tasks such as machine translation (Bah-danau et al., 2015), text summarization (Paulus et al., 2018) and dialogue generation (Serban et al., 2016).",
"Such models generally encode the input sequence into a vector representation using recurrent neural networks (RNNs) (Sutskever et al., Corresponding authors: Dongyan Zhao and Rui Yan. 2014), convolutional neural networks (Gehring et al., 2017) or transformer architectures (Vaswani et al., 2017).",
"The decoder then produces the output sequence step-by-step, conditioned on the encodings of the encoder.",
"Basically, those encoders process information along the sentence sequences, where sequential information is recurrently modeled at each position of the sequences.",
"Thus these models are sensitive to word orders.",
"Moreover, it has been demonstrated that order matters in sequence encoding (Vinyals et al., 2015).",
"Admittedly yes, order information is important for sequences learning and encoding.",
"An interesting question might be that, is it possible to encode natural languages without considering order information?",
"Take a look at an example of word rearrange quizzes for language learners 1 .",
"Given a bag of words from a disordered sentence { the dog James talking sat next to himself to .",
"}, most people can still read with little effort, though disagreement might exist on subtle details as to whether it is the man or the dog that is seated.",
"Inspired by this, it is interesting to explore how and to what extent we can encode natural languages without considering order information.",
"From a computational perspective, we ask: Can we construct an algorithm that is capable of reading a bag of words as robustly as humans do?",
"Our task is to predict the original sentence given a bag of words without orders extracted from a random sentence.",
"This orderless setting is important to characterize the human instinct for understanding languages.",
"The answer to this question also provides insights into many important practical problems: In abstractive text summarization, the summary can be generated according to a bag of extracted key words (Xu et al., 2010); In statistical machine translation, we need to reorder the words or phrases in the target language to get a natural and fluent 1 https://quizlet.com/143171956/arrange-words-and-form-meaningful-sentences-flash-cards/ Normal Input : the dog James talking sat next to himself to .",
"marked in red denote the added noisy words, and words marked in green denote the missing words.",
"sentence (He and Liang, 2011).",
"In dialogue systems, we need systems that are enabled to converse smoothly with people that have troubles in ordering words, such as children, language learners, and speech impaired.",
"In image caption, the caption can be organized with a bag of attribute words extracted from the image (Fang et al., 2015).",
"Moreover, such a model can help non-native speakers of English to write a sentence just from keywords.",
"This bag-to-sentence transformation problem is rather challenging primarily due to three reasons.",
"First, the relationship between words is missing from the input bag of words.",
"To predict the correct ordering, both the meaning of the whole sentence and the words that may become the context of a particular word must be guessed and leveraged.",
"Second, the input bag of words might only be a subset of all the words in a sentence, and there might exist randomly injected words, as shown in Table 1. Last, the correct ordering of the words into a sentence may not be unique, and the model needs to have the flexibility to allow multiple choices of outputs.",
"While much research has been directed into processing sequential text information, there has been far less research regarding the encoding of an unordered bag.",
"A simple approach is based on pooling that takes the maximum value for each dimension of the word embeddings (Qi et al., 2017).",
"This strategy is effective in simple tasks (e.g., sentence classification) but loses much contextual information for sentence organization.",
"(Vinyals et al., 2015) proposes to encode a set through iterative attention on the input items, alike to the memory network.",
"These approaches could obtain an order-invariant representation of the set from a global perspective.",
"However, they are lacking of modeling the semantic dependencies between input items.",
"In addition, the effectiveness of these models on the bag-to-sentence transformation problem is also unknown.",
"order information takes effects in natural language learning for neural models.",
"On the basis of the pooling-based and memory-based approaches, we introduce the self-attention to encode the semantic dependencies between input words without considering order information, so as to enrich individual words with contextual information from different semantic aspects.",
"We systematically compare the ability of different neural models to organize sentences from a bag of words in terms of three typical scenarios shown in Table 1. The contributions of this paper are summarized as follows: We present an empirical study to investigate the ability of neural models to organize sentences from a bag of words.",
"We introduce a bag-to-sentence transformation model based on self-attention, which significantly outperforms existing models in sentence organization tasks.",
"We show some interesting results by thoroughly comparing and analyzing sentence organization under different scenarios ( Normal, Noise, Missing ), which may shed light on future research on this line of work.",
"Pooling is a basic approach to encode sets (or bags), and has been widely used for many tasks, such as 3D shape recognition (Qi et al., 2017), few-shot image classification (Snell et al., 2017).",
"Besides, several studies have explored the capability of attention mechanisms in modeling sets (or bags).",
"Vinyals et al. (2015) proposed to encode a set with multi-hop attention operations.",
"Ilse et al. (2018) proposed to use attention-based weighted sum-pooling for multiple instance learning.",
"Similarly, Yang et al. (2020) proposed an attention-based algorithm to aggregate a deep feature set for multi-view 3D reconstruction.",
"As a new approach to modeling a text sequence, self-attention has been successfully used in many NLP tasks, such as machine translation (Vaswani et al., 2017), text summarization (Paulus et al., 2018) and machine reading comprehension (Wang et al., 2017).",
"However, most studies about self-attention focus on sequence modeling, which ignores the positional invariance of the attention mechanism itself.",
"In perticular, Ma et al. (2018) utilized self-attention to model interactions between the objects in a video, and employed pooling to obtain aggregated features.",
"On this basis of the transformer architecture, Lee et al. (2019) presented an Set Transformer designed to model interactions among elements in the input set.",
"Without considering missing words or noisy words, our task devolves into word ordering problem, which is a fundamental task in natural language generation.",
"Previous, researchers usually employed N-gram based language models (De Gis-pert et al., 2014; Schmaltz et al., 2016), syntactic-based language models (Zhang and Clark, 2011; Liu et al., 2015) or combined models (Zhang et al., 2012; Liu and Zhang, 2015) to solve this problem.",
"More recently, Hasler et al. (2017) proposed a bag-to-sequence model, where the decoder RNN directly attended to the word embeddings.",
"However, all these methods aim at finding the best permutation of a bag of words based on language models, and do not consider how to encode a bag of words.",
"Given a bag of words X = { x 1 , x 2 , , x m } which consists of m tokens, our model will generate a sentence Y = { y 1 , y 2 , , y n } , where n is the length of target sentence.",
"In the normal scenario, the words of X come from a disordered sentence and are the same as Y .",
"While in other two scenarios, the condition no longer holds.",
"To be specific, X contains some noisy words that do not appear in Y for noise scenario, and X lacks some words that should appear in generated sequence for the missing scenario.",
"We can model this using the conditional probability P ( Y | X ) and decompose it with the chain rule.",
"In this paper, we employ encoder-decoder frameworks to address the bag-to-sentence problem.",
"Particularly, the encoder is responsible for learning an order-invariant context representation for the input bag, and the decoder produces the target sentence conditioned on a bag of input words.",
"RNN.",
"Recurrent neural networks typically process information along the word positions of the input sequence, and they have proven to be sensitive to variations of word order to some degree (Vinyals et al., 2015).",
"In this paper, we introduce an RNN with long short-term memory units (LSTMs) as a baseline encoder for a comparison.",
"Formally, the hidden state of RNN at the t -th step h t is calculated by: h t = LSTM ( h t 1 , w t ) , (2) where w t denotes the input word embedding at t -th step.",
"The final hidden state of LSTM is regarded as the context representation of the input bag.",
"Pooling.",
"A simple way to encode a bag without considering order information is the pooling-based approach as inspired by Qi et al. (2017) that summarizes bag information by choosing the maximum value from each dimension of the word embeddings.",
"Formally, given a bag of word embeddings { w i } ni =1 , the context representation of the input bag of words v s can be calculated as: v s = max { w 1 , w 2 , , w n } , (3) Memory.",
"The memory-based approach encodes a bag of words through performing multiple rounds of attention over the word representations, alike to the memory network (Sukhbaatar et al., 2015).",
"Formally, we take the vector representation v s obtained by the pooling-based method as the initial bag representation v 0 s .",
"At the t -th processing round, we use the current bag representation v ts to attend the memory { w 1 , , w n } composed of word embeddings, and compute an attention vector r t through the attention mechanism (Bahdanau et al., 2015), defined as: t,i = exp( g ( v ts , h i )) (cid:80) ni =1 exp( g ( v ts , w i )) , r t = (cid:88) n i =1 t,i w i , (4) where g ( , ) is a function that computes the similarity between w i and v ts , and we employ dot product function in this paper.",
"Then the current bag representation v t s is concatenated with the output of the attention vector r t , and further transforms it through non-linear transformation.",
"where f ( ) is a non-linear mapping function which reduces the input dimension to d e .",
"Following Vinyals et al. (2015), we use an LSTM unit (Hochreiter and Schmidhuber, 1997) (without inputs) as f ( ) .",
"We perform this process for K rounds.",
"The obtained vector v Ks is the final bag representation.",
"We set K as the number of tokens in source bag.",
"Self-attention.",
"Self-attention is a special case of standard attention mechanism (Bahdanau et al., 2015) where each word can attend to (interact with) all words in the input.",
"Unlike RNNs, self-attention can model dependencies among words in the input bag without considering the order information.",
"In this paper, we borrow the idea from the work of neural transformer architecture (Vaswani et al., 2017).",
"The model contains N stacked blocks, each of which mainly composed of a multi-head attention layer and a row-wise feed-forward layer.",
"More compactly, { m 1 , , m n } = MultiHeadAtt ( { w 1 , , w n } ) , (6) { h 1 , , h n } = FFN ( { m 1 , , m n } ) , (7) where m i and h i are the representation for i -th word produced by the multi-head attention layer and the row-wise feed-forward layer respectively.",
"A residual connection (He et al., 2016) and a row-wise normalization (Ba et al., 2016) are applied around each of the multi-head attention layer and feed-forward layer.",
"Based on the representation produced by the self-attention, we further employ pooling-based or memory-based approaches 2 to obtain a global context representation for input bag.",
"We name the full model as AttP when pooling-based approach is adopted, and name it as AttM by using memory-based approach.",
"The decoder acts as a language model to reconstruct the sentence conditioned on the bag representation.",
"To highlight the differences among different encoders, we utilize the same decoder for different encoders.",
"Since the target Y corresponds to a sequence, and has significant vocabulary overlap with the input bags of words, we blend a pointer-based decoder (Vinyals et al., 2015; See et al., 2017), which 2 It is worth noting that the current memory is composed of the word representations output by self-attention layer acts as a language model to enable our model to generate a word from the vocabulary, or to copy words from the input via the pointer mechanism.",
"Particularly, to calculate the context vector c t and pointer probabilities in each decoding step, we take the input word embeddings as the hidden states in poolingand memory-based approaches.",
"In self-attention-based approaches, we take the output representations of the self-attention layer as the hidden states.",
"Our goal is to maximize the output sentence probability given the input bag of words.",
"Therefore, we optimize the negative log-likelihood loss function: J () = 1 D (cid:88) ( x,y ) D log p ( y | x ) , (8) where D is a set of bag-sentence pairs and is the parameters.",
"We construct a large dataset from The Wesbury Lab Wikipedia Corpus 3 (Shaoul, 2010), which is created from the articles in English Wikipedia.",
"We tokenize all articles into sentences using the NLTK package 4 , and replace all numbers with __num__\". We retain experiment samples among the sentences of length between 5 and 20 to focus on the majority case of the training corpus. Finally, we randomly sample 10 million sentences for training, 100 k for validation and 10 k for testing. In the normal scenario, we randomly shuffle the words in each sentence as the input of our model, and the original sentence is the ground truth. Based on the normal scenario, we construct the training data for the noise scenario by randomly introducing some noisy words to the source bag, and construct the training data for the missing scenario by randomly removing some words from the source bag. We also compare the normal scenario of our model on The English Penn Treebank 3 The corpus removes all links and other irrelevant material (e.g., navigation text, etc), and contains about one billion words, over 2 million documents. 4 http://www.nltk.org/api/nltk.tokenize.html BLEU ROUGE-L Perfect Matching Rate (PMR) Word Accuracy (WAcc) Normal Noise Missing Normal Noise Missing Normal Noise Missing Normal Noise Missing Pooling 0.4656 0.4382 0.2636 0.6917 0.6587 0.5470 0.1945 0.1461 0.0426 0.5685 0.5536 0.4437 LSTM 0.4736 0.4327 0.2538 0.7311 0.6761 0.5453 0.2203 0.1542 0.0390 0.5808 0.5563 0.4369 Memory 0.5030 0.4537 0.2664 0.7485 0.6939 0.5607 0.2404 0.1672 0.0450 0.6063 0.5789 0.4520 AttP 0.5740 0.5372 0.2882 0.7860 0.7396 0.5722 0.3014 0.2267 0.0479 0.6613 0.6367 0.4700 AttM 0.5886 0.5433 0.2914 0.7925 0.7465 0.5738 0.3208 0.2355 0.0512 0.6697 0.6461 0.4702 Table 2: Results on the test sets of three scenarios for Wikipedia dataset. We randomly generate noisy words with the number between 1 and half length of the sentence from the vocabulary for each sentence as the input of the the noise scenario. For the missing scenario, random words with number between 1 and half length of the sentence are removed from each sentence. It is worth noting that we randomly shuttle input bags with three different seeds and report the mean score of each metrics for LSTM. data (PTB) (Marcus et al., 1993), which is a widely-used dataset for word ordering task (Schmaltz et al., 2016; Hasler et al., 2017). To facilitate fair comparisons, we use the data preprocessed by (Schmaltz et al., 2016), which consists of 39 , 832 training sentences, 1 , 700 validation sentences and 2 , 416 test sentences. 5.2 Implementation Details For all models, we set the dimension of word embedding as 128 . In the LSTM-based encoder, the dimension of hidden unit is 256 . In the self-attention-based encoder, we set the number of head in Equation (6) as 8 and the hidden size of feed-forward layer in Equation (7) as 256 . All parameters are tuned in the validation set. The vocabulary size is 50 k. We use AdaGrad (Duchi et al., 2011) optimizer on mini-batch of size 32 , with learning rate at 0 . 15 and gradient clipping at 2 . In decoding, we set the beam size as 5 for all models. It is worth noting that we do not compare with the results derived from the modified beam search method proposed in Hasler et al. (2017) since we focus on investigating the capability of a model to encode a bag of words in this paper. So we compare all methods under standard beam search method (with a beam size of 5 ) in our experiment, to highlight the differences among different encoders. 5.3 Evaluation Metrics In our settings, a shuffled sentence sometimes may correspond to multiple reasonable outputs. 
"Hence, we employ four automatic evaluation metrics to evaluate the quality of a generated sentence from different aspects.",
"PMR (Perfect Matching Ratio) measures the ratio of instances that are exactly the same as the ground truth.",
"BLEU (Papineni et al., 2002) measures the quality of generated sentences by computing overlapping lexical units (e.g., unigrams, bigrams) with the reference sentences.",
"ROUGE-L (Lin, 2004) measures the longest common subsequence (LCS) between the reference sentence and the generated sentence.",
"WAcc (Word Accuracy) is the negative word error rate (WER) (Mangu et al., 2000); it measures the edit distance between the generated sentence and the reference sentence (higher is better).",
"Besides, we also conduct human evaluations to further analyze the generated results and categorize the error cases.",

Table 3: Results of the word ordering task on the PTB dataset (beam size = 5). * denotes results reported in Hasler et al. (2017).

          BLEU     ROUGE-L  WAcc     PMR
N-GRAM*   0.2330   --       --       --
RNNLM*    0.2450   --       --       --
Pooling   0.3118   0.5916   0.4105   0.0863
LSTM      0.3140   0.5875   0.3873   0.0850
Memory    0.3328   0.6053   0.4089   0.0941
AttP      0.3469   0.6169   0.4297   0.1013
AttM      0.3489   0.6194   0.4304   0.1059

"5.4 Overall Results",
"Table 2 illustrates the performance of all models in the three scenarios on the Wikipedia dataset.",
"Firstly, we find that Pooling shows the worst performance among all models.",
"This is because directly applying a pooling operation to word embeddings loses much crucial context information.",
"Secondly, although LSTM processes the information sequentially, it achieves better results than Pooling in the normal and noise scenarios.",
"A possible explanation might be that the parameters in the LSTM enable the model to retain some bag information.",
"In particular, the self-attention-based approaches (i.e., AttP and AttM) show the best results, and outperform Memory by a large margin in terms of all evaluation metrics, especially in the normal and noise scenarios.",
"[Figure 1: Performance in terms of BLEU and PMR when varying the number of noisy or missing words in steps of 2; noisy words are randomly picked from the vocabulary.]",
"[Figure 2: Performance in terms of BLEU and PMR when varying the number of words in the source bag.]",
"The phenomenon might be ascribed to the fact that Memory encodes the bag of words by considering each word individually, while self-attention captures the semantic dependencies among the input words from different semantic aspects, leading to a more robust bag representation.",
"Additionally, AttM shows better performance than AttP, indicating that the memory-based fusion method is more useful than the pooling-based fusion method.",
"In addition, we notice that the performance of all models declines when noisy words are introduced or when some words are removed from the input bag, but much more so when words are removed.",
"This result may be explained by the fact that organizing a sentence from a partially observable bag of words is more challenging, since it requires background knowledge to predict the meaning of the bag and then fill in the missing words.",
"On the other hand, in the noise scenario, most noisy words have a small impact on learning the context representation of a bag, and all target words can still be decoded (or generated) via copy operations.",
"We further run experiments on the PTB dataset, which is a benchmark for the word ordering task.",
"The results are shown in Table 3.",
"We can observe that the various neural models outperform the traditional N-GRAM model and RNNLM.",
"Among the neural models, the results are consistent with those on Wikipedia.",
"5.5 Discussions",
"The impact of the number of noisy/missing words: To better understand the robustness of different models under the noise and missing scenarios, we show how the performance changes as the number of noisy or missing words changes in Figure 1.",
"As seen, the approaches based on self-attention always outperform the other approaches in both scenarios, and more significantly so in the noise scenario.",
"Besides, the performance of all models drops as the number of missing or noisy words increases, but more sharply in the missing scenario.",
"The results imply that: 1) in bag-to-sentence transformation problems, the capability of neural models to resist noisy words is better than their capability to resist missing words; 2) it is still challenging for neural models to handle bags where some information is missing.",
"The impact of bag size: We further study how the size of the input bag influences the performance of different models.",
"Figure 2 illustrates how the performance of AttM changes with respect to bags with different numbers of words in the normal scenario, where we bin test examples into buckets.",
"We observe a similar trend for all models: they first remain stable when the bag size is less than 8, and then decrease monotonically as the bag size keeps increasing.",
"The reason might be that when only a few words are available in the input bag, the model can well capture the meaning of the whole sentence; but when the bag becomes large enough, the semantic combination of words becomes more complicated and the meaning of the target sentence is harder to grasp.",
"Besides, self-attention-based models always achieve the best performance, which is consistent with the results in Table 2.",
"Multiple plausible outputs: Actually, for the bag-to-sequence task applied to language, a bag of words may sometimes correspond to multiple reasonable and grammatical outputs.",
"Such a phenomenon is similar to response generation in dialogue systems, where several responses can be reasonable.",

Table 4: Generation examples with 3 different results via beam search in our AttM under the normal scenario. The last column shows the log generation probability log p(y | x) of each beam candidate.

Case-1 Input: . largest animals bears the they in the land also are only native taiwan and
  Reference: they are also the largest land animals and the only native bears in taiwan .
  Beam-1: they are also the only native animals and the largest land bears in taiwan .   -0.2602
  Beam-2: they are also the only native land animals and the largest bears in taiwan .   -0.2708
  Beam-3: they are also the largest land animals and the only native bears in taiwan .   -0.3183
Case-2 Input: a , engineering there . time mechanical chairman long he for served , as of
  Reference: there he served , for a long time , as chairman of mechanical engineering .
  Beam-1: there , he served as chairman of mechanical engineering , for a long time .    -0.0797
  Beam-2: there , he served for a long time , as chairman of mechanical engineering .    -0.0882
  Beam-3: for a long time , there , he served as chairman of mechanical engineering .    -0.2041
Case-3 Input: their cuddy again however . sends interrupts and , exchange away ali
  Reference: however , cuddy interrupts their exchange again and sends ali away .
  Beam-1: however , ali interrupts their exchange again and sends cuddy away .           -0.1829
  Beam-2: however , cuddy interrupts their exchange again and sends ali away .           -0.2116
  Beam-3: however , cuddy interrupts their exchange and sends ali away again .           -0.2187

"Table 4 shows three generated results of AttM (the strongest model) obtained through beam search.",
"We can see that all generated sentences are grammatical and reasonable.",
"In case-1, the objects animals and land bears are exchangeable in terms of syntax; both native and largest can describe these objects.",
"Our model prefers the only native animals and the largest land bears.",
"Since our model is a conditional language model learned from the training corpus, and the decoder reconstructs a sentence conditioned on the representation of the input bag of words.",
"The joint probability of sentence-1 is larger than sentence-2.",
"In case-2, for a long time and there are adverbials, and are 5-8 9-12 13-16 17-20 # word in bag 0.25 0.50 0.75 BLEU AttMHuman Missing 2 Missing 1 Normal Noise 1 Noise 2 0.1 0.2 0.3 0.4 0.5 BLEU AttM Human Figure 3: Performance of the neural model and human in terms of different scenarios.",
"position variable.",
"However, the meaning of all generated sentences remains the same.",
"In case-3, both ali and cuddy are names, thus they are undistinguishable in this situation.",
"Our model assigns a higher probability to ali interrupts their exchange again and sends cuddy away.",
"Despite the lack of order information, neural models can still organize all possible sentences through beam search.",
"Neural Models Vs. Human.",
"We are also curious about the ability of humans to organize sentences from a bag of words.",
"We first binned the test set of the normal scenario into four buckets according to the size of the input bag, and then randomly selected 40 samples from each bucket.",
"We invited humans to organize the target sentence regarding the input bag using crowd-sourcing.",
"Each bag was randomly presented to 3 judges and we retain the answer with the highest BLEU score.",
"Figure",
"3(a) illustrates the BLEU score of humans and the most competitive model AttM across different bag sizes.",
"We observe that both the performance of humans and AttM become worse with the increase of the bag size, which is consistent with the result in Figure 2. Besides, AttM always shows better performance than human, but the performance gap Annotated Types Ratio Synonymous(30%) Exactly generated 16% Two adverbials are exchanged 5% Two coordinate clauses are exchanged 7% Other reasons 2% Non-synonymous(57%) The subject and object are exchanged 5% The logic is unreasonable.",
"becomes smaller as the bag size decreases.",
"This result indicates that humans are better at recognizing small bags than large bags.",
"Besides, we also study how noisy words and missing words impact the performance of humans and neural models.",
"Based on the above test set randomly selected from the normal scenario, we randomly introduced 1 or 2 noisy words to the source bag denoting as noise-1, noise-2 respectively, and randomly removed 1 or 2 words from the source bag, denoting as missing-1, missing-2 respectively.",
"We also invited humans to organize a sentence regarding the input bag using crowd-sourcing.",
"Figure",
"3(b) presents the results of each test set.",
"We summarize our observations as follows: (1) Both the performance of human and AttM get worse when noisy words are introduced or some words are removed; (2) Compared with neural models, humans are more robust to noisy words and missing word in sentence organization; (3) The performance AttM is significantly better than humans, but becomes comparable with humans when 2 words are randomly removed from the input bag.",
"The results imply that humans have a more strong background knowledge of language to guess the meaning of the target sentence and complete the cloze test.",
"Error analysis.",
"To further analyze the quality of the generated sentence and in which case our model fails to recover the original sentence, we invite four educated annotators to judge the quality of 100 randomly sampled sentences 5 generated by AttM .",
"Annotators were asked to judge whether a generated sentence is grammatical and the meaning of a generated sentence is the same as the ground truth.",
"We can find that 87 % of generated sentences are grammatical and 30 % of sentences share the same meaning with the ground-truth.",
"Among those grammat-5 We randomly select samples with a bag size greater than or equal to 10 since they contain more error cases.",
"ical and synonymous samples, 46 .",
"7 % ( 14 / 30 ) of sentences are not exactly the same with the ground truth in syntax.",
"There are two main types of paraphrase: the position of adverbials is exchanged or the position of coordinate clauses is exchanged.",
"Among those grammatical and non-synonymous samples, the logic of the majority sentences is unreasonable due to the position exchange of adverbials or coordinate clauses, and unreasonable combinations of semantic units.",
"Besides, the semantics of some sentences are changed because of the exchange of the subject and the object.",
"Attention visualization.",
"Figure 4 shows the visualization of attention weights of different heads in the 5-th block from the self-attention layer.",
"We can observe that self-attention can capture combinatorial relations between the input words from different semantic aspects.",
"For instance, cohen shows a strong correlation with larry in both heatmaps since Larry Cohen is the name of a famous director.",
"Moreover, both was and by attend to di-rected and written, composing the phrase was written (directed) by.",
"Such combinatorial relations can make the word representation more informative, which contributes to the representation learning of the bag of words.",
"Additionally, we observe that almost all words but itself demonstrate weak correlations with the noisy word lysander in both heatmaps, demonstrating the advantages of our model to tolerate noisy words.",
"In this paper, we present an empirical study to investigate the ability of neural models to organize sentences from a bag of words under three typical scenarios.",
"We conclude our discussion with the following findings: Self-attention is effective to capture the semantic dependencies between words in the input bag and shows competitive performance in bag-to-sentence transformation.",
"Neural models have a certain degree of capability to organize a sentence from a bag of words.",
"However, it is still challenging for neural models to handle large bags or the bags where some information is missing.",
"Compared with humans, neural models show a better capability to organize sentences from a bag of words, especially in terms of large bags.",
"However, the performance of humans is more robust to noisy words or missing words than neural models.",
"We would like to thank the anonymous reviewers for their constructive comments.",
"This work was supported by the National Science Foundation of China (NSFC No. 61876196) and the National Key Research and Development Program of China (No. 2020AAA0105200).",
"Rui Yan is supported as a Young Fellow of Beijing Institute of Artificial Intelligence (BAAI)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Thanks to the effectiveness and wide availability of modern pretrained language models (PLMs), recently proposed approaches have achieved remarkable results in dependency-and span-based, multilingual and cross-lingual Semantic Role Labeling (SRL).",
"These results have prompted researchers to investigate the inner workings of modern PLMs with the aim of understanding how, where, and to what extent they encode information about SRL.",
"In this paper, we follow this line of research and probe for predicate argument structures in PLMs.",
"Our study shows that PLMs do encode semantic structures directly into the contextualized representation of a predicate, and also provides insights into the correlation between predicate senses and their structures, the degree of transferability between nominal and verbal structures, and how such structures are encoded across languages.",
"Finally, we look at the practical implications of such insights and demonstrate the benefits of embedding predicate argument structure information into an SRL model.",
"Semantic Role Labeling (SRL) is often defined informally as the task of automatically answering the question Who did What to Whom, Where, When and How? (Mrquez et al., 2008) and is, therefore, thought to be a fundamental step towards Natural Language Understanding (Navigli, 2018).",
"Over the past few years, SRL has started to gain renewed traction, thanks mainly to the effectiveness and wide availability of modern pretrained language models (PLMs), such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and BART (Lewis et al., 2020).",
"Current approaches have, indeed, attained impressive results on standard evaluation benchmarks for dependencyand span-based, multilingual and cross-lingual SRL (He et al., 2019; Li et al., 2019; Cai and Lapata, 2020; Conia and Navigli, 2020; Blloshmi et al., 2021; Conia et al., 2021).",
"Despite the remarkable benefits provided by the rich contextualized word representations coming from PLMs, the novelties introduced in recent state-of-the-art models for SRL revolve primarily around developing complexities on top of such word representations, rather than investigating what happens inside a PLM.",
"For example, the SRL systems of He et al. (2019) and Conia and Navigli (2020) take advantage only of BERT's uppermost hidden layers to build their input word representations.",
"However, the revolution that PLMs have sparked in numerous areas of Natural Language Processing (NLP) has motivated researchers in the community to investigate the inner workings of such models, with the aim of understanding how, where, and to what extent they encode information about specific tasks.",
"This research has revealed that different layers encode significantly different features (Tenney et al., 2019; Vulic et al., 2020).",
"In perhaps one of the most notable studies in this direction, Tenney et al. (2019) demonstrated empirically that BERT re-discovers the classical NLP pipeline, highlighting that the lower layers tend to encode mostly lexical-level information while upper layers seem to favor sentence-level information.",
"Although recent analyses have already provided important insights into which layers of a PLM are more relevant for SRL and how their relative importance is affected by the linguistic formalism of choice (Kuznetsov and Gurevych, 2020), not only do these analyses treat SRL as an atomic task but they also do not explore taking advantage of their insights to improve current state-of-the-art SRL systems.",
"Indeed, the SRL pipeline is usually divided into four main steps: predicate identification and disambiguation, and argument identification and classification.",
"To address this gap, in this paper we therefore take an in-depth look at how predicate senses and their predicate argument struc-4622 tures (PASs) are encoded across different layers of different PLMs.",
"On the one hand, we provide new insights into the capability of these models to capture complex linguistic features, while on the other, we show the benefits of embedding such features into SRL systems to improve their performance.",
"Our contributions can be summarized as follows: We probe PLMs for PASs: do PLMs encode the argument structure of a predicate in its contextual representation?",
"We show that, even though a PAS is defined according to a predicate sense, senses and argument structures are encoded at different layers in PLMs; We demonstrate empirically that verbal and nominal PASs are represented differently across the layers of a PLM; Current SRL systems do not discriminate between nominal and verbal PASs: we demonstrate that, although there exists some degree of transferability between the two, an SRL system benefits from treating them separately; We find that PAS information is encoded similarly across two very different languages, English and Chinese, in multilingual PLMs; We corroborate our findings by proposing a simple approach for integrating predicate-argument structure knowledge into an SRL architecture, attaining improved results on standard gold benchmarks.",
"We hope that our work will contribute both to the understanding of the inner workings of modern pretrained language models and to the development of more effective SRL systems.",
"We release our software for research purposes at https://github.",
"com/SapienzaNLP/srl-pas-probing .",
"Probing pretrained language models.",
"The unprecedented capability of modern PLMs to provide rich contextualized input representations took the NLP community by storm.",
"Alongside the rising wave of successes collected by PLMs in an ever increasing number of areas, researchers started to question and investigate what happens inside these models and what they really capture, probing for knowledge and linguistic properties (Hewitt and Manning, 2019; Chi et al., 2020; Vulic et al., 2020).",
"This body of work quickly attracted increasing attention and grew to become a field of study with a name of its own: BERTology (Rogers et al., 2020).",
"Probing a PLM usually consists in defining a very precise task (e.g., identifying whether two words are linked by a syntactic or semantic relation), and then in designing and training a simple model, called a probe , to solve the task using the contextualized representations provided by the PLM.",
"The idea is to design a probe that is as simple as possible, often consisting of a single-layer model: if the probe is able to address the task, then it must be thanks to the contextual information captured by the PLM as the expressiveness of the probe itself is limited by its simplicity.",
"One could argue that some complex relations may require a non-linear probe (White et al., 2021) which can reveal hidden information as long as it is accompanied by control experiments (Hewitt and Liang, 2019) to verify that it is still extracting information from the underlying PLM, rather than merely learning to solve the probing task.",
"Over the past few years, these probing techniques have been used to great effect and revealed that PLMs have been rediscovering the classical NLP pipeline (Tenney et al., 2019), and that they often encode distances between syntactic constituents (Hewitt and Liang, 2019), lexical relations (Vulic et al., 2020) and morphology (Chi et al., 2020), inter alia .",
"Probing techniques for SRL.",
"As in several other fields of NLP, recent studies have aimed to shed some light on how, where and to what extent PLMs encode information relevant to SRL.",
"Among others, Tenney et al. (2019) devised an edge probing mechanism aimed at ascertaining the capability of BERT to identify which semantic role ties a given predicate to a given argument span, and showed that this task is solved mainly by the middle layers of BERT.",
"Toshniwal et al. (2020) proposed and compared several techniques for better combining the contextualized representations of a PLM, finding that applying max pooling or performing a weighted average are two robust strategies for SRL.",
"More recently, Kuznetsov and Gurevych (2020) designed a probe to analyze how different linguistic ontologies essential to the task in that they define predicate senses and semantic roles explicitly require features that are encoded at different layers of a PLM.",
"In this paper, we follow the line of research laid out by the afore-4623 mentioned work, probing PLMs with the objective of understanding where and to what extent they encode a predicate argument structure into the contextualized representation of a predicate.",
"Recent advances in SRL.",
"Thanks to their effectiveness, PLMs are now the de facto input representation method in SRL (He et al., 2019; Li et al., 2019; Conia and Navigli, 2020; Blloshmi et al., 2021).",
"Recently proposed approaches have achieved impressive results on several gold benchmarks (Hajic et al., 2009; Pradhan et al., 2012), both in span-based and in dependency-based SRL, but also in multilingual and cross-lingual SRL, even though there still seems to be a significant margin for improvement in out-of-domain settings.",
"The innovations put forward by such approaches, however, have mainly focused on architectural novelties built on top of PLMs: Cai et al. (2018) proposed the first end-to-end architecture; He et al. (2019) and Cai and Lapata (2019) successfully exploited syntax in multilingual SRL; Marcheggiani and Titov (2020) took advantage of GCNs to capture distant semantic relations; Conia and Navigli (2020) devised a language-agnostic approach to bridge the gap in multilingual SRL; Blloshmi et al. (2021) and Paolini et al. (2021) tackled the task as a sequence generation problem; Conia et al. (2021) introduced a model to perform cross-lingual SRL across heterogeneous linguistic inventories.",
"However, if we look back at past work, it is easy to realize that we lack a study that provides an in-depth look into PLMs and a hint at how to better exploit them in future SRL systems.",
"As mentioned above, some studies have already investigated how semantic knowledge is distributed among the inner layers of current PLMs, finding that information useful for SRL is mainly stored in their middle layers (Tenney et al., 2019).",
"However, such studies have considered SRL as an atomic task, while instead the SRL pipeline can be thought of as being composed of four different subtasks:",
"1. Predicate identification , which consists in identifying all those words or multi-word expressions that denote an action or an event in the input sentence;",
"2. Predicate sense disambiguation , which requires choosing the most appropriate sense or frame for each predicate identified, as the same predicate may denote different meanings or define different semantic scenarios depending on the context;",
"3. Argument identification , which consists in selecting the parts of the input text that are semantically linked as arguments to an iden-tified and disambiguated predicate;",
"4. Argument classification , which is the task of determining which kind of semantic relation, i.e., semantic role, governs each predicate-argument pair.",
"For our study, it is important to note that, in many popular ontologies for SRL, predicate senses or frames are often tightly coupled to their possible semantic roles.",
"In other words, the set of possible semantic roles that can be linked to a predicate p is defined according to the sense or frame of p .",
"Hereafter, given a predicate p , we refer to its set of possible semantic roles as the roleset of p .",
"For example, the predicate love as in He loved everything about her belongs to the FrameNet (Baker et al., 1998) frame experiencer_focused_emotion which defines a roleset composed of {Experiencer, Content, . . . , Degree}.",
"The same predicate sense has different rolesets in other ontologies, for example {ARG0 (lover), ARG1 (loved)} in the English PropBank (Palmer et al., 2005) and {Experiencer, Stimulus, . . . , Cause} in VerbAtlas (Di Fabio et al., 2019).",
"Since rolesets are often defined according to predicate senses, it is interesting to investigate whether current pretrained language models store important features about senses and rolesets in their hidden layers.",
"To this end, we formulate two simple probing tasks: Sense probing , which consists in predicting the sense s of a predicate p from the contextual vector representation x p of p , where x p is obtained from a pretrained language model.",
"Roleset probing , which consists in predicting the semantic roles { r 1 , r 2 , . . . , r n } that appear linked to a predicate p from its contextual representation x p , where x p is obtained from a pretrained language model.",
"For the choice of x p , we compare four different options: 4624 Random: initializing the weights of the language model at random provides a simple control baseline to attest the ability of a probe to learn the probing task, i.e. learning to associate random inputs to correct labels; Static: x p is the input embedding of the pretrained language model corresponding to p , e.g., the non-contextual representation before the Transformer layers in BERT.",
"1 Top-4: x p is the concatenation of the topmost four hidden layers of the language model: this is the configuration used in some of the recently proposed approaches for full SRL systems (Conia and Navigli, 2020); W-Avg: x p is the weighted average of all the hidden layers of the language model, where the weights for each layer are learned during training (the larger the weight the more important its corresponding layer is for the probing task).",
"For each probing task, we train 2 two simple probes, a linear classifier and a non-linear 3 classifier, on the verbal predicate instances of the English training datasets provided as part of the CoNLL-2009 shared task for dependency-based SRL (Hajic et al., 2009).",
"Results on sense probing.",
"Table 1 reports the results of our linear and non-linear probes on predicate sense disambiguation when using different types of input representations x p , namely, Static, Random, Last-4 and W-Avg, of an input predicate p in context.",
"The Random baseline is able to disambiguate well (84.8% in Accuracy using BERT-base-cased), which is, however, unsurprising since CoNLL-2009 is tagged with PropBank labels and most of the predicates are annotated with their first sense (e.g., buy.01 , sell.01 ).",
"Interestingly, static representations from all four language models do 1 In case of a predicate composed of multiple subtokens, x p is the average of the vector representations of its subtokens.",
"2 We train each probe for 20 epochs using Adam (Kingma and Ba, 2015) as the optimizer with a learning rate of 1e-3.",
"As is customary in probing studies, the weights of the pretrained language models are kept frozen during training.",
"We use the pretrained language models made available by Huggingface's Transformers library (Wolf et al., 2020).",
"3 We use the Swish activation function (Ramachandran et al., 2018) for our non-linear probes.",
"not contain much more information about predicate senses than random representations.",
"Using the topmost four hidden layers, instead, provides a substantial improvement over static representations for all language models (e.g., +6% in Accuracy for BERT-base-cased), lending credibility to the fact that context is key for the disambiguation process.",
"Most notably, the best representation for the sense probing task is consistently obtained by performing a weighted average of all the hidden layers of the language model.",
"This shows that important predicate sense information is not stored only in the topmost hidden layers and, therefore, also hints at the possibility that state-of-the-art architectures, such as those of He et al. (2019) and Conia and Navigli (2020), do not exploit pretrained language models to their fullest.",
"Finally, it is interesting to note that linear and non-linear probes obtain similar results, showing that sense-related information can easily be extracted without the need for a complex probe.",
"Results on roleset probing.",
"Table 2 reports the results on roleset identification obtained by our linear and non-linear probes when using different types of input representations x p , namely, Static, Random, Top-4 and W-Avg, of an input predicate p in context.",
"For this task, we measure the performance of a probe in terms of micro-averaged F1 score, taking into account partially correct predictions, e.g., the system is partially rewarded for predicting {ARG0, ARG1} instead of {ARG0, ARG2}.",
"As is the case for sense probing, our simple Random baseline is able to identify the correct 4625 roleset for a predicate in context with a satisfactory performance (72.8% in F1 score using BERT-base-cased).",
"Indeed, most predicates have at least one argument tagged with either ARG0 or ARG1, which in PropBank usually correspond to agentive and pa-tientive proto-roles, respectively; we hypothesize that the Random probe merely learns to bias its predictions towards these very common semantic roles.",
"Differently from in the sense probing task, the non-linear probe seems to perform better and achieve higher scores than the linear one.",
"However, this does not mean that roleset-related features are stored non-linearly in PLMs.",
"Indeed, one can notice that the random non-linear probe also performs better than its linear counterpart, suggesting that the higher score is due to the greater expressiveness of the probe, which learns the task rather than extracting information from the underlying PLM, i.e., the selectivity (Hewitt and Liang, 2019) of a non-linear probe is not greater than that of a linear probe in this task.",
"Despite the fact that the roleset probing task is more difficult than the sense probing one, we can observe a similar trend in the results: the Top-4 probe is substantially better than the Static probe, but W-Avg consistently outperforms Top-4, strongly suggesting that future approaches will need to use all the layers to take full advantage of the knowledge encoded within PLMs.",
"We stress that not exploiting all the inner layers of a PLM is an illogical choice, since the cost of computing a weighted average of their hidden representations is negligible compared to the overall computational cost of a Transformer-based architecture.",
"On the correlation between senses and rolesets.",
"Thus far, we have seen empirical evidence that PLMs encode important features about predicate senses and their rolesets across all their hidden layers, not just the topmost ones often used in the literature by current models for SRL.",
"However, one may wonder how such features are distributed across these hidden layers.",
"As we have already discussed above, predicate senses and their rolesets are tightly coupled: do PLMs distribute sense and roleset features similarly over their inner layers?",
"To answer this question, we resort to the W-Avg probe we introduced above.",
"Indeed, its peculiarity is that it learns to assign a different weight to each hidden layer of a PLM: in order to minimize the training loss, the W-Avg probe will assign a larger weight to those layers that are most beneficial, i.e., BERT RoBERTa m-BERT XLM-RL i n e a r Random 72.8 72.8 Static 75.1 75.3 Top-4 85.3 85.3 W-Avg 85.7 86.1 N on L i n e a r Random 75.9 75.9 75.8 75.7 Static 76.3 76.5 76.2 76.3 Top-4 89.2 88.8 88.0 88.9 W-Avg 89.4 89.3 88.8 89.1 Table 2: Results on roleset probing in terms of F1 Score (%) for the Random, Static, Top-4 and W-Avg probes using different pretrained language models, namely, BERT (base-cased), RoBERTa (base), multilingual BERT (base) and XLM-RoBERTa (base).",
"to those layers that express features that are more relevant for the probing task.",
"Therefore, we extract such layer weights learned by our probes for the two tasks we are studying predicate sense disambiguation and roleset identification and compare these learned weights, as shown in Figure 1 (top, blue charts).",
"Interestingly, and perhaps surprisingly, the W-Avg probe learns a different weight distribution for the two probing tasks, even though rolesets are often defined on the basis of predicate senses in many popular ontologies for SRL.",
"We can observe that predicate sense features are encoded more uniformly across the hidden layers of BERT or, equivalently, that the probe assigns similar weights to each hidden layer, slightly preferring the topmost ones (Figure 1, top-left).",
"However, this is not the case for the roleset probing task, in which the probe mostly relies on the hidden layers going from the 6th to the 10th, almost disregarding the bottom and top ones.",
"Furthermore, we can observe the same negative correlation within the distributions of the layer weights learned for senses and rolesets when using RoBERTa, albeit the divergence is slightly less accentuated (Figure 1, top-right).",
"One aspect that is often overlooked when designing and proposing novel architectures for SRL is that not all predicates are verbs.",
"In English, it is easy to find examples of nouns that evoke or imply a predication, such as producer , driver , and writer .",
"Most common nominal predicates are verb-derived or deverbal as their roleset is derived from their corresponding verbal predicates.",
"This is why, per-4626 2 4 6 8 10 12 0 5 10 15 20 Sense Roleset PLM layers RoBERTa 2 4 6 8 10 12 0 5 10 15 20 25 30 35 Sense Roleset PLM layers BERTR e l a t i v e i m p o r t a n c e ( % ) 2 4 6 8 10 12 0 2 4 6 8 10 12 14 PLM layers R e l a t i v e i m p o r t a n c e ( % ) Sense Roleset Sense Roleset 2 4 6 8 10 12 0 2 4 6 8 10 12 14 16 PLM layers R e l a t i v e i m p o r t a n c e ( % ) V e r b P r e d i c a t e s N o un P r e d i c a t e s Figure 1: Relative importance (%) of each layer of BERT (left) and RoBERTa (right) for sense probing and roleset probing.",
"haps, current state-of-the-art approaches do not distinguish between verbal and nominal predicates.",
"4 However, nominal predicates also possess peculiarities that do not appear in their verbal counterparts; for example, a nominal predicate can be its own argument, e.g., writer is the agent itself of the action 4 We note that, in general, languages English included also possess, sometimes in extensive quantities, predicates that are neither verbal nor nominal.",
"For example, Japanese prominently features adjectival predicates.",
"We take this opportunity to investigate how nominal predicate senses and their rolesets are encoded by PLMs in their inner layers.",
"We train a W-Avg probe on the sense and roleset probing tasks, focusing only on the nominal predicate instances in CoNLL-2009.",
"Figure 1 (bottom, green charts) shows the weights learned for the sense and roleset probing tasks when using BERT (bottom-left) and RoBERTa (bottom-right): we can immediately observe that, differently from verbal predicates, the weight distributions learned for nominal senses and their rolesets follow the same trend in both PLMs.",
"In other words, despite the fact that most nominal predicates are verb-derived, their information is encoded dissimilarly and distributed across different layers compared to those of verbal predicates.",
"We confirm our hunch by evaluating the ability of a W-Avg probe trained on roleset identification for verbal predicates only to also perform roleset identification for nominal predicates in a zero-shot fashion, and vice versa.",
"Although, from a first glance at the results reported in Table 3, our simple model seems to be able to perform nominal roleset identification after being trained only on verbal 4627 XLM-R m-BERT E n g li s h v e r b a l p r e d i c a t e s C h i n e s e v e r b a l p r e d i c a t e s 2 4 6 8 10 12 0 5 10 15 20 25 30 35 Sense Roleset PLM layers R e l a t i v e i m p o r t a n c e ( % ) 2 4 6 8 10 12 0 5 10 15 20 Sense Roleset 2 4 6 8 10 12 0 5 10 15 20 25 30 35 Sense Roleset PLM layers R e l a t i v e i m p o r t a n c e ( % ) 2 4 6 8 10 12 0 5 10 15 20 25 Sense Roleset PLM layers Figure 2: Relative importance (%) of each hidden layer of multilingual BERT (left) and XLM-RoBERTa (right) for sense probing and roleset probing.",
"rolesets, the performance is actually worse than a control probe, which is trained with a randomly initialized model on nominal roleset identification.",
"In general, our analysis provides an empirical explanation for why recent approaches for nominal SRL adapted from verbal SRL are still struggling to learn general features across different predicate types, despite initial promising results (Klein et al., 2020; Zhao and Titov, 2020).",
"We conclude our analysis on predicate senses and their rolesets with another important finding: multilingual PLMs encode both predicate sense and roleset information at similar layers across two very different languages, English and Chinese.",
"In order to support this statement, we train an W-Avg probe on both sense disambiguation and roleset identification, first on the English verbal predicates from the training split of CoNLL-2009 and then on the Chinese verbal predicates from the training split of CoNLL-2009.",
"Figure 2 shows the distributions of the learned weights for each hidden layer of two language models, multilingual BERT (left) and XLM-RoBERTa (right).",
"In particular, we observe that the probe learns to almost completely discard the first five layers of multilingual BERT for roleset identification in both English (top-left) and Chinese (bottom-left), while assigning similar weights across English and Chinese to the other hidden layers, with the 8th layer being relatively important in both languages.",
"Overall, Figure 2 supports the evidence that both multilingual BERT and XLM-RoBERTa encode the same type of semantic knowledge at roughly the same hidden layers across languages, supporting the findings by Conneau et al. (2020) and indicating a possible direction for future work in cross-lingual transfer learning for SRL.",
"Now that we have provided an in-depth look at how sense and roleset information is encoded at different inner layers of current PLMs (Section 3.2), highlighted the differences in how PLMs encode verbal and nominal predicates (Section 3.3), and revealed that multilingual PLMs capture semantic knowledge at similar layers across two diverse languages (Section 3.4), one may wonder how we can take advantage in a practical setting of what we have learned so far.",
"In this Section, we study how 4628 we can improve a modern system for end-to-end SRL by integrating sense and roleset knowledge into its architecture.",
"In what follows, we briefly describe the architecture of our baseline model, which is based on that proposed by Conia and Navigli (2020).",
"Notice that, even though we refer to this model as our baseline, its end-to-end architecture rivals current state-of-the-art approaches, such as Blloshmi et al. (2021), Conia et al. (2021) and Paolini et al. (2021).",
"Given an input sentence w , the model computes a contextual representation x i for each word w i in w by concatenating the representations obtained from the four topmost layers of a pretrained language model.",
"These contextual word representations are then processed by a stack of fully con-nected BiLSTM layers in which the input to the i -th BiLSTM layer is the concatenation of the inputs of all previous BiLSTM layers in the stack, obtaining a sequence h of refined encodings.",
"These encodings h are made predicate-aware by concatenating each h i of w i to the representation h p of each predicate p in the sentence, and finally processed by another stack of fully-connected BiL-STMs, resulting in a sequence a of argument encodings.",
"We refer to Conia and Navigli (2020) for further details about the architecture of our baseline model.",
"Enhancing the SRL model.",
"Based on our observations and analyses in the Sections above, we put forward three simple enhancements to our strong baseline model: Representing words using a weighted average of all the inner layers of the underlying language model, since we now know that semantic features important for the task are scattered across all the layers of a PLM; Using two different sets of weights to compute different weighted averages for predicate senses and predicate arguments, as semantic features important for the two tasks are distributed differently across the inner layers of the underlying PLM; Adding a secondary task to predict rolesets from a predicate representation h p in a multitask learning fashion.",
"Results on SRL.",
"Table 4 compares the results obtained on the verbal predicate instances in the standard gold benchmark of CoNLL-2009 for dependency-based SRL.",
"5 As we can see, each contribution provides an improvement over the previous one, both when using BERT-base-cased and BERT-large-cased (+0.4% and +1.1% in F1 score 6 over the baseline, respectively), the latter being one of the most used pretrained language models to achieve state-of-the-art results on the task.",
"In general, not only did our analysis shed light on interesting properties of current PLMs through the lens of predicate senses and their rolesets, but it also provided practical hints on how to better exploit such properties in SRL.",
"Qualitative Analysis.",
"Finally, we provide a look at what happens when our model is informed about predicate senses and their rolesets at training time.",
"To inspect how the vector representations of predicates change as we inject more inductive bias towards predicate-argument information, in Figure 3 we use t-SNE to project and visualize on a bidimensional plane the representations of the predicate close when using:",
"i) the baseline model, which is unaware of predicate-argument information and, therefore, does not show any significant clustering according to different rolesets;",
"ii) the model when it can use different weighted averages to com-5 We trained our model for 30 epochs using Adam with an initial learning rate of 1e-3, leaving all parameters of the underlying language model frozen and using the parameter values used in the original paper by Conia and Navigli (2020).",
"6 Scores were computed using the official CoNLL-2009 scorer provided during the shared task.",
"This scoring script produces a unified F1 measure that takes into account both predicate senses and semantic roles.",
"pute representations for predicate senses and their arguments; and",
"iii) the model when it is explicitly tasked with the secondary training objective of learning to identify the roleset of each predicate.",
"As one can see, as we inject more linguistic information into the model, the representations can be clustered better according to their corresponding predicate-argument structures.",
"In this paper, we probed PLMs for PASs: differently from past work, we dissected SRL into its core subtasks and analysed how PLMs encode predicate-argument structure information such as predicate senses and their rolesets.",
"In our analysis, we observed that, despite the intrinsic connection between predicate senses and their rolesets that exists in several popular SRL inventories, different PLMs encode their features across significantly different layers.",
"What is more, we also discovered that verbal and nominal predicates and their PASs are represented differently, making verbal-to-nominal SRL transfer far from trivial, and providing an empirical explanation for why previous attempts in this direction have struggled to obtain strong results.",
"Furthermore, our analysis revealed that current multilingual language models encode PASs similarly across two very different languages, namely, English and Chinese.",
"Finally, in contrast to previous work on probing, we put together what we learned and demonstrated a practical application of our findings by devising simple yet effective techniques for the integration of predicate-argument structure knowledge into a state-of-the-art end-to-end architecture for SRL.",
"The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 and the European Language Grid project No. 825627 (Universal Semantic Annotator, USeA) under the European Union's Horizon 2020 research and innovation programme.",
"This work was supported in part by the MIUR under grant Dipartimenti di Eccellenza 2018-2022 of the Department of Computer Science of Sapienza University of Rome."
] | [
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"objective",
"other",
"other"
] |
[
"Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors.",
"Inspired by principles of behavioral testing in software engineering, we introduce C heck L ist , a task-agnostic methodology for testing NLP models.",
"C heck L ist includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly.",
"We illustrate the utility of C heck L ist with tests for three tasks, identifying critical failures in both commercial and state-of-art models.",
"In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model.",
"In another user study, NLP practitioners with C heck L ist created twice as many tests, and found almost three times as many bugs as users without it.",
"One of the primary goals of training NLP models is generalization.",
"Since testing in the wild is expensive and does not allow for fast iterations, the standard paradigm for evaluation is using train-validation-test splits to estimate the accuracy of the model, including the use of leader boards to track progress on a task (Rajpurkar et al., 2016).",
"While performance on held-out data is a useful indicator, held-out datasets are often not comprehensive, and contain the same biases as the training data (Rajpurkar et al., 2018), such that real-world performance may be overestimated (Patel et al., 2008; Recht et al., 2019).",
"Further, by summarizing the performance as a single aggregate statistic, it becomes di cult to figure out where the model is failing, and how to fix it (Wu et al., 2019).",
"A number of additional evaluation approaches have been proposed, such as evaluating robustness to noise (Belinkov and Bisk, 2018; Rychalska et al., 2019) or adversarial changes (Ribeiro et al., 2018; Iyyer et al., 2018), fairness (Prabhakaran et al., 2019), logical consistency (Ribeiro et al., 2019), explanations (Ribeiro et al., 2016), diagnostic datasets (Wang et al., 2019b), and interactive error analysis (Wu et al., 2019).",
"However, these approaches focus either on individual tasks such as Question Answering or Natural Language Inference, or on a few capabilities (e.g. robustness), and thus do not provide comprehensive guidance on how to evaluate models.",
"Software engineering research, on the other hand, has proposed a variety of paradigms and tools for testing complex software systems.",
"In particular, behavioral testing (also known as black-box testing) is concerned with testing di erent capabilities of a system by validating the input-output behavior, without any knowledge of the internal structure (Beizer, 1995).",
"While there are clear similarities, many insights from software engineering are yet to be applied to NLP models.",
"In this work, we propose C heck L ist , a new evaluation methodology and accompanying tool 1 for comprehensive behavioral testing of NLP models.",
"C heck L ist guides users in what to test, by providing a list of linguistic capabilities , which are applicable to most tasks.",
"To break down potential capability failures into specific behaviors, C heck L ist introduces di erent test types , such as prediction invariance in the presence of certain perturbations, or performance on a set of sanity checks.",
"Finally, our implementation of C heck L ist includes multiple abstractions that help users generate large numbers of test cases easily, such as templates, lexicons, general-purpose perturbations, visualizations, and context-aware suggestions.",
"As an example, we C heck L ist a commercial sentiment analysis model in Figure 1.",
"Potential tests are structured as a conceptual matrix, with capabilities as rows and test types as columns.",
"As a test of the model's Negation capability, we use a Minimum Functionality test (MFT), i.e. simple test cases designed to target a specific behavior (Figure 1A).",
"We generate a large number of simple examples filling in a template ( I {NEGATION} {POS_VERB} the {THING}. ) with pre-built lexicons, and compute the model's failure rate on such examples.",
"Named entity recognition ( NER ) is another capability, tested in Figure 1B with an Invariance test (INV) perturbations that should not change the output of the model.",
"In this case, changing location names should not change sentiment.",
"In Figure 1C, we test the model's Vocabulary with a Directional Expectation test (DIR) perturbations to the input with known expected results adding negative phrases and checking that sentiment does not become more positive .",
"As these examples indicate, the matrix works as a guide, prompting users to test each capability with di erent test types.",
"We demonstrate the usefulness and generality of C heck L ist via instantiation on three NLP tasks: sentiment analysis ( Sentiment ), duplicate question detection ( QQP ; Wang et al., 2019b), and machine comprehension ( MC ; Rajpurkar et al., 2016).",
"While traditional benchmarks indicate that models on these tasks are as accurate as humans, C heck L ist reveals a variety of severe bugs, where commercial and research models do not e ectively handle basic linguistic phenomena such as negation, named entities, coreferences, semantic role labeling, etc, as they pertain to each task .",
"Further, C heck L ist is easy to use and provides immediate value in a user study, the team responsible for a commercial sentiment analysis model discovered many new and actionable bugs in their own model, even though it had been extensively tested and used by customers.",
"In an additional user study, we found that NLP practitioners with C heck L ist generated more than twice as many tests (each test containing an order of magnitude more examples), and uncovered almost three times as many bugs, compared to users without C heck L ist .",
"Conceptually, users C heck L ist a model by filling out cells in a matrix (Figure 1), each cell potentially containing multiple tests.",
"In this section, we go into more detail on the rows ( capabilities ), columns ( test types ), and how to fill the cells (tests).",
"C heck L ist applies the behavioral testing principle of decoupling testing from implementation by treating the model as a black box, which allows for comparison of di erent models trained on di erent data, or third-party models where access to training data or model structure is not granted.",
"While testing individual components is a common practice in software engineering, modern NLP models are rarely built one component at a time.",
"Instead, C heck L ist encourages users to consider how di erent natural language capabilities are manifested on the task at hand, and to create tests to evaluate the model on each of these capabilities.",
"For example, the Vocabulary + POS capability pertains to whether a model has the necessary vocabulary, and whether it can appropriately handle the impact of words with di erent parts of speech on the task.",
"For Sentiment , we may want to check if the model is able to identify words that carry positive, negative, or neutral sentiment, by verifying how it behaves on examples like This was a good flight.",
"For QQP , we might want the model to understand when modifiers di erentiate questions, e.g. accredited in (Is John a teacher?, Is John an accredited teacher?).",
"For MC , the model should be able to relate comparatives and superlatives, e.g. ( Context: Mary is smarter than John., Q: Who is the smartest kid?, A: Mary).",
"We suggest that users consider at least the following capabilities: Vocabulary + POS (important words or word types for the task), Taxonomy (syn-onyms, antonyms, etc), Robustness (to typos, irrelevant changes, etc), NER (appropriately understanding named entities), Fairness , Temporal (under-standing order of events), Negation , Coreference , Semantic Role Labeling (understanding roles such as agent, object, etc), and Logic (ability to handle symmetry, consistency, and conjunctions).",
"We will provide examples of how these capabilities can be tested in Section 3 (Tables 1, 2, and 3).",
"This listing of capabilities is not exhaustive, but a starting point for users, who should also come up with additional capabilities that are specific to their task or domain.",
"We prompt users to evaluate each capability with three di erent test types (when possible): Minimum Functionality tests, Invariance, and Directional Expectation tests (the columns in the matrix).",
"A Minimum Functionality test ( MFT ), inspired by unit tests in software engineering, is a collection of simple examples (and labels) to check a behavior within a capability.",
"MFTs are similar to creating small and focused testing datasets, and are particularly useful for detecting when models use shortcuts to handle complex inputs without actually mastering the capability.",
"The Vocabulary + POS examples in the previous section are all MFTs.",
"We also introduce two additional test types inspired by software metamorphic tests (Segura et al., 2016).",
"An Invariance test ( INV ) is when we apply label-preserving perturbations to inputs and expect the model prediction to remain the same.",
"Di erent perturbation functions are needed for di erent capabilities, e.g. changing location names for the NER capability for Sentiment (Figure 1B), or introducing typos to test the Robustness capability.",
"A Directional Expectation test ( DIR ) is similar, except that the label is expected to change in a certain way.",
"For example, we expect that sentiment will not become more positive if we add You are lame. to the end of tweets directed at an airline (Figure 1C).",
"The expectation may also be a target label, e.g. replacing locations in only one of the questions in QQP , such as (How many people are there in England?, What is the population of England (cid:41) Turkey?), ensures that the questions are not duplicates.",
"INVs and DIRs allow us to test models on unlabeled data they test behaviors that do not rely on ground truth labels, but rather on relationships between predictions after perturbations are applied (invariance, monotonicity, etc).",
"Users can create test cases from scratch, or by perturbing an existing dataset.",
"Starting from scratch makes it easier to create a small number of high-quality test cases for specific phenomena that may be underrepresented or confounded in the original dataset.",
"Writing from scratch, however, requires significant creativity and e ort, often leading to tests that have low coverage or are expensive and time-consuming to produce.",
"Perturbation functions are harder to craft, but generate many test cases at once.",
"To support both these cases, we provide a variety of abstractions that scale up test creation from scratch and make perturbations easier to craft.",
"Templates Test cases and perturbations can often be generalized into a template , to test the model on a more diverse set of inputs.",
"In Figure 1 we generalized I didn't love the food. with the template I {NEGATION} {POS_VERB} the {THING}. , where {NEGATION} = {didn't, can't say I, ...}, {POS_VERB} = {love, like, ...}, {THING} = {food, flight, service, ...}, and generated all test cases with a Cartesian product.",
"A more diverse set of inputs is particularly helpful when a small set of test cases could miss a failure, e.g. if a model works for some forms of negation but not others.",
"Expanding Templates While templates help scale up test case generation, they still rely on the user's creativity to create fill-in values for each Figure 2: Templating with masked language models.",
"placeholder (e.g. positive verbs for {POS_VERB} ).",
"We provide users with an abstraction where they mask part of a template and get masked language model (RoBERTa (Liu et al., 2019) in our case) suggestions for fill-ins, e.g. I really {mask} the flight. yields {enjoyed, liked, loved, regret, ...}, which the user can filter into positive, negative, and neutral fill-in lists and later reuse across multiple tests (Figure 2).",
"Sometimes RoBERTa suggestions can be used without filtering, e.g. This is a good {mask} yields multiple nouns that don't need filtering.",
"They can also be used in perturbations, e.g. replacing neutral words like that or the for other words in context ( Vocabulary + POSINV examples in Table 1).",
"RoBERTa suggestions can be combined with WordNet categories (syn-onyms, antonyms, etc), e.g. such that only context-appropriate synonyms get selected in a perturbation.",
"We also provide additional common fill-ins for general-purpose categories, such as Named Entities (common male and female first / last names, cities, countries) and protected group adjectives (nationalities, religions, gender and sexuality, etc).",
"Open source We release an implementation of C heck L ist at https://github.com/marcotcr/ checklist .",
"In addition to templating features and mask language model suggestions, it contains various visualizations, abstractions for writing test expectations (e.g. monotonicity) and perturbations, saving / sharing tests and test suites such that tests can be reused with di erent models and by di erent teams, and general-purpose perturbations such as char swaps (simulating typos), contractions, name and location changes (for NER tests), etc. 3 Testing SOTA models with C heck L ist We C heck L ist the following commercial Sentiment analysis models via their paid APIs 2 : Microsoft's Text Analytics ( q ), Google Cloud's Natural Language ( ), and Amazon's Comprehend ( (cid:192) ).",
"We also C heck L ist BERT-base ( ) and RoBERTa-base ( RoB ) (Liu et al., 2019) finetuned on SST-2 3 (acc: 92.7% and 94.8%) and on the QQP dataset 2 From 11 / 2019, but obtained similar results from 04 / 2020.",
"3 Predictions with probability of positive sentiment in the p 1 { 3 , 2 { 3 q range are considered neutral.",
"(acc: 91.1% and 91.3%).",
"For MC , we use a pre-trained BERT-large finetuned on SQuAD (Wolf et al., 2019), achieving 93.2 F1.",
"All the tests presented here are part of the open-source release, and can be easily replicated and applied to new models.",
"Sentiment Analysis Since social media is listed as a use case for these commercial models, we test on that domain and use a dataset of unlabeled airline tweets for INV 4 and DIR perturbation tests.",
"We create tests for a broad range of capabilities, and present subset with high failure rates in Table 1.",
"The Vocab.",
"+ POS MFTs are sanity checks, where we expect models to appropriately handle common neutral or sentiment-laden words.",
"and RoB do poorly on neutral predictions (they were trained on binary labels only).",
"Surprisingly, and (cid:192) fail (7.6% and 4.8%) on sentences that are clearly neutral, with also failing (15%) on nonneutral sanity checks (e.g. I like this seat.).",
"In the DIR tests, the sentiment scores predicted by q and frequently (12.6% and 12.4%) go down con-4 For all the INV tests, models fail whenever their prediction changes and the probability changes by more than 0.1.",
"siderably when clearly positive phrases (e.g. You are extraordinary.) are added, or up ( : 34.6%) for negative phrases (e.g. You are lame.).",
"All models are sensitive to addition of random (not adversarial) shortened URLs or Twitter handles (e.g. 24.8% of (cid:192) predictions change), and to name changes, such as locations ( : 20.8%, (cid:192) : 14.8%) or person names ( : 15.1%, (cid:192) : 9.1%).",
"None of the models do well in tests for the Temporal , Negation , and SRL capabilities.",
"Failures on negations as simple as The food is not poor. are particularly notable, e.g. (54.2%) and (cid:192) (29.4%).",
"The failure rate is near 100% for all commercial models when the negation comes at the end of the sentence (e.g I thought the plane would be awful, but it wasn't.), or with neutral content between the negation and the sentiment-laden word.",
"Commercial models do not fail simple Fairness sanity checks such as I am a black woman. (template: I am a {PROTECTED} {NOUN}. ), always predicting them as neutral.",
"Similar to software engineering, absence of test failure does not imply that these models are fair just that they are not unfair enough to fail these simple tests.",
"On Test TYPE Failure Example Test cases (with expected behavior and prediction) and Description Rate ( ) V o ca b MFT: comparisons 20 .",
"the other hand, always predicts negative when {PROTECTED} is black, atheist, gay, and lesbian, while predicting positive for Asian, straight, etc.",
"With the exception of tests that depend on predicting neutral, and RoB did better than all commercial models on almost every other test.",
"This is a surprising result, since the commercial models list social media as a use case, and are under regular testing and improvement with customer feedback, while and RoB are research models trained on the SST-2 dataset (movie reviews).",
"Finally, and RoB fail simple negation MFTs, even though they are fairly accurate (91.5%, 93.9%, respectively) on the subset of the SST-2 validation set that contains negation in some form (18% of instances).",
"By isolating behaviors like this, our tests are thus able to evaluate capabilities more precisely, whereas performance on the original dataset can be misleading.",
"Quora Question Pair While and RoB surpass human accuracy on QQP in benchmarks (Wang et al., 2019a), the subset of tests in Table 2 indicate that these models are far from solving the question paraphrase problem, and are likely relying on shortcuts for their high accuracy.",
"Both models lack what seems to be crucial skills for the task: ignoring important modifiers on the Vocab.",
"test, and lacking basic Taxonomy understanding, e.g. synonyms and antonyms of common words.",
"Further, neither is robust to typos or simple paraphrases.",
"The failure rates for the NER tests indicate that these models are relying on shortcuts such as anchoring on named entities too strongly instead of understanding named entities and their impact on whether questions are duplicates.",
"Surprisingly, the models often fail to make simple Temporal distinctions (e.g. is (cid:44) used to be and before (cid:44) after), and to distinguish between simple Coreferences (he (cid:44) she).",
"In SRL tests, neither model is able to handle agent / predicate changes, or active / passive swaps.",
"Finally, and RoB change predictions 4.4% and 2.2% of the time when the question order is flipped, failing a basic task requirement (if q 1 is a duplicate of q 2 , so is q 2 of q 1 ).",
"They are also not consistent with Logical implications of their predictions, such as transitivity.",
"Machine Comprehension Vocab + POS tests in Table 3 show that often fails to properly grasp intensity modifiers and comparisons / superlatives.",
"It also fails on simple Taxonomy tests, such as matching properties (size, color, shape) to adjectives, distinguishing between animals-vehicles or jobs-nationalities, or comparisons involving antonyms.",
"The model does not seem capable of handling short instances with Temporal concepts such as before, after, last, and first, or with simple examples of Negation , either in the question or in the context.",
"It also does not seem to resolve basic Coreferences , and grasp simple subject / object or active / passive distinctions ( SRL ), all of which are critical to true comprehension.",
"Finally, the model seems to have certain biases, e.g. for the simple negation template {P1} is not a {PROF}, {P2} is. as context, and Who is a {PROF}? as question, if we set {PROF} = doctor, {P1} to male names and {P2} to female names (e.g. John is not a doctor, Mary is.; Who is a doctor?), the model fails (picks the man as the doctor) 89.1% of the time.",
"If the situation is reversed, the failure rate is only 3.2% (woman predicted as doctor).",
"If {PROF} = secretary, it wrongly picks the man only 4.0% of the time, and the woman 60.5% of the time.",
"Discussion We applied the same process to very di erent tasks, and found that tests reveal interesting failures on a variety of task-relevant linguistic capabilities.",
"While some tests are task specific (e.g. positive adjectives), the capabilities and test types are general; many can be applied across tasks, as is (e.g. testing Robustness with typos) or with minor variation (changing named entities yields di erent expectations depending on the task).",
"This small selection of tests illustrates the benefits of systematic testing in addition to standard evaluation.",
"These tasks may be considered solved based on benchmark accuracy results, but the tests highlight various areas of improvement in particular, failure to demonstrate basic skills that are de facto needs for the task at hand (e.g. basic negation, agent / object distinction, etc).",
"Even though some of these failures have been observed by others, such as typos (Belinkov and Bisk, 2018; Rychalska et al., 2019) and sensitivity to name changes (Prabhakaran et al., 2019), we believe the majority are not known to the community, and that comprehensive and structured testing will lead to avenues of improvement in these and other tasks.",
"The failures discovered in the previous section demonstrate the usefulness and flexibility of C heck L ist .",
"In this section, we further verify that C heck L ist leads to insights both for users who already test their models carefully and for users with little or no experience in a task.",
"We approached the team responsible for the general purpose sentiment analysis model sold as a service by Microsoft ( q on Table 1).",
"Since it is a public-facing system, the model's evaluation procedure is more comprehensive than research systems, including publicly available benchmark datasets as well as focused benchmarks built in-house (e.g. negations, emojis).",
"Further, since the service is mature with a wide customer base, it has gone through many cycles of bug discovery (either internally or through customers) and subsequent fixes, after which new examples are added to the benchmarks.",
"Our goal was to verify if C heck L ist would add value even in a situation like this, where models are already tested extensively with current practices.",
"We invited the team for a C heck L ist session lasting approximately 5 hours.",
"We presented C heck L ist (without presenting the tests we had already created), and asked them to use the methodology to test their own model.",
"We helped them implement their tests, to reduce the additional cognitive burden of having to learn the software components of C heck L ist .",
"The team brainstormed roughly 30 tests covering all capabilities, half of which were MFTs and the rest divided roughly equally between INVs and DIRs.",
"Due to time constraints, we implemented about 20 of those tests.",
"The tests covered many of the same functionalities we had tested ourselves (Section 3), often with di erent templates, but also ones we had not thought of.",
"For example, they tested if the model handled sentiment coming from camel-cased twitter hashtags correctly (e.g. #IHateYou, #ILoveYou), implicit negation (e.g. I wish it was good), and others.",
"Further, they proposed new capabilities for testing, e.g. handling di erent lengths (sentences vs paragraphs) and sentiment that depends on implicit expectations (e.g. There was no {AC} when {AC} is expected).",
"Qualitatively, the team stated that C heck L ist was very helpful: (1) they tested capabilities they had not considered, (2) they tested capabilities that they had considered but are not in the benchmarks, and (3) even capabilities for which they had benchmarks (e.g. negation) were tested much more thoroughly and systematically with C heck L ist .",
"They discovered many previously unknown bugs, which they plan to fix in the next model iteration.",
"Finally, they indicated that they would definitely incorporate C heck L ist into their development cycle, and requested access to our implementation.",
"This session, coupled with the variety of bugs we found for three separate commercial models in Table 1, indicates that C heck L ist is useful even in pipelines that are stress-tested and used in production.",
"We conduct a user study to further evaluate different subsets of C heck L ist in a more controlled environment, and to verify if even users with no previous experience in a task can gain insights and find bugs in a model.",
"We recruit 18 participants (8 from industry, 10 from academia) who have at least intermediate NLP experience 5 , and task them with testing finetuned on QQP for a period of two hours (including instructions), using Jupyter notebooks.",
"Participants had access to the QQP validation dataset, and are instructed to create tests that explore di erent capabilities of the model.",
"We separate participants equally into three conditions: In Unaided , we give them no further instructions, simulating the current status-quo for commercial systems (even the practice of writing additional tests beyond benchmark datasets is not common for research models).",
"In Cap.",
"only , we provide short descriptions of the capabilities listed in Section 2.1 as suggestions to test, while in Cap.",
"+ templ.",
"we further provide them with the template and fill-in tools described in Section 2.3.",
"Only one participant (in Unaided ) had prior experience with QQP .",
"Due to the short study duration, we only asked users to write MFTs in all conditions; thus, even Cap.",
"+ templ.",
"is a subset of C heck L ist .",
"We present the results in Table 4.",
"Even though users had to parse more instructions and learn a new tool when using C heck L ist , they created many more tests for the model in the same time.",
"Further, templates and masked language model suggestions helped users generate many more test cases per test in Cap.",
"+ templ.",
"than in the other two conditions although users could use arbitrary Python code rather than write examples by hand, only one user in Unaided did (and only for one test).",
"5 i.e. have taken a graduate NLP course or equivalent.",
"Users explored many more capabilities on Cap.",
"only and Cap.",
"+ templ.",
"(we annotate tests with capabilities post-hoc); participants in Unaided only tested Robustness , Vocabulary + POS , Taxonomy , and few instances of SRL , while participants in the other conditions covered all capabilities.",
"Users in Cap.",
"only and Cap.",
"+ templ.",
"collectively came up with tests equivalent to almost all MFTs in Table 2, and more that we had not contemplated.",
"Users in Unaided and Cap.",
"only often did not find more bugs because they lacked test case variety even when testing the right concepts (e.g. negation).",
"At the end of the experiment, we ask users to evaluate the severity of the failures they observe on each particular test, on a 5 point scale 6 .",
"While there is no ground truth, these severity ratings provide each user's perception on the magnitude of the discovered bugs.",
"We report the severity sum of discovered bugs (for tests with severity at least 2), in Table 4, as well as the number of tests for which severity was greater or equal to 3 (which filters out minor bugs).",
"We note that users with C heck L ist ( Cap. only and Cap. + templ. ) discovered much more severe problems in the model (measured by total severity or # bugs) than users in the control condition ( Unaided ).",
"We ran a separate round of severity evaluation of these bugs with a new user (who did not create any tests), and obtain nearly identical aggregate results to self-reported severity.",
"The study results are encouraging: with a subset of C heck L ist , users without prior experience are able to find significant bugs in a SOTA model in only 2 hours.",
"Further, when asked to rate di erent aspects of C heck L ist (on a scale of 1-5), users indicated the testing session helped them learn more about the model (4 . 7 0 . 5), capabilities helped them test the model more thoroughly (4 . 5 0 . 4), and so did templates (4 . 3 1 . 1).",
"6 1 (not a bug), 2 (minor bug), 3 (bug worth investigating and fixing), 4 (severe bug, model may not be fit for production), and 5 (no model with this bug should be in production).",
"One approach to evaluate specific linguistic capabilities is to create challenge datasets.",
"Belinkov and Glass (2019) note benefits of this approach, such as systematic control over data, as well as drawbacks, such as small scale and lack of resemblance to real data.",
"Further, they note that the majority of challenge sets are for Natural Language Inference.",
"We do not aim for C heck L ist to replace challenge or benchmark datasets, but to complement them.",
"We believe C heck L ist maintains many of the benefits of challenge sets while mitigating their drawbacks: authoring examples from scratch with templates provides systematic control, while perturbation-based INV and DIR tests allow for testing behavior in unlabeled, naturally-occurring data.",
"While many challenge sets focus on extreme or di cult cases (Naik et al., 2018), MFTs also focus on what should be easy cases given a capability, uncovering severe bugs.",
"Finally, the user study demonstrates that C heck L ist can be used effectively for a variety of tasks with low e ort: users created a complete test suite for sentiment analysis in a day, and MFTs for QQP in two hours, both revealing previously unknown, severe bugs.",
"With the increase in popularity of end-to-end deep models, the community has turned to probes, where a probing model for linguistic phenomena of interest (e.g. NER) is trained on intermediate representations of the encoder (Tenney et al., 2019; Kim et al., 2019).",
"Along similar lines, previous work on word embeddings looked for correlations between properties of the embeddings and downstream task performance (Tsvetkov et al., 2016; Rogers et al., 2018).",
"While interesting as analysis methods, these do not give users an understanding of how a fine-tuned (or end-to-end) model can handle linguistic phenomena for the end-task .",
"For example, while Tenney et al. (2019) found that very accurate NER models can be trained using BERT (96.7%), we show BERT finetuned on QQP or SST-2 displays severe NER issues.",
"There are existing perturbation techniques meant to evaluate specific behavioral capabilities of NLP models such as logical consistency (Ribeiro et al., 2019) and robustness to noise (Belinkov and Bisk, 2018), name changes (Prabhakaran et al., 2019), or adversaries (Ribeiro et al., 2018).",
"C heck L ist provides a framework for such techniques to systematically evaluate these alongside a variety of other capabilities.",
"However, C heck L ist cannot be directly used for non-behavioral issues such as data versioning problems (Amershi et al., 2019), labeling errors, annotator biases (Geva et al., 2019), worst-case security issues (Wallace et al., 2019), or lack of interpretability (Ribeiro et al., 2016).",
"While useful, accuracy on benchmarks is not su cient for evaluating NLP models.",
"Adopting principles from behavioral testing in software engineering, we propose C heck L ist , a model-agnostic and task-agnostic testing methodology that tests individual capabilities of the model using three di erent test types.",
"To illustrate its utility, we highlight significant problems at multiple levels in the conceptual NLP pipeline for models that have solved existing benchmarks on three di erent tasks.",
"Further, C heck L ist reveals critical bugs in commercial systems developed by large software companies, indicating that it complements current practices well.",
"Tests created with C heck L ist can be applied to any model, making it easy to incorporate in current benchmarks or evaluation pipelines.",
"Our user studies indicate that C heck L ist is easy to learn and use, and helpful both for expert users who have tested their models at length as well as for practitioners with little experience in a task.",
"The tests presented in this paper are part of C heck L ist 's open source release, and can easily be incorporated into existing benchmarks.",
"More importantly, the abstractions and tools in C heck L ist can be used to collectively create more exhaustive test suites for a variety of tasks.",
"Since many tests can be applied across tasks as is (e.g. typos) or with minor variations (e.g. changing names), we expect that collaborative test creation will result in evaluation of NLP models that is much more robust and detailed, beyond just accuracy on held-out data.",
"C heck L ist is open source, and available at https://github.com/marcotcr/checklist .",
"We would like to thank Sara Ribeiro, Scott Lundberg, Matt Gardner, Julian Michael, and Ece Kamar for helpful discussions and feedback.",
"Sameer was funded in part by the NSF award #IIS-1756023, and in part by the DARPA MCS program under Contract No.",
"N660011924033 with the United States O ce of Naval Research."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"other",
"other",
"other",
"other"
] |
[
"To bridge the gap between the capabilities of the state-of-the-art in factoid question answering (QA) and what users ask, we need large datasets of real user questions that capture the various question phenomena users are interested in, and the diverse ways in which these questions are formulated.",
"We introduce ComQA , a large dataset of real user questions that exhibit different challenging aspects such as compositionality , temporal reasoning , and comparisons .",
"ComQA questions come from the WikiAnswers community QA platform, which typically contains questions that are not satisfactorily answerable by existing search engine technology.",
"Through a large crowdsourcing effort, we clean the question dataset, group questions into paraphrase clusters , and annotate clusters with their answers.",
"ComQA contains 11 , 214 questions grouped into 4,834 paraphrase clusters.",
"We detail the process of constructing ComQA, including the measures taken to ensure its high quality while making effective use of crowdsourcing.",
"We also present an extensive analysis of the dataset and the results achieved by state-of-the-art systems on ComQA, demonstrating that our dataset can be a driver of future research on QA.",
"Factoid QA is the task of answering natural language questions whose answer is one or a small number of entities (Voorhees and Tice, 2000).",
"To advance research in QA in a manner consistent with the needs of end users, it is important to have access to datasets that reflect real user information needs by covering various question phenomena and the wide lexical and syntactic variety in expressing these information needs.",
"The 1 The main part of this work was carried out when the author was at the Max Planck Institute for Informatics.",
"A: [https://en.wikipedia.org/wiki/cairo] Q: largest city located along the Nile river? Q: largest city by the Nile river?",
"Q: W hat is the largest city in Africa that is on the banks of the Nile river? Cluster 2 comparison Q: W ho was the Britain's leader during WW1? Q: W ho ran Britain during WW1? Q: Who was the leader of Britain during World War One?",
"Cluster 1 A: [https://en.wikipedia.org/wiki/h._h._asquith, https://en.wikipedia.org/wiki/david_lloyd_george] temporal Q: John Travolta and Jamie Lee Curtis acted in this film?",
"Q: Jamie Lee Curtis and John Travolta played together in this movie?",
"Q: John Travolta and Jamie Lee Curtis were actors in this film?",
"Cluster 3 A: [https://en.wikipedia.org/wiki/perfect_(film) compositional Q: Who is the first human landed in Mars?",
"Q: Who was the first human being on Mars? Q: first human in Mars? Cluster 4 A: [] empty answer Figure 1: ComQA paraphrase clusters covering a range of question aspects e.g., temporal and compositional questions, with lexical and syntactic diversity.",
"benchmarks should be large enough to facilitate the use of data-hungry machine learning methods.",
"In this paper, we present ComQA, a large dataset of 11,214 real user questions collected from the WikiAnswers community QA website.",
"As shown in Figure 1, the dataset contains various question phenomena.",
"ComQA questions are grouped into 4,834 paraphrase clusters through a large-scale crowdsourcing effort, which capture lexical and syntactic variety.",
"Crowdsourcing is also used to pair paraphrase clusters with answers to serve as a supervision signal for training and as a basis for evaluation.",
"Table 1 contrasts ComQA with publicly available QA datasets.",
"The foremost issue that ComQA tackles is ensuring research is driven by information needs formulated by real users.",
"Most large-scale datasets resort to highly-templatic synthetically generated natural language questions (Bor-des et al., 2015; Cai and Yates, 2013; Su et al., Dataset Large scale ( > 5 K) Real Information Needs Complex Questions Question Paraphrases ComQA (This paper) (cid:51) (cid:51) (cid:51) (cid:51) Free917 (Cai and Yates, 2013) (cid:55) (cid:55) (cid:55) (cid:55) WebQuestions (Berant et al., 2013) (cid:51) (cid:51) (cid:55) (cid:55) SimpleQuestions (Bordes et al., 2015) (cid:51) (cid:55) (cid:55) (cid:55) QALD (Usbeck et al., 2017) (cid:55) (cid:55) (cid:51) (cid:55) LC-QuAD (Trivedi et al., 2017) (cid:51) (cid:55) (cid:51) (cid:55) ComplexQuestions (Bao et al., 2016) (cid:55) (cid:51) (cid:51) (cid:55) GraphQuestions (Su et al., 2016) (cid:51) (cid:55) (cid:51) (cid:51) ComplexWebQuestions (Talmor and Berant, 2018) (cid:51) (cid:55) (cid:51) (cid:55) TREC (Voorhees and Tice, 2000) (cid:55) (cid:51) (cid:51) (cid:55) Table 1: Comparison of ComQA with existing QA datasets over various dimensions.",
"2016; Talmor and Berant, 2018; Trivedi et al., 2017).",
"Other datasets utilize search engine logs to collect their questions (Berant et al., 2013), which creates a bias towards simpler questions that search engines can already answer reasonably well.",
"In contrast, ComQA questions come from WikiAnswers, a community QA website where users pose questions to be answered by other users.",
"This is often a reflection of the fact that such questions are beyond the capabilities of commercial search engines and QA systems.",
"Questions in our dataset exhibit a wide range of interesting aspects such as the need for temporal reasoning (Fig-ure 1, cluster 1), comparison (Figure 1, cluster 2), compositionality (multiple subquestions with multiple entities and relations) (Figure 1, cluster 3), and unanswerable questions (Figure 1, cluster 4).",
"ComQA is the result of a carefully designed large-scale crowdsourcing effort to group questions into paraphrase clusters and pair them with answers.",
"Past work has demonstrated the benefits of paraphrasing for QA (Abujabal et al., 2018; Berant and Liang, 2014; Dong et al., 2017; Fader et al., 2013).",
"Motivated by this, we judiciously use crowdsourcing to obtain clean paraphrase clusters from WikiAnswers' noisy ones, resulting in ones like those shown in Figure 1, with both lexical and syntactic variations.",
"The only other dataset to provide such clusters is that of Su et al. (2016), but that is based on synthetic information needs.",
"For answering, recent research has shown that combining various resources for answering significantly improves performance (Savenkov and Agichtein, 2016; Sun et al., 2018; Xu et al., 2016).",
"Therefore, we do not pair ComQA with a specific knowledge base (KB) or text corpus for answering.",
"We call on the research community to innovate in combining different answering sources to tackle ComQA and advance research in QA.",
"We use crowdsourcing to pair paraphrase clusters with answers.",
"ComQA answers are primarily Wikipedia entity URLs.",
"This has two motivations:",
"(i) it builds on the example of search engines that use Wikipedia entities as answers for entity-centric queries (e.g., through knowledge cards), and",
"(ii) most modern KBs ground their entities in Wikipedia.",
"Wherever the answers are temporal or measurable quantities, we use TIMEX3 1 and the International System of Units 2 for normalization.",
"Providing canonical answers allows for better comparison of different systems.",
"We present an extensive analysis of ComQA, where we introduce the various question aspects of the dataset.",
"We also analyze the results of running state-of-the-art QA systems on ComQA.",
"ComQA exposes major shortcomings in these systems, mainly related to their inability to handle compositionality, time, and comparison.",
"Our detailed error analysis provides inspiration for avenues of future work to ensure that QA systems meet the expectations of real users.",
"To summarize, in this paper we make the following contributions: We present a dataset of 11,214 real user questions collected from a community QA website.",
"The questions exhibit a range of aspects that are important for users and challenging for existing QA systems.",
"Using crowdsourcing, questions are grouped into 4,834 paraphrase clusters that are annotated with answers.",
"ComQA is available at: http://qa.",
"mpi-inf.mpg.de/comqa .",
"We present an extensive analysis and quantify the various difficulties in ComQA.",
"We also present the results of state-of-the art QA systems on ComQA, and a detailed error analysis.",
"1 http://www.timeml.org 2 https://en.wikipedia.org/wiki/SI 2 Related Work There are two main variants of the factoid QA task, with the distinction tied to the underlying answering resources and the nature of answers.",
"Traditionally, QA has been explored over large textual corpora (Cui et al., 2005; Harabagiu et al., 2001, 2003; Ravichandran and Hovy, 2002; Saquete et al., 2009) with answers being textual phrases.",
"Recently, it has been explored over large structured resources such as KBs (Berant et al., 2013; Unger et al., 2012), with answers being semantic entities.",
"Recent work demonstrated that the two variants are complementary, and a combination of the two results in the best performance (Sun et al., 2018; Xu et al., 2016).",
"QA over textual corpora.",
"QA has a long tradition in IR and NLP, including benchmarking tasks in TREC (Voorhees and Tice, 2000; Dietz and Gamari, 2017) and CLEF (Magnini et al., 2004; Herrera et al., 2004).",
"This has predominantly focused on retrieving answers from textual sources (Ferrucci, 2012; Harabagiu et al., 2006; Prager et al., 2004; Saquete et al., 2004; Yin et al., 2015).",
"In IBM Watson (Ferrucci, 2012), structured data played a role, but text was the main source for answers.",
"The TREC QA evaluation series provide hundreds of questions to be answered over documents, which have become widely adopted benchmarks for answer sentence selection (Wang and Nyberg, 2015).",
"ComQA is orders of magnitude larger than TREC QA.",
"Reading comprehension (RC) is a recently introduced task, where the goal is to answer a question from a given textual paragraph (Kocisky et al., 2017; Lai et al., 2017; Rajpurkar et al., 2016; Trischler et al., 2017; Yang et al., 2015).",
"This setting is different from factoid QA, where the goal is to answer questions from a large repository of data (be it textual or structured), and not a single paragraph.",
"A recent direction in RC is dealing with unanswerable questions from the underlying data (Rajpurkar et al., 2018).",
"ComQA includes such questions to allow tackling the same problem in the context of factoid QA.",
"QA over KBs.",
"Recent efforts have focused on natural language questions as an interface for KBs, where questions are translated to structured queries via semantic parsing (Bao et al., 2016; Bast and Haussmann, 2015; Fader et al., 2013; Mohammed et al., 2018; Reddy et al., 2014; Yang et al., 2014; Yao and Durme, 2014; Yahya et al., 2013).",
"Over the past five years, many datasets were introduced for this setting.",
"However, as Table 1 shows, they are either small in size (Free917, and ComplexQuestions), composed of synthetically generated questions (Sim-pleQuestions, GraphQuestions, LC-QuAD and ComplexWebQuestions), or are structurally simple (WebQuestions).",
"ComQA addresses these shortcomings.",
"Returning semantic entities as answers allows users to further explore these entities in various resources such as their Wikipedia pages, Freebase entries, etc.",
"It also allows QA systems to tap into various interlinked resources for improvement (e.g., to obtain better lexicons, or train better NER systems).",
"Because of this, ComQA provides semantically grounded reference answers in Wikipedia (without committing to Wikipedia as an answering resource).",
"For numerical quantities and dates, ComQA adopts the International System of Units and TIMEX3 standards, respectively.",
"In this work, a factoid question is a question whose answer is one or a small number of entities or literal values (Voorhees and Tice, 2000) e.g., Who were the secretaries of state under Barack Obama? and When was Germany's first postwar chancellor born? .",
"entity (e.g., Where was Einstein born? ) Compositional: A question is compositional if answering it requires answering more primitive questions and combining these.",
"These can be intersection or nested questions.",
"Intersection questions are ones where two or more subquestions can be answered independently, and their answers intersected (e.g., Which films featuring Tom Hanks did Spielberg direct? ).",
"In nested questions, the answer to one subquestion is necessary to answer another ( Who were the parents of the thirteenth president of the US? ).",
"Temporal: These are questions that require temporal reasoning for deriving the answer, be it explicit (e.g., in 1998' ), implicit (e.g., dur-ing the WWI' ), relative (e.g., current' ), or latent (e.g. Who is the US president?' ).",
"Temporal questions also include those whose answer is an explicit temporal expression ( When did Trenton become New Jersey's capital? ).",
"Comparison: We consider three types of comparison questions: comparatives ( Which rivers in Europe are longer than the Rhine? ), superlatives ( What is the population of the largest city in Egypt? ), and ordinal questions ( What was the name of Elvis's first movie? ).",
"Telegraphic (Joshi et al., 2014): These are short questions formulated in an informal manner similar to keyword queries ( First president India? ).",
"Systems that rely on linguistic analysis often fail on such questions.",
"Answer tuple : Where an answer is a tuple of connected entities as opposed to a single entity ( When and where did George H. Bush go to college, and what did he study? ).",
"Recent work has shown that the choice of answering resource, or the combination of resources significantly affects answering performance (Savenkov and Agichtein, 2016; Sun et al., 2018; Xu et al., 2016).",
"Inspired by this, ComQA is not tied to a specific resource for answering.",
"To this end, answers in ComQA are primarily Wikipedia URLs.",
"This enables QA systems to combine different answering resources which are linked to Wikipedia (e.g., DBpedia, Freebase, YAGO, Wikidata, etc).",
"This also allows seamless comparison across these QA systems.",
"An answer in ComQA can be: Entity: ComQA entities are grounded in Wikipedia.",
"However, Wikipedia is inevitably incomplete, so answers that cannot be grounded in Wikipedia are represented as plain text.",
"For example, the answer for What is the name of Kristen Stewart adopted brother? is { Taylor Stewart, Dana Stewart } .",
"Literal value: Temporal answers follow the TIMEX3 standard.",
"For measurable quantities, we follow the International System of Units.",
"Empty: In the factoid setting, some questions can be based on a false premise, and hence, are unanswerable e.g., Who was the first human being on Mars? (no human has been on Mars, yet).",
"The correct answer to such questions is the empty set.",
"Such questions allow systems to cope with these cases.",
"Recent work has started looking at this problem (Rajpurkar et al., 2018).",
"Our goal is to collect factoid questions that represent real information needs and cover a range of question aspects.",
"Moreover, we want to have different paraphrases for each question.",
"To this end, we tap into the potential of community QA platforms.",
"Questions posed there represent real information needs.",
"Moreover, users of those platforms provide (noisy) annotations around questions e.g., paraphrase clusters.",
"In this work, we exploit the annotations where users mark questions as duplicates as a basis for paraphrase clusters, and clean those.",
"Concretely, we started with the WikiAnswers crawl by Fader et al. (2014).",
"We obtained ComQA from this crawl primarily through a large-scale crowdsourcing effort, which we describe in what follows.",
"The original resource curated by Fader et al. contains 763 M questions.",
"Questions in the crawl are grouped into 30 M paraphrase clusters based on feedback from WikiAnswers users.",
"This clustering has a low accuracy (Fader et al., 2014).",
"Extracting factoid questions and cleaning the clusters are thus essential for a high-quality dataset.",
"To remove non-factoid questions, we filtered out questions that",
"(i) start with why' , or",
"(ii) contain words like (dis)similarities, differences, (dis)advantages , etc.",
"Questions matching these filters are out of scope as they require a narrative answer.",
"We also removed questions with less than three or more than twenty words, as we found these to be typically noisy or non-factoid questions.",
"This left us with about 21 M questions belonging to 6 .",
"1 M clusters.",
"To further focus on factoid questions, we automatically classified questions into one or more of the following four classes : (1) temporal, (2) comparison, (3) single entity, and (4) multi-entity questions.",
"We used SUTime (Chang and Manning, 2012) to identify temporal questions and the Stanford named entity recognizer (Finkel et al., 2005) to detect named entities.",
"We used part-of-speech patterns to identify comparatives, superlatives, and ordinals.",
"Clusters which did not have questions belonging to any of the above classes were discarded from further consideration.",
"Although these clusters contain false negatives e.g., What official position did Mendeleev hold until his death? due to errors by the tagging tools, When did henry 7th oldest son die? Henry VII of England second son? Who was henry VII son? Who was henry's vii sons? Who was Henry vii's oldest son? Who is king henry VII eldest son? What was the name of Henry VII first son? Who was henry vII eldest son? What was henry's vii oldest son? Who was the oldest son of Henry VII? Figure 2: A WikiAnswers cluster split into four clusters by AMT Turkers.",
"Manual inspection.",
"We next applied the first stage of human curation to the dataset.",
"Each WikiAnswers cluster was assigned to one of the four classes above based on the majority label of the questions within.",
"We then randomly sampled 15 K clusters from each of the four classes ( 60 K clusters in total with 482 K questions) and sampled a representative question from each of these clusters at random ( 60 K questions).",
"We relied on the assumption that questions within the same cluster are semantically equivalent.",
"These 60 K questions were manually examined by the authors and those with unclear or non-factoid intent were removed along with the cluster that contains them.",
"We thus ended up with 2 .",
"1 K clusters with 13 .",
"7 K questions.",
"We inspected a random subset of the 2 .",
"1 K WikiAnswers clusters and found that questions in the same cluster are semantically related but not necessarily equivalent , which is in line with observations in previous work (Fader et al., 2014).",
"Dong et al. (2017) reported that 45% of question pairs were related rather than genuine paraphrases.",
"For example, Figure 2 shows 10 questions in the same WikiAnswers cluster.",
"Obtaining accurate paraphrase clusters is crucial to any systems that want to utilize them (Abujabal et al., 2018; Berant and Liang, 2014; Dong et al., 2017).",
"We therefore utilized crowdsourcing to clean the Wikianswers paraphrase clusters.",
"We used Amazon Mechanical Turk (AMT) to identify semantically equivalent questions within a WikiAnswers cluster, thereby N u m b e r o f C l u s t e r s 834 1667 2501 3334 4167 5000 Number of Questions 1 2 3 4 5 6 7 8 9 10 11 12 13 4,023 722 570 377 261 213 149 82 47 25 2 1 2 Figure 3: The distribution of questions in clusters.",
"task to collect answers for each ComQA cluster.",
"Task design.",
"We had to ensure the simplicity of the task to obtain high quality results.",
"Therefore, rather than giving workers a WikiAnswers cluster and asking them to partition it into clusters of paraphrases, we showed them pairs of questions from a cluster and asked them to make the binary decision of whether the two questions are paraphrases.",
"To reduce potentially redundant annotations, we utilized the transitivity of the paraphrase relationship.",
"Given a WikiAnswers cluster Q = { q 1 , ..., q n } , we proceed in rounds to form ComQA clusters.",
"In the first round, we collect annotations for each pair ( q i , q i +1 ) .",
"The majority annotation among five annotators is taken.",
"An initial clustering is formed accordingly, with clusters sharing the same question merged together (to account for transitivity).",
"This process continues iteratively until no new clusters can be formed from Q .",
"Task statistics.",
"We obtained annotations for 18,890 question pairs from 175 different workers.",
"Each pair was shown to five different workers, with 65 .",
"7% of the pairs receiving unanimous agreement, 21 .",
"4% receiving four agreements and 12 .",
"9% receiving three agreements.",
"By design, with five judges and binary annotations, no pair can have less three agreements.",
"This resulted in questions being placed in paraphrase clusters, and no questions were discarded at this stage.",
"At the end of this step, the original 2 .",
"1 K WikiAnswers clusters became 6 .",
"4 K ComQA clusters with a total of 13 .",
"7 K questions.",
"Figure 3 shows the distribution of questions in clusters.",
"To test whether relying on the transitivity of the Property Example Percentage% Compositional questions Conjunction What is the capital of the country whose northern border is Poland and Germany? 17 .",
"paraphrase relationship is suitable to reduce the annotation effort, we asked annotators to annotate 1,100 random pairs ( q 1 , q 3 ) , where we had already received positive annotations for the pairs ( q 1 , q 2 ) and ( q 2 , q 3 ) being paraphrases of each other.",
"In 93 .",
"5 % of the cases there was agreement.",
"Additionally, as experts on the task, the authors manually assessed 600 pairs of questions, which serve as honeypots.",
"There was 96 .",
"6% agreement with our annotations.",
"An example result of this task is shown in Figure 2, where Turkers split the original WikiAnswers cluster into the four clusters shown.",
"Task design.",
"To collect answers, we designed another AMT task, where workers were shown a representative question randomly drawn from a cluster.",
"Workers were asked to use the Web to find answers and to provide the corresponding URLs of Wikipedia entities.",
"Due to the inevitable incompleteness of Wikipedia, workers were asked to provide the surface form of an answer entity if it does not have a Wikipedia page.",
"If the answer is a full date, workers were asked to follow dd-mmm-yyyy format.",
"For measurable quantities, workers were asked to provide units.",
"We use TIMEX3 and the international system of units for normalizing temporal answers and measurable quantities e.g., 12th century' to 11XX .",
"If no answer is found, workers were asked to type in no answer' .",
"Task statistics.",
"Each representative question was shown to three different workers.",
"An answer is deemed correct if it is common between at least two workers.",
"This resulted in 1 .",
"6 K clusters (con-taining 2 . 4 K questions) with no agreed-upon answers, which were dropped.",
"For example, Who was the first democratically elected president of Mexico? is subjective.",
"Other questions received related answers e.g., Who do the people in Iraq worship? with Allah , Islam and Mohamed as answers from the three annotators.",
"Other questions were underspecified e.g., Who was elected the vice president in 1796? .",
"At the end of the task, we ended up with 4,834 clusters with 11,214 question-answer pairs, which form ComQA.",
"In this section, we present a manual analysis of 300 questions sampled at random from the ComQA dataset.",
"This analysis helps understand the different aspects of our dataset.",
"A summary of the analysis is presented in Table 2.",
"Question categories.",
"We categorized each question as either simple or complex.",
"A question is complex if it belongs to one or more of the compositional, temporal, or comparison classes.",
"56 .",
"33% of the questions were complex; 32% compositional, 23 .",
"67% temporal, and 29 .",
"33% contain",
"comparison conditions.",
"A question may contain multiple conditions ( What country has the highest population in the year 2008 ? with comparison and temporal conditions).",
"We also identified questions of telegraphic nature e.g., Julia Alvarez's parents? , with 8% of our questions being telegraphic.",
"Such questions pose a challenge for systems that rely on linguistic analysis of questions (Joshi et al., 2014).",
"We counted the number of named entities in questions: 23 .",
"67% contain two or more entities, reflecting their compositional nature, and 2 .",
"67% contain no entities e.g., What public company has the most employees in the world? .",
"Such questions can be hard as many methods assume the existence of a pivot entity in a question.",
"Finally, 3 .",
"67% of the questions are unanswerable, e.g., Who was the first human being on Mars? .",
"Such questions incentivise QA systems to return non-empty answers only when suitable.",
"In Table 3 we compare ComQA with other current datasets based on real user information needs over different question categories.",
"Answer types.",
"We annotated each question with the most fine-grained context-specific answer type (Ziegler et al., 2017).",
"Answers in ComQA belong to a diverse set of types that range from coarse (e.g., person ) to fine (e.g., sports manager ).",
"Types also include literals such as number and date .",
"Figure",
"4(a) shows answer types of the 300 annotated examples as a word cloud.",
"Question topics.",
"We annotated questions with topics to which they belong (e.g., geography, movies, sports).",
"These are shown in Figure",
"4(b), and demonstrate the topical diversity of ComQA.",
"Question length.",
"Questions in ComQA are fairly long, with a mean length of 7.73 words, indicating the compositional nature of questions.",
"In this section we present experimental results for running ComQA through state-of-the-art QA systems.",
"Our experiments show that these systems achieve humble performance on ComQA.",
"Through a detailed analysis, this performance can be attributed to systematic shortcomings in handling various question aspects in ComQA.",
"Splits.",
"We partition ComQA into a random train/dev/test split of 70/10/20% with 7,850, 1,121 and 2,243 questions, respectively.",
"Metrics.",
"We follow the community's standard evaluation metrics: we compute average precision, recall, and F1 scores across all test questions.",
"For unanswerable questions whose correct answer is the empty set, we define precision and recall to be 1 for a system that returns an empty set, and 0 otherwise (Rajpurkar et al., 2018).",
"We evaluated two categories of QA systems that differ in the underlying answering resource: either KBs or textual extractions.",
"We ran the following systems:",
"(i) Abujabal et al. (2017), which automatically generates templates using question-answer pairs;",
"(ii) Bast and Haussmann (2015), which instantiates hand-crafted query templates followed by query ranking;",
"(iii) Berant and Liang (2015), which relies on agenda-based parsing and imitation learning;",
"(iv) Berant et al. (2013), which uses rules to build queries from questions; and",
"(v) Fader et al. (2013), which maps questions to queries over open vocabulary facts extracted from Web documents.",
"Note that our intention is not to assess the quality of these systems, but to assess how challenging ComQA is.",
"The systems were trained with ComQA data.",
"All systems were run over the data sources for which they were designed.",
"The first four baselines are over Freebase.",
"We therefore mapped ComQA answers (Wikipedia entities) to the corresponding Freebase names using the information stored with entities in Freebase.",
"We observe that the Wikipedia answer entities have no counterpart in Freebase for 7% of the ComQA questions.",
"This suggests an oracle F1 score of 93 .",
"0 .",
"For Fader et al. (2013), which is over web extractions, we mapped Wikipedia URLs to their titles.",
"Table 4 shows the performance of the baselines on the ComQA test set.",
"Overall, the systems achieved poor performance, suggesting that current methods cannot handle the complexity of our dataset, and that new models for QA are needed.",
"Table 5 compares the performance of the systems on different datasets (Free917 uses accuracy as a quality metric).",
"For example, while Abujabal et al. (2017) achieved an F1 score of 51 .",
"0 on WebQuestions, it achieved 22 .",
"4 on ComQA.",
"The performance of Fader et al. (2013) is worse than the others due to the incompleteness of its underlying extractions and the complexity of ComQA questions that require higher-order relations and reasoning.",
"However, the system answered some complex questions, which KB-QA systems failed to answer.",
"For example, it answered What is the highest mountain in the state of Washington? .",
"The answer to such a question is more readily available in Web text, compared to a KB, where more sophisticated reasoning is required to handle the superlative.",
"However, a slightly modified question such as What is the fourth highest mountain in the state of Washing-ton? is unlikely to be found in text, but be answered using KBs with the appropriate reasoning.",
"Both examples above demonstrate the benefits of combining text and structured resources.",
"For the two best performing systems on ComQA, QUINT (Abujabal et al., 2017) and AQQU (Bast and Haussmann, 2015), we manually inspected 100 questions on which they failed.",
"We classified failure sources into four categories: compositionality, temporal, comparison or NER.",
"Table 6 shows the distribution of these failure sources.",
"Compositionality.",
"Neither system could handle the compositional nature of questions.",
"For example, they returned the father of Julius Caesar as an answer for What did Julius Caesar's father work as? , while, the question requires another KB predicate that connects the father to his profession.",
"For John Travolta and Jamie Lee Curtis starred in this movie? , both systems returned movies with Jamie Lee Curtis, ignoring the constraint that John Travolta should also appear in them.",
"Properly answering multi-relation questions over KBs remains an open problem.",
"Temporal.",
"Our analysis reveals that both systems fail to capture temporal constraints in questions, be it explicit or implicit.",
"For Who won the Oscar for Best Actress in 1986? , they returned all winners and ignored the temporal restriction from in 1986' .",
"Implicit temporal constraints like named events (e.g., Vietnam war' in Who was the president of the US during Vietnam war? ) pose a challenge to current methods.",
"Such constraints need to be detected first and normalized to a canonical time interval (November 1st, 1955 to April 30th, 1975, for the Vietnam war).",
"Then, systems need to compare the terms of the US presidents with the above interval to account for the temporal relation of during' .",
"While detecting explicit time expressions can be done reasonably well using existing time taggers (Chang and Manning, 2012), identifying implicit ones is difficult.",
"Furthermore, retrieving the correct temporal scopes of entities in questions (e.g., the terms of the US presidents) is hard due to the large number of temporal KB predicates associated with entities.",
"Comparison.",
"Both systems perform poorly on comparison questions, which is expected since they were not designed to address those.",
"To the best of our knowledge, no existing KB-QA system can handle comparison questions.",
"Note that our goal is not to assess the quality the of current methods, but to highlight that these methods miss categories of questions that are important to real users.",
"For What is the first film Julie Andrews made? and What is the largest city in the state of Washington? , both systems returned the list of Julie Andrews's films and the list of Washington's cities, for the first and the second questions, respectively.",
"While the first question requires the attribute of filmReleasedIn to order by, the second needs the attribute of hasArea .",
"Identifying the correct attribute to order by as well as determining the order direction (ascending for the first and descending for the second) is challenging and out of scope for current methods.",
"NER.",
"NER errors come from false negatives, where entities are not detected.",
"For example, in On what date did the Mexican Revolution end?",
"QUINT identified Mexican' rather than Mexican Revolution' as an entity.",
"For What is the first real movie that was produced in 1903? , which does not ask about a specific entity, QUINT could not generate SPARQL queries.",
"Existing QA methods expect a pivotal entity in a question, which is not always the case.",
"Note that while baseline systems achieved low precision, they achieved higher recall (21.2 vs 38.4 for QUINT, respectively) (Table 4).",
"This reflects the fact that these systems often cannot cope with the full complexity of ComQA questions, and instead end up evaluating underconstrained interpretations of the question.",
"To conclude, current methods can handle simple questions very well, but struggle with complex questions that involve multiple conditions on different entities or need to join the results from sub-questions.",
"Handling such complex questions, however, is important if we are to satisfy information needs expressed by real users.",
"We presented ComQA, a dataset for QA that harnesses a community QA platform, reflecting ques-Category",
"tions asked by real users.",
"ComQA contains 11,214 question-answer pairs, with questions grouped into paraphrase clusters through crowdsourcing.",
"Questions exhibit different aspects that current QA systems struggle with.",
"ComQA is a challenging dataset that is aimed at driving future research on QA, to match the needs of real users.",
"We would like to thank Tommaso Pasini for his helpful feedback."
] | [
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"We propose a new type of representation learning method that models words, phrases and sentences seamlessly.",
"Our method does not depend on word segmentation and any human-annotated resources (e.g., word dictionaries), yet it is very effective for noisy corpora written in unsegmented languages such as Chinese and Japanese.",
"The main idea of our method is to ignore word boundaries completely (i.e., segmentation-free ), and construct representations for all character n -grams in a raw corpus with embeddings of compositional subn grams.",
"Although the idea is simple, our experiments on various benchmarks and real-world datasets show the efficacy of our proposal.",
"Most existing word embedding models (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) take a sequence of words as their input.",
"Therefore, the conventional models are dependent on word segmentation (Yang et al., 2017; Shao et al., 2018), which is a process of converting a raw corpus (i.e., a sequence of characters) into a sequence of segmented character n -grams.",
"After the segmentation, the segmented character n -grams are assumed to be words, and each word's representation is constructed from distribution of neighbour words that co-occur together across the estimated word boundaries.",
"However, in practice, this kind of approach has several problems.",
"First, word segmentation is difficult especially when texts in a corpus are noisy or unsegmented (Saito et al., 2014; Kim et al., 2018).",
"For example, word segmentation on social network service (SNS) corpora, such as Twitter, is a challenging task since it tends to include many misspellings, informal words, neologisms, and even emoticons.",
"This problem becomes more severe in unsegmented languages, such as Chinese and Japanese, whose word boundaries are not explicitly indicated.",
"Second, word segmentation has ambiguities (Luo et al., 2002; Li et al., 2003).",
"For example, a compound word (linear algebra) can be seen as a single word or sequence of words, such as | (linear | algebra).",
"Word segmentation errors negatively influence subsequent processes (Xu et al., 2004).",
"For example, we may lose some words in training corpora, leading to a larger Out-Of-Vocabulary (OOV) rate (Sun et al., 2005).",
"Moreover, segmentation errors, such as segmenting (yesterday) as | (tree | brain), produce false co-occurrence information.",
"This problem is crucial for most existing word embedding methods as they are based on distributional hypothesis (Harris, 1954), which can be summarized as: a word is characterized by the company it keeps (Firth, 1957).",
"To enhance word segmentation, some recent works (Junyi, 2013; Sato, 2015; Jeon, 2016) made rich resources publicly available.",
"However, maintaining them up-to-date is difficult and it is infeasible for them to cover all types of words.",
"To avoid the negative impacts of word segmentation errors, Oshikiri (2017) proposed a word embedding method called segmentation-free word embedding ( sembei ).",
"The key idea of sembei is to directly embed frequent character n -grams from a raw corpus without conducting word segmentation.",
"However, most of the frequent n -grams are non-words (Kim et al., 2018), and hence sembei still suffers from the OOV problems.",
"The fundamental problem also lies in its extension (Kim et al., 2018), although it uses external resources to reduce the number of OOV.",
"To handle OOV problems, Bojanowski et al. (2017) proposed a novel compositional word embedding method with subword modeling, called subword-information skipgram ( sisg ).",
"The key idea of sisg is to extend the notion of vocabulary to include subwords, linear algebra learn study -ing ! number learn algebra linear algebra endeavor learn study mathmatics algebra linear algebra studying Corpus",
"namely, substrings of words, for enriching the representations of words by the embeddings of its subwords.",
"In sisg , the embeddings of OOV (or unseen) words are computed from the embedings of their subwords.",
"However, sisg requires word segmentation as a prepossessing step, and the way of collecting co-occurrence information is dependent on the results of explicit word segmentation.",
"For solving the issues of word segmentation and OOV, we propose a simple but effective unsupervised representation learning method for words, phrases and sentences, called segmentation-free compositional n -gram embedding ( scne ).",
"The key idea of scne is to train embeddings of character n -grams to compose representations of all character n -grams in a raw corpus, and it enables treating all words, phrases and sentences seamlessly (see Figure 1 for an illustrative explanation).",
"Our experimental results on a range of datasets suggest that scne can compute high-quality representations for words and sentences although it does not consider any word boundaries and is not dependent on any human annotated resources.",
"Our method scne successfully combines a subword model (Zhang et al., 2015; Wieting et al., 2016; Bojanowski et al., 2017; Zhao et al., 2018) with an idea of character n -gram embedding (Os-x",
"hikiri, 2017; Kim et al., 2018).",
"In scne , the vector representation of a target character n -gram is defined as follows.",
"Let x 1 x 2 x N be a raw unsegmented corpus of N characters.",
"For a range i, i + 1 , . . . , j specified by index t = ( i, j ) , 1 i j N , we denote the substring x i x i +1 x j as x ( i,j ) or x t .",
"In a training phase, scne first counts frequency of character n -grams in the raw corpus to construct n -gram set V by collecting M most frequent n -grams with n n max , where M and n max are hyperparameters.",
"For any target character n -gram x ( i,j ) = x i x i +1 x j in the corpus, scne constructs its representation v x ( i,j ) R d by summing the embeddings of its subn grams as follows: v x ( i,j ) = (cid:88) s S ( x ( i,j ) ) z s , where S ( x ( i,j ) ) = { x ( i (cid:48) ,j (cid:48) ) V | i i (cid:48) j (cid:48) j } consists of all subn -grams of target x ( i,j ) , and the embeddings of subn -grams z s R d , s V are model parameters to be learned.",
"The objective of scne is similar to that of Mikolov et al. (2013), (cid:88) t D (cid:88) c C ( t ) log (cid:16) v (cid:62) x t u x c (cid:17) + k (cid:88) s P neg log (cid:16) v (cid:62) x t u s (cid:17) , where ( x ) = 1 1+exp( x ) , D = { ( i, j ) | 1 i j N, j i + 1 n target } , and C (( i, j )) = { ( i (cid:48) , j (cid:48) ) | x ( i (cid:48) ,j (cid:48) ) V, j (cid:48) = i 1 or i (cid:48) = j + 1 } .",
"D is the set of indexes of all possible target n -grams in the raw corpus with n n target , where n target is a hyperparameter.",
"C ( t ) is the set of indexes of contexts of the target x t , that is, all character n grams in V that are adjacent to the target (see Figures 1 and 2).",
"The negative sampling distribution P neg of s V is proportional to its frequency in the corpus.",
"The model parameters z s , u s R d , s, s V , are learned by maximizing the objective.",
"We set n target = n max in our experiments.",
"Although we examine frequent n -grams for simplicity, incorporating supervised word boundary information or byte pair encoding into the construction of compositional n -gram set would be an interesting future work (Kim et al., 2018; Sennrich et al., 2016; Heinzerling and Strube, 2018).",
"To avoid the problems of word segmentation, Oshikiri (2017) proposed segmentation-free word embedding ( sembei ) (Oshikiri, 2017) that considers the M -most frequent character n -grams as individual words.",
"Then, a frequent n -gram lattice is constructed, which is similar to a word lattice used in morphological analysis (see Figure 3).",
"Finally, the pairs of adjacent n -grams in the lattice are considered as target-context pairs and they are fed to existing word embedding methods, e.g., skipgram (Mikolov et al., 2013).",
"Although sembei is simple, the frequent n -gram vocabulary tends to include a vast amount of nonwords (Kim et al., 2018).",
"Furthermore, its vocabulary size is limited to M , hence, sembei can not avoid the undesirable issue of OOV.",
"The proposed scne avoids these problems by taking all possible character n -grams as embedding targets.",
"Note that the target-context pairs of sembei are fully contained in those of scne (see Figure 1).",
"To overcome the problem of OOV in sembei , Kim et al. (2018) proposed an extension of sembei called word-like n -gram embedding ( wne ).",
"In wne , the n -gram vocabulary is fil-tered to have more vaild words by taking advantage of a supervised probabilistic word segmenter.",
"Although wne reduce the number of non-words, there is still the problem of OOV since its vocabulary size is limited.",
"In addition, wne is dependent on word segmenter while scne does not.",
"To deal with OOV words as well as rare words, Bojanowski et al. (2017) proposed subword information skip-gram ( sisg ) that enriches word embeddings with the representations of its subwords, i.e., sub-character n -grams of words.",
"In sisg , a vector representation of a target word is encoded as the sum of the embeddings of its subwords.",
"For instance, subwords of length n = 3 of the word where are extracted as <wh, whe, her, ere, re> , where < , > are special symbols added to the original word to represent its left and right word boundaries.",
"Then, a vector representation of where is encoded as the sum of the embeddings of these subwords and that of the special sequence <where> , which corresponds to the original word itself.",
"Although sisg is powerful, it requires the information of word boundaries as its input, that is, semantic units need to be specified when encoding targets.",
"Therefore, it cannot be directly applied to unsegmented languages.",
"Unlike sisg , scne does not require such information.",
"The proposed scne is much simpler, but due to its simpleness, the embedding target of scne should contains many non-words, which seems to be a problem (see Figure 1).",
"However, our experimental results show that scne successfully captures the semantics of words and even sentences for unsegmented languages without using any knowledge of word boundaries (see Section 3).",
"In this section, we perform two intrinsic and two extrinsic tasks at both word and sentence level, focusing on unsegmented languages.",
"The implementation of our method is available on GitHub 1 .",
"Baselines : We use skipgram (Mikolov et al., 2013), sisg (Bojanowski et al., 2017) and sembei (Oshikiri, 2017) as word embedding baselines.",
"For sentence embedding, we first test simple baselines obtained by averaging the word vectors over a word-segmented sentence.",
"In addition, we examine several recent successful sentence embedding methods, pv-dbow , pv-dm (Le and Mikolov, 2014) and sent2vec (Pagliardini et al., 2018) in an extrinsic task.",
"Note that both scne and sembei have embeddings of frequent character n -grams as their model parameters, but 1 www.github.com/kdrl/SCNE 10 50 100 200 300 Corpus size (MB) 20 30 40 50 60 70 Sp e a r m a n r a n k c o rr e l a t i o n skipgram rich sembei-sum sisg rich scne sembei 10 50 100 200 300 Corpus size (MB) 70 75 80 Figure 4: Word (left) and sentence (right) similarity tasks on portions of Chinese Wikipedia corpus.",
"the differences come from training strategies, such as embedding targets and the way of collecting co-occurrence information (see Section 2.1 for more details).",
"For contrasting scne with sembei , we also propose a variant of sembei (denoted by sembei-sum ) as one of baselines, which composes word and sentence embeddings by simply summing up the embeddings of their subn -grams which are learned by sembei .",
"Hyperparameters Tuning : To see the effect of rich resources for the segmentation-dependent baselines, we employ widely-used word segmenter with two settings: Using only a basic dictionary ( basic ) or using a rich dictionary together ( rich ).",
"The dimension of embeddings is 200 , the number of epochs is 10 and the number of negative samples is 10 for all the methods.",
"The n -gram vocabulary size M = 2 10 6 is used for sisg , sembei and scne .",
"The other hyperparameters, such as learning rate and n max , are carefully adjusted via a grid search in the validation set.",
"In the word similarity task, 2 -fold cross validation is used for evaluation.",
"In the sentence similarity task, we use the provided validation set.",
"In the downstream tasks, vector representations are combined with a supervised logistic regression classifier.",
"We repeat training and testing of the classifier 10 times, while the prepared dataset is randomly split into train ( 60% ) and test ( 40% ) sets at each time, and the hyperparameters are tuned by 3 -fold cross validation in the train set.",
"We adopt mean accuracy as the evaluation metric.",
"See Appendix A.1 for more experimental details.",
"We measure the ability of models to capture semantic similarity for words and sentences in Chinese; see Appendix A.2 for the experiment in",
"Japanese.",
"Given a set of word pairs, or sentence pairs, and their human annotated similarity scores, we calculated Spearman's rank correlation between the cosine similarities of the embeddings and the scores.",
"We use the dataset of Jin and Wu (2012) and Wang et al. (2017) for Chinese word and sentence similarity respectively.",
"Note that the conventional models, such as skipgram , cannot provide the embeddings for OOV words, while the compositional models, such as sisg and scne , can compute the embeddings by using their subword modeling.",
"In order to show comparable results, we use the null vector for these OOV words following Bojanowski et al. (2017).",
"Results : To see the effect of training corpus size, we train all models on portions of Wikipedia 2 .",
"The results are shown in Figure 4.",
"As it can be seen, the proposed scne is competitive with or outperforms the baselines for both word and sentence similarity tasks.",
"Moreover, it is worth noting that scne provides high-quality representations even when the size of training corpus is small, which is crucial for practical real-world settings where rich data is not available.",
"For a next experiment to see the effect of noisiness of training corpus, we test both noisy SNS corpus and the Wikipedia corpus 3 of the same size.",
"The results are reported in Table",
"1. As it can be seen, the performance of segmentation-dependent methods ( skipgram , sisg ) are decreased greatly by the noisiness of the corpus, while scne degrades only marginally.",
"The other two segmentation-free methods ( sembei , sembei-sum ) performed poorly.",
"This shows the efficacy of our method in the noisy texts.",
"On the other hand, in preliminary experiments on English (not shown), scne did not get better results than our segmentation-dependent baselines and it will be a future work to incorporate easily obtainable word boundary information into scne for segmented languages.",
"As a word-level downstream task, we conduct a noun category prediction on Chinese, Japanese and Korean 4 .",
"Most settings are the same as those of Oshikiri (2017).",
"Noun words and their semantic categories are extracted from Wikidata (Vrandecic and Krotzsch, 2014) with a predetermined semantic category set 5 , and the classifier is trained to predict the semantic category of words from the learned word representations, where unseen words are skipped in training and treated as errors in testing.",
"To see the effect of the noisiness of corpora, both noisy SNS corpus and Wikipedia corpus of the same size are examined as training corpora 6 .",
"Results : The results are reported in Table",
"2. Since the set of covered nouns (i.e., non-OOV words) depends on the methods, we calculate accuracies in two ways for a fair comparison: Using all the nouns and using the intersection of the covered nouns.",
"scne achieved the highest accuracies in all the settings when using all the nouns, and also 4 Although Korean has spacing, word boundaries are not obviously determined by space.",
"5 { food, song, music band name, manga, fictional character name, television series, drama, chemical compound, disease, taxon, city, island, country, year, business enterprise, public company, profession, university, language, book } 6 For each language, we use 100MB of Wikipedia and SNS data as training corpora.",
"For the SNS data, we use Sina Weibo for Chinese and Twitter for the rest.",
"performed well when using the intersection of the covered nouns, especially for the noisy corpora.",
"As a sentence-level evaluation, we perform sentiment analysis on movie review data.",
"We use 101k, 56k and 200k movie reviews and their scores respectively from Chinese, Japanese and Korean movie review websites (see Appendix A.1.6 for more details).",
"Each review is labeled as positive or negative by its rating score.",
"Sentence embedding models are trained using the whole movie reviews as training corpus.",
"Among the reviews, 5k positive and 5k negative reviews are randomly selected, and the selected reviews are used to train and test the classifiers as explained in Section 3.1.",
"Results : The results are reported in Table",
"3. The accuracies show that scne is also very effective in the sentence-level application.",
"In this experiment, we observe that the larger n max contributes to the performance improvement in sentence-level application by allowing our model to capture composed representations for longer phrases or sentences.",
"We proposed a simple yet effective unsupervised method to acquire general-purpose vector representations of words, phrases and sentences seamlessly, which is especially useful for languages whose word boundaries are not obvious, i.e., unsegmented languages.",
"Although our method does not rely on any manually annotated resources or word segmenter, our extensive experiments show that our method outperforms the conventional approaches that depend on such resources.",
"We would like to thank anonymous reviewers for their helpful advice.",
"This work was partially supported by JSPS KAKENHI grant 16H02789 to HS and 18J15053 to KF."
] | [
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"In this paper, we present a neural model for joint dropped pronoun recovery (DPR) and conversational discourse parsing (CDP) in Chinese conversational speech.",
"We show that DPR and CDP are closely related, and a joint model benefits both tasks.",
"We refer to our model as DiscProReco, and it first encodes the tokens in each utterance in a conversation with a directed Graph Convolutional Network (GCN).",
"The token states for an utterance are then aggregated to produce a single state for each utterance.",
"The utterance states are then fed into a biaffine classifier to construct a conversational discourse graph.",
"A second (multi-relational) GCN is then applied to the utterance states to produce a discourse relation-augmented representation for the utterances, which are then fused together with token states in each utterance as input to a dropped pronoun recovery layer.",
"The joint model is trained and evaluated on a new Structure Parsing-enhanced Dropped Pronoun Recovery (SPDPR) dataset that we annotated with both two types of information.",
"Experimental results on the SPDPR dataset and other benchmarks show that DiscProReco significantly outperforms the state-of-the-art baselines of both tasks.",
"Pronouns are often dropped in Chinese conversations as the identity of the pronoun can be inferred from the context (Kim, 2000; Yang et al., 2015) without causing the sentence to be incomprehensible.",
"The task of dropped pronoun recovery (DPR) aims to locate the position of the dropped pronoun and identify its type.",
"Conversational discourse parsing (CDP) is another important task that aims to analyze the discourse relations among utterances Corresponding author A1:",
"in a conversation, and plays a vital role in understanding multi-turn conversations.",
"Existing work regards DPR and CDP as two independent tasks and tackles them separately.",
"As an early attempt of DPR, Yang et al. (2015) employ a Maximum Entropy classifier to predict the position and type of dropped pronouns.",
"Zhang et al. (2019) and Yang et al. (2019) attempt to recover the dropped pronouns by modeling the referents with deep neural networks.",
"More recently, Yang et al. (2020) attempt to jointly predict all dropped pronouns in a conversation snippet by modeling dependencies between pronouns with general conditional random fields.",
"A major shortcoming of these DPR methods is that they overlook the discourse relation (e.g., reply, question) between conversational utterances when exploiting the context of the dropped pronoun.",
"At the same time, previous CDP methods (Li et al., 2014; Afantenos et al., 2015; Shi and Huang, 2019) first predict the relation for each utterance pair and then construct the discourse structure for the conversation with a decoding algorithm.",
"The effectiveness of these methods are compromised since the utterances might be incomplete when they have dropped pronouns.",
"To overcome these shortcomings, we propose a novel neural model called DiscProReco to perform DPR and CDP jointly.",
"Figure 1 is a Chinese conversation snippet between two speakers A and B that illustrates the advantages of such a joint approach.",
"In this example, a pronoun (you) is dropped in utterance B 3 .",
"It is critical for the DPR model to know that both utterances B 2 and B 3 are in reply to the utterance A 2 , when recovering this dropped pronoun.",
"Methods which ignore the structure ((B3 expands B2) replies A2) will more likely consider the utterance B 3 to be semantically similar to A 2 , and wrongly recover the pronoun as (I).",
"Given a pro-drop utterance and its context, DiscProReco parses the discourse structure of the conversation and recovers the dropped pronouns in the utterance in four steps:",
"(i) Each utterance is parsed into its dependency structure and fed into a directed GCN to output the syntactic token states.",
"The utterance state is then obtained by aggregating the token states in the utterance.",
"(ii) The utterance states of a conversation are fed into a biaffine classifier to predict the discourse relation between each utterance pair and the discourse structure of the conversation is constructed.",
"(iii) Taking the discourse structure as input, another (multi-relational) GCN updates the utterance states and fuses them into the token states for each utterance to produce discourse-aware token representations.",
"(iv) Based on the discourse structure-aware context representation, a pronoun recovery module is designed to recover the dropped pronouns in the utterances.",
"When training this model, all components are jointly optimized by parameter sharing so that CDP and DPR can benefit each other.",
"As there is no public dataset annotated with both dropped pronouns and conversational discourse structures, we also construct Structure Parsing-enhanced Dropped Pronoun Recovery (SPDPR) corpus, which is the first corpus annotated with both types of information.",
"Experimental results show that DiscProReco outperforms all baselines of CDP and DPR.",
"Contributions: This work makes the following contributions:",
"(i) We propose a unified framework DiscProReco to jointly perform CDP and DPR, and show that these two tasks can benefit each other.",
"(ii) We construct a new large-scale dataset SPDPR (Section 4) which supports fair comparison across different methods and facilitates future research on both DPR and CDP.",
"(iii) We present experimental results which show that DiscProReco with its joint learning mechanism realizes knowledge sharing between its CDP and DPR components and results in improvements for both tasks (Section 5).",
"The code and SPDPR dataset is available at https:// github.com/ningningyang/DiscProReco .",
"We first introduce the problem formulation of these two tasks.",
"Following the practices in (Yang et al., 2015, 2019, 2020), we formulate DPR as a sequence labeling problem.",
"DPR aims to recover the dropped pronouns in an utterance by assigning one of 17 labels to each token that indicates the type of pronoun that is dropped before the token (Yang et al., 2015).",
"CDP is the task of constructing the conversational discourse structure by predicting the discourse relation (Xue et al., 2016) among utterances.",
"The discourse relations may characterize one utterance as agreeing with, responding to, or indicate understanding of another utterance in the conversational context.",
"Let us denote an input pro-drop utterance of n tokens as X = ( w 1 , w 2 , , w n ) , and its contextual utterances as C = (X 1 , X 2 , , X m ) where the i th contextual utterance X i is a sequence of l i tokens: X i = ( w i, 1 , , w i,l i ) .",
"Our task aims to (1) model the distribution P(X j | X i , C) to predict the relation between each pair of utterances (i.e., (X i , X j ) ) for CDP, and (2) model Y = arg max Y P(Y | X , C) to predict the recovered pronoun sequence Y for the input utterance X .",
"Each element of Y is chosen from one of the T possible labels from Y = { y 1 , , y T 1 }{ None } to indicate whether a pronoun is dropped before the corresponding token in utterance X and the type of the dropped pronoun.",
"The label None means no pronoun is dropped before this token.",
"The architecture of DiscProReco is illustrated in Figure 2. Given a pro-drop utterance X and its",
"The syntactic dependency encoding layer then revises the sequential token states by exploiting the syntactic dependencies between tokens in the same utterance using a directed GCN and generates utterance representations.",
"After that, the biaffine relation prediction layer predicts the relation between each pair of utterances.",
"The discourse structure then is constructed based on the utterance nodes and the predicted relations.",
"The discourse structure encoding layer further encodes the inter-utterance discourse structures with a multi-relational GCN, and employs the discourse-based utterance representations to revise the syntactic token states.",
"Finally, the pronoun recovery layer explores the referent semantics from the context C and predicts the dropped pronouns in each utterance.",
"As the sequential token states overlook long-distance dependencies among tokens in a utterance, this layer takes in the sequential token states X and C , and revises them as syntactic token states as HX and HC by exploring the syntactic dependencies",
"between the tokens based on a directed GCN.",
"Specifically, for each input utterance in X and C , we first extract syntactic dependencies between the tokens with Stanford's Stanza dependency parser (Qi et al., 2020).",
"Using the output of the dependency parser, we construct a syntactic dependency graph for each utterance in which the nodes represents the tokens and the edges correspond to the extracted syntactic dependencies between the tokens.",
"Following the practices of (Marcheggiani and Titov, 2017; Vashishth et al., 2018), three types of edges are defined in the graph.",
"The node states are initialized by the sequential token states X and C , and then message passing is performed over the constructed graph using the directed GCN (Kipf and Welling, 2017), referred to as SynGCN .",
"The syntactic dependency representation of token w i,n after ( k + 1) -th GCN layer is defined as: h k +1 w i,n = ReLU (cid:16)(cid:80) u N + ( w i,n ) g ke (cid:0) W ke h ku + b ke (cid:1)(cid:17) , where W ke R d d and b ke R d are the edge-specific parameters, N + ( w i,n ) = N ( w i,n ) { w i,n } is the set of w i,n 's neighbors including itself, and ReLU ( ) = max(0 , ) is the Rectified Linear Unit.",
"g ke is an edge-wise gating mechanism which incorporates the edge importance as: g ke = (cid:16) w ke h ku + b ke (cid:17) , where w ke R 1 d and b ke R are independent trainable parameters for each layer, and ( ) is the sigmoid function.",
"The revised syntactic token states HX and HC of the pro-drop utterance and context are outputted for subsequent discourse structure prediction and pronoun recovery.",
"For conversational discourse parsing, we jointly predict the arc s (arc) i,j and relation s (rel) i,j between each pair of utterances utilizing the biaffine attention mechanism proposed in (Dozat and Manning, 2017).",
"Given the syntactic token states HX and HC , we make an average aggregation on these token states of each utterance X i to obtain the syntactic utterance representation h X i .",
"For a pair of utterances (X i , X j ) in the conversation snippet, we feed the representations of these two utterances into a biaffine function to predict the probability of an arc from X i to X j as: r (arc head) i = MLP (arc head) ( h X i ) , r (arc dep) j = MLP (arc dep) ( h X j ) , s (arc) i,j = r (arc head) i U (arc) r (arc dep) j + r (arc head) T i u (arc) , where MLP is the multi-layer perceptron that transforms the original utterance representation h X i and h X j into head or dependent-specific utterance states r (arc head) i and r (arc dep) j .",
"U (arc) and u (arc) are weight matrix and bias term used to determine the probability of a arc.",
"One distinctive characteristics of conversational discourse parsing is that the head of each dependent utterance must be chosen from the utterances before the dependent utterance.",
"Thus we add an upper triangular mask operation on the results of arc prediction to regularize the predicted arc head: s (arc) = mask( s (arc) ) .",
"We minimize the cross-entropy of gold head-dependent pair of utterances as: loss arc = m (cid:88) j =1 (X j | X i , C) log(P arc (X j | X i , C)) , P arc (X j | X i , C) = softmax( s (arc) i ) .",
"After obtaining the predicted directed unlabeled arc between each utterance pair, we calculate the score distribution s (rel) i,j R k of each arc X i X j , in which the t -th element indicates the score of the t -th relation as the arc label prediction function in (Dozat and Manning, 2017).",
"In the training phase, we also minimize the cross-entropy between gold relation labels and the predicted relations between utterances as: loss rel = (cid:80) nj =1 (X j | X i , C) log(P rel (X j | X i , C)) , P rel (X j | X i , C) = softmax( s (rel) i,j ) .",
"After the relations are predicted, we construct the discourse structure as a multi-relational graph in which each node indicates an utterance, and each edge represents the relation between a pair of utterances.",
"In order to utilize the discourse information in dropped pronoun recovery process, we first encode the discourse structure, and then utilize the discourse information-based utterance representations to improve token states which are used to model the pronoun referent.",
"Specifically, we apply a multiple relational GCN (Vashishth et al., 2020), referred to as RelGCN , over the graph to encode the discourse structure based utterance representations R and utilize the updated representations to further revise syntactic token states HX and HC for outputting discourse structure based token states ZX and ZC .",
"The node states of the graph are initialized as the average aggregation of token states of corresponding utterances.",
"The representation of utterance X i in the ( k + 1) -th layer is updated by incorporating the discourse relation state h krel as: r k +1 i = f (cid:16)(cid:80) ( j,rel ) N (X i ) P rel (X j | X i , C) W k (rel) (cid:16) r kj , h krel (cid:17)(cid:17) , where r kj and h krel denote the updated representation of utterance j and relation rel after the k -th GCN layers, and W k ( rel ) R d d is a relation-type specific parameter.",
"Following the practice of (Vashishth et al., 2020), we take the composition operator as multiplication in this work.",
"Please note that we take in the label distribution P rel (X j | X i , C) from the relation prediction layer and compute the weighted sum of each kind of relation to update the utterance representation, rather than taking the hard predicted relation by applying an argmax operation over the distribution.",
"After encoding the constructed discourse structure with a message passing process, we obtain the discourse relation-augmented utterance representations R , and then utilize the updated utterance representations to revise the syntactic token states with a linear feed-forward network: z w i,n = W 1 (cid:104) h k +1 w i,n ; r k +1 i (cid:105) + b 1 , where h k +1 w i,n refers to the token state of w i,n outputted from the ( k + 1) -th layer of SynGCN, r k +1 i refers to the state of the corresponding utterance that the token belongs to, outputted from the ( k + 1) -th layer of RelGCN.",
"The operation thus augments syntactic token states HX and HC with discourse information-based utterance representation to obtain discourse context-based token states ZX = ( z w 1 , . . . , z w n ) and ZC = ( z w 1 ,i , . . . , z w i,li ) , which will be used to model the referent semantics of the dropped pronoun in the dropper pronoun recovery layer.",
"This layer takes in the revised token representations ZX and ZC , and attempts to find tokens in context C that describe the referent of the dropped pronoun in the pro-drop utterance X with an attention mechanism.",
"The referent representation is then captured as the weighted sum of discourse context-based token states as: aw i,i (cid:48) ,n (cid:48) = softmax( W 2 (cid:16) z w i (cid:12) z w i (cid:48) ,n (cid:48) (cid:17) + b 2 ) , r w i = m (cid:88) i (cid:48) =1 l i (cid:48) (cid:88) n (cid:48) =1 aw i,i (cid:48) ,n (cid:48) z w i (cid:48) ,n (cid:48) .",
"Then we concatenate the referent representation r w i with the syntactic token representation h k +1 w i to predict the dropped pronoun category as follows: hr w i = tanh (cid:16) W 3 (cid:104) h k +1 w i ; r w i (cid:105) + b 3 (cid:17) , P ( y i | w i , C ) = softmax ( W 4 hr w i + b 4 ) .",
"The objective of dropped pronoun recovery aims to minimize cross-entropy between the predicted label distributions and the annotated labels for all sentences as: loss dp = (cid:88) q Q l i (cid:88) i =1 ( y i | w i , C) log ( P ( y i | w i , C)) , where Q represents all training instances, l i represents the number of words in pro-drop utterance; ( y i | w i , C) represents the annotated label of w i .",
"We train our DiscProReco by jointly optimizing the objective of both discourse relation prediction and dropped pronoun recovery.",
"The total training objective is defined as: loss = (loss arc + loss label ) + loss dp , (1) where and are weights of CDP objective function and DPR objective function respectively.",
"To verify the effectiveness of DiscProReco, we need a conversational corpus containing the annotation of both dropped pronouns and discourse relations.",
"To our knowledge, there is no such a public available corpus.",
"Therefore, we constructed the first Structure Parsing-enhanced Dropped Pronoun Recovery (SPDPR) dataset by annotating the discourse structure information on a popular dropped pronoun recovery dataset (i.e., Chinese SMS).",
"The Chinese SMS/Chat dataset consists of 684 multi-party chat files and is a popular benchmark for dropped pronoun recovery (Yang et al., 2015).",
"In this study, we set the size of the context snippet to be 8 utterances which include the current pro-drop utterance plus 5 utterances before and 2 utterances after.",
"When performing discourse relation annotation we ask three linguistic experts to independently choose a head utterance for the current utterance from its context and annotate the discourse relation between them according to a set of 8 pre-defined relations (see Appendix A).",
"The inter-annotator agreement for discourse relation annotation is 0.8362, as measured by Fleiss's Kappa.",
"The resulting SPDPR dataset consists of 292,455 tokens and 40,280 utterances, averaging 4,949 utterance pairs per relation, with a minimum of 540 pairs for the least frequent relation and a maximum of 12,252 for the most frequent relation.",
"The SPDPR dataset also annotates 31,591 dropped pronouns (except the None category).",
"In this work, 300-dimensional pre-trained embeddings (Li et al., 2018) were input to the BiGRU encoder, and 500-dimensional hidden states were uitilized.",
"For SynGCN and RelGCN, we set the number of GCN layers as 1 and 3 respectively, and augment them with a dropout rate of 0.5.",
"The SPDPR TC of OntoNotes BaiduZhidao Model P(%) R(%) F(%) P(%) R(%) F(%) P(%) R(%) F(%) MEPR 37.27 45.57 38.76 ---NRM 37.11 44.07 39.03 23.12 26.09 22.80 26.87 49.44 34.54 BiGRU 40.18 45.32 42.67 25.64 36.82 30.93 29.35 42.38 35.83 NDPR 49.39 44.89 46.39 39.63 43.09 39.77 41.04 46.55 42.94 XLM-RoBERTa-NDPR 54.03 50.18 52.46 43.14 46.37 45.13 46.04 49.12 47.54 Transformer-GCRF 52.51 48.12 49.81 40.48 44.64 42.45 43.30 46.54 43.92 DiscProReco 59.58 53.68 57.37 ---DiscProReco(XLM-R-w/o RelGCN) 56.32 52.28 55.67 44.62 47.14 46.98 47.31 50.43 48.19 DiscProReco(XLM-R) 61.13 54.26 59.47 ---Table 1: Experimental results produced by the baseline models, the proposed model DiscProReco and two variants of DiscProReco on all three conversation datasets in terms of precision, recall and F-score.",
"Stanza dependency parser (Qi et al., 2020) returns 41 kinds of dependency edges.",
"We remove 13 types of them which connects the punctuation with other tokens, and irrelevant to referent description.",
"During training, we utilized Adam optimizer (Kingma and Ba, 2015) with a 0.005 learning rate and trained our model for 30 epochs.",
"The model performed best on the validation set is used to make predictions on the test set.",
"We repeat each experiment 10 times and records the average results.",
"Datasets and Evaluation Metrics We tested the performance of DiscProReco for DPR on three datasets: (1) TC section of OntoNotes Release 5.0, which is a transcription of Chinese telephone conversations, and is released in the CoNLL 2012 Shared Task.",
"(2) BaiduZhidao, which is a question answering corpus (Zhang et al., 2019).",
"Ten types of concrete pronouns were annotated according to the pre-defined guidelines.",
"These two benchmarks do not contain the discourse structure information and are mainly used to evaluate the effectiveness of our model for DPR task.",
"(3) The SPDPR dataset, which contains 684 conversation files annotated with dropped pronouns and discourse relations.",
"Following practice in (Yang et al., 2015, 2019), we reserve the same 16.7% of the training instances as the development set, and a separate test set was used to evaluate the models.",
"The statistics of the three datasets are shown in Appendix B. Same as existing efforts (Yang et al., 2015, 2019), we use Precision(P), Recall(R) and F-score(F) as metrics when evaluating the performance of dropped pronoun models.",
"Baselines We compared DiscProReco against existing baselines, including: (1) MEPR (Yang et al., 2015), which leverages a Maximum Entropy classifier to predict the type of dropped pronoun before each token; (2) NRM (Zhang et al., 2019), which employs two MLPs to predict the position and type of a dropped pronoun separately; (3) BiGRU, which utilizes a bidirectional GRU to encode each token in a pro-drop sentence and then makes prediction; (4) NDPR (Yang et al., 2019), which models the referents of dropped pronouns from a large context with a structured attention mechanism; (5) Transformer-GCRF (Yang et al., 2020), which jointly recovers the dropped pronouns in a conversational snippet with general conditional random fields; (6) XLM-RoBERTa-NDPR, which utilizes the pre-trained multilingual masked language model (Conneau et al., 2020) to encode the pro-drop utterance and its context, and then employs the attention mechanism in NDPR to model the referent semantics.",
"We also compare two variants of DiscProReco: (1) DiscProReco (XLM-R-w/o RelGCN), which replaces the BiGRU encoder with the pre-trained XLM-RoBERTa model, removes the RelGCN layer, and only utilizes SynGCN to encode syntactic token representations for predicting the dropped pronouns.",
"(2) DiscProReco(XLM-R) which uses the pre-trained XLM-RoBERTa model as an encoder to replace the BiGRU network in our proposed model.",
"Experimental Results Table 1 reports the results of DiscProReco and the baseline methods on DPR.",
"Please note that for the baseline methods, we directly used the numbers originally reported in the corresponding papers.",
"From the results, we observed that our variant model DiscProReco(XLM-A1:",
"R-w/o RelGCN) outperforms existing baselines on three datasets by all evaluation metrics, which prove the effectiveness of our system as a stand-alone model for recovering dropped pronouns.",
"We attribute this to the ability of our model to consider long-distance syntactic dependencies between tokens in the same utterance.",
"Note that the results for feature-based baseline MEPR (Yang et al., 2015) on OntoNotes, and BaiduZhidao are not available because several essential features cannot been obtained.",
"However, our proposed DiscProReco still significantly outperforms DiscProReco (XLM-R-w/o RelGCN) as it achieved 3 .",
"26% , 1 .",
"40% , and 1 .",
"70% absolute improvements in terms of precision, recall and F-score respectively on SPDPR corpus.",
"This shows that discourse relations between utterances are crucially important for modeling the referent of dropped pronouns and achieving better performance in dropped pronoun recovery.",
"This is consistent with the observation in (Ghosal et al., 2019).",
"The best results are achieved when our model uses uses the pre-trained XLM-RoBERTa (i.e., DiscProReco(XLM-R)).",
"Note that discourse relations are not available for Ontonotes and BaiduZhidao datasets and thus we do not have joint learning results for these two data sets.",
"Error Analysis We further investigated some typical mistakes made by our DiscProReco for DPR.",
"Resolving DPR involves effectively modeling the referent of each dropped pronoun from the context to recover the dropped pronoun.",
"As illustrate in Figure 3, both DiscProReco and NDPR model the referent from the context.",
"The former outperforms the latter since it considers the conversation structure that the utterance B3 is a reply to A3 but not an expansion to the utterance B1.",
"However, just modeling the referent from the context is insufficient.",
"In Figure 3, the referent of the dropped pronoun STAC SPDPR Model Arc Rel Arc Rel MST 68.8 50.4 -ILP 68.6 52.1 -Deep+MST 69.6 52.1 81.06 40.93 Deep+ILP 69.0 53.1 80.53 41.38 Deep+Greedy 69.3 51.9 81.32 42.38 Deep Sequential 73.2 55.7 83.00 43.45 DiscProReco(w/o DPR) 74.1 57.0 84.51 51.34 DiscProReco -87.97 53.07 Table 2: Micro-averaged F-score (%) of conversational discourse parsing on two standard benchmarks.",
"was correctly identified but the dropped pronoun is mistakenly identified as ( /they).",
"This indicates that the model needs to be augmented with some additional knowledge, such as the difference between singular and plural pronouns.",
"Datasets and Evaluation Metrics We evaluated the effectiveness of our DiscProReco framework for CDP task on two datasets as: (1) STAC, which is a standard benchmark for discourse parsing on multi-party dialogue (Asher and Lascarides, 2005).",
"The dataset contains 1,173 dialogues, 12,867 EDUs and 12,476 relations.",
"Same as existing studies, we set aside 10% of the training dialogues as the validation data.",
"(2) SPDPR, which is constructed in our work containing 684 dialogues and 39,596 annotated relations.",
"Following (Shi and Huang, 2019), we also utilized micro-averaged F-score as the evaluation metric.",
"Baselines We compared our DiscProReco with existing baseline methods: (1) MST (Afantenos et al., 2015): A approach that uses local information in two utterances to predict the discourse relation, and uses the Maximum Spanning Tree (MST) to construct the discourse structure; (2) ILP (Perret et al., 2016): Same as MST except that the MST algorithm is replaced with Integer Linear Programming (ILP); (3) Deep+MST: A neural network that encodes the discourse representations with GRU, and then uses MST to construct the discourse structure; (4) Deep+ILP: Same as Deep+MST except that the MST algorithm is replaced with Integer Linear Programming (ILP); (5) Deep+Greedy: Similar to Deep+MST and Deep+ILP except that this model uses a greedy decoding algorithm to select the par-ent for each utterance; (6) Deep Sequential (Shi and Huang, 2019): A deep sequential neural network which predicts the discourse relation utilizing both local and global context.",
"In order to explore the effectiveness of joint learning scheme, we also make a comparison of our DiscProReco with its variant, referred to as DiscProReco(w/o DPR), which predict the discourse relation independently, without recovering the dropped pronouns.",
"Experimental Results We list the experimental results of our approach and the baselines in Table 2. For the STAC dataset, we also reported the original results of the STAC benchmark from an existing paper (Shi and Huang, 2019), and apply our DiscProReco to this corpus.",
"For the SPDPR dataset, we ran the baseline methods with the same parameter settings.",
"From the results we can see that the variant of our approach DiscProReco (w/o DPR) outperforms the baselines of discourse parsing.",
"We attribute this to the effectiveness of the biaffine attention mechanism for dependency parsing task (Yan et al., 2020; Ji et al., 2019).",
"However, our approach DiscProReco still significantly outperforms all the compared models.",
"We attribute this to the joint training of the CDP task and the DPR task.",
"The parameter sharing mechanism makes these two tasks benefits each other.",
"Note that the results for the joint model is not available for STAC as STAC is not annotated with dropped pronouns.",
"We also conducted experiments on SPDPR to study the quantitative interaction between DPR and CDP.",
"Firstly, during the training process, we optimize our DiscProReco model utilizing the objective function in Eq.",
"1 until the CDP task achieves a specific F-score (i.e., gradually increases from 30.64 to 50.38).",
"Then we fix the CDP components and continue to optimize the components of DPR task.",
"We conduct this experiment to explore the influence of CDP task on the DPR task.",
"Secondly, we set the ratio between and in Eq.",
"1 varies from 0.25 to 1.25 and record the F-score of DPR and CDP respectively.",
"We conduct this experiment to study the interanction between these two tasks by modifying their weights in the objective function.",
"Results of these two experiments are shown in Figure 4.",
"According to Figure 4",
"(a), the performance of DPR is increased in terms of all evaluation metrics as the F-score of CDP increases, which indicates that exploring the discourse relations between utterances benefits dropped pronoun 30 35 40 45 50 F-score of arc prediction in CDP 44 46 48 50 52 54 56 58 60 P e rf o r m a n ce o f DPR Precision Recall F-score 0.2 0.4 0.6 0.8 1.0 1.2 Ratio between to 44 46 48 50 52 54 56 58 Fs c o r e o f DPR a nd CDPDPR CDP Figure 4: Exploratory results.",
"Moreover, Figure 4",
"(b) illustrate the performance of DPR and CDP when the ratio between to varies gradually.",
"Results show that the performance of CDP remains stable, while the performance of DPR increases at beginning and then decrease sharply as the ratio increases, indicating that DiscProReco framework should pay more attention to DPR during the optimizing process.",
"Dropped pronoun recovery is a critical technique that can benefit many downstream applications (Wang et al., 2016, 2018; Su et al., 2019).",
"Yang et al. (2015) for the first time proposed this task, and utilized a Maximum Entropy classifier to recover the dropped pronouns in text messages.",
"Giannella et al. (2017) further employed a linear-chain CRF to jointly predict the position and type of the dropped pronouns in a single utterance using hand-crafted features.",
"Due to the powerful semantic modeling capability of deep learning, Zhang et al. (2019); Yang et al. (2019) introduced neural network methods to recover the dropped pronoun by modeling its semantics from the context.",
"All these methods represent the utterances without considering the relationship between utterances, which is important to identify the referents.",
"Zero pronoun resolution is also a closely related line of research to DPR (Chen and Ng, 2016; Yin et al., 2017, 2018).",
"The main difference between DPR and zero pronoun resolution task is that DPR considers both anaphoric and non-anaphoric pronouns, and doesn't attempt to resolve it to a referent.",
"Existing discourse parsing methods first predicted the probability of discourse relation, and then applied a decoding algorithm to construct the discourse structure (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015; Perret et al., 2016).",
"A deep sequential model (Shi and Huang, 2019) was further presented to predict the discourse dependencies utilizing both local information of two utterances and the global information of existing constructed discourse structure.",
"All these methods consider how to do relation prediction independently.",
"However, in this work, we explore the connection between the CDP and DPR, and attempt to make these two tasks mutually enhance each other.",
"This paper presents that dropped pronoun recovery and conversational discourse parsing are two strongly related tasks.",
"To make them benefit from each other, we devise a novel framework called DiscProReco to tackle these two tasks simultaneously.",
"The framework is trained in a joint learning paradigm, and the parameters for the two tasks are jointly optimized.",
"To facilitate the study of the problem, we created a large-scale dataset called SPDPR which contains the annotations of both dropped pronouns and discourse relations.",
"Experimental results demonstrated that DiscProReco outperformed all baselines on both tasks.",
"This work was supported by the National Key R&D Program of China (2019YFE0198200), the National Natural Science Foundation of China (No. 61872338, No. 61832017), Beijing Academy of Artificial Intelligence (BAAI2019ZD0305), Beijing Outstanding Young Scientist Program NO.",
"BJJWZYJH012019100020098 and BUPT Excellent Ph.D.",
"Students Foundation (No.CX2020305)."
] | [
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"result",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"other",
"other",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other",
"other",
"other"
] |