sentences
sequence
labels
sequence
[ "Several recent studies have shown that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models that fail to generalize to out-of-domain datasets and are likely to perform poorly in real-world scenarios.", "We propose two learning strategies to train neural models, which are more robust to such biases and transfer better to out-of-domain datasets.", "The biases are specified in terms of one or more bias-only models , which learn to leverage the dataset biases.", "During training, the bias-only models' predictions are used to adjust the loss of the base model to reduce its reliance on biases by down-weighting the biased examples and focusing training on the hard examples.", "We experiment on large-scale natural language inference and fact verification benchmarks, evaluating on out-of-domain datasets that are specifically designed to assess the robustness of models against known biases in the training data.", "Results show that our debiasing methods greatly improve robustness in all settings and better transfer to other textual entailment datasets.", "Our code and data are publicly available in https: //github.com/rabeehk/robust-nli .", "Recent neural models (Devlin et al., 2019; Radford et al., 2018; Chen et al., 2017) have achieved high and even near human-performance on several large-scale natural language understanding benchmarks.", "However, it has been demonstrated that neural models tend to rely on existing idiosyncratic biases in the datasets, and leverage superficial correlations between the label and existing shortcuts in the training dataset to perform surprisingly well, 1 without learning the underlying task (Kaushik and Lipton, 2018; Gururangan et al., 2018; Poliak et al., 2018; Schuster et al., 2019; 1 We use biases, heuristics or shortcuts interchangeably. McCoy et al., 2019b).", "For instance, natural language inference (NLI) is supposed to test the ability of a model to determine whether a hypothesis sentence ( There is no teacher in the room ) can be inferred from a premise sentence ( Kids work at computers with a teacher's help ) (Dagan et al., 2006).", "2 However, recent work has demonstrated that large-scale NLI benchmarks contain annotation artifacts; certain words in the hypothesis that are highly indicative of inference class and allow models that do not consider the premise to perform unexpectedly well (Poliak et al., 2018; Gururangan et al., 2018).", "As an example, in some NLI benchmarks, negation words such as nobody, no, and not in the hypothesis are often highly correlated with the contradiction label.", "As a result of the existence of such biases, models exploiting statistical shortcuts during training often perform poorly on out-of-domain datasets, especially if the datasets are carefully designed to limit the spurious cues.", "To allow proper evaluation, recent studies have tried to create new evaluation datasets that do not contain such biases (Gururangan et al., 2018; Schuster et al., 2019; McCoy et al., 2019b).", "Unfortunately, it is hard to avoid spurious statistical cues in the construction of large-scale benchmarks, and collecting new datasets is costly (Sharma et al., 2018).", "It is, therefore, crucial to develop techniques to reduce the reliance on biases during the training of the neural models.", "We propose two end-to-end debiasing techniques that can be used when the existing bias patterns are identified.", "These methods work by adjusting the cross-entropy loss to reduce the biases learned from the training dataset, down-weighting the biased examples so that the model focuses on learning the hard examples.", "Figure 1 illustrates an example of applying our strategy to prevent an NLI model from predicting the labels using existing biases in the hypotheses, where the bias-only model only sees the hypothesis.", "Our strat-2 The given sentences are in the contradictory relation, and the hypothesis cannot be inferred from the premise.", "egy involves adding this bias-only branch f B on top of the base model f M during training.", "We then compute the combination of the two models f C in a way that motivates the base model to learn different strategies than the ones used by the bias-only branch f B .", "At the end of the training, we remove the bias-only classifier and use the predictions of the base model.", "In our first proposed method, Product of Experts, the training loss is computed on an ensemble of the base model and the bias-only model, which reduces the base model's loss for the examples that the bias-only model classifies correctly.", "For the second method, Debiased Focal Loss, the bias-only predictions are used to directly weight the loss of the base model, explicitly modulating the loss depending on the accuracy of the bias-only model.", "We also extend these methods to be robust against multiple sources of bias by training multiple bias-only models.", "Our approaches are simple and highly effective.", "They require training only a simple model on top of the base model.", "They are model agnostic and general enough to be applicable for addressing common biases seen in many datasets in different domains.", "We evaluate our models on challenging benchmarks in textual entailment and fact verification, including HANS (Heuristic Analysis for NLI Systems) (McCoy et al., 2019b), hard NLI sets (Gururangan et al., 2018) of Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (MNLI) (Williams et al., 2018), and FEVER Symmetric test set (Schuster et al., 2019).", "The selected datasets are highly challenging and have been carefully designed to be unbiased to allow proper evaluation of the out-of-domain performance of the models.", "We additionally construct hard MNLI datasets from MNLI development sets to facilitate the out-of-domain evaluation on this dataset.", "3 We show that including our strategies on training baseline models, including BERT (Devlin et al., 2019), provides a substantial gain on out-of-domain performance in all the experiments.", "In summary, we make the following contributions:", "1) Proposing two debiasing strategies to train neural models robust to dataset bias.", "2) An empirical evaluation of the methods on two large-scale NLI datasets and a fact verification benchmark; obtaining a substantial gain on their challenging out-of-domain data, including 7.4 points on HANS, 4.8 points on SNLI hard set, and 9.8 points on FEVER symmetric test set, setting a new state-of-the-art.", "3) Proposing debiasing strategies capable of combating multiple sources of bias.", "4) Evaluating the transfer performance of the debiased models on 12 NLI datasets and demonstrating improved transfer to other NLI benchmarks.", "To facilitate future work, we release our datasets and code.", "To address dataset biases, researchers have proposed to augment datasets by balancing the existing cues (Schuster et al., 2019) or to create an adversarial dataset (Jia and Liang, 2017).", "However, collecting new datasets, especially at a large scale, is costly, and thus remains an unsatisfactory solution.", "It is, therefore, crucial to develop strategies to allow models to be trained on the existing biased datasets.", "3 Removing the need to submit to an online evaluation system for MNLI hard test sets.", "Schuster et al. (2019) propose to first compute the n-grams in the dataset's claims that are the most associated with each fact-verification label.", "They then solve an optimization problem to assign a balancing weight to each training sample to alleviate the biases.", "In contrast, we propose several end-to-end debiasing strategies.", "Additionally, Belinkov et al. (2019a) propose adversarial techniques to remove from the NLI sentence encoder the features that allow a hypothesis-only model to succeed.", "However, we believe that in general, the features used by the hypothesis-only model can include some information necessary to perform the NLI task, and removing such information from the sentence representation can hurt the performance of the full model.", "Their approach consequently degrades the performance on the hard SNLI set, which is expected to be less biased.", "In contrast, we propose to train a bias-only model to use its predictions to dynamically adapt the classification loss to reduce the importance of the most biased examples.", "Concurrently to our work, Clark et al. (2019) and He et al. (2019) have also proposed to use the product of experts (PoE) models for avoiding biases.", "They train their models in two stages, first training a bias-only model and then using it to train a robust model.", "In contrast, our methods are trained in an end-to-end manner, which is convenient in practice.", "We additionally show that our proposed Debiased Focal Loss model is an effective method to reduce biases, sometimes superior to PoE.", "We have evaluated on new domains of NLI hard sets and fact verification.", "Moreover, we have included an analysis showing that our debiased models indeed have lower correlations with the bias-only models, and have extended our methods to guard against multiple bias patterns simultaneously.", "We furthermore study transfer performance to other NLI datasets.", "Problem formulation We consider a general multi-class classification problem.", "Given a dataset D = { x i ,y i } Ni =1 consisting of the input data x i X , and labels y i Y , the goal of the base model is to learn a mapping f M parameterized by M that computes the predictions over the label space given the input data, shown as f M : X R |Y| .", "Our goal is to optimize M parameters such that we build a model that is more resistant to benchmark dataset biases, to improve its robustness to domain changes where the biases typically observed in the training data do not exist in the evaluation dataset.", "The key idea of our approach, depicted in Figure 1, is first to identify the dataset biases that the base model is susceptible to relying on, and define a bias-only model to capture them.", "We then propose two strategies to incorporate this bias-only knowledge into the training of the base model to make it robust against the biases.", "After training, we remove the bias-only model and use the predictions of the base model.", "We assume that we do not have access to any data from the out-of-domain dataset, so we need to know a priori about the possible types of shortcuts we would like the base model to avoid relying on.", "Once these patterns are identified, we train a bias-only model designed to capture the identified shortcuts that only uses biased features .", "For instance, a hypothesis-only model in the large-scale NLI datasets can correctly classify the majority of samples using annotation artifacts (Poliak et al., 2018; Gururangan et al., 2018).", "Motivated by this work, our bias-only model for NLI only uses hypothesis sentences.", "Note that the bias-only model can, in general, have any form, and is not limited to models using only a part of the input data.", "For instance, on the HANS dataset, our bias-only model makes use of syntactic heuristics and similarity features (see Section 4.3).", "Let x bi X b be biased features of x i that are predictive of y i .", "We then formalize this bias-only model as a mapping f B : X b R |Y| , parameterized by B and trained using cross-entropy (CE) loss LB : LB ( B )= 1 NN (cid:88) i =1 log( ( f y i B ( x bi ; B ))) , (1) where f jB ( x bi , B ) is the j th element of f B ( . ) , and ( u j )= e u j / (cid:80) |Y| k =1 e u k is the softmax function.", "We propose two strategies to incorporate the bias-only f B knowledge into the training of the base model f M .", "In our strategies, the predictions of the bias-only model are combined with either the predictions of the base model or its error, to down-weight the loss for the examples that the bias-only model can predict correctly.", "We then update parameters of the base model M based on this modified loss LC .", "Our learning strategies are end-to-end.", "Therefore, to prevent the base model from learning the biases, the bias-only loss LB is not back-propagated to any shared parameters of the base model, such as a shared sentence encoder.", "Our first approach is based on the product of experts (PoE) method (Hinton, 2002).", "Here, we use this method to combine the bias-only and base model's predictions by computing the element-wise product (cid:12) between their predictions as ( f B ( x bi )) (cid:12) ( f M ( x i )) .", "We compute this combination in the logarithmic space, making it appropriate for the normalized exponential below: f C ( x i , x bi )=log( ( f B ( x bi )))+log( ( f M ( x i ))) , The key intuition behind this model is to combine the probability distributions of the bias-only and the base model to allow them to make predictions based on different characteristics of the input; the bias-only branch covers prediction based on biases, and the base model focuses on learning the actual task.", "Then the base model parameters M are trained using the cross-entropy loss LC of the combined classifier f C : LC ( M ; B )= 1 NN (cid:88) i =1 log( ( f y i C ( x i , x bi ))) .", "the updates for examples that it can accurately predict.", "Justification: Probability of label y i for the example x i in the PoE model is computed as: ( f y i C ( x i , x bi ))= ( f y i B ( x bi )) ( f y i M ( x i )) (cid:80) |Y| k =1 ( f kB ( x bi )) ( f kM ( x i )) Then the gradient of cross-entropy loss of the combined classifier (2) w.r.t M is (Hinton, 2002): MLC ( M ; B )= 1 NN (cid:88) i =1 |Y| (cid:88) k =1 (cid:20) (cid:16) y i k ( f kC ( x i , x bi )) (cid:17) M log( ( f kM ( x i ))) (cid:21) , where y i k is 1 when k = y i and 0 otherwise.", "Generally, the closer the ensemble's prediction ( f kC ( . )) is to the target y i k , the more the gradient is decreased through the modulating term, which only happens when the bias-only and base models are both capturing biases.", "In the extreme case, when the bias-only model correctly classifies the sample, ( f y i C ( x i , x bi )) = 1 and therefore MLC ( M ; B ) = 0 , the biased examples are ignored during training.", "Conversely, when the example is fully unbiased, the bias-only classifier predicts the uniform distribution over all labels ( f kB ( x bi )) = 1 |Y| for k Y , therefore ( f y i C ( x i , x bi )) = ( f y i M ( x i )) and the gradient of ensemble classifier remains the same as the CE loss.", "Focal loss was originally proposed in Lin et al. (2017) to improve a single classifier by down-weighting the well-classified points.", "We propose a novel variant of this loss that leverages the bias-only branch's predictions to reduce the relative importance of the most biased examples and allows the model to focus on learning the hard examples.", "We define Debiased Focal Loss (DFL) as: LC ( M ; B )= (3) 1 NN (cid:88) i =1 (cid:16) 1 ( f y i B ( x bi )) (cid:17) log( ( f y i M ( x i ))) where is the focusing parameter, which impacts the down-weighting rate.", "When is set to 0, DFL is equivalent to the cross-entropy loss.", "For > 0 , as the value of is increased, the effect of down-weighting is increased.", "We set =2 through all experiments, which works well in practice, and avoid fine-tuning it further.", "We note the properties of this loss: (1) When the example x i is unbiased, and the bias-only branch does not do well, ( f y i B ( x bi )) is small, therefore the scaling factor is close to 1 , and the loss remains unaffected.", "(2) As the sample is more biased and ( f y i B ( x bi )) is closer to 1, the modulating factor approaches 0 and the loss for the most biased examples is down-weighted.", "We compare our models to RUBi (Cadene et al., 2019), a recently proposed model to alleviate unimodal biases learned by Visual Question Answering (VQA) models.", "Cadene et al. (2019)'s study is limited to VQA datasets.", "We, however, evaluate the effectiveness of their formulation on multiple challenging NLU benchmarks.", "RUBi consists in first applying a sigmoid function to the bias-only model's predictions to obtain a mask containing an importance weight between 0 and 1 for each label.", "It then computes the element-wise product between the obtained mask and the base model's predictions: f C ( x i , x bi )= f M ( x i ) (cid:12) ( f B ( x bi )) , The main intuition is to dynamically adjust the predictions of the base model to prevent it from leveraging the shortcuts.", "Then the parameters of the base model M are updated by back-propagating the cross-entropy loss LC of the combined classifier.", "Neural models can, in practice, be prone to multiple types of biases in the datasets.", "We, therefore, propose methods for combining several bias-only models.", "To avoid learning relations between biased features, we do not consider training a classifier on top of their concatenation.", "Instead, let { x b j i } Kj =1 be different sets of biased features of x i that are predictive of y i , and let f B j be an individual bias-only model capturing x b j i .", "Next, we extend our debiasing strategies to handle multiple bias patterns.", "Method 1: Joint Product of Experts We extend our proposed PoE model to multiple bias-only models by computing the element-wise product between the predictions of bias-only models and the base model as: ( f B 1 ( x b 1 i )) (cid:12)(cid:12) ( f BK ( x b K i )) (cid:12) ( f M ( x i )) , computed in the logarithmic space: f C ( x i , { x b j i } Kj =1 )= K (cid:88) j =1 log( ( f B j ( x b j i ))) +log( ( f M ( x i ))) .", "Then the base model parameters M are trained using the cross-entropy loss of the combined classifier f C .", "Method 2: Joint Debiased Focal Loss To extend DFL to handle multiple bias patterns, we first compute the element-wise average of the predictions of the multiple bias-only models: f B ( { x b j i } Kj =1 ) = 1 K (cid:80) Kj =1 f B j ( x b j i ) , and then compute the DFL (3) using the computed joint bias-only model.", "We provide experiments on a fact verification (FEVER) and two large-scale NLI datasets (SNLI and MNLI).", "We evaluate the models' performance on recently-proposed challenging unbiased evaluation sets.", "We use the BERT (Devlin et al., 2019) implementation of Wolf et al. (2019) as our main baseline, known to work well for these tasks.", "In all the experiments, we use the default hyperparameters of the baselines.", "Dataset: The FEVER dataset contains claim-evidence pairs generated from Wikipedia.", "Schuster et al. (2019) collected a new evaluation set for the FEVER dataset to avoid the idiosyncrasies observed in the claims of this benchmark.", "They made the original claim-evidence pairs of the FEVER evaluation dataset symmetric, by augmenting them and making each claim and evidence appear with each label.", "Therefore, by balancing the artifacts, relying on statistical cues in claims to classify samples is equivalent to a random guess.", "The collected dataset is challenging, and the performance of the models relying on biases evaluated on this dataset drops significantly.", "Base models: We consider BERT as the base model, which works the best on this dataset (Schuster et al., 2019), and predicts the relations based on the concatenation of the claim and the evidence with a delimiter token (see Appendix A).", "Results: Table 1 shows the results.", "Our proposed debiasing methods, PoE and DFL, are highly effective, boosting the performance of the baseline by 9.8 and 7.5 points respectively, significantly surpassing the prior work of Schuster et al. (2019).", "Datasets: We evaluate on hard datasets of SNLI and MNLI (Gururangan et al., 2018), which are the splits of these datasets where a hypothesis-only model cannot correctly predict the labels.", "Gururangan et al. (2018) show that the success of the recent textual entailment models is attributed to the biased examples, and the performance of these models is substantially lower on the hard sets.", "Base models: We consider BERT and InferSent (Conneau et al., 2017) as our base models.", "We choose InferSent to be able to compare with the prior work of Belinkov et al. (2019b).", "Bias-only model: The bias-only model predicts the labels using the hypothesis (Appendix B).", "Results on SNLI: Table 2 shows the SNLI results.", "With InferSent, DFL and PoE result in 4.1 and 4.8 points gain.", "With BERT, DFL and PoE improve the results by 2.5 and 1.6 absolute points.", "Compared to the prior work of Belinkov et al. (2019b) (AdvCls), our PoE model obtains a 7.4 points gain, setting a new state-of-the-art.", "Results on MNLI: We construct hard sets from the validation sets of MNLI Matched and Mismatched (MNLI-M).", "Following Gururangan et al. (2018), we train a fastText classifier (Joulin et al., 2017) that predicts the labels using only the hypothesis and consider the subset on which it fails as hard examples.", "We report the results on MNLI mismatched sets in Table 3 (see Appendix B for similar results on MNLI matched).", "With BERT, DFL and PoE obtain 1.4 and 1.7 points gain on the hard development set, while with InferSent, they improve the results by 2.5 and 2.6 points.", "To comply with limited access to the MNLI submission system, we evaluate only the best result of the baselines and our models on the test sets.", "Our PoE model improves the performance on the hard test set by 1.1 points while retaining in-domain accuracy.", "Dataset: McCoy et al. (2019b) show that NLI models trained on MNLI can adopt superficial syntactic heuristics.", "They introduce HANS, consisting of several examples on which the syntactic heuristics fail.", "Base model: We use BERT as our base model and train it on the MNLI dataset.", "Bias-only model: We consider the following features for the bias-only model.", "The first four features are based on the syntactic heuristics proposed in McCoy et al. (2019b):", "1) Whether all words in the hypothesis are included in the premise;", "2) If the hypothesis is the contiguous subsequence of the premise;", "3) If the hypothesis is a subtree in the premise's parse tree;", "4) The number of tokens shared between premise and hypothesis normalized by the number of tokens in the premise.", "We additionally include some similarity features:", "5) The cosine similarity between premise and hypothesis's pooled token representations from BERT followed by min, mean, and max-pooling.", "We consider the same weight for contradiction and neutral labels in the bias-only loss to allow the model to recognize entailment from not-entailment.", "During the evaluation, we map the neutral and contradiction labels to not-entailment.", "Results: McCoy et al. (2019a) observe large variability in the linguistic generalization of neural models.", "We, therefore, report the averaged results across 4 runs with the standard deviation in Table 4.", "PoE and DFL obtain 4.4 and 7.4 points gain (see Appendix C for accuracy on individual heuristics of HANS).", "We compare our results with the concurrent work of Clark et al., who propose a PoE model similar to ours, which gets similar results.", "The main difference is that our models are trained end-to-end, which is convenient in practice, while Clark et", "al.'s method requires two steps, first training a bias-only model and then using this pre-trained model to train a robust model.", "The Reweight baseline in Clark et al. is a special case of our DFL with =1 and performs similarly to our DFL method (using default =2 ).", "Their Learned-Mixin+H method requires hyperparameter tuning.", "Since the assumption is not having access to any out-of-domain test data, and there is no available dev set for HANS, it is challenging to perform hyper-parameter tuning.", "Clark et al. follow prior work (Grand and Belinkov, 2019; Ramakrishnan et al., 2018) and perform model section on the test set.", "To provide a fair comparison, we consequently also tuned in DFL by sweeping over { 0 .", "5 , 1 , 2 , 3 , 4 } .", "DFL (cid:68) is the selected model, with = 3 .", "With this hyperparameter tuning, DFL is even more effective, and our best result performs 2.8 points better than Clark et al. (2019).", "To evaluate combating multiple bias patterns, we jointly debias a base model on the hypothesis artifacts and syntactic biases.", "Results: Table 5 shows the results.", "Models trained to be robust to hypothesis biases ( (cid:168) ) do not generalize to HANS.", "On the other hand, models trained to be robust on HANS ( (cid:170) ) use a powerful bias-only model resulting in a slight improvement on MNLI mismatched hard dev set.", "We expect a slight degradation when debiasing for both biases since models need to select samples accommodating both debiasing needs.", "The jointly debiased models successfully obtain improvements on both datasets, which are close to the improvements on each dataset by the individually debiased models.", "To evaluate how well the baseline and proposed models generalize to solving textual entailment in domains that do not share the same annotation biases as the large NLI training sets, we take trained NLI models and test them on several NLI datasets.", "Datasets: We consider a total of 12 different NLI datasets.", "We use the 11 datasets studied by Poliak et al. (2018).", "These datasets include MNLI, SNLI, SciTail (Khot et al., 2018), AddOneRTE (ADD1) (Pavlick and Callison-Burch, 2016), Johns Hopkins Ordinal Commonsense Inference (JOCI) (Zhang et al., 2017), Multiple Premise Entailment (MPE) (Lai et al., 2017), Sentences Involving Compositional Knowledge (SICK) (Marelli et al., 2014), and three datasets from White et al. (2017) which are automatically generated from existing datasets for other NLP tasks including: Semantic Proto-Roles (SPR) (Reisinger et al., 2015), Definite Pronoun Resolution (DPR) (Rahman and Ng, 2012), FrameNet Plus (FN+) (Pavlick et al., 2015), and the GLUE benchmark's diagnostic test (Wang et al., 2019).", "We additionally consider the Quora Question Pairs (QQP) dataset, where the task is to determine whether two given questions are semantically matching (duplicate) or not.", "As in Gong et al. (2017), we interpret duplicate question pairs as an entailment relation and neutral otherwise.", "We use the same split ratio mentioned by Wang et al. (2017).", "Since the datasets considered have different label spaces, when evaluating on each target dataset, we map the model's labels to the corresponding target dataset's space.", "See Appendix D for more details.", "We strictly refrained from using any out-of-domain data when evaluating on the unbiased split of the same benchmark in Section 4.", "However, as shown by prior work (Belinkov et al., 2019a), since different NLI target datasets contain different amounts of the bias found in the large-scale NLI dataset, we need to adjust the amount of debiasing according to each target dataset.", "We consequently introduce a hyperparameter for PoE to modulate the strength of the bias-only model in ensembling.", "We follow prior work (Belinkov et al., 2019a) and perform model selection on the dev set of each target dataset Data CE DFL PoE SICK 57.05 57.91 +0.9 57.28 +0.2 ADD1 87.34 88.89 +1.5 87.86 +0.5 DPR 49.50 50.68 +1.2 50.14 +0.6 SPR 59.85 61.41 +1.6 62.45 +2.6 FN+ 53.16 54.77 +1.6 53.51 +0.4 JOCI 50.06 51.13 +1.1 50.85 +0.8 MPE 69.50 70.2 +0.7 70.1 +0.6 SCITAIL 67.64 69.33 +1.7 71.40 +3.8 GLUE 54.08 54.80 +0.7 54.71 +0.6 QQP 67.78 69.28 +1.5 68.61 +0.8 MNLI 74.40 73.58 -0.8 73.61 -0.8 MNLI-M 73.98 74.0 0.0 73.49 -0.5 Table 6: Accuracy results of models with BERT transferring to new target datasets.", "Results: Table 6 shows the results of the debiased models and baseline with BERT.", "As shown in prior work (Belinkov et al., 2019a), the MNLI datasets have very similar biases to SNLI, which the models are trained on, so we do not expect any improvement in the relative performance of our models and the baseline for MNLI and MNLI-M.", "On all the remaining datasets, our proposed models perform better than the baseline, showing a substantial improvement in generalization by using our debasing techniques.", "We additionally compare with Belinkov et al. (2019a) in Appendix D and show that our methods substantially surpass their results.", "4 Since the test sets are not available for MNLI, we tune on the matched dev set and evaluate on the mismatched dev set or vice versa.", "For GLUE, we tune on MNLI mismatched dev set.", "Analysis of Debiased Focal Loss: As expected, improving the out-of-domain performance could come at the expense of decreased in-domain performance since the removed biases are useful for performing the in-domain task.", "This happens especially for DFL, in which there is a trade-off between in-domain and out-of-domain performance that depends on the parameter , and when the baseline model is not very powerful like InferSent.", "To understand the impact of in DFL, we train an InferSent model using DFL for different values of on the SNLI dataset and evaluate its performance on SNLI test and SNLI hard sets.", "As illustrated in Figure 2, increasing increases debiasing and thus hurts in-domain accuracy on SNLI, but out-of-domain accuracy on the SNLI hard set is increased within a wide range of values (see a similar plot for BERT in Appendix E).", "Correlation Analysis: In contrast to Belinkov et al. (2019a), who encourage only the encoder to not capture the unwanted biases, our learning strategies influence the parameters of the full model to reduce the reliance on unwanted patterns more effectively.", "To test this assumption, in Figure 3, we report the correlation between the element-wise loss of the debiased models and the loss of a bias-only model on the considered datasets.", "The results show that compared to the baselines, our debiasing methods, DFL and PoE, reduce the correlation to the bias-only model, confirming that our models are effective at reducing biases.", "Interestingly, on MNLI, PoE has less correlation with the bias-only model than DFL and also has better performance on the unbiased split of this dataset.", "On the other hand, on the HANS dataset, DFL loss is less correlated with the bias-only model than PoE and also obtains higher performance on the HANS dataset.", "We propose two novel techniques, product-of-experts and debiased focal loss, to reduce biases learned by neural models, which are applicable whenever one can specify the biases in the form of one or more bias-only models.", "The bias-only models are designed to leverage biases and shortcuts in the datasets.", "Our debiasing strategies then work by adjusting the cross-entropy loss based on the performance of these bias-only models, to focus learning on the hard examples and downweight the importance of the biased examples.", "Additionally, we extend our methods to combat multiple bias patterns simultaneously.", "Our proposed debiasing techniques are model agnostic, simple, and highly effective.", "Extensive experiments show that our methods substantially improve the model robustness to domain-shift, including 9.8 points gain on FEVER symmetric test set, 7.4 on HANS dataset, and 4.8 points on SNLI hard set.", "Furthermore, we show that our debiasing techniques result in better generalization to other NLI datasets.", "Future work may include developing debiasing strategies that do not require prior knowledge of bias patterns and can automatically identify them.", "We would like to thank Daniel Andor and Suraj Srinivas for their helpful comments.", "We additionally would like to thank the authors of Schuster et al. (2019); Cadene et al. (2019); McCoy et al. (2019b); Belinkov et al. (2019a) for their support to reproduce their results.", "This research was supported by the Swiss National Science Foundation under the project Learning Representations of Abstraction for Opinion Summarization (LAOS), grant number FNS-30216.", "Y.B. was supported by the Harvard Mind, Brain, and Behavior Initiative." ]
[ "abstain", "objective", "abstain", "abstain", "method", "result", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "objective", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "other", "objective", "abstain", "other", "objective", "objective", "objective", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "method", "objective", "objective", "result", "result", "abstain", "other", "other", "other", "other" ]
[ "Multi-modal neural machine translation (NMT) aims to translate source sentences into a target language paired with images.", "However, dominant multi-modal NMT models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have potential to refine multi-modal representation learning.", "To deal with this issue, in this paper, we propose a novel graph-based multi-modal fusion encoder for NMT.", "Specifically, we first represent the input sentence and image using a unified multi-modal graph, which captures various semantic relationships between multi-modal semantic units (words and visual objects).", "We then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations.", "Finally, these representations provide an attention-based context vector for the decoder.", "We evaluate our proposed encoder on the Multi30K datasets.", "Experimental results and in-depth analysis show the superiority of our multi-modal NMT model.", "Multi-modal neural machine translation (NMT) (Huang et al., 2016; Calixto et al., 2017) has become an important research direction in machine translation, due to its research significance in multimodal deep learning and wide applications, such as translating multimedia news and web product information (Zhou et al., 2018).", "It significantly extends the conventional text-based machine translation by taking images as additional inputs.", "The assumption behind this is that the translation is expected to be more accurate compared to purely text-based (cid:3) This work is done when Yongjing Yin was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China.", "translation, since the visual context helps to resolve ambiguous multi-sense words (Ive et al., 2019).", "Apparently, how to fully exploit visual information is one of the core issues in multi-modal NMT, which directly impacts the model performance.", "To this end, a lot of efforts have been made, roughly consisting of: (1) encoding each input image into a global feature vector, which can be used to initialize different components of multi-modal NMT models, or as additional source tokens (Huang et al., 2016; Calixto et al., 2017), or to learn the joint multi-modal representation (Zhou et al., 2018; Calixto et al., 2019); (2) extracting object-based image features to initialize the model, or supplement source sequences, or generate attention-based visual context (Huang et al., 2016; Ive et al., 2019); and (3) representing each image as spatial features, which can be exploited as extra context (Calixto et al., 2017; Delbrouck and Dupont, 2017a; Ive et al., 2019), or a supplement to source semantics (Delbrouck and Dupont, 2017b) via an attention mechanism.", "Despite their success, the above studies do not fully exploit the fine-grained semantic correspondences between semantic units within an input sentence-image pair.", "For example, as shown in Figure 1, the noun phrase a toy car semantically corresponds to the blue dashed region.", "The neglect of this important clue may be due to two big challenges: 1) how to construct a unified representation to bridge the semantic gap between two different modalities, and 2) how to achieve semantic interactions based on the unified representation.", "However, we believe that such semantic correspondences can be exploited to refine multimodal representation learning, since they enable the representations within one modality to incorporate cross-modal information as supplement during multi-modal semantic interactions (Lee et al., 2018; Tan and Bansal, 2019).", "In this paper, we propose a novel graph-based multi-modal fusion encoder for NMT.", "We first represent the input sentence and image with a unified multi-modal graph.", "In this graph, each node indicates a semantic unit: textual word or visual object , and two types of edges are introduced to model semantic relationships between semantic units within the same modality ( intra-modal edges ) and semantic correspondences between semantic units of different modalities ( inter-modal edges ) respectively.", "Based on the graph, we then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions among the nodes to conduct graph encoding.", "Particularly, during this process, we distinguish the parameters of two modalities, and sequentially conduct intra-and inter-modal fusions to learn multi-modal node representations.", "Finally, these representations can be exploited by the decoder via an attention mechanism.", "Compared with previous models, ours is able to fully exploit semantic interactions among multimodal semantic units for NMT.", "Overall, the major contributions of our work are listed as follows: (cid:15) We propose a unified graph to represent the input sentence and image, where various semantic relationships between multi-modal semantic units can be captured for NMT.", "(cid:15)", "We propose a graph-based multi-modal fusion encoder to conduct graph encoding based on the above graph.", "To the best of our knowledge, our work is the first attempt to explore multimodal graph neural network (GNN) for NMT.", "(cid:15)", "We conduct extensive experiments on Multi30k datasets of two language pairs.", "Experimental results and in-depth analysis indicate that our encoder is effective to fuse multi-modal information for NMT.", "Particularly, our multi-modal NMT model significantly outperforms several competitive baselines.", "(cid:15)", "We release the code at https://github.com/ DeepLearnXMU/GMNMT.", "Our multi-modal NMT model is based on attentional encoder-decoder framework with maximizing the log likelihood of training data as the objective function.", "Essentially, our encoder can be regarded as a multimodal extension of GNN.", "To construct our encoder, we first represent the input sentence-image pair as a unified multi-modal graph.", "Then, based on this graph, we stack multiple multi-modal fusion layers to learn node representations, which provides the attention-based context vector to the decoder.", "In this section, we take the sentence and the image shown in Figure 1 as an example, and describe how to use a multi-modal graph to represent them.", "Formally, our graph is undirected and can be formalized as G =( V , E ), which is constructed as follows: In the node set V , each node represents either a textual word or a visual object.", "Specifically, we adopt the following strategies to construct these two kinds of nodes: (1) We include all words as separate textual nodes in order to fully exploit textual 3027 Multi-modal Graph Embedding Layer Cross-modal Gating Visual FFN Textual FFN Cross-modal Gating Intra-modal Fusion Inter-modal Fusion Target Inputs Embedding Layer Textual Self-Attention Visual Self-Attention Softmax Layer Target Outputs Self-Attention Encoder-DecoderAttention FFN Encoder Decoder Figure 2: The architecture of our NMT model with the graph-based multi-modal fusion encoder.", "information.", "For example, in Figure 1, the multimodal graph contains totally eight textual nodes, each of which corresponds to a word in the input sentence; (2) We employ the Stanford parser to identify all noun phrases in the input sentence, and then apply a visual grounding toolkit (Yang et al., 2019) to detect bounding boxes (visual objects) for each noun phrase.", "Subsequently, all detected visual objects are included as independent visual nodes .", "In this way, we can effectively reduce the negative impact of abundant unrelated visual objects.", "Let us revisit the example in Figure 1, where we can identify two noun phrases Two boys and a toy car from the input sentence, and then include three visual objects into the multi-modal graph.", "To capture various semantic relationships between multi-modal semantic units for NMT, we consider two kinds of edges in the edge set E : (1) Any two nodes in the same modality are connected by an intra-modal edge ; and (2) Each textual node representing any noun phrase and the corresponding visual node are connected by an inter-modal edge .", "Back to Figure 1, we can observe that all visual nodes are connected to each other, and all textual nodes are fully-connected.", "However, only nodes v o 1 and v x 1 , v o 1 and v x 2 , v o 2 and v x 1 , v o 2 and v x 2 , v o 3 and v x 6 , v o 3 and v x 7 , v o 3 and v x 8 are connected by inter-modal edges.", "Before inputting the multi-modal graph into the stacked fusion layers, we introduce an embedding", "layer to initialize the node states.", "Specifically, for each textual node v x i , we define its initial state H (0) x i as the sum of its word embedding and position encoding (Vaswani et al., 2017).", "To obtain the initial state H (0) o j of the visual node v o j , we first extract visual features from the fully-connected layer that follows the ROI pooling layer in Faster-RCNN (Ren et al., 2015), and then employ a multilayer perceptron with ReLU activation function to project these features onto the same space as textual representations.", "As shown in the left part of Figure 2, on the top of embedding layer, we stack L e graph-based multimodal fusion layers to encode the above-mentioned multi-modal graph.", "At each fusion layer, we sequentially conduct intraand inter-modal fusions to update all node states.", "In this way, the final node states encode both the context within the same modality and the cross-modal semantic information simultaneously.", "Particularly, since visual nodes and textual nodes are two types of semantic units containing the information of different modalities, we apply similar operations but with different parameters to model their state update process, respectively.", "Specifically, in the l -th fusion layer, both updates of textual node states H ( l ) x = f H ( l ) x i g and visual node states H ( l ) o = f H ( l ) o j g mainly involve the following steps: 3028 Step1: Intra-modal fusion .", "At this step, we employ self-attention to generate the contextual representation of each node by collecting the message from its neighbors of the same modality.", "Formally, the contextual representations C ( l ) x of all textual nodes are calculated as follows: 1 C ( l ) x = MultiHead ( H ( l (cid:0) 1) x ; H ( l (cid:0) 1) x ; H ( l (cid:0) 1) x ) ; (1) where MultiHead( Q , K , V ) is a multi-head self-attention function taking a query matrix Q , a key matrix K , and a value matrix V as inputs.", "Similarly, we generate the contextual representations C ( l ) o of all visual nodes as C ( l ) o = MultiHead ( H ( l (cid:0) 1) o ; H ( l (cid:0) 1) o ; H ( l (cid:0) 1) o ) : (2) In particular, since the initial representations of visual objects are extracted from deep CNNs, we apply a simplified multi-head self-attention to preserve the initial representations of visual objects, where the learned linear projects of values and final outputs are removed.", "Step2: Inter-modal fusion .", "Inspired by studies in multi-modal feature fusion (Teney et al., 2018; Kim et al., 2018), we apply a cross-modal gating mechanism with an element-wise operation to gather the semantic information of the cross-modal neighbours of each node.", "Concretely, we generate the representation M ( l ) x i of a text node v x i in the following way: M ( l ) x i = X j 2 A ( v xi ) (cid:11) i;j (cid:12) C ( l ) o j ; (3) (cid:11) i;j = Sigmoid ( W ( l ) 1 C ( l ) x i + W ( l ) 2 C ( l ) o j ) ; (4) where A ( v x i ) is the set of neighboring visual nodes of v x i , and W ( l ) 1 and W ( l ) 2 are parameter matrices.", "Likewise, we produce the representation M ( l ) o j of a visual node v o j as follows: M ( l ) o j = X i 2 A ( v oj ) (cid:12) j;i (cid:12) C ( l ) x i ; (5) (cid:12) j;i = Sigmoid ( W ( l ) 3 C ( l ) o j + W ( l ) 4 C ( l ) x i ) ; (6) where A ( v o j ) is the set of adjacent textual nodes of v o j , and W ( l ) 3 and W ( l ) 4 are also parameter matrices.", "The advantage is that the above fusion approach can better determine the degree of inter-modal fusion according to the contextual representations of 1 For simplicity, we omit the descriptions of layer normalization and residual connection.", "each modality.", "Finally, we adopt position-wise feed forward networks FFN ( (cid:3) ) to generate the textual node states H ( l ) x and visual node states H ( l ) o : H ( l ) x = FFN ( M ( l ) x ) ; (7) H ( l ) o = FFN ( M ( l ) o ) ; (8) where M ( l ) x = f M ( l ) x i g , M ( l ) o = f M ( l ) o j g denote the above updated representations of all textual nodes and visual nodes respectively.", "Our decoder is similar to the conventional Transformer decoder.", "Since visual information has been incorporated into all textual nodes via multiple graph-based multi-modal fusion layers, we allow the decoder to dynamically exploit the multi-modal context by only attending to textual node states.", "As shown in the right part of Figure 2, we follow Vaswani et al. (2017) to stack L d identical layers to generate target-side hidden states, where each layer l is composed of three sub-layers.", "Concretely, the first two sub-layers are a masked self-attention and an encoder-decoder attention to integrate target-and source-side contexts respectively: E ( l ) = MultiHead ( S ( l (cid:0) 1) ; S ( l (cid:0) 1) ; S ( l (cid:0) 1) ) ; (9) T ( l ) = MultiHead ( E ( l ) ; H ( L e ) x ; H ( L e ) x ) ; (10) where S ( l (cid:0) 1) denotes the target-side hidden states in the l 1 -th layer.", "In particular, S (0) are the embed-dings of input target words.", "Then, a position-wise fully-connected forward neural network is uesd to produce S ( l ) as follows: S ( l ) = FFN ( T ( l ) ) : (11) Finally, the probability distribution of generating the target sentence is defined by using a softmax layer, which takes the hidden states in the top layer as input: P ( Y j X; I ) = Y t Softmax ( WS ( L d ) t + b ) ; (12) where X is the input sentence, I is the input image, Y is the target sentence, and W and b are the parameters of the softmax layer.", "We carry out experiments on multi-modal English ) German (En ) De) and English ) French (En ) Fr) translation tasks.", "Datasets We use the Multi30K dataset (Elliott et al., 2016), where each image is paired with one English description and human translations into German and French.", "Training, validation and test sets contain 29,000, 1,014 and 1,000 instances respectively.", "In addition, we evaluate various models on the WMT17 test set and the ambiguous MSCOCO test set, which contain 1,000 and 461 instances respectively.", "Here, we directly use the preprocessed sentences 2 and segment words into subwords via byte pair encoding (Sennrich et al., 2016) with 10,000 merge operations.", "Visual Features We first apply the Stanford parser to identify noun phrases from each source sentence, and then employ the visual ground toolkit released by Yang et al. (2019) to detect associated visual objects of the identified noun phrases.", "For each phrase, we keep the visual object with the highest prediction probability, so as to reduce negative effects of abundant visual objects.", "In each sentence, the average numbers of objects and words are around 3.5 and 15.0 respectively.", "3 Finally, we compute 2,048-dimensional features for these objects with the pre-trained ResNet-100 Faster-RCNN (Ren et al., 2015).", "Settings We use Transformer (Vaswani et al., 2017) as our baseline.", "Since the size of training corpus is small and the trained model tends to be over-fitting, we first perform a small grid search to obtain a set of hyper-parameters on the En ) De validation set.", "Specifically, the word embedding dimension and hidden size are 128 and 256 respectively.", "The decoder has L d =4 layers 4 and the number of attention heads is", "4. The dropout is set to 0.5.", "Each batch consists of approximately 2,000 source and target tokens.", "We apply the Adam optimizer with a scheduled learning rate to optimize various models, and we use other same settings as (Vaswani et al., 2017).", "Finally, we use the metrics BLEU (Pa-pineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) to evaluate the quality of translations.", "Particularly, we run all models three times for each experiment and report the average results.", "2 http://www.statmt.org/wmt18/multimodal-task.html 3 There is no parsing failure for this dataset.", "If no noun is detected for a sentence, the object representations will be set to zero vectors and the model will degenerate to Transformer.", "4 The encoder of the text-based Transformer also has 4 layers.", "Baseline Models In addition to the text-based Transformer (Vaswani et al., 2017), we adapt several effective approaches to Transformer using our visual features, and compare our model with them 5 : (cid:15) ObjectAsToken(TF) (Huang et al., 2016).", "It is a variant of the Transformer, where all visual objects are regarded as extra source tokens and placed at the front of the input sentence.", "(cid:15)", "Enc-att(TF) (Delbrouck and Dupont, 2017b).", "An encoder-based image attention mechanism is incorporated into Transformer, which augments each source annotation with an attention-based visual feature vector.", "(cid:15)", "Doubly-att(TF) (Helcl et al., 2018).", "It is a doubly attentive Transformer.", "In each decoder layer, a cross-modal multi-head attention sublayer is inserted before the fully connected feed-forward layer to generate the visual context vector from visual features.", "We also display the performance of several dominant multi-modal NMT models such as Doubly-att(RNN) (Calixto et al., 2017), Soft-att(RNN) (Delbrouck and Dupont, 2017a), Stochastic-att(RNN) (Delbrouck and Dupont, 2017a), Fusion-conv(RNN) (Caglayan et al., 2017), Trg-mul(RNN) (Caglayan et al., 2017), VMMT(RNN) (Calixto et al., 2019) and Deliberation Network(TF) (Ive et al., 2019) on the same datasets.", "The number L e of multi-modal fusion layer is an important hyper-parameter that directly determines", "5 We use suffixes ( RNN ) and ( TF ) to represent RNN-and Transformer-style NMT models, respectively.", "the degree of fine-grained semantic fusion in our encoder.", "Thus, we first inspect its impact on the EN ) DE validation set.", "Figure 3 provides the experimental results using different L e and our model achieves the best performance when L e is", "3. Hence, we use L e =3 in all subsequent experiments.", "Table 1 shows the main results on the En ) De translation task.", "Ours outperforms most of the existing models and all baselines, and is comparable to Fusion-conv(RNN) and Trg-mul(RNN) on METEOR.", "The two results are from the state-of-the-art system on the WMT2017 test set, which is selected based on METEOR.", "Comparing the baseline models, we draw the following interesting conclusions: First , our model outperforms ObjectAsTo-ken(TF), which concatenates regional visual features with text to form attendable sequences and employs self-attention mechanism to conduct inter-modal fusion.", "The underlying reasons consist of two aspects: explicitly modeling semantic correspondences between semantic units of different modalities, and distinguishing model parameters for different modalities.", "Second , our model also significantly outperforms Enc-att(TF).", "Note that Enc-att(TF) can be considered as a single-layer semantic fusion encoder.", "In addition to the advantage of explicitly modeling semantic correspondences, we conjecture that multi-layer multi-modal semantic interactions are also beneficial to NMT.", "Third , compared with Doubly-att(TF) simply using an attention mechanism to exploit visual in-15 20 25 30 35 40 [5,10) [10,15) [15,20) [20,25) [25,...) BLEU Sentence Length TransformerObjectAsToken(TF)Enc-att(TF) Doubly-att(TF) Our model Figure 4: BLEU scores on different translation groups divided according to source sentence lengths.", "formation, our model achieves a significant improvement, because of sufficient multi-modal fusion in our encoder.", "Besides, we divide our test sets into different groups based on the lengths of source sentences and the numbers of noun phrases, and then compare the performance of different models in each group.", "Figures 4 and 5 report the BLEU scores on these groups.", "Overall, our model still consistently achieves the best performance in all groups.", "Thus, we confirm again the effectiveness and gen-3031 Model En ) De Test2016 Test2017 MSCOCO BLEU METEOR BLEU METEOR BLEU METEOR Our model 39.8 57.6 32.2 51.9 28.7 47.6 w/o inter-modal fusion 38.7 56.7 30.7 50.6 27.0 46.7 visual grounding ) fully-connected 36.4 53.4 28.3 47.0 24.4 42.9 different parameters ) unified parameters 39.2 57.3 31.9 51.4 27.7 47.4 w/ attending to visual nodes 39.6 57.3 32.0 51.3 27.9 46.8 attending to textual nodes ) attending to visual nodes 30.9 48.6 22.3 41.5 20.4 38.7 Table 2: Ablation study of our model on the EN ) DE translation task.", "erality of our proposed model.", "Note that in the sentences with more phrases, which are usually long sentences, the improvements of our model over baselines are more significant.", "We speculate that long sentences often contain more ambiguous words.", "Thus compared with short sentences, long sentences may require visual information to be better exploited as supplementary information, which can be achieved by the multi-modal semantic interaction of our model.", "We also show the training and decoding speed of our model and the baselines in Table", "4. During training, our model can process approximately 1.1K tokens per second, which is comparable to other multi-modal baselines.", "When it comes to decoding procedure, our model translates about 16.7 sentences per second and the speed drops slightly compared to Transformer.", "Moreover, our model only introduces a small number of extra parameters and achieves better performance.", "To investigate the effectiveness of different components, we further conduct experiments to compare our model with the following variants in Table 2:", "(1) w/o inter-modal fusion .", "In this variant, we apply two separate Transformer encoders to learn the semantic representations of words and visual objects, respectively, and then use the doubly-attentive decoder (Helcl et al., 2018) to incorporate textual and visual contexts into the decoder.", "The result in line 3 indicates that removing the inter-modal fusion leads to a significant performance drop.", "It suggests that semantic interactions among multi-modal semantic units are indeed useful for multi-modal representation learning.", "(2) visual grounding ) fully-connected .", "We make the words and visual objects fully-connected to establish the inter-modal correspondences.", "The result in line 4 shows that this change causes a significant performance decline.", "The underlying reason is the fully-connected semantic correspondences introduce much noise to our model.", "(3) different parameters ) unified parameters .", "When constructing this variant, we assign unified parameters to update node states in different modalities.", "Apparently, the performance drop reported in line 5 also demonstrates the validity of our ap-3032 proach using different parameters.", "(4) w/ attending to visual nodes .", "Different from our model attending to only textual nodes, we allow our decoder of this variant to consider both two types of nodes using doubly-attentive decoder.", "From line 6, we can observe that considering all nodes does not bring further improvement.", "The result confirms our previous assumption that visual information has been fully incorporated into textual nodes in our encoder.", "(5) attending to textual nodes ) attending to visual nodes .", "However, when only considering visual nodes, the model performance drops drastically (line 7).", "This is because the number of visual nodes is far fewer than that of textual nodes, which is unable to produce sufficient context for translation.", "Figure 6 displays the 1-best translations of a sampled test sentence generated by different models.", "The phrase a skateboarding ramp is not translated correctly by all baselines, while our model correctly translates it.", "This reveals that our encoder is able to learn more accurate representations.", "We also conduct experiments on the EN ) Fr dataset.", "From Table 3, our model still achieves better performance compared to all baselines, which demonstrates again that our model is effective and general to different language pairs in multi-modal NMT.", "Multi-modal NMT Huang et al. (2016) first incorporate global or regional visual features into attention-based NMT.", "Calixto and Liu (2017) also study the effects of incorporating global visual features into different NMT components.", "Elliott and K adar (2017) share an encoder between a translation model and an image prediction model to learn visually grounded representations.", "Besides, the most common practice is to use attention mechanisms to extract visual contexts for multimodal NMT (Caglayan et al., 2016; Calixto et al., 2017; Delbrouck and Dupont, 2017a,b; Barrault et al., 2018).", "Recently, Ive et al. (2019) propose a translate-and-refine approach and Calixto et al. (2019) employ a latent variable model to capture the multi-modal interactions for multi-modal NMT.", "Apart from model design, Elliott (2018) reveal that visual information seems to be ignored by the multimodal NMT models.", "Caglayan et al. (2019) conduct a systematic analysis and show that visual information can be better leveraged under limited textual context.", "Different from the above-mentioned studies, we first represent the input sentence-image pair as a unified graph, where various semantic relationships between multi-modal semantic units can be effectively captured for multi-modal NMT.", "Benefiting from the multi-modal graph, we further introduce an extended GNN to conduct graph encoding via multi-modal semantic interactions.", "Note that if we directly adapt the approach proposed by Huang et al. (2016) into Transformer, the model (ObjectAsToken(TF)) also involves multimodal fusion.", "However, ours is different from it in following aspects: (1) We first learn the contextual representation of each node within the same modality, so that it can better determine the degree of inter-modal fusion according to its own context.", "(2) We assign different encoding parameters to different modalities, which has been shown effective in our experiments.", "Additionally, the recent study LXMERT (Tan and Bansal, 2019) also models relationships between vision and language, which differs from ours in following aspects: (1) Tan and Bansal (2019) first apply two transformer encoders for two modalities, and then stack two cross-modality encoders to conduct multi-modal fusion.", "In contrast, we sequentially conduct self-attention and cross-modal gating at each layer.", "(2) Tan and Bansal (2019) leverage an attention mechanism to implicitly establish cross-modal relationships via large-scale pretraining, while we utilize visual grounding to capture explicit cross-modal correspondences.", "(3) We focus on multi-modal NMT rather than vision-and-language reasoning in (Tan and Bansal, 2019).", "Graph Neural Networks Recently, GNNs (Marco Gori and Scarselli, 2005) including gated graph neural network (Li et al., 2016), graph convolutional network (Duvenaud et al., 2015; Kipf and Welling, 2017) and graph attention network (Velickovic et al., 2018) have been shown effective in many tasks such as VQA (Teney et al., 2017; Norcliffe-Brown et al., 2018; Li et al., 2019), text generation (Gildea et al., 2018; Becky et al., 2018; Song et al., 2018b, 2019) and text representation (Zhang et al., 2018; Yin et al., 2019; Song et al., 3033 Source : A boy riding a skateboard on a skateboarding ramp .", "In this work, we mainly focus on how to extend GNN to fuse multi-modal information in NMT.", "Close to our work, Teney et al. (2017) introduce GNN for VQA.", "The main difference between their work and ours is that they build an individual graph for each modality, while we use a unified multimodal graph.", "In this paper, we have proposed a novel graph-based multi-modal fusion encoder, which exploits various semantic relationships between multimodal semantic units for NMT.", "Experiment results and analysis on the Multi30K dataset demonstrate the effectiveness of our model.", "In the future, we plan to incorporate attributes of visual objects and dependency trees to enrich the multi-modal graphs.", "Besides, how to introduce scene graphs into multi-modal NMT is a worthy problem to explore.", "Finally, we will apply our model into other multi-modal tasks such as multimodal sentiment analysis.", "This work was supported by the Beijing Advanced Innovation Center for Language Resources (No. TYR17002), the National Natural Science Foundation of China (No. 61672440), and the Scientific Research Project of National Language Committee of China (No. YB135-49)." ]
[ "abstain", "abstain", "objective", "objective", "method", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "method", "result", "result", "objective", "other", "objective", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "objective", "abstain", "other", "objective", "method", "method", "other", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "method", "other" ]
[ "This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding.", "At 433k examples, this resource is one of the largest corpora available for natural language inference (a.k.a. recognizing textual entailment ), improving upon available resources in both its coverage and difficulty.", "MultiNLI accomplishes this by offering data from ten distinct genres of written and spoken English, making it possible to evaluate systems on nearly the full complexity of the language, while supplying an explicit setting for evaluating cross-genre domain adaptation.", "In addition, an evaluation using existing machine learning models designed for the Stanford NLI corpus shows that it represents a substantially more difficult task than does that corpus, despite the two showing similar levels of inter-annotator agreement.", "Many of the most actively studied problems in NLP, including question answering, translation, and dialog, depend in large part on natural language understanding (NLU) for success.", "While there has been a great deal of work that uses representation learning techniques to pursue progress on these applied NLU problems directly, in order for a representation learning model to fully succeed at one of these problems, it must simultaneously succeed both at NLU, and at one or more additional hard machine learning problems like structured prediction or memory access.", "This makes it difficult to accurately judge the degree to which current models extract reasonable representations of language meaning in these settings.", "The task of natural language inference (NLI) is well positioned to serve as a benchmark task for research on NLU.", "In this task, also known as recognizing textual entailment (Cooper et al., 1996; Fyodorov et al., 2000; Condoravdi et al., 2003; Bos and Markert, 2005; Dagan et al., 2006; MacCartney and Manning, 2009), a model is presented with a pair of sentenceslike one of those in Figure 1and asked to judge the relationship between their meanings by picking a label from a small set: typically ENTAILMENT , NEUTRAL , and CONTRADICTION .", "Succeeding at NLI does not require a system to solve any difficult machine learning problems except, crucially, that of extracting effective and thorough representations for the meanings of sentences (i.e., their lexical and compositional semantics).", "In particular, a model must handle phenomena like lexical entailment, quantification, coreference, tense, belief, modality, and lexical and syntactic ambiguity.", "As the only large human-annotated corpus for NLI currently available, the Stanford NLI Corpus (SNLI; Bowman et al., 2015) has enabled a good deal of progress on NLU, serving as a major benchmark for machine learning work on sentence understanding and spurring work on core representation learning techniques for NLU, such as attention (Wang and Jiang, 2016; Parikh et al., 2016), memory (Munkhdalai and Yu, 2017), and the use of parse structure (Mou et al., 2016b; Bowman et al., 2016; Chen et al., 2017).", "However, SNLI falls short of providing a sufficient testing ground for machine learning models in two ways.", "First, the sentences in SNLI are derived from only a single text genreimage captionsand are thus limited to descriptions of concrete visual scenes, rendering the hypothesis sentences used to describe these scenes short and simple, and rendering many important phenomenalike temporal reasoning (e.g., yesterday ), belief (e.g., know ), and modality (e.g., should )rare enough to be irrelevant to task performance.", "Second, because of these issues, SNLI is not sufficiently demanding to serve as an effective benchmark for NLU, with the best current model performance falling within a few percentage points of human accuracy and limited room left for fine-grained comparisons between strong models.", "This paper introduces a new challenge dataset, the Multi-Genre NLI Corpus (MultiNLI), whose chief purpose is to remedy these limitations by making it possible to run large-scale NLI evaluations that capture more of the complexity of modern English.", "While its size (433k pairs) and mode of collection are modeled closely on SNLI, unlike that corpus, MultiNLI represents both written and spoken speech in a wide range of styles, degrees of formality, and topics.", "Our chief motivation in creating this corpus is to provide a benchmark for ambitious machine learning research on the core problems of NLU, but we are additionally interested in constructing a corpus that facilitates work on domain adaptation and cross-domain transfer learning.", "These techniqueswhich use labeled training data for a source domain, and aim to train a model that performs well on test data from a target domain with a different distributionhave resulted in gains across many tasks (Daume III and Marcu, 2006; Ben-David et al., 2007), including sequence and part-of-speech tagging (Blitzer et al., 2006; Peng and Dredze, 2017).", "Moreover, in application areas outside NLU, artificial neural network techniques have made it possible to train general-purpose feature extractors that, with no or minimal retraining, can extract useful features for a variety of styles of data (Krizhevsky et al., 2012; Zeiler and Fergus, 2014; Donahue et al., 2014).", "However, attempts to bring this kind of general purpose representation learning to NLU have seen only very limited success (see, for example, Mou et al., 2016a).", "Nearly all successful applications of representation learning to NLU have involved models that are trained on data closely resembling the target evaluation data in both task and style.", "This fact limits the usefulness of these tools for problems involving styles of language not represented in large annotated training sets.", "With this in mind, we construct MultiNLI so as to make it possible to explicitly evaluate models both on the quality of their sentence representations within the training domain and on their ability to derive reasonable representations in unfamiliar domains.", "The corpus is derived from ten different genres of written and spoken English, which are collectively meant to approximate the full diversity of ways in which modern standard 1113 This task will involve reading a line from a non-fiction article and writing three sentences that relate to it.", "The line will describe a situation or event.", "Using only this description and what you know about the world: Write one sentence that is definitely correct about the situation or event in the line.", "Write one sentence that might be correct about the situation or event in the line.", "Write one sentence that is definitely incorrect about the situation or event in the line.", "American English is used.", "All of the genres appear in the test and development sets, but only five are included in the training set.", "Models thus can be evaluated on both the matched test examples, which are derived from the same sources as those in the training set, and on the mismatched examples, which do not closely resemble any of those seen at training time.", "The data collection methodology for MultiNLI is similar to that of SNLI: We create each sentence pair by selecting a premise sentence from a preexisting text source and asking a human annotator to compose a novel sentence to pair with it as a hypothesis.", "This section discusses the sources of our premise sentences, our collection method for hypotheses, and our validation (relabeling) strategy.", "Premise Text Sources The MultiNLI premise sentences are derived from ten sources of freely available text which are meant to be maximally diverse and roughly represent the full range of American English.", "We selected nine sources from the second release of the Open American National Corpus (OANC; Fillmore et al., 1998; Macleod et al., 2000; Ide and Macleod, 2001; Ide and Su-derman, 2006, downloaded 12/2016 1 ), balancing the volume of source text roughly evenly across genres, and avoiding genres with content that would be too difficult for untrained annotators.", "and Conversation Collection of two-sided, in-person conversations that took place in the early 2000s (FACE-TO-FACE ); reports, speeches, letters, and press releases from public domain government websites (GOVERNMENT ); letters from the Indiana Center for Intercultural Communication of Philanthropic Fundraising Discourse written in the late 1990searly 2000s (LETTERS ); the public report from the National Commission on Terrorist Attacks Upon the United States released on July 22, 2004 2 (9/11); five non-fiction works on the textile industry and child development published by the Oxford University Press (OUP); popular culture articles from the archives of Slate Magazine (SLATE ) written between 19962000; transcriptions from University of Pennsylvania's Linguistic Data Consortium Switchboard corpus of two-sided, telephone conversations that took place in 1990 or 1991 (TELEPHONE ); travel guides published by Berlitz Publishing in the early 2000s (TRAVEL ); and short posts about linguistics for non-specialists from the Verbatim archives written between 1990 and 1996 (VERBATIM ).", "For our tenth genre, FICTION , we compile several freely available works of contemporary fiction written between 1912 and 2010, spanning various genres, including mystery ( The Mysterious Affair at Styles , 3 Christie, 1921; The Secret Adversary , 4 Christie, 1922; Murder in the Gun Room , 5 Piper, 1953), humor ( Password Incorrect , 6 Name, 2008), western ( Rebel Spurs , 7 Norton, 1962), science fiction ( Seven Swords , 8 Shea, 2008; Living History , 9 Essex, 2016; The Sky Is Falling , 10 Del Rey, 1973; Youth , 11 Asimov, May 1952), and adventure ( Captain Blood , 12 Sabatini, 1922).", "We construct premise sentences from these ten source texts with minimal preprocessing; unique the sentences within genres, exclude very short 2 https://9-11commission.gov/ 3 gutenberg.org/files/863/863-0.txt 4 gutenberg.org/files/1155/1155-0.txt 5 gutenberg.org/files/17866/17866.txt 6 http://manybooks.net/pages/ namenother09password_incorrect/0.html 7 gutenberg.org/files/20840/20840-0.txt 8 http://mikeshea.net/stories/seven_ swords.html , shared with the author's permission.", "9 manybooks.net/pages/ essexbother10living_history/0.html 10 gutenberg.org/cache/epub/18768/ pg18768.txt 11 gutenberg.org/cache/epub/31547/ pg31547.txt 12 gutenberg.org/files/1965/1965-0.txt 1114 sentences (under eight characters), and manually remove certain types of non-narrative writing, such as mathematical formulae, bibliographic references, and lists.", "Although SNLI is collected in largely the same way as MultiNLI, and is also permissively licensed, we do not include SNLI in the MultiNLI corpus distribution.", "SNLI can be appended and treated as an unusually large additional CAPTIONS genre, built on image captions from the Flickr30k corpus (Young et al., 2014).", "Hypothesis Collection To collect a sentence pair, we present a crowdworker with a sentence from a source text and ask them to compose three novel sentences (the hypotheses): one which is necessarily true or appropriate whenever the premise is true (paired with the premise and labeled ENTAILMENT ), one which is necessarily false or inappropriate whenever the premise is true ( CONTRADICTION ), and one where neither condition applies ( NEUTRAL ).", "This method of data collection ensures that the three classes will be represented equally in the raw corpus.", "The prompts that surround each premise sentence during hypothesis collection are slightly tailored to fit the genre of that premise sentence.", "We pilot these prompts prior to data collection to ensure that the instructions are clear and that they yield hypothesis sentences that fit the intended meanings of the three classes.", "There are five unique prompts in total: one for written non-fiction genres (SLATE , OUP, GOVERNMENT , VERBATIM , TRAVEL ; Figure 1), one for spoken genres (TELEPHONE , FACE-TO-FACE ), one for each of the less formal written genres (FICTION , LETTERS ), and a specialized one for 9/11, tailored to fit its potentially emotional content.", "Each prompt is accompanied by example premises and hypothesis that are specific to each genre.", "Below the instructions, we present three text fieldsone for each labelfollowed by a field for reporting issues, and a link to the frequently asked questions (FAQ) page.", "We provide one FAQ page per prompt.", "FAQs are modeled on their SNLI counterparts (supplied by the authors of that work) and include additional curated examples, answers to genre-specific questions arising from our pilot phase, and information about logistical concerns like payment.", "For both hypothesis collection and validation, we present prompts to annotators using Hybrid Statistic SNLI MultiNLI Pairs w/ unanimous gold label 58.3% 58.2% Individual label = gold label 89.0% 88.7% Individual label = author's label 85.8% 85.2% Gold label = author's label 91.2% 92.6% Gold label 6 = author's label 6.8% 5.6% No gold label (no 3 labels match) 2.0% 1.8% Table 2: Key validation statistics for SNLI (copied from Bowman et al., 2015) and MultiNLI.", "( gethybrid.io ), a crowdsoucring platform similar to the Amazon Mechanical Turk platform used for SNLI.", "We used this platform to hire an organized group of workers.", "387 annotators contributed through this group, and at no point was any identifying information about them, including demographic information, available to the authors.", "Validation We perform an additional round of annotation on test and development examples to ensure accurate labelling.", "The validation phase follows the same procedure used for SICK (Marelli et al., 2014b) and SNLI: Workers are presented with pairs of sentences and asked to supply a single label ( ENTAILMENT , CONTRADICTION , NEUTRAL ) for the pair.", "Each pair is relabeled by four workers, yielding a total of five labels per example.", "Validation instructions are tailored by genre, based on the main data collection prompt (Figure 1); a single FAQ, modeled after the validation FAQ from SNLI, is provided for reference.", "In order to encourage thoughtful labeling, we manually label one percent of the validation examples and offer a $1 bonus each time a worker selects a label that matches ours.", "For each validated sentence pair, we assign a gold label representing a majority vote between the initial label assigned to the pair by the original annotator, and the four additional labels assigned by validation annotators.", "A small number of examples did not receive a three-vote consensus on any one label.", "These examples are included in the distributed corpus, but are marked with ' in the gold label field, and should not be used in standard evaluations.", "Table 2 shows summary statistics capturing the results of validation, alongside corresponding figures for SNLI.", "These statistics indicate that the labels included in MultiNLI are about as reliable as those included in SNLI, despite MultiNLI's more diverse text contents.", "Table 1 shows randomly chosen development set examples from the collected corpus.", "Hypotheses tend to be fluent and correctly spelled, though not all are complete sentences.", "Punctuation is often omitted.", "Hypotheses can rely heavily on knowledge about the world, and often don't correspond closely with their premises in syntactic structure.", "Unlabeled test data is available on Kaggle for both matched and mismatched sets as competitions that will be open indefinitely; Evaluations on a subset of the test set have previously been conducted with different leaderboards through the RepEval 2017 Workshop (Nangia et al., 2017).", "The corpus is available in two formatstab separated text and JSON Lines ( jsonl ), following SNLI.", "For each example, premise and hypothesis strings, unique identifiers for the pair and prompt, and the following additional fields are specified: gold label : label used for classification.", "In examples rejected during the validation process, the value of this field will be '.", "sentence { 1,2 } parse : Each sentence as parsed by the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003).", "sentence { 1,2 } binary parse : parses in unlabeled binary-branching format.", "label[1] : The label assigned during the creation of the sentence pair.", "In rare cases this may be different from gold label , if a consensus of annotators chose a different label during the validation phase.", "label[2...5] : The four labels assigned during validation by individual annotators to each development and test example.", "These fields will be empty for training examples.", "The current version of the corpus is freely available at nyu.edu/projects/bowman/multinli/ for typical machine learning uses, and may be modified and redistributed.", "The majority of the corpus is released under the OANC's license, which allows all content to be freely used, modified, and shared under permissive terms.", "The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere).", "Partition The distributed corpus comes with an explicit train/test/development split.", "The test and development sets contain 2,000 randomly selected examples each from each of the genres, resulting in a total of 20,000 examples per set.", "No premise sentence occurs in more than one set.", "Statistics Table 3 shows some additional statistics.", "Premise sentences in MultiNLI tend to be longer (max 401 words, mean 22.3 words) than their hypotheses (max 70 words, mean 11.4 words), and much longer, on average, than premises in SNLI (mean 14.1 words); premises in MultiNLI also tend to be parsed as complete sentences at a much higher rate on average (91%) than their SNLI counterparts (74%).", "We observe that the two spoken genres differ in thiswith FACE-TO-FACE showing more complete sentences (91%) than TELEPHONE (71%)and speculate that the lack of visual feedback in a telephone setting may result in a high incidence of interrupted or otherwise incomplete sentences.", "Hypothesis sentences in MultiNLI generally cannot be derived from their premise sentences using only trivial editing strategies.", "While 2 .", "5 % of the hypotheses in SNLI differ from their premises by deletion, only 0 .", "9 % of those in MultiNLI (170 examples total) are constructed in this way.", "Similarly, in SNLI, 1 .", "6 % of hypotheses differ from their premises by addition, substitution, or shuf-fling a single word, while in MultiNLI this only happens in 1 .", "2 % of examples.", "The percentage of hypothesis-premise pairs with high token overlap ( > 37%) was comparable between MultiNLI (30% of pairs) and SNLI (29%).", "These statistics suggest that MultiNLI's annotations are comparable in quality to those of SNLI.", "To test the difficulty of the corpus, we experiment with three neural network models.", "The first is a simple continuous bag of words (CBOW) model in which each sentence is represented as the sum of the embedding representations of its words.", "The second computes representations by averaging the states of a bidirectional LSTM RNN (BiL-STM; Hochreiter and Schmidhuber, 1997) over words.", "For the third, we implement and evaluate Chen et", "al.'s Enhanced Sequential Inference Model (ESIM), which is roughly tied for the state of the art on SNLI at the time of writing.", "We use the base ESIM without ensembling with a TreeL-STM (as in the HIM' runs in that work).", "The first two models produce separate vector representations for each sentence and compute label predictions for pairs of representations.", "To do this, they concatenate the representations for premise and hypothesis, their difference, and their element-wise product, following Mou et al. (2016b), and pass the result to a single tanh layer followed by a three-way softmax classifier.", "All models are initialized with 300D reference GloVe vectors (840B token version; Pennington et al., 2014).", "Out-of-vocabulary (OOV) words are initialized randomly and word embeddings are fine-tuned during training.", "The models use 300D hidden states, as in most prior work on SNLI.", "We use Dropout (Srivastava et al., 2014) for regularization.", "For ESIM, we use a dropout rate of 0.5, following the paper.", "For CBOW and BiLSTM models, we tune Dropout on the SNLI development set and find that a drop rate of 0.1 works well.", "We use the Adam (Kingma and Ba, 2015) optimizer with default parameters.", "Code is available at github.com/nyu-mll/multiNLI/ .", "We train models on SNLI, MultiNLI, and a mixture; Table 4 shows the results.", "In the mixed setting, we use the full MultiNLI training set and randomly select 15% of the SNLI training set at each epoch, ensuring that each available genre is seen during training with roughly equal frequency.", "We also train a separate CBOW model on each individual genre to establish the degree to which simple models already allow for effective transfer across genres, using a dropout rate of 0.2.", "When training on SNLI, a single random sample of 15% of the original training set is used.", "For each genre represented in the training set, the model that performs best on it was trained on that genre; a model trained only on SNLI performs worse on every genre than comparable models trained on any genre from MultiNLI.", "Models trained on a single genre from MultiNLI perform well on similar genres; for example, the model trained on TELEPHONE attains the best accuracy (63%) on FACE-TO-FACE , which was nearly one point better than it received on itself.", "SLATE seems to be a difficult and relatively unusual genre and performance on it is relatively poor in this setting; when averaging over runs trained on SNLI and all genres in the matched section of the training set, average performance on SLATE was only 57.5%.", "Sentences in SLATE cover a wide range of topics and phenomena, making it hard to do well on, but also forcing models trained on it be broadly capable; the model trained on SLATE achieves the highest accuracy of any model on 9/11 (55.6%) and VERBATIM (57.2%), and relatively high accuracy on TRAVEL (57.4%) and GOVERNMENT (58.3%).", "We also observe that our models perform similarly on both the matched and mismatched test sets of MultiNLI.", "We expect genre mismatch issues to become more conspicuous as models are developed that can better fit MultiNLI's training genres.", "To evaluate the contribution of sentence length to corpus difficulty, we binned premises and hypotheses by length in 25-word increments for premises and 10-word increments for hypotheses.", "Using the ESIM model, our strong baseline, we find a small effect (stronger for matched than mismatched) of premise length on model accuracy: accuracy decreases slightly as premise sentences increase in length.", "We find no effect of hypothesis length on accuracy.", "In data collection for NLI, different annotator decisions about the coreference between entities and events across the two sentences in a pair can lead to very different assignments of pairs to labels (de Marneffe et al., 2008; Marelli et al., 2014a; Bowman et al., 2015).", "Drawing an example from Bowman et al., the pair a boat sank in the Pacific Ocean and a boat sank in the Atlantic Ocean can be labeled either CONTRADICTION or NEUTRAL depending on (among other things) whether the two mentions of boats are assumed to refer to the same entity in the world.", "This uncertainty can present a serious problem for inter-annotator agreement, since it is not clear that it is possible to define an explicit set of rules around coreference that would be easily intelligible to an untrained annotator (or any non-expert).", "Bowman et al. attempt to avoid this problem by using an annotation prompt that is highly dependent on the concreteness of image descriptions; but, as we engage with the much more abstract writing that is found in, for example, government documents, there is no reason to assume a priori that any similar prompt and annotation strategy can work.", "We are surprised to find that this is not a major issue.", "Through a relatively straightforward trial-and-error piloting phase, followed by discussion with our annotators, we manage to design prompts for abstract genres that yield high inter-annotator agreement scores nearly identical to those of SNLI (see Table 2).", "These high scores suggest that our annotators agreed on a single task definition, and were able to apply it consistently across genres.", "As expected, both the increase in the diversity of linguistic phenomena in MultiNLI and its longer average sentence length conspire to make MultiNLI dramatically more difficult than SNLI.", "Our three baseline models perform better on SNLI than MultiNLI by about 15% when trained on the respective datasets.", "All three models achieve accuracy above 80% on the SNLI test set when trained only on SNLI.", "However, when trained on MultiNLI, only ESIM surpasses 70% accuracy on MultiNLI's test sets.", "When we train models on MultiNLI and downsampled SNLI, we see an expected significant improvement on SNLI, but no significant change in performance on the MultiNLI test sets, suggesting including SNLI in training doesn't drive substantial improvement.", "These results attest to MultiNLI's difficulty, and with its relatively high inter-annotator agreement, suggest that it presents a problem with substantial headroom for future work.", "To better understand the types of language understanding skills that MultiNLI tests, we analyze the collected corpus using a set of annotation tags chosen to reflect linguistic phenomena which are known to be potentially difficult.", "We use two methods to assign tags to sentences.", "First, we use the Penn Treebank (PTB; Marcus et al., 1993) part-of-speech tag set (via the included Stanford Parser parses) to automatically isolate sentences 1118 Dev.", "containing a range of easily-identified phenomena like comparatives.", "Second, we isolate sentences that contain hand-chosen key words indicative of additional interesting phenomena.", "The hand-chosen tag set covers the following phenomena: QUANTIFIERS contains single words with quantificational force (see, for example, Heim and Kratzer, 1998; Szabolcsi, 2010, e.g., many, all, few, some ); BELIEFVERBS contains sentence-embedding verbs denoting mental states (e.g., know, believe, think ), including irregular past tense forms; TIME TERMS contains single words with abstract temporal interpretation, (e.g., then, today ) and month names and days of the week; DISCOURSE MARKERS contains words that facilitate discourse coherence (e.g., yet, however, but, thus, despite ); PRESUPPOSITIONTRIGGERS contains words with lexical presuppositions (Stal-naker, 1974; Schlenker, 2016, e.g., again, too, anymore 13 ); CONDITIONALS contains the word if .", "Table 5 presents the frequency of the tags in SNLI and MultiNLI, and model accuracy on MultiNLI (trained only on MultiNLI).", "The incidence of tags varies by genre; the percentage of sentence pairs containing a particular annotation tag differs by a maximum over 30% across genres.", "Sentence pairs containing pronouns are predictably common for all genres, with 93% of Government and Face-to-face pairs including at 13 Because their high frequency in the corpus, extremely common triggers like the were excluded from this tag.", "least one.", "The Telephone genre has the highest percentage of sentence pairs containing one occurrence of negation, WH-words, belief -verbs and time terms, Verbatim has the highest percentage of pairs containing quantifiers and conversational pivots, and Letters has the highest percentage of pairs that contain one or more modals.", "Pairs containing comparatives and/or superlatives, which is the tag that our baseline models perform worst on, are most common in the Oxford University Press genre.", "Based on this, we conclude that the genres are sufficiently different, because they are not uniform with respect to the percentages of sentence pairs that contain each of the annotation tags.", "The distributions of labels within each tagged subset of the corpus roughly mirrors the balanced overall distribution.", "The most frequent class overall (in this case, ENTAILMENT ) occurs with a frequency of roughly one third (see Table", "4) in most.", "Only two annotation tags differ from the baseline percentage of the most frequent class in the corpus by at least 5%: sentences containing negation, and sentences exceeding 20 words.", "Sentences that contain negation are slightly more likely than average to be labeled CONTRADICTION , reflecting a similar finding in SNLI, while long sentences are slightly more likely to be labeled ENTAILMENT .", "None of the baseline models perform substantially better on any tagged set than they do on the corpus overall, with average model accuracies on sentences containing specific tags falling within 1119 about 3 points of overall averages.", "Using baseline model test accuracy overall as a metric (see Table 4), our baseline models had the most trouble on sentences containing comparatives or superlatives (losing 3-4 points each).", "Despite the fact that 17% of sentence pairs in the corpus contained at least one instance of comparative or superlative, our baseline models don't utilize the information present in these sentences to predict the correct label for the pair, although presence of a comparative or superlative is slightly more predictive of a NEUTRAL label.", "Moreover, the baseline models perform below average on discourse markers, such as despite and however , losing roughly 2 to 3 points each.", "Unsurprisingly, the attention-based ESIM model performs better than the other two on sentences with greater than 20 words.", "Additionally, our baseline models do show slight improvements in accuracy on negation, suggesting that they may be tracking it as a predictor of CONTRADICTION .", "Natural language inference makes it easy to judge the degree to which neural network models for sentence understanding capture the full meanings for natural language sentences.", "Existing NLI datasets like SNLI have facilitated substantial advances in modeling, but have limited headroom and coverage of the full diversity of meanings expressed in English.", "This paper presents a new dataset that offers dramatically greater linguistic difficulty and diversity, and also serves as a benchmark for cross-genre domain adaptation.", "Our new corpus, MultiNLI, improves upon SNLI in its empirical coveragebecause it includes a representative sample of text and speech from ten different genres, as opposed to just simple image captionsand its difficulty, containing a much higher percentage of sentences tagged with one or more elements from our tag set of thirteen difficult linguistic phenomena.", "This greater diversity is reflected in the dramatically lower baseline model performance on MultiNLI than on SNLI (see Table", "5) and comparable inter-annotator agreement, suggesting that MultiNLI has a lot of headroom remaining for future work.", "The MultiNLI corpus was first released in draft form in the first half of 2017, and in the time since its initial release, work by others (Conneau et al., 2017) has shown that NLI can also be an effective source task for pre-training and transfer learning in the context of sentence-to-vector models, with models trained on SNLI and MultiNLI substantially outperforming all prior models on a suite of established transfer learning benchmarks.", "We hope that this corpus will continue to serve for many years as a resource for the development and evaluation of methods for sentence understanding.", "This work was made possible by a Google Faculty Research Award.", "SB also gratefully acknowledges support from Tencent Holdings and Samsung Research.", "We also thank George Dahl, the organizers of the RepEval 2016 and RepEval 2017 workshops, Andrew Drozdov, Angeliki Lazaridou, and our other NYU colleagues for help and advice." ]
[ "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "other", "other", "other" ]
[ "Intelligent features in email service applications aim to increase productivity by helping people organize their folders, compose their emails and respond to pending tasks.", "In this work, we explore a new application, Smart-To-Do, that helps users with task management over emails.", "We introduce a new task and dataset for automatically generating To-Do items from emails where the sender has promised to perform an action.", "We design a two-stage process leveraging recent advances in neural text generation and sequence-to-sequence learning, obtaining BLEU and ROUGE scores of 0 .", "23 and 0 .", "63 for this task.", "To the best of our knowledge, this is the first work to address the problem of composing To-Do items from emails.", "Email is one of the most used forms of communication especially in enterprise and work settings (Radicati and Levenstein, 2015).", "With the growing number of users in email platforms, service providers are constantly seeking to improve user experience for a myriad of applications such as online retail, instant messaging and event management (Feddern-Bekcan, 2008).", "Smart Reply (Kan-nan et al., 2016) and Smart Compose (Chen et al., 2019) are two recent features that provide contextual assistance to users aiming to reduce typing efforts.", "Another line of work in this direction is for automated task management and scheduling.", "For example.", "the recent Nudge feature 1 in Gmail and Insights in Outlook 2 are designed to remind users to follow-up on an email or pay attention to pending tasks.", "Smart To-Do takes a step further in task assistance and seeks to boost user productivity by automatically generating To-Do items from their email Work done as an intern at Microsoft Research.", "1 Gmail Nudge 2 Outlook Insights From: Alice To: john@contoso.com Subject: Sales Report Hi John, From: John To: alice@contoso.com Subject: RE: Sales Report I am doing well.", "context.", "Text generation from emails, like creating To-Do items, is replete with complexities due to the diversity of conversations in email threads, heterogeneous structure of emails and various meta-deta involved.", "As opposed to prior works in text generation like news headlines, email subject lines and email conversation summarization, To-Do items are action-focused , requiring the identification of a specific task to be performed.", "In this work, we introduce the task of automatically generating To-Do items from email context and meta-data to assist users with following up on their promised actions (also referred to as commitments in this work).", "Refer to Figure 1 for an illustration.", "Given an email, its temporal context (i.e. thread), and associated meta-data like the name of the sender and recipient, we want to generate a short and succinct To-Do item for the task mentioned in the email.", "This requires identifying the task sentence (also referred to as a query ), relevant sentences in the email that provide contextual information about the query along with the entities (e.g., people) associated with the task.", "We utilize existing work to identify the task sentence via a commitment classifier that detects action intents in the emails.", "Thereafter C Commitment Classifier D Does the email contain commitment ?", "we use an unsupervised technique to extract key sentences in the email that are helpful in providing contextual information about the query.", "These pieces of information are further combined to generate the To-Do item using a sequence-to-sequence architecture with deep neural networks.", "Figure 2 shows a schematic diagram of the process.", "Since there is no existing work or dataset on this problem, our first step is to collect annotated data for this task.", "Overall, our contributions can be summarized as follows: We create a new dataset for To-Do item generation from emails containing action items based on the publicly available email corpus Avocado (Oard et al., 2015).", "3 We develop a two-stage algorithm, based on unsupervised task-focused content selection and subsequent text generation combining contextual information and email meta-data.", "We conduct experiments on this new dataset and show that our model performs at par with human judgments on multiple performance metrics.", "Summarization of email threads has been the focus of multiple research works in the past (Rambow et al., 2004; Carenini et al., 2007; Dredze et al., 2008).", "There has also been considerable research on identifying speech acts or tasks in emails (Car-valho and Cohen, 2005; Lampert et al., 2010; Scerri et al., 2010) and how it can be robustly adapted across diverse email corpora (Azarbonyad et al., 2019).", "Recently, novel neural architectures have been explored for modeling action items in emails 3 We will release the code and data (in accordance with LDC and Avocado policy) at https://aka.ms/SmartToDo .", "Email examples in this paper are similar to those in our dataset but are not reproducing text from the Avocado dataset.", "(Lin et al., 2018) and identifying intents in email conversations (Wang et al., 2019).", "However, there has been less focus on task-specific email summarization (Corston-Oliver et al., 2004).", "The closest to our work is that of email subject line generation (Zhang and Tetreault, 2019).", "But it focuses on a common email theme and uses a supervised approach for sentence selection, whereas our method relies on identifying the task-related context.", "We build upon the Avocado dataset (Oard et al., 2015) 4 containing an anonymized version of the Outlook mailbox for 279 employees with various meta-data and 938 , 035 emails overall.", "Emails contain various user intents including planning and scheduling meetings, requests for information, exchange of information, casual conversations, etc. (Wang et al., 2019).", "For the purpose of this work, we first need to extract emails containing at least one sentence where the sender has promised to perform an action.", "It could be performing a task, providing some information, keeping others informed about a topic and so on.", "We use the term commitment to refer to such intent in an email and the term commitment sentence to refer to each sentence with that intent.", "Commitment classifier: A commitment classifier C : S (cid:55) [0 , 1] takes as input an email sentence S and returns a probability of whether the sentence is a commitment or not.", "The classifier is built using labels from an annotation task with 3 judges.", "The Cohen's kappa value is 0 .", "694 , depicting substantial agreement.", "The final label is obtained from the majority vote, generating a total of 9076 instances (with 2586 positive/commitment labels and 6490 negative labels).", "The classifier is an RNN-based model with word embeddings and self-attention geared for binary classification with the input being the entire email context (Wang et al., 2019).", "The classifier has a precision of 86% and recall of 84% on sentences in the Avocado corpus.", "Candidate emails: We extracted 500 k raw sentences from Avocado emails and passed them", "4 Avocado is a more appropriate test bed than the Enron collection (Klimt and Yang, 2004) since it contains additional meta-data and it entered the public domain via the cooperation and consent of the legal owner of the corpus.", "through the commitment classifier.", "We threshold the commitment classifier confidence to 0 .", "9 and obtained 29 k potential candidates for To-Do items.", "Of these, a random subset of 12 k instances were selected for annotation.", "Annotation guideline: For each candidate email e c and the previous email in the thread e p (if present), we obtained meta-data like From ', Sent-To ', Subject ' and Body '.", "The commitment sentence in e c was highlighted and annotators were asked to write a To-Do item using all of the information in e c and e p .", "We prepared a comprehensive guideline to help human annotators write To-Do Items containing the definition and structure of To-Do Items and commitment sentences, along with illustrative examples.", "Annotators were instructed to use words and phrases from the email context as closely as possible and introduce new vocabulary only when required.", "Each instance was annotated by 2 judges.", "Analysis of human annotations: We obtained a total of 9349 email instances with To-Do items, each of which was annotated by two annotators.", "To-Do items have a median token length of 9 and a mean length of 9 .", "71 .", "For 60 .", "42% of the candidate emails, both annotators agreed that the subject line was helpful in writing the To-Do Item.", "To further analyze the annotation quality, we randomly sampled 100 annotated To-Do items and asked a judge to rate them on", "(a) fluency (grammat-ical and spelling correctness), and", "(b) completeness (capturing all the action items in the email) on a 4 point scale ( 1 : Poor, 2 : Fair, 3 : Good, 4 : Excellent).", "Overall, we obtained a mean rating of 3.1 and 2.9 respectively for fluency and completeness.", "Table 1 shows a snapshot of the analysis.", "In this section, we describe our two-stage approach to generate To-Do items.", "In the first stage, we select sentences that are helpful in writing the To-Do item.", "Emails contain generic sentences such as salutations, thanks and casual conversations not relevant to the commitment task.", "The objective of the first stage is to select sentences containing informative concepts necessary to write the To-Do.", "In the absence of reliable labels to extract helpful sentences in a supervised fashion, we resort to an unsupervised matching-based approach.", "Let the commitment sentence in the email be denoted as H , and the rest of the sentences from the current email e c and previous email e p be denoted as { s 1 , s 2 , . . . s d } .", "The unsupervised approach seeks to obtain a relevance score ( s i ) for each sentence.", "The top K sentences with the highest scores will be selected as the extractive summary for the commitment sentence (also referred to as the query).", "Enriched query context: We first extract top maximum frequency tokens from all the sentences in the given email, the commitment and the subject (i.e., { s 1 , s 2 , . . . s d } H Subject ).", "Tokens are lemmatized and stop-words are removed.", "We set = 10 in our experiments.", "An enriched context for the query E is formed by concatenating the commitment sentence H , subject and top tokens.", "Relevance score computation: Task-specific relevance score for a sentence s i is obtained by inner product in the embedding space with the enriched context.", "Let h ( ) be the function denoting the embedding of a sentence with ( s i ) = h ( s i ) T h ( E ) .", "Our objective is to find helpful sentences for the commitment given by semantic similarity between concepts in the enriched context and a target sentence.", "In case of a short or less informative query, the subject and topic of the email provide useful information via the enriched context.", "We experiment with three different embedding functions.", "frequency vector is used to represent the sentence.", "(2) FastText Word Embeddings We trained FastText embeddings (Bojanowski et al., 2017) of dimension 300 on all sentences in the Avocado corpus.", "The embedding function h ( s j ) is given by taking the max (or mean) across the word-embedding dimension of all tokens in the sentence s j .", "(3) Contextualized Word Embeddings We utilize recent advances in contextualized representations from pre-trained language models like BERT (Devlin et al., 2019).", "We use the second last layer of pre-trained BERT for sentence embeddings.", "We also fine-tuned BERT on the labeled dataset for commitment classifier.", "The dataset is first made balanced ( 2586 positive and 2586 negative instances).", "Uncased BERT is trained for 5 epochs for commitment classification, with the input being word-piece tokenized email sentences.", "This model is denoted as BERT (fine-tuned) in Table 2. Evaluation of unsupervised approaches: Retrieving at-least one helpful sentence is crucial to obtain contextual information for the To-Do item.", "Therefore, we evaluate our approaches based on the proportion of emails where at-least one helpful sentence is present in the top K retrieved sentences.", "We manually annotated 100 email instances and labeled every sentence as helpful or not based on", "(a) whether the sentence contains concepts appearing in the target To-Do item, and", "(b) whether the sentence helps to understand the task context.", "Inter-annotator agreement between 2 judgments for this task has a Cohen Kappa score of 0 .", "69 .", "This annotation task also demonstrates the importance of the previous email in a thread.", "Out of 100 annotated instances, 44 have a replied-to email of which 31 contains a helpful sentence in the replied-to email body ( 70 . 4% ).", "Table 2 shows the performance of the various unsupervised extractive algorithms.", "FastText with max-pooling of embeddings performed the best and used in the subsequent generation stage.", "The generation phase of our approach can be formulated as sequence-to-sequence (Seq2Seq) learning with attention (Sutskever et al., 2014; Bahdanau et al., 2014).", "It consists of two neural networks, an encoder and a decoder.", "The input to the encoder consists of concatenated tokens from different meta-data fields of the email like sent-to', subject', commitment sentence H and extracted sentences I separated by special markers.", "For instance, the input to the encoder for the example in Figure 1 is given as: < to > alice < sub > hello ?", "generation model as follows: Vanilla Seq2Seq : Input tokens { x 1 , x 2 , . . . x T } are passed through a word-embedding layer and a single layer LSTM to obtain encoded representations h t = f ( x t , h t 1 ) t for the input.", "The decoder is another LSTM that makes use of the encoder state h t and prior decoder state s t 1 to generate the target words at every timestep t .", "We consider Seq2Seq with attention mechanism where the decoder LSTM uses attention distribution a t over timesteps t to focus on important hidden states to generate the context vector h t .", "This is the first baseline in our work.", "e t,t (cid:48) = v T tanh ( W h h t + W s s t (cid:48) + b ) a t,t (cid:48) = softmax ( e t,t (cid:48) ) h t = (cid:80) t (cid:48) a t,t (cid:48) h t (cid:48) (1) Seq2Seq with copy mechanism : As the second model, we consider Seq2Seq with copy mechanism (See et al., 2017) to copy tokens from important email fields.", "Copying is pivotal for To-Do item generation since every task involves named From: John Carter To: Helena Watson; Daniel Craig; Rupert Grint Subject: Thanks Thank you for helping me prepare the paper draft for ACL conference.", "entities in terms of the persons involved, specific times and dates when the task has to be accomplished and other task-specific details present in the email context.", "To understand the copy mechanism, consider the decoder input at each decoding step as y t and the context vector as h t .", "The decoder at each timestep t has the choice of generating the output word from the vocabulary V with probability p gen = ( h t , s t , y t ) , or with probability 1 p gen it can copy the word from the input context.", "To allow that, the vocabulary is extended as V (cid:48) = V { x 1 , x 2 , . . . x T } .", "The model is trained end-to-end to maximize the log-likelihood of target words (To-Do items) given the email context.", "Seq2Seq BiFocal : As a third model, we experimented with query-focused attention having two encoders one containing only tokens of the query and the other containing rest of the input context.", "We use a bifocal copy mechanism that can copy tokens from either of the encoders.", "We refer the reader to the Appendix for more details about training and hyper-parameters used in our models.", "9349 email instances with To-Do items, we used 7349 for training and 1000 each for validation and testing.", "For each instance, we chose the annotation with fewer tokens as ground-truth reference.", "The median token length of the encoder input is 43 (including the helpful sentence).", "Table 4 shows the performance comparison of various models.", "We report BLEU-4 (Papineni et al., 2002) and the F1-scores for Rouge-1, Rouge-2 and Rouge-L (Lin, 2004).", "We also report the human performance for this task in terms of the above metrics computed between annotations from the two judges.", "A trivial baseline which concatenates tokens from the sent-to' and subject' fields and the commitment sentence is included for comparison.", "The best performance is obtained with Seq2Seq using copying mechanism.", "We observe our model to perform at par with human performance for writing To-Do items.", "Table 3 shows some examples of To-Do item generation from our best model.", "In this work, we study the problem of automatic To-Do item generation from email context and meta-data to provide smart contextual assistance in email applications.", "To this end, we introduce a new task and dataset for action-focused text intelligence.", "We design a two stage framework with deep neural networks for task-focused text generation.", "There are several directions for future work including better architecture design for utilizing structured meta-data and replacing the two-stage framework with a multi-task generation model that can jointly identify helpful context for the task and perform corresponding text generation." ]
[ "abstain", "objective", "objective", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "objective", "objective", "other", "other", "other", "abstain", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "objective", "method", "abstain" ]
[ "Semantic parsing using sequence-to-sequence models allows parsing of deeper representations compared to traditional word tagging based models.", "In spite of these advantages, widespread adoption of these models for real-time conversational use cases has been stymied by higher compute requirements and thus higher latency.", "In this work, we propose a non-autoregressive approach to predict semantic parse trees with an efficient seq2seq model architecture.", "By combining non-autoregressive prediction with convolutional neural networks, we achieve significant latency gains and parameter size reduction compared to traditional RNN models.", "Our novel architecture achieves up to an 81% reduction in latency on TOP dataset and retains competitive performance to non-pretrained models on three different semantic parsing datasets.", "Our code is available at https://github.", "com/facebookresearch/pytext .", "Advances in conversational assistants have helped to improve the usability of smart speakers and consumer wearables for different tasks.", "Semantic parsing is one of the fundamental components of these assistants and it helps to convert the user input in natural language to a structure representation that can be understood by downstream systems.", "Majority of the semantic parsing systems deployed on various devices, rely on server-side inference because of the lower compute/memory available on these edge devices.", "This poses a few drawbacks such as flaky user experience with spotty internet connectivity and compromised user data privacy due to the dependence on a centralized server to which all user interactions are sent to.", "Thus, semantic parsing on-device has numerous advantages.", "For the semantic parsing task, the meaning representation used decides the capabilities of the system built.", "Limitations of the representation with one intent and slot labels were studied in the context of nested queries and multi turn utterances in Aghajanyan et al. (2020) and Gupta et al. (2018).", "New representations were proposed to overcome these limitations and sequence-to-sequence models were proposed as the solution to model these complex forms.", "But using these new models in real-time conversational assistants still remains a challenge due to higher latency requirements.", "In our work, we propose a novel architecture and generation scheme to significantly improve the end2end latency of sequence-to-sequence models for the semantic parsing task.", "Due to the autoregressive nature of generation in sequence-to-sequence semantic parsing models, the recurrence relationship between target tokens creates a limitation that decoding cannot be parallelized.", "There are multiple works in machine translation which try to solve this problem.", "These approaches relax the decoder token-by-token generation by allowing multiple target tokens to be generated at once.", "Fully non-autoregressive models (Gu et al., 2017; Ma et al., 2019; Ghazvininejad et al., 2020a; Saharia et al., 2020) and conditional masked language models with iterative decoding (Ghazvinine-jad et al., 2019; Gu et al., 2019; Ghazvininejad et al., 2020b) are some of them.", "To enable non-autoregressive generation in semantic parsing, we modify the objective of the standard seq2seq model to predict the entire target structure at once.", "We build upon the CMLM (Con-ditional Masked Language Model) (Ghazvininejad et al., 2019) and condition the generation of the full target structure on the encoder representation.", "By eliminating the recurrent relationship between individual target tokens, the decoding process can be parallelized.", "While this drastically improves latency, the representation of each token is still dependent on previous tokens if we continue to use an RNN architecture.", "Thus, we propose a novel model architecture for semantic parsing based on convolutional networks (Wu et al., 2019b) to solve this issue.", "Our non-autoregressive model achieves up to an 81% reduction in latency on the TOP dataset (Gupta et al., 2018), while achieving 80.23% exact match accuracy.", "We also achieve 88.16% exact match accuracy on DSTC2 (Henderson et al., 2014) and 80.86% on SNIPS (Coucke et al., 2018) which is competitive to prior work without pretraining.", "To summarize, our two main contributions are: We propose a novel alternative to the traditional autoregressive generation scheme for semantic parsing using sequence-to-sequence models.", "With a new model training strategy and generation approach, the semantic parse structure is predicted in one step improving parallelization and thus leading to significant reduction in model latency with minimal accuracy impact.", "We also study the limitations of original CMLM (Ghazvininejad et al., 2019) when applied for conversational semantic parsing task and provide motivations for our simple yet critical modifications.", "We propose LightConv Pointer, a model architecture for non-autoregressive semantic parsing, using convolutional neural networks which provides significant latency and model size improvements over RNN models.", "Our novel model architecture is particularly suitable for limited compute use-cases like on-device conversational assistants.", "In this section, we propose a novel, convolutional, non-autoregressive architecture for semantic parsing.", "While non-autoregressive decoding has been previously explored in machine translation, we describe how it can be applied to semantic parsing with several critical modifications to retain performance.", "We then describe our convolutional architecture.", "By incorporating these advances together, our approach achieves both high accuracy and efficient decoding.", "The task is to predict the semantic parse tree given the raw text.", "We use the decoupled representation (Aghajanyan et al., 2020), an extension of the compositional form proposed in Gupta et al. (2018) for task oriented semantic parsing.", "Decoupled representation is obtained by removing all text in the compositional form that does not appear in a leaf slot.", "Efficient models require representations which are compact, with least number of tokens, to reduce number of floating point operations during inference.", "Decoupled representation was found to be suitable due to this.", "Figure 1 shows the semantic parse for a sample utterance.", "Our model predicts the serialized representation of this tree which is [IN:CREATE_REMINDER [SL:PERSON_REMINDED me ] [SL:TODO [IN:CREATE_CALL [SL:METHOD call ] [SL:CONTACT John ] ] ] ] 2.1 Non-Autoregressive Decoding While autoregressive models (Figure 2), which predict a sequence token by token, have achieved strong results in various tasks including semantic parsing, they have a large downside.", "The main challenge in practical applications is the slow decoding time.", "We investigate how to incorporate recent advances in non-autoregressive decoding for efficient semantic parsing models.", "We build upon the Conditional Masked Language Model (CMLM) proposed in Ghazvininejad et al. (2019) by applying it to the structured prediction task of semantic parsing for task-oriented dialog.", "Ghazvininejad et al. (2019) uses CMLM to first predict a token-level representation for each source token and a target sequence length; then the model predicts and iterates on the target sequence prediction in a non-autoregressive fashion.", "We describe our changes and the motivations for these changes below.", "One of the main differences between our work and Ghazvininejad et al. (2019) is that target length prediction plays a more important role in semantic parsing.", "For the translation task, if the target length is off by one or more, the model can slightly rephrase the sentence to still return a high quality translation.", "In our case, if the length prediction is Figure 2: Traditional Sequence to Sequence architecture which uses autoregressive generation scheme for decoder.", "To resolve this important challenge, we propose a specialized length prediction module that more accurately predicts the target sequence length.", "While Ghazvininejad et al. (2019) uses a special CLS token in the source sequence to predict the target length, we have a separate module of multiple layers of CNNs with gated linear units to predict the target sequence length (Wu et al., 2019b).", "We also use label smoothing and differently weighing losses as explained in section 2.3, to avoid the easy over-fitting in semantic parsing compared to translation.", "As shown in Aghajanyan et al. (2020), transformers without pre-training perform poorly on TOP dataset.", "The architectural changes that we propose to solve the data efficiency can be found in the section 2.2.1.", "Further, we find that the random masking strategy proposed in Ghazvininejad et al. (2019) works poorly for semantic parsing.", "When we use the same strategy for the semantic parsing task where the output has a structure, model is highly likely to see invalid trees during training as masking random tokens in the linearized representation of a tree mostly gives invalid tree representations.", "This makes it hard for the model to learn the structure especially when the structure is complicated (in the case of trees, deep trees were harder to learn).", "To remedy this problem, we propose a different strategy for model training where all the tokens in the target sequence are masked during training.", "Our model architecture (Figure 3) is based on the classical seq2seq model (Sutskever et al., 2014) and follows the encoder-decoder architecture.", "In order to optimize for efficient encoding and decoding, we look to leverage a fully parallel model architecture.", "While transformer models are fully parallel and popular in machine translation (Vaswani et al., 2017), they are known to perform poorly in low resource settings and require careful tuning using techniques like Neural Architecture Search to get good performance (van Biljon et al., 2020; Murray et al., 2019).", "Similarly, randomly initialized transformers performed poorly on TOP dataset achieving only 64.5 % accuracy when SOTA was above 80% (Aghajanyan et al., 2020).", "We overcome this limitation by augmenting Transformers with Convolutional Neural Networks.", "Details of our architecture are explained below.", "For token representations, we use word embeddings concatenated with the sinusoidal positional embeddings (Vaswani et al., 2017).", "Encoder and decoder consist of multiple layers with residual connections as shown in Figure 4.", "First sub-block in each layer consists of MHA (Vaswani et al., 2017).", "In decoder, we do not do masking of future tokens during model training.", "This is needed for non-autoregressive generation of target tokens during inference.", "Second sub-block consists of multiple convolutional layers.", "We use depthwise convolutions with weight sharing (Wu et al., 2019b).", "Convolution layer helps in learning representation for tokens for a fixed context size and multiple layers helps with bigger receptive fields.", "We use non-causal convolutions for both encoder as well as decoder.", "Third sub-block is the FFN (Vaswani et al., 2017; Wu et al., 2019b) which consists of two linear layers and relu.", "The decoder has source-target attention after the convolution layer.", "Pointer-Generator Projection layer The decoder has a final projection layer which generates the target tokens from the decoder/encoder representations.", "Rongali et al. (2020) proposes an idea based Pointer Generator Network (See et al., 2017) to convert the decoder representation to target tokens using the encoder output.", "Similarly, we use a pointer based projection head, which decides whether to copy tokens from the source-sequence or generate from the pre-defined ontology at every Figure 3: Sequence to Sequence model architecture which uses Non-Autoregressive strategy for generation decoding step (Aghajanyan et al., 2020).", "Length Prediction Module Length prediction Module receives token level representations from the encoder as input.", "It uses stacked CNNs with gated linear units and mean pooling to generation the length prediction.", "Suppose the source sequence is of length L and source tokens in the raw text are s 1 , s 2 , s 3 . . . s L .", "Encoder generates a representation of for each token in the source sequence.", "Using the predicted length T, we create a target sequence of length T consisting of identical MASK tokens.", "This sequence is passed through possibly multiple decoder layers and generates a representation for each token in the masked target sequence.", "We make a strong assumption that each token in the target sentence is conditionally independent of each other given the source and the target length.", "Thus, the individual probabilities for each token is P ( y i | X, T ) where X is the input sequence and T is the length of target sequence.", "Beam Search During inference, length prediction module explained in 2.2.1 predicts top k lengths.", "For each predicted length, we create a decoder input sequence of all masked tokens.", "This is similar to the beam search with beam size k in autoregressive systems.", "The main difference in our model architecture is that we expect only one candidate for each predicted length.", "These all masked sequences are given as input to the model and the model predicts target tokens for each masked token.", "Once we have predicted target sequences for k different lengths, they are ranked based on the ranking algorithm described in (5), where X is the input sequence and Y is the predicted output sequence, note the predicted token y i is conditioned on both the sequence ( X ) and the predicted target length T .", "During training, we jointly optimize for two weighted losses.", "The first loss is calculated for the predicted target tokens against the real target and the second loss is calculated for predicted target length against real target length.", "During forward-pass, we replace all the tokens in the target sequence with a special <MASK> token and give this as an input to the decoder.", "Decoder predicts the token for each masked token and the cross-entropy loss is calculated for each predicted token.", "The length prediction module in the model predicts the target length using the encoder representation.", "Similar to CMLMs in (Ghazvininejad et al., 2019), length prediction is modeled as a classifica-tion task with class labels for each possible length.", "Cross entropy loss is calculated for length prediction.", "For our semantic parsing task, label smoothing (Szegedy et al., 2015) was found to be very critical as the length prediction module tends to easily overfit and strong regularization methods are needed.", "This was because length prediction was a much well-defined task compared to predicting all the tokens in the sequence.", "Total loss was calculated by taking a weighted sum of cross entropy loss for labels and length, with lower weight for length loss.", "As training progresses through different epochs, the best model is picked by comparing the exact match (EM) accuracy of different snapshots on validation set.", "We use 3 datasets across various domains to evaluate our semantic parsing approach.", "Length distribution of each dataset is described using median, 90th percentile and 99th percentile lengths.", "TOP Dataset Task Oriented Parsing (Gupta et al., 2018) is a dataset for compositional utterances in the navigation and events domains.", "The training set consists of 31 , 279 instances and the test set consists of 9 , 042 .", "The test set has a median target length of 15, P90 27 and P99 39.", "SNIPS The SNIPS (Coucke et al., 2018) dataset is a public dataset used for benchmarking semantic parsing intent slot models.", "This dataset is considered flat, since it does not contain compositional queries and can be solved with word-tagging models.", "Recently, however seq2seq models have started to out perform word-tagging models (Rongali et al., 2020; Aghajanyan et al., 2020).", "The training set consists of 13 , 084 instances, the test set consists of 700 instances.", "The test set has a median target length of 11, P90 17, P99 21.", "DSTC2 Dialogue State Tracking Challenge 2 (Henderson et al., 2014), is a dataset for conversational understanding.", "The dataset involves users searching for restaurants, by specifying constraints such as cuisine type and price range, we encode these constraints as slots and use this to formulate the decoupled representation.", "The training set consists of 12 , 611 instances and a test set of 9890 .", "The test set has a median target length of 6, P90 9 and P99 10.", "Semantic Parsing Performance For all our datasets, we convert the representation of either the compositional form or flat intent slot form to the decoupled representation (Aghajanyan et al., 2020) .", "We compare the model prediction with the serialized structure representation and look for exact match (EM).", "Benchmarking Latency For the latency analysis for the models trained from scratch: AR LightConv Pointer, NAR LightConv Pointer, and BiLSTM.", "We chose these 3 architectures, to compare NAR vs AR variants of LightConv Pointer, as well as the best performant baseline: Pointer BiLSTM (Aghajanyan et al., 2020).", "We use Samsung Galaxy S8 with Android OS and Octa-core processor.", "We chose to benchmark latency to be consistent with prior work on on-device modeling (Wu et al., 2019a; Howard et al., 2019).", "All models are trained in PyTorch (Paszke et al., 2019) and exported using Torchscript.", "We measure wall clock time as it is preferred instead of other options because it relates more to real world inference.", "1 Latency results can be found in section 4.2.", "For each of our datasets, we report accuracy metrics on the following models:", "NAR LightConv Pointer : A non-autoregressive (NAR) variant of the above model to allow for parallel decoding.", "We compare against the best reported numbers across datasets where the models don't use pretraining.", "During training of our model we use the same base model across all datasets and sweep over hyper parameters for the length module and the batch size and learning rate, an equivalent sweep was done for the AR variant as well.", "The base model we use for NAR LightConv Pointer model uses 5 encoder layers with convolutional kernel sizes [3,7,15,21,27], where each encoder layer has embedding and convolutional dimensions of 160, 1 self attenion head, and 2 decoder layers with kernel sizes [7,27], and embedding dimension of 160, 1 self-attention head and 2 encoder-attention heads.", "Our length prediction module leverages a two convolution layers of 512 embedding dimensions and kernel sizes of 3 and 9.", "and uses hidden dimension in [128,256,512] determined by hyper parameter sweeps.", "We also use 8 attention heads for the decoupled projection head.", "For the convolutional layer, we use lightweight convolutions (Wu et al., 2019b) with number of heads set to 2.", "We train with the Adam (Kingma and Ba, 2014) optimizer, learning rate is selected to be between [0.00007, 0.0004].", "If our evaluation accuracy has not increased in 10 epochs, we also reduce our learning rate by a factor of 10, and we employ early stopping if the accuracy has not changed in 20 epochs.", "We train with our batch size fixed to be 8.", "We optimize a joint loss for label prediction and length prediction.", "Both losses consist of label smoothed cross entropy loss ( is the weight of the uniform distribution) (Pereyra et al., 2017), our label loss has = 0 .", "1 and our length loss has = 0 .", "5 , we also weight our length loss lower, = 0 .", "25 .", "For inference, we use a length beam size of k = 5 .", "Our AR variant follows the same parameters however it does not have length prediction and self-attention in encoder and decoder.", "We show that our proposed non-autoregressive convolutional architecture for semantic parsing is competitive with auto-regressive baselines and word tagging baselines without pre-training on three different benchmarks and reduces latency up to 81% on the TOP dataset.", "We first compare accuracy and latency, then discuss model performance by analyzing errors by length, and the importance of knowledge distillation.", "We do our analysis on the TOP dataset, due to its inherent compositional nature, however we expect our analysis to hold for other datasets as well.", "Non-compositional datasets like DSTC2 and SNIPS can be modeled by word tagging models making seq2seq models more relevant in the case of compositional datasets.", "In table 5a we show our NAR and AR variants for LightConv Pointer perform quite similarly across all datasets.", "We can see that our proposed NAR LightConv Pointer is also competitive with state of the art models without pre-training: -0.66% TOP, -0.17% DSTC2, -4.57% SNIPS (-0.04% compared to word tagging models).", "Following the prior work on Non-Autoregressive models, we also report our experiments with sequence-level knowledge distillation in subsection Knowledge Distillation under section.", "4.3.", "In figure 5b we show the latency of our model with different generation approaches (NAR vs AR) over increasing target sequence lengths on the TOP dataset.", "Firstly, we show that our LightConv Pointer is significantly faster than the BiLSTM baseline (Aghajanyan et al., 2020), achieving up to a 54% reduction in median latency.", "BiLSTM was used as baseline as that was the SOTA without pretraining for TOP and Transformers performed poorly.", "By comparing our model with AR and NAR generation strategy, it can be seen that increase in latency with increase in target length is much smaller for NAR due to better parallelization of decoder, resulting in up to an 81% reduction in Length Bucket NAR (%) AR (%) Bucket Size < 10 82.80 83.13 2798 10-20 84.18 84.36 5167 20-30 62.50 65.72 992 30-40 21.25 41.25 80 > 40 0.00 20.00 5 Table 2: EM accuracy of the NAR LightConv Pointer (distilled) vs AR LightConv Pointer distilled across different target length buckets along with the number of instances in each bucket on the TOP dataset.", "median latency compared to the BiLSTM model.", "Also note that both the LightConv Pointer models are able to achieve parity in terms of EM Accuracy compared to the baseline BiLSTM model, while using many fewer parameters, the BiLSTM model uses 20M parameters, while the NAR LightConv Pointer uses 12M and the AR LightConv Pointer uses 10M.", "Ablation experiments We compare the modifications proposed by this work (LightConv, Conv length prediction module and Mask everything strategy) with the original model proposed in Ghazvininejad et al. (2019) in table 1.", "The motivations for each modification was already discussed in sub-section 2.1.", "Our mean EM accuracy results based on 3 trials show the significance of techniques proposed in this paper especially for longer target sequences.", "Errors by length It is known that non-autoregressive models have difficulty at larger sequence lengths (Ghazvininejad et al., 2019).", "In table 2, we show our model's accuracy in each respective length bucket on the TOP dataset.", "We see that the AR and NAR model follow a similar distribution of errors, however the NAR model seems to error at a higher rate for the longer lengths.", "Knowledge Distillation Following prior work (Ghazvininejad et al., 2019; Zhou et al., 2020), we train our model with sequence-level knowledge distillation (Kim and Rush, 2016).", "We train our system on data generated by the current SOTA autoregressive models BART (Lewis et al., 2019; Aghajanyan et al., 2020).", "In table 3 we show the impact of knowledge distillation in our task on both the non-autoregressive and autoregressive variants of LightConv Pointer.", "These results support prior work in machine translation for distillation of au-Figure 6: Distilled NAR LightConv Pointer Top-K accuracy for exact match (EM) accuracy (blue) and Top-K length accuracy (orange), as well as the EM accuracy with gold length (dotted red line) for the TOP dataset.", "toregressive teachers to non-autoregressive models showing distillation improving our models on TOP and SNIPS, however we notice minimal changes on DSTC2.", "The importance of length prediction An important part of our non-autoregressive model is length prediction.", "In figure 6, we report exact match accuracy @ top k beams and length accuracy @ top k beams (where top K refers to whether the correct answer was in the top K predictions) for the TOP dataset.", "We can see a tight correlation between our length accuracy and exact match accuracy, showing how our model is bottle necked by the length prediction.", "Providing gold length as a feature, led to an exact match accuracy of 88.20% (shown in red on figure 6), an absolute 7.31 point improvement over our best result with our non-autoregressive LightConv Pointer.", "Non-autoregressive Decoding Recent work in machine translation has made a lot of progress in fully non-autoregressive models (Gu et al., 2017; Ma et al., 2019; Ghazvininejad et al., 2020a; Saharia et al., 2020) and parallel decoding (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Ghazvininejad et al., 2020b; Kasai et al., 2020).", "While many advancements have been made in machine translation, we believe we are the first to explore the non-autoregressive semantic parsing setting.", "In our work, we extend the CMLM to work for semantic parsing.", "We make two important adjustments: first, we use a different masking approach where we mask everything and do one-step generation.", "Second, we note the importance of the length prediction task for parsing and improve the length prediction module in the CMLM.", "Seq2Seq For Semantic Parsing Recent advances in language understanding have lead to increased reliance on seq2seq architectures.", "Recent work by Rongali et al. 2020; Aghajanyan et al. 2020, showed the advantages from using a pointer generator architecture for resolving complex queries (e.g. composition and cross domain queries) that could not be handled by word tagging models.", "Since we target the same task, we adapt their pointer decoder into our proposed architecture.", "However, to optimize for latency and compression we train CNN based architectures (Desai et al. 2020 and Wu et al. 2019b) to leverage the inherent model parallelism compared to the BiLSTM model proposed in Aghajanyan et al. 2020 and more compression compared to the transformer seq2seq baseline proposed in Rongali et al. 2020.", "To further improve latency we look at parallel decoding through non-autoregressive decoding compared to prior work leveraging autoregressive models.", "This work introduces a novel alternative to autoregressive decoding and efficient encoder-decoder architecture for semantic parsing.", "We show that in 3 semantic parsing datasets, we are able to speed up decoding significantly while minimizing accuracy regression.", "Our model is able to generate parse trees competitive with state of the art autoregressive models with significant latency savings, allowing complex NLU systems to be delivered on edge devices.", "There are a couple of limitations of our proposed model that naturally extend themselves to future work.", "Primarily, we cannot support true beam decoding, we decode a single prediction for each length prediction however there may exist multiple beams for each length prediction.", "Also for longer parse trees and more complex semantic parsing systems such as session based understanding, our NAR decoding scheme could benefit from multiple iterations.", "Lastly, though we explored models without pre-training in this work, recent developments show the power of leveraging pre-trained models such as RoBERTa and BART.", "We leave it to future work to extend our non-autoregressive decoding for pre-trained models.", "We would like to thank Sandeep Subramanian (MILA), Karthik Prasad (Facebook AI), Arash Einolghozati (Facebook) and Yinhan Liu for the", "helpful discussions.", "References Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Michael Haeger, Haoran Li, Yashar Mehdad, Veselin Stoyanov, Anuj Kumar, Mike Lewis, and Sonal Gupta.", "2020.", "Conversational semantic parsing.", "In EMNLP/IJCNLP .", "Alice Coucke, Alaa Saade, Adrien Ball, Thodore Bluche, Alexandre Caulier, David Leroy, Clment Doumouro, Thibault Gisselbrecht, Francesco Calta-girone, Thibaut Lavril, et al. 2018.", "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces.", "arXiv preprint arXiv:1805.10190 .", "Shrey Desai, Geoffrey Goh, Arun Babu, and Ahmed Aly.", "2020.", "Lightweight convolutional representations for on-device natural language processing.", "arXiv preprint arXiv:2002.01535 .", "Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, and Luke Zettlemoyer.", "2018.", "Improving semantic parsing for task oriented dialog.", "In Conversational AI Workshop at NeurIPS 2018 .", "Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy.", "2020a.", "Aligned cross entropy for non-autoregressive machine translation.", "arXiv preprint arXiv:2004.01655 .", "Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer.", "2019.", "Mask-predict: Parallel decoding of conditional masked language models.", "Marjan Ghazvininejad, Omer Levy, and Luke Zettle-moyer.", "2020b.", "Semi-autoregressive training improves mask-predict decoding.", "arXiv preprint arXiv:2001.08785 .", "Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen.", "2018.", "Slot-gated modeling for joint slot filling and intent prediction.", "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) , pages 753757.", "Victor OK Li, and Richard Socher.", "2017.", "Non-autoregressive neural machine translation.", "arXiv preprint arXiv:1711.02281 .", "Jiatao Gu, Changhan Wang, and Junbo Zhao.", "Levenshtein transformer.", "In Advances in Neural Information Processing Systems , pages 1117911189.", "Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu.", "2020.", "Parallel machine translation with disentangled context transformer.", "arXiv preprint arXiv:2001.05136 .", "Yoon Kim and Alexander M Rush.", "2016.", "Sequence-level knowledge distillation.", "arXiv preprint arXiv:1606.07947 .", "Jason D. Lee, Elman Mansimov, and Kyunghyun Cho.", "2018.", "Deterministic non-autoregressive neural sequence modeling by iterative refinement.", "In Proc.", "of EMNLP .", "Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neu-big, and Eduard Hovy.", "2019.", "Flowseq: Non-autoregressive conditional sequence generation with generative flow.", "arXiv preprint arXiv:1909.02480 .", "Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis.", "2018.", "Semantic parsing for task oriented dialog using hierarchical representations.", "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 27872792, Brussels, Belgium.", "Association for Computational Linguistics.", "Dilek Hakkani-Tr, Gkhan Tr, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang.", "2016.", "Multi-domain joint semantic frame parsing using bi-directional rnn-lstm.", "In Interspeech , pages 715719.", "Matthew Henderson, Blaise Thomson, and Jason D. Williams.", "2014.", "The second dialog state tracking challenge.", "In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) , pages 263272, Philadelphia, PA, U.S.A. Association for Computational Linguistics.", "Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. 2019.", "Searching for mobilenetv3.", "In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 13141324.", "Diederik P Kingma and Jimmy Ba.", "2014.", "Adam: A method for stochastic optimization.", "arXiv preprint arXiv:1412.6980 .", "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer.", "2019.", "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.", "Kenton Murray, Jeffery Kinnison, Toan Q. Nguyen, Walter Scheirer, and David Chiang.", "2019.", "Auto-sizing the transformer network: Improving speed, efficiency, and performance for low-resource machine translation.", "In Proceedings of the 3rd Workshop on Neural Generation and Translation , pages 231240, Hong Kong.", "Association for Computational Linguistics.", "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te-jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala.", "2019.", "Pytorch: An imperative style, high-performance deep learning library.", "In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32 , pages 80248035.", "Curran Associates, Inc.", "Gabriel Pereyra, George Tucker, Jan Chorowski, ukasz Kaiser, and Geoffrey Hinton.", "2017.", "Regularizing neural networks by penalizing confident output distributions.", "arXiv preprint arXiv:1701.06548 .", "Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza.", "2020.", "Don't parse, generate! a sequence to sequence architecture for task-oriented semantic parsing.", "arXiv preprint arXiv:2001.11458 .", "Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi.", "2020.", "Non-autoregressive machine translation with latent alignments.", "arXiv preprint arXiv:2004.07437 .", "Abigail See, Peter J. Liu, and Christopher D. Manning.", "2017.", "Get to the point: Summarization with pointer-generator networks.", "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1073 1083, Vancouver, Canada.", "Association for Computational Linguistics.", "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le.", "2014.", "Sequence to sequence learning with neural networks.", "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.", "2015.", "Rethinking the inception architecture for computer vision.", "Elan van Biljon, Arnu Pretorius, and Julia Kreutzer.", "2020.", "On optimal transformer depth for low-resource language translation.", "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, ukasz Kaiser, and Illia Polosukhin.", "2017.", "Attention is all you need.", "In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30 , pages 59986008.", "Curran Associates, Inc.", "Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer.", "2019a.", "Fb-net: Hardware-aware efficient convnet design via differentiable neural architecture search.", "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 1073410742.", "Victor Zhong, Caiming Xiong, and Richard Socher.", "2018.", "Global-locally self-attentive encoder for dialogue state tracking.", "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1458 1467, Melbourne, Australia.", "Association for Computational Linguistics.", "Chunting Zhou, Jiatao Gu, and Graham Neubig.", "2020.", "Understanding knowledge distillation in non-autoregressive machine translation.", "In International Conference on Learning Representations ." ]
[ "abstain", "abstain", "objective", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "result", "result", "objective", "objective", "abstain", "objective", "objective", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "objective", "objective", "abstain", "other", "other", "objective", "abstain", "abstain", "objective", "result", "result", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "Rationale-Centric Framework for Human-in-the-loop Machine Learning", "{yanglinyi, zhangyue}@westlake.edu.cn Abstract", "We present a novel rationale-centric framework with human-in-the-loop R ationales-centric D ouble-robustness L earning (RDL) to boost model out-of-distribution performance in few-shot learning scenarios.", "By using static semi-factual generation and dynamic human-intervened correction, RDL exploits rationales (i.e. phrases that cause the prediction), human interventions and semi-factual augmentations to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalisation.", "Experimental results show that RDL leads to significant prediction benefits on both in-distribution and out-of-distribution tests compared to many state-of-the-art benchmarksespecially for few-shot learning scenarios.", "We also perform extensive ablation studies to support in-depth analyses of each component in our framework.", "Recent work finds that natural artefacts (Guru-rangan et al., 2018) or spurious patterns (Keith et al., 2020; Srivastava et al., 2020) in datasets can cause sub-optimal model performance for neural networks.", "As shown in Figure 1, the bold phrases 100% bad and brain cell killing are underlying causes for a negative sentiment prediction that most human readers would recognise.", "These are defined as rationales in this paper.", "The underlined phraseacting and plot has been incorrectly recognised as a causal term by the model used fort this example, and is referred to as a spurious pattern .", "Spurious patterns (or associations) are caused by natural artefacts or biases in training data (Lertvittayakumjorn and Toni, 2021), and are usually useless, or even harmful, at test time.", "This issue can be severe in few-shot learning (FSL) * These authors contributed equally to this work.", "scenarios.", "For instance, Kulesza et al. (2010) suggests that when a model is trained with a small subset of labelled data, it is prone to exploiting spurious patterns leading to poor generalisability that is evident in the performance decay in out-of-distribution (OOD) datasets.", "In spite of these issues, training deep neural networks using few labelled examples is a compelling scenario since unlabelled data may be abundant but labelled data is expensive to obtain in real-world applications (Lu and MacNamee, 2020; Lu et al., 2021).", "There is a strand of research addressing this scenario that seeks to improve model performance by introducing methods and resources for training models less sensitive to spurious patterns", "(Kaushik et al., 2020).", "Most of this work relies on generating counterfactual augmented data", "(CAD), either manually", "(Kaushik et al., 2021)", "or automatically", "(Feng et al., 2021; Qian et al., 2021; Yang et al., 2021, 2020a; Delaney et al., 2021).", "For example, Kaushik et al.", "(2020)", "proposed a human-in-the-loop framework where human annotators are required to make minimal changes to original movie reviews to produce sentiment-flipped counterfactual reviews, which enables models to learn useful associations between input texts and output labels", "(Kaushik et al., 2021).", "Generating manual counterfactuals, however, is expensive and time-consumingKaushik et al.", "(2020)", "report the cost of revising 2 .", "5 k instances at over $10,000.", "On the other hand, fully automatic methods are task-specific and therefore have weak robustness across domains and less reliabil-6986 Semi-factual Generation", "ity compared to manual counterfactuals.", "To address these issues, we propose R ationales-centric D ouble-robustness L earning", "(RDL), a human-in-the-loop framework for data augmentation in a few-shot setting, which is efficient, robust, model-agnostic, and general across tasks.", "Our main idea is a rationale-centric strategy for eliminating the effect of spurious patterns by leveraging human knowledge as shown in Figure", "2. Our double-robustness framework consists of two main modules.", "The first is a Static Semi-factual Generation module that generates a set of semifactual data automatically for a given instance by using human-identified rationales.", "Such labelling requires less human input compared to fully manual counterfactual generation", "(see Section 3.1).", "In contrast with counterfactuals", "(Roese, 1997)", "that rely on what might have been different", "(i.e. the label would be changed if certain terms have been changed), semi-factuals", "(McCloy and Byrne, 2002; Kenny and Keane, 2021), as used in our work, aim to guide a model to identify terms less causally related to the label", "(i.e. even if certain terms had been changed, the label would be kept the same).", "Second, we apply a Dynamic Human-intervened Correction module , where the most salient features are identified for model predictions over a set of training examples, and human workers intervene by checking the correctness of the rationale in case first-round modifications introduce new artefacts.", "We evaluate the two modules in a few-shot setting, where a minimum number of training instances are labeled for maximum generalisation power, both for in-distribution and OOD predictions.", "also used in Kaushik et al.", "(2020), demonstrate that the double-robust models can be less sensitive to spurious patterns.", "In particular, models trained with RDL with only 50 labelled examples achieve the same or even better results than fully-supervised training with a full training set of 1,707 examples, and improvements are especially significant for OOD tests.", "The predictive model trained with RDL using only 100 labelled examples outperforms models trained with manual", "(Kaushik et al., 2020)", "and automatic CAD", "(Yang et al., 2021)", "using the full augmented training set of 3,414 examples.", "To the best of our knowledge, we are the first to exploit the efficacy of semi-factuals and human-intervention for improving the generalisation abilities of deep neural networks in few-shot learning scenarios.", "* 2 Related Work Data augmentation has been used for resolving artefacts in training datasets before", "(Gururangan et al., 2018; Srivastava et al., 2020; Kaushik et al., 2021).", "In particular, previous work", "(Kaushik et al., 2020)", "relied on large-scale crowd-sourcing to generate useful augmented data.", "More recently, Yang et al.", "(2021), and Wang and Culotta", "(2021)", "investigated the efficacy of the automatically generated counterfactuals for sentiment analysis.", "Similar to our work, these methods also consider the most salient features that a model uses when generating augmented data, which is in line with our rationale definition.", "However, they use sentiment lexicon matching for identifying rationales, which is task-specific and not necessarily fully relevant.", "In contrast, we employ human annotators to identify rationales, which can be task-agnostic and robust.", "Moreover, our method generates semi-factuals instead of counterfactuals used in previous work.", "Human-the-loop Machine Learning", "(Wu et al., 2021)", "has received increasing research attention.", "Active learning", "(Settles, 2009; Margatina et al., 2021), the most common example of human-in-the-loop machine learning, asks human annotators only to provide high-level annotations", "(i.e. labels)", "for important examples.", "There is also some work exploring more explainable AI systems by exploiting feature-based information.", "Such methods use relatively simple models such as Nave Bayes", "(Stumpf * All resources are available at https://github.com/GeorgeLuImmortal/RDL-Rationales-centric-Double-robustness-Learning/ 6987 et al., 2009; Kulesza et al., 2015)", "and Linear Regression with bag-of-words features", "(Jia and Liang, 2017; Teso and Kersting, 2019; Ghai et al., 2021; Shao et al., 2021), because these classifiers are relatively intuitive in generating explanations and amenable to incorporating human feedback.", "Some other work uses simple neural networks such as multi-layer perceptrons", "(Shao et al., 2021)", "and shallow CNNs", "(Lertvittayakumjorn et al., 2020; Stammer et al., 2021; Teso et al., 2021)", "because the predictions of such models can be explained in the form of features.", "Very recently, Yao et al.", "(2021)", "proposed a human-in-the-loop method to inspect more complicated models", "(e.g. BERT)", "with the help of model-agnostic post-hoc explanation algorithms", "(Ribeiro et al., 2018)", "that can explain predictions of any linear or non-linear model without exploiting its weights.", "However, previous work focuses on increasing the explainability of AI systems for high-stakes domains such as health and finance", "(Li et al., 2020; Yang et al., 2020b), instead of improving model robustness or generalisation ability.", "Also, they assume access to a large amount of labelled data.", "In contrast, we focus on few-shot learning scenarios which are more compelling.", "The RDL pipeline is shown in Figure 2 and consists of two modules: Static Semi-factual Generation and Dynamic Human-intervened Correction .", "Static semi-factual generation is a more efficient alternative to manually generated counterfactuals", "(Kaushik et al., 2020).", "In the first phase, Rationale Marking", "(Section 3.1), human annotators review each document in the training set to provide rationales", "(i.e. phrases that support the document classification decisions shown as bold text in Figure 2).", "The second phase is a semi-factual generation method based on synonym replacement", "(Section 3.2)", "that produces augmented examples", "(blue text in Figure 2 indicates replaced words), which are added into the training set.", "Dynamic human-intervened correction", "(Section 3.3)", "is a rationales-powered human-in-the-loop framework to dynamically correct the model's behaviours.", "At the outset, sampling and sensitivity of contextual decomposition", "(SCD)", "(Jin et al., 2019)", "is applied to detect the rationales given by the model that is obtained in the previous step.", "Then, all model-identified rationales", "(underlined texts in Figure 2)", "are examined by human annotators to identify false rationales", "(i.e. words or phrases that do not support the classifications but are falsely included by the model)", "and missing rationales", "(i.e. words or phrases that support the classifications but are not included by the model).", "Both false rationales and missing rationales are corrected to produce augmented examples.", "Finally, newly generated examples are added into the training set to re-train the deep learning model.", "Following Kaushik et al.", "(2020)", "and Yang et al.", "(2021), we use the IMDb movie review dataset", "(Maas et al., 2011)", "in our experiments.", "It consists of positive and negative movie reviews that are easy for human participants to understand, re-annotate, and provide feedback upon", "(Zaidan et al., 2007).", "We use a crowdsourcing company to recruit editors and annotators for marking rationales that support classification decisions.", "At the outset, annotators were given instructions and examples that gently guided them to annotate rationales.", "Only adjectives, adverbs, nouns, and verbs were considered as rationales.", "Besides, rationales were required to carry complete semantic information.", "For example, for a phrase starting with a negation word such as not great , annotators are instructed to mark the whole phrase not great as a rationale instead of just marking not .", "We also limited rationales to at most three consecutive words", "(i.e. unigrams, bigrams and trigrams).", "Phrases consisting of numerical scores are not counted as rationales", "(e.g. 5 or 10 stars)", "since different datasets may use different rating scales, and annotating digits may hurt OOD performance.", "Overall, we encouraged annotators to try their best to mark as many rationales as possible to explain classification labels.", "However, to guarantee the quality of rationale marking and prevent annotators from over including non-rationales for more payment, we also manually inspected annotated examples and rejected examples that contained incorrect rationales.", "After inspection, we rejected 10.6% of negative reviews and 7.6% of positive reviews.", "Editors and annotators re-annotated the rejected examples, which were then presented to us for another inspection.", "All re-annotated examples were approved only if all authors were happy with the quality of the annotations.", "Otherwise, the examples were re-annotated again.", "rationales in 855 movie reviews involved in Section 3.1 and 3.3", "(note that we did not annotate all 1,707 examples in the training set because only 855 examples were necessarily involved in our experiments).", "Human annotators spent on average 183.68 seconds to identify rationales in a review and our method generated semi-factual examples automatically.", "On the contrary, workers spent on average 300 seconds to revise a review to generate a counterfactual manually as reported by Kaushik et al.", "(2020).", "Note that our approach using 100 labelled examples can outperform manual CAD", "(Kaushik et al., 2020)", "using the entire training set of 1,707 examples", "(see Section 5.3), making our approach 300 1707 183 .", "68 100 27 .", "88 times more efficient than manually generated CAD.", "We take a simple replacement strategy, which has been taken by Yang et al.", "(2021), to generate semifactual examples.", "Given a human-identified rationale, our method constructs augmented examples by automatically replacing non-rationale words, thus leading to examples with the same labels.", "This augmentation is consistent with semi-factual thinking: even if those non-rationales were changed, the label would not change.", "Formally, given a training example x i = [ t i 1 , t i 2 , ..., t ij ]", "(where t ij is the j th token of the i th document)", "and its ground truth label y i , we create a rationale vector r i = [ a i 1 , a i 2 , ..., a ij ] where a ij is the value that indicates whether t ij is a rationale or not", "(we set a ij = 1 to indicate that t ij is a rationale and 0 otherwise).", "To generate a semi-factual example, x i , we randomly replace a certain number of non-rationales", "(where a ij = 0 ), except for punctuation, with synonymous terms.", "The synonyms can be provided by a human, retrieved automatically from a lexicon such as WordNet", "(Miller, 1995), or generated using the mask-filling function of a pretrained context-aware language model", "(Liu et al., 2019).", "In our experiments, we randomly replace 5% of non-rationales using mask-filling and generate a set of augmented examples, x i , with some replaced non-rationales and all the other tokens identical to x i .", "The label, y i , of a newly generated example is the same as the label of the original example, x i .", "Examples of generated data are shown in Table", "1. Afterwards, the augmented examples are added into the training set used to train the model.", "Dynamic human-intervened correction further improves the robustness of the model by allowing human annotators to correct the model rationales online.", "Firstly, SCD is applied to detect unigrams, bigrams or trigrams that are salient to the model.", "SCD is a technique to assess the importance of terms by continuously removing terms and measuring changes in prediction", "(Jin et al., 2019).", "Human annotators examine all rationales given by the model from all documents to discover two types of incorrect rationale: false rationales and missing rationales.", "The next phase allows human feedback to influence the learning process.", "To this end, for each type of incorrect rationale, we propose a corresponding strategy to correct them.", "For false rationales", "(i.e. phrases that actually do not support classifications but are incorrectly identified by the model), we use synonym replacement again to generate semi-factual examples.", "Unlike the static semi-factual generation", "(Section 3.2), in this component we replace all false rationales with their synonyms instead of randomly replacing 5% of non-rationales in a document.", "Examples of generated data are shown in Table", "2. For missing rationales", "(i.e. phrases that actually support classifications but are not identified by the model), we take another simple semi-factual generation strategy, that is, extracting sentences that contain missing rationales to form semi-factual data.", "Specifically, given a sentence containing missing rationales, we use this sentence as a new example, and the label of this newly generated example is identical to that of the document where the sentence is extracted.", "For example, there is a positive movie review", "(bold font for rationales)", "Robert Urich was a fine actor, and he makes this TV movie believable . I remember watching this film when I was 15 .... .", "The model fails to identify fine and believable as rationales.", "Thus we extract the text Robert Urich was a fine actor, and he makes this TV movie believable . as a new example, and the class of this example is still positive.", "We extract the whole sentence rather than just the missing rationales to reserve more semantic information.", "Note that the two correction methods in dynamic human-intervened correction can operate in parallel and the generated examples are added to the small training set to re-train the model.", "Broadly speaking, our RDL framework takes advantage of invariance that makes a model less sensitive to non-rationale words or spurious patterns (Tu et al., 2020; Wang et al., 2021) in favour of focusing on useful mappings of rationales to labels.", "More specifically, by using static semi-factual generation (Section 3.2) and false rationale correction (Section 3.3), we expect to break spurious associations.", "For example, if a model incorrectly determines that Soylent Green is associated with positive sentiment (Table 2), the augmented examples that replace Soylent Green with other phrases such as Gang Orange break the spurious association.", "Besides, using synonym replacement can generate examples that are similar to the original one, which is equivalent to adding noisy data to prevent models from overfitting (Wei and Zou, 2019).", "Missing rationale correction (Section 3.3) emphasizes the ground truth associations between rationales and labels, enabling the model to better estimate the generally useful underlying distributions for OOD datasets, even in few-shot learning scenarios.", "In the next section, we present experiments and empirical evidence to demonstrate the utility of the proposed RDL framework in improving model robustness.", "Our intention is to improve the generalisability of models, and we use both in-distribution and OOD", "performance for evaluation.", "Our experiments are designed to address the following research questions: RQ1 Can we use static semi-factual generation to achieve better in-distribution and OOD performance?", "RQ2 Does dynamic human-intervened correction improve generalisability of models?", "For fair comparison with previous work (Kaushik et al., 2020; Yang et al., 2021), we use the IMDb sentiment classification dataset (Maas et al., 2011) as the in-distribution dataset.", "Following Kaushik et al. (2020), all models were trained with the IMDb dataset predefined training, validation and test partitions containing 1 , 707 , 245 , and 488 reviews respectively and an enforced 50:50 class ratio.", "To measure the generalisation ability of different models, we focus on OOD performance.", "To this end, we test models on another four binary sentiment classification datasets: the sampled Amazon reviews dataset (Ni et al., 2019) (100,000 positives and 100,000 negatives) from six genres: beauty, fashion, appliances, gift cards, magazines, and software; the Yelp review dataset (Zhang et al., 2015) (19,000 positives and 19,000 negatives); the SST-2 dataset (Socher et al., 2013) (1,067 positives and 1,143 negatives), and the SemEval-2017 Twitter dataset (Rosenthal et al., 2017) (2,339 positives 6990 Training Data In-domain SemEval-2017 SST-2 Yelp Amazon Static (50 gold) 88.60 1.11 77.28 9.11 79.29 5.14 91.53 2.06 89.63 1.65 Full (1,707 gold) 93.23 0.46 71.17 2.54 80.23 2.09 93.66 0.84 90.29 0.57 DP (Static + 350 auto) (400) 86.70 2.92 74.36 2.92 77.33 6.01 89.60 2.51 89.15 1.89 RR (Static + 350 auto) (400) 89.65 1.27 79.20 1.27 78.89 5.95 91.93 2.10 89.73 1.26 Our Methods Static + 150 auto (200) 90.08 1.25 78.88 6.67 79.40 3.28 92.19 1.51 89.81 1.73 Static + 350 auto (400) 90.16 0.85 80.54 2.81 81.26 1.97 93.03 1.08 90.09 1.79 Static + 550 auto (600) 90.04 1.50 80.69 3.42 81.23 1.83 92.10 3.07 89.67 1.27 Static + 750 auto (800) 90.08 1.01 80.55 3.96 80.75 2.30 92.36 1.87 90.18 1.44 Static + 950 auto (1000) 89.83 1.28 80.90 3.29 80.58 2.57 92.30 2.19 90.62 1.29 Static + 1150 auto (1200) 90.12 1.82 79.31 1.82 79.52 3.15 91.47 3.61 90.16 1.46 Table 3: Results on in-distribution and OOD data.", "To address RQ1 , we compare the performance of models trained by the static semi-factual generation strategy with models trained with the original 50 examples, referred to as Static .", "We also compare to a model trained with the full training set (1,707 labelled examples), referred to as Full .", "To simulate the few-shot training scenario, we randomly sample 50 examples (we also forced a 50:50 class balance) from the IMDb dataset as training data.", "For each experiment, the training is repeated 10 times with training datasets sampled by 10 different random seeds.", "We report the average result of these 10 repetitions and use accuracy to measure the classification performance.", "Our experiments rely on an off-the-shelf cased RoBERTa-base model implemented by Hugging Face * to either perform mask-filling to provide synonyms or as a predictive model.", "Following Kaushik et al. (2020), we fine-tune RoBERTa for up to 20 epochs and apply early stopping with patience of 5 (i.e. stop fine-tuning when validation loss does not decrease for 5 epochs).", "We also explore the impact of the number of semi-factual examples on model performance.", "To this end, we conduct static semi-factual generation with a different number of augmented examples for each instance: {3, 7, 11, 15, 19, 23}.", "Considering we have 50 original examples, this would result in {150, 350, 550, 750, 950, 1,150} additional examples in the training set, respectively (we call * https://huggingface.co/transformers/model_doc/roberta.html this Static+ n , where n is the number of generated semi-factuals).", "We use the Adam optimizer (Kingma and Ba, 2014) with a batch size of 4.", "We found that setting the learning rate to {5e-5, 5e-6 and 5e-6} could optimise Static, Static+ n , and Full, respectively.", "As shown in Table 3, all static semi-factual generation (Static+ n ) methods can outperform the baseline method (Static) in both in-distribution and OOD tests, demonstrating the utility of static semifactual generation.", "Among all Static+ n methods, Static+350 seems the best-performing method and exceeds Static with a 1.56% in-distribution improvement in average accuracy.", "Static+350 also outperforms Static with 3.26%, 1.97%, 1.5%, and 0.46% OOD improvement in the SemEval-2017 , SST-2 , Yelp and Amazon datasets respectively.", "Although the improvement on the Amazon dataset appears modest, given that there are 200,000 examples in the Amazon test set, this actually stands for nearly 1,000 documents being correctly classified.", "The Static+ n methods can even outperform Full (i.e. normal training with the full training set) on the SemEval , SST-2 , and Amazon datasets and are comparable on the Yelp dataset.", "The performance of models with the full training set is best on the in-distribution dataset but the worst on the SemEval dataset, which can be caused by the big difference between underlying distributions of these two datasets.", "In other words, a model that fits well with one dataset can cause performance decay on others.", "In this case, training with a smaller training set is more likely to reduce overfitting with the in-distribution dataset and fit well with the SemEval dataset, which explains the big improvement.", "It is interesting to note that models trained with the en-6991 tire training set perform slightly better on the OOD Yelp dataset (93.66 0.84 ) than on the in-distribution dataset (93.23 0.46 ), which could also be explained by the high similarity between the underlying distributions of these two datasets.", "First, we test whether the improvement in model performance is brought about by static semi-factual generation (Static+ n ) or simply by an increase in the size of the training set.", "We compare Static+350 (due to its relatively good performance) with another baseline called Duplication ( DP heareafter).", "We multiply the original training set (50 examples) up into 400 examples identical to the size of the training set of Static+350, and fine-tune RoBERTa on this dataset with the same hyperparameters as Static+350.", "As shown in Table 3, in most cases, DP un-derperforms other algorithms and is even worse than Static, demonstrating that solely increasing the dataset size cannot improve the performance.", "We believe that the duplication of original examples increases the risk of overfitting and easily magnifies artefacts or spurious patterns hidden in the small training set, which leads to worse models.", "Second, synonym replacement has been used previously for data augmentation (Wei and Zou, 2019), and we compare static semi-factual generation with simply replacing any words (i.e. both rationales and non-rationales).", "Following Wei and Zou (2019), we replace 5% of words at random and set the training set size to 400 to ensure fair comparison (we use RoBERTa and the same hyperparameters of Static+350).", "We call this Random Replacement ( RR hereafter).", "As shown in Table 3, RR is slightly better than the baseline Static approach.", "This result is similar to that reported in Wei and Zou (2019), since the augmented data generated by random replacement is similar to the original data, introducing noise that helps prevent overfitting to some extent.", "However, the magnitude of improvement of the Static+ n method is much larger than that of RR, demonstrating the utility of only replacing non-rationales to generate semi-factuals.", "These observations show that the model trained with Static+ n does improve both in-distribution and OOD performance, and the improvement is actually derived from static semi-factual generation.", "As shown in Table 3 and Figure 3, the performance gain of static semi-factual generation (Static+ n ) marginalises when augmented data is increased.", "Using too much augmented data even hurts the Static+1150 performance.", "This observation is consistent with existing work on data augmentation (Wei and Zou, 2019).", "We believe one reason could be that the use of static augmented examples could also introduce new spurious patterns that degrade model performance, necessitating a method that exploits rationales without generating too many augmented examples.", "Human-in-the-loop can address this issue by dynamically correcting the model.", "To address RQ2 , we compare the performance of models trained by dynamic human-intervened correction with a popular few-shot human-in-the-loop learning framework, Active Learning, as well as two other state-of-the-art CAD-based methods (Kaushik et al., 2020; Yang et al., 2021).", "Lastly, we provide an ablation study to examine the influence of different correction methods, as well as an analysis regarding model sensitivity to spurious patterns.", "We build up an active learning procedure as a baseline based on the model trained with Static.", "In particular, we select another 50 examples by Uncertainty Sampling (i.e. prediction scores for two classes in these examples were close) and add them into the training set (called AL hereafter).", "The training set size of the baseline becomes 100.", "The best performing static semi-factual generation method Static+350 is also listed as a baseline.", "For fair comparison, we also use Uncertainty Sampling to select another 50 examples (i.e. 100 original examples in the training set now) for the proposed dynamic human-intervened correction in-6992 Baseline Methods In-domain SemEval-2017 SST-2 Yelp Amazon Static (50 gold) 88.60 1.11 77.28 9.11 79.29 5.14 91.53 2.06 89.63 1.65 Static + 350 auto (400) 90.16 0.85 80.54 2.81 81.26 1.97 93.03 1.08 90.09 1.79 AL (100 gold) 88.64 1.75 78.61 5.90 80.50 3.37 92.47 0.68 89.80 1.91 CAD-based Methods Manual CAD (3,414 gold) 92.70 0.53 69.98 3.99 80.30 2.03 91.87 1.09 90.48 1.09 Automatics CAD (1,707 gold+1,707 auto) 91.82 0.74 79.39 5.37 80.60 3.10 91.92 0.97 90.46 1.08 Our Dynamic Methods Dynamic (100 gold + 700 auto) 90.84 0.99 80.32 4.31 82.40 2.14 93.19 1.24 90.51 2.17 Dynamic-MR (100 gold + 700 auto) 91.06 1.21 79.04 4.92 82.24 2.59 93.03 1.92 90.22 2.74 Dynamic-FR (100 gold + 700 auto) 89.85 1.38 82.39 1.88 81.59 1.82 92.98 0.91 90.12 2.42 Table 4: Results on in-distribution and OOD data.", "cluding both False Rationale Correction and Missing Rationale Correction (called Dynamic ).", "For Dynamic, we control the number of augmented examples for each review to 7 (4 from Missing Rationale Correction and 3 from False Rationale Correction), resulting in 800 examples in the training set.", "For Automatic CAD (Yang et al., 2021) and Manual CAD (Kaushik et al., 2020), we use the entire training set to produce counterfactuals to build up two challenging baselines (one counterfactual for one example, which is limited by the method), resulting in 3,414 examples in the training set.", "To investigate the influence of each correction method, we also construct another two datasets that augment the same 100 original examples to 800 exclusively by False Rationale Correction ( Dynamic-FR hereafter) and Missing Rationale Correction ( Dynamic-MR hereafter).", "Again, experiments all rely on a RoBERTa model and all hyperparameters are identical to those described in Section 5.2.1, except for the learning rate of AL which is set to 1.25e-5 (we found this value optimised AL perfor-mance).", "As shown in Table 4, both AL and Dynamic outperform Static in in-distribution and OOD datasets which makes sense, because we use Uncertainty Sampling to add new labelled data to minimise model uncertainty and increase model performance.", "However, AL fails to compete with Static+350 even if more original data is added, which again demonstrates the utility of static semi-factual generation.", "On the contrary, Dynamic does better than Static+350 with a 0.68% in-distribution improvement in average accuracy.", "Dynamic also outperforms Static+350 with 1.14%, 0.16%, 0.42% OOD improvement in the SST-2 , Yelp and Amazon datasets, but no improvement for the SemEval Non-rationales Rationales Static 0.572 0.428 Dynamic 0.433 0.567 Table 5: Static versus Dynamic models on average sensitivity (normalised) to rationales and non-rationales for IMDb test samples.", "dataset.", "Finally, the performance of our methods is better that the state-of-the-art manual CAD method in few-shot learning scenarios on all OOD datasets.", "Overall, these observations demonstrate that applying dynamic human-intervened correction (i.e. Missing Rationale Correction and False Rationale Correction) can further increase the robustness of a model on generalisation ability, effectively avoiding the improvement marginalisation caused by the increased volume of augmented data.", "Missing Rationales vs. False Rationales We conduct an ablation study by examining the performance of Dynamic-MR and Dynamic-FR in Table 4.", "Interestingly, Dynamic-FR is specifically good at improving model performance on the in-distribution and SemEval datasets while Dynamic-MR does a good job on the SST-2 dataset.", "We believe that it is because Dynamic-MR biases the model to estimate an underlying distribution that is useful for SST-2 and in-distribution datasets, while Dynamic-FR biases the model to estimate a distribution similar to SemEval dataset.", "The performance of Dynamic can be explained as a compromise of two correction methods.", "Sensitivity to Spurious Patterns We conduct an analysis to explore whether the double-robust models are less sensitive to spurious patterns.", "We compute models mean sensitivity to all rationales and non-rationales through SCD in the IMDb test set.", "As shown in Table 5, the corrected model is much more sensitive to rationales with 13.9% average increase in the 6993 sensitivity to rationales, which demonstrates that our double-robust method can decouple models from spurious patterns.", "We proposed a rationale-centric human-in-the-loop framework, RDL, for better model generalisability in few-shot learning scenarios.", "Experimental results show that our method can boost performance of deep neural networks in both in-distribution and OOD datasets and make models less sensitive to spurious patterns, enabling fast generalisation.", "In the future, we expect to see rationale-centric frameworks defined for different tasks, including NER, question answering, and relation extraction.", "We honor the ACL Code of Ethics.", "No private data or non-public information was used in this work.", "All annotators have received labor fees corresponding to the amount of their annotated instances.", "We acknowledge with thanks the discussion with Chenyang Lyu from Dublin City University, as well as the many others who have helped.", "We would also like to thank anonymous reviewers for their insightful comments and suggestions to help improve the paper.", "This publication has emanated from research conducted with the financial support of the Pioneer and \"Leading Goose\" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003 and Science Foundation Ireland (SFI) under Grant Number [12/RC/2289_P2].", "Yue Zhang is the corresponding author." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "method", "abstain", "other", "other", "other", "other" ]
["Simile interpretation is a crucial task in natural language processing.","Nowadays, pre-trained la(...TRUNCATED)
["abstain","abstain","abstain","objective","method","result","objective","result","other","abstain",(...TRUNCATED)
["Memory augmented encoder-decoder framework has achieved promising progress for natural language ge(...TRUNCATED)
["abstain","abstain","abstain","objective","method","objective","abstain","abstain","abstain","absta(...TRUNCATED)
["Emotion-cause pair extraction aims to extract all potential pairs of emotions and corresponding ca(...TRUNCATED)
["abstain","abstain","objective","objective","result","abstain","abstain","abstain","abstain","absta(...TRUNCATED)
["Abstract","This paper addresses the problem of dialogue reasoning with contextualized commonsense (...TRUNCATED)
["abstain","abstain","objective","abstain","objective","result","objective","abstain","abstain","abs(...TRUNCATED)

No dataset card yet

New: Create and edit this dataset card directly on the website!

Contribute a Dataset Card
Downloads last month
0
Add dataset card