{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:07.155291Z" }, "title": "Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI", "authors": [ { "first": "Yangqiaoyu", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chicago", "location": {} }, "email": "zhouy1@uchicago.edu" }, { "first": "Chenhao", "middle": [], "last": "Tan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chicago", "location": {} }, "email": "chenhao@uchicago.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Although neural models have shown strong performance in datasets such as SNLI, they lack the ability to generalize out-of-distribution (OOD). In this work, we formulate a fewshot learning setup and examine the effects of natural language explanations on OOD generalization. We leverage the templates in the HANS dataset and construct templated natural language explanations for each template. Although generated explanations show competitive BLEU scores against groundtruth explanations, they fail to improve prediction performance. We further show that generated explanations often hallucinate information and miss key elements that indicate the label.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Although neural models have shown strong performance in datasets such as SNLI, they lack the ability to generalize out-of-distribution (OOD). In this work, we formulate a fewshot learning setup and examine the effects of natural language explanations on OOD generalization. We leverage the templates in the HANS dataset and construct templated natural language explanations for each template. Although generated explanations show competitive BLEU scores against groundtruth explanations, they fail to improve prediction performance. We further show that generated explanations often hallucinate information and miss key elements that indicate the label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Thanks to recent advances in pre-trained language models (Vaswani et al., 2017; Devlin et al., 2018) , the state-of-the-art accuracy for natural language inference (NLI) can easily exceed 90% (Pilault et al., 2020) . However, these NLI models show poor out-of-distribution (OOD) generalization. For instance, McCoy et al. (2019) create a templated dataset (HANS) and find model performance to be about chance in this dataset.", "cite_spans": [ { "start": 57, "end": 79, "text": "(Vaswani et al., 2017;", "ref_id": "BIBREF12" }, { "start": 80, "end": 100, "text": "Devlin et al., 2018)", "ref_id": "BIBREF3" }, { "start": 192, "end": 214, "text": "(Pilault et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While recent studies try to tackle this robustness problem from the perspectives of both the dataset and the model (Le Bras et al., 2020; Swayamdipta et al., 2020; Clark et al., 2019) , we investigate an extra dimension of information, natural language explanations. 
Our work is motivated by the growing interest in explanations in the NLP community (Camburu et al., 2018; Rajani et al., 2019; Alhindi et al., 2018; Stammbach and Ash, 2020) : these explanations can potentially enable models to understand the reasoning strategy beyond spurious patterns. We focus on a few-shot learning setup because it is unrealistic to expect a large number of annotated OOD examples.", "cite_spans": [ { "start": 115, "end": 137, "text": "(Le Bras et al., 2020;", "ref_id": "BIBREF5" }, { "start": 138, "end": 163, "text": "Swayamdipta et al., 2020;", "ref_id": "BIBREF5" }, { "start": 164, "end": 183, "text": "Clark et al., 2019)", "ref_id": "BIBREF2" }, { "start": 350, "end": 372, "text": "(Camburu et al., 2018;", "ref_id": "BIBREF1" }, { "start": 373, "end": 393, "text": "Rajani et al., 2019;", "ref_id": "BIBREF9" }, { "start": 394, "end": 415, "text": "Alhindi et al., 2018;", "ref_id": "BIBREF0" }, { "start": 416, "end": 440, "text": "Stammbach and Ash, 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To introduce an OOD setting with natural language explanations, we construct E-HANS, a dataset with natural language explanations for each template in HANS. By leveraging the templates in HANS, we avoid the challenges in crowdsourcing natural language explanations (Wiegreffe and Marasovi\u0107, 2021) and manually build an explanation dataset of high-quality.", "cite_spans": [ { "start": 265, "end": 296, "text": "(Wiegreffe and Marasovi\u0107, 2021)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use an EXPLAINTHENPREDICT framework to learn with explanations. An explanation generation model outputs an explanation for each input example, and the generated explanation is fed into a classifier along with the input example. While BLEU scores imply high quality of generated explanations, learning with explanations does not improve predictive performance either in-distribution or out-of-distribution. We show the generated explanations contain words in the true explanations, but they fail to reproduce important phrases and often hallucinate entities during generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Explanations for HANS", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Natural Language", "sec_num": "2" }, { "text": "To investigate whether natural language explanations improve the robustness of natural language inference (NLI), we build on two existing datasets: 1) HANS, which introduces templates to generate OOD examples for robust evaluation of NLI models; 2) E-SNLI, which provides explanations for the Stanford Natural Language Inference (SNLI) dataset. Our key contribution is to augment HANS by building templated natural language explanations and studying the effect of these explanations on model robustness in a few-shot learning setup. 
Our dataset is available at https://github.com/ChicagoHAI/hansexplanations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Natural Language", "sec_num": "2" }, { "text": "We start by presenting details of existing datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Datasets", "sec_num": "2.1" }, { "text": "HANS (McCoy et al., 2019) contains NLI examples designed to be challenging for models that tend to learn spurious patterns. It targets known heuristics for the majority of existing NLI data. For example, one heuristic assumes that a premise entails all hypotheses that are constructed using only words in the premise. There are 3 heuristics in HANS, each containing 10 subcases. A subcase is supported by a few templates and the dataset is constructed following these templates.", "cite_spans": [ { "start": 5, "end": 25, "text": "(McCoy et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Datasets", "sec_num": "2.1" }, { "text": "E-SNLI (Camburu et al., 2018) develops free-form self-contained explanations for the true labels in SNLI using crowdsourcing. We pretrain a model on this dataset to examine the effect of pretraining. There are three explanations collected for each example in the validation dataset, and we use the first explanation. We do not use the test set of E-SNLI.", "cite_spans": [ { "start": 7, "end": 29, "text": "(Camburu et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Datasets", "sec_num": "2.1" }, { "text": "We build natural language explanations for HANS to examine whether explanations can help models when facing this challenging corpus. As HANS is constructed with templates, we develop templates for natural language explanations accordingly. They explain the reasons for the true label in human language. Table 1 shows an example of the proposed explanations (more examples are in Appendix B): Premise: the psychologist by the programmers saw the essayist. Hypothesis: the psychologist saw the essayist. Explanation: the psychologist by the programmers is still the psychologist. In addition to developing these templated explanations, we expand the original HANS vocabulary in terms of its nouns, verbs, adjectives, and adverbs to increase difficulty. This allows us to examine the effect of unseen words.", "cite_spans": [], "ref_spans": [ { "start": 303, "end": 310, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Templated Natural Language Explanations for HANS", "sec_num": "2.2" }, { "text": "To investigate whether natural language explanations improve the robustness of NLI models, we look at a few-shot learning setting. We focus on this setting since in practice one may have little or no access to OOD instances. We are interested in the following questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Few-Shot Learning Set-up", "sec_num": "3.1" }, { "text": "1. whether the model trained on in-distribution examples can generalize to unseen templates and words, 2. how many samples are enough for learning, 3. whether pretraining on E-SNLI improves generalization on HANS, 4. and most importantly, what is the effect of explanations. We use 5-fold cross-validation by splitting the 118 templates randomly into 5 folds. We generate k samples for each training template using E-HANS explanation templates.
We then build a corresponding development set that contains 0.2k samples of each training template, so that the development set is 20% the size of the training set and does not include any unseen template. This setup ensures that the size of the development set is realistic (Kann et al., 2019) .", "cite_spans": [ { "start": 718, "end": 737, "text": "(Kann et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Few-Shot Learning Set-up", "sec_num": "3.1" }, { "text": "Finally, we build test instances in the following categories to evaluate the performance of the models both in-distribution and out-of-distribution:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Few-Shot Learning Set-up", "sec_num": "3.1" }, { "text": "\u2022 IND vocab, IND template. Both the templates and the vocabulary are matched with the training set. We expect the performance to grow steadily as k increases. \u2022 OOD vocab, IND template. We use the same templates as the training set, but use unseen words to generate this test set. The challenge lies in understanding unseen words. \u2022 IND vocab, OOD template. We use the unseen templates and the same vocabulary as the training set. The challenge lies in understanding the logic encoded in unseen templates. \u2022 OOD vocab, OOD template. Finally, we generate the test data with both the unseen templates and the unseen words. We use the same test sets (300 examples for each template) to examine how the models' performance changes as k increases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Few-Shot Learning Set-up", "sec_num": "3.1" }, { "text": "We adapt the EXPLAINTHENPREDICT architecture introduced by Camburu et al. (2018) in our experiments. It consists of a generation model and a classification model. The generation model produces an explanation given an input premise and hypothesis pair. This generated explanation and the original input are fed into the classifier for label prediction. Our framework slightly differs from Camburu et al. (2018) in that their classifier only takes the explanation as input. The explanation generation model follows an encoder-decoder framework. Both the encoder and the decoder use the BERT model, but the decoder uses a masking mechanism so that it predicts the next word considering only the preceding words in both training and testing phases. Our generation model obtains close to SoTA performance on E-SNLI compared against WT5 (33.15 vs. 33.7 in BLEU) (Narang et al., 2020) .", "cite_spans": [ { "start": 388, "end": 409, "text": "Camburu et al. (2018)", "ref_id": "BIBREF1" }, { "start": 800, "end": 821, "text": "(Narang et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2" }, { "text": "The classification model is a BERT sequence classifier, where a linear layer is applied to the pooled output of BERT encodings (i.e., embedding of the CLS token).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2" }, { "text": "We used k = 1, 2, 4, 8, 16 to generate the training data. The explanation generator trains on groundtruth explanations. For all of our models, we saved the model with the best validation performance during training and did not tune other hyperparameters.
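As a rough illustration of the explain-then-predict classification step described in Section 3.2, the sketch below feeds the premise, the hypothesis, and a generated explanation into a BERT sequence classifier. This is a minimal sketch assuming the HuggingFace transformers library; the input packing and label mapping shown here are illustrative, not our released code.

```python
# Minimal sketch of the explain-then-predict classification step
# (assumes the HuggingFace transformers library; illustrative, not the released code).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
classifier = BertForSequenceClassification.from_pretrained(
    'bert-base-uncased', num_labels=2)  # entailment vs. non-entailment

premise = 'the psychologist by the programmers saw the essayist.'
hypothesis = 'the psychologist saw the essayist.'
# In the full pipeline this string comes from the explanation generation model.
explanation = 'the psychologist by the programmers is still the psychologist.'

# Pack the original input and the generated explanation into one sequence pair;
# a linear head on the pooled [CLS] embedding produces the label logits.
inputs = tokenizer(premise + ' ' + hypothesis, explanation,
                   return_tensors='pt', truncation=True)
with torch.no_grad():
    logits = classifier(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # index of the predicted label
```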
We used a training batch size of 16 and a learning rate of 5e-5 for the explanation generator, and we used a training batch size of 128 and a learning rate of 2e-5 for the classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.3" }, { "text": "Model comparisons. To test if explanations help with learning, we compare with a baseline that only includes the classifier component with the input premise-hypothesis pair (hence \"label-only\"). We also consider a baseline that does not update with the k samples in the training set (hence \"no training\") and a majority baseline (\"majority\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.3" }, { "text": "In addition, we compare the vanilla BERT model with a BERT model fine-tuned on E-SNLI during both generation and classification to investigate whether pretraining on E-SNLI helps with the HANS task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.3" }, { "text": "We first look at the quality of generated explanations using BLEU. Although the generated explanations match groundtruth explanations well based on BLEU, they affect downstream classification negatively in our few-shot learning set-up. We further examine the generated explanations to understand why the predictive performance drops when adding natural language explanations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Generated explanations achieve high BLEU scores based on our groundtruth templated explanations (Figure 1 ). IND vocab explanations can achieve BLEU scores greater than 90 on IND templates and 60 on OOD templates when k = 16. Even OOD vocab explanations can achieve BLEU scores close to 20. In general, the performance grows steadily as k increases for both IND and OOD.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 105, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Quality of Explanations based on BLEU", "sec_num": "4.1" }, { "text": "While the BLEU scores can be quite high, OOD generalization remains a challenge. Unseen vocabulary and templates (Fig. 1b, Fig. 1d ) increase the difficulty in explanation generation. That said, pretraining on E-SNLI improves generation quality for both IND and OOD cases. This improvement on OOD generalization is likely due to exposure to other data during pretraining.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 130, "text": "(Fig. 1b, Fig. 1d", "ref_id": null } ], "eq_spans": [], "section": "Quality of Explanations based on BLEU", "sec_num": "4.1" }, { "text": "We use BLEU to evaluate the quality of generated explanations with regard to groundtruth explanations because it is a commonly used metric to evaluate natural language explanations (Camburu et al., 2018; Rajani et al., 2019) .", "cite_spans": [ { "start": 181, "end": 203, "text": "(Camburu et al., 2018;", "ref_id": "BIBREF1" }, { "start": 204, "end": 224, "text": "Rajani et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Quality of Explanations based on BLEU", "sec_num": "4.1" }, { "text": "Despite the high BLEU scores, learning with the generated explanations does not help the classification task (Fig. 2) . Learning with explanations consistently performs worse than the label-only baseline under both IND and OOD testing scenarios.
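Before turning to why this happens, note that the BLEU comparison in Section 4.1 can be reproduced with standard tooling; the sketch below is one way to score generated explanations against the groundtruth templated explanations. It is a minimal sketch assuming NLTK's corpus_bleu and whitespace tokenization, which may differ from the exact scorer we used.

```python
# Minimal sketch of scoring generated explanations against groundtruth references with BLEU
# (assumes NLTK and whitespace tokenization; the exact scorer used in the paper may differ).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

groundtruth = [
    'the chaplains near the singer are still the chaplains .',
]
generated = [
    'the psychologists are in front of the musician and the strategists helped the writer .',
]

references = [[ref.split()] for ref in groundtruth]  # one reference per example
hypotheses = [hyp.split() for hyp in generated]

bleu = corpus_bleu(references, hypotheses,
                   smoothing_function=SmoothingFunction().method1)
print(f'corpus BLEU: {100 * bleu:.2f}')
```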
Pretraining on E-SNLI does not change this observation either.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 117, "text": "(Fig. 2)", "ref_id": null } ], "eq_spans": [], "section": "Predictive Performance", "sec_num": "4.2" }, { "text": "The only positive result we find is that pretraining helps with OOD generalization. (Figure 2: x-axis shows the number of samples per template, while y-axis shows the accuracy in label prediction; learning from explanations is always below the label-only baseline.) Models pretrained on E-SNLI give better results than plain", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 152, "text": "Figure 2:", "ref_id": null } ], "eq_spans": [], "section": "Predictive Performance", "sec_num": "4.2" }, { "text": "BERT (Fig. 2) . This finding aligns with the positive effect of pretraining on explanation generation. We also observe that testing on groundtruth explanations boosts performance drastically. This suggests that groundtruth explanations give clues for the label, but generated explanations do not capture this information.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 13, "text": "(Fig. 2)", "ref_id": null } ], "eq_spans": [], "section": "Predictive Performance", "sec_num": "4.2" }, { "text": "To understand why explanations are not helpful, we introduce two new metrics to evaluate the effectiveness of explanation generation. We measure how often the generated explanations contain hallucinated entities, namely professions (i.e., people) and locations that do not show up in the input, and we measure how well the label-indicating phrase \"we do not know\" is generated. We present results on explanations generated by the BERT model and the E-SNLI-pretrained model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Explanations Do Not Help?", "sec_num": "4.3" }, { "text": "An explanation contains a hallucinated entity if there is an entity that never shows up in the original input. These hallucinated entities will likely hinder predictive performance when models make predictions based on generated explanations. We only count hallucinated professions and locations to avoid false positives due to synonyms used in explanations. That is, we use a conservative estimate of hallucinated keywords in generated explanations by only counting people and locations. We find that hallucinated entities are almost always generated in OOD vocab cases by the BERT model (99% of explanations contain entities that do not appear in the premise or the hypothesis) and the hallucination rate is also high (around 60%) for the E-SNLI-pretrained model. However, the hallucination rate is much lower for IND vocab cases (Fig. 3, Fig. 4 ): it is close to 0 when k = 16 and for the E-SNLI-pretrained model. But when k = 4, we observe a high hallucination rate (> 50%) for IND vocab cases (Fig. 4a) . We also notice that pretraining on E-SNLI leads to models with much lower hallucination rates for all test cases.", "cite_spans": [], "ref_spans": [ { "start": 834, "end": 849, "text": "(Fig. 3, Fig. 4", "ref_id": null }, { "start": 996, "end": 1005, "text": "(Fig. 4a)", "ref_id": null } ], "eq_spans": [], "section": "Why Explanations Do Not Help?", "sec_num": "4.3" }, { "text": "IND vocab, IND template (k = 4) Premise: the managers who the baker addressed brought the technician. Hypothesis: the baker addressed the managers. Original explanations: who in who the baker addressed refers to the managers.
BERT explanations: the artisans that addressed the baker are still the managers. IND vocab, IND template (k = 16) Premise: the analysts in front of the programmers affected the scientist. Hypothesis: the analysts affected the scientist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Explanations Do Not Help?", "sec_num": "4.3" }, { "text": "Original explanations: the analysts in front of the programmers are still the analysts. BERT explanations: the analysts in front of the programmers are still the analysts. OOD vocab, IND template (k = 16) Premise: the chaplains near the singer needed the author. Hypothesis: the chaplains needed the author. Original explanations: the chaplains near the singer are still the chaplains. BERT explanations: the psychologists are in front of the musician and the strategists helped the writer, we do not know whether the illustrators helped the writer. \"We do not know\" is a predictive phrase because it is only present in non-entailment examples. We find that when generated explanations contain \"we do not know\", so do the corresponding groundtruth explanations (in other words, precision is 100%). However, when \"we do not know\" is in the groundtruth explanations, it is not necessarily always generated, so the recall is not perfect (Fig. 3c) . In fact, recall decreases as we switch to harder test cases. OOD templates also have a greater negative impact on recall than OOD vocab.", "cite_spans": [], "ref_spans": [ { "start": 934, "end": 943, "text": "(Fig. 3c)", "ref_id": null } ], "eq_spans": [], "section": "Why Explanations Do Not Help?", "sec_num": "4.3" }, { "text": "Finally, we look closely at some of the generated explanations (Table 2) . We observe that models struggle to learn the templates even for the IND templates case. In the easiest case (IND vocab, IND template), although the explanation uses the right template when k = 16, it uses a wrong template when k = 4. Fig. 3a and Fig. 3b show that the BERT model and the E-SNLI-pretrained model (trained with k = 16) hallucinate for OOD vocab. Fig. 3c and Fig. 3d suggest that the explanations fail to include \"we do not know\" for instances with the non-entailment label for OOD vocab and OOD templates (with k = 16). Fig. 4a and Fig. 4b show that both the BERT model and the E-SNLI-pretrained model (trained with k = 4) hallucinate for OOD vocab, and the hallucination rate is slightly worse for OOD templates. Similarly, Fig. 4c and Fig. 4d suggest that the explanations fail to include \"we do not know\" for instances with the non-entailment label for OOD vocab and OOD templates (with k = 4). Once we switch from IND vocab,", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 72, "text": "(Table 2)", "ref_id": "TABREF3" }, { "start": 340, "end": 359, "text": "Fig. 3a and Fig. 3b", "ref_id": null }, { "start": 466, "end": 473, "text": "Fig. 3c", "ref_id": null }, { "start": 478, "end": 485, "text": "Fig. 3d", "ref_id": null }, { "start": 640, "end": 647, "text": "Fig. 4a", "ref_id": null }, { "start": 652, "end": 659, "text": "Fig. 4b", "ref_id": null }, { "start": 841, "end": 848, "text": "Fig. 4c", "ref_id": null }, { "start": 853, "end": 860, "text": "Fig.
4d", "ref_id": null } ], "eq_spans": [], "section": "Why Explanations Do Not Help?", "sec_num": "4.3" }, { "text": "IND template to OOD vocab, IND template, even the k = 16 models fail to learn which template should be used to generate explanations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Explanations Do Not Help?", "sec_num": "4.3" }, { "text": "We construct a HANS-based dataset with explanations. On this dataset, we find natural language explanations do not help few-shot NLI to generate out-of-domain under an EXPLAINTHENPREDICT framework. While the genearted explanations obtain high BLEU scores, they do not learn information crucial for downstream classification. Our generation model is close to the SoTA model, yet it still generates nonsensical explanations. Better metrics for explanation evaluation and explanation generation models are key to success for learning with natural language explanations to be effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "We thank anonymous reviewers for their valuable feedbacks. We thank members of the Chicago Hu-man+AI Lab for their insightful suggestions. We thank Tom Mccoy, one author for the HANS paper, for a detailed explanation on their data when we reached out. We thank techstaff members at the University of Chicago CS department for their technical support. This work is supported in part by research awards from Amazon, IBM, Salesforce, and NSF IIS-2126602.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "We pretrain the BERT model on E-SNLI for 5 epochs and evaluate at every epoch. The model with best dev performance is saved as the final E-SNLI model that we use as the initial model in few-shot learning.On the E-HANS dataset, we run 2000 steps to train the generation model and evaluate every 200 steps. We choose this number because the best dev performance is usually achieved within 2000 steps. As for the explain-then-predict classifier, we run 200 training steps and evaluate every 4 steps because the model quickly reaches best dev performance as training starts. On the other hand, label-only classifier takes more steps in learning. We train for 1000 steps and evaluate every 50 steps.It takes around 30 minutes to train a generation model and 10 minutes to train a classification model on our machine (with 4 gpus).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Replicability Details", "sec_num": null }, { "text": "Premise: the scientists that talked thanked the psychotherapists. Hypothesis: the scientists thanked the psychotherapists. Explanation: the scientists that talked are still the scientists.Premise: if the psychologists ran, the programmers existed. Hypothesis: the psychologists ran. Explanation: the programmers existed if the psychologists ran, we do not know whether the psychologists ran.Premise: though the president voted, the musician existed. Hypothesis: the president voted. Explanation: though suggests the president voted happened.Premise: the scientist was addressed by the psychotherapist. Hypothesis: the psychotherapist addressed the scientist. Explanation: addressed is the active form of was addressed by, so we swap the scientist and the psychotherapist.Premise: if the directors addressed the illustrators, the president listened. Hypothesis: the directors addressed the illustrators. 
Explanation: the president listened if the directors addressed the illustrators, we do not know whether the directors addressed the illustrators. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Samples from E-HANS", "sec_num": null }, { "text": "Examples of generated explanations on different test types.IND vocab, IND template Premise: the musician by the psychiatrists offended the strategists. Hypothesis: the musician offended the strategists. Original explanations: the musician by the psychiatrists is still the musician. BERT explanations: the strategists offended the musician and the strategists are by the musician, we do not know whether the strategists offended the musician.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Example Generated Explanations", "sec_num": null }, { "text": "Premise: the administrators near the penciller supported the lyricist. Hypothesis: the administrators supported the lyricist. Original explanations: the administrators near the penciller are still the administrators. BERT explanations: the musician is the subject of the relative clause that addressed the illustrators and the programmers encouraged the programmer, we do not know whether the planners encouraged the illustrators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OOD vocab, IND template", "sec_num": null }, { "text": "Premise: the scientists who affected the colorists helped the psychotherapists. Hypothesis: the scientists helped the psychotherapists. Original explanations: the scientists who affected the colorists are still the scientists. BERT explanations: the scientists are the subject of the relative clause who addressed the psychotherapists and the scientists helped the colorists, we do not know whether the scientists helped the psychotherapists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IND vocab, OOD template", "sec_num": null }, { "text": "Premise: the professor who deceived the athletes called the doctors. Hypothesis: the professor called the doctors. Original explanations: the professor who deceived the athletes is still the professor. BERT explanations: the psychiatrists who the president offended is still the psychiatrist. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OOD vocab, OOD template", "sec_num": null }, { "text": "IND vocab, IND template Premise: the directors in front of the analysts avoided the designer. Hypothesis: the directors avoided the designer. Original explanations: the directors in front of the analysts are still the directors. BERT explanations: the directors in front of the analysts are still the directors.OOD vocab, IND template Premise: the chaplains near the singer needed the author. Hypothesis: the chaplains needed the author. Original explanations: the chaplains near the singer are still the chaplains. BERT explanations: the psychologists are in front of the musician and the strategists helped the writer, we do not know whether the illustrators helped the writer.IND vocab, OOD template Premise: the technician that thanked the planners encouraged the worker. Hypothesis: the technician encouraged the worker. Original explanations: the technician that thanked the planners is still the technician. 
BERT explanations: the technician that encouraged the worker is still the technician, so the technician encouraged the worker and we do not know whether the technician encouraged the worker.OOD vocab, OOD template Premise: the senators that recognized the nurses recommended the chaplains. Hypothesis: the senators recommended the chaplains. Original explanations: the senators that recognized the nurses are still the senators. BERT explanations: the presidents are the subject of the relative clause that offended the psychiatrists and the presidents advised the stylists, we do not know whether the officers advised the bakers. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "124", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Where is your evidence: Improving factchecking by justification modeling", "authors": [ { "first": "Savvas", "middle": [], "last": "Tariq Alhindi", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Petridis", "suffix": "" }, { "first": "", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)", "volume": "", "issue": "", "pages": "85--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tariq Alhindi, Savvas Petridis, and Smaranda Mure- san. 2018. Where is your evidence: Improving fact- checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERifi- cation (FEVER), pages 85-90.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "e-snli: Natural language inference with natural language explanations", "authors": [ { "first": "Oana-Maria", "middle": [], "last": "Camburu", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lukasiewicz", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "9539--9549", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oana-Maria Camburu, Tim Rockt\u00e4schel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natu- ral language inference with natural language explana- tions. In Advances in Neural Information Processing Systems, pages 9539-9549.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Don't take the easy way out: Ensemble based methods for avoiding known dataset biases", "authors": [ { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.03683" ] }, "num": null, "urls": [], "raw_text": "Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. 
arXiv preprint arXiv:1909.03683.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Towards realistic practices in lowresource natural language processing: the development set", "authors": [ { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katharina Kann, Kyunghyun Cho, and Samuel R Bow- man. 2019. Towards realistic practices in low- resource natural language processing: the develop- ment set. In EMNLP.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Adversarial filters of dataset biases", "authors": [ { "first": "Swabha", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1078--1088", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Le Bras, Swabha Swayamdipta, Chandra Bha- gavatula, Rowan Zellers, Matthew Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. In International Conference on Machine Learning, pages 1078-1088. PMLR.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "authors": [ { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.01007" ] }, "num": null, "urls": [], "raw_text": "R Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntac- tic heuristics in natural language inference. arXiv preprint arXiv:1902.01007.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Noah Fiedel, and Karishma Malkan. 2020. Wt5?! 
training text-to-text models to explain their predictions", "authors": [ { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.14546" ] }, "num": null, "urls": [], "raw_text": "Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. 2020. Wt5?! training text-to-text models to explain their predictions. arXiv preprint arXiv:2004.14546.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Conditionally adaptive multi-task learning: Improving transfer learning in nlp using fewer parameters & less data", "authors": [ { "first": "Jonathan", "middle": [], "last": "Pilault", "suffix": "" }, { "first": "Amine", "middle": [], "last": "Elhattami", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Pal", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2009.09139" ] }, "num": null, "urls": [], "raw_text": "Jonathan Pilault, Amine Elhattami, and Christopher Pal. 2020. Conditionally adaptive multi-task learning: Improving transfer learning in nlp using fewer param- eters & less data. arXiv preprint arXiv:2009.09139.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Explain yourself! leveraging language models for commonsense reasoning", "authors": [ { "first": "Bryan", "middle": [], "last": "Nazneen Fatema Rajani", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.02361" ] }, "num": null, "urls": [], "raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain your- self! leveraging language models for commonsense reasoning. arXiv preprint arXiv:1906.02361.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "2020. e-fever: Explanations and summaries for automated fact checking", "authors": [ { "first": "Dominik", "middle": [], "last": "Stammbach", "suffix": "" }, { "first": "Elliott", "middle": [], "last": "Ash", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2020 Truth and Trust Online Conference (TTO 2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominik Stammbach and Elliott Ash. 2020. e-fever: Ex- planations and summaries for automated fact check- ing. In Proceedings of the 2020 Truth and Trust On- line Conference (TTO 2020), page 32. Hacks Hack- ers.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "2020. 
Dataset cartography: Mapping and diagnosing datasets with training dynamics", "authors": [ { "first": "Swabha", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Yizhong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2009.10795" ] }, "num": null, "urls": [], "raw_text": "Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith, and Yejin Choi. 2020. Dataset cartography: Map- ping and diagnosing datasets with training dynamics. arXiv preprint arXiv:2009.10795.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Teach me to explain: A review of datasets for explainable nlp", "authors": [ { "first": "Sarah", "middle": [], "last": "Wiegreffe", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Marasovi\u0107", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Wiegreffe and Ana Marasovi\u0107. 2021. Teach me to explain: A review of datasets for explainable nlp. ArXiv:2102.12060.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Figure 3: Fig. 3a and Fig. 3b show that the BERT model and the E-SNLI-pretrained model (trained with k = 16) hallucinate for OOD vocab. Fig. 3c and Fig. 3d suggest that the explanations fail to include \"we do not know\" for instances with the non-entailment label for OOD vocab and OOD templates (with k = 16).", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Recall of generating \"we do not know\". (E-SNLI) Figure 4:", "uris": null, "type_str": "figure" }, "TABREF0": { "content": "
The average lengths of the premise and hypothesis are 8.8 and 4.4 tokens, and the average length of the natural language explanation is 13.3 tokens.
", "html": null, "num": null, "text": "An example from E-HANS.", "type_str": "table" }, "TABREF1": { "content": "
[Figure 1 plot residue: four panels, (a) IND templates with BERT, (b) OOD templates with BERT, (c) IND templates with ESNLI, (d) OOD templates with ESNLI; x-axis: Training Size Per Template (1, 2, 4, 8, 16); y-axis: BLEU; curves for OOD vocab and IND vocab.]
", "html": null, "num": null, "text": "in that their classifier only takes explanation as input for the classifier. : x\u2212axis shows the number of samples per template, while y\u2212axis shows the BLEU score. BLEU scores are high for IND vocab, IND template instances. Although BLEU drops substantially for both BERT and the E-SNLI pretrained model under OOD vocab and OOD templates, it is still decent (above 40 with E-SNLI).", "type_str": "table" }, "TABREF3": { "content": "", "html": null, "num": null, "text": "Example generated explanations for IND templates cases by the BERT model trained with k = 4, 16. More examples are in Appendix C.", "type_str": "table" } } } }