{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:21.335812Z" }, "title": "Linguistically-Informed Transformations (LIT ): A Method for Automatically Generating Contrast Sets", "authors": [ { "first": "{chuanrong", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" }, { "first": "Lin", "middle": [], "last": "Shengshuo", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" }, { "first": "Leo", "middle": [ "Z" ], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" }, { "first": "Xinyi", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "xywu@uw.edu" }, { "first": "Xuhui", "middle": [], "last": "Zhou \u2665 }", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "xuhuizh@uw.edu" }, { "first": "Shane", "middle": [], "last": "Steinert-Threlkeld", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "shanest@uw.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Although large-scale pretrained language models, such as BERT and RoBERTa, have achieved superhuman performance on indistribution test sets, their performance suffers on out-of-distribution test sets (e.g., on contrast sets). Building contrast sets often requires human-expert annotation, which is expensive and hard to create on a large scale. In this work, we propose a Linguistically-Informed Transformation (LIT) method to automatically generate contrast sets, which enables practitioners to explore linguistic phenomena of interests as well as compose different phenomena. Experimenting with our method on SNLI and MNLI shows that current pretrained language models, although being claimed to contain sufficient linguistic knowledge, struggle on our automatically generated contrast sets. Furthermore, we improve models' performance on the contrast sets by applying LIT to augment the training data, without affecting performance on the original data. 1", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Although large-scale pretrained language models, such as BERT and RoBERTa, have achieved superhuman performance on indistribution test sets, their performance suffers on out-of-distribution test sets (e.g., on contrast sets). Building contrast sets often requires human-expert annotation, which is expensive and hard to create on a large scale. In this work, we propose a Linguistically-Informed Transformation (LIT) method to automatically generate contrast sets, which enables practitioners to explore linguistic phenomena of interests as well as compose different phenomena. Experimenting with our method on SNLI and MNLI shows that current pretrained language models, although being claimed to contain sufficient linguistic knowledge, struggle on our automatically generated contrast sets. Furthermore, we improve models' performance on the contrast sets by applying LIT to augment the training data, without affecting performance on the original data. 
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Large-scale pretrained language models have given remarkable improvements to a wide range of NLP tasks (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019) . However, the results are questionable, since those models take advantage of lexical cues (and other heuristics) in the datasets, which can make them right for wrong reasons (Gururangan et al., 2018; McCoy et al., 2019) . Therefore, the concept of evaluating models on contrast sets (Gardner et al., 2020) and the creation of generalization tests (Kaushik et al., 2020) is critical for building a robust NLP system. Those test sets are usually Figure 1 : Example of BERT making wrong prediction on LIT-transformed data but correct prediction on the original datum. The detailed transformed datum includes a premise modified to past tense and a hypothesis with future tense. The true label correspondingly changes to neutral. LIT also generates multiple transformation results at once for a single original datum; we include only one detailed example here for simplicity of the illustration. manually created, which requires significant human effort, and so is hard to do on a large scale.", "cite_spans": [ { "start": 103, "end": 124, "text": "(Peters et al., 2018;", "ref_id": "BIBREF21" }, { "start": 125, "end": 148, "text": "Howard and Ruder, 2018;", "ref_id": "BIBREF11" }, { "start": 149, "end": 169, "text": "Devlin et al., 2019;", "ref_id": "BIBREF5" }, { "start": 170, "end": 187, "text": "Liu et al., 2019;", "ref_id": "BIBREF17" }, { "start": 188, "end": 209, "text": "Radford et al., 2019)", "ref_id": "BIBREF24" }, { "start": 385, "end": 410, "text": "(Gururangan et al., 2018;", "ref_id": "BIBREF10" }, { "start": 411, "end": 430, "text": "McCoy et al., 2019)", "ref_id": "BIBREF19" }, { "start": 494, "end": 516, "text": "(Gardner et al., 2020)", "ref_id": "BIBREF8" }, { "start": 558, "end": 580, "text": "(Kaushik et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 655, "end": 663, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose Linguistically-Informed Transformations (LIT) to create contrast sets automatically. Our method can perturb the original examples and generate various types of contrastive examples, with a wide choice of linguistic phenomena. Furthermore, our tool supports compositional generalization tests. Namely, researchers can choose transformations from a set of basic linguistic phenomena and modify original sentences with an arbitrary combination of those basic transformations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To demonstrate the utility of LIT, we focus on the natural language inference (NLI) task, a central task to many NLP applications. We apply LIT to generate contrast sets for SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) using seven linguistic phenomena. Human experts' rating show that our generated data is high-quality for basic transformations and for most of the compositional transformations. 
See Appendix B for more details.", "cite_spans": [ { "start": 179, "end": 200, "text": "(Bowman et al., 2015)", "ref_id": "BIBREF1" }, { "start": 210, "end": 233, "text": "(Williams et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With our generated contrast sets, we show that pretrained language models, despite having 'seen' huge quantities of raw text data, fail on simple linguistic perturbations. As shown with an example in Figure 1 , 'decoupling' tenses of the premise and hypothesis breaks BERT's prediction. Our analysis not only shows the inadequate coverage of SNLI and MNLI datasets but also reveals the deficiency of current pretraining-and-finetuning paradigms. Compared to previous work showing that BERT is not robust and fails to generalize on out-of-distribution test sets (McCoy et al., 2019; Zhou et al., 2019; Jin et al., 2019b) , our method provides a more fine-grained picture showing on which phenomenon the models fail. In summary, our contributions are:", "cite_spans": [ { "start": 561, "end": 581, "text": "(McCoy et al., 2019;", "ref_id": "BIBREF19" }, { "start": 582, "end": 600, "text": "Zhou et al., 2019;", "ref_id": "BIBREF29" }, { "start": 601, "end": 619, "text": "Jin et al., 2019b)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 200, "end": 208, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We provide a method for automatically generating phenomenon-specific contrast sets, which helps NLP practitioners better understand pre-trained language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We further apply LIT to augment SNLI and MNLI training data, which improves models' performance on out-of-distribution test sets without sacrificing the models' performance on the in-distribution test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We demonstrate that, in the current pretraining paradigm, traditional linguistic methods are valuable for their ability to measure and promote robustness and consistency in datadriven models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After discussing several areas of related work in Section 2, we describe LIT in step-by-step detail (Section 3). We then apply LIT to SNLI and MNLI (4.1) before evaluating BERT and RoBERTa on both simple (4.2) and compositional (4.4) transformations. We conclude (Section 5) by discussing limitations of LIT and future directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "NLI Model Diagnosis Our work builds on works diagnosing and improving NLI models with automatically augmented instances (McCoy et al., 2019; Min et al., 2020) . 
While most of these works apply simple methods such as templates to generate new instances, which limits the phenomena covered, our method has a wider coverage and can be easily extended.", "cite_spans": [ { "start": 120, "end": 140, "text": "(McCoy et al., 2019;", "ref_id": "BIBREF19" }, { "start": 141, "end": 158, "text": "Min et al., 2020)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Contrast Sets Contrast sets (Gardner et al., 2020) serve to evaluate a models' true capabilities by evaluating on out-of-distribution data since previous in-distribution test sets often have systematic gaps, which inflate models' performance on a task (Gururangan et al., 2018; Geva et al., 2019) . The idea of contrast sets is to modify a test instance to a minimum degree while preserving the original instance's syntactic/semantic artifacts and changing the label. Typically, the authors of the dataset create the contrast set manually. We show that a precision grammar, namely ERG (Copestake and Flickinger, 2000) , can be used to automate this process while preserving the authors' benefit of choosing the perturbations of interest.", "cite_spans": [ { "start": 28, "end": 50, "text": "(Gardner et al., 2020)", "ref_id": "BIBREF8" }, { "start": 252, "end": 277, "text": "(Gururangan et al., 2018;", "ref_id": "BIBREF10" }, { "start": 278, "end": 296, "text": "Geva et al., 2019)", "ref_id": "BIBREF9" }, { "start": 585, "end": 617, "text": "(Copestake and Flickinger, 2000)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Adversarial Datasets Another line of work addressing the problem of current models' superhuman performance on in-distribution test sets focuses on adversarial methods. Bras et al. 2020uses an adversarial filtering algorithm to reduce spurious bias in the dataset to avoid models relying on such patterns. Dinan et al. (2019) shows that a human-in-the-loop adversarial training framework significantly improves models' robustness. And Jin et al. (2019a) shows that current pretrained language models are not robust under simple lexical manipulations. Adversarial methods generate test instances automatically, which can be applied to augment the training data (Jin et al., 2019a; Dinan et al., 2019) . However, these adversarial methods introduce specific models in the loop, which might also bias the test set.", "cite_spans": [ { "start": 305, "end": 324, "text": "Dinan et al. (2019)", "ref_id": "BIBREF6" }, { "start": 430, "end": 452, "text": "And Jin et al. (2019a)", "ref_id": "BIBREF12" }, { "start": 659, "end": 678, "text": "(Jin et al., 2019a;", "ref_id": "BIBREF12" }, { "start": 679, "end": 698, "text": "Dinan et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We propose a new Linguistically-Informed Transformation (LIT) method for large-scale automatic generation of contrast sets. LIT 1) parses the input sentence for both syntax and semantics, 2) produces transformed syntax and semantics for each linguistic phenomenon, 3) generates perturbed sentences corresponding to the transformed syntax/semantics, 4) and selects the best surface sentence for each phenomenon. The full pipeline is shown in Figure 2 . Note that we expand the definition of Figure 2 : General pipeline of LIT system exemplified with one input sentence. The parse result includes both syntax and semantics. 
The transformation rules produce one transformed representation per phenomenon. A set of sentences, all grammatical according to ERG, is generated for each transformed representation. One sentence per phenomenon is selected as the final output sentence. We include two \"Rule\"s for illustration purpose; LIT includes more transformation rules and can be extended for more phenomena. contrast sets in Gardner et al. (2020) . We not only apply our generated contrast sets for evaluation but also for augmentation. We also no longer restrict that the perturbations necessarily lead to the change of the labels.", "cite_spans": [ { "start": 1021, "end": 1042, "text": "Gardner et al. (2020)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 441, "end": 449, "text": "Figure 2", "ref_id": null }, { "start": 490, "end": 498, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Generating Contrast Sets", "sec_num": "3" }, { "text": "LIT contains seven phenomenon-specific transformation rules for modifying the parse results and can be further extended; LIT also allows the composition of different transformation rules for complicated perturbations involving multiple linguistic phenomena.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Contrast Sets", "sec_num": "3" }, { "text": "LIT utilizes an existing grammar implementation for parsing and generation, namely the English Resource Grammar (ERG, Copestake and Flickinger 2000) . ERG is a linguistically motivated broadcoverage grammar for English in the Head-Driven Phrase Structure Grammar framework (HPSG, Pollard and Sag 1994; Sag et al. 2003 ) covering 82.6% of sentences in Wall Street Journal (WSJ) sections in the Penn Treebank (Marcus et al., 1993) . ERG is processing-neutral, meaning that it is not limited to either parsing or generation, and can handle both with a grammar processor. In this work, we use the ACE parser 2 as the processor for ERG grammar.", "cite_spans": [ { "start": 118, "end": 148, "text": "Copestake and Flickinger 2000)", "ref_id": "BIBREF3" }, { "start": 273, "end": 301, "text": "(HPSG, Pollard and Sag 1994;", "ref_id": null }, { "start": 302, "end": 319, "text": "Sag et al. 2003 )", "ref_id": "BIBREF25" }, { "start": 407, "end": 428, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Parse and Generation", "sec_num": "3.1" }, { "text": "The core part and original contributions of our LIT system are the transformation rules; each rule modifies the parse results from ERG and the ACE parser for one linguistic phenomenon. An ERG parse result includes an HPSG syntax tree and a seman-tic representation in Minimal Recursion Semantics (MRS, Copestake et al. 2005 ). An MRS representation consists of a bag of elementary predicates (EPs), each with a handle for reference, a set of handle constraints that specify relations between handles, a top indicating the topmost EP, and an index variable for the event described by the entire sentence. Every variable has a set of features such as tense and numbers indicating the properties of the entities or the events.", "cite_spans": [ { "start": 296, "end": 323, "text": "(MRS, Copestake et al. 
2005", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transformation", "sec_num": "3.2" }, { "text": "In what follows, we illustrate the application of the transformation rule for it-cleft construction applied to the sentence Alice saw Bob.; see Appendix A for a full list of rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation", "sec_num": "3.2" }, { "text": "(1) Original parse result: For each parse result, LIT generates one transformation for each linguistic phenomenon, obtaining a set of simple transformations. Each transformation result in this set can also be fed into LIT as a new base for transformation, allowing different rules to be stacked and producing compositions of transformations. LIT uses all transformation results to generate surface sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation", "sec_num": "3.2" }, { "text": "[ TOP:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation", "sec_num": "3.2" }, { "text": "Selection by ERG: One of the advantages of the LIT system is that the grammar backbone ensures the acceptability of the generated data. The ACE parser only generates grammatical sentences, according to the ERG. Consequently, ill-formed LITtransformed results are automatically rejected at the generation phase without additional efforts from the users and developers; for instance, even though LIT may produce a representation that would correspond to *Alice may will see Bob. 3 , such a surface string will not be generated since the ERG does not accept it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surface Sentence Selection", "sec_num": "3.3" }, { "text": "In practice, ERG slightly overgenerates and allows certain ungrammatical strings. Such cases are likely too rare to affect the overall quality of the dataset and can often be filtered out during postselection. ERG also cannot rule out grammatically well-formed but semantically unnatural sentences, which limits the data quality for certain constructions, especially for passives. As a sanity check, we had expert annotators evaluate the generated data and found high agreement on the grammaticality of generated data; the full details are in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surface Sentence Selection", "sec_num": "3.3" }, { "text": "ERG often permits multiple strings for a single representation since the meaning-to-form mapping is not unique in natural languages. To select the candidate sentence for a specific transformation, LIT employs GPT-2 (Radford et al., 2018) to rank multiple surface sentences generated from the same representation and selects the best one according to their perplexity scores.", "cite_spans": [ { "start": 215, "end": 237, "text": "(Radford et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Post-Selection by Pretrained Language Models:", "sec_num": null }, { "text": "3 * means ungrammatical", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-Selection by Pretrained Language Models:", "sec_num": null }, { "text": "LIT is capable of perturbing sentences for seven linguistic transformations: polar questions, it-clefts, tense and aspect, modality, negation, passives and subject-object swapping. Examples for each transformation are shown in Appendix A. LIT also allows different transformations to be stacked where possible. 
LIT can be further extended for more linguistic transformations, and any extension to the LIT system would also receive all of the aforementioned benefits from ACE and ERG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phenomena Covered", "sec_num": "3.4" }, { "text": "Flexibility: LIT covers certain simple constructions that can be handled with a template-based approach, for instance, the subject-object swapping in McCoy et al. (2019) . LIT is, however, not limited to template-generated examples and is capable of perturbing naturally-occurring instances.", "cite_spans": [ { "start": 150, "end": 169, "text": "McCoy et al. (2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Other Approaches", "sec_num": "3.5" }, { "text": "Plausibility: One special property setting LIT apart from other automatic dataset-construction methods is that LIT uses existing linguistic theories resources as its backbone. The use of ERG enables LIT to control data plausibility without human annotation from scratch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Other Approaches", "sec_num": "3.5" }, { "text": "Modularity: LIT consists of multiple modules: parsing and generation, transformation, and postselection. Extending with more transformation rules, updating ERG (which is still under active development), and including other language models for post-selection can all be handled in the system without major modification to other modules, allowing LIT to be reused for different works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Other Approaches", "sec_num": "3.5" }, { "text": "Model Agnostic: LIT employs traditional linguistic methods for transforming sentences, and the role of language models is limited to selecting the best one from the strings generated by ERG. Contrasting to models trained on specific datasets, the ERG grammar behind LIT does not introduce bias from any specific architecture or dataset. This increases the utility of contrast sets generated with LIT as they are likely to be used for testing datadriven models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Other Approaches", "sec_num": "3.5" }, { "text": "LIT successfully transformed 21.0% of the sentence pairs in MNLI and 19.7% in SNLI, with at least one transformed result for each sentence in the pair. The number of transformed sentence pairs by phenomenon is shown in Neutral A car will be driven by Alice. Piano will be played by Alice. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Coverage", "sec_num": "3.6" }, { "text": "Using LIT, we evaluate whether large pretrained models 'understand' certain linguistic phenomena through testing them on transformed SNLI and MNLI instances. Specifically, we investigate whether BERT and RoBERTa can successfully predict transformed instances on modality (may), tenses (past; future), passivization, it cleft, and their compositions correctly and consistently. In the following section, we first discuss how we set up our tasks, and then we present our results on simple transformations and composed transformations, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For the purpose of this paper, we formulate our experiment settings as follows. 
Specifically, each instance in SNLI/MNLI consists of a hypothesis (e.g., Some men are playing a sport.), a premise (e.g. A soccer game with multiple males playing.) and their corresponding relationship label (entailment). A dual transformed instance is obtained by applying LIT to either the hypothesis or premise, which may or may not change the label of their relationship (i.e., entailment, neutral, and contradiction).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "4.1" }, { "text": "While LIT does not produce laebls after transformation, we apply two label-changing, two labelpreserving, and the relevant compositional transformations listed in Table 1 , with one example per transformation. Note that o;o means we do not modify the instance.", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 170, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Transforming NLI Datasets with LIT", "sec_num": "4.1.1" }, { "text": "\u2022 Modality is used to talk about possibilities and necessities beyond what is actually true and is central to natural language semantics (Kratzer, 1991) . We investigate models' ability to understand the uncertainty expressed in the text by adding 'may' to the instance. Thus, a 'contradiction' or 'entailment' relationship label is changed to 'neutral' logically. Specifically, we consider adding 'may' to the premise (m;o). Note that one can also add 'may' to the hypothesis, which we leave for future work.", "cite_spans": [ { "start": 137, "end": 152, "text": "(Kratzer, 1991)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Transforming NLI Datasets with LIT", "sec_num": "4.1.1" }, { "text": "\u2022 Tenses are used to evaluate sentences at times other than the time of utterance. To probe whether models are able to perform temporal reasoning, we transformed the instances by assigning past tense to hypothesis and future tense to premise (p;f) or vice versa (f;p), which changes the 'contradiction' and 'entailment' label to 'neutral'. Table 3 : Consistency and accuracies of roberta-large over different linguistic phenomena in MNLI. We first train two model separately on the original (ORI) training set and augmented (AUG) training set. Then, we evaluate the trained models on m. and mm. for each phenomena. In this table, we report accuracy on the original sentence pair (Acc@Ori), accuracy on the transformed sentence pair (Acc@Ctr), and the model's consistency. Each accuracy/consistency has the format (m./mm.).", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 347, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Transforming NLI Datasets with LIT", "sec_num": "4.1.1" }, { "text": "\u2022 Label-preserving Transformations do not require inferring the label after transformation, which serves to test models' ability to stay consistent with its prediction after some linguistic perturbations. Here, we experiment on passivization (pa) and it-cleft (i).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transforming NLI Datasets with LIT", "sec_num": "4.1.1" }, { "text": "\u2022 Compositional Transformations help us further evaluate models' 'understanding' of certain linguistic phenomenon. If the models robustly 'understand' phenomenon \u03b1 and \u03b2, composing both should not pose problems to the models. Specifically, we consider adding passivization and it cleft to p;f and f;p transformations. 
They are denoted as p;f +i, p;f +p, f;p +i, and f;p +p respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transforming NLI Datasets with LIT", "sec_num": "4.1.1" }, { "text": "The statistics for our generated dataset are shown in Table 2 . We train two models on two training set. The original (ORI) training set includes untransformed SNLI training data, whilst the augmented (AUG) training set includes LIT-transformed data with all non-conpositional transformations listed in Table 2 . We test both models' accuracy and consistency for all transformations in the same table.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 303, "end": 310, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Transforming NLI Datasets with LIT", "sec_num": "4.1.1" }, { "text": "We use a set of rules to infer the labels of generated pairs (see Table 1 ) based on the types of transformation and the original labels. For instance, originally entailment pairs will turn neutral when 'may' is inserted since the 'may' modality discharges the truth value of original propositions. 'Decoupling' the tenses of originally present-tense pairs for past/future tense pairs also turns the label to neutral, for events at different times are less likely to affect each other.", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 73, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Transforming NLI Datasets with LIT", "sec_num": "4.1.1" }, { "text": "We hypothesize that NLI tasks follow logic rules completely and our following experiments also con-form to that hypothesis, which legitimize our labelinferring rules. However exceptions to such rules may occur: Alice died nevertheless contradicts Alice will be eating, since dying is an event preventing future action of its agent. Annotation by three experts of 100 randomly chosen transformed pairs shows that 79% human agreement with the inferred label, with 92% for label-preserving transformations and 76% for label-changing transformations. Future work will explore refinements of our labelassignment procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transforming NLI Datasets with LIT", "sec_num": "4.1.1" }, { "text": "For pretrained language models, we use models from HuggingFace (Wolf et al., 2019) . In this paper, we use bert-base-uncased, bert-large-uncased (Devlin et al., 2019) , roberta-base, and roberta-large (Liu et al., 2019) . For all models, we use Adam to optimize the parameters with an initial learning rate of 5 \u00d7 10 \u22125 . For all the fine-tuning, we use the same seed and train with batch size 32 for 3 epochs, the same setting used in (Devlin et al., 2019) . In this paper, since we never use the development set for early stopping or hyper-parameter tuning (and since MNLI doesn't have a publicly available test set) , we evaluate our models on the development set. Note that MNLI has matched (m.) and mismatched (mm.) 
test examples, which are derived from the same and different sources as those in the training set, respectively.", "cite_spans": [ { "start": 63, "end": 82, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF28" }, { "start": 145, "end": 166, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 201, "end": 219, "text": "(Liu et al., 2019)", "ref_id": "BIBREF17" }, { "start": 436, "end": 457, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Probing Models", "sec_num": "4.1.2" }, { "text": "To fully evaluate models' performance, we use both accuracy and consistency. test instances, consistency measures how robust a model under certain perturbations. We report accuracy on the original test set (Acc@Ori), accuracy on the generated contrast set (Acc@Ctr), and the consistency score (defined below). Note that test sets for different phenomena might be different since we only choose the test instances to be included for each phenomenon if LIT produces contrast instances corresponding to the phenomenon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.1.3" }, { "text": "Consistency In addition to using accuracy to measure models' performance, recent research pays attention to consistency, which provides another perspective to probe models' competence in the real world (Trichelair et al., 2018; Zhou et al., 2019; Gardner et al., 2020 ). If a model is robust for the given task, then its performance on original and transformed data should be consistent. For instance, a human is expected to be consistent over the understanding of both a simple sentence and its it-cleft counterpart. We thus measure consistency by comparing the model's prediction on original and transformed data. We define consistency for a dual test instance as the match between labels assigned on original and transformed data instances. Specifically, we define the model to be consistent if a model makes the same label prediction (whether correct or not) for a dual test instance as for the original, and inconsistent otherwise. 4 We evaluate the model consistency for each type of linguistic transformation to investigate the models' robustness to different linguistic phenomena, and to examine the differences between the difficulties of different linguistic structures for the models.", "cite_spans": [ { "start": 202, "end": 227, "text": "(Trichelair et al., 2018;", "ref_id": "BIBREF26" }, { "start": 228, "end": 246, "text": "Zhou et al., 2019;", "ref_id": "BIBREF29" }, { "start": 247, "end": 267, "text": "Gardner et al., 2020", "ref_id": "BIBREF8" }, { "start": 937, "end": 938, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.1.3" }, { "text": "By perturbing the test instances with our predefined transformations, we aim to probe pre-trained language models' relevant linguistic knowledge and robustness towards those transformations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simple Transformations", "sec_num": "4.2" }, { "text": "As shown in Table 3 , RoBERTa, trained on ORI of MNLI, performs worse on contrast sets, especially for label-changing transformations. Labelpreserving transformations do not hurt models' performance as much as label-changing transformations. We observed similar trends for other models (see Appendix C. 
This observation is aligned with (McCoy et al., 2019)), which suggests that models are relying on lexical overlaps to infer the relationship between premise and hypothesis.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Simple Transformations", "sec_num": "4.2" }, { "text": "Another observation is that RoBERTa does not achieve high consistency in any of the simple transformations. The poor and inconsistent performance of RoBERTa on our contrast sets shows that even though the model can perform very well on the indistribution test set, there is still a systematic gap for future models to overcome.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simple Transformations", "sec_num": "4.2" }, { "text": "Having shown that pre-trained language models do not generalize well to our generated contrast sets, we ask whether we can 'teach' models to recognize those phenomena and make correct predictions accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying LIT for Data Augmentation", "sec_num": "4.3" }, { "text": "We do this by fine-tuning models on the augmented training data together with the original data. As shown in Table 4 , we observe that, when training on the augmented data, models preserve their performance on the original test set while improving significantly on the out-of-distribution test sets.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Applying LIT for Data Augmentation", "sec_num": "4.3" }, { "text": "Taking a closer look over the specific phenomenon in Table 3 , models' performance increases significantly on label-changing contrast sets. This indicates that models improve in terms of 'understanding' the role of modality (may) and tenses in natural language inference. Arguably, models may simply memorize the 'trick' that modality (may) and tenses (past to future) are associated with label 'neutral.' However, we successfully show that we could enable models to learn those 'tricks' through data augmentation. Future work will probe whether models fine-tuned on our augmented data are relying on such heuristics.", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 60, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Applying LIT for Data Augmentation", "sec_num": "4.3" }, { "text": "The models' performance also increases slightly for label-preserving transformations. However, their consistency does not increase for every transformation, which suggests that data augmentation alone may not suffice for building robust models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying LIT for Data Augmentation", "sec_num": "4.3" }, { "text": "We further investigate the models' performance when multiple transformation rules are composed together and applied to a single sentence. We probe models fine-tuned on the original dataset and the dataset augmented with only simple transformations with our compositional test sets. If a model learns the linguistic phenomenon systematically, it should perform well on these compositional transformations even without training. This resembles the zero-shot tests on tasks like SCAN (Lake and Baroni, 2018), but applied to naturally occurring linguistic data. 
5 The bottom-right quadrant of Table 3 shows that RoBERTa performs very well on compositional transformations when it is fine-tuned only on simple transformations, in some cases (p;f + pa) even performing better than on the simple transformation data. Again, we observed similar results across all models (see Appendix C). This suggests that it has learned something systematic about the transformations in the augmented dataset.", "cite_spans": [ { "start": 558, "end": 559, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 589, "end": 596, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Compositional Transformations", "sec_num": "4.4" }, { "text": "For both p;f and f;p, RoBERTa performs worse when additionally composing with it-clefts than with passivization. This suggests that there are differences in the level of systematicity learned for the different transformations, a phenomenon which future work will investigate in more detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional Transformations", "sec_num": "4.4" }, { "text": "With LIT, we reveal that current high-performance NLI models still suffer from understanding simple linguistic phenomena. They can be trained to un-derstand these phenomena in a way that appears systematic. In the remainder, we discuss the limitations of LIT, applying LIT to investigate the systematic deficiency of current large-scale datasets, and potential applications of LIT to tasks other than NLI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Analysis", "sec_num": "5" }, { "text": "One major limitation of LIT is the dependency on ERG, which took more than twenty years of human labor and is specifically for English. It is possible to swap ERG/ACE parser with data-driven parsers and generators trained on semantic graphbanks, including the DeepBank (Flickinger et al., 2012) which uses the same representation frameworks, potentially extending the method to other languages where a broad-coverage hand-crafted grammar is unavailable. Using data-driven models, however, does re-introduce possible model bias and uncertainty of robustness. Nevertheless, once such a resource is available, LIT provides a method of transforming sentences for data augmentation and integrating linguistic knowledge into a data-driven NLP pipeline.", "cite_spans": [ { "start": 269, "end": 294, "text": "(Flickinger et al., 2012)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations of LIT", "sec_num": "5.1" }, { "text": "Future work will also involve expanding the phenomena covered by LIT by generating new transformation rules (cf. 3.4). One potential extension is the insertion of control and raising verbs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations of LIT", "sec_num": "5.1" }, { "text": "(6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations of LIT", "sec_num": "5.1" }, { "text": "Alice voted for Bob. a. Alice seemed to have voted for Bob. b. Alice wished to vote for Bob. c. Alice persuaded Carol to vote for Bob.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations of LIT", "sec_num": "5.1" }, { "text": "LIT also has a limited coverage, successfully transforming about 20% of the instances in SNLI (see Section 3.4). The limited coverage may introduce bias in the generated dataset; for instance, the ERG grammar is more likely to fail when parsing complicated sentences. 
Nevertheless, we provide a proof of conept that the method can be used to augment data and probe for understanding of the linguistic phenomena of interest here; a higher recall grammar will only improve the situation, and can be easily integrated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations of LIT", "sec_num": "5.1" }, { "text": "In addition to constructing contrast sets, we also used LIT to directly analyze the sentence types in the transformable portion of SNLI and MNLI to investigate the effects of data bias on pretrained models probed in our work. For MNLI, we found that 46.4% sentences are in present tense, 32.2% in past tense and only 2.95% in future tense; 7.27% sentences are passive, 0.580% have may modality and 0.227% are it-cleft sentences. We found no passive/future or future/passive tense pairs. The lack of sentences with may modality and mismatched tense pairs may account for the low performance for those transformations before fine-tuning on them. It-cleft transformation does not change the meaning and labels, which may explain the high performance despite its rarity in the original data. Note that LIT can only detect linguistic phenomena in sentences parsable with ERG (see Section 3.6), but such functionality can still provide important insights on datasets and can be further explored in future works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysing Sentence Types in Datasets", "sec_num": "5.2" }, { "text": "We propose Linguistically-Informed Transformations (LIT), a general method to generate contrast sets using an existing linguistic resource. We apply LIT to transform NLI datasets and evaluate current state-of-the-art NLI models. We reveal the systematic gap between current NLI models and an ideal NLI model for NLP practice, which comes from the inadequate coverage of the linguistic phenomenon of SNLI and MNLI. We further show that models can be further improved by using LIT to augment the training data. Furthermore, models fine-tuned on simple transformations perform very well on compositional transformations, suggesting that fine-tuning provides some systematic understanding of these phenomena.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://sweaglesw.org/linguistics/ace/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that this contrasts with whatGardner et al. (2020) call contrast consistency, where both predictions additionally have to be both correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Andreas (2020) for a complementary, heuristic-driven approach to compositional data augmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Good-enough compositional data augmentation", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7556--7566", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.676" ] }, "num": null, "urls": [], "raw_text": "Jacob Andreas. 2020. Good-enough compositional data augmentation. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7556-7566, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/D15-1075" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adversarial filters of dataset biases", "authors": [ { "first": "Swabha", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Zellers", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Le Bras, Swabha Swayamdipta, Chandra Bha- gavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An open source grammar development environment and broad-coverage English grammar using HPSG", "authors": [ { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann Copestake and Dan Flickinger. 2000. An open source grammar development environment and broad-coverage English grammar using HPSG. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece. European Language Resources As- sociation (ELRA).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Minimal Recursion Semantics: An Introduction", "authors": [ { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "Ivan", "middle": [ "A" ], "last": "Sag", "suffix": "" } ], "year": 2005, "venue": "Research on Language and Computation", "volume": "3", "issue": "2-3", "pages": "281--332", "other_ids": { "DOI": [ "10.1007/s11168-006-6327-9" ] }, "num": null, "urls": [], "raw_text": "Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. 
Minimal Recursion Semantics: An Introduction. Research on Language and Com- putation, 3(2-3):281-332.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Build it break it fix it for dialogue safety: Robustness from adversarial human attack", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Humeau", "suffix": "" }, { "first": "Bharath", "middle": [], "last": "Chintagunta", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/d19-1461" ] }, "num": null, "urls": [], "raw_text": "Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial hu- man attack. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deepbank. a dynamically annotated treebank of the wall street journal", "authors": [ { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Valia", "middle": [], "last": "Kordoni", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 11th International Workshop on Treebanks and Linguistic Theories", "volume": "", "issue": "", "pages": "85--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Flickinger, Yi Zhang, and Valia Kordoni. 2012. Deepbank. a dynamically annotated treebank of the wall street journal. 
In Proceedings of the 11th In- ternational Workshop on Treebanks and Linguistic Theories, pages 85-96.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Evaluating nlp models via contrast sets", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Basmova", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Bogin", "suffix": "" }, { "first": "Sihao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Dheeru", "middle": [], "last": "Dua", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Ananth", "middle": [], "last": "Gottumukkala", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Ilharco", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nel- son F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Quan Zhang, and Ben Zhou. 2020. Evaluating nlp models via contrast sets. ArXiv, abs/2004.02709.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets", "authors": [ { "first": "Mor", "middle": [], "last": "Geva", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1161--1166", "other_ids": { "DOI": [ "10.18653/v1/D19-1107" ] }, "num": null, "urls": [], "raw_text": "Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an inves- tigation of annotator bias in natural language under- standing datasets. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1161-1166, Hong Kong, China. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Annotation artifacts in natural language inference data", "authors": [ { "first": "Swabha", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Bowman", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "107--112", "other_ids": { "DOI": [ "10.18653/v1/N18-2017" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "Jeremy", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "328--339", "other_ids": { "DOI": [ "10.18653/v1/P18-1031" ] }, "num": null, "urls": [], "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Is bert really robust? a strong baseline for natural language attack on text classification and entailment", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Joey", "middle": [ "Tianyi" ], "last": "Zhou", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019a. Is bert really robust? a strong baseline for natural language attack on text classi- fication and entailment.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Is bert really robust? natural language attack on text classification and entailment", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Joey", "middle": [ "Tianyi" ], "last": "Zhou", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019b. 
Is bert really robust? natural lan- guage attack on text classification and entailment. ArXiv, abs/1907.11932.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning the difference that makes a difference with counterfactually-augmented data", "authors": [ { "first": "Divyansh", "middle": [], "last": "Kaushik", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Lipton", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a differ- ence with counterfactually-augmented data. In Inter- national Conference on Learning Representations.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Modality. in semantics: An international handbook of contemporary research", "authors": [ { "first": "Angelika", "middle": [], "last": "Kratzer", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelika Kratzer. 1991. Modality. in semantics: An international handbook of contemporary research.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks", "authors": [ { "first": "M", "middle": [], "last": "Brenden", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lake", "suffix": "" }, { "first": "", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brenden M. Lake and Marco Baroni. 2018. General- ization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In ICML.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. 
ArXiv, abs/1907.11692.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "authors": [ { "first": "Tom", "middle": [], "last": "McCoy", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3428--3448", "other_ids": { "DOI": [ "10.18653/v1/P19-1334" ] }, "num": null, "urls": [], "raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Syntactic data augmentation increases robustness to inference heuristics", "authors": [ { "first": "Junghyun", "middle": [], "last": "Min", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "McCoy", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Pitler", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. 2020. Syntactic data augmentation increases robustness to inference heuristics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, Washington. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018.
Deep contextualized word representations. In Proc. of NAACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Head-driven phrase structure grammar", "authors": [ { "first": "Carl", "middle": [ "Jesse" ], "last": "Pollard", "suffix": "" }, { "first": "Ivan", "middle": [ "A" ], "last": "Sag", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Jesse Pollard and Ivan A. Sag. 1994. Head-driven phrase structure grammar. Studies in contemporary linguistics. Center for the Study of Language and Information; University of Chicago Press, Stanford: Chicago.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Syntactic Theory: A Formal Introduction", "authors": [ { "first": "Ivan", "middle": [ "A" ], "last": "Sag", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wasow", "suffix": "" }, { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" } ], "year": 2003, "venue": "", "volume": "152", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan A. Sag, Thomas Wasow, and Emily M. Bender. 2003. Syntactic Theory: A Formal Introduction, volume 152 of CSLI Lecture Notes. CSLI Publications.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "How reasonable are common-sense reasoning tasks: A case-study on the winograd schema challenge and swag", "authors": [ { "first": "Paul", "middle": [], "last": "Trichelair", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Emami", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Trischler", "suffix": "" }, { "first": "Kaheer", "middle": [], "last": "Suleman", "suffix": "" }, { "first": "Jackie Chi Kit", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Trichelair, Ali Emami, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2018.
How reasonable are common-sense reasoning tasks: A case-study on the winograd schema challenge and swag.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Rémi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Evaluating commonsense in pretrained language models", "authors": [ { "first": "Xuhui", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Leyang", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Dandan", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2019. Evaluating commonsense in pretrained language models.", "links": null } }, "ref_entries": { "TABREF0": { "text": "TOP: label of the topmost EP; INDEX: the variable associated with the sentential event; RELS: bag of EPs; LBL: label variable for the EP; HCONS: constraints between labels of EPs (qeq denotes a scoping relation). (1) shows the MRS of the original sentence; (2) inserts the it-cleft EP _be_v_itcleft; (3) connects the top handle h0 to the it-cleft EP handle (LBL).
", "type_str": "table", "html": null, "num": null, "content": "
(1) MRS of the original sentence:
[ TOP: h0
  INDEX: e2
  [ e SF: prop TENSE: past ... ]
  RELS: < [ proper_q LBL: h4 ARG0: x3 ... ]
          [ named LBL: h7 ARG0: x3 CARG: \"Alice\" ]
          [ _see_v_1 LBL: h1 ARG0: e2 ARG1: x3 ARG2: x9 ... ]
          [ proper_q LBL: h10 ARG0: x9 ... ]
          [ named LBL: h13 ARG0: x9 CARG: \"Bob\" ] >
  HCONS: < h0 qeq h1 h5 qeq h7 h11 qeq h13 > ]

(2)-(3) MRS after inserting the it-cleft EP and re-linking the top handle:
[ TOP: h0
  INDEX: e15
  [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ]
  RELS: < [ proper_q LBL: h4 ARG0: x3 ... ]
          [ named LBL: h7 ARG0: x3 CARG: \"Alice\" ]
          [ _see_v_1 LBL: h1 ARG0: e2 ARG1: x3 ARG2: x9 ]
          [ proper_q LBL: h10 ARG0: x9 ... ]
          [ named LBL: h13 ARG0: x9 CARG: \"Bob\" ]
          [ _be_v_itcleft LBL: h14 ARG0: e15 ARG1: x3 ARG2: h1 ] >
  HCONS: < h0 qeq h14 h5 qeq h7 h11 qeq h13 > ]
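To make the three steps above concrete, the following is a minimal sketch of how such an it-cleft insertion could be scripted with pyDelphin and an ACE-compiled ERG grammar. The grammar path, the hard-coded fresh variables (e900, h900), and the helper itself are illustrative assumptions, not LIT's released implementation.

```python
from delphin import ace
from delphin.codecs import simplemrs
from delphin.mrs import MRS, EP, HCons

ERG = 'erg.dat'  # assumed path to an ACE-compiled ERG grammar image

def insert_itcleft(sentence, focus_role='ARG1'):
    # (1) Parse the sentence and take the MRS of the top-ranked reading.
    with ace.ACEParser(ERG) as parser:
        m = parser.interact(sentence).result(0).mrs()
    # The main verb is the EP whose ARG0 is the sentential INDEX (e2 above).
    verb = next(ep for ep in m.rels if ep.args.get('ARG0') == m.index)
    # (2) Build the it-cleft EP (h14 / e15 above). Fresh variables are
    # hard-coded for brevity; a real implementation would mint unused ones.
    e_new, h_new = 'e900', 'h900'
    cleft = EP('_be_v_itcleft', h_new,
               args={'ARG0': e_new,
                     'ARG1': verb.args[focus_role],  # the clefted argument (x3)
                     'ARG2': verb.label})            # the verb's handle (h1)
    # (3) Re-link the top handle h0 to the it-cleft EP's label via qeq.
    hcons = [HCons(m.top, 'qeq', h_new) if hc.hi == m.top else hc
             for hc in m.hcons]
    variables = dict(m.variables)
    variables[e_new] = {'SF': 'prop', 'TENSE': 'pres',
                        'MOOD': 'indicative', 'PROG': '-', 'PERF': '-'}
    transformed = MRS(m.top, e_new, list(m.rels) + [cleft], hcons,
                      variables=variables)
    # Generate surface strings back from the edited MRS.
    with ace.ACEGenerator(ERG) as generator:
        response = generator.interact(simplemrs.encode(transformed))
    return [result['surface'] for result in response.results()]

# insert_itcleft('Alice saw Bob.')  # e.g. ['It is Alice who saw Bob.']
```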
" }, "TABREF1": { "text": "UnchangedIt is Alice who is driving a car.It is Alice who is playing piano. pa;pa Unchanged A car is being driven by Alice. Piano is being played by Alice.", "type_str": "table", "html": null, "num": null, "content": "
Transformation LabelSentence 1Sentence 2
o;oContradiction Alice is driving a car.Alice is playing piano.
i;i
f;pNeutralAlice will be driving a car.Alice was playing piano.
m;oNeutralAlice may be driving a car.Alice is playing piano.
f;p +iNeutralIt is Alice who will be driving a car. It is Alice who was playing piano.
f;p +pa
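The label column above follows simple composition rules: transformations applied identically to both sides (i;i, pa;pa) preserve the original label, while tense or modality mismatches force the label to neutral, and label-preserving add-ons such as +i or +pa leave the base rule's outcome intact. Below is a hedged sketch of one possible encoding of these rules; the dictionary format and function are our illustration, not LIT's actual interface.

```python
# Lookup from the base transformation pair to either a fixed new label
# or 'unchanged' (keep the original NLI label).
LABEL_RULES = {
    ('o', 'o'):   'unchanged',  # identity on both sides keeps the original label
    ('i', 'i'):   'unchanged',  # it-cleft preserves truth conditions
    ('pa', 'pa'): 'unchanged',  # passivization preserves truth conditions
    ('f', 'p'):   'neutral',    # future premise vs. past hypothesis: disjoint times
    ('p', 'f'):   'neutral',
    ('m', 'o'):   'neutral',    # modalized premise weakens entailment/contradiction
}

def new_label(orig_label, transformation):
    # Map e.g. ('contradiction', 'f;p +i') to 'neutral'. Add-ons such as
    # +i or +pa are label-preserving, so only the base pair is consulted.
    base = tuple(part.strip() for part in transformation.split('+')[0].split(';'))
    rule = LABEL_RULES.get(base, 'unchanged')
    return orig_label if rule == 'unchanged' else rule

assert new_label('contradiction', 'f;p +i') == 'neutral'
assert new_label('contradiction', 'pa;pa') == 'contradiction'
```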
" }, "TABREF2": { "text": "Examples for label rules used for determining labels of generated data for different transformations.", "type_str": "table", "html": null, "num": null, "content": "
         # MNLI ex.           # SNLI ex.
         train   m.    mm.    train   dev
o;o      392k    10k   10k    550k    10k
i;i      13k     1k    1k     65k     1k
pa;pa    3k      236   353    16k     586
f;p      3k      221   208    6k      111
p;f      3k      262   260    7k      142
m;o      13k     1k    1k     48k     905
p;f +i   4k      288   303    7k      122
p;f +pa  719     61    82     1k      45
f;p +i   4k      259   270    6k      91
f;p +pa  727     59    72     1k      37
" }, "TABREF3": { "text": "", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF5": { "text": "/84.79 69.47/69.05 90.97 46.96 bert-large-uncased 86.54/86.46 71.28/70.34 91.78 47.72 roberta-base 88.00/87.60 71.95/70.58 91.86 47.72 roberta-large 90.01/90.34 73.78/73.04 92.83 46.34 AUG bert-base-uncased 84.62/84.45 86.60/85.73 90.86 94.34 bert-large-uncased 86.24/86.37 88.14/87.98 91.49 96.00", "type_str": "table", "html": null, "num": null, "content": "
MNLIaug-MNLISNLI aug-SNLI
ORIbert-base-uncased 84.31roberta-base 87.51/87.52 89.66/89.45 92.13 95.05
roberta-large90.14/89.84 91.47/91.04 92.53 95.93
While accuracy
measures how well a model can accurately predict
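As a hedged sketch, numbers like these can be reproduced by scoring an NLI classifier on original and LIT-transformed (premise, hypothesis, label) triples with the HuggingFace transformers library (Wolf et al., 2019). The checkpoint name below is illustrative, and its label order must be checked against model.config.id2label before trusting the results.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NAME = 'textattack/bert-base-uncased-snli'  # hypothetical fine-tuned NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME).eval()

def accuracy(examples):
    # examples: iterable of (premise, hypothesis, gold_label_id) triples,
    # e.g. an original dev set or its LIT-generated contrast set.
    correct = total = 0
    for premise, hypothesis, gold in examples:
        batch = tokenizer(premise, hypothesis, return_tensors='pt', truncation=True)
        with torch.no_grad():
            pred = model(**batch).logits.argmax(dim=-1).item()
        correct += int(pred == gold)
        total += 1
    return 100.0 * correct / total
```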
" }, "TABREF6": { "text": "Accuracy on MNLI and SNLI datasets. MNLI results have the format (m./mm.). SNLI results are on SNLI dev.", "type_str": "table", "html": null, "num": null, "content": "" } } } }