{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:42.580396Z" }, "title": "BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance", "authors": [ { "first": "R", "middle": [ "Thomas" ], "last": "Mccoy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" }, { "first": "Junghyun", "middle": [], "last": "Min", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "linzen@nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we finetuned 100 instances of BERT on the Multigenre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which evaluates syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that the doctor visited the lawyer does not entail the lawyer visited the doctor), accuracy ranged from 0.0% to 66.2%. Such variation is likely due to the presence of many local minima in the loss surface that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we finetuned 100 instances of BERT on the Multigenre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which evaluates syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that the doctor visited the lawyer does not entail the lawyer visited the doctor), accuracy ranged from 0.0% to 66.2%. Such variation is likely due to the presence of many local minima in the loss surface that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Generalization is a crucial component of learning a language. No training set can contain all possible sentences, so learners must be able to generalize to sentences that they have never encountered before. We differentiate two types of generalization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. 
In-distribution generalization: Generalization to examples which are novel but which are drawn from the same distribution as the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Out-of-distribution generalization: Generalization to examples drawn from a different distribution than the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Standard test sets in natural language processing are generated in the same way as the corresponding training set, therefore testing only in-distribution generalization. Current neural architectures perform very well at this type of generalization. For example, on the natural language understanding tasks included in the GLUE benchmark , several Transformer-based models (Liu et al., 2019b,a; Raffel et al., 2020) have surpassed the human baselines from Nangia and Bowman (2019) . However, this strong performance does not necessarily indicate mastery of language. Because of biases in training distributions, it is often possible for a model to achieve strong in-distribution generalization by using shallow heuristics rather than deeper linguistic knowledge. Therefore, evaluating only on standard test sets cannot reveal whether a model has learned abstract properties of language or if it has only learned shallow heuristics.", "cite_spans": [ { "start": 372, "end": 393, "text": "(Liu et al., 2019b,a;", "ref_id": null }, { "start": 394, "end": 414, "text": "Raffel et al., 2020)", "ref_id": "BIBREF26" }, { "start": 455, "end": 479, "text": "Nangia and Bowman (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An alternative evaluation approach addresses this flaw by testing how the model handles particular linguistic phenomena, using datasets designed to be impossible to solve using shallow heuristics. In this line of investigation, which tests out-of-distribution generalization, the results are more mixed. Some works have found successful handling of phenomena such as subject-verb agreement (Gulordava et al., 2018) and filler-gap dependencies (Wilcox et al., 2018) . Other works, however, have illuminated surprising failures even on seemingly simple types of examples (Marvin and Linzen, 2018; . Such results make it clear that there is still much room for improvement in how neural models perform on syntactic structures that are rare in training corpora.", "cite_spans": [ { "start": 390, "end": 414, "text": "(Gulordava et al., 2018)", "ref_id": "BIBREF12" }, { "start": 443, "end": 464, "text": "(Wilcox et al., 2018)", "ref_id": "BIBREF35" }, { "start": 569, "end": 594, "text": "(Marvin and Linzen, 2018;", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we investigate whether the linguistic generalization behavior of a given neural architecture is consistent across multiple instances of that architecture. This question is important because, in order to tell which types of architectures generalize best, we need to know whether suc-cesses and failures of generalization should be attributed to aspects of the architecture or to random luck in the choice of the model's initial weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We investigate this question using the task of natural language inference (NLI). 
We fine-tuned 100 instances of BERT (Devlin et al., 2019) on the MNLI dataset (Williams et al., 2018) . 1 These 100 instances differed only in (i) the initial weights of the classifier trained on top of BERT, and (ii) the order in which training examples were presented. All other aspects of training, including the initial weights of BERT, were held constant. We evaluated these 100 instances on both the in-distribution MNLI development set and the out-of-distribution HANS evaluation set (McCoy et al., 2019), which tests syntactic generalization in NLI models.", "cite_spans": [ { "start": 117, "end": 138, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 159, "end": 182, "text": "(Williams et al., 2018)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We found that these 100 instances were remarkably consistent in their in-distribution generalization accuracy, with all accuracies on the MNLI development set falling in the range 83.6% to 84.8%, and with a high level of consistency on labels for specific examples (e.g., we identified 526 examples that all 100 instances labeled incorrectly). In contrast, these 100 instances varied dramatically in their out-of-distribution generalization performance; for example, on one of the thirty categories of examples in the HANS dataset, accuracy ranged from 4% to 76%. These results show that, when assessing the linguistic generalization of neural models, it is important to consider multiple training runs of each architecture, since models can differ vastly in how they perform on examples drawn from a different distribution than the training set, even when they perform similarly on an in-distribution test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several works have noted that the same architecture can have very different in-distribution generalization across restarts of the same training process (Reimers and Gurevych, 2017, 2018; Madhyastha and Jain, 2019) . Most relevantly for our work, fine-tuning of BERT is unstable for some datasets, such that some runs achieve state-of-the-art results while others perform poorly (Devlin et al., 2019; Phang et al., 2018) . Unlike these past works, we focus on out-of-distribution generalization, rather than in-distribution generalization.", "cite_spans": [ { "start": 152, "end": 173, "text": "Gurevych, 2017, 2018;", "ref_id": null }, { "start": 174, "end": 200, "text": "Madhyastha and Jain, 2019)", "ref_id": "BIBREF20" }, { "start": 364, "end": 385, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF9" }, { "start": 386, "end": 405, "text": "Phang et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "In-distribution generalization", "sec_num": "2.1" }, { "text": "Several other works have noted variation in out-of-distribution syntactic generalization. Weber et al. (2018) trained 50 instances of a sequence-to-sequence model on a symbol replacement task. These instances consistently had above 99% accuracy on the in-distribution test set but varied on out-of-distribution generalization sets; in the most variable case, accuracy ranged from close to 0% to over 90%. Similarly, McCoy et al. (2018) trained 100 instances for each of six types of networks, using a synthetic training set that was ambiguous between two generalizations. 
Some models consistently made the same generalization across runs, but others varied considerably, with some instances of a given architecture strongly preferring one of the two generalizations that were plausible given the training set, while other instances strongly preferred the other generalization. Finally, Li\u0161ka et al. (2018) trained 5000 instances of recurrent neural networks on the lookup tables task. Most of these instances failed on compositional generalization, but a small number generalized well.", "cite_spans": [ { "start": 89, "end": 108, "text": "Weber et al. (2018)", "ref_id": "BIBREF34" }, { "start": 403, "end": 433, "text": "Similarly, McCoy et al. (2018)", "ref_id": null }, { "start": 884, "end": 903, "text": "Li\u0161ka et al. (2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Out-of-distribution generalization", "sec_num": "2.2" }, { "text": "These works on variation in out-of-distribution generalization all used simple, synthetic tasks with training sets designed to exclude certain types of examples. Our work tests if models are still as variable when trained on a natural-language training set that is not adversarially designed. In concurrent work, Zhou et al. also measured variability in out-of-distribution performance for 3 models (including BERT) on 12 datasets (including HANS). Their work has impressive breadth, whereas we instead aim for depth: We analyze the particular categories within HANS to give a fine-grained investigation of syntactic generalization, while Zhou et al. only report overall accuracy averaged across categories. In addition, we fine-tuned 100 instances of BERT, while Zhou et al. only fine-tuned 10 instances. The larger number of instances allows us to investigate the extent of the variability in more detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Out-of-distribution generalization", "sec_num": "2.2" }, { "text": "Many recent papers have sought a deeper understanding of BERT, whether to assess its encoding of sentence structure (Lin et al., 2019; Hewitt and Manning, 2019; Chrupa\u0142a and Alishahi, 2019; Jawahar et al., 2019; Tenney et al., 2019b) ; its representational structure more generally (Abnar et al., 2019); its handling of specific linguistic phenomena such as subject-verb agreement (Goldberg, 2019), negative polarity items (Warstadt et al., 2019) , function words (Kim et al., 2019) , or a variety of psycholinguistic phenomena (Ettinger, 2020) ; its internal workings (Coenen et al., 2019; Tenney et al., 2019a; Clark et al., 2019) ; or its inductive biases (Warstadt and Bowman, 2020) . 
The novel contribution of this work is the focus on variability across a large number of fine-tuning runs; previous works have generally used models without fine-tuning or have used only a small number of fine-tuning runs (usually only one fine-tuning run, or at most ten fine-tuning runs).", "cite_spans": [ { "start": 116, "end": 134, "text": "(Lin et al., 2019;", "ref_id": "BIBREF16" }, { "start": 135, "end": 160, "text": "Hewitt and Manning, 2019;", "ref_id": "BIBREF13" }, { "start": 161, "end": 189, "text": "Chrupa\u0142a and Alishahi, 2019;", "ref_id": "BIBREF3" }, { "start": 190, "end": 211, "text": "Jawahar et al., 2019;", "ref_id": "BIBREF14" }, { "start": 212, "end": 233, "text": "Tenney et al., 2019b)", "ref_id": "BIBREF30" }, { "start": 423, "end": 446, "text": "(Warstadt et al., 2019)", "ref_id": "BIBREF33" }, { "start": 464, "end": 482, "text": "(Kim et al., 2019)", "ref_id": "BIBREF15" }, { "start": 528, "end": 544, "text": "(Ettinger, 2020)", "ref_id": "BIBREF10" }, { "start": 569, "end": 590, "text": "(Coenen et al., 2019;", "ref_id": null }, { "start": 591, "end": 612, "text": "Tenney et al., 2019a;", "ref_id": "BIBREF29" }, { "start": 613, "end": 632, "text": "Clark et al., 2019)", "ref_id": "BIBREF4" }, { "start": 659, "end": 686, "text": "(Warstadt and Bowman, 2020)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic analysis of BERT", "sec_num": "2.3" }, { "text": "We used the task of natural language inference (NLI, also known as Recognizing Textual Entailment; Condoravdi et al., 2003; Dagan et al., 2006 Dagan et al., , 2013 , which involves giving a model two sentences, called the premise and the hypothesis. The model must then output entailment if the premise entails (i.e., implies the truth of) the hypothesis, contradiction if the premise contradicts the hypothesis, or neutral otherwise. For training, we used the training set of the MNLI dataset (Williams et al., 2018) , examples from which are given below:", "cite_spans": [ { "start": 99, "end": 123, "text": "Condoravdi et al., 2003;", "ref_id": "BIBREF6" }, { "start": 124, "end": 142, "text": "Dagan et al., 2006", "ref_id": "BIBREF7" }, { "start": 143, "end": 163, "text": "Dagan et al., , 2013", "ref_id": "BIBREF8" }, { "start": 494, "end": 517, "text": "(Williams et al., 2018)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Task and datasets", "sec_num": "3" }, { "text": "(1) a. Premise: Finally she turned back to him. Figure 1 ).", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 56, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Method 3.1 Task and datasets", "sec_num": "3" }, { "text": "To assess whether a model has learned these heuristics, HANS contains examples where each heuristic makes the right predictions (i.e., where the correct label is entailment) and examples where each heuristic makes the wrong predictions (i.e., where the correct label is non-entailment). A model that has adopted one of the heuristics will output entailment for all examples targeting that heuristic, even when the correct answer is non-entailment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Task and datasets", "sec_num": "3" }, { "text": "All of our models consisted of BERT with a linear classifier on top of it outputting labels of entailment, contradiction, or neutral. We fine-tuned 100 instances of this model on MNLI using the finetuning code from the BERT GitHub repository. 
2 The BERT component of each instance was initialized with the pre-trained bert-base-uncased weights. For evaluation on HANS, we translated outputs of contradiction and neutral into a single non-entailment label, following McCoy et al. (2019). The fine-tuning process proceeded for 3 epochs and modified the weights of both the BERT component and the classifier. Following Devlin et al. 2019, across fine-tuning runs we varied only (i) the random initial weights of the classifier and (ii) the order in which training examples were presented. All other aspects, including the initial pre-trained weights of the BERT component, were held constant.", "cite_spans": [ { "start": 243, "end": 244, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Models and training", "sec_num": "3.2" }, { "text": "The 100 instances were remarkably consistent on in-distribution generalization, with all models scoring between 83.6% and 84.8% on the MNLI development set (Figure 2, left) . Numerical statistics for the performance of our 100 instances of BERT on MNLI and HANS can be found in The instances were also highly consistent in their choice of labels for particular examples (Figure 2 , right); in the rest of this subsection, we provide some quantitative and qualitative analysis of consistency of performance on individual examples.", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 172, "text": "(Figure 2, left)", "ref_id": "FIGREF2" }, { "start": 370, "end": 379, "text": "(Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "In-distribution generalization", "sec_num": "4.1" }, { "text": "Lexical overlap Assume that a premise entails all hypotheses constructed from words in the premise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Definition Example", "sec_num": null }, { "text": "The doctor was paid by the actor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Definition Example", "sec_num": null }, { "text": "\u2212 \u2212\u2212\u2212\u2212 \u2192 WRONG", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Definition Example", "sec_num": null }, { "text": "The doctor paid the actor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Definition Example", "sec_num": null }, { "text": "Assume that a premise entails all of its contiguous subsequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subsequence", "sec_num": null }, { "text": "The doctor near the actor danced. \u2212 \u2212\u2212\u2212\u2212 \u2192 WRONG The actor danced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subsequence", "sec_num": null }, { "text": "Assume that a premise entails all complete subtrees in its parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "If the artist slept, the actor ran. \u2212 \u2212\u2212\u2212\u2212 \u2192 WRONG The artist slept. On average, among any pair of fine-tuned BERT instances, the two members of the pair agreed on the labels of 93.1% of the examples (when considering all three labels of entailment, contradiction, and neutral, rather than the collapsed labels of entailment and non-entailment). To give a sense of consistency across all 100 instances (rather than only among pairs of instances), Figure 2 (right) illustrates how consistent our 100 instances were on their answers to individual examples in the MNLI development set. 
Of the 9815 examples in the set, there were 6526 that all 100 instances labeled correctly, and 526 that all instances labeled incorrectly. Thus, the consistent score of about 84% on the MNLI development set can be partially explained by the fact that there are certain examples that all models answered correctly or that all models answered incorrectly, as models were consistently correct or incorrect on 72% of the examples.", "cite_spans": [], "ref_spans": [ { "start": 447, "end": 455, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "Examples (4) through (6) show some of the 6526 cases that all 100 instances answered correctly: Examples (7) through (12) show some of the 526 cases that all 100 instances answered incorrectly. Some of these examples arguably have incorrect labels in the dataset, such as (7) (because the hypothesis mentions a report which the premise does not mention), so it is unsurprising that models found such examples difficult. Other consistently difficult examples involve areas that one might intuitively expect to be tricky for models trained on natural language, such as world knowledge (e.g., (8) requires knowledge of how long forearms are, and (9) requires knowledge of what nodding is), the ability to count (e.g., (10)), or fine-grained shades of meaning that might require multiple steps of reasoning (e.g., (11) and (12)). Some of the consistently difficult examples have a high degree of lexical overlap yet are not labeled entailment (such as (13)); the difficulty of such examples adds further evidence to the conclusion that these models have adopted the lexical overlap heuristic. Finally, there are some examples, such as (14), for which it is unclear why models find them so difficult. 11 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "On HANS, performance was much more variable than on the MNLI development set. HANS consists of 6 main categories of examples, each of which can be further divided into 5 subcategories. Performance was reasonably consistent on five of these categories, but on the sixth category-lexical overlap examples that are inconsistent with the lexical overlap heuristic-performance varied dramatically, ranging from 5% accuracy to 55% accuracy ( Figure 6 ). Since this is the most variable category, we focus on it for the rest of the analysis.", "cite_spans": [], "ref_spans": [ { "start": 436, "end": 444, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Out-of-distribution generalization", "sec_num": "4.2" }, { "text": "The category of lexical overlap examples that are inconsistent with the lexical overlap heuristic encompasses examples for which the correct label is non-entailment and for which all the words in the hypothesis also appear in the premise but not as a contiguous subsequence. This category has five subcategories; examples and results for each subcategory are in Figure 5 . Chance performance on HANS was 50%; on all subcategories except for passives, accuracies ranged from far below chance to modestly above chance. Models varied considerably even on categories that humans find simple . For example, accuracy on the subject-object swap examples, which can be handled with only rudimentary knowledge of syntax (in particular, the distinction between subjects and objects), ranged from 0% to 66%. 
Overall,", "cite_spans": [], "ref_spans": [ { "start": 362, "end": 370, "text": "Figure 5", "ref_id": "FIGREF11" } ], "eq_spans": [], "section": "Out-of-distribution generalization", "sec_num": "4.2" }, { "text": "Subcase Minimum Maximum Mean Std. dev.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic", "sec_num": null }, { "text": "Untangling relative clauses 0.94 1.00 0.98 0.01 overlap", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical", "sec_num": null }, { "text": "The athlete who the judges saw called the manager. \u2192 The judges saw the athlete.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical", "sec_num": null }, { "text": "Sentences with PPs 0.98 1.00 1.00 0.00 The tourists by the actor called the authors. \u2192 The tourists called the authors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical", "sec_num": null }, { "text": "Sentences with relative clauses 0.97 1.00 0.99 0.01 The actors that danced encouraged the author. \u2192 The actors encouraged the author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical", "sec_num": null }, { "text": "0.72 0.92 0.83 0.05 The secretaries saw the scientists and the actors. \u2192 The secretaries saw the actors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conjunctions", "sec_num": null }, { "text": "Passives 0.99 1.00 1.00 0.00 The authors were supported by the tourists. \u2192 The tourists supported the authors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conjunctions", "sec_num": null }, { "text": "0.93 1.00 0.98 0.02 The actor and the professor shouted. \u2192 The professor shouted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subsequence Conjunctions", "sec_num": null }, { "text": "1.00 1.00 1.00 0.00 Happy professors mentioned the lawyer. \u2192 Professors mentioned the lawyer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adjectives", "sec_num": null }, { "text": "0.95 1.00 1.00 0.01 The author read the book. \u2192 The author read.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Understood argument", "sec_num": null }, { "text": "Relative clause on object 0.98 1.00 0.99 0.01 The artists avoided the actors that performed. \u2192 The artists avoided the actors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Understood argument", "sec_num": null }, { "text": "PP on object 1.00 1.00 1.00 0.00 The authors called the judges near the doctor. \u2192 The authors called the judges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Understood argument", "sec_num": null }, { "text": "Embedded under preposition 0.81 1.00 0.96 0.02 Because the banker ran, the doctors saw the professors. \u2192 The banker ran.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "Outside embedded clause 1.00 1.00 1.00 0.00 Although the secretaries slept, the judges danced. \u2192 The judges danced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "Embedded under verb 0.93 1.00 0.99 0.01 The president remembered that the actors performed. \u2192 The actors performed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "1.00 1.00 1.00 0.00 The lawyer danced, and the judge supported the doctors. 
\u2192 The lawyer danced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conjunction", "sec_num": null }, { "text": "1.00 1.00 1.00 0.00 Certainly the lawyers advised the manager. \u2192 The lawyers advised the manager. The doctors saw the manager.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "Sentences with relative clauses 0.09 0.67 0.33 0.14 The actors called the banker who the tourists saw.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "The banker called the tourists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "Conjunctions 0.12 0.72 0.45 0.15 The doctors saw the presidents and the tourists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "The presidents saw the tourists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "Passives 0.00 0.04 0.01 0.01 The senators were helped by the managers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "The senators helped the managers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "Subsequence NP/S 0.00 0.05 0.02 0.01 The managers heard the secretary resigned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "The managers heard the secretary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "PP on subject 0.00 0.35 0.12 0.07 The managers near the scientist shouted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "The scientist shouted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "Relative clause on subject 0.00 0.23 0.07 0.04 The secretary that admired the senator saw the actor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "The senator saw the actor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "MV/RR 0.00 0.02 0.00 0.00 The senators paid in the office danced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "The senators paid in the office.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "NP/Z 0.02 0.13 0.06 0.02 Before the actors presented the doctors arrived.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "The actors presented the doctors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adverbs", "sec_num": null }, { "text": "Embedded under preposition 0.14 0.70 0.41 0.12 Unless the senators ran, the professors recommended the doctor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "The senators ran.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "Outside embedded clause 0.00 0.03 0.00 0.01 Unless the authors saw the students, the doctors resigned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "The doctors resigned.", "cite_spans": [], 
"ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "Embedded under verb 0.02 0.42 0.17 0.08 The tourists said that the lawyer saw the banker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "The lawyer saw the banker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "Disjunction 0.00 0.03 0.00 0.01 The judges resigned, or the athletes saw the author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "The athletes saw the author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "Adverbs 0.00 0.17 0.06 0.04 Probably the artists saw the authors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "The artists saw the authors. Subject-object swap:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "The doctor visited the lawyer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "The lawyer visited the doctor. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constituent", "sec_num": null }, { "text": "Relative clause:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of instances", "sec_num": null }, { "text": "The actors saw the author who the judge advised. The author saw the judge. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of instances", "sec_num": null }, { "text": "The student was stopped by the doctor. The student stopped the doctor. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passive:", "sec_num": null }, { "text": "The doctors saw the athlete and the judge. The athlete saw the judge. although these models performed consistently on the in-distribution test set, they have nevertheless learned highly variable representations of syntax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conjunction:", "sec_num": null }, { "text": "We have found that models that differ only in their initial weights and the order of training examples can vary substantially in out-of-distribution linguistic generalization. We found this variation even with the vast majority of initial weights held constant (i.e., all the weights in the BERT component of the model). We conjecture that models might be even more variable if the pre-training of BERT were also redone across instances. These results underscore the importance of evaluating models on multiple restarts, as conclusions drawn from a single instance of a model might not hold across instances. Further, these results highlight the importance of evaluating out-of-distribution generalization; since all of our instances displayed similar in-distribution generalization, only their out-of- Figure 6 : Out-of-distribution generalization: Performance on HANS, broken down into six categories of examples, based on the syntactic heuristic that each example targets and whether the example is consistent with the relevant heuristic (i.e., has a correct label of entailment) or inconsistent with the heuristic (i.e., has a correct label of non-entailment). The lexical overlap cases that are inconsistent with the heuristic (lower left plot) are highly variable across instances. 
For numerical results, see Figure 7 .", "cite_spans": [], "ref_spans": [ { "start": 803, "end": 811, "text": "Figure 6", "ref_id": null }, { "start": 1315, "end": 1323, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "distribution generalization illuminates the substantial differences in what they have learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "In stark contrast to the models we have looked at, which generalized in highly variable ways despite being trained on the same set of examples, humans tend to converge to similar linguistic generalizations despite major differences in the linguistic input that they encounter as children (Chomsky, 1965, 1980). This suggests that reducing the generalization variability of NLP models may help bring them closer to human performance in one major area where they still dramatically lag behind humans, namely in out-of-distribution generalization.", "cite_spans": [ { "start": 285, "end": 299, "text": "(Chomsky, 1965", "ref_id": "BIBREF1" }, { "start": 300, "end": 316, "text": "(Chomsky, , 1980", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "How could the out-of-distribution generalization of models be made more consistent? The variability that we have observed likely reflects the presence of many local minima in the loss surface, all of which are equally attractive to our models. This makes the model's choice of a minimum essentially arbitrary and easily affected by the initial weights and the order of training examples. To reduce this variability, then, one approach would be to use models with stronger inductive biases, which can help distinguish between the many local minima. An alternate approach would be to use training sets that better represent a large set of linguistic phenomena, to decrease the probability of there being local minima that ignore certain phenomena. : Results for models trained on MNLI. The MNLI column reports accuracy on the MNLI matched development set, where there are three possible labels (entailment, contradiction, and neutral). The remaining columns are subsets of the HANS dataset, with neutral and contradiction merged into a single label, non-entailment, such that there are only two possible labels: entailment and non-entailment. The examples that are consistent with the heuristics are those that have a correct label of entailment, while the examples that are inconsistent with the heuristics are those with a correct label of non-entailment. All statistics are based on 100 runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "The weights for all 100 fine-tuned models are publicly available at https://github.com/tommccoy1/hans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "github.com/google-research/bert", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to Emily Pitler, Dipanjan Das, and the members of the Johns Hopkins Computation and Psycholinguistics lab group for helpful comments. Any errors are our own. This project is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 
1746891 and by a gift to TL from Google, and it was conducted using computational resources from the Maryland Advanced Research Computing Center (MARCC). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, Google, or MARCC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Blackbox meets blackbox: Representational similarity & stability analysis of neural language models and brains", "authors": [ { "first": "Samira", "middle": [], "last": "Abnar", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Beinborn", "suffix": "" }, { "first": "Rochelle", "middle": [], "last": "Choenni", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "191--203", "other_ids": { "DOI": [ "10.18653/v1/W19-4820" ] }, "num": null, "urls": [], "raw_text": "Samira Abnar, Lisa Beinborn, Rochelle Choenni, and Willem Zuidema. 2019. Blackbox meets blackbox: Representational similarity & stability analysis of neural language models and brains. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 191-203, Florence, Italy. Association for Computa- tional Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Aspects of the Theory of Syntax", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Rules and representations. Behavioral and Brain Sciences", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1980, "venue": "", "volume": "3", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky. 1980. Rules and representations. Be- havioral and Brain Sciences, 3(1):1-15.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Correlating neural and symbolic representations of language", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "" }, { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2952--2962", "other_ids": { "DOI": [ "10.18653/v1/P19-1283" ] }, "num": null, "urls": [], "raw_text": "Grzegorz Chrupa\u0142a and Afra Alishahi. 2019. Corre- lating neural and symbolic representations of lan- guage. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 2952-2962, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "What does BERT look at? 
An analysis of BERT's attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "276--286", "other_ids": { "DOI": [ "10.18653/v1/W19-4828" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Fernanda Vi\u00e9gas, and Martin Wattenberg. 2019. Visualizing and measuring the geometry of BERT. 33rd Conference on Neural Information Processing Systems", "authors": [ { "first": "Andy", "middle": [], "last": "Coenen", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Reif", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Been", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pearce", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Vi\u00e9gas, and Martin Watten- berg. 2019. Visualizing and measuring the geometry of BERT. 33rd Conference on Neural Information Processing Systems.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Entailment, intensionality and text understanding", "authors": [ { "first": "Cleo", "middle": [], "last": "Condoravdi", "suffix": "" }, { "first": "Dick", "middle": [], "last": "Crouch", "suffix": "" }, { "first": "Reinhard", "middle": [], "last": "Valeria De Paiva", "suffix": "" }, { "first": "Daniel", "middle": [ "G" ], "last": "Stolle", "suffix": "" }, { "first": "", "middle": [], "last": "Bobrow", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the HLT-NAACL 2003 Workshop on Text Meaning", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Rein- hard Stolle, and Daniel G. Bobrow. 2003. Entail- ment, intensionality and text understanding. In Pro- ceedings of the HLT-NAACL 2003 Workshop on Text Meaning, pages 38-45.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The PASCAL Recognising Textual Entailment Challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW'05", "volume": "", "issue": "", "pages": "177--190", "other_ids": { "DOI": [ "10.1007/11736790_9" ] }, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual Entail- ment Challenge. 
In Proceedings of the First In- ternational Conference on Machine Learning Chal- lenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual En- tailment, MLCW'05, pages 177-190, Berlin, Hei- delberg. Springer-Verlag.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Recognizing Textual Entailment: Models and Applications", "authors": [ { "first": "Dan", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Fabio", "middle": [ "Massimo" ], "last": "Sammons", "suffix": "" }, { "first": "", "middle": [], "last": "Zanzotto", "suffix": "" } ], "year": 2013, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "6", "issue": "4", "pages": "1--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Mas- simo Zanzotto. 2013. Recognizing Textual Entail- ment: Models and Applications. Synthesis Lectures on Human Language Technologies, 6(4):1-220.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models", "authors": [ { "first": "Allyson", "middle": [], "last": "Ettinger", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "34--48", "other_ids": { "DOI": [ "10.1162/tacl_a_00298" ] }, "num": null, "urls": [], "raw_text": "Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34-48.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Assessing BERT's syntactic abilities", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.05287" ] }, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. 
arXiv preprint arXiv:1901.05287.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Colorless green recurrent networks dream hierarchically", "authors": [ { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1195--1205", "other_ids": { "DOI": [ "10.18653/v1/N18-1108" ] }, "num": null, "urls": [], "raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 3651-3657, Florence, Italy. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Probing what different NLP tasks teach machines about function word comprehension", "authors": [ { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)", "volume": "", "issue": "", "pages": "235--249", "other_ids": { "DOI": [ "10.18653/v1/S19-1026" ] }, "num": null, "urls": [], "raw_text": "Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bow- man, and Ellie Pavlick. 2019. Probing what dif- ferent NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Se- mantics (*SEM 2019), pages 235-249, Minneapolis, Minnesota. Association for Computational Linguis- tics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Open sesame: Getting inside BERT's linguistic knowledge", "authors": [ { "first": "Yongjie", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "241--253", "other_ids": { "DOI": [ "10.18653/v1/W19-4825" ] }, "num": null, "urls": [], "raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Memorize or generalize? Searching for a compositional RNN in a haystack", "authors": [ { "first": "Adam", "middle": [], "last": "Li\u0161ka", "suffix": "" }, { "first": "Germ\u00e1n", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 workshop on Architectures and Evaluation for Generality, Autonomy, and Progress in AI (AEGAP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Li\u0161ka, Germ\u00e1n Kruszewski, and Marco Baroni. 2018. Memorize or generalize? Searching for a compositional RNN in a haystack. 
In Proceedings of the 2018 workshop on Architectures and Evalua- tion for Generality, Autonomy, and Progress in AI (AEGAP).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Multi-task deep neural networks for natural language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4487--4496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "On model stability as a function of random seed", "authors": [ { "first": "Pranava", "middle": [], "last": "Madhyastha", "suffix": "" }, { "first": "Rishabh", "middle": [], "last": "Jain", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "929--939", "other_ids": { "DOI": [ "10.18653/v1/K19-1087" ] }, "num": null, "urls": [], "raw_text": "Pranava Madhyastha and Rishabh Jain. 2019. On model stability as a function of random seed. In Pro- ceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 929- 939, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1192--1202", "other_ids": { "DOI": [ "10.18653/v1/D18-1151" ] }, "num": null, "urls": [], "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks", "authors": [ { "first": "R", "middle": [], "last": "", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 40th Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "2093--2098", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 2093-2098, Madison, WI.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "authors": [ { "first": "R", "middle": [], "last": "", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3428--3448", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Human vs. muppet: A conservative estimate of human performance on the GLUE benchmark", "authors": [ { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4566--4575", "other_ids": { "DOI": [ "10.18653/v1/P19-1449" ] }, "num": null, "urls": [], "raw_text": "Nikita Nangia and Samuel R. Bowman. 2019. Human vs. muppet: A conservative estimate of human performance on the GLUE benchmark.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4566-4575, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "F\u00e9vry", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.01088" ] }, "num": null, "urls": [], "raw_text": "Jason Phang, Thibault F\u00e9vry, and Samuel R Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "338--348", "other_ids": { "DOI": [ "10.18653/v1/D17-1035" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Why comparing single performance scores does not allow to draw conclusions about machine learning approaches", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.09578" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2018.
Why comparing single performance scores does not allow to draw conclusions about machine learning approaches. arXiv preprint arXiv:1803.09578.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "What do you learn from context? Probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding.
In International Conference on Learning Representations.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Can neural networks acquire a structural bias from raw linguistic data?", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "", "middle": [], "last": "Samuel R Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 42nd Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Warstadt and Samuel R Bowman. 2020. Can neural networks acquire a structural bias from raw linguistic data? Proceedings of the 42nd Annual Conference of the Cognitive Science Society.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Investigating BERT's knowledge of language: Five analysis methods with NPIs", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Ioana", "middle": [], "last": "Grosu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Hagen", "middle": [], "last": "Blix", "suffix": "" }, { "first": "Yining", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Alsop", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Bordia", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" }, { "first": "Sheng-Fu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Anhad", "middle": [], "last": "Mohananey", "suffix": "" }, { "first": "Paloma", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Jeretic", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2870--2880", "other_ids": { "DOI": [ "10.18653/v1/D19-1286" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT's knowledge of language: Five analysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2870-2880, Hong Kong, China.
Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The fine line between linguistic generalization and failure in Seq2Seq-attention models", "authors": [ { "first": "Noah", "middle": [], "last": "Weber", "suffix": "" }, { "first": "Leena", "middle": [], "last": "Shekhar", "suffix": "" }, { "first": "Niranjan", "middle": [], "last": "Balasubramanian", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Generalization in the Age of Deep Learning", "volume": "", "issue": "", "pages": "24--27", "other_ids": { "DOI": [ "10.18653/v1/W18-1004" ] }, "num": null, "urls": [], "raw_text": "Noah Weber, Leena Shekhar, and Niranjan Balasubramanian. 2018. The fine line between linguistic generalization and failure in Seq2Seq-attention models. In Proceedings of the Workshop on Generalization in the Age of Deep Learning, pages 24-27, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "What do RNN language models learn about filler-gap dependencies?", "authors": [ { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Morita", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "211--221", "other_ids": { "DOI": [ "10.18653/v1/W18-5423" ] }, "num": null, "urls": [], "raw_text": "Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler-gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211-221, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122.
Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The curse of performance instability in analysis datasets: Consequences, source, and suggestions", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.13606" ] }, "num": null, "urls": [], "raw_text": "Xiang Zhou, Yixin Nie, Hao Tan, and Mohit Bansal. 2020. The curse of performance instability in analysis datasets: Consequences, source, and suggestions. arXiv preprint arXiv:2004.13606.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": ", and statistics for HANS broken down by linguistic construction can be found in Figures 3 and 4. Finally, to see model-by-model results, see https://github.com/tommccoy1/hans.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "The heuristics targeted by the HANS dataset, along with examples of incorrect entailment predictions that these heuristics would lead to. (Figure from McCoy et al. 2019.)", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "In-distribution generalization. Left: Within-instance accuracy on the MNLI development set; all BERT instances had scores near 84%. Right: Across-instance accuracy on individual examples in the MNLI development set; e.g., 66% of the examples were answered correctly by all 100 instances. For numerical results, see Figure 7.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "(4) a. Premise: The new rights are nice enough b. Hypothesis: Everyone really likes the newest benefits c. Label: Neutral (5) a. Premise: This site includes a list of all award winners and a searchable database of Government Executive articles. b. Hypothesis: The Government Executive articles housed on the website are not able to be searched. c. Label: Contradiction (6) a. Premise: You and your friends are not welcome here, said Severn. b. Hypothesis: Severn said the people were not welcome there. c. Label: Entailment", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "(7) a. Premise: Indeed, 58 percent of Columbia/HCA's beds lie empty, compared with 35 percent of nonprofit beds. b. Hypothesis: 58% of Columbia/HCA's beds are empty, said the report. c. Label: Entailment (8) a. Premise: One he broke back to about the length of his forearm. b. Hypothesis: He snapped it until it was just a couple of inches long. c. Label: Contradiction (9) a. Premise: The Kal nodded. b. Hypothesis: The Kal then shook its head side to side. c. Label: Contradiction (10) a. Premise: Load time is divided into elemental and coverage related load time. b. Hypothesis: Load time is comprised of three parts. c. Label: Contradiction", "num": null }, "FIGREF5": { "uris": null, "type_str": "figure", "text": "Results for the HANS subcases for which the heuristics make correct predictions (i.e., where the correct label is entailment). All statistics are based on 100 runs.", "num": null }, "FIGREF6": { "uris": null, "type_str": "figure", "text": "Results for the HANS subcases for which the heuristics make incorrect predictions (i.e., where the correct label is non-entailment).
All statistics are based on 100 runs.", "num": null }, "FIGREF11": { "uris": null, "type_str": "figure", "text": "Accuracy distributions on the subcategories of the non-entailed lexical overlap examples of the HANS dataset (i.e., the examples that are inconsistent with the lexical overlap heuristic). For numerical results, and results for the other 25 subcategories of HANS, see Figures 3 and 4.", "num": null } } } }