{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:22.439772Z"
},
"title": "Test Harder than You Train: Probing with Extrapolation Splits",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Kunz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Link\u00f6ping University",
"location": {}
},
"email": "jenny.kunz@liu.se"
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Link\u00f6ping University",
"location": {}
},
"email": "marco.kuhlmann@liu.se"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Previous work on probing word representations for linguistic knowledge has focused on interpolation tasks. In this paper, we instead analyse probes in an extrapolation setting, where the inputs at test time are deliberately chosen to be 'harder' than the training examples. We argue that such an analysis can shed further light on the open question whether probes actually decode linguistic knowledge, or merely learn the diagnostic task from shallow features. To quantify the hardness of an example, we consider scoring functions based on linguistic, statistical, and learning-related criteria, all of which are applicable to a broad range of NLP tasks. We discuss the relative merits of these criteria in the context of two syntactic probing tasks, part-of-speech tagging and syntactic dependency labelling. From our theoretical and experimental analysis, we conclude that distance-based and hard statistical criteria show the clearest differences between interpolation and extrapolation settings, while at the same time being transparent, intuitive, and easy to control.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Previous work on probing word representations for linguistic knowledge has focused on interpolation tasks. In this paper, we instead analyse probes in an extrapolation setting, where the inputs at test time are deliberately chosen to be 'harder' than the training examples. We argue that such an analysis can shed further light on the open question whether probes actually decode linguistic knowledge, or merely learn the diagnostic task from shallow features. To quantify the hardness of an example, we consider scoring functions based on linguistic, statistical, and learning-related criteria, all of which are applicable to a broad range of NLP tasks. We discuss the relative merits of these criteria in the context of two syntactic probing tasks, part-of-speech tagging and syntactic dependency labelling. From our theoretical and experimental analysis, we conclude that distance-based and hard statistical criteria show the clearest differences between interpolation and extrapolation settings, while at the same time being transparent, intuitive, and easy to control.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The use of contextualised language models such as ELMo and BERT has brought about remarkable performance gains on a wide range of downstream tasks (Peters et al., 2018a; Devlin et al., 2019) ; but the question to what extent these models have acquired linguistic knowledge remains open. One way to investigate this question is through the use of probing classifiers trained to solve diagnostic prediction tasks that are considered to require linguistic information, such as parts-of-speech, syntactic structure, or semantic roles (Belinkov et al., 2017a; Conneau et al., 2018; Tenney et al., 2019) . However, what conclusions can be drawn from probing experiments is disputed. In particular, a central point of debate is how to know whether probes 'decode linguistic knowledge' or simply 'learn to solve the diagnostic task' (Hewitt and Liang, 2019) . We suggest that new methods that define more rigorous and harder challenges are needed to get further insights into the capabilities and limitations of probes and probing methodology.",
"cite_spans": [
{
"start": 147,
"end": 169,
"text": "(Peters et al., 2018a;",
"ref_id": "BIBREF26"
},
{
"start": 170,
"end": 190,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 530,
"end": 554,
"text": "(Belinkov et al., 2017a;",
"ref_id": "BIBREF2"
},
{
"start": 555,
"end": 576,
"text": "Conneau et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 577,
"end": 597,
"text": "Tenney et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 825,
"end": 849,
"text": "(Hewitt and Liang, 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose to analyse probes in the context of an extrapolation setting, where the inputs at test time are deliberately chosen to be 'harder' than the training examples. While machine learning models and neural networks in particular have proved to be very effective learners in interpolation scenarios, where the examples at training time and those at test time are drawn from the same (idealised) underlying distribution, the ability of these models to extrapolate from the training data appears to be limited (Dubois et al., 2020) . At the same time, extrapolation has been proposed as a litmus test for abstract reasoning in neural networks (Barrett et al., 2018) . In the context of probing, we posit that the better the extrapolation capability of a probe, i.e. the higher its performance even in situations where the training and the test examples are substantially different, the more evidence we have for claiming that the probe actually uses abstract linguistic knowledge encoded in the input word representations.",
"cite_spans": [
{
"start": 527,
"end": 548,
"text": "(Dubois et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 660,
"end": 682,
"text": "(Barrett et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To construct extrapolation challenges, we propose a conceptually simple approach where we start from standard probing datasets, stratify them based on the 'hardness' of examples, and then use the 'easy' examples for training and the 'hard' ones for testing ( \u00a7 3) . The central decision in this approach is how to measure 'hardness'. Here we identify different scoring functions based on criteria grounded in linguistic theories, statistical properties of the base dataset, and learning behaviour. We apply these scoring functions to create extrapolation challenges from two standard probing tasks, part-ofspeech tagging and syntactic dependency labelling ( \u00a7 4), and use the results of our experiments to discuss the merits of our approach ( \u00a7 5).",
"cite_spans": [
{
"start": 257,
"end": 263,
"text": "( \u00a7 3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The method that we propose in this paper synthesises several strands of related work:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Probing aims at detecting linguistic knowledge in word representations. While this can be done in a zero-shot setting (Goldberg, 2019; Talmor et al., 2020) or as a structural probe (Hewitt and Manning, 2019) , a dominant approach is to train and evaluate simple classifiers on relevant diagnostic tasks (Belinkov et al., 2017b; Hewitt and Liang, 2019) , where the classifier receives one word representations at a time as its input. This is based on the idea that the accuracy of the trained probe can indicate to what extent the representations encode linguistic knowledge that is useful for the diagnostic task.",
"cite_spans": [
{
"start": 118,
"end": 134,
"text": "(Goldberg, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 135,
"end": 155,
"text": "Talmor et al., 2020)",
"ref_id": null
},
{
"start": 181,
"end": 207,
"text": "(Hewitt and Manning, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 303,
"end": 327,
"text": "(Belinkov et al., 2017b;",
"ref_id": "BIBREF3"
},
{
"start": 328,
"end": 351,
"text": "Hewitt and Liang, 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probing (and its Limitations)",
"sec_num": "2.1"
},
{
"text": "Recent work has questioned the validity of this methodology, suggesting that analysis should shift focus to measuring 'amount of effort' rather than task-based accuracy (Pimentel et al., 2020; Voita and Titov, 2020) . Moreover, many probing tasks are relatively easy to learn with local context and strong independence assumptions. It thus remains unclear whether the probed word representations actually encode linguistic knowledge, contain predictive but superficial features extracted from the words' linear context (Kunz and Kuhlmann, 2020) , or rather provide an effective initialisation for the probing classifier (Prasanna et al., 2020) .",
"cite_spans": [
{
"start": 169,
"end": 192,
"text": "(Pimentel et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 193,
"end": 215,
"text": "Voita and Titov, 2020)",
"ref_id": "BIBREF35"
},
{
"start": 519,
"end": 544,
"text": "(Kunz and Kuhlmann, 2020)",
"ref_id": "BIBREF19"
},
{
"start": 620,
"end": 643,
"text": "(Prasanna et al., 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probing (and its Limitations)",
"sec_num": "2.1"
},
{
"text": "A growing body of research suggests that, while deep neural models can reach remarkable performance in interpolation settings, they often fail to extrapolate, i.e. to generalise to inputs outside the range of the training data. For example, Barrett et al. (2018) show that in visual reasoning, popular models such as ResNets perform at levels barely above a random choice baseline in extrapolation settings. As the ability to extrapolate is generally considered a hallmark of intelligence, such findings raise the question whether deep models are capable of human-like reasoning. Similar concerns come from observations that performance can suffer greatly when models are confronted with adversarial examples (Goodfellow et al., 2015; Jia and Liang, 2017) or challenge sets (Zellers et al., 2018 (Zellers et al., , 2019 . Zellers et al. (2019) suggest that deep models may 'pick up on dataset-specific distributional biases' instead of learning the actual task.",
"cite_spans": [
{
"start": 241,
"end": 262,
"text": "Barrett et al. (2018)",
"ref_id": "BIBREF1"
},
{
"start": 709,
"end": 734,
"text": "(Goodfellow et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 735,
"end": 755,
"text": "Jia and Liang, 2017)",
"ref_id": "BIBREF14"
},
{
"start": 774,
"end": 795,
"text": "(Zellers et al., 2018",
"ref_id": "BIBREF39"
},
{
"start": 796,
"end": 819,
"text": "(Zellers et al., , 2019",
"ref_id": "BIBREF40"
},
{
"start": 822,
"end": 843,
"text": "Zellers et al. (2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpolation and Extrapolation",
"sec_num": "2.2"
},
{
"text": "In the domain of natural language understanding, authors have shown that Transformers lack the capability to extrapolate to longer sequences (Dubois et al., 2020) and number representations of higher values (Weiss et al., 2018) ; and that even large neural models such as RoBERTa can compare ages only within a restricted range (Talmor et al., 2020) . Evidently, test data outside the training distribution is a great challenge, and contextualised language models are easily broken on such data.",
"cite_spans": [
{
"start": 141,
"end": 162,
"text": "(Dubois et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 207,
"end": 227,
"text": "(Weiss et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 328,
"end": 349,
"text": "(Talmor et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpolation and Extrapolation",
"sec_num": "2.2"
},
{
"text": "Most of the aforementioned works on extrapolation and abstraction employ synthetic datasets or adversarial attacks to challenge a model. Here we propose a method based on the stratification of existing probing datasets according to a measure of expected difficulty or 'hardness'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What are Hard Examples?",
"sec_num": "2.3"
},
{
"text": "One way to quantify the difficulty of training examples is to use readability criteria, which are typically motivated on linguistic grounds or with reference to studies on human language processing (Kocmi and Bojar, 2017; Platanios et al., 2019) . A widely used and widely applicable metric is sentence length, which is intuitive and straightforward to measure (Sherman, 1893) , but only weakly correlated with processing complexity (Bailin and Grafstein, 2001 ). There are also many more specific measures, such as the respective averages of parse tree height, length of arcs in syntactic dependency trees, number of noun phrases and number of verb phrases, or word frequency. These measures often inform systems that help authors improve writing quality, and automatically transform texts to make them more understandable or accessibile (Zamanian and Heydari, 2012), but are also used to evaluate systems such as dependency parsers (McDonald and Nivre, 2007; Kulmizev et al., 2019) .",
"cite_spans": [
{
"start": 198,
"end": 221,
"text": "(Kocmi and Bojar, 2017;",
"ref_id": "BIBREF16"
},
{
"start": 222,
"end": 245,
"text": "Platanios et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 361,
"end": 376,
"text": "(Sherman, 1893)",
"ref_id": "BIBREF31"
},
{
"start": 433,
"end": 460,
"text": "(Bailin and Grafstein, 2001",
"ref_id": "BIBREF0"
},
{
"start": 934,
"end": 960,
"text": "(McDonald and Nivre, 2007;",
"ref_id": "BIBREF23"
},
{
"start": 961,
"end": 983,
"text": "Kulmizev et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Criteria",
"sec_num": "2.3.1"
},
{
"text": "Instead of using inherent properties, another way to quantify the hardness of training examples is to look at the effort that a model has to put into learning them. Here we take inspiration from developments in curriculum learning, which moved from heuristic metrics on artificial datasets (Bengio et al., 2009) to learning-specific metrics. In particular, self-paced learning employs the loss of a model to rate and rank the difficulty of examples in a dataset (Kumar et al., 2010; Hacohen and Wein-shall, 2019) . This approach is widely used, but has also been criticised as being inherently modelspecific (Lalor and Yu, 2020) . Other approaches that have been successfully employed in curriculum learning are rankings based on the norms of word embeddings (Liu et al., 2020) and on model uncertainty (Zhou et al., 2020) .",
"cite_spans": [
{
"start": 290,
"end": 311,
"text": "(Bengio et al., 2009)",
"ref_id": "BIBREF4"
},
{
"start": 462,
"end": 482,
"text": "(Kumar et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 483,
"end": 512,
"text": "Hacohen and Wein-shall, 2019)",
"ref_id": null
},
{
"start": 608,
"end": 628,
"text": "(Lalor and Yu, 2020)",
"ref_id": "BIBREF21"
},
{
"start": 759,
"end": 777,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 803,
"end": 822,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning-Based Criteria",
"sec_num": "2.3.2"
},
{
"text": "In this section we present our specific approach to creating extrapolation datasets, and the setup for our empirical evaluation. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "Our word representations come from the English BERT base (uncased) model (Devlin et al., 2019) , accessed via the the Transformers library (Wolf et al., 2020) . We probe on the hidden representations of words in all 13 layers, including the uncontextualised layer 0 as a baseline. For words that BERT tokenises into several word pieces, we use the last piece as the representation for the word.",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 139,
"end": 158,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
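For illustration, a minimal sketch of this extraction step (not the authors' released code; the helper name word_representations and the choice of AutoModel/AutoTokenizer are ours) could look like this:

```python
# Sketch: per-word hidden states from all 13 layers of bert-base-uncased,
# taking the last word piece as the representation of a multi-piece word.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def word_representations(words):
    """words: list of tokens of one sentence; returns a tensor of shape (13, len(words), 768)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states          # tuple of 13 tensors (1, n_pieces, 768)
    last_piece = {}                                  # word index -> position of its last word piece
    for pos, wid in enumerate(enc.word_ids()):
        if wid is not None:
            last_piece[wid] = pos                    # later pieces overwrite earlier ones
    idx = torch.tensor([last_piece[i] for i in range(len(words))])
    return torch.stack([layer[0, idx] for layer in hidden])
```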
{
"text": "The probing classifier is the same in all experiments: a feed-forward network with one hidden layer, 64 hidden units and ReLU activation. We train this classifier with cross-entropy loss for 5 epochs using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001 and a batch size of 64. Our implementation uses PyTorch (Paszke et al., 2017) .",
"cite_spans": [
{
"start": 333,
"end": 354,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probing Classifier",
"sec_num": "3.2"
},
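A minimal PyTorch sketch of a probe with this architecture and training regime (an illustration under the stated hyperparameters, not the released implementation; the names Probe and train_probe are ours):

```python
import torch
from torch import nn

class Probe(nn.Module):
    """Feed-forward probe: one hidden layer with 64 units and ReLU activation."""
    def __init__(self, input_dim, num_classes, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def train_probe(probe, loader, epochs=5, lr=1e-3):
    """Cross-entropy training with Adam (lr 0.001); `loader` yields batches of size 64."""
    optimizer = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(probe(x), y).backward()
            optimizer.step()
    return probe
```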
{
"text": "We use two prototypical diagnostic tasks which have been widely studied in the probing literature: part-of-speech (POS) tagging and syntactic dependency labelling. The training and test data for both tasks comes from the English Web Treebank (EWT) as released by the Universal Dependencies project (Nivre et al., 2020) (v2.5). More specifically, we extract our examples from two 1,000sentence sets S train and S test , randomly sampled from the training and the development section of the EWT, respectively. 2 We write s t i to denote the ith sentence in S t , w t ij to denote its jth word, and",
"cite_spans": [
{
"start": 508,
"end": 509,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "3.3"
},
{
"text": "1 All code necessary to reproduce our experiments is publicly available at https://github.com/jekunz/ extrapolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "3.3"
},
{
"text": "2 We sub-sample the full data to reduce training time and save resources. Preliminary experiments showed the same trends that we report here for the full data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "3.3"
},
{
"text": "x t ij to denote the BERT representation of w ij . We omit the superscript when the base set (training or development) is understood or irrelevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "3.3"
},
{
"text": "T1: Part-of-speech tagging This is our prototypical single-word labelling task. Examples take the form e = (x ij , y ij ), where x ij is the representation of a single word w ij , and y ij is the corresponding gold-standard tag. The POS class of a word captures some of its most basic syntactic properties, and can be predicted with local or even without context information at a high accuracy. For our data, probes trained on contextualised word representations usually show a tagging accuracy above 95%, with the highest-performing layers being the lower middle or middle layers of a model (Peters et al., 2018b; Tenney et al., 2019) .",
"cite_spans": [
{
"start": 592,
"end": 614,
"text": "(Peters et al., 2018b;",
"ref_id": "BIBREF27"
},
{
"start": 615,
"end": 635,
"text": "Tenney et al., 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "3.3"
},
{
"text": "T2: Syntactic dependency labelling In this task, which intends to capture the hierarchical structure of a sentence, we aim to predict the grammatical relation for a given dependency arc. Examples take the form e = ((x ij , x ik ), y ik ), where x ij and x ik are word representations of the head and dependent, respectively, and y ik is the gold-standard dependency label. The performance of simple probes on this task is usually lower than for POS tagging, as the syntactic information that is required to accurately predict the labels is more complex and depends on a larger context. Accuracy can however still exceed 90% in the highest-performing layers, which are usually the higher middle layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "3.3"
},
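To make the two example formats concrete, here is a sketch of how they could be assembled for one annotated sentence (the argument names reps, pos_tags, heads and deprels are illustrative; reps would hold the per-word representations from § 3.1):

```python
def t1_examples(reps, pos_tags):
    """POS tagging (T1): one example (x_ij, y_ij) per word."""
    return [(reps[j], pos_tags[j]) for j in range(len(pos_tags))]

def t2_examples(reps, heads, deprels):
    """Dependency labelling (T2): one example ((x_ij, x_ik), y_ik) per dependency arc.
    heads[k] is the 0-based index of word k's syntactic head (None for the root)."""
    return [((reps[heads[k]], reps[k]), deprels[k])
            for k in range(len(deprels)) if heads[k] is not None]
```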
{
"text": "We next introduce the inventory of measures that we use to quantify the 'hardness' of training examples. Formally, each measure is a real-valued function m whose domain is the set of all taskspecific examples. If m(e) > m(e ), we say that example e is harder than example e .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "3.4"
},
{
"text": "These scoring functions refer to two different notions of length:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length-based Criteria",
"sec_num": "3.4.1"
},
{
"text": "Sentence length (T1, T2) The most basic length is that of the sentence s i from which the example is derived. Using |\u2022| to denote length,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length-based Criteria",
"sec_num": "3.4.1"
},
{
"text": "m(x ij , y ij ) = |s i | (for T1) m((x ij , x ik ), y ik ) = |s i | (for T2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length-based Criteria",
"sec_num": "3.4.1"
},
{
"text": "Arc length (T2) For dependency labelling, we may also consider the length of the dependency arc: m((x ij , x ik ), y ik ) = |j \u2212 k|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length-based Criteria",
"sec_num": "3.4.1"
},
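Both length-based scores are trivial to compute; a sketch with illustrative function names:

```python
def sentence_length_score(sentence):
    """Hardness of any example drawn from `sentence` (T1 and T2): m(e) = |s_i|."""
    return len(sentence)

def arc_length_score(head_index, dep_index):
    """Hardness of a dependency-labelling example (T2): m(e) = |j - k|."""
    return abs(head_index - dep_index)
```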
{
"text": "For part-of-speech tagging, we consider criteria related to the distribution of the tags:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Criteria",
"sec_num": "3.4.2"
},
{
"text": "Tag proportions (T1) Here the hardness score of an example is the inverse relative frequency of the represented word's gold-standard POS tag in the training set. More formally, for a word w ij from S train and a tag t, let f (w ij , t) be the relative frequency of t among all possible tags for w ij ; then m(x ij , y ij ) = 1 \u2212 f (w ij , y ij ). For examples e that represent words which do not occur in S train , we let m(e) = 1; out-of-vocabulary words will thus always yield the hardest examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Criteria",
"sec_num": "3.4.2"
},
{
"text": "Most frequent tag (T1) In a related setup, we consider an example to be 'easy' if its goldstandard tag is the most frequent tag (mft) in the training set, and 'hard' otherwise. Formally, m(x ij , y ij ) = 1\u22121[y ij is the mft for w ij in S train ] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Criteria",
"sec_num": "3.4.2"
},
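A sketch of the two statistical criteria, with tag statistics gathered from S_train (helper names are ours; treating out-of-vocabulary words as maximally hard under the most frequent tag criterion mirrors the tag proportions case and is our assumption):

```python
from collections import Counter, defaultdict

def tag_statistics(train_words, train_tags):
    """For every word form in S_train, count how often each tag occurs."""
    counts = defaultdict(Counter)
    for w, t in zip(train_words, train_tags):
        counts[w][t] += 1
    return counts

def tag_proportions_score(word, gold_tag, counts):
    """m(e) = 1 - f(word, gold_tag); out-of-vocabulary words receive the maximal score 1."""
    if word not in counts:
        return 1.0
    return 1.0 - counts[word][gold_tag] / sum(counts[word].values())

def most_frequent_tag_score(word, gold_tag, counts):
    """m(e) = 0 if the gold tag is the word's most frequent training tag, else 1."""
    if word not in counts:
        return 1.0          # assumption: unseen words count as 'hard'
    mft, _ = counts[word].most_common(1)[0]
    return 0.0 if gold_tag == mft else 1.0
```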
{
"text": "Here we implement ideas from curriculum learning. We first train an ensemble of 10 classifiers on all examples derived from S train . Each classifier has the same architecture and training regime as our probe ( \u00a7 3.2), but uses a different random seed. We then use this ensemble to define the hardness of each example e as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning-based Criteria",
"sec_num": "3.4.3"
},
{
"text": "Sample-specific loss (T1, T2) Here we let m(e) be the sample-specific loss for e, relative to its gold-standard tag or label, averaged over the 10 classifiers in the ensemble.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning-based Criteria",
"sec_num": "3.4.3"
},
{
"text": "Speed of learning (T1, T2) Here we want to classify an example as 'hard' if the probe needs a long time (a large number of updates) to learn it reliably. To implement this idea, at seven specified checkpoints early into training, we let each of the classifiers in the ensemble predict the tag or label of each example e, and define m(e) = 1/(c + 1) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning-based Criteria",
"sec_num": "3.4.3"
},
{
"text": "where c is the total number of correct predictions. For our checkpoints, we use the partially trained classifiers after 2 n batch updates, for 1 \u2264 i \u2264 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning-based Criteria",
"sec_num": "3.4.3"
},
{
"text": "As a consequence, the minimal value for c is 0 (never correctly classified), and the maximal value is 7 \u2022 10 (correctly classified at every checkpoint, by every classifier).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning-based Criteria",
"sec_num": "3.4.3"
},
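A sketch of the two learning-based scores, assuming an ensemble of the 10 fully trained classifiers and, for the speed-of-learning score, a list checkpointed_ensembles holding the 10 partially trained classifiers saved at each of the seven checkpoints (both containers and their names are our framing of the description above):

```python
import torch
from torch import nn

def sample_specific_loss(ensemble, x, y):
    """m(e): cross-entropy loss on example (x, y), averaged over the 10 classifiers.
    x is the input representation; y is the gold label index as a 0-dim tensor."""
    loss_fn = nn.CrossEntropyLoss()
    with torch.no_grad():
        losses = [loss_fn(clf(x.unsqueeze(0)), y.view(1)) for clf in ensemble]
    return torch.stack(losses).mean().item()

def speed_of_learning_score(checkpointed_ensembles, x, y):
    """m(e) = 1 / (c + 1), where c counts correct predictions over all 10 classifiers
    at all seven checkpoints (taken after 2^n batch updates, n = 1..7)."""
    c = 0
    with torch.no_grad():
        for ensemble in checkpointed_ensembles:     # one snapshot of the ensemble per checkpoint
            for clf in ensemble:
                if clf(x.unsqueeze(0)).argmax(dim=-1).item() == int(y):
                    c += 1
    return 1.0 / (c + 1)
```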
{
"text": "The last step of our approach is to use our scoring functions to split the set of all task-specific examples into an 'easy' set and a 'hard' set. Here, for each specific experiment we choose two values m 1 and m 2 and let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Easy Sets and Hard Sets",
"sec_num": "3.5"
},
{
"text": "D easy = {e | m(e) < m 1 } D hard = {e | m(e) > m 2 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Easy Sets and Hard Sets",
"sec_num": "3.5"
},
{
"text": "The difference m 2 \u2212 m 1 denotes the distance between D easy and D hard . The specific criteria according to which we choose the split points vary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Easy Sets and Hard Sets",
"sec_num": "3.5"
},
{
"text": "Linguistic criteria For sentence length we base our choice on the classification of Flesch and Gould (1949) . Specifically, for D easy we use the lengths less than 17 words (m 1 = 17), corresponding to (at most) 'fairly easy' readability, understood by 88% of adults in the referenced study. For D hard we use the lengths greater than 29 words (m 2 = 29), classified as (at least) 'very difficult', understood by 4.5% of adults.",
"cite_spans": [
{
"start": 84,
"end": 107,
"text": "Flesch and Gould (1949)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Easy Sets and Hard Sets",
"sec_num": "3.5"
},
{
"text": "Distributional criteria For the remaining scoring functions, we choose split points based on the empirical distribution of the scores: We let m 1 be the 50th percentile (i.e., the median score), and m 2 be the 75th percentile. The only exception to this rule is for the most frequent tag criterion, as explained in \u00a7 3.4.2. Note that, with our strategies of choosing split points, the sizes of the specific 'easy' and 'hard' sets that we use for each experiment differ from the full set, getting as low as half the number of all examples. To assess the impact of this reduction, in control experiments we randomly sub-sampled the 'standard' training sets down to 50% of their original size, but only observed a moderate drop in accuracy (at most 1%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Easy Sets and Hard Sets",
"sec_num": "3.5"
},
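A sketch of the splitting step, accepting either explicit thresholds or falling back to the 50th/75th percentiles of the empirical score distribution (function and argument names are ours):

```python
import numpy as np

def easy_hard_split(examples, score, m1=None, m2=None):
    """D_easy = {e | m(e) < m1}, D_hard = {e | m(e) > m2}.
    If thresholds are not given, use the 50th / 75th percentiles of the scores."""
    scores = np.array([score(e) for e in examples])
    if m1 is None:
        m1 = np.percentile(scores, 50)
    if m2 is None:
        m2 = np.percentile(scores, 75)
    easy = [e for e, s in zip(examples, scores) if s < m1]
    hard = [e for e, s in zip(examples, scores) if s > m2]
    return easy, hard
```

For the linguistically motivated sentence-length split, the thresholds would be passed explicitly, e.g. easy_hard_split(examples, score, m1=17, m2=29).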
{
"text": "For each experiment, we consider two setups:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.6"
},
{
"text": "\u2022 In the extrapolation setup, we train on the examples in D easy and test on those in D hard .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.6"
},
{
"text": "\u2022 In the control setup, we also test on the examples in D hard , but train on the full set of examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.6"
},
{
"text": "For both setups, we report the mean over 10 random seeds of the best accuracy of each classifier among the 5 epochs for which it was trained. To interpret an experiment, we compare the two accuracy values: If the accuracy in the extrapolation setup is significantly lower than that of the control, we want to conclude that the probe lacks the ability to extrapolate from 'easy' examples, and that there is thus no evidence that the probe makes use of linguistic knowledge in the probed representations. On the other hand, similar scores in the two setups indicate that we have chosen a test set that is hard even for interpolation learning, in which case we do not want to draw this conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.6"
},
{
"text": "For comparison, we also report the mean accuracy in the standard setup, where we train and evaluate on the full datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.6"
},
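The evaluation protocol can be summarised in a short sketch (train_one_epoch, accuracy and make_probe are placeholders for the training loop and metric described in § 3.2; they are not part of the released code):

```python
import statistics
import torch

def mean_best_accuracy(train_set, test_set, make_probe, train_one_epoch, accuracy,
                       seeds=range(10), epochs=5):
    """Mean over random seeds of the best test accuracy reached across the training epochs.
    Extrapolation setup: train_set = D_easy,   test_set = D_hard.
    Control setup:       train_set = full set, test_set = D_hard.
    Standard setup:      train_set = full set, test_set = full test data."""
    results = []
    for seed in seeds:
        torch.manual_seed(seed)
        probe = make_probe()
        best = 0.0
        for _ in range(epochs):
            train_one_epoch(probe, train_set)
            best = max(best, accuracy(probe, test_set))
        results.append(best)
    return statistics.mean(results)
```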
{
"text": "We now present our experimental results for each of the scoring functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The results for sentence length can be seen in Figure 1 . The accuracies for the extrapolation setups are the highest among all scoring functions, and the differences to the standard setups are by far the smallest. Indeed, for part-of-speech tagging (T1) the difference is so small that a large part of it can probably be explained by the decreased number of training examples: the difference between the control and the extrapolation setup is mostly 1-2 points, and never exceeds 3 points. For dependency labelling (T2), the difference is more pronounced, but sentence length remains the measure with the smallest difference between the two setups.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sentence Length",
"sec_num": "4.1"
},
{
"text": "The distributional split criterion gives m 1 = 23 and m 2 = 34, so both the longest sentences in the 'easy' set and the shortest sentences in the 'hard' set are longer than with the linguistic criterion. The linguistically motivated split shows a larger gap between the standard setups and the extrapolation setups. This is particularly clear for T2, with a gap as high as 8 points in layer 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Length",
"sec_num": "4.1"
},
{
"text": "The extrapolation accuracy of the probe based on arc length (Figure 2 ) is comparatively low, suggesting that this setup is more challenging than extrapolation based on sentence length. The control shows that the 'hard' set is clearly harder than the unfiltered test set; but there is an additional substantial accuracy drop in the extrapolation setup.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 69,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Arc Length",
"sec_num": "4.2"
},
{
"text": "When using the distributional split criterion, we get m 1 = 2 and m 2 = 4, and the extrapolation accuracy does not exceed 46% in any layer. As m 1 = 2 results in a training set that only consists of arcs of length 1, we perform an additional experiment with a different split, decreasing the distance between D easy and D hard by setting m 1 = 3. This increases accuracy to at most 62%, which is considerably higher than before but still far below the control, which reaches up to 85% on the 'hard' set. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arc Length",
"sec_num": "4.2"
},
{
"text": "The results for the extrapolation splits based on the most frequent tag and the tag proportions criteria are shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 126,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Most Frequent Tag and Tag Proportions",
"sec_num": "4.3"
},
{
"text": "Splitting based on the most frequent tag criterion leads to an extrapolation setup that is consistently more challenging than the standard setup. We observe a very low accuracy in the first layers, while the higher layers are significantly more predictive. The relative difference in accuracy between the extrapolation setup and the control is also most pronounced in the early layers, although the pattern is less clear in terms of absolute numbers. The gap to the standard (interpolation) setup is substantial: 11-35 points for the control, and 23-42 points for the extrapolation setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Most Frequent Tag and Tag Proportions",
"sec_num": "4.3"
},
{
"text": "When using the tag proportions criterion for the extrapolation split, the 'hard' set is now easier, as around half of the examples have a tag that is the most frequent one for the word form. The simpler nature of this challenge is visible in the results: While the performance of the control only sees a modest increase (especially in the lower layers), the difference between the control and the extrapolation setup shrinks more clearly, presumably because the augmentation of the test set with easier examples has a high proportional effect on the previously very low results of the extrapolation probe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Most Frequent Tag and Tag Proportions",
"sec_num": "4.3"
},
{
"text": "Using the learning-based scoring function, the difference between the control and the extrapolation setup is the largest among all settings. The accuracy of the control is similar to that in the standard setup, suggesting that the 'hard' set may in fact not be (much) harder after all. For the dependency labelling task (T2), control accuracy even slightly exceeds accuracy on the standard set, in all layers but the uncontextualised layer 0. Training the probe on the 'easy' set only, however, has a disastrous effect: in the extrapolation setup, the accuracy drops dramatically. Interestingly, accuracy continues to decrease in higher layers, whereas the typical curve for syntactic probes peaks in the middle layers (Tenney et al., 2019) .",
"cite_spans": [
{
"start": 719,
"end": 740,
"text": "(Tenney et al., 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speed of Learning",
"sec_num": "4.4"
},
{
"text": "With the loss-based split (Figure 5) , the results for the control setups are the lowest among all scoring functions. In this setting, by construction, the 'hard' set consists of the examples with the highest loss, making it challenging even in an interpolation setting. For the tagging task, we see an extreme drop of accuracy in layers 6-8, the layers on which the other two setups perform best. 3 The probes in these layers appear to be completely unable to extrapolate to the harder examples.",
"cite_spans": [
{
"start": 398,
"end": 399,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 26,
"end": 36,
"text": "(Figure 5)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Sample-specific Loss",
"sec_num": "4.5"
},
{
"text": "While for POS tagging (T1), extrapolation accuracy is generally very close to that of the control, for dependency labelling (T2) we observe a larger distance between all setups, but in particular between the extrapolation setup and the control. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample-specific Loss",
"sec_num": "4.5"
},
{
"text": "While for all experiments, the accuracy in the extrapolation setup is substantially lower than in the standard interpolation setup, it is still above random guessing, which suggests that probes are able to extract some useful information from the word representations even under this experimental regime. However, the success of extrapolating from 'easy' to 'hard' examples varies depending on the choice of the scoring function. In this section we discuss these findings and the limitations of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We start by arguing for the merits of the different scoring functions in the context of probing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "Sentence length Sentence length is the least discriminating metric in the experimental results, which is in line with our expectations and with previous work on curriculum learning discussed in \u00a7 2.3: for word-level tasks, sentence length is not a strong indicator for hard examples. In the case of part-of-speech tagging (T1), there is no considerable difference between interpolation and extrapolation accuracy. For dependency labelling (T2), such a difference is present; but it is small compared to other choices of scoring functions. The non-correlation between sentence length and hardness is quite intuitive: long sentences also contain many simple examples, and even short sentences may contain complex syntactic constructions. At the same time, the observed difference between the two tasks suggests that the higher-level the task is, and the wider the context it depends on, the more meaningful sentence length can be as a criterion for creating extrapolation challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "Arc length In contrast to sentence length, arc length provides a useful criterion for extrapolation. While the comparatively low accuracy in the control setup shows that longer arcs are a challenge in themselves, restricting the training set to short arcs limits the accuracy of the probe even further. Extrapolation capability is limited even in the softened setup where we decrease the distance between training and test set (Figure 2, right) . Thus, under this scoring function, we find no evidence that the probe extracts useful linguistic knowledge from the word representations -a conclusion that establishes a difference between our extrapolation setup and results for interpolation-based learning (Tenney et al., 2020; Hewitt and Liang, 2019) .",
"cite_spans": [
{
"start": 705,
"end": 726,
"text": "(Tenney et al., 2020;",
"ref_id": "BIBREF34"
},
{
"start": 727,
"end": 750,
"text": "Hewitt and Liang, 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 427,
"end": 444,
"text": "(Figure 2, right)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "Most frequent tag and tag proportions Even the most frequent tag criterion is an informative setup. Our empirical results (Figure 3 ) suggest that the probe seems to heavily rely on word formspecific information at least in the first layers, while it focuses on more generalisable information in the later layers, and thus exhibits better extrapolation capabilities.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 131,
"text": "(Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "Based on the differences between the extrapolation setup and the control, we argue that the most frequent tag criterion is better motivated and provides more insights than tag proportions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "Speed of learning The 'speed of learning' criterion creates a very challenging extrapolation setup, compared to the standard setup (and the control). Probes trained on the full set perform well on the supposedly hard extrapolation test set, sometimes even better than on the standard test set. It is the training set that makes the difference: by only including fast-success examples, we are likely to miss patterns. The extrapolation setup favours patterns that are easy to learn, making it superfluous for the classifier to try harder and extract features that generalise better, even if these may not necessarily be extremely hard to learn -the number of such examples may simply be too small to learn the pattern in the first phase. As a consequence of this behaviour, the speed of learning criterion has a low interpretability. Without further qualitative analyses, we can only make assumptions about the nature of the 'easy' and 'hard' datasets, and in particular about the examples that are left out from either. And obviously, if patterns are completely missed, we cannot expect the model to extrapolate to harder examples of this very pattern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "Sample-specific loss The most opaque of all scoring functions is arguably the loss-based criterion. It is even less transparent than the learningbased criterion, where we can possibly identify the learned (and missed) patterns in an error analysis. With the loss-based criterion, we will be unlikely to identify commonalities between examples that share the same ranking with respect to the scoring function. While the loss-based criterion strongly discriminates between the standard setup and the extrapolation setup, this is largely an effect of the construction of the test set, which in the latter setup will contain all examples that are classified incorrectly. For tasks where the performance of a standard probe is already low, the test set will solely consist of misclassified examples. Applying the loss of fully trained probes on the test set can therefore be seen as circular.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "Summary To summarise, from a perspective of transparency, controllability, and demonstrable success in separating the data into easier and harder examples, we argue that the most interesting metrics for the identification of extrapolation challenges are arc length and the most frequent tag criterion. The learning-based scoring functions, which have the potential to be less ad-hoc, are hard to interpret, give unsurprising results, and are therefore less useful as an analysis tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "Another benefit of arc length and the most frequent tag criterion is that they are applicable to a wide range of tasks. The most frequent tag criterion can be applied to any word labelling task that has a limited number of labels. Examples for further tasks where it can be applied include named entity recognition and word sense disambiguation. Arc length can be applied to all tasks that can be formulated as operating on pairs of words. Besides other parsing tasks such as semantic dependency parsing, this is the case for e.g. coreference resolution or negation scope detection (Kurtz et al., 2020) .",
"cite_spans": [
{
"start": 582,
"end": 602,
"text": "(Kurtz et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Functions",
"sec_num": "5.1"
},
{
"text": "The strong differences between the standard setup and the extrapolation setup and the great variability of results across scoring functions illustrate that the interpretation of probing classifiers remains challenging. A more extensive analysis, be it with automated techniques such as our extrapolation splits or with a qualitative analysis, is a necessity for a deeper understanding of a probing classifier's results. Unlike previous restrictions of the model or the training data as proposed by Hewitt and Liang (2019) , our approach offers (given an appropriate scoring function, such as arc length or the most frequent tag criterion), more control over and transparency about the nature of the restrictions imposed by the modification of the data.",
"cite_spans": [
{
"start": 498,
"end": 521,
"text": "Hewitt and Liang (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions and Limitations",
"sec_num": "5.2"
},
{
"text": "While the extrapolation setup helps approximating the nature of the features the probe uses, it does not ultimately solve the problem of the lacking interpretability of probing classifiers themselves. Negative results in the extrapolation setup do not imply that the linguistic knowledge of interest is not present in the representation. The probe may just have focused on other features -the amount of predictors to approximate a given target function is infinite. However, classical interpolation-based setups using probing classifiers tend to overestimate the information present in the representations, as classifiers can learn a task even from randomly initialised word embeddings (Zhang and Bowman, 2018; Hewitt and Liang, 2019 ). Therefore we argue that, at this time, we need to be more aware of false positives than of false negatives in probing. Extrapolation probes have the potential to reduce the false positive rate while providing new insights into the generalisability of the features they use.",
"cite_spans": [
{
"start": 686,
"end": 710,
"text": "(Zhang and Bowman, 2018;",
"ref_id": "BIBREF41"
},
{
"start": 711,
"end": 733,
"text": "Hewitt and Liang, 2019",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions and Limitations",
"sec_num": "5.2"
},
{
"text": "We identified and suggested several ways to define the difficulty of training and validation examples based on linguistic, statistical, and learning-based criteria, to create extrapolation splits for natural language datasets. We demonstrated the usefulness of these measures for the analysis of two linguistic tasks, and proposed an evaluation protocol with baselines and metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our experimental results suggest that a probe trained on BERT hidden representations is capable of applying patterns learned from easier examples to harder examples to some extent; but in well-motivated scenarios where the scoring function is an appropriate measure of difficulty of the examples, its competence is clearly limited compared to an interpolation probe. In our experiments, the most informative scoring functions are the distance-based arc length criterion that we applied to syntactic dependency labelling, and the word-specific most frequent tag criterion for partof-speech tagging. These functions allow for a clear and transparent extrapolation setup, while at the same time being simple and also computationally efficient. Sentence length, as expected, did not turn out to be a strong indicator for hard examples, while learning-based criteria show a high margin between interpolation and extrapolation setups, but limited interpretability and qualitative insights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We conclude that enriching probing experiments with automated extrapolation setups can be a valuable supplement to standard probing methods, as it gives us an instrument to test the generalisation capability of the probe, and thereby the robustness of the features it uses. In addition to interpretation purposes, well-chosen extrapolation splits can provide a cheap but valuable extension of the evaluation of a model, testing its generalisation capabilities and verifying the progress made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "To put this into context, we recall that we tried to control for a too high model-specificness by averaging the losses of 10 different models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The linguistic assumptions underlying readability formulae: A critique",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Bailin",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Grafstein",
"suffix": ""
}
],
"year": 2001,
"venue": "Language & Communication",
"volume": "21",
"issue": "3",
"pages": "285--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Bailin and Ann Grafstein. 2001. The linguistic assumptions underlying readability formulae: A cri- tique. Language & Communication, 21(3):285-301.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Measuring abstract reasoning in neural networks",
"authors": [
{
"first": "David",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Santoro",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Morcos",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Lillicrap",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "511--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Barrett, Felix Hill, Adam Santoro, Ari Morcos, and Timothy Lillicrap. 2018. Measuring abstract reasoning in neural networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Re- search, pages 511-520. PMLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "What do neural machine translation models learn about morphology?",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "861--872",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1080"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Has- san Sajjad, and James Glass. 2017a. What do neu- ral machine translation models learn about morphol- ogy? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 861-872, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017b. Evaluating layers of representation in neu- ral machine translation on part-of-speech and seman- tic tagging tasks. In Proceedings of the Eighth In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1-10, Taipei, Taiwan. Asian Federation of Natural Lan- guage Processing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Curriculum learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Louradour",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Con- ference on Machine Learning, pages 41-48.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2126--2136",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1198"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Location Attention for Extrapolation to Longer Sequences",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Dubois",
"suffix": ""
},
{
"first": "Gautier",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "403--413",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.39"
]
},
"num": null,
"urls": [],
"raw_text": "Yann Dubois, Gautier Dagan, Dieuwke Hupkes, and Elia Bruni. 2020. Location Attention for Extrapo- lation to Longer Sequences. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 403-413, Online. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The art of readable writing",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Flesch",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"J"
],
"last": "Gould",
"suffix": ""
}
],
"year": 1949,
"venue": "",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudolf Flesch and Alan J Gould. 1949. The art of read- able writing, volume 8. Harper New York.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Assessing BERT's syntactic abilities",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.05287"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. arXiv preprint arXiv:1901.05287.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Explaining and harnessing adversarial examples",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversar- ial examples. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "On the power of curriculum learning in training deep networks",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Hacohen",
"suffix": ""
},
{
"first": "Daphna",
"middle": [],
"last": "Weinshall",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2535--2544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy Hacohen and Daphna Weinshall. 2019. On the power of curriculum learning in training deep net- works. In International Conference on Machine Learning, pages 2535-2544. PMLR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Designing and interpreting probes with control tasks",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2733--2743",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1275"
]
},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A structural probe for finding syntax in word representations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4129--4138",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1419"
]
},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1215"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adam: A method for stochastic optimization. 3rd International Conference for Learning Representations",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 3rd Interna- tional Conference for Learning Representations.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Curriculum learning and minibatch bucketing in neural machine translation",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "379--386",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_050"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2017. Curriculum learn- ing and minibatch bucketing in neural machine trans- lation. In Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing, RANLP 2017, pages 379-386, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep contextualized word embeddings in transitionbased and graph-based dependency parsing -a tale of two parsers revisited",
"authors": [
{
"first": "Artur",
"middle": [],
"last": "Kulmizev",
"suffix": ""
},
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Gontrum",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Fano",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2755--2768",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1277"
]
},
"num": null,
"urls": [],
"raw_text": "Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, and Joakim Nivre. 2019. Deep contextualized word embeddings in transition- based and graph-based dependency parsing -a tale of two parsers revisited. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2755-2768, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Self-paced learning for latent variable models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Packer",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Neural Information Processing Systems",
"volume": "23",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Classifier probes may just learn from linear context features",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Kunz",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5136--5146",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.450"
]
},
"num": null,
"urls": [],
"raw_text": "Jenny Kunz and Marco Kuhlmann. 2020. Classifier probes may just learn from linear context features. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5136-5146, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "End-to-end negation resolution as graph parsing",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Kurtz",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwpt-1.3"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Kurtz, Stephan Oepen, and Marco Kuhlmann. 2020. End-to-end negation resolution as graph pars- ing. In Proceedings of the 16th International Con- ference on Parsing Technologies and the IWPT 2020",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dynamic data selection for curriculum learning via ability estimation",
"authors": [
{
"first": "John",
"middle": [
"P"
],
"last": "Lalor",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "545--555",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.48"
]
},
"num": null,
"urls": [],
"raw_text": "John P. Lalor and Hong Yu. 2020. Dynamic data selec- tion for curriculum learning via ability estimation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 545-555, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Norm-based curriculum learning for neural machine translation",
"authors": [
{
"first": "Xuebo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Houtim",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "427--436",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.41"
]
},
"num": null,
"urls": [],
"raw_text": "Xuebo Liu, Houtim Lai, Derek F. Wong, and Lidia S. Chao. 2020. Norm-based curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 427-436, Online. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Characterizing the errors of data-driven dependency parsing models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "122--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Joakim Nivre. 2007. Character- izing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Confer- ence on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122-131, Prague, Czech Republic. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Universal Dependencies v2: An evergrowing multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4034--4043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Automatic differentiation in PyTorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Dissecting contextual word embeddings: Architecture and representation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1499--1509",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 1499-1509, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Information-theoretic probing for linguistic structure",
"authors": [
{
"first": "Tiago",
"middle": [],
"last": "Pimentel",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Valvoda",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Hall Maudslay",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4609--4622",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.420"
]
},
"num": null,
"urls": [],
"raw_text": "Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4609-4622, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Competence-based curriculum learning for neural machine translation",
"authors": [
{
"first": "Emmanouil",
"middle": [
"Antonios"
],
"last": "Platanios",
"suffix": ""
},
{
"first": "Otilia",
"middle": [],
"last": "Stretcu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Barnabas",
"middle": [],
"last": "Poczos",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1162--1172",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1119"
]
},
"num": null,
"urls": [],
"raw_text": "Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1162-1172, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "When BERT Plays the Lottery, All Tickets Are Winning",
"authors": [
{
"first": "Sai",
"middle": [],
"last": "Prasanna",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "3208--3229",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.259"
]
},
"num": null,
"urls": [],
"raw_text": "Sai Prasanna, Anna Rogers, and Anna Rumshisky. 2020. When BERT Plays the Lottery, All Tickets Are Winning. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 3208-3229, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Analytics of literature: A manual for the objective study of English prose and poetry",
"authors": [
{
"first": "Lucius Adelno",
"middle": [],
"last": "Sherman",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucius Adelno Sherman. 1893. Analytics of literature: A manual for the objective study of English prose and poetry. Ginn.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "2020. oLMpics-on what language model pre-training captures",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": null,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "743--758",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00342"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics-on what language model pre-training captures. Transactions of the As- sociation for Computational Linguistics, 8:743-758.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "BERT rediscovers the classical NLP pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4593--4601",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1452"
]
},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The language interpretability tool: Extensible, interactive visualizations and analysis for NLP models",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Wexler",
"suffix": ""
},
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Coenen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mahima",
"middle": [],
"last": "Pushkarna",
"suffix": ""
},
{
"first": "Carey",
"middle": [],
"last": "Radebaugh",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Reif",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "107--118",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.15"
]
},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan. 2020. The language in- terpretability tool: Extensible, interactive visualiza- tions and analysis for NLP models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstra- tions, pages 107-118, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Informationtheoretic probing with minimum description length",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "183--196",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.14"
]
},
"num": null,
"urls": [],
"raw_text": "Elena Voita and Ivan Titov. 2020. Information- theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183-196, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "On the practical computational power of finite precision RNNs for language recognition",
"authors": [
{
"first": "Gail",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Eran",
"middle": [],
"last": "Yahav",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "740--745",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2117"
]
},
"num": null,
"urls": [],
"raw_text": "Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite pre- cision RNNs for language recognition. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 740-745, Melbourne, Australia. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Readability of texts: State of the art. Theory & Practice in Language Studies",
"authors": [
{
"first": "Mostafa",
"middle": [],
"last": "Zamanian",
"suffix": ""
},
{
"first": "Pooneh",
"middle": [],
"last": "Heydari",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mostafa Zamanian and Pooneh Heydari. 2012. Read- ability of texts: State of the art. Theory & Practice in Language Studies, 2(1).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "SWAG: A large-scale adversarial dataset for grounded commonsense inference",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "93--104",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversar- ial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93- 104, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "HellaSwag: Can a machine really finish your sentence?",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4791--4800",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1472"
]
},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4791- 4800, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis",
"authors": [
{
"first": "Kelly",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "359--361",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5448"
]
},
"num": null,
"urls": [],
"raw_text": "Kelly Zhang and Samuel Bowman. 2018. Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 359-361, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Uncertainty-aware curriculum learning for neural machine translation",
"authors": [
{
"first": "Yikai",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Baosong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6934--6944",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.620"
]
},
"num": null,
"urls": [],
"raw_text": "Yikai Zhou, Baosong Yang, Derek F. Wong, Yu Wan, and Lidia S. Chao. 2020. Uncertainty-aware cur- riculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6934- 6944, Online. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Extrapolation based on sentence length. From left to right: part-of-speech tagging (T1), linguistic criterion; dependency labelling (T2), linguistic; T1, distributional criterion; T2, distributional. In all plots, the x-axis corresponds to the BERT layer used for prediction, and the y-axis corresponds to the mean accuracy.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Extrapolation based on arc length. Left: Standard distributional setup. Right: Modified setup.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Extrapolation for T1 based on the most frequent tag (left) and tag proportions criteria (right).",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Extrapolation based on speed of learning. Left: Tagging (T1). Right: Dependency labelling (T2).",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Extrapolation based on loss. Left: Tagging (T1). Right: Dependency Labelling (T2).",
"num": null,
"type_str": "figure"
}
}
}
}