{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:34.151216Z" }, "title": "On the Interplay Between Fine-tuning and Sentence-level Probing for Linguistic Knowledge in Pre-trained Transformers", "authors": [ { "first": "Marius", "middle": [], "last": "Mosbach", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "mmosbach@lsv.uni-saarland.de" }, { "first": "Anna", "middle": [], "last": "Khokhlova", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "akhokhlova@lsv.uni-saarland.de" }, { "first": "Michael", "middle": [ "A" ], "last": "Hedderich", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "mhedderich@lsv.uni-saarland.de" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "dklakow@lsv.uni-saarland.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Fine-tuning pre-trained contextualized embedding models has become an integral part of the NLP pipeline. At the same time, probing has emerged as a way to investigate the linguistic knowledge captured by pre-trained models. Very little is, however, understood about how fine-tuning affects the representations of pre-trained models and thereby the linguistic knowledge they encode. This paper contributes towards closing this gap. We study three different pre-trained models: BERT, RoBERTa, and ALBERT, and investigate through sentence-level probing how finetuning affects their representations. We find that for some probing tasks fine-tuning leads to substantial changes in accuracy, possibly suggesting that fine-tuning introduces or even removes linguistic knowledge from a pre-trained model. These changes, however, vary greatly across different models, fine-tuning and probing tasks. Our analysis reveals that while finetuning indeed changes the representations of a pre-trained model and these changes are typically larger for higher layers, only in very few cases, fine-tuning has a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Based on our findings, we argue that both positive and negative effects of finetuning on probing require a careful interpretation.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Fine-tuning pre-trained contextualized embedding models has become an integral part of the NLP pipeline. At the same time, probing has emerged as a way to investigate the linguistic knowledge captured by pre-trained models. Very little is, however, understood about how fine-tuning affects the representations of pre-trained models and thereby the linguistic knowledge they encode. This paper contributes towards closing this gap. We study three different pre-trained models: BERT, RoBERTa, and ALBERT, and investigate through sentence-level probing how finetuning affects their representations. We find that for some probing tasks fine-tuning leads to substantial changes in accuracy, possibly suggesting that fine-tuning introduces or even removes linguistic knowledge from a pre-trained model. These changes, however, vary greatly across different models, fine-tuning and probing tasks. 
Our analysis reveals that while finetuning indeed changes the representations of a pre-trained model and these changes are typically larger for higher layers, only in very few cases, fine-tuning has a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Based on our findings, we argue that both positive and negative effects of finetuning on probing require a careful interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transformer-based contextual embeddings like BERT (Devlin et al., 2019) , RoBERTa (Liu et al., 2019b) and ALBERT (Lan et al., 2020) recently became the state-of-the-art on a variety of NLP downstream tasks. These models are pre-trained on large amounts of text and subsequently fine-tuned on task-specific, supervised downstream tasks. Their strong empirical performance triggered questions concerning the linguistic knowledge they encode in their representations and how it is affected by the training objective and model architecture (Kim et al., 2019; Wang et al., 2019a) . One prominent technique to gain insights about the linguistic knowledge encoded in pre-trained models is probing (Rogers et al., 2020) . However, works on probing have so far focused mostly on pre-trained models. It is still unclear how the representations of a pre-trained model change when fine-tuning on a downstream task. Further, little is known about whether and to what extent this process adds or removes linguistic knowledge from a pre-trained model. Addressing these issues, we are investigating the following questions:", "cite_spans": [ { "start": 50, "end": 71, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 82, "end": 101, "text": "(Liu et al., 2019b)", "ref_id": "BIBREF18" }, { "start": 113, "end": 131, "text": "(Lan et al., 2020)", "ref_id": "BIBREF15" }, { "start": 536, "end": 554, "text": "(Kim et al., 2019;", "ref_id": "BIBREF13" }, { "start": 555, "end": 574, "text": "Wang et al., 2019a)", "ref_id": null }, { "start": 690, "end": 711, "text": "(Rogers et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. How and where does fine-tuning affect the representations of a pre-trained model? 2. To which extent (if at all) can changes in probing accuracy be attributed to a change in linguistic knowledge encoded by the model?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To answer these questions, we investigate three different pre-trained encoder models, BERT, RoBERTa, and ALBERT. We fine-tune them on sentence-level classification tasks from the GLUE benchmark and evaluate the linguistic knowledge they encode leveraging three sentence-level probing tasks from the SentEval probing suite (Conneau et al., 2018) . We focus on sentence-level probing tasks to measure linguistic knowledge encoded by a model for two reasons: 1) during fine-tuning we explicitly train a model to represent sentence-level context in its representations and 2) we are interested in the extent to which this affects existing sentence-level linguistic knowledge already present in a pre-trained model. 
We find that while, indeed, fine-tuning affects a model's sentence-level probing accuracy and these effects are typically larger for higher layers, changes in probing accuracy vary depend-ing on the encoder model, fine-tuning and probing task combination. Our results also show that sentence-level probing accuracy is highly dependent on the pooling method being used. Only in very few cases, fine-tuning has a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Our findings suggest that changes in probing performance can not exclusively be attributed to an improved or deteriorated encoding of linguistic knowledge and should be carefully interpreted. We present further evidence for this interpretation by investigating changes in the attention distribution and language modeling capabilities of fine-tuned models which constitute alternative explanations for changes in probing accuracy.", "cite_spans": [ { "start": 322, "end": 344, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Probing A large body of previous work focuses on analyses of the internal representations of neural models and the linguistic knowledge they encode (Shi et al., 2016; Ettinger et al., 2016; Adi et al., 2016; Belinkov et al., 2017; Hupkes et al., 2018) . In a similar spirit to these first works on probing, Conneau et al. (2018) were the first to compare different sentence embedding methods for the linguistic knowledge they encode. Krasnowska-Kiera\u015b and Wr\u00f3blewska (2019) extended this approach to study sentence-level probing tasks on English and Polish sentences.", "cite_spans": [ { "start": 148, "end": 166, "text": "(Shi et al., 2016;", "ref_id": "BIBREF29" }, { "start": 167, "end": 189, "text": "Ettinger et al., 2016;", "ref_id": "BIBREF8" }, { "start": 190, "end": 207, "text": "Adi et al., 2016;", "ref_id": "BIBREF0" }, { "start": 208, "end": 230, "text": "Belinkov et al., 2017;", "ref_id": "BIBREF2" }, { "start": 231, "end": 251, "text": "Hupkes et al., 2018)", "ref_id": "BIBREF12" }, { "start": 307, "end": 328, "text": "Conneau et al. (2018)", "ref_id": "BIBREF4" }, { "start": 434, "end": 473, "text": "Krasnowska-Kiera\u015b and Wr\u00f3blewska (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Alongside sentence-level probing, many recent works (Peters et al., 2018; Liu et al., 2019a; Tenney et al., 2019b; Hewitt and Manning, 2019 ) have focused on token-level probing tasks investigating more recent contextualized embedding models such as ELMo (Peters et al., 2018) , GPT (Radford et al., 2019) , and BERT (Devlin et al., 2019) . Two of the most prominent works following this methodology are Liu et al. (2019a) and Tenney et al. (2019b) . While Liu et al. (2019a) use linear probing classifiers as we do, Tenney et al. (2019b) use more expressive, non-linear classifiers. However, in contrast to our work, most studies that investigate pre-trained contextualized embedding models focus on pre-trained models and not fine-tuned ones. 
Moreover, we aim to assess how probing performance changes with fine-tuning and how these changes differ based on the model architecture, as well as probing and fine-tuning task combination.", "cite_spans": [ { "start": 52, "end": 73, "text": "(Peters et al., 2018;", "ref_id": "BIBREF22" }, { "start": 74, "end": 92, "text": "Liu et al., 2019a;", "ref_id": "BIBREF17" }, { "start": 93, "end": 114, "text": "Tenney et al., 2019b;", "ref_id": "BIBREF33" }, { "start": 115, "end": 139, "text": "Hewitt and Manning, 2019", "ref_id": "BIBREF11" }, { "start": 255, "end": 276, "text": "(Peters et al., 2018)", "ref_id": "BIBREF22" }, { "start": 283, "end": 305, "text": "(Radford et al., 2019)", "ref_id": "BIBREF25" }, { "start": 317, "end": 338, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 404, "end": 422, "text": "Liu et al. (2019a)", "ref_id": "BIBREF17" }, { "start": 427, "end": 448, "text": "Tenney et al. (2019b)", "ref_id": "BIBREF33" }, { "start": 457, "end": 475, "text": "Liu et al. (2019a)", "ref_id": "BIBREF17" }, { "start": 517, "end": 538, "text": "Tenney et al. (2019b)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Fine-tuning While fine-tuning pre-trained language models leads to a strong empirical performance across various supervised NLP downstream tasks , fine-tuning itself (Dodge et al., 2020) and its effects on the representations learned by a pre-trained model are poorly understood. As an example, Phang et al. (2018) show that downstream accuracy can benefit from an intermediate fine-tuning task, but leave the investigation of why certain tasks benefit from intermediate task training to future work. Recently, Pruksachatkun et al. (2020) extended this approach using eleven diverse intermediate fine-tuning tasks. They view probing task performance after finetuning as an indicator of the acquisition of a particular language skill during intermediate task finetuning. This is similar to our work in the sense that probing accuracy is used to understand how finetuning affects a pre-trained model. Talmor et al. (2019) try to understand whether the performance on downstream tasks should be attributed to the pre-trained representations or rather the fine-tuning process itself. They fine-tune BERT and RoBERTa on a large set of symbolic reasoning tasks and find that while RoBERTa generally outperforms BERT in its reasoning abilities, the performance of both models is highly context dependent.", "cite_spans": [ { "start": 166, "end": 186, "text": "(Dodge et al., 2020)", "ref_id": "BIBREF7" }, { "start": 295, "end": 314, "text": "Phang et al. (2018)", "ref_id": "BIBREF23" }, { "start": 511, "end": 538, "text": "Pruksachatkun et al. (2020)", "ref_id": "BIBREF24" }, { "start": 899, "end": 919, "text": "Talmor et al. (2019)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Most similar to our work is the contemporaneous work by Merchant et al. (2020) . They investigate how fine-tuning leads to changes in the representations of a pre-trained model. In contrast to our work, their focus, however, lies on edgeprobing (Tenney et al., 2019b) and structural probing tasks (Hewitt and Manning, 2019) and they study only a single pre-trained encoder: BERT. 
We consider our work complementary to them since we study sentence-level probing tasks, use different analysis methods and investigate the impact of fine-tuning on three different pre-trained encoders: BERT, RoBERTa, and ALBERT.", "cite_spans": [ { "start": 56, "end": 78, "text": "Merchant et al. (2020)", "ref_id": "BIBREF19" }, { "start": 245, "end": 267, "text": "(Tenney et al., 2019b)", "ref_id": "BIBREF33" }, { "start": 297, "end": 323, "text": "(Hewitt and Manning, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The focus of our work is on studying how finetuning affects the representations learned by a pretrained model. We assess this change through sentence-level probing tasks. We focus on sentencelevel probing tasks since during fine-tuning we explicitly train a model to represent sentence-level context in the CLS token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology and Setup", "sec_num": "3" }, { "text": "The fine-tuning and probing tasks we study concern different linguistic levels, requiring a model Table 1 : Fine-tuning performance on the development set on selected down-stream tasks. For comparison we also report the fine-tuning accuracy of BERT-basecased as reported by Devlin et al. (2019) on the test set of each of the tasks taken from the GLUE and SQuAD leaderboards. We report Matthews correlation coefficient for CoLA, accuracy for SST-2 and RTE, and exact match (EM) and F 1 score for SQuAD.", "cite_spans": [ { "start": 274, "end": 294, "text": "Devlin et al. (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Methodology and Setup", "sec_num": "3" }, { "text": "to focus more on syntactic, semantic or discourse information. The extent to which knowledge of a particular linguistic level is needed to perform well differs from task to task. For instance, to judge if the syntactic structure of a sentence is intact, no deep discourse understanding is needed. Our hypothesis is that if a pre-trained model encodes certain linguistic knowledge, this acquired knowledge should lead to a good performance on a probing task testing for the same linguistic phenomenon. Extending this hypothesis to fine-tuning, one might argue that if fine-tuning introduces new or removes existing linguistic knowledge into/from a model, this should be reflected by an increase or decrease in probing performance. 1 However, we argue that encoding or forgetting linguistic knowledge is not necessarily the only explanation for observed changes in probing accuracy. Hence, the goal of our work is to test the abovestated hypotheses assessing the interaction between fine-tuning and probing tasks across three different encoder models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology and Setup", "sec_num": "3" }, { "text": "We study three fine-tuning tasks taken from the GLUE benchmark . All the tasks are sentence-level classification tasks and cover different levels of linguistic phenomena. Additionally, we study models fine-tuned on SQuAD (Rajpurkar et al., 2016) a widely used question answering dataset. 
Statistics for each of the tasks can be found in the Appendix.", "cite_spans": [ { "start": 221, "end": 245, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning tasks", "sec_num": "3.1" }, { "text": "CoLA The Corpus of Linguistic Acceptability (Warstadt et al., 2018) is an acceptability task which tests a model's knowledge of grammatical concepts. We expect that fine-tuning on CoLA results in changes in accuracy on a syntactic probing task. 2", "cite_spans": [ { "start": 44, "end": 67, "text": "(Warstadt et al., 2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning tasks", "sec_num": "3.1" }, { "text": "The Stanford Sentiment Treebank (Socher et al., 2013) . We use the binary version where the task is to categorize movie reviews to have either positive or negative valence. Making sentiment judgments requires knowing the meanings of isolated words and combining them on the sentence and discourse level (e.g. in case of irony). Hence, we expect to see a difference for semantic and/or discourse probing tasks when fine-tuning on SST-2.", "cite_spans": [ { "start": 32, "end": 53, "text": "(Socher et al., 2013)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "SST-2", "sec_num": null }, { "text": "RTE The Recognizing Textual Entailment dataset is a collection of sentence-pairs in either neutral or entailment relationship collected from a series of annual textual entailment challenges (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) . The task requires a deeper understanding of the relationship of two sentences, hence, fine-tuning on RTE might affect the accuracy on a discourse-level probing task.", "cite_spans": [ { "start": 190, "end": 210, "text": "(Dagan et al., 2005;", "ref_id": "BIBREF5" }, { "start": 211, "end": 233, "text": "Bar-Haim et al., 2006;", "ref_id": "BIBREF1" }, { "start": 234, "end": 259, "text": "Giampiccolo et al., 2007;", "ref_id": "BIBREF9" }, { "start": 260, "end": 284, "text": "Bentivogli et al., 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "SST-2", "sec_num": null }, { "text": "SQuAD The Stanford Questions Answering Dataset (Rajpurkar et al., 2016 ) is a popular extractive reading comprehension dataset. The task involves a broader discourse understanding as a model trained on SQuAD is required to extract the answer to a question from an accompanying paragraph.", "cite_spans": [ { "start": 47, "end": 70, "text": "(Rajpurkar et al., 2016", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "SST-2", "sec_num": null }, { "text": "We select three sentence-level probing tasks from the SentEval probing suit (Conneau et al., 2018) , testing for syntactic, semantic and broader discourse information on the sentence-level.", "cite_spans": [ { "start": 76, "end": 98, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Probing Tasks", "sec_num": "3.2" }, { "text": "bigram-shift is a syntactic binary classification task that tests a model's sensitivity to word order. 
The dataset consists of intact and corrupted sentences, where for corrupted sentences, two random adjacent words have been inverted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probing Tasks", "sec_num": "3.2" }, { "text": "semantic-odd-man-out tests a model's sensitivity to semantic incongruity on a collection of sentences where random verbs or nouns are replaced by another verb or noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probing Tasks", "sec_num": "3.2" }, { "text": "coordination-inversion is a collection of sentences made out of two coordinate clauses. In half of the sentences, the order of the clauses is inverted. Coordinate-inversion tests for a model's broader discourse understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probing Tasks", "sec_num": "3.2" }, { "text": "It is unclear to which extent findings on the encoding of certain linguistic phenomena generalize from one pre-trained model to another. Hence, we examine three different pre-trained encoder models in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-trained Models", "sec_num": "3.3" }, { "text": "BERT (Devlin et al., 2019 ) is a transformerbased model (Vaswani et al., 2017) jointly trained on masked language modeling and next-sentenceprediction -a sentence-level binary classification task. BERT was trained on the Toronto Books corpus and the English portion of Wikipedia. We focus on the BERT-base-cased model which consists of 12 hidden layers and will refer to it as BERT in the following.", "cite_spans": [ { "start": 5, "end": 25, "text": "(Devlin et al., 2019", "ref_id": "BIBREF6" }, { "start": 56, "end": 78, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained Models", "sec_num": "3.3" }, { "text": "RoBERTa (Liu et al., 2019b ) is a follow-up version of BERT which differs from BERT in a few crucial aspects, including using larger amounts of training data and longer training time. The aspect that is most relevant in the context of this work is that RoBERTa was pre-trained without a sentencelevel objective, minimizing only the masked language modeling objective. As with BERT we will consider the base model, RoBERTa-base, for this study and refer to it as RoBERTa.", "cite_spans": [ { "start": 8, "end": 26, "text": "(Liu et al., 2019b", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained Models", "sec_num": "3.3" }, { "text": "ALBERT (Lan et al., 2020) is another recently proposed transformer-based pre-trained masked language model. In contrast to both BERT and RoBERTa, it makes heavy use of parameter sharing. That is, ALBERT ties the weight matrices across all hidden layers effectively applying the same non-linear transformation on every hidden layer. Additionally, similar to BERT, ALBERT uses a sentence-level pre-training task. We will use the base model ALBERT-base-v1 and refer to it as ALBERT throughout this work.", "cite_spans": [ { "start": 7, "end": 25, "text": "(Lan et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained Models", "sec_num": "3.3" }, { "text": "Fine-tuning For fine-tuning, we follow the default setup proposed by Devlin et al. (2019) . A single randomly initialized task-specific classification layer is added on top of the pre-trained encoder. 
As input, the classification layer receives z = tanh (Wh + b), where h is the hidden representation of the first token on the last hidden layer and W and b are the randomly initialized parameters of the classifier. 3 During fine-tuning all model parameters are updated jointly. We train for 3 epochs on CoLA and for 1 epoch on SST-2, using a learning rate of 2e\u22125. The learning rate is linearly increased for the first 10% of steps (warmup) and kept constant afterwards. An overview of all hyper-parameters for each model and task can be found in the Appendix. Fine-tuning performance on the development set of each of the tasks can be found in Table 1 .", "cite_spans": [ { "start": 69, "end": 89, "text": "Devlin et al. (2019)", "ref_id": "BIBREF6" }, { "start": 416, "end": 417, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 846, "end": 853, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Fine-tuning and Probing Setup", "sec_num": "3.4" }, { "text": "Probing For probing, our setup largely follows that of previous works (Tenney et al., 2019b; Liu et al., 2019a; Hewitt and Liang, 2019) where a probing classifier is trained on top of the contextualized embeddings extracted from a pre-trained or -as in our case -fine-tuned encoder model. Notably, we train linear (logistic regression) probing classifiers and use two different pooling methods to obtain sentence embeddings from the encoder hidden states: CLS-pooling, which simply returns the hidden state corresponding to the first token of the sentence and mean-pooling which computes a sentence embedding as the mean over all hidden states. We do this to assess the extent to which the CLS token captures sentence-level context. We use linear probing classifiers because intuitively we expect that if a linguistic feature is useful for a fine-tuning task, it should be linearly separable in the embeddings. For all probing tasks, we measure layer-wise accuracy to investigate how the linear separability of a particular linguistic phenomenon changes across the model. In total, we train 390 probing classifiers on top of 12 pre-trained and fine-tuned encoder models.", "cite_spans": [ { "start": 70, "end": 92, "text": "(Tenney et al., 2019b;", "ref_id": "BIBREF33" }, { "start": 93, "end": 111, "text": "Liu et al., 2019a;", "ref_id": "BIBREF17" }, { "start": 112, "end": 135, "text": "Hewitt and Liang, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning and Probing Setup", "sec_num": "3.4" }, { "text": "Implementation Our experiments are implemented in PyTorch (Paszke et al., 2019) and we use the pre-trained models provided by the HuggingFace transformers library (Wolf et al., 2019 Table 2 : Change in probing accuracy \u2206 (in %) of CoLA and SST-2 fine-tuned models compared to the pre-trained models when using CLS and mean-pooling. We average the difference in probing accuracy over two different layers groups: layers 0 to 6 and layers 7 to 12.", "cite_spans": [ { "start": 58, "end": 79, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF21" }, { "start": 163, "end": 181, "text": "(Wolf et al., 2019", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Fine-tuning and Probing Setup", "sec_num": "3.4" }, { "text": "4.1 Probing Accuracy Figure 1 shows the layer-wise probing accuracy of BERT, RoBERTa, and ALBERT on each of the probing tasks. 
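To make the pooling and probing setup described in Section 3.4 concrete, the sketch below shows how layer-wise CLS- and mean-pooled sentence embeddings could be extracted and probed with a linear classifier. This is our own minimal illustration using the HuggingFace transformers and scikit-learn APIs; the toy sentences, the chosen layer, and the classifier settings are placeholders rather than the paper's exact implementation.

```python
# Minimal sketch of sentence-level probing with CLS- vs. mean-pooling
# (illustrative only; not the authors' released code).
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
model.eval()

def sentence_embeddings(sentences, layer, pooling="mean"):
    """Return one embedding per sentence from the given hidden layer."""
    embeddings = []
    for sentence in sentences:
        inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, dim)
        if pooling == "cls":
            emb = hidden[0, 0]           # hidden state of the first ([CLS]) token
        else:
            emb = hidden[0].mean(dim=0)  # mean over all token hidden states
        embeddings.append(emb.numpy())
    return embeddings

# Toy bigram-shift-style data: label 1 marks two inverted adjacent words.
train_sents = ["the cat sat on the mat .", "the cat on sat the mat ."]
train_labels = [0, 1]
test_sents = ["she read the book quickly .", "she read book the quickly ."]
test_labels = [0, 1]

# Linear (logistic regression) probe on top of the frozen encoder, last layer.
probe = LogisticRegression(max_iter=1000).fit(
    sentence_embeddings(train_sents, layer=12, pooling="mean"), train_labels
)
print("probing accuracy:", probe.score(
    sentence_embeddings(test_sents, layer=12, pooling="mean"), test_labels
))
```

A fine-tuned encoder can be probed in the same way by loading the fine-tuned checkpoint in place of the pre-trained one.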
These results establish baselines for our comparison with fine-tuned models below. Consistent with previous work (Krasnowska-Kiera\u015b and Wr\u00f3blewska, 2019), we observe that mean-pooling generally outperforms CLS-pooling across all probing tasks, highlighting the importance of sentence-level context for each of the probing tasks. We also find that probing accuracy for bigram-shift is substantially higher than for coordination-inversion and odd-man-out. Again, this is consistent with findings in previous works (Tenney et al., 2019b; Liu et al., 2019a; Tenney et al., 2019a) reporting better performance on syntactic than semantic probing tasks. When comparing the three encoder models, we observe some noticeable differences. On odd-man-out, ALBERT performs significantly worse than both BERT and RoBERTa, with RoBERTa performing best across all layers. We attribute the poor performance of ALBERT to the fact that it makes heavy use of weight-sharing, effectively applying the same non-linear transformation on all layers. We also observe that on coordination-inversion, RoBERTa with CLS-pooling performs much worse than both BERT and ALBERT with CLS-pooling. We attribute this to the fact that RoBERTa lacks a sentence-level pre-training objective and that the CLS token hence fails to capture relevant sentence-level information for this particular probing task. The small differences in probing accuracy for BERT and ALBERT when comparing CLS- to mean-pooling, and the fact that RoBERTa with mean-pooling outperforms all other models on coordination-inversion, provide evidence for this interpretation.", "cite_spans": [ { "start": 646, "end": 668, "text": "(Tenney et al., 2019b;", "ref_id": "BIBREF33" }, { "start": 669, "end": 687, "text": "Liu et al., 2019a;", "ref_id": "BIBREF17" }, { "start": 688, "end": 709, "text": "Tenney et al., 2019a)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 21, "end": 29, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Accuracy?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How does Fine-tuning affect Probing", "sec_num": "4.2" }, { "text": "Having established baselines for the probing accuracy of the pre-trained models, we now turn to the question of how it is affected by fine-tuning. Table 2 shows the effect of fine-tuning on CoLA and SST-2 on the layer-wise accuracy for all three encoder models across the three probing tasks. Results for RTE and SQuAD can be found in Table 5 in the Appendix. For all models and tasks, we find that fine-tuning mostly affects the higher layers, both positively and negatively. The impact varies depending on the fine-tuning/probing task combination and the underlying encoder model.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 155, "text": "Table 2", "ref_id": null }, { "start": 336, "end": 343, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "How does Fine-tuning affect Probing", "sec_num": "4.2" }, { "text": "CoLA results in a substantial improvement on the bigram-shift probing task for all the encoder models; fine-tuning on RTE improves the coordination-inversion accuracy for RoBERTa. This finding is in line with our expectations: bigram-shift and CoLA require syntactic-level information, whereas coordination-inversion and RTE require a deeper discourse-level understanding. 
However, when taking a more detailed look, this reasoning becomes questionable: the improvement is only visible when using CLS-pooling and becomes negligible when probing with mean-pooling. Moreover, the gains are not large enough to improve significantly over the mean-pooling baseline (as shown by the stars and the second y-axis in Figure 4). This suggests that adding new linguistic knowledge is not necessarily the only driving force behind the improved probing accuracy, and we provide evidence for this reasoning in Section 5.1.", "cite_spans": [], "ref_spans": [ { "start": 707, "end": 715, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Positive Changes in Accuracy: Fine-tuning on", "sec_num": null }, { "text": "Negative Changes in Accuracy: Across all models and pooling methods, fine-tuning on SST-2 has a negative impact on probing accuracy on bigram-shift and odd-man-out, and the decrease in probing accuracy is particularly large for RoBERTa. Fine-tuning on SQuAD follows a similar trend: it has a negative effect on probing accuracy on bigram-shift and odd-man-out for both CLS- and mean-pooling (see Table 5), while the impact on coordination-inversion is negligible. We argue that this strong negative impact on probing accuracy is the consequence of more dramatic changes in the representations. We investigate this issue further in Section 5.2.", "cite_spans": [], "ref_spans": [ { "start": 395, "end": 402, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Positive Changes in Accuracy: Fine-tuning on", "sec_num": null }, { "text": "Changes in probing accuracy for other fine-tuning/probing combinations are not substantial, which suggests that the representations did not change significantly with regard to the probed information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive Changes in Accuracy: Fine-tuning on", "sec_num": null }, { "text": "In the previous part, we saw how fine-tuning on different tasks affects probing performance. This raises the question of their causes. In this section, we study two hypotheses that go towards explaining these effects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What Happens During Fine-tuning?", "sec_num": "5" }, { "text": "If the improvement in probing accuracy with CLS-pooling can be attributed to a better sentence representation in the CLS token, this can be due to a corresponding change in a model's attention distribution. The model might change the attention of the CLS token to cover more tokens and thereby build a better representation of the whole sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing Attention Distributions", "sec_num": "5.1" }, { "text": "To study this hypothesis, we fine-tune RoBERTa on CoLA using two different methods: the default CLS-pooling approach and mean-pooling (cf. Section 3.4). We compare the layer-wise attention distribution on bigram-shift after fine-tuning to that of the pre-trained model. We expect to see more profound changes for CLS-pooling than for mean-pooling. 
To investigate how the attention distribution changes, we analyze its entropy, i.e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing Attention Distributions", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H_j = -\\sum_i a_j(x_i) \\cdot \\log(a_j(x_i))", "eq_num": "(1)" } ], "section": "Analyzing Attention Distributions", "sec_num": "5.1" }, { "text": "where x_i is the i-th token of an input sequence and a_j(x_i) the corresponding attention at position j given to it by a specific attention head. Entropy is maximal when the attention is uniform over the whole input sequence and minimal if the attention head focuses on just one input token. Figure 2a shows the mean entropy for the CLS token (i.e., H_0) before and after fine-tuning. We observe a large increase in entropy in the last three layers when fine-tuning on the CLS token (orange bars). This is consistent with our interpretation that, during fine-tuning, the CLS token learns to take more sentence-level information into account and is therefore required to spread its attention over more tokens. For mean-pooling (green bars) this might not be required, as taking the mean over all token states could already provide sufficient sentence-level information during fine-tuning. Accordingly, there are only small changes in the entropy for mean-pooling, with the mean entropy actually decreasing in the last layer.", "cite_spans": [], "ref_spans": [ { "start": 291, "end": 300, "text": "Figure 2a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Analyzing Attention Distributions", "sec_num": "5.1" }, { "text": "Entropy alone is, however, not sufficient to analyze changes in the attention distribution. Even when the amount of entropy is similar, the underlying attention distribution might have changed. Figure 2b, therefore, compares the attention distributions of an attention head for an input sequence before and after fine-tuning using Earth mover's distance (Rubner et al., 1998). We find that, similarly to the entropy results, changes in attention tend to increase with the layer number, and again, the largest change of the attention distribution is visible for the first token in layers 11 and 12 when pooling on the CLS token, while the change is much smaller for mean-pooling. This affirms our hypothesis that improvements after fine-tuning with CLS-pooling can be attributed to a change in the attention distribution, a change that is less necessary for mean-pooling.", "cite_spans": [ { "start": 341, "end": 362, "text": "(Rubner et al., 1998)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 194, "end": 204, "text": "Figure 2b,", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Analyzing Attention Distributions", "sec_num": "5.1" }, { "text": "If fine-tuning has more profound effects on the representations of a pre-trained model, potentially introducing or removing linguistic knowledge, we expect to see larger changes to the language modeling abilities of the model when compared to the case where fine-tuning just changes the attention distribution of the CLS token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing MLM Perplexity", "sec_num": "5.2" }, { "text": "For this, we analyze how fine-tuning on CoLA and SST-2 affects the language modeling abilities of a pre-trained model. 
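As a rough illustration of how such a masked language modeling evaluation could be set up, the sketch below computes a pseudo-perplexity by masking one position at a time. This is our own simplified setup; the model name, masking scheme, and example sentence are placeholders, and it is not necessarily the exact protocol used for the WikiText-2 evaluation described here, where the fine-tuned encoder is combined with the pre-trained MLM head.

```python
# Sketch of a masked-LM (pseudo-)perplexity evaluation (illustrative only).
import math
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# For a fine-tuned model, the fine-tuned encoder weights would be loaded here
# while keeping the pre-trained masked language modeling head.
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def pseudo_perplexity(sentence):
    """Mask each token in turn and average the masked-LM negative log-likelihood."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    nlls = []
    for pos in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        log_probs = torch.log_softmax(logits, dim=-1)
        nlls.append(-log_probs[input_ids[pos]].item())
    return math.exp(sum(nlls) / len(nlls))

print(pseudo_perplexity("The quick brown fox jumps over the lazy dog."))
```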
A change in perplexity should reveal whether the representations of the model changed during fine-tuning, and we expect this change to be larger for SST-2 fine-tuning, where we observe a large decrease in probing accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing MLM Perplexity", "sec_num": "5.2" }, { "text": "For the first experiment, we evaluate the pre-trained masked language model heads of BERT and RoBERTa on the Wikitext-2 test set (Merity et al., 2017) and compare the resulting masked language modeling perplexity, hereafter perplexity, to that of fine-tuned models. 4 In the second experiment, we test which layers contribute most to the change in perplexity and replace layers of the fine-tuned encoder with pre-trained layers, starting from the last layer.", "cite_spans": [ { "start": 128, "end": 149, "text": "(Merity et al., 2017)", "ref_id": "BIBREF20" }, { "start": 252, "end": 253, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Analyzing MLM Perplexity", "sec_num": "5.2" }, { "text": "For both experiments, we evaluate the perplexity of the resulting model using the pre-trained masked language modeling head. We fine-tune and evaluate each model 5 times and report the mean perplexity as well as the standard deviation. Our reasoning is that if fine-tuning leads to dramatic changes to the hidden representations of a model, the effects should be reflected in the perplexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing MLM Perplexity", "sec_num": "5.2" }, { "text": "Perplexity During Fine-tuning Figure 3a and 3b show how the perplexity of a pre-trained model changes during fine-tuning. Both BERT and RoBERTa show a similar trend where perplexity increases with fine-tuning. Interestingly, for RoBERTa the increase in perplexity after the first epoch is much larger compared to BERT. Additionally, our results show that for both models the increase in perplexity is larger when fine-tuning on SST-2. This confirms our hypothesis and also our findings from Section 4, suggesting that fine-tuning on SST-2 indeed has more dramatic effects on the representations of both models compared to fine-tuning on CoLA.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 39, "text": "Figure 3a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Analyzing MLM Perplexity", "sec_num": "5.2" }, { "text": "Perplexity When Replacing Fine-tuned Layers While fine-tuning leads to worse language modeling abilities for both CoLA and SST-2, it is not clear from the first experiment alone which layers are responsible for the increase in perplexity. Figure 3c and 3d show the perplexity results when replacing fine-tuned layers with pre-trained ones, starting from the last hidden layer. Consistent with our probing results in Section 4, we find that the changes that lead to an increase in perplexity happen in the last layers, and this trend is the same for both BERT and RoBERTa. 
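The layer-replacement experiment described above can be sketched as follows; this is our own illustration of the general idea using the HuggingFace API, and the checkpoint names and the evaluation step are placeholders rather than the authors' actual code.

```python
# Sketch of the layer-replacement analysis: starting from the top, swap
# fine-tuned encoder layers back to their pre-trained counterparts and
# re-evaluate masked-LM perplexity (illustrative only).
import copy
from transformers import BertForMaskedLM

pretrained = BertForMaskedLM.from_pretrained("bert-base-cased")
finetuned = BertForMaskedLM.from_pretrained("bert-base-cased")
# In practice, `finetuned` would carry encoder weights fine-tuned on e.g. CoLA
# or SST-2; here both models start identical purely for illustration.

def replace_top_layers(finetuned_model, pretrained_model, k):
    """Return a copy of the fine-tuned model whose top k encoder layers are pre-trained."""
    hybrid = copy.deepcopy(finetuned_model)
    num_layers = len(hybrid.bert.encoder.layer)  # 12 for the base models
    for i in range(num_layers - k, num_layers):
        hybrid.bert.encoder.layer[i].load_state_dict(
            pretrained_model.bert.encoder.layer[i].state_dict()
        )
    return hybrid

for k in range(13):  # k = number of top layers replaced by pre-trained ones
    hybrid = replace_top_layers(finetuned, pretrained, k)
    # ... evaluate masked-LM perplexity of `hybrid` on the WikiText-2 test set ...
```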
Interestingly, we observe no difference between CoLA and SST-2 fine-tuning in this experiment.", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 255, "text": "Figure 3c and 3d", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Analyzing MLM Perplexity", "sec_num": "5.2" }, { "text": "In the following, we discuss the main implications of our experiments and analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "1. We conclude that fine-tuning does indeed affect the representations of a pre-trained model, and in particular those of the last hidden layers, which is supported by our perplexity analysis. However, our perplexity analysis does not reveal whether these changes have a positive or negative effect on the encoding of linguistic knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "2. Some fine-tuning/probing task combinations result in substantial improvements in probing accuracy when using CLS-pooling. Our attention analysis supports our interpretation that the improvement in probing accuracy cannot simply be attributed to the encoding of linguistic knowledge, but can at least partially be explained by changes in the attention distribution for the CLS token. We note that this is also consistent with our finding that the improvement in probing accuracy vanishes when comparing to the mean-pooling baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "3. Some other task combinations have a negative effect on the probing task performance, suggesting that the linguistic knowledge our probing classifiers are testing for is indeed no longer (linearly) accessible. However, it remains unclear whether fine-tuning indeed removes the linguistic knowledge our probing classifiers are testing for from the representations or whether it is simply no longer linearly separable. We are planning to further investigate this in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "We investigated the interplay between fine-tuning and layer-wise sentence-level probing accuracy and found that fine-tuning can lead to substantial changes in probing accuracy. However, these changes vary greatly depending on the encoder model and the fine-tuning and probing task combination. Our analysis of attention distributions after fine-tuning showed that changes in probing accuracy cannot be attributed to the encoding of linguistic knowledge alone but may just as well be caused by changes in the attention distribution. At the same time, our perplexity analysis showed that fine-tuning has profound effects on the representations of a pre-trained model, but our probing analysis cannot sufficiently detail whether it leads to forgetting of the probed linguistic information. Hence, we argue that the effects of fine-tuning on pre-trained representations should be carefully interpreted. Table 3 shows the hyperparameters used when fine-tuning BERT, RoBERTa, and ALBERT on CoLA, SST-2, RTE, and SQuAD. On SST-2, training for a single epoch was sufficient, and we did not observe a significant improvement when training for more epochs. Table 4 shows the number of training and development samples for each of the fine-tuning datasets considered in our experiments. Additionally, we report the metric used to evaluate performance for each of the tasks. 
Table 5 shows the effect of fine-tuning on RTE and SQuAD on the layer-wise accuracy for all three encoder models across the three probing tasks. Figure 4 and Figure 5 show the change in probing accuracy \u2206 (in %) across all probing tasks when fine-tuning on CoLA, SST-2, RTE, and SQuAD using CLS-pooling and mean-pooling, respectively. The second y-axis in Figure 4 shows the layer-wise difference after fine-tuning compared to the mean-pooling baseline. Note that only in very few cases is this difference larger than zero. Figure 4: Difference in probing accuracy \u2206 (in %) when using CLS-pooling after fine-tuning on CoLA, SST-2, RTE, and SQuAD for all three encoder models BERT, RoBERTa, and ALBERT across all probing tasks considered in this work. The second y-axis shows the layer-wise improvement over the mean-pooling baselines (stars) on the respective task. ", "cite_spans": [], "ref_spans": [ { "start": 892, "end": 899, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 1131, "end": 1138, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 1343, "end": 1350, "text": "Table 5", "ref_id": null }, { "start": 1488, "end": 1496, "text": "Figure 4", "ref_id": null }, { "start": 1501, "end": 1509, "text": "Figure 5", "ref_id": "FIGREF2" }, { "start": 1699, "end": 1707, "text": "Figure 4", "ref_id": null }, { "start": 1970, "end": 1978, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Merchant et al. (2020) follow a similar reasoning. They find that fine-tuning on a dependency parsing task leads to an improvement on the constituents probing task and attribute this to improved linguistic knowledge. Similarly, Pruksachatkun et al. (2020) view probing task performance as \"an indicator for the acquisition of a particular language skill.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "CoLA contains sentences with syntactic, morphological and semantic violations. However, only about 15% of the sentences are labeled with morphological and semantic violations. Hence, we suppose that fine-tuning on CoLA should increase a model's sensitivity to syntactic violations to a greater extent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For BERT and ALBERT, h corresponds to the hidden state of the [CLS] token. For RoBERTa, the first token of every sentence is the <s> token. We will refer to both of them as the CLS token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that perplexity results are not directly comparable between BERT and RoBERTa since both models have different vocabularies. However, what we are interested in is rather", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Badr Abdullah for his comments and suggestions. We would also like to thank the reviewers for their useful comments and feedback, in particular R1. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -project-id 232722074 -SFB 1102.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": " Table 5 : Change in probing accuracy \u2206 (in %) of RTE and SQuAD fine-tuned models compared to the pre-trained models when using CLS and mean-pooling. 
We average the difference in probing accuracy over two different layers groups: layers 0 to 6 and layers 7 to 12.", "cite_spans": [], "ref_spans": [ { "start": 1, "end": 8, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", "authors": [ { "first": "Yossi", "middle": [], "last": "Adi", "suffix": "" }, { "first": "Einat", "middle": [], "last": "Kermany", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Lavi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.04207" ] }, "num": null, "urls": [], "raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. arXiv preprint arXiv:1608.04207.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The second pascal recognising textual entailment challenge", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. 2006. The second pascal recognising textual entailment challenge. Proceed- ings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "What do neural machine translation models learn about morphology?", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "861--872", "other_ids": { "DOI": [ "10.18653/v1/P17-1080" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Has- san Sajjad, and James Glass. 2017. What do neu- ral machine translation models learn about morphol- ogy? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 861-872, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The fifth pascal recognizing textual entailment challenge", "authors": [ { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2009, "venue": "Proc Text Analysis Conference (TAC'09)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TAC'09).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "German", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2126--2136", "other_ids": { "DOI": [ "10.18653/v1/P18-1198" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The pascal recognising textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW'05", "volume": "", "issue": "", "pages": "177--190", "other_ids": { "DOI": [ "10.1007/11736790_9" ] }, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Proceedings of the First Inter- national Conference on Machine Learning Chal- lenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual En- tailment, MLCW'05, page 177-190, Berlin, Heidel- berg. 
Springer-Verlag.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping", "authors": [ { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Ilharco", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.06305" ] }, "num": null, "urls": [], "raw_text": "Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stop- ping. arXiv preprint arXiv:2002.06305.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Probing for semantic evidence of composition by means of simple classification tasks", "authors": [ { "first": "Allyson", "middle": [], "last": "Ettinger", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Elgohary", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", "volume": "", "issue": "", "pages": "134--139", "other_ids": { "DOI": [ "10.18653/v1/W16-2524" ] }, "num": null, "urls": [], "raw_text": "Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceed- ings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134-139, Berlin, Germany. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The third pascal recognizing textual entailment challenge", "authors": [ { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, RTE '07", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third pascal recogniz- ing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, RTE '07, page 1-9, USA. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Designing and interpreting probes with control tasks", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2733--2743", "other_ids": { "DOI": [ "10.18653/v1/D19-1275" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Visualisation and'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure", "authors": [ { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Veldhoen", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" } ], "year": 2018, "venue": "Journal of Artificial Intelligence Research", "volume": "61", "issue": "", "pages": "907--926", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and'diagnostic classifiers' re- veal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Probing what different NLP tasks teach machines about function word comprehension", "authors": [ { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)", "volume": "", "issue": "", "pages": "235--249", "other_ids": { "DOI": [ "10.18653/v1/S19-1026" ] }, "num": null, "urls": [], "raw_text": "Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bow- man, and Ellie Pavlick. 2019. Probing what dif- ferent NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Se- mantics (*SEM 2019), pages 235-249, Minneapolis, Minnesota. Association for Computational Linguis- tics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Empirical linguistic study of sentence embeddings", "authors": [ { "first": "Katarzyna", "middle": [], "last": "Krasnowska", "suffix": "" }, { "first": "-", "middle": [], "last": "Kiera\u015b", "suffix": "" }, { "first": "Alina", "middle": [], "last": "Wr\u00f3blewska", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5729--5739", "other_ids": { "DOI": [ "10.18653/v1/P19-1573" ] }, "num": null, "urls": [], "raw_text": "Katarzyna Krasnowska-Kiera\u015b and Alina Wr\u00f3blewska. 2019. Empirical linguistic study of sentence em- beddings. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 5729-5739, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Albert: A lite bert for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Open sesame: Getting inside BERT's linguistic knowledge", "authors": [ { "first": "Yongjie", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "241--253", "other_ids": { "DOI": [ "10.18653/v1/W19-4825" ] }, "num": null, "urls": [], "raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1073--1094", "other_ids": { "DOI": [ "10.18653/v1/N19-1112" ] }, "num": null, "urls": [], "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 1073-1094, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "What happens to bert embeddings during fine-tuning? arXiv preprint", "authors": [ { "first": "Amil", "middle": [], "last": "Merchant", "suffix": "" }, { "first": "Elahe", "middle": [], "last": "Rahimtoroghi", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.14448" ] }, "num": null, "urls": [], "raw_text": "Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to bert embeddings during fine-tuning? arXiv preprint arXiv:2004.14448.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Pointer sentinel mixture models", "authors": [ { "first": "Stephen", "middle": [], "last": "Merity", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture mod- els. 
ArXiv, abs/1609.07843.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "8026--8037", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d' Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8026-8037. Curran Asso- ciates, Inc.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. 
In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "F\u00e9vry", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.01088" ] }, "num": null, "urls": [], "raw_text": "Jason Phang, Thibault F\u00e9vry, and Samuel R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work? arXiv preprint", "authors": [ { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaoyi", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Richard", "middle": [ "Yuanzhe" ], "last": "Zhang", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Kann", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00628" ] }, "num": null, "urls": [], "raw_text": "Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R Bowman. 2020. Intermediate-task transfer learning with pretrained models for natural language under- standing: When and why does it work? arXiv preprint arXiv:2005.00628.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 
OpenAI Blog, 1(8):9.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "2020. A primer in bertology: What we know about how bert works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.12327" ] }, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works. arXiv preprint arXiv:2002.12327.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A metric for distributions with applications to image databases", "authors": [ { "first": "Y", "middle": [], "last": "Rubner", "suffix": "" }, { "first": "C", "middle": [], "last": "Tomasi", "suffix": "" }, { "first": "L", "middle": [ "J" ], "last": "Guibas", "suffix": "" } ], "year": 1998, "venue": "Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)", "volume": "", "issue": "", "pages": "59--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Rubner, C. Tomasi, and L. J. Guibas. 1998. A metric for distributions with applications to image databases. In Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), pages 59-66.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Does string-based neural MT learn source syntax?", "authors": [ { "first": "Xing", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Inkit", "middle": [], "last": "Padhi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1526--1534", "other_ids": { "DOI": [ "10.18653/v1/D16-1159" ] }, "num": null, "urls": [], "raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1526- 1534, Austin, Texas. 
Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "oLMpics-On what Language Model Pre-training Captures", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.13283" ] }, "num": null, "urls": [], "raw_text": "Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. oLMpics-On what Lan- guage Model Pre-training Captures. arXiv preprint arXiv:1912.13283.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "What do you learn from context? 
probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextu- alized word representations. In International Con- ference on Learning Representations.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Bowman. 2019a. Can you tell me how to get past sesame street? 
sentence-level pretraining beyond language modeling", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hula", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Raghavendra", "middle": [], "last": "Pappagari", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "Mccoy", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Yinghui", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Katherin", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Shuning", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "", "suffix": "" } ], "year": null, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4465--4476", "other_ids": { "DOI": [ "10.18653/v1/P19-1439" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pap- pagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bow- man. 2019a. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4465-4476, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Inter- national Conference on Learning Representations.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Neural network acceptability judgments", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.12471" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2018. Neural network acceptability judgments. 
arXiv preprint arXiv:1805.12471.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R'emi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Entropy and Earth mover's distance of the attention for the CLS token for each layer with the RoBERTa model on the bigram-shift dataset. The mean over all input sequences and the mean over all attention heads of a layer are taken. The Earth Mover Distance is computed between the base model and each fine-tuned model.", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Perplexity on Wikitext-2 of models consisting of a fine-tuned encoder and a pre-trained MLM-head. Plots (a) and (b) show how perplexity changes over the course of fine-tuning with epoch 0 showing the perplexity of the pre-trained model. (c) and (d) show how perplexity changes when a number of last layers of the fine-tuned encoder are replaced with corresponding layers from the pre-trained model. Note the different y-axes for RoBERTa and BERT.", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Difference in probing accuracy \u2206 (in %) when using mean-pooling after fine-tuning on CoLA, SST-2, RTE, and SQuAD for all three encoder models BERT, RoBERTa, and ALBERT across all probing tasks considered in this work.", "num": null, "uris": null }, "TABREF1": { "html": null, "content": "
[Figure 1: line plots of probing accuracy (y-axis) over layer index 0-12 (x-axis) for panels (a) bigram-shift, (b) coordination-inversion, and (c) odd-man-out, with one curve per encoder and pooling method (ALBERT CLS, ALBERT mean, BERT CLS, BERT mean, RoBERTa CLS, RoBERTa mean); only the axis labels, legend entries, and panel titles are recoverable from the extracted plot.]
BERT-base-cased
Probing Task           CLS-pooling                       mean-pooling
                       CoLA            SST-2             CoLA            SST-2
                       0-6    7-12     0-6     7-12      0-6    7-12     0-6     7-12
bigram-shift           0.07   4.73     −1.02   −4.63     0.23   1.45     −0.37   −3.24
coordinate-inversion   −0.10  1.90     −0.25   −1.15     0.14   0.29     −0.48   −0.85
odd-man-out            −0.20  0.26     −0.02   −1.28     −0.34  −0.29    −0.30   −1.09

RoBERTa-base
Probing Task           CLS-pooling                       mean-pooling
                       CoLA            SST-2             CoLA            SST-2
                       0-6    7-12     0-6     7-12      0-6    7-12     0-6     7-12
bigram-shift           0.58   5.35     −2.41   −7.22     0.69   1.74     −0.23   −4.87
coordinate-inversion   −0.72  1.84     −1.28   −0.63     −0.22  0.02     −0.18   −3.83
odd-man-out            −0.66  1.05     −1.09   −2.40     −0.08  −0.55    −0.46   −3.61

ALBERT-base-v1
Probing Task           CLS-pooling                       mean-pooling
                       CoLA            SST-2             CoLA            SST-2
                       0-6    7-12     0-6     7-12      0-6    7-12     0-6     7-12
bigram-shift           1.55   3.39     −1.94   −5.15     0.26   0.66     −0.70   −2.73
coordinate-inversion   −0.69  −1.53    −1.07   −2.87     −0.07  −1.19    −0.35   −1.53
odd-man-out            −0.42  −1.39    −0.90   −2.75     −0.27  −1.40    −0.60   −2.82
Code to reproduce our results and figures is available online: https://github.com/uds-lsv/probing-and-finetuning
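To make the CLS- vs. mean-pooling comparison above concrete, here is a minimal sketch (not the authors' released code; it assumes the Hugging Face transformers library and PyTorch) of how both sentence representations can be read off a chosen encoder layer before training a linear probe on them:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative sketch only: extract CLS- and mean-pooled sentence
# representations from one layer of a pre-trained encoder.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
encoder.eval()

def pooled_representations(sentences, layer=7):
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc).hidden_states[layer]    # (batch, seq_len, hidden_size)
    cls_pooled = hidden[:, 0, :]                        # representation of the first ([CLS]) token
    mask = enc["attention_mask"].unsqueeze(-1).float()  # exclude padding positions from the mean
    mean_pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return cls_pooled, mean_pooled
```

A simple linear classifier (e.g., scikit-learn's LogisticRegression) trained on either representation for a SentEval task then yields layer-wise probing accuracies of the kind summarized above.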
", "num": null, "type_str": "table", "text": "Layer-wise probing accuracy on bigram-shift, coordination inversion, and odd-man-out for BERT, RoBERTa, and ALBERT. For all models mean-pooling (solid lines) consistently improves probing accuracy compared to CLS-pooling (dashed-lines) highlighting the importance of sentence-level information for each of the tasks." }, "TABREF3": { "html": null, "content": "
Statistics    Task
              CoLA     SST-2    RTE     SQuAD
training      8.6k     67k      2.5k    87k
validation    1,043    872      278     10k
metric        MCC      Acc.     Acc.    EM/F1
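The split sizes above can be cross-checked against the public releases of these datasets; the following sketch assumes the Hugging Face datasets library (the paper does not state which loader was used):

```python
from datasets import load_dataset

# Sketch: print training/validation sizes for the fine-tuning tasks listed above.
for task in ["cola", "sst2", "rte"]:
    glue = load_dataset("glue", task)
    print(task, len(glue["train"]), len(glue["validation"]))

squad = load_dataset("squad")
print("squad", len(squad["train"]), len(squad["validation"]))
```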
", "num": null, "type_str": "table", "text": "Hyperparamters used when fine-tuning." }, "TABREF4": { "html": null, "content": "", "num": null, "type_str": "table", "text": "Fine-tuning task statistics." } } } }