{ "paper_id": "D19-1006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:11:29.517261Z" }, "title": "How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings", "authors": [ { "first": "Kawin", "middle": [], "last": "Ethayarajh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "kawin@stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks. However, just how contextual are the contextualized representations produced by models such as ELMo and BERT? Are there infinitely many context-specific representations for each word, or are words essentially assigned one of a finite number of word-sense representations? For one, we find that the contextualized representations of all words are not isotropic in any layer of the contextualizing model. While representations of the same word in different contexts still have a greater cosine similarity than those of two different words, this self-similarity is much lower in upper layers. This suggests that upper layers of contextualizing models produce more context-specific representations, much like how upper layers of LSTMs produce more task-specific representations. In all layers of ELMo, BERT, and GPT-2, on average, less than 5% of the variance in a word's contextualized representations can be explained by a static embedding for that word, providing some justification for the success of contextualized representations.", "pdf_parse": { "paper_id": "D19-1006", "_pdf_hash": "", "abstract": [ { "text": "Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks. However, just how contextual are the contextualized representations produced by models such as ELMo and BERT? Are there infinitely many context-specific representations for each word, or are words essentially assigned one of a finite number of word-sense representations? For one, we find that the contextualized representations of all words are not isotropic in any layer of the contextualizing model. While representations of the same word in different contexts still have a greater cosine similarity than those of two different words, this self-similarity is much lower in upper layers. This suggests that upper layers of contextualizing models produce more context-specific representations, much like how upper layers of LSTMs produce more task-specific representations. In all layers of ELMo, BERT, and GPT-2, on average, less than 5% of the variance in a word's contextualized representations can be explained by a static embedding for that word, providing some justification for the success of contextualized representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The application of deep learning methods to NLP is made possible by representing words as vectors in a low-dimensional continuous space. Traditionally, these word embeddings were static: each word had a single vector, regardless of context (Mikolov et al., 2013a; Pennington et al., 2014) . This posed several problems, most notably that all senses of a polysemous word had to share the same representation. 
More recent work, namely deep neural language models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) , * Work partly done at the University of Toronto.", "cite_spans": [ { "start": 240, "end": 263, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF19" }, { "start": 264, "end": 288, "text": "Pennington et al., 2014)", "ref_id": "BIBREF23" }, { "start": 474, "end": 495, "text": "(Peters et al., 2018)", "ref_id": "BIBREF24" }, { "start": 505, "end": 526, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "have successfully created contextualized word representations, word vectors that are sensitive to the context in which they appear. Replacing static embeddings with contextualized representations has yielded significant improvements on a diverse array of NLP tasks, ranging from questionanswering to coreference resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The success of contextualized word representations suggests that despite being trained with only a language modelling task, they learn highly transferable and task-agnostic properties of language. In fact, linear probing models trained on frozen contextualized representations can predict linguistic properties of words (e.g., part-of-speech tags) almost as well as state-of-the-art models (Liu et al., 2019a; Hewitt and Manning, 2019) . Still, these representations remain poorly understood. For one, just how contextual are these contextualized word representations? Are there infinitely many context-specific representations that BERT and ELMo can assign to each word, or are words essentially assigned one of a finite number of word-sense representations?", "cite_spans": [ { "start": 390, "end": 409, "text": "(Liu et al., 2019a;", "ref_id": "BIBREF16" }, { "start": 410, "end": 435, "text": "Hewitt and Manning, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We answer this question by studying the geometry of the representation space for each layer of ELMo, BERT, and GPT-2. Our analysis yields some surprising findings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. In all layers of all three models, the contextualized word representations of all words are not isotropic: they are not uniformly distributed with respect to direction. Instead, they are anisotropic, occupying a narrow cone in the vector space. The anisotropy in GPT-2's last layer is so extreme that two random words will on average have almost perfect cosine similarity! Given that isotropy has both theoretical and empirical benefits for static embeddings (Mu et al., 2018) , the extent of anisotropy in contextualized represen-tations is surprising.", "cite_spans": [ { "start": 462, "end": 479, "text": "(Mu et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Occurrences of the same word in different contexts have non-identical vector representations. Where vector similarity is defined as cosine similarity, these representations are more dissimilar to each other in upper layers. 
This suggests that, much like how upper layers of LSTMs produce more task-specific representations (Liu et al., 2019a) , upper layers of contextualizing models produce more context-specific representations.", "cite_spans": [ { "start": 326, "end": 345, "text": "(Liu et al., 2019a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Context-specificity manifests very differently in ELMo, BERT, and GPT-2. In ELMo, representations of words in the same sentence grow more similar to each other as context-specificity increases in upper layers; in BERT, they become more dissimilar to each other in upper layers but are still more similar than randomly sampled words are on average; in GPT-2, however, words in the same sentence are no more similar to each other than two randomly chosen words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "4. After adjusting for the effect of anisotropy, on average, less than 5% of the variance in a word's contextualized representations can be explained by their first principal component. This holds across all layers of all models. This suggests that contextualized representations do not correspond to a finite number of word-sense representations, and even in the best possible scenario, static embeddings would be a poor replacement for contextualized ones. Still, static embeddings created by taking the first principal component of a word's contextualized representations outperform GloVe and FastText embeddings on many word vector benchmarks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These insights help justify why the use of contextualized representations has led to such significant improvements on many NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Static Word Embeddings Skip-gram with negative sampling (SGNS) (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) are among the best known models for generating static word embeddings. Though they learn embeddings iteratively in practice, it has been proven that in theory, they both implicitly factorize a word-context matrix containing a co-occurrence statistic (Levy and Goldberg, 2014a,b) . Because they create a single representation for each word, a notable problem with static word embeddings is that all senses of a polysemous word must share a single vector.", "cite_spans": [ { "start": 63, "end": 86, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF19" }, { "start": 97, "end": 122, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF23" }, { "start": 373, "end": 401, "text": "(Levy and Goldberg, 2014a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Contextualized Word Representations Given the limitations of static word embeddings, recent work has tried to create context-sensitive word representations. ELMo (Peters et al., 2018) , BERT (Devlin et al., 2018) , and GPT-2 (Radford et al., 2019) are deep neural language models that are fine-tuned to create models for a wide range of downstream NLP tasks. Their internal representations of words are called contextualized word representations because they are a function of the entire input sentence. 
The success of this approach suggests that these representations capture highly transferable and task-agnostic properties of language (Liu et al., 2019a) . ELMo creates contextualized representations of each token by concatenating the internal states of a 2-layer biLSTM trained on a bidirectional language modelling task (Peters et al., 2018) . In contrast, BERT and GPT-2 are bi-directional and uni-directional transformer-based language models respectively. Each transformer layer of 12layer BERT (base, cased) and 12-layer GPT-2 creates a contextualized representation of each token by attending to different parts of the input sentence (Devlin et al., 2018; Radford et al., 2019) . BERT -and subsequent iterations on BERT (Liu et al., 2019b; Yang et al., 2019) -have achieved state-ofthe-art performance on various downstream NLP tasks, ranging from question-answering to sentiment analysis.", "cite_spans": [ { "start": 162, "end": 183, "text": "(Peters et al., 2018)", "ref_id": "BIBREF24" }, { "start": 191, "end": 212, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" }, { "start": 225, "end": 247, "text": "(Radford et al., 2019)", "ref_id": "BIBREF25" }, { "start": 638, "end": 657, "text": "(Liu et al., 2019a)", "ref_id": "BIBREF16" }, { "start": 826, "end": 847, "text": "(Peters et al., 2018)", "ref_id": "BIBREF24" }, { "start": 1145, "end": 1166, "text": "(Devlin et al., 2018;", "ref_id": "BIBREF8" }, { "start": 1167, "end": 1188, "text": "Radford et al., 2019)", "ref_id": "BIBREF25" }, { "start": 1231, "end": 1250, "text": "(Liu et al., 2019b;", "ref_id": "BIBREF17" }, { "start": 1251, "end": 1269, "text": "Yang et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Probing Tasks Prior analysis of contextualized word representations has largely been restricted to probing tasks (Tenney et al., 2019; Hewitt and Manning, 2019) . This involves training linear models to predict syntactic (e.g., part-of-speech tag) and semantic (e.g., word relation) properties of words. Probing models are based on the premise that if a simple linear model can be trained to accurately predict a linguistic property, then the representations implicitly encode this information to begin with. While these analyses have found that contextualized representations encode semantic and syntactic information, they cannot answer how contextual these representations are, and to what extent they can be replaced with static word embeddings, if at all. Our work in this paper is thus markedly different from most dissections of contextualized representations. It is more similar to Mimno and Thompson (2017) , which studied the geometry of static word embedding spaces.", "cite_spans": [ { "start": 113, "end": 134, "text": "(Tenney et al., 2019;", "ref_id": "BIBREF26" }, { "start": 135, "end": 160, "text": "Hewitt and Manning, 2019)", "ref_id": "BIBREF11" }, { "start": 890, "end": 915, "text": "Mimno and Thompson (2017)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The contextualizing models we study in this paper are ELMo, BERT, and GPT-2 1 . We choose the base cased version of BERT because it is most comparable to GPT-2 with respect to number of layers and dimensionality. The models we work with are all pre-trained on their respective language modelling tasks. 
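To make the layer-wise analysis concrete, the following is a minimal sketch of how per-layer token representations can be extracted from a pretrained BERT model. It uses a recent version of the Hugging Face `transformers` package rather than the earlier PyTorch-Transformers release the paper's footnote mentions, and it omits the alignment of words to WordPiece subtokens, so it is an illustrative assumption about the pipeline, not the authors' exact setup.

```python
# Sketch: per-layer token representations from BERT (base, cased).
# Assumes a recent `transformers` release; ELMo and GPT-2 are handled analogously.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased", output_hidden_states=True)
model.eval()

def layer_representations(sentence: str) -> torch.Tensor:
    """Return a tensor of shape (num_layers + 1, seq_len, hidden_dim).

    Index 0 is the uncontextualized input-embedding output (the paper's
    0th layer); indices 1-12 are the transformer layers. Mapping a word to
    its subtoken position(s) is omitted here for brevity.
    """
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple of 13 tensors, each of shape (1, seq_len, 768)
    return torch.stack(outputs.hidden_states).squeeze(1)

reps = layer_representations("A dog is trying to get bacon off his back.")
print(reps.shape)  # (13, seq_len, 768)
```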
Although ELMo, BERT, and GPT-2 have 2, 12, and 12 hidden layers respectively, we also include the input layer of each contextualizing model as its 0 th layer. This is because the 0 th layer is not contextualized, making it a useful baseline against which to compare the contextualization done by subsequent layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualizing Models", "sec_num": "3.1" }, { "text": "To analyze contextualized word representations, we need input sentences to feed into our pretrained models. Our input data come from the SemEval Semantic Textual Similarity tasks from years 2012 -2016 (Agirre et al., 2012 (Agirre et al., , 2013 (Agirre et al., , 2014 (Agirre et al., , 2015 . We use these datasets because they contain sentences in which the same words appear in different contexts. For example, the word 'dog' appears in \"A panda dog is running on the road.\" and \"A dog is trying to get bacon off his back.\" If a model generated the same representation for 'dog' in both these sentences, we could infer that there was no contextualization; conversely, if the two representations were different, we could infer that they were contextualized to some extent. Using these datasets, we map words to the list of sentences they appear in and their index within these sentences. We do not consider words that appear in less than 5 unique contexts in our analysis.", "cite_spans": [ { "start": 201, "end": 221, "text": "(Agirre et al., 2012", "ref_id": "BIBREF3" }, { "start": 222, "end": 244, "text": "(Agirre et al., , 2013", "ref_id": "BIBREF2" }, { "start": 245, "end": 267, "text": "(Agirre et al., , 2014", "ref_id": "BIBREF1" }, { "start": 268, "end": 290, "text": "(Agirre et al., , 2015", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "We measure how contextual a word representation is using three different metrics: self-similarity, intra-sentence similarity, and maximum explainable variance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "Definition 1 Let w be a word that appears in sentences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "{s 1 , ..., s n } at indices {i 1 , ..., i n } respec- tively, such that w = s 1 [i 1 ] = ... = s n [i n ]. Let f (s, i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "be a function that maps s[i] to its representation in layer of model f . The self similarity of w in layer is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "SelfSim (w) = 1 n 2 \u2212 n \u2211 j \u2211 k = j cos( f (s j , i j ), f (s k , i k ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "(1) where cos denotes the cosine similarity. In other words, the self-similarity of a word w in layer is the average cosine similarity between its contextualized representations across its n unique contexts. If layer does not contextualize the representations at all, then SelfSim (w) = 1 (i.e., the representations are identical across all contexts). 
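As a minimal sketch of Definition 1, assuming a word's representations from its n contexts (for one fixed layer) have already been stacked into an n x d NumPy array, the helper below (name illustrative) implements Eq. (1):

```python
import numpy as np

def self_similarity(occurrences: np.ndarray) -> float:
    """Average pairwise cosine similarity between a word's representations.

    `occurrences` has shape (n, d): one row per context of the word, for a
    fixed layer. Implements Eq. (1): the mean of cos(f(s_j, i_j), f(s_k, i_k))
    over all ordered pairs with j != k.
    """
    normed = occurrences / np.linalg.norm(occurrences, axis=1, keepdims=True)
    sims = normed @ normed.T              # (n, n) cosine similarity matrix
    n = sims.shape[0]
    # Drop the diagonal (each vector's similarity with itself) before averaging.
    return float((sims.sum() - np.trace(sims)) / (n ** 2 - n))
```

For an uncontextualizing layer this returns 1; the anisotropy adjustment of section 3.4 is applied afterwards by subtracting the layer's baseline.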
The more contextualized the representations are for w, the lower we would expect its self-similarity to be.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "Definition 2 Let s be a sentence that is a sequence w 1 , ..., w n of n words. Let f (s, i) be a function that maps s[i] to its representation in layer of model f . The intra-sentence similarity of s in layer is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "IntraSim (s) = 1 n \u2211 i cos( s , f (s, i))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s = 1 n \u2211 i f (s, i)", "eq_num": "(2)" } ], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "Put more simply, the intra-sentence similarity of a sentence is the average cosine similarity between its word representations and the sentence vector, which is just the mean of those word vectors. This measure captures how context-specificity manifests in the vector space. For example, if both IntraSim (s) and SelfSim (w) are low \u2200 w \u2208 s, then the model contextualizes words in that layer by giving each one a context-specific representation that is still distinct from all other word representations in the sentence. If IntraSim (s) is high but SelfSim (w) is low, this suggests a less nuanced contextualization, where words in a sentence are contextualized simply by making their representations converge in vector space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "Definition 3 Let w be a word that appears in sentences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "{s 1 , ..., s n } at indices {i 1 , ..., i n } respec- tively, such that w = s 1 [i 1 ] = ... = s n [i n ]. Let f (s, i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "be a function that maps s[i] to its representation in layer of model f . Where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "[ f (s 1 , i 1 )... f (s n , i n )]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "is the occurrence matrix of w and \u03c3 1 ...\u03c3 m are the first m singular values of this matrix, the maximum explainable variance is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "MEV (w) = \u03c3 2 1 \u2211 i \u03c3 2 i (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "MEV (w) is the proportion of variance in w's contextualized representations for a given layer that can be explained by their first principal component. It gives us an upper bound on how well a static embedding could replace a word's contextualized representations. 
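Definitions 2 and 3 can be sketched in the same way. The snippet below assumes `word_reps` holds one sentence's word vectors (one row per word) and `occurrences` holds one word's vectors across its contexts, both for a fixed layer; the function names are illustrative.

```python
import numpy as np

def intra_sentence_similarity(word_reps: np.ndarray) -> float:
    """Eq. (2): mean cosine similarity between each word vector in a sentence
    and the sentence vector (the mean of the word vectors)."""
    sent_vec = word_reps.mean(axis=0)
    sent_vec = sent_vec / np.linalg.norm(sent_vec)
    normed = word_reps / np.linalg.norm(word_reps, axis=1, keepdims=True)
    return float((normed @ sent_vec).mean())

def max_explainable_variance(occurrences: np.ndarray) -> float:
    """Eq. (3): share of variance in the occurrence matrix captured by its
    first singular direction, i.e. sigma_1^2 / sum_i sigma_i^2."""
    sigma = np.linalg.svd(occurrences, compute_uv=False)
    return float(sigma[0] ** 2 / np.sum(sigma ** 2))
```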
The closer MEV (w) is to 0, the poorer a replacement a static embedding would be; if MEV (w) = 1, then a static embedding would be a perfect replacement for the contextualized representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures of Contextuality", "sec_num": "3.3" }, { "text": "It is important to consider isotropy (or the lack thereof) when discussing contextuality. For example, if word vectors were perfectly isotropic (i.e., directionally uniform), then SelfSim (w) = 0.95 would suggest that w's representations were poorly contextualized. However, consider the scenario where word vectors are so anisotropic that any two words have on average a cosine similarity of 0.99. Then SelfSim (w) = 0.95 would actually suggest the opposite -that w's representations were well contextualized. This is because representations of w in different contexts would on average be more dissimilar to each other than two randomly chosen words. To adjust for the effect of anisotropy, we use three anisotropic baselines, one for each of our contextuality measures. For self-similarity and intra-sentence similarity, the baseline is the average cosine similarity between the representations of uniformly randomly sampled words from different contexts. The more anisotropic the word representations are in a given layer, the closer this baseline is to 1. For maximum explainable variance (MEV), the baseline is the proportion of variance in uniformly randomly sampled word representations that is explained by their first principal component. The more anisotropic the representations in a given layer, the closer this baseline is to 1: even for a random assortment of words, the principal component would be able to explain a large proportion of the variance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adjusting for Anisotropy", "sec_num": "3.4" }, { "text": "Since contextuality measures are calculated for each layer of a contextualizing model, we calculate separate baselines for each layer as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adjusting for Anisotropy", "sec_num": "3.4" }, { "text": "We then subtract from each measure its respective baseline to get the anisotropy-adjusted contexuality measure. For example, the anisotropy-adjusted self-similarity is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adjusting for Anisotropy", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Baseline( f ) = E x,y\u223cU(O) [cos( f (x), f (y))] SelfSim * (w) = SelfSim (w) \u2212 Baseline( f )", "eq_num": "(4)" } ], "section": "Adjusting for Anisotropy", "sec_num": "3.4" }, { "text": "where O is the set of all word occurrences and f (\u2022) maps a word occurrence to its representation in layer of model f . Unless otherwise stated, references to contextuality measures in the rest of the paper refer to the anisotropy-adjusted measures, where both the raw measure and baseline are estimated with 1K uniformly randomly sampled word representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adjusting for Anisotropy", "sec_num": "3.4" }, { "text": "Contextualized representations are anisotropic in all non-input layers. 
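As a concrete reference for the discussion that follows, here is a sketch of how the anisotropy baseline of section 3.4 (the average cosine similarity between uniformly randomly sampled word occurrences, the quantity plotted in Figure 1) might be estimated. It assumes all occurrence vectors for one layer have been collected into a single array; the 1K sample size follows the paper, but the exact sampling details are an assumption.

```python
import numpy as np

def anisotropy_baseline(all_reps: np.ndarray, n_samples: int = 1000,
                        seed: int = 0) -> float:
    """Average cosine similarity between uniformly sampled word occurrences.

    `all_reps` is (N, d): representations of all word occurrences in one layer.
    A value near 0 means roughly isotropic vectors; a value near 1 means the
    vectors occupy a narrow cone (high anisotropy).
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(all_reps), size=n_samples, replace=False)
    normed = all_reps[idx] / np.linalg.norm(all_reps[idx], axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(idx)
    return float((sims.sum() - np.trace(sims)) / (n ** 2 - n))

# Anisotropy-adjusted self-similarity, as in Eq. (4):
#   SelfSim*(w) = self_similarity(occurrences_of_w) - anisotropy_baseline(all_reps)
```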
If word representations from a particular layer were isotropic (i.e., directionally uniform), then the average cosine similarity between uniformly randomly sampled words would be 0 (Arora et al., 2017) . The closer this average is to 1, the more anisotropic the representations. The geometric interpretation of anisotropy is that the word representations all occupy a narrow cone in the vector space rather than being uniform in all directions; the greater the anisotropy, the narrower this cone (Mimno and Thompson, 2017) . As seen in Figure 1 , this implies that in almost all layers of BERT, ELMo and GPT-2, the representations of all words occupy a narrow cone in the vector space. The only exception is ELMo's input layer, which produces static character-level embeddings without using contextual or even positional information (Peters et al., 2018) . It should be noted that not all static embeddings are necessarily isotropic, however; Mimno and Thompson (2017) found that skipgram embeddings, which are also static, are not isotropic.", "cite_spans": [ { "start": 253, "end": 273, "text": "(Arora et al., 2017)", "ref_id": "BIBREF5" }, { "start": 568, "end": 594, "text": "(Mimno and Thompson, 2017)", "ref_id": "BIBREF21" }, { "start": 905, "end": 926, "text": "(Peters et al., 2018)", "ref_id": "BIBREF24" }, { "start": 1015, "end": 1040, "text": "Mimno and Thompson (2017)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 608, "end": 616, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "(An)Isotropy", "sec_num": "4.1" }, { "text": "Contextualized representations are generally more anisotropic in higher layers. As seen in Figure 1 , for GPT-2, the average cosine similarity between uniformly randomly words is roughly 0.6 in layers 2 through 8 but increases exponentially from layers 8 through 12. In fact, word representations in GPT-2's last layer are so anisotropic that any two words have on average an almost perfect cosine similarity! This pattern holds for BERT and Figure 1 : In almost all layers of BERT, ELMo, and GPT-2, the word representations are anisotropic (i.e., not directionally uniform): the average cosine similarity between uniformly randomly sampled words is non-zero. The one exception is ELMo's input layer; this is not surprising given that it generates character-level embeddings without using context. Representations in higher layers are generally more anisotropic than those in lower ones.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 1", "ref_id": null }, { "start": 442, "end": 450, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "(An)Isotropy", "sec_num": "4.1" }, { "text": "ELMo as well, though there are exceptions: for example, the anisotropy in BERT's penultimate layer is much higher than in its final layer. Isotropy has both theoretical and empirical benefits for static word embeddings. In theory, it allows for stronger \"self-normalization\" during training (Arora et al., 2017) , and in practice, subtracting the mean vector from static embeddings leads to improvements on several downstream NLP tasks (Mu et al., 2018) . Thus the extreme degree of anisotropy seen in contextualized word representations -particularly in higher layersis surprising. As seen in Figure 1 , for all three models, the contextualized hidden layer representations are almost all more anisotropic than the input layer representations, which do not incorporate context. 
This suggests that high anisotropy is inherent to, or at least a by-product of, the process of contextualization.", "cite_spans": [ { "start": 291, "end": 311, "text": "(Arora et al., 2017)", "ref_id": "BIBREF5" }, { "start": 436, "end": 453, "text": "(Mu et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 594, "end": 602, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "(An)Isotropy", "sec_num": "4.1" }, { "text": "Contextualized word representations are more context-specific in higher layers. Recall from Definition 1 that the self-similarity of a word, in a given layer of a given model, is the average cosine similarity between its representations in different contexts, adjusted for anisotropy. If the self-similarity is 1, then the representations are not context-specific at all; if the self-similarity is 0, then the representations are maximally context-specific. In Figure 2 , we plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2. For example, the self-similarity is 1.0 in ELMo's input layer because representations in that layer are static character-level embeddings.", "cite_spans": [], "ref_spans": [ { "start": 460, "end": 468, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "In all three models, the higher the layer, the lower the self-similarity is on average. In other words, the higher the layer, the more context-specific the contextualized representations. This finding makes intuitive sense. In image classification models, lower layers recognize more generic features such as edges while upper layers recognize more class-specific features (Yosinski et al., 2014) . Similarly, upper layers of LSTMs trained on NLP tasks learn more task-specific representations (Liu et al., 2019a) . Therefore, it follows that upper layers of neural language models learn more context-specific representations, so as to predict the next word for a given context more accurately. Of all three models, representations in GPT-2 are the most context-specific, with those in GPT-2's last layer being almost maximally context-specific.", "cite_spans": [ { "start": 372, "end": 395, "text": "(Yosinski et al., 2014)", "ref_id": "BIBREF28" }, { "start": 493, "end": 512, "text": "(Liu et al., 2019a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "Stopwords (e.g., 'the', 'of', 'to') have among the most context-specific representations. Across all layers, stopwords have among the lowest self-similarity of all words, implying that their contextualized representations are among the most context-specific. For example, the words with the lowest average self-similarity across ELMo's layers are 'and', 'of', ''s', 'the', and 'to' . This is relatively surprising, given that these words are not polysemous. This finding suggests that the variety Figure 2 : The average cosine similarity between representations of the same word in different contexts is called the word's self-similarity (see Definition 1). Above, we plot the average self-similarity of uniformly randomly sampled words after adjusting for anisotropy (see section 3.4). 
In all three models, the higher the layer, the lower the self-similarity, suggesting that contextualized word representations are more context-specific in higher layers.", "cite_spans": [ { "start": 346, "end": 380, "text": "'and', 'of', ''s', 'the', and 'to'", "ref_id": null } ], "ref_spans": [ { "start": 496, "end": 504, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "of contexts a word appears in, rather than its inherent polysemy, is what drives variation in its contextualized representations. This answers one of the questions we posed in the introduction: ELMo, BERT, and GPT-2 are not simply assigning one of a finite number of word-sense representations to each word; otherwise, there would not be so much variation in the representations of words with so few word senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "Context-specificity manifests very differently in ELMo, BERT, and GPT-2. As noted earlier, contextualized representations are more contextspecific in upper layers of ELMo, BERT, and GPT-2. However, how does this increased contextspecificity manifest in the vector space? Do word representations in the same sentence converge to a single point, or do they remain distinct from one another while still being distinct from their representations in other contexts? To answer this question, we can measure a sentence's intra-sentence similarity. Recall from Definition 2 that the intrasentence similarity of a sentence, in a given layer of a given model, is the average cosine similarity between each of its word representations and their mean, adjusted for anisotropy. In Figure 3 , we plot the average intra-sentence similarity of 500 uniformly randomly sampled sentences.", "cite_spans": [], "ref_spans": [ { "start": 768, "end": 776, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "In ELMo, words in the same sentence are more similar to one another in upper layers. As word representations in a sentence become more context-specific in upper layers, the intra-sentence similarity also rises. This suggests that, in practice, ELMo ends up extending the intuition behind Firth's (1957) distributional hypothesis to the sentence level: that because words in the same sentence share the same context, their contextualized representations should also be similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "In BERT, words in the same sentence are more dissimilar to one another in upper layers. As word representations in a sentence become more context-specific in upper layers, they drift away from one another, although there are exceptions (see layer 12 in Figure 3 ). However, in all layers, the average similarity between words in the same sentence is still greater than the average similarity between randomly chosen words (i.e., the anisotropy baseline). 
This suggests a more nuanced contextualization than in ELMo, with BERT recognizing that although the surrounding sentence informs a word's meaning, two words in the same sentence do not necessarily have a similar meaning because they share the same context.", "cite_spans": [], "ref_spans": [ { "start": 253, "end": 261, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "In GPT-2, word representations in the same sentence are no more similar to each other than randomly sampled words. On average, the unadjusted intra-sentence similarity is roughly the same as the anisotropic baseline, so as seen in Figure 3, the anisotropy-adjusted intra-sentence similarity is close to 0 in most layers of GPT-2. In fact, the intra-sentence similarity is highest in the input layer, which does not contextualize words at all. This is in contrast to ELMo and BERT, where the Figure 3 : The intra-sentence similarity is the average cosine similarity between each word representation in a sentence and their mean (see Definition 2). Above, we plot the average intra-sentence similarity of uniformly randomly sampled sentences, adjusted for anisotropy. This statistic reflects how context-specificity manifests in the representation space, and as seen above, it manifests very differently for ELMo, BERT, and GPT-2.", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 237, "text": "Figure", "ref_id": null }, { "start": 491, "end": 499, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "average intra-sentence similarity is above 0.20 for all but one layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "As noted earlier when discussing BERT, this behavior still makes intuitive sense: two words in the same sentence do not necessarily have a similar meaning simply because they share the same context. The success of GPT-2 suggests that unlike anisotropy, which accompanies context-specificity in all three models, a high intra-sentence similarity is not inherent to contextualization. Words in the same sentence can have highly contextualized representations without those representations being any more similar to each other than two random word representations. It is unclear, however, whether these differences in intra-sentence similarity can be traced back to differences in model architecture; we leave this question as future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Specificity", "sec_num": "4.2" }, { "text": "On average, less than 5% of the variance in a word's contextualized representations can be explained by a static embedding. Recall from Definition 3 that the maximum explainable variance (MEV) of a word, for a given layer of a given model, is the proportion of variance in its contextualized representations that can be explained by their first principal component. This gives us an upper bound on how well a static embedding could replace a word's contextualized representations. Because contextualized representations are anisotropic (see section 4.1), much of the variation across all words can be explained by a sin-gle vector. We adjust for anisotropy by calculating the proportion of variance explained by the first principal component of uniformly randomly sampled word representations and subtracting this proportion from the raw MEV. 
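A short sketch of that adjustment, under the same assumptions as the earlier snippets (occurrence vectors stored as NumPy arrays, 1K random sample, illustrative names), with the MEV helper repeated so the snippet is self-contained:

```python
import numpy as np

def mev(matrix: np.ndarray) -> float:
    """Eq. (3): proportion of variance explained by the first singular direction."""
    sigma = np.linalg.svd(matrix, compute_uv=False)
    return float(sigma[0] ** 2 / np.sum(sigma ** 2))

def adjusted_mev(occurrences: np.ndarray, all_reps: np.ndarray,
                 n_samples: int = 1000, seed: int = 0) -> float:
    """Raw MEV of one word's occurrence matrix minus the same statistic
    computed on a uniform random sample of all word occurrences
    (the anisotropy correction described above)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(all_reps), size=n_samples, replace=False)
    return mev(occurrences) - mev(all_reps[idx])
```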
In Figure 4 , we plot the average anisotropy-adjusted MEV across uniformly randomly sampled words.", "cite_spans": [], "ref_spans": [ { "start": 846, "end": 854, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Static vs. Contextualized", "sec_num": "4.3" }, { "text": "In no layer of ELMo, BERT, or GPT-2 can more than 5% of the variance in a word's contextualized representations be explained by a static embedding, on average. Though not visible in Figure 4 , the raw MEV of many words is actually below the anisotropy baseline: i.e., a greater proportion of the variance across all words can be explained by a single vector than can the variance across all representations of a single word. Note that the 5% threshold represents the best-case scenario, and there is no theoretical guarantee that a word vector obtained using GloVe, for example, would be similar to the static embedding that maximizes MEV. This suggests that contextualizing models are not simply assigning one of a finite number of word-sense representations to each word -otherwise, the proportion of variance explained would be much higher. Even the average raw MEV is below 5% for all layers of ELMo and BERT; only for GPT-2 is the raw MEV non-negligible, being around 30% on average for layers 2 to 11 due to extremely high anisotropy.", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 191, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Static vs. Contextualized", "sec_num": "4.3" }, { "text": "Principal components of contextualized representations in lower layers outperform GloVe and FastText on many benchmarks. As noted Figure 4 : The maximum explainable variance (MEV) of a word is the proportion of variance in its contextualized representations that can be explained by their first principal component (see Definition 3). Above, we plot the average MEV of uniformly randomly sampled words after adjusting for anisotropy. In no layer of any model can more than 5% of the variance in a word's contextualized representations be explained by a static embedding. Table 1 : The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word's contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT's contextualized representations often outperform GloVe and FastText vectors.", "cite_spans": [], "ref_spans": [ { "start": 130, "end": 138, "text": "Figure 4", "ref_id": null }, { "start": 571, "end": 578, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Static vs. Contextualized", "sec_num": "4.3" }, { "text": "earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. In Table 1 , we plot the performance of these PC static embeddings on several benchmark tasks 2 . These tasks cover semantic similarity, analogy solving, and concept categorization: Sim-Lex999 (Hill et al., 2015) , MEN (Bruni et al., 2014) , WS353 (Finkelstein et al., 2002) , RW (Luong et al., 2013) , SemEval-2012 (Jurgens et al., 2012 , Google analogy solving (Mikolov et al., 2013a) MSR analogy solving (Mikolov et al., 2013b) , BLESS (Baroni and Lenci, 2011) and AP (Almuhareb and Poesio, 2004) . 
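The PC static embeddings evaluated in Table 1 can be derived with a sketch like the one below. Whether the occurrence matrix should be mean-centred or length-normalised first is not specified, so the plain SVD here is an assumption, and the function name is illustrative.

```python
import numpy as np

def pc_static_embedding(occurrences: np.ndarray) -> np.ndarray:
    """Static embedding for one word: the first principal component of its
    occurrence matrix (n_contexts, d) in a chosen layer."""
    # The first right singular vector is the direction of greatest variation
    # across the word's contextualized representations; its sign is arbitrary.
    _, _, vt = np.linalg.svd(occurrences, full_matrices=False)
    return vt[0]
```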
We leave out layers 3-10 in Table 1 because their performance is between those of Layers 2 and 11. (The Word Embeddings Benchmarks package was used for evaluation.)", "cite_spans": [ { "start": 349, "end": 368, "text": "(Hill et al., 2015)", "ref_id": "BIBREF12" }, { "start": 375, "end": 395, "text": "(Bruni et al., 2014)", "ref_id": "BIBREF7" }, { "start": 404, "end": 430, "text": "(Finkelstein et al., 2002)", "ref_id": "BIBREF9" }, { "start": 436, "end": 456, "text": "(Luong et al., 2013)", "ref_id": "BIBREF18" }, { "start": 459, "end": 471, "text": "SemEval-2012", "ref_id": null }, { "start": 472, "end": 493, "text": "(Jurgens et al., 2012", "ref_id": "BIBREF13" }, { "start": 519, "end": 542, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF19" }, { "start": 563, "end": 586, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF20" }, { "start": 595, "end": 619, "text": "(Baroni and Lenci, 2011)", "ref_id": "BIBREF6" }, { "start": 627, "end": 655, "text": "(Almuhareb and Poesio, 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 159, "end": 166, "text": "Table 1", "ref_id": null }, { "start": 687, "end": 727, "text": "Table 1 because their performance is 2", "ref_id": null } ], "eq_spans": [], "section": "Static vs. Contextualized", "sec_num": "4.3" }, { "text": "The best-performing PC static embeddings belong to the first layer of BERT, although those from the other layers of BERT and ELMo also outperform GloVe and FastText on most benchmarks. For all three contextualizing models, PC static embeddings created from lower layers are more effective than those created from upper layers. Those created using GPT-2 also perform markedly worse than their counterparts from ELMo and BERT. Given that upper layers are much more context-specific than lower layers, and given that GPT-2's representations are more context-specific than ELMo and BERT's (see Figure 2 ), this suggests that the PCs of highly context-specific representations are less effective on traditional benchmarks. Those derived from less context-specific representations, such as those from Layer 1 of BERT, are much more effective.", "cite_spans": [], "ref_spans": [ { "start": 584, "end": 592, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Static vs. Contextualized", "sec_num": "4.3" }, { "text": "Our findings offer some new directions for future work. For one, as noted earlier in the paper, Mu et al. (2018) found that making static embeddings more isotropic -by subtracting their mean from each embedding -leads to surprisingly large improvements in performance on downstream tasks. Given that isotropy has benefits for static embeddings, it may also have benefits for contextualized word representations, although the latter have already yielded significant improvements despite being highly anisotropic. Therefore, adding an anisotropy penalty to the language modelling objective -to encourage the contextualized representations to be more isotropic -may yield even better results.", "cite_spans": [ { "start": 96, "end": 112, "text": "Mu et al. (2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "5" }, { "text": "Another direction for future work is generating static word representations from contextualized ones. While the latter offer superior performance, there are often challenges to deploying large models such as BERT in production, both with respect to memory and run-time. In contrast, static representations are much easier to deploy. 
Our work in section 4.3 suggests that not only it is possible to extract static representations from contextualizing models, but that these extracted vectors often perform much better on a diverse array of tasks compared to traditional static embeddings such as GloVe and FastText. This may be a means of extracting some use from contextualizing models without incurring the full cost of using them in production.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "5" }, { "text": "In this paper, we investigated how contextual contextualized word representations truly are. For one, we found that upper layers of ELMo, BERT, and GPT-2 produce more context-specific representations than lower layers. This increased context-specificity is always accompanied by increased anisotropy. However, context-specificity also manifests differently across the three models; the anisotropy-adjusted similarity between words in the same sentence is highest in ELMo but almost non-existent in GPT-2. We ultimately found that after adjusting for anisotropy, on average, less than 5% of the variance in a word's contextualized representations could be explained by a static embedding. This means that even in the best-case scenario, in all layers of all models, static word embeddings would be a poor replacement for contextualized ones. These insights help explain some of the remarkable success that contextualized representations have had on a diverse array of NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We use the pretrained models provided in an earlier version of the PyTorch-Transformers library.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their insightful comments. We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "and pilot on interpretability", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Mona", "middle": [ "T" ], "last": "Cer", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Montse", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Maritxalar", "suffix": "" }, { "first": "", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2015, "venue": "Semeval-2015 task 2: Semantic textual similarity", "volume": "", "issue": "", "pages": "252--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel M Cer, Mona T Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, et al. 2015. Semeval-2015 task 2: Seman- tic textual similarity, English, Spanish and pilot on interpretability. In Proceedings SemEval@ NAACL- HLT. 
pages 252-263.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semeval-2014 task 10: Multilingual semantic textual similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Mona", "middle": [ "T" ], "last": "Cer", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Guo", "suffix": "" }, { "first": "German", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2014, "venue": "Proceedings Se-mEval@ COLING", "volume": "", "issue": "", "pages": "81--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel M Cer, Mona T Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilin- gual semantic textual similarity. In Proceedings Se- mEval@ COLING. pages 81-91.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2013, "venue": "SEM 2013: The Second Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. Sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity. In SEM 2013: The Second Joint Conference on Lexical and Computational Seman- tics. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Semeval-2012 task 6: A pilot on semantic textual similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation. Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pi- lot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Com- putational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation. 
Association for Computa- tional Linguistics, pages 385-393.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Attribute-based and value-based clustering: An evaluation", "authors": [ { "first": "Abdulrahman", "middle": [], "last": "Almuhareb", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdulrahman Almuhareb and Massimo Poesio. 2004. Attribute-based and value-based clustering: An evaluation. In Proceedings of the 2004 conference on empirical methods in natural language process- ing.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A simple but tough-to-beat baseline for sentence embeddings", "authors": [ { "first": "Sanjeev", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Yingyu", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Tengyu", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International Conference on Learning Representations.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "How we blessed distributional semantic evaluation", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Pro- ceedings of the GEMS 2011 Workshop on GEomet- rical Models of Natural Language Semantics. Asso- ciation for Computational Linguistics, pages 1-10.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multimodal distributional semantics", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "Journal of Artificial Intelligence Research", "volume": "49", "issue": "", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research 49:1-47.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805 .", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Placing search in context: The concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Zach", "middle": [], "last": "Solan", "suffix": "" }, { "first": "Gadi", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on information systems", "volume": "20", "issue": "1", "pages": "116--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on informa- tion systems 20(1):116-131.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis", "authors": [ { "first": "", "middle": [], "last": "John R Firth", "suffix": "" } ], "year": 1957, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis .", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word represen- tations. In North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2015, "venue": "Computational Linguistics", "volume": "41", "issue": "4", "pages": "665--695", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. 
Computational Linguistics 41(4):665-695.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semeval-2012 task 2: Measuring degrees of relational similarity", "authors": [ { "first": "David", "middle": [ "A" ], "last": "Jurgens", "suffix": "" }, { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" }, { "first": "Keith", "middle": [ "J" ], "last": "Holyoak", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "356--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A Jurgens, Peter D Turney, Saif M Mohammad, and Keith J Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation. Association for Computational Linguistics, pages 356-364.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Linguistic regularities in sparse and explicit word representations", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the eighteenth conference on computational natural language learning", "volume": "", "issue": "", "pages": "171--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014a. Linguistic regularities in sparse and explicit word representations. In Proceedings of the eighteenth conference on computational natural language learning. pages 171-180.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Neural word embedding as implicit matrix factorization", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2177--2185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems. pages 2177-2185.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations.
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Better word representations with recursive neural networks for morphology", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "SIGNLL Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "104--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In SIGNLL Conference on Computational Natural Language Learning (CoNLL). pages 104-113.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems.
pages 3111-3119.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 746-751.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The strange geometry of skip-gram with negative sampling", "authors": [ { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Laure", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2873--2878", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2873-2878.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "All-but-the-top: Simple and effective postprocessing for word representations", "authors": [ { "first": "Jiaqi", "middle": [], "last": "Mu", "suffix": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Pramod", "middle": [], "last": "Viswanath", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In Proceedings of the 7th International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
pages 1532-1543.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). pages 2227-2237.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "What do you learn from context? probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "McCoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.
https://openreview.net/forum?id=SJzSgnRcKX.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.08237" ] }, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "How transferable are features in deep neural networks?", "authors": [ { "first": "Jason", "middle": [], "last": "Yosinski", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Clune", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Hod", "middle": [], "last": "Lipson", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3320--3328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems. pages 3320-3328.", "links": null } }, "ref_entries": {} } }