{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:09:00.963840Z"
},
"title": "Word Equations: Inherently Interpretable Sparse Word Embeddings through Sparse Coding",
"authors": [
{
"first": "Adly",
"middle": [],
"last": "Templeton",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Williams College",
"location": {}
},
"email": "adlytempleton@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings are a powerful natural language processing technique, but they are extremely difficult to interpret. To enable interpretable NLP models, we create vectors where each dimension is inherently interpretable. By inherently interpretable, we mean a system where each dimension is associated with some human-understandable hint that can describe the meaning of that dimension. In order to create more interpretable word embeddings, we transform pretrained dense word embeddings into sparse embeddings. These new embeddings are inherently interpretable: each of their dimensions is created from and represents a natural language word or specific grammatical concept. We construct these embeddings through sparse coding, where each vector in the basis set is itself a word embedding. Therefore, each dimension of our sparse vectors corresponds to a natural language word. We also show that models trained using these sparse embeddings can achieve good performance and are more interpretable in practice, including through human evaluations.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings are a powerful natural language processing technique, but they are extremely difficult to interpret. To enable interpretable NLP models, we create vectors where each dimension is inherently interpretable. By inherently interpretable, we mean a system where each dimension is associated with some human-understandable hint that can describe the meaning of that dimension. In order to create more interpretable word embeddings, we transform pretrained dense word embeddings into sparse embeddings. These new embeddings are inherently interpretable: each of their dimensions is created from and represents a natural language word or specific grammatical concept. We construct these embeddings through sparse coding, where each vector in the basis set is itself a word embedding. Therefore, each dimension of our sparse vectors corresponds to a natural language word. We also show that models trained using these sparse embeddings can achieve good performance and are more interpretable in practice, including through human evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embeddings represent each word in a natural language as a vector in a continuous high dimensional space. Many different pretrained embeddings are readily available (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) , and are used in a range of applications (Li and Yang, 2018) . This vector representation can be said to encode the meaning of the word; not only are similar words close together but linear relationships between words are thought to have conceptual meaning. In the famous example, the vector difference between 'man' and 'woman' is similar to the vector difference between 'king ' and 'queen' (Landauer and Dumais, 1997; Mikolov et al., 2013) . This observation suggests that the vector difference between 'woman' and 'man' represents a concept of gender within the vector space, implying that dimensions or linear combinations of dimensions in the vector space are related to human-understandable concepts. However, in practice, interpreting these vector spaces is extremely difficult. This obscures the behavior of any NLP model built on top of word embeddings.",
"cite_spans": [
{
"start": 169,
"end": 191,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 192,
"end": 216,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF21"
},
{
"start": 217,
"end": 241,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 284,
"end": 303,
"text": "(Li and Yang, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 622,
"end": 663,
"text": "' and 'queen' (Landauer and Dumais, 1997;",
"ref_id": null
},
{
"start": 664,
"end": 685,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To enable interpretable NLP models, we create vectors where each dimension is inherently interpretable. By inherently interpretable, we mean a system where each dimension is associated with some human-understandable hint that can describe the meaning of that dimension. This allows us to directly interpret the coefficients of simple models trained on these vectors. By comparison, most other systems of interpretable word embeddings aim to create dimensions that humans may be able to manually interpret after the fact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To create our vectors, we represent word embeddings as the sparse linear combination of a basis set of other word embeddings. Our primary contribution is that, instead of learning an optimal basis for our sparse vector space, we draw the columns of the basis from the original set of dense word embeddings. This strategy provides a natural label for each sparse dimension and allows us to represent each natural language word as the linear combination of a small number of other natural language words. This representation is itself a more 'interpretable' word embedding. This technique produces representations of words that have interpretable dimensions. We show that these representations are more interpretable and that models trained on these embeddings perform almost as well as models trained on standard dense embeddings. We show how the creation of inherently interpretable vectors can help us understand the behavior and structure of the original word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work has created more interpretable vectors through a variety of methods. However, relatively few approaches create inherently interpretable dimensions. Therefore, we believe that our work, which creates inherently interpretable embeddings through a simple novel method can be the basis of future NLP tools where interpretability is crucial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As an example, we present one randomly selected embedding from our system. More examples can be found in the appendix. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Park et al. (Park et al., 2017 ) find a more interpretable rotation of word embeddings using techniques associated with factor analysis. Other work (Dufter and Sch\u00fctze, 2019; Rothe and Sch\u00fctze, 2016) rotates dense vectors using different methods.",
"cite_spans": [
{
"start": 12,
"end": 30,
"text": "(Park et al., 2017",
"ref_id": "BIBREF19"
},
{
"start": 148,
"end": 174,
"text": "(Dufter and Sch\u00fctze, 2019;",
"ref_id": "BIBREF5"
},
{
"start": 175,
"end": 199,
"text": "Rothe and Sch\u00fctze, 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Koc et al. (\u015e enel et al., 2020) tie concepts to dimensions in a more direct way. They select a concept for each dense dimension and identify words that are associated with these concepts. A penalty term pushes coefficients for these words towards the fixed values.",
"cite_spans": [
{
"start": 11,
"end": 32,
"text": "(\u015e enel et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Other work has focused on interpretability through sparsity. Subramian et al. (Subramanian et al., 2018) created more interpretable embeddings by passing pretrained dense embeddings through a sparse autoencoder.",
"cite_spans": [
{
"start": 61,
"end": 104,
"text": "Subramian et al. (Subramanian et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Panigrahi et al. (Panigrahi et al., 2019) proposed Word2Sense, a generative approach that models each dimension as a 'sense' and word embeddings as a sparse probability distribution over the senses.",
"cite_spans": [
{
"start": 17,
"end": 41,
"text": "(Panigrahi et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "The mathematical technique we use in this paper, Sparse coding, which is defined as the representation of vectors as the sparse linear combination of an overcomplete basis, is a well-studied optimization problem (Coates and Ng, 2011; Hoyer, 2002; Lee et al., 2007) . Previous work (Coates and Ng, 2011) has also shown that basis vectors can be efficiently selected from the set that is being encoded.",
"cite_spans": [
{
"start": 212,
"end": 233,
"text": "(Coates and Ng, 2011;",
"ref_id": "BIBREF4"
},
{
"start": 234,
"end": 246,
"text": "Hoyer, 2002;",
"ref_id": "BIBREF8"
},
{
"start": 247,
"end": 264,
"text": "Lee et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 281,
"end": 302,
"text": "(Coates and Ng, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Faruqui et al. (Faruqui et al., 2015) used nonnegative sparse coding to recode dense word embeddings into more interpretable sparse vectors while learning a basis. However, because they create their basis through direct optimization, the basis vectors (and, consequently, the dimensions in their transformed sparse space) do not have any inherent interpretation and must be manually interpreted. Zhang et al. (Zhang et al., 2019) also used nonnegative sparse coding to learn a set of word factors to recode word2vec embeddings. The basis vectors created in this way are highly redundant, so they then use spectral clustering to remove nearduplicate factors. Then, they are able to manually infer reasonable post hoc interpretations for most of the factors.",
"cite_spans": [
{
"start": 15,
"end": 37,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 396,
"end": 429,
"text": "Zhang et al. (Zhang et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Concurrently with our work, Mathew et al. create an inherently interpretable subspace from pairs of antonyms. They then project embeddings into that subspace, producing lower-dimensional dense vectors (Mathew et al., 2020) .",
"cite_spans": [
{
"start": 201,
"end": 222,
"text": "(Mathew et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Our work uses sparse coding to transform a set of word embeddings from a dense and uninterpretable space into a sparse and interpretable space. Let v D represent a dense word embedding, and let B represent a matrix with basis vectors along the columns. B has size (n S , n N ) where n d is the dimensionality of the dense vectors and n S is the dimensionality of the sparse vectors. We achieve sparse coding using regularized regression, inducing sparsity using the L 1 norm. Formally, this corresponds to finding the sparse vector v S that minimizes the following objective function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "arg min v S ||v D \u2212 v S B|| 2 2 + \u03b1||v S || 1 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "\u03b1 is a hyperparameter that controls the level of sparsity. The first term in Equation 1 ensures the sparse vector corresponds to a vector in the dense space that is similar to the original vector. The second term is a sparsity-inducing penalty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Note that by 'basis' we mean a set of vectors in the dense space, each one corresponding to a dimension in the transformed, sparse, space. Out of necessity, these vectors are overcomplete (there are more dimensions than vectors) and so they do not form a basis according to the traditional definition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
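{
"text": "As an illustration of Equation 1, the following sketch (not the authors' released code) encodes a single dense embedding against a fixed basis using scikit-learn's Lasso solver on toy placeholder data; note that scikit-learn scales the squared-error term by 1/(2n), so its alpha is only proportional to the \u03b1 in Equation 1.",
"code": [
"import numpy as np",
"from sklearn.linear_model import Lasso",
"",
"rng = np.random.default_rng(0)",
"",
"# Toy stand-ins: n_D-dimensional dense embeddings and an n_S-vector basis drawn",
"# from those embeddings (in the paper, n_D = 300 and n_S is about 3,000).",
"n_D, n_S = 50, 200",
"dense_vocab = rng.normal(size=(1000, n_D))",
"dense_vocab /= np.linalg.norm(dense_vocab, axis=1, keepdims=True)",
"B = dense_vocab[:n_S]    # rows are basis embeddings, shape (n_S, n_D)",
"v_D = dense_vocab[500]   # the word embedding we want to encode",
"",
"# Solve  min_{v_S} ||v_D - v_S B||_2^2 + alpha * ||v_S||_1.",
"# scikit-learn's Lasso minimizes (1/(2n)) ||y - Xw||_2^2 + alpha ||w||_1,",
"# so X = B.T, y = v_D, and its alpha differs from Equation 1 by a constant factor.",
"lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)",
"lasso.fit(B.T, v_D)",
"v_S = lasso.coef_        # sparse code, shape (n_S,)",
"",
"print('nonzero coefficients:', np.count_nonzero(v_S))",
"print('reconstruction error:', np.linalg.norm(v_D - v_S @ B))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},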
{
"text": "Previous work using sparse coding to create interpretable word embeddings has considered the basis B to be part of the optimization problem (Faruqui et al., 2015; Zhang et al., 2019) . Our primary contribution is that, instead of learning an optimal basis, we draw the columns of the basis from the original set of dense word embeddings. This strategy provides a natural label for each sparse dimension.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "(Faruqui et al., 2015;",
"ref_id": "BIBREF6"
},
{
"start": 163,
"end": 182,
"text": "Zhang et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We can roughly divide the 'meaning' carried by a word embedding into grammatical and nongrammatical properties. Here we use 'grammatical properties' to mean properties that describe how that word fits into the grammar of the language, such as its part-of-speech, tense, or number. We use 'non-grammatical properties' to mean all other aspects of the meaning of a word. For instance, we expect the embedding for the word 'swimming' to include a grammatical component representing that this word is a present-tense participle and a non-grammatical component that represents the meaning 'to swim'. Of course, this deconstruction is imperfect. Nevertheless, this approach provides a useful insight towards decomposing the meaning of a word embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Basis",
"sec_num": "3.1"
},
{
"text": "Preliminary experiments showed that, without special consideration, grammatical properties would be captured in an unintuitive way. The grammatical components could not be easily isolated to one subset of the nonzero dimensions. Ideally, the grammatical information would be captured in a small number of interpretable dimensions. Instead, each basis vector would capture part of the grammatical component and part of the semantic component. This duality creates difficulty when interpreting our representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Basis",
"sec_num": "3.1"
},
{
"text": "To address this, we construct a small number of grammatical basis vectors and add them to the basis set. For instance, we construct a 'POS-NOUN' vector by taking the mean of all word embeddings corresponding to nouns. For this work, we use a set of 11 grammatical basis vectors, though the number and the construction of these are arbitrary. A description of the grammatical basis vectors is in the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Basis",
"sec_num": "3.1"
},
{
"text": "Next, we make the grammatical basis vectors orthogonal using the Gram-Schmidt process. Finally, we subtract the projection along the grammatical basis vectors from all other ('non-grammatical') basis vectors we use and renormalize them. This procedure separates the grammatical meaning from our non-grammatical basis vectors, ensuring that non-grammatical bases are not also coding for grammatical concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Basis",
"sec_num": "3.1"
},
{
"text": "Note that we only perform this orthogonalization with respect to a very small number of grammatical basis vectors. We find that this procedure does not remove more than 50% of the length of any individual vector and 50% of vectors have less than 20% of their length removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Basis",
"sec_num": "3.1"
},
{
"text": "When encoding a dense vector, instead of finding the grammatical coefficients using sparse coding, we set each grammatical coefficient to the projection along the corresponding grammatical basis vector, which is equal to the dot product similarity between the original vector and the grammatical basis vector. Because the grammatical basis is orthogonal, we can do this for every grammatical basis vector simultaneously. This residual is then transformed using Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Basis",
"sec_num": "3.1"
},
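{
"text": "As a concrete illustration of this subsection, the sketch below is a simplified reconstruction on placeholder data (not the paper's implementation): it orthonormalizes the grammatical vectors with Gram-Schmidt, removes their projections from the non-grammatical basis, and computes a word's grammatical coefficients as dot products, leaving a residual to be encoded with Equation 1.",
"code": [
"import numpy as np",
"",
"def gram_schmidt(vectors):",
"    # Orthonormalize a small set of row vectors (the grammatical basis).",
"    ortho = []",
"    for v in vectors:",
"        for u in ortho:",
"            v = v - (v @ u) * u   # remove the component along u",
"        ortho.append(v / np.linalg.norm(v))",
"    return np.array(ortho)",
"",
"rng = np.random.default_rng(0)",
"n_D = 300",
"gram_raw = rng.normal(size=(11, n_D))    # placeholder grammatical vectors",
"basis = rng.normal(size=(3000, n_D))     # placeholder non-grammatical basis",
"basis /= np.linalg.norm(basis, axis=1, keepdims=True)",
"",
"G = gram_schmidt(gram_raw)               # orthonormal grammatical basis, shape (11, n_D)",
"",
"# Remove the grammatical subspace from every non-grammatical basis vector",
"# and renormalize, so the remaining basis does not also code for grammar.",
"basis = basis - (basis @ G.T) @ G",
"basis /= np.linalg.norm(basis, axis=1, keepdims=True)",
"",
"def grammatical_split(v_dense):",
"    # Return (grammatical coefficients, residual to be encoded with Equation 1).",
"    coeffs = G @ v_dense                 # dot product with each grammatical vector",
"    residual = v_dense - G.T @ coeffs    # subtract the grammatical projection",
"    return coeffs, residual",
"",
"coeffs, residual = grammatical_split(rng.normal(size=n_D))",
"print(coeffs.shape, round(float(np.linalg.norm(residual)), 3))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Basis",
"sec_num": "3.1"
},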
{
"text": "Note that, although we do require hand-crafted features to create the grammatical basis vectors, our system does not use hand-crafted features in the representation of new words. Once the grammatical feature vectors are defined, words can be represented in our sparse space using no more information than their fasttext dense vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Basis",
"sec_num": "3.1"
},
{
"text": "We cannot practically use all words as our basis set, so we have to select a subset. First, we start with the 30,000 most frequent words. We filter out any words that are capitalized or that are not in a standard English vocabulary (using the vocabulary of the spaCy en core web sm model). Next, we filter out any words that are not nouns, verbs, or adjectives. This process removes many basis vectors that may be hard to interpret. This gives us approximately 11,000 remaining potential basis words. From these, we will select 3,000 words to use in the final basis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basis Selection",
"sec_num": "3.2"
},
{
"text": "We use an iterative algorithm that takes, at each step, the potential basis vector with the highest mean cosine similarity to all other vectors. To encourage diversity, this mean is weighted by the lowest cosine dissimilarity that each vector has with any already-selected basis vector. Formally, at each step, we grow the set of basis vectors B by adding the potential basis vector x from the set of unchosen potential basis vectors F \\B that satisfies arg max",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basis Selection",
"sec_num": "3.2"
},
{
"text": "x\u2208F \\B v\u2208V D (x \u2022 v) max b\u2208B (1 \u2212 b \u2022 v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basis Selection",
"sec_num": "3.2"
},
{
"text": "Where V D is the set of dense vectors for the 30,000 most frequent words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basis Selection",
"sec_num": "3.2"
},
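{
"text": "A minimal sketch of this greedy selection criterion, run on random placeholder embeddings rather than the real candidate and vocabulary sets; we assume a weight of 1 for every vocabulary word before any basis vector has been selected, since the first step is not specified above.",
"code": [
"import numpy as np",
"",
"def select_basis(candidates, vocab, n_select):",
"    # Greedy basis selection following Section 3.2. `candidates` and `vocab` are",
"    # unit-norm embedding matrices; returns indices into `candidates`.",
"    selected = []",
"    # Weight for each vocabulary word: its lowest cosine dissimilarity to any",
"    # already-selected basis vector (taken to be 1.0 before anything is selected).",
"    weights = np.ones(len(vocab))",
"    remaining = set(range(len(candidates)))",
"    for _ in range(n_select):",
"        # score(x) = sum_v (x . v) * weights[v], computed for all candidates at once",
"        scores = candidates @ (vocab.T @ weights)",
"        best = max(remaining, key=lambda i: scores[i])",
"        selected.append(best)",
"        remaining.discard(best)",
"        # Tighten the weights using the newly selected basis vector.",
"        weights = np.minimum(weights, 1.0 - vocab @ candidates[best])",
"    return selected",
"",
"# Placeholder data standing in for the ~11,000 candidates and the 30,000-word vocabulary.",
"rng = np.random.default_rng(0)",
"vocab = rng.normal(size=(3000, 50))",
"vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)",
"candidates = vocab[:1100]",
"print(select_basis(candidates, vocab, n_select=10))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basis Selection",
"sec_num": "3.2"
},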
{
"text": "Note that, despite our use of the word 'basis', this is not a basis in the traditional sense; the set of basis vectors are not linearly independent, and there are more basis vectors than dimensions in the original space. However, because of the L1 penalty term, our objective function still allows for optimal decompositions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basis Selection",
"sec_num": "3.2"
},
{
"text": "In order to find the sparse vector representation, we follow the following process, combining the above elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end Process",
"sec_num": "3.3"
},
{
"text": "1. Find the dense vector representation of the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end Process",
"sec_num": "3.3"
},
{
"text": "2. Compute the projection along each vector orthogonal grammatical basis. Store these projections as the first part of the resulting vector. Subtract the projection along this basis before moving on to the next step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end Process",
"sec_num": "3.3"
},
{
"text": "3. Optimize Eq. 1 using the FISTA algorithm (Chalasani et al., 2013) . Store the learned sparse vector as the second part of the resulting vector.",
"cite_spans": [
{
"start": 44,
"end": 68,
"text": "(Chalasani et al., 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end Process",
"sec_num": "3.3"
},
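{
"text": "The sketch below strings these steps together on placeholder inputs. The FISTA routine is our own simplified soft-thresholding implementation rather than the Lightning solver used in the paper, and the grammatical and non-grammatical bases are random stand-ins.",
"code": [
"import numpy as np",
"",
"def soft_threshold(x, t):",
"    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)",
"",
"def fista_sparse_code(v_dense, B, alpha, n_iter=500):",
"    # Minimize ||v_dense - v_S B||_2^2 + alpha * ||v_S||_1 with FISTA.",
"    # B has shape (n_S, n_D); rows are the non-grammatical basis embeddings.",
"    A = B.T                                  # (n_D, n_S): maps sparse codes to dense space",
"    L = 2.0 * np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth term's gradient",
"    x = np.zeros(B.shape[0])",
"    y, t = x.copy(), 1.0",
"    for _ in range(n_iter):",
"        grad = 2.0 * A.T @ (A @ y - v_dense)",
"        x_new = soft_threshold(y - grad / L, alpha / L)",
"        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0",
"        y = x_new + ((t - 1.0) / t_new) * (x_new - x)",
"        x, t = x_new, t_new",
"    return x",
"",
"def encode_word(v_dense, G, B, alpha=0.35):",
"    # Steps 1-3 above for a single dense embedding: grammatical projections,",
"    # then sparse coding of the residual, then concatenation.",
"    gram = G @ v_dense                       # projections on the orthonormal grammatical basis",
"    residual = v_dense - G.T @ gram          # remove the grammatical component",
"    return np.concatenate([gram, fista_sparse_code(residual, B, alpha)])",
"",
"# Placeholder inputs standing in for the FastText vectors and the learned bases.",
"rng = np.random.default_rng(0)",
"n_D = 100",
"G = np.linalg.qr(rng.normal(size=(n_D, 11)))[0].T   # 11 orthonormal grammatical vectors",
"B = rng.normal(size=(400, n_D))",
"B /= np.linalg.norm(B, axis=1, keepdims=True)",
"v = encode_word(rng.normal(size=n_D), G, B)",
"print(v.shape, np.count_nonzero(np.abs(v) > 1e-3))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end Process",
"sec_num": "3.3"
},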
{
"text": "We will evaluate this model in multiple ways. In particular, we care about two contradictory properties of our transformed vector space. First, we want our vector space to be useful in downstream machine learning applications. We expect that, in most applications, increased interpretability comes with some performance cost. Therefore, we care about the performance loss when moving from dense vectors to our sparse vectors. The other goal is that our sparse vectors should be interpretable. It is much harder to articulate exactly what interpretability is or how we can measure it. Metrics such as the Word Intrusion Task (Section2) can act as a useful proxy for interpretability, and we use it as our primary quantitative measure of interpretability. But part of interpretability is, by definition, subjective and any metric is imperfect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We use the FastText (Bojanowski et al., 2017) pretrained 300 dimensional English vectors (without subword information) trained on Wikipedia 2017, UMBC webbase corpus and statmt.org news dataset as the dense vectors that we input into our models. Unless otherwise mentioned, we only consider the 30,000 most frequent words, for computational reasons. We normalize all vectors to have mean 0 and unit length. After learning sparse vectors, we normalize each sparse vector so that it corresponds to a dense vector of unit length. When comparing with the original dense vectors (Fast-Text (Bojanowski et al., 2017 )), we subtract the mean of all vectors, to match our preprocessing.",
"cite_spans": [
{
"start": 20,
"end": 45,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 585,
"end": 609,
"text": "(Bojanowski et al., 2017",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "implementation",
"sec_num": "4.1"
},
{
"text": "In practice, the sparse penalty term will only push coefficients very close to 0. We clamp any coefficient with a magnitude of less than .001 to 0. We found this threshold by taking the lowest cutoff that does not introduce significant irregularities into the tradeoff curves in Section 4.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "implementation",
"sec_num": "4.1"
},
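{
"text": "A two-line illustration of this clamping step (the coefficient values are made up):",
"code": [
"import numpy as np",
"",
"v_sparse = np.array([0.42, -0.0004, 0.0, 0.0009, -0.31])   # example coefficients",
"v_sparse[np.abs(v_sparse) < 1e-3] = 0.0                     # clamp magnitudes below 0.001",
"print(v_sparse)                                             # [ 0.42  0.    0.    0.   -0.31]"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "implementation",
"sec_num": "4.1"
},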
{
"text": "We solve the regularized optimization problems using the FISTA algorithm (Chalasani et al., 2013) , as implemented in the Python Lightning package (Blondel and Pedregosa, 2016) , using default hyperparameters. FISTA is an optimization algorithm that can efficiently solve sparse coding problems. We use the spaCy library (Honnibal and Montani, 2017) to check for out of vocabulary words and perform part-of-speech tagging. We use the numpy (Oliphant, 2006) , CuPy (Okuta et al., 2017) , and Scikit learn (Pedregosa et al., 2011) libraries for various linear algebra implementations. We use the open-source Gensim library (Rehurek and Sojka, 2010) to manipulate word embeddings. For the word analogy task evaluation, we use the 3CosAdd method, as implemented by Gensim. Models processed 30,000 words within a few hours, running across 32 2.5 GHz processors with no GPU.",
"cite_spans": [
{
"start": 73,
"end": 97,
"text": "(Chalasani et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 147,
"end": 176,
"text": "(Blondel and Pedregosa, 2016)",
"ref_id": "BIBREF0"
},
{
"start": 321,
"end": 349,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF7"
},
{
"start": 440,
"end": 456,
"text": "(Oliphant, 2006)",
"ref_id": "BIBREF17"
},
{
"start": 464,
"end": 484,
"text": "(Okuta et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 504,
"end": 528,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF20"
},
{
"start": 621,
"end": 646,
"text": "(Rehurek and Sojka, 2010)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "implementation",
"sec_num": "4.1"
},
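{
"text": "For the analogy evaluation, a minimal Gensim example of a 3CosAdd query; the file path is a placeholder, and most_similar implements 3CosAdd by default.",
"code": [
"from gensim.models import KeyedVectors",
"",
"# Placeholder path; any word2vec-format text file of the vectors being evaluated works.",
"kv = KeyedVectors.load_word2vec_format('sparse_vectors.txt', binary=False)",
"",
"# 3CosAdd analogy query: king - man + woman should be close to queen.",
"print(kv.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "implementation",
"sec_num": "4.1"
},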
{
"text": "To compare our work against other sparse coding approaches, we will often reference the vectors created by Faruqui et al. (Faruqui et al., 2015) . That work generates more interpretable vectors using sparse coding but without inherently interpretable dimensions.",
"cite_spans": [
{
"start": 107,
"end": 144,
"text": "Faruqui et al. (Faruqui et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Previous Work",
"sec_num": "4.2"
},
{
"text": "Note that, because of the penalty term in Equation 1, V S B (the reconstructed vectors) are not exactly equal to the original dense vectors V D . Therefore, we expect a tradeoff between sparsity and this difference (which we call reconstruction error).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reconstruction Error and Sparsity",
"sec_num": "4.3"
},
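{
"text": "The two quantities in this tradeoff can be measured as in the sketch below; the dense vectors, basis, and sparse codes are random placeholders, and reconstruction error is taken here as the mean Euclidean distance between each dense vector and its reconstruction, which is an assumption about the exact metric.",
"code": [
"import numpy as np",
"",
"rng = np.random.default_rng(0)",
"V_D = rng.normal(size=(100, 50))      # placeholder dense vectors (rows = words)",
"B = rng.normal(size=(200, 50))        # placeholder basis (rows = basis embeddings)",
"V_S = rng.normal(size=(100, 200)) * (rng.random((100, 200)) < 0.05)   # placeholder sparse codes",
"",
"reconstruction_error = np.linalg.norm(V_D - V_S @ B, axis=1).mean()",
"mean_nonzeros = np.count_nonzero(V_S, axis=1).mean()",
"print(round(float(reconstruction_error), 3), round(float(mean_nonzeros), 1))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reconstruction Error and Sparsity",
"sec_num": "4.3"
},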
{
"text": "This tradeoff curve is displayed in Figure 1 . Despite the additional constraints of an inherently interpretable system, we suffer only a minor increase in reconstruction error compared to traditional sparse coding. This reconstruction error is the primary drawback of our system; reconstruction error adds a small amount of noise to every model built on top of our sparse vectors. For the remainder of this work, unless otherwise mentioned, we will consider the vectors made with \u03b1 = 0.35. These vectors have, on average, 20 nonzero entries.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Reconstruction Error and Sparsity",
"sec_num": "4.3"
},
{
"text": "In the word2vec vector space, famously, the vector for 'king' plus the vector for 'woman' minus the vector for 'man' is close to the vector for 'queen'. Analogy tasks quantitatively test these properties. The task consists of analogies of the form A is to A as B is to B . The vector space is evaluated on its ability to correctly determine the value of B .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Task",
"sec_num": "4.4"
},
{
"text": "The performance of our vector space at this task is displayed in Table 1. Our model performs poorly on this task. This degradation comes from two sources. First, the drop from the original vectors to the reconstructed vectors that is due to reconstruction error. Second, an additional degradation is caused by the transformation from dense vectors to sparse vectors, especially with cosine similarity. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Task",
"sec_num": "4.4"
},
{
"text": "Next, we demonstrate that our model can be used to build interpretable machine learning systems. To this end, we train classifiers using our word embeddings as input. We demonstrate that these classifiers are not only effective but also interpretable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.5"
},
{
"text": "We evaluate our vectors on two datasets, the IMDB sentiment analysis dataset (Maas et al., 2011) and the TREC question classification dataset (Li and Roth, 2002) . For both of these datasets, we use a logistic regression model and a bag of words representation.",
"cite_spans": [
{
"start": 77,
"end": 96,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF13"
},
{
"start": 142,
"end": 161,
"text": "(Li and Roth, 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.5"
},
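{
"text": "A sketch of this setup on a toy corpus (placeholder embeddings and documents, not the experimental script): each document is represented as the sum of the sparse embeddings of its tokens, and a logistic regression is fit on top.",
"code": [
"import numpy as np",
"from sklearn.linear_model import LogisticRegression",
"",
"def doc_vector(tokens, embeddings, dim):",
"    # Bag-of-words representation: sum of the sparse embeddings of in-vocabulary tokens.",
"    vecs = [embeddings[t] for t in tokens if t in embeddings]",
"    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)",
"",
"# Placeholder sparse embeddings and a toy labeled corpus.",
"rng = np.random.default_rng(0)",
"dim = 20",
"vocab = ['dreadful', 'horrible', 'fabulous', 'dull', 'great', 'movie', 'plot']",
"embeddings = {w: rng.normal(size=dim) for w in vocab}",
"docs = [['fabulous', 'movie'], ['dreadful', 'dull', 'plot'],",
"        ['great', 'plot'], ['horrible', 'movie']]",
"labels = [1, 0, 1, 0]",
"",
"X = np.stack([doc_vector(d, embeddings, dim) for d in docs])",
"clf = LogisticRegression(max_iter=1000).fit(X, labels)",
"print(clf.coef_.shape)    # one coefficient per (labeled) sparse dimension"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.5"
},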
{
"text": "The IMDB movie review dataset consists of 50,000 passages taken from IMDB movie reviews, evenly split between positive and negative reviews. The task is to determine the sentiment of each passage (Maas et al., 2011) .",
"cite_spans": [
{
"start": 196,
"end": 215,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IMDB Sentiment Analysis Dataset",
"sec_num": "4.5.1"
},
{
"text": "We train classifiers using various word embedding spaces as inputs. While we could train deep neural modesl on these vector spaces, neural models do not directly produce interpretable coefficients, and therefore we provide a demonstration on simple logistic regression models. The results are presented in Table 2 . Our vector spaces demonstrate improvement over the original dense vectors (FastText (Bojanowski et al., 2017) ), as well as the traditional sparse coding approach of Faruqui et al. This result holds despite a slight decrease in performance caused by the reconstruction error (as demonstrated by the low performance with reconstructed vectors).",
"cite_spans": [
{
"start": 400,
"end": 425,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 306,
"end": 313,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "IMDB Sentiment Analysis Dataset",
"sec_num": "4.5.1"
},
{
"text": "We can directly interpret our classifier's coefficients. Here, we present the most significant coeffi- Table 2 : Accuracy on the IMDB sentiment analysis dataset and the TREC question classification dataset. We use a logistic regression classifier, which uses as input a bag-of-words sum of various word embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "IMDB Sentiment Analysis Dataset",
"sec_num": "4.5.1"
},
{
"text": "cients (\u03b1 = 0.1) 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IMDB Sentiment Analysis Dataset",
"sec_num": "4.5.1"
},
{
"text": "ln P (positive) 1 \u2212 P (positive) = \u2212157 \u2022 dreadful \u2212 153 \u2022 horrible + 150 \u2022 fabulous \u2212 140 \u2022 dull \u2212 132 \u2022 dreary \u2212 107 \u2022 worsen \u2212 105 \u2022 ridiculous + ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IMDB Sentiment Analysis Dataset",
"sec_num": "4.5.1"
},
{
"text": "Note that these are not coefficients on the frequencies of individual words. Instead, these are coefficients on vectors in the basis set. We can consider them to be coefficients on concepts, which are labeled by the displayed words. The coefficients make sense: positive concepts have positive coefficients, while negative concepts have negative coefficients. This pattern continues for much longer than displayed above, and we have omitted other terms for space reasons. The first term to not fall into this clear interpretation is the 24th-most significant: ... + 74 \u2022 shall + ..' At first, this term appears nonsensical. Looking more closely at this dimension can reveal more about our system. The top five words in the dimension represented by 'shall' are the following: 'henceforth', 'herein', 'hereafter', 'thereof', 'hereby'. We can see here how both our vector space and our regression model pick up on tone. This dimension appears to correspond to a formal and somewhat archaic tone, which is likely not found in a negative internet comment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IMDB Sentiment Analysis Dataset",
"sec_num": "4.5.1"
},
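{
"text": "This kind of inspection can be scripted directly because every sparse dimension carries a word label. The helper functions and data below are illustrative placeholders, not the analysis code used for the paper.",
"code": [
"import numpy as np",
"",
"def top_coefficients(coef, dim_labels, k=4):",
"    # Pair the largest-magnitude classifier coefficients with their basis-word labels.",
"    order = np.argsort(-np.abs(coef))[:k]",
"    return [(dim_labels[i], float(coef[i])) for i in order]",
"",
"def top_words_in_dimension(dim, sparse_vectors, vocab, k=5):",
"    # Words whose sparse code loads most heavily on a given labeled dimension.",
"    order = np.argsort(-sparse_vectors[:, dim])[:k]",
"    return [vocab[i] for i in order]",
"",
"# Placeholder data: six vocabulary words encoded over four labeled dimensions.",
"dim_labels = ['dreadful', 'fabulous', 'shall', 'dull']",
"vocab = ['henceforth', 'thereof', 'awful', 'lovely', 'boring', 'hereby']",
"sparse_vectors = np.array([[0.0, 0.0, 0.9, 0.0],",
"                           [0.0, 0.0, 0.8, 0.0],",
"                           [0.7, 0.0, 0.0, 0.2],",
"                           [0.0, 0.8, 0.0, 0.0],",
"                           [0.1, 0.0, 0.0, 0.9],",
"                           [0.0, 0.0, 0.85, 0.0]])",
"coef = np.array([-1.57, 1.50, 0.74, -1.40])    # e.g. logistic regression weights",
"",
"print(top_coefficients(coef, dim_labels))",
"print(top_words_in_dimension(dim_labels.index('shall'), sparse_vectors, vocab, k=3))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IMDB Sentiment Analysis Dataset",
"sec_num": "4.5.1"
},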
{
"text": "Our next classification task is more complex. The TREC question classification dataset consists of 6,000 questions that are divided into 6 categories based on the expected answer: abbreviations, descriptions, entities, humans, locations, and numeric. Accuracy for various vector spaces is presented in Table 2 . Again, our model does better than the unmodified input vectors we start with, despite some loss from the reconstruction error. Both results suggest that our vector spaces are efficient in regression-based settings, though the performance at the word-analogy task suffers a serious degradation. It is likely that different qualities are needed for these different tasks. The exact-match evaluation of the word analogy task severely punishes even slight noise in the vector space, and cosine similarities are noisy in sparse vectors.",
"cite_spans": [],
"ref_spans": [
{
"start": 302,
"end": 309,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "TREC Question Classification Dataset",
"sec_num": "4.5.2"
},
{
"text": "Once again, we directly interpret the coefficients learned by logistic regression. For space, we display the most significant terms for the HUM category. Questions in this category expect the name of a human as the answer: Some of these coefficients, such as 'songwriter' or 'identities' are intuitive and reveal interesting behavior of the classifier. Others, such as 'wonder', are not. Manual inspection reveals that 'wonder' is used to represent words such as 'How' or 'why' but not 'who', though this behavior is likely noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TREC Question Classification Dataset",
"sec_num": "4.5.2"
},
{
"text": "ln P (HUM) 1 \u2212 P (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TREC Question Classification Dataset",
"sec_num": "4.5.2"
},
{
"text": "To quantitatively measure interpretability, we use human experiments. In particular, we use the word intrusion task (Chang et al., 2009) . In this task, humans are presented with five words, four of which are associated highly with a particular dimension. Participants are asked to choose the word that does not belong. We use our vectors both with and without providing the label of the dimension as a 'hint'.",
"cite_spans": [
{
"start": 116,
"end": 136,
"text": "(Chang et al., 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Intrusion Task",
"sec_num": "4.5.3"
},
{
"text": "We use the following procedure for generating questions. First, we filter candidate words, starting with the 20,000 most frequent words and filtering out words that are not lowercase, words that are Figure 2 : An example of the user interface given to annotators. The following instructions were given to the annotators: 'You will be presented with a group of 5 words. Four of these words are similar in some way and the other one is not. Pick out the word which is dissimilar. You may be provided with a hint about how the words are similar.' not made up of only ASCII alphabetic characters, and words with only one letter. Then we randomly select a dimension. We pick the 4 highest words along that dimension, and one word randomly selected from the bottom 50% of words in that dimension, then randomize the order. Each example is presented to three different Mechanical Turk annotators. An example of the interface presented to annotators is seen in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 2",
"ref_id": null
},
{
"start": 953,
"end": 961,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Intrusion Task",
"sec_num": "4.5.3"
},
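{
"text": "A sketch of this question-generation procedure on placeholder data; the vocabulary filtering described above is omitted, and the helper function is our own illustration.",
"code": [
"import random",
"import numpy as np",
"",
"def make_intrusion_question(sparse_vectors, vocab, rng):",
"    # Build one question: the 4 top words of a random dimension plus one intruder",
"    # drawn from the bottom 50% of words along that dimension, in shuffled order.",
"    dim = rng.randrange(sparse_vectors.shape[1])",
"    order = np.argsort(-sparse_vectors[:, dim])",
"    top4 = [vocab[i] for i in order[:4]]",
"    intruder = vocab[rng.choice(list(order[len(order) // 2:]))]",
"    options = top4 + [intruder]",
"    rng.shuffle(options)",
"    return dim, options, intruder",
"",
"rng = random.Random(0)",
"vocab = ['word%d' % i for i in range(200)]           # placeholder filtered vocabulary",
"sparse_vectors = np.random.default_rng(0).random((200, 30))",
"print(make_intrusion_question(sparse_vectors, vocab, rng))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Intrusion Task",
"sec_num": "4.5.3"
},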
{
"text": "The results of the word intrusion task are presented in Table 3 . When hints are provided, we see a statistically significant improvement in accuracy between our vectors and the sparse coding baseline (p = .00055). In addition, using hints produces a statistically significant improvement (p = .040), validating our motivation for inherently interpretable dimensions. Of course, any quantitative metric of interpretability is imperfect. To qualitatively assess interpretability, randomly selected vectors are presented in the appendix.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Accuracy",
"sec_num": null
},
{
"text": "Our method still has some serious drawbacks. Sparse coding, by its nature, introduces a substantial amount of noise in the form of reconstruction error and sparse coding has the potential to assign very different sparse vectors to similar dense vectors. We hope that future work will produce sparse embeddings that are interpretable by construction without some of the shortcomings of our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "4.6"
},
{
"text": "In this work, we presented a method to create word embeddings that are interpretable by construction. Each dimension of these embeddings corresponds precisely to a natural language word. These embeddings can be presented in a human readable form, and we have shown that most of these representations are intuitive. We have also shown that these embeddings can be used to produce an extremely interpretable classification model that still delivers performance comparable to or better than a classification model based on the original embeddings. Unlike most previous work on interpretable word embeddings, our method does not require humans to interpret and label each dimension. We have previously seen how this feature allows us to easily create interpretable classification models. It also allows us to gain a deeper understanding of the original dense vector space. Previous approaches may have obscured nuanced or hard to interpret behavior. In particular, a human manually interpreting a dimension may not appreciate subtle behavior of the system. Several sections of this work, which have manually examined individual word representations in our system, have revealed the nuanced behavior that our system demonstrates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4.7"
},
{
"text": "Our method still has some serious drawbacks. While we have examined a number of these flaws, many are tied closely to the sparse coding method we have chosen to use. Sparse coding, by its nature, introduces a substantial amount of noise in the form of reconstruction error. In addition to the reconstruction error, sparse coding has the potential to assign very different sparse vectors to similar dense vectors. We hope that future work will produce sparse embeddings that are interpretable by construction without some of the shortcomings of our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4.7"
},
{
"text": "Much of the promise of sparse coding methods remains to be proved. In particular, we believe it will be fruitful to study the representation of syntactic concepts. We have seen that our attempts to disentangle syntactic concepts from our semantic basis vectors were not entirely successful. We would also like to better understand how these methods are applicable in deep learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4.7"
},
{
"text": "There is still a large amount of analytical work left to be done on evaluation. The word intrusion task, while an effective quantitative method, does not offer a complete view of interpretability. Part of this problem is that we do not have any way to quantify interpretability where it is most useful: when building downstream classification models. More fundamentally, we do not have any underlying framework for understanding what it means for a word embedding to be interpretable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4.7"
},
{
"text": "We believe that interpretable word embeddings have great potential for helping us understand and interpret models in a wide range of NLP tasks. Juexiao Zhang, Yubei Chen, Brian Cheung, and Bruno A. Olshausen. 2019. Word embedding visualization via dictionary learning. arXiv preprint arXiv:1910.03833.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4.7"
},
{
"text": "Our approach makes use of four types of grammatical basis vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Gramtacic Basis Descriptions",
"sec_num": "5"
},
{
"text": "1. We use the first principal component of the embeddings of the 30,000 most frequent words. Previous work on word embedding has referred to this as the common discourse vector, or c 0 , and has shown that this vector encodes words that appear commonly in all contexts, such as 'the'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Gramtacic Basis Descriptions",
"sec_num": "5"
},
{
"text": "2. We take the mean of all vectors of capitalized words and use this as a grammatical basis vector to represent capitalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Gramtacic Basis Descriptions",
"sec_num": "5"
},
{
"text": "3. For a variety of parts-of-speech, we use the mean vectors for words with that part-ofspeech (POS). Specifically, we encode a vector for each of the following: nouns, verbs, adjectives, adverbs, and numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Gramtacic Basis Descriptions",
"sec_num": "5"
},
{
"text": "4. We create mean vector differences for the following grammatical concepts: the relationship between singular and plural nouns, the relationship between present-tense verbs and their present participle form, and the relationship between present-tense verbs and their pasttense forms. For each of these relationships, we manually collect approximately 50 example word pairs that fit that relationship. We manually filter for word pairs where either the grammatical relationship does not change the form of the word (i.e., 'deer') or for word pairs where the grammatical change is likely to produce a more complicated change in meaning (i.e., 'math' and 'maths'). We average the differences between pairs of each relationship type and use it as the vector for that relationship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Gramtacic Basis Descriptions",
"sec_num": "5"
},
{
"text": "The choice and construction of these grammatical basis vectors is highly arbitrary, and different grammatical basis vectors could easily be used in different applications or in follow up work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Gramtacic Basis Descriptions",
"sec_num": "5"
},
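{
"text": "A sketch of how these four kinds of grammatical basis vectors could be assembled; the embeddings, part-of-speech lists, and word pairs below are tiny placeholders for the hand-collected resources described above.",
"code": [
"import numpy as np",
"",
"rng = np.random.default_rng(0)",
"n_D = 300",
"emb = {w: rng.normal(size=n_D) for w in",
"       ['the', 'The', 'cat', 'cats', 'run', 'running', 'ran', 'quick', 'seven']}",
"",
"# 1. Common discourse vector c_0: first principal component of the frequent-word embeddings.",
"M = np.stack(list(emb.values()))",
"M = M - M.mean(axis=0)",
"c0 = np.linalg.svd(M, full_matrices=False)[2][0]",
"",
"# 2. Capitalization vector: mean embedding of capitalized words.",
"cap_vec = np.mean([v for w, v in emb.items() if w[0].isupper()], axis=0)",
"",
"# 3. POS vectors: mean embedding per part-of-speech (placeholder tag lists).",
"pos_words = {'NOUN': ['cat', 'cats'], 'VERB': ['run', 'running', 'ran'], 'ADJ': ['quick']}",
"pos_vecs = {tag: np.mean([emb[w] for w in ws], axis=0) for tag, ws in pos_words.items()}",
"",
"# 4. Relation vectors: mean difference over hand-collected word pairs (~50 pairs each in the paper).",
"plural_pairs = [('cat', 'cats')]",
"plural_vec = np.mean([emb[b] - emb[a] for a, b in plural_pairs], axis=0)",
"",
"grammatical_basis = np.stack([c0, cap_vec, plural_vec] + list(pos_vecs.values()))",
"print(grammatical_basis.shape)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Grammatical Basis Descriptions",
"sec_num": "5"
},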
{
"text": "To compare to the sparse coding approach of Faruqui et al., we use their publicly available implementation with the following settings: We use the same input vectors without preprocessing, a dimensionality of 3000, L 2 regularization penalty \u03c4 = 10 \u22125 , as suggested in their paper, and various L 1 regularization penalties (\u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Comparison to Faruqui et al.",
"sec_num": null
},
{
"text": "We randomly select 25 words and display their complete sparse vector representations here: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Word Intrusion Task Implementation C Randomly Selected Word Representations",
"sec_num": null
},
{
"text": "carbon =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Word Intrusion Task Implementation C Randomly Selected Word Representations",
"sec_num": null
},
{
"text": "These weights are real-values and truncated for space. Note that the weights are very large because they correspond to sparse low-magnitude features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thanks to Duane Bailey for his extensive support and advice, and for advising the thesis on which this paper is based.. Thanks to Andrea Danyluk for her guidance as the second reader of that thesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Lightning: large-scale linear classification, regression and ranking in Python",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.200504"
]
},
"num": null,
"urls": [],
"raw_text": "Mathieu Blondel and Fabian Pedregosa. 2016. Light- ning: large-scale linear classification, regression and ranking in Python.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A fast proximal method for convolutional sparse coding",
"authors": [
{
"first": "Rakesh",
"middle": [],
"last": "Chalasani",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Principe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
}
],
"year": 2013,
"venue": "The 2013 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rakesh Chalasani, Jose C Principe, and Naveen Ra- makrishnan. 2013. A fast proximal method for convolutional sparse coding. In The 2013 In- ternational Joint Conference on Neural Networks (IJCNN), pages 1-5. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reading tea leaves: How humans interpret topic models",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Gerrish",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jordan",
"middle": [
"L"
],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "288--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L. Boyd-Graber, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems, pages 288-296.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The importance of encoding versus training with sparse coding and vector quantization",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Coates",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)",
"volume": "",
"issue": "",
"pages": "921--928",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Coates and Andrew Y. Ng. 2011. The impor- tance of encoding versus training with sparse cod- ing and vector quantization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 921-928.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Analytical methods for interpretable ultradense word embeddings",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Dufter",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.08654"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Dufter and Hinrich Sch\u00fctze. 2019. Analytical methods for interpretable ultradense word embed- dings. arXiv preprint arXiv:1904.08654.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Sparse overcomplete word vector representations",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1491--1500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A Smith. 2015. Sparse overcom- plete word vector representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1491-1500.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Non-negative sparse coding",
"authors": [
{
"first": "Patrik",
"middle": [
"O"
],
"last": "Hoyer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing",
"volume": "",
"issue": "",
"pages": "557--565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrik O. Hoyer. 2002. Non-negative sparse coding. In Proceedings of the 12th IEEE Workshop on Neu- ral Networks for Signal Processing, pages 557-565. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and rep- resentation of knowledge. Psychological Review, 104(2):211.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient sparse coding algorithms",
"authors": [
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Battle",
"suffix": ""
},
{
"first": "Rajat",
"middle": [],
"last": "Raina",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "801--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Honglak Lee, Alexis Battle, Rajat Raina, and An- drew Y. Ng. 2007. Efficient sparse coding algo- rithms. In Advances in Neural Information Process- ing Systems, pages 801-808.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning question classifiers",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Li and Dan Roth. 2002. Learning question clas- sifiers. In Proceedings of the 19th International Conference on Computational Linguistics-Volume 1, pages 1-7. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Word embedding for understanding natural language: a survey",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Guide to Big Data Applications",
"volume": "",
"issue": "",
"pages": "83--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Li and Tao Yang. 2018. Word embedding for un- derstanding natural language: a survey. In Guide to Big Data Applications, pages 83-104. Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Daly",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analy- sis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 142-150, Port- land, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The polar framework: Polar opposites enable interpretability of pre-trained word embeddings",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Sandipan",
"middle": [],
"last": "Sikdar",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Lemmerich",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Strohmaier",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.09876"
]
},
"num": null,
"urls": [],
"raw_text": "Binny Mathew, Sandipan Sikdar, Florian Lemmerich, and Markus Strohmaier. 2020. The polar frame- work: Polar opposites enable interpretability of pre-trained word embeddings. arXiv preprint arXiv:2001.09876.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CuPy: A NumPy-compatible library for nvidia gpu calculations",
"authors": [
{
"first": "Ryosuke",
"middle": [],
"last": "Okuta",
"suffix": ""
},
{
"first": "Yuya",
"middle": [],
"last": "Unno",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Nishino",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Hido",
"suffix": ""
},
{
"first": "Crissman",
"middle": [],
"last": "Loomis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Thirty-first Annual Conference on Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryosuke Okuta, Yuya Unno, Daisuke Nishino, Shohei Hido, and Crissman Loomis. 2017. CuPy: A NumPy-compatible library for nvidia gpu calcula- tions. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Thirty-first Annual Conference on Neural Information Process- ing Systems (NIPS).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A guide to NumPy",
"authors": [
{
"first": "Travis",
"middle": [
"E"
],
"last": "Oliphant",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Travis E. Oliphant. 2006. A guide to NumPy, volume 1. Trelgol Publishing USA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Word2sense: Sparse interpretable word embeddings",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Panigrahi",
"suffix": ""
},
{
"first": "Harsha",
"middle": [
"Vardhan"
],
"last": "Simhadri",
"suffix": ""
},
{
"first": "Chiranjib",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5692--5705",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Panigrahi, Harsha Vardhan Simhadri, and Chiranjib Bhattacharyya. 2019. Word2sense: Sparse interpretable word embeddings. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5692-5705.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Rotated word vector representations and their interpretability",
"authors": [
{
"first": "Sungjoon",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Jinyeong",
"middle": [],
"last": "Bak",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Oh",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "401--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungjoon Park, JinYeong Bak, and Alice Oh. 2017. Rotated word vector representations and their inter- pretability. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 401-411.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Software framework for topic modelling with large corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "Rehurek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim Rehurek and Petr Sojka. 2010. Software frame- work for topic modelling with large corpora. In In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. Citeseer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "carbon = 0.79 * nitrogen \u2212 0.38 * CAPITALIZATION + 0.3 * fossil \u2212 0.21 * POS-NOUN + 0.16 * POS-ADJ + 0.14 * C0 \u2212 0.14 * PAST-TENSE + 0.13 * wood + 0.11 * global + 0.1 * atoms \u2212 0.095 * POS-ADV + 0.092 * aluminum \u2212 0.078 * PLURAL-NOUN + 0.073 * greenhouse \u2212 0.072 * POS-PROPN \u2212 0.048 * POS-VERB + 0.046 * forestry + 0.03 * PARTICIPLE + 0.017 * sink + 0.012 * POS-NUM",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "The tradeoff curve between sparsity and reconstruction error. The dashed line shows the tradeoff curve achieved by Faruqui et al. using sparse coding without inherently interpretable dimensions (Section 4.2).",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "0.79 * nitrogen \u2212 0.38 * CAPITALIZATION + 0.3 * fossil \u2212 0.21 * POS-NOUN + 0.16 * POS-ADJ + 0.14 * C0 \u2212 0.14 * PAST-TENSE + 0.13 * wood + 0.11 * global + 0.1 * atoms \u2212 0.095 * POS-ADV + 0.092 * aluminum \u2212 0.078 * PLURAL-NOUN + 0.073 * greenhouse \u2212 0.072 * POS-PROPN \u2212 0.048 * POS-VERB + 0.046 * forestry + 0.03 * PARTICIPLE + 0.017 * sink + 0.012 * POS-NUM reefs = 0.68 * islands \u2212 0.66 * CAPITALIZATION + 0.4 * C0 + 0.35 * PLURAL-NOUN + 0.28 * POS-VERB + 0.25 * rocks + 0.19 * dredging + 0.18 * oysters + 0.12 * POS-ADJ + 0.096 * POS-NUM + 0.096 * POS-ADV + 0.089 * POS-PROPN + 0.086 * tropical + 0.075 * underwater + 0.068 * dunes + 0.063 * seas + 0.06 * diver \u2212 0.058 * PAST-TENSE + 0.042 * sandstone \u2212 0.025 * demon + 0.02 * marine \u2212 0.019 * PARTICIPLE \u2212 0.014 * POS-NOUN \u2212 0.012 * french \u2212 0.0041 * witches Coulson = 0.85 * hacking + 0.72 * C0 + 0.59 * POS-PROPN + 0.25 * CAPITALIZATION \u2212 0.23 * POS-ADJ \u2212 0.19 * POS-NOUN + 0.17 * butler \u2212 0.17 * southern + 0.15 * POS-VERB \u2212 0.14 * website \u2212 0.12 * com \u2212 0.12 * roaring + 0.1 * solicitors \u2212 0.094 * POS-ADV + 0.074 * oats \u2212 0.068 * cathedral \u2212 0.064 * PARTICIPLE + 0.061 * inquiry \u2212 0.06 * dances \u2212 0.056 * fan + 0.042 * POS-NUM \u2212 0.029 * provinces \u2212 0.029 * finals \u2212 0.02 * dance \u2212 0.017 * waters \u2212 0.013 * tango \u2212 0.013 * shame \u2212 0.012 * PAST-TENSE \u2212 0.005 * PLURAL-NOUN roundabout = 0.72 * bypass + 0.4 * roadway \u2212 0.28 * CAPITALIZATION + 0.22 * plaza \u2212 0.16 * PLURAL-NOUN + 0.11 * POS-ADJ + 0.11 * clumsy + 0.1 * airfield + 0.088 * POS-ADV \u2212 0.08 * biological + 0.079 * C0 + 0.051 * POS-NOUN + 0.043 * PAST-TENSE + 0.039 * PARTICIPLE + 0.028 * caravan + 0.025 * ironic + 0.021 * POS-VERB \u2212 0.021 * POS-NUM \u2212 0.0038 * POS-PROPN + 0.003 * nonsensical Hub = 0.49 * bustling + 0.47 * C0 + 0.4 * portal + 0.39 * infrastructure + 0.32 * POS-NOUN + 0.31 * CAPITALIZATION + 0.31 * central \u2212 0.13 * POS-PROPN \u2212 0.1 * PLURAL-NOUN + 0.069 * outage + 0.068 * centre + 0.061 * POS-NUM + 0.058 * connectivity + 0.058 * PARTICIPLE \u2212 0.057 * POS-ADJ + 0.043 * PAST-TENSE \u2212 0.043 * POS-VERB + 0.027 * POS-ADV environmental = 0.43 * sustainability + 0.43 * economic + 0.38 * POS-ADJ \u2212 0.3 * CAPITALIZATION \u2212 0.27 * POS-VERB + 0.27 * regulatory \u2212 0.2 * PAST-TENSE + 0.18 * biological + 0.17 * campaigner + 0.17 * POS-NUM + 0.14 * thermal \u2212 0.14 * POS-ADV \u2212 0.12 * POS-NOUN + 0.1 * health \u2212 0.1 * C0 + 0.087 * PARTICIPLE + 0.087 * POS-PROPN \u2212 0.084 * PLURAL-NOUN + 0.073 * outdoor + 0.055 * chemical + 0.0055 * cultural Churchill = 0.84 * wartime + 0.6 * C0 + 0.41 * CAPITALIZATION + 0.4 * quotation + 0.38 * POS-PROPN + 0.36 * statesman \u2212 0.21 * PARTICIPLE \u2212 0.14 * astronomer \u2212 0.14 * POS-NOUN \u2212 0.11 * POS-ADJ \u2212 0.1 * PAST-TENSE + 0.082 * POS-VERB + 0.078 * POS-NUM + 0.064 * POS-ADV + 0.045 * advising \u2212 0.025 * architectures + 0.022 * PLURAL-NOUN + 0.017 * pint + 0.013 * fascism resident = 0.54 * citizens + 0.49 * native + 0.37 * visiting \u2212 0.19 * PLURAL-NOUN + 0.12 * PAST-TENSE + 0.11 * caretaker \u2212 0.099 * CAPITALIZATION \u2212 0.094 * C0 + 0.082 * PARTICIPLE + 0.082 * ward + 0.077 * POS-NOUN \u2212 0.039 * POS-ADV + 0.022 * proprietor + 0.022 * POS-VERB \u2212 0.0065 * POS-NUM + 0.0045 * POS-PROPN + 0.0036 * POS-ADJ backers = 0.64 * sponsors + 0.4 * POS-NOUN \u2212 0.4 * CAPITALIZATION + 0.33 * advocates 
+ 0.28 * PLURAL-NOUN + 0.19 * POS-PROPN + 0.18 * businessman + 0.18 * businessmen + 0.16 * fans \u2212 0.15 * POS-ADJ + 0.12 * PARTICIPLE + 0.12 * PAST-TENSE \u2212 0.12 * POS-ADV + 0.092 * whose + 0.082 * opposition + 0.065 * POS-VERB + 0.056 * candidacy + 0.055 * touted + 0.047 * startups \u2212 0.024 * POS-NUM + 0.024 * rebels + 0.014 * reformist + 0.013 * investment \u2212 0.002 * C0 rudimentary = 0.84 * basics \u2212 0.65 * C0 + 0.49 * POS-ADJ \u2212 0.41 * POS-VERB + 0.41 * apparatus \u2212 0.36 * POS-NOUN + 0.35 * improvised + 0.15 * POS-ADV + 0.099 * CAPITALIZATION + 0.072 * PARTICIPLE + 0.069 * PLURAL-NOUN + 0.062 * POS-NUM + 0.059 * develop + 0.05 * PAST-TENSE + 0.043 * POS-PROPN admire = 0.73 * admirable \u2212 0.66 * PARTICIPLE \u2212 0.65 * C0 + 0.31 * magnificent + 0.23 * CAPITALIZATION + 0.16 * criticize + 0.16 * POS-NOUN \u2212 0.16 * PAST-TENSE + 0.14 * loves + 0.1 * POS-PROPN + 0.1 * beauty \u2212 0.098 * POS-NUM \u2212 0.068 * PLURAL-NOUN + 0.066 * devotion \u2212 0.061 * POS-ADV \u2212 0.058 * POS-ADJ + 0.039 * openness + 0.02 * charming \u2212 0.00015 * POS-VERB re-add = \u22120.65 * PARTICIPLE \u2212 0.47 * POS-NUM + 0.44 * deleted + 0.43 * POS-VERB + 0.41 * cruft \u2212 0.41 * C0 \u2212 0.3 * PAST-TENSE + 0.28 * section + 0.19 * CAPITALIZATION + 0.17 * categorization + 0.16 * unblock + 0.15 * POS-ADV + 0.11 * POS-PROPN + 0.098 * reversion \u2212 0.09 * POS-ADJ + 0.09 * POS-NOUN + 0.061 * inserting + 0.046 * reference + 0.043 * sourcing + 0.034 * template + 0.027 * encyclopedic + 0.013 * modify \u2212 0.0088 * battleship \u2212 0.0071 * cow + 0.006 * PLURAL-NOUN visuals = 0.47 * cinematography \u2212 0.47 * CAPITALIZATION + 0.29 * evocative + 0.25 * multimedia + 0.21 * videos + 0.19 * POS-NOUN + 0.15 * PLURAL-NOUN + 0.14 * POS-PROPN + 0.12 * hallucinations + 0.11 * awesome + 0.1 * PARTICIPLE + 0.08 * video \u2212 0.079 * POS-VERB + 0.076 * sounds + 0.076 * POS-ADJ + 0.076 * slick + 0.075 * POS-ADV + 0.066 * C0 + 0.062 * dazzling + 0.052 * colorful + 0.044 * interactive + 0.027 * jarring + 0.019 * visualization + 0.0047 * PAST-TENSE + 0.00025 * POS-NUM Conflict = 0.61 * POS-NOUN \u2212 0.49 * POS-PROPN + 0.44 * warfare + 0.4 * escalation + 0.4 * peace + 0.36 * C0 + 0.24 * guideline + 0.23 * CAPITALIZATION + 0.21 * PARTICIPLE + 0.19 * ethnic \u2212 0.19 * POS-VERB + 0.16 * resolved + 0.12 * POS-NUM \u2212 0.099 * PLURAL-NOUN + 0.078 * PAST-TENSE + 0.07 * divergence + 0.065 * geopolitical \u2212 0.05 * stationary + 0.038 * POS-ADJ \u2212 0.032 * shops + 0.03 * polarized \u2212 0.012 * POS-ADV hitter = \u22120.54 * CAPITALIZATION + 0.45 * C0 \u2212 0.42 * PLURAL-NOUN + 0.42 * shortstop + 0.36 * designated + 0.32 * batting + 0.3 * POS-VERB + 0.21 * POS-NOUN + 0.18 * POS-ADV + 0.17 * pitchers + 0.17 * pitcher + 0.14 * catcher \u2212 0.12 * PARTICIPLE \u2212 0.1 * POS-NUM \u2212 0.096 * inane + 0.087 * guy + 0.073 * POS-PROPN + 0.048 * exert \u2212 0.014 * PAST-TENSE + 0.0071 * outs \u2212 0.0064 * POS-ADJ + 0.0019 * swings fence = 0.52 * wire + 0.43 * gates \u2212 0.41 * CAPITALIZATION + 0.35 * yard \u2212 0.32 * PLURAL-NOUN + 0.21 * shrubs + 0.14 * barn + 0.14 * ditch + 0.09 * POS-VERB + 0.085 * side \u2212 0.07 * PARTICIPLE \u2212 0.052 * POS-ADJ + 0.042 * POS-NUM \u2212 0.032 * POS-PROPN \u2212 0.02 * PAST-TENSE \u2212 0.012 * C0 + 0.0068 * nailed \u2212 0.0047 * POS-ADV + 0.00013 * POS-NOUN 1978 = 0.97 * 1970s \u2212 0.89 * POS-ADJ \u2212 0.6 * POS-PROPN + 0.49 * POS-NUM \u2212 0.42 * POS-NOUN + 0.21 * C0 \u2212 0.18 * PLURAL-NOUN \u2212 0.12 * POS-ADV \u2212 0.081 * 
PARTICIPLE \u2212 0.073 * CAPITALIZATION + 0.067 * POS-VERB + 0.041 * PAST-TENSE + 0.039 * seventies + 0.026 * contends heroine = 0.66 * hero + 0.35 * protagonist \u2212 0.34 * CAPITALIZATION \u2212 0.25 * PLURAL-NOUN + 0.14 * actress + 0.13 * girl + 0.1 * C0 + 0.1 * PAST-TENSE + 0.071 * POS-PROPN + 0.07 * POS-NOUN + 0.063 * POS-ADV \u2212 0.058 * POS-NUM + 0.051 * protagonists \u2212 0.029 * POS-VERB + 0.026 * PARTICIPLE + 0.015 * POS-ADJ + 0.014 * goddess structure = 0.91 * structures \u2212 0.35 * CAPITALIZATION \u2212 0.25 * PLURAL-NOUN + 0.17 * structuring + 0.16 * POS-NOUN \u2212 0.085 * POS-VERB \u2212 0.078 * PAST-TENSE + 0.05 * POS-ADV \u2212 0.039 * POS-ADJ \u2212 0.034 * POS-PROPN + 0.029 * POS-NUM + 0.026 * structural + 0.022 * reorganization \u2212 0.022 * C0 + 0.0079 * PARTICIPLE wizards = 0.65 * magic + 0.41 * witches \u2212 0.41 * CAPITALIZATION + 0.36 * PLURAL-NOUN + 0.19 * POS-NOUN \u2212 0.19 * POS-NUM + 0.15 * POS-ADJ + 0.15 * tech + 0.13 * POS-ADV + 0.098 * PAST-TENSE + 0.09 * wannabe + 0.084 * dragons + 0.084 * knights + 0.052 * C0 \u2212 0.036 * POS-PROPN + 0.022 * PARTICIPLE + 0.015 * err \u2212 0.013 * POS-VERB + 0.0041 * guru autistic = 0.49 * preschool + 0.37 * epilepsy \u2212 0.35 * POS-NOUN + 0.33 * POS-ADJ \u2212 0.25 * papal + 0.22 * son + 0.21 * POS-PROPN + 0.16 * twins + 0.12 * PAST-TENSE + 0.12 * therapist + 0.12 * PARTICIPLE \u2212 0.1 * CAPITALIZATION + 0.084 * teenage + 0.072 * trait + 0.069 * psychologist + 0.067 * behaviors + 0.056 * kid + 0.054 * hospitalized \u2212 0.053 * C0 + 0.052 * manipulative \u2212 0.038 * POS-NUM + 0.037 * granddaughter \u2212 0.03 * rounds + 0.03 * PLURAL-NOUN + 0.027 * campers + 0.026 * POS-VERB + 0.0092 * dementia \u2212 0.0054 * regional \u2212 0.0052 * POS-ADV \u2212 0.0023 * sedan tornado = 0.72 * hurricane + 0.49 * C0 \u2212 0.47 * CAPITALIZATION + 0.28 * typhoon \u2212 0.26 * PLURAL-NOUN + 0.23 * tractor + 0.21 * POS-VERB + 0.16 * POS-ADJ + 0.12 * POS-NUM + 0.1 * flattened \u2212 0.1 * ports \u2212 0.097 * opium + 0.072 * POS-ADV + 0.053 * POS-PROPN + 0.053 * avalanche + 0.052 * tape + 0.05 * earthquake \u2212 0.045 * colonial \u2212 0.043 * POS-NOUN + 0.028 * musical \u2212 0.025 * handsets + 0.014 * terrifying + 0.013 * PAST-TENSE + 0.013 * occurrences \u2212 0.0067 * labour + 0.0025 * PARTICIPLE 1852 = 0.85 * 1800s \u2212 0.81 * POS-PROPN \u2212 0.78 * POS-ADJ \u2212 0.61 * POS-NOUN + 0.45 * POS-NUM + 0.43 * C0 \u2212 0.31 * CAPITALIZATION + 0.29 * renders + 0.25 * POS-VERB \u2212 0.19 * PLURAL-NOUN + 0.16 * noted \u2212 0.15 * PARTICIPLE + 0.13 * underscored \u2212 0.082 * POS-ADV + 0.063 * insisting + 0.029 * PAST-TENSE gloom = 0.66 * gloomy \u2212 0.46 * CAPITALIZATION \u2212 0.32 * POS-VERB + 0.28 * pessimism + 0.28 * darkness + 0.15 * PAST-TENSE \u2212 0.14 * PLURAL-NOUN + 0.14 * POS-NOUN + 0.12 * PARTICIPLE + 0.11 * POS-NUM \u2212 0.058 * C0 \u2212 0.058 * POS-ADV + 0.052 * misery + 0.051 * POS-PROPN + 0.037 * slump \u2212 0.025 * POS-ADJ recycle = \u22120.65 * PARTICIPLE + 0.61 * bin + 0.49 * rubbish \u2212 0.48 * C0 \u2212 0.47 * PAST-TENSE + 0.27 * POS-VERB + 0.18 * POS-NOUN + 0.15 * utilize + 0.14 * plastic \u2212 0.12 * POS-NUM + 0.1 * excess + 0.088 * sustainability + 0.083 * POS-PROPN + 0.066 * aluminum + 0.042 * gramthesize \u2212 0.037 * PLURAL-NOUN + 0.036 * POS-ADJ + 0.035 * refurbished + 0.034 * POS-ADV + 0.03 * converter + 0.026 * nitrogen \u2212 0.004 * CAPITALIZATION + 0.0024 * saving",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "Accuracy on the word2vec analogy evaluation set for various vector spaces. The first column shows the average number of nonzero entries in each sparse vector. Accuracy is also broken down by nongrammatical and grammatical categories. 'Recons' denotes the performance of the reconstructed dense vectors. All results are on a 50% held-out test set.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"text": "Ours Recons. \u03b1 = 0.1 85.08 81.4 Ours \u03b1 = 0.35 86.46 84.0 Ours Recons. \u03b1 = 0.35 83.00 75.8",
"num": null,
"html": null,
"content": "<table><tr><td/><td>IMDB TREC</td></tr><tr><td>FastText</td><td>85.35 84.2</td></tr><tr><td>Faruqui \u03bb = .75</td><td>85.54 84.4</td></tr><tr><td>Ours \u03b1 = 0.1</td><td>87.51 86.2</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF6": {
"type_str": "table",
"text": "Sascha Rothe and Hinrich Sch\u00fctze. 2016. Word embedding calculus in meaningful ultradense subspaces.In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 512-517.",
"num": null,
"html": null,
"content": "<table><tr><td>L\u00fctfi Kerem \u015e enel, Ihsan Utlu, Furkan \u015e ahinu\u00e7, Hal-</td></tr><tr><td>dun M Ozaktas, and Aykut Ko\u00e7. 2020. Imparting in-</td></tr><tr><td>terpretability to word embeddings while preserving</td></tr><tr><td>semantic structure. Natural Language Engineering,</td></tr><tr><td>pages 1-26.</td></tr><tr><td>Anant Subramanian, Danish Pruthi, Harsh Jhamtani,</td></tr><tr><td>Taylor Berg-Kirkpatrick, and Eduard Hovy. 2018.</td></tr><tr><td>Spine: Sparse interpretable neural embeddings. In</td></tr><tr><td>Thirty-Second AAAI Conference on Artificial Intelli-</td></tr><tr><td>gence.</td></tr></table>"
}
}
}
}