|
{ |
|
"paper_id": "I17-1029", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:39:01.789167Z" |
|
}, |
|
"title": "Training Word Sense Embeddings With Lexicon-based Regularization", |
|
"authors": [ |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Nieto-Pi\u00f1a", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Gothenburg", |
|
"location": {} |
|
}, |
|
"email": "luis.nieto.pina@gu.se" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Gothenburg", |
|
"location": {} |
|
}, |
|
"email": "richard.johansson@gu.se" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose to improve word sense embeddings by enriching an automatic corpus-based method with lexicographic data. Information from a lexicon is introduced into the learning algorithm's objective function through a regularizer. The incorporation of lexicographic data yields embeddings that are able to reflect expertdefined word senses, while retaining the robustness, high quality, and coverage of automatic corpus-based methods. These properties are observed in a manual inspection of the semantic clusters that different degrees of regularizer strength create in the vector space. Moreover, we evaluate the sense embeddings in two downstream applications: word sense disambiguation and semantic frame prediction, where they outperform simpler approaches. Our results show that a corpusbased model balanced with lexicographic data learns better representations and improve their performance in downstream tasks.", |
|
"pdf_parse": { |
|
"paper_id": "I17-1029", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose to improve word sense embeddings by enriching an automatic corpus-based method with lexicographic data. Information from a lexicon is introduced into the learning algorithm's objective function through a regularizer. The incorporation of lexicographic data yields embeddings that are able to reflect expertdefined word senses, while retaining the robustness, high quality, and coverage of automatic corpus-based methods. These properties are observed in a manual inspection of the semantic clusters that different degrees of regularizer strength create in the vector space. Moreover, we evaluate the sense embeddings in two downstream applications: word sense disambiguation and semantic frame prediction, where they outperform simpler approaches. Our results show that a corpusbased model balanced with lexicographic data learns better representations and improve their performance in downstream tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Word embeddings, as a tool for representing the meaning of words based on the context in which they appear, have had a considerable impact on many of the traditional Natural Language Processing tasks in recent years. (Turian et al., 2010; Collobert et al., 2011; Socher et al., 2011; Glorot et al., 2011) This form of semantic representation has come to replace in many instances traditional count-based vectors (Baroni et al., 2014) , as they yield high-quality semantic representations in a computationally efficient manner, which allows them to leverage information from large corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 238, |
|
"text": "(Turian et al., 2010;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 262, |
|
"text": "Collobert et al., 2011;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 283, |
|
"text": "Socher et al., 2011;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 304, |
|
"text": "Glorot et al., 2011)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 433, |
|
"text": "(Baroni et al., 2014)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Due to this success, some attention has been devoted to the question of whether their representational power can be refined to further advance the state of the art in those tasks that can benefit from semantic representations. One instance in which this could be realized concerns polysemous words, which has led to several attempts at representing word senses instead of simple word forms. Doing so would help avoid the situation in which several meanings of a word have to be conflated into just one embedding, typical of simple word embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Among the different approaches to learning word sense embeddings, a distinction can be made between those that make use of a semantic network (SN) and those that do not. Approaches in the latter group usually apply an unsupervised strategy for clustering instances of words based on the context formed by surrounding words. The resulting clusters are then used to represent the different meanings of a word. These representations characterize word usage in the training corpus rather than lexicographic senses, and run the risk of marginalizing under-represented word senses. Nonetheless, for well represented word senses, this strategy proves to be effective and adaptable to changes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The alternative is to integrate an SN in the learning process. This kind of resource encodes a lexicon of word senses, connecting lexically and semantically related concepts, usually in the form of a graph. Methods that take this approach are able to work with lexicographic word senses as defined by experts, usually integrating them in different ways with corpus-learned embeddings. However, their completeness depends on the quality of the underlying SN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we present an approach that tries to achieve a balance between these two variants. We propose to make use of an SN for learn-ing word sense embeddings by leveraging its signal through a regularizer function that is applied on top of a traditional objective function used to learn embeddings from corpora. In this manner, our model is able to merge these two opposed sources of data with the expectation that each one will balance the limitations of the other: flexible, high-quality embeddings learned from a corpus, with well defined separation between the expertdefined senses of any given polysemic word. The influence of each source of information can be regulated through a mix parameter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As the corpus-based part of our model, we use a version of the Skip-gram (Mikolov et al., 2013) model that is modified so that it is able to learn two distinct vocabularies: word senses and word forms as introduced by . Regarding the SN data, we focus our attention on its underlying graph. We assume that neighboring nodes in such a graph correspond to semantically related concepts. Thus, given a word sense, a sequence of related word senses can be generated from its neighbors. A regularizer function can then be used to update their corresponding embeddings so that they become closer in the vector space. This has the benefit of creating clear separations between the different senses of polysemic words, precisely as they are described in the SN, even in the cases where this separation would not be clear from the data in a corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 95, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We give an overview of related work in Section 2, and our model is described in detail in Section 3. The resulting word sense embeddings are evaluated in Section 4 on two separate automated tasks: word sense disambiguation (WSD) and lexical frame prediction (LFP). The experiments used for evaluation allow us to investigate the influence of the lexicographic data on the embeddings by comparing different model parameterizations. We conclude with a discussion of our results in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The recent success of word embeddings as effective semantic representations across the broad spectrum of NLP tasks has led to an increased interest in developing embedding methods further in order to acquire finer-grained representations able to handle polysemy and homonymy. This effort can be divided into two approaches: those that tackle the problem as an unsupervised task, aiming to discover different usages of words in corpora, and those that make use of knowledge resources as a way of injecting linguistic knowledge into the models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Among the earliest efforts in the former group is the work of Reisinger and Mooney (2010) and Huang et al. (2012) , who propose to cluster occurrences of words based on their contexts to account for different meanings. With the advent of the Skip-gram model (Mikolov et al., 2013) as an efficient way of training prediction-based word embedding models, much of the research into obtaining word sense representations revolved around it. Neelakantan et al. (2014) and make use of context-based word sense disambiguation (WSD) during corpus training to allow on-line learning of multiple senses of a word with modified versions of Skip-gram. Li and Jurafsky (2015) and Bartunov et al. (2016) apply stochastic processes to allow for representations of a variable number of senses per word to be learnt in unsupervised fashion from corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 89, |
|
"text": "Reisinger and Mooney (2010)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 94, |
|
"end": 113, |
|
"text": "Huang et al. (2012)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 280, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 461, |
|
"text": "Neelakantan et al. (2014)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 661, |
|
"text": "Li and Jurafsky (2015)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 666, |
|
"end": 688, |
|
"text": "Bartunov et al. (2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The embeddings obtained using this approach tend to be word-usage oriented, rather than represent formally defined word senses. While this is descriptive of the texts in the corpus at hand, it can be problematic for generalization. For instance, word senses that are underrepresented or absent in the training corpus will not be assigned a functional embedding. On the other hand, due to the ability of these models to process large amounts of data, well-represented word senses will acquire meaningful representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The alternative approach to unsupervised methods is to include data from knowledge resources, usually graph-encoded semantic networks (SN) such as WordNet (Miller, 1995) . Chen et al. (2014) and Iacobacci et al. (2015) propose to make use of knowledge resources to produce a sense-annotated corpus, on which known techniques can then be applied to generate word sense embeddings. A usual way of circumventing the lack of sense-annotated corpora is to apply postprocessing techniques onto pre-trained word embeddings as a way of leveraging lexical information to produce word sense embeddings. The following models share this method: Johansson and Nieto-Pi\u00f1a (2015) formulate an optimization problem to derive multiple word sense representations from word embeddings, while Pilehvar and Collier (2016) and one of the models proposed by Jauhar et al. (2015) use graph learning techniques to do so.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 169, |
|
"text": "(Miller, 1995)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 190, |
|
"text": "Chen et al. (2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 218, |
|
"text": "Iacobacci et al. (2015)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A characteristic of this approach is that these models can generate embeddings for a complete inventory of word senses. However, the dependence on manually crafted resources can potentially lead to incompleteness, in case of unlisted word senses, or to inflexibility in the face of changes in meaning, failing to account for new meanings of a word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The model that we present in this article tries to preserve desirable characteristics from both approaches. On one side, the model learns word sense embeddings from a corpus using a predictive learning algorithm that is efficient, streamlined, and flexible with respect to being able to discriminate between different usages of a word from running text. This learning algorithm is based on the idea of adding an extra latent variable to the Skip-gram objective function to account for different senses of a word, that has been explored in previous work by Jauhar et al. 2015and . On the other side, the learning process is guided by a regularizer function that introduces information from an SN, in an attempt to achieve a clear, complete, and fair division between the different senses of a word. Furthermore, from a technical point of view, the effect of the regularizer function is applied in parallel to the embedding learning process. This eliminates the need for a two-step training process or pretrained word embeddings, and makes it possible to regulate the influence that each source of data (corpus and SN) has on the learning process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The Skip-gram word embedding model (Mikolov et al., 2013) works on the premise of training the vector for a word w to be able to predict those context words c i with which it appears often together in a large training corpus, according to the following objective function:", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 57, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "n i=1 log p(c i |w)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where p(c i |w) can be approximated using the softmax function, The model, thus, works by maintaining two separate vocabularies which represent word forms in their roles as target and context words. The resulting word embeddings (usually those vectors trained for the target word vocabulary) are able to store meaningful semantic information about the words they represent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
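{

"text": "For concreteness, a minimal sketch of the softmax form of p(c_i | w) over the context vocabulary is given below; the function and variable names are illustrative assumptions rather than the paper's implementation, which in practice approximates this quantity with negative sampling.\n\nimport numpy as np\n\ndef p_context_given_word(w_vec, c_index, context_matrix):\n    # softmax over the full context vocabulary: exp(c_i . w) / sum over c' of exp(c' . w)\n    logits = context_matrix @ w_vec      # one dot product per context-vocabulary entry\n    logits -= logits.max()               # subtract the maximum for numerical stability\n    probs = np.exp(logits) / np.exp(logits).sum()\n    return probs[c_index]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning Word Sense Embeddings",

"sec_num": "3.1"

},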
|
{ |
|
"text": "The original Skip-gram model is, however, limited to word forms in both its vocabularies. introduced a modification of this model in which the target vocabulary holds a variable number of vectors for each word form, intended to represent its different senses. The training objective of such a model now has the following shape:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "log p(s|w) + n i=1 log p(c i |s)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Thus the word sense embeddings are trained to maximize the log-probability of context words c i given a word's sense s plus the log-probability of that sense given the word w. For our purposes, this prior is a constant, p(s|w) = 1 n , as we do not have information on the probability of each sense of a given word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "This formulation requires a sense s of word w to be selected for each instance in which the objective function above is applied. This word sense disambiguation is applied on-line at training time and based on the target word's context: The sense s chosen to disambiguate an instance of w is the one whose embedding maximizes the dot product with the sum of the context words' embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "arg max s e s i c i s e s i c i (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "This unsupervised model learns different usages of a word with minimal overhead computation on top of the original, word-based Skip-gram. The number of senses per word can be obtained from a lexicon or set to a fixed number.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Word Sense Embeddings", |
|
"sec_num": "3.1" |
|
}, |
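{

"text": "A minimal sketch of the context-based disambiguation step of Equation 2, assuming sense and context embeddings are available as NumPy arrays; the function and variable names are illustrative assumptions and do not come from the paper's code.\n\nimport numpy as np\n\ndef disambiguate(sense_vectors, context_vectors):\n    # sense_vectors: one 1-D array per candidate sense of the target word\n    # context_vectors: one 1-D array per word in the context window\n    ctx = np.sum(context_vectors, axis=0)      # sum of the context embeddings\n    scores = [np.dot(e_s, ctx) / (np.linalg.norm(e_s) * np.linalg.norm(ctx) + 1e-12)\n              for e_s in sense_vectors]        # normalized dot product per sense\n    return int(np.argmax(scores))              # index of the selected sense",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning Word Sense Embeddings",

"sec_num": "3.1"

},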
|
{ |
|
"text": "In order to adapt the graph-structured nature of the data in an SN to be used in continuous representations, we propose to introduce it through a regularizer that can act upon the same embeddings trained by the unsupervised model described above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding a Lexicon", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Any given node s in a graph will have a set of neighbors n i directly connected to it. In the graph underlying an SN, we assume n i to be lexically or semantically similar to s. In this setting, a collection of sequences composed of word senses s and n i can be collected by visiting all nodes in the SN's graph and collecting its immediate neighbors. Note that extracting such a collection of sequences from a semantic graph follows quite naturally, but in fact it could be generated from any other resource that relates concepts, such as a thesaurus, even if it is not encoded in a graph, as long as the relations it contains are relevant to the model being trained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding a Lexicon", |
|
"sec_num": "3.2" |
|
}, |
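{

"text": "As a sketch of how such sequences could be collected, the snippet below walks a hypothetical adjacency-list view of the SN and emits one sequence per node; the data layout and sense identifiers are assumptions made for illustration only.\n\ndef neighbor_sequences(graph):\n    # graph: dict mapping a sense identifier to the set of sense identifiers\n    # it is directly connected to in the semantic network\n    for sense, neighbors in graph.items():\n        yield (sense, *sorted(neighbors))   # one sequence (s, n_1, ..., n_k) per node\n\n# usage sketch with made-up sense identifiers:\n# for seq in neighbor_sequences({'rock-1': {'kappa-1'}, 'rock-2': {'musik-1'}}):\n#     print(seq)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Embedding a Lexicon",

"sec_num": "3.2"

},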
|
{ |
|
"text": "We propose to use a collection of sequences of related word senses to update their corresponding word sense vectors by pulling any two vectors closer together in their geometric space whenever they are encountered in a sequence. This action can be easily modeled by minimizing the following expression:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding a Lexicon", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "k i=1 ||s \u2212 n i || 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding a Lexicon", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(3) for each sequence of word senses (s, n 1 , n 2 , . . . , n k ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding a Lexicon", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "By minimizing the distance in the vector space between vectors representing interconnected concepts according to the SN's organization, the vector model is effectively representing that organization in a way that geometrical distance correlates with lexical or semantical relatedness, a central concept in the word embedding literature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding a Lexicon", |
|
"sec_num": "3.2" |
|
}, |
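{

"text": "A sketch of one stochastic update for the regularizer in Equation 3: each term ||s - n_i||^2 has gradient 2(s - n_i) with respect to s, so a gradient step simply moves the two embeddings towards each other. The function name, the symmetric update of the neighbor vector, and the learning-rate value are illustrative assumptions; the vectors are assumed to be NumPy arrays.\n\ndef regularizer_step(sense_vec, neighbor_vecs, lr=0.025):\n    # one SGD step on the sum over i of ||s - n_i||^2 for a sequence (s, n_1, ..., n_k)\n    for n_vec in neighbor_vecs:\n        grad = 2.0 * (sense_vec - n_vec)   # gradient of ||s - n||^2 with respect to s\n        sense_vec -= lr * grad             # pull s towards n\n        n_vec += lr * grad                 # and n towards s",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Embedding a Lexicon",

"sec_num": "3.2"

},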
|
{ |
|
"text": "The two preceding sections describe the two parts of a combined model that is able to learn simultaneously from a corpus and an SN. This is achieved by training embeddings from a corpus with the objective described in Equation 1, and complementing this procedure with lexicographic data by means of using Equation 3 as a regularizer. The extent of the regularizer's influence on the model is adapted by a mix parameter \u03c1 \u2208 [0, 1]: the higher the value of \u03c1, the more influence the SN data has on the model, and vice versa.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Thus, the objective function of our model is as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "log p(s|w)+(1\u2212\u03c1) n i=1 log p(c i |s)\u2212\u03c1 m j=1 ||s\u2212n j || 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In practice, this objective is realized by alternating updates through each of the model's parts, the number of which is regulated by \u03c1. Updates on the corpus-based part are executed with Skip-gram with negative sampling (Mikolov et al., 2013) , adapted to work with a vocabulary of word senses as explained in \u00a73.1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 243, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
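{

"text": "The alternation between the two kinds of updates, regulated by \u03c1, can be sketched as follows. The helper names skipgram_update and regularizer_step are assumptions standing in for the corpus-based and lexicon-based parts of the objective; the paper does not publish its training loop, so this only illustrates the mixing scheme.\n\nimport random\n\ndef train_step(rho, corpus_example, lexicon_sequence, skipgram_update, regularizer_step):\n    # with probability rho take a lexicon-based regularizer update,\n    # otherwise take a corpus-based Skip-gram (negative sampling) update\n    if random.random() < rho:\n        regularizer_step(*lexicon_sequence)\n    else:\n        skipgram_update(*corpus_example)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combined Model",

"sec_num": "3.3"

},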
|
{ |
|
"text": "On top of the formulation of the lexicon-based part of the model given in the previous section we propose two variations on this model in order to explore the extent to which the SN data can be used to influence the combined model explained in the following section. The initial formulation of the model will be referenced as V0 in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the first variation (henceforth V1) we propose to only apply Equation 3 on word senses pertaining to polysemous words. If by using the SN we intend to learn clear separations between different senses of a word, it attends to reason to limit its application to those cases, while monosemous words can be sufficiently well trained by the usual corpus-based approach, and act as semantic anchors in the broader vector space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The second variation (henceforth V2) deals with the specific architecture of the corpus-based training algorithm. As mentioned in the previous section, this model trains a target and a context vocabulary. We propose to use the regularizer to act not only on word sense vectors, but also on context (word form) vectors. By doing this we expect the context vocabulary to be ready for instances of different senses of a word, training context vectors to be potentially more effective in the disambiguation scheme introduced in Equation 2. This variation introduces an extra term into Equation 3,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "n i=0 ||w(s) \u2212 w(n i )|| 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where w(x) is a mapping from a given sense x to its corresponding word form.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Model", |
|
"sec_num": "3.3" |
|
}, |
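{

"text": "A minimal sketch of the extra V2 term, mirroring the sense-level pull on the context (word form) vectors; the word_of mapping and the context_vecs dictionary are hypothetical names introduced only for illustration, and the vectors are assumed to be NumPy arrays.\n\ndef v2_context_step(sense, neighbors, context_vecs, word_of, lr=0.025):\n    # apply the same pull to the context vectors of the corresponding word forms\n    w_s = context_vecs[word_of(sense)]      # context vector of w(s)\n    for n in neighbors:\n        w_n = context_vecs[word_of(n)]      # context vector of w(n_i)\n        grad = 2.0 * (w_s - w_n)            # gradient of ||w(s) - w(n_i)||^2\n        w_s -= lr * grad                    # pull the two word-form vectors together\n        w_n += lr * grad",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combined Model",

"sec_num": "3.3"

},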
|
{ |
|
"text": "We trained the three variants of our model using different parameterizations of \u03c1 \u2208 (0, 1). Each of these instances learned target and context embeddings of 50 dimensions, using a window of size 5 on the corpus-based part of the training algorithm, for a total number of 5 iterations over a number of updates equal to the size of the training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Below we describe the lexicon and corpus used to train the sense embeddings. 4.1.1 SALDO: a Semantic Network of Swedish Word Senses SALDO (Borin et al., 2013) is the largest graphstructured semantic lexicon available for Swedish. The version used here contains roughly 125,000 concepts (word senses) organized into a single semantic network.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 158, |
|
"text": "(Borin et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The sense nodes in the SALDO network are connected by edges that are defined in terms of semantic descriptors. A descriptor of a sense is another sense used to define its meaning. The most important descriptor is called the primary descriptor (PD), and since every sense in SALDO (except an abstract root sense) has a single unique PD, the PD subgraph of SALDO forms a tree. In most cases, the PD of a sense s is a hypernym or a synonym of s, but other types of semantic relations are also possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To exemplify, Figure 1 shows a fragment of the PD tree. In the example, there are some cases where the PD edges correspond to hypernymy, such as hard rock being a type of rock music, which in turn is a type of music, but there are also other types of relations, such as music being defined in terms of to sound. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 22, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For training the embedding models, we created a mixed-genre corpus of approximately 1 billion words downloaded from Spr\u00e5kbanken, the Swedish language bank. 1 The texts were tokenized, part-of-speech-tagged and lemmatized. Compounds were segmented automatically and when a compound-word lemma was not listed as an entry in the SALDO lexicon, we used the compound parts instead. For instance, h\u00e5rdrock 'hard rock' would occur as a single token in the corpus, while rockstj\u00e4rna 'rock star' would be split into two separate tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 157, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Corpus", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "By inspecting lists of nearest neighbors to a given embedding, some insight can be gained into how a model represents the meaning of the concept it represents. It is especially interesting in the case of polysemous words, where the neighbors of each of its senses can help judging how well it manages to separate their different meanings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Inspection of Word Senses", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In Table 1 we list nearest neighbors for each of the two senses of the Swedish word rock: 'coat' and 'rock music'. The neighboring concepts in the table are extracted from two separate vector models trained with different parameterizations for the mix parameter \u03c1: The first, \u03c1 = 0.01, has little influence from the lexicon and thus is similar to a corpus-only approach; the second, \u03c1 = 0.5, allows for more information from the lexicon to influence the embeddings. In our corpus, the music sense is overrepresented; this can be seen in the table, where both senses trained with \u03c1 = 0.01 have most of their nearest neighbors semantically related to music. The model that is more influenced by the lexicon with \u03c1 = 0.5 is, however, able to learn two distinct senses. Note how the music sense is not negatively affected by this change: many of its nearest neighbors are the same in both models, and all of them keep the music-related topic in common.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Qualitative Inspection of Word Senses", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "It is also interesting to filter these lists of nearest neighbors to limit them to unlisted words; i.e., words that are not present in the lexicon and appear only in the corpus. This provides an observation of how well those embeddings that are trained by both parts of the model are integrated with those others whose training is based only on the corpus. Table 2 contains such lists of unlisted items for the two senses of rock on two models with different parameterization. It presents a similar behavior to the previous experiment: In a model with low influence from the lexicon, the representations of both senses tend towards that of the overrepresented one; when more influence from the lexicon is allowed, a clear separation of the two senses into their expected meanings is observed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 364, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Qualitative Inspection of Word Senses", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We trained and evaluated several parameterizations of our model on a Swedish language word sense disambiguation (WSD) task. The aim of this task is to select a sense of an instance of a polyse-rock-1 'coat' rock-2 'rock music' \u03c1 = 0.01 \u03c1 = 0.5 \u03c1 = 0.01 \u03c1 = 0.5 syrtut 'frock coat' syrtut 'frock coat' h\u00e5rdrock 'hard rock music' punk 'punk music' Rhythm 'rhythm music' kappa 'coat' pop 'pop music' rappa 'to rap' rockband 'rock band' k\u00e5pa 'cowl' punk 'punk music' rap 'rap music' Peepshows 'peep shows' p\u00e4ls 'fur coat' jazza 'to jazz' pop 'pop music' skaband 'ska band' mudd 'cuff' d\u00f6dsmetall 'death metal music' jam 'music jam' mous word in context. For this purpose, we use a disambiguation mechanism similar to the one introduced in \u00a73.1. Given an ambiguous word in context, a score is calculated for each of its possible senses by applying the expression in Equation 2; however, to correct for skewed sense distributions, we replaced the uniform prior with a power-law prior P (s k |w) \u221d k \u22122 , where k is the numerical identifier of the sense. The highest scoring sense is then selected to disambiguate that instance of the word. As baselines for this experiment, we used random sense and first sense 2 selection. Additionally, we show the results achieved by a disambiguation system, UKB, based on Personalized PageRank (Agirre and Soroa, 2009) , and which was trained on the PD tree from SALDO. The implementation of this model makes no assumptions on the underlying graph and thus it is easily adaptable to work with any kind of SN. Our models were all parameterized with \u03c1 = 0.9 based on the results obtained on the SweFN dataset. All evaluated systems including the baselines are unsupervised: none of them has used a sense-annotated corpus during training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1325, |
|
"end": 1349, |
|
"text": "(Agirre and Soroa, 2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Sense Disambiguation", |
|
"sec_num": "4.3" |
|
}, |
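{

"text": "One plausible reading of this scoring scheme, combining the Equation 2 score with the power-law prior multiplicatively, is sketched below; the text does not spell out exactly how the prior enters the score, so this combination, as well as the function and variable names, is an assumption.\n\nimport numpy as np\n\ndef wsd_choose_sense(sense_vectors, context_vectors):\n    # sense_vectors: candidate senses ordered by their sense number k = 1, 2, ...\n    ctx = np.sum(context_vectors, axis=0)\n    scores = []\n    for k, e_s in enumerate(sense_vectors, start=1):\n        sim = np.dot(e_s, ctx) / (np.linalg.norm(e_s) * np.linalg.norm(ctx) + 1e-12)\n        scores.append(k ** -2 * sim)       # power-law prior P(s_k|w) proportional to k^-2\n    return int(np.argmax(scores)) + 1      # 1-based index of the selected sense",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Word Sense Disambiguation",

"sec_num": "4.3"

},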
|
{ |
|
"text": "We evaluated the WSD systems on eleven different datasets, which to our knowledge are all senseannotated datasets that exist for Swedish. The datasets consist of instances, where each instance 2 No frequency information is available for SALDO's sense inventory and the senses are not ordered by frequency. The senses are ordered by lexicographers so that the lowernumbered senses are more \"central\" or \"primitive\", which often but not always correlates with the sense frequency.", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 194, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-annotated Datasets", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "is a sentence where a single target word has been selected for disambiguation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-annotated Datasets", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "Two datasets consist of lexicographical examples (Lex-Ex): the SALDO examples (SALDOex) and Swedish FrameNet examples (SweFN-ex). The latter of these is annotated in terms of semantic frames, but there is a deterministic mapping from frames to SALDO senses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-annotated Datasets", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "Two additional datasets are taken from the Senseval-2 Swedish lexical sample task (Kokkinakis et al., 2001) . It uses a different sense inventory, which we mapped manually to SALDO senses. The lexical sample originally consisted of instances for 40 lemmas, out of which we removed 7 lemmas because they were unambiguous in SALDO. Since we are using an unsupervised experimental setup, we report results not only on the designated test set but also on the training set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 107, |
|
"text": "(Kokkinakis et al., 2001)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-annotated Datasets", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "The other datasets come from the Koala annotation project (Johansson et al., 2016) . The latest version consists of seven different corpora, each sampled from text in a separate domain: blogs, novels, Wikipedia, European Parliament proceedings, political news, newsletters from a government agency, and government press releases. Unlike the two lexicographical example sets and the Senseval-2 lexical sample, in which the instances have been selected by lexicographers to be prototypical and to have a good coverage of the sense variation, the instances in the Koala corpora are annotated 'as is' in running text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 82, |
|
"text": "(Johansson et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-annotated Datasets", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "The sentences in all datasets were tokenized, compound-split, and lemmatized, and for each target word we automatically determined the set of possible senses, given its context and inflec- Table 3 : WSD accuracy on baselines, UKB, and the three variants of our model (\u03c1 = 0.9) on all test sets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 196, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sense-annotated Datasets", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "tion. We only considered senses of content words: nouns, verbs, adjectives, and adverbs. Multi-word targets were not included, and we removed all instances where only one sense was available. 3 Table 3 shows disambiguation accuracies for our models on the datasets described above, along with the scores achieved by our baselines and the UKB model. The results of each variant of our model were obtained with a parameterization of \u03c1 = 0.9, which was chosen as the best scoring value on the Swe-FN subset used as validation set. The model which only applies the regularizer to polysemous words (V1) dominates most highest scores, overtaken in some instances by V0 and in one by the first sense baseline. Note how the general magnitudes of the scores within each type of dataset underline their different characteristics explained above.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 201, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sense-annotated Datasets", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "Additionally, for the sake of making a more detailed analysis of the influence of the parameter \u03c1 that dominates the extent of the lexicon's influence on the model, Figure 2 shows the average performance of our models on each dataset for a wide range of values for \u03c1. There is a clear pattern across all models and datasets by which a greater input from the SN translates into a better performance in WSD. These figures also confirm the superior performance of the variant V1 of our model seen in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 173, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 504, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Disambiguation Results", |
|
"sec_num": "4.3.2" |
|
}, |
|
{ |
|
"text": "In our second evaluation, we investigated how well the sense vector models learned by the different training algorithms correspond to semantic classes defined by the Swedish FrameNet (Friberg Heppin and Toporowska Gronostaj, 2012) . In a frame-semantic model of lexical meaning (Fillmore and Baker, 2009) , the meaning of words is defined by associating them with broad semantic classes called frames; for instance, the word falafel would belong to the frame FOOD. Important classes of frames include those corresponding to objects and people, mainly populated by nouns, such as FOOD or PEOPLE BY AGE; verb-dominated frames corresponding to events, such as IMPACT, STATEMENT, or INGESTION; and frames dominated by adjectives, often referring to relations, qualities, and states, e.g. ORIGIN or EMOTION DIRECTED.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 230, |
|
"text": "(Friberg Heppin and Toporowska Gronostaj, 2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 304, |
|
"text": "(Fillmore and Baker, 2009)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frame Prediction", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In case a word has more than one sense, it may belong to more than one frame. In the Swedish FrameNet, unlike its English counterpart, these senses are explicitly defined using SALDO (see \u00a74.1.1): for instance, for the highly polysemous noun slag, its first sense ('type') belongs to the frame TYPE, the second ('hit') to IMPACT, the third ('battle') to HOSTILE ENCOUNTER, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frame Prediction", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In the evaluation, we trained classifiers to determine whether a SALDO sense, represented as a sense vector, belongs to a given frame or not. To train the classifiers, we selected the 546 frames from the Swedish FrameNet for which at least 5 entries were available. In total we had 28,842 verb, noun, adjective, and adverb entries, which we split into training (67% of the entries in each ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frame Prediction", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "At test time, for each frame we applied the SVM scoring function of its classifier to each sense in the test set. The ranking induced by this score was evaluated using the Average Precision (AP) metric commonly used to evaluate rankers; the goal of this ranking step is to score the senses belonging to the frame higher than those that do not. We computed the Mean Averaged Precision (MAP) score by macro-averaging the AP scores over the set of frames. Figure 3 shows the MAP scores of frame predictors based on different sense vector models. We compared the three training algorithms described in Section 3 for different values of the regularization strength parameter \u03c1. As a baseline, we included a model that does not distinguish between different senses: it represents a SALDO sense with the word vector of its lemma.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 453, |
|
"end": 461, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4.4.1" |
|
}, |
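{

"text": "A sketch of the macro-averaged MAP computation described above, using scikit-learn's average_precision_score; the frame_scores layout is a hypothetical structure used only for illustration.\n\nimport numpy as np\nfrom sklearn.metrics import average_precision_score\n\ndef mean_average_precision(frame_scores):\n    # frame_scores: dict mapping a frame to (y_true, y_score), where y_true marks\n    # the test senses that belong to the frame and y_score is the classifier score\n    aps = [average_precision_score(y_true, y_score)\n           for y_true, y_score in frame_scores.values()]\n    return float(np.mean(aps))   # macro-average of AP over frames (MAP)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Results",

"sec_num": "4.4.1"

},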
|
{ |
|
"text": "As the figure shows, almost all sense-aware vector models outperformed the model that just used lemma vectors. The result shows tendencies that are different from what we saw in the WSD experiments. The best MAP scores were achieved with mid-range values of \u03c1, so it seems that this task requires embeddings that strike a balance between representing the lexicon structure faithfully and representing the cooccurrence patterns in the corpus. An model with very light influence of the lexicon was hardly better than just using lemma embeddings, and unlike what we saw for the WSD task we see a strong dropoff when increasing \u03c1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "In addition, the tendencies here differ from the WSD results in that the training algorithm that only applies the lexicon-based regularizer to polysemous words (V1) gives lower scores than the other two approaches. We believe that this is because it is crucial in this task that sense vectors are clustered into coherent groups, which makes it more useful to move sense vectors closer to their neighbors even when they are monosemous; this as opposed to the WSD task, where it is more useful to leave the monosemous sense vectors in place as \"anchors\" for the senses of polysemous words. The context-regularized training algorithm (V2) gives no improvement over the original approach (V0), which is expected since context vectors are not used in this task. To get a more detailed picture of the strengths and weaknesses of the models in this task, we selected eight frames: two frames dominated by nouns, two for verbs, two for adjectives, two for adverbs. Table 4 shows the AP scores for these frames of the lemma-vector baseline, the initial approach (V0), and the version that only regularizes senses of polysemous words (V1). All lexicon-aware models used a \u03c1 value of 0.7. Almost across the board, the V0 method gives very strong improvements. The exception is the frame ORIGIN, which contains adjectives of ethnicity and nationality (Mexican, African, etc); this set of adjectives is already quite coherently clustered by a simple word vector model and is not substantially improved by any lexicon-based approach.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 957, |
|
"end": 964, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "In this article we have introduced a family of word sense embedding models that are able to leverage information from two concurrent sources of information: a semantic network and a corpus. Our hypothesis was that by combining them, the robustness and coverage of embeddings trained on a large corpus could achieve a more balanced and linguistically informed representation of the senses of polysemic words. This point has been proved in the evaluation of our models on Swedish language tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A manual inspection of the word sense representation through their nearest neighbors exemplified it in \u00a74.2. Indeed, an increased influence from the SN causes a clearer distinction between different senses of a word, even in the case where one of them is underrepresented in the corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A WSD experiment was carried out on a variety of sense-annotated datasets. Our model consistently outperformed random and first sense baselines, as well as a comparable graph-based WSD system trained on a Swedish SN, which underlines the fact that the strength of our model resides in a combination of lexicon-and corpus-learning. This is further confirmed in the evaluation of our model on a frame prediction task: A well balanced combination of lexicon and corpus data produces word sense embeddings that outperform common word embeddings when used to predict their semantic frame membership. Furthermore, this superiority is uniform across common frames dominated by different parts of speech.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "An analysis of different values of our model's mix parameter \u03c1 showed the value of using lexicographic information in conjunction with corpus data. Especially on WSD, larger values of \u03c1 (i.e., more influence from the SN) generally lead to improved results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In conclusion, we have shown that automatic word sense representation benefits greatly from using a semantic network in addition to the usual corpus-learning. The combination of these sources of information yields robust, high-quality, and balanced embeddings that excel in downstream tasks where accurate representation of word meaning is crucial. Given these findings, we intend to continue exploring more refined ways in which data from a semantic network can be leveraged to increase sense-awareness in embedding models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://spraakbanken.gu.se", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In addition, to facilitate a comparison to the UKB system as a baseline, we removed a small number of instances that could not be lemmatized unambiguously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was funded by the Swedish Research Council under grant 2013-4944. The Koala corpus was developed in a project funded by Riksbankens Jubilemsfond, grant number In13-0320:1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Personalizing PageRank for word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Soroa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre and Aitor Soroa. 2009. Personalizing PageRank for word sense disambiguation. In Pro- ceedings of the 12th Conference of the European Chapter of the Association for Computational Lin- guistics, pages 33-41. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georgiana", |
|
"middle": [], |
|
"last": "Dinu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Germ\u00e1n", |
|
"middle": [], |
|
"last": "Kruszewski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Breaking sticks and ambiguities with adaptive skip-gram", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Bartunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Kondrashkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Osokin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Vetrov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Artificial Intelligence and Statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "130--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Bartunov, Dmitry Kondrashkin, Anton Osokin, and Dmitry Vetrov. 2016. Breaking sticks and ambi- guities with adaptive skip-gram. In Artificial Intelli- gence and Statistics, pages 130-138.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "SALDO: a touch of yin to WordNet's yang. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Borin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Forsberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lennart", |
|
"middle": [], |
|
"last": "L\u00f6nngren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "1191--1211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars Borin, Markus Forsberg, and Lennart L\u00f6nngren. 2013. SALDO: a touch of yin to WordNet's yang. Language Resources and Evaluation, 47:1191- 1211.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A unified model for word sense representation and disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Xinxiong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1025--1035", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025-1035.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Natural language processing (almost) from scratch", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e9on", |
|
"middle": [], |
|
"last": "Bottou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Karlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Kuksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2493--2537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Re- search, 12:2493-2537.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "LIBLINEAR: A library for large linear classification", |
|
"authors": [ |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Rong-En Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cho-Jui", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang-Rui", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Jen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1871--1874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A frames approach to semantic analysis", |
|
"authors": [ |
|
{

"first": "Charles",

"middle": ["J"],

"last": "Fillmore",

"suffix": ""

},

{

"first": "Collin",

"middle": [],

"last": "Baker",

"suffix": ""

}
|
], |
|
"year": 2009, |
|
"venue": "The Oxford Handbook of Linguistic Analysis", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "313--340", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles J. Fillmore and Collin Baker. 2009. A frames approach to semantic analysis. In B. Heine and H. Narrog, editors, The Oxford Handbook of Lin- guistic Analysis, pages 313-340. Oxford: OUP.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The rocky road towards a Swedish FrameNet -creating SweFN", |
|
"authors": [], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth conference on International Language Resources and Evaluation (LREC-2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "256--261", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karin Friberg Heppin and Maria Toporowska Gronos- taj. 2012. The rocky road towards a Swedish FrameNet -creating SweFN. In Proceedings of the Eighth conference on International Language Re- sources and Evaluation (LREC-2012), pages 256- 261, Istanbul, Turkey.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Domain adaptation for large-scale sentiment classification: A deep learning approach", |
|
"authors": [ |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Glorot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "513--520", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Pro- ceedings of the 28th International Conference on Machine Learning (ICML-11), pages 513-520.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Improving word representations via global context and multiple word prototypes", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": ["H"], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": ["D"], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": ["Y"], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "873--882", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric H Huang, Richard Socher, Christopher D Man- ning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers-Volume 1, pages 873-882. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sensembed: Learning sense embeddings forword and relational similarity", |
|
"authors": [ |
|
{ |
|
"first": "Ignacio", |
|
"middle": [], |
|
"last": "Iacobacci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Taher Pilehvar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Sensembed: Learning sense embeddings forword and relational similarity. In 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL-IJCNLP 2015. Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Ontologically grounded multi-sense representation learning for semantic vector space models", |
|
"authors": [ |
|
{ |
|
"first": "Sujay Kumar", |
|
"middle": [], |
|
"last": "Jauhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. NAACL", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sujay Kumar Jauhar, Chris Dyer, and Eduard Hovy. 2015. Ontologically grounded multi-sense represen- tation learning for semantic vector space models. In Proc. NAACL, volume 1.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A multi-domain corpus of Swedish word sense annotation", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvonne", |
|
"middle": [], |
|
"last": "Adesam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerlof", |
|
"middle": [], |
|
"last": "Bouma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Hedberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Language Resources and Evaluation Conference (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3019--3022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Johansson, Yvonne Adesam, Gerlof Bouma, and Karin Hedberg. 2016. A multi-domain corpus of Swedish word sense annotation. In Proceedings of the Language Resources and Evaluation Confer- ence (LREC), pages 3019-3022, Portoro\u017e, Slovenia.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Embedding a semantic network in a word space", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Nieto-Pi\u00f1a", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1428--1433", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Johansson and Luis Nieto-Pi\u00f1a. 2015. Em- bedding a semantic network in a word space. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1428-1433, Denver, Colorado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "SENSEVAL-2: The Swedish framework", |
|
"authors": [ |
|
{ |
|
"first": "Dimitrios", |
|
"middle": [], |
|
"last": "Kokkinakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jerker", |
|
"middle": [], |
|
"last": "J\u00e4rborg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvonne", |
|
"middle": [], |
|
"last": "Cederholm", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dimitrios Kokkinakis, Jerker J\u00e4rborg, and Yvonne Cederholm. 2001. SENSEVAL-2: The Swedish framework. In Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 45-48, Toulouse, France.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Do multi-sense embeddings improve natural language understanding?", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li and Dan Jurafsky. 2015. Do multi-sense em- beddings improve natural language understanding? In Empirical Methods in Natural Language Process- ing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "WordNet: a lexical database for English", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": ["A"], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Communications of the ACM", |
|
"volume": "38", |
|
"issue": "11", |
|
"pages": "39--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Efficient nonparametric estimation of multiple embeddings per word in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeevan", |
|
"middle": [], |
|
"last": "Shankar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1059--1069", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2014. Efficient non- parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1059-1069, Doha, Qatar. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A simple and efficient method to generate word sense representations", |
|
"authors": [ |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Nieto-Pi\u00f1a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "465--472", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luis Nieto-Pi\u00f1a and Richard Johansson. 2015. A sim- ple and efficient method to generate word sense rep- resentations. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 465-472, Hissar, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "De-conflated semantic representations", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Taher Pilehvar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nigel", |
|
"middle": [], |
|
"last": "Collier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1680--1690", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1680-1690, Austin, Texas. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Multi-prototype vector-space models of word meaning", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Reisinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": ["J"], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word mean- ing. In Human Language Technologies: The 2010", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "109--117", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 109-117. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": ["H"], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": ["D"], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": ["Y"], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "801--809", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Eric H Huang, Jeffrey Pennin, Christo- pher D Manning, and Andrew Y Ng. 2011. Dy- namic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural In- formation Processing Systems, pages 801-809.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Word representations: a simple and general method for semi-supervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Turian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "384--394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics, pages 384-394. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "A fragment of the network in SALDO." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Average WSD accuracies on all instances of each dataset for different values of \u03c1 on the three variants of our model. frame) and test sets (33%). For each frame, we used LIBLINEAR(Fan et al., 2008) to train a linear support vector machine, using the vectors of the senses associated with that frame as positive training instances, and all other senses listed in FrameNet as negative instances. MAP scores for the frame prediction classifiers for the different types of models." |
|
}, |
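The caption above summarizes the frame-prediction setup: one linear classifier per FrameNet frame, trained on sense vectors, with the frame's member senses as positives and all other listed senses as negatives. The sketch below is a minimal illustration of that setup, not the authors' code: the names sense_vectors (sense id -> vector) and frame_members (frame -> member sense ids) are assumed inputs, scikit-learn's LinearSVC (which wraps LIBLINEAR) stands in for the LIBLINEAR tool cited above, and the per-frame 33% held-out split mentioned in the fragment is omitted for brevity.

```python
# Hypothetical sketch of per-frame linear SVM frame prediction over sense embeddings.
# Assumed inputs (not from the paper):
#   sense_vectors: dict[str, np.ndarray]  -- sense id -> embedding
#   frame_members: dict[str, list[str]]   -- frame name -> sense ids belonging to it
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import average_precision_score


def train_frame_classifiers(sense_vectors, frame_members):
    senses = sorted(sense_vectors)
    X = np.vstack([sense_vectors[s] for s in senses])
    classifiers = {}
    for frame, members in frame_members.items():
        positives = set(members)
        # Positives: senses listed for this frame; negatives: every other sense.
        y = np.array([1 if s in positives else 0 for s in senses])
        clf = LinearSVC(C=1.0)  # C=1.0 is an illustrative default, not a paper value
        clf.fit(X, y)
        classifiers[frame] = clf
    return classifiers


def frame_average_precision(clf, sense_vectors, held_out_members):
    # Rank all senses by decision score and compute average precision (AP)
    # against the held-out gold members of the frame.
    senses = sorted(sense_vectors)
    X = np.vstack([sense_vectors[s] for s in senses])
    scores = clf.decision_function(X)
    gold = np.array([1 if s in set(held_out_members) else 0 for s in senses])
    return average_precision_score(gold, scores)
```

Averaging frame_average_precision over all frames would give a MAP figure comparable in spirit to the scores referred to in the caption, though the exact evaluation protocol follows the paper's train/test split rather than this simplified sketch.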
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"text": "Nearest neighbors for the two senses of rock 'coat' and ' rock music' for different \u03c1.", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">rock-1 'coat'</td><td colspan=\"2\">rock-2 'rock music'</td></tr><tr><td>\u03c1 = 0.01</td><td>\u03c1 = 0.5</td><td>\u03c1 = 0.01</td><td>\u03c1 = 0.5</td></tr><tr><td>Rhythm 'rhythm music'</td><td>jesussandaler 'Jesus sandals'</td><td>nu-metal 'nu metal'</td><td>metal 'metal music'</td></tr><tr><td colspan=\"2\">Peepshows 'peep shows' tubsockar 'tube socks'</td><td>goth ' goth music'</td><td>rnb 'RnB music'</td></tr><tr><td>skabandk 'ska band'</td><td>bl\u00e5jeans 'blue jeans'</td><td>psytrance ' psytrance music'</td><td>indie 'indie music'</td></tr><tr><td>Punkrock 'punk rock'</td><td>snowjoggers 'snow joggers'</td><td>boogierock 'boogie rock'</td><td>dubstep 'dubstep music'</td></tr><tr><td>sleaze 'to sleaze'</td><td>midjekort 'doublet jacket'</td><td colspan=\"2\">synthband 'synth music band' goth 'goth music'</td></tr></table>" |
|
}, |
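Neighbor lists like the one in the table above come from ranking all sense vectors by similarity to a query sense. The following is a minimal cosine nearest-neighbor lookup given under assumptions: sense_vectors is a hypothetical dict from sense ids to numpy vectors, and identifiers such as "rock-2" are illustrative rather than taken from the paper's lexicon.

```python
# Illustrative cosine nearest-neighbor lookup over sense embeddings.
# Assumes sense_vectors: dict[str, np.ndarray] mapping sense ids to vectors.
import numpy as np


def nearest_senses(query_id, sense_vectors, k=5):
    q = sense_vectors[query_id]
    q = q / np.linalg.norm(q)  # normalize the query once
    scored = []
    for sid, v in sense_vectors.items():
        if sid == query_id:
            continue
        sim = float(np.dot(q, v / np.linalg.norm(v)))  # cosine similarity
        scored.append((sim, sid))
    # Return the k sense ids with the highest cosine similarity.
    return [sid for sim, sid in sorted(scored, reverse=True)[:k]]


# Usage (hypothetical ids): nearest_senses("rock-2", sense_vectors) would return
# the kind of neighbor lists shown in the table, and rerunning it on models trained
# with different regularizer strengths reproduces the contrast between the columns.
```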
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"text": "Nearest unlisted neighbors for the two senses of rock 'coat' and 'rock music' for different \u03c1.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"html": null, |
|
"text": "Frame prediction AP scores for selected frames dominated by nouns, verbs, adjectives, and adverbs respectively.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |