{ "paper_id": "I17-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:12.182168Z" }, "title": "Turning Distributional Thesauri into Word Vectors for Synonym Extraction and Expansion", "authors": [ { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "", "affiliation": { "laboratory": "", "institution": "LIST, Vision and Content Engineering Laboratory", "location": { "postCode": "F-91191", "settlement": "Gif-sur-Yvette", "country": "France" } }, "email": "olivier.ferret@cea.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this article, we propose to investigate a new problem consisting in turning a distributional thesaurus into dense word vectors. We propose more precisely a method for performing such task by associating graph embedding and distributed representation adaptation. We have applied and evaluated it for English nouns at a large scale about its ability to retrieve synonyms. In this context, we have also illustrated the interest of the developed method for three different tasks: the improvement of already existing word embeddings, the fusion of heterogeneous representations and the expansion of synsets.", "pdf_parse": { "paper_id": "I17-1028", "_pdf_hash": "", "abstract": [ { "text": "In this article, we propose to investigate a new problem consisting in turning a distributional thesaurus into dense word vectors. We propose more precisely a method for performing such task by associating graph embedding and distributed representation adaptation. We have applied and evaluated it for English nouns at a large scale about its ability to retrieve synonyms. In this context, we have also illustrated the interest of the developed method for three different tasks: the improvement of already existing word embeddings, the fusion of heterogeneous representations and the expansion of synsets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Early work about distributional semantics (Grefenstette, 1994; Lin, 1998; Curran and Moens, 2002) was strongly focused on the notion of distributional thesaurus. Recent work in this domain has been more concerned by the notions of semantic similarity and relatedness (Budanitsky and Hirst, 2006) and by the representation of distributional data. This trend has been strengthened even more recently with all work about distributed word representations and embeddings, whether they are built by neural networks (Mikolov et al., 2013) or not (Pennington et al., 2014) .", "cite_spans": [ { "start": 42, "end": 62, "text": "(Grefenstette, 1994;", "ref_id": "BIBREF15" }, { "start": 63, "end": 73, "text": "Lin, 1998;", "ref_id": "BIBREF23" }, { "start": 74, "end": 97, "text": "Curran and Moens, 2002)", "ref_id": "BIBREF10" }, { "start": 267, "end": 295, "text": "(Budanitsky and Hirst, 2006)", "ref_id": "BIBREF4" }, { "start": 509, "end": 531, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF25" }, { "start": 539, "end": 564, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From a more global perspective, distributional thesauri and distributional data, i.e. distributional contexts of words, can be considered as dual representations of the same semantic similarity information. 
Distributional data are an intensional form of this information that can take an extensional form as distributional thesauri by applying a similarity measure to them. Going from an intensional to an extensional representation corresponds to the rather classical process underlying the building of distributional thesauri. In the context of word embeddings, Perozzi et al. (2014a) extend this process to the building of lexical networks. Going the other way, from an extensional to an intensional representation, is, as far as we know, a new problem in the context of distributional semantics. The interest of this transformation is twofold. First, whatever the initial form of the semantic knowledge, it can be turned into the most suitable form for a particular use. For instance, thesauri are more suitable for tasks like query expansion while word embeddings are better suited as features for statistical classifiers. Second, each form is also associated with specific methods of improvement. Much work has been devoted to improving distributional contexts by studying various parameters, which has led to a significant improvement of distributional thesauri. Conversely, work such as (Claveau et al., 2014) has focused on methods for improving thesauri themselves. It would clearly be interesting to transpose the improvements obtained in this way back to distributional contexts, as illustrated by Figure 1 .", "cite_spans": [ { "start": 564, "end": 586, "text": "Perozzi et al. (2014a)", "ref_id": "BIBREF30" }, { "start": 1398, "end": 1420, "text": "(Claveau et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 1610, "end": 1618, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Hence, we propose in this article to investigate the problem of turning a distributional thesaurus into word embeddings, that is to say embedding a thesaurus. We will show that such a process can be achieved without losing too much information and, moreover, that its underlying principles can be used for improving already existing word embeddings. Finally, we will illustrate the interest of such a process for building word embeddings that integrate external knowledge more efficiently and for extending this knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A distributional thesaurus is generally viewed as a set of entries with, for each entry, a list of semantic neighbors ranked in descending order of semantic similarity with this entry. Since the neighbors of an entry are also entries of the thesaurus, such a thesaurus can be considered as a graph in which vertices are words and edges are the semantic neighborhood relations between them, weighted according to their semantic similarity. The resulting graph is undirected if the semantic similarity measure between words is symmetric, which is the most common case. 
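As a concrete illustration, the sketch below builds such an undirected weighted graph from ranked neighbor lists; the entries and similarity values it uses are purely hypothetical and only serve to make the representation explicit.

```python
# A distributional thesaurus as a weighted, undirected graph: vertices are
# words, edge weights are semantic similarity scores. Toy data, hypothetical.
from collections import defaultdict

thesaurus = {
    "car": [("automobile", 0.71), ("vehicle", 0.64), ("truck", 0.52)],
    "automobile": [("car", 0.71), ("vehicle", 0.60)],
}

def thesaurus_to_graph(thesaurus):
    """Store each neighborhood relation in both directions, since the
    underlying similarity measure is assumed to be symmetric."""
    graph = defaultdict(dict)
    for entry, neighbors in thesaurus.items():
        for neighbor, sim in neighbors:
            graph[entry][neighbor] = sim
            graph[neighbor][entry] = sim
    return graph

graph = thesaurus_to_graph(thesaurus)
print(graph["vehicle"])  # {'car': 0.64, 'automobile': 0.6}
```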
Such representation was already adopted for improving distributional thesauri by reranking the neighbors of their entries (Claveau et al., 2014) for instance.", "cite_spans": [ { "start": 687, "end": 709, "text": "(Claveau et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Distributional Thesauri", "sec_num": "2" }, { "text": "One specificity of distributional thesauri from that perspective is that although the weight between two words is representative of their semantic similarity, we know from work such as (Ferret, 2010; Claveau et al., 2014) that the relevance of the semantic neighbors based on this weight strongly decreases as the rank of the neighbors increases. Consequently, our strategy for embedding distributional thesauri is two-fold: first, we build an embedding by relying on methods for embedding graphs, either by exploiting directly their structure or from their representation as matrices; second, we adapt the embedding resulting from the first step according to the specificities of distributional thesauri. We detail these two steps in the next two sections.", "cite_spans": [ { "start": 185, "end": 199, "text": "(Ferret, 2010;", "ref_id": "BIBREF13" }, { "start": 200, "end": 221, "text": "Claveau et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Distributional Thesauri", "sec_num": "2" }, { "text": "The problem of embedding graphs in the perspective of dimension reduction is not new and was already tackled by much work (Yan et al., 2007) , going from spectral methods (Belkin and Niyogi, 2001) to more recently neural methods (Perozzi et al., 2014b; Cao et al., 2016) . As graphs can be represented by their adjacency matrix, this problem is also strongly linked to the matrix factorization problem. The basic strategy is to perform the eigendecomposition of the matrix as for instance in the case of Latent Semantic Analysis (LSA) (Landauer and Dumais, 1997) . However, such decomposition is computationally expensive and for large matrices, as in the context of Collaborative Filtering (Koren, 2008) , less constrained matrix factorization techniques are used.", "cite_spans": [ { "start": 122, "end": 140, "text": "(Yan et al., 2007)", "ref_id": "BIBREF37" }, { "start": 171, "end": 196, "text": "(Belkin and Niyogi, 2001)", "ref_id": "BIBREF2" }, { "start": 229, "end": 252, "text": "(Perozzi et al., 2014b;", "ref_id": "BIBREF31" }, { "start": 253, "end": 270, "text": "Cao et al., 2016)", "ref_id": "BIBREF6" }, { "start": 535, "end": 562, "text": "(Landauer and Dumais, 1997)", "ref_id": "BIBREF20" }, { "start": 691, "end": 704, "text": "(Koren, 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Embedding", "sec_num": "2.1" }, { "text": "For turning a distributional thesaurus into word embeddings, we tested three different methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Embedding", "sec_num": "2.1" }, { "text": "\u2022 the LINE algorithm (Tang et al., 2015) , a recent method for embedding weighted graphs; \u2022 the application of Singular Value Decomposition (SVD) to the adjacency matrix of the thesaurus; \u2022 the matrix factorization approach proposed by Hu et al. (2008) , also applied to the adjacency matrix of the thesaurus. LINE defines a probabilistic model over the space V \u00d7V , with V , the set of vertices of the considered graph. 
This probabilistic model is based on the representation of each vertex as a low-dimensional vector. This vector results from the minimization of an objective function based on the Kullback-Leibler divergence between the probabilistic model and the empirical distribution of the considered graph. This minimization is performed by the Stochastic Gradient Descent (SGD) method. Tang et al. (2015) propose more precisely two probabilistic models: one is based on the direct relation between two vertices while the second defines the proximity of two vertices according to the number of neighbors they share. We adopted the second model, which globally gives better results on several benchmarks.", "cite_spans": [ { "start": 21, "end": 40, "text": "(Tang et al., 2015)", "ref_id": "BIBREF34" }, { "start": 236, "end": 252, "text": "Hu et al. (2008)", "ref_id": "BIBREF16" }, { "start": 796, "end": 814, "text": "Tang et al. (2015)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Embedding", "sec_num": "2.1" }, { "text": "In our second option, SVD factorizes T, the adjacency matrix of the thesaurus to embed, into the product U · Σ · V. U and V are orthonormal and Σ is a diagonal matrix of eigenvalues. We classically adopted the truncated version of SVD by keeping only the first d elements of Σ, which finally leads to T_d = U_d · Σ_d · V_d. Levy et al. (2015) investigated, in the context of word co-occurrence matrices, the best option for the low-dimensional representation of words: the usual setting is U_d · Σ_d, while Caron (2001) suggested that U_d · Σ_d^P with P < 1 would be a better option. They found that P = 0 or P = 0.5 are clearly better than P = 1, with a slight superiority for P = 0. Similarly, we found P = 0 to be the best option.", "cite_spans": [ { "start": 300, "end": 318, "text": "Levy et al. (2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Embedding", "sec_num": "2.1" }, { "text": "Our last choice is based on a less constrained form of matrix factorization where T is decomposed into two matrices in such a way that U · V ≈ T, with T ∈ R^(m×n), U ∈ R^(m×d), V ∈ R^(d×n) and d ≪ m, n. U and V are obtained by minimizing the following expression:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Embedding", "sec_num": "2.1" }, { "text": "Σ_{i,j} (t_ij − u_i · v_j)^2 + λ (||u_i||^2 + ||v_j||^2) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Embedding", "sec_num": "2.1" }, { "text": "where the first term minimizes the reconstruction error of T by the product U · V while the second term is a regularization term, controlled by the parameter λ, for avoiding overfitting. We used U as the embedding of the initial thesaurus. 
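To make the objective of equation (1) concrete, the following sketch minimizes it by plain full-batch gradient descent on a tiny dense matrix; the toy matrix, dimensionality, learning rate and number of iterations are arbitrary illustration choices and do not correspond to the actual experimental setting.

```python
# A minimal sketch of the regularized factorization of equation (1), assuming
# a small dense similarity matrix T; the real thesaurus matrix is large and sparse.
import numpy as np

def factorize(T, d=2, lam=0.075, lr=0.05, epochs=1000, seed=0):
    """Minimize sum_ij (t_ij - u_i.v_j)^2 + lam * (||u_i||^2 + ||v_j||^2) by
    full-batch gradient descent (the constant factor 2 is absorbed into lr);
    U is then used as the embedding of the thesaurus entries."""
    rng = np.random.default_rng(seed)
    m, n = T.shape
    U = 0.1 * rng.standard_normal((m, d))
    V = 0.1 * rng.standard_normal((d, n))
    for _ in range(epochs):
        E = T - U @ V                    # reconstruction error
        U += lr * (E @ V.T - lam * U)    # gradient step on U
        V += lr * (U.T @ E - lam * V)    # gradient step on V
    return U, V

T = np.array([[1.0, 0.7, 0.1],
              [0.7, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
U, V = factorize(T)
print(np.round(U @ V, 2))  # low-rank approximation of T
```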
(Hu et al., 2008 ) is a slight variation of this approach where t ij is turned into a confidence score and the minimization of equation 1 is performed by the Alternating Least Squares method. One of the interests of this matrix factorization approach is its ability to deal with undefined values, which implements an implicit feedback in the context of recommender systems and can deal in our context with the fact that the input graph is generally sparse and does not include the furthest semantic neighbors of an entry.", "cite_spans": [ { "start": 235, "end": 251, "text": "(Hu et al., 2008", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Embedding", "sec_num": "2.1" }, { "text": "As mentioned previously, all the graph embedding methods of the previous section exploit the semantic similarity between words but for an entry, this similarity is not linearly correlated with the rank of its relevant neighbors in the thesaurus. In other words, the relevance of the semantic neighbors of an entry strongly decreases as their rank increases and the first neighbors are particularly important. For taking into account this observation, we have adopted a strategy consisting in using the first neighbors of each entry of the initial thesaurus as constraints for adapting the embeddings built from this thesaurus by the graph embedding methods we consider. Such adaptation has already been tackled by some work in the context of the injection of external knowledge made of semantic relations into embeddings built mainly by neural methods such as the Skip-Gram model (Mikolov et al., 2013) . Methods for performing such injection can roughly be divided into two categories: those operating during the building of the embeddings, generally by modifying the objective function supporting this building (Yih et al., 2012; Zhang et al., 2014) , and those applied after the building of the embeddings (Yu and Dredze, 2014; Xu et al., 2014) . We have more particularly used or adapted two methods from the second category and transposed one method from the first category for implementing our endogenous strategy.", "cite_spans": [ { "start": 880, "end": 902, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF25" }, { "start": 1113, "end": 1131, "text": "(Yih et al., 2012;", "ref_id": "BIBREF38" }, { "start": 1132, "end": 1151, "text": "Zhang et al., 2014)", "ref_id": "BIBREF41" }, { "start": 1209, "end": 1230, "text": "(Yu and Dredze, 2014;", "ref_id": "BIBREF40" }, { "start": 1231, "end": 1247, "text": "Xu et al., 2014)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "The first method we have considered is the retrofitting method from Faruqui et al. (2015) . This method performs the adaptation of a set of word vectors q i by minimizing the following objective function through a label propagation algorithm (Bengio et al., 2006) :", "cite_spans": [ { "start": 68, "end": 89, "text": "Faruqui et al. 
(2015)", "ref_id": "BIBREF11" }, { "start": 242, "end": 263, "text": "(Bengio et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n i=1 q i \u2212q i 2 + (i,j)\u2208E q i \u2212 q j 2", "eq_num": "(2)" } ], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "whereq i are the q i vectors after their adaptation. The first term is a stability term ensuring that the adapted vectors do not diverge too much from the initial vectors while the second term represents an adaptation term, tending to bring closer the vectors associated with words that are part of a relation from an external knowledge source E. In our case, this knowledge corresponds to the relations between each entry of the initial thesaurus and its first neighbors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "The second method, counter-fitting (Mrk\u0161i\u0107 et al., 2016) , is close to retrofitting and mainly differentiates from it by adding to the objective function a repelling term for pushing vectors corresponding to antonymous words away from each other. However, a distributional thesaurus does not contain identified antonymous words 1 . Hence, we discarded this term and used the following objective function, minimized by SGD:", "cite_spans": [ { "start": 35, "end": 56, "text": "(Mrk\u0161i\u0107 et al., 2016)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "(3) N i =1 j \u2208N (i) \u03c4 (dist(q i ,q j ) \u2212 dist(q i , q j )) + (i,j) \u2208E \u03c4 (dist(q i ,q j ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "with dist(x, y) = 1 \u2212 cos(x, y) and \u03c4 (x) = max(0, x). As in equation 2, the first term tends to preserve the initial vectors. In this case, this preservation does not focus on the vectors themselves but on the pairwise distances between a vector and its nearest neighbors (N (i)). The second term is quite similar to the second term of equation 2 with the use of a distance derived from the Cosine similarity instead of the Euclidean distance 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "The last method we have used for improving the embeddings built from the initial thesaurus, called rank-fitting hereafter, is a transposition of the method proposed by Liu et al. (2015) . The objective of this method is to integrate into embeddings order constraints coming from external knowledge with the following form:", "cite_spans": [ { "start": 168, "end": 185, "text": "Liu et al. (2015)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "similarity(w i , w j ) > similarity(w i , w k ), abbreviated s ij > s ik in what", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "follows. 
This kind of constraint particularly fits our context as the semantic neighbors of an entry in a distributional thesaurus are ranked and can be viewed as a set of such constraints. More precisely, i corresponds in this case to an entry and j and k to two of its neighbors such that rank(j) > rank(k). However, the method of Liu et al. (2015) is linked to the Skip-Gram model and was defined as a modification of the objective function underlying this model. We have transposed this approach for its application to the adaptation of embeddings after their building, without a specific link to the Skip-Gram model.", "cite_spans": [ { "start": 334, "end": 351, "text": "Liu et al. (2015)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "The general idea is to adapt the vectors so that s_ij > s_ik holds for every (i, j, k) ∈ E. The objective to minimize takes more specifically the following form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Σ_{(i,j,k)∈E} f(s_ik − s_ij)", "eq_num": "(4)" } ], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "where f(s_ik − s_ij) = max(0, s_ik − s_ij) corresponds to a kind of hinge loss function and the similarity between words i and j, s_ij, is given by the Cosine measure between their associated vectors. The minimization of this objective is performed, as for counter-fitting, by SGD. Finally, we have also defined a mixed counter-rank-fitting method that associates constraints about the proximity of word vectors with constraints about their relative ranking. This association was done by mixing the objective functions of counter-fitting and rank-fitting through the addition of the second term of equation 3, i.e. its adaptation term, and equation 4. In this configuration, the first term of the counter-fitting function, which preserves the initial embeddings, was no longer found useful in preliminary experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "3 Evaluation of Thesaurus Embedding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Graph to Thesaurus Embeddings", "sec_num": "2.2" }, { "text": "For testing and evaluating the proposed approach, we first needed to choose a reference corpus and to build a distributional thesaurus from it. We chose the AQUAINT-2 corpus, already used for various evaluations, a medium-sized corpus of around 380 million words made up of news articles in English. The main preprocessing of the corpus was the application of lemmatization and the removal of function words. 
According to (Bullinaria and Levy, 2012) , the lemmatization of words leads to only a small improvement in terms of results but it is also a way to obtain the same results with a smaller corpus.", "cite_spans": [ { "start": 434, "end": 445, "text": "Levy, 2012)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Framework", "sec_num": "3.1" }, { "text": "The building of our reference distributional thesaurus, T cnt , was achieved by relying on a classical count-based approach with a set of parameters that were found relevant by several systematic studies (Baroni et al., 2014; Kiela and Clark, 2014; Levy et al., 2015 ):", "cite_spans": [ { "start": 204, "end": 225, "text": "(Baroni et al., 2014;", "ref_id": "BIBREF0" }, { "start": 226, "end": 248, "text": "Kiela and Clark, 2014;", "ref_id": "BIBREF18" }, { "start": 249, "end": 266, "text": "Levy et al., 2015", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Framework", "sec_num": "3.1" }, { "text": "\u2022 distributional contexts: co-occurrents restricted to nouns, verbs and adjectives having at least 10 occurrences in the corpus, collected in a 3 word window, i.e. +/-1 word around the target word; \u2022 directional co-occurrents, which were found having a good performance by Bullinaria and Levy 2012; \u2022 weighting function of co-occurrents in contexts = Positive Pointwise Mutual Information (PPMI) with the context distribution smoothing factor proposed by (Levy et al., 2015) , equal to 0.75; \u2022 similarity measure between contexts, for evaluating the semantic similarity of two words = Cosine measure; \u2022 filtering of contexts: removal of cooccurrents with only one occurrence.", "cite_spans": [ { "start": 455, "end": 474, "text": "(Levy et al., 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Framework", "sec_num": "3.1" }, { "text": "The building of the thesaurus from the distributional data was performed as in (Lin, 1998) or (Curran and Moens, 2002) by extracting the closest semantic neighbors of each of its entries. More precisely, the similarity measure was computed between each entry and its possible neighbors. Both the entries of the thesaurus and their possible neighbors were nouns with at least 10 occurrences in the corpus. These neighbors were then ranked in the decreasing order of the values of this measure.", "cite_spans": [ { "start": 79, "end": 90, "text": "(Lin, 1998)", "ref_id": "BIBREF23" }, { "start": 94, "end": 118, "text": "(Curran and Moens, 2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Framework", "sec_num": "3.1" }, { "text": "The evaluation of distributional objects such as thesauri or word embeddings is currently a subject of research as both intrinsic (Faruqui et al., 2016; Batchkarov et al., 2016) and extrinsic (Schnabel et al., 2015) evaluations exhibit insufficiencies that question their reliability. In our case, we per- formed an intrinsic evaluation relying on the synonyms of WordNet 3.0 (Miller, 1990) as Gold Standard. This choice was first justified by our overall long-term perspective, illustrated in Section 5, which is the extraction of synonyms from documents and the expansion of already existing sets of synonyms. 
However, it is also likely to alleviate some evaluation problems as it narrows the scope of the evaluation, by restricting to a specific type of semantic relations, but performs it at a large scale, the combination of which making its results more reliable. For focusing on the evaluation of the extracted semantic neighbors, the WordNet 3.0's synonyms were filtered to discard entries and synonyms that were not part of the AQUAINT-2 vocabulary. The number of evaluated words and the average number of synonyms in our Gold Standard for each entry are given by the second and the third columns of Table 1 . In terms of methodology, the kind of evaluation we have performed follows (Curran and Moens, 2002; Ferret, 2010) by adopting an Information Retrieval point of view in which each entry is considered as a query and its neighbors are viewed as retrieved synonyms. Hence, we adopted the classical evaluation measures in the field: the Rprecision (R prec ) is the precision after the first R neighbors were retrieved, R being the number of Gold Standard synonyms; the Mean Average Precision (MAP) is the mean of the precision values each time a Gold Standard synonym is found; precision at different cut-offs is given for the 1, 2, 5 first neighbors. We also give the global recall for the first 100 neighbors. Table 1 shows the evaluation according to these measures of our initial distributional thesaurus T cnt along with the evaluation in the same framework of two reference models for building word embeddings from texts: GloVe from Pennington et al. (2014) and Skip-Gram with negative sampling (SGNS) from Mikolov et al. (2013) 3 . The input of these two models was the lemmatized version of the AQUAINT-2 corpus as for T cnt but with all its words. Each model was built with the best parameters found from previous work and tested on this corpus. For GloVe: vectors of 300 dimensions, window size = 10, addition of word and context vectors and 100 iterations; for SGNS: vectors of 400 dimensions, window size = 5, 10 negative examples and default value for downsampling of highly frequent words.", "cite_spans": [ { "start": 130, "end": 152, "text": "(Faruqui et al., 2016;", "ref_id": "BIBREF12" }, { "start": 153, "end": 177, "text": "Batchkarov et al., 2016)", "ref_id": "BIBREF1" }, { "start": 192, "end": 215, "text": "(Schnabel et al., 2015)", "ref_id": "BIBREF33" }, { "start": 376, "end": 390, "text": "(Miller, 1990)", "ref_id": "BIBREF26" }, { "start": 1293, "end": 1317, "text": "(Curran and Moens, 2002;", "ref_id": "BIBREF10" }, { "start": 1318, "end": 1331, "text": "Ferret, 2010)", "ref_id": "BIBREF13" }, { "start": 2152, "end": 2176, "text": "Pennington et al. (2014)", "ref_id": "BIBREF29" }, { "start": 2226, "end": 2247, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 1209, "end": 1216, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1925, "end": 1932, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Framework", "sec_num": "3.1" }, { "text": "Two main trends can be drawn from this evaluation. First, T cnt significantly outperforms GloVe and SGNS for all measures 4 . This superiority of a count-based approach over two predict-based approaches can be seen as contradictory with the findings of Levy et al. (2015) . Our analysis is that the use of directional co-occurrences, a rarely tested parameter, explains a large part of this superiority. The second conclusion is that SGNS significantly outperforms GloVe for all measures. 
Hence, we will report results hereafter only for SGNS as a reference word embedding model.", "cite_spans": [ { "start": 253, "end": 271, "text": "Levy et al. (2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Framework", "sec_num": "3.1" }, { "text": "We have evaluated the three methods presented in Section 2.1 for embedding our initial thesaurus T cnt according to the evaluation framework presented in the previous section. For all methods, the main parameters were the number of neighbors taken into account and the number of dimensions of the final vectors. In all cases, the number of neighbors was equal to 5,000, LINE being not very affected by this parameter, and the size of the vectors was 600 5 . For LINE, 10 billion samplings of the similarity values were done and for the matrix factorization (MF) approach, we used \u03bb = 0.075.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Embedding Evaluation", "sec_num": "3.2" }, { "text": "According to Table 2 , SVD significantly appears as the best method even if LINE is a competitive alternative. SVD outperforms GloVe while Method R prec MAP P@1 P@2 P@5 Table 2 : Evaluation of the embedding of a thesaurus as a graph LINE is equivalent to it, which is a first interesting result: this first embedding step of a distributional thesaurus is already able to produce better word representations than a state-of-the-art method, even if it does not reach the level of the best one (SGNS). However, Table 2 also shows that there is still room for improvement for reaching the level of the initial thesaurus T cnt . Finally, the matrix factorization approach is obviously a bad option, at least under the tested form. Table 3 shows the results of the evaluation of the word embedding adaptation methods of Section 2.2, which is also the evaluation of the global thesaurus embedding process. For all methods, the input embeddings were produced by applying SVD to the initial thesaurus T cnt , which was shown as the best option by Table 2 . For retrofitting (Retrofit) and counter-fitting (Counterfit), only the relations between each entry of the thesaurus and its first and second neighbors were considered. For rank-fitting (Rankfit), the neighborhood was extended to the first 50 neighbors. For the optimization processes, we used the default settings of the methods: 10 iterations for retrofitting and 20 iterations for counter-fitting. We also used 20 iterations for rank-fitting and counter-rank-fitting (Counter-rankfit). For all optimizations by SGD, the learning rate was 0.01. Several observations can be done. First, all the tested methods significantly improve the initial embeddings. Second, the results of the different methods are quite close for all measures. retrofitting outperforms counter-fitting but not significantly for R prec . rank-fitting is significantly the worst method and its association with counterfitting is better than retrofitting for P@1 only, but not significantly. However, we can globally note that the association of SVD and the best adapta- Table 3 : Evaluation of the global thesaurus embedding process tion methods obtains results close to the results of the initial T cnt (the difference is even not significant for R prec and P@5). 
As a consequence, we can conclude, in connection with our initial objective, that embedding a distributional thesaurus while preserving its information in terms of semantic similarity is possible.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 2", "ref_id": null }, { "start": 169, "end": 176, "text": "Table 2", "ref_id": null }, { "start": 508, "end": 515, "text": "Table 2", "ref_id": null }, { "start": 726, "end": 733, "text": "Table 3", "ref_id": null }, { "start": 1038, "end": 1045, "text": "Table 2", "ref_id": null }, { "start": 2091, "end": 2098, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Graph Embedding Evaluation", "sec_num": "3.2" }, { "text": "4 Applications of Thesaurus Embedding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Thesaurus Embedding Evaluation", "sec_num": "3.3" }, { "text": "In the previous section, we have shown that the strongest relations of a distributional thesaurus can be used for improving word vectors built from the embedding of this thesaurus. Since this adaptation is performed after the building of the vectors, it can actually be applied to all kinds of embeddings elaborated from the corpus used for building the distributional thesaurus. As for the process of the previous section, this is a kind of bootstrapping approach in which the knowledge extracted from a corpus is used for improving the word representations elaborated from this corpus. Moreover, as GloVe and most word embedding models, SGNS relies on first-order co-occurrences between words. From that perspective, adapting SGNS embeddings with relations coming from a distributional thesaurus built from the same corpus as these embeddings is a way to incorporate second-order co-occurrence relations into them. For this experiment, we applied both retrofitting and counter-rank-fitting with exactly the same pa-rameters as in Section 3.3. The results of Table 4 clearly validate the benefit of the technique: both retrofitting and counter-rank-fitting significantly improve SGNS embeddings. As in Section 3.3, the results of retrofitting and counterrank-fitting are rather close, with a global advantage for counter-rank-fitting. We can also note that the improved versions of SGNS embeddings are still far from the best results of our thesaurus embedding method (SVD + Retrofit).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Improvement of Existing Embeddings", "sec_num": "4.1" }, { "text": "Being able to turn a distributional thesaurus into word embeddings also makes it possible to fusion different types of distributional data. In the case of thesaurus, fusion processes were early proposed by Curran (2002) and more recently by Ferret (2015). In the case of word embeddings, the recent work of Yin and Sch\u00fctze (2016) applied ensemble methods to several word embeddings. By exploiting the possibility to change from one type of representation to another, we propose a new kind of fusion, performed between a thesaurus and word embeddings and leading to improve both the input thesaurus and the embeddings.", "cite_spans": [ { "start": 206, "end": 219, "text": "Curran (2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Fusion of Heterogeneous Representations", "sec_num": "4.2" }, { "text": "The first step of this fusion process consists in turning the input word embeddings into a distributional thesaurus. 
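As an illustration of this first step, the sketch below derives ranked neighbor lists from a set of word vectors with the Cosine similarity; the vocabulary and vectors are random placeholders rather than the actual SGNS embeddings.

```python
# Turning word embeddings into a distributional thesaurus: for each word,
# every other word is ranked by Cosine similarity and the k nearest neighbors
# are kept. Vocabulary and vectors below are hypothetical placeholders.
import numpy as np

def embeddings_to_thesaurus(vocab, vectors, k=3):
    """Return {word: [(neighbor, similarity), ...]} with neighbors ranked in
    decreasing order of Cosine similarity."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    unit = vectors / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T                      # pairwise Cosine similarities
    thesaurus = {}
    for i, word in enumerate(vocab):
        order = np.argsort(-sims[i])
        neighbors = [(vocab[j], float(sims[i, j])) for j in order if j != i]
        thesaurus[word] = neighbors[:k]
    return thesaurus

vocab = ["car", "automobile", "truck", "banana"]
vectors = np.random.default_rng(0).standard_normal((len(vocab), 50))
print(embeddings_to_thesaurus(vocab, vectors, k=2)["car"])
```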
Then, the resulting thesaurus is merged with the input thesaurus, which consists in merging two lists of ranked neighbors for each of their entries. We followed (Ferret, 2015) and applied for this fusion the CombSum strategy to the similarity values between entries and their neighbors, normalized with the Zero-one method (Wu et al., 2006) . Finally, we applied the method of Section 2 for turning the thesaurus resulting from this fusion into word embeddings.", "cite_spans": [ { "start": 278, "end": 292, "text": "(Ferret, 2015)", "ref_id": "BIBREF14" }, { "start": 440, "end": 457, "text": "(Wu et al., 2006)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Fusion of Heterogeneous Representations", "sec_num": "4.2" }, { "text": "Rprec MAP P@1 P@2 P@5 The evaluation of this fusion process, performed in a shared context as the considered thesaurus and word embeddings are built from the same corpus, is given in Table 5 . The Fusion T-S line corresponds to the evaluation of the thesaurus resulting from the second step of the fusion process. The significant difference with the results of T cnt and SGNS confirms the conclusions of Ferret (2015) about the interest of merging thesauri built differently. The Emb retrof (fusion T-S) line shows the evaluation of the word embeddings produced by the global fusion process. In a similar way to the findings of Section 3.3, the embeddings built from the Fusion T-S thesaurus are less effective than the thesaurus itself but the difference is small here too. Moreover, we can note that these embeddings have significantly higher results than SGNS, the input embeddings, but also higher results than the input thesaurus T cnt , once again without any external knowledge.", "cite_spans": [], "ref_spans": [ { "start": 183, "end": 190, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "In this section, we will illustrate how the improvement of a distributional thesaurus, obtained in our case by the injection of external knowledge, can be transposed to word embeddings. Moreover, we will show that the thesaurus embedding process achieving this transposition obtains better results for taking into account external knowledge than methods, such as retrofitting, that are applied to embeddings built directly from texts (SGNS in our case). We will demonstrate this superiority more precisely in the context of synset expansion. The overall principle is quite straightforward: first, the external knowledge is integrated into a distributional thesaurus built from the source corpus (T cnt in our experiments). Then, the resulting thesaurus is embedded following the method of Section 2. This external knowledge is supposed to be made of semantic similarity relations. We have considered more particularly pairs of synonyms (E, K) such that E is an entry of T cnt and K is a synonym of E randomly selected from the WordNet 3.0's synsets E is part of. Each E is part of only one pair (E, K).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Injection and Synset Expansion", "sec_num": "5" }, { "text": "The integration of the semantic relations into a distributional thesaurus is done for each entry E by reranking the neighbor K of the (E, K) pair at the highest rank with the highest similarity. 
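The sketch below makes this reranking explicit; the entry, neighbors and similarity values are hypothetical and only illustrate the operation.

```python
# Knowledge injection into a thesaurus: the synonym K of a pair (E, K) is
# promoted to the first rank of entry E with the maximal similarity value
# (1.0 for the Cosine measure). Entries and scores are hypothetical.
def inject_synonym(thesaurus, entry, synonym, max_sim=1.0):
    """Return a copy of the thesaurus where `synonym` is the top-ranked
    neighbor of `entry`."""
    neighbors = [(w, s) for (w, s) in thesaurus.get(entry, []) if w != synonym]
    return {**thesaurus, entry: [(synonym, max_sim)] + neighbors}

thesaurus = {"idiom": [("phrase", 0.41), ("slang", 0.33), ("parlance", 0.05)]}
boosted = inject_synonym(thesaurus, "idiom", "parlance")
print(boosted["idiom"][:2])  # [('parlance', 1.0), ('phrase', 0.41)]
```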
The line T cnt +K of Table 6 gives the evaluation of this integration for 10,544 pairs (E, K) of synonyms, which means one synonym by entry.", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 223, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Injecting External Knowledge into a Thesaurus", "sec_num": "5.1" }, { "text": "Method Rprec MAP P@1 P@2 P@5 Rprec MAP P@1 P@2 P@5 Table 6 : Evaluation of the injection of external knowledge into word embeddings for synset expansion", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 58, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Global evaluation", "sec_num": null }, { "text": "As our evaluation methodology is based on the synonyms of WordNet, we have split our evaluation in two parts. One part takes as Gold Standard the synonyms used for the knowledge injection (see the Evaluation of memorization columns in Table 6 ) and evaluates to what extent the injected knowledge has been memorized. The second part (see the Global evaluation columns in Table 6 ) considers all the synonyms used for the evaluations in the previous sections as Gold Standard for evaluating the ability of models not only to memorize the injected knowledge but also to retrieve new synonyms, i.e. synonyms that are not part of the injected knowledge. In the context of our evaluation, which is based on synonym retrieval, this kind of generalization can also be viewed as a form of synset expansion. This is another way to extract synonyms from texts compared to work such as (Leeuwenberg et al., 2016; Minkov and Cohen, 2014; van der Plas and Tiedemann, 2006) .", "cite_spans": [ { "start": 875, "end": 901, "text": "(Leeuwenberg et al., 2016;", "ref_id": "BIBREF21" }, { "start": 902, "end": 925, "text": "Minkov and Cohen, 2014;", "ref_id": "BIBREF27" }, { "start": 926, "end": 959, "text": "van der Plas and Tiedemann, 2006)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 235, "end": 242, "text": "Table 6", "ref_id": null }, { "start": 371, "end": 378, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "SGNS", "sec_num": null }, { "text": "In the case of T cnt +K, we can note that the memorization is perfect, which is not a surprise since the injection of knowledge into the thesaurus corresponds to a kind of memorization. No specific generalization effect beyond the synonyms already present in the thesaurus is observed for the same reason.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SGNS", "sec_num": null }, { "text": "The result of the process described in the previous section is what we could call a knowledge-boosted distributional thesaurus. However, its form is not different from a classical distributional thesaurus and it can be embedded similarly by applying the method of Section 2. The only difference with this method concerns its second step: instead of leveraging the first n neighbors of each entry for improving the embeddings obtained by SVD, we ex-ploited the set of relations used for \"boosting\" the initial thesaurus. The evaluation of the new method we propose for building word embeddings integrating external knowledge is presented in Table 6 . More precisely, three different methods are compared: a state-of-the-art method, SGNS+retrof(K), consisting in applying retrofitting to SGNS embeddings. retrofitting was chosen as it is quick and gives good results. The second method, svd(T cnt )+retrof(K), applies retrofitting to the embeddings built from T cnt by SVD. 
The last method, svd(T cnt +K)+retrof(K), corresponds to the full process we have presented, where the external knowledge is first injected into the initial thesaurus T cnt before its embedding.", "cite_spans": [], "ref_spans": [ { "start": 640, "end": 647, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "From a Knowledge-Boosted Thesaurus to Word Embeddings", "sec_num": "5.2" }, { "text": "First, we can note that all the methods considered for producing word embeddings by taking into account external knowledge leads to a very strong improvement of results compared to their starting point. This is true both for the memorization and global evaluations. From the memorization viewpoint, all the injected synonyms can be found among the first five neighbors returned by the three methods as illustrated by their P@5 and even at the first rank in nearly nine times out of ten for the best method, which is clearly our thesaurus embedding process (except the pure memorization performed by T cnt +K).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From a Knowledge-Boosted Thesaurus to Word Embeddings", "sec_num": "5.2" }, { "text": "We can also observe that the method used for knowledge injection can reverse initial differences. For instance, the application of SVD to a thesaurus built from a corpus, svd(T cnt ), obtains lower results than the application of SGNS to the same corpus. After the injection of external knowledge, this ranking is reversed: the values of the evaluation measures are higher for svd(T cnt )+retrof(K) than for SGNS+retrofit(K).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From a Knowledge-Boosted Thesaurus to Word Embeddings", "sec_num": "5.2" }, { "text": "More importantly, Table 6 shows that the inte- Table 7 : Examples of the interest of thesaurus embedding for synset expansion. Each synonym is given with its [rank] among the neighbors of the entry and its similarity value with the entry gration of external knowledge into the thesaurus before its embedding is clearly effective as illustrated by the significant differences between SGNS+retrofit(K) and svd(T cnt +K)+retrof(K). Finally, from the synset expansion viewpoint, it is worth adding that the P@2 value of our best method means that the first synonym proposed by the expansion in addition to the injected synonyms is correct with a precision equal to 46.9, which represents 4,945 new synonyms and illustrates the generalization capabilities of the method. Table 7 illustrates more qualitatively for some words the interest of the thesaurus embedding method we propose for the expansion of existing synsets. In accordance with the findings of Table 6, it first shows that the method has a good memorization capability of the injected knowledge (K) in the initial thesaurus since in the resulting embeddings (svd(T cnt +K)+retrof(K)), the synonym provided for each entry appears as the first or the second neighbor. Table 7 also illustrates the good capabilities of the method observed in Table 6 in terms of generalization as the rank of synonyms of an entry not provided as initial knowledge tend to decrease strongly. For instance, for the entry idiom, the rank of the synonym parlance is equal to 2,971 in the initial thesaurus with the injected knowledge (T cnt +K) while it is only equal to 4 after the embedding of the thesaurus. 
Interestingly, this improvement in terms of rank comes from a change in the distributional representation of words that also impacts the evaluation of the semantic similarity between words. While the similarity between the word richness and its synonym profusion was initially very low (0.06), its value after the embedding process is very much higher (0.66) and more representative of the relation between the two words.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 6", "ref_id": null }, { "start": 47, "end": 54, "text": "Table 7", "ref_id": null }, { "start": 766, "end": 773, "text": "Table 7", "ref_id": null }, { "start": 1224, "end": 1231, "text": "Table 7", "ref_id": null }, { "start": 1297, "end": 1304, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "From a Knowledge-Boosted Thesaurus to Word Embeddings", "sec_num": "5.2" }, { "text": "In this article, we presented a method for building word embeddings from distributional thesauri with a limited loss of semantic similarity information. The resulting embeddings outperforms stateof-the-art embeddings built from the same corpus. We also showed that this method can improve already existing word representations and the injection of external knowledge into word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Perspectives", "sec_num": "6" }, { "text": "A first extension to this work would be to better leverage the ranking of neighbors in a thesaurus and to integrate more tightly the two steps of our embedding method. We also would like to define a more elaborated method for injecting external knowledge into a distributional thesaurus, more precisely by exploiting the injected knowledge to rerank its semantic neighbors. Finally, we would be interested in testing further the capabilities of the embeddings with injected knowledge for extending resources such as WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Perspectives", "sec_num": "6" }, { "text": "We tried to exploit semantic neighbors that are not very close to their entry as antonyms but results were globally better without them.2 Since the Cosine similarity is used as similarity measure between words through their vectors, this distance should be more adapted in this context than the Euclidean distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Following(Levy et al., 2015), SGNS was preferred to the Continuous Bag-Of-Word (CBOW) model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The statistical significance of differences were judged according to a paired Wilcoxon test with p-value < 0.05. The same test was applied for results reported hereafter.5 The values of these parameters were optimized on another thesaurus, coming from(Ferret, 2010).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Don't count, predict! A systematic comparison of context-counting vs. 
context-predicting semantic vectors", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Germ\u00e1n", "middle": [], "last": "Kruszewski", "suffix": "" } ], "year": 2014, "venue": "52 nd Annual Meeting of the Association for Computational Linguistics (ACL 2014)", "volume": "", "issue": "", "pages": "238--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In 52 nd An- nual Meeting of the Association for Computational Linguistics (ACL 2014), pages 238-247, Baltimore, Maryland.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A critique of word similarity as a method for evaluating distributional semantic models", "authors": [ { "first": "Miroslav", "middle": [], "last": "Batchkarov", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Kober", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Reffin", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2016, "venue": "1st Workshop on Evaluating Vector-Space Representations for NLP", "volume": "", "issue": "", "pages": "7--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. 2016. A critique of word similarity as a method for evaluating distribu- tional semantic models. In 1st Workshop on Evalu- ating Vector-Space Representations for NLP, pages 7-12, Berlin, Germany.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering", "authors": [ { "first": "Mikhail", "middle": [], "last": "Belkin", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Niyogi", "suffix": "" } ], "year": 2001, "venue": "Advances in Neural Information Processing Systems", "volume": "14", "issue": "", "pages": "585--591", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Belkin and Partha Niyogi. 2001. Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering. In Advances in Neural Information Processing Systems 14, pages 585-591.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Label Propagation And Quadratic Criterion", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Delalleau", "suffix": "" }, { "first": "Nicolas", "middle": [ "Le" ], "last": "Roux", "suffix": "" } ], "year": 2006, "venue": "Semi-Supervised Learning", "volume": "", "issue": "", "pages": "193--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. 2006. Label Propagation And Quadratic Cri- terion. In Semi-Supervised Learning, pages 193- 216. MIT Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Evaluating WordNet-based Measures of Lexical Semantic Relatedness", "authors": [ { "first": "Alexander", "middle": [], "last": "Budanitsky", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "1", "pages": "13--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Budanitsky and Graeme Hirst. 2006. 
Evalu- ating WordNet-based Measures of Lexical Semantic Relatedness. Computational Linguistics, 32(1):13- 47.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Extracting semantic representations from word co-occurrence statistics: stop-lists, stemming, and SVD. Behavior research methods", "authors": [ { "first": "A", "middle": [], "last": "John", "suffix": "" }, { "first": "Joseph P", "middle": [], "last": "Bullinaria", "suffix": "" }, { "first": "", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2012, "venue": "", "volume": "44", "issue": "", "pages": "890--907", "other_ids": {}, "num": null, "urls": [], "raw_text": "John A Bullinaria and Joseph P Levy. 2012. Extracting semantic representations from word co-occurrence statistics: stop-lists, stemming, and SVD. Behavior research methods, 44(3):890-907.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deep Neural Networks for Learning Graph Representations", "authors": [ { "first": "Shaosheng", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Qiongkai", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Thirtieth AAAI Conference on Artificial Intelligence (AAAI 2016)", "volume": "", "issue": "", "pages": "1145--1152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaosheng Cao, Wei Lu, and Qiongkai Xu. 2016. Deep Neural Networks for Learning Graph Representa- tions. In Thirtieth AAAI Conference on Artificial Intelligence (AAAI 2016), pages 1145-1152. AAAI Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Computational Information Retrieval. chapter Experiments with LSA Scoring: Optimal Rank and Basis", "authors": [ { "first": "John", "middle": [], "last": "Caron", "suffix": "" } ], "year": 2001, "venue": "Society for Industrial and Applied Mathematics", "volume": "", "issue": "", "pages": "157--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Caron. 2001. Computational Information Re- trieval. chapter Experiments with LSA Scoring: Op- timal Rank and Basis, pages 157-169. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improving distributional thesauri by exploring the graph of neighbors", "authors": [ { "first": "Vincent", "middle": [], "last": "Claveau", "suffix": "" }, { "first": "Ewa", "middle": [], "last": "Kijak", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "" } ], "year": 2014, "venue": "25 th International Conference on Computational Linguistics (COLING 2014)", "volume": "", "issue": "", "pages": "709--720", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Claveau, Ewa Kijak, and Olivier Ferret. 2014. Improving distributional thesauri by exploring the graph of neighbors. In 25 th International Confer- ence on Computational Linguistics (COLING 2014), pages 709-720, Dublin, Ireland.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Ensemble Methods for Automatic Thesaurus Extraction", "authors": [ { "first": "James", "middle": [], "last": "Curran", "suffix": "" } ], "year": 2002, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "222--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Curran. 2002. Ensemble Methods for Auto- matic Thesaurus Extraction. 
In 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 222-229.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improvements in automatic thesaurus extraction", "authors": [ { "first": "R", "middle": [], "last": "James", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Curran", "suffix": "" }, { "first": "", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2002, "venue": "Workshop of the ACL Special Interest Group on the Lexicon (SIGLEX)", "volume": "", "issue": "", "pages": "59--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "James R. Curran and Marc Moens. 2002. Improve- ments in automatic thesaurus extraction. In Work- shop of the ACL Special Interest Group on the Lexi- con (SIGLEX), pages 59-66, Philadelphia, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Retrofitting Word Vectors to Semantic Lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Sujay", "middle": [], "last": "Kumar Jauhar", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2015)", "volume": "", "issue": "", "pages": "1606--1615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting Word Vectors to Semantic Lexicons. In 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies (NAACL HLT 2015), pages 1606-1615, Denver, Colorado.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Problems With Evaluation of Word Embeddings Using Word Similarity Tasks", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "1st Workshop on Evaluating Vector-Space Representations for NLP", "volume": "", "issue": "", "pages": "30--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems With Evaluation of Word Embeddings Using Word Similarity Tasks. In 1st Workshop on Evaluating Vector-Space Represen- tations for NLP, pages 30-35, Berlin, Germany.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Testing Semantic Similarity Measures for Extracting Synonyms from a Corpus", "authors": [ { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "" } ], "year": 2010, "venue": "th International Conference on Language Resources and Evaluation (LREC'10)", "volume": "", "issue": "", "pages": "3338--3343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Ferret. 2010. Testing Semantic Similarity Measures for Extracting Synonyms from a Corpus. 
In 7 th International Conference on Language Re- sources and Evaluation (LREC'10), pages 3338- 3343, Valletta, Malta.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Early and Late Combinations of Criteria for Reranking Distributional Thesauri", "authors": [ { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "" } ], "year": 2015, "venue": "53 rd Annual Meeting of the Association for Computational Linguistics and 7 th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2015)", "volume": "", "issue": "", "pages": "470--476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Ferret. 2015. Early and Late Combinations of Criteria for Reranking Distributional Thesauri. In 53 rd Annual Meeting of the Association for Com- putational Linguistics and 7 th International Joint Conference on Natural Language Processing (ACL- IJCNLP 2015), pages 470-476, Beijing, China.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Explorations in automatic thesaurus discovery", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Grefenstette. 1994. Explorations in automatic thesaurus discovery. Kluwer Academic Publishers.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Collaborative Filtering for Implicit Feedback Datasets", "authors": [ { "first": "Y", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Koren", "suffix": "" }, { "first": "C", "middle": [], "last": "Volinsky", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Hu, Y. Koren, and C. Volinsky. 2008. Collaborative Filtering for Implicit Feedback Datasets. In 2008", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Eighth IEEE International Conference on Data Mining (ICDM'08)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "263--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eighth IEEE International Conference on Data Min- ing (ICDM'08), pages 263-272.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Systematic Study of Semantic Vector Space Model Parameters", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2014, "venue": "2 nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)", "volume": "", "issue": "", "pages": "21--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douwe Kiela and Stephen Clark. 2014. A Systematic Study of Semantic Vector Space Model Parameters. In 2 nd Workshop on Continuous Vector Space Mod- els and their Compositionality (CVSC), pages 21- 30, Gothenburg, Sweden.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model", "authors": [ { "first": "Yehuda", "middle": [], "last": "Koren", "suffix": "" } ], "year": 2008, "venue": "14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2008)", "volume": "", "issue": "", "pages": "426--434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yehuda Koren. 2008. Factorization Meets the Neigh- borhood: A Multifaceted Collaborative Filtering Model. 
In 14th ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining (KDD 2008), pages 426-434.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge", "authors": [ { "first": "K", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Landauer", "suffix": "" }, { "first": "", "middle": [], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological review", "volume": "104", "issue": "2", "pages": "211--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and rep- resentation of knowledge. Psychological review, 104(2):211-240.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A Minimally Supervised Approach for Synonym Extraction with Word Embeddings", "authors": [ { "first": "Artuur", "middle": [], "last": "Leeuwenberg", "suffix": "" }, { "first": "Mihaela", "middle": [], "last": "Vela", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Dehdari", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2016, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "", "issue": "105", "pages": "111--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artuur Leeuwenberg, Mihaela Vela, Jon Dehdari, and Josef van Genabith. 2016. A Minimally Supervised Approach for Synonym Extraction with Word Em- beddings. The Prague Bulletin of Mathematical Lin- guistics, (105):111-142.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improving Distributional Similarity with Lessons Learned from Word Embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics (TALC)", "volume": "3", "issue": "", "pages": "211--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving Distributional Similarity with Lessons Learned from Word Embeddings. Transactions of the Association for Computational Linguistics (TALC), 3:211-225.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Automatic retrieval and clustering of similar words", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "17 th International Conference on Computational Linguistics and 36 th Annual Meeting of the Association for Computational Linguistics (ACL-COLING'98)", "volume": "", "issue": "", "pages": "768--774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. Automatic retrieval and cluster- ing of similar words. 
In 17 th International Confer- ence on Computational Linguistics and 36 th Annual Meeting of the Association for Computational Lin- guistics (ACL-COLING'98), pages 768-774, Montréal, Canada.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Learning Semantic Word Embeddings based on Ordinal Knowledge Constraints", "authors": [ { "first": "Quan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2015, "venue": "53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2015)", "volume": "", "issue": "", "pages": "1501--1511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning Semantic Word Embed- dings based on Ordinal Knowledge Constraints. In 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL- IJCNLP 2015), pages 1501-1511, Beijing, China.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word repre- sentations in vector space. In International Con- ference on Learning Representations 2013 (ICLR 2013), workshop track.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "WordNet: An On-Line Lexical Database", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller. 1990. WordNet: An On-Line Lex- ical Database. International Journal of Lexicogra- phy, 3(4).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Adaptive graph walk-based similarity measures for parsed text", "authors": [ { "first": "Einat", "middle": [], "last": "Minkov", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2014, "venue": "Natural Language Engineering", "volume": "20", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Einat Minkov and William W. Cohen. 2014. Adap- tive graph walk-based similarity measures for parsed text. 

Natural Language Engineering, 20(3):361-397.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Counter-fitting Word Vectors to Linguistic Constraints", "authors": [ { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Lina", "middle": [ "M" ], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2016)", "volume": "", "issue": "", "pages": "142--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid \u00d3 S\u00e9aghdha, Blaise Thom- son, Milica Ga\u0161i\u0107, Lina M. Rojas-Barahona, Pei- Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting Word Vectors to Linguistic Constraints. In 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies (NAACL HLT 2016), pages 142-148, San Diego, California.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "GloVe: Global Vectors for Word Representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532-1543, Doha, Qatar.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Inducing Language Networks from Continuous Space Word Representations", "authors": [ { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan Perozzi, Rami Al-Rfou, Vivek Kulkarni, and Steven Skiena. 2014a. Inducing Language Net- works from Continuous Space Word Representa- tions. 

Springer International Publishing, Bologna, Italy.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "DeepWalk: Online Learning of Social Representations", "authors": [ { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2014, "venue": "20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2014)", "volume": "", "issue": "", "pages": "701--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014b. DeepWalk: Online Learning of Social Rep- resentations. In 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing (KDD 2014), pages 701-710.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Finding Synonyms Using Automatic Word Alignment and Measures of Distributional Similarity", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Lonneke Van Der Plas", "suffix": "" }, { "first": "", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2006, "venue": "21 st International Conference on Computational Linguistics and 44 th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006)", "volume": "", "issue": "", "pages": "866--873", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lonneke van der Plas and J\u00f6rg Tiedemann. 2006. Find- ing Synonyms Using Automatic Word Alignment and Measures of Distributional Similarity. In 21 st International Conference on Computational Lin- guistics and 44 th Annual Meeting of the Associa- tion for Computational Linguistics (COLING-ACL 2006), pages 866-873, Sydney, Australia.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Evaluation methods for unsupervised word embeddings", "authors": [ { "first": "Tobias", "middle": [], "last": "Schnabel", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Labutov", "suffix": "" }, { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2015, "venue": "2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015)", "volume": "", "issue": "", "pages": "298--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In 2015 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 298-307, Lisbon, Portugal.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "LINE: Large-scale Information Network Embedding", "authors": [ { "first": "Jian", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Mingzhe", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" } ], "year": 2015, "venue": "24th International Conference on World Wide Web (WWW 2015), WWW '15", "volume": "", "issue": "", "pages": "1067--1077", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. LINE: Large-scale Information Network Embedding. 
In 24th Interna- tional Conference on World Wide Web (WWW 2015), WWW '15, pages 1067-1077.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Evaluating Score Normalization Methods in Data Fusion", "authors": [ { "first": "Shengli", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Crestani", "suffix": "" }, { "first": "Yaxin", "middle": [], "last": "Bi", "suffix": "" } ], "year": 2006, "venue": "Third Asia Conference on Information Retrieval Technology (AIRS'06)", "volume": "", "issue": "", "pages": "642--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shengli Wu, Fabio Crestani, and Yaxin Bi. 2006. Eval- uating Score Normalization Methods in Data Fu- sion. In Third Asia Conference on Information Retrieval Technology (AIRS'06), pages 642-648. Springer-Verlag.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "RC-NET: A General Framework for Incorporating Knowledge into Word Representations", "authors": [ { "first": "Chang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yalong", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaoguang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "23rd ACM International Conference on Conference on Information and Knowledge Management (CIKM 2014)", "volume": "", "issue": "", "pages": "1219--1228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. RC-NET: A General Framework for Incorporating Knowledge into Word Representations. In 23rd ACM International Conference on Conference on Information and Knowledge Management (CIKM 2014), pages 1219-1228.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Graph Embedding and Extensions: A General Framework for Dimensionality Reduction", "authors": [ { "first": "S", "middle": [], "last": "Yan", "suffix": "" }, { "first": "D", "middle": [], "last": "Xu", "suffix": "" }, { "first": "B", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "H", "middle": [ "J" ], "last": "Zhang", "suffix": "" }, { "first": "Q", "middle": [], "last": "Yang", "suffix": "" }, { "first": "S", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "29", "issue": "1", "pages": "40--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Yan, D. Xu, B. Zhang, H. j. Zhang, Q. Yang, and S. Lin. 2007. Graph Embedding and Extensions: A General Framework for Dimensionality Reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(1):40-51.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Polarity Inducing Latent Semantic Analysis", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Wen-Tau Yih", "suffix": "" }, { "first": "John", "middle": [], "last": "Zweig", "suffix": "" }, { "first": "", "middle": [], "last": "Platt", "suffix": "" } ], "year": 2012, "venue": "2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1212--1222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Geoffrey Zweig, and John Platt. 2012. 
Polarity Inducing Latent Semantic Analysis. In 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computa- tional Natural Language Learning (EMNLP-CoNLL 2012), pages 1212-1222, Jeju Island, Korea.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Learning Word Meta-Embeddings", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2016, "venue": "54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)", "volume": "", "issue": "", "pages": "1351--1360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2016. Learning Word Meta-Embeddings. In 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 1351-1360, Berlin, Germany.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Improving Lexical Embeddings with Semantic Knowledge", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2014, "venue": "52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014)", "volume": "", "issue": "", "pages": "545--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Yu and Mark Dredze. 2014. Improving Lexical Embeddings with Semantic Knowledge. In 52nd Annual Meeting of the Association for Computa- tional Linguistics (ACL 2014), pages 545-550, Bal- timore, Maryland.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Word Semantic Representations using Bayesian Probabilistic Tensor Factorization", "authors": [ { "first": "Jingwei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Salwen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Alfio", "middle": [], "last": "Gliozzo", "suffix": "" } ], "year": 2014, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1522--1531", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingwei Zhang, Jeremy Salwen, Michael Glass, and Al- fio Gliozzo. 2014. Word Semantic Representations using Bayesian Probabilistic Tensor Factorization. In 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP 2014), pages 1522-1531, Doha, Qatar.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "Duality of semantic information", "type_str": "figure" }, "TABREF1": { "html": null, "content": "", "type_str": "table", "text": "Evaluation of the initial thesaurus and two reference models of embeddings (values x 100)", "num": null }, "TABREF5": { "html": null, "content": "
", "type_str": "table", "text": "Evaluation of the adaptation of SGNS embeddings with thesaurus relations", "num": null }, "TABREF7": { "html": null, "content": "
", "type_str": "table", "text": "Evaluation of the fusion of a distributional thesaurus T and word embeddings S", "num": null } } } }