{ "paper_id": "2019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:29:42.226203Z" }, "title": "DRCoVe: An Augmented Word Representation Approach using Distributional and Relational Context", "authors": [ { "first": "Md", "middle": [], "last": "Aslam Parwez", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jamia Millia Islamia New Delhi", "location": { "country": "India" } }, "email": "aslamparwez.jmi@gmail.com" }, { "first": "Muhammad", "middle": [], "last": "Abulaish", "suffix": "", "affiliation": {}, "email": "abulaish@sau.ac.in" }, { "first": "Mohd", "middle": [], "last": "Fazil", "suffix": "", "affiliation": {}, "email": "mohdfazil.jmi@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word representation using the distributional information of words from a sizeable corpus is considered efficacious in many natural language processing and text mining applications. However, distributional representation of a word is unable to capture distant relational knowledge, representing the relational semantics. In this paper, we propose a novel word representation approach using distributional and relational contexts, DRCoVe, which augments the distributional representation of a word using the relational semantics extracted as syntactic and semantic association among entities from the underlying corpus. Unlike existing approaches that use external knowledge bases representing the relational semantics for enhanced word representation, DRCoVe uses typed dependencies (aka syntactic dependencies) to extract relational knowledge from the underlying corpus. The proposed approach is applied over a biomedical text corpus to learn word representation and compared with GloVe, which is one of the most popular word embedding approaches. The evaluation results on various benchmark datasets for word similarity and word categorization tasks demonstrate the effectiveness of DRCoVe over the GloVe.", "pdf_parse": { "paper_id": "2019", "_pdf_hash": "", "abstract": [ { "text": "Word representation using the distributional information of words from a sizeable corpus is considered efficacious in many natural language processing and text mining applications. However, distributional representation of a word is unable to capture distant relational knowledge, representing the relational semantics. In this paper, we propose a novel word representation approach using distributional and relational contexts, DRCoVe, which augments the distributional representation of a word using the relational semantics extracted as syntactic and semantic association among entities from the underlying corpus. Unlike existing approaches that use external knowledge bases representing the relational semantics for enhanced word representation, DRCoVe uses typed dependencies (aka syntactic dependencies) to extract relational knowledge from the underlying corpus. The proposed approach is applied over a biomedical text corpus to learn word representation and compared with GloVe, which is one of the most popular word embedding approaches. The evaluation results on various benchmark datasets for word similarity and word categorization tasks demonstrate the effectiveness of DRCoVe over the GloVe.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Understanding contextual semantics of words is crucial in many natural language processing (NLP) applications. 
Recent trends in text mining and NLP reflect immense interest in learning word embeddings, or word representations in a vector space, from a large corpus, which could be useful for a variety of applications like text classification (Lai et al., 2015), clustering (Wang et al., 2015), and sentiment analysis (Tang et al., 2014). In addition, researchers are devising methods to learn phrase-, sentence-, or document-level embeddings for various NLP applications. Word embeddings capture implicit semantics and have hence attracted many researchers to explore and exploit the tremendous amount of available unstructured corpora for efficient word representation, mainly by employing unsupervised learning approaches. Further, the growth and availability of massive domain-specific text corpora can be exploited to learn domain-specific word representations.", "cite_spans": [ { "start": 344, "end": 362, "text": "(Lai et al., 2015)", "ref_id": "BIBREF9" }, { "start": 376, "end": 395, "text": "(Wang et al., 2015)", "ref_id": "BIBREF20" }, { "start": 421, "end": 440, "text": "(Tang et al., 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although different approaches for learning word embeddings have been proposed in prior works, they are mostly based on the distributional representation of words, considering the neighbors of a word within a fixed context window. These algorithms map sparse representations of words to a lower-dimensional vector space where words with similar contexts appear near each other. However, the distributional representations learned by these algorithms suffer from two important limitations: (i) they are unable to capture the relational semantics of rarely co-occurring words within the corpus, and (ii) they are unable to capture the relational semantics of words that fall outside the purview of the context window. The first limitation arises because a corpus, even though it represents diverse contextual information, may not be large enough to contain a sufficient number of co-occurrences of semantically similar word pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To overcome this limitation, researchers have incorporated knowledge into these distributional word representations from external knowledge bases (KBs). In this direction, semantically related words in terms of relations like synonymy, hypernymy, and meronymy from KBs like WordNet (Miller, 1995) and Freebase (Bollacker et al., 2008) have been used to learn better representations of words (Alsuhaibani et al., 2018; Celikyilmaz et al., 2015). (Figure 1: An exemplar dependency parse tree generated by the Stanford parser using DependenSee 3.7.0.) This makes these approaches dependent upon external KBs to enhance the efficacy of word representation. Although KBs provide significant information about word relations, they are sparse, with limited entries for each word, and do not represent any contextual information. 
In addition, since KBs are manually curated and maintained, they are not comprehensive.", "cite_spans": [ { "start": 282, "end": 296, "text": "(Miller, 1995)", "ref_id": "BIBREF15" }, { "start": 308, "end": 332, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF2" }, { "start": 388, "end": 408, "text": "(Alsuhaibani et al.,", "ref_id": null }, { "start": 409, "end": 417, "text": "Figure 1", "ref_id": null }, { "start": 517, "end": 542, "text": "Celikyilmaz et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The second limitation is that distributional word representations are unable to capture the relational semantics of words due to their dependence on a fixed context window, and hence ignore the semantic associations between words that lie outside the purview of the context window. For example, in the sentence \"cholera is an infectious disease characterized by watery diarrhea, vomiting, severe dehydration, and muscle cramps\", the word pairs (cholera, dehydration) and (cholera, cramps) exhibit long-range dependencies. However, both dehydration and cramps are semantically associated with cholera, as they are its symptoms. With a fixed context window size, e.g., 5, such long-range dependency relationships will not be captured. Further, increasing the size of the context window adversely impacts the embedding representation due to the inclusion of irrelevant and weakly related contextual words. Additionally, when learning word embeddings from a domain-specific corpus, the semantic relations between cholera and dehydration, or cholera and cramps, are vital because dehydration and muscle cramps are symptoms of cholera. Such relational semantics can be captured by dependency grammar, which expresses the syntactic and semantic relationships between the words of a sentence. To this end, Levy and Goldberg (2014a) presented a dependency-based word representation learning approach to incorporate syntactic contexts instead of linear contexts. However, the existing literature has no approach that learns word representations using syntactic contexts extracted from the inter-relationships of words based on the dependency tuples generated by a language parser. For example, in figure 1, the syntactic context built using only the head and modifier words of the dependency tuples generated by the parser shows a direct dependency relation between cholera and disease through the nsubj relation; however, it does not show any relational semantics between cholera and watery, diarrhea, vomiting, dehydration, and cramps, as they are not directly linked to cholera by any dependency relation. Therefore, extraction of such relations to augment word representations would be very helpful for various domain-specific NLP tasks such as the classification of disease-related documents. To the best of our knowledge, no existing approach utilizes the relational semantics extracted from a large corpus to enhance the distributional representation of words.", "cite_spans": [ { "start": 1305, "end": 1330, "text": "Levy and Goldberg (2014a)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present an augmented approach, DRCoVe, which uses both a text corpus and a repository of semantically related triplets extracted from that corpus to learn efficient word representations. 
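As an illustration of the kind of relational knowledge we target, typed dependencies for the cholera example above can be listed with any dependency parser. The following sketch uses the spaCy toolkit purely for illustration (the experiments in this paper rely on the Stanford parser); the printed (head, relation, modifier) tuples are the raw material from which disease-symptom triplets are later filtered:

# Illustrative sketch only: listing typed dependencies with spaCy.
# Assumes spaCy and its small English model are installed; the paper itself
# uses the Stanford parser, so this is just an illustrative stand-in.
import spacy

nlp = spacy.load('en_core_web_sm')
sentence = ('cholera is an infectious disease characterized by watery '
            'diarrhea, vomiting, severe dehydration, and muscle cramps')
doc = nlp(sentence)
for token in doc:
    # Each tuple is (head word, dependency relation, dependent word).
    print((token.head.text, token.dep_, token.text))
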
The proposed approach first initializes the word representation to low-dimensional real-valued vectors generated from the singular value decomposition (SVD) of positive pointwise mutual information (PPMI) matrix of the underlying corpus and the relational semantic repository. The initial word vectors from the corpus are augmented using vectors from the relational semantic repository, provided the words from the corpus occur in the vocabulary of the relational semantic repository. In the proposed approach, we implement a modified GloVe (Pennington et al., 2014) objective function for cost optimization to incorporate vector representations from the relational knowledge repository with the initial vectors from the corpus. In brief, the main contributions of this paper can be summarized as follows.", "cite_spans": [ { "start": 739, "end": 764, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose DRCoVe, a novel approach of learning and augmentation of word representation from a corpus that can handle both long-and short-range dependencies among words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The model combines the benefits of point-wise mutual information, singular value decomposition, and neural network-based updation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Compared to existing approaches, the proposed model performs considerably better on different benchmark datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Rest of the paper is organized as follows. Section 2 presents a brief review of the existing works on learning word representations. Section 3 presents background details of the concepts used in this paper. Section 4 presents the detailed description of the proposed model. Section 5 presents the experimental details and evaluation results. Finally, section 6 concludes the paper and provides future directions of research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, a number of different learning algorithms have been proposed to learn the low-dimensional dense representation of words generally called word embedding used in different NLP tasks such as named entity recognition (Collobert et al., 2011) , sentiment analysis (Tang et al., 2014) . In this regard, two popular word representation models: continuous bag of words (CBOW) and skip gram (SG) (Mikolov et al., 2013a ) models based on neural networks have gained momentum in learning distributed word representation by exploiting the local context of words co-occurring within a given context window. The CBOW predicts the target word given the surrounding context words while SG predicts the surrounding context words given the current word. Similarly, GloVe (Pennington et al., 2014) is another popular method of learning word representation based on global co-occurrence matrix that predicts global co-occurrence between target and context words by employing randomly initialized vectors of desired dimensions. These models learn embeddings only from the corpus without incorporation of any external knowledge. 
However, in this direction, numerous studies (Yu and Dredze, 2014; Xu et al., 2014; Alsuhaibani et al., 2018) have attempted to incorporate the relational information from KBs for word representation. In Yu and Dredze (2014) , the authors proposed an approach to jointly learn embeddings from a corpus and a similarity lexicon (synonymy) by assigning high probabilities to words that appear in the similarity lexicon using joint objective functions of relation constraint models (RCM) and CBOW. Similarly, Xu et al. (2014) used the relational and categorical information as regularization parameters to the SG training objective function to improve the word representation. The CBOW based models normalize target word probabilities for the whole vocabulary, hence, computationally very expensive for large corpora.", "cite_spans": [ { "start": 223, "end": 247, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF5" }, { "start": 269, "end": 288, "text": "(Tang et al., 2014)", "ref_id": "BIBREF18" }, { "start": 397, "end": 419, "text": "(Mikolov et al., 2013a", "ref_id": "BIBREF13" }, { "start": 763, "end": 788, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" }, { "start": 1162, "end": 1183, "text": "(Yu and Dredze, 2014;", "ref_id": "BIBREF22" }, { "start": 1184, "end": 1200, "text": "Xu et al., 2014;", "ref_id": "BIBREF21" }, { "start": 1201, "end": 1226, "text": "Alsuhaibani et al., 2018)", "ref_id": "BIBREF1" }, { "start": 1321, "end": 1341, "text": "Yu and Dredze (2014)", "ref_id": "BIBREF22" }, { "start": 1623, "end": 1639, "text": "Xu et al. (2014)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "In Ghosh et al. (2016) , the authors proposed vocabulary driven skip-gram with negative sampling (SGNS) to learn disease-specific word vectors from health-related news corpus by incorporating disease-related vocabulary. Most of the proposed word representation approaches are based on either of the two models (CBOW or SG), or their variants (SGNS, SGHS) of Word2Vec algorithm either by linearly combining additional objective functions or adding as regularizers. Alsuhaibani et al. (2018) used WordNet to extract eight different types of relations such as synonymy, antonymy, hypernymy, meronymy, and so on to learn joint embeddings. They used a linear combination of GloVe and KB-based objective functions.", "cite_spans": [ { "start": 3, "end": 22, "text": "Ghosh et al. (2016)", "ref_id": "BIBREF7" }, { "start": 464, "end": 489, "text": "Alsuhaibani et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "All the discussed and existing approaches ignore the relational semantics between the words, which are outside the purview of context-window.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "This section presents the notations and the background details of the important concepts used in the proposed approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "Notations: Suppose a corpus C has n number of documents d 1 , d 2 , . . . 
, d n , and D represents the collection of target and context words pairs (w, c) obtained from C for a given context window size l, where context words of a target word w i are the surrounding words", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "w i\u2212l , . . . , w i\u22121 , w i+1 , . . . , w i+l .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "In addition, assume that V w and V c represent word and context vocabularies respectively for corpus D. We assume that n (w,c) represents the total count of (w, c) pair in D such that the target word w and context word c appear together within the context window l, n w and n c denote the occurrence of w and c respectively in D such that n w = \u0109\u2208Vc n (w,\u0109) and n c = \u0175\u2208Vw n (\u0175,c) . The association between every pair of target and context words of V w and V c is presented in a matrix M such that each row of the matrix represents the vector of a target word w \u2208 V w and each column represents vector of a context word c \u2208 V c and every element M i,j represents the association between the i th target word w i and j th context word c j . Further, assume a relational semantic repository R l consisting of all the relational semantic triplets extracted from the corpus C. In addition, assume V represents the vocabulary of R l . In the paper, alphabets w and c in bold typeface represent vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "GloVe: It is a neural network-based machine learning algorithm to learn an efficient lower dimensional dense representation of words in an embedding space. It uses global co-occurrence matrix to learn distributed representation of words from a text corpus. Initially, it creates co-occurrence matrix M with rows representing target words for which we want to learn word representation and the columns represent the context words co-occurring with the target words in the corpus within a given context window. In M , each entry, say, M i,j represents the sum of the reciprocal of the distance of co-occurring target and context words. GloVe implements weighted least square regression objective function to minimize the loss J g as given in equation 1, where f (M w,c ) is the weight function to find weight between a target word w and context word c as given in equation 2, and b w and b c are the bias terms for the underlying target and context words respectively. In the equation 2, \u03b1 = 0.75 is a hyper-parameter and x max = 100. 
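As a concrete illustration (our own sketch, not code from the GloVe implementation itself), the weighting scheme of equation 2 can be written as a small function:

# Sketch of the GloVe weighting function f in equation 2 (illustrative only;
# the parameter defaults follow the values stated in the text).
def glove_weight(m_wc, x_max=100.0, alpha=0.75):
    # Down-weights rare co-occurrences and caps the weight of frequent ones at 1.
    return min((m_wc / x_max) ** alpha, 1.0)
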
The objective of GloVe is to minimize the squared difference between the inner product of the word and context vectors w and c, and the logarithm of their co-occurrence statistic in D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "J_g = \\frac{1}{2} \\sum_{w \\in V_w} \\sum_{c \\in V_c} f(M_{w,c}) (\\mathbf{w}^{T} \\cdot \\mathbf{c} + b_w + b_c - \\log M_{w,c})^2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "f(M_{w,c}) = \\min \\{ (M_{w,c}/x_{max})^{\\alpha}, 1 \\} (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "In GloVe, the learning process starts by assigning random vectors of the desired dimension to the target and context words and then updating them during training with the objective of reducing the weighted least squares loss given in equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "Pointwise Mutual Information: In the existing literature, researchers have used different metrics, such as the co-occurrence count in GloVe, to represent the association between a word and context pair (w, c). However, a simple frequency count is not the best measure of association, as it does not incorporate any contextual information. Pointwise mutual information (PMI) is another measure of association and is better than the raw co-occurrence count. It measures how often two events co-occur compared to what we would expect if they were independent, as defined in equation 3 (Jurafsky and Martin, 2018). There can be target and context word pairs (w \u2208 V w and c \u2208 V c ) which do not appear together within the given context window l in the corpus; for such pairs n (w,c) = 0, and therefore PMI(w, c) = log(0) = \u2212\u221e. To avoid this situation, positive pointwise mutual information (PPMI) has been used, in which negative PMI values are mapped to zero, as given in equation 4. In addition, Bullinaria and Levy (2007) showed that PPMI performs better than PMI in finding semantic similarity. PPMI measures are widely used to find semantic similarity; however, the resulting matrices are highly sparse and require huge computational resources. One remedy is to convert such sparse vectors into low-dimensional dense vectors to improve computational efficiency and generalization. In this regard, dimensionality reduction finds such low-dimensional dense vectors using matrix factorization techniques such as SVD. 
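For concreteness, the PPMI transformation of equations 3 and 4 can be sketched as follows, assuming a dense numpy co-occurrence matrix whose rows index target words and whose columns index context words (an illustrative sketch with our own variable names, not the exact implementation used in this work):

# Sketch: mapping a co-occurrence count matrix to a PPMI matrix (equations 3-4).
import numpy as np

def ppmi_matrix(counts):
    total = counts.sum()                      # |D|
    n_w = counts.sum(axis=1, keepdims=True)   # n_w for every target word
    n_c = counts.sum(axis=0, keepdims=True)   # n_c for every context word
    with np.errstate(divide='ignore', invalid='ignore'):
        pmi = np.log((counts * total) / (n_w * n_c))
    pmi[~np.isfinite(pmi)] = 0.0              # pairs with n_(w,c) = 0: log(0) mapped to 0
    return np.maximum(pmi, 0.0)               # PPMI = max(PMI, 0)
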
", "cite_spans": [ { "start": 575, "end": 602, "text": "(Jurafsky and Martin, 2018)", "ref_id": "BIBREF8" }, { "start": 1005, "end": 1016, "text": "Levy (2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P M I(w, c) = log \u00c5 P (w, c) P (w) * P (c) \u00e3 = log \u00c5 n (w,c) * |D| nw * nc \u00e3 (3) P P M I(w, c) = max {P M I(w, c), 0}", "eq_num": "(4)" } ], "section": "Background and Problem Definition", "sec_num": "3" }, { "text": "This section presents the detailed description of the proposed approach, starting from the mechanism to generate initial word representation from the corpus, their augmentation through relational semantics, and finally, adaptive updation of word vectors. A detailed description of each step of the proposed approach is presented in the following subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "4" }, { "text": "To learn word representation of desired dimension, we first need to initialize the vectors for each target and context words pair of the corpus that can be further augmented using their relational semantics and updated based on the weighted least square loss minimization process. Before neural-based approaches, distributed word representations were based on count-based vectors such as tf-idf and SVD-based vectors. Recent advancements in neural network-based word representation have shown significant improvement in its performance in various NLP tasks. The neural network-based word representations are based on prediction (Mikolov et al., 2013b,a) of either the target word given the context within the specified context window or vice versa. However, recent studies (Levy and Goldberg, 2014b; Levy et al., 2015) have shown that the neural network-based embedding learned using Word2Vec or GloVe models are comparable in performance with the traditional representation of vectors obtained through the decomposition of PPMI matrix. Therefore, to incorporate the benefits of traditional decomposition-based vectors, the proposed approach generates initial word representation using vectors obtained from SVD-based factorization of PPMI matrix. To this end, we first create a co-occurrence matrix M considering the co-occurrence count of every (w, c) pair of target and context words from V w and V c respectively that is further mapped to a PPMI matrix M p . Thereafter, the M p is factorized using SVD to generate initial low dimensional dense vector representations of target and context words as W = U \u2022 \u221a \u03a3 and C = V T \u2022 \u221a \u03a3, respectively from the corpus that incorporate the distributional semantics. 
Similarly, the same process is repeated for the relational semantic repository R l to generate the initial vector representations of target and context words as \u0174", "cite_spans": [ { "start": 628, "end": 653, "text": "(Mikolov et al., 2013b,a)", "ref_id": null }, { "start": 773, "end": 799, "text": "(Levy and Goldberg, 2014b;", "ref_id": "BIBREF11" }, { "start": 800, "end": 818, "text": "Levy et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Initial Vector Representation", "sec_num": "4.1" }, { "text": "= U \u2022 \u221a \u03a3 and \u0108 = V T \u2022 \u221a \u03a3, respectively, from R l .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Vector Representation", "sec_num": "4.1" }, { "text": "The initial vectors of target and context words from the corpus are further augmented using the vectors generated from the relational semantic repository. A detailed description of the augmentation process is presented in the following section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Vector Representation", "sec_num": "4.1" }, { "text": "To minimize the decomposition error, we followed the GloVe approach to optimizing the initial vector representations. The GloVe method learns continuous word representations from a corpus using the global co-occurrence matrix. However, GloVe does not incorporate any additional or domain-specific knowledge and suffers from the two limitations discussed in section 1. Therefore, during optimization, we augment the initial word representations from the corpus by merging them with the initial word representations from the relational semantic repository. To incorporate the additional information during learning, we define an augmented objective function J a similar to that of GloVe, as given in equation 5, where f (p w,c ) is the weight function assigning a weight to every pair (w, c) of target and context words, as given in equation 6, b w and b c are the bias values for w and c respectively, and p w,c is the PPMI value between w and c. In equation 6, \u03b1 is a hyper-parameter, and we set it to 0.75 as in GloVe. We need to consider each of the (w, c) pair categories, especially while merging, to generate the augmented word representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective Function Augmentation", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J_a = \\frac{1}{2} \\sum_{w \\in V_w} \\sum_{c \\in V_c} f(p_{w,c}) (\\mathbf{w}^{T} \\cdot \\mathbf{c} + b_w + b_c - \\log(p_{w,c}))^2 \\quad (5) \\qquad f(p_{w,c}) = \\min \\{ (p_{w,c} / \\max_{\\forall w,c \\in D}(p_{w,c}))^{\\alpha}, 1 \\}", "eq_num": "(6)" } ], "section": "Objective Function Augmentation", "sec_num": "4.2" }, { "text": "In the case of D \u2227 , as both the target and context words belong to V, we consider the merged vectors from the corpus and the relational semantic repository corresponding to the target and context words, such that w = 0.5 * (w + \u0175) and c = 0.5 * (c + \u0109), where w and c are the initial vectors from the corpus, and \u0175 and \u0109 are the initial vectors from the relational semantic repository. For category D \u223c , we consider the initial vectors from the corpus only, as neither of the two words belongs to V; hence, we have w = w and c = c. 
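Across the (w, c) pair categories (the D \u2295 case is described next), the merging reduces to a simple per-word rule: average the corpus vector with the repository vector whenever the word occurs in V, and keep the corpus vector otherwise. A minimal sketch, assuming plain dictionaries that map words to numpy vectors (hypothetical names, for illustration only):

# Sketch of the per-word merging rule used to build the augmented vectors.
def merge_vector(word, corpus_vecs, repo_vecs):
    v = corpus_vecs[word]
    if word in repo_vecs:                  # the word occurs in the repository vocabulary V
        return 0.5 * (v + repo_vecs[word])
    return v                               # otherwise keep the corpus vector unchanged
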
Similarly, in the case of D \u2295 , as either of the two words belongs to V but not both, we take the merged vector for the target or the context word depending upon which of them belongs to V. In this case, if the target word belongs to V, we take w = 0.5 * (w + \u0175), and if the context word belongs to V, we take c = 0.5 * (c + \u0109).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective Function Augmentation", "sec_num": "4.2" }, { "text": "We perform the parameter updates during the learning process based on a well-known gradient descent technique called AdaGrad (Duchi et al., ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive Updation of Parameters", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\frac{\\partial J}{\\partial \\mathbf{w}} = g_{t,w} = \\sum_{c \\in V_c} f(p_{w,c})(\\mathbf{w}^{T} \\cdot \\mathbf{c} + b_w + b_c - \\log(p_{w,c})) \\cdot \\mathbf{c} \\quad (7) \\qquad \\frac{\\partial J}{\\partial \\mathbf{c}} = g_{t,c} = \\sum_{w \\in V_w} f(p_{w,c})(\\mathbf{w}^{T} \\cdot \\mathbf{c} + b_w + b_c - \\log(p_{w,c})) \\cdot \\mathbf{w} \\quad (8) \\qquad \\frac{\\partial J}{\\partial b_w} = g_{t,b_w} = \\sum_{c \\in V_c} f(p_{w,c})(\\mathbf{w}^{T} \\cdot \\mathbf{c} + b_w + b_c - \\log(p_{w,c})) \\quad (9) \\qquad \\frac{\\partial J}{\\partial b_c} = g_{t,b_c} = \\sum_{w \\in V_w} f(p_{w,c})(\\mathbf{w}^{T} \\cdot \\mathbf{c} + b_w + b_c - \\log(p_{w,c}))", "eq_num": "(10)" } ], "section": "Adaptive Updation of Parameters", "sec_num": "4.3" }, { "text": "The AdaGrad algorithm is suitable for dealing with sparse data, as it performs larger updates for infrequent words and smaller updates for frequent words. The update equation is shown as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive Updation of Parameters", "sec_num": "4.3" }, { "text": "\\mathbf{w}^{t+1} = \\mathbf{w}^{t} - \\frac{\\eta}{\\sqrt{\\sum_{\\tau=1}^{t} g^{2}_{\\tau,w}}} \\cdot g_{t,w} \\quad (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive Updation of Parameters", "sec_num": "4.3" }, { "text": "where w is the merged target word vector, g t,w is the gradient at time t, and g 2 \u03c4,w is the squared gradient at time \u03c4 for the target word vector w. Similarly, updates for the context word and biases are performed according to the following equations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive Updation of Parameters", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathbf{c}^{t+1} = \\mathbf{c}^{t} - \\frac{\\eta}{\\sqrt{\\sum_{\\tau=1}^{t} g^{2}_{\\tau,c}}} \\cdot g_{t,c}", "eq_num": "(12)" } ], "section": "Adaptive Updation of Parameters", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "b_w^{t+1} = b_w^{t} - \\frac{\\eta}{\\sqrt{\\sum_{\\tau=1}^{t} g^{2}_{\\tau,b_w}}} \\cdot g_{t,b_w}", "eq_num": "(13)" } ], "section": "Adaptive Updation of Parameters", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "b_c^{t+1} = b_c^{t} - \\frac{\\eta}{\\sqrt{\\sum_{\\tau=1}^{t} g^{2}_{\\tau,b_c}}} \\cdot g_{t,b_c}", "eq_num": "(14)" } ], "section": "Adaptive Updation of Parameters", "sec_num": "4.3" }, { "text": "DRCoVe is evaluated on different benchmark datasets using two evaluation tasks: word similarity and concept categorization. 
This section presents a brief description of corpus and relational semantic repository used in the evaluation process, experimental setup, and finally presents the evaluation results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup and Results", "sec_num": "5" }, { "text": "The DRCoVe is evaluated on a biomedical text corpus crawled from PubMed 1 , an online repository of millions of citations and abstracts related to biomedicine, health, life and behavioral sciences, and bioengineering. The abstracts are the source of rich information related to diseases, symptoms, pathogens, vectors, and their transmission and etiologies. PubMed provides access to the abstracts of documents through axis 2.1.6.2 API 2 . The crawled corpus C consist of 16,337 PubMed documents related to four diseases -cholera, dengue, influenza, and malaria. In addition, a relational semantic repository R l is created by extracting relational triplets < arg 1 , relation, arg 2 > based on typed dependencies generated by Stanford parser 3 that are filtered using MetaMap 4 to identify meaningful disease-symptom triplets. The repository R l is used to augment the learning process of word representation. We have extracted the association between the diseases and symptoms using the 1 https://www.ncbi.nlm.nih.gov/pubmed/ 2 http://axis.apache.org/axis2/java/core/ 3 http://nlp.stanford.edu/software/lex-parser.shtm 4 https://metamap.nlm.nih.gov/ approach defined in (Parwez et al., 2018; Abulaish et al., 2019) .", "cite_spans": [ { "start": 1171, "end": 1192, "text": "(Parwez et al., 2018;", "ref_id": "BIBREF16" }, { "start": 1193, "end": 1215, "text": "Abulaish et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus and Relational Semantic Repository", "sec_num": "5.1" }, { "text": "The documents of the corpus C are tokenized and processed by removing numbers, punctuations, and stop words. We experimented with the context window size l of 5 and 10 (i.e. for l = 5, the context words are the 5 preceding and 5 succeeding words to the target word) to extract the context words from the corpus. The co-occurrence matrix is created using the co-occurrence frequencies of the target and context words pair within the corpus. The co-occurrence matrix is further mapped into PPMI matrix, which is further factorized using SVD to get the initial word vector of desired dimension d \u2208 {100, 200}. A similar procedure is repeated for relational semantic repository R l and initial vectors are generated for the target and context words. Thereafter, initial word representation of corpus is augmented using the word representation of relational semantic repository which is then optimized using the objective function defined in equation 5. We used a stochastic gradient-based algorithm AdaGrad with the learning rate \u03b7 = 0.05 for optimization. The proposed algorithm is executed for 50 iterations to converge into an optimum solution. As a result, we obtain two sets of enhanced embeddings, one for the target words of vocabulary V w and another for the context words of vocabulary V c . It has been shown that when the two embeddings of a word are combined by taking an average of the corresponding word vectors, the resultant embedding performs better (Pennington et al., 2014) . 
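For clarity, forming the merged representation and scoring a word pair for the word-similarity evaluation can be sketched as follows (illustrative code with our own names, assuming the learned matrices share a common word-to-row indexing):

# Sketch: averaging the target and context embeddings and computing the cosine
# similarity used for the word-similarity benchmarks (illustrative only).
import numpy as np

def merged_embedding(W, C):
    # Row-wise average of the target-word and context-word embedding matrices.
    return 0.5 * (W + C)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical usage, given a word-to-row index called vocab:
# E = merged_embedding(W, C)
# score = cosine(E[vocab['cholera']], E[vocab['dehydration']])
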
We have presented results for both the word and context representation in addition to their merged representation.", "cite_spans": [ { "start": 1463, "end": 1488, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "The quality of the learned word vectors based on DRCoVe is evaluated using concept categorization and similarity prediction tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results and Comparative Analysis", "sec_num": "5.3" }, { "text": "We evaluated the quality of learned word embedding based on concept categorization. It is the grouping of concepts from a given set of concepts into different categories. It evaluates the word representation by clustering the learned vectors into different groups. The performance is assessed based on the extent to which each cluster possesses concepts from a given category. The evaluation metric is called purity and it is 100% if the given standard category is reproduced completely. On the other hand, purity reaches to 0 when cluster quality worsens. The DRCoVe is evaluated based on concept categorization using 6 different benchmark datasets: AP, BLESS, Battig, ESSLI 1a, ESSLI 2b, and ESSLI 2c. The evaluation and comparison results on different combination of context window size and dimensionality over 6 benchmark datasets are given in tables 1, 2, 3, and 4 respectively. It can be observed from the tables that for concept categorization task, except ESSLI 2b, in most of the cases, DRCoVe embedding performs better than the GloVe embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept Categorization:", "sec_num": null }, { "text": "Word Similarity: To evaluate learned vectors on word similarity task, we computed cosine similarity between learned embedding of word pairs and evaluated it based on average similarity rating assigned by human annotators to these word pairs from the benchmark datasets. The idea here is that the learned embeddings encapsulate semantics of the words if there is greater extent of correlation between the similarity score computed from the learned word vectors and the similarity score assigned by the human annotators. We calculated Spearman's rank correlation coefficient between the cosine similarity of learned embeddings and human rated similarity of word pairs. We used 9 different benchmark datasets -MTurk, RG65, RW, SCWS, SimLex999, TR9856, WS353, WS353R, and WS353S for evaluation. In addition, we also compared the quality of learned representation in terms of similarity task with the two variants of GloVe: GloVe W and GloVe Merged. The evaluation and comparison results on different combination of context window size and dimensionality on the benchmark datasets for word similarity are given in tables 5, 6, 7, and 8 respectively. On analysis of tables, it can be found that the context and merged vectors of DRCoVe are significantly better as compared to GloVe word vectors and merged vectors except RW, where GloVe is better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept Categorization:", "sec_num": null }, { "text": "Word embeddings learned from diverse sources using methods like GloVe as the distributional representation of words have been employed to resolve numerous natural language processing problems with considerable accuracy. 
However, these distributional representations are unable to capture the relational semantics of distant words and the words with rare co-occurrences in the corpus. In this paper, we have proposed DRCoVe, an augmentation approach of distributional word representations from a corpus with relational semantic information extracted from the corpus to learn enhanced word representation. We compared the proposed model based on semantic similarity and concept categorization tasks on different benchmark datasets and found that the word representation learned by DRCoVe shows better performance than the GloVe model in most of the datasets. The learned word representations could be useful for various NLP tasks like text classification or concept categorization. Learning word representations over much larger corpus and evaluation of their efficacy for short texts", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Direction", "sec_num": "6" } ], "back_matter": [ { "text": "like tweets classification seems one of the future directions of research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Disease: A biomedical text analytics system for disease symptom extraction and characterization", "authors": [ { "first": "Muhammad", "middle": [], "last": "Abulaish", "suffix": "" }, { "first": "Md", "middle": [ "Aslam" ], "last": "Parwez", "suffix": "" }, { "first": "Jahiruddin", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Journal of Biomedical Informatics", "volume": "100", "issue": "12", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muhammad Abulaish, Md. Aslam Parwez, and Jahiruddin. 2019. Disease: A biomedical text analytics system for disease symptom extraction and characterization. Journal of Biomedical Informatics, 100(12):1-15.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Jointly learning word embeddings using a corpus and a knowledge base", "authors": [ { "first": "Mohammed", "middle": [], "last": "Alsuhaibani", "suffix": "" }, { "first": "Danushka", "middle": [], "last": "Bollegala", "suffix": "" }, { "first": "Takanori", "middle": [], "last": "Maehara", "suffix": "" }, { "first": "Ken-Ichi", "middle": [], "last": "Kawarabayashi", "suffix": "" } ], "year": 2018, "venue": "PloS one", "volume": "13", "issue": "3", "pages": "1--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammed Alsuhaibani, Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2018. Jointly learning word embeddings using a corpus and a knowledge base. PloS one, 13(3):1-26.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "authors": [ { "first": "Kurt", "middle": [], "last": "Bollacker", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Praveen", "middle": [], "last": "Paritosh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Sturge", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2008, "venue": "Proceedings of International Conference on Manageent of Data", "volume": "", "issue": "", "pages": "1247--1250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. 
Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of International Conference on Manageent of Data, pages 1247-1250, Vancouver, Canada. ACM.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Extracting semantic representations from word co-occurrence statistics: A computational study", "authors": [ { "first": "A", "middle": [], "last": "John", "suffix": "" }, { "first": "Joseph P", "middle": [], "last": "Bullinaria", "suffix": "" }, { "first": "", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2007, "venue": "Behavior Research Methods", "volume": "39", "issue": "3", "pages": "510--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "John A Bullinaria and Joseph P Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39(3):510-526.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enriching word embeddings using knowledge graph for semantic tagging in conversational dialog systems", "authors": [ { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Ruhi", "middle": [], "last": "Sarikaya", "suffix": "" } ], "year": 2015, "venue": "2015 AAAI Spring Symposium Series", "volume": "", "issue": "", "pages": "39--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asli Celikyilmaz, Dilek Hakkani-Tur, Panupong Pasupat, and Ruhi Sarikaya. 2015. Enriching word embeddings using knowledge graph for semantic tagging in conversational dialog systems. In 2015 AAAI Spring Symposium Series, pages 39-42, California, USA. AAAI.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "1", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(1):2493-2537.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "1", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. 
Journal of Machine Learning Research, 12(1):2121-2159.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Characterizing diseases from unstructured text: A vocabulary driven word2vec approach", "authors": [ { "first": "Saurav", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Prithwish", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Naren", "middle": [], "last": "John S Brownstein", "suffix": "" }, { "first": "", "middle": [], "last": "Ramakrishnan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "1129--1138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saurav Ghosh, Prithwish Chakraborty, Emily Cohn, John S Brownstein, and Naren Ramakrishnan. 2016. Characterizing diseases from unstructured text: A vocabulary driven word2vec approach. In Proceedings of International Conference on Information and Knowledge Management, pages 1129-1138, Indianapolis, USA. ACM.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition", "authors": [ { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "James", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Jurafsky and James H. Martin. 2018. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, volume 3. Prentice-Hall, Inc.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Recurrent convolutional neural networks for text classification", "authors": [ { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of International Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2267--2273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of International Conference on Artificial Intelligence, pages 2267-2273, Texas, USA. AAAI.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dependency-based word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "302--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency-based word embeddings. In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 302-308, Maryland, USA. 
ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Neural word embedding as implicit matrix factorization", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of International Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2177--2185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Proceedings of International Conference on Neural Information Processing Systems, pages 2177-2185, Montreal, Canada. Curran Associates.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improving distributional similarity with lessons learned from word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "211--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of International Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of International Conference on Neural Information Processing Systems, pages 3111-3119, Nevada, USA. 
Curran Associates.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Biomedical text analytics for characterizing climate-sensitive disease", "authors": [ { "first": "Muhammad", "middle": [], "last": "Md Aslam Parwez", "suffix": "" }, { "first": "Jahiruddin", "middle": [], "last": "Abulaish", "suffix": "" } ], "year": 2018, "venue": "Procedia Computer Science", "volume": "132", "issue": "", "pages": "1002--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md Aslam Parwez, Muhammad Abulaish, and Jahiruddin. 2018. Biomedical text analytics for characterizing climate-sensitive disease. Procedia Computer Science, 132:1002-1011.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of International Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of International Conference on Empirical Methods in Natural Language Processing, pages 1532-1543, Doha, Qatar. ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning sentiment-specific word embedding for twitter sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Proceedings of Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1555--1565", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 1555-1565, Maryland, USA. 
ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Semantic clustering and convolutional neural network for short text categorization", "authors": [ { "first": "Peng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jiaming", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Cheng-Lin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Fangyuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hongwei", "middle": [], "last": "Hao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "352--357", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Wang, Jiaming Xu, Bo Xu, Cheng-Lin Liu, Heng Zhang, Fangyuan Wang, and Hongwei Hao. 2015. Semantic clustering and convolutional neural network for short text categorization. In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 352-357, Beijing, China. AAAI.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Rc-net: A general framework for incorporating knowledge into word representations", "authors": [ { "first": "Chang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yalong", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaoguang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "1219--1228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. Rc-net: A general framework for incorporating knowledge into word representations. In Proceedings of International Conference on Information and Knowledge Management, pages 1219-1228, Shanghai, China. ACM.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improving lexical embeddings with semantic knowledge", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2014, "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "545--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 545-550, Maryland, USA. ACL.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "6) Thereafter, we employed the relational semantics from the extracted relational semantic repository R l consisting of vocabulary V to augment the learning process. The input corpus C consist of target and context words pairs (w, c) \u2208 D. Based on V, we grouped the (w, c) pairs of D into three categories -D \u2227 , D \u223c , and D \u2295 such that \u2022 D \u2227 = {(w, c) : w \u2208 V \u2227 c \u2208 V}, i.e. 
both the target and context words belong to V \u2022 D \u223c = {(w, c) :\u223c (w \u2208 V \u2227 c \u2208 V)}, i.e. neither the target nor the context word belongs to V \u2022 D \u2295 = {(w, c) : w \u2208 V \u2295 c \u2208 V}, i.e. either the target or the context word belongs to V" }, "TABREF0": { "content": "
Singular Value Decomposition: It is a dimensionality reduction method which decomposes a symmetric
", "html": null, "num": null, "type_str": "table", "text": "matrix M m\u00d7n into three matrices U , \u03a3, and V such that M = U \u2022\u03a3\u2022V . The matrices U and V are orthogonal matrices while \u03a3 is a diagonal matrix of singular values. To obtain d dimensional vectors, the matrix M is decomposed to U m\u00d7d , \u03a3 d\u00d7d , and V d\u00d7n corresponding to top d singular values. The d-dimensional rows of matrix W = U \u2022 \u221a \u03a3 are dense vectors which are the approximate representative of high dimensional rows of M . The matrix W is considered as a dense vector representation of words, while the matrix C = V T \u2022 \u221a \u03a3 can be considered as context representation. The matrices W and C thus obtained are used as initial word and context representations respectively. These resulting representations need to fulfill minimization of error in matrix decomposition." }, "TABREF1": { "content": "
Word EmbeddingsAPBLESS Battig ESSLI 1a ESSLI 2b ESSLI 2c
GloVe W0.19400.210.09990.40900.5750.3333
GloVe Merged 0.22130.210.10620.43180.550.3555
DRCoVe W0.1890 0.235 0.09950.43180.450.377
DRCoVe C0.19900.260.10620.47720.4750.4222
DRCoVe Merged 0.1965 0.245 0.10810.45450.50.4
", "html": null, "num": null, "type_str": "table", "text": "Concept categorization performance with l = 5, and d = 100" }, "TABREF2": { "content": "
Word EmbeddingsAPBLESS Battig ESSLI 1a ESSLI 2b ESSLI 2c
GloVe W0.1815 0.205 0.09820.43180.550.3777
GloVe Merged 0.2039 0.225 0.10490.45450.5250.3777
DRCoVe W0.19400.230.10130.40900.4750.3777
DRCoVe C0.2064 0.215 0.10600.40900.4750.3777
DRCoVe Merged 0.2068 0.225 0.10090.43180.450.4
2011), which is an adaptive gradient update algorithm to perform gradient-based learning. The gradients are computed as follows:
", "html": null, "num": null, "type_str": "table", "text": "Concept categorization performance with l = 5, and d = 200" }, "TABREF3": { "content": "
Word EmbeddingsAPBLESS Battig ESSLI 1a ESSLI 2b ESSLI 2c
GloVe W0.2040.215 0.10320.40910.5250.3778
GloVe Merged 0.21130.220.10950.43080.5250.3778
DRCoVe W0.20150.250.09960.40910.4750.3778
DRCoVe C0.21390.220.10170.43180.4750.4222
DRCoVe Merged 0.1990.225 0.10470.40910.450.3556
", "html": null, "num": null, "type_str": "table", "text": "Concept categorization performance with l = 10, and d = 100" }, "TABREF4": { "content": "
Word EmbeddingAPBLESS Battig ESSLI 1a ESSLI 2b ESSLI 2c
GloVe W0.1965 0.2150 0.10760.40900.550.4222
GloVe Merged 0.23130.220.11060.41400.6250.4
DRCoVe W0.2064 0.225 0.10260.45450.4750.355
DRCoVe C0.19650.230.10850.40140.5250.4
DRCoVe Merged 0.2114 0.225 0.11220.42450.450.432
", "html": null, "num": null, "type_str": "table", "text": "Concept categorization performance with l = 10, and d = 200" }, "TABREF5": { "content": "
Word Embeddings MTurk RG65RWSCWS SimLex999 TR9856 WS353 WS353R WS353S
GloVe W0.1869 -0.0650 0.1881 0.271040.04070.1259 0.22880.14110.2404
GloVe Merged 0.1976 -0.0675 0.1891 0.28440.03540.1275 0.22690.14470.2300
DRCoVe W0.2327 0.1726 0.15130.290.07370.1347 0.28810.23380.2702
DRCoVe C0.2049 0.1368 0.1555 0.29640.07800.1454 0.29610.24670.2762
DRCoVe Merged 0.2270 0.1839 0.1284 0.29820.09070.1382 0.26900.24580.2324
", "html": null, "num": null, "type_str": "table", "text": "Word similarity performance with l = 5, and d = 100" }, "TABREF6": { "content": "
Word Embeddings MTurk RG65RWSCWS SimLex999 TR9856 WS353 WS353R WS353S
GloVe W0.1915 -0.0430 0.1877 0.28370.03830.1263 0.24200.15420.2471
GloVe Merged 0.2043 -0.0567 0.1894 0.28420.03160.1275 0.23140.14620.2374
DRCoVe W0.19190.0870.1563 0.30190.07390.1468 0.29490.22600.2662
DRCoVe C0.2152 0.1336 0.1544 0.30070.08110.1405 0.31200.23850.2966
DRCoVe Merged 0.2038 0.1390 0.1347 0.29770.09150.1412 0.28330.22720.250
", "html": null, "num": null, "type_str": "table", "text": "Word similarity performance with l = 5, and d = 200" } } } }