{
"paper_id": "I17-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:47.826452Z"
},
"title": "On Modeling Sense Relatedness in Multi-prototype Word Embedding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"country": "China"
}
},
"email": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"country": "China"
}
},
"email": ""
},
{
"first": "Jiaxin",
"middle": [],
"last": "Shi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"country": "China"
}
},
"email": "shi-jx@mail.tsinghua.edu.cn"
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"country": "China"
}
},
"email": ""
},
{
"first": "Chengjiang",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To enhance the expression ability of distributional word representation learning model, many researchers tend to induce word senses through clustering, and learn multiple embedding vectors for each word, namely multi-prototype word embedding model. However, most related work ignores the relatedness among word senses which actually plays an important role. In this paper, we propose a novel approach to capture word sense relatedness in multi-prototype word embedding model. Particularly, we differentiate the original sense and extended senses of a word by introducing their global occurrence information and model their relatedness through the local textual context information. Based on the idea of fuzzy clustering, we introduce a random process to integrate these two types of senses and design two non-parametric methods for word sense induction. To make our model more scalable and efficient, we use an online joint learning framework extended from the Skip-gram model. The experimental results demonstrate that our model outperforms both conventional single-prototype embedding models and other multi-prototype embedding models, and achieves more stable performance when trained on smaller data.",
"pdf_parse": {
"paper_id": "I17-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "To enhance the expression ability of distributional word representation learning model, many researchers tend to induce word senses through clustering, and learn multiple embedding vectors for each word, namely multi-prototype word embedding model. However, most related work ignores the relatedness among word senses which actually plays an important role. In this paper, we propose a novel approach to capture word sense relatedness in multi-prototype word embedding model. Particularly, we differentiate the original sense and extended senses of a word by introducing their global occurrence information and model their relatedness through the local textual context information. Based on the idea of fuzzy clustering, we introduce a random process to integrate these two types of senses and design two non-parametric methods for word sense induction. To make our model more scalable and efficient, we use an online joint learning framework extended from the Skip-gram model. The experimental results demonstrate that our model outperforms both conventional single-prototype embedding models and other multi-prototype embedding models, and achieves more stable performance when trained on smaller data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embedding, representing words in a low dimentional vector space, plays an increasing important role in various IR and NLP related tasks, such as language modeling (Bengio et al., 2006; Mnih and Hinton, 2009) , named entity recognition and disambiguation (Turian et al., 2010; Collobert et al., 2011) , and syntactic parsing (Socher et al., 2011 (Socher et al., , 2013 . This trend has been accelerated by the CBOW and the Skipgram models of (Mikolov et al., 2013b,a) due to its efficiency and remarkable semantic compositionality of embedding vectors (e.g. vec(king)vec(queen)=vec(man)-vec(woman)).",
"cite_spans": [
{
"start": 168,
"end": 189,
"text": "(Bengio et al., 2006;",
"ref_id": "BIBREF0"
},
{
"start": 190,
"end": 212,
"text": "Mnih and Hinton, 2009)",
"ref_id": "BIBREF13"
},
{
"start": 259,
"end": 280,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 281,
"end": 304,
"text": "Collobert et al., 2011)",
"ref_id": "BIBREF3"
},
{
"start": 329,
"end": 349,
"text": "(Socher et al., 2011",
"ref_id": "BIBREF19"
},
{
"start": 350,
"end": 372,
"text": "(Socher et al., , 2013",
"ref_id": "BIBREF18"
},
{
"start": 446,
"end": 471,
"text": "(Mikolov et al., 2013b,a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the assumption that each word is represented by only one single vector is problematic when dealing with the polysemous words. To enhance the expression ability of the embedding model, recent research has a rising enthusiasm for representing words at sense level. That is, an individual word is represented as multiple vectors, where each vector corresponds to one of its meanings. Pervious work mostly focus on using clustering to induce word senses (each cluster refers to one of the senses) and then learn the word sense representations respectively (Reisinger and Mooney, 2010; Huang et al., 2012; Tian et al., 2014; Neelakantan et al., 2014; Li and Jurafsky, 2015) . However, the above approaches ignore the relatedness among the word senses. Hence the following limitations arise in the usage of hard clustering. First of all, many clustering errors will be caused by using hard clustering based method because the senses of the polysemous word actu-ally have no distinct semantic boundary (Liu et al., 2015) . Secondly, due to dividing the occurrences of a word into separate clusters, the embedding model will suffer from more data sparsity issue as compared to the Skip-gram model. Thirdly, the embedding quality is considerably sensitive to the clustering results due to the isolation of different sense clusters.",
"cite_spans": [
{
"start": 561,
"end": 589,
"text": "(Reisinger and Mooney, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 590,
"end": 609,
"text": "Huang et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 610,
"end": 628,
"text": "Tian et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 629,
"end": 654,
"text": "Neelakantan et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 655,
"end": 677,
"text": "Li and Jurafsky, 2015)",
"ref_id": "BIBREF8"
},
{
"start": 1004,
"end": 1022,
"text": "(Liu et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address this problem, we learn the embedding vectors of the word senses with some common features if the senses are related. Instead of clearly cutting the sense cluster boundaries, one occurrence of the word will be assigned into multiple sense clusters with different probabilities, which agrees with a classic task of word sense annotation, Graded Word Sense Assignment (Erk and McCarthy, 2009; Jurgens and Klapaftis, 2013) .",
"cite_spans": [
{
"start": 376,
"end": 400,
"text": "(Erk and McCarthy, 2009;",
"ref_id": "BIBREF4"
},
{
"start": 401,
"end": 429,
"text": "Jurgens and Klapaftis, 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Actually, the senses of a polysemous word are related not only by the contiguity of meaning within a semantic field 1 , but also by the extended relationship between the original meaning and the extended meaning (Von Engelhardt and Zimmermann, 1988) . We investigate the relatedness of the synsets (word senses) in WordNet (Miller, 1995) through the Wu & Palmer measure 2 (Wu and Palmer, 1994) , and present an interesting example of the word \"book\" in Figure 1 . The right side is the similarity matrix of its 11 nominal synsets, where s i denotes the ith synset. Each tile represents a similarity value between two synsets whose color deepens as the value increases. The left side is their frequencies in Word-Net. On one hand, we can see apparent correlations among these senses in different levels. Note that (s 1 , s 2 , s 11 ) are strongly related, and so are (s 6 , s 7 ) and (s 8 , s 9 , s 10 ). This is because of their extended relationship. Take (s 1 , s 2 , s 11 ) for example, s 1 refers to the sense of \"the written work printed on pages bound together\", s 2 refers to \"physical objects consisting of a number of pages bound together\" and s 3 refers to \"a number of sheets (or stamps, etc.) bound together\". Obviously, s 1 is the original meaning, s 2 and s 11 are the extended meanings. Moreover, the relatedness suggests that the senses share some common textual features in the contexts. On the other hand, the frequency of the original meaning s 1 is much 1 According to https://en.wikipedia.org/wiki/Polysemy. 2 The Wu & Palmer measure is an edge based approach that is tied to the structure of WordNet. Also, one can try different relatedness approaches and will find similar results.",
"cite_spans": [
{
"start": 212,
"end": 249,
"text": "(Von Engelhardt and Zimmermann, 1988)",
"ref_id": "BIBREF22"
},
{
"start": 323,
"end": 337,
"text": "(Miller, 1995)",
"ref_id": "BIBREF12"
},
{
"start": 372,
"end": 393,
"text": "(Wu and Palmer, 1994)",
"ref_id": "BIBREF23"
},
{
"start": 1530,
"end": 1531,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 453,
"end": 462,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "higher than that of the extended meanings s 2 and s 11 , which suggests that the word sense distribution in corpus should be taken into account when modeling word sense relatedness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel method, namely FCSE (Fuzzy Clustering-based multi-Sense Embedding model), that models the relatedness among word senses by using the fuzzy clustering based method for word sense induction, and then learns sense embeddings via a variant of Skip-gram model. The basic idea behind fuzzy clustering is that the senses may be related and share common features through the overlaps of the sense clusters. Based on our observations of the original meaning and the extended meaning, we further design two non-parametric methods, FCSE-1 and FCSE-2, to model the local textual context information of senses as well as their global occurrence distribution by incorporating the Generalized Polya Urn (GPU) model. For efficiency and scalability, our proposed model also adopts an online joint learning procedure. FCSE adopts an online procedure that induces the word sense and learns the sense embeddings jointly. Given a word sequence D = {w 1 , w 2 , . . . , w M }, we obtain the input of our model, the word and its context words, by sliding a window with the length of 2k + 1. The output is also the context words. During the learning process, two types of vectors are maintained for each word, the global vector w i and its sense vectors 3 w s i i . Note that the number of senses |S i | is varying because the cluster method is non-parametric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
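{
"text": "For concreteness, the per-word state described above can be sketched as follows. This is a minimal illustration in Python; the class and field names are ours and not part of any released implementation.\n\nimport numpy as np\n\nclass WordState:\n    # Per-word state: one global vector plus a growing list of sense vectors\n    # and their cluster statistics (running centroid sum and occurrence count).\n    def __init__(self, dim):\n        self.global_vec = (np.random.rand(dim) - 0.5) / dim  # randomly initialized\n        self.sense_vecs = []      # one embedding per induced sense\n        self.centroid_sums = []   # running sum of context vectors per sense cluster\n        self.counts = []          # occurrences assigned to each sense cluster\n\n    def add_sense(self, dim, context_vec):\n        # Create a new sense cluster whose centroid is the current context vector.\n        self.sense_vecs.append((np.random.rand(dim) - 0.5) / dim)\n        self.centroid_sums.append(context_vec.copy())\n        self.counts.append(1.0)\n\n    def centroid(self, l):\n        return self.centroid_sums[l] / self.counts[l]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},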
{
"text": "As shown in Figure 2 , there are mainly two steps: the clustering step and the embedding learning step. The former step incrementally clusters all the occurrences of one word according to its context vectors by computing the average sum of the global vectors of the context words:",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "w c i = 1 2k \u2212k\u2264j\u2264k w i+j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
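{
"text": "A minimal sketch of this context-vector computation, assuming the global vectors are rows of a NumPy matrix indexed by word id (the function and variable names are illustrative):\n\nimport numpy as np\n\ndef context_vector(word_ids, i, k, global_vecs):\n    # Average of the global vectors of the (up to) 2k context words around position i,\n    # i.e. w^c_i = (1/2k) * sum of w_{i+j} for -k <= j <= k, j != 0.\n    window = [word_ids[i + j] for j in range(-k, k + 1)\n              if j != 0 and 0 <= i + j < len(word_ids)]\n    return np.mean([global_vecs[w] for w in window], axis=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},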
{
"text": "Each cluster refers to one word sense, thus each occurrence will be annotated with at least one sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the second step, we update the sense embeddings via a variant of the Skip-gram model (Mikolov et al., 2013b) . The main difference between our model and Skip-gram is that we aim to predict the context words given the exact sense of the target word instead of the word itself. Moreover, because several senses are assigned to the current word with probabilities, we leverage all the related senses to predict the context words. The intuition is that the related senses tend to have common context words as mentioned in Section 1. Thus, all the assigned sense vectors will be updated with weights simultaneously as follows:",
"cite_spans": [
{
"start": 88,
"end": 111,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "L(D) = 1 M M i=1 \u2212k\u2264j\u2264k |S i | s i \u03bb s i log p(w i+j |w s i i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) where the probability of p(w i+j |w s i i ) is defined using softmax function, and s i denotes the sense index of word w i . S i is the set of existing senses, \u03bb s i is the update weight of sense s i . We set the weights proportional to the probabilities of the current word being annotated with sense s i , which is equivalent to the results of fuzzy clustering, the likelihood of the context w c i assigned into the sense cluster s i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u03bb s i \u221d p(s i |w c i ) s i is sampled 0 otherwise (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we use negative sampling technique 4 for efficient learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
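{
"text": "The weighted update of Equation 1 with negative sampling can be sketched as follows. This is a simplified single-position SGD step; the sigmoid helper, the output vectors out_vecs and the pre-sampled negative ids are assumptions of this illustration, not our released code.\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef update_pair(sense_vecs, weights, out_vecs, ctx_id, neg_ids, alpha):\n    # weights maps a sampled sense index s to its update weight lambda_s.\n    # Every sampled sense of the target word predicts the context word, and its\n    # gradient is scaled by lambda_s (Equation 2).\n    for s, lam in weights.items():\n        v = sense_vecs[s]\n        for wid, label in [(ctx_id, 1.0)] + [(n, 0.0) for n in neg_ids]:\n            u = out_vecs[wid]\n            g = alpha * lam * (label - sigmoid(np.dot(v, u)))\n            out_vecs[wid] = u + g * v   # update the output (context) vector\n            v = v + g * u               # update the sense vector\n        sense_vecs[s] = v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},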
{
"text": "Section 2 describes the framework of our model including how to obtain the input features of clustering and to use the cluster results for the sense embedding learning. In this section, we present two fuzzy clustering based methods for clusteringbased word sense induction, FCSE-1 and FCSE-2. Both of them are non-parametric and conduct online procedures. Based on our observations in Section 1, the occurrence of word senses is usually distinguishing between the original meaning and the extended meaning, while the original meaning and its extended meanings are semantically related with some common textual contexts. Considering both of the two aspects, in FCSE-1, we induce the word sense according to the cluster probability proportional to the distance of its centroid to the current word's contexts; and FCSE-2 utilizes a random process, the Generalized Polya Urn (GPU) model, to further incorporate the senses' global occurrence distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Induction",
"sec_num": "3"
},
{
"text": "Adopting an online procedure, FCSE-1 clusters the contexts of one word incrementally. When first meet one word, we create a cluster with the centroid of its context vector. Then, for each occurrence of the word, several existing clusters are sampled following a probability distribution; or a new cluster is created only if all the probabilities of the context belonging to the clusters equal to zero. Finally, all the sampled clusters will be updated by adding the current context vector into them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-1",
"sec_num": "3.1"
},
{
"text": "Remember that each word w i is associated with a global vector, varying number of clusters, and the corresponding sense vectors. FCSE-1 measures the semantic distance of the context vector to its cluster centers, and aims to sample the nearest ones (maybe multiple related senses). Given the context vector w c i , the probability of the word belonging to the existing lth sense is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-1",
"sec_num": "3.1"
},
{
"text": "p(s i = l|w c i ) = 1 Z Sim(\u00b5 l i , w c i ) 0 if Sim(\u00b5 l i , w c i ) < under (3) where \u00b5 l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-1",
"sec_num": "3.1"
},
{
"text": "i denotes the centroid of the lth sense cluster, Z is the normalization term and Sim(\u2022, \u2022) can be any similarity measurement. In the experiments we use cosine similarity as the semantic distance measurement. under is a pre-defined threshold that indicates how easily we create a new sense cluster. Similarly, we use another threshold upper for deciding the number of sampled clusters. Sup-pose that the probabilities {p n i |n i \u2208 S i } is ranked in descending order, then we pick up the clusters with top",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-1",
"sec_num": "3.1"
},
{
"text": "n i probabilities until p n i \u2212 p n i +1 >",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-1",
"sec_num": "3.1"
},
{
"text": "upper . Note that the hyper-parameters meet 0 \u2264 under , upper \u2264 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-1",
"sec_num": "3.1"
},
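{
"text": "A sketch of the FCSE-1 assignment step under these two thresholds; the cosine similarity, the normalization and the gap-based cut-off follow the description above, while the helper names are illustrative:\n\nimport numpy as np\n\ndef cos(a, b):\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))\n\ndef fcse1_assign(centroids, ctx_vec, under, upper):\n    # Returns {sense index: probability} for the sampled clusters, or {} when a\n    # new cluster should be created (all similarities fall below under).\n    sims = [cos(mu, ctx_vec) for mu in centroids]\n    # Keep only positive similarities above the threshold (a simplification).\n    probs = [s if s >= under and s > 0.0 else 0.0 for s in sims]\n    z = sum(probs)\n    if z == 0.0:\n        return {}                       # caller creates a new sense cluster\n    probs = [p / z for p in probs]\n    order = sorted(range(len(probs)), key=lambda l: -probs[l])\n    picked = [order[0]]\n    for a, b in zip(order, order[1:]):  # keep top clusters until the gap exceeds upper\n        if probs[a] - probs[b] > upper or probs[b] == 0.0:\n            break\n        picked.append(b)\n    return {l: probs[l] for l in picked}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-1",
"sec_num": "3.1"
},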
{
"text": "Since FCSE-1 uses two hyper-parameters to respectively control a new cluster initialization and the number of clusters sampled, which is difficult to set manually. So, instead of the fixed thresholds, we make a further randomization by introducing a random process, GPU, in FCSE-2. Besides, more inherit properties of the word senses can be taken into account, including not only the local information of the semantic distance from the context to the cluster centers, but also the frequency, which is related to how likely the current sense is an original meaning or an extended meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-2",
"sec_num": "3.2"
},
{
"text": "In this section, we will firstly give a brief summarization of the GPU model, and then introduce how to incorporate it into our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-2",
"sec_num": "3.2"
},
{
"text": "Polya urn model is a type of random process that draws balls from an urn and replaces it along with extra balls. Suppose that there are some balls of colors in the urn at the beginning. For each draw, the ball of the ith color is selected followed by the distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Polya Urn model",
"sec_num": "3.2.1"
},
{
"text": "p(color = i) = m i m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Polya Urn model",
"sec_num": "3.2.1"
},
{
"text": "where m is the total number of balls, and m i is the number of balls of the ith color. A standard urn model returns the ball back along with an extra ball of the same color, which can be seen as a reinforcement and sometimes expressed as the richer gets richer. More detailed information can be found in the survey paper (Pemantle et al., 2007) . Polya urn model can be used for non-parametric clustering, where each data point refers to a ball in the urn, and its cluster label is denoted by the ball's color.",
"cite_spans": [
{
"start": 321,
"end": 344,
"text": "(Pemantle et al., 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Polya Urn model",
"sec_num": "3.2.1"
},
{
"text": "Since the fixed replacement lacks of flexibility, the GPU model conducts the reinforcement process following another distribution over the colors. That is, when a ball of color i is drawn, another A ij balls of color j will be put back. Then, for each draw, we replace the ball with different number of balls of various colors according to the distribution matrix A. As repeating this process, the drawing probability will be altered if the number of extra balls are nonzero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Polya Urn model",
"sec_num": "3.2.1"
},
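{
"text": "A minimal sketch of a generalized Polya urn draw with a replacement matrix A (illustrative only; FCSE-2 uses the urn analogy rather than this literal implementation):\n\nimport random\n\ndef gpu_draw(counts, A):\n    # counts[i]: current number of balls of color i.\n    # A[i][j]: extra balls of color j put back after a ball of color i is drawn.\n    drawn = random.choices(range(len(counts)), weights=counts)[0]\n    for j, extra in enumerate(A[drawn]):\n        counts[j] += extra  # reinforcement: the drawing distribution shifts over time\n    return drawn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Polya Urn model",
"sec_num": "3.2.1"
},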
{
"text": "The induction process of the word senses can be regarded as a GPU model. The original meaning is sampled firstly, and then the extended meanings are sampled through the reinforcement. That is, we sample an extended meaning according to a conditional probability given the original meaning. The basic idea is that knowing the original meaning is necessary for understanding the target word annotated with an extended meaning in a document. For example, the extended meaning of the word \"milk\" when used in the terms \"glacier milk\" won't be well understood unless we know the original meaning of \"milk\". Correspondingly, in the GPU model, a urn denotes a word, the ball and the color refers to the occurrence and the sense, respectively. Note that each ball has an index that distinguishes different occurrences. Thus, the balls of the same color correspond to a sense cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
{
"text": "We sample the related senses in two stages. In the first stage, for the occurrence of the word w i , we sample a sense s io = l considering the global distribution of the word senses as well as the semantic distance from the context features to the cluster center. In the second stage, several senses are sampled conditioned on the previous result: p(s ie = l |s io = l).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
{
"text": "In this way, we find the original meaning and the extended meanings separately following different distributions. Considering the observation that the original meaning occurs more frequently (as described in Section 1), we define the probability distribution of the original meaning as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
{
"text": "p(s io = l|w c i ) \u221d m il \u03b3+m i \u2022 Sim(\u00b5 l i , w c i ) l \u2208 S i \u03b3 \u03b3+m i l is new (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
{
"text": "where m i is the total number of occurrences of the target word w i , m il is the number of the lth cluster and we have S i l m il = m i . Note that \u03b3 is a hyper-parameter that indicates how likely a new cluster will be created, and its impact decreases as the size of training data m i increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
{
"text": "The probability of sampling an extended meaning is proportional to the semantic distance of the corresponding cluster center to the context fea-tures as well as the cluster center sampled in the first stage, which is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
{
"text": "p(s ie = l |s io = l, w c i ) \u221d e \u2022Sim(w s ie i , w s io i + w c i 2 ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
{
"text": "where e varies from 0 to 1 and controls the strength of the reinforcement. We will talk about it in the next subsection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
{
"text": "Sampling separately, the relatedness of the original meaning and the extended meanings are modeled and each occurrence of the word has been annotated with one original sense and several extended senses (or there is no additional extended meanings). Note that the likelihood of the occurrence of the word annotated with an extended meaning is p(s ie = l |s io = l, w c i )p(s io = l|w c i ). Clearly, the probabilities of sampling the extended meanings are always lower than that of the original meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},
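{
"text": "The two-stage sampling of Equations 4 and 5 can be sketched as follows. This is a simplified illustration: the similarity clamping and the per-sense Bernoulli draw in the second stage are our own simplifications of the reinforcement step.\n\nimport numpy as np\nimport random\n\ndef cos(a, b):\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))\n\ndef sample_senses(counts, centroids, sense_vecs, ctx_vec, gamma, e):\n    # Stage 1 (Eq. 4): sample the original sense from an existing cluster or a new one.\n    m = sum(counts)\n    weights = [counts[l] / (gamma + m) * max(cos(centroids[l], ctx_vec), 0.0)\n               for l in range(len(counts))]\n    weights.append(gamma / (gamma + m))          # probability of creating a new sense\n    lo = random.choices(range(len(weights)), weights=weights)[0]\n    if lo == len(counts):\n        return lo, []                            # a new sense has no extended senses yet\n    # Stage 2 (Eq. 5): sample extended senses conditioned on the original one.\n    anchor = (sense_vecs[lo] + ctx_vec) / 2.0\n    extended = [l for l in range(len(counts)) if l != lo\n                and random.random() < e * max(cos(sense_vecs[l], anchor), 0.0)]\n    return lo, extended",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating GPU into Embedding model",
"sec_num": "3.2.2"
},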
{
"text": "Methods",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship with State-of-the-art",
"sec_num": "3.3"
},
{
"text": "FCSE-1 The hyper-parameters meet 0 \u2264 under , upper \u2264 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship with State-of-the-art",
"sec_num": "3.3"
},
{
"text": "upper is used to control the number of clusters assigned to the current word, and FCSE-1 will degrade to hard assignment if we set upper = 0, which is similar with the NP-MSSG model in (Neelakantan et al., 2014) . We can use under to control the sense number of each word, and an extreme case of under = 0 denotes that we create only a sense cluster for each word, then the model is equivalent to the Skip-gram.",
"cite_spans": [
{
"start": 185,
"end": 211,
"text": "(Neelakantan et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship with State-of-the-art",
"sec_num": "3.3"
},
{
"text": "The number of the extended meanings |S ie | varies from 0 to |S \u2212l i |, where S \u2212l i denotes the set excluding the original meaning s l i . The hyper-parameter 0 \u2264 e \u2264 1 is used to control the strength of the GPU reinforcement as well as the number of the extended meanings. Particularly, if we set e = 0, the second sample for the extended meanings has been turned off, and then FCSE-2 degrades to the SG+ model in (Li and Jurafsky, 2015) , which is another state-of-theart method for multi-prototype word embedding model based on hard clustering. By setting \u03b3 = 0 in Equation 4, which is used to control the probability of creating a new sense, FCSE-2 won't create new senses. Learning a single sense for each word makes the step of sense sampling becomes meaningless. Thus, FCSE-2 uses the only embedding of the current word to predict its context words, which is equivalent to the Skip-gram.",
"cite_spans": [
{
"start": 416,
"end": 439,
"text": "(Li and Jurafsky, 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FCSE-2",
"sec_num": null
},
{
"text": "In this section, we demonstrate the effectiveness of our model from two aspects, qualitative and quantitative analysis. For qualitative analysis, we presents nearest 10 neighbors for each word sense to give an intuitive impression. For quantitative analysis, we conduct a series of experiments on the NLP task of word similarity using two benchmark datasets, and explore the influence of the size of training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Evaluation",
"sec_num": "4"
},
{
"text": "We train our model on Wikipedia, the April 2010 dump also used by (Huang et al., 2012; Liu et al., 2015; Neelakantan et al., 2014) . Before training, we have conducted a series of preprocessing steps. At first, the articles have been splitted into sentences, following by stemming and lemmatization using the python package of NLTK 5 . Then, we rank the vocabulary according to their frequencies, and only learn the embeddings of the top 200,000 words. The other words out of the vocabulary are replaced by a pre-defined mark \"UNK\". Note that FCSE is slower than word2vec 6 , but the efficiency is far away from being an obstacle on training.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Huang et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 87,
"end": 104,
"text": "Liu et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 105,
"end": 130,
"text": "Neelakantan et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
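{
"text": "A sketch of this preprocessing pipeline using NLTK (sentence splitting, stemming and lemmatization, then frequency-based vocabulary truncation); the exact options used in our experiments may differ, and the NLTK punkt and wordnet data must be downloaded beforehand.\n\nfrom collections import Counter\nimport nltk\nfrom nltk.stem import PorterStemmer, WordNetLemmatizer\n\ndef preprocess(raw_text, vocab_size=200000):\n    stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()\n    sentences = []\n    for sent in nltk.sent_tokenize(raw_text):\n        tokens = [lemmatizer.lemmatize(stemmer.stem(tok.lower()))\n                  for tok in nltk.word_tokenize(sent)]\n        sentences.append(tokens)\n    counts = Counter(tok for sent in sentences for tok in sent)\n    vocab = {w for w, _ in counts.most_common(vocab_size)}\n    # Replace out-of-vocabulary words with the pre-defined mark 'UNK'.\n    return [[tok if tok in vocab else 'UNK' for tok in sent] for sent in sentences]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},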
{
"text": "Below we describe three baseline methods and parameter settings, followed by qualitative analysis of nearest neighbors of each word sense. Then, quantitative performance will be presented via experiments on two benchmark word similarity tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "Word Embedding model can be roughly divided into two types: single vector embedding model and multi-prototype embedding model. To validate the performance, we compare our model with three models of both the two types: Skip-gram, NP-MSSG and SG+. The reason why we select them as the baseline methods is because: (i) they are the state-of-the-art methods of word embedding model; (ii) NP-MSSG and SG+ adopts the similar learning framework to our model. Table 1 : Nearest 10 neighbors of each sense of the words \"apple\" and \"berry\", computed by cosine similarity, for different models.",
"cite_spans": [],
"ref_spans": [
{
"start": 452,
"end": 459,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "the embeddings within a two-layer neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "\u2022 NP-MSSG * measures the distance of the current word to each sense, picks up the nearest one and learning its embedding via a standard Skip-gram model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "\u2022 SG+ * improves the NP-MSSG model by introducing a random process that induces the word sense with probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "The symbol * denotes that we, instead of using their released codes, carefully reimplement these models for the sake of making the comparisons as fairly as possible. Thus, all the models share the same program switched by the correspondingly parameters (as described in Section 3.3). Note that there may be some minor differences such as optimizing tricks between our program and that of their released.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "As discussed in Section 3.3, our model can degrade to the baseline methods by switching different parameters: the threshold upper , e and the max number of word senses N M AX . All the meth-ods are implemented on the same java program 7 , and use, at the greatest extent, the same settings including the training corpus, shared parameters and the program code, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting",
"sec_num": "4.3"
},
{
"text": "Switching parameters For FCSE-1 and NP-MSSG, upper is set 0.05 and 0, respectively. Similarly, We set e = 1 for FCSE-2, and e = 0 for SG+. When setting N M AX = 1, all the multiprototype word embedding models degrade to single vector embedding model, that is, the Skipgram model. Shared parameters Following the original papers of NP-MSSG and SG+, the threshold under in FCSE-1 is also set with -0.5, and \u03b3 = 0.01 is used in both FCSE-2 and SG+. The initial learning rate \u03b1 = 0.015 is used for parameter estimation. We pick up 5 words as the context window, and 400 dimensional vectors to learn sense embeddings of the top 200,000 frequent words. Note that all the parameters including the embedding vectors are initialized randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting",
"sec_num": "4.3"
},
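{
"text": "For reference, the switching and shared parameters above can be collected into a single configuration; the dictionary layout itself is only an illustration, with the values as stated in this section.\n\n# Hyper-parameter settings from Section 4.3 (illustrative layout).\nCONFIG = {\n    'upper': {'FCSE-1': 0.05, 'NP-MSSG': 0.0},   # switching parameter\n    'e': {'FCSE-2': 1.0, 'SG+': 0.0},            # switching parameter\n    'N_MAX': None,        # max senses per word; N_MAX = 1 recovers Skip-gram\n    'under': -0.5,        # FCSE-1 threshold for creating a new sense cluster\n    'gamma': 0.01,        # FCSE-2 / SG+ new-sense concentration\n    'alpha': 0.015,       # initial learning rate\n    'window': 5,          # context window size\n    'dim': 400,           # embedding dimensionality\n    'vocab_size': 200000, # top frequent words\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Setting",
"sec_num": "4.3"
},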
{
"text": "Before conducting the experiments on word similarity task, we first give qualitative analysis of our model as well as two baseline models 8 by representing the word sense with its nearest neighbors, which are computed through cosine similarity of the embeddings between each of the word senses and the senses of the other words. Table 1 presents the nearest 10 neighbors of each sense of two words ranked through the similarity. Skip-gram shows a mixed result of different senses, while the other two models produce a reasonable number of word sense, and their neighbors are indeed semantically correlated. For the word \"Apple\", there are two meanings of the fruit and technology company. NP-MSSG and FCSE-1 can differentiate the two senses, but FCSE-1 clearly achieves a more coherent ranking results. For the word \"Berry\", FCSE-1 outperforms NP-MSSG for it successfully identifies another sense of person's name except the dominant sense of fruit. This is because \"Berry\" is used as a person's name much less frequently than a fruit. Thus, it may cause the data sparsity issue, while our model is capable of addressing this problem by improving the usage of training corpus, which will be further discussed in Section 4.5.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 329,
"end": 336,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "4.4"
},
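{
"text": "The nearest-neighbor lists in Table 1 are produced by ranking cosine similarities between a query sense vector and the sense vectors of all other words; a minimal sketch with illustrative names:\n\nimport numpy as np\n\ndef nearest_neighbors(query_vec, sense_vecs, labels, topn=10):\n    # sense_vecs: matrix with one row per (word, sense) pair; labels: matching identifiers.\n    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)\n    m = sense_vecs / (np.linalg.norm(sense_vecs, axis=1, keepdims=True) + 1e-12)\n    sims = m @ q\n    order = np.argsort(-sims)[:topn]\n    return [(labels[i], float(sims[i])) for i in order]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "4.4"
},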
{
"text": "In this subsection, we evaluate our embeddings on two classic tasks of measuring word similarity: word similarity and contextual word similarity. To better test the ability of our model to address the problem of data sparsity, we train it using only 30% of the training corpus (sampled randomly). Also, we give comparisons with the performance using all the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Similarity",
"sec_num": "4.5"
},
{
"text": "WordSim353 (Finkelstein et al., 2001 ) is a benchmark dataset for word similarity. It contains 353 word pairs and their similarity scores assessed by 16 subjects. SCWS, released by (Huang et al., 2012) , is a benchmark dataset for contextual word similarity, which computes the semantic relatedness between two words conditioned on the specific context. It consists 2,003 pairs of words and their sentential contexts. WordSim353 focuses on the ambiguity among similar words, and SCWS is for the ambiguity of word senses in different con- 8 To be fair, we only show the comparisons among FCSE-1, NP-MSSG and Skip-gram, since the paper of SG+ (Li and Jurafsky, 2015) didn't give the qualitative results. texts.",
"cite_spans": [
{
"start": 11,
"end": 36,
"text": "(Finkelstein et al., 2001",
"ref_id": "BIBREF5"
},
{
"start": 181,
"end": 201,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 538,
"end": 539,
"text": "8",
"ref_id": null
},
{
"start": 641,
"end": 664,
"text": "(Li and Jurafsky, 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Similarity",
"sec_num": "4.5"
},
{
"text": "To evaluate the performance of our model, we compute the similarity between each word pair through some measurement, and then use the spearman correlation between our results and the human judgments to evaluate the performance of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.5.1"
},
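{
"text": "The evaluation reduces to a Spearman correlation between model scores and human judgments; a minimal sketch using SciPy:\n\nfrom scipy.stats import spearmanr\n\ndef evaluate(model_scores, human_scores):\n    # Spearman's rho (x100, as reported in the tables) between the model's similarity\n    # scores and the human judgments over the same word pairs.\n    rho, _ = spearmanr(model_scores, human_scores)\n    return 100.0 * rho",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.5.1"
},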
{
"text": "Working on WordSim353, we compute the average similarity between the word pairs the same as (Reisinger and Mooney, 2010; Neelakantan et al., 2014) . And working on SCWS, we use two similarity measurements, avgSimC and maxSimC, proposed by (Neelakantan et al., 2014; Liu et al., 2015) . avgSimC focuses on evaluating the average similarity between all the senses of the two words, and maxSimC evaluates the similarity between the senses with max probability for the current word. Table 2 and 3 shows the overall performance of our proposed model as well as the baseline methods on WordSim353 and SCWS datasets. We only obtain lower performance numbers for SG+, which suggests that they may be more susceptible to noise and worse generalization ability. However, this is a fair comparison because all the methods share the same parameter settings and the code. The following is indicated in the results:",
"cite_spans": [
{
"start": 92,
"end": 120,
"text": "(Reisinger and Mooney, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 121,
"end": 146,
"text": "Neelakantan et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 239,
"end": 265,
"text": "(Neelakantan et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 266,
"end": 283,
"text": "Liu et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 479,
"end": 486,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.5.1"
},
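{
"text": "A sketch of the two contextual similarity measures, following their definitions in (Neelakantan et al., 2014); here p1 and p2 are the probabilities of each sense of the two words given their respective contexts, and v1 and v2 are the corresponding lists of sense vectors (names illustrative):\n\nimport numpy as np\n\ndef cos(a, b):\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))\n\ndef avg_sim_c(p1, v1, p2, v2):\n    # Probability-weighted average similarity over all sense pairs of the two words.\n    return sum(p1[i] * p2[j] * cos(v1[i], v2[j])\n               for i in range(len(v1)) for j in range(len(v2)))\n\ndef max_sim_c(p1, v1, p2, v2):\n    # Similarity between the most probable sense of each word in its context.\n    i, j = int(np.argmax(p1)), int(np.argmax(p2))\n    return cos(v1[i], v2[j])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.5.1"
},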
{
"text": "Model \u03c1 \u00d7 100 NP-MSSG * 67.3 SG+ *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5.2"
},
{
"text": "66.9 Skip-gram * 66.7 FCSE-1 68.8 FCSE-2 69.5 Table 3 : Results on the SCWS dataset. \"avg\" and \"max\" respectively denotes the similarity measurements of avgSimC and maxSimC.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5.2"
},
{
"text": "\u2022 The skip-gram model achieves rather comparative performance due to its good generalization ability, especially in a smaller training set as compared to hard-cluster based multiprototype word embedding models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5.2"
},
{
"text": "\u2022 FCSE-2 achieves the best performance due to the separately sample for the original meaning and the extended meanings, which follows different distributions incorporating both the global and local information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5.2"
},
{
"text": "We also investigate the ability of our method that helps address the data sparsity issue by training on different size of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5.2"
},
{
"text": "Generally speaking, the embedding model performs better when trained on a larger corpus. The multi-prototype embedding model suffers more data sparsity issue than single prototype embedding due to its further partition on the set of words' contexts by clustering, and then performs even worse using a smaller training corpus. In this subsection, we study the capability of FCSE to helps address this problem by testing the performance when training on different size corpus. Figure 3 shows the comparison between the performance of all the models trained on 30% data and on 100% data. As the training data decreases, all the models perform worse especially the hard clustering based method. Compared to full corpus, we can see more apparent gap between NP-MSSG and FCSE-1 (from 2.6% to 3.1%), SG+ and FCSE-2 (from 0.1% to 1.9%). That is, the gap between FCSE and other methods gets closer when there are adequate training corpus, which is in accordance with the intuition. The data sparsity issue gradually vanishes along with the growth of training data. Besides, the performance of the single-prototype word embedding model increases Figure 3 : The performance of each model when training on different size of data only 1.6%. Our proposed model, both FCSE-1 and FCSE-2, achieves more stable performance (0.2% and 0.6% changes).",
"cite_spans": [],
"ref_spans": [
{
"start": 475,
"end": 483,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1136,
"end": 1144,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training on Different Size Data",
"sec_num": "4.5.3"
},
{
"text": "Multi-prototype word embedding has been extensively studied in the literature Cao et al., 2017; Liu et al., 2015; Reisinger and Mooney, 2010; Huang et al., 2012; Tian et al., 2014; Neelakantan et al., 2014; Li and Jurafsky, 2015) . They can be roughly divided into three groups. The first group is clustering based methods. As described in Section 1, (Reisinger and Mooney, 2010; Huang et al., 2012; Tian et al., 2014; Neelakantan et al., 2014; Li and Jurafsky, 2015) use clustering to induce word sense and then learn sense embeddings via Skip-gram model. The second group is to introduce topics to represent different word senses, such as (Liu et al., 2015) considers that a word under different topics leads to different meanings, so it embeds both word and topic simultaneously and combines them as the word sense. However, it is difficult to determine the number of topics. The third group incorporates external knowledge (i.e. knowledge bases) to induce word/phrase senses. jointly represents and disambiguates the word sense on the basis of the synsets in Word-Net. (Cao et al., 2017) regards entities in KBs as word/phrase senses, and first learn word/phrase and sense embeddings separately, then align them via Wikipedia anchors. However, it fails to deal with the words that are not included in knowledge bases.",
"cite_spans": [
{
"start": 78,
"end": 95,
"text": "Cao et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 96,
"end": 113,
"text": "Liu et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 114,
"end": 141,
"text": "Reisinger and Mooney, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 142,
"end": 161,
"text": "Huang et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 162,
"end": 180,
"text": "Tian et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 181,
"end": 206,
"text": "Neelakantan et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 207,
"end": 229,
"text": "Li and Jurafsky, 2015)",
"ref_id": "BIBREF8"
},
{
"start": 351,
"end": 379,
"text": "(Reisinger and Mooney, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 380,
"end": 399,
"text": "Huang et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 400,
"end": 418,
"text": "Tian et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 419,
"end": 444,
"text": "Neelakantan et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 445,
"end": 467,
"text": "Li and Jurafsky, 2015)",
"ref_id": "BIBREF8"
},
{
"start": 641,
"end": 659,
"text": "(Liu et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 1073,
"end": 1091,
"text": "(Cao et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we propose a novel method that models the word sense relatedness in multiprototype word embedding model. It considers the difference and relatedness between the original meanings and the extended meanings. Our proposed method adopts an online framework to induce the word sense and learn sense embeddings jointly, which makes our model more scalable and efficient. Two non-parametric methods for fuzzy clustering produce flexible number of word senses. Particularly, FCSE-2 introduces the Generalized Polya Urn process to integrate both the global occurrence information and local textual context information. The qualitative and quantitative results demonstrate the stable and higher performance of our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In the future, we are interested in incorporating external knowledge, such as WordNet, to supervise the clustering results, and in extending our model to learn more precise sentence and document embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "All the vectors are randomly initialized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More detailed information can be found in(Mikolov et al., 2013b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.nltk.org/ 6 https://code.google.com/archive/p/ word2vec/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We will publish the code if accepted, which is based on the published project of SG+ in https://github.com/jiweil/mutli-sense-embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural probabilistic language models",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Jean-S\u00e9bastien",
"middle": [],
"last": "Sen\u00e9cal",
"suffix": ""
},
{
"first": "Fr\u00e9deric",
"middle": [],
"last": "Morin",
"suffix": ""
},
{
"first": "Jean-Luc",
"middle": [],
"last": "Gauvain",
"suffix": ""
}
],
"year": 2006,
"venue": "Innovations in Machine Learning",
"volume": "",
"issue": "",
"pages": "137--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Holger Schwenk, Jean-S\u00e9bastien Sen\u00e9cal, Fr\u00e9deric Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, pages 137-186. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bridge text and knowledge by learning multi-prototype entity mention embedding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, and Juanzi Li. 2017. Bridge text and knowledge by learning multi-prototype entity mention embedding. In Pro- ceedings of the 55th annual meeting of the associ- ation for computational linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A unified model for word sense representation and disambiguation",
"authors": [
{
"first": "Xinxiong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1025--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025-1035.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Re- search, 12:2493-2537.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Graded word sense assignment",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "440--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk and Diana McCarthy. 2009. Graded word sense assignment. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 440-449. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 10th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "406--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "H",
"middle": [],
"last": "Eric",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H Huang, Richard Socher, Christopher D Man- ning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers-Volume 1, pages 873-882. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semeval-2013 task 13: Word sense induction for graded and non-graded senses",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Klapaftis",
"suffix": ""
}
],
"year": 2013,
"venue": "Second joint conference on lexical and computational semantics (* SEM)",
"volume": "",
"issue": "",
"pages": "290--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens and Ioannis Klapaftis. 2013. Semeval- 2013 task 13: Word sense induction for graded and non-graded senses. In Second joint conference on lexical and computational semantics (* SEM), vol- ume 2, pages 290-299.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Do multi-sense embeddings improve natural language understanding?",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1722--1732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understand- ing? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2015, Lisbon, Portugal, September 17- 21, 2015, pages 1722-1732.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Tat-Seng Chua, and Maosong Sun",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In Twenty- Ninth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A scalable hierarchical distributed language model",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1081--1088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Geoffrey E Hinton. 2009. A scal- able hierarchical distributed language model. In Advances in neural information processing systems, pages 1081-1088.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient nonparametric estimation of multiple embeddings per word in vector space",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Jeevan",
"middle": [],
"last": "Shankar",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1059--1069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2014. Efficient non- parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1059-1069.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A survey of random processes with reinforcement",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Pemantle",
"suffix": ""
}
],
"year": 2007,
"venue": "Probab. Surv",
"volume": "4",
"issue": "0",
"pages": "1--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Pemantle et al. 2007. A survey of random pro- cesses with reinforcement. Probab. Surv, 4(0):1-79.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multi-prototype vector-space models of word meaning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word mean- ing. In Human Language Technologies: The 2010",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 109-117. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Parsing with compositional vector grammars",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the ACL conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with composi- tional vector grammars. In In Proceedings of the ACL conference. Citeseer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Parsing natural scenes and natural language with recursive neural networks",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Cliff",
"middle": [
"C"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th international conference on machine learning (ICML-11)",
"volume": "",
"issue": "",
"pages": "129--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Cliff C Lin, Chris Manning, and An- drew Y Ng. 2011. Parsing natural scenes and natu- ral language with recursive neural networks. In Pro- ceedings of the 28th international conference on ma- chine learning (ICML-11), pages 129-136.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A probabilistic model for learning multi-prototype word embeddings",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hanjun",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "151--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilis- tic model for learning multi-prototype word embed- dings. In Proceedings of COLING, pages 151-160.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Word representations: a simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for compu- tational linguistics, pages 384-394. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Theory of earth science",
"authors": [
{
"first": "Wolf",
"middle": [],
"last": "Von Engelhardt",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Zimmermann",
"suffix": ""
}
],
"year": 1988,
"venue": "CUP Archive",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolf Von Engelhardt and J\u00f6rg Zimmermann. 1988. Theory of earth science. CUP Archive.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Verbs semantics and lexical selection",
"authors": [
{
"first": "Zhibiao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd an- nual meeting on Association for Computational Lin- guistics, pages 133-138. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Relatedness among senses of the word \"book\"."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Framework of FCSE 2 The Framework of FCSE"
},
"TABREF0": {
"content": "<table><tr><td/><td>writer, ibook, ipod</td></tr><tr><td>FCSE-1</td><td>nectarine, blackcurrants, loganberry, pear, boysenberry, strawberry, apricot, plum, cherry, blueberry</td></tr><tr><td/><td>macintosh, imac, iigs, ibook, ipod, pcpaint, iphone, booter, ipad, macbook</td></tr><tr><td>Berry</td><td/></tr><tr><td colspan=\"2\">Skip-gram * greengage, thimbleberry, loganberry, dewberry, boysenberry, pome, pas-</td></tr><tr><td/><td>sionfruit, acai, maybellene, blackcurrant</td></tr><tr><td colspan=\"2\">NP-MSSG * thimbleberry, pome, nectarine, greengage, fruit, boysenberry, dewberry,</td></tr><tr><td/><td>acai, loganberry, ripe</td></tr><tr><td>FCSE-1</td><td>nectarine, thimbleberry, blueberry, fruit, pome, loganberry, apple, elder-berry, passionfruit, litchi</td></tr><tr><td/><td>gordy, taylor, lambert, osborne, satchell, earland, thornton, fullwood, allen,</td></tr><tr><td/><td>sherrell</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Skip-gram * aims to leverage the current word to predict the context words and learn Apple Skip-gram * iigs, boysenberry, apricot, nectarine, ibook, ipad, blackberry, blackcurrants, loganberry, macintosh NP-MSSG * nectarine, boysenberry, peach, blackcurrants, pear, passionfruit, feijoa, loganberry, elderflower, apricot macintosh, mac, iigs, macworks, macwrite, bundled, compatible, laser-",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>\u2022 Both of FCSE-1 and FCSE-2 outperform all</td></tr><tr><td>of the baseline methods, because it models</td></tr><tr><td>the relatedness among word senses through</td></tr><tr><td>the common features, which inherits the ad-</td></tr><tr><td>vantages of multi-prototype model and en-</td></tr><tr><td>sures adequate training data as compared to</td></tr><tr><td>single vector model.</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Results on the wordsim353 dataset. The table presents spearman correlation \u03c1 between each model's similarity rank results and the human judgement.",
"num": null
}
}
}
}