{
"paper_id": "I17-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:06.012253Z"
},
"title": "A Computational Study on Word Meanings and Their Distributed Representations via Polymodal Embedding",
"authors": [
{
"first": "Joohee",
"middle": [],
"last": "Park",
"suffix": "",
"affiliation": {},
"email": "james.joohee.park@navercorp.com"
},
{
"first": "Sung-Hyon",
"middle": [],
"last": "Myaeng",
"suffix": "",
"affiliation": {},
"email": "myaeng@kaist.ac.kr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A distributed representation has become a popular approach to capturing a word meaning. Besides its success and practical value, however, questions arise about the relationships between a true word meaning and its distributed representation. In this paper, we examine such a relationship via polymodal embedding approach inspired by the theory that humans tend to use diverse sources in developing a word meaning. The result suggests that the existing embeddings lack in capturing certain aspects of word meanings which can be significantly improved by the polymodal approach. Also, we show distinct characteristics of different types of words (e.g. concreteness) via computational studies. Finally, we show our proposed embedding method outperforms the baselines in the word similarity measure tasks and the hypernym prediction tasks.",
"pdf_parse": {
"paper_id": "I17-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "A distributed representation has become a popular approach to capturing a word meaning. Besides its success and practical value, however, questions arise about the relationships between a true word meaning and its distributed representation. In this paper, we examine such a relationship via polymodal embedding approach inspired by the theory that humans tend to use diverse sources in developing a word meaning. The result suggests that the existing embeddings lack in capturing certain aspects of word meanings which can be significantly improved by the polymodal approach. Also, we show distinct characteristics of different types of words (e.g. concreteness) via computational studies. Finally, we show our proposed embedding method outperforms the baselines in the word similarity measure tasks and the hypernym prediction tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word representations based on the distributional hypothesis of Harris (1954) have become a dominant approach including word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) , which show remarkable performances in a wide spectrum of natural language processing. However, a question arises about a relationship between a true word meaning and its distributed representation. While the context-driven word representations seem to be able to capture word-to-word relations, for example, men is to women as king is to queen, it still remains unclear what aspects of word meaning they capture and miss. For example, a word, coffee, can be understood from multiple perspectives. It may be associated with a ceramic cup filled with dark brown liquid from the perceptual perspective or an emotion such as happiness or tranquility. It may provoke other related concepts like bagel or awakening. We raise the question of how well the current distributed representation captures such aspects of word meanings.",
"cite_spans": [
{
"start": 63,
"end": 76,
"text": "Harris (1954)",
"ref_id": "BIBREF10"
},
{
"start": 128,
"end": 150,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 161,
"end": 186,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to help answering this question, we propose a polymodal word representation based on the theory that humans tend to use diverse sources in developing a word meaning. In particular, we construct six modules for polymodality including linear context, syntactic context, visual perception, cognition, emotion, and sentiments based on the human cognitive model proposed by Maruish and Moses (2013) . They are combined to build a single word representation.",
"cite_spans": [
{
"start": 378,
"end": 402,
"text": "Maruish and Moses (2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct a series of experiments to examine the relationships between word meanings and their distributed representations and compare the results with other representations such as word2vec, GloVe, and meta-embedding (Yin and Sch\u00fctze, 2015) . We attempt to understand how well the model capture the diverse aspects of word meanings via two experiments: the property norms analysis and the sentiment polarity analysis. The result suggests that the existing embedding methods lack in capturing visual properties and sentiment polarities and show that they can be much improved by adopting polymodal approaches.",
"cite_spans": [
{
"start": 219,
"end": 242,
"text": "(Yin and Sch\u00fctze, 2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we examine distinct characteristics of different types of words via computational studies, focusing along the dimension of concept concreteness and similarity. We find that the importance of a certain module (e.g. visual perception or lexical relations) varies depending on the word properties. Our study provides some computational evidence for the heterogeneous nature of word meanings, which has been extensively studied in the field of psycholinguistics. We briefly introduce it in the following subsection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Word meanings are thought to have diverse aspects. Steels (2008) address that languages are inherently built upon our cognitive system to fulfill the purpose of communication between mutually unobservable internal representations. So many psycholinguistic theories have attempted to understand the diverse nature of word meanings by human minds. Barsalou (1999) claims that many human modalities such as conceptual/perceptual systems cooperate with each other in a complex way and influence word meanings, while Pulverm\u00fcller (1999) argues that concepts are grounded in complex simulations of physical and introspective events, activating the frontal region of the brain that coordinates the multimodal information. Studies on semantic priming (Plaut and Booth, 2000) also supports them that words can be similar to each other in various ways to foster the priming effect. The experiments in this paper are designed to provide some computational evidence on such studies on the multifaceted nature of word meanings.",
"cite_spans": [
{
"start": 51,
"end": 64,
"text": "Steels (2008)",
"ref_id": "BIBREF36"
},
{
"start": 346,
"end": 361,
"text": "Barsalou (1999)",
"ref_id": "BIBREF3"
},
{
"start": 512,
"end": 531,
"text": "Pulverm\u00fcller (1999)",
"ref_id": "BIBREF29"
},
{
"start": 743,
"end": 766,
"text": "(Plaut and Booth, 2000)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical works",
"sec_num": "2.1"
},
{
"text": "From a computational point of view, there exist a number of bimodal approaches that extend the semantic representation to include perceptual information or understandings of the world around us. Bruni et al. (2014) and Kiros et al. (2014a) propose a way to augment text-based word embeddings using public image datasets while Roller and Im Walde (2013) integrate visual features into LDA models. A recent study on Image caption generation (Xu et al., 2015) suggests an interesting way to align word embeddings and image features. Moreover, Kiros et al. (2014b) jointly trains the image abstraction network and sentence abstraction network altogether, making the visual features naturally combined into word embeddings. Similar attempts have been made not only for visual perception but also auditory and olfactory perception. On the other hand, Henriksson (2015) demonstrates that semantic space ensemble models created by exploiting various corpora are able to outperform any single constituent model. Yin and Sch\u00fctze (2015) propose meta-Embedding that ensembles multiple semantic spaces trained by different methods with different tasks such as word2vec, GloVe, HLBL (Luong et al., 2013) and C&W (Collobert and Weston, 2008) . Above works succeed to improve word embedding quality by extending the semantic representation, but it still remains unclear how those improvements are related to the word meanings.",
"cite_spans": [
{
"start": 195,
"end": 214,
"text": "Bruni et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 219,
"end": 239,
"text": "Kiros et al. (2014a)",
"ref_id": "BIBREF18"
},
{
"start": 439,
"end": 456,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF39"
},
{
"start": 540,
"end": 560,
"text": "Kiros et al. (2014b)",
"ref_id": "BIBREF19"
},
{
"start": 845,
"end": 862,
"text": "Henriksson (2015)",
"ref_id": "BIBREF11"
},
{
"start": 1003,
"end": 1025,
"text": "Yin and Sch\u00fctze (2015)",
"ref_id": "BIBREF40"
},
{
"start": 1169,
"end": 1189,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF22"
},
{
"start": 1198,
"end": 1226,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal approaches",
"sec_num": "2.2"
},
{
"text": "To embrace the multifaceted nature of word meanings, we propose a polymodal word embedding. More specifically, we take into account perception, sentiment, emotion, and cognition (lexical relation) derived from diverse sources, in addition to linear context and syntactic context obtained from the corpus. Note that the term polymodal is used to distinguish it from bimodal (Kiela, 2017) . In bimodal approach, a single cognitive modality is used whereas more than one modalities are used in polymodal.",
"cite_spans": [
{
"start": 373,
"end": 386,
"text": "(Kiela, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Polymodal word embedding",
"sec_num": "3"
},
{
"text": "We describe each of the modules in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "Linear context refers to linear embeddings (Mikolov et al., 2013) comprising 300dimensional vectors trained over 100 billion words from the Google News dataset using skip-gram and negative sampling.",
"cite_spans": [
{
"start": 43,
"end": 65,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
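As a concrete illustration of this module, here is a minimal sketch of loading the publicly released 300-dimensional Google News skip-gram vectors; the use of gensim and the file name GoogleNews-vectors-negative300.bin are assumptions, not details given in the paper.

```python
# Sketch only: load the pretrained skip-gram (negative sampling) vectors as the
# linear-context module. Assumes gensim and the public GoogleNews binary file.
from gensim.models import KeyedVectors

linear = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

vec = linear["coffee"]                        # 300-dimensional linear-context vector
print(vec.shape)                              # (300,)
print(linear.most_similar("coffee", topn=5))  # topically related neighbors
```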
{
"text": "Syntactic context takes a similar skip-gram approach as in linear context but defines the context window differently using a dependency parsing result (Levy and Goldberg, 2014) . While the linear skip-gram defines the contexts of a target word w as w \u2212k , w \u2212k+1 , ..., w k\u22121 , w k where k is a size of the window, syntactic context defines them as",
"cite_spans": [
{
"start": 151,
"end": 176,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "(m 1 , lbl 1 ), (m 2 , lbl 2 ), ..., (m k , lbl k ), (m \u22121 , lbl \u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "where m is the modifiers of word w and lbl is the type of dependency relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
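To illustrate how such (modifier, label) context pairs can be extracted, here is a small sketch using spaCy's dependency parser; the choice of parser is an assumption, since the paper does not name one.

```python
# Sketch only: extract dependency-based (context word, relation label) pairs per
# target word, in the spirit of Levy and Goldberg (2014). spaCy is an assumed parser.
import spacy

nlp = spacy.load("en_core_web_sm")

def dependency_contexts(sentence):
    contexts = []
    doc = nlp(sentence)
    for word in doc:
        for child in word.children:          # modifiers of the target word
            contexts.append((word.text, (child.text, child.dep_)))
        if word.head is not word:            # the head contributes an inverse relation
            contexts.append((word.text, (word.head.text, word.dep_ + "-1")))
    return contexts

print(dependency_contexts("Australian scientist discovers star with telescope"))
```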
{
"text": "Both linear and syntactic contexts are similar in the sense that they capture word characteristics from the corpus. However, the different definitions of the contexts make the model focus on the different aspects of word meanings. Levy and Goldberg (2014) report that linear context tends to capture topical similarity whereas syntactic context captures functional similarity. For example, the word Florida is close to Miami in linear context but close to California in syntactic context. We harness both types of contexts to take into account functional and syntactic similarities.",
"cite_spans": [
{
"start": 231,
"end": 255,
"text": "Levy and Goldberg (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "Cognition (Lexical relation) encompasses all the relations between words, which are captured in the form of a lexicon or ontology in a cognitive system. In this paper, we mainly focus on synonym, hypernym and hyponym relations in WordNet (Miller, 1995) which contains 149k words and 935k relations between them. We train lexical-relation-specific word embedding using retro-fitting .",
"cite_spans": [
{
"start": 238,
"end": 252,
"text": "(Miller, 1995)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "Specifically, let V = {w 1 , ..., w n } be a vocabulary and \u2126 be an ontology that encodes semantic relations between words in V . \u2126 can be represented as a set of edges of undirected graph where (w i , w j ) \u2208 \u2126 if w i and w j holds semantic relationship of interest. The matrixQ is the collection of the vector representation ofq i \u2208 R d for each word w i \u2208 V where d is the length of pre-trained word vectors. In this experiment, we use GloVe as such vectors. The objective of learning is to train the matrix Q = (q 1 , ..., q n ) so as to make q i close to its counterpartq i and also to its adjacent vertices in \u2126. Thus the objective function to be minimized can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "\u03a8(Q) = n i=1 \u03b1 i ||q i \u2212q i || 2 + (i,j)\u2208\u2126 \u03b2 ij ||q i \u2212q j || 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "where \u03b1 i and \u03b2 ij are hyperparameters. This procedure of training transforms the manifold of semantic space to make words in relations located more closer in Euclidean distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
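Here is a minimal sketch of the iterative retrofitting update implied by this objective; the choices alpha_i = 1 and beta_ij = 1/degree(i) are assumptions borrowed from common practice, not values stated in the paper.

```python
# Sketch only: iterative retrofitting update that minimizes the objective above.
# alpha_i = 1 and beta_ij = 1 / degree(i) are assumed weight choices.
import numpy as np

def retrofit(q_hat, edges, iters=10):
    """q_hat: dict word -> pretrained vector; edges: dict word -> set of related words."""
    q = {w: v.copy() for w, v in q_hat.items()}
    for _ in range(iters):
        for w, neighbors in edges.items():
            nbrs = [n for n in neighbors if n in q]
            if w not in q or not nbrs:
                continue
            beta = 1.0 / len(nbrs)
            # closed-form update: weighted average of the original vector and neighbors
            q[w] = (q_hat[w] + beta * sum(q[n] for n in nbrs)) / (1.0 + beta * len(nbrs))
    return q
```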
{
"text": "Perception is a vital component for human cognition and has a significant influence on word meanings. In this paper, we only consider visual perception. We jointly train the embeddings of images and sentences together into the multi-modal vector space to build vision-specific word embeddings (Kiros et al., 2014b) .",
"cite_spans": [
{
"start": 293,
"end": 314,
"text": "(Kiros et al., 2014b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "In particular, let T be the training dataset where one image I i is associated with a corresponding caption sentence S i , i.e., (I i , S i ) \u2208 T . An embedding of image I i , x i \u2208 R d , can be obtained through convolutional neural networks, in this case, 19-layer OxfordNet (Simonyan and Zisserman, 2014), where d is the size of the dimension of multimodal space. Similarly, an embedding of sentence S i , x s \u2208 R d , can be composed through one of the sentence modeling networks, in this case, LSTM (Hochreiter and Schmidhuber, 1997) . These two image and sentence modeling networks are jointly trained together to minimize the pairwise ranking loss function",
"cite_spans": [
{
"start": 502,
"end": 536,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "L = x i x\u015d max(0, \u03b1 \u2212 x i \u2022 x s + x i \u2022 x\u015d) + xs x\u00ee max(0, \u03b1 \u2212 x s \u2022 x i + x s \u2022 x\u00ee)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "to place correct samples closer while separating negative samples farther in the joint space. \u03b1 is a hyperparameter and x\u015d and x\u00ee are incorrect image and sentence pair obtained through negative sampling. We use MS COCO dataset (Lin et al., 2014) to train the network which contains 300k images and 5 captions per image. Final perception embeddings of dimension 1024 are sampled from the joint space regarding one word as a sentence.",
"cite_spans": [
{
"start": 227,
"end": 245,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
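Here is a small sketch of this pairwise ranking loss for a mini-batch of matched image and sentence embeddings, written in PyTorch; approximating the negative samples by the other items in the batch is an assumption.

```python
# Sketch only: pairwise ranking loss over a batch where row i of X_img and X_sen
# form a correct (image, sentence) pair; the other rows act as negatives (assumption).
import torch

def ranking_loss(X_img, X_sen, alpha=0.2):
    scores = X_img @ X_sen.t()                   # scores[i, j] = image_i . sentence_j
    diag = scores.diag()
    # image anchored: each image against all contrastive (non-matching) sentences
    cost_sen = (alpha - diag.view(-1, 1) + scores).clamp(min=0)
    # sentence anchored: each sentence against all contrastive (non-matching) images
    cost_img = (alpha - diag.view(1, -1) + scores).clamp(min=0)
    eye = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_sen = cost_sen.masked_fill(eye, 0)      # do not penalize the correct pairs
    cost_img = cost_img.masked_fill(eye, 0)
    return cost_sen.sum() + cost_img.sum()
```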
{
"text": "Sentiment, either positive or negative, is determined for words that have sentiment orientations depending on their inherent meanings, usages, backgrounds etc. To capture the sentiment polarity of words (positive and negative), we use SentiWordNet3.0 (Baccianella et al., 2010) , a lexical resource that automatically annotates the degree of positivity, negativity, and neutrality of English words. It is a one-dimensional value and if a word has multiple senses, we take the difference between the maximum positivity and the minimum negativity.",
"cite_spans": [
{
"start": 251,
"end": 277,
"text": "(Baccianella et al., 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
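A minimal sketch of the per-word sentiment value described above follows; accessing SentiWordNet 3.0 through NLTK's sentiwordnet corpus reader is an assumed implementation choice.

```python
# Sketch only: one-dimensional sentiment value per word, following the rule above
# (maximum positivity minus minimum negativity over the word's senses).
# Requires nltk.download("wordnet") and nltk.download("sentiwordnet").
from nltk.corpus import sentiwordnet as swn

def sentiment_value(word):
    senses = list(swn.senti_synsets(word))
    if not senses:
        return 0.0                               # neutral value for uncovered words
    max_pos = max(s.pos_score() for s in senses)
    min_neg = min(s.neg_score() for s in senses)
    return max_pos - min_neg

print(sentiment_value("happy"), sentiment_value("terrible"))
```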
{
"text": "Emotion are considered by using NRC Emotion Lexicon (Mohammad and Turney, 2013) to reflect the emotional characteristics of words. It contains 15k words that are annotated with 10 emotion categories: anger, anticipation, disgust, fear, joy, sadness, surprise, trust, negative and positive. We built 10-dimensional one-hot emotion vectors based on this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
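Here is a small sketch of building these 10-dimensional emotion vectors; the tab-separated word-level file format assumed here is that of the standard NRC lexicon distribution.

```python
# Sketch only: 10-dimensional emotion indicator vectors from the NRC Emotion Lexicon,
# assuming the standard word-level file with "word<TAB>emotion<TAB>0/1" lines.
import numpy as np

EMOTIONS = ["anger", "anticipation", "disgust", "fear", "joy",
            "sadness", "surprise", "trust", "negative", "positive"]

def load_emotion_vectors(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) != 3:
                continue                          # skip header or blank lines
            word, emotion, flag = parts
            if flag == "1" and emotion in EMOTIONS:
                vec = vectors.setdefault(word, np.zeros(len(EMOTIONS)))
                vec[EMOTIONS.index(emotion)] = 1.0
    return vectors
```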
{
"text": "Note that some embedding sets may not cover every word in our set of test vocabulary. In that case, out-of-vocabulary (OOV) words are initialized to zero for the missing modules. All embeddings are L2-normalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "While the most rudimentary way for the amalgamation of several vectors is a concatenation with weights, other ensemble methods are expected to produce the vectors with improved quality (Henriksson, 2015). suggest that singular value decomposition (SVD) can be a promising way to merge the information by approximating the original matrix. Motivated by their work, we examine two matrix factorization techniques, SVD and non-negative matrix factorization (NMF). In addition, we explore an unsupervised ensemble method via autoencoder (AE) networks. The details of these methods are illustrated below. Hyperparameters such as dimension d are selected to obtain the highest Spearmans correlation score in the RG-65 dataset (Rubenstein and Goodenough, 1965) , which is used as a development set to minimize the interference on the test set. Note that before applying SVD, NMF, and AE, embeddings from different modules are concatenated with weights.",
"cite_spans": [
{
"start": 720,
"end": 753,
"text": "(Rubenstein and Goodenough, 1965)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
{
"text": "Concatenation (CONC) is used as the first step for ensembling multiple vectors of different dimensions. That is, let S be a set of n semantic spaces and s i be a single vector space in S. e id \u2208 s i is a representation of word w d in the semantic space s i \u2208 S. Then the resulting concatenated embedding e d of word w d is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
{
"text": "e d = \u03b1 1 e d1 \u2295 ... \u2295 \u03b1 i e di \u2295 ... \u2295 \u03b1 n e dn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
{
"text": "where \u2295 is the concatenation operator and i \u03b1 i = 1. RG-65 is used as a development set to tune the weights \u03b1 i of particular embedding e di .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
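A minimal sketch of this weighted concatenation step follows, including the zero-vector treatment of OOV words and the L2 normalization mentioned above; module names and weights are illustrative.

```python
# Sketch only: weighted concatenation (CONC) of the module embeddings for one word.
# Each module vector is L2-normalized; missing (OOV) modules contribute zero vectors.
import numpy as np

def l2_normalize(v):
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def concat_embedding(word, modules, dims, weights):
    """modules: name -> {word: vector}; dims: name -> dimension; weights sum to 1."""
    pieces = []
    for name in modules:
        vec = modules[name].get(word)
        vec = l2_normalize(vec) if vec is not None else np.zeros(dims[name])
        pieces.append(weights[name] * vec)
    return np.concatenate(pieces)
```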
{
"text": "Singular Value Decomposition (SVD) is a generalization of eigenvalue decomposition to any m \u00d7 n matrix where it is reported to be effective in signal processing (Sahidullah and Kinnunen, 2016) . Let V be the set of m words and k is the dimension of word embedding e i for word w i \u2208 V . The dictionary matrix M is a m \u00d7 k matrix where each row vector m i of M is an embedding vector of e i of word w i . Then this matrix M is decomposed into M = U \u03a3V T where U and V are m\u00d7m and n \u00d7 n real unitary matrices respectively, and \u03a3 is a m \u00d7 n non-negative real rectangular diagonal matrix. u id is the first d dimension of i-th row vector u i of U and we use it as a representation of word w i . d is 230 for SVD. The size of vocabulary m is 20150.",
"cite_spans": [
{
"start": 161,
"end": 192,
"text": "(Sahidullah and Kinnunen, 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
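A small sketch of the SVD ensemble using NumPy; the dictionary matrix M is assumed to be built by stacking the concatenated word vectors row by row.

```python
# Sketch only: SVD ensemble. M is the m x k dictionary matrix whose rows are the
# concatenated word vectors; the first d columns of U become the new embeddings.
import numpy as np

def svd_embeddings(M, d=230):
    U, S, Vt = np.linalg.svd(M, full_matrices=False)   # M = U diag(S) Vt
    return U[:, :d]                                     # row i: d-dim vector of word i

# Hypothetical usage:
# M = np.stack([concat_embedding(w, modules, dims, weights) for w in vocab])
# E = svd_embeddings(M, d=230)
```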
{
"text": "Non-negative matrix factorization (NMF) has been reported to be effective method in various research areas including bioinformatics (Taslaman and Nilsson, 2012) , signal denoising (Schmidt et al., 2007) , and topic modeling (Arora et al., 2013) . Two non-negative matrix W and H are optimized to approximate the dictionary matrix M T \u2248 W H by minimizing the frobenius norm ||M T \u2212 W H|| F where W, H \u2265 0. NMF has an inherent property of clustering the column vectors of the target matrix. To make M T non-negative, we normalize the values of each embedding into the [0,1]. Let s id be the first d dimension of i-th column vector s i of W . Then we use s id as a representation of word w i . d is 200 for NMF.",
"cite_spans": [
{
"start": 132,
"end": 160,
"text": "(Taslaman and Nilsson, 2012)",
"ref_id": "BIBREF37"
},
{
"start": 180,
"end": 202,
"text": "(Schmidt et al., 2007)",
"ref_id": "BIBREF33"
},
{
"start": 224,
"end": 244,
"text": "(Arora et al., 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
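A small sketch of the NMF ensemble using scikit-learn, an assumed implementation choice; for simplicity it factorizes M directly (rather than M^T) so that each word's d-dimensional representation is a row of W.

```python
# Sketch only: NMF ensemble with scikit-learn. Embeddings are rescaled into [0, 1]
# per dimension so the matrix is non-negative; W then holds one d-dim row per word.
import numpy as np
from sklearn.decomposition import NMF

def nmf_embeddings(M, d=200):
    M01 = (M - M.min(axis=0)) / (M.max(axis=0) - M.min(axis=0) + 1e-12)
    model = NMF(n_components=d, init="nndsvd", max_iter=500)
    W = model.fit_transform(M01)          # m x d, with M01 ~= W @ model.components_
    return W
```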
{
"text": "Autoencoder (AE) is a neural network used for unsupervised learning of efficient coding for data compression or dimensionality reduction (Hinton and Sejnowski, 1986) . Previous work suggests that an autoencoder may be able to learn relationships between the modules and result in higherlevel embeddings (Silberer and Lapata, 2014) . Our autoencoder consists of simple feedforward network. We trained two matrices W enc of size k \u00d7 d and W dec of size d \u00d7 k to learn efficient coding of word representation where k is the dimension of original word embedding and d is the dimension of compressed representation. Parameters are optimized to minimize cosine proximity loss:",
"cite_spans": [
{
"start": 137,
"end": 165,
"text": "(Hinton and Sejnowski, 1986)",
"ref_id": "BIBREF13"
},
{
"start": 303,
"end": 330,
"text": "(Silberer and Lapata, 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
{
"text": "L = x\u2208T 1 \u2212x \u2022 x ||x|| \u2022 ||x|| where x is a k-dimensional word embedding, T is a training data set of size 20150 words,x = f (W dec f (W enc x + b enc ) + b dec )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
{
"text": "and f is a ReLU non-linear activation function. We set d = 900.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "3.2"
},
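A minimal sketch of this autoencoder and its cosine proximity loss in PyTorch; the framework and the training loop are assumptions, while the architecture (k to d to k with ReLU) and d = 900 follow the text.

```python
# Sketch only: feedforward autoencoder (k -> d -> k, ReLU) trained with the cosine
# proximity loss above. PyTorch and the Adam optimizer are assumed choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordAutoencoder(nn.Module):
    def __init__(self, k, d=900):
        super().__init__()
        self.enc = nn.Linear(k, d)
        self.dec = nn.Linear(d, k)

    def forward(self, x):
        h = F.relu(self.enc(x))          # compressed d-dimensional code
        return F.relu(self.dec(h))       # reconstruction x_hat

def cosine_loss(x_hat, x):
    return (1.0 - F.cosine_similarity(x_hat, x, dim=1)).sum()

# Hypothetical training loop over the m x k matrix of concatenated embeddings E:
# model = WordAutoencoder(E.size(1)); opt = torch.optim.Adam(model.parameters())
# for _ in range(100):
#     opt.zero_grad(); loss = cosine_loss(model(E), E); loss.backward(); opt.step()
```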
{
"text": "We introduce the experiments taken to examine how well the representations embed word meanings incorporating distinct properties. First, we apply our proposed embedding method to a word similarity measure task and a hypernym prediction task to measure its overall quality. Then we conducted a series of experiments for analyzing the characteristics of word meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "To assess the overall quality of proposed embedding method, we examined its performance via the word similarity task on SimLex-999 , WordSim-353 (Agirre et al., 2009) , and MEN (Bruni et al., 2014) datasets. The similarity of each word pair is computed through cosine proximity, and we use Spearman's rank correlation as an evaluation metric. We also measure the performance of the different ensemble methods described in subsection 3.2. The result is compared with three baselines: Word2Vec, GloVe, and Table 1 : Spearman's correlation score on SimLex-999 (SL), WordSim-353 (WS), and MEN datasets. \"Avg. Human\" score is an interagreement between human annotators.",
"cite_spans": [
{
"start": 145,
"end": 166,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 177,
"end": 197,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 504,
"end": 511,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Similarity Measure and Hypernym Prediction",
"sec_num": "4.1"
},
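A small sketch of this evaluation protocol: cosine similarity between the two word vectors of each pair, then Spearman's rank correlation against the human scores; the (word1, word2, score) tuple format for the datasets is an assumption.

```python
# Sketch only: word-similarity evaluation. `pairs` is assumed to be a list of
# (word1, word2, human_score) tuples from SimLex-999, WordSim-353, or MEN.
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(embeddings, pairs):
    predicted, gold = [], []
    for w1, w2, score in pairs:
        if w1 not in embeddings or w2 not in embeddings:
            continue                                   # skip pairs with missing words
        v1, v2 = embeddings[w1], embeddings[w2]
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        predicted.append(cos)
        gold.append(score)
    return spearmanr(predicted, gold).correlation      # Spearman's rho
```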
{
"text": "Our proposed method clearly outperforms the baselines in all the datasets, with near-human performance in WordSim-353 and MEN. Among the ensemble methods, SVD gave the best result showing its strong capability of combining information from different modules for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Similarity Measure and Hypernym Prediction",
"sec_num": "4.1"
},
{
"text": "We also conducted a hypernym prediction experiment using HyperLex dataset (Vuli\u0107 et al., 2016) to analyze the quality of proposed embedding from a different perspective. Given a pair of two words, the task is to predict the degree of the first word being a type of the second word, for example \"To what degree is chemistry a type of science?\". We build a 2-layer feedforward network of dimensions 1000 and 500 respectively with a ReLU activation function to predict the hypernyms. Then the network is trained to predict the degree of hypernymity of the scale from 0.0 to 10.0 to minimize categorial cross-entropy loss using AdaGrad optimizer on the training set. The final evaluation metrics are obtained by calculating Spearman's correlation between the predicted degrees and the test set.",
"cite_spans": [
{
"start": 74,
"end": 94,
"text": "(Vuli\u0107 et al., 2016)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Similarity Measure and Hypernym Prediction",
"sec_num": "4.1"
},
{
"text": "As in Table 2 , the proposed method shows the highest correlation to the test set among all the cases including the baselines. Among the ensemble method, SVD again shows the highest performance. For the hypernym prediction, NMF gives a slightly better result than the simple weighted concatenation.",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 13,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Word Similarity Measure and Hypernym Prediction",
"sec_num": "4.1"
},
{
"text": "While the corpus-driven word representations such as Word2vec and GLoVe have been shown to Test correlation (\u03c1) Word2Vec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": ".319 GloVe .391 MetaEmb .400 Polymodal (CONC) .445 Polymodal (SVD) .463 Polymodal (NMF) .454 Polymodal (AE)",
"cite_spans": [
{
"start": 39,
"end": 45,
"text": "(CONC)",
"ref_id": null
},
{
"start": 61,
"end": 66,
"text": "(SVD)",
"ref_id": null
},
{
"start": 82,
"end": 87,
"text": "(NMF)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": ".434 embed some word-to-word relations such as men is to women as king is to queen, but it is still uncertain that they are also able to capture the properties like has four legs or is delicious. To see how well the models capture such properties of words, we perform the property norms analysis. We utilize the CSLB concept property norms dataset (Devereux et al., 2014) which annotates the normalized feature labels to the set of concepts. This dataset provides the normalized features of five categories: visual perceptual, other perceptual, taxonomic, encyclopedic, and functional. C is the set of all concepts and F is the set of all normalized features in CSLB dataset where |C| = 638 and |F | = 5929. For f \u2208 F and c \u2208 C, c \u2208 C f if and only if c has the feature f where C f \u2282 C. The valid feature set F v is a subset of F such that f \u2208 F v only if there exist more than three concepts that have f , or equivalently, |C f | > 3.",
"cite_spans": [
{
"start": 348,
"end": 371,
"text": "(Devereux et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": "Then the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": "To examine how well each representation captures the normalized feature f i \u2208 F v , we calculate the cosine similarity between R(c) for c \u2208 C f i and R(C f i ) where R(\u2022) is a mapping from concept to its distributed representation and C f i is a centroid of all concepts in C f i or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": "C f i = 1 |C f i | c\u2208C f i R(c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": "In other words, C f i is a centroid of concepts that share the feature f i . We define the feature density as the cosine similarity between the concept and the centroid. That is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": "feature density(c, f ) = R(c) \u2022 C f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": "We calculate the feature density of all target concept-feature pairs assuming vectors that share Table 3 : Spearman's correlation between the CSLB normalized feature representation and the target distributed representation.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
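A minimal sketch of the feature density computation defined above; normalizing both the concept vector and the centroid so that the dot product is a proper cosine is an assumption consistent with the L2 normalization used elsewhere.

```python
# Sketch only: feature density as defined above. R maps each concept to its vector;
# concepts_with_feature is the set C_f of concepts annotated with feature f.
import numpy as np

def feature_density(R, concepts_with_feature):
    vecs = np.stack([R[c] for c in concepts_with_feature])
    centroid = vecs.mean(axis=0)                       # centroid of C_f
    centroid /= np.linalg.norm(centroid)
    densities = {}
    for c in concepts_with_feature:
        v = R[c] / np.linalg.norm(R[c])
        densities[c] = float(v @ centroid)             # cosine similarity to the centroid
    return densities
```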
{
"text": "the same features will also be distributionally similar (Erk, 2016) . In Figure 1 that summarizes the result, the proposed embedding method shows higher averages and lower deviations of feature densities across all the categories. It shows that our proposed embedding method is more capable of capturing normalized features than the baselines.",
"cite_spans": [
{
"start": 56,
"end": 67,
"text": "(Erk, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": "To further cement the observations, we calculate Spearman's correlation of word similarity measures between the normalized feature representation and the target distributed representation. The normalized feature representation of a concept is constructed as an one-hot vector which assigns 1 if the concept has the feature and 0 otherwise, and then L2-normalized to have length 1. Then we calculate the correlations of similarity measures by the feature categories. The results are shown in Table 3 . While the proposed embedding method shows the highest correlation to the case of using all normalized features, it also shows a noticeable improvement in the visual perceptual category.",
"cite_spans": [],
"ref_spans": [
{
"start": 491,
"end": 498,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Property Norms Analysis",
"sec_num": "4.2"
},
{
"text": "One of the critical weakness of context-based word representation is that it cannot differentiate the sentiment polarity correctly. So we examine the ratio of neighbors that have same/opposite/neutral sentiment polarities with a Figure 2 : The ratio of 10 nearest neighbors that have same/opposite/neutral sentiment polarities of 15010 words. target word among 15010 words and see how this problem can be mitigated. Figure 2 illustrates the result. The three context-based approaches show roughly 20% of incorrect sentiment differentiation. This can be benefited greatly from the sentiment module of the proposed approach as this issue is almost perfectly resolved by simply attaching sentiment values to the embedding. The result might be straightforward but this can improve the quality of embedding greatly.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 2",
"ref_id": null
},
{
"start": 416,
"end": 424,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Positive vs Negative",
"sec_num": "4.3"
},
{
"text": "We hypothesize that the role of a certain module would be different depending on word characteristics such as the degree of concreteness. To validate this idea, we divided the Simlex-999 dataset into two groups for different degrees of concept concreteness. This corresponds to 500 pairs of concrete words vs. 499 pairs of abstract words. Then we examine the relative importance of the different modules to each group via an ablation test. The result is reported in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 466,
"end": 473,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Concrete vs Abstract",
"sec_num": "4.4"
},
{
"text": "All Concrete Abstract L (linear) . 442 .462 .449 T (syntactic) . 446 .439 .459 C (cognition) . Interesting properties are revealed through the ablation test. By comparing the results between the different word groups, we can observe that the importance of a certain word aspect varies depending on the word characteristics. While concrete words profit from perception embeddings, the sentiment and emotion aspects are somewhat disturbing. We can observe an opposite result for abstract words. This result is quite intuitive since we can easily imagine the perceptual image from a concrete concept but not from an abstract one like love.",
"cite_spans": [
{
"start": 24,
"end": 32,
"text": "(linear)",
"ref_id": null
},
{
"start": 35,
"end": 62,
"text": "442 .462 .449 T (syntactic)",
"ref_id": null
},
{
"start": 65,
"end": 92,
"text": "446 .439 .459 C (cognition)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modules",
"sec_num": null
},
{
"text": "For a deeper analysis, we further investigate the role of each module in different word groups. For instance, since concrete concepts are perceptionrevealing, they would benefit from a strong emphasis on the perception embedding. On the other hand, emotion-revealing word groups such as abstract concepts would be opposite. Noting that the different types of words may have different sensitivity toward the modules, we adjusted the relative weights for a particular aspect of interest to be from 0.1 to 3.5 while maintaining others to 1.0. Then we observed the changes of the performance in word similarity task. The result is shown in Fig-ure 3. Figure 3 : The result of sensitivity analysis. The weight of aspect-of-interest is adjusted while others are fixed to 1. These graphs reveal the distinct profiles of different word groups. Gradual patterns of emotion and perception are opposite for the concrete and abstract word groups.",
"cite_spans": [],
"ref_spans": [
{
"start": 636,
"end": 643,
"text": "Fig-ure",
"ref_id": null
},
{
"start": 647,
"end": 655,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modules",
"sec_num": null
},
{
"text": "The result of sensitivity analysis supports the idea that different word groups are influenced by each module with varying degrees. The x-axis refers to the relative weight of a particular aspect while setting the others to 1.0. The y-axis indicates the changes of Spearman's correlation score \u03c1 on Simlex-999. The results in Figure 3 illustrate the different preferences among different word groups, which show the distinct nature between the two groups. In particular, the gradual patterns revealed by increasing relative weights of perception and emotion are contrary to concrete and abstract words. Increasing the weight of perception is beneficial for concrete word groups but detrimental to abstract word groups. However an exactly reverse pattern can be observed for the emotion. Increasing the weight of emotion is advantageous for abstract words but adverse for concrete words.",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modules",
"sec_num": null
},
{
"text": "The \"similarity\" between two words is more strict term than the \"relatedness\". While the relatedness measures how much the two words are related to each other in some senses, the similarity measures how much the two words can be regarded as \"similar\" than just simply related. For example, consider the three word pairs: (bread, butter), (bread, toast), and (bread, stale). All of them can be regarded as \"related\" but only the (bread, toast) pairs can be regarded as \"similar\" because the other two words (butter and stale) are related but not similar to the \"bread\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity vs Relatedness",
"sec_num": "4.5"
},
{
"text": "The two data sets SimLex-999 and WordSim-353 capture this difference of similarity and relatedness. While the scores of WordSim-353 focus on the relatedness, those of the Simlex-999 deliberately try to distinguish between them. For example, a word pair (cloth, closet) is scored 8.00 in WordSim-353 dataset whereas 1.96 in the SimLex-999 dataset. To capture the difference between relatedness and similarity and see what modules contributes most to capture the similarity or the relatedness, we conduct a sensitivity analysis on WordSim-353 and SimLex-999 dataset. Figure 4 shows the result of sensitivity analysis. In the SimLex-999 dataset which focuses on the word similarity, the cognition (lexical relation) and the sentiment modules turned out to be important. On the other hand, in the WordSim-353 dataset which focuses on the word relatedness, both linear context and syntactic context are turned out to be critical. This difference can be interpreted that the word properties extracted from the contexts are of the word relatedness, and in order to differentiate the similarity from the relatedness, additional properties such as lexical relations and sentiment polarities need to be introduced.",
"cite_spans": [],
"ref_spans": [
{
"start": 565,
"end": 573,
"text": "Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Similarity vs Relatedness",
"sec_num": "4.5"
},
{
"text": "In this paper, we raise a question if the current distributed word representations sufficiently capture different aspects of word meanings. To address the question, we proposed a novel method for composing word embeddings, inspired by a human cognitive model. We compared our proposed embedding to the current state-of-the-art distributed word embedding methods such as Word2Vec, GloVe, and Meta-embedding from the perspective of capturing diverse aspects of word meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our proposed embedding performs better in the word similarity and hypernym prediction tasks than the baselines. We further conducted a series of experiments to study how well the word meanings are reflected by the representations and analyze the relationships between the modules and the word properties. From the property norms analysis, our findings show that the proposed method can capture the visual properties of words better than the baselines. Also, harnessing sentiment values helps the embedding greatly to resolve the sentiment polarity issue which is a limitation of current context-driven approaches. Based on the experimental results, we can conclude that some aspects of word meanings are not captured enough from the corpus and we can further improve the word embedding by referring to additional data related to a human mind model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Finally, using our proposed method we show the different characteristics of concrete and abstract word groups and the difference between the concept relatedness and the concept similarity. We observe that emotional information is more important than the perceptual information for the abstract words whereas the opposite result is observed for the concrete words. Also, we see that the context-driven embeddings mostly capture the word relatedness and therefore lexical relation and sentiment polarities would be beneficial when considering the word similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In conclusion, we concentrate on analyzing the relationships between the diverse aspects of word meanings and their distributed representations and propose a way to improve them by harnessing additional information based on the human cognitive model. Since our proposed method largely relies on the labeled extra data, this work has a limitation in terms of the scalability. For future research, we need to explore unsupervised ways of introducing perceptual properties and lexical relationships of words and annotating their sentiment and emotional properties. It will make our method more scalable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and WordNet-based approaches",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19-27. Association for Computational Linguistics, 2009.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A practical algorithm for topic modeling with provable guarantees",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Halpern",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Moitra",
"suffix": ""
},
{
"first": "Yichen",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "280--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Rong Ge, Yonatan Halpern, David M Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. A practical algorithm for topic modeling with provable guarantees. In ICML (2), pages 280-288, 2013.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Baccianella",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "LREC",
"volume": "10",
"issue": "",
"pages": "2200--2204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebas- tiani. Sentiwordnet 3.0: An enhanced lexical re- source for sentiment analysis and opinion mining. In LREC, volume 10, pages 2200-2204, 2010.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Perceptual symbol system",
"authors": [
{
"first": "",
"middle": [],
"last": "Lw",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Barsalou",
"suffix": ""
}
],
"year": 1999,
"venue": "Behavioral and Brain Science",
"volume": "22",
"issue": "4",
"pages": "577--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LW Barsalou. Perceptual symbol system. Behavioral and Brain Science, 22(4):577-609, 1999.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Artif. Intell. Res.(JAIR)",
"volume": "",
"issue": "",
"pages": "49--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. Mul- timodal distributional semantics. J. Artif. Intell. Res.(JAIR), 49(1-47), 2014.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. A unified archi- tecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learn- ing, pages 160-167. ACM, 2008.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The centre for speech, language and the brain (CSLB) concept property norms. Behavior research methods",
"authors": [
{
"first": "J",
"middle": [],
"last": "Barry",
"suffix": ""
},
{
"first": "Lorraine",
"middle": [
"K"
],
"last": "Devereux",
"suffix": ""
},
{
"first": "Jeroen",
"middle": [],
"last": "Tyler",
"suffix": ""
},
{
"first": "Billi",
"middle": [],
"last": "Geertzen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Randall",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "46",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barry J Devereux, Lorraine K Tyler, Jeroen Geertzen, and Billi Randall. The centre for speech, language and the brain (CSLB) concept property norms. Be- havior research methods, 46(4):1119, 2014.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What do you know about an alligator when you know the company it keeps?",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "9",
"issue": "",
"pages": "17--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk. What do you know about an alligator when you know the company it keeps? Semantics and Pragmatics, 9:17-1, 2016.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Non-distributional word vector representations",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.05230"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Chris Dyer. Non-distributional word vector representations. arXiv preprint arXiv:1506.05230, 2015.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sujay",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Jauhar",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Hovy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL, 2015.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributional structure. Word",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zellig",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S Harris. Distributional structure. Word, 10(2- 3):146-162, 1954.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ensembles of semantic spaces: On combining models of distributional semantics with applications in healthcare",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Henriksson",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Henriksson. Ensembles of semantic spaces: On combining models of distributional semantics with applications in healthcare. PhD thesis, Department of Computer and Systems Sciences, Stockholm Uni- versity, 2015.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. Simlex- 999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 2016.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning and releaming in Boltzmann machines",
"authors": [
{
"first": "E",
"middle": [],
"last": "Geoffrey",
"suffix": ""
},
{
"first": "Terrence",
"middle": [
"J"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sejnowski",
"suffix": ""
}
],
"year": 1986,
"venue": "Parallel Distrilmted Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton and Terrence J Sejnowski. Learn- ing and releaming in Boltzmann machines. Parallel Distrilmted Processing, 1, 1986.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long shortterm memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. Long short- term memory. Neural computation, 9(8):1735- 1780, 1997.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep embodiment: grounding semantics in perceptual modalities",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela. Deep embodiment: grounding seman- tics in perceptual modalities. Technical report, Uni- versity of Cambridge, Computer Laboratory, 2017.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multi-and crossmodal semantics beyond vision: Grounding in auditory perception",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2461--2470",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela and Stephen Clark. Multi-and cross- modal semantics beyond vision: Grounding in au- ditory perception. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2461-2470, Lisbon, Portu- gal, September 2015. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Grounding semantics in olfactory perception",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Luana",
"middle": [],
"last": "Bulat",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "231--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Luana Bulat, and Stephen Clark. Grounding semantics in olfactory perception. In ACL (2), pages 231-236, 2015.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multimodal neural language models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
}
],
"year": 2014,
"venue": "Icml",
"volume": "14",
"issue": "",
"pages": "595--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Multimodal neural language models. In Icml, volume 14, pages 595-603, 2014a.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unifying visual-semantic embeddings with multimodal neural language models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1411.2539"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014b.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dependency-based word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL 2014",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. Dependency-based word embeddings. In ACL 2014, 2014.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Microsoft COCO: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "740--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. Microsoft COCO: Com- mon objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Richard Socher, and Christopher D Manning. Better word representations with recur- sive neural networks for morphology. In CoNLL, pages 104-113, 2013.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Clinical neuropsychology: Theoretical foundations for practitioners",
"authors": [
{
"first": "Mark",
"middle": [
"E"
],
"last": "Maruish",
"suffix": ""
},
{
"first": "James",
"middle": [
"A"
],
"last": "Moses",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark E Maruish and James A Moses. Clinical neu- ropsychology: Theoretical foundations for practi- tioners. Psychology Press, 2013.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119, 2013.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. Wordnet: a lexical database for en- glish. Communications of the ACM, 38(11):39-41, 1995.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Crowdsourcing a word-emotion association lexicon",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Intelligence",
"volume": "29",
"issue": "3",
"pages": "436--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Peter D Turney. Crowdsourc- ing a word-emotion association lexicon. Computa- tional Intelligence, 29(3):436-465, 2013.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word represen- tation. In EMNLP, volume 14, pages 1532-1543, 2014.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Individual and developmental differences in semantic priming: Empirical and computational support for a singlemechanism account of lexical processing",
"authors": [
{
"first": "David",
"middle": [
"C"
],
"last": "Plaut",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Booth",
"suffix": ""
}
],
"year": 2000,
"venue": "Psychological review",
"volume": "107",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David C Plaut and James R Booth. Individual and de- velopmental differences in semantic priming: Em- pirical and computational support for a single- mechanism account of lexical processing. Psycho- logical review, 107(4):786, 2000.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Words in the brain's language",
"authors": [
{
"first": "Friedemann",
"middle": [],
"last": "Pulverm\u00fcller",
"suffix": ""
}
],
"year": 1999,
"venue": "Behavioral and brain sciences",
"volume": "22",
"issue": "02",
"pages": "253--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Friedemann Pulverm\u00fcller. Words in the brain's lan- guage. Behavioral and brain sciences, 22(02):253- 279, 1999.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A multimodal LDA model integrating textual, cognitive and visual modalities",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1146--1157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller and Sabine Schulte Im Walde. A multi- modal LDA model integrating textual, cognitive and visual modalities. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 1146-1157, 2013.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Contextual correlates of synonymy",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Rubenstein",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Goodenough",
"suffix": ""
}
],
"year": 1965,
"venue": "Communications of the ACM",
"volume": "8",
"issue": "10",
"pages": "627--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Rubenstein and John B Goodenough. Contex- tual correlates of synonymy. Communications of the ACM, 8(10):627-633, 1965.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Local spectral variability features for speaker verification",
"authors": [
{
"first": "Md",
"middle": [],
"last": "Sahidullah",
"suffix": ""
},
{
"first": "Tomi",
"middle": [],
"last": "Kinnunen",
"suffix": ""
}
],
"year": 2016,
"venue": "Digital Signal Processing",
"volume": "50",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Sahidullah and Tomi Kinnunen. Local spectral variability features for speaker verification. Digital Signal Processing, 50:1-11, 2016.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Wind noise reduction using non-negative sparse coding",
"authors": [
{
"first": "Mikkel",
"middle": [
"N"
],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Larsen",
"suffix": ""
},
{
"first": "Fu-Tien",
"middle": [],
"last": "Hsiao",
"suffix": ""
}
],
"year": 2007,
"venue": "Machine Learning for Signal Processing",
"volume": "",
"issue": "",
"pages": "431--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikkel N Schmidt, Jan Larsen, and Fu-Tien Hsiao. Wind noise reduction using non-negative sparse coding. In Machine Learning for Signal Process- ing, 2007 IEEE Workshop on, pages 431-436. IEEE, 2007.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning grounded meaning representations with autoencoders",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "721--732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. Learning grounded meaning representations with autoencoders. In ACL (1), pages 721-732, 2014.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.1556"
]
},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recog- nition. arXiv preprint arXiv:1409.1556, 2014.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The symbol grounding problem has been solved. so whats next. Symbols and embodiment: Debates on meaning and cognition",
"authors": [
{
"first": "Luc",
"middle": [],
"last": "Steels",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "223--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luc Steels. The symbol grounding problem has been solved. so whats next. Symbols and embodiment: Debates on meaning and cognition, pages 223-244, 2008.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A framework for regularized non-negative matrix factorization, with application to the analysis of gene expression data",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Taslaman",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2012,
"venue": "PloS one",
"volume": "7",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Taslaman and Bj\u00f6rn Nilsson. A framework for regularized non-negative matrix factorization, with application to the analysis of gene expression data. PloS one, 7(11):e46331, 2012.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Hyperlex: A large-scale evaluation of graded lexical entailment",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.02117"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. Hyperlex: A large-scale evalua- tion of graded lexical entailment. arXiv preprint arXiv:1608.02117, 2016.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "ICML",
"volume": "14",
"issue": "",
"pages": "77--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual atten- tion. In ICML, volume 14, pages 77-81, 2015.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Learning metaembeddings by using ensembles of embedding sets",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04257"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. Learning meta- embeddings by using ensembles of embedding sets. arXiv preprint arXiv:1508.04257, 2015.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The result of sensitivity analysis on word similarity and word relatedness. While context information is important to the relatedness, sentiment polarity and lexical relations are important to the similarity."
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>: Spearman's correlation score of Hyper-</td></tr><tr><td>Lex test dataset and predictions. The proposed</td></tr><tr><td>method shows the highest correlation with the test</td></tr><tr><td>dataset.</td></tr></table>",
"num": null,
"html": null,
"text": ""
}
}
}
}