{ "paper_id": "D09-1045", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:38:07.177285Z" }, "title": "Language Models Based on Semantic Composition", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "postCode": "EH8 9LW", "settlement": "Edinburgh", "country": "UK" } }, "email": "jeff.mitchell@ed.ac.uk" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "postCode": "EH8 9LW", "settlement": "Edinburgh", "country": "UK" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we propose a novel statistical language model to capture long-range semantic dependencies. Specifically, we apply the concept of semantic composition to the problem of constructing predictive history representations for upcoming words. We also examine the influence of the underlying semantic space on the composition task by comparing spatial semantic representations against topic-based ones. The composition models yield reductions in perplexity when combined with a standard n-gram language model over the n-gram model alone. We also obtain perplexity reductions when integrating our models with a structured language model.", "pdf_parse": { "paper_id": "D09-1045", "_pdf_hash": "", "abstract": [ { "text": "In this paper we propose a novel statistical language model to capture long-range semantic dependencies. Specifically, we apply the concept of semantic composition to the problem of constructing predictive history representations for upcoming words. We also examine the influence of the underlying semantic space on the composition task by comparing spatial semantic representations against topic-based ones. The composition models yield reductions in perplexity when combined with a standard n-gram language model over the n-gram model alone. We also obtain perplexity reductions when integrating our models with a structured language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Statistical language modeling plays an important role in many areas of natural language processing including speech recognition, machine translation, and information retrieval. The prototypical use of language models is to assign probabilities to sequences of words. By invoking the chain rule, these probabilities are generally estimated as the product of conditional probabilities P(w i |h i ) of a word w i given the history of preceding words h i \u2261 w i\u22121 1 . In theory, the history could span any number of words up to w i such as sentences or even a paragraphs. In practice, however, it has proven challenging to deal with the combinatorial growth in the number of possible histories which in turn impacts reliable parameter estimation. A simple and effective strategy is to truncate the chain rule to include only the n-1 preceding words (n is often set within the range of 3-5). The simplification reduces the number of free parameters. 
However, low values of n impose an artificially local horizon to the language model, and compromise its ability to capture long-range dependencies, such as syntactic relationships and semantic or thematic constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The literature offers many examples of how to overcome this limitation, essentially by allowing the modulation of probabilities by dependencies which extend to words beyond the n-gram horizon. Cache language models (Kuhn and de Mori, 1992) increase the probability of words observed in the history, e.g., by some factor which decays exponentially with distance. Trigger models (Rosenfeld, 1996 ) go a step further by allowing arbitrary word pairs to be incorporated into the cache. Structured language models (e.g., Roark (2001) ) go beyond the representation of history as a linear sequence of words to capture the syntactic constructions in which these words are embedded.", "cite_spans": [ { "start": 215, "end": 239, "text": "(Kuhn and de Mori, 1992)", "ref_id": "BIBREF19" }, { "start": 377, "end": 393, "text": "(Rosenfeld, 1996", "ref_id": "BIBREF30" }, { "start": 516, "end": 528, "text": "Roark (2001)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is also possible to build representations of history which are semantic rather than syntactic (Bellegarda, 2000; Coccaro and Jurafsky, 1998; Gildea and Hofmann, 1999). In this approach, estimates for the probabilities of upcoming words are derived from a comparison of their semantic content with the content of the history so far. The semantic representations, in this case, are vectors derived from the distributional properties of words in a corpus, based on the insight that words which are semantically similar will be found in similar contexts (Harris, 1968; Firth, 1957) . Although the construction of a semantic representation for the history is crucial to this approach, the underlying vector-based models are primarily designed to represent isolated words rather than word sequences. Ideally, we would like to compose the meaning of the history out of its constituent parts. This is by no means a new idea. Much work in linguistic theory (Partee, 1995; Montague, 1974) has been devoted to compositionality, the process of determining the meaning of complex expressions from simpler ones. Previous work either ignores this issue (e.g., Bellegarda (2000)) or simply computes the centroid of the vectors representing the history (e.g., Coccaro and Jurafsky (1998)). 
This is motivated primarily by mathematical convenience rather than by empirical evidence.", "cite_spans": [ { "start": 97, "end": 115, "text": "(Bellegarda (2000;", "ref_id": "BIBREF0" }, { "start": 116, "end": 143, "text": "Coccaro and Jurafsky (1998;", "ref_id": "BIBREF7" }, { "start": 144, "end": 169, "text": "Gildea and Hofmann (1999)", "ref_id": "BIBREF12" }, { "start": 555, "end": 569, "text": "(Harris, 1968;", "ref_id": "BIBREF15" }, { "start": 570, "end": 582, "text": "Firth, 1957)", "ref_id": "BIBREF10" }, { "start": 957, "end": 971, "text": "(Partee, 1995;", "ref_id": "BIBREF26" }, { "start": 972, "end": 987, "text": "Montague, 1974)", "ref_id": "BIBREF24" }, { "start": 1253, "end": 1280, "text": "Coccaro and Jurafsky (1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our earlier work (Mitchell and Lapata, 2008) we formulated composition as a function of two vectors and introduced a variety of models based on addition and multiplication. In this paper we apply vector composition to the problem of constructing predictive history representations for language modeling. Besides integrating composition with language modeling, a task which is novel to our knowledge, our approach also serves as a valuable testbed of our earlier framework which we originally evaluated on a small scale verb-subject similarity task. We also investigate how the choice of the underlying semantic representation interacts with the choice of composition function by comparing a spatial model that represents words as vectors in a high-dimensional space against a probabilistic model that represents words as topic distributions.", "cite_spans": [ { "start": 20, "end": 47, "text": "(Mitchell and Lapata, 2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our results show that the proposed composition models yield reductions in perplexity when combined with a standard n-gram model over the n-gram model alone. We also show that with an appropriate composition function spatial models outperform the more sophisticated topic models. Finally, we obtain further perplexity reductions when our models are integrated with a structured language model, indicating that the two approaches to language modeling are complementary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The insight that words with similar meanings will tend to be distributed in similar contexts has given rise to a number of approaches that construct semantic representations from corpora. Broadly speaking, these models come in two flavors. Semantic space models represent the meaning of words in terms of vectors, with the vector components being derived from the distributional statistics of those words. Essentially, these models provide a simple procedure for constructing spatial representations of word meaning. Topic models, in contrast, impose a probabilistic model onto those distributional statistics, under the assumption that hidden topic variables drive the process that generates words. 
Both approaches represent the meanings of words in terms of an n-dimensional series of values, but whereas the semantic space model treats those values as defining a vector with spatial properties, the topic model treats them as a probability distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Models of Semantics", "sec_num": "2.1" }, { "text": "A simple and popular (McDonald, 2000; Bullinaria and Levy, 2007; Lowe, 2000) way to construct a semantic space model is to associate each vector component with a particular context word, and assign it a value based on the strength of its co-occurrence with the target (i.e., the word for which a semantic representation is being constructed). For example, in Mitchell and Lapata (2008) we used the 2,000 most frequent content words in a corpus as contexts, and defined co-occurrence in terms of the context word being present in a five-word window on either side of the target word. We calculated the ratio of the probability of the context word given the target word to the overall probability of the context word and used these values as vector components. This procedure has the benefits of simplicity and also of being largely free of any additional theoretical assumptions over and above the distributional approach to semantics. This is not to say that more sophisticated approaches have not been developed or that they are not useful. Much work has been devoted to enriching semantic space models with syntactic information (e.g., Grefenstette, 1994; Pad\u00f3 and Lapata, 2007), selectional preferences (Erk and Pad\u00f3, 2008), or with identifying optimal ways of defining the vector components (e.g., Bullinaria and Levy, 2007).", "cite_spans": [ { "start": 21, "end": 37, "text": "(McDonald, 2000;", "ref_id": "BIBREF22" }, { "start": 38, "end": 64, "text": "Bullinaria and Levy, 2007;", "ref_id": "BIBREF3" }, { "start": 65, "end": 76, "text": "Lowe, 2000)", "ref_id": "BIBREF21" }, { "start": 359, "end": 385, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF23" }, { "start": 1149, "end": 1168, "text": "Grefenstette (1994;", "ref_id": "BIBREF13" }, { "start": 1169, "end": 1191, "text": "Pad\u00f3 and Lapata (2007)", "ref_id": "BIBREF25" }, { "start": 1219, "end": 1239, "text": "(Erk and Pad\u00f3, 2008)", "ref_id": "BIBREF9" }, { "start": 1314, "end": 1340, "text": "Bullinaria and Levy (2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Models of Semantics", "sec_num": "2.1" }, { "text": "The semantic space discussed thus far is based on word co-occurrence statistics. However, the statistics of how words are distributed across documents also carry useful semantic information. Latent Semantic Analysis (LSA; Landauer and Dumais, 1997) utilizes precisely this distributional information to uncover hidden semantic factors by means of dimensionality reduction. Singular value decomposition (SVD; Berry et al., 1994) is applied to a word-document co-occurrence matrix which is factored into a product of a number of other matrices; one of them represents words in terms of the semantic factors and another represents documents in terms of the same factors. The algebraic relation between these matrices can be used to show that any document vector is a linear combination of the vectors representing the words it contains. 
Thus, within this paradigm it is nat-ural to treat multi-word structures as a \"pseudodocument\" and represent them via linear combinations of word vectors.", "cite_spans": [ { "start": 226, "end": 252, "text": "Landauer and Dumais (1997)", "ref_id": "BIBREF20" }, { "start": 412, "end": 431, "text": "Berry et al. (1994)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Models of Semantics", "sec_num": "2.1" }, { "text": "Due to its generality, LSA has proven a valuable analysis tool with a wide range of applications. However, the SVD procedure is somewhat ad-hoc lacking a sound statistical foundation. Probabilistic Latent Semantic Analysis (pLSA, Hofmann (2001) ) casts the relationship between documents and words in terms of a generative model based on a set of hidden topics. Documents are represented by distributions over topics and topics are distributions over words. Thus the mixture of topics in any document determines its vocabulary. Maximum likelihood estimation of these distributions over a word-document matrix has a comparable effect to SVD in LSA: a set of hidden semantic factors, in this case topics, are extracted and documents and words are represented by these topics.", "cite_spans": [ { "start": 223, "end": 244, "text": "(pLSA, Hofmann (2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Models of Semantics", "sec_num": "2.1" }, { "text": "Latent Dirichlet Allocation (Griffiths et al., 2007; Blei et al., 2003) enhances further the mathematical foundation of this approach. Whereas pLSA treats each document as a separate, independent mixture of topics, LDA assumes that the topic distributions of documents are generated by a Dirichlet distribution. Thus, LDA is a probabilistic model of the whole document collection. In this model the process of generating a document can be described as follows:", "cite_spans": [ { "start": 28, "end": 52, "text": "(Griffiths et al., 2007;", "ref_id": "BIBREF14" }, { "start": 53, "end": 71, "text": "Blei et al., 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Models of Semantics", "sec_num": "2.1" }, { "text": "1. draw a multinomial distribution \u03b8 from a Dirichlet distribution parametrized by \u03b1 2. for each word in a document:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Models of Semantics", "sec_num": "2.1" }, { "text": "(a) draw a topic z k from the multinomial distribution characterized by \u03b8 (b) draw a word from a multinomial distribution conditioned on the topic z k and word probabilities \u03b2 Under this model, constructing a representation for a multi-word sequence amounts to estimating the topic proportions for that sequence. 1 Structure here arises from the mathematical form of the model, as opposed to any linguistic assumptions. Without anticipating our results too much, we should point out that several features of the LDA model are likely to affect the representation of multi-word sequences. Firstly, it is a top-down generative model (the topic proportions for a document are first selected and then this drives the generation of words) as opposed to a bottom-up constructive process (words modulate each other to produce a complex representation of their combination). Secondly, the top level Dirichlet distribution is likely to lead to documents being dominated by a small number of topics, producing sparse vectors. 
And lastly, the assumption that words are generated independently means the interaction between them is not modeled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Models of Semantics", "sec_num": "2.1" }, { "text": "A common approach to embedding semantic representations within language modeling is to measure the semantic similarity between an upcoming word and its history and use it to modify the probabilities from an n-gram model. In this way, the n-gram's sensitivity to short-range dependencies is enriched with information about longer-range semantic coherence. Much of previous work has taken this approach (Bellegarda, 2000; Coccaro and Jurafsky, 1998; Wandmacher and Antoine, 2007) , whilst relying on LSA to provide semantic representations for individual words. Some authors (Coccaro and Jurafsky, 1998; Wandmacher and Antoine, 2007) use the geometric notion of a vector centroid to construct representations of history, whereas others (Bellegarda, 2000; Deng and Khundanpur, 2003) use the idea of a \"pseudodocument\", which is derived from the algebraic relation between documents and words assumed within LSA. They all derive P(w i |h i ), the probability of an upcoming word given its history, from the cosine similarity measure which must be somehow normalized in order to yield well-formed probability estimates. The approach of Gildea and Hofmann (1999) overcomes this difficulty by using representations constructed with pLSA, which have a direct probabilistic interpretation. As a result, the probability of an upcoming word given the history can be derived naturally and directly, avoiding the need for ad-hoc transformations. In constructing their representation of history, Gildea and Hofmann (1999) use an online Expectation Maximization process, which derives from the probabilistic basis of pLSA, to update the history with new words.", "cite_spans": [ { "start": 401, "end": 419, "text": "(Bellegarda, 2000;", "ref_id": "BIBREF0" }, { "start": 420, "end": 447, "text": "Coccaro and Jurafsky, 1998;", "ref_id": "BIBREF7" }, { "start": 448, "end": 477, "text": "Wandmacher and Antoine, 2007)", "ref_id": "BIBREF33" }, { "start": 573, "end": 601, "text": "(Coccaro and Jurafsky, 1998;", "ref_id": "BIBREF7" }, { "start": 602, "end": 631, "text": "Wandmacher and Antoine, 2007)", "ref_id": "BIBREF33" }, { "start": 753, "end": 779, "text": "Deng and Khundanpur, 2003)", "ref_id": "BIBREF8" }, { "start": 1131, "end": 1156, "text": "Gildea and Hofmann (1999)", "ref_id": "BIBREF12" }, { "start": 1482, "end": 1507, "text": "Gildea and Hofmann (1999)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Language Modeling using Semantic Representations", "sec_num": "2.2" }, { "text": "Extensions on the basic semantic language models sketched above involve representing the history by multiple LSA models of varying granularity in an attempt to capture topic, subtopic, and local information (Zhang and Rudnicky, 2002) ; incorporating syntactic information by building the semantic space over words and their syntactic annotations (Kanejiya et al., 2004) ; and treating the LSA similarity as a feature in a maximum entropy language model (Deng and Khundanpur, 2003) .", "cite_spans": [ { "start": 207, "end": 233, "text": "(Zhang and Rudnicky, 2002)", "ref_id": "BIBREF35" }, { "start": 346, "end": 369, "text": "(Kanejiya et al., 2004)", "ref_id": "BIBREF17" }, { "start": 453, "end": 480, "text": "(Deng and Khundanpur, 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], 
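As a schematic illustration of this family of approaches, the sketch below builds a centroid representation of the history, scores candidate words by cosine similarity, and renormalizes the scores into a distribution; the stand-in vectors and vocabulary are assumptions for illustration rather than the setup of any of the cited systems:

```python
# Schematic sketch of the cosine-based rescoring recipe described above;
# the stand-in vectors, vocabulary, and normalization are illustrative
# assumptions, not the setup of any of the cited systems.
import numpy as np

def history_centroid(history_words, vectors):
    """Represent the history as the centroid of its word vectors."""
    return np.mean([vectors[w] for w in history_words], axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def semantic_probs(history_words, candidates, vectors):
    """Turn cosine similarities into a distribution by explicit renormalization."""
    h = history_centroid(history_words, vectors)
    sims = np.array([max(cosine(vectors[w], h), 0.0) for w in candidates])
    return dict(zip(candidates, sims / sims.sum()))

rng = np.random.default_rng(0)
words = ["stocks", "bonds", "banana", "market", "shares"]
vectors = {w: rng.random(50) for w in words}    # stand-in word vectors
print(semantic_probs(["market", "shares"], ["stocks", "bonds", "banana"], vectors))
```

The explicit renormalization step reflects the difficulty noted above: cosine similarities do not sum to one and take no account of word frequency, a point we return to in Section 4.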
"eq_spans": [], "section": "Language Modeling using Semantic Representations", "sec_num": "2.2" }, { "text": "The problem of vector composition has received relatively little attention within natural language processing. Attempts to use tensor products (Smolensky, 1990; Clark et al., 2008; Widdows, 2008) as a means of binding one vector to another face major computational difficulties as their dimensionality grows exponentially with the number of constituents being composed. To overcome this problem, other techniques (Plate, 1995) have been proposed in which the binding of two vectors results in a vector which has the same dimensionality as its components. Crucially, the success of these methods depends on the assumption that the vector components are randomly distributed. This is problematic for modeling language which has regular structure. Given the above considerations, in Mitchell and Lapata (2008) we introduce a general framework for studying vector composition, which we formulate as a function f of two vectors u and v:", "cite_spans": [ { "start": 143, "end": 160, "text": "(Smolensky, 1990;", "ref_id": "BIBREF31" }, { "start": 161, "end": 180, "text": "Clark et al., 2008;", "ref_id": "BIBREF6" }, { "start": 181, "end": 195, "text": "Widdows, 2008)", "ref_id": "BIBREF34" }, { "start": 413, "end": 426, "text": "(Plate, 1995)", "ref_id": "BIBREF28" }, { "start": 780, "end": 806, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h = f (u, v)", "eq_num": "(1)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "where h denotes the composition of u and v. Different composition models arise, depending on how f is chosen. Our earlier work (Mitchell and Lapata, 2008) explored two broad classes of models based on additive and multiplicative functions. Additive models are the most common method of vector combination in the literature. They have been applied to a wide variety of tasks including document coherence (Foltz et al., 1998) , essay grading (Landauer and Dumais, 1997) , modeling selectional restrictions (Kintsch, 2001) , and notably language modeling (Coccaro and Jurafsky, 1998; Wandmacher and Antoine, 2007) :", "cite_spans": [ { "start": 127, "end": 154, "text": "(Mitchell and Lapata, 2008)", "ref_id": "BIBREF23" }, { "start": 403, "end": 423, "text": "(Foltz et al., 1998)", "ref_id": "BIBREF11" }, { "start": 440, "end": 467, "text": "(Landauer and Dumais, 1997)", "ref_id": "BIBREF20" }, { "start": 504, "end": 519, "text": "(Kintsch, 2001)", "ref_id": null }, { "start": 552, "end": 580, "text": "(Coccaro and Jurafsky, 1998;", "ref_id": "BIBREF7" }, { "start": 581, "end": 610, "text": "Wandmacher and Antoine, 2007)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h i = u i + v i", "eq_num": "(2)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "Vector addition (or averaging, which is equivalent under the cosine similarity measure) is a computationally efficient composition model as it does not increase the dimensionality of the resulting vector. 
However, the idea of averaging is somewhat counterintuitive from a linguistic perspective. Composition of simple elements onto more complex ones must allow the construction of novel meanings which go beyond those of the individual elements (Pinker, 1994) . In Mitchell and Lapata (2008) we argue that composition models based on multiplication address this problem:", "cite_spans": [ { "start": 445, "end": 459, "text": "(Pinker, 1994)", "ref_id": "BIBREF27" }, { "start": 465, "end": 491, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h i = u i \u2022 v i", "eq_num": "(3)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "Whereas the addition of vectors 'lumps their content together', multiplication picks out the content relevant to their combination by scaling each component of one with the strength of the corresponding component of the other. This argument is appealing, especially if one is interested in explaining how the meaning of a verb is modulated by its subject. Here, we also develop a complementary, probabilistic argument for the validity of this model. Let us assume that semantic vectors are based on components defined as the ratio of the conditional probability of a context word given the target word to the overall probability of the context word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v i = p(context i |target) p(context i )", "eq_num": "(4)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "These vectors represent the distributional properties of a given target word in terms of the strength of its co-occurrence with a set of context words. Dividing through by the overall probability of each context word prevents the vectors being dominated by the most frequent context words, which will often also have the highest conditional probabilities. Let us assume vectors u and v represent target words w 1 and w 2 . 
Now, when we compose these vectors using the multiplicative model and the components definition in (4), we obtain:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h i = v i \u2022 u i = p(c i |w 1 ) p(c i ) p(c i |w 2 ) p(c i )", "eq_num": "(5)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "And by Bayes' theorem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h i = p(w 1 |c i )p(w 2 |c i ) p(w 1 )p(w 2 )", "eq_num": "(6)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "Assuming w 1 and w 2 are independent and applying Bayes' theorem again, h i becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h i \u2248 p(w 1 w 2 |c i ) p(w 1 w 2 ) = p(c i |w 1 w 2 ) p(c i )", "eq_num": "(7)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "By comparing to (4), we can see that the expression on the right hand side gives us something akin to the vector components we would expect when our target is the co-occurrence of w 1 and w 2 . Thus, for the multiplicative model, the combined vector h i can be thought of as an approximation to a vector representing the distributional properties of the phrase w 1 w 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "If multiplication results in a vector which is something like the representation of w 1 and w 2 , then addition produces a vector which is more like the representation of w 1 or w 2 . Suppose we were unsure whether a word token x was an instance of w 1 or of w 2 . It would be reasonable to express the probabilities of context words around this token in terms of the probabilities for w 1 and w 2 , assuming complete uncertainty between them:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(c i |x) = 1 2 p(c i |w 1 ) + 1 2 p(c i |w 2 )", "eq_num": "(8)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "Therefore, we could represent x with a vector, based on these probabilities, having the components:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x i = 1 2 p(c i |w 1 ) p(c i ) + 1 2 p(c i |w 2 ) p(c i )", "eq_num": "(9)" } ], "section": "Composition Models", "sec_num": "3" }, { "text": "Which is exactly the vector averaging approach to semantic composition. As more vectors are combined, vector addition will lead to greater generality rather than greater specificity. 
The multiplicative approach, on the other hand, picks out the components of the constituents that are relevant to the combination, and represents more faithfully the properties of their conjunction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "As an aside, we should point out that our earlier work (Mitchell and Lapata, 2008) introduced several other models, additive and multiplicative, besides the ones discussed here. We selected the additive model as a baseline and also due to its overwhelming popularity in the language modeling literature. The multiplicative model presented above performed best in our evaluation study (i.e., predicting verb-subject similarity).", "cite_spans": [ { "start": 55, "end": 82, "text": "(Mitchell and Lapata, 2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Composition Models", "sec_num": "3" }, { "text": "Estimating Probabilities In language modeling our aim is to derive probabilities, p(w|h), given the semantic representations of word, w, and its history, h, based on the assumption that probable words should be semantically coherent with the history. Semantic coherence is commonly measured via the cosine of the angle between two vectors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sim(w, h) = w \u2022 h |w||h| (10) w \u2022 h = \u2211 i w i h i", "eq_num": "(11)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "where w \u2022 h is the dot product of w and h. Coccaro and Jurafsky (1998) utilize this measure in their approach to language modeling. Unfortunately, they find it necessary to resort to a number of ad-hoc mechanisms to turn the cosine similarities into useful probabilities. The primary problem with the cosine measure is that, although its values lie between 0 and 1, they do not sum to 1, as probabilities must. Thus, some form of normalization is required. A further problem concerns the fact that such a measure takes no account of the underlying frequency of w, which is crucial for a probabilistic model. For example, encephalon and brain are roughly synonymous, and may be equally similar to some context, but brain may nonetheless be much more likely, as it is generally more common. An ideal measure would take account of the underlying probabilities of the elements involved and produce values that sum to 1. Our approach is to modify the dot product (equation (11)) on which the cosine measure is based. 
Assuming that our vector components are given by equation 4, the dot product becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w \u2022 h = \u2211 i p(c i |w) p(c i ) p(c i |h) p(c i )", "eq_num": "(12)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "which we modify to derive probabilities as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "p(w|h) = p(w) \u2211 i p(c i |w) p(c i ) p(c i |h) p(c i ) p(c i ) (13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "This expression now weights the sum with the independent probabilities of the context words and the word to be predicted. That this is indeed a valid probability can be seen by the fact it is equivalent to \u2211 i p(w|c i )p(c i |h). However, in constructing a representation of the history h, it is more convenient to work with equation (13) as it is based on vector components and can be readily used with the composition models presented in Mitchell and Lapata (2008) . Equation (13) allows us to derive probabilities from vectors representing a word and its prior history. We must also construct a representation of the history up to the nth word of a sentence. To do this, we combine, via some (additive or multiplicative) function f , the vector representing that word with the vector representing the history up to n \u2212 1 words:", "cite_spans": [ { "start": 440, "end": 466, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h n = f (w n , h n\u22121 ) (14) h 1 = w 1", "eq_num": "(15)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "One issue that must be resolved in implementing equation 14is that the history vector should remain correctly normalized. In other words, the products h i \u2022 p(c i ) must themselves be a valid distribution over context words. So, after each vector composition the history vector is normalized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h i =\u0125 i \u2211 j\u0125 j \u2022 p(c i )", "eq_num": "(16)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "Equations (13)-(16) define a language model that incorporates vector composition. To generate probability estimates, it requires a set of word vectors whose components are based on the ratio of probabilities described by equation 4. Our discussion thus far has assumed a spatial semantic space model similar to that employed in Mitchell and Lapata (2008) . However, there is no reason why the vectors should not be constructed by some other means. As mentioned earlier, in the LDA topic model, words are represented as distributions over topics. These distributions are essentially components of a vector v corresponding to the target word for which we wish to construct a semantic representation. 
Analogously to equation (4), we convert these probabilities to ratios of probabilities:", "cite_spans": [ { "start": 328, "end": 354, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v i = p(topic i |target) p(topic i )", "eq_num": "(17)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "Integrating with Other Language Models The models defined above are based on little more than semantic coherence. As such they will be only weakly predictive, since they largely ignore word order, which n-gram models primarily exploit. The simplest means to integrate semantic information with a standard language model involves combining two probability estimates as a weighted sum:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w|h) = \u03bb 1 p 1 (w|h) + (1 \u2212 \u03bb)p 2 (w|h)", "eq_num": "(18)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "Linear interpolation is guaranteed to produce valid probabilities, and has been used, for example, to integrate structured language models with n-gram models (Roark, 2001 ). However, it will work best when the models being combined are roughly equally predictive and have complementary strengths and weaknesses. If one model is much weaker than the other, linear interpolation will typically produce a model of intermediate strength (i.e., worse than the better model), with the weaker model contributing a form of smoothing at best. Therefore, based on equation 13, we express our semantic probabilities as the product of the unigram probability, p(w), and a semantic component, \u2206, which determines the factor by which this probability should be scaled up or down given the context in which it occurs.", "cite_spans": [ { "start": 158, "end": 170, "text": "(Roark, 2001", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w|h) = p(w) \u2022 \u2206(w, h) (19) \u2206(w, h) = \u2211 i p(c i |w) p(c i ) p(c i |h) p(c i ) p(c i )", "eq_num": "(20)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "Thus, it seems reasonable to integrate the n-gram model by replacing the unigram probabilities with the n-gram versions. 
2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w n ) = p(w n |w n\u22121 n\u22122 ) \u2022 \u2206(w n , h)", "eq_num": "(21)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "To obtain a true probability estimate we normaliz\u00ea p(w n ) by dividing through the sum of all word probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w n |w n\u22121 n\u22122 , h) =p (w n ) \u2211 wp (w)", "eq_num": "(22)" } ], "section": "Language Modeling", "sec_num": "4" }, { "text": "In integrating our semantic model with an n-gram model, we allow the latter to handle short range dependencies and have the former handle the longer dependencies outside the n-gram window. For this reason, the history h used by the semantic model in the prediction of w n only includes words up to w n\u22123 (i.e., only words outside the n-gram).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "We also integrate our models with a structured language model (Roark, 2001 ). However, in this case we use linear interpolation (equation 18) because the models are roughly equally predictive and also because linear interpolation is widely used when structured language models are combined with n-grams and other information sources. This approach also has the benefit of allowing the models to be combined without out the need to renormalize the probabilities. In the case of the structured language model, normalizing across the whole vocabulary would be prohibitive.", "cite_spans": [ { "start": 62, "end": 74, "text": "(Roark, 2001", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4" }, { "text": "In this section we discuss our experimental design for assessing the performance of the models presented above. We give details on our training procedure and parameter estimation, and present the methods used for comparison with our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "Method Following previous work (e.g., Bellegarda (2000)) we integrated our compositional language models with a standard n-gram model (see equation 21). We experimented with additive and multiplicative composition functions, and two semantic representations (LDA and the simpler semantic space model), resulting in four compositional models. In addition, we compared our models against a state of the art structured language model in order to assess the extent to which the information provided by the semantic representation is complementary to syntactic structure. Our experiments used Roark's (2001) grammarbased language model. Similarly to standard language models, it computes the probability of the next word based upon the previous words of the sentence. This is done by computing a subset of all possible grammatical relations for the prior words and then estimating the probability of the next grammatical structure and the probability of seeing the next word given each of the prior grammatical relations. 
When estimating the probability of the next word, the model conditions on the two prior heads of constituents, thereby using information about word triples (like a trigram model).", "cite_spans": [ { "start": 588, "end": 602, "text": "Roark's (2001)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "All our models were evaluated by computing perplexity on the test set. Roughly, this quantifies the degree of unpredictability in a probability distribution, such that a fair k-sided dice would have a perplexity of k. More precisely, perplexity is the reciprocal of the geometric average of the word probabilities and a lower score indicates better predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "Parameter Estimation The compositional language models were trained on the BLLIP corpus, a collection of texts from the Wall Street Journal (years 1987-89) . The training corpus consisted of 38,521,346 words. We used a development corpus of 50,006 words and a test corpus of similar size.", "cite_spans": [ { "start": 140, "end": 155, "text": "(years 1987-89)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "All words were converted to lowercase and numbers were replaced with the symbol num . A vocabulary of 20,000 words was chosen and the remaining tokens were replaced with unk .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "Following Mitchell and Lapata (2008) , we constructed a simple semantic space based on cooccurrence statistics from the BLLIP training set. We used the 2,000 most frequent word types as contexts and a symmetric five word window. Vector components were defined as in equation 4. Contrary to our earlier work, we did not lemmatize the corpus before constructing the vectors as in the context of language modeling this was not appropriate. We also trained the LDA model on BLLIP, using Blei et al.'s (2003) implementation. 3 We experimented with different numbers of topics on the development set (from 10 to 200) and report results on the test set with 100 topics. In our experiments, the hyperparameter \u03b1 was initialized to 0.5, and the \u03b2 word probabilities were initialized randomly.", "cite_spans": [ { "start": 10, "end": 36, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF23" }, { "start": 483, "end": 503, "text": "Blei et al.'s (2003)", "ref_id": null }, { "start": 520, "end": 521, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "We integrated our compositional models with a trigram model which we also trained on BLLIP. The model was built using the SRILM toolkit (Stolcke, 2002) with backoff and Good-Turing smoothing. Ideally, we would have liked to train Roark's (2001) parser on the same data as that used for the semantic models. However, this would require a gold standard treebank several times larger than those currently available. Following previous work on structured language modeling (Roark, 2001; Charniak, 2001; Chelba and Jelinek, 1998) , we therefore trained the parser on sections 2-21 of the Penn Treebank containing 936,017 words. 
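Before turning to the treatment of the parser output, the following compact sketch shows, with toy numbers, how the components of Section 4 fit together: the ratio-based vectors of equation (4), history composition and renormalization (equations (14)-(16)), the rescaled and renormalized n-gram probabilities of equations (21)-(22), and the perplexity used for evaluation. The vocabulary, probabilities, and candidate set are invented for illustration; the actual experiments use SRILM and the full 20,000-word vocabulary.

```python
# Compact sketch, with invented numbers, of how equations (4) and (13)-(22)
# fit together; it illustrates the scoring recipe only and is not the
# authors' implementation (the experiments build on SRILM and Roark's parser).
import numpy as np

p_c = np.array([0.5, 0.3, 0.2])     # prior context-word probabilities p(c_i) (assumed)

def ratio_vector(p_c_given_w):
    """Equation (4): v_i = p(c_i | w) / p(c_i)."""
    return np.asarray(p_c_given_w) / p_c

def update_history(h, w_vec, compose=np.multiply):
    """Equations (14)-(16): compose the new word into the history, then
    renormalize so that the products h_i * p(c_i) form a distribution."""
    h_new = compose(h, w_vec)
    return h_new / np.sum(h_new * p_c)

def delta(w_vec, h):
    """Equation (20): the factor by which the word's probability is scaled."""
    return float(np.sum(w_vec * h * p_c))

def rescore(ngram_probs, word_vecs, h):
    """Equations (21)-(22): scale n-gram estimates by delta and renormalize.
    Here the sum runs over a toy candidate set; the paper normalizes over
    the full vocabulary, and h covers only words outside the n-gram window."""
    scores = {w: p * delta(word_vecs[w], h) for w, p in ngram_probs.items()}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

def perplexity(assigned_probs):
    """Reciprocal of the geometric mean of the probabilities of the test words."""
    return float(np.exp(-np.mean(np.log(assigned_probs))))

# Toy example: a two-word history and three candidate next words.
word_vecs = {w: ratio_vector(p) for w, p in {"shares": [0.7, 0.2, 0.1],
                                             "rose":   [0.6, 0.3, 0.1],
                                             "banana": [0.1, 0.2, 0.7]}.items()}
h = update_history(np.ones(3), word_vecs["shares"])  # h_1 = w_1 after this step (equation (15))
h = update_history(h, word_vecs["rose"])
print(rescore({"shares": 0.4, "rose": 0.4, "banana": 0.2}, word_vecs, h))
print(perplexity([0.25, 0.5, 0.125]))   # 4.0 for these assigned probabilities
```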
Note that Roark's (2001) parser produces prefix probabilities for each word of a sentence which we converted to conditional probabilities by dividing each current probability by the previous one. hypothesis that for this type of semantic space the multiplicative vector combination function produces representations which have a sounder probabilistic basis.", "cite_spans": [ { "start": 136, "end": 151, "text": "(Stolcke, 2002)", "ref_id": "BIBREF32" }, { "start": 230, "end": 244, "text": "Roark's (2001)", "ref_id": "BIBREF29" }, { "start": 469, "end": 482, "text": "(Roark, 2001;", "ref_id": "BIBREF29" }, { "start": 483, "end": 498, "text": "Charniak, 2001;", "ref_id": "BIBREF4" }, { "start": 499, "end": 524, "text": "Chelba and Jelinek, 1998)", "ref_id": "BIBREF5" }, { "start": 633, "end": 647, "text": "Roark's (2001)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "The results for the LDA model are also reported in the table. This model reduces perplexity with an additive composition function, but performs worse than the n-gram with a multiplicative function. For comparison, Figure 1 plots the perplexity of the combined LDA and n-gram models against the number of topics. Increasing the number of topics produces higher dimensional representations which ought to be richer, more detailed and therefore more predictive. While this is true for the additive model, a greater number of topics actually increases the perplexity of the multiplicative model, indicating it has become less predictive.", "cite_spans": [], "ref_spans": [ { "start": 214, "end": 222, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "We compared these perplexity reductions against those obtained with a structured language model. Following Roark (2001) , we combined the structured language model with a trigram model using linear interpolation (the weights were optimized on the development set). This model (n-gram + parser) performs comparably to our best compositional model (n-gram + Multiply SSM ). While both models incorporate long range dependencies, the parser is trained on a hand annotated treebank, whereas the compositional model uses raw text, albeit from a larger corpus. Interestingly, when interpolating the trigram with the parser and the compositional models, we obtain additional perplexity reductions. This suggests that the semantic models are encoding useful predictive information about long range dependencies, which is distinct from and potentially complementary to the parser's syntactic information about such dependencies. Note that the semantic space multiplicative model yields the highest perplexity reduction in this suite of experiments followed by the LDA additive model.", "cite_spans": [ { "start": 107, "end": 119, "text": "Roark (2001)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "In this paper we advocated the use of vector composition models for language modeling. Using semantic representations of words outside the n-gram window, we enhanced a trigram model with longer range dependencies. We compared composition models based on addition and multiplication and examined the influence of the underlying semantic space on the composition task. Our results indicate that the multiplicative composition function produced the most predictive representations with a simple semantic space. 
Interestingly, its effect in the LDA setting was detrimental. Increasing the representational power of the LDA model, by using a greater number of topics, rendered the multiplicative model less predictive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "These results, together with the basic mathematical structure of the LDA model, suggest that it may not be well suited to forming representations for word sequences. In particular, the assumption that words are generated independently within documents prevents the interactions between words being modeled. This assumption, along with the Dirichlet prior on document distributions tends to lead to highly sparse word vec-tors, with a typical word being strongly associated with only one or two topics. Multiplication of a number of these vectors generally produces a vector in which most of these associations have been obliterated by the sparse components, resulting in a representation with little predictive power.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "These shortcomings arise from the mathematical formulation of LDA, which is not directed at modeling the semantic interaction between words. An interesting future direction would be to optimize the vector components of the probabilistic model over a suitable training corpus, in order to derive a vector model of semantics adapted specifically to the task of composition. We also plan to investigate more sophisticated composition models that take syntactic structure into account. Our results on interpolating the compositional models with a parser indicate that there is substantial mileage to be gained by combining syntactic and semantic dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Estimating the posterior distribution P(\u03b8, z|w, \u03b1, \u03b2) of the hidden variables given an observed collection of documents w is intractable in general; however, a variety of approximate inference algorithms have been proposed in the literature (e.g.,Blei et al. (2003;Griffiths et al. (2007)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Equation (21) can also be expressed asp(w n |w n\u22121 n\u22122 , h) \u2248 p(w n |w n\u22121 n\u22122 )p(w n |h) p(w n ), Which is equivalent to assuming that h is conditionally independent of w n\u22121 n\u22122(Gildea and Hofmann, 1999).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available from http://www.cs.princeton.edu/ blei/lda-c/index.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Exploiting latent semantic information in statistical language modeling", "authors": [ { "first": "R", "middle": [], "last": "Jerome", "suffix": "" }, { "first": "", "middle": [], "last": "Bellegarda", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the IEEE", "volume": "88", "issue": "8", "pages": "1279--1296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerome R. Bellegarda. 2000. Exploiting latent se- mantic information in statistical language modeling. 
Proceedings of the IEEE, 88(8):1279-1296.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using linear algebra for intelligent information retrieval", "authors": [ { "first": "Michael", "middle": [ "W" ], "last": "Berry", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "Gavin", "middle": [ "W" ], "last": "O'brien", "suffix": "" } ], "year": 1994, "venue": "SIAM Review", "volume": "37", "issue": "4", "pages": "573--595", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael W. Berry, Susan T. Dumais, and Gavin W. O'Brien. 1994. Using linear algebra for intelligent information retrieval. SIAM Review, 37(4):573-595.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Ma- chine Learning Research, 3:993-1022.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Extracting semantic representations from word co-occurrence statistics: A computational study", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Bullinaria", "suffix": "" }, { "first": "J", "middle": [ "P" ], "last": "Levy", "suffix": "" } ], "year": 2007, "venue": "Behavior Research Methods", "volume": "39", "issue": "", "pages": "510--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.A. Bullinaria and J.P. Levy. 2007. Extracting seman- tic representations from word co-occurrence statis- tics: A computational study. Behavior Research Methods, 39:510-526.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Immediate-head parsing for language models", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "Proceedings of 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "116--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 2001. Immediate-head parsing for language models. In Proceedings of 35th Annual Meeting of the Association for Computational Lin- guistics and 8th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 116-123, Toulouse, France.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Exploiting syntactic structure for language modeling", "authors": [ { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "Frederick", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th International Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "225--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciprian Chelba and Frederick Jelinek. 1998. Exploit- ing syntactic structure for language modeling. 
In Proceedings of the 17th International Conference on Computational Linguistics and 36th Annual Meet- ing of the Association for Computational Linguis- tics, pages 225-231, Montr\u00e9al, Canada.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A compositional distributional model of meaning", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Bob", "middle": [], "last": "Coecke", "suffix": "" }, { "first": "Mehrnoosh", "middle": [], "last": "Sadrzadeh", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2nd Symposium on Quantum Interaction", "volume": "", "issue": "", "pages": "133--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2008. A compositional distribu- tional model of meaning. In Proceedings of the 2nd Symposium on Quantum Interaction, pages 133-140, Oxford, UK. College Publications.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Towards better integration of semantic predictors in satistical language modeling", "authors": [ { "first": "Noah", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 5th International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "2403--2406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah Coccaro and Daniel Jurafsky. 1998. Towards better integration of semantic predictors in satistical language modeling. In Proceedings of the 5th Inter- national Conference on Spoken Language Process- ing, pages 2403-2406, Sydney, Australia.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Latent semantic information in maximum entropy language models for conversational speech recognition", "authors": [ { "first": "Yonggang", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khundanpur", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "56--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonggang Deng and Sanjeev Khundanpur. 2003. La- tent semantic information in maximum entropy lan- guage models for conversational speech recognition. In Proceedings of the 2003 Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics, pages 56-63, Edmonton, AL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A structured vector space model for word meaning in context", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "897--906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Erk and Sebastian Pad\u00f3. 2008. A structured vector space model for word meaning in context. 
In Proceedings of the 2008 Conference on Empiri- cal Methods in Natural Language Processing, pages 897-906, Honolulu, Hawaii.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A synopsis of linguistic theory 1930-1955", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Firth", "suffix": "" } ], "year": 1957, "venue": "Studies in Linguistic Analysis", "volume": "", "issue": "", "pages": "1--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Firth. 1957. A synopsis of linguistic theory 1930- 1955. In Studies in Linguistic Analysis, pages 1-32. Philological Society, Oxford.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The measurement of textual coherence with latent semantic analysis", "authors": [ { "first": "Peter", "middle": [], "last": "Foltz", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Kintsch", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Landauer", "suffix": "" } ], "year": 1998, "venue": "Discourse Process", "volume": "15", "issue": "", "pages": "285--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Foltz, Walter Kintsch, and Thomas Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Process, 15:285-307.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Topicbased language models using EM", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 6th European Conference on Speech Communiation and Technology", "volume": "", "issue": "", "pages": "2167--2170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea and Thomas Hofmann. 1999. Topic- based language models using EM. In Proceedings of the 6th European Conference on Speech Communi- ation and Technology, pages 2167-2170, Budapest, Hungary.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Explorations in Automatic Thesaurus Discovery", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Grefenstette. 1994. Explorations in Auto- matic Thesaurus Discovery. Kluwer Academic Pub- lishers, Norwell, MA, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Topics in semantic representation", "authors": [ { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "Joshua", "middle": [ "B" ], "last": "Tenenbaum", "suffix": "" } ], "year": 2007, "venue": "Psychological Review", "volume": "114", "issue": "2", "pages": "211--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas L. Griffiths, Mark Steyvers, and Joshua B. Tenenbaum. 2007. Topics in semantic representa- tion. Psychological Review, 114(2):211-244.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mathematical Structures of Language", "authors": [ { "first": "Zellig", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig Harris. 1968. Mathematical Structures of Lan- guage. 
Wiley, New York.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Unsupervised learning by probabilistic latent semantic analysis", "authors": [ { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2001, "venue": "Machine Learning", "volume": "41", "issue": "", "pages": "177--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 41(2):177-196.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Statistical language modeling with performance benchmarks using various levels of syntactic-semantic information", "authors": [ { "first": "Dharmendra", "middle": [], "last": "Kanejiya", "suffix": "" }, { "first": "Arun", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Surendra", "middle": [], "last": "Prasad", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1161--1167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dharmendra Kanejiya, Arun Kumar, and Surendra Prasad. 2004. Statistical language modeling with performance benchmarks using various levels of syntactic-semantic information. In Proceedings of the 20th International Conference on Computational Linguistics, pages 1161-1167, Geneva, Switzerland.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A cache based natural language model for speech recognition", "authors": [ { "first": "Roland", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "Renato", "middle": [], "last": "De Mori", "suffix": "" } ], "year": 1992, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "", "issue": "14", "pages": "570--583", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roland Kuhn and Renato de Mori. 1992. A cache based natural language model for speech recogni- tion. IEEE Transactions on Pattern Analysis and Machine Intelligence, (14):570-583.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction and representation of knowledge", "authors": [ { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological Review", "volume": "104", "issue": "2", "pages": "211--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. K. Landauer and S. T. Dumais. 1997. A solution to Plato's problem: the latent semantic analysis the- ory of acquisition, induction and representation of knowledge. Psychological Review, 104(2):211-240.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Topographic Maps of Semantic Space", "authors": [ { "first": "Will", "middle": [], "last": "Lowe", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Will Lowe. 2000. Topographic Maps of Semantic Space. Ph.D. thesis, University of Edinburgh.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Environmental Determinants of Lexical Processing Effort", "authors": [ { "first": "Scott", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott McDonald. 2000. Environmental Determinants of Lexical Processing Effort. Ph.D. 
thesis, Univer- sity of Edinburgh.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Vector-based models of semantic composition", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "236--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, OH.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "English as a formal language", "authors": [ { "first": "R", "middle": [], "last": "Montague", "suffix": "" } ], "year": 1974, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Montague. 1974. English as a formal language. In R. Montague, editor, Formal Philosophy. Yale Uni- versity Press, New Haven, CT.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Dependency-based construction of semantic space models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "2", "pages": "161--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161-199.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Lexical semantics and compositionality", "authors": [ { "first": "B", "middle": [], "last": "Partee", "suffix": "" } ], "year": 1995, "venue": "Invitation to Cognitive Science Part I: Language", "volume": "", "issue": "", "pages": "311--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Partee. 1995. Lexical semantics and compositional- ity. In Lila Gleitman and Mark Liberman, editors, Invitation to Cognitive Science Part I: Language, pages 311-360. MIT Press, Cambridge, MA.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The Language Instinct: How the Mind Creates Language. HarperCollins", "authors": [ { "first": "S", "middle": [], "last": "Pinker", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Pinker. 1994. The Language Instinct: How the Mind Creates Language. HarperCollins, New York.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Holographic reduced representations", "authors": [ { "first": "Tony", "middle": [ "A" ], "last": "Plate", "suffix": "" } ], "year": 1995, "venue": "IEEE Transactions on Neural Networks", "volume": "6", "issue": "3", "pages": "623--641", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tony A. Plate. 1995. Holographic reduced represen- tations. IEEE Transactions on Neural Networks, 6(3):623-641.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Probabilistic top-down parsing and language modeling", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "2", "pages": "249--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2001. Probabilistic top-down parsing and language modeling. 
Computational Linguistics, 27(2):249-276.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A maximum entropy approach to adaptive statistical language modeling", "authors": [ { "first": "Roni", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1996, "venue": "Computer Speech and Language", "volume": "10", "issue": "", "pages": "187--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roni Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modeling. Computer Speech and Language, 10:187-228.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Tensor product variable binding and the representation of symbolic structures in connectionist systems", "authors": [ { "first": "Paul", "middle": [], "last": "Smolensky", "suffix": "" } ], "year": 1990, "venue": "Artificial Intelligence", "volume": "46", "issue": "", "pages": "159--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Smolensky. 1990. Tensor product variable bind- ing and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46:159-216.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke. 2002. SRILM -an extensible lan- guage modeling toolkit. In Proceedings of the Inter- national Conference on Spoken Language Process- ing, pages 901-904, Denver, CO.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Methods to integrate a language model with semantic information for a word prediction component", "authors": [ { "first": "Tonio", "middle": [], "last": "Wandmacher", "suffix": "" }, { "first": "Jean-Yves", "middle": [], "last": "Antoine", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "506--513", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tonio Wandmacher and Jean-Yves Antoine. 2007. Methods to integrate a language model with seman- tic information for a word prediction component. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP-CoNLL), pages 506-513, Prague, Czech Republic.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Semantic vector products: Some initial investigations", "authors": [ { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2nd Symposium on Quantum Interaction", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominic Widdows. 2008. Semantic vector products: Some initial investigations. In Proceedings of the 2nd Symposium on Quantum Interaction, Oxford, UK. 
College Publications.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Improve latent semantic analysis based language model by integrating multiple level knowledge", "authors": [ { "first": "Rong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Alexander", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 7th International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "893--897", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rong Zhang and Alexander I. Rudnicky. 2002. Improve latent semantic analysis based language model by integrating multiple level knowledge. In Proceedings of the 7th International Conference on Spoken Language Processing, pages 893-897, Denver, CO.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Perplexity versus Number of Topics for the LDA models using additive and multiplicative composition functions.", "num": null }, "TABREF0": { "type_str": "table", "num": null, "text": "Table 1 shows perplexity results when the compositional models are combined with an n-gram model. With regard to the simple semantic space model (SSM) we observe that both additive and multiplicative approaches to constructing history are successful in reducing perplexity over the n-gram baseline, with the multiplicative model outperforming the additive one. This confirms the", "content": "
Model                             Perplexity
n-gram                            78.72
n-gram + Add SSM                  76.65
n-gram + Multiply SSM             75.01
n-gram + Add LDA                  76.60
n-gram + Multiply LDA             123.93
parser                            173.35
n-gram + parser                   75.22
n-gram + parser + Add SSM         73.45
n-gram + parser + Multiply SSM    71.32
n-gram + parser + Add LDA         71.58
n-gram + parser + Multiply LDA    87.93
Table 1: Perplexities for n-gram, composition and structured language models, and their combinations; subscripts SSM and LDA refer to the semantic space and LDA models, respectively.
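Table 1 above compares two ways of composing the history into a single semantic representation, element-wise addition and element-wise multiplication of word vectors, each combined with a standard n-gram model. The Python sketch below illustrates the general idea only: the toy vectors, the candidate words, the n-gram probabilities, the interpolation weight, and the cosine-based conversion of similarities into a distribution are all invented for illustration, and simple linear interpolation stands in for whatever combination scheme a particular system might use; this is not the authors' implementation.

```python
# Minimal sketch (illustrative assumptions throughout): additive vs.
# multiplicative composition of word vectors into a history representation,
# with the resulting semantic score linearly interpolated with a hypothetical
# n-gram probability for each candidate next word.

import numpy as np

# Toy distributional vectors for a small vocabulary (in practice these would
# come from a semantic space or an LDA topic model).
vectors = {
    "stock":  np.array([0.9, 0.1, 0.2, 0.0]),
    "market": np.array([0.8, 0.2, 0.1, 0.1]),
    "fell":   np.array([0.6, 0.3, 0.1, 0.2]),
    "banana": np.array([0.1, 0.9, 0.7, 0.0]),
}

def compose_history(words, mode="multiply"):
    """Fold the history word vectors into a single vector."""
    history = vectors[words[0]].copy()
    for w in words[1:]:
        if mode == "add":
            history = history + vectors[w]   # additive composition
        else:
            history = history * vectors[w]   # component-wise (multiplicative) composition
    return history

def semantic_probs(history_vec, candidates):
    """Turn history-candidate cosine similarities into a normalised distribution."""
    sims = {}
    for w in candidates:
        v = vectors[w]
        denom = np.linalg.norm(history_vec) * np.linalg.norm(v)
        sims[w] = max(float(np.dot(history_vec, v) / denom), 0.0) if denom else 0.0
    total = sum(sims.values()) or 1.0
    return {w: s / total for w, s in sims.items()}

# Hypothetical n-gram probabilities for the word following "stock market".
p_ngram = {"fell": 0.05, "banana": 0.001}

lam = 0.7  # illustrative interpolation weight
history = compose_history(["stock", "market"], mode="multiply")
p_sem = semantic_probs(history, p_ngram.keys())

for w in p_ngram:
    p_combined = lam * p_ngram[w] + (1 - lam) * p_sem[w]
    print(f"{w:7s}  n-gram={p_ngram[w]:.3f}  semantic={p_sem[w]:.3f}  combined={p_combined:.3f}")
```

One intuition for the pattern in the SSM rows is that the component-wise product keeps only the dimensions on which the history words agree, which can yield a sharper, more discriminative history vector than the sum.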
", "html": null } } } }