{ "paper_id": "O06-2003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:07:37.993886Z" }, "title": "A Maximum Entropy Approach for Semantic Language Modeling", "authors": [ { "first": "Chuang-Hua", "middle": [], "last": "Chueh", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": { "settlement": "Tainan", "country": "Taiwan, R. O. C" } }, "email": "chchueh@chien.csie.ncku.edu.tw" }, { "first": "Hsin-Min", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jen-Tzung", "middle": [], "last": "Chien", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": { "settlement": "Tainan", "country": "Taiwan, R. O. C" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The conventional n-gram language model exploits only the immediate context of historical words without exploring long-distance semantic information. In this paper, we present a new information source extracted from latent semantic analysis (LSA) and adopt the maximum entropy (ME) principle to integrate it into an n-gram language model. With the ME approach, each information source serves as a set of constraints, which should be satisfied to estimate a hybrid statistical language model with maximum randomness. For comparative study, we also carry out knowledge integration via linear interpolation (LI). In the experiments on the TDT2 Chinese corpus, we find that the ME language model that combines the features of trigram and semantic information achieves a 17.9% perplexity reduction compared to the conventional trigram language model, and it outperforms the LI language model. Furthermore, in evaluation on a Mandarin speech recognition task, the ME and LI language models reduce the character error rate by 16.9% and 8.5%, respectively, over the bigram language model.", "pdf_parse": { "paper_id": "O06-2003", "_pdf_hash": "", "abstract": [ { "text": "The conventional n-gram language model exploits only the immediate context of historical words without exploring long-distance semantic information. In this paper, we present a new information source extracted from latent semantic analysis (LSA) and adopt the maximum entropy (ME) principle to integrate it into an n-gram language model. With the ME approach, each information source serves as a set of constraints, which should be satisfied to estimate a hybrid statistical language model with maximum randomness. For comparative study, we also carry out knowledge integration via linear interpolation (LI). In the experiments on the TDT2 Chinese corpus, we find that the ME language model that combines the features of trigram and semantic information achieves a 17.9% perplexity reduction compared to the conventional trigram language model, and it outperforms the LI language model. Furthermore, in evaluation on a Mandarin speech recognition task, the ME and LI language models reduce the character error rate by 16.9% and 8.5%, respectively, over the bigram language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Language modeling plays an important role in automatic speech recognition (ASR). 
Given a speech signal O , the most likely word sequence \u0174 is obtained by maximizing a posteriori probability ) ( ", "cite_spans": [ { "start": 192, "end": 193, "text": "(", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This prior probability corresponds to the language model that is useful in characterizing regularities in natural language. Also, this language model has been widely employed in optical character recognition, machine translation, document classification, information retrieval [Ponte and Croft 1998 ], and many other applications. In the literature, there were several approaches have been taken to extract different linguistic regularities in natural language. The structural language model [Chelba and Jelinek 2000] extracted the relevant syntactic regularities based on predefined grammar rules. Also, the large-span language model [Bellegarda 2000 ] was feasible for exploring the document-level semantic regularities. Nevertheless, the conventional n-gram model was effective at capturing local lexical regularities. In this paper, we focus on developing a novel latent semantic n-gram language model for continuous Mandarin speech recognition.", "cite_spans": [ { "start": 277, "end": 298, "text": "[Ponte and Croft 1998", "ref_id": "BIBREF25" }, { "start": 492, "end": 517, "text": "[Chelba and Jelinek 2000]", "ref_id": "BIBREF4" }, { "start": 635, "end": 651, "text": "[Bellegarda 2000", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "When considering an n-gram model, the probability of a word sequence W is written as a product of probabilities of individual words conditioned on their preceding n-1 words Since the n-gram language model is limited by the span of window size n, it is difficult to characterize long-distance semantic information in n-gram probabilities. To deal with the issue of insufficient long-distance word dependencies, several methods have been developed by incorporating semantic or syntactic regularities in order to achieve long-distance language modeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 1 2 1 1 1 1 1 ( ) ( , , , ) ( ,..., ) ( ) T T i T i i n i i i n i i p W p w w w p w w w p w w \u2212 \u2212 + \u2212 \u2212 + = = = \u2245 = \u220f \u220f ,", "eq_num": "(2)" } ], "section": "Introduction", "sec_num": "1." }, { "text": "One simple combination approach is performed using the linear interpolation of different information sources. With this approach, each information source is characterized by a separate model. Various information sources are combined using weighted averaging, which minimizes overall perplexity without considering the strengths and weaknesses of the sources in particular contexts. In other words, the weights were optimized globally instead of locally. The hybrid model obtained in this way cannot guarantee the optimal use of different information sources [Rosenfeld 1996] . Another important approach is based on Jaynes' maximum entropy (ME) principle [Jaynes 1957] . This approach includes a procedure for setting up probability distributions on the basis of partial knowledge. 
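As a toy numerical illustration of this principle (a constructed example, not part of the formal development below), suppose the only partial knowledge about a three-word vocabulary {a, b, c} is that p(a) = 0.5. The maximum entropy solution keeps this constraint and spreads the remaining mass uniformly, which a few lines of SciPy can verify:

```python
import numpy as np
from scipy.optimize import minimize

# Toy maximum entropy example (illustrative only): vocabulary {a, b, c},
# partial knowledge p(a) = 0.5, probabilities must sum to one.
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))      # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},   # normalization
    {"type": "eq", "fun": lambda p: p[0] - 0.5},        # the known marginal
]
result = minimize(neg_entropy, x0=[1/3, 1/3, 1/3],
                  bounds=[(0.0, 1.0)] * 3, constraints=constraints)
print(result.x)   # approximately [0.5, 0.25, 0.25]
```

The unconstrained words receive equal probability, reflecting the idea of modeling only what is known and assuming nothing further. 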
Different from linear interpolation, this approach determines probability models with the largest randomness and simultaneously captures all information provided by various knowledge sources. The ME framework was first applied to language modeling in [Della Pietra et al. 1992] . In the following, we survey several language model algorithms where the idea of information combination is adopted.", "cite_spans": [ { "start": 558, "end": 574, "text": "[Rosenfeld 1996]", "ref_id": "BIBREF26" }, { "start": 655, "end": 668, "text": "[Jaynes 1957]", "ref_id": "BIBREF21" }, { "start": 1033, "end": 1059, "text": "[Della Pietra et al. 1992]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In [Kuhn and de Mori 1992] , the cache language model was proposed to merge domain information by boosting the probabilities of words in the previously-observed history. In [Zhou and Lua 1999] , n-gram models were integrated with the mutual information (MI) of trigger words. The MI-Trigram model achieved a significant reduction in perplexity. In [Rosenfeld 1996 ], the information source provided by trigger pairs was incorporated into an n-gram model under the ME framework. Long-distance information was successfully applied in language modeling. This new model achieved a 27% reduction in perplexity and a 10% reduction in the word error rate. Although trigger pairs are feasible for characterizing long-distance word associations, this approach only considers the frequently co-occurring word pairs in the training data. Some important semantic information with low frequency of occurrence is lost. To compensate for this weakness, the information of entire historical contexts should be discovered. Since the words used in different topics are inherently different in probability distribution, topic-dependent language models have been developed accordingly. In [Clarkson and Robinson 1997] , the topic language model was built based on a mixture model framework, where topic labels were assigned. Wu and Khudanpur [2002] proposed an ME model by integrating n-gram, syntactic and topic information. Topic information was extracted from unsupervised clustering in the original document space. A word error rate reduction of 3.3% was obtained using the combined language model. In [Florian and Yarowsky 1999] , a delicate tree framework was developed to represent the topic structure in text articles. Different levels of information were integrated by performing linear interpolation hierarchically. In this paper, we propose a new semantic information source using latent semantic analysis (LSA) [Deerwester et al. 1990; Berry et al. 1995] , which is used for reducing the disambiguity caused by polysemy and synonymy [Deerwester et al. 1990] . Also, the relations of semantic topics and target words are incorporated with n-gram models under the ME framework. We illustrate the performance of the new ME model by investigating perplexity in language modeling and the character-error rate in continuous Mandarin speech recognition. The paper is organized as follows. In the next section, we introduce an overview of the ME principle and its relations to other methods. In Section 3, the integration of semantic information and n-gram model via linear interpolation and maximum entropy is presented. Section 4 describes the experimental results. The evaluation of perplexity and character-error rate versus different factors is conducted. 
The final conclusions drawn from this study are discussed in Section 5.", "cite_spans": [ { "start": 3, "end": 26, "text": "[Kuhn and de Mori 1992]", "ref_id": "BIBREF24" }, { "start": 173, "end": 192, "text": "[Zhou and Lua 1999]", "ref_id": "BIBREF30" }, { "start": 348, "end": 363, "text": "[Rosenfeld 1996", "ref_id": "BIBREF26" }, { "start": 1169, "end": 1197, "text": "[Clarkson and Robinson 1997]", "ref_id": "BIBREF10" }, { "start": 1305, "end": 1328, "text": "Wu and Khudanpur [2002]", "ref_id": "BIBREF29" }, { "start": 1586, "end": 1613, "text": "[Florian and Yarowsky 1999]", "ref_id": "BIBREF17" }, { "start": 1903, "end": 1927, "text": "[Deerwester et al. 1990;", "ref_id": "BIBREF12" }, { "start": 1928, "end": 1946, "text": "Berry et al. 1995]", "ref_id": "BIBREF3" }, { "start": 2025, "end": 2049, "text": "[Deerwester et al. 1990]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The underlying idea of the ME principle [Jaynes 1957 ] is to subtly model what we know, and assume nothing about what we do not know. Accordingly, we choose a model that satisfies all the information we have and that makes the model distribution as uniform as possible. Using the ME model, we can combine different knowledge sources for language modeling [Berger et al. 1996] . Each knowledge source provides a set of constraints, which must be satisfied to find a unique ME solution. These constraints are typically expressed as marginal distributions. Given features 1 , , N f f , which specify the properties extracted from observed data, the expectation of i f with respect to empirical distribution ( , ) p h w of history h and word w is calculated by", "cite_spans": [ { "start": 40, "end": 52, "text": "[Jaynes 1957", "ref_id": "BIBREF21" }, { "start": 355, "end": 375, "text": "[Berger et al. 1996]", "ref_id": "BIBREF2" }, { "start": 704, "end": 709, "text": "( , )", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ", ( ) ( , ) ( , ) i i h w p f p h w f h w = \u2211 ,", "eq_num": "(4)" } ], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "where ( ) i f \u22c5 is a binary-valued feature function. 
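As a small constructed example (the corpus, the feature, and the helper names here are hypothetical, not taken from the experiments), a binary feature and its empirical expectation in Eq. (4) can be sketched as follows:

```python
from collections import Counter

# Binary feature: f_i(h, w) = 1 if the history ends in "stock" and the
# predicted word is "market", and 0 otherwise (hypothetical feature).
def f_stock_market(h, w):
    return 1.0 if h and h[-1] == "stock" and w == "market" else 0.0

# Empirical distribution p~(h, w) over (history, word) events in a toy corpus.
corpus = [["the", "stock", "market", "fell"],
          ["the", "stock", "market", "rose"],
          ["the", "bond", "market", "rose"]]
events = Counter()
for sentence in corpus:
    for t in range(1, len(sentence)):
        events[(tuple(sentence[:t]), sentence[t])] += 1
total = sum(events.values())

# Expectation of the feature under the empirical distribution, as in Eq. (4).
expectation = sum((count / total) * f_stock_market(list(h), w)
                  for (h, w), count in events.items())
print(expectation)   # 2 matching events out of 9 -> about 0.222
```
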
Also, using conditional probabilities in language modeling, we yield the expectation with respect to the target conditional distribution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) p w h by , ( ) ( ) ( ) ( , ) i i h w p f p h p w h f h w = \u2211 .", "eq_num": "(5)" } ], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "Because the target distribution is required to contain all the information provided by these features, we specify these constraints", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( ), for 1, , i i p f p f i N = = .", "eq_num": "(6)" } ], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "Under these constraints, we maximize the conditional entropy or uniformity of distribution ( ) p w h . Lagrange optimization is adopted to solve this constrained optimization problem. For each feature i f , we introduce a Lagrange multiplier i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03bb . The Lagrangian function ) , ( \u03bb p \u039b is extended by 1 ( , ) ( ) ( ) ( ) N i i i i p H p p f p f \u03bb \u03bb = \u039b = + \u2212 \u23a1 \u23a4 \u23a3 \u23a6 \u2211 ,", "eq_num": "(7)" } ], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "with conditional entropy defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ", ( ) ( ) ( )log ( ) h w H p ph pwh pwh = \u2212 \u2211 .", "eq_num": "(8)" } ], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "Finally, the target distribution ( ) p w h is estimated as a log-linear model distribution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 1 ( ) exp ( , ) ( ) N i i i p w h f h w Z h \u03bb \u03bb = \u239b \u239e = \u239c \u239f \u239d \u23a0 \u2211 ,", "eq_num": "(9)" } ], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) Z h \u03bb is a normalization term in the form of 1 ( ) exp ( , ) N i i w i Z h f hw \u03bb \u03bb = \u239b \u239e = \u239c \u239f \u239d \u23a0 \u2211 \u2211 ,", "eq_num": "(10)" } ], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "determined by the constraint", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "( ) 1 w p w h = \u2211", "cite_spans": [], "ref_spans": [], 
"eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": ". The General Iterative Scaling (GIS) algorithm or Improved Iterative Scaling (IIS) algorithm [Darroch and Ratcliff 1972; Berger et al. 1996; Della Pietra et al. 1997 ] can be used to find the Lagrange parameters \u03bb . The IIS algorithm is briefly described as follows.", "cite_spans": [ { "start": 94, "end": 121, "text": "[Darroch and Ratcliff 1972;", "ref_id": "BIBREF11" }, { "start": 122, "end": 141, "text": "Berger et al. 1996;", "ref_id": "BIBREF2" }, { "start": 142, "end": 166, "text": "Della Pietra et al. 1997", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "ME Language Modeling", "sec_num": "2.1" }, { "text": "Feature", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "functions 1 2 , , , N f f f and empirical distribution ( , ) p h w Output: Optimal Lagrange multiplier i \u03bb\u02c6 1. Start with 0 i \u03bb = for all 1, 2, , i N = . 2. For each 1, 2, , i N = : a. Let i \u03bb \u2206 be the solution to , ( ) ( ) ( , )exp( ( , )) ( ) i i i h w p h p w h f h w F h w p f \u03bb \u2206 = \u2211 , where 1 ( , )", "eq_num": "( , )" } ], "section": "Input:", "sec_num": null }, { "text": "N i i F h w f h w = = \u2211 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input:", "sec_num": null }, { "text": "b. Update the value of i \u03bb according to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input:", "sec_num": null }, { "text": "i i i \u03bb \u03bb \u03bb \u2206 + = .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input:", "sec_num": null }, { "text": "3. Go to step 2 if any i \u03bb has not converged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input:", "sec_num": null }, { "text": "With the parameters } { i \u03bb , we can calculate the ME language model by using Eqs. (9) and (10).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input:", "sec_num": null }, { "text": "It is interesting to note the relation between maximum likelihood (ML) and ME language models. The purpose of ML estimation is to find a generative model with the maximum likelihood of training data. Generally, the log-likelihood function is adopted in the form of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation between ML and ME Modeling", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( , ) , , ( ) log ( | ) ( , )log ( | ) p h w h w h w L p p w h p h w p w h = = \u2211 \u220f .", "eq_num": "(11)" } ], "section": "Relation between ML and ME Modeling", "sec_num": "2.2" }, { "text": "Under the same assumption that the target distribution ( ) p w h is log-linear, as shown in Eqs. 
(9) and (10), the log-likelihood function is extended to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation between ML and ME Modeling", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 , ' 1 exp ( , ) ( ) ( , )log exp ( , ') N i i i N h w i i w i f h w L p p h w f h w \u03bb \u03bb \u03bb = = \u239b \u239e \u239c \u239f \u239d \u23a0 = \u239b \u239e \u239c \u239f \u239d \u23a0 \u2211 \u2211 \u2211 \u2211 .", "eq_num": "(12)" } ], "section": "Relation between ML and ME Modeling", "sec_num": "2.2" }, { "text": "By taking the derivative of the log-likelihood function with respect to i \u03bb and setting it at zero, we can obtain the same constraints in Eq. (6) by using the following derivations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation between ML and ME Modeling", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 , , ' ' ' 1 , , ' ' exp ( , \") ( , ) ( , ) ( , ) ( , \") 0, exp ( , ') ( , ) ( , ) ( , ) ( \"| ) ( , \") 0, N i i i i i N h w h w w i i w i i i h w h w w f h w p h w f h w p h w f h w f h w p h w f h w p h w p w h f h w \u03bb \u03bb = = \u239b \u239e \u239c \u239f \u239d \u23a0 \u2212 = \u239b \u239e \u239c \u239f \u239d \u23a0 \u21d2 \u2212 = \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 , \" ( , ) ( , ) ( ) ( \"| ) ( , \") 0, ( ) ( ). i i h w h w i i p h w f h w p h p w h f h w p f p f \u21d2 \u2212 = \u21d2 = \u2211 \u2211 \u2211 .", "eq_num": "(13)" } ], "section": "Relation between ML and ME Modeling", "sec_num": "2.2" }, { "text": "In other words, the ME model is equivalent to an ML model with a log-linear model. In Table 1 , we compare various properties using ML and ME criteria. Under the assumption of log-linear distribution, the optimal parameter ML \u03bb is estimated according to the ML criterion. The corresponding ML model ML \u03bb p is obtained through an unconstrained optimization procedure. On the other hand, ME performs the constrained optimization. The ME constraint allows us to determine the combined model ML \u03bb p with the highest entropy. Interestingly, these two estimation methods achieve the same result. ", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 94, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Relation between ML and ME Modeling", "sec_num": "2.2" }, { "text": "The ME principle is a special case of minimum discrimination information (MDI) that has been successfully applied to language model adaptation [Federico 1999] . Let ( , ) b p h w be the background model trained from a large corpus of general domain, and ( , ) a p h w represents the adapted model estimated from an adaptation corpus of new domain. In the MDI adaptation, the language model is adapted by minimizing the distance between the background model and the adapted model. 
The non-symmetric Kullback-Leibler distance (KLD)", "cite_spans": [ { "start": 143, "end": 158, "text": "[Federico 1999]", "ref_id": "BIBREF16" }, { "start": 161, "end": 170, "text": "Let ( , )", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Discrimination Information and Latent ME", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( , ) ( ( , ), ( , )) ( , )log ( , ) a a b a w b p h w D p h w p h w p h w p h w = \u2211", "eq_num": "(14)" } ], "section": "Minimum Discrimination Information and Latent ME", "sec_num": "2.3" }, { "text": "is used for distance measuring. Obviously, when the background model is a uniform distribution, the MDI adaptation is equivalent to the ME estimation. More recently, the ME principle was extended to latent ME (LME) mixture modeling, where the latent variables representing underlying topics were merged [Wang et al. 2004] . To find the LME solution, the modified GIS algorithm, called expectation maximization iterative scaling (EM-IS), was used. The authors also applied the LME principle to incorporate probabilistic latent semantic analysis [Hofmann 1999] into n-gram modeling by serving the semantic information as the latent variables [Wang et al. 2003] . In this study, we use the semantic information as explicit features for ME language modeling. Latent semantic analysis (LSA) is adopted to build semantic topics.", "cite_spans": [ { "start": 303, "end": 321, "text": "[Wang et al. 2004]", "ref_id": "BIBREF27" }, { "start": 544, "end": 558, "text": "[Hofmann 1999]", "ref_id": "BIBREF20" }, { "start": 640, "end": 658, "text": "[Wang et al. 2003]", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Discrimination Information and Latent ME", "sec_num": "2.3" }, { "text": "Modeling long-distance information is crucial for language modeling. In [Chien and Chen 2004; , we successfully incorporated long-distance association patterns and latent semantic knowledge in language models. In [Wu and Khudanpur 2002] , the integration of statistical n-gram and topic unigram using the ME approach was presented. Clustering of document vectors in the original document space was performed to extract topic information. However, the original document space was generally sparse and filled with noises caused by polysemy and synonymy [Deerwester et al. 1990 ]. To explore robust and representative topic characteristics, here we introduce a new knowledge source to extract long-distance semantic information for n-gram modeling. Our idea is to adopt the LSA approach and extract semantic topic information from the reduced LSA space. The proposed procedure of ME semantic topic modeling is illustrated in Figure 1 . Because the occurrence of a word is highly related to the topic of current discourse, we apply LSA to build representative semantic topics. The subspace of semantic topics is constructed via k-means clustering of document vectors generated from the LSA model. Furthermore, we combine semantic topics and conventional n-grams under the ME framework [Chueh et al. 2004] . ", "cite_spans": [ { "start": 72, "end": 93, "text": "[Chien and Chen 2004;", "ref_id": "BIBREF6" }, { "start": 213, "end": 236, "text": "[Wu and Khudanpur 2002]", "ref_id": "BIBREF29" }, { "start": 551, "end": 574, "text": "[Deerwester et al. 1990", "ref_id": "BIBREF12" }, { "start": 1281, "end": 1300, "text": "[Chueh et al. 
2004]", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 922, "end": 930, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Integration of Semantic Information and N-Gram Models", "sec_num": "3." }, { "text": "Latent semantic analysis (LSA) is popular in the areas of information retrieval [Berry et al. 1995] and semantic inference [Bellegarda 2000 ]. Using LSA, we can extract latent structures embedded in words across documents. LSA is feasible for exploiting these structures. The first stage of LSA is to construct an M D \u00d7 word-by-document matrix A . Here, M and D represent the vocabulary size and the number of documents in the training corpus, respectively. The expression for the ( , ) i j entry of matrix A is [Bellegarda 2000] , , ", "cite_spans": [ { "start": 80, "end": 99, "text": "[Berry et al. 1995]", "ref_id": "BIBREF3" }, { "start": 123, "end": 139, "text": "[Bellegarda 2000", "ref_id": "BIBREF0" }, { "start": 481, "end": 486, "text": "( , )", "ref_id": null }, { "start": 512, "end": 529, "text": "[Bellegarda 2000]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i j i j i j c a n \u03b5 = \u2212 ,", "eq_num": "(1 )" } ], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D i j i j i j i i c c D t t \u03b5 = = \u2212 \u2211 ,", "eq_num": "(16)" } ], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "where i t is the total number of times term i w appears in the training corpus. In the second stage, we project words and documents into a lower dimensional space by performing singular value decomposition (SVD) for matrix A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T T R R R R = \u03a3 \u2248 \u03a3 = A U V U V A ,", "eq_num": "(17)" } ], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "where . After the projection, each column of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "T R R \u03a3 V characterizes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "the location of a particular document in the reduced R-dimensional semantic space. Also, we can perform document clustering [Bellegarda 2000; Bellegarda et al. 1996] in the common semantic space. Each cluster consists of related documents in the semantic space. In general, each cluster in the semantic space reflects a particular semantic topic, which is helpful for integration in language modeling. 
During document clustering, the similarity of documents and topics in the common semantic space is determined by a cosine measure sim( , ) cos( , In what follows, we present two approaches for integrating the LSA information into the semantic language model, namely the linear interpolation approach and the maximum entropy approach.", "cite_spans": [ { "start": 124, "end": 141, "text": "[Bellegarda 2000;", "ref_id": "BIBREF0" }, { "start": 142, "end": 165, "text": "Bellegarda et al. 1996]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ") | || | T T j R R k T T j k R j R k T T R j R k = = d U U t d t U d U t U d U t ,", "eq_num": "(18)" } ], "section": "Construction of Semantic Topics", "sec_num": "3.1" }, { "text": "Linear interpolation (LI) [Rosenfeld 1996 ] is a simple approach to combining information sources from n-grams and semantic topics. To find the LI n-gram model, we first construct a pseudo document-vector from a particular historical context h . Using the projected document vector, we apply the nearest neighbor rule to detect the closest semantic topic k t corresponding to history h . Given n-gram model n ( ) p w h and topic-dependent unigram model ( ) k p w t , the hybrid LI language model is computed by", "cite_spans": [ { "start": 26, "end": 41, "text": "[Rosenfeld 1996", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Integration via Linear Interpolation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( ) ( ) k p w h k p w h k p w = + LI n n t t ,", "eq_num": "(19)" } ], "section": "Integration via Linear Interpolation", "sec_num": "3.2" }, { "text": "where the interpolation coefficients have the properties n t 0 , 1 k k < \u2264 and n t 1 k k + = .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration via Linear Interpolation", "sec_num": "3.2" }, { "text": "Without the loss of generalization, an n-gram model and a topic-dependent model are integrated using fixed weights. Also, the expectation-maximization (EM) algorithm [Dempster et al. 1977] can be applied to dynamically determine the value of these weights by minimizing the overall perplexity.", "cite_spans": [ { "start": 166, "end": 188, "text": "[Dempster et al. 1977]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Integration via Linear Interpolation", "sec_num": "3.2" }, { "text": "More importantly, we present a new ME language model combining information sources of n-grams and semantic topics. N-grams and semantic topics serve as constraints for the ME estimation. As shown in Table 2 , two information sources partition the event space so as to obtain feature functions. Here, the trigram model is considered. Let i w denote the current word to be predicted by its historical words. The columns and rows represent different constraints that are due to trigrams and semantic topics, respectively. The event space is partitioned into events E n and E t for different cases of n-grams and semantic topics, respectively. 
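For contrast with the ME combination developed in this subsection, the following is a minimal sketch of the LI model of Eq. (19); the component models p_n(w|h) and p(w|t_k) are assumed callables, and the topic is detected with the nearest-neighbor rule and the cosine measure of Eq. (18):

```python
import numpy as np

def nearest_topic(history_vector, topic_centroids):
    # Nearest-neighbor topic detection by cosine similarity, as in Eq. (18).
    sims = [np.dot(history_vector, t) /
            (np.linalg.norm(history_vector) * np.linalg.norm(t))
            for t in topic_centroids]
    return int(np.argmax(sims))

def li_probability(w, h, history_vector, topic_centroids,
                   p_ngram, p_topic_unigram, k_n=0.7, k_t=0.3):
    # Eq. (19): fixed-weight interpolation of the n-gram model and the
    # topic-dependent unigram of the detected topic (k_n + k_t = 1).
    k = nearest_topic(history_vector, topic_centroids)
    return k_n * p_ngram(w, h) + k_t * p_topic_unigram(w, k)

# Toy usage with dummy component models (illustration only).
centroids = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
h_vec = np.array([0.9, 0.1])
p = li_probability("market", ("the", "stock"), h_vec, centroids,
                   p_ngram=lambda w, h: 0.10, p_topic_unigram=lambda w, k: 0.02)
print(p)   # 0.7 * 0.10 + 0.3 * 0.02 = 0.076
```
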
It comes out of the probability of the joint event ( , ) p E E n t to be estimated.", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 206, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Integration via Maximum Entropy", "sec_num": "3.3" }, { "text": "i w w = 1 n1 ends in ( ) h w E 1 2 n2 ends in , ( ) h w w E 2 3 n3 ends in , ( ) h w w E \u2026 1 t1 ( ) h E \u2208 t n1 t1 ( , ) p E E n2 t1 ( , ) p E E n3 t1 ( , ) p E E \u2026 2 t2 ( ) h E \u2208 t n1 t2 ( , ) p E E n2 t2 ( , ) p E E n3 t2 ( , ) p E E \u2026 \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "Accordingly, the feature function for each column or n-gram event is given by 1 2 n 1 if ends in , and ( , ) 0 otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i i i i h w w w w f h w \u2212 \u2212 = \u23a7 = \u23a8 \u23a9 .", "eq_num": "(20)" } ], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "In addition, the feature function for each row or semantic topic event has the form t 1 if and ( , ) 0 otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k i i h w w f h w \u2208 = \u23a7 = \u23a8 \u23a9 t .", "eq_num": "(21)" } ], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "We can build constraints corresponding to the trigrams and semantic topics as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Trigram: n n 2 1 ( ) ( ) ( , ) ( , ) ( , ) ( , , ) i i i i i h,w h,w p h p w h f h w p h w f h w p w w w \u2212 \u2212 = = \u2211 \u2211 .", "eq_num": "(22)" } ], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "Semantic topics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t t ( ) ( ) ( , ) ( , ) ( , )", "eq_num": "( , )" } ], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i i k i h,w h,w p h p w h f h w p h w f h w p h w = = \u2208 \u2211 \u2211 t .", "eq_num": "(23)" } ], "section": "Table 2. 
Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "Under these constraints, we apply the IIS procedure described in Section ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Event space partitioned according to trigrams and semantic topics", "sec_num": null }, { "text": "In this study, we evaluate the proposed ME language model by measuring the model perplexity and the character-error rate in continuous speech recognition. The conventional n-gram language model is used as the baseline, while the ME language model proposed by Wu and Khudanpur [2002] is also employed for comparison. In addition, we also compare the maximum-entropy-based (ME) hybrid language model with the linear-interpolation-based (LI) hybrid language model. In the experiments, the training corpus for language modeling was composed of 5,500 Chinese articles (1,746,978 words in total) of the TDT2 Corpus, which were collected from the XinHua News Agency [Cieri et al. 1999] from January to June in 1998. The TDT2 corpus contained the recordings of broadcasted news audio developed for the tasks of cross-lingual cross-media Topic Detection and Tracking (TDT) and speech recognition. The audio files were recorded in single channel at 16 KHz in 16-bit linear SPHERE files. We used a dictionary of 32,909 words provided by Academic Sinica, Taiwan.", "cite_spans": [ { "start": 259, "end": 282, "text": "Wu and Khudanpur [2002]", "ref_id": "BIBREF29" }, { "start": 659, "end": 678, "text": "[Cieri et al. 1999]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4." }, { "text": "18,539 words in this dictionary occurred at least once in the training corpus. When carrying out the LSA procedure, we built a 32,909 5,500 \u00d7 word by document matrix A from the training data. We used MATLAB to implement SVD and k-means operations and, accordingly, performed document clustering and determined semantic topic vectors. The topic-dependent unigram was interpolated with the general unigram for model smoothing. The dimensionality of the LSA model was reduced to 100 R = . We performed the IIS algorithm with 30 iterations. All language models were smoothed using Jelinek-Mercer smoothing [Jelinek and Mercer 1980] , which is calculated based on the interpolation of estimated distribution and lower order n-grams.", "cite_spans": [ { "start": 602, "end": 627, "text": "[Jelinek and Mercer 1980]", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4." }, { "text": "First of all, we examine the convergence property of the IIS algorithm. Figure 2 shows the log-likelihood of the training data using the ME language model versus different IIS iterations. In this evaluation, the number of semantic topics was set at 30. The ME model that combines the features of trigram and semantic topic information was considered. Typically, the log-likelihood increases consistently with the IIS iterations. The IIS procedure for the ME integration converged after five or six iterations.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 80, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Convergence of the IIS Algorithm", "sec_num": "4.1" }, { "text": "One popular evaluation metric for language models for speech recognition is the perplexity of test data. Perplexity can be interpreted as the average number of branches in the text. 
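Concretely, the perplexity used here is the usual quantity PP = exp(-(1/T) Σ_i log p(w_i | h_i)) over the T word predictions in the test data; a minimal sketch of its computation (the language-model callable is an assumption) is:

```python
import math

def perplexity(test_sentences, lm_prob):
    # PP = exp( -(1/T) * sum_i log p(w_i | h_i) ) over all T word predictions;
    # lm_prob(w, h) stands for any smoothed language model (assumed callable).
    log_sum, T = 0.0, 0
    for sentence in test_sentences:
        for i, w in enumerate(sentence):
            log_sum += math.log(lm_prob(w, tuple(sentence[:i])))
            T += 1
    return math.exp(-log_sum / T)

# Sanity check: a uniform model over a 100-word vocabulary has perplexity 100.
print(perplexity([["a", "b", "c"]], lambda w, h: 1.0 / 100))   # 100.0
```
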
The higher the perplexity, the more branches the speech recognition system should consider. Generally speaking, a language model with lower perplexity implies less confusion in recognition and achieves higher speech-recognition accuracy. To evaluate the perplexity, we selected an additional 734 Chinese documents from the XinHua News Agency, which consisted of 244,573 words, as the test data. First, we evaluated the effect of the length of history h for topic identification. The perplexities of LI and ME models are shown in Figures 3 and 4 , respectively. Here, C represents the number of document clusters or semantic topics. In the LI implementation, for each length of history h , the interpolation weight with the lowest perplexity was empirically selected. It is obvious that the proposed ME language model outperforms Wu's ME language model [Wu and Khudanpur 2002] and the ME language model outperforms the LI language model. Furthermore, a larger C produces lower perplexity and the case that considering 50 historical words obtains the lowest perplexity. Accordingly, we fixed the length of h at 50 in the subsequent experiments. Table 3 details the perplexities for bigram and semantic language models based on LI and ME. We found that the perplexity was reduced from 451.4 (for the baseline bigram) to 444.7 by using Wu's method and to 441 by using the proposed method when the combination was based on linear interpolation (LI) and the topic number was 30. With the maximum entropy (ME) estimation, the perplexity was further reduced to 399 and 393.7 by using Wu's method and the proposed method, respectively. No matter whether Wu's method or the proposed method was used, the ME language model consistently outperformed the LI language model with different numbers of semantic topics. We also evaluated these models based on the trigram features. The results are summarized in Table 4 . We can see that, by integrating latent semantic information into the trigram model, the perplexity is reduced from 376.6 (for the baseline trigram) to 345.3 by using the LI model and to 309.3 by using the ME model, for the case of C=100. The experimental results again demonstrate that the performance improves with the number of semantic topics and that the proposed method consistently outperforms Wu's method, though the improvement is not very significant. ", "cite_spans": [ { "start": 1034, "end": 1057, "text": "[Wu and Khudanpur 2002]", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 711, "end": 726, "text": "Figures 3 and 4", "ref_id": "FIGREF4" }, { "start": 1325, "end": 1332, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 2077, "end": 2084, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of Perplexity", "sec_num": "4.2" }, { "text": "In addition to perplexity, we evaluated the proposed language models for a continuous Mandarin speech recognition task. Character-error rates are reported for comparison. The initial speaker-independent, hidden Markov models (HMM's) were trained by the benchmark Mandarin speech corpus TCC300 [Chien and Huang 2003 ], which was recorded in office environments using close-talking microphones. We followed the construction of context-dependent sub-syllable HMM's for Mandarin speech presented in [Chien and Huang 2003 ]. Each Mandarin syllable was modeled by right context-dependent states where each state had, at most, 32 mixture components. 
Each feature vector consisted of twelve Mel-frequency cepstral coefficients, one log energy, and their first derivatives. The maximum a posteriori (MAP) adaptation [Gauvian and Lee 1994] was performed on the initial HMM's using 83 training sentences (about 10 minutes long), from Voice of America (VOA) news, in the TDT2 corpus for corrective training. The additional 49 sentences selected from VOA news were used for speech recognition evaluation. This test set contained 1,852 syllables, with a total length of 6.6 minutes. To reduce the complexity of the tree copy search in decoding a test sentence, we assumed each test sentence corresponded to a single topic, which was assigned according to the nearest neighbor rule. Due to the above complexity, in this study we only implemented the language model by combining bigram and semantic information in our recognizer. Figure 5 displays the character-error rate versus the number of topics. We can see that the character-error rate decreases in the beginning and then increases as the number of topics increases. Basically, more topics provide higher resolution for representing the information source. However, the model with higher resolution requires larger training data for parameter estimation. Otherwise, the overtraining problem occurs and the performance degrades accordingly. The character-error rates used in Wu's method and the proposed method are summarized in Table 5 . In the case of C=50, the proposed LI model can achieve an error-rate reduction of 8.5% compared to the bigram model, while the proposed ME model attains a 16.9% error-rate reduction. The proposed method in general achieves lower error rates compared to Wu's method. To evaluate the statistical significance of performance difference between the proposed method and Wu's method, we applied the matched-pairs test [Gillick and Cox 1989 ] to test the hypothesis that the number of recognition errors that occur when using the proposed method is close to that with Wu's method. In the evaluation, we calculated the difference between character errors induced by Wu's method a E and the proposed method t E for each utterance. If the mean of variable t a z E E = \u2212 was zero, we accepted the conclusion that these two methods are not statistically different. To carry out the test, we calculated the sample mean z \u00b5 and sample variance z \u03c3 from N utterances and determined the test", "cite_spans": [ { "start": 293, "end": 314, "text": "[Chien and Huang 2003", "ref_id": "BIBREF5" }, { "start": 495, "end": 516, "text": "[Chien and Huang 2003", "ref_id": "BIBREF5" }, { "start": 2491, "end": 2512, "text": "[Gillick and Cox 1989", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 1514, "end": 1522, "text": "Figure 5", "ref_id": "FIGREF6" }, { "start": 2069, "end": 2076, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Evaluation of Speech Recognition", "sec_num": "4.3" }, { "text": "statistic ( ) z z N \u03c9 \u00b5 \u03c3 = .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Speech Recognition", "sec_num": "4.3" }, { "text": "Then, we computed the probability 2 Pr( ) P z \u03c9 = \u2265 and compared P with a chosen significance level \u03b1 . When P \u03b1 < , this hypothesis was rejected or, equivalently, the improvement obtained with the proposed method was statistically significant. 
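A minimal sketch of this matched-pairs test under the usual normal approximation is given below; the per-utterance error counts are hypothetical, and the statistic is written here as ω = μ_z √N / σ_z with a two-sided p-value:

```python
import math
import statistics

def matched_pairs_test(errors_a, errors_b):
    # z_i = (errors of method A) - (errors of method B) for each test utterance.
    z = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(z)
    mean_z = statistics.mean(z)
    std_z = statistics.stdev(z)                 # sample standard deviation
    omega = mean_z * math.sqrt(n) / std_z       # test statistic
    # Two-sided p-value under the standard normal approximation.
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(omega) / math.sqrt(2.0))))
    return omega, p_value

# Hypothetical per-utterance character-error counts for two systems.
errors_wu = [5, 3, 4, 6, 2, 5, 4, 3]
errors_proposed = [4, 3, 3, 5, 2, 4, 3, 3]
print(matched_pairs_test(errors_wu, errors_proposed))   # p < 0.05 here -> significant
```
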
In the evaluation, we applied the respective best case of Wu's method and the proposed method (i.e., ME language modeling, and C=30 for Wu's method but C=50 for the proposed method) in the test and obtained a P value of 0.0214. Thus, at the 0.05", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Speech Recognition", "sec_num": "4.3" }, { "text": "\u03b1 =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Speech Recognition", "sec_num": "4.3" }, { "text": "level of significance, the proposed method is better than Wu's method. That is, the proposed LSA based topic extraction is desirable for discovering semantic information for language modeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Speech Recognition", "sec_num": "4.3" }, { "text": "We have presented a new language modeling approach to overcome the drawback of lacking long-distance dependencies in a conventional n-gram model that is due to the assumption of the Markov chain. We introduced a new long-distance semantic information source, called the semantic topic, for knowledge integration. Instead of extracting the topic information from the original document space, we proposed extracting semantic topics from the LSA space. In the constructed LSA space with reduced dimensionality, the latent relation between words and documents was explored. The k-means clustering technique was applied for document clustering. The estimated clusters were representative of semantic topics embedded in general text documents. Accordingly, the topic-dependent unigrams were estimated and combined with the conventional n-grams. When performing knowledge integration, both linear interpolation and maximum entropy approaches were carried out for comparison. Generally speaking, linear interpolation was simpler for implementation. LI combined two information sources through a weighting factor, which was estimated by minimizing the overall perplexity. This weight was optimized globally such that we could not localize the use of weights for different sources. To achieve an optimal combination, the ME principle was applied. Each information source served as a set of constrains to be satisfied for model combination. The IIS algorithm was adopted for constrained optimization. From the experimental results of Chinese document modeling and Mandarin speech recognition, we found that ME semantic language modeling achieved a desirable performance in terms of model perplexity and character-error rates. The combined model, through linear interpolation, achieved about an 8.3% perplexity reduction over the trigram model. The proposed semantic language model did compensate the insufficiency of long-distance information in a conventional n-gram model. Furthermore, the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." } ], "back_matter": [ { "text": "ME semantic language model reduced perplexity by 17.9%. The ME approach did provide a delicate mechanism for model combination. Also, in the evaluation of speech recognition, the ME semantic language model obtained a 16.9% character-error rate reduction over the bigram model. The ME model was better than the LI model for speech recognition. In the future, we will validate the coincidence between the semantic topics discovered by the proposed method and the semantic topics labeled manually. 
We will also extend the evaluation of speech recognition using higher-order n-gram models over a larger collection of speech data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Exploiting latent semantic information in statistical language modeling", "authors": [ { "first": "J", "middle": [], "last": "Bellegarda", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the IEEE", "volume": "88", "issue": "8", "pages": "1279--1296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bellegarda, J., \"Exploiting latent semantic information in statistical language modeling,\" Proceedings of the IEEE, 88(8), 2000, pp. 1279-1296.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A novel word clustering algorithm based on latent semantic analysis", "authors": [ { "first": "J", "middle": [], "last": "Bellegarda", "suffix": "" }, { "first": "J", "middle": [], "last": "Butzberger", "suffix": "" }, { "first": "Y", "middle": [], "last": "Chow", "suffix": "" }, { "first": "N", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "D", "middle": [], "last": "Naik", "suffix": "" } ], "year": 1996, "venue": "IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing", "volume": "1", "issue": "", "pages": "172--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bellegarda, J., J. Butzberger, Y. Chow, N. Coccaro, and D. Naik, \"A novel word clustering algorithm based on latent semantic analysis,\" IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing (ICASSP), 1, 1996, pp. 172-175.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "S", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger, A., S. Della Pietra, and V. Della Pietra, \"A maximum entropy approach to natural language processing,\" Computational Linguistics, 22(1), 1996, pp. 39-71.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using linear algebra for intelligent information retrieval", "authors": [ { "first": "M", "middle": [], "last": "Berry", "suffix": "" }, { "first": "S", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "G", "middle": [], "last": "O'brien", "suffix": "" } ], "year": 1995, "venue": "SIAM Review", "volume": "37", "issue": "4", "pages": "573--595", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berry, M., S. Dumais, and G. O'Brien, \"Using linear algebra for intelligent information retrieval,\" SIAM Review, 37(4), 1995, pp. 573-595.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Structured language modeling", "authors": [ { "first": "C", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 2000, "venue": "Computer Speech and Language", "volume": "14", "issue": "4", "pages": "283--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chelba, C. and F. Jelinek, \"Structured language modeling,\" Computer Speech and Language, 14(4), 2000, pp. 
283-332.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bayesian learning of speech duration model", "authors": [ { "first": "J.-T", "middle": [], "last": "Chien", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2003, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "11", "issue": "6", "pages": "558--567", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien, J.-T., and C.-H. Huang, \"Bayesian learning of speech duration model,\" IEEE Transactions on Speech and Audio Processing, 11(6), 2003, pp. 558-567.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mining of association patterns for language modeling", "authors": [ { "first": "J.-T", "middle": [], "last": "Chien", "suffix": "" }, { "first": "H.-Y.", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2004, "venue": "Proc. International Conference on Spoken Language Processing (ICSLP)", "volume": "2", "issue": "", "pages": "1369--1372", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien, J.-T., and H.-Y. Chen, \"Mining of association patterns for language modeling,\" Proc. International Conference on Spoken Language Processing (ICSLP), 2, 2004, pp. 1369-1372.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Latent semantic language modeling and smoothing", "authors": [ { "first": "J.-T", "middle": [], "last": "Chien", "suffix": "" }, { "first": "M.-S", "middle": [], "last": "Wu", "suffix": "" }, { "first": "H.-J", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2004, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "9", "issue": "2", "pages": "29--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien, J.-T., M.-S. Wu, and H.-J. Peng, \"Latent semantic language modeling and smoothing,\" International Journal of Computational Linguistics and Chinese Language Processing, 9(2), 2004, pp. 29-44.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A maximum entropy approach for integrating semantic information in statistical language models", "authors": [ { "first": "C.-H", "middle": [], "last": "Chueh", "suffix": "" }, { "first": "J.-T", "middle": [], "last": "Chien", "suffix": "" }, { "first": "H", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2004, "venue": "Proc. International Symposium on Chinese Spoken Language Processing", "volume": "", "issue": "", "pages": "309--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chueh, C.-H., J.-T. Chien, and H. Wang, \"A maximum entropy approach for integrating semantic information in statistical language models,\" Proc. International Symposium on Chinese Spoken Language Processing (ISCSLP), 2004, pp. 309-312.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The TDT-2 text and speech corpus", "authors": [ { "first": "C", "middle": [], "last": "Cieri", "suffix": "" }, { "first": "D", "middle": [], "last": "Graff", "suffix": "" }, { "first": "M", "middle": [], "last": "Liberman", "suffix": "" }, { "first": "N", "middle": [], "last": "Martey", "suffix": "" }, { "first": "S", "middle": [], "last": "Strassel", "suffix": "" } ], "year": 1999, "venue": "Proc. of the DARPA Broadcast News Workshop", "volume": "", "issue": "", "pages": "28--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cieri, C., D. Graff, M. Liberman, N. Martey, and S. Strassel, \"The TDT-2 text and speech corpus,\" Proc. 
of the DARPA Broadcast News Workshop, 28Feb-3Mar 1999.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Language model adaptation using mixtures and an exponential decay cache", "authors": [ { "first": "P", "middle": [], "last": "Clarkson", "suffix": "" }, { "first": "A", "middle": [], "last": "Robinson", "suffix": "" } ], "year": 1997, "venue": "IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing (ICASSP)", "volume": "2", "issue": "", "pages": "799--802", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clarkson, P., and A. Robinson, \"Language model adaptation using mixtures and an exponential decay cache,\" IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing (ICASSP), 2, 1997, pp. 799-802.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generalized iterative scaling for log-linear models", "authors": [ { "first": "J", "middle": [], "last": "Darroch", "suffix": "" }, { "first": "D", "middle": [], "last": "Ratcliff", "suffix": "" } ], "year": 1972, "venue": "The Annals of Mathematical Statistics", "volume": "43", "issue": "", "pages": "1470--1480", "other_ids": {}, "num": null, "urls": [], "raw_text": "Darroch, J., and D. Ratcliff, \"Generalized iterative scaling for log-linear models,\" The Annals of Mathematical Statistics, 43, 1972, pp. 1470-1480.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "S", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "S", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "G", "middle": [], "last": "Furnas", "suffix": "" }, { "first": "T", "middle": [], "last": "Landauer", "suffix": "" }, { "first": "R", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Society of Information Science", "volume": "41", "issue": "", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deerwester, S., S. Dumais, G. Furnas, T. Landauer, and R. Harshman, \"Indexing by latent semantic analysis,\" Journal of the American Society of Information Science, 41, 1990, pp. 391-407.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Inducing features of random field", "authors": [ { "first": "S", "middle": [], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "19", "issue": "4", "pages": "380--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Della Pietra, S., V. Della Pietra, and J. Lafferty, \"Inducing features of random field,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4), 1997, pp. 380-393.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adaptive language modeling using minimum discriminant estimation", "authors": [ { "first": "S", "middle": [], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 1992, "venue": "IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing", "volume": "1", "issue": "", "pages": "633--636", "other_ids": {}, "num": null, "urls": [], "raw_text": "Della Pietra, S., V. Della Pietra, R. 
Mercer, and S. Roukos, \"Adaptive language modeling using minimum discriminant estimation,\" IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing (ICASSP), 1, 1992, pp. 633-636.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dempster, A., N. Laird, and D.Rubin, \"Maximum likelihood from incomplete data via the EM algorithm,\" Journal of the Royal Statistical Society, 39(1), 1977, pp. 1-38.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Efficient language model adaptation through MDI estimation", "authors": [ { "first": "M", "middle": [], "last": "Federico", "suffix": "" } ], "year": 1999, "venue": "Proceedings of European Conference on Speech Communication and Technology (EUROSPEECH)", "volume": "", "issue": "", "pages": "1583--1586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Federico, M., \"Efficient language model adaptation through MDI estimation,\" Proceedings of European Conference on Speech Communication and Technology (EUROSPEECH), 1999, pp. 1583-1586.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Dynamic nonlocal language modeling via hierarchical topic-based adaptation", "authors": [ { "first": "R", "middle": [], "last": "Florian", "suffix": "" }, { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1999, "venue": "Proc. 37 th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "167--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian, R., and D. Yarowsky, \"Dynamic nonlocal language modeling via hierarchical topic-based adaptation,\" Proc. 37 th Annual Meeting of ACL, 1999, pp. 167-174.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Maximum a posteriori estimation for multivariate Gaussian mixture observation of Markov chain", "authors": [ { "first": "J.-L", "middle": [], "last": "Gauvain", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1994, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "2", "issue": "4", "pages": "291--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gauvain, J.-L., and C.-H. Lee, \"Maximum a posteriori estimation for multivariate Gaussian mixture observation of Markov chain,\" IEEE Transactions on Speech and Audio Processing, 2(4), 1994, pp. 291-298.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Some statistical issues in the comparison of speech recognition algorithms", "authors": [ { "first": "L", "middle": [], "last": "Gillick", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Cox", "suffix": "" } ], "year": 1989, "venue": "IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing", "volume": "", "issue": "", "pages": "532--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gillick, L., and S. J. Cox, \"Some statistical issues in the comparison of speech recognition algorithms,\" IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing (ICASSP), 1989, pp. 
532-535.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Probabilistic latent semantic indexing", "authors": [ { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 1999, "venue": "Proc. 22 nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofmann, T., \"Probabilistic latent semantic indexing,\" Proc. 22 nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1999, pp. 50-57.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Information theory and statistical mechanics", "authors": [ { "first": "E", "middle": [], "last": "Jaynes", "suffix": "" } ], "year": 1957, "venue": "Physics Reviews", "volume": "106", "issue": "4", "pages": "620--630", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaynes, E., \"Information theory and statistical mechanics,\" Physics Reviews, 106(4), 1957, pp. 620-630.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Interpolated estimation of Markov source parameters from sparse data", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1980, "venue": "Proc. Workshop on Pattern Recognition in Practice", "volume": "", "issue": "", "pages": "381--402", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jelinek, F., and R. L. Mercer, \"Interpolated estimation of Markov source parameters from sparse data,\" Proc. Workshop on Pattern Recognition in Practice, 1980, pp. 381-402.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling", "authors": [ { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "J", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2000, "venue": "Computer Speech and Language", "volume": "14", "issue": "", "pages": "355--372", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khudanpur, S., and J. Wu, \"Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling,\" Computer Speech and Language, 14, 2000, pp. 355-372.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A cache based natural language model for speech recognition", "authors": [ { "first": "R", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "R", "middle": [], "last": "De Mori", "suffix": "" } ], "year": 1992, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "12", "issue": "6", "pages": "570--583", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuhn, R., and R. de Mori, \"A cache based natural language model for speech recognition,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(6), 1992, pp. 570-583.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A language modeling approach for information retrieval", "authors": [ { "first": "J", "middle": [], "last": "Ponte", "suffix": "" }, { "first": "W", "middle": [], "last": "Croft", "suffix": "" } ], "year": 1998, "venue": "Proc. ACM SIGIR on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "275--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ponte, J., and W. Croft, \"A language modeling approach for information retrieval,\" Proc. 
ACM SIGIR on Research and Development in Information Retrieval, 1998, pp. 275-281.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A maximum entropy approach to adaptive statistical language modeling", "authors": [ { "first": "R", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1996, "venue": "Computer Speech and Language", "volume": "10", "issue": "", "pages": "187--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosenfeld, R., \"A maximum entropy approach to adaptive statistical language modeling,\" Computer Speech and Language, 10, 1996, pp. 187-228.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning mixture models with the regularized latent maximum entropy principle", "authors": [ { "first": "S", "middle": [], "last": "Wang", "suffix": "" }, { "first": "D", "middle": [], "last": "Schuurmans", "suffix": "" }, { "first": "F", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2004, "venue": "IEEE Transactions on Neural Networks", "volume": "15", "issue": "4", "pages": "903--916", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, S., D. Schuurmans, F. Peng, and Y. Zhao, \"Learning mixture models with the regularized latent maximum entropy principle,\" IEEE Transactions on Neural Networks, 15(4), 2004, pp. 903-916.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Semantic n-gram language modeling with the latent maximum entropy principle", "authors": [ { "first": "S", "middle": [], "last": "Wang", "suffix": "" }, { "first": "D", "middle": [], "last": "Schuurmans", "suffix": "" }, { "first": "F", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2003, "venue": "IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing", "volume": "1", "issue": "", "pages": "376--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, S., D. Schuurmans, F. Peng, and Y. Zhao, \"Semantic n-gram language modeling with the latent maximum entropy principle,\" IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing (ICASSP), 1, 2003, pp. 376-379.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Building a topic-dependent maximum entropy model for very large corpora", "authors": [ { "first": "J", "middle": [], "last": "Wu", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2002, "venue": "IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing", "volume": "1", "issue": "", "pages": "777--780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, J., and S. Khudanpur, \"Building a topic-dependent maximum entropy model for very large corpora,\" IEEE Proceedings of International Conference on Acoustic, Speech and Signal Processing (ICASSP), 1, 2002, pp. 777-780.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Interpolation of n-gram and mutual-information based trigger pair language models for Mandarin speech recognition", "authors": [ { "first": "G", "middle": [ "D" ], "last": "Zhou", "suffix": "" }, { "first": "K", "middle": [ "T" ], "last": "Lua", "suffix": "" } ], "year": 1999, "venue": "Computer Speech and Language", "volume": "13", "issue": "", "pages": "125--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou, G. D., and K. T. 
Lua, \"Interpolation of n-gram and mutual-information based trigger pair language models for Mandarin speech recognition,\" Computer Speech and Language, 13, 1999, pp. 125-141.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "in the training data.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Implementation procedure for ME semantic topic modeling", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "where j d , kt are the vectors constructed by document j and document cluster k, respectively. the projected vectors in the semantic space. By assigning topics to different documents, we can estimate the topic-dependent unigram ( ) into the n-gram model.", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Figure 2. Log-Likelihood of training data versus the number of IIS iterations", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "Perplexity of the LI model versus the length of history", "type_str": "figure", "uris": null }, "FIGREF5": { "num": null, "text": "Perplexity of the ME model versus the length of history", "type_str": "figure", "uris": null }, "FIGREF6": { "num": null, "text": "Character error rate (%) versus the number of topics", "type_str": "figure", "uris": null }, "TABREF1": { "html": null, "content": "
Objective function | L(p_λ) | H(p)
Criterion | Maximum likelihood | Maximum entropy
Type of search | Unconstrained optimization | Constrained optimization
Search space | λ ∈ real values | p satisfying the constraints
Solution | λ_ML | p_ME
Equivalence | p_{λ_ML} = p_ME (the ML and ME solutions coincide)
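A compact restatement of the duality summarized in this table (a sketch in generic maximum entropy notation; the feature functions f_i, weights λ_i, and empirical distribution p̃ are illustrative symbols, not taken from the table): the constrained entropy maximization and the unconstrained likelihood maximization over the corresponding exponential family share the same solution,
\[ p_\lambda(x) = \frac{1}{Z_\lambda}\exp\Big(\sum_i \lambda_i f_i(x)\Big), \qquad Z_\lambda = \sum_x \exp\Big(\sum_i \lambda_i f_i(x)\Big), \]
\[ p_{ME} = \operatorname*{argmax}_{p:\; E_p[f_i]=E_{\tilde p}[f_i]\ \forall i} H(p) \;=\; p_{\lambda_{ML}}, \qquad \lambda_{ML} = \operatorname*{argmax}_\lambda \sum_x \tilde p(x)\log p_\lambda(x). \]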
", "num": null, "text": "", "type_str": "table" }, "TABREF5": { "html": null, "content": "
Bigram baseline perplexity: 451.4 (does not depend on the number of topics C)

Number of topics | Wu's method (LI) | Wu's method (ME) | Proposed method (LI) | Proposed method (ME)
C=30  | 444.7 | 399   | 441   | 393.7
C=50  | 442.9 | 402   | 438   | 394.8
C=100 | 437   | 397.2 | 435.7 | 401.2
Table 4. Comparison of perplexity for trigram, LI and ME semantic language models
Trigram baseline perplexity: 376.6 (does not depend on the number of topics C)

Number of topics | Wu's method (LI) | Wu's method (ME) | Proposed method (LI) | Proposed method (ME)
C=30  | 355.1 | 317.1 | 349.7 | 311.9
C=50  | 353.3 | 315.9 | 347.1 | 310.4
C=100 | 347.1 | 309.9 | 345.3 | 309.3
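As an illustrative reading of Table 4 (arithmetic added for clarity, not part of the original table): at C=100 the proposed ME model lowers perplexity from the trigram baseline of 376.6 to 309.3, a relative reduction of
\[ \frac{376.6 - 309.3}{376.6} = \frac{67.3}{376.6} \approx 0.179, \]
i.e., roughly a 17.9% relative perplexity reduction.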
", "num": null, "text": "", "type_str": "table" }, "TABREF6": { "html": null, "content": "
Bigram baseline character error rate (%): 41.4 (does not depend on the number of topics C)

Number of topics | Wu's method (LI) | Wu's method (ME) | Proposed method (LI) | Proposed method (ME)
C=30  | 38.9 | 36.4 | 36.7 | 34.9
C=50  | 38.1 | 36.8 | 37.9 | 34.4
C=100 | 38.3 | 36.5 | 37.3 | 36.1
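As an illustrative reading of this table (arithmetic added for clarity, not part of the original table): at C=50 the proposed ME model lowers the character error rate from the bigram baseline of 41.4% to 34.4%, a relative reduction of
\[ \frac{41.4 - 34.4}{41.4} = \frac{7.0}{41.4} \approx 0.169, \]
i.e., roughly a 16.9% relative reduction in character error rate.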
", "num": null, "text": "", "type_str": "table" } } } }