{ "paper_id": "O06-4002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:07:12.779878Z" }, "title": "An Empirical Study of Word Error Minimization Approaches for Mandarin Large Vocabulary Continuous Speech Recognition", "authors": [ { "first": "Jen-Wei", "middle": [], "last": "Kuo", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Shih-Hung", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Hsin-Min", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents an empirical study of word error minimization approaches for Mandarin large vocabulary continuous speech recognition (LVCSR). First, the minimum phone error (MPE) criterion, which is one of the most popular discriminative training criteria, is extensively investigated for both acoustic model training and adaptation in a Mandarin LVCSR system. Second, the word error minimization (WEM) criterion, used to rescore N-best word strings, is appropriately modified for a Mandarin LVCSR system. Finally, a series of speech recognition experiments is conducted on the MATBN Mandarin Chinese broadcast news corpus. The experiment results demonstrate that the MPE training approach reduces the character error rate (CER) by 12% for a system initially trained with the maximum likelihood (ML) approach. 
Meanwhile, for unsupervised acoustic model adaptation, MPE-based linear regression (MPELR) adaptation outperforms conventional maximum likelihood linear regression (MLLR) in terms of CER reduction. When the WEM decoding approach is used for N-best rescoring, a slight performance gain over the conventional maximum a posteriori (MAP) decoding method is also observed.", "pdf_parse": { "paper_id": "O06-4002", "_pdf_hash": "", "abstract": [ { "text": "This paper presents an empirical study of word error minimization approaches for Mandarin large vocabulary continuous speech recognition (LVCSR). First, the minimum phone error (MPE) criterion, which is one of the most popular discriminative training criteria, is extensively investigated for both acoustic model training and adaptation in a Mandarin LVCSR system. Second, the word error minimization (WEM) criterion, used to rescore N-best word strings, is appropriately modified for a Mandarin LVCSR system. Finally, a series of speech recognition experiments is conducted on the MATBN Mandarin Chinese broadcast news corpus. The experiment results demonstrate that the MPE training approach reduces the character error rate (CER) by 12% for a system initially trained with the maximum likelihood (ML) approach. Meanwhile, for unsupervised acoustic model adaptation, MPE-based linear regression (MPELR) adaptation outperforms conventional maximum likelihood linear regression (MLLR) in terms of CER reduction. When the WEM decoding approach is used for N-best rescoring, a slight performance gain over the conventional maximum a posteriori (MAP) decoding method is also observed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Due to advances in computer technology and the growth of the Internet, large volumes of multimedia content, such as broadcast news, lectures, voice mails, and digital archives continue to grow and fill our computers, networks, and lives. 
It is obvious that speech is the richest source of information for the large volumes of multimedia content; thus, associated speech processing technologies will play an increasingly important role in multimedia organization and retrieval in the future. Among these technologies, automatic speech recognition (ASR) has long been the focus of research in the speech processing community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Automatic speech recognition is a pattern classification task that classifies sound segments into different linguistic categories based on the acoustic vector sequence extracted from the speech signal. Traditionally, in most pattern classification applications, the goal of classifier design is to reduce the probability of errors by using the minimum error rate (MER) criterion [Duda et al. 2000] . Under this paradigm, the problems of classifier optimization are resolved by minimizing the expected loss over the training data directly. The zero-one loss function, which simply assigns no loss to a correct classification and a unit loss to an error, is often employed for this purpose. For example, in ASR, a hypothesized word sequence containing one or more word errors, or a totally different sequence, as compared to the correct sequence, will incur the same amount of loss. However, the most common performance evaluation metrics adopted in ASR often consider individual word errors, instead of merely counting the string-level errors. The use of the zero-one loss function leads to a mismatch between classifier optimization and performance evaluation. In recent years, a common practice in ASR has been to replace the zero-one loss function with alternative loss functions that consider word-or phone-level errors. 
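The mismatch described here can be made concrete with a small sketch (the sentences below are invented for illustration): under a zero-one loss, a hypothesis with a single word error and a completely wrong hypothesis incur identical loss, whereas a word-level Levenshtein loss grades them by their actual number of word errors.

```python
# Sketch (not from the paper): zero-one loss vs. word-level Levenshtein loss.

def zero_one_loss(hyp, ref):
    """Unit loss for any mismatch, regardless of how close hyp is to ref."""
    return 0 if hyp == ref else 1

def levenshtein(hyp, ref):
    """Standard edit distance over word tokens (sub/ins/del each cost 1)."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

ref = "the cat sat on the mat".split()
near_miss = "the cat sat on a mat".split()                      # one substitution
garbage = "completely unrelated words here entirely now".split()

# Zero-one loss cannot tell the two errorful hypotheses apart...
assert zero_one_loss(near_miss, ref) == zero_one_loss(garbage, ref) == 1
# ...but the word-level Levenshtein loss can.
assert levenshtein(near_miss, ref) == 1
assert levenshtein(garbage, ref) == 6
```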
In practice, such improved loss functions can be used in both model parameter estimation (i.e., classifier optimization) and speech decoding.", "cite_spans": [ { "start": 379, "end": 397, "text": "[Duda et al. 2000]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we present an empirical study of word error minimization approaches for Mandarin large vocabulary continuous speech recognition (LVCSR). The minimum phone error (MPE) criterion is extensively investigated in both acoustic model training and adaptation; while the word error minimization (WEM) criterion is exploited to rescore N-best word strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The remainder of the paper is organized as follows. In Section 2, the general background of the Bayes risk and overall risk criteria is given, and their use in ASR is explained. Section 3 presents the application of the MPE criterion for acoustic model training, and Section 4 describes its extension to unsupervised linear regression based acoustic model adaptation. The use of the WEM criterion for speech decoding is discussed in Section 5. The experiment setup is detailed in Section 6 and a series of speech recognition experiments is described in Section 7. Finally, we present the conclusions drawn from the research in Section 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Given an acoustic vector sequence O , the goal of an ASR system is to make a decision ( ) u O \u03b1 that identifies O as a certain word sequence u from a hypothesized space h W of all possible word sequences in the language. Let ( , ) L u c be the loss incurred by the decision ( ) u O \u03b1 , where the correct (i.e., reference) transcription is c . 
In practice, however, we have no prior knowledge of the correct transcription; in other words, any arbitrary word sequence s in W_h could be identical to c . Consequently, for each possible decision \u03b1_u(O), the expected loss (or risk) is calculated as [Duda et al. 2000] :", "cite_spans": [ { "start": 587, "end": 605, "text": "[Duda et al. 2000]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R(\\alpha_u(O) \\mid O) = \\sum_{s \\in \\mathcal{W}_h} L(u, s) P(s \\mid O) ,", "eq_num": "(1)" } ], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "where P(s|O) is the posterior probability of the word sequence s given that the acoustic vector sequence O is observed. Therefore, the Bayes decision \u03b1_opt(O) is made by selecting the action with the minimum expected loss, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\alpha_{opt}(O) = \\arg\\min_{u \\in \\mathcal{W}_h} R(\\alpha_u(O) \\mid O) = \\arg\\min_{u \\in \\mathcal{W}_h} \\sum_{s \\in \\mathcal{W}_h} L(u, s) P(s \\mid O) .", "eq_num": "(2)" } ], "section": "Bayes Risk and Overall Risk", "sec_num": "2."
}, { "text": "In supervised training, on the other hand, the correct transcription of each training utterance O is known, and the overall risk R_all over all possible training utterances is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R_{all} = \\int R(\\alpha_c(O) \\mid O) P(O) \\, dO ,", "eq_num": "(3)" } ], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "where the integral extends over the whole acoustic space. However, in practice, we can only obtain the approximate overall risk R_all by summing the risks over a finite number of training utterances, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R_{all} = \\sum_r R(\\alpha_{c_r}(O_r) \\mid O_r) P(O_r) = \\sum_r \\sum_{s \\in \\mathcal{W}_h^r} L(c_r, s) P(s \\mid O_r) P(O_r) ,", "eq_num": "(4)" } ], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "where W_h^r and c_r, respectively, denote a set of likely hypothesized word sequences and the reference word sequence associated with the training utterance O_r; and the distribution P(s|O_r) is always assumed to be governed by some underlying parametric distributions. To ensure that ASR is as accurate as possible, we need to design a classifier and estimate the parameters in P(s|O_r) more carefully in order to minimize the overall risk R_all. By applying the Bayes rule and replacing the probability P(O_r|s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2."
}, { "text": "with its parameterization p_\u03bb(O_r|s), Eq. (4) can be expressed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R_{all} = \\sum_r P(O_r) \\frac{\\sum_{s \\in \\mathcal{W}_h^r} L(c_r, s) p_\\lambda(O_r \\mid s) P(s)}{\\sum_{u \\in \\mathcal{W}_h^r} p_\\lambda(O_r \\mid u) P(u)} ,", "eq_num": "(5)" } ], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "where p_\u03bb(O_r|s) and p_\u03bb(O_r|u) are, respectively, the acoustic model likelihoods for s and u under the acoustic model parameter set \u03bb; and P(s) and P(u) are the respective language model probabilities for s and u. The parameters of both the acoustic model and the language model can be estimated by minimizing R_all. However, in this study, we only focus on the discriminative estimation of the acoustic model parameters, and adopt the conventional approach for language model training. Moreover, it is assumed that the prior probability P(O_r) is uniformly distributed. 
As a result, the overall risk becomes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R_{all} = \\sum_r \\frac{\\sum_{s \\in \\mathcal{W}_h^r} L(c_r, s) p_\\lambda(O_r \\mid s) P(s)}{\\sum_{u \\in \\mathcal{W}_h^r} p_\\lambda(O_r \\mid u) P(u)} ,", "eq_num": "(6)" } ], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "and the optimal parameter set, \u03bb_opt, can be estimated by minimizing the overall risk of the training utterances", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\lambda_{opt} = \\arg\\min_\\lambda \\sum_r \\frac{\\sum_{s \\in \\mathcal{W}_h^r} L(c_r, s) p_\\lambda(O_r \\mid s) P(s)}{\\sum_{u \\in \\mathcal{W}_h^r} p_\\lambda(O_r \\mid u) P(u)} .", "eq_num": "(7)" } ], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "To minimize the overall risk, as shown by Equations (4) to (7), the hypothesized word sequence with a lower loss should have a larger posterior probability, and vice versa. How to select an appropriate loss function L(\u22c5,\u22c5) used in the above equations remains an open research issue. In most pattern classification tasks, to minimize the probability of classification errors, the loss function is often chosen based on the minimum error rate (MER) criterion. This leads directly to the following symmetrical zero-one loss function [Duda et al. 2000]:", "cite_spans": [ { "start": 526, "end": 544, "text": "[Duda et al. 2000]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2."
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(u, s) = \\begin{cases} 0, & u = s \\\\ 1, & u \\neq s \\end{cases} .", "eq_num": "(8)" } ], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "The loss function assigns no loss if u = s, and assigns a unit loss when a classification error occurs. In ASR, a hypothesized word sequence that is identical to the correct transcription does not introduce a loss; however, a hypothesized word sequence containing one or more", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes Risk and Overall Risk", "sec_num": "2." }, { "text": "word errors, or a totally different sequence altogether, will incur the same unit loss. Thus, minimizing the overall risk is equivalent to minimizing the expected string error rate (SER) of the training utterances. Nevertheless, SER is not a sufficient metric for the evaluation of ASR performance because, under this metric, all incorrectly hypothesized word sequences are regarded as carrying the same recognition risk. Instead, the loss function can be defined as the distance between the hypothesized word sequence and the correct transcription. For this purpose, the string edit or Levenshtein distance [Levenshtein 1966] associated with the word error rate (WER) can be adopted. It is believed that WER is more suitable than SER for reflecting differences in ASR results. 
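As a toy illustration (the strings and posterior values below are invented, and the posteriors are assumed to be already normalized over the list), the Bayes decision rule of Eq. (2) with a word-level Levenshtein loss can be contrasted with MAP decoding over a small N-best list: the MAP winner may be an isolated outlier, while the minimum-expected-loss choice agrees with the bulk of the probability mass.

```python
# Hypothetical sketch: MAP decoding vs. minimum-expected-loss (word error
# minimization style) decoding over an invented N-best list.

def edit_distance(a, b):
    """Word-level Levenshtein distance via a single rolling row."""
    a, b = a.split(), b.split()
    d = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, y in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (x != y))
    return d[-1]

# Invented N-best list with posteriors P(s|O).
nbest = {
    "x y z w": 0.35,   # most probable single string, but isolated
    "a b c d": 0.25,
    "a b c e": 0.20,
    "a b c f": 0.20,
}

# MAP decoding: pick the single most probable word string.
map_choice = max(nbest, key=nbest.get)

def expected_loss(u):
    # Eq. (1): R(u|O) = sum_s L(u, s) P(s|O), with L = Levenshtein distance.
    return sum(edit_distance(u, s) * p for s, p in nbest.items())

# Minimum-expected-loss decoding over the same list (Eq. (2)).
wem_choice = min(nbest, key=expected_loss)

assert map_choice == "x y z w"   # MAP prefers the isolated outlier...
assert wem_choice == "a b c d"   # ...expected-loss decoding does not.
```

Because most of the probability mass sits on near-identical "a b c *" strings, their expected word-level loss is low even though none of them is individually the most probable.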
Optimization using the Levenshtein-based loss function is often referred to as word error minimization (WEM).", "cite_spans": [ { "start": 632, "end": 649, "text": "[Levenshtein 1966", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Mandarin Large Vocabulary Continuous Speech Recognition", "sec_num": null }, { "text": "However, in complicated ASR tasks, such as LVCSR, it is impossible to perform optimization over the hypothesized space r h W of each training utterance r O without using a pruning technique because such hypothesized spaces usually contain an extremely large number of hypothesized word sequences. Recently, some practical strategies have been proposed to resolve this problem. For instance, a reduced hypothesized space in the form of an N-best list [Schwartz and Chow 1990] or a lattice [Ortmanns 1997 ] can be generated for each training utterance by only retaining recognized hypotheses with higher probabilities. The optimization process can then be applied efficiently to the reduced hypothesized space. ", "cite_spans": [ { "start": 450, "end": 474, "text": "[Schwartz and Chow 1990]", "ref_id": "BIBREF24" }, { "start": 488, "end": 502, "text": "[Ortmanns 1997", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Mandarin Large Vocabulary Continuous Speech Recognition", "sec_num": null }, { "text": "This section describes in detail the application of the minimum phone error (MPE) criterion to acoustic model training. As mentioned in the previous section, the hypothesized space r h W of a given training utterance r O can be reduced to a smaller space represented by a number of the most likely hypothesized word sequences associated with r O . 
The N-best list contains the N most likely sequences generated by applying the Viterbi algorithm, which has to retain at least N-best search hypotheses at both the HMM (Hidden Markov Model) acoustic model-level and word-level recombination points during the speech decoding process. For each hypothesized word sequence on the N-best list, it is relatively easy to compute the standard Levenshtein distance to the correct transcription directly. Based on this observation, Kaiser et al. proposed overall risk criterion estimation (ORCE) for acoustic model training [Kaiser et al. , 2002 Na et al. 1995] . This approach takes the N-best list as the reduced hypothesized space to obtain training statistics, and applies the extended Baum-Welch algorithm [Gopalakrishnan et al. 1991; Normandin 1991] for parameter optimization. In experiments on the TIMIT database, the authors achieved a 21% word error rate reduction compared to the baseline system. However, an N-best list usually contains too much redundant information, i.e., two hypothesized word sequences may look very similar, which makes the training procedure inefficient. An alternative representation is the word lattice (or graph), illustrated in Figure 1 , which only stores hypothesized word arcs at different segments of the time frames. Although it cannot be guaranteed that all word sequences generated from a word lattice will have higher probabilities than those not presented, it is believed that the approximation will not affect the performance significantly. Nevertheless, for the lattice structure, using the standard Levenshtein distance measure as the loss function is an issue, since it makes the implementation of computing the distance more complicated. Recently, two approaches have been proposed to deal with this problem. One focuses on how to design loss functions that approximate the Levenshtein distance measure, such as MPE training. 
The other concentrates on the design of algorithms to segment the word lattice so as to make the computation of the Levenshtein distance feasible, such as the minimum Bayes risk discriminative training (MBRDT) approach [Doumpiotis et al. , 2004 . To efficiently reduce the complexity of the hypothesized space in MBRDT, a lattice segmentation algorithm is applied to divide the lattice into several non-overlapping components. It has been shown that MBRDT achieves a considerable performance improvement over the baseline system trained with the maximum likelihood (ML) criterion.", "cite_spans": [ { "start": 912, "end": 933, "text": "[Kaiser et al. , 2002", "ref_id": null }, { "start": 934, "end": 949, "text": "Na et al. 1995]", "ref_id": "BIBREF16" }, { "start": 1099, "end": 1127, "text": "[Gopalakrishnan et al. 1991;", "ref_id": "BIBREF8" }, { "start": 1128, "end": 1143, "text": "Normandin 1991]", "ref_id": "BIBREF17" }, { "start": 2486, "end": 2511, "text": "[Doumpiotis et al. , 2004", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 1555, "end": 1563, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "The MPE training approach, which is one of the most attractive discriminative training techniques, tries to optimize an acoustic model's parameters by minimizing the expected phone error rate. The objective function of MPE is given as [Povey 2004] :", "cite_spans": [ { "start": 235, "end": 247, "text": "[Povey 2004]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F_{MPE}(\\lambda) = \\sum_r \\frac{\\sum_{s \\in \\mathcal{W}_{lat}^r} p_\\lambda(O_r \\mid s) P(s) A(c_r, s)}{\\sum_{u \\in \\mathcal{W}_{lat}^r} p_\\lambda(O_r \\mid u) P(u)} ,", "eq_num": "(9)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "where W_lat^r is the lattice generated by the speech recognizer, used to represent a reduced hypothesized space of word sequences; and A(c_r, s) is the raw accuracy of the word sequence s, which is an approximation of the true accuracy computed globally using the standard Levenshtein distance. It is obvious that maximizing this objective function is equivalent to minimizing the expected phone error. The raw accuracy A(c_r, s) is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A(c_r, s) = \\sum_{q \\in s} A'(c_r, q) ,", "eq_num": "(10)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "where q is a phone involved in s, and A'(c_r, q) is a local function used to calculate the raw phone accuracy of each phone q in s. The phone accuracy is calculated locally on each phone arc of the word lattice, instead of globally on each hypothesized word sequence. 
Given a word arc on the word lattice, the time boundaries of its phone arcs can be determined by aligning the corresponding speech segment with its constituent HMM acoustic models. Figure 2 shows the calculation of raw phone accuracy. Notice that we adopt INITIAL/FINAL units instead of phone units as the acoustic units in our Mandarin LVCSR system. Therefore, for simplicity, each INITIAL or FINAL unit is regarded as a phone in the following elucidation. In Figure 2 , the raw phone accuracy of the phone \"au\" involved in the word arc \"\u597d\u5728\" is calculated in the following steps. First, the word arc \"\u597d\u5728\" is aligned with the time boundaries of a phone sequence to obtain the start and end time boundaries of the phone \"au\". Second, for each phone q\u2032 in the correct transcription, we calculate the portion of \"au\" overlapped by q\u2032 in time frames, and denote it as e(q\u2032,\"au\")", "cite_spans": [], "ref_spans": [ { "start": 401, "end": 410, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 704, "end": 712, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": ". Finally, the raw phone accuracy of the phone \"au\", i.e., A'(c_r,\"au\"), is calculated using the following formula: A'(c_r,\"au\") = max over q\u2032 of { -1 + 2 e(q\u2032,\"au\"), if q\u2032 is also labeled \"au\"; -1 + e(q\u2032,\"au\"), otherwise } . (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "It is obvious that A'(c_r,\"au\") ranges from -1 + 1/T_r to 1, where T_r is the length of the observation O_r in terms of time frames. For example, if the phone arc \"au\" overlaps in time with at least one phone q\u2032 of the same identity in the correct transcription, \"au\" is considered to be a correct phone, i.e., A'(c_r,\"au\") = 1. 
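The local raw-accuracy computation just described can be sketched as follows. All frame boundaries and phone labels here are invented, and e(q', q) is taken to be the proportion of the reference phone q' that the hypothesized arc q overlaps in time; a matching label contributes -1 + 2e(q', q) and a mismatching one -1 + e(q', q), with the arc scoring the maximum over reference phones.

```python
# Illustrative sketch (frame times and labels invented) of the local raw
# phone accuracy described above.

def raw_phone_accuracy(hyp_arc, reference):
    """hyp_arc: (label, start_frame, end_frame); reference: list of such triples."""
    label, start, end = hyp_arc
    best = float("-inf")
    for ref_label, r_start, r_end in reference:
        overlap = max(0, min(end, r_end) - max(start, r_start))
        e = overlap / (r_end - r_start)  # fraction of reference phone covered by the arc
        score = -1 + 2 * e if ref_label == label else -1 + e
        best = max(best, score)
    return best

# Invented reference transcription as (phone, start_frame, end_frame).
reference = [("h", 0, 10), ("au", 10, 30), ("ts", 30, 40), ("ai", 40, 60)]

# An arc "au" exactly covering the reference "au" scores -1 + 2*(1.0) = 1.0 ...
assert raw_phone_accuracy(("au", 10, 30), reference) == 1.0
# ... while one overlapping only half of it scores -1 + 2*(0.5) = 0.0.
assert raw_phone_accuracy(("au", 20, 30), reference) == 0.0
```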
Figure 3 compares the accuracy of a hypothesized word sequence obtained via the approximate function discussed here and the exact calculation using the Levenshtein distance.", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 348, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "According to Povey's work [Povey 2004], the auxiliary function for optimizing the objective function of MPE in Eq. (9) is", "cite_spans": [ { "start": 26, "end": 37, "text": "[Povey 2004", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H_{MPE}(\\lambda, \\lambda') = \\sum_r \\sum_{q \\in \\mathcal{W}_{lat}^r} \\frac{\\partial F_{MPE}(\\lambda')}{\\partial \\log p_{\\lambda'}(O_r \\mid q)} \\log p_\\lambda(O_r \\mid q) ,", "eq_num": "(12)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3."
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( , ) ( )log ( ( ), , ) q r q lat t e rMPE r MPE q qm r m m r t s m q g t N o t \u03bb \u03bb \u03b3 \u03b3 \u00b5 = = \u2208 = \u03a3 \u2211 \u2211 \u2211 \u2211 W ,", "eq_num": "(14)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "where ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( , ) ( | ) ( ) ( ) log ( | ) ( | ) ( ) ( | ) ( ) ( | ) ( ) ( , ) ( | ) ( ) ( | ) ( ) r r lat lat r r lat lat r lat r lat r r r v q v u q u MPE r r r u q u u r r r v u r u p O v P v A v s p O u P u F p O q p O u P u p O u P u p O v P v A v s p O u P u p O u P u \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u2032 \u2032 \u2032 \u2032 \u2208 \u2208 \u2208 \u2208 = \u2032 \u2032 \u2208 \u2208 \u2208 \u2032 \u2208 \u2208 \u2032 \u2032 \u2032 \u2032 \u2032 \u2202 = \u2032 \u2032 \u2202 \u2032 \u2032 \u2212 \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 W W W W W W , ( | ) ( ) r lat r lat q u r u p O u P u \u03bb \u2032 \u2208 \u2208 \u2208 \u2211 \u2211 W W .", "eq_num": "( | )" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( | ) ( ) r lat r lat r u q u r u p O u P u p O u P u \u03bb \u03bb \u2032 \u2032 \u2208 \u2208 \u2208 \u2032 \u2032 \u2211 \u2211 W W", "eq_num": "( | )" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." 
}, { "text": "is the occupation probability of phone arc q ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": ", , ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( , ) ( | ) ( ) r lat r lat r r v q v r u q u p O v P v A v s p O u P u \u03bb \u03bb \u2032 \u2032 \u2208 \u2208 \u2032 \u2032 \u2208 \u2208 \u2032 \u2032 \u2032 \u2032 \u2032 \u2211 \u2211 W W is", "eq_num": "( | )" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "( | ) ( ) r lat r lat r r v r u p O v P v A v s p O u P u \u03bb \u03bb \u2208 \u2208 \u2211 \u2211 W W", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "is the weighted average accuracy of all hypothesized word sequences in r lat W . All three quantities can be calculated efficiently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "Since maximizing the weak sense auxiliary function with respect to \u03bb does not guarantee an increase in the objective function, the auxiliary function is augmented with an extra smoothing function ( , ) smooth EB g \u03bb \u03bb to moderate the parameter update and prevent extreme parameter values being estimated. The following is an example of a smoothing function: ", "cite_spans": [ { "start": 196, "end": 201, "text": "( , )", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 1 ( , ) log(| |) ( ) ( ) ( ) 2 smooth T m EB m m m m m m m m m D g t r \u03bb \u03bb \u00b5 \u00b5 \u00b5 \u00b5 \u2212 \u2212 \u23a1 \u23a4 = \u2212 \u03a3 + \u2212 \u03a3 \u2212 + \u03a3 \u03a3 \u23a3 \u23a6 \u2211 ,", "eq_num": "(16)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) 1 1 ( ( , )", "eq_num": "( , )) ( ) ( ) ( )" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u23a4 = \u03a3 \u2212 \u2212 \u03a3 \u2212 \u23a3 \u23a6 \u2202 \u2211 \u2211 \u2211 W ,", "eq_num": "(17)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "( )( ) ( ) 1 ( ( , ) ( , )) 1 1 ( ) ( ) ( ) 2 2 ( ) ( ) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u23a4 = \u03a3 \u2212 \u2212 \u2212 \u23a2 \u23a5 \u23a3 \u23a6 \u2202\u03a3 \u23a1 \u23a4 + \u03a3 \u2212 \u2212 \u2212 \u2212\u03a3 \u23a3 \u23a6 \u2211 \u2211 \u2211 W .", "eq_num": "(18)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." 
}, { "text": "Next, by completing the differentiations and equating the above equations to zero, the following Extended Baum-Welch (EB) update formulae [Normandin 1991 ] are derived: ", "cite_spans": [ { "start": 138, "end": 153, "text": "[Normandin 1991", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "Moreover, to incorporate the ML estimate and smooth the update, the so-called I-smoothing technique is employed to provide a better estimate. I-smoothing is also regarded as a prior distribution for smoothing the auxiliary function, where the mode of the distribution is the same as the estimate obtained by ML training. The update equations thus become: in our experiments). Recently, it has been verified that using the statistics of MMI (Maximum Mutual Information) training in I-smoothing can further improve the estimate [Zheng and Stolcke 2005; Povey et al. 2005] .", "cite_spans": [ { "start": 526, "end": 550, "text": "[Zheng and Stolcke 2005;", "ref_id": "BIBREF29" }, { "start": 551, "end": 569, "text": "Povey et al. 2005]", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= + + \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 W W ,", "eq_num": "(21)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 \u03b3 = \u2211 \u2211 ,", "eq_num": "(24)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "Finally, let us examine the quantity r MPE q \u03b3 in more detail. 
To simplify the discussion, we adopt the following equations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\gamma^{r}_{q} = \\frac{\\sum_{u \\in \\mathcal{W}^{r}_{lat},\\, q \\in u} p_{\\lambda}(O_r|u) P(u)}{\\sum_{u' \\in \\mathcal{W}^{r}_{lat}} p_{\\lambda}(O_r|u') P(u')}", "eq_num": "(26)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "where $\\gamma^{r}_{q}$ is the occupation probability of phone arc $q$; $c(q,r)$ in Eq. (27) is the average phone accuracy of the hypothesized word sequences that pass through arc $q$; and $c_{avg}(r)$ in Eq. (28) is the weighted average phone accuracy of all hypothesized word sequences in $\\mathcal{W}^{r}_{lat}$. It is clear that the three main statistics must be gathered by applying the forward-backward algorithm to the word lattice [Povey 2004] ", "cite_spans": [ { "start": 214, "end": 226, "text": "[Povey 2004]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c(q,r) = \\frac{\\sum_{v \\in \\mathcal{W}^{r}_{lat},\\, q \\in v} p_{\\lambda}(O_r|v) P(v) A(v, s_r)}{\\sum_{u \\in \\mathcal{W}^{r}_{lat},\\, q \\in u} p_{\\lambda}(O_r|u) P(u)}", "eq_num": "(27)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_{avg}(r) = \\frac{\\sum_{v \\in \\mathcal{W}^{r}_{lat}} p_{\\lambda}(O_r|v) P(v) A(v, s_r)}{\\sum_{u \\in \\mathcal{W}^{r}_{lat}} p_{\\lambda}(O_r|u) P(u)}", "eq_num": "(28)" } ], "section": "Minimum Phone Error (MPE) Training", "sec_num": "3."
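The statistics above feed the Extended Baum-Welch update. As a minimal sketch (our own simplification, not the authors' implementation), the following code computes the signed MPE arc occupancy $\gamma^{MPE}_q = \gamma_q (c(q) - c_{avg})$ and applies the EB mean update with optional I-smoothing; the scalar accumulators stand in for the full lattice-level sums, and all function and variable names are hypothetical.

```python
import numpy as np

def mpe_arc_occupancy(gamma_q, c_q, c_avg):
    """Signed MPE occupancy of a phone arc: positive for arcs whose word
    sequences are more accurate than average, negative otherwise."""
    return gamma_q * (c_q - c_avg)

def eb_mean_update(theta_mpe, gamma_mpe, mu_old, D_m, tau_m=0.0, mu_ml=None):
    """Extended Baum-Welch mean update with optional I-smoothing.

    theta_mpe : accumulated gamma_q^MPE-weighted sum of observation vectors
    gamma_mpe : the corresponding scalar occupancy sum
    D_m       : per-Gaussian smoothing constant
    tau_m     : I-smoothing weight toward the ML estimate mu_ml
    """
    if mu_ml is None:
        mu_ml = mu_old
    return (theta_mpe + D_m * mu_old + tau_m * mu_ml) / (gamma_mpe + D_m + tau_m)
```

With zero accumulated statistics the update returns the old mean unchanged, which illustrates the moderating effect of the smoothing term.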
}, { "text": "Acoustic model adaptation, which is one of the most important topics in ASR, tries to eliminate some of the spoken and environmental variations between the training and test sets. However, it is a challenging task to adjust the large number of acoustic model parameters when only a very small amount of data is available for model adaptation. To ensure a more reliable estimation of acoustic model parameters, transformation-based approaches have been developed to adapt the acoustic model indirectly by using a set of affine transforms, such as the maximum likelihood linear regression (MLLR) adaptation [Leggetter and Woodland 1995] . Similarly, word or phone error minimization approaches can be used to estimate the transformation matrices. Among these approaches, we focus on MPE-based linear regression (MPELR) adaptation [Wang and Woodland 2004] , which obtains the transformation matrices by using the MPE criterion.", "cite_spans": [ { "start": 605, "end": 634, "text": "[Leggetter and Woodland 1995]", "ref_id": "BIBREF13" }, { "start": 828, "end": 852, "text": "[Wang and Woodland 2004]", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "As in typical MLLR adaptation, Gaussian components are first clustered into several regression classes. Components in the same class share the same transformation matrix. The Gaussian mean vectors are transformed by: [Gales and Woodland 1996] ", "cite_spans": [ { "start": 217, "end": 242, "text": "[Gales and Woodland 1996]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m k m k k m A b W \u00b5 \u00b5 \u03be = + = ,", "eq_num": "(29)" } ], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 T m m k m L H L \u2212 \u2212 \u03a3 = ,", "eq_num": "(30)" } ], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "where k H is the linear transformation matrix to be estimated for the class k , and m L is the Cholesky factor of 1 m \u2212 \u03a3 . Hereafter, for simplicity, the subscript k representing the cluster index is omitted. Based on Eq. 14, the auxiliary function can be derived as: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 ({ , },{ , }) ( ) log ( ( ); , )", "eq_num": "q r" } ], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( ) 1 1 1 1 ({ , },{ , }) log | | ( ) ( ) 2 smooth EBW T T T m m m m m m m m m m T T m m m m g W H W H D L HL W W L H L W W tr L HL L H L \u03be \u03be \u03be \u03be \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u23a1 = \u2212 + \u2212 \u2212 \u23a2 \u23a3 \u23a4 + \u23a5 \u23a6 \u2211 ,", "eq_num": "(32)" } ], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "where ( ) tr \u22c5 is the standard matrix trace operation. 
After differentiating the auxiliary function with respect to W and setting it to zero, we get the following closed-form solution: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\sum_{m} \\Sigma_m^{-1} \\left( \\sum_{r} \\sum_{q \\in \\mathcal{W}^{r}_{lat}} \\gamma^{MPE}_{q} \\sum_{t} \\gamma_{q,m}(t)\\, o_r(t) + D_m W' \\xi_m \\right) \\xi_m^{T} = \\sum_{m} \\Sigma_m^{-1} \\left( \\sum_{r} \\sum_{q \\in \\mathcal{W}^{r}_{lat}} \\gamma^{MPE}_{q} \\sum_{t} \\gamma_{q,m}(t) + D_m \\right) W \\xi_m \\xi_m^{T}", "eq_num": "(33)" } ], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "The above equation can be solved row-by-row using the Gaussian elimination method to obtain the re-estimation formula for the transformation matrix of mean vectors. The re-estimation formula for the transformation matrix of covariance matrices can be derived in a similar way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "Again, to improve generalization to the test set, extra prior information, such as the ML statistics, can be considered. Therefore, the final auxiliary function employed in this paper is augmented with the following smoothing function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g^{smooth}_{I}(W,H) = \\sum_{m} \\frac{\\tau}{\\gamma^{ML}_{m}} \\sum_{t} \\gamma^{ML}_{m}(t) \\log N(o(t); W \\xi_m, L_m^{-T} H L_m^{-1})", "eq_num": "(34)" } ], "section": "MPE-based Linear Regression (MPELR) Adaptation", "sec_num": "4."
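To make the transformation in Eq. (29) concrete, the following sketch (an illustration with a hypothetical 2-dimensional model, not the system's code) applies a shared affine transform W = [b, A] to a Gaussian mean via the extended mean vector xi = [1, mu]:

```python
import numpy as np

def adapt_mean(W, mu):
    """Apply an MLLR/MPELR-style transform to a Gaussian mean vector.

    W has shape (d, d+1); with xi = [1, mu], W @ xi equals A @ mu + b,
    where b = W[:, 0] is the bias and A = W[:, 1:] is the linear part."""
    xi = np.concatenate(([1.0], mu))
    return W @ xi
```

Because all Gaussians in the same regression class share one W, a small number of transform parameters can adapt a much larger number of model parameters from limited adaptation data.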
}, { "text": "Given a speech utterance, the standard maximum a posteriori (MAP) decoding approach tries to output the hypothesized word sequence with the highest posterior probability. Actually, by substituting a zero-one loss function into Eq. (2), the MAP decoding formula can be derived. This implies that the MAP decoding approach is based on minimizing the string error rate (SER). Thus, it only provides suboptimal results when the ASR performance is measured in terms of the word error rate (WER) or the character error rate (CER). Hence, replacing the zero-one loss function in Eq. (2) with the Levenshtein distance measure leads to the WEM decoding approach, which finds the hypothesized word sequence with the minimum WER or CER. However, as mentioned in Section 3, a direct implementation of WEM decoding with the word lattice is complicated because there is still no efficient algorithm for computing the Levenshtein distance between any two possible word sequences in the word lattice. To make the implementation of the WEM decoding approach feasible, we initially employ an N-best list of hypothesized word sequences. The WEM decoding approach can then be applied explicitly by choosing the hypothesized word sequence with the minimum expected risk [Stolcke et al. 1997] . The decision formula can thus be expressed as:", "cite_spans": [ { "start": 1249, "end": 1270, "text": "[Stolcke et al. 1997]", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Word Error Minimization (WEM) Decoding", "sec_num": "5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Nest Nest Nest ( | ) ( ) ( ) argmin ( , ) ( | ) ( ) opt s N u N v N p O s p s O L u s p O v p v \u03b1 \u2208 \u2212 \u2208 \u2212 \u2208 \u2212 = \u2211 \u2211 ,", "eq_num": "(35)" } ], "section": "Word Error Minimization (WEM) Decoding", "sec_num": "5." 
}, { "text": "where u , s , and v are hypothesized word sequences in the N-best list. Similar ideas have been proposed recently by Mangu et al. [Mangu et al. 2000] and Goel and Byrne [Goel and Byrne 2000] . As an alternative, a novel optimal Bayes decision (OBC) approach for word lattice rescoring has been developed [Chien et al. 2006] . It also provides a promising framework for WEM decoding.", "cite_spans": [ { "start": 117, "end": 149, "text": "Mangu et al. [Mangu et al. 2000]", "ref_id": "BIBREF15" }, { "start": 169, "end": 190, "text": "[Goel and Byrne 2000]", "ref_id": "BIBREF7" }, { "start": 304, "end": 323, "text": "[Chien et al. 2006]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Word Error Minimization (WEM) Decoding", "sec_num": "5." }, { "text": "In this section, we describe the large vocabulary continuous speech recognition system and the speech and text data used in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6." }, { "text": "Front-end processing was performed with the HLDA-based (Heteroscedastic Linear Discriminant Analysis) data-driven Mel-frequency feature extraction approach, and then processed by MLLT (Maximum Likelihood Linear Transformation) transformation for feature de-correlation. In addition, utterance-based feature mean subtraction and variance normalization were applied to all the training and test materials. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Front-End Signal Processing", "sec_num": "6.1" }, { "text": "The speech corpus consisted of approximately 198 hours of MATBN (Mandarin Across Taiwan Broadcast News) Mandarin television news content [Wang et al. 2005] , which was collected by Academia Sinica and the Public Television Service Foundation of Taiwan between November 2001 and April 2003. 
All the speech materials were manually segmented into separate stories, each of which was spoken by one news anchor, several field reporters, and interviewees. Some stories contained background noise, speech, and music. All 198 hours of speech data were accompanied by corresponding orthographic transcripts. About 25 hours of gender-balanced field-reporter speech, collected from November 2001 to December 2002, were used to bootstrap the acoustic model training. The training set consisted of 545,732 syllables and the average length of a word was 1.65 characters. Another set of data, 1.5 hours in length, collected during 2003 was reserved for testing. Because the corpus contains only a limited number of distinct field reporters, some speakers in the test set also appear in the training set.", "cite_spans": [ { "start": 137, "end": 155, "text": "[Wang et al. 2005]", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Corpus and Acoustic Model Training", "sec_num": "6.2" }, { "text": "The test set consisted of 26,219 syllables and the average word length was also 1.65 characters. Table 1 shows the detailed statistics of the training and test sets.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 104, "text": "Table 1", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Speech Corpus and Acoustic Model Training", "sec_num": "6.2" }, { "text": "The acoustic models chosen for speech recognition were a silence model, 112 right-context-dependent INITIAL models, and 38 context-independent FINAL models. Each INITIAL model was represented by an HMM with 3 states, while each FINAL model had 4 states. Note that gender-independent models were used. The Gaussian mixture number per state ranged from 2 to 128, depending on the amount of training data. The acoustic models were first trained using the ML criterion and the Baum-Welch updating formulae.
The MPE-based and MMI (Maximum Mutual Information)-based acoustic model training approaches were further applied to acoustic models pre-trained by the ML criterion. Unigram language model constraints were used to collect the training statistics from the word lattices for these two training approaches. For MPE training, both silence and short-pause labels were involved in the calculation of the raw phone accuracy of the hypothesized word sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Corpus and Acoustic Model Training", "sec_num": "6.2" }, { "text": "Initially, the recognition lexicon consisted of 67K words. A set of about 5K compound words was automatically derived using forward and backward bigram statistics and added to the lexicon to form a new lexicon of 72K words. The background language models used in this experiment were trigram and bigram models, which were estimated according to the ML criterion using a text corpus consisting of 170 million Chinese characters collected from the Central News Agency (CNA) in 2001 and 2002 (the Chinese Gigaword Corpus released by LDC). The N-gram language models were trained with Katz back-off smoothing technique using the SRI Language Modeling Toolkit (SRILM) [Stolcke 2000 ].", "cite_spans": [ { "start": 663, "end": 676, "text": "[Stolcke 2000", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Lexicon and N-gram Language Modeling", "sec_num": "6.3" }, { "text": "The speech recognizer was implemented with a left-to-right frame-synchronous Viterbi tree-copy search and a lexical prefix tree of the lexicon. For each speech frame, a beam pruning technique, which considered the decoding scores of path hypotheses together with their corresponding unigram language model look-ahead scores and syllable-level acoustic look-ahead scores [Chen et al. 2005] , was used to select the most promising path hypotheses. 
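As a hedged illustration of the beam pruning step just described (the names and the exact score composition are our own simplification, not the recognizer's internals), a path hypothesis survives a frame if its combined log score, decoding score plus look-ahead scores, lies within a fixed beam of the current best:

```python
def beam_prune(hypotheses, beam_width):
    """Keep path hypotheses whose combined log score is within
    `beam_width` of the best one at the current frame.

    `hypotheses` maps a hypothesis id to a tuple of
    (path_score, lookahead_score), both in the log domain."""
    combined = {h: path + look for h, (path, look) in hypotheses.items()}
    best = max(combined.values())
    return {h for h, score in combined.items() if score >= best - beam_width}
```

A narrower beam prunes more aggressively and speeds up the search at the risk of discarding the eventually-best path.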
Moreover, if the word hypotheses ending at each speech frame had higher scores than a predefined threshold, their associated decoding information, such as the word start and end frames, the identities of current and predecessor words, and the acoustic score, was kept to build a word lattice for further language model rescoring. We used the word bigram language model in the tree search procedure and the trigram language model in the word lattice rescoring procedure.", "cite_spans": [ { "start": 370, "end": 388, "text": "[Chen et al. 2005]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Recognition", "sec_num": "6.4" }, { "text": "We now present a series of experiments that assess speech recognition performance as a function of the acoustic model training and adaptation approaches, as well as the decoding approach. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results and Discussions", "sec_num": "7." }, { "text": "The acoustic models of the baseline system were first trained using the ML criterion with 10 iterations of Baum-Welch updating. Then, MPE training (with an optimum setting of $\\tau_m = 10$) was applied to the ML-trained acoustic models. In the implementation, we calculated the raw accuracy of each INITIAL/FINAL, instead of each phone, i.e., we had actually performed Minimum INITIAL/FINAL Error training, not Minimum Phone Error training, in the Mandarin LVCSR system. While evaluating the ASR performance, neither the silence nor the short-pause labels were included in the calculation of CER. MMI training was also performed for comparison with MPE training. As mentioned previously, for both MPE and MMI training, unigram language model constraints were imposed when collecting the training statistics from the word lattices. The results for acoustic model training are shown in Figure 4 . We observe that the ML-trained baseline system (at the 10th iteration) yields a CER of 23.78%.
On the other hand, both MMI and MPE work very well, providing a great boost to the acoustic models initially trained by ML. The acoustic models trained by MPE consistently outperform those trained by MMI across all training iterations. In summary, the MPE-trained acoustic models achieve a relative CER reduction of 12.66% (at the 10th iteration) over those trained by ML. Moreover, as shown in Table 2 , the improvements are consistent. The INITIAL/FINAL model error rate is reduced from 13.56% (baseline, ML training only) to 11.12% (at the 10th MPE training iteration). The 18% relative error rate reduction demonstrates the effectiveness of the Minimum INITIAL/FINAL Error training approach, and the improvement in the acoustic models leads to a 3% absolute reduction in CER (from 23.78% to 20.77%). The use of statistical linguistic rules in MPE training still plays an important role in re-weighting the occupancy statistics, especially in an LVCSR system. In our previous work [Kuo 2005] , it was found that much of the CER improvement was lost without embedding the language weight.", "cite_spans": [ { "start": 1971, "end": 1980, "text": "[Kuo 2005", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 880, "end": 888, "text": "Figure 4", "ref_id": "FIGREF14" }, { "start": 1381, "end": 1388, "text": "Table 2", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Experiments on MPE Acoustic Model Training", "sec_num": "7.1" }, { "text": "The question thus arises: What makes MPE superior to MMI? In Eq. 7, if the summation operator over all training utterances is replaced by the product operator and the loss function is the zero-one function in Eq. (8), one gets the following MMI criterion: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on MPE Acoustic Model Training", "sec_num": "7.1" }, { "text": "For each test utterance, an N-best list of hypothesized word sequences was first generated from the word lattice.
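The N-best rescoring step of Eq. (35) can be sketched as follows; this is an illustrative implementation with made-up scores, not the system's actual code. Posteriors are obtained by normalizing exponentiated (scaled) log scores over the N-best list, and the expected character error of each candidate is its posterior-weighted Levenshtein distance to every other hypothesis.

```python
import math

def levenshtein(a, b):
    """Edit distance between two sequences (character units for CER)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def wem_rescore(nbest):
    """Return the hypothesis with minimum expected edit distance (Eq. 35).

    `nbest` is a list of (hypothesis, log_score) pairs, where log_score
    stands for the combined, scaled acoustic plus language model score."""
    m = max(s for _, s in nbest)
    weights = [math.exp(s - m) for _, s in nbest]
    z = sum(weights)
    posterior = [w / z for w in weights]

    def expected_risk(s):
        return sum(p * levenshtein(u, s) for (u, _), p in zip(nbest, posterior))

    return min((h for h, _ in nbest), key=expected_risk)
```

Note that MAP decoding would simply pick the single highest-scoring hypothesis, whereas WEM can prefer a lower-scoring but more "central" hypothesis when the posterior mass is spread out.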
We limited the number of hypothesized word sequences included in the N-best list to 50, and the Levenshtein distance was calculated in terms of character units. The experiment results are shown in Table 3 . From Row 3 (MPE + MPELR + WEM), one observes that, with the best set of acoustic models, WEM only achieves a slight reduction of 0.06% in CER compared to that obtained by conventional MAP decoding, as shown in Row 2. Row 5 (Lattice Error Rate) reports the lattice error rate [Ortmanns et al. 1997] , i.e., the lower bound achievable by rescoring the current word lattice. It is computed by finding, in the corresponding word lattice, the hypothesized word sequence with the minimum Levenshtein distance to the reference transcription. On the other hand, Row 4 (50-best Error Rate) gives the corresponding lower bound on the character error rate for the 50 highest-scoring hypotheses, which is the lower bound actually achievable in our implementation. From the experiment results, the WEM algorithm seems to achieve an almost imperceptible improvement of about 0.06%. The most likely explanation is that the posterior distribution is imperfectly approximated. In addition, the WEM decision coincides with the word sequence that has the highest posterior probability in most situations [Schl\u00fcter et al. 2005] . For the above reasons, we consider that the improvement in CER accuracy is insignificant.", "cite_spans": [ { "start": 623, "end": 645, "text": "[Ortmanns et al. 1997]", "ref_id": "BIBREF18" }, { "start": 1470, "end": 1492, "text": "[Schl\u00fcter et al.
2005]", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 311, "end": 318, "text": "Table 3", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Experiments on WEM Decoding", "sec_num": "7.3" }, { "text": "In this paper, we have investigated the following word error minimization approaches for Mandarin large vocabulary continuous speech recognition: 1) the MPE criterion used in acoustic model training and adaptation; and 2) the WEM criterion in speech decoding. Unlike conventional techniques, these two approaches try to minimize the expected word error, rather than the string-level error. Experiments on the MATBN corpus demonstrate that MPE training can significantly improve a system initially trained with the ML criterion. Likewise, MPELR adaptation can significantly reduce the CER for the unsupervised adaptation task. This result is superior to that obtained by conventional MLLR adaptation. Finally, N-best rescoring using the WEM criterion achieves a slight improvement over traditional MAP decoding. We are", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "Mandarin Large Vocabulary Continuous Speech Recognition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "which maximizes the logarithmic product of the posterior probabilities of the reference transcriptions. The use of the zero-one loss function implies that MMI tends to minimize the sentence error rate. Hence, it is reasonable to say that MMI is inferior to MPE in terms of CER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "In this subsection, we evaluate the performance of the MPE-based unsupervised acoustic model adaptation approach. In these experiments, utterance-based unsupervised adaptation was used. First, each test utterance was decoded using the MPE-trained acoustic models. 
Then, sufficient statistics were gathered in a forward-backward stage, and the acoustic models were adapted according to the recognized transcriptions. All the Gaussian components of the HMM acoustic models were clustered into three broad phonetic regression classes (i.e., INITIAL, FINAL, and Silence) in advance. Only the mean vectors of each Gaussian component were adapted because it has been found that adapting the mean vectors alone yields the most improvement [Gales and Woodland 1996] . Unsupervised MLLR adaptation was performed as the baseline. In the experiment results presented in Table 2 , comparing Row 4 (MPE + MLLR) to Row 3 (MPE), we observe that the CER can be reduced from 20.77% to 20.45%, which indicates that MLLR adaptation can, to some extent, effectively mitigate the degradation of ASR performance caused by different acoustic variations. Row 5 of Table 2 gives the error rate obtained by MPELR adaptation. This result, a 0.16% absolute improvement in CER over MLLR, shows that MPELR is slightly better. One possible reason for the insignificant improvement over MLLR is the use of a weak-sense auxiliary function. As a result, the convergence of MPE-based techniques is not as fast as that of ML-based techniques, which use a strong-sense auxiliary function. In contrast, the advantage of MPE is that it pursues a lower error rate even when over-training is encountered. This is why MPE training is performed after ML training and not for bootstrapping the initial models. Similarly, MPELR adaptation can be performed after MLLR adaptation.
However, repeated on-line adaptation slows the decoding phase considerably, which is why adaptation is performed only once in the on-line stage.", "cite_spans": [ { "start": 729, "end": 754, "text": "[Gales and Woodland 1996]", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 856, "end": 863, "text": "Table 2", "ref_id": null }, { "start": 1137, "end": 1144, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments on Unsupervised MPELR Acoustic Model Adaptation", "sec_num": "7.2" }, { "text": "currently conducting an in-depth investigation of the WEM approaches to language modeling [Kuo and Chen, 2005] , as well as their comparison and integration with other approaches.", "cite_spans": [ { "start": 90, "end": 110, "text": "[Kuo and Chen, 2005]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Mandarin Large Vocabulary Continuous Speech Recognition", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Lightly Supervised and Data-Driven Approaches to Mandarin Broadcast News Transcription", "authors": [ { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J.-W", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "W.-H", "middle": [], "last": "Tsai", "suffix": "" } ], "year": 2005, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "10", "issue": "1", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, B., J.-W. Kuo, W.-H.
Tsai, \"Lightly Supervised and Data-Driven Approaches to Mandarin Broadcast News Transcription,\" International Journal of Computational Linguistics and Chinese Language Processing, 10(1), 2005, pp.1-18.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Towards Optimal Bayes Decision for Speech Recognition", "authors": [ { "first": "J.-T", "middle": [], "last": "Chien", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Huang", "suffix": "" }, { "first": "K", "middle": [], "last": "Shinoda", "suffix": "" }, { "first": "S", "middle": [], "last": "Furui", "suffix": "" } ], "year": 2006, "venue": "Proc. ICASSP'06", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien, J.-T., C.-H. Huang, K. Shinoda and S. Furui, \"Towards Optimal Bayes Decision for Speech Recognition,\" in Proc. ICASSP'06, 2006.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discriminative Training for Segmental Minimum Bayes Risk Decoding", "authors": [ { "first": "V", "middle": [], "last": "Doumpiotis", "suffix": "" }, { "first": "S", "middle": [], "last": "Tsakalidis", "suffix": "" }, { "first": "W", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2003, "venue": "Proc. ICASSP'03", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doumpiotis, V., S. Tsakalidis and W. Byrne, \"Discriminative Training for Segmental Minimum Bayes Risk Decoding,\" in Proc. ICASSP'03, 2003.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Lattice Segmentation and Minimum Bayes Risk Discriminative Training", "authors": [ { "first": "V", "middle": [], "last": "Doumpiotis", "suffix": "" }, { "first": "S", "middle": [], "last": "Tsakalidis", "suffix": "" }, { "first": "W", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2003, "venue": "Proc. Eurospeech'03", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doumpiotis, V., S. 
Tsakalidis and W. Byrne, \"Lattice Segmentation and Minimum Bayes Risk Discriminative Training,\" in Proc. Eurospeech'03, 2003.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Pinched Lattice Minimum Bayes Risk Discriminative Training for Large Vocabulary Continuous Speech Recognition", "authors": [ { "first": "V", "middle": [], "last": "Doumpiotis", "suffix": "" }, { "first": "W", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2004, "venue": "Proc. ICSLP'04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doumpiotis, V. and W. Byrne, \"Pinched Lattice Minimum Bayes Risk Discriminative Training for Large Vocabulary Continuous Speech Recognition,\" in Proc. ICSLP'04, 2004.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Pattern Classification", "authors": [ { "first": "R", "middle": [ "O" ], "last": "Duda", "suffix": "" }, { "first": "P", "middle": [ "E" ], "last": "Hart", "suffix": "" }, { "first": "D", "middle": [ "G" ], "last": "Stork", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duda, R. O., P. E. Hart and D. G. Stork, Pattern Classification, 2nd ed. New York: John and Wiley, 2000.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mean and Variance Adaptation within the MLLR Framework", "authors": [ { "first": "M", "middle": [ "J F" ], "last": "Gales", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 1996, "venue": "Computer Speech and Language", "volume": "10", "issue": "", "pages": "249--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gales, M. J. F. and P. C. 
Woodland, \"Mean and Variance Adaptation within the MLLR Framework,\" Computer Speech and Language, 10, 1996, pp.249-264.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Minimum Bayes-Risk Automatic Speech Recognition", "authors": [ { "first": "V", "middle": [], "last": "Goel", "suffix": "" }, { "first": "W", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2000, "venue": "Computer Speech and Language", "volume": "14", "issue": "", "pages": "115--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goel, V. and W. Byrne, \"Minimum Bayes-Risk Automatic Speech Recognition,\" Computer Speech and Language, 14, 2000, pp.115-135.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An Inequality for Rational Functions with Applications to Some Statistical Estimation Problems", "authors": [ { "first": "P", "middle": [ "S" ], "last": "Gopalakrishnan", "suffix": "" }, { "first": "D", "middle": [], "last": "Kanevsky", "suffix": "" }, { "first": "A", "middle": [], "last": "N\u00e1das", "suffix": "" }, { "first": "D", "middle": [], "last": "Nahamoo", "suffix": "" } ], "year": 1991, "venue": "IEEE Trans. Information Theory", "volume": "37", "issue": "", "pages": "107--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gopalakrishnan, P. S., D. Kanevsky, A. N\u00e1das and D. Nahamoo, \"An Inequality for Rational Functions with Applications to Some Statistical Estimation Problems,\" IEEE Trans. Information Theory, 37, 1991, pp.107-113.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Novel Loss Function for the Overall Risk Criterion Based Discriminative Training of HMM Models", "authors": [ { "first": "J", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "B", "middle": [], "last": "Horvat", "suffix": "" }, { "first": "Z", "middle": [], "last": "Kacic", "suffix": "" } ], "year": 2000, "venue": "Proc. ICSLP'00", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiser, J., B. 
Horvat and Z. Kacic, \"A Novel Loss Function for the Overall Risk Criterion Based Discriminative Training of HMM Models,\" in Proc. ICSLP'00, 2000.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Overall Risk Criterion Estimation of Hidden Markov Model Parameters", "authors": [ { "first": "J", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "B", "middle": [], "last": "Horvat", "suffix": "" }, { "first": "Z", "middle": [], "last": "Kacic", "suffix": "" } ], "year": 2000, "venue": "Speech Communication", "volume": "38", "issue": "", "pages": "383--398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiser, J., B. Horvat and Z. Kacic, \"Overall Risk Criterion Estimation of Hidden Markov Model Parameters,\" Speech Communication, 38, 2000, pp.383-398.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Minimum Word Error Based Discriminative Training of Language Models", "authors": [ { "first": "J.-W", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2005, "venue": "Proc. INTERSPEECH'05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuo, J.-W. and B. Chen, \"Minimum Word Error Based Discriminative Training of Language Models,\" in Proc. 
INTERSPEECH'05, 2005.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An Initial Study on Minimum Phone Error Discriminative Learning of Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition", "authors": [ { "first": "J.-W", "middle": [], "last": "Kuo", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuo, J.-W, \"An Initial Study on Minimum Phone Error Discriminative Learning of Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition,\" Master Thesis, National Taiwan Normal University, June 2005.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Maximum Likelihood Linear Regression for Speaker Adaptation of Continuous Density Hidden Markov Models", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Leggetter", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 1995, "venue": "Computer Speech and Language", "volume": "9", "issue": "", "pages": "171--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leggetter, C. J. and P. C. 
Woodland, \"Maximum Likelihood Linear Regression for Speaker Adaptation of Continuous Density Hidden Markov Models,\" Computer Speech and Language, 9, 1995, pp.171-185.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Binary Codes Capable of Correcting Deletions, Insertions and Reversals", "authors": [ { "first": "A", "middle": [], "last": "Levenshtein", "suffix": "" } ], "year": 1966, "venue": "Soviet Physics Doklady", "volume": "10", "issue": "8", "pages": "707--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levenshtein, A., \"Binary Codes Capable of Correcting Deletions, Insertions and Reversals,\" Soviet Physics Doklady, 10(8), 1966, pp.707-710.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Finding consensus in speech recognition: word error minimization and other applications of confusion networks", "authors": [ { "first": "L", "middle": [], "last": "Mangu", "suffix": "" }, { "first": "E", "middle": [], "last": "Brill", "suffix": "" }, { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2000, "venue": "Computer Speech and Language", "volume": "14", "issue": "", "pages": "373--400", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mangu, L., E. Brill, and A. Stolcke, \"Finding consensus in speech recognition: word error minimization and other applications of confusion networks,\" Computer Speech and Language, 14, 2000, pp.373-400.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Discriminative Training of Hidden Markov Models using Overall Risk Criterion and Reduced Gradient Method", "authors": [ { "first": "K", "middle": [], "last": "Na", "suffix": "" }, { "first": "B", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "D", "middle": [], "last": "Chang", "suffix": "" }, { "first": "S", "middle": [], "last": "Chae", "suffix": "" }, { "first": "S", "middle": [], "last": "Ann", "suffix": "" } ], "year": 1995, "venue": "Proc. 
Eurospeech'95", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Na, K., B. Jeon, D. Chang, S. Chae, and S. Ann, \"Discriminative Training of Hidden Markov Models using Overall Risk Criterion and Reduced Gradient Method,\" in Proc. Eurospeech'95, 1995.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Hidden Markov Models, Maximum Mutual Information Estimation, and the Speech Recognition Problem", "authors": [ { "first": "Y", "middle": [], "last": "Normandin", "suffix": "" } ], "year": 1991, "venue": "Ph.D Dissertation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Normandin, Y., \"Hidden Markov Models, Maximum Mutual Information Estimation, and the Speech Recognition Problem,\" Ph.D Dissertation, McGill University, Montreal, 1991.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Word Graph Algorithm for Large Vocabulary Continuous Speech Recognition", "authors": [ { "first": "S", "middle": [], "last": "Ortmanns", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "X", "middle": [], "last": "Aubert", "suffix": "" } ], "year": 1997, "venue": "Computer Speech and Language", "volume": "11", "issue": "", "pages": "43--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ortmanns, S., H. Ney and X. Aubert, \"A Word Graph Algorithm for Large Vocabulary Continuous Speech Recognition,\" Computer Speech and Language, 11, 1997, pp.43-72.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Minimum Phone Error and I-smoothing for Improved Discriminative Training", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 2002, "venue": "Proc. ICASSP'02", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D. and P. C.
Woodland, \"Minimum Phone Error and I-smoothing for Improved Discriminative Training,\" in Proc. ICASSP'02, 2002.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Large Scale Discriminative Training of Acoustic Models for Speech Recognition", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 2002, "venue": "Computer Speech and Language", "volume": "16", "issue": "", "pages": "25--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D and P. C. Woodland, \"Large Scale Discriminative Training of Acoustic Models for Speech Recognition,\" Computer Speech and Language, 16, 2002, pp. 25-47.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Discriminative Training for Large Vocabulary Speech Recognition", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" } ], "year": 2004, "venue": "Ph.D Dissertation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D, \"Discriminative Training for Large Vocabulary Speech Recognition,\" Ph.D Dissertation, Peterhouse, University of Cambridge, July 2004.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "FMPE: Discriminatively Trained Features for Speech Recognition", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "B", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "L", "middle": [], "last": "Mangu", "suffix": "" }, { "first": "G", "middle": [], "last": "Saon", "suffix": "" }, { "first": "H", "middle": [], "last": "Soltau", "suffix": "" }, { "first": "G", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2005, "venue": "Proc. ICASSP'05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D., B. Kingsbury, L. Mangu, G. Saon, H. Soltau and G. 
Zweig, \"FMPE: Discriminatively Trained Features for Speech Recognition,\" in Proc. ICASSP'05, 2005.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bayes Risk Minimization using Metric Loss Functions", "authors": [ { "first": "R", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "T", "middle": [], "last": "Scharrenbach", "suffix": "" }, { "first": "V", "middle": [], "last": "Steinbiss", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2005, "venue": "Proc. Eurospeech'05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schl\u00fcter, R., T. Scharrenbach, V. Steinbiss and H. Ney, \"Bayes Risk Minimization using Metric Loss Functions,\" in Proc. Eurospeech'05, 2005.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The N-best algorithms: an efficient and exact procedure for finding the N most likely sentence hypotheses", "authors": [ { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Y.-L", "middle": [], "last": "Chow", "suffix": "" } ], "year": 1990, "venue": "Proc. ICASSP'90", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schwartz, R. and Y.-L. Chow, \"The N-best algorithms: an efficient and exact procedure for finding the N most likely sentence hypotheses,\" in Proc. ICASSP'90, 1990.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Explicit Word Error Minimization in N-best List Rescoring", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Y", "middle": [], "last": "Konig", "suffix": "" }, { "first": "M", "middle": [], "last": "Weintraub", "suffix": "" } ], "year": 1997, "venue": "Proc. Eurospeech'97", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A., Y. Konig, M. Weintraub, \"Explicit Word Error Minimization in N-best List Rescoring,\" in Proc. 
Eurospeech'97, 1997.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SRI language Modeling Toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A., SRI language Modeling Toolkit, version 1.3.3, 2000. http://www.speech.sri.com/projects/srilm/.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "MATBN: A Mandarin Chinese Broadcast News Corpus", "authors": [ { "first": "H.-M", "middle": [], "last": "Wang", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J.-W", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "S.-S", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2005, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "10", "issue": "2", "pages": "219--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, H.-M., B. Chen, J.-W. Kuo, and S.-S. Cheng, \"MATBN: A Mandarin Chinese Broadcast News Corpus,\" International Journal of Computational Linguistics and Chinese Language Processing, 10(2), 2005, pp.219-236.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "MPE-Based Discriminative Linear Transform for Speaker Adaptation", "authors": [ { "first": "L", "middle": [], "last": "Wang", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 2004, "venue": "Proc. ICASSP'04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, L. and P. C. Woodland, \"MPE-Based Discriminative Linear Transform for Speaker Adaptation,\" in Proc. 
ICASSP'04, 2004.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Improved Discriminative Training Using Phone Lattices", "authors": [ { "first": "J", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2005, "venue": "Proc. INTERSPEECH'05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng, J. and A. Stolcke, \"Improved Discriminative Training Using Phone Lattices,\" in Proc. INTERSPEECH'05, 2005.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "A word lattice can efficiently encode a large number of possible hypothesized word sequences.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Raw phone accuracy calculation.", "type_str": "figure", "num": null }, "FIGREF3": { "uris": null, "text": "Approximate accuracy versus exact accuracy. where \u03bb is the current model parameter set, q is a specific phone arc in", "type_str": "figure", "num": null }, "FIGREF4": { "uris": null, "text": "q_s and q_e represent the start and end times of the phone arc q , respectively; m is the mixture index of the acoustic models; \u00b5_m and \u03a3_m are, respectively, the mean vector and covariance matrix for mixture m ;", "type_str": "figure", "num": null }, "FIGREF10": { "uris": null, "text": "occupation probability for mixture m . I-smoothing can also be considered as an interpolation between the MPE estimate and the ML estimate. As \u03c4_m \u2192 \u221e , it performs like ML training. On the other hand, it behaves purely as MPE training when \u03c4_m \u2192 0 . 
Basically, the technique provides better results when the value of \u03c4_m is properly chosen (e.g., we adopted a setting of \u03c4_m = 10 )", "type_str": "figure", "num": null }, "FIGREF11": { "uris": null, "text": "the weighted average phone accuracy of hypothesized word sequences that involve q ; and c_avg^r", "type_str": "figure", "num": null }, "FIGREF12": { "uris": null, "text": "in the weighted average phone accuracy between the word sequences containing arc q and all word sequences in the lattice. Statistics are contributed to phone arc q in MPE training. Positive contributions are made to arc q if c_r(q) is greater than c_avg^r , i.e., if phone arc q is more accurate than the average. Otherwise, negative contributions are made to arc q , thus showing the discrimination. For a reasonable combination of acoustic model likelihoods and language model probabilities, it is necessary to restrict the acoustic likelihoods by introducing an exponential scaling factor. The scaling factor is empirically set depending on the task at hand; in our experiments, we adopted a value of 1/12. Alternatively, a word unigram language model constraint can be used to improve the generalization capabilities of such discriminative training.", "type_str": "figure", "num": null }, "FIGREF14": { "uris": null, "text": "Recognition results, in terms of the CER, for three systems trained on ML, MMI, and MPE criteria, respectively.", "type_str": "figure", "num": null }, "TABREF2": { "text": "the weighted average accuracy of hypothesized word sequences in", "html": null, "type_str": "table", "num": null, "content": "
lat_r that include q .
" }, "TABREF5": { "text": ")-dimensional extended mean vector based on the current estimate. Meanwhile, the covariance matrices can be updated by", "html": null, "type_str": "table", "num": null, "content": "
where the subscript k is the class index; W_k = [ b_k A_k ] is a d \u00d7 ( d + 1 ) transformation matrix; and \u03be_m = [ 1 \u00b5_m^T ]^T is the ( d + 1 )-dimensional extended mean vector based on the current estimate.
" }, "TABREF7": { "text": "", "html": null, "type_str": "table", "num": null, "content": "
Training set    Test set    #Speakers
Gender    Total length (sec)    Total Syllables    #Speakers    Total length (sec)    Total Syllables    #Speakers    in the training and test sets
Male 46,001.3    Female 46,007.2    545,732    \u2264 66    1,301.4    \u2264 111    3,914.0    26,219    9    \u2264 239    \u2265 13
" }, "TABREF9": { "text": "", "html": null, "type_str": "table", "num": null, "content": "
                     INITIAL/FINAL Error Rate (%)    Character Error Rate (%)
ML                   13.56                           23.78
(ML+) MPE            11.12                           20.77
(ML+) MPE + MLLR     10.94                           20.45
(ML+) MPE + MPELR    10.82                           20.29
" }, "TABREF10": { "text": "", "html": null, "type_str": "table", "num": null, "content": "
CER (%)
" } } } }