{
"paper_id": "O07-5004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:08:22.686157Z"
},
"title": "Performance of Discriminative HMM Training in Noise",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Science and Technology of China",
"location": {
"postCode": "230027",
"settlement": "Hefei",
"region": "P. R. China"
}
},
"email": ""
},
{
"first": "Peng",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": "pengliu@microsoft.com"
},
{
"first": "Frank",
"middle": [
"K"
],
"last": "Soong",
"suffix": "",
"affiliation": {},
"email": "frankkps@microsoft.com"
},
{
"first": "Jian-Lai",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": "jlzhou@microsoft.com"
},
{
"first": "Ren-Hua",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Science and Technology of China",
"location": {
"postCode": "230027",
"settlement": "Hefei",
"region": "P. R. China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this study, discriminative HMM training and its performance are investigated in both clean and noisy environments. Recognition error is defined at string, word, phone, and acoustic levels and treated in a unified framework in discriminative training. With an acoustic level, high-resolution error measurement, a discriminative criterion of minimum divergence (MD) is proposed. Using speaker-independent, continuous digit databases, Aurora2, the recognition performance of recognizers, which are trained in terms of different error measures and different training modes, is evaluated under various noise and SNR conditions. Experimental results show that discriminatively trained models perform better than the maximum likelihood baseline systems. Specifically, in MWE and MD training, relative error reductions of 13.71% and 17.62% are obtained with multi-training on Aurora2, respectively. Moreover, compared with ML training, MD training becomes more effective as the SNR increases.",
"pdf_parse": {
"paper_id": "O07-5004",
"_pdf_hash": "",
"abstract": [
{
"text": "In this study, discriminative HMM training and its performance are investigated in both clean and noisy environments. Recognition error is defined at string, word, phone, and acoustic levels and treated in a unified framework in discriminative training. With an acoustic level, high-resolution error measurement, a discriminative criterion of minimum divergence (MD) is proposed. Using speaker-independent, continuous digit databases, Aurora2, the recognition performance of recognizers, which are trained in terms of different error measures and different training modes, is evaluated under various noise and SNR conditions. Experimental results show that discriminatively trained models perform better than the maximum likelihood baseline systems. Specifically, in MWE and MD training, relative error reductions of 13.71% and 17.62% are obtained with multi-training on Aurora2, respectively. Moreover, compared with ML training, MD training becomes more effective as the SNR increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the progress of Automatic Speech Recognition (ASR), noise robustness of speech recognizers attracts more and more attention for practical recognition systems. Various noise robust technologies can be grouped into three classes: 1. Feature domain approaches, which aim at noise resistant features, e.g., speech enhancement, feature compensation or transformation methods [Gong 1995] ; 2. Model domain approaches, e.g., Hidden Markov Model (HMM) decompensation [Varga et al. 1990] , Parallel Model Combination (PMC) [Gales et al. 1994] , which aim at modeling the distortion of features in noisy environments directly; 3. Hybrid approaches.",
"cite_spans": [
{
"start": 375,
"end": 386,
"text": "[Gong 1995]",
"ref_id": "BIBREF4"
},
{
"start": 464,
"end": 483,
"text": "[Varga et al. 1990]",
"ref_id": "BIBREF15"
},
{
"start": 519,
"end": 538,
"text": "[Gales et al. 1994]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In the past decade, discriminative training has been shown quite effective in reducing word error rates of HMM based ASR systems in a clean environment. In the first stage, sentence level discriminative training criteria, including Maximum Mutual Information (MMI) [Schluter 2000; Valtchev et al. 1997] and Minimum Classification Error (MCE) [Juang et al. 1997] , were proposed and proven effective. Recently, new criteria such as Minimum Word Error (MWE) and Minimum Phone Error (MPE) [Povey 2004 ], which are based on fine error analysis at word or phone level, have achieved further improvement in recognition performance.",
"cite_spans": [
{
"start": 265,
"end": 280,
"text": "[Schluter 2000;",
"ref_id": "BIBREF13"
},
{
"start": 281,
"end": 302,
"text": "Valtchev et al. 1997]",
"ref_id": "BIBREF14"
},
{
"start": 342,
"end": 361,
"text": "[Juang et al. 1997]",
"ref_id": "BIBREF6"
},
{
"start": 486,
"end": 497,
"text": "[Povey 2004",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In [Ohkura et al. 1993; Meyer et al. 2001; Laurila et al. 1998 ], noise robustness investigation on sentence level discriminative criteria such as MCE, Corrective Training (CT) is reported. Hence, we give a more complete investigation of noise robustness for general minimum error training.",
"cite_spans": [
{
"start": 3,
"end": 23,
"text": "[Ohkura et al. 1993;",
"ref_id": "BIBREF11"
},
{
"start": 24,
"end": 42,
"text": "Meyer et al. 2001;",
"ref_id": "BIBREF10"
},
{
"start": 43,
"end": 62,
"text": "Laurila et al. 1998",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "From a unified view of error minimization, the major difference between MCE, MWE and MPE is the error definition. String based MCE is based upon minimizing sentence error rate, while MWE is based on word error rate, which is more consistent with the popular metric used in evaluating ASR systems. Hence, the latter yields a better word error rate, at least on the training set [Povey 2004 ]. However, MPE performs slightly but universally better than MWE on the testing set [Povey 2004 ]. The success of MPE might be explained as follows: when refining acoustic models in discriminative training, it makes more sense to define errors in a more granular form of acoustic similarity. However, error definition at phone label level is only a rough approximation of acoustic similarity.",
"cite_spans": [
{
"start": 377,
"end": 388,
"text": "[Povey 2004",
"ref_id": "BIBREF12"
},
{
"start": 474,
"end": 485,
"text": "[Povey 2004",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Based on the analysis above, we have proposed using acoustic dissimilarity to measure errors [Du et al. 2006] . As acoustic behavior of speech units is characterized by HMMs, by measuring Kullback-Leibler Divergence (KLD) [Kullback et al. 1951] between two given HMMs, we can obtain a physically more meaningful assessment of their acoustic similarity.",
"cite_spans": [
{
"start": 93,
"end": 109,
"text": "[Du et al. 2006]",
"ref_id": "BIBREF1"
},
{
"start": 188,
"end": 221,
"text": "Kullback-Leibler Divergence (KLD)",
"ref_id": null
},
{
"start": 222,
"end": 244,
"text": "[Kullback et al. 1951]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
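The per-state comparison developed in Section 3 ultimately bottoms out in Gaussian divergences. As a point of reference (our illustration, not from the paper), the KLD between two diagonal-covariance Gaussians has a simple closed form; a minimal sketch:

```python
import numpy as np

def kld_diag_gaussians(mu0, var0, mu1, var1):
    """Closed-form D(N0 || N1) for diagonal-covariance Gaussians.

    D = 0.5 * sum( log(var1/var0) + (var0 + (mu0 - mu1)^2) / var1 - 1 )
    """
    mu0, var0, mu1, var1 = map(np.asarray, (mu0, var0, mu1, var1))
    return 0.5 * np.sum(np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Identical Gaussians have zero divergence; a unit mean shift gives 0.5.
print(kld_diag_gaussians([0.0], [1.0], [0.0], [1.0]))  # 0.0
print(kld_diag_gaussians([0.0], [1.0], [1.0], [1.0]))  # 0.5
```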
{
"text": "Adopting KLD for defining dissimilarity, the corresponding training criterion is referred as Minimum Divergence (MD) [Du et al. 2006; Du et al. 2007] . The criterion possesses the following potential advantages: 1) It employs acoustic similarity for high-resolution error definition, which is directly related to acoustic model refinement; 2) Label comparison is no longer used, which alleviates the influence of the chosen language model and phone set and the resultant hard binary decisions caused by label matching. Due to these advantages, MD is expected to be more flexible and robust.",
"cite_spans": [
{
"start": 117,
"end": 133,
"text": "[Du et al. 2006;",
"ref_id": "BIBREF1"
},
{
"start": 134,
"end": 149,
"text": "Du et al. 2007]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In our work, MWE, which matches the evaluation metric, and MD, which focuses on refining acoustic dissimilarity, are compared. Other issues related to robust discriminative training, including how to design the maximum likelihood baseline and how to treat with the silence model is also discussed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Experiments were performed on Aurora2 [Hirsch et al. 2000] , which is a widely adopted database for research on noise robustness. For completeness, we tested the effectiveness of discriminative training on different ML baselines and different noise environments.",
"cite_spans": [
{
"start": 38,
"end": 58,
"text": "[Hirsch et al. 2000]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The rest of paper is organized as follows. In Section 2, issues on noise robustness of minimum error training will be discussed. In Section 3, MD training will be introduced. Experimental results are shown and discussed in Section 4. Finally, in Section 5, we give our conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this section, we will give a brief discussion of the major issues we are facing in robust discriminative training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise Robustness Analysis of Minimum Error Training",
"sec_num": "2."
},
{
"text": "In [Povey 2004] and [Du et al. 2006] , various discriminative training approaches are unified under the framework of minimum error training, where the objective function is an average of the recognition accuracies r ( , ) W W A of all hypotheses weighted by the posterior probabilities. For conciseness, we consider the single training utterance case:",
"cite_spans": [
{
"start": 3,
"end": 15,
"text": "[Povey 2004]",
"ref_id": "BIBREF12"
},
{
"start": 20,
"end": 36,
"text": "[Du et al. 2006]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Resolution of Minimum Error Training",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r ( ) ( | ) ( , ) P \u2208 = \u2211 W W O W W F A \u03b8 \u03b8 M",
"eq_num": "(1)"
}
],
"section": "Error Resolution of Minimum Error Training",
"sec_num": "2.1"
},
{
"text": "where \u03b8 represents the set of the model parameters; O is a sequence of acoustic observation vectors; r W is the reference word sequence; M is the hypotheses space;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Resolution of Minimum Error Training",
"sec_num": "2.1"
},
{
"text": "( | ) P \u03b8 W O is the posterior probability of the hypothesis W given O, which can be formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Resolution of Minimum Error Training",
"sec_num": "2.1"
},
{
"text": "' ( | ) ( ) ( | ) ( | ') ( ') W P P P P P \u03ba \u03b8 \u03b8 \u03ba \u03b8 \u2208 = \u2211 O W W W O O W W M (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Resolution of Minimum Error Training",
"sec_num": "2.1"
},
{
"text": "where \u03ba is the acoustic scaling factor. A is word accuracy, which matches the commonly used evaluation metric of speech recognition. However, MPE has been shown to be more effective in reducing recognition errors because it provides a more precise measurement of word errors at the phone level. We can argue this point by advocating the final goal of discriminative training. In refining acoustic models to obtain better performance, it makes more sense to measure acoustic similarity between hypotheses instead of word accuracy. The symbol matching does not relate acoustic similarity with recognition. The measured errors can also be strongly affected by the phone set definition and language model selection. Therefore, acoustic similarity is proposed as a finer and more direct error definition in MD training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Resolution of Minimum Error Training",
"sec_num": "2.1"
},
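To make Eqs. 1 and 2 concrete, the following minimal sketch (ours, not the paper's implementation) evaluates the minimum-error objective over an N-best list; the per-hypothesis log-likelihoods, language-model scores, and accuracies are hypothetical inputs, and the accuracy can be any of the definitions compared in Table 1:

```python
import math

def minimum_error_objective(hyps, kappa=1.0 / 33.0):
    """Posterior-weighted accuracy (Eqs. 1-2) over a hypothesis list.

    Each hypothesis is a dict with:
      'acoustic_logp': log P(O|W),  'lm_logp': log P(W),
      'accuracy':      A(W, W_r) under the chosen error definition.
    """
    # Scaled joint log-score kappa*log P(O|W) + log P(W), as in Eq. 2.
    scores = [kappa * h['acoustic_logp'] + h['lm_logp'] for h in hyps]
    # Normalizer of Eq. 2 via log-sum-exp for numerical stability.
    m = max(scores)
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    # Eq. 1: expected accuracy under the hypothesis posterior.
    return sum(math.exp(s - log_z) * h['accuracy'] for h in hyps)

hyps = [{'acoustic_logp': -1200.0, 'lm_logp': -8.0, 'accuracy': 5},
        {'acoustic_logp': -1210.0, 'lm_logp': -7.5, 'accuracy': 4}]
print(minimum_error_objective(hyps))
```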
{
"text": "Criterion ( , ) r W W A Objective String based MCE ( ) r = \u03b4 W W Sentence accuracy MWE r LEV( , ) r \u2212 W WW Word accuracy MPE r r LEV( , ) P P P \u2212 W W W Phone accuracy MD r ( || ) D \u2212 W W Acoustic similarity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1. Comparison of criteria of minimum error training. ( W P : Phone sequence corresponding to word sequence W; LEV(,): Levenshtein distance between two symbol strings;| \u22c5 |: Number of symbols in a string.)",
"sec_num": null
},
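A small sketch of the symbol-level accuracy definitions in Table 1 (our illustration; the MD row replaces the Levenshtein-based term with a negated KLD, computed as in Section 3):

```python
def levenshtein(a, b):
    """Levenshtein distance between two symbol sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def mce_accuracy(hyp_words, ref_words):
    return 1.0 if hyp_words == ref_words else 0.0                  # sentence accuracy

def mwe_accuracy(hyp_words, ref_words):
    return len(ref_words) - levenshtein(hyp_words, ref_words)      # word accuracy

def mpe_accuracy(hyp_phones, ref_phones):
    return len(ref_phones) - levenshtein(hyp_phones, ref_phones)   # phone accuracy

print(mwe_accuracy("one two three".split(), "one three".split()))  # 1
```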
{
"text": "Here, we aim at seeking how criteria with different error resolution performs in noisy environments. In our experiments, the whole-word model, which is commonly used in digit tasks, is adopted. For the noisy robustness analysis, MWE, which matches with the evaluation metric of speech recognition, will compared with MD, which possesses the highest error resolution as shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 385,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 1. Comparison of criteria of minimum error training. ( W P : Phone sequence corresponding to word sequence W; LEV(,): Levenshtein distance between two symbol strings;| \u22c5 |: Number of symbols in a string.)",
"sec_num": null
},
{
"text": "In noisy environments, various ML trained baselines can be designed. So, the effectiveness of minimum error training with different training modes will be explored. In [Hirsch et al. 2000] , two different sets of training, clean-training and multi-training, are used. In clean-training mode, only clean speech is used for training. Hence, there will be a mismatch when the model is tested in noisy environments. To alleviate the mismatch, multi-training, in which training set is composed of noisy speech with different SNRs, can be applied. Actually, multi-training can only achieve a \"global SNR\" match. To achieve a \"local SNR\" match, we adopt a SNR-based training mode. In the training phase, we train a series of models at different SNR levels, while in testing, all these models are paralleled as multi pronunciations of a HMM. Ideally, the model that matched the local SNR best will be automatically selected in decoding. SNR-based training can be considered as a high resolution acoustic modeling of multi-training. An illustration of the three training modes is shown in Figure 1 .",
"cite_spans": [
{
"start": 168,
"end": 188,
"text": "[Hirsch et al. 2000]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1080,
"end": 1088,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Training Modes",
"sec_num": "2.2"
},
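The three training modes can be summarized as data and model bookkeeping. Below is a hypothetical outline (the helper names `corrupt` and `train_ml_hmms` are ours, stubbed for illustration, not the paper's code):

```python
SNR_LEVELS_DB = [20, 15, 10, 5]  # multi-condition SNR levels, Aurora2-style

def corrupt(utt, noise, snr_db):
    """Stand-in for adding `noise` to `utt` at a target SNR (dB)."""
    return {"speech": utt, "noise": noise, "snr_db": snr_db}

def train_ml_hmms(training_set):
    """Stand-in for ML (Baum-Welch) HMM training."""
    return {"n_utterances": len(training_set)}

def clean_training(clean_utts):
    # Clean speech only: mismatched when the model is tested in noise.
    return train_ml_hmms(clean_utts)

def multi_training(clean_utts, noises):
    # Clean plus noisy speech at several SNRs: a "global SNR" match only.
    noisy = [corrupt(u, n, snr) for u in clean_utts
             for n in noises for snr in SNR_LEVELS_DB]
    return train_ml_hmms(clean_utts + noisy)

def snr_based_training(clean_utts, noises):
    # One model set per SNR; at decoding all sets run in parallel as
    # alternative "pronunciations", so the best local-SNR match wins.
    return {snr: train_ml_hmms([corrupt(u, n, snr)
                                for u in clean_utts for n in noises])
            for snr in SNR_LEVELS_DB}

models = snr_based_training(["utt1", "utt2"], ["subway", "babble"])
print(sorted(models))  # [5, 10, 15, 20]
```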
{
"text": "An important issue in discriminative training is how to update silence or background models, which is even more critical in a noisy environment. In our research, we pay special attention to this issue for appropriate guidelines. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Modes",
"sec_num": "2.2"
},
{
"text": "A word sequence is acoustically characterized by a sequence of HMMs. For automatically measuring acoustic similarity between W and r W , we adopt KLD between the corresponding HMMs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Errors by Acoustic Similarity",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r r ( , ) ( || ) D = \u2212 W W W W A",
"eq_num": "(3)"
}
],
"section": "Defining Errors by Acoustic Similarity",
"sec_num": "3.1"
},
{
"text": "The HMMs, when they are reasonably well trained in ML sense, can serve as succinct descriptions of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Errors by Acoustic Similarity",
"sec_num": "3.1"
},
{
"text": "Our goal is to measure the KLD for word sequences in Eq. 3. Given two word sequences r W and W without their state segmentations, we should use a state matching algorithm to measure the KLD between the corresponding HMMs [Liu et al. 2005] . With state segmentations, the calculation can be further decomposed down to the state level:",
"cite_spans": [
{
"start": 221,
"end": 238,
"text": "[Liu et al. 2005]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1: 1: r 1: 1: 1: 1: r r ( | ) 1: 1: r ( | ) ( ) ( ) = ( | ) log T T T T T T p 1:T T T p D D p d = \u222b o s o s W W s s o s o",
"eq_num": "(4)"
}
],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "where $T$ is the number of frames; $\\mathbf{o}_{1:T}$ and $\\mathbf{s}^r_{1:T}$ are the observation sequence and the hidden state sequence, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "By assuming all observations are independent, we obtain: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "$D(\\mathbf{W}_r \\| \\mathbf{W}) = \\sum_{t=1}^{T} D(s^r_t \\| s_t) = \\sum_{t=1}^{T} \\int p(\\mathbf{o}_t \\mid s^r_t) \\log \\frac{p(\\mathbf{o}_t \\mid s^r_t)}{p(\\mathbf{o}_t \\mid s_t)} \\, d\\mathbf{o}_t \\quad (5)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "which means we can calculate the KLD state by state and sum them up. Now, our problem is how to measure the KLD between two states. Conventionally, each state $s$ is characterized by a Gaussian Mixture Model (GMM): $p(\\mathbf{o} \\mid s) = \\sum_{m=1}^{M_s} w_{sm} \\, \\mathcal{N}(\\mathbf{o}; \\boldsymbol{\\mu}_{sm}, \\boldsymbol{\\Sigma}_{sm})$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
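Before turning to the state-pair divergence itself, here is a minimal sketch (ours) of the outer sum in Eq. 5, assuming the state-level KLDs have been precomputed into a lookup table (as is done for efficiency in Section 4.1):

```python
def sequence_kld(ref_states, hyp_states, state_kld):
    """Eq. 5: D(W_r || W) as a frame-wise sum, given state segmentations.

    ref_states, hyp_states : per-frame state ids, both of length T
    state_kld              : precomputed table, state_kld[(s_ref, s_hyp)]
    """
    assert len(ref_states) == len(hyp_states)
    return sum(state_kld[(r, h)] for r, h in zip(ref_states, hyp_states))

# Toy usage with a hypothetical two-state divergence table.
table = {("a", "a"): 0.0, ("a", "b"): 1.3, ("b", "a"): 1.1, ("b", "b"): 0.0}
print(sequence_kld(["a", "a", "b"], ["a", "b", "b"], table))  # 1.3
```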
{
"text": ", so the comparison is reduced to measuring KLD between two GMMs. Since there is no closed-form solution, we need to resort to the computationally intensive Monte-Carlo simulations. The unscented transform mechanism [Goldberger et al. 2003 ] has been proposed to approximate the KLD measurement of the two GMMs.",
"cite_spans": [
{
"start": 216,
"end": 239,
"text": "[Goldberger et al. 2003",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "Let $\\mathcal{N}(\\mathbf{o}; \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$ be an $N$-dimensional Gaussian distribution and $h$ be an arbitrary $\\mathbb{R}^N \\rightarrow \\mathbb{R}$ function; the unscented transform mechanism suggests approximating the expectation of $h$ by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "$\\int \\mathcal{N}(\\mathbf{o}; \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) \\, h(\\mathbf{o}) \\, d\\mathbf{o} \\approx \\frac{1}{2N} \\sum_{k=1}^{2N} h(\\mathbf{o}_k) \\quad (6)$ where $\\mathbf{o}_k \\; (1 \\leq k \\leq 2N)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "are the artificially chosen \"sigma\" points:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mathbf{o}_k = \\boldsymbol{\\mu} + \\sqrt{N \\lambda_k} \\, \\mathbf{u}_k, \\quad \\mathbf{o}_{k+N} = \\boldsymbol{\\mu} - \\sqrt{N \\lambda_k} \\, \\mathbf{u}_k \\quad (1 \\leq k \\leq N)",
"eq_num": ""
}
],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "where $\\lambda_k$ and $\\mathbf{u}_k$ are the $k$-th eigenvalue and eigenvector of $\\boldsymbol{\\Sigma}$, respectively. Geometrically, all these \"sigma\" points lie on the principal axes of $\\boldsymbol{\\Sigma}$, and Eq. 6 is exact when $h$ is quadratic. In our case, the Gaussian distribution in Eq. 6 is replaced by a GMM and $h$ is the log likelihood ratio in Eq. 5. Then, the KLD between two states (GMMs) can be approximated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(s^r_t \\| s_t) \\approx \\sum_{m=1}^{M_{s^r_t}} \\frac{w_{s^r_t m}}{2N} \\sum_{k=1}^{2N} \\log \\frac{p(\\mathbf{o}_{mk} \\mid s^r_t)}{p(\\mathbf{o}_{mk} \\mid s_t)}",
"eq_num": "(7)"
}
],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
{
"text": "where $\\mathbf{o}_{mk}$ is the $k$-th \"sigma\" point in the $m$-th Gaussian kernel of state $s^r_t$. By plugging this into Eq. 4, we obtain the KLD between two word sequences given their state segmentations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KLD between Two Word Sequences",
"sec_num": "3.2"
},
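A compact sketch of the unscented-transform approximation of Eqs. 6-7 (our illustration, assuming diagonal covariances so that the eigenvalues and eigenvectors of Σ are just the per-dimension variances and the coordinate axes):

```python
import math
import numpy as np

def gmm_logpdf(x, weights, means, variances):
    """Log density of a diagonal-covariance GMM at point x."""
    log_comps = []
    for w, mu, var in zip(weights, means, variances):
        ll = -0.5 * np.sum(np.log(2.0 * math.pi * var) + (x - mu) ** 2 / var)
        log_comps.append(math.log(w) + ll)
    m = max(log_comps)  # log-sum-exp over mixture components
    return m + math.log(sum(math.exp(c - m) for c in log_comps))

def unscented_gmm_kld(gmm_ref, gmm_hyp):
    """Approximate D(gmm_ref || gmm_hyp) as in Eqs. 6-7.

    Each GMM is a (weights, means, variances) triple of numpy arrays with
    means/variances of shape (M, N); 2N sigma points per Gaussian kernel.
    """
    weights, means, variances = gmm_ref
    n_dim = means.shape[1]
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        # Rows are sqrt(N * lambda_k) u_k; diagonal covariance makes u_k = e_k.
        offsets = np.sqrt(n_dim * var) * np.eye(n_dim)
        sigma_points = np.vstack([mu + offsets, mu - offsets])  # 2N points
        total += w / (2.0 * n_dim) * sum(
            gmm_logpdf(o, *gmm_ref) - gmm_logpdf(o, *gmm_hyp)
            for o in sigma_points)
    return total

# Toy check: single-kernel unit-variance GMMs with a unit mean shift.
g0 = (np.array([1.0]), np.array([[0.0, 0.0]]), np.array([[1.0, 1.0]]))
g1 = (np.array([1.0]), np.array([[1.0, 0.0]]), np.array([[1.0, 1.0]]))
print(unscented_gmm_kld(g0, g1))  # 0.5, the closed-form value here
```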
{
"text": "Usually, a word graph is a compact representation of large hypotheses space in speech recognition. As the KLD between a hypothesised word sequence and the reference can be decomposed down to the frame level, we have the following word graph based representation of (1): is dependent on the model parameters, which should be updated in optimization process. In [Du et al. 2007] , we conclude that the optimization of the gain function term has little impact on the performance. So here, r ( , ) W W A is considered a constant term and not optimized. The KLDs related to gain function are precomputed using the ML trained model parameters. Then our optimization of objective function is the same as that mentioned in [Povey 2004] . We use the Forward-Backward algorithm to update the word graph and the Extended Baum-Welch algorithm to update the model parameters in the training iterations.",
"cite_spans": [
{
"start": 360,
"end": 376,
"text": "[Du et al. 2007]",
"ref_id": "BIBREF0"
},
{
"start": 715,
"end": 727,
"text": "[Povey 2004]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gain Function Calculation",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ": ( ) ( | ) ( ) w w P w \u2208 \u2208 \u2208 = \u2211 \u2211 W W W O M M \u03b8 \u03b8 F A",
"eq_num": "(8)"
}
],
"section": "Gain Function Calculation",
"sec_num": "3.3"
},
{
"text": "Experiments on TIDigits and Aurora2, both English continuous digit tasks, were performed. The English vocabulary is made of the 11 digits, from 'one(1)' to 'nine(9)', plus 'oh(0)' and 'zero(0)'. The baseline configuration for two databases is listed in Table 2 . The Aurora2 task consists of English digits in the presence of additive noise and linear convolutional channel distortion. These distortions have been synthetically introduced to clean TIDigits data. Three testing sets measure performance against noise types similar to those seen in the training data (set A), different from those seen in the training data (set B), and with an additional convolutional channel (set C). The baseline performance and other details can be found in [Hirsch et al. 2000] .",
"cite_spans": [
{
"start": 743,
"end": 763,
"text": "[Hirsch et al. 2000]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "For minimum error training, the acoustic scaling factor \u03ba was set to 1 33 . All KLDs between any two states were precomputed to make the MD training more efficient. For Aurora2, we select the best results after 20 iterations for each sub set of testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "As a preliminary result of noise robustness analysis, we first give the results of MD on the clean TIDigits database compared with MWE. As shown in Figure 2 , performance of MD achieves 57.8% relative error reduction compared with the ML baseline and also outperforms MWE in all iterations. Table 3 , we explore whether to update the silence model in minimum error training using different training modes. Since it is unrelated to the criteria, here we adopt MWE. When applying clean-training, the performances of all test sets without updating silence model are consistently better. However, in multi-training, the conclusion is the opposite. From the results, we can conclude that increasing the discrimination of the silence model will lead to performance degradation in mismatched cases (clean-training) and performance improvement in matched cases (multi-training). This can be explained as follows: For the clean-training case, if we increase the discrimination of the silence model, the noise segments are more easily recognized as digits when testing on noisy data. Then, insertion errors will increase. However, for the multi-training case, the silence model represents both silence and noise segments, which is matched with that when testing on noisy data. So, by updating the silence model, the global performance will be improved. Obviously, our SNR-based training belongs to the latter. In all our experiments, the treatment of silence model will obey this conclusion. Table 4 , the performances of MD and MWE are compared. Here, multi-training is adopted because it is believed that matching between training and testing can tap the potential of minimum error training. For the overall performance on three test sets, MD consistently outperforms MWE. From the viewpoint of SNRs, MD outperforms MWE in most cases when SNR is below 15dB. Hence, we can conclude that, although MWE matches with the model type and evaluation metric of speech recognition, MD, which possesses the highest error resolution, outperforms it in low SNR. In other words, the performance can be improved in low SNR by increasing the error resolution of criterion in minimum error training. This conclusion can be also drawn in clean-training and SNR-based training cases. Figure 3 shows relative improvement over ML baseline using MD training with different training modes. From this figure, some conclusions can be obtained. First, set B, whose noise scenarios are different from training, achieves the most obvious relative improvement in most cases. The relative improvement of set A is comparable with set B in the clean-training and multi-training, but worse than set B in SNR-based training. The relative improvement of set C, due to the mismatch of noise scenario and channel, was almost the worst in all training modes. Second, the relative improvement performance declines for decreasing SNR in clean-training. However, in multi-training and SNR-based training, the peak performance is in the range of 20dB to 15dB. Also, in the low SNRs, the performance of cleaning-training is worse than the other two training modes on set A and set B.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 2",
"ref_id": null
},
{
"start": 291,
"end": 298,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1482,
"end": 1489,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 2258,
"end": 2266,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experiments on TIDigits Database",
"sec_num": "4.2"
},
{
"text": "The summary of performance is listed in Table 5 . Word accuracy of our SNR-based training outperforms multi-training on all test sets, especially set A and set C. For the overall relative improvement, the best result of 17.62% is achieved in multi-training.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments on Aurora2 Database",
"sec_num": "4.3"
},
{
"text": "In this paper, the noise robustness of discriminatively trained HMMs is investigated. Discriminatively trained models are tested on English continuous digit databases, and MD and MWE criteria are experimentally compared to test the affection of error resolution. We observe: 1. Minimum error training is effective not only in clean environments, but also in noisy environments, which can be concluded in various training modes. Minimum error training is more effective as the SNR increases. Even when testing on mismatched noise scenarios, minimum error training also achieves better performance than ML training. 2. In minimum error training, higher resolution error analysis is more helpful at low SNRs. 3. Silence models should be carefully updated when the training and testing data are not well-matched.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A New Minimum Divergence Approach to Discriminative Training",
"authors": [
{
"first": "J",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "F",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
},
{
"first": "R.-H",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2007,
"venue": "InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "677--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Du, J., P. Liu, H. Jiang, F.K. Soong, and R.-H. Wang, \"A New Minimum Divergence Approach to Discriminative Training,\" InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing, 2007, pp. 677-680.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Minimum Divergence Based Discriminative Training",
"authors": [
{
"first": "J",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "F",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
},
{
"first": "J.-L",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "R.-H",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2006,
"venue": "InProceedings of International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "2410--2413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Du, J., P. Liu, F.K. Soong, J.-L. Zhou, and R.-H. Wang, \"Minimum Divergence Based Discriminative Training,\" InProceedings of International Conference on Spoken Language Processing, 2006, pp. 2410-2413.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Robust Continuous Speech Recognition using Parallel Model Combination",
"authors": [
{
"first": "M",
"middle": [
"J F"
],
"last": "Gales",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gales, M.J.F. and S.J. Young, \"Robust Continuous Speech Recognition using Parallel Model Combination,\" Technical Report EDICS Number: SA 1.6.8, Cambridge University, 1994.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An Efficient Image Similarity Measure Based on Approximations of KL-Divergence between Two Gaussian Mixtures",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2003,
"venue": "InProceedings of International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "370--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldberger, J., \"An Efficient Image Similarity Measure Based on Approximations of KL-Divergence between Two Gaussian Mixtures,\" InProceedings of International Conference on Computer Vision, 2003, pp. 370-377.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Speech Recognition in Noisy Environments: A Survey",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 1995,
"venue": "Speech Communication",
"volume": "16",
"issue": "",
"pages": "261--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gong, Y., \"Speech Recognition in Noisy Environments: A Survey,\" Speech Communication, 16, 1995, pp. 261-291.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The AURORA Experimental Framework for the Performance Evaluations of Speech Recognition Systems under Noisy Conditions",
"authors": [
{
"first": "H",
"middle": [
"G"
],
"last": "Hirsch",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pearce",
"suffix": ""
}
],
"year": 2000,
"venue": "InProceedings of ISCA ITRW ASR",
"volume": "",
"issue": "",
"pages": "181--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirsch, H.G. and D. Pearce, \"The AURORA Experimental Framework for the Performance Evaluations of Speech Recognition Systems under Noisy Conditions,\" InProceedings of ISCA ITRW ASR, 2000, pp. 181-188.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Minimum Classification Error Rate Methods for Speech Recogtion",
"authors": [
{
"first": "B.-H",
"middle": [],
"last": "Juang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "C.-H",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "5",
"issue": "3",
"pages": "257--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juang, B.-H., W. Chou, and C.-H. Lee, \"Minimum Classification Error Rate Methods for Speech Recogtion,\" IEEE Transactions on Speech and Audio Processing, 5(3), 1997, pp. 257-265.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On Information and Sufficiency",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kullback",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Leibler",
"suffix": ""
}
],
"year": 1951,
"venue": "Ann. Math. Stat",
"volume": "22",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kullback, S. and R.A. Leibler, \"On Information and Sufficiency,\" Ann. Math. Stat, 22, 1951, pp. 79-86.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Combination of Discriminative and Maximum Likelihood Techniques for Noise Robust Speech Recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Laurila",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Vasilache",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Viikki",
"suffix": ""
}
],
"year": 1998,
"venue": "InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "85--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurila, K., M. Vasilache, and O. Viikki, \"A Combination of Discriminative and Maximum Likelihood Techniques for Noise Robust Speech Recognition,\" InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing, 1998, pp. 85-88.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Effective Estimation of Kullback-Leibler Divergence between Speech Models",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "F",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
},
{
"first": "J.-L",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, P., F.K. Soong, and J.-L. Zhou, \"Effective Estimation of Kullback-Leibler Divergence between Speech Models,\" Technical Report, Microsoft Research Asia, 2005.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improved Noise Robustness by Corrective and Rival Training",
"authors": [
{
"first": "C",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2001,
"venue": "InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "293--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meyer, C. and G. Rose, \"Improved Noise Robustness by Corrective and Rival Training,\" InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing, 2001, pp. 293-296.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Noise-robust HMMs Based on Minimum Error Classification",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ohkura",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rainton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sugiyama",
"suffix": ""
}
],
"year": 1993,
"venue": "InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "75--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ohkura, K., D. Rainton, and M. Sugiyama, \"Noise-robust HMMs Based on Minimum Error Classification,\" InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing, 1993, pp. 75-78.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Discriminative Training for Large Vocabulary Speech Recognition",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., \"Discriminative Training for Large Vocabulary Speech Recognition,\" PhD thesis, Cambridge University, 2004.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Investigations on Discriminative Training Criteria",
"authors": [
{
"first": "R",
"middle": [],
"last": "Schluter",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schluter, R., \"Investigations on Discriminative Training Criteria,\" PhD thesis, Aachen University, 2000.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "MMIE Training of Large Vocabulary Speech Recognition Systems",
"authors": [
{
"first": "V",
"middle": [],
"last": "Valtchev",
"suffix": ""
},
{
"first": "J",
"middle": [
"J"
],
"last": "Odell",
"suffix": ""
},
{
"first": "P",
"middle": [
"C"
],
"last": "Woodland",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1997,
"venue": "Speech Communication",
"volume": "22",
"issue": "",
"pages": "303--314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valtchev, V., J.J. Odell, P.C. Woodland, and S.J. Young, \"MMIE Training of Large Vocabulary Speech Recognition Systems,\" Speech Communication, 22, 1997, pp. 303-314.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Hidden Markov Model Decomposition of Speech and Noise",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Varga",
"suffix": ""
},
{
"first": "R",
"middle": [
"K"
],
"last": "Moore",
"suffix": ""
}
],
"year": 1990,
"venue": "InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "845--848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Varga, A.P. and R.K. Moore, \"Hidden Markov Model Decomposition of Speech and Noise,\" InProceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing, 1990, pp. 845-848.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Illustration of three training modes",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "eigenvector of \u03a3 , respectively. Geometrically, all these \"sigma\" points are on the principal axes of \u03a3 . Equation 6 is precise if h is quadratic.For our case, the Gaussian distribution in Eq. 6 is replaced by a GMM, and the function",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "certain state at time t on arc w and the reference, respectively.From the objective function defined in Eq. 1",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Relative Improvement over ML baseline on Aurora2 using different training modes in MD training",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>D</td><td>s</td><td>s</td><td>=</td><td>t \u2211 =</td><td>Ds s</td><td>=</td><td>t \u2211\u222b =</td><td>p</td><td>o s</td><td>t</td><td>p p</td><td>t o s o s</td><td>t</td><td>d o</td><td>t</td><td/><td>(5)</td></tr><tr><td colspan=\"3\">which means we s is</td><td colspan=\"3\">characterized</td><td/><td>by</td><td/><td>a</td><td/><td colspan=\"2\">Gaussian</td><td/><td colspan=\"2\">Mixture</td><td>Model</td><td>(GMM):</td></tr><tr><td>(</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "can calculate KLD state by state, and sum them up. Now, our problem is how to measure the KLD between two states. Conventionally, each state"
},
"TABREF2": {
"content": "<table><tr><td>System</td><td>Feature</td><td>Model Type</td><td># State /Digit</td><td># Gauss /State</td><td># string of training set</td><td># string of testing set</td></tr><tr><td>TIDigits Aurora2</td><td>MFCC_E_D_A</td><td>left-to-right whole-word model</td><td>10 16</td><td>6 3</td><td>12549 8440*2</td><td>12547 1001*70</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF3": {
"content": "<table><tr><td colspan=\"4\">Training Mode Update Silence Model Set A Set B Set C Overall</td></tr><tr><td>Clean</td><td>YES</td><td>61.85 56.94 66.26</td><td>60.77</td></tr><tr><td>Clean</td><td>NO</td><td>64.74 61.69 67.95</td><td>64.16</td></tr><tr><td>Multi</td><td>YES</td><td>89.15 89.16 84.66</td><td>88.26</td></tr><tr><td>Multi</td><td>NO</td><td>88.91 88.55 84.43</td><td>87.87</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF4": {
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"7\">Multi-Training -Results (Minimum Divergence)</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>A</td><td/><td/><td/><td/><td>B</td><td/><td/><td/><td>C</td><td/><td>Rel</td></tr><tr><td/><td colspan=\"2\">Subway Babble</td><td>Car</td><td colspan=\"4\">Exhibition Average Restaurant Street</td><td colspan=\"7\">Airport Station Average Subway M Street M Average Average Impr</td></tr><tr><td>Clean</td><td>99.14</td><td>99.12</td><td>98.9</td><td>99.2</td><td>99.09</td><td>99.14</td><td>99.12</td><td>98.9</td><td>99.2</td><td>99.09</td><td>98.89</td><td>98.85</td><td>98.87</td><td>99.05 35.32%</td></tr><tr><td>20dB</td><td>98.71</td><td>98.55</td><td>98.81</td><td>98.61</td><td>98.67</td><td>98.43</td><td>98.37</td><td>98.57</td><td>98.89</td><td>98.57</td><td>98.65</td><td>97.64</td><td>98.15</td><td>98.52 43.92%</td></tr><tr><td>15dB</td><td>98.5</td><td>98</td><td>98.33</td><td>97.93</td><td>98.19</td><td>98</td><td>97.76</td><td>97.79</td><td>97.93</td><td>97.87</td><td>97.88</td><td>96.74</td><td>97.31</td><td>97.89 42.04%</td></tr><tr><td>10dB</td><td>97.18</td><td>96.55</td><td>97.2</td><td>96.08</td><td>96.75</td><td>96.41</td><td>95.8</td><td>96.06</td><td>95.31</td><td>95.90</td><td>95.15</td><td>94.04</td><td>94.60</td><td>95.98 34.81%</td></tr><tr><td>5dB</td><td>92.39</td><td>89.81</td><td>90.49</td><td>90.25</td><td>90.74</td><td>89.28</td><td>87.06</td><td>90.52</td><td>87.23</td><td>88.52</td><td>84.68</td><td>82.56</td><td>83.62</td><td>88.43 20.78%</td></tr><tr><td>0dB</td><td>72.8</td><td>64.63</td><td>58.93</td><td>70.32</td><td>66.67</td><td>65.24</td><td>64</td><td>69.19</td><td>62.48</td><td>65.23</td><td>49.25</td><td>54.44</td><td>51.85</td><td>63.13 10.51%</td></tr><tr><td>-5dB</td><td>31.04</td><td>29.56</td><td>22.7</td><td>28.57</td><td>27.97</td><td>30.06</td><td>28.96</td><td>33.58</td><td>25.46</td><td>29.52</td><td>22.01</td><td>24.24</td><td>23.13</td><td>27.62 4.15%</td></tr><tr><td colspan=\"2\">Average 91.92</td><td>89.51</td><td>88.75</td><td>90.64</td><td>90.20</td><td>89.47</td><td>88.60</td><td>90.43</td><td>88.37</td><td>89.22</td><td>85.12</td><td>85.08</td><td>85.10</td><td>88.79</td></tr><tr><td>Rel</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"4\">Impr 28.10% 12.93% 16.53%</td><td>21.79%</td><td>19.60%</td><td>27.93%</td><td colspan=\"4\">12.04% 22.53% 22.40% 21.45%</td><td>11.21%</td><td>4.93%</td><td>8.17%</td><td>17.62%</td></tr><tr><td/><td/><td/><td/><td colspan=\"7\">Multi-Training -Results (Minimum Word Error)</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>A</td><td/><td/><td/><td/><td>B</td><td/><td/><td/><td>C</td><td/><td>Rel</td></tr><tr><td/><td colspan=\"2\">Subway Babble</td><td>Car</td><td colspan=\"4\">Exhibition Average Restaurant Street</td><td colspan=\"7\">Airport Station Average Subway M Street M Average Average Impr</td></tr><tr><td>Clean</td><td>99.14</td><td>99.18</td><td>99.02</td><td>99.29</td><td>99.16</td><td>99.14</td><td>99.18</td><td>99.02</td><td>99.29</td><td>99.16</td><td>98.99</td><td>99.06</td><td>99.03</td><td>99.13 40.96%</td></tr><tr><td>20dB</td><td>98.86</td><td>98.67</td><td>98.78</td><td>98.7</td><td>98.75</td><td>98.74</td><td>98.43</td><td>98.72</td><td>98.95</td><td>98.71</td><td>98.34</td><td>97.4</td><td>97.87</td><td>98.56 
45.45%</td></tr><tr><td>15dB</td><td>98.74</td><td>98.13</td><td>98.33</td><td>97.69</td><td>98.22</td><td>98.5</td><td>97.82</td><td>98.03</td><td>98.06</td><td>98.10</td><td>97.33</td><td>96.25</td><td>96.79</td><td>97.89 41.97%</td></tr><tr><td>10dB</td><td>96.87</td><td>95.95</td><td>96.87</td><td>95.43</td><td>96.28</td><td>96.22</td><td>95.53</td><td>96.42</td><td>95.74</td><td>95.98</td><td>94.63</td><td>93.5</td><td>94.07</td><td>95.72 30.03%</td></tr><tr><td>5dB</td><td>92.32</td><td>88.85</td><td>88.25</td><td>88.83</td><td>89.56</td><td>88.36</td><td>87.3</td><td>89.53</td><td>86.61</td><td>87.95</td><td>84.49</td><td>82.62</td><td>83.56</td><td>87.72 15.40%</td></tr><tr><td>0dB</td><td>70.31</td><td>63.33</td><td>53.44</td><td>64.7</td><td>62.95</td><td>64.6</td><td>68.18</td><td>68.27</td><td>59.12</td><td>65.04</td><td>47.62</td><td>54.44</td><td>51.03</td><td>61.40 6.25%</td></tr><tr><td>-5dB</td><td>29.66</td><td>29.72</td><td>21.8</td><td>25.27</td><td>26.61</td><td>30.21</td><td>27.84</td><td>33.49</td><td>23.97</td><td>28.88</td><td>21.31</td><td>24.24</td><td>22.78</td><td>26.75 3.01%</td></tr><tr><td colspan=\"2\">Average 91.42</td><td>88.99</td><td>87.13</td><td>89.07</td><td>89.15</td><td>89.28</td><td>89.45</td><td>90.19</td><td>87.70</td><td>89.16</td><td>84.48</td><td>84.84</td><td>84.66</td><td>88.26</td></tr><tr><td>Rel</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">Impr 23.69% 8.60%</td><td>4.53%</td><td>8.69%</td><td>10.98%</td><td>26.64%</td><td colspan=\"4\">18.62% 20.65% 17.92% 21.02%</td><td>7.39%</td><td>3.39%</td><td>5.46%</td><td>13.71%</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF5": {
"content": "<table><tr><td/><td>Word Accuracy (%)</td><td>Relative Improvement</td></tr><tr><td>Training Mode</td><td>Set A Set B Set C Overall</td><td>Set A Set B Set C Overall</td></tr><tr><td>Clean-Training</td><td>63.49 58.94 68.96 62.76</td><td>5.56% 7.21% 8.32% 6.76%</td></tr><tr><td>Multi-Training</td><td colspan=\"2\">90.20 89.22 85.10 88.79 19.60% 21.45% 8.17% 17.62%</td></tr><tr><td colspan=\"3\">SNR-based Training 91.27 89.27 86.70 89.56 10.00% 26.21% 1.14% 15.68%</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
}
}
}
}