|
{ |
|
"paper_id": "O09-1003", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:11:23.467487Z" |
|
}, |
|
"title": "Noise-Robust Speech Features Based on Cepstral Time Coefficients", |
|
"authors": [ |
|
{ |
|
"first": "Ja-Zang", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Sun Yat-Sen University", |
|
"location": { |
|
"settlement": "Kaohsiung", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chia-Ping", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "cpchen@cse.nsysu.edu.tw" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we investigate the noise-robustness of features based on the cepstral time coefficients (CTC). By cepstral time coefficients, we mean the coefficients obtained from applying the discrete cosine transform to the commonly used mel-frequency cepstral coefficients (MFCC). Furthermore, we apply temporal filters used for computing delta and acceleration dynamic features to the CTC, resulting in delta and acceleration features in the frequency domain. We experiment with five different variations of such CTC-based features. The evaluation is done on the Aurora 3 noisy digit recognition tasks with four different languages. The results show all but one such feature set performance gain, the other feature sets actually lead to performance gains. The best feature set achieves an improvement of 25% over the baseline feature set of MFCC.", |
|
"pdf_parse": { |
|
"paper_id": "O09-1003", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we investigate the noise-robustness of features based on the cepstral time coefficients (CTC). By cepstral time coefficients, we mean the coefficients obtained from applying the discrete cosine transform to the commonly used mel-frequency cepstral coefficients (MFCC). Furthermore, we apply temporal filters used for computing delta and acceleration dynamic features to the CTC, resulting in delta and acceleration features in the frequency domain. We experiment with five different variations of such CTC-based features. The evaluation is done on the Aurora 3 noisy digit recognition tasks with four different languages. The results show all but one such feature set performance gain, the other feature sets actually lead to performance gains. The best feature set achieves an improvement of 25% over the baseline feature set of MFCC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A front-end of a speech recognition system may consist of several stages for noise-robustness to achieve good performance. In the early stage of spectral domain, well-known methods such as spectral subtraction [1] and Wiener filter [2] may be applied. In the middle stage of cepstral domain, the mel-frequency cepstral coefficients (MFCC) are commonly used as the static feature set. In the postprocessing stage, there may be normalization, temporal information integration, and transformation modules.", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 213, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 235, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "It has been observed that simple normalization approaches, such as the cepstral mean subtraction (CMS) [3] , cepstral variance normalization (CVN) [4] , and histogram normalization (HEQ) [5] can lead to significant performance improvement in recognition accuracy in noisy environment. Apparently such methods are capable of alleviating the mismatch between the clean and noisy data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 106, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 150, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 190, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this paper we investigate novel features based on simple transformation methods. Specifically, we insert a window of static cepstral vectors in a matrix and then apply the discrete cosine transform (DCT) along the temporal axis. The coefficents after the DCT is called the cepstral time coefficients, and the resultant matrix is called the cepstral time matrix (CTM) [6, 7] . After CTM for each frame is extracted, we further apply normalization and routines for delta and acceleration feature extraction to the cepstral time coefficients. The transformed features are combined with the static MFCC features to form the final feature vector.", |
|
"cite_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 373, |
|
"text": "[6,", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 376, |
|
"text": "7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "This paper is organized as follows. Section 2 defines the cepstral time matrix and introduces the investigated feature transformations. The experimental setup and recognition results are described in Section 3. In Section 4, we draw conclusions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Our feature extraction and transformation process is illustrated in Figure 1 . We begin with a review of the cepstral time matrix, which is followed by the mathematical definition of the proposed additive transformation methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 76, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Transformations", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We first insert a fixed number of adjacent feature vectors in a matrix", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cepstral Time Coefficients", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C t \uf8ee \uf8ef \uf8ef \uf8f0 C t 11 C t 12 . . . C t 1T . . . . . . . . . C t K1 C t K2 . . . C t KT \uf8f9 \uf8fa \uf8fa \uf8fb f t f t+1 . . . f t+T \u22121 .", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Cepstral Time Coefficients", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "Here K is the feature vector dimension, and f t is the feature vector of frame t, C t is the matrix whose column vectors are the T consecutive feature vectors starting from frame t. The cepstral time matrix at frame t, D t , is related to C t by the discrete-cosine transform. Each row of D t is the discrete-cosine transform of the corresponding row of C t . That is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cepstral Time Coefficients", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "D t i: = DCT (C t i: ).", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Cepstral Time Coefficients", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "Here D t i: is the i-th row of matrix D. 1 We call D t in the nth cepstral time coefficient (CTC) of channel i at frame t. D is also called cepstral time matrix (CTM). It represents the spectral information of cepstral coefficient in an analysis window of frames. 1 Since our matrix index starts from 1 instead of 0, here the DCT needs to be", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 42, |
|
"text": "1", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 265, |
|
"text": "1", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cepstral Time Coefficients", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "D t in = T \u03c4 =1 C t i\u03c4 cos (2\u03c4 \u2212 1)(n \u2212 1)\u03c0 2T . (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cepstral Time Coefficients", |
|
"sec_num": "2.1." |
|
}, |
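
{

"text": "As a concrete illustration, the cepstral time matrix can be computed in a few lines of NumPy. The sketch below is only an illustration of Eqs. (1)-(3): the array layout, the helper name, and the use of scipy's DCT routine are assumptions rather than part of the original front-end.\n\nimport numpy as np\nfrom scipy.fftpack import dct\n\ndef cepstral_time_matrix(mfcc, t, T=15):\n    # mfcc: array of shape (num_frames, K) holding the static cepstra f_1, ..., f_N.\n    # C^t collects the T consecutive frames starting at frame t (shape K x T), as in Eq. (1).\n    C = mfcc[t:t + T].T\n    # Row-wise DCT along the temporal axis; scipy's unnormalized DCT-II equals twice the\n    # sum in Eq. (3), hence the factor 0.5.\n    D = 0.5 * dct(C, type=2, norm=None, axis=1)\n    return D",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cepstral Time Coefficients",

"sec_num": "2.1."

},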
|
{ |
|
"text": "In this paper, we have 5 different transforms applied to CTC, each leading to a different feature vector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CTC-Based Features", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "The first transform is dividing the first column of D t by the number of frames (T ), while leaving other columns unchanged. Let E t be the new feature matrix, we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E t :1 = D t :1 /T E t :n = D t :n , n = 1", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "Note E t :1 has a physical meaning. According to (2) , it is the mean of the cepstral coefficients within an analysis window (while D t :1 is the sum). We then compute a novel feature set based on E t . Specifically, we treat the columns in E t as a temporal sequence and apply the delta and acceleration feature extraction steps. That is,", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 52, |
|
"text": "(2)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u0206 t :2 = E t :2 \u2212 E t :1 E t :3 = E t :3 \u2212 2E t :2 + E t :1 .", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "We add the\u0206 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E t = \uf8ee \uf8ef \uf8f0 C t :1 E t :2 E t :3 \uf8f9 \uf8fa \uf8fb .", |
|
"eq_num": "(6" |
|
} |
|
], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
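
{

"text": "For concreteness, Method E can be sketched in NumPy as follows, with D denoting the K x T cepstral time matrix at frame t and C the corresponding window of static cepstra; the function name is hypothetical and the code merely illustrates Eqs. (4)-(6).\n\nimport numpy as np\n\ndef method_e_features(D, C, T=15):\n    E = D.copy()\n    E[:, 0] = D[:, 0] / T                      # Eq. (4): zeroth CTC becomes the window mean\n    e2 = E[:, 1] - E[:, 0]                     # Eq. (5): delta-like operation across CTC columns\n    e3 = E[:, 2] - 2.0 * E[:, 1] + E[:, 0]     # Eq. (5): acceleration-like operation\n    # Eq. (6): C[:, 0] is f_t, the static MFCC (plus log energy) vector of frame t.\n    return np.concatenate([C[:, 0], e2, e3])   # 3K = 39 dimensions for K = 13",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method E",

"sec_num": "2.2.1."

},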
|
{

"text": "An alternative transform is to normalize the feature values in the first column to the range [-1, 1]. This is achieved by dividing D^t_{:1} by the maximum magnitude of the first column. Let F^t be defined by",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method F",

"sec_num": "2.2.2."

},

{
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "= D t :1 /N t F t :n = D t :n , n = 1", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "where N t is the maximum magnitude in the first column, i.e.,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "N t = max d |D t d1 |.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "The remaining operations are similar to Method E. That is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "F t :2 = F t :2 \u2212 F t :1 F t :3 = F t :3 \u2212 2F t :2 + F t :1 .", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
|
{

"text": "We add \\hat{F}^t_{:2} and \\hat{F}^t_{:3} to the static MFCCs, resulting in a feature vector of",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method F",

"sec_num": "2.2.2."

},
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "F t = \uf8ee \uf8ef \uf8f0 C t :1 F t :2 F t :3 \uf8f9 \uf8fa \uf8fb .", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Method E", |
|
"sec_num": "2.2.1." |
|
}, |
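
{

"text": "Method F differs from Method E only in how the first column is normalized; a corresponding sketch under the same assumptions as above follows.\n\nimport numpy as np\n\ndef method_f_features(D, C):\n    N = np.max(np.abs(D[:, 0]))                # N_t = max_d |D^t_{d1}|\n    F = D.copy()\n    F[:, 0] = D[:, 0] / N                      # Eq. (7): zeroth CTC scaled into [-1, 1]\n    f2 = F[:, 1] - F[:, 0]                     # Eq. (8)\n    f3 = F[:, 2] - 2.0 * F[:, 1] + F[:, 0]     # Eq. (8)\n    return np.concatenate([C[:, 0], f2, f3])   # Eq. (9)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method F",

"sec_num": "2.2.2."

},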
|
{ |
|
"text": "In Method G, we add the first and second columns of CTM, which represents the zeroth and first cepstral time coefficients, to the static MFCC vector,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method G", |
|
"sec_num": "2.2.3." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "G t = \uf8ee \uf8ef \uf8f0 C t :1 D t :1 D t :2 \uf8f9 \uf8fa \uf8fb .", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Method G", |
|
"sec_num": "2.2.3." |
|
}, |
|
{ |
|
"text": "In Method H, we add the second and third columns of CTM, which represent the first and second cepstral time coefficients, to the static MFCC vector,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method H", |
|
"sec_num": "2.2.4." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H t = \uf8ee \uf8ef \uf8f0 C t :1 D t :2 D t :3 \uf8f9 \uf8fa \uf8fb .", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Method H", |
|
"sec_num": "2.2.4." |
|
}, |
|
{ |
|
"text": "In Method I, we no longer use the MFCC. Instead, we simply use the zeroth, first, and second cepstral time coefficients,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method I", |
|
"sec_num": "2.2.5." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "I t = \uf8ee \uf8ef \uf8f0 D t :1 D t :2 D t :3 \uf8f9 \uf8fa \uf8fb .", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Method I", |
|
"sec_num": "2.2.5." |
|
}, |
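
{

"text": "Methods G, H, and I only select and stack columns of the CTM (and, for G and H, the static cepstra), so their feature vectors can be assembled directly from D and C; a minimal sketch under the same assumptions as above follows.\n\nimport numpy as np\n\ndef method_g_features(D, C):\n    return np.concatenate([C[:, 0], D[:, 0], D[:, 1]])   # Eq. (10): static cepstra + zeroth and first CTC\n\ndef method_h_features(D, C):\n    return np.concatenate([C[:, 0], D[:, 1], D[:, 2]])   # Eq. (11): static cepstra + first and second CTC\n\ndef method_i_features(D):\n    return np.concatenate([D[:, 0], D[:, 1], D[:, 2]])   # Eq. (12): CTC columns only, no MFCC",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method I",

"sec_num": "2.2.5."

},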
|
{ |
|
"text": "For completeness, we describe our baseline features as Method B. Our baseline simply uses the 12 MFCCs (c 1 , . . . , c 12 ), the log energy, and the delta and delta-delta features. Therefore, the feature vector has a dimension of 39, which agrees with other methods. Furthermore, our baseline results agree with the Aurora 3 baseline results [8, 9] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 346, |
|
"text": "[8,", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 349, |
|
"text": "9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method B", |
|
"sec_num": "2.2.6." |
|
}, |
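
{

"text": "The baseline only appends delta and delta-delta features to the 13 static coefficients; the regression-based computation sketched below is the common HTK-style choice, and its window size is an illustrative assumption rather than a detail taken from the paper.\n\nimport numpy as np\n\ndef deltas(x, win=2):\n    # x: (num_frames, 13) float array of static MFCC + log energy.\n    pad = np.pad(x, ((win, win), (0, 0)), mode='edge')\n    num = np.zeros_like(x)\n    for k in range(1, win + 1):\n        num += k * (pad[win + k:win + k + len(x)] - pad[win - k:win - k + len(x)])\n    return num / (2.0 * sum(k * k for k in range(1, win + 1)))\n\ndef baseline_39(x):\n    d = deltas(x)\n    return np.hstack([x, d, deltas(d)])        # 13 static + 13 delta + 13 delta-delta = 39",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method B",

"sec_num": "2.2.6."

},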
|
{ |
|
"text": "We evaluate the proposed CTC-based speech features on the Aurora 3 noisy-digit recognition tasks [8, 9] . Aurora 3 is a multi-lingual speech database, consisting of digit-string utterances in Danish, German, Finnish and Spanish. It provides a platform for fair comparison between systems of different front-ends. All the results reported in this paper follow the Aurora 3 evaluation guidelines.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 100, |
|
"text": "[8,", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 103, |
|
"text": "9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Database", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "We first evaluate the number of vectors to be included in C t , and decide to use T = 15. For the static features we use 12 MFCC features and the log energy, making K = 13. Therefore, the initial matrix C t is of size 13 \u00d7 15. Table 1 lists the experimental results on the Aurora 3 database. The entries in the table are the averaged relative improvements of word error rates over the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 234, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2." |
|
}, |
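
{

"text": "The entries in Table 1 are relative reductions of the word error rate; a minimal sketch of the per-condition computation follows (how the conditions are weighted in the averages follows the Aurora 3 evaluation guidelines and is not reproduced here).\n\ndef relative_improvement(wer_baseline, wer_method):\n    # Relative reduction of the word error rate, in percent.\n    return 100.0 * (wer_baseline - wer_method) / wer_baseline",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "3.2."

},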
|
{ |
|
"text": "Consistent performance across different methods have been observed in the experiments. Specifically, Method H achieves the best performance, while Method G yields the worst performance, in all languages. Given that Method G and Method H differ only in the cepstral time coefficients they include in the final feature vector, it is fair to say that the zeroth cepstral time coefficient is detrimental to recognition accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "Methods E, Method F, and Method I yield mixed results. In Finnish, Method E outperforms Method F and Method I. In Spanish and Danish, Method F outperforms Method I and Method E. Method E and Method F are similar in the sense that the first column (zeroth cepstral time coefficients) are normalized, and then used in procedures similar to delta and acceleration feature extraction, in the frequency domain rather than in the time domain. It is not surprising that they have similar performance level. The comparison of Method G and H concludes that the zeroth CTC is detrimental of recognition accuracy. The zeroth CTC corresponds to the first column of CTM. Therefore in Method E and F, we try schemes of normalizing the first column of CTM. In Method E we divide the first column of CTM by T, and in Mthod F we normalize the value of first column to the range \u22121 to 1. The performance of E and F given in Table 1 are better than the baseline. Lastly, we also try Method I, which uses only CTCs, and excludes MFCCs. Its recognition accuracy is also better than the baseline. It appears that the difference between Channel 0 and Channel 1 is smaller in the cases of (F) and (H) than in the case of (B). Therefore the mismatchedness is reduced. Table 2 lists the experimental results of Method H on the Aurora 3 database, given as percent word error rate (WER) results. These results include the four Aurora 3.0 languages (Finnish, Spanish, German, and Danish) and the Well-Matched(WM), Medium-Matched(MM), and Highly-Mismatched(HM) training/testing cases. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 906, |
|
"end": 913, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1243, |
|
"end": 1250, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "In this paper, we use five difference feature sets based on the cepstral time coefficients. Method E and F, which first normalize the first column and then apply the delta and delta-delta operations on the first 3 columns of CTM, lead to performance gains over the baseline. Method G and H, which combine different sets of columns of CTM with the raw MFCC vector, lead to mixed results. Method I, which uses all cepstral time coefficients, leads to improvement. Overall, the combination of raw MFCC and the second and the third columns of CTM yields the best results among all experimented feature sets. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "In general, we will use notation A i: to denote the i-th row vector and A :j to denote the j-th column vector, of matrix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Supression of acoustic noise in speech using spectral subtraction", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Boll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "IEEE Transactions on Acoustics, Speech and Signal Processing", |
|
"volume": "27", |
|
"issue": "2", |
|
"pages": "113--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Boll, \"Supression of acoustic noise in speech using spectral subtraction,\" IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 27, no. 2, pp. 113-120, April 1979.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An hypothesized Wiener filtering approach to noisy speechrecognition", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Berstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Shallom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Acoustics, Speech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "913--916", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Berstein and I. Shallom, \"An hypothesized Wiener filtering approach to noisy speechrecogni- tion,\" in Acoustics, Speech, and Signal Processing, 1991. ICASSP-91., 1991 International Con- ference on, 1991, pp. 913-916.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Cepstral analysis technique for automatic speaker verification", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Furui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "IEEE Transactions on Acoustics, Speech and Signal Processing", |
|
"volume": "29", |
|
"issue": "2", |
|
"pages": "254--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Furui, \"Cepstral analysis technique for automatic speaker verification,\" IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 29, no. 2, pp. 254-272, 1981.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A recursive feature vector normalization approach for robust speechrecognition in noise", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Viikki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Bye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Laurila", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 1998 IEEE International Conference on", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "O. Viikki, D. Bye, and K. Laurila, \"A recursive feature vector normalization approach for robust speechrecognition in noise,\" in Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on, vol. 2, 1998.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Histogram equalization of speech representation for robust speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "De La Torre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Peinado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Segura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Perez-Cordoba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Benitez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Rubio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IEEE Transactions on Speech and Audio Processing", |
|
"volume": "13", |
|
"issue": "3", |
|
"pages": "355--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. de La Torre, A. Peinado, J. Segura, J. Perez-Cordoba, M. Benitez, and A. Rubio, \"His- togram equalization of speech representation for robust speech recognition,\" IEEE Transactions on Speech and Audio Processing, vol. 13, no. 3, pp. 355-366, 2005.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Inclusion of temporal information into features for speechrecognition", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Milner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Spoken Language, 1996. ICSLP 96. Proceedings., Fourth International Conference on", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Milner, \"Inclusion of temporal information into features for speechrecognition,\" in Spoken Language, 1996. ICSLP 96. Proceedings., Fourth International Conference on, vol. 1, 1996.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A comparison of front-end configurations for robust speechrecognition", |
|
"authors": [], |
|
"year": 2002, |
|
"venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "--, \"A comparison of front-end configurations for robust speechrecognition,\" in IEEE Interna- tional Conference on Acoustics, Speech, and Signal Processing, 2002. Proceedings.(ICASSP'02), vol. 1, 2002.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Small vocabulary evaluation: Baseline mel-cepstrum performances with speech endpoints", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Motorola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Motorola Au/374/01, \"Small vocabulary evaluation: Baseline mel-cepstrum performances with speech endpoints,\" October 2001.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Speechdatcar: A large speech database for automotive environments", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Moreno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Lindberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Draxler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Richard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Choukri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Euler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the II LREC Conference", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Moreno, B. Lindberg, C. Draxler, G. Richard, K. Choukri, S. Euler, and J. Allen, \"Speechdat- car: A large speech database for automotive environments,\" in Proceedings of the II LREC Con- ference, vol. 1, no. 2, 2000.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The block diagram of the proposed feature transformation methods.", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "to the static MFCCs, resulting in a feature vector of", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "plots the temporal sequences of the fifth dimension of the third column (Dimension 31 out of 39) of the feature vectors of Method B, F, and H of a pair of Danish utterances. The pair consists of an utterance of Channel 0 (the cleaner instance) and an utterance of Channel 1 (the noisier instance). Specifically, using our previously defined notations, Figure 2(B) is the plot of \u25b3 2 f t 5 , Figure 2(F) is the plot ofF t 53 , and Figure 2(H) is the plot ofH t 53 .", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Plot of Dimension 31 (out of 39) of a Danish utterance recorded in two mismatched channels. (B) is the \u25b3 2 f t 5 , (F) isF t 53 , and (H) isH t 53 . The horizontal axis is the frmae index and the vertical axis is the feature value. The dotted line ('.') represents Channel 0 and the starred line ('*') represents Channel 1.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table><tr><td>)</td></tr><tr><td>2.2.2. Method F</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "An alternative transform is to normalize the feature values in the first column to the range of [\u22121, 1]. This is achieved by dividing D t :1 by the maximum magnitude of the first column. Let F t be defined by F" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"4\">German Spanish Finnish Danish</td></tr><tr><td>E</td><td>-12.4</td><td>16.2</td><td>16.5</td><td>16.3</td></tr><tr><td>F</td><td>-10.5</td><td>22.4</td><td>10.8</td><td>16.3</td></tr><tr><td>G</td><td>-58.1</td><td>-29.0</td><td>-42.9</td><td>-19.2</td></tr><tr><td>H</td><td>7.5</td><td>26.6</td><td>25.4</td><td>23.2</td></tr><tr><td>I</td><td>-10.8</td><td>19.8</td><td>8.5</td><td>13.1</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "The overall (averaged over conditions) relative improvements of the word error rates in the Aurora 3 tasks." |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"5\">Aurora3 Reference Word Error Rate</td></tr><tr><td/><td colspan=\"4\">German Spanish Finnish Danish</td></tr><tr><td>WM</td><td>9.4</td><td>13.1</td><td>9.5</td><td>20.4</td></tr><tr><td>MM</td><td>21.9</td><td>26.3</td><td>27.5</td><td>50.6</td></tr><tr><td>HM</td><td>25.7</td><td>57.8</td><td>69.6</td><td>66.8</td></tr><tr><td colspan=\"5\">Aurora3 Word Error Rate, Method H</td></tr><tr><td/><td colspan=\"4\">German Spanish Finnish Danish</td></tr><tr><td>Well</td><td>9.1</td><td>9.7</td><td>7.0</td><td>15.4</td></tr><tr><td>Mid</td><td>19.8</td><td>18.4</td><td>21.3</td><td>39.0</td></tr><tr><td>High</td><td>21.7</td><td>45.4</td><td>50.2</td><td>52.4</td></tr><tr><td colspan=\"5\">Aurora3 Relative Percentage Improvement</td></tr><tr><td colspan=\"5\">German Spanish Finnish Danish Avg.</td></tr><tr><td>Well</td><td>4.4</td><td>26.0</td><td>26.2</td><td>24.5 20.3</td></tr><tr><td>Mid</td><td>5.3</td><td>29.9</td><td>22.7</td><td>22.9 20.2</td></tr><tr><td>High</td><td>15.5</td><td>23.0</td><td>27.9</td><td>21.5 22.0</td></tr><tr><td>overall</td><td>7.5</td><td>26.6</td><td>25.4</td><td>23.2 20.7</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Our most recent Aurora 3.0 results using the method H, given as percent word error rate (WER) results. These results include the four Aurora 3.0 languages (Finnish, Spanish, German, and Danish) and the Well-Matched(WM), Medium-Matched(MM), and Highly-Mismatched(HM) training/testing cases." |
|
} |
|
} |
|
} |
|
} |