{ "paper_id": "O06-2006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:07:09.206338Z" }, "title": "Voice Activity Detection Based on Auto-Correlation Function Using Wavelet Transform and Teager Energy Operator", "authors": [ { "first": "Bing-Fei", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao-Tung University", "location": { "settlement": "HsinChu", "country": "Taiwan" } }, "email": "" }, { "first": "Kun-Ching", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "kunching@itri.org.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, a new robust wavelet-based voice activity detection (VAD) algorithm derived from the discrete wavelet transform (DWT) and Teager energy operation (TEO) processing is presented. We decompose the speech signal into four subbands by using the DWT. By means of the multi-resolution analysis property of the DWT, the voiced, unvoiced, and transient components of speech can be distinctly discriminated. In order to develop a robust feature parameter called the speech activity envelope (SAE), the TEO is then applied to the DWT coefficients of each subband. The periodicity of speech signal is further exploited by using the subband signal auto-correlation function (SSACF) for. Experimental results show that the proposed SAE feature parameter can extract the speech activity under poor SNR conditions and that it is also insensitive to variable-level of noise.", "pdf_parse": { "paper_id": "O06-2006", "_pdf_hash": "", "abstract": [ { "text": "In this paper, a new robust wavelet-based voice activity detection (VAD) algorithm derived from the discrete wavelet transform (DWT) and Teager energy operation (TEO) processing is presented. We decompose the speech signal into four subbands by using the DWT. By means of the multi-resolution analysis property of the DWT, the voiced, unvoiced, and transient components of speech can be distinctly discriminated. In order to develop a robust feature parameter called the speech activity envelope (SAE), the TEO is then applied to the DWT coefficients of each subband. The periodicity of speech signal is further exploited by using the subband signal auto-correlation function (SSACF) for. Experimental results show that the proposed SAE feature parameter can extract the speech activity under poor SNR conditions and that it is also insensitive to variable-level of noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Voice activity detection (VAD) refers to the ability to distinguish speech from noise and is an integral part of a variety of speech communication systems, such as speech coding, speech recognition, hand-free telephony, and echo cancellation. In the GSM-based communication system, a VAD scheme is used to lengthen the battery power through discontinuous transmission when speech-pause is detected [Freeman et al. 1989] . Moreover, a VAD algorithm can be used under a variable bit rate of the speech coding system in order to control the average bit rate and the overall quality of speech coding [Kondoz et al. 1994] . Perviously, Sohn et al. [Sohn et al. 1998 ] presented a VAD algorithm that adopts a novel noise spectrum adaptation by applying soft decision techniques. The decision rule is drawn from the generalized likelihood ratio test by assuming that the noise statistics are known a priori. Cho et al. [Cho et al. 
2001] presented an improved version of the algorithm designed by Sohn. Specifically, Cho presented a smoothed likelihood ratio test to reduce the detection errors. Furthermore, Beritelli et al. [Beritelli et al. 1998 ] developed a fuzzy VAD using a pattern matching block consisting of a set of six fuzzy rules. Additionally, Nemer et al. [Nemer et al. 2001 ] designed a robust algorithm based on higher order statistics (HOS) in the residual domain of the linear prediction coding coefficients (LPC). Meanwhile, the International Telecommunication Union-Telecommunications Sector (ITU-T) designed G. 729B VAD [Benyassine et al. 1997] , which consists of a set of metrics, including line spectral frequencies (LSF), low band energy, zero-crossing rate (ZCR), and full-band energy. However, the common feature parameters mentioned above are based on averages over windows of fixed length or are derived through analysis based on a uniform time-frequency resolution. For example, it is well known that speech signals contain many transient components and exhibit the non-stationary property. The classical Fourier Transform (FT) works well for wide sense stationary signals but fails in the case of non-stationary signals since it applies only uniform-resolution analysis. Conversely, if the multi-resolution analysis (MRA) property of DWT [Strang et al. 1996] is used, the classification of speech into voiced, unvoiced or transient components can be accomplished.", "cite_spans": [ { "start": 398, "end": 419, "text": "[Freeman et al. 1989]", "ref_id": "BIBREF5" }, { "start": 596, "end": 616, "text": "[Kondoz et al. 1994]", "ref_id": "BIBREF9" }, { "start": 631, "end": 660, "text": "Sohn et al. [Sohn et al. 1998", "ref_id": "BIBREF13" }, { "start": 901, "end": 929, "text": "Cho et al. [Cho et al. 2001]", "ref_id": "BIBREF4" }, { "start": 989, "end": 994, "text": "Sohn.", "ref_id": null }, { "start": 1101, "end": 1140, "text": "Beritelli et al. [Beritelli et al. 1998", "ref_id": "BIBREF1" }, { "start": 1263, "end": 1281, "text": "[Nemer et al. 2001", "ref_id": "BIBREF11" }, { "start": 1534, "end": 1558, "text": "[Benyassine et al. 1997]", "ref_id": "BIBREF0" }, { "start": 2262, "end": 2282, "text": "[Strang et al. 1996]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The periodic property is an inherent characteristic of speech signals and is commonly used to characterize speech. In this paper, the periodic properties of subband signals are exploited to accurately extract speech activity. In fact, voiced or vowel speech sounds have a stronger periodic property than unvoiced sounds and noise signals, and this property is concentrated in low frequency bands. Thus, we let the low frequency bands have high resolution in order to enhance the periodic property by decomposing only the low band in each level. Three-level wavelet decomposition is further divided into four non-uniform subbands. Consequently, the well-known \"Auto-Correlaction Function (ACF)\" is defined in the subband domain to evaluate the periodic intensity of each subband, and is denoted as the \"Subband Signal Auto-Correlaction Function (SSACF)\". Generally speaking, the existing methods for suppressing noise are almost all based on the frequency domain. However, these methods indeed waste too much computing power in on-line work. 
Considering computing complexity, the Teager energy operator (TEO), which is a powerful nonlinear operator and has been successfully used in various speech processing applications [Kaiser et al. 1990] , [Bovik et al. 1993] , [Jabloun et al. 1999 ] is applied to eliminate noise components from the wavelet coefficients in each subband priori to SSACF measurement. Consequently, to evaluate the periodic intensity of each subband signal, a Mean-Delta method [Ouzounov et al. 2004 ] is applied in the envelope of each SSACF. First, the Delta SSACF, similar to the delta-cepstrum evaluation, is used to measure the local variation of each SSACF. Next, since the DSSACF is averaged over its length, the value of the Mean DSSACF (MDSSACF) can almost describe the amount of periodicity in each subband. Eventually, by only summing the values of the four MDSSACFs, we can apply a robust feature parameter, called the speech activity envelope (SAE) parameter. Experimental results show that the envelope of the SAE feature parameter can accurately indicate the boundary of speech activity under poor SNR conditions and that it is also insensitive to variable-level noise. In addition, the proposed wavelet-based VAD can be performed on-line. This paper is organized as follows. Section 2 describes the proposed algorithm based on DWT and TEO. In addition, the proposed robust feature parameter is also discussed. Section 3 evaluates the performance of the proposed algorithm and compares it with that of other wavelet-based VAD algorithms and ITU-T G.729B VAD. Finally, Section 4 presents conclusions.", "cite_spans": [ { "start": 1221, "end": 1241, "text": "[Kaiser et al. 1990]", "ref_id": "BIBREF8" }, { "start": 1244, "end": 1263, "text": "[Bovik et al. 1993]", "ref_id": "BIBREF2" }, { "start": 1266, "end": 1286, "text": "[Jabloun et al. 1999", "ref_id": "BIBREF7" }, { "start": 1498, "end": 1519, "text": "[Ouzounov et al. 2004", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this section, each part for the proposed VAD algorithm is discussed in turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Algorithm Based on DWT and TEO", "sec_num": "2." }, { "text": "The wavelet transform (WT) is based on time-frequency signal analysis. This wavelet analysis adopts a windowing technique with variable-sized regions. It allows the use of long time intervals when we want more precise low-frequency information, and shorter regions where we want high-frequency information. It is well known that speech signals contain many transient components and exhibit the non-stationary property. When we make use of the MRA property of the WT, better time-resolution is needed in the high frequency range to detect the rapid changing transient component of a signal, while better frequency resolution is needed in the low frequency range to track slowly time-varying formants more precisely. Through MRA analysis, the classification of speech into voiced, unvoiced or transient components can be accomplished. An efficient way to implement this DWT using filter banks was developed in 1988 by Mallat [Mallat 1989 ].", "cite_spans": [ { "start": 923, "end": 935, "text": "[Mallat 1989", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discrete Wavelet Transform", "sec_num": "2.1" }, { "text": "In Mallat's algorithm, the j -level approximations j A and details j D of the input signal are determined by using quadrature mirror filters (QMF). 
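As a concrete point of reference (a minimal sketch, not the authors' code), the three-level decomposition into the four non-uniform subbands A3, D3, D2 and D1 can be reproduced with the PyWavelets package; the specific wavelet name 'db4' is an assumption, since the paper only states that a Daubechies-family wavelet is used:

import numpy as np
import pywt  # PyWavelets; assumed toolkit, not named in the paper

def decompose_frame(frame, wavelet='db4', level=3):
    # wavedec splits only the low (approximation) band at each level with the
    # QMF analysis filters and downsamples by 2, so level=3 yields [A3, D3, D2, D1].
    a3, d3, d2, d1 = pywt.wavedec(frame, wavelet, level=level)
    return {'A3': a3, 'D3': d3, 'D2': d2, 'D1': d1}

# Example: one 256-sample speech frame sampled at 8 kHz.
subbands = decompose_frame(np.random.randn(256))

Because only the approximation branch is split further, the low-frequency bands end up with the finest frequency resolution, which is where the periodicity of voiced speech is concentrated.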
Figure 1 shows that the decomposed subband signals A and D are the approximation and detail parts of the input speech signal obtained by using the high-pass filter and low-pass filter, implemented with the Daubechies family wavelet, where the symbol \u21932 denotes an operator of downsampling by 2. In fact, a voiced or vowel speech sound has more significant periodicity than an unvoiced sound on noise signal. Thus, the periodicity of a subband signal can be exploited to accurately extract speech activity. In addition, the periodicity is almostly concentrated in low frequency bands, so we let the low frequency bands have high resolution in order to enhance the periodic property by decomposing only low bands in each level. Figure 2 employed the used structure of three-level wavelet decomposition. By using DWT, we can divide the speech signal into four non-uniform subbands. The wavelet decomposition structure can be used to obtain the most significant periodicity in the subband domain. ", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 874, "end": 882, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Discrete Wavelet Transform", "sec_num": "2.1" }, { "text": "It has been observed that the TEO can enhance the discriminability between speech and noise and further suppress noise components from noisy speech signals [Jabloun et al. 1999] . Compared with the traditional noise suppression approach based on the frequency domain, the TEO based noise suppression can be more easily implemented through the time domain.", "cite_spans": [ { "start": 156, "end": 177, "text": "[Jabloun et al. 1999]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Teager Energy Operator", "sec_num": "2.2" }, { "text": "In continuous-time, the TEO is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "2 [ ( )] [ ( )] ( ) ( ) c s t s t s t s t \u03c8 = \u2212 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "where ( ) s t is a continuous-time signal and s ds dt = . 
In discrete-time, the TEO can be approximated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2 [ ( )] ( ) ( 1) ( 1) d s n s n s n s n \u03c8 = \u2212 + \u2212 ,", "eq_num": "(1)" } ], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "where ( ) s n is a discrete-time signal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "Let us consider a speech signal ( ) s n degraded by uncorrelated additive noise ( ) u n , the resulting signal is shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( ) ( ) y n s n u n = + .", "eq_num": "(2)" } ], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "The Teager energy of the noisy speech signal", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[ ( )] d y n \u03c8 is given by [ ( )] [ ( )] [ ( )] 2 [ ( ), ( )] d d d y n s n u n s n u n \u03c8 \u03c8 \u03c8 \u03c8 = + + ,", "eq_num": "(3)" } ], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "[ ( )] d s n \u03c8 and [ ( )] d u n \u03c8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "are the Teager energy of the discrete speech signal and the additive noise, respectively. The subscript d means the \"discrete.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[ ( ), ( )] d s n u n \u03c8 is the cross-d \u03c8 energy of ( ) s n and ( ) v n , such that [ ( ), ( )] ( ) ( ) 0.5 ( 1) ( 1) 0.5 ( 1) ( 1) d s n u n s n u n s n u n s n u n \u03c8 = \u2212 \u2212 \u22c5 + \u2212 + \u22c5 \u2212 ,", "eq_num": "(4)" } ], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "where the symbol \u22c5 denotes the inner product. Since ( ) s n and ( ) u n are zero mean and independent, the expected value of the crossd \u03c8 energy is zero. 
Thus, Eq.(5) can be derived from Eq.(3) as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "{ } { } { } [ ( )] [ ( )] [ ( )] d d d E y n E s n E u n \u03c8 \u03c8 \u03c8 = + .", "eq_num": "(5)" } ], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "Experimental results show that the Teager energy of the speech is much higher than that of the noise. Thus, compared with { }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[ ( )] d E y n \u03c8 , { } [ ( )] d E u n \u03c8 is negligible as shown by { } { } [ ( )] [ ( )] d d E y n E s n \u03c8 \u03c8 \u2248 .", "eq_num": "(6)" } ], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "The definition of the \"Auto-Correlation Function (ACF)\" used to measure the self-periodic intensity of subband signal sequences is shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subband Signal Auto-Correlation Function (SSACF)", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "0 ( ) ( ) ( ), 0,1,...... p k n R k s n s n k k p \u2212 = = + = \u2211 ,", "eq_num": "(7)" } ], "section": "Subband Signal Auto-Correlation Function (SSACF)", "sec_num": "2.3" }, { "text": "where p is the length of ACF and k denotes the shift of the sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subband Signal Auto-Correlation Function (SSACF)", "sec_num": "2.3" }, { "text": "In this subsection, the ACF will be defined in the subband domain and called the \"Subband Signal Auto-Correlation Function (SSACF).\" It can be derived from the wavelet coefficients on each subband following TEO processing. Figure 3 displays that the waveform of the normalized SSACFs ( (0) 1 R = ) of each subband, respectively. It is observed that the SSACF of voiced speech has more obvious peaks than that of unvoiced speech and white noise does. In addition, for unvoiced speech, the ACF has more intense periodicity than white noise does, especially in the 3 A subband. ", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 231, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Subband Signal Auto-Correlation Function (SSACF)", "sec_num": "2.3" }, { "text": "To evaluate the periodic intensity of subband signals, a Mean-Delta method is applied here to each SSACF. 
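Before turning to the Mean-Delta step, it may help to see how Eq. (1) and Eq. (7) fit together in practice. The sketch below (an illustration only, not the authors' implementation) applies the TEO to the wavelet coefficients of one subband and then takes the normalized auto-correlation, i.e. the SSACF with R(0) = 1; the handling of the first and last samples is an assumption:

import numpy as np

def teager(x):
    # Discrete TEO, Eq. (1): psi[s(n)] = s(n)^2 - s(n+1) * s(n-1)
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[2:] * x[:-2]
    psi[0], psi[-1] = psi[1], psi[-2]  # edge samples copied (assumption)
    return psi

def ssacf(subband_coeffs):
    # Eq. (7) on the TEO-processed coefficients, normalized so that R(0) = 1.
    t = teager(subband_coeffs)
    p = len(t)
    r = np.array([np.dot(t[:p - k], t[k:]) for k in range(p)])
    return r / r[0]

The Mean-Delta measure described next is then computed directly on this normalized SSACF.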
First, a measure similar to delta cepstrum evaluation is used to estimate the periodic intensity of the SSACF, namely, the \"Delta Subband Signal Auto-Correlation Function (DSSACF),\" as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mean of the absolute values of the DSSACF (MDSSACF)", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2 ( ) ( ) M m M M M m M mR k m R k m =\u2212 =\u2212 + = \u2211 \u2211 ,", "eq_num": "(8)" } ], "section": "Mean of the absolute values of the DSSACF (MDSSACF)", "sec_num": "2.4" }, { "text": "where M R is the DSSACF over an M -sample neighborhood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mean of the absolute values of the DSSACF (MDSSACF)", "sec_num": "2.4" }, { "text": "For a particular frame, it is computed by using only the frame's SSACF (intra-frame processing), while the delta cepstrum is computed by using cepstrum coefficients from neighboring frames (inter-frame processing). It is observed that the DSSACF value is almost similar to the local variation over the SSACF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mean of the absolute values of the DSSACF (MDSSACF)", "sec_num": "2.4" }, { "text": "Second, the delta of the SSACF is averaged over an M -sample neighborhood M R , where the mean of the absolute values of the DSSACF (MDSSACF) is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mean of the absolute values of the DSSACF (MDSSACF)", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 0 1 ( ) b N M M k b R R k N \u2212 = = \u2211 ,", "eq_num": "(9)" } ], "section": "Mean of the absolute values of the DSSACF (MDSSACF)", "sec_num": "2.4" }, { "text": "where b N indicates the length of the subband signal. Figure 4 shows that the SAE feature parameter is developed by summing the four MDSSACF values. Each subband can provide information for extracting voice activity precisely. It is found that the SAE feature parameter accurately indicates the boundary of speech activity under -5dB factory noise. ", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 62, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Mean of the absolute values of the DSSACF (MDSSACF)", "sec_num": "2.4" }, { "text": "A block diagram of the proposed wavelet-based VAD algorithm is displayed in Figure 5 . For a given level j , the wavelet transform decomposes the noisy speech signal into 1 j + subbands corresponding to wavelet coefficients sets, ", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 84, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": ". Block diagram of the proposed wavelet-based VAD", "sec_num": null }, { "text": "The SSACF is derived from the Teager energy of noisy speech as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". Block diagram of the proposed wavelet-based VAD", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "3 3 , , [ ] k m k m R Rt = ,", "eq_num": "(12)" } ], "section": ". 
Block diagram of the proposed wavelet-based VAD", "sec_num": null }, { "text": "where [ ] R \u22c5 denotes the auto-correlation operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". Block diagram of the proposed wavelet-based VAD", "sec_num": null }, { "text": "Next, the DSSACF is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". Block diagram of the proposed wavelet-based VAD", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "3 3 , , [ ] k m k m R R = \u2206 ,", "eq_num": "(13)" } ], "section": ". Block diagram of the proposed wavelet-based VAD", "sec_num": null }, { "text": "where [ ] \u2206 \u22c5 denotes the Delta operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". Block diagram of the proposed wavelet-based VAD", "sec_num": null }, { "text": "Then, the MDSSACF is obtained by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "3 3 , [ ] k km R E R = .", "eq_num": "(14)" } ], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "where [ ] E \u22c5 indicates the mean operator.", "cite_spans": [ { "start": 6, "end": 9, "text": "[ ]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "Finally, the SAE feature parameter is obtained by ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wavelet Transform and Teager Energy Operator", "sec_num": null }, { "text": "In order to accurately determine the boundary of voice activity, the VAD decision is usually made through thresholding. To estimate the time-varying noise characteristics accurately, in this subsection, an adaptive threshold value is derived from the statistics of the SAE feature parameter during a noise-only frame, and the VAD decision process recursively updates the threshold by using the mean and variance of the values of the SAE parameters. We compute the initial noise mean and variance with the first five frames, assuming that the first five frames contain noise only. We then compute the thresholds for the speech and noise as follows [Gerven et al. 1997] :", "cite_spans": [ { "start": 647, "end": 667, "text": "[Gerven et al. 1997]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s n s n T \u00b5 \u03b1 \u03c3 = + \u22c5 ,", "eq_num": "(16)" } ], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n n n n T \u00b5 \u03b2 \u03c3 = + \u22c5 ,", "eq_num": "(17)" } ], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "where s T and n T indicate the speech threshold and noise threshold, respectively. 
Similarly, n \u00b5 and n \u03c3 represent the mean and variance of the values of the SAE parameters, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "The VAD decision rule is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "if ( ( ) ) ( )=1 else if ( ( ) ) ( )=0; else ( )= ( 1). s n SAE t T VAD t SAE t T VAD t VAD t VAD t > < \u2212", "eq_num": "(18)" } ], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "If the detection result shows a noise period, the mean and variance of the values of the SAE are updated by as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( 1) (1 ) ( ) n n t t S A Et \u00b5 \u03b3 \u00b5 \u03b3 = \u22c5 \u2212 + \u2212 \u22c5 ,", "eq_num": "(19) 2 2 ( ) [ ] [ ( )]" } ], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "n b u f f e r m e a n n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t SAE t \u03c3 \u00b5 = \u2212 ,", "eq_num": "(20)" } ], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2 2 2 [ ] () [ ] ( 1 ) ( 1 ) () buffer mean buffer mean S A E t S A E t S A Et \u03b3 \u03b3 = \u22c5 \u2212 + \u2212 \u22c5 .", "eq_num": "(21)" } ], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "Here,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2 [ ]", "eq_num": "( 1 )" } ], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "buffer mean SAE t \u2212 is a mean of the buffer of the SAE value during a noise-only frame. We then update the thresholds by using the updated mean and variance of the values of the SAE parameters. Figure 6 displays the VAD decision, based on the adaptive threshold strategy. It is clearly seen that the boundary of voice activity has been accurately extracted. The two thresholds are updated during voice-inactivity but not during voice-activity. ). The results of speech activity detection were obtained under three kinds of background noise, which included white noise, car noise, and factory noise, taken from the Noisex-92 database [Varga et al. 1993] . The speech database contained 60 speech phrases (in Mandarin and in English) spoken by 32 native speakers (22 males and 10 females), sampled at 8000 Hz and linearly quantized at 16 bits per sample. 
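Performance is scored frame by frame against hand-labeled references. As a minimal illustration of how the two scores defined in the next sentence can be computed from per-frame VAD decisions (this helper is not part of the paper), consider:

import numpy as np

def score(decisions, labels):
    # decisions, labels: one 0/1 value per frame (1 = speech); labels are the hand-made references.
    decisions = np.asarray(decisions)
    labels = np.asarray(labels)
    p_cs = np.mean(decisions[labels == 1] == 1)  # correct speech decisions / hand-labeled speech frames
    p_f = np.mean(decisions != labels)           # false (speech or noise) decisions / all hand-labeled frames
    return p_cs, p_f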
The two probabilities of correctly detecting speech frames, cs P , and falsely detecting speech frames, f P , were the ratio of the correct speech decision to the total number of hand-labeled speech frames and the ratio of the false speech decision or false noise decision to the total number of hand-labeled frames used to objectively measure performance of these three VADs. Table 1 compares the performance of the proposed wavelet-based VAD, the wavelet-based VAD proposed by Chen et al. [Chen et al. 2002] , and the ITU standard G.729B [Benyassine et al. 1997] under three types of noise and three specific SNR values: 30,10, and -5dB. From this table, it can be seen that in terms of the average correct and false speech detection probability, the proposed wavelet-based VAD is superior to Chen's VAD algorithm and G.729B VAD over all three SNRs under various types of noise. Table 2 shows the computing time of the three VAD algorithms, where Matlab was used on a Celeron 2.0G CPU PC to process 138 frames of a speech signal. It is found that the computing time consists of the time needed for feature extraction, and the voice activity decision process. The computing time of Chen's VAD was nearly twelve times longer than that of proposed VAD. We attribute the computing time of Chen's VAD to five-level wavelet decomposition. Its feature parameter is based on 17 critical-subbands, using the perceptual wavelet packet transform (PWPT). And after, wavelet reconstruction is required in Chen's approach. In our approach, however, we only divide four subbands using wavelet transform and do not waste extra computing time on wavelet reconstruction. Figure 7 shows the performance of the proposed VAD for an utterance produced continuously under variable-level noise. We decreased and increased the level of background noise and set the SNR value to 0 dB. Compared with the envelope of the VAS parameter, it is observed that the envelope of the SAE parameter was more robust against the variable noise-level and able to extract the exact boundary of the voice activity. This can be mainly attributed to the fact that the value of each MDSSACF depends on the amount of variation of the ACF, not on the energy level of the signal. ", "cite_spans": [ { "start": 633, "end": 652, "text": "[Varga et al. 1993]", "ref_id": "BIBREF15" }, { "start": 1332, "end": 1362, "text": "Chen et al. [Chen et al. 2002]", "ref_id": "BIBREF3" }, { "start": 1393, "end": 1417, "text": "[Benyassine et al. 1997]", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 194, "end": 202, "text": "Figure 6", "ref_id": "FIGREF5" }, { "start": 1230, "end": 1237, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1734, "end": 1741, "text": "Table 2", "ref_id": null }, { "start": 2508, "end": 2516, "text": "Figure 7", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "A VAD Decision Based on Adaptive Thresholding", "sec_num": "2.6" }, { "text": "Compared with Chen's wavelet-based VAD, our experimental results shows that the proposed wavelet-based VAD algorithm is more suitable for on-line work. In terms of complexity, Chen's wavelet-based VAD algorithm [Chen et al. 2002] requires five-level wavelet decomposition to decompose the speech signal into 17 critical-subbands by using PWPT. In addition, it uses more extra computing time to complete wavelet reconstruction. 
In tests with non-stationary noise, it was found that each MDSSACF depends only on the amount of variation of the normalized ACF, not on the energy level of the signal, so the envelope of the proposed SAE feature parameter is insensitive to variable-level noise. Conversely, in Chen's wavelet-based method, the VAS feature parameter closely depends on the subband energy, so the achieved performance is poor under variable-level noise.", "cite_spans": [ { "start": 211, "end": 229, "text": "[Chen et al. 2002]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4." } ], "back_matter": [ { "text": "This work was supported by National Science Council of Taiwan under grant no. NSC 94-2213-E-009-066.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "ITU-T Recommendation G.729 Annex B: a silence compression scheme for use with G.729 optimized for V.70 digital simultaneous voice and data applications", "authors": [ { "first": "A", "middle": [], "last": "Benyassine", "suffix": "" }, { "first": "E", "middle": [], "last": "Shlomot", "suffix": "" }, { "first": "H", "middle": [ "Y" ], "last": "Su", "suffix": "" }, { "first": "D", "middle": [], "last": "Massaloux", "suffix": "" }, { "first": "C", "middle": [], "last": "Lamblin", "suffix": "" }, { "first": "J", "middle": [ "P" ], "last": "Petit", "suffix": "" } ], "year": 1997, "venue": "IEEE Communications Magazine", "volume": "35", "issue": "9", "pages": "64--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benyassine, A., E. Shlomot, H. Y. Su, D. Massaloux, C. Lamblin, and J. P. Petit, \"ITU-T Recommendation G.729 Annex B: a silence compression scheme for use with G.729 optimized for V.70 digital simultaneous voice and data applications,\" IEEE Communications Magazine, 35(9), 1997, pp.64-73.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A robust voice activity detector for wireless communications using soft computing", "authors": [ { "first": "F", "middle": [], "last": "Beritelli", "suffix": "" }, { "first": "S", "middle": [], "last": "Casale", "suffix": "" }, { "first": "A", "middle": [], "last": "Cavallaro", "suffix": "" } ], "year": 1998, "venue": "IEEE Journal on Selected Areas in Communications", "volume": "16", "issue": "9", "pages": "1818--1829", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beritelli, F., S. Casale, and A. Cavallaro, \"A robust voice activity detector for wireless communications using soft computing,\" IEEE Journal on Selected Areas in Communications, 16(9), 1998, pp.1818-1829.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "AM-FM energy detection and separation in noise using multiband energy operators", "authors": [ { "first": "A", "middle": [ "C" ], "last": "Bovik", "suffix": "" }, { "first": "P", "middle": [], "last": "Maragos", "suffix": "" }, { "first": "T", "middle": [], "last": "Quatieri", "suffix": "" } ], "year": 1993, "venue": "IEEE Transactions on Signal Processing", "volume": "41", "issue": "12", "pages": "3245--3265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bovik, A. C., P. Maragos, and T. 
Quatieri, \"AM-FM energy detection and separation in noise using multiband energy operators,\" IEEE Transactions on Signal Processing, 41(12), 1993, pp.3245-3265.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Wavelet-based Voice Activity Detection Algorithm in Noisy Environments", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Wang", "suffix": "" } ], "year": 2002, "venue": "International Conference on 9th Electronics, Circuits and Systems", "volume": "", "issue": "", "pages": "995--998", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, S.H., and J.F. Wang, \"A Wavelet-based Voice Activity Detection Algorithm in Noisy Environments,\" International Conference on 9th Electronics, Circuits and Systems, 2002, pp.995-998.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Analysis and improvement of a statistical model-based voice activity detector", "authors": [ { "first": "Y", "middle": [ "D" ], "last": "Cho", "suffix": "" }, { "first": "A", "middle": [], "last": "Kondoz", "suffix": "" } ], "year": 2001, "venue": "IEEE Signal Processing Letters", "volume": "8", "issue": "10", "pages": "276--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cho, Y. D., and A. Kondoz, \"Analysis and improvement of a statistical model-based voice activity detector,\" IEEE Signal Processing Letters, 8(10), 2001, pp.276-278.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The voice activity detector for the pan European digital cellular mobile telephone service", "authors": [ { "first": "D", "middle": [ "K" ], "last": "Freeman", "suffix": "" }, { "first": "G", "middle": [], "last": "Cosier", "suffix": "" }, { "first": "C", "middle": [ "B" ], "last": "Southcott", "suffix": "" }, { "first": "I", "middle": [], "last": "Boyd", "suffix": "" } ], "year": 1989, "venue": "International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "369--372", "other_ids": {}, "num": null, "urls": [], "raw_text": "Freeman ,D. K., G. Cosier, C. B. Southcott, and I. Boyd, \"The voice activity detector for the pan European digital cellular mobile telephone service,\" International Conference on Acoustics, Speech, and Signal Processing, 1989, pp.369-372.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A comparative study of speech detection methods", "authors": [ { "first": "S", "middle": [ "V" ], "last": "Gerven", "suffix": "" }, { "first": "F", "middle": [], "last": "Xie", "suffix": "" } ], "year": 1997, "venue": "Proceedings of Eurospeech", "volume": "", "issue": "", "pages": "1095--1098", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerven , S. V., and F. Xie, \"A comparative study of speech detection methods,\" In Proceedings of Eurospeech, 3, 1997, pp.1095-1098.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Teager energy based feature parameters for speech recognition in car noise", "authors": [ { "first": "F", "middle": [], "last": "Jabloun", "suffix": "" }, { "first": "A", "middle": [ "E" ], "last": "Cetin", "suffix": "" }, { "first": "E", "middle": [], "last": "Erzin", "suffix": "" } ], "year": 1999, "venue": "IEEE Signal Processing Letters", "volume": "6", "issue": "10", "pages": "259--261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jabloun, F., A. E. Cetin, and E. 
Erzin, \"Teager energy based feature parameters for speech recognition in car noise,\" IEEE Signal Processing Letters, 6(10), 1999, pp.259-261.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On a simple algorithm to calculate the 'energy' of a signal", "authors": [ { "first": "J", "middle": [ "F" ], "last": "Kaiser", "suffix": "" } ], "year": 1990, "venue": "International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "381--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiser, J. F., \"On a simple algorithm to calculate the 'energy' of a signal,\" International Conference on Acoustics, Speech, and Signal Processing, 1990, pp.381-384.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Digital Speech Coding for Low Bit Rate Communications Systems", "authors": [ { "first": "A", "middle": [ "M" ], "last": "Kondoz", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kondoz, A. M., Digital Speech Coding for Low Bit Rate Communications Systems, John Wiley & Sons Ltd., 1994.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A theory for multiresolution signal decomposition: the wavelet representation", "authors": [ { "first": "S", "middle": [], "last": "Mallat", "suffix": "" } ], "year": 1989, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "11", "issue": "7", "pages": "674--693", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mallat, S., \"A theory for multiresolution signal decomposition: the wavelet representation,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7), 1989, pp.674-693.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Robust voice activity detection using higher-order statistics in the LPC residual domain", "authors": [ { "first": "E", "middle": [], "last": "Nemer", "suffix": "" }, { "first": "R", "middle": [], "last": "Goubran", "suffix": "" }, { "first": "S", "middle": [], "last": "Mahmoud", "suffix": "" } ], "year": 2001, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "9", "issue": "3", "pages": "217--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nemer, E., R. Goubran and S. Mahmoud, \"Robust voice activity detection using higher-order statistics in the LPC residual domain,\" IEEE Transactions on Speech and Audio Processing, 9(3), 2001, pp.217-231.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Robust Feature for Speech Detection", "authors": [ { "first": "A", "middle": [], "last": "Ouzounov", "suffix": "" } ], "year": 2004, "venue": "Cybernetics and Information Technologies", "volume": "4", "issue": "2", "pages": "3--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ouzounov, A., \"A Robust Feature for Speech Detection,\" Cybernetics and Information Technologies, 4(2), 2004, pp.3-14.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A voice activity detector employing soft decision based noise spectrum adaptation", "authors": [ { "first": "J", "middle": [], "last": "Sohn", "suffix": "" }, { "first": "W", "middle": [], "last": "Sung", "suffix": "" } ], "year": 1998, "venue": "International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "365--368", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sohn, J., and W. 
Sung, \"A voice activity detector employing soft decision based noise spectrum adaptation,\" International Conference on Acoustics, Speech, and Signal Processing, 1998, pp.365-368.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Assessment for automatic speech recognition: II. NOISEX-92: A database and an experiment to study the effect of additive noise on speech recognition systems", "authors": [ { "first": "A", "middle": [], "last": "Varga", "suffix": "" }, { "first": "H", "middle": [ "J M" ], "last": "Steeneken", "suffix": "" } ], "year": 1993, "venue": "Speech Communication", "volume": "12", "issue": "", "pages": "247--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "Varga, A., and H. J. M. Steeneken, \"Assessment for automatic speech recognition: II. NOISEX-92: A database and an experiment to study the effect of additive noise on speech recognition systems,\" Speech Communication, 12, 1993, pp.247-251.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Discrete wavelet transform (DWT) using filter banks Figure 2. Structure of three-level wavelet decomposition", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Examples of normalized SSACF for voiced speech, unvoiced speech and white noise", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "The development of the SAE feature parameter with and without band-decomposition 2.5 Block Diagram of the Proposed Wavelet-Based VAD Figure 5", "num": null, "uris": null }, "FIGREF5": { "type_str": "figure", "text": "Adaptive thresholding strategy for extracting the boundary of voice activity3. Simulation ResultsThe proposed wavelet-based VAD algorithm operates on a frame-by-frame basis (frame size = 256 samples/frame, overlapping size =", "num": null, "uris": null }, "FIGREF6": { "type_str": "figure", "text": "The effects of variable noise-level on the proposed SAE parameter and Chen's VAS parameter for a noisy speech sentence consisting of continuous words", "num": null, "uris": null }, "TABREF0": { "num": null, "html": null, "content": "
Type | SNR (dB) | P_cs (%): Proposed VAD | P_cs (%): Chen's VAD | P_cs (%): G.729B VAD | P_f (%): Proposed VAD | P_f (%): Chen's VAD | P_f (%): G.729B VAD
Car Noise | 30 | 99.1 | 97.3 | 92.1 | 6.2 | 6.9 | 7.3
Car Noise | 10 | 97.3 | 96.1 | 86.5 | 8.6 | 9.3 | 16.3
Car Noise | -5 | 92.6 | 93.5 | 72.3 | 10.5 | 10.9 | 21.5
Factory Noise | 30 | 96.9 | 97.2 | 96.9 | 7.6 | 10.3 | 9.1
Factory Noise | 10 | 93.1 | 94.1 | 82.3 | 8.8 | 13.2 | 18.9
Factory Noise | -5 | 87.2 | 85.6 | 70.7 | 10.9 | 15.4 | 26.4
White Noise | 30 | 99.1 | 97.2 | 98.4 | 1.3 | 1.9 | 2.0
White Noise | 10 | 98.5 | 98.1 | 86.3 | 1.5 | 1.8 | 3.6
White Noise | -5 | 93.2 | 92.9 | 60.5 | 1.6 | 2.3 | 3.3
Average | | 95.22 | 94.67 | 82.89 | 6.33 | 8 | 12.04
", "type_str": "table", "text": "" } } } }