{
"paper_id": "O06-2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:07:05.507379Z"
},
"title": "Robust Target Speaker Tracking in Broadcast TV Streams",
"authors": [
{
"first": "Junmei",
"middle": [],
"last": "Bai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "jmbai@hitic.ia.ac.cn"
},
{
"first": "Hongchen",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "hcjiang@hitic.ia.ac.cn"
},
{
"first": "Shilei",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "slzhang@hitic.ia.ac.cn"
},
{
"first": "Shuwu",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "swzhang@hitic.ia.ac.cn"
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "xubo@hitic.ia.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper addresses the problem of audio change detection and speaker tracking in broadcast TV streams. A two-pass audio change detection algorithm, which includes detection of the potential change boundaries and refinement, is proposed. Speaker tracking is performed based on the results of speaker change detection. In speaker tracking, Wiener filtering, endpoint detection of pitch, and segmental cepstral feature normalization are applied to obtain a more reliable result. The algorithm has low complexity. Our experiments show that the algorithm achieves very satisfactory results.",
"pdf_parse": {
"paper_id": "O06-2004",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper addresses the problem of audio change detection and speaker tracking in broadcast TV streams. A two-pass audio change detection algorithm, which includes detection of the potential change boundaries and refinement, is proposed. Speaker tracking is performed based on the results of speaker change detection. In speaker tracking, Wiener filtering, endpoint detection of pitch, and segmental cepstral feature normalization are applied to obtain a more reliable result. The algorithm has low complexity. Our experiments show that the algorithm achieves very satisfactory results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Broadcast TV programs are rich multimedia information resources. They contain large amounts of AV (audio & video) contents including speech, music, images, motion, text, and so on. Finding ways to extract and manage these various kinds of AV content information is becoming extremely important and necessary for application-oriented multimedia content mining and management. The analysis and classification of audio data are important tasks in many applications, such as speaker tracking, speech recognition, and content-based indexing. Among of them, target speaker tracking in TV streams is an important research topic for TV scene analysis. In contrast with general speaker recognition, speaker detection in audio streams usually requires segments of relatively homogenous speech and speaker tracking in this task should also determine the target speakers' locations, in other word, the starting and ending times. In such applications, effective methods for segmenting continuous audio streams into homogeneous segments are required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The problem of acoustic segmentation and classification has become crucial for the application of automatic speech recognition to audio stream processing. The automatic segmentation of long audio streams and the clustering of audio segments according to different acoustic characteristics have received much attention recently [Lu and Zhang 2002; Chen and Gopalakrishnan 1998; Delacourt and Wellekens 2000; Wilcox et al. 1994; Pietquin et al. 2002] . To detect target speakers in an audio stream, it is best to segment the audio stream into homogeneous regions according to changes in speaker identity, environmental conditions and channel conditions. In fact, there are no explicit cues of changes among these audio signals, and the same speaker may appear multiple times in audio streams. Thus, it is not easy to segment an audio stream correctly. Various segmentation algorithms proposed in the literature [Lu and Zhang 2002; Chen and Gopalakrishnan 1998; Delacourt and Wellekens 2000; Ajmera et al. 2003; Cettolo and Federico 2000] can be categorized as follows [Chen et al. 1998 ]: 1) Decoder-guided segmentation algorithms: The input audio stream is first decoded by an automation speech recognition (ASR) systems, and then the desired segments are produced by cutting the input at the silence locations generated by the decoder. Other information from the decoder, such as gender information, can also be utilized in segmentation.",
"cite_spans": [
{
"start": 327,
"end": 346,
"text": "[Lu and Zhang 2002;",
"ref_id": "BIBREF10"
},
{
"start": 347,
"end": 376,
"text": "Chen and Gopalakrishnan 1998;",
"ref_id": "BIBREF5"
},
{
"start": 377,
"end": 406,
"text": "Delacourt and Wellekens 2000;",
"ref_id": "BIBREF6"
},
{
"start": 407,
"end": 426,
"text": "Wilcox et al. 1994;",
"ref_id": "BIBREF18"
},
{
"start": 427,
"end": 448,
"text": "Pietquin et al. 2002]",
"ref_id": "BIBREF13"
},
{
"start": 909,
"end": 928,
"text": "[Lu and Zhang 2002;",
"ref_id": "BIBREF10"
},
{
"start": 929,
"end": 958,
"text": "Chen and Gopalakrishnan 1998;",
"ref_id": "BIBREF5"
},
{
"start": 959,
"end": 988,
"text": "Delacourt and Wellekens 2000;",
"ref_id": "BIBREF6"
},
{
"start": 989,
"end": 1008,
"text": "Ajmera et al. 2003;",
"ref_id": "BIBREF0"
},
{
"start": 1009,
"end": 1035,
"text": "Cettolo and Federico 2000]",
"ref_id": "BIBREF4"
},
{
"start": 1066,
"end": 1083,
"text": "[Chen et al. 1998",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "2) Model-based segmentation algorithms: Different models, e.g., Gaussian mixture models, are build for a fixed set of acoustic classes, such as telephone speech, pure music, etc, from a training corpus. In these schemes, a sliding window approach and multivariate Gaussian models are generally used. Decisions about the maximum likelihood boundary are made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "3) Metric-based segmentation algotithms: The audio stream is segmented at places where maxima of the distances between neighboring windows appear, and distance measures, such as the KL distance and the generalized likelihood ratio (GLR) distance [Fisher et al. 2003 ], are utilized.",
"cite_spans": [
{
"start": 246,
"end": 265,
"text": "[Fisher et al. 2003",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "These methods are not very successful at detecting acoustic changes that occur in data [Chen et al.1998 ]. Decoder-guided segmentation only places boundaries at silence locations, which in general have no direct connection with acoustic changes in the data. Model-based segmentation usually can not be generalized to unseen acoustic conditions. Meanwhile, both model-based and metric-based segmentation rely on a threshold which sometimes lacks stability and robustness. In addition, model-based segmentation does not generalize to unseen acoustic conditions.",
"cite_spans": [
{
"start": 87,
"end": 103,
"text": "[Chen et al.1998",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "As for target speaker detection, which is similar to general speaker verification, the traditional methods focus on likelihood ratio detection and template matching. Among these approaches, Gaussian Mixture Models (GMMs) have been the most successful so far [Reynolds et al. 2000] . Reynolds also extended of these methods by adapting the speaker model from a universal background model (UBM). The speaker detector we adopted in our experiments is based on adapted GMMs. In the target speaker detecting system, we also used the segmental cepstral mean and variance normalization (SCMVN) to normalize the cepstral coefficients to get robust segmental parameter statistics that are suitable for various kinds of environmental conditions.",
"cite_spans": [
{
"start": 258,
"end": 280,
"text": "[Reynolds et al. 2000]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The task of automatic speaker tracking involves finding target speakers in test audio streams. Given an audio stream, all the segments containing a target speaker's voice must be located with the starting and ending times. The general approach to speaker tracking consists of three steps: audio segmentation, audio classification, and speaker verification. A complete block diagram of the proposed speaker tracking system is shown in Figure 1 . The diagram shows how the components of the system fit together.",
"cite_spans": [],
"ref_spans": [
{
"start": 434,
"end": 442,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "2."
},
{
"text": "The three steps are defined as three modules in Figure 1 , denoted as M1, M2, and M3. First, audio streams are segmented in M1 by means of two-pass audio segmentation. Then, in M2, these audio segments are classified into different classes, such as speech, music, noise and so on. Last, the speech segments are tested in M3 to verify if target speakers appear in the audio streams. Sometimes, M2 is not necessary when the speaker verification module can distinguish target speakers with other audio signals with acceptable precision. The individual blocks will be described in detail in following sections. ",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "2."
},
{
"text": "The goal of automatic segmentation of audio signals is to detect changes in speaker identity, environmental conditions, and channel conditions. The problem is to find acoustic change detection points in an audio stream. A two-pass segmentation process for audio streams is presented in this paper. First, audio segmentation based on entropy is used to detect potential audio change points. Then, speaker change boundary refinement based on Bayesian decisions is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two-Pass Audio Segmentation",
"sec_num": "3."
},
{
"text": "In the first pass, we use entropy measures to determine the turn candidates. Entropy is a measure of the uncertainty or disorder in a given distribution [Papoulis 1991 ]. There are many methods for calculating entropy. Ajmera calculates entropy based on posterior probabilities and sets it as one of the features for discriminating speech and music [Ajmera et al. 2003] . It is a model-based classification scheme that makes decisions based on the scores of audio signals to two models: a speech model and a music model. Generally, the speech model is estimated from lots of speech spoken by different speakers, and it acts as a universal model. Thus, it is not suitable for distinguishing different speakers, particularly unknown speakers.",
"cite_spans": [
{
"start": 153,
"end": 167,
"text": "[Papoulis 1991",
"ref_id": "BIBREF12"
},
{
"start": 349,
"end": 369,
"text": "[Ajmera et al. 2003]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "First-Pass Segmentation Based on Entropy",
"sec_num": "3.1"
},
{
"text": "The entropy method used in this work is also an extension of the model-based segmentation scheme. Generally, model-based methods apply a maximum likelihood of the Gaussian process with a penalty weight to detect turns in audio streams. By appropriately defining this penalty, one can generate decisions based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Consistent AIC (CAIC), the Minimum Description Length (MDL) principle, and the Minimum Message Length (MML) principle. It has been found that BIC, MDL, and CAIC give the best results and that with proper tuning, all three produce comparable results [Cettolo et al. 2000] .",
"cite_spans": [
{
"start": 647,
"end": 668,
"text": "[Cettolo et al. 2000]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "First-Pass Segmentation Based on Entropy",
"sec_num": "3.1"
},
{
"text": "In this paper, entropy is calculated based on statistical parameters of audio features. The decision rule is not based on scores but on the shape of the entropy contour. In order to clearly show the performance of our method, it is compared with BIC in this paper. The of entropy-based audio segmentation scheme is described in detail in the following: [You et al. 2004] :",
"cite_spans": [
{
"start": 353,
"end": 370,
"text": "[You et al. 2004]",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "First-Pass Segmentation Based on Entropy",
"sec_num": "3.1"
},
{
"text": "Assume a random variable X of dimension K. The entropy of the random variable (RV) is computed by first estimating its probability distribution function (pdf). We can compute the pdf either from the RV's histogram or from a parameterized distribution. The latter is used to reduce the amount of computation. Assume that the pdf follows a K-dimensional Gaussian density:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "1 1 1 2 2 ( ) ( ) ( ) | 2 | T X X P X e \u00b5 \u00b5 \u03c0 \u2212 \u2212 \u2212 \u2212 \u03a3 \u2212 = \u03a3 , (1-a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "where \u00b5 is the mean vector and \u03a3 is the covariance matrix. The entropy of X is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "( ) ( ) ( ) E X P X LogP X dX = \u2212 \u222b . (1-b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "Eq. (1-b) can been replaced by [You et al. 2004] :",
"cite_spans": [
{
"start": 31,
"end": 48,
"text": "[You et al. 2004]",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "( ) 2 log E X KLog \u03c0 \u2248 + \u03a3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "The entropy curve of a speech signal in a sliding window is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "Define 1 2 { , , } N = Y y y y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "as the cepstral sequence of an audio stream in a sliding window of N frames. At a given frame index",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "j (1 ) j N < <",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": ", the sliding window is partitioned into two sub-windows. Denote them as and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "( ) j r N ( ( ) ( ) j l j r N N N + = )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "respectively. Assume that each window is generally modeled with a multivariate Gaussian density, such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "( ) ( ) ( , ) j l j l N \u00b5 \u03a3 and ( ) ( ) ( , ) j r j r N \u00b5 \u03a3 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "respectively. The sum of the entropy of each side of the window is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "( ) ( ) ( ) ( ) ( ) ( ) ( ) 1 1 ( log 2 ) log 2 ) j l j l N N i j l j l j l j l i i E K K N \u03c0 \u03c0 = = = + \u03a3 = + \u03a3 \u2211 \u2211 , (1-d) ( ) ( ) ( ) ( ) ( ) ( ) ( ) 1 1 ( log 2 ) log 2 j r j r N N i j r j r j r j r i i E K K N \u03c0 \u03c0 = = = + \u03a3 = + \u03a3 \u2211 \u2211 . (1-e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "Then, the segmentation entropy at j can be computed as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "( ) ( ) ( ) ( ) ( ) ( ) 1 1 ( ) ( ) ( ) ( ) ( ) log 2 log | | log 2 log | |, ( ) log 2 log | | log | | . j l j r N N j l j l j r j r i i j l j l j r j r E j K N K N E j NK N N \u03c0 \u03c0 \u03c0 = = = + \u00d7 + + \u00d7 = + \u00d7 + \u00d7 \u2211 \u2211 \u03a3 \u03a3 \u03a3 \u03a3 (1-f) log 2 NK",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "\u03c0 is a constant. It is ineffective for determining the entropy curve and can been omitted. Thus, the segmentation entropy at j can be simplified as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) ( ) ( ) ( ) log | | log | | j l j l j r j r E j N N = \u00d7 + \u00d7 \u03a3 \u03a3 .",
"eq_num": "(1)"
}
],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "Decision making is performed by analyzing the entropy curve in each window as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "H1: There is a potential change point in the sliding window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "The sequence entropy value shows a step-down change until it reaches a minimal value at time t . Then, it increases gradually. t can be considered as a change point. Here, arg min ( )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "H0: There is no any change point in the sliding window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "The segmentation entropies vary randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "We can make the following observations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "a) The minimal entropy varies for different window sizes and different audio conditions. However, if the entropy decreases gradually till it reaches a minimal polar, then it increases gradually, there is a changing point at the polar. b) Since there are fewer data in the region close to the original point on the left, the segmentation entropies in this region are unable to describe the entropy curve accurately. The same is true, on the right. Thus, these two regions are ignored in the final analysis. As shown in Figure 2 , t \u03b8 is defined as the number of the points ignored on each side.",
"cite_spans": [],
"ref_spans": [
{
"start": 518,
"end": 526,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
{
"text": "c) The basic processing unit or the sliding window length is 3s; however, the overlapping length between two neighboring windows is not fixed. If there is not change point in the prior window, the overlapping length is 1.5s; otherwise, the overlapping length is relative to the location of the last change point in the prior window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},
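{
"text": "To make the first pass concrete, here is a minimal Python sketch (our own, not from the paper) that computes the segmentation entropy curve of Eq. (1) for one sliding window of cepstral frames:

import numpy as np

def segmentation_entropy_curve(frames, margin=25):
    # frames: array of shape (N, K) with the cepstral vectors of one window.
    # E(j) = N_l(j) * log|Sigma_l(j)| + N_r(j) * log|Sigma_r(j)|, Eq. (1);
    # the first and last margin split points are ignored (observation b),
    # and margin should exceed K so the sample covariances are usable.
    n = len(frames)
    curve = np.full(n, np.inf)
    for j in range(margin, n - margin):
        _, logdet_l = np.linalg.slogdet(np.cov(frames[:j], rowvar=False))
        _, logdet_r = np.linalg.slogdet(np.cov(frames[j:], rowvar=False))
        curve[j] = j * logdet_l + (n - j) * logdet_r
    return curve

A potential change point is then hypothesized at t = argmin_j E(j) when the curve dips to a clear minimum and rises again (hypothesis H1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of a Gaussian Random Variable",
"sec_num": null
},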
{
"text": "Often there are false positives in potential speaker change points obtained with the algorithms described above. To remove false positives, a refinement algorithm is applied. The algorithm is based on the dissimilarity between two adjacent sub-segments. In this step, two distance measures, the Bayesian decision and KL distance, are applied to validate or discard candidates from the first pass. Suppose the feature vector extracted from each sub-segment is Gaussian, and assume that the feature probability distribution functions are n-variable normal populations, such as 1 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Second-Pass Speaker Change Boundary Refinement",
"sec_num": "3.2"
},
{
"text": "( , ) N \u00b5 \u03a3 and 2 2 ( , ) N \u00b5 \u03a3 . The Bayesian decision distance between two speech segments can be defined as [Lu et al. 2002 ",
"cite_spans": [
{
"start": 111,
"end": 126,
"text": "[Lu et al. 2002",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Second-Pass Speaker Change Boundary Refinement",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "] 1 1 1 1 2 1 2 2 [( )( )] BD D tr \u2212 \u2212 = \u03a3 \u2212\u03a3 \u03a3 \u2212\u03a3 .",
"eq_num": "(2)"
}
],
"section": "Second-Pass Speaker Change Boundary Refinement",
"sec_num": "3.2"
},
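{
"text": "As a minimal sketch (our own, assuming each sub-segment is summarized by the sample covariance of its feature vectors), the Bayesian decision distance of Eq. (2) can be computed as follows:

import numpy as np

def bayesian_decision_distance(seg1, seg2):
    # seg1, seg2: arrays of shape (num_frames, K) for two adjacent sub-segments.
    s1 = np.cov(seg1, rowvar=False)
    s2 = np.cov(seg2, rowvar=False)
    # D_BD = (1/2) tr[(S1 - S2)(inv(S2) - inv(S1))], Eq. (2)
    return 0.5 * np.trace((s1 - s2) @ (np.linalg.inv(s2) - np.linalg.inv(s1)))

A candidate boundary from the first pass is kept only if the distance between the segments on its two sides exceeds the empirical threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Second-Pass Speaker Change Boundary Refinement",
"sec_num": "3.2"
},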
{
"text": "Provided that the speech of each segment can been modeled with a multivariate Gaussian density, the Kullback-Leibler (KL) distance between two speech slices is defined by [Homayoon et al. 1998 ]",
"cite_spans": [
{
"start": 171,
"end": 192,
"text": "[Homayoon et al. 1998",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Samples of entropy contour",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 2 2 1 1 2 1 ( ) ( ) i i i i M i KL i i M i w d w d D w w = = + = + \u2211 \u2211 ,",
"eq_num": "(3)"
}
],
"section": "Figure 2. Samples of entropy contour",
"sec_num": null
},
{
"text": "1 1 1 2 2 2 1 1 2 2 j j j i i i ij j i i j d \u00b5 \u00b5 \u00b5 \u00b5 \u2212 \u2212 = + + + \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 , (3-a) 1 min( ) i ij j d d = , (3-b) 2 min( ) j ij i d d = .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Samples of entropy contour",
"sec_num": null
},
{
"text": "(3-c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Samples of entropy contour",
"sec_num": null
},
{
"text": "{ | 1, 2,..., } i t t w w i M =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Samples of entropy contour",
"sec_num": null
},
{
"text": "is the mixture weight of the model of the tth segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Samples of entropy contour",
"sec_num": null
},
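{
"text": "A minimal sketch of Eq. (3) (our own), assuming the pairwise component distances d_ij of Eq. (3-a) have already been collected into a matrix:

import numpy as np

def gmm_kl_distance(w1, w2, pairwise_d):
    # w1, w2: mixture weight vectors of the two segment models;
    # pairwise_d[i, j] holds d_ij of Eq. (3-a).
    d1 = pairwise_d.min(axis=1)   # d_i^1 = min_j d_ij, Eq. (3-b)
    d2 = pairwise_d.min(axis=0)   # d_j^2 = min_i d_ij, Eq. (3-c)
    return (w1 @ d1 + w2 @ d2) / (w1.sum() + w2.sum())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Samples of entropy contour",
"sec_num": null
},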
{
"text": "In general, if two speech segments are spoken by the same speaker, the distance between them will be small; otherwise, the distance will be large. Thus, we apply a simple criterion: if the distance between two speech segments is larger than a given threshold, then these two segments can be considered as to be spoken by different speakers. The thresholds adopted in this study were set experientially. Figure 3 shows an example of two-pass audio segmentation of 26-second long audio stream. The audio stream includes two speakers and 3 speaker change boundaries, which are 7s, 15s and 22s respectively. It can be seen that the number of the potential boundaries is greater than that of real boundaries. The Bayesian decision is performed on these potential speaker change points to remove the false ones. In Figure 3 , D bd ,",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 411,
"text": "Figure 3",
"ref_id": null
},
{
"start": 809,
"end": 817,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 2. Samples of entropy contour",
"sec_num": null
},
{
"text": "is an experiential threshold for the Bayesian decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3. Two pass segmentation procedure: the entropy contour and the Bayesian decision",
"sec_num": null
},
{
"text": "The aim of audio classification is to distinguish speech and other audio signals. Currently, the state-of-the-art method of classification is based on GMM. Four models were applied in our experiments, a speech model, an unvoiced model, a music model, and a noise model, to classify the audio segments. Among them, only speech slices were used to detect target speakers in subsequent processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Audio Segments Classification",
"sec_num": "3.3"
},
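{
"text": "A minimal sketch of this step (our own), assuming one trained GMM per class with a scikit-learn-style score_samples() method:

def classify_segment(features, class_gmms):
    # class_gmms maps class names (speech, unvoiced, music, noise) to
    # trained GMMs; pick the class with the highest total log-likelihood.
    return max(class_gmms,
               key=lambda c: class_gmms[c].score_samples(features).sum())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Audio Segments Classification",
"sec_num": "3.3"
},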
{
"text": "To a certain extent, speaker detection is similar to automatic speaker verification (ASV), which is used to verify the identity claimed by a speaker. The general approach to speaker detection mainly consists of four parts: speech signal pre-processing, speaker feature extraction, speaker modeling, and recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Speaker Tracking System",
"sec_num": "4."
},
{
"text": "In automatic speaker detection systems, the mismatch between training and recognition, generated by additive or convoluting noises, often severely degrades the recognition accuracy. In addition, the non-speech signals, mainly silence and noise, contain little information of speakers. They are the same for each person and contain no distinguishing features, only ones that are confusing for speaker detection. They can degrade the discrimination ability for different speakers. Thus it is necessary to reduce the noise and discard the irrelevant information before performing speaker features extraction. In our experiments, we applied Wiener filtering and pitch-based endpoint detection in speech slice pre-processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Slice Pre-Processing",
"sec_num": "4.1"
},
{
"text": "Though pitch is a robust feature to noise, it is difficult to measure pitch accurately and reliably for several reasons. Since the key is to detect the active endpoints by means of pitch,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Slice Pre-Processing",
"sec_num": "4.1"
},
{
"text": "it is not appropriate to put much emphasis on the precision values of pitch. Moreover, we use wiener filter to alleviate the noise, which makes the pitch detection more precise. In Figure 4 , we can see that the pitch, which is mostly susceptible to noise, is near the endpoint. We set the active endpoint at the place where the pitch is less than zero. Although the pitch may not be precise, it is valid for endpoint detection. If the interval between two adjacent unvoiced frames is too short, say, less than 10 frames, then these unvoiced frames will be reserved.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 189,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 4. Pre-processing by Wiener filter and endpoint detection on pitch",
"sec_num": null
},
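{
"text": "A minimal sketch of this endpoint rule (our own; the pitch-track representation and function name are assumptions):

def voiced_spans(pitch, min_gap=10):
    # pitch: per-frame pitch values; non-positive values mark unvoiced frames.
    # Unvoiced gaps shorter than min_gap frames are retained inside a span.
    spans, start, gap = [], None, 0
    for i, p in enumerate(pitch):
        if p > 0:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:
                spans.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:
        spans.append((start, len(pitch) - gap))
    return spans",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4. Pre-processing by Wiener filter and endpoint detection on pitch",
"sec_num": null
},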
{
"text": "Although there is no exclusive feature for distinguishing different speakers' voices, the speech spectrum has been shown to be very effective for speaker recognition. This is because the spectrum reflects a person's vocal tract structure, the predominant physiological factor that distinguishes one person's voice from others. The Mel-frequency cepstral coefficient (MFCC) vectors have been used extensively for speaker recognition. However, the MFCC features can be severely affected by noise. Thus, some methods should be used to compensate for the corrupted speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Feature Extraction and Normalization",
"sec_num": "4.2"
},
{
"text": "The widely used method for Cepstral feature normalization is Cepstral mean subtraction (CMS). CMS is performed over an entire file, and it can reduce the stationary convolution noise caused by the channel. However, CMS can also reduce some slow dynamic features of speakers. In this study, the segmental cepstral mean and variance normalization (SCMVN) were used. SCMVN is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Feature Extraction and Normalization",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( 1)/2 ( 1)/2 ( ) ( ) ( ) ( ) t L t t L t x i i x i i \u00b5 \u03c3 + \u2212 + \u2212 \u2212 = ,",
"eq_num": "(4)"
}
],
"section": "Speaker Feature Extraction and Normalization",
"sec_num": "4.2"
},
{
"text": "where, t X is the feature vector at time t , and L is the length of the sliding window; t , which is the first frame in the current window, gives the current place of the window in the speech;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Feature Extraction and Normalization",
"sec_num": "4.2"
},
{
"text": "( ) t i \u00b5 and ( ) t i \u03c3 are the means and variances of the feature vector in this window. It should be noted that the length of the window, L , is fixed since the normalization of all feature should be uniform. In addition, a proper value of L should be adopted. The estimations of ( ) t i \u00b5 and ( ) t i \u03c3 may be imprecise if L is too short. And if it is too long, the calculation will be more complex. SCMVN has two possible effects: Firstly, it can reduce the action of addition noises in feature variance. Generally, addition noises result in decreased variance. Secondly, the features are mapped to a normal distribution over a sliding window, which is helpful for modeling the speakers' GMM later in speaker recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Feature Extraction and Normalization",
"sec_num": "4.2"
},
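{
"text": "A minimal sketch of SCMVN per Eq. (4) (our own; the tail-window handling and the epsilon guard are assumptions):

import numpy as np

def scmvn(features, window=301, eps=1e-8):
    # features: array of shape (T, K). Following the text, each window
    # starts at the current frame t and spans L = window frames; the last
    # windows shrink near the end of the stream.
    out = np.empty_like(features, dtype=float)
    for t in range(len(features)):
        seg = features[t : t + window]
        out[t] = (features[t] - seg.mean(axis=0)) / (seg.std(axis=0) + eps)
    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Feature Extraction and Normalization",
"sec_num": "4.2"
},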
{
"text": "The basic speaker detector is a likelihood ratio detector with target and alternative probability distributions. For text independent speaker verification GMMs (Gaussian Mixture Models) have been most successful so far [Reynolds et al. 2000] . The test ratio may be expanded by using the Bayesian rule:",
"cite_spans": [
{
"start": 219,
"end": 241,
"text": "[Reynolds et al. 2000]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | ) ( ) ( | ) ( ) ( | ) ( ) ( | ) i U B M i UBM i UBM f x g f x T x f x g f x \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb = = ,",
"eq_num": "(5)"
}
],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
{
"text": "where ( ) g \u03bb is the prior density. In fact, the prior density is assumed to be equal for the UBM and the target model. The set of feature vectors is often very large, hence, the value of (..) f is often very small. Therefore, it is common to compute the logarithm of the test ratio instead. The log-test ratio is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) log ( | ) log ( | ) i i U B M x f x f x \u03b8 \u03bb \u03bb = \u2212 .",
"eq_num": "(6)"
}
],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
{
"text": "Thus, the most suitable speaker models can be found based on the largest likelihood ratio. If the largest likelihood ratio is larger than a threshold, the identity of the current speaker can be determined; otherwise, the current segment is considered for a new speaker. In this way, we can determine the identity of the current speaker. Suppose that so far, K speakers are registered in the speaker model database; the concrete expression for identifying the speaker of the current segment is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "0 0 arg max max max i i i i if ID Non if \u03b8 \u03b8 \u03b8 \u03b8 \u03b8 \u2265 \u23a7 \u23aa = \u23a8 \u2264 \u23aa \u23a9 ,",
"eq_num": "(7)"
}
],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
{
"text": "where 1 i K \u2264 \u2264 and Non represents a new speaker. The threshold 0 \u03b8 can be either speaker dependent or speaker independent. The purpose of speaker dependent thresholds is to reduce the negative effects of speaker dependent variability on performance. Another solution is to apply a reversible transform to score values so that the result is equivalent to using speaker dependent thresholds. For practical reasons, the transform is based on impostor scores rather than on true speaker scores. One such method, currently known as znorm [Reynolds 1995] , transforms the impostor score distribution to zero mean and unit variance, while a Gaussian distribution is assumed. For an observation x and a claimed identity i \u03bb , the normalized log-test is given by",
"cite_spans": [
{
"start": 534,
"end": 549,
"text": "[Reynolds 1995]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) Znorm i i i i x x \u03b8 \u00b5 \u03b8 \u03c3 \u2212 = ,",
"eq_num": "(8)"
}
],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
{
"text": "where i \u00b5 and i \u03c3 are the moment estimates of the impostor score distribution for a speaker i \u03bb .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Tracking",
"sec_num": "4.3"
},
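{
"text": "A minimal sketch of the decision procedure of Eqs. (6)-(8) (our own, assuming scikit-learn-style GMMs whose score_samples() returns per-frame log-likelihoods; the impostor-score source is left to the caller):

def log_test_ratio(x, model, ubm):
    # theta_i(x) = log f(x|lambda_i) - log f(x|lambda_UBM), Eq. (6)
    return model.score_samples(x).sum() - ubm.score_samples(x).sum()

def znorm(theta, impostor_scores):
    # Eq. (8): normalize by the moments of the impostor score distribution.
    mu = sum(impostor_scores) / len(impostor_scores)
    sd = (sum((s - mu) ** 2 for s in impostor_scores) / len(impostor_scores)) ** 0.5
    return (theta - mu) / sd

def identify_speaker(x, speaker_gmms, ubm, theta0):
    # Eq. (7): pick the best-scoring registered speaker, or report a new one.
    scores = {sid: log_test_ratio(x, g, ubm) for sid, g in speaker_gmms.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= theta0 else None   # None stands for Non",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Tracking",
"sec_num": "4.3"
},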
{
"text": "The proposed audio segmentation and speaker tracking algorithms were evaluated using an audio database, recorded directly from the CCTV news channel. The database is composed of about 10 hours of audio streams, which are from different TV programs, such as news, interviews, music, and movies. In the test database, at least one target speaker appeared in each file. Figure 5 reports the length statistics for the segments in the test set. A segment was defined as a contiguous portion of an audio signal, homogeneous in terms of acoustic source and channel. The duration of two adjacent turns in the test data varied from 2 seconds to 5 minutes. In Figure 5 , the x-axis is the time duration, and the number reprensents the duration. On the right side of Figure 5 , the first row corresponds to the second row. For example, 1=\"<3s\" and 2=\"3s~10s\". This shows that about 2% of the audio segments were less than 3 seconds long. We tested the performance with windows of 2 seconds and 3 seconds. It was observed that the performance decreased dramatically when the two-second window is used. Thus, we selected 3 seconds as the unit window size. That is to say, for those speaker segments which were less than 3 seconds long, the segmentation results were not reliable.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 375,
"text": "Figure 5",
"ref_id": null
},
{
"start": 650,
"end": 658,
"text": "Figure 5",
"ref_id": null
},
{
"start": 756,
"end": 764,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Database",
"sec_num": "5.1"
},
{
"text": "The input audio stream was first down-sampled into a uniform format: 8KHZ, 16bits, and mono-channel, regardless of the input format. In first pass segmentation, the speech stream was then pre-emphasized and divided into sub-segments using 3-second window with some overlapping. That is, the basic processing unit was 3 seconds; however, the temporal resolution of segmentation was not fixed. If there was no change point in the prior window,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "Target speakers the overlapping length was 1.5 second, or the overlapping length was relative to the location of the last change point in the prior window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5. Histogram for the audio segment durations of all audio streams and",
"sec_num": null
},
{
"text": "In target speaker detection, the most important features extracted from the frame were MFCC and pitch. MFCC and the delta parameters were employed to characterize target speakers. The 16-dimensional MFCC vector and 1-dimensional energy were extracted from the speech signal every 12 ms with a 24 ms window. The delta parameters were then computed and appended to the previous vectors, thus producing a 34-dimensional feature vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5. Histogram for the audio segment durations of all audio streams and",
"sec_num": null
},
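{
"text": "A minimal sketch of this front-end (our own, using librosa as an assumed toolkit; the paper does not name an implementation):

import numpy as np
import librosa

def front_end(wav, sr=8000):
    # 16 MFCCs + energy every 12 ms with a 24 ms window, plus deltas -> 34 dims.
    hop, win = int(0.012 * sr), int(0.024 * sr)
    mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=16,
                                hop_length=hop, n_fft=win)
    energy = librosa.feature.rms(y=wav, hop_length=hop, frame_length=win)
    base = np.vstack([mfcc, energy])       # 17 static features per frame
    delta = librosa.feature.delta(base)    # 17 delta features per frame
    return np.vstack([base, delta]).T      # shape (frames, 34)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5. Histogram for the audio segment durations of all audio streams and target speakers",
"sec_num": null
},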
{
"text": "There were a total of 40 target speakers, who consisted of reporters, commentators, comperes, and interviewees. The target models were adapted from UBM parameters, using two minutes of training data. The target speaker detector was a likelihood ratio detector for adaptation GMMs. Our UBM was a 1024 mixture GMM, trained using about 6 hours of broadcast data from 60 speakers with equal number of males and females. Target models were derived by means of Bayesian adaptation from the UBM parameters using two minutes of training data. Only the mean vectors were adapted, as this had been observed to provide better performance. The amount of adaptation of each mixture mean was data dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5. Histogram for the audio segment durations of all audio streams and",
"sec_num": null
},
{
"text": "The baseline system only used CMS to alleviate noises; then, Wiener filtering, endpoint detection via the pitch, and SCMVN were applied, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5. Histogram for the audio segment durations of all audio streams and",
"sec_num": null
},
{
"text": "The criteria of performance for audio segmentation and speaker detection are shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.3"
},
{
"text": "For audio segmentation, the false alarm rate (FAR) and missed detection rate (MDR) were calculated as follows [Lu et al. 2002 . ERR is a common criterion for judging the performance of speaker verification systems.",
"cite_spans": [
{
"start": 110,
"end": 125,
"text": "[Lu et al. 2002",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.3"
},
{
"text": "The statistics results of audio segmentation are shown in Table 1 . In first pass segmentation, the entropy-based method was better than BIC, particularly in MDR. However, FAR was still a little high with both methods. This was mostly due to the following reasons. First, FAR in long segments is great. As shown in Figure 5 , about 10% of the segments were longer than 60 seconds. These long segments resulted in 5%-10% FAR. Second, the noise information increased FAR. In fact, some of the false detections in long segments affected the speaker-tracking performance a little, for about 20 seconds of speech is enough for speaker recognition. What's more, about 25% FAR appeared in speech signals. Thus, a speaker change boundary refinement algorithm was applied to remove false positives. As shown in Table 1 , second pass refinement decreased FAR from 30.4% to 14.4% and from 31.2% to 14.9% based on the entropy results and on BIC results, respectively, In MDR, there was about a 0.6% increase based on the entropy results and a 1.8% increase based on the BIC results. As for the second pass refinement schemes, Bayesian decision was little better than the KL distance. ",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 315,
"end": 323,
"text": "Figure 5",
"ref_id": null
},
{
"start": 802,
"end": 809,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results of Audio Segmentation",
"sec_num": "5.3.1"
},
{
"text": "There are many factors that affect the performance of speaker detection. Among them, the target speech duration is a very important factors especially for the false reject (FR) rate in target speaker detection. Generally speaking, the shorter the speech is, the higher FR and FA will be. As shown in Figure 6 , the FR rate decreased greatly with increasing time when the speech durations were less than 20 seconds long. And it changed little when the speech durations were longer than 20 seconds. Noise is another interference factor in target speaker detection. The performance in target speaker detection with different strategies is shown in Table 2 . The EER and the relative improvement compared with the baseline are illustrated in Table 2 . Compared with the conventional CMS, SCMVN was better at compensating for the corruption caused by noise. Its effect was clear in target speaker detection. Wiener filtering and endpoint detection based on pitch are only used in speaker detection because the error in noise estimating in Wiener filtering increases when the noise environment changes, so it cannot work well with long speech durations. In this case, Wiener filtering is not helpful but costly in terms of time. And silence signals are useful for audio segmenting, so they are not discarded. However, their effects in speaker detection were clear in our experiments. The integrated system with SCMVN, Wiener filtering, and endpoint detection showed the best performance. ",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 308,
"text": "Figure 6",
"ref_id": null
},
{
"start": 645,
"end": 652,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 738,
"end": 745,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results of Target Speaker Detection",
"sec_num": "5.3.2"
},
{
"text": "In this paper, we have presented a novel approach to unsupervised audio segmentation and a speaker tracking system. A two-pass audio change detection algorithm has been proposed, which includes potential audio change detection and speaker boundary refinement. The results of two-pass audio segmentation are classified as speech or music according their characteristics. Speaker tracking is based on the results of audio classification. In speaker tracking, Wiener filtering, endpoint detction based on pitch, and the segmental cepstral mean and variance normalization are applied to get more reliable results. The algorithm achieves satisfactory accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "There is still room for improvement of the proposed approach. In the experiments, we found that if two speakers were speaking synchronously, it was not easy to detect the change boundary. It was also found that the same speaker in various environments sometimes was detected as different speakers or rejected. This indicates that our compensation for the Figure 6 . The FR of speaker detection at different speech durations mismatch effect of the environment or channel is still insufficient. In our future research, we will focus on these issues.",
"cite_spans": [],
"ref_spans": [
{
"start": 355,
"end": 363,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
}
],
"back_matter": [
{
"text": "Support provided by the National Natural Science Foundation of China (NSFC) under grant no. 60475014 and the National Hi-tech Research Plan under grant no. 2005AA114130 is gratefully acknowledged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Speech/music segmentation using entropy and dynamism features in a HMM classification framework",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ajmera",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Mccowan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bourlard",
"suffix": ""
}
],
"year": 2003,
"venue": "Speech Communication",
"volume": "40",
"issue": "3",
"pages": "351--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ajmera, J., I. McCowan, and H. Bourlard, \"Speech/music segmentation using entropy and dynamism features in a HMM classification framework,\" Speech Communication, 40(3), 2003, pp.351-363.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Audio Segmentation and Speaker Detection in Broadcast TV Stream",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of 10th International Conference on SPEECH and COMPUTER",
"volume": "",
"issue": "",
"pages": "547--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bai, J., S. Zhang, R. Zheng, S. Zhang, and B. Xu, \"Audio Segmentation and Speaker Detection in Broadcast TV Stream,\" In Proc. of 10th International Conference on SPEECH and COMPUTER, 2005, Patras, Greece, pp.547-550.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Distance Measure Between collections of Distributions and Its Application to Speaker Recognition",
"authors": [
{
"first": "H",
"middle": [
"S M"
],
"last": "Beigi",
"suffix": ""
},
{
"first": "S",
"middle": [
"H"
],
"last": "Maes",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Sorensen",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of Int. Conf. On Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "753--756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beigi, H. S. M., S. H. Maes, and J. S. Sorensen, \"A Distance Measure Between collections of Distributions and Its Application to Speaker Recognition,\" In Proc. of Int. Conf. On Acoustic, Speech, and Signal Processing, 1998, Seattle, Washington, USA, pp. 753-756.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Speaker Recognition: a Tutorial",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "Campbell",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of The IEEE",
"volume": "85",
"issue": "9",
"pages": "1437--1462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Campbell, J.P., \"Speaker Recognition: a Tutorial.\" Proceedings of The IEEE, 85(9), 1997, pp. 1437-1462.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Model Selection Criteria for Acoustic Segmentation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the ISCA ITRW ASR2000 Automatic Speech Recognition",
"volume": "",
"issue": "",
"pages": "221--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cettolo, M., and M. Federico., \"Model Selection Criteria for Acoustic Segmentation,\" In Proc. of the ISCA ITRW ASR2000 Automatic Speech Recognition, 2000, Paris, France, pp. 221-227.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Speaker, Environment, and Channel Change Detection and Clustering via the Bayesian Information Criterion",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Gopalakrishnan",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the DARPA Broadcast News Transcription and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "127--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, S., and P.S. Gopalakrishnan, \"Speaker, Environment, and Channel Change Detection and Clustering via the Bayesian Information Criterion,\" In Proc. of the DARPA Broadcast News Transcription and Understanding Workshop. 1998. Virginia, USA, pp.127-132.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "DISTBIC: A speaker-based segmentation for audio data indexing",
"authors": [
{
"first": "P",
"middle": [],
"last": "Delacourt",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Wellekens",
"suffix": ""
}
],
"year": 2000,
"venue": "Speech Communication",
"volume": "32",
"issue": "1-2",
"pages": "111--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Delacourt, P., and C.J. Wellekens, \"DISTBIC: A speaker-based segmentation for audio data indexing,\" Speech Communication, 32 (1-2), 2000, pp.111-126.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Approaches to speaker detection and tracking in conversational speech",
"authors": [
{
"first": "R",
"middle": [
"B"
],
"last": "Dunn",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
},
{
"first": "T",
"middle": [
"F"
],
"last": "Quatieri",
"suffix": ""
}
],
"year": 2000,
"venue": "Digital Signal Processing",
"volume": "10",
"issue": "1-3",
"pages": "93--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunn, R.B., D. A. Reynolds, and T. F. Quatieri, \"Approaches to speaker detection and tracking in conversational speech,\" Digital Signal Processing, 10 (1-3), 2000, pp.93-112.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generalized likelihood ratio test for voiced/unvoiced decision using the harmonic plus noise model",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tabrikian",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dubnov",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. Of Int. Conf. On Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "440--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fisher, E., J. Tabrikian, and S. Dubnov, \"Generalized likelihood ratio test for voiced/unvoiced decision using the harmonic plus noise model,\" In Proc. Of Int. Conf. On Acoustic, Speech, and Signal Processing, 2003, Hong Kong, pp. 440-443.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic Speaker Clustering",
"authors": [
{
"first": "H",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Kubala",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Scwartz",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the DARPA Speech Recognition Workshop",
"volume": "",
"issue": "",
"pages": "108--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin, H., F. Kubala, and R. Scwartz, \"Automatic Speaker Clustering,\" In Proc. of the DARPA Speech Recognition Workshop, 1997, pp. 108-111.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Speaker change detection and tracking in real-time news broadcasting analysis",
"authors": [
{
"first": "L",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "H",
"middle": [
"J"
],
"last": "Zhang",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 10th ACM International Conference on Multimedia",
"volume": "",
"issue": "",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu, L., and H.J. Zhang. \"Speaker change detection and tracking in real-time news broadcasting analysis, \" In Proc. of the 10th ACM International Conference on Multimedia, 2002, Juan-les-Pins, France, pp. 602-610.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speaker Change Detection and Speaker Clustering Using VQ Distortion for Broadcast News",
"authors": [
{
"first": "K",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of Int. Conf. On Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "413--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mori, K., and S. Nakagawa. \"Speaker Change Detection and Speaker Clustering Using VQ Distortion for Broadcast News,\" In Proc. of Int. Conf. On Acoustic, Speech, and Signal Processing, 2001, Salt-Lake City, USA, pp.413-416.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Random Variables, and Stochastic Processes",
"authors": [
{
"first": "A",
"middle": [],
"last": "Papoulis",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papoulis, A Probability, Random Variables, and Stochastic Processes. 3rd ed. McGraw-Hill, 1991.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Applied Clustering for Automatic Speaker-based segmentation of Audio Material",
"authors": [
{
"first": "O",
"middle": [],
"last": "Pietquin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Couvreur",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Couvreur",
"suffix": ""
}
],
"year": 2002,
"venue": "Belgian Journal of Operations Research",
"volume": "41",
"issue": "1-2",
"pages": "69--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pietquin, O., L. Couvreur, and P. Couvreur, \"Applied Clustering for Automatic Speaker-based segmentation of Audio Material,\" Belgian Journal of Operations Research, Statistics and Computer Science, 41(1-2), 2002, pp. 69-81.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Speaker Verification. Using Adapted Gaussian Mixture Models",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
},
{
"first": "T",
"middle": [
"F"
],
"last": "Quatieri",
"suffix": ""
},
{
"first": "R",
"middle": [
"B"
],
"last": "Dunn",
"suffix": ""
}
],
"year": 2000,
"venue": "Digital Signal Processing",
"volume": "10",
"issue": "1-3",
"pages": "19--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D. A., T.F. Quatieri, and R.B. Dunn, \"Speaker Verification. Using Adapted Gaussian Mixture Models,\" Digital Signal Processing, 10(1-3), 2000, pp. 19-41.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Speaker Identification and Verification using Gaussian Mixture Speaker Models",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
}
],
"year": 1995,
"venue": "Speech Communication",
"volume": "17",
"issue": "1-2",
"pages": "91--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D.A., \"Speaker Identification and Verification using Gaussian Mixture Speaker Models,\" Speech Communication, 17(1-2), 1995, pp. 91-108.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic Segmentation Classification and Clustering of Broadcast News Audio",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Sigler",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Raj",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stern",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the DARPA Speech Recognition Workshop",
"volume": "",
"issue": "",
"pages": "97--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sigler, M.A., U. Jain, B. Raj, and M. Stern. \"Automatic Segmentation Classification and Clustering of Broadcast News Audio,\" In Proc. of the DARPA Speech Recognition Workshop, 1997, pp. 97-99.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Audio-Visual Content Analysis for Content-Based Video Indexing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Tsekeridou",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Pitas",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of 1999 IEEE Int. Conf. on Multimedia Computing and Systems",
"volume": "",
"issue": "",
"pages": "667--672",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsekeridou, S., and Ioannis Pitas, \"Audio-Visual Content Analysis for Content-Based Video Indexing,\" In Proc. of 1999 IEEE Int. Conf. on Multimedia Computing and Systems, 1999, Florence, Italy, pp. 667--672.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Segmentation of Speech Using Speaker Identification",
"authors": [
{
"first": "L",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kumber",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of Int. Conf. On Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "161--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilcox, L., F. Chen, D. Kumber, and V. Balasubramanian, \"Segmentation of Speech Using Speaker Identification,\" In Proc. of Int. Conf. On Acoustic, Speech, and Signal Processing, 1994, Adelaide, Australia, pp.161-164.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A noise-robust ASR Front-end Using Wiener filter Constructed from MMSE Estimation of Clean Speech and Noise",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Droppo",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of IEEE Automatic Speech Recognition and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "321--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, J., J. Droppo, L. Deng, and A. Acero, \"A noise-robust ASR Front-end Using Wiener filter Constructed from MMSE Estimation of Clean Speech and Noise,\" In Proc. of IEEE Automatic Speech Recognition and Understanding Workshop, St. Thomas, U.S, 2003, pp.321-326.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Entropy-based variable frame rate analysis of speech signals and its application to ASR",
"authors": [
{
"first": "H",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Alwan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of Int. Conf. On Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "529--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "You, H., Q. Zhu, and A. Alwan, \"Entropy-based variable frame rate analysis of speech signals and its application to ASR,\" In Proc. of Int. Conf. On Acoustic, Speech, and Signal Processing, 2004, Montreal, Canada, pp. 529-552.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Block diagram of the speaker tracking system components M1-Segmentation Module, M2 -Classification Module, M3-Speaker verification"
},
"TABREF1": {
"num": null,
"text": "",
"content": "
| | FAR | MDR | | | FAR | MDR |
First | Entropy | 30.4% | 6.5% | Second pass | BD KL | 14.4% 16.0% | 7.1% 7.3% |
Pass | BIC | 31.2% | 13.1% | Second pass | BD KL | 14.9% 15.2% | 14.5% 15.0% |
",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"text": "",
"content": "Case | ERR | ERR Relative Reduction |
Baseline | 25.2% | 0 |
WF + ED | 23.3% | 9.1% |
SCMVN + WF + ED | 22.8% | 9.5% |
",
"html": null,
"type_str": "table"
}
}
}
}