{
"paper_id": "O07-5002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:08:19.982442Z"
},
"title": "A Novel Characterization of the Alternative Hypothesis Using Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"authors": [
{
"first": "Yi-Hsiang",
"middle": [],
"last": "Chao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "yschao@iis.sinica.edu.tw"
},
{
"first": "Hsin-Min",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Ruei-Chuan",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In a log-likelihood ratio (LLR)-based speaker verification system, the alternative hypothesis is usually difficult to characterize a priori, since the model should cover the space of all possible impostors. In this paper, we propose a new LLR measure in an attempt to characterize the alternative hypothesis in a more effective and robust way than conventional methods. This LLR measure can be further formulated as a non-linear discriminant classifier and solved by kernel-based techniques, such as the Kernel Fisher Discriminant (KFD) and Support Vector Machine (SVM). The results of experiments on two speaker verification tasks show that the proposed methods outperform classical LLR-based approaches.",
"pdf_parse": {
"paper_id": "O07-5002",
"_pdf_hash": "",
"abstract": [
{
"text": "In a log-likelihood ratio (LLR)-based speaker verification system, the alternative hypothesis is usually difficult to characterize a priori, since the model should cover the space of all possible impostors. In this paper, we propose a new LLR measure in an attempt to characterize the alternative hypothesis in a more effective and robust way than conventional methods. This LLR measure can be further formulated as a non-linear discriminant classifier and solved by kernel-based techniques, such as the Kernel Fisher Discriminant (KFD) and Support Vector Machine (SVM). The results of experiments on two speaker verification tasks show that the proposed methods outperform classical LLR-based approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In essence, the speaker verification task is a hypothesis testing problem. Given an input utterance U, the goal is to determine whether U was spoken by the hypothesized speaker or not. The log-likelihood ratio (LLR)-based detector [Reynolds 1995 ] is one of the state-of-the-art approaches for speaker verification. Consider the following hypotheses: is the likelihood of hypothesis H i given the utterance U, and \u03b8 is the threshold. H 0 and H 1 are, respectively, called the null hypothesis and the alternative hypothesis. Mathematically, H 0 and H 1 can be represented by parametric models denoted as \u03bb and \u03bb , respectively; \u03bb is often called an anti-model. Though H 0 can be modeled straightforwardly using speech utterances from the hypothesized speaker, H 1 does not involve any specific speaker, thus lacks explicit data for modeling. Many approaches have been proposed to characterize H 1 , and various LLR measures have been developed. We can formulate these measures in the following general form [Reynolds 2000 ]:",
"cite_spans": [
{
"start": 231,
"end": 245,
"text": "[Reynolds 1995",
"ref_id": "BIBREF10"
},
{
"start": 1006,
"end": 1020,
"text": "[Reynolds 2000",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | ) ( | ) ( ) log log , ( ( | ), ( | ),..., ( | )) ( | ) N p U p U L U p U p U p U p U \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb = = \u03a8",
"eq_num": "(2)"
}
],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "where \u03a8(\u22c5) is some function of the likelihood values from a set of so-called background models {\u03bb 1 ,\u03bb 2 ,...,\u03bb N }. For example, the background model set can be obtained from N representative speakers, called a cohort [Rosenberg 1992 ], which simulates potential impostors. If \u03a8(\u22c5) is an average function [Reynolds 1995] , the LLR can be written as:",
"cite_spans": [
{
"start": 219,
"end": 234,
"text": "[Rosenberg 1992",
"ref_id": "BIBREF12"
},
{
"start": 306,
"end": 321,
"text": "[Reynolds 1995]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 1 ( ) log ( | \u03bb) log ( | \u03bb ) . N i i L U p U p U N = \u23a7 \u23ab = \u2212 \u23a8 \u23ac \u23a9 \u23ad \u2211",
"eq_num": "(3)"
}
],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Alternatively, the average function can be replaced by various functions, such as the maximum [Liu 1996] , i.e.:",
"cite_spans": [
{
"start": 94,
"end": 104,
"text": "[Liu 1996]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "2 1 ( ) log ( | \u03bb) max log ( | \u03bb ), i i N L U p U p U \u2264 \u2264 = \u2212",
"eq_num": "(4)"
}
],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "or the geometric mean [Liu 1996 ], i.e.,",
"cite_spans": [
{
"start": 22,
"end": 31,
"text": "[Liu 1996",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L_3(U) = \\log p(U|\\lambda) - \\frac{1}{N} \\sum_{i=1}^{N} \\log p(U|\\lambda_i).",
"eq_num": "(5)"
}
],
"section": "Introduction",
"sec_num": "1."
},
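{
"text": "As a rough, hypothetical illustration (ours, not part of the original paper), the conventional LLR measures in Eqs. (3)-(6) can be sketched in Python; ll_client denotes log p(U|\u03bb), ll_bg is a length-N array of background-model log-likelihoods log p(U|\u03bb_i), and ll_world is log p(U|\u03a9), all assumed precomputed:\n\nimport numpy as np\nfrom scipy.special import logsumexp  # numerically stable log of a sum of exponentials\n\ndef llr_measures(ll_client, ll_bg, ll_world):\n    # Conventional LLR measures, Eqs. (3)-(6)\n    ll_bg = np.asarray(ll_bg, dtype=float)\n    N = ll_bg.size\n    L1 = ll_client - (logsumexp(ll_bg) - np.log(N))  # Eq. (3): average of likelihoods\n    L2 = ll_client - ll_bg.max()                     # Eq. (4): maximum\n    L3 = ll_client - ll_bg.mean()                    # Eq. (5): geometric mean\n    L4 = ll_client - ll_world                        # Eq. (6): world model / UBM\n    return L1, L2, L3, L4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},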
{
"text": "A special case arises when \u03a8(\u22c5) is an identity function and N = 1. In this instance, a single background model is usually trained by pooling all the available data, which is generally irrelevant to the clients, from a large number of speakers. This is called the world model or the Universal Background Model (UBM) [Reynolds 2000 ]. The LLR in this case becomes:",
"cite_spans": [
{
"start": 315,
"end": 329,
"text": "[Reynolds 2000",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "4 ( ) log ( | \u03bb) log ( | ), L U p U p U = \u2212 \u2126",
"eq_num": "(6)"
}
],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "where \u2126 denotes the world model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "However, none of the LLR measures developed so far has proven to be absolutely superior to any other, since the selection of \u03a8(\u22c5) is usually application and training data dependent. In particular, the use of a simple function, such as the average, maximum, or geometric mean, is a heuristic that does not include any optimization process. The issues of selection, size, and combination of background models motivate us to design a more comprehensive function, \u03a8(\u22c5), to improve the characterization of the alternative hypothesis. In this paper, we first propose a new LLR measure in an attempt to characterize H 1 by integrating all the background models in a more effective and robust way than conventional methods. Then, we formulate this new LLR measure as a non-linear discriminant classifier and apply kernel-based techniques, including the Kernel Fisher Discriminant (KFD) [Mika 1999] and Support Vector Machine (SVM) [Burges 1998 ], to optimally separate the LLR samples of the null hypothesis from those of the alternative hypothesis.",
"cite_spans": [
{
"start": 878,
"end": 889,
"text": "[Mika 1999]",
"ref_id": "BIBREF9"
},
{
"start": 923,
"end": 935,
"text": "[Burges 1998",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "SVM-based techniques have been successfully applied to many classification and regression tasks, including speaker verification. Unlike our work, existing approaches [Bengio 2001; Wan 2005] only use a single background model, i.e., the world model, to represent the alternative hypothesis, instead of integrating multiple background models to characterize the alternative hypothesis. For example, Bengio et al. [Bengio 2001 ] proposed a decision function:",
"cite_spans": [
{
"start": 166,
"end": 179,
"text": "[Bengio 2001;",
"ref_id": "BIBREF0"
},
{
"start": 180,
"end": 189,
"text": "Wan 2005]",
"ref_id": "BIBREF13"
},
{
"start": 411,
"end": 423,
"text": "[Bengio 2001",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "5 1 2 ( ) log ( | \u03bb) log ( | ) , L U a p U a p U b = \u2212 \u2126+ (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "where a 1 , a 2 , and b are adjustable parameters estimated using SVM. An extended version of Eq. (7) with the Fisher kernel and the LR score-space kernel for SVM was investigated in Wan [Wan 2005 ].",
"cite_spans": [
{
"start": 187,
"end": 196,
"text": "[Wan 2005",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "The results of speaker verification experiments conducted on both the XM2VTS database [Messer 1999 ] and the ISCSLP2006-SRE database [Chinese Corpus Consortium 2006] show that the proposed methods outperform classical LLR-based approaches. The remainder of this paper is organized as follows. Section 2 describes the design of the new LLR measure in our approach. Sections 3 and 4 introduce the kernel discriminant analysis used in this work and the formation of the characteristic vector by background model selection, respectively. Section 5 contains our experiment results. Finally, in Section 6, we present our conclusions.",
"cite_spans": [
{
"start": 86,
"end": 98,
"text": "[Messer 1999",
"ref_id": "BIBREF8"
},
{
"start": 133,
"end": 165,
"text": "[Chinese Corpus Consortium 2006]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "First of all, we redesign the function \u03a8(\u22c5) in Eq. (2) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "1 2 1 2 1/( ... ) 1 2 ( | ) ( ) ( ( | \u03bb ) ( |\u03bb ) ... ( |\u03bb ) ) , N N w w w w w w N p U p U p U p U \u03bb + + + = \u03a8 = \u22c5 \u22c5 \u22c5 u (8) where 1 2 [ ( | ), ( | ),..., ( | )] T N p U p U p U \u03bb \u03bb \u03bb = u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "is an N\u00d71 vector and i w is the weight of the ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "likelihood p(U | \u03bb i ), i = 1,2,..., N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "w = , 1 * argmax log ( | \u03bb ) i N i i p U \u2264 \u2264 = ; and 0 i w = , * i i \u2200 \u2260 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "By substituting Eq. (8) into Eq. (2), we obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "1 2 1 2 1 2 1 2 6 1/( ... ) 1 2 1/( ... ) 1 2 ( | ) ( ) log ( | ) ( | ) log ( ( | ) ( | ) ... ( | ) ) ( | ) ( | ) ( | ) log ... ( | ) ( | ) ( | ) N N N N w w w w w w N w w w w w w N p U L U p U p U p U p U p U p U p U p U p U p U p U \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb + + + + + + = = \u22c5 \u22c5 \u22c5 \u239b \u239e \u239b \u239e \u239b \u239e \u239b \u239e \u239c \u239f = \u22c5 \u22c5\u22c5 \u239c \u239f \u239c \u239f \u239c \u239f \u239c \u239f \u239d \u23a0 \u239d \u23a0 \u239d \u23a0 \u239d \u23a0 1 2 1 2 1 2 1 2 1 ( | ) ( | ) ( | ) log log ... log ... ( | ) ( | ) ( | ) accept 1 reject ... ' accept ' reject, N N N T N T p U p U p U w w w w w w p U p U p U w w w \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb \u03b8 \u03b8 \u03b8 \u03b8 \u239b \u239e = + + + \u239c \u239f + + + \u239d \u23a0 \u2265 \u23a7 = \u23a8 < + + + \u23a9 \u2265 \u23a7 = \u23a8 < \u23a9 w x w x (9) where 1 2 [ , ..., ] T N w w w = w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "is an N\u00d71 weight vector, the new threshold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "1 2 ' ( ... ) N w w w \u03b8 \u03b8 = + + +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": ", and x is an N \u00d7 1 vector in the space R N , expressed by 1 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | ) ( | ) ( | ) [log , log ,..., log ] . ( | ) ( | ) ( | ) T N p U p U p U p U p U p U \u03bb \u03bb \u03bb \u03bb \u03bb \u03bb = x",
"eq_num": "(10)"
}
],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "The implicit idea in Eq. (10) is that the speech utterance U can be represented by a characteristic vector x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
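{
"text": "A minimal sketch (ours) of the mapping in Eq. (10) and the decision rule in Eq. (9); the inputs are the same hypothetical log-likelihoods as in the earlier sketch:\n\nimport numpy as np\n\ndef characteristic_vector(ll_client, ll_bg):\n    # Eq. (10): x_i = log p(U|lambda) - log p(U|lambda_i)\n    return ll_client - np.asarray(ll_bg, dtype=float)\n\ndef accept(x, w, theta_prime):\n    # Eq. (9): accept if and only if w^T x >= theta'\n    return float(np.dot(w, x)) >= theta_prime",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},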
{
"text": "If we replace the threshold ' \u03b8 in Eq. (9) with a bias b, the equation can be rewritten as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) T L U b f = + = w x x ,",
"eq_num": "(11)"
}
],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "where f(x) forms a so-called linear discriminant classifier. This classifier translates the goal of solving an LLR measure into the optimization of w and b, such that the utterances of clients and impostors can be separated. To realize this classifier, three distinct data sets are needed: one for generating each client's model, one for generating the background models, and one for optimizing w and b. Since the bias b plays the same role as the decision threshold of the conventional LLR measure, which can be determined through a trade-off between false acceptance and false rejection, the main goal here is to find w. Existing linear discriminant analysis techniques, such as Fisher's Linear Discriminant (FLD) [Duda 2001] or Linear SVM [Burges 1998 ], can be applied to implement Eq. (11).",
"cite_spans": [
{
"start": 716,
"end": 727,
"text": "[Duda 2001]",
"ref_id": "BIBREF3"
},
{
"start": 742,
"end": 754,
"text": "[Burges 1998",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Alternative Hypothesis",
"sec_num": "2.1"
},
{
"text": "Fisher's Linear Discriminant (FLD) is one of the popular linear discriminant classifiers [Duda 2001 ]. Suppose the i-th class has",
"cite_spans": [
{
"start": 89,
"end": 99,
"text": "[Duda 2001",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Discriminant Analysis",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n i data samples, 1 { ,.., } i i i i n = X x x , i = 1, 2. The goal of FLD is to seek a direction w in the space R N such that the following Fisher's criterion function J(w) is maximized: ( ) , T b T w J = w S w w w S w",
"eq_num": "(12)"
}
],
"section": "Linear Discriminant Analysis",
"sec_num": "2.2"
},
{
"text": "where S b and S w are, respectively, the between-class scatter matrix and the within-class scatter matrix defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Discriminant Analysis",
"sec_num": "2.2"
},
{
"text": "1 2 1 2 ( ) ( ) T b = \u2212 \u2212 S m m m m (13) and 1,2 ( )( ) , i T w i i i= \u2208 = \u2212 \u2212 \u2211 \u2211 x X S x m x m (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Discriminant Analysis",
"sec_num": "2.2"
},
{
"text": "where m i is the mean vector of the i-th class computed by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Discriminant Analysis",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 . i n i i s s i n = = \u2211 m x",
"eq_num": "(15)"
}
],
"section": "Linear Discriminant Analysis",
"sec_num": "2.2"
},
{
"text": "According to Duda [Duda 2001 ], the solution for w, which maximizes J(w) defined in Eq. (12), is the leading eigenvector of ",
"cite_spans": [
{
"start": 18,
"end": 28,
"text": "[Duda 2001",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Discriminant Analysis",
"sec_num": "2.2"
},
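{
"text": "A compact NumPy sketch of FLD as defined by Eqs. (12)-(15); for the two-class case the leading eigenvector of S_w^{-1} S_b has the closed form w \\propto S_w^{-1}(m_1 - m_2), which is what the sketch uses (variable names are ours):\n\nimport numpy as np\n\ndef fld_direction(X1, X2):\n    # X1, X2: (n_i, N) matrices of class-1 and class-2 samples\n    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)                # Eq. (15)\n    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)   # Eq. (14)\n    # S_b = (m1 - m2)(m1 - m2)^T is rank one, so the leading\n    # eigenvector of Sw^{-1} S_b is simply Sw^{-1} (m1 - m2)\n    w = np.linalg.solve(Sw, m1 - m2)\n    return w / np.linalg.norm(w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Discriminant Analysis",
"sec_num": "2.2"
},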
{
"text": "Intuitively, f(x) in Eq. (11) can be solved via linear discriminant training algorithms [Duda 2001 ], such as FLD or Linear SVM. However, such methods are based on the assumption that the observed data of different classes is linearly separable, which is obviously not feasible in most practical cases with nonlinearly separable data. To solve this problem more effectively, we propose using a kernel-based nonlinear discriminant classifier. It is hoped that data from different classes, which is not linearly separable in the original input space R N , can be separated linearly in a certain higher dimensional (maybe infinite) feature space F via a nonlinear mapping \u03a6. Let \u03a6(x) denote a vector obtained by mapping x from R N to F. Then, the objective function, based on Eq. 11, can be re-defined as:",
"cite_spans": [
{
"start": 88,
"end": 98,
"text": "[Duda 2001",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) , T F f b = \u03a6 + x w x",
"eq_num": "(16)"
}
],
"section": "Kernel Discriminant Analysis",
"sec_num": "3."
},
{
"text": "which constitutes a linear discriminant classifier in F, where F w is a weight vector in F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis",
"sec_num": "3."
},
{
"text": "In practice, it is difficult to determine the kind of mapping that would be applicable; therefore, the computation of \u03a6(x) might be infeasible. To overcome this difficulty, a promising approach is to characterize the relationship between the data samples in F, instead of computing \u03a6(x) directly. This is achieved by introducing a kernel function k(x, y)=<\u03a6(x),\u03a6(y)>, which is the dot product of two vectors \u03a6(x) and \u03a6(y) in F. The kernel function k(\u22c5) must be symmetric, positive definite and conform to Mercer's condition [Burges 1998 ].",
"cite_spans": [
{
"start": 524,
"end": 536,
"text": "[Burges 1998",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis",
"sec_num": "3."
},
{
"text": "A number of kernel functions exist, such as the simplest dot product kernel function k(x, y) = x T y, and the very popular Radial Basis Function (RBF) kernel k(x, y) = exp(\u2212 ||x \u2212 y|| 2 / 2\u03c3 2 ) in which \u03c3 is a tunable parameter. Existing techniques, such as KFD [Mika 1999] or SVM [Burges 1998 ], can be applied to implement Eq. (16).",
"cite_spans": [
{
"start": 263,
"end": 274,
"text": "[Mika 1999]",
"ref_id": "BIBREF9"
},
{
"start": 282,
"end": 294,
"text": "[Burges 1998",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis",
"sec_num": "3."
},
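{
"text": "For concreteness, a one-function sketch (ours) of the RBF kernel named above, with \u03c3 as the tunable width:\n\nimport numpy as np\n\ndef rbf_kernel(x, y, sigma=5.0):\n    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))\n    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2.0 * sigma ** 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis",
"sec_num": "3."
},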
{
"text": "Suppose the i-th class has ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "n i data samples, 1 { ,.., } i i i i n = X x x , i = 1, 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) , T F b F F T F w F J \u03a6 \u03a6 = w S w w w S w",
"eq_num": "(17)"
}
],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "where b \u03a6 S and w \u03a6 S are, respectively, the between-class scatter matrix and the within-class scatter matrix in F defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "1 2 1 2 ( ) ( ) T b \u03a6 \u03a6 \u03a6 \u03a6 \u03a6 = \u2212 \u2212 S m m m m (18) and 1,2 ( ( ) )( ( ) ) , i T w i i i \u03a6 \u03a6 \u03a6 = \u2208 = \u03a6 \u2212 \u03a6 \u2212 \u2211 \u2211 x X S x m x m (19) where 1 (1/ ) ( ) i n i i i s s n \u03a6 = = \u03a6 \u2211 m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "x , and i = 1, 2, is the mean vector of the i-th class in F. Let ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 = = + \u2211 x x x",
"eq_num": "(21)"
}
],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "Our goal, therefore, changes from finding",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F w to finding \u03b1, which maximizes ( ) , T T J = \u03b1 M\u03b1 \u03b1 \u03b1 N\u03b1",
"eq_num": "(22)"
}
],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "where M and N are computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 2 1 2 ( ) ( ) T = \u2212 \u2212 M \u03b7 \u03b7 \u03b7 \u03b7 (23) and 1,2 ( ) , i i T i n n i i= = \u2212 \u2211 N K I 1 K",
"eq_num": "(24)"
}
],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "respectively, where i \u03b7 is an l\u00d71 vector whose j-th element",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "1 ( ) (1/ ) ( , ) i n i i j i j s s n k = = \u2211 \u03b7 x x , j = 1,2,..., l; K i is an l\u00d7n i matrix with ( ) ( , ) i i js j s k = K x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
{
"text": "x ; I n i is an n i \u00d7n i identity matrix; and 1 n i is an n i \u00d7n i matrix with all entries equal to 1/n i . Following Mika [Mika 1999 ], the solution for \u03b1, which maximizes J(\u03b1) defined in Eq. 22, is the leading eigenvector of N -1 M.",
"cite_spans": [
{
"start": 123,
"end": 133,
"text": "[Mika 1999",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},
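{
"text": "The following sketch (ours; a small ridge term is added to N for numerical stability, a common practical remedy) solves Eq. (22) directly: because M = (\u03b7_1 - \u03b7_2)(\u03b7_1 - \u03b7_2)^T is rank one, the leading eigenvector of N^{-1}M is simply N^{-1}(\u03b7_1 - \u03b7_2):\n\nimport numpy as np\n\ndef kfd_alpha(K1, K2, ridge=1e-3):\n    # K1, K2: (l, n_i) kernel matrices between all l training samples\n    # and the n_i samples of class i\n    l = K1.shape[0]\n    eta1, eta2 = K1.mean(axis=1), K2.mean(axis=1)\n    d = eta1 - eta2                                   # M = d d^T, Eq. (23)\n    N = np.zeros((l, l))\n    for K in (K1, K2):                                # Eq. (24)\n        n_i = K.shape[1]\n        center = np.eye(n_i) - np.full((n_i, n_i), 1.0 / n_i)\n        N += K @ center @ K.T\n    N += ridge * np.eye(l)                            # regularization\n    return np.linalg.solve(N, d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Fisher Discriminant (KFD)",
"sec_num": "3.1"
},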
{
"text": "Alternatively, Eq. (16) can be solved with an SVM, the goal of which is to seek a separating hyperplane in the feature space F that maximizes the margin between classes. Following Burges [Burges 1998 ], F w is expressed as:",
"cite_spans": [
{
"start": 187,
"end": 199,
"text": "[Burges 1998",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine (SVM)",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 ( ), l F j j j j y \u03b1 = = \u03a6 \u2211 w x (25) which yields 1 ( ) ( , ) , l j j j j f y k b \u03b1 = = + \u2211 x x x",
"eq_num": "(26)"
}
],
"section": "Support Vector Machine (SVM)",
"sec_num": "3.2"
},
{
"text": "where each training sample x j belongs to one of the two classes identified by the label y j \u2208{\u22121,1}, j=1, 2,..., l. We can find the coefficients \u03b1 j by maximizing the objective function, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine (SVM)",
"sec_num": "3.2"
},
{
"text": "1 1 1 1 ( ) ( , ), 2 l l l j i j i j i j j i j Q y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine (SVM)",
"sec_num": "3.2"
},
{
"text": "where C \u03b1 is a penalty parameter [Burges 1998 ]. The problem can be solved using quadratic programming techniques [Vapnik 1998 ]. Note that most \u03b1 j are equal to zero, and the training samples associated with non-zero \u03b1 j are called support vectors. A few support vectors act as the key to deciding the optimal margin between classes in the SVM. An SVM with a dot product kernel function is known as a Linear SVM.",
"cite_spans": [
{
"start": 33,
"end": 45,
"text": "[Burges 1998",
"ref_id": "BIBREF1"
},
{
"start": 114,
"end": 126,
"text": "[Vapnik 1998",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine (SVM)",
"sec_num": "3.2"
},
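{
"text": "In practice the dual problem of Eqs. (27)-(28) need not be coded by hand; a sketch using scikit-learn (a tooling assumption on our part, not the implementation used in the paper) with the RBF kernel, where gamma = 1/(2\u03c3^2) matches the kernel definition above:\n\nimport numpy as np\nfrom sklearn.svm import SVC\n\nrng = np.random.default_rng(0)\nX = rng.normal(size=(200, 21))      # toy (B+1)-dimensional characteristic vectors\ny = np.where(X[:, 0] > 0, 1, -1)    # toy client (+1) / impostor (-1) labels\nsigma = 5.0\nclf = SVC(kernel='rbf', gamma=1.0 / (2.0 * sigma ** 2), C=1.0)\nclf.fit(X, y)\nscores = clf.decision_function(X)   # f(x) of Eq. (26); its sign gives the decision",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine (SVM)",
"sec_num": "3.2"
},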
{
"text": "In our experiments, we use B+1 background models, consisting of B cohort set models and one world model, to form the characteristic vector x in Eq. (10); and B cohort set models for L 1 (U) in Eq. 3, L 2 (U) in Eq. 4, and L 3 (U) in Eq. (5). Two cohort selection methods [Reynolds 1995] are used in the experiments. One selects the B closest speakers to each client; and the other selects the B/2 closest speakers to, plus the B/2 farthest speakers from, each client. The selection is based on the speaker distance measure [Reynolds 1995 ], computed by:",
"cite_spans": [
{
"start": 271,
"end": 286,
"text": "[Reynolds 1995]",
"ref_id": "BIBREF10"
},
{
"start": 523,
"end": 537,
"text": "[Reynolds 1995",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formation of the Characteristic Vector",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( |\u03bb ) ( |\u03bb ) (\u03bb , \u03bb ) log log , ( |\u03bb ) ( |\u03bb ) j j i i i j i j j i p U p U d p U p U = +",
"eq_num": "(29)"
}
],
"section": "Formation of the Characteristic Vector",
"sec_num": "4."
},
{
"text": "where i \u03bb and j \u03bb are speaker models trained using the i-th speaker's utterances U i and the j-th speaker's utterances U j , respectively. Two cohort selection methods yield the following two (B+1)\u00d71 characteristic vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formation of the Characteristic Vector",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "cst 1 cst ( | \u03bb) ( |\u03bb) ( | \u03bb) log log ... log ( | ) ( | \u03bb ) ( |\u03bb ) T B p U p U p U p U p U p U \u23a1 \u23a4 = \u23a2 \u23a5 \u2126 \u23a3 \u23a6 x (30) and cst1 cst / 2 fst1 fst / 2 ( |\u03bb) ( | \u03bb) (| \u03bb) ( | \u03bb) (| \u03bb) ( | ) ( |\u03bb ) ( | \u03bb ) ( | \u03bb ) ( | \u03bb ) = [log log log log log ] B B p U p U p U p U p U T p U p U p U p U p U \u2126 x ,",
"eq_num": "(31)"
}
],
"section": "Formation of the Characteristic Vector",
"sec_num": "4."
},
{
"text": "where cst i \u03bb and fst i \u03bb are the i-th closest model and the i-th farthest model of the client model \u03bb , respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formation of the Characteristic Vector",
"sec_num": "4."
},
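{
"text": "A sketch (hypothetical helper, ours) of cohort selection by the pairwise distance of Eq. (29); ll is assumed to be an S\u00d7S matrix with ll[i, j] = log p(U_i|\u03bb_j) for all speaker pairs:\n\nimport numpy as np\n\ndef cohort(ll, client, B, mixed=False):\n    # d(lambda_i, lambda_client) of Eq. (29) for every speaker i\n    d = ll.diagonal() - ll[:, client] + ll[client, client] - ll[client, :]\n    order = [i for i in np.argsort(d) if i != client]\n    if not mixed:\n        return order[:B]                          # B closest speakers\n    return order[:B // 2] + order[-(B // 2):]     # B/2 closest + B/2 farthest",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formation of the Characteristic Vector",
"sec_num": "4."
},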
{
"text": "We evaluate the proposed approaches on two databases: the XM2VTS database [Messer 1999] and the ISCSLP2006 speaker recognition evaluation (ISCSLP2006-SRE) database [Chinese Corpus Consortium 2006] .",
"cite_spans": [
{
"start": 74,
"end": 87,
"text": "[Messer 1999]",
"ref_id": "BIBREF8"
},
{
"start": 164,
"end": 196,
"text": "[Chinese Corpus Consortium 2006]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "For the performance evaluation, we adopt the Detection Error Tradeoff (DET) curve [Martin 1997 ]. In addition, the NIST Detection Cost Function (DCF), which reflects the performance at a single operating point on the DET curve, is also used. The DCF is defined as:",
"cite_spans": [
{
"start": 82,
"end": 94,
"text": "[Martin 1997",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg arg (1 ) DET Miss Miss T et FalseAlarm FalseAlarm T et C C P P C P P = \u00d7 \u00d7 + \u00d7 \u00d7 \u2212 ,",
"eq_num": "(32)"
}
],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "where Miss P and FalseAlarm P are the miss probability and the false-alarm probability, respectively, Miss C and FalseAlarm C are the respective relative costs of detection errors, and ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
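{
"text": "A one-line sketch of Eq. (32); the default cost and prior values below are placeholders, since the actual values are fixed by the evaluation plan:\n\ndef dcf(p_miss, p_false_alarm, c_miss=10.0, c_false_alarm=1.0, p_target=0.05):\n    # NIST Detection Cost Function, Eq. (32); default parameter values are placeholders\n    return c_miss * p_miss * p_target + c_false_alarm * p_false_alarm * (1.0 - p_target)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},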
{
"text": "The first set of speaker verification experiments was conducted on speech data extracted from the XM2VTS database [Messer 1999 ], which is a multimodal database consisting of face images, video sequences, and speech recordings taken on 295 subjects. The raw database contained approximately 30 hours of digital video recordings, which was then manually annotated. Each subject participated in four recording sessions at approximately one-month intervals, and each recording session consisted of two shots. In a shot, every subject was prompted to read three sentences \"0 1 2 3 4 5 6 7 8 9\", \"5 0 6 9 2 8 1 3 7 4\", and \"Joe took father's green shoe bench out\" at his/her normal pace. The speech was recorded by a microphone clipped to the subject's shirt.",
"cite_spans": [
{
"start": 114,
"end": 126,
"text": "[Messer 1999",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation on the XM2VTS Database",
"sec_num": "5.1"
},
{
"text": "In accordance with Configuration II of the evaluation protocol described in Luettin [Luettin 1998 ], the XM2VTS database was divided into three subsets: \"Training\", \"Evaluation\", and \"Test\". In our speaker verification experiments, we used the \"Training\" subset to build the individual client's model and the world model 1 , and the \"Evaluation\" subset to estimate the decision threshold \u03b8 in Eq. (1) and the parameters w, F w , and b in 1 Currently, we do not have an external resource to train the world model and the background models.",
"cite_spans": [
{
"start": 84,
"end": 97,
"text": "[Luettin 1998",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation on the XM2VTS Database",
"sec_num": "5.1"
},
{
"text": "We follow the evaluation protocol in [Luettin 1998 ], which suggests \"If a world model is needed, as in speaker verification, a client-dependent world model can be trained from all other clients but the actual client. Although not optimal, it is a valid method.\" We will train the world model and the background models using an external resource in our future work. Eq. (11) or Eq. (16). The performance of speaker verification was then evaluated on the \"Test\" subset. As shown in Table 1 , a total of 293 speakers 2 in the database were divided into 199 clients, 25 \"evaluation impostors\", and 69 \"test impostors\". We used 12 (2\u00d72\u00d73) utterances/speaker from sessions 1 and 2 to train the individual client's model, represented by a Gaussian Mixture Model (GMM) [Reynolds 1995] with 64 mixture components. For each client, the other 198 clients' utterances from sessions 1 and 2 were used to generate the world model, represented by a GMM with 256 mixture components; 20 or 40 speakers were chosen from these 198 clients as the cohort. Then, we used 6 utterances/client from session 3, and 24 (4\u00d72\u00d73) utterances/evaluation-impostor over the four sessions, which yielded 1,194 (6\u00d7199) client samples and 119,400 (24\u00d725\u00d7199) impostor samples, to estimate \u03b8 , w, F w , and b. However, as a kernel-based classifier can be intractable when a large number of training samples is involved, we reduced the number of impostor samples from 119,400 to 2,250 using a uniform random selection method. In the performance evaluation, we tested 6 utterances/client in session 4 and 24 utterances/test-impostor over the four sessions, which produced 1,194 (6\u00d7199) client trials and 329,544 (24\u00d769\u00d7199) impostor trials. Table 2 summarizes all the parametric models used in each system. Using a 32-ms Hamming-windowed frame with 10-ms shifts, each speech utterance (sampled at 32 kHz) was converted into a stream of 24-order feature vectors, each consisting of 12 Mel-scale frequency cepstral coefficients [Huang 2001 ] and their first time derivatives.",
"cite_spans": [
{
"start": 37,
"end": 50,
"text": "[Luettin 1998",
"ref_id": "BIBREF6"
},
{
"start": 762,
"end": 777,
"text": "[Reynolds 1995]",
"ref_id": "BIBREF10"
},
{
"start": 1987,
"end": 1998,
"text": "[Huang 2001",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 481,
"end": 488,
"text": "Table 1",
"ref_id": "TABREF4"
},
{
"start": 1702,
"end": 1709,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation on the XM2VTS Database",
"sec_num": "5.1"
},
{
"text": "task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2. A summary of the parametric models used in each system for the XM2VTS",
"sec_num": null
},
{
"text": "H 0 H 1 System a 64-mixture client GMM a 256-mixture world model B 64-mixture cohort GMMs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2. A summary of the parametric models used in each system for the XM2VTS",
"sec_num": null
},
{
"text": "L 1 \u221a \u221a L 2 \u221a \u221a L 3 \u221a \u221a L 4 \u221a \u221a L 5 \u221a \u221a L 6 \u221a \u221a \u221a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2. A summary of the parametric models used in each system for the XM2VTS",
"sec_num": null
},
{
"text": "First, B was set to 20 in the experiments. We implemented the proposed LLR system based on linear-based classifiers (FLD and Linear SVM) and kernel-based classifiers (KFD and SVM) in eight ways: 1) FLD with Eq. (30) (\"FLD_w_20c\"), 2) FLD with Eq. (31) (\"FLD_w_10c_10f\"), 3) Linear SVM with Eq. (30) (\"LSVM_w_20c\"), 4) Linear SVM with Eq. (31) (\"LSVM_w_10c_10f\"), 5) KFD with Eq. (30) (\"KFD_w_20c\"), 6) KFD with Eq. (31) (\"KFD_w_10c_10f\"), 7) SVM with Eq. (30) (\"SVM_w_20c\"), and 8) SVM with Eq. (31) (\"SVM_w_10c_10f\"). Both SVM and KFD used an RBF kernel function with \u03c3= 5. For performance comparison, we used six systems as our baselines: 1) L 1 (U) with the 20 closest cohort models (\"L1_20c\"), 2) L 1 (U) with the 10 closest cohort models plus the 10 farthest cohort models (\"L1_10c_10f\"), 3) L 2 (U) with the 20 closest cohort models (\"L2_20c\"), 4) L 3 (U) with the 20 closest cohort models (\"L3_20c\"), 5) L 4 (U) (\"L4\"), and 6) L 5 (U) using an RBF kernel function with \u03c3= 10 (\"L5\"). Figure 1 shows the results of the baseline systems evaluated on the \"Test\" subset in DET curves. We observe that the curves \"L1_10c_10f\", \"L4\" and \"L5\" are better than the others. Thus, in the subsequent experiments, we focused on the performance improvements of our proposed LLR systems over these three baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 990,
"end": 998,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.1.1"
},
{
"text": "The results of our proposed LLR systems, based on linear-based classifiers and kernel-based classifiers, versus the baseline systems evaluated on the \"Test\" subset are shown in Figs. 2 and 3 , respectively. It is clear that the proposed LLR systems based on either linear-based classifiers or kernel-based classifiers outperform the baseline systems, while KFD perform better than SVM. ",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 190,
"text": "Figs. 2 and 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Figure 1. Baselines: DET curves for the XM2VTS \"Test\" subset (B = 20).",
"sec_num": null
},
{
"text": "We participated in the text-independent speaker verification task of the ISCSLP2006 Speaker Recognition Evaluation (ISCSLP2006-SRE) plan [Chinese Corpus Consortium 2006] . The database contained 800 clients. Each client has one long training utterance, ranging in duration from 21 to 85 seconds, with an average length of 37.06 seconds. In addition, there are 5,933 utterances in the \"Test\" subset, each of which ranges in duration from 5 seconds to 54 seconds, with an average length of 15.66 seconds. Each test utterance is associated with the client claimed by the speaker, and the task is to judge whether it is true or false. The ratio of true",
"cite_spans": [
{
"start": 137,
"end": 169,
"text": "[Chinese Corpus Consortium 2006]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation on the ISCSLP2006-SRE Database",
"sec_num": "5.2"
},
{
"text": "clients to imposters is approximately 1:20. The answer sheet was released after the evaluation finished.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "To form the \"Evaluation\" subset for estimating \u03b8, w, F w , and b, we extracted some speech from each client's training utterance in the following way. First, we sorted the 800 clients in descending order according to the length of their training utterances. Then, for the first 100 clients, we cut two 4-second segments from the end of each client's training utterance; however, for the remaining 700 clients, we only cut one 4-second segment from the end of each client's training utterance. This yielded 900 (2\u00d7100+700) \"Evaluation\" utterances. In estimating \u03b8, w, F w , and b, each \"Evaluation\" utterance served as a client sample for its associated client, but acted as an imposter sample for each of the remaining 799 clients. This yielded 900 client samples and 719,100 (900\u00d7799) impostor samples. We used all the client samples and 2,400 randomly-selected impostor samples to estimate F w of the kernel-based classifiers. To determine \u03b8 or b, we used the 900 client samples and 18,000 randomly-selected impostor samples. This follows the suggestion in the ISCSLP2006-SRE Plan that the ratio of true clients to imposters in the \"Test\" subset should be approximately 1:20.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "The remaining portion of each client's training utterance was used as \"Training\" to train that client's model through UBM-MAP adaptation [Reynolds 2000 ]. This was done by first pooling all the speech in \"Training\" to train a UBM [Reynolds 2000 ] with 1,024 mixture Gaussian components, and then adapting the mean vectors of the UBM to each client's GMM according to his/her \"Training\" utterance.",
"cite_spans": [
{
"start": 137,
"end": 151,
"text": "[Reynolds 2000",
"ref_id": "BIBREF11"
},
{
"start": 230,
"end": 244,
"text": "[Reynolds 2000",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "The signal processing front-end was same as that applied in the XM2VTS task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Discriminant Analysis for LLR-Based Speaker Verification",
"sec_num": null
},
{
"text": "The GMM-UBM [Reynolds 2000 ] system is the current state-of-the-art approach for the text-independent speaker verification task. Thus, in this part, we focus on the performance improvements of our methods over the baseline GMM-UBM system.",
"cite_spans": [
{
"start": 12,
"end": 26,
"text": "[Reynolds 2000",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2.1"
},
{
"text": "As with the GMM-UBM system, we used the fast scoring method [Reynolds 2000 ] for likelihood ratio computation in the proposed methods. Both the client model \u03bb and the B cohort models were adapted from the UBM \u2126. Since the mixture indices were retained after UBM-MAP adaptation, each element of the characteristic vector x was computed approximately by only considering the C mixture components corresponding to the top C scoring mixtures in the UBM [Reynolds 2000] . In our experiments, the value of C was set to 5.",
"cite_spans": [
{
"start": 60,
"end": 74,
"text": "[Reynolds 2000",
"ref_id": "BIBREF11"
},
{
"start": 449,
"end": 464,
"text": "[Reynolds 2000]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2.1"
},
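{
"text": "A rough sketch (our simplification, not the exact procedure of [Reynolds 2000]) of top-C fast scoring: each frame is scored against the adapted model using only the C mixtures that score highest under the UBM:\n\nimport numpy as np\nfrom scipy.special import logsumexp\n\ndef fast_loglik(frame_ll_ubm, frame_ll_model, C=5):\n    # frame_ll_*: (T, M) per-frame, per-mixture weighted log-likelihoods\n    top = np.argsort(frame_ll_ubm, axis=1)[:, -C:]   # top-C UBM mixtures per frame\n    rows = np.arange(top.shape[0])[:, None]\n    # average per-frame log-likelihood using only the selected mixtures\n    return logsumexp(frame_ll_model[rows, top], axis=1).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2.1"
},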
{
"text": "B was set to 100 in the experiments. We implemented the proposed LLR system in four ways: 1) KFD with Eq. (30) (\"KFD_w_100c\"), 2) KFD with Eq. (31) (\"KFD_w_50c_50f\"), 3) SVM with Eq. (30) (\"SVM_w_100c\"), and 4) SVM with Eq. (31) (\"SVM_w_50c_50f\"). We compared the proposed systems with the baseline GMM-UBM system and Bengio et al. 's system (L5) . Figure 4 shows the results of experiments conducted on 5,933 \"Test\" utterances in DET curves. The proposed LLR systems clearly outperform the baseline GMM-UBM system and Bengio et al.'s system (L5) . According to the ISCSLP2006 SRE plan, the performance is measured by the NIST DCF with .",
"cite_spans": [
{
"start": 332,
"end": 346,
"text": "'s system (L5)",
"ref_id": null
},
{
"start": 519,
"end": 546,
"text": "Bengio et al.'s system (L5)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 349,
"end": 357,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2.1"
},
{
"text": "In each system, the decision threshold, \u03b8 or b, was selected to minimize the DCF on the \"Evaluation\" subset, and then applied to the \"Test\" subset. The minimum DCFs for the \"Evaluation\" subset and the associated DCFs for the \"Test\" subset are given in Table 5 . We observe that \"KFD_w_50c_50f\" achieved a 34.08% relative improvement over \"GMM-UBM\", and a 19.73% relative improvement over \"L5\". ",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2.1"
},
{
"text": "We discarded 2 speakers (ID numbers 313 and 342) because of partial data corruption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "An analysis of the results based on HTER is given in Table 3 . For each approach, the decision threshold, \u03b8 or b, was used to minimize HTER on the \"Evaluation\" subset and then applied to the \"Test\" subset. From Table 3 , we observe that all the proposed LLR systems outperform the baseline systems and, for the \"Test\" subset, a 29.72% relative improvement was achieved by \"KFD_w_20c\", compared to \"L5\" -the best baseline system. The advantage of integrating multiple background models in our methods could be the reason why the proposed LLR systems based on the linear SVM (\"LSVM_w_20c\" and \"LSVM_w_10c_10f\") outperform \"L5\", which applied the kernel-based SVM in L 5 (U). We also observe that, in the proposed LLR systems, all of the kernel-based methods outperform the linear-based methods.To analyze the effect of the number of background models, we implemented several proposed LLR systems and baseline systems with B = 40. An analysis of the results based on the HTER is given in Table 4 . Compared to Table 3 , the performance of each system with B = 40 is, in general, better than that of its counterpart with B = 20, but not always. For instance, \"KFD_w_20c_20f\" in Table 4 achieved a lower HTER for \"Evaluation\" but a higher HTER for \"Test\", compared to \"KFD_w_10c_10f\" in Table 3 . This may be the result of overtraining. However, from Table 4 , it is clear that the superiority of the proposed LLR systems over the baseline systems is again demonstrated.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 3",
"ref_id": null
},
{
"start": 211,
"end": 218,
"text": "Table 3",
"ref_id": null
},
{
"start": 985,
"end": 992,
"text": "Table 4",
"ref_id": null
},
{
"start": 1007,
"end": 1014,
"text": "Table 3",
"ref_id": null
},
{
"start": 1174,
"end": 1181,
"text": "Table 4",
"ref_id": null
},
{
"start": 1282,
"end": 1289,
"text": "Table 3",
"ref_id": null
},
{
"start": 1346,
"end": 1353,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 3. Best baselines vs. our proposed LLR systems based on kernel-based classifiers: DET curves for the XM2VTS \"Test\" subset (B = 20).",
"sec_num": null
},
{
"text": "We have presented a new LLR measure for speaker verification that improves the characterization of the alternative hypothesis by integrating multiple background models in a more effective and robust way than conventional methods. This new LLR measure is formulated as a non-linear classification problem and solved by using kernel-based classifiers, namely, the Kernel Fisher Discriminant and Support Vector Machine, to optimally separate the LLR samples of the null hypothesis from those of the alternative hypothesis. Experiments, in which the proposed methods were applied to two speaker verification tasks, showed notable improvements in performance over classical LLR-based approaches. Finally, it is worth noting that the proposed methods can be applied to other types of data and hypothesis testing problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning the Decision Function for Speaker Verification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00e9thoz",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the IEEE International Conference on Acoustic, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "425--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bengio, S. and J. Mari\u00e9thoz, \"Learning the Decision Function for Speaker Verification,\" In Proceedings of the IEEE International Conference on Acoustic, Speech, and Signal Processing, 2001, Salt Lake City, USA, pp. 425-428.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Tutorial on Support Vector Machines for Pattern Recognition",
"authors": [
{
"first": "C",
"middle": [],
"last": "Burges",
"suffix": ""
}
],
"year": 1998,
"venue": "Data Mining and Knowledge Discovery",
"volume": "2",
"issue": "",
"pages": "121--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burges, C., \"A Tutorial on Support Vector Machines for Pattern Recognition,\" Data Mining and Knowledge Discovery, 2, 1998, pp. 121-167.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Evaluation Plan for ISCSLP'2006 Special Session on Speaker Recognition",
"authors": [],
"year": 2006,
"venue": "Chinese Corpus Consortium (CCC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinese Corpus Consortium (CCC), \"Evaluation Plan for ISCSLP'2006 Special Session on Speaker Recognition,\" 2006.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pattern Classification, 2",
"authors": [
{
"first": "R",
"middle": [
"O"
],
"last": "Duda",
"suffix": ""
},
{
"first": "P",
"middle": [
"E"
],
"last": "Hart",
"suffix": ""
},
{
"first": "D",
"middle": [
"G"
],
"last": "Stork",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duda, R. O., P. E. Hart and D. G. Stork, Pattern Classification, 2 nd ed., John Wiley & Sons, New York, 2001.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Spoken Language Processing",
"authors": [
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Hon",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, X., A. Acero and H. W. Hon, Spoken Language Processing, Prentics Hall, New Jersey, 2001.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Speaker Verification Using Normalized Log-Likelihood Score",
"authors": [
{
"first": "C",
"middle": [
"S"
],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [
"C"
],
"last": "Wang",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1996,
"venue": "IEEE Trans. Speech and Audio Processing",
"volume": "4",
"issue": "",
"pages": "56--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, C. S., H. C. Wang and C. H. Lee, \"Speaker Verification Using Normalized Log-Likelihood Score,\" IEEE Trans. Speech and Audio Processing, 4, 1996, pp.56-60.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Evaluation Protocol for the Extended M2VTS Database (XM2VTSDB)",
"authors": [
{
"first": "J",
"middle": [],
"last": "Luettin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Maitre",
"suffix": ""
}
],
"year": 1998,
"venue": "IDIAP-COM 98-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luettin, J. and G. Maitre, \"Evaluation Protocol for the Extended M2VTS Database (XM2VTSDB),\" IDIAP-COM 98-05, IDIAP, 1998.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The DET Curve in Assessment of Detection Task Performance",
"authors": [
{
"first": "A",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kamm",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ordowski",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Przybocki",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of Eurospeech",
"volume": "",
"issue": "",
"pages": "1895--1898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin, A., G. Doddington, T. Kamm, M. Ordowski and M. Przybocki, \"The DET Curve in Assessment of Detection Task Performance,\" In Proceedings of Eurospeech, 1997, pp. 1895-1898.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "XM2VTSDB: The Extended M2VTS Database",
"authors": [
{
"first": "K",
"middle": [],
"last": "Messer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Matas",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kittler",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Luettin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Maitre",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 2 nd International Conference on Audio and Video-based Biometric Person Authentication",
"volume": "",
"issue": "",
"pages": "72--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Messer, K., J. Matas, J. Kittler, J. Luettin, and G. Maitre, \"XM2VTSDB: The Extended M2VTS Database,\" In Proceedings of the 2 nd International Conference on Audio and Video-based Biometric Person Authentication, 1999, Washington D. C., USA, pp. 72-77.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fisher Discriminant Analysis with Kernels",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mika",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "R\u00e4tsch",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 1999,
"venue": "Neural Networks for Signal Processing IX",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mika, S., G. R\u00e4tsch, J. Weston, B. Sch\u00f6lkopf and K. R. M\u00fcller, \"Fisher Discriminant Analysis with Kernels,\" Neural Networks for Signal Processing IX, 1999, pp. 41-48.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Speaker Identification and Verification Using Gaussian Mixture Speaker Models",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
}
],
"year": 1995,
"venue": "Speech Communication",
"volume": "17",
"issue": "",
"pages": "91--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D. A., \"Speaker Identification and Verification Using Gaussian Mixture Speaker Models,\" Speech Communication, 17, 1995, pp. 91-108.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speaker Verification Using Adapted Gaussian Mixture Models",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
},
{
"first": "T",
"middle": [
"F"
],
"last": "Quatieri",
"suffix": ""
},
{
"first": "R",
"middle": [
"B"
],
"last": "Dunn",
"suffix": ""
}
],
"year": 2000,
"venue": "Digital Signal Processing",
"volume": "10",
"issue": "",
"pages": "19--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D. A., T. F. Quatieri and R. B. Dunn, \"Speaker Verification Using Adapted Gaussian Mixture Models,\" Digital Signal Processing, 10, 2000, pp. 19-41.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Use of Cohort Normalized Scores for Speaker Verification",
"authors": [
{
"first": "A",
"middle": [
"E"
],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Delong",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
},
{
"first": "B",
"middle": [
"H"
],
"last": "Juang",
"suffix": ""
},
{
"first": "F",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "599--602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosenberg, A. E., J. Delong, C. H. Lee, B. H. Juang and F. K. Soong, \"The Use of Cohort Normalized Scores for Speaker Verification,\" In Proceedings of International Conference on Spoken Language Processing, 1992, Banff, Canada, pp. 599-602.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Speaker Verification Using Sequence Discriminant Support Vector Machines",
"authors": [
{
"first": "V",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Trans. Speech and Audio Processing",
"volume": "13",
"issue": "2",
"pages": "203--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wan ,V. and S. Renals, \"Speaker Verification Using Sequence Discriminant Support Vector Machines,\" IEEE Trans. Speech and Audio Processing, 13(2), 2005, pp. 203-210.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Statistical Learning Theory",
"authors": [
{
"first": "V",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vapnik, V., Statistical Learning Theory, John Wiley & Sons, New York, 1998.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "., } { ,.., } { ,.., } Since the solution of F w mustKernel Discriminant Analysis for LLR-Based Speaker Verificationlie in the span of all training data samples mapped in F[Mika 1999], \u03b1 T = [\u03b1 1 , \u03b1 2 ,..., \u03b1 l ]. Accordingly, Eq. (16) can be re-written as:",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Best baselines vs. our proposed LLR systems based on linear-based classifiers: DET curves for the XM2VTS \"Test\" subset (B = 20).",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "DET curves for the ISCSLP2006-SRE \"Test\" subset.",
"num": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>( ) log L U =</td><td colspan=\"2\">0 1 ( | ) ( | ) p U H p U H</td><td>\u03b8 \u03b8 \u2265 \u23a8 &lt; \u23a7 \u23a9</td><td>accept accept</td><td>0 1 H H</td><td>( i.e., reject</td><td>H</td><td>0</td><td>) ,</td><td>(1)</td></tr><tr><td>where ( | ), i p U H</td><td>i =</td><td>0, 1,</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">The LLR test is expressed as:</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"text": "from the hypothesized speaker, H 1 : U is not from the hypothesized speaker."
},
"TABREF2": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td/><td/><td>The goal of KFD is to seek</td></tr><tr><td colspan=\"4\">a direction</td></tr><tr><td>( J w</td><td>F</td><td>)</td><td>is maximized:</td></tr></table>",
"text": "in the feature space F such that the following Fisher's criterion function"
},
"TABREF4": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Session</td><td>Shot</td><td>199 clients</td><td>25 evaluation impostors</td><td>69 test impostors</td></tr><tr><td>1</td><td>1</td><td/><td/><td/></tr><tr><td/><td>2</td><td>Training</td><td/><td/></tr><tr><td>2</td><td>1</td><td/><td/><td/></tr><tr><td/><td>2</td><td/><td>Evaluation</td><td>Test</td></tr><tr><td>3</td><td>1</td><td>Evaluation</td><td/><td/></tr><tr><td/><td>2</td><td/><td/><td/></tr><tr><td>4</td><td>1</td><td>Test</td><td/><td/></tr><tr><td/><td>2</td><td/><td/><td/></tr></table>",
"text": ""
},
"TABREF5": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"3\">HTERs for the XM2VTS \"Evaluation\" and \"Test\" subsets (B = 20).</td></tr><tr><td/><td>min HTER for \"Evaluation\"</td><td>HTER for \"Test\"</td></tr><tr><td>L1_20c</td><td>0.0676</td><td>0.0535</td></tr><tr><td>L1_10c_10f</td><td>0.0589</td><td>0.0515</td></tr><tr><td>L2_20c</td><td>0.0776</td><td>0.0635</td></tr><tr><td>L3_20c</td><td>0.0734</td><td>0.0583</td></tr><tr><td>L4</td><td>0.0633</td><td>0.0519</td></tr><tr><td>L5</td><td>0.0590</td><td>0.0508</td></tr><tr><td>FLD_w_20c</td><td>0.0459</td><td>0.0433</td></tr><tr><td>LSVM_w_20c</td><td>0.0472</td><td>0.0495</td></tr><tr><td>FLD_w_10c_10f</td><td>0.0468</td><td>0.0455</td></tr><tr><td>LSVM_w_10c_10f</td><td>0.0453</td><td>0.0434</td></tr><tr><td>KFD_w_20c</td><td>0.0247</td><td>0.0357</td></tr><tr><td>SVM_w_20c</td><td>0.0320</td><td>0.0414</td></tr><tr><td>KFD_w_10c_10f</td><td>0.0232</td><td>0.0389</td></tr><tr><td>SVM_w_10c_10f</td><td>0.0310</td><td>0.0417</td></tr><tr><td colspan=\"3\">Table 4. HTERs for the XM2VTS \"Evaluation\" and \"Test\" subsets (B = 40).</td></tr><tr><td/><td>min HTER for \"Evaluation\"</td><td>HTER for \"Test\"</td></tr><tr><td>L1_40c</td><td>0.0675</td><td>0.0493</td></tr><tr><td>L1_20c_20f</td><td>0.0589</td><td>0.0506</td></tr><tr><td>L2_40c</td><td>0.0765</td><td>0.0597</td></tr><tr><td>L3_40c</td><td>0.0722</td><td>0.0554</td></tr><tr><td>KFD_w_40c</td><td>0.0074</td><td>0.0345</td></tr><tr><td>SVM_w_40c</td><td>0.0189</td><td>0.0386</td></tr><tr><td>KFD_w_20c_20f</td><td>0.0050</td><td>0.0416</td></tr><tr><td>SVM_w_20c_20f</td><td>0.0192</td><td>0.0403</td></tr></table>",
"text": ""
},
"TABREF6": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>min DCF for \"Evaluation\"</td><td>DCF for \"Test\"</td></tr><tr><td>GMM-UBM</td><td>0.0129</td><td>0.0179</td></tr><tr><td>L5</td><td>0.0120</td><td>0.0147</td></tr><tr><td>KFD_w_50c_50f</td><td>0.0067</td><td>0.0118</td></tr><tr><td>SVM_w_50c_50f</td><td>0.0067</td><td>0.0123</td></tr><tr><td>KFD_w_100c</td><td>0.0063</td><td>0.0145</td></tr><tr><td>SVM_w_100c</td><td>0.0076</td><td>0.0142</td></tr></table>",
"text": ""
}
}
}
}