\chapter{Discriminative Training}
\label{chap:hmmdiscri}
\ifpdf
    \graphicspath{{Chapter3/Chapter3Figs/PNG/}{Chapter3/Chapter3Figs/PDF/}{Chapter3/Chapter3Figs/}}
\else
    \graphicspath{{Chapter3/Chapter3Figs/EPS/}{Chapter3/Chapter3Figs/}}
\fi

Hidden Markov Models provide a good representation of the variable and sequential nature of the human speech production system, and have thus become the dominant technology for continuous speech recognition. However, HMMs make several assumptions which reduce their generality (see section~\ref{hmmlim}). Most importantly, the MLE training of HMM parameters greatly reduces their discriminative power. In this chapter, we review some discriminative training schemes for HMMs, which underpin most state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) systems. 

\section{Introduction}
The most common training criterion for HMMs is maximum likelihood estimation (MLE), which attempts to maximize the likelihood of the training observations $O=\{O_1,O_2,\ldots,O_R\}$:
\begin{eqnarray}
F_{MLE}(\lambda)=\sum_{r=1}^R\log P_{\lambda}(O_r|M_{w_r})
\end{eqnarray}
where $\lambda$ is the set of model parameters, $M_{w_r}$ is the HMM corresponding to the transcription of utterance $O_r$, and $R$ is the number of training samples.

Using this criterion, HMM parameters can be efficiently estimated with the well-known Baum-Welch algorithm~\citep{Baum1970}. However, MLE makes several assumptions:
\begin{itemize}
\item Observations are from a known family of distributions. Typically multivariate Gaussian Mixture Models are used as the emission probability model for HMM states;
\item Training data is unlimited;
\item True Language Model is known.
\end{itemize}
If all these assumptions are satisfied, no other training criterion will do better: MLE is a minimum-variance, consistent estimator of the true model parameters. However, for speech recognition none of these assumptions holds. MLE is therefore not guaranteed to produce optimal results, and many researchers in ASR have explored alternative frameworks for discriminative parameter estimation.

Instead of estimating the HMM parameters such that the training data are most likely to be generated by the model, discriminative training attempts to optimize the correctness of a model by formulating an objective function that more closely tracks the actual error rate. By optimizing this objective function, the system favours the correct class while suppressing the competing classes.

In the following sections, we review some of the best-known discriminative training schemes for HMMs, namely Maximum Mutual Information (MMI)~\citep{Valtchev1997303}, Minimum Classification Error (MCE)~\citep{Juang1997}, Minimum Phone Error (MPE)~\citep{Dan2003}, Large Margin (LM)~\citep{Fei2007} and the Multiple Layer Perceptron (MLP)~\citep{BourlardM1993}.

\section{Maximum Mutual Information}
Given the observation sequence $O$, a speech recognizer should choose a word sequence $W$ such that there is a minimum amount of uncertainty about the correct answer, i.e., the training word sequence should have maximum mutual information with the corresponding observation sequence. This can be formulated as maximizing the objective function~\citep{Dan2003}:
\begin{eqnarray}
\label{mmi}
F_{MMI}(\lambda)=\sum_{r=1}^R{\log\frac{P_{\lambda}(O_r|M_{w_r})P(w_r)}{\sum_{\hat{w}}P_{\lambda}(O_r|M_{\hat{w}})P(\hat{w})}}
\end{eqnarray}
where $M_{w_r}$ is the HMM corresponding to the transcription $w_r$ of utterance $O_r$, $P(w)$ is the probability of sentence $w$ as determined by a language model, and the denominator sums over all possible word sequences $\hat{w}$. 

Maximizing formula~\ref{mmi} increases the numerator while decreasing the denominator. The numerator contains the likelihood term $P_{\lambda}(O_r|M_{w_r})$, which, as in MLE, attempts to maximize the likelihood of the data. The essential difference between MLE and MMI, however, lies in the denominator, which can be made smaller by reducing the probabilities of the competing classes. Thus, MMI attempts both to make the correct hypothesis more probable and to make the competing, incorrect hypotheses less probable.

Examining formula~\ref{mmi}, we can see that accumulating the statistics associated with the denominator requires a whole recognition pass over all training utterances for each MMI iteration, which is computationally expensive for a large vocabulary system. Moreover, the objective function cannot be optimized with the conventional Baum-Welch algorithm; instead, it can be optimized by standard gradient-based methods or by the extended Baum-Welch algorithm (EBW)~\citep{GopalakrishnanKNN91}.
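To make the criterion concrete, the following minimal Python sketch (illustrative function names, not from any toolkit) evaluates $F_{MMI}$ from per-hypothesis combined acoustic and language model log-scores, using a log-sum-exp for the denominator; a real system would approximate the denominator with a lattice rather than an explicit hypothesis list:

```python
import math

def logsumexp(xs):
    """Numerically stable log of a sum of exponentials."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def mmi_objective(utterances):
    """F_MMI over a list of utterances.

    Each utterance is (num_score, den_scores): num_score is
    log P(O_r|M_{w_r}) + log P(w_r) for the reference transcription;
    den_scores holds the same quantity for every hypothesis (including
    the reference) entering the denominator sum.
    """
    return sum(num - logsumexp(dens) for num, dens in utterances)

# one utterance: reference scores -100.0; two competitors score -104.0, -110.0
f = mmi_objective([(-100.0, [-100.0, -104.0, -110.0])])
```

The objective is at most zero, and approaches zero as the reference hypothesis dominates the denominator.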

\section{Minimum Classification Error}
MMI maximizes the mutual information between the word sequence and the observation sequence rather than minimizing the error rate directly. In view of this, another discriminative training approach called minimum classification error~\citep{Juang1997} was proposed to formulate the classifier design problem directly as a classification error rate minimization problem.

A speech utterance belonging to string class $S_j$ is denoted by a sequence of feature vectors $O_1^T=(O_1,O_2,\ldots,O_T)$; the HMM discriminant function \citep{Juang1992} for $S_j$ is defined as the joint probability of $O_1^T$ and the best Viterbi state sequence $Q^j=(q_1^j,q_2^j,\ldots,q_T^j)$:
%\begin{eqnarray}
\begin{align}
g_j(O_1^T,\lambda)&=\log P(S_j)+\log(P_{\lambda}(O_1^T,Q^j))\\
&=\log P(S_j)+\sum_{t=1}^T\log(a_{q_{t-1}^jq_{t}^j})+\sum_{t=1}^T\log(b_{q_t^j}(O_t)),
\end{align}
%\end{eqnarray}  
where $a_{ij}$ denotes the transition probability and $b_q(O)$ denotes the emission probability at state $q$ for observation $O$. $P(S_j)$ denotes the prior probability of string class $S_j$ which is modelled by the language model.

The classifier/recognizer is operating under the following decision rule:
\begin{eqnarray}
\label{decision}
C(O_1^T)=S_i,\quad\text{if }g_i(O_1^T,\lambda)=\max_j g_j(O_1^T,\lambda)
\end{eqnarray}

Formula~\ref{decision} is clearly not suitable for direct optimization, so a smoothed loss function embedding the decision rule is used to estimate the error probability. The operational decision rule~\ref{decision} is first reformulated as a misclassification measure:
\begin{eqnarray}
\label{func}
d_i(O)=-g_i(O,\lambda)+\log\left[\frac{1}{M-1}\sum_{j,j\neq{}i}\exp\left(\eta\,g_j(O,\lambda)\right)\right]^{\frac{1}{\eta}},
\end{eqnarray}
where $\eta$ is a positive number and $M$ is the number of classes. For an utterance $O$ from class $i$, $d_i(O)>0$ implies a misclassification while $d_i(O)\leq0$ means a correct decision. By adjusting $\eta$, one can take all the competing classes into consideration according to their individual significance when estimating the classifier parameters $\lambda$. The misclassification measure of formula~\ref{func} is then embedded in a smoothed zero-one function, such as a sigmoid. A general form for the loss function can be defined as:
\begin{eqnarray}
l_i(O,\lambda)=l(d_i(O)).
\end{eqnarray}
If we choose $l$ to be a sigmoid function, the loss can be written as:
\begin{eqnarray}
\label{loss}
l(d)=\frac{1}{1+\exp(-\gamma{}d+\theta)},
\end{eqnarray}
where $\theta$ is normally set to zero and $\gamma$ to a value no smaller than one. From formula~\ref{loss} we can see that when $d_i(O)$ is much smaller than zero, implying a correct classification, almost no loss is incurred; when $d_i(O)$ is greater than zero, a penalty is incurred, which approximates a classification error count.
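The misclassification measure and the sigmoid loss above can be sketched as follows (a toy Python illustration with made-up discriminant scores; the $g_j$ values are passed in directly rather than computed from HMMs):

```python
import math

def misclassification_measure(scores, i, eta=2.0):
    """d_i(O): negative discriminant of the true class i plus a smoothed
    maximum over the competing class discriminants g_j."""
    competitors = [g for j, g in enumerate(scores) if j != i]
    M = len(scores)
    avg = sum(math.exp(eta * g) for g in competitors) / (M - 1)
    return -scores[i] + math.log(avg) / eta

def mce_loss(d, gamma=1.0, theta=0.0):
    """Smoothed zero-one loss: a sigmoid of the misclassification measure."""
    return 1.0 / (1.0 + math.exp(-gamma * d + theta))

# g-scores for three classes; class 0 is the correct one and wins clearly
d = misclassification_measure([2.0, 0.0, -1.0], i=0)
loss = mce_loss(d)   # close to zero: a correct, confident decision
```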

\section{Minimum Phone Error}
The classification errors minimized in the MCE formulation correspond to sentence-level recognition errors. Using MCE on a large vocabulary task is problematic for long utterances, since the sentence-level error is too coarse a measure for evaluating an LVCSR system. In LVCSR, recognition performance is normally measured at the sub-string level, e.g. word error rate (WER) or phone error rate (PER). To reflect these evaluation criteria directly in discriminative training, alternative methods such as minimum word error rate (MWE)~\citep{Povey2002} and minimum phone error rate (MPE)~\citep{Povey2002} were proposed. The principle of the two methods is the same: they formulate an objective function which directly reflects the sub-string recognition error. We therefore review only MPE; the same principle applies to MWE.

MPE attempts to minimize the number of phone level errors made by maximizing
\begin{eqnarray}
\label{mpe}
F_{MPE}(\lambda)=\sum_{r=1}^R\frac{\sum_s{p_{\lambda}(O_r|s)}^k{P(s)}^kRawPhoneAccuracy(s,s_r)}{\sum_u{p_{\lambda}(O_r|u)}^k{P(u)}^k},
\end{eqnarray}
where $RawPhoneAccuracy(s,s_r)$ is the number of correct phones in the phone sequence $s$ and $k$ is a scaling factor. Formula~\ref{mpe} is therefore a weighted average of the number of correct phones over all possible sequences. By maximizing formula~\ref{mpe}, the number of correct phones in the most probable sequences is increased.

However, it is not trivial to calculate $RawPhoneAccuracy(s,s_r)$ at the sub-string level, since this requires dynamic programming (DP) to compute the edit distance between two sub-string sequences, accounting for substitution, deletion and insertion errors; such an edit distance cannot be directly incorporated into the objective function for optimization. Alternatively, $RawPhoneAccuracy(s,s_r)$ can be calculated from simple heuristic measures that can be computed locally without DP. For an efficient lattice-based computation, the following approximation of $RawPhoneAccuracy(s,s_r)$ is used in \citep{Dan2003}:
\begin{eqnarray}
RawPhoneAccuracy(s,s_r)=\sum_{q\in s}{PhoneAcc(q)}\\
PhoneAcc(q)=\max_z{\left\{
\begin{array}{lr}
{-1+2e(q,z)}&\text{if $q$ and $z$ are the same phone}\\
{-1+e(q,z)}&\text{if $q$ and $z$ are different phones}
\end{array}
\right.},
\end{eqnarray}  
where $q$ is a given hypothesis phone, $z$ is a phone in the reference phone sequence which overlaps in time with $q$, and $e(q,z)$ is the proportion of the length of $z$ which is overlapped. The phone $z$ is chosen so as to make $PhoneAcc(q)$ as large as possible.
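A minimal sketch of this approximation, assuming each phone is represented as a (label, start time, end time) triple (a simplification of the lattice arcs actually used):

```python
def phone_acc(q, ref_phones):
    """Approximate PhoneAcc for one hypothesis phone q = (label, start, end).

    e(q, z) is the fraction of the reference phone z's duration overlapped
    by q; the reference phone giving the largest accuracy is chosen.
    """
    q_label, q_start, q_end = q
    best = -1.0                      # no overlap at all counts as an insertion
    for z_label, z_start, z_end in ref_phones:
        overlap = min(q_end, z_end) - max(q_start, z_start)
        if overlap <= 0.0:
            continue
        e = overlap / (z_end - z_start)
        acc = -1.0 + 2.0 * e if z_label == q_label else -1.0 + e
        best = max(best, acc)
    return best

def raw_phone_accuracy(hyp_phones, ref_phones):
    """Approximate RawPhoneAccuracy: the sum of per-phone accuracies."""
    return sum(phone_acc(q, ref_phones) for q in hyp_phones)

ref = [("a", 0.0, 1.0), ("b", 1.0, 2.0)]
```

A hypothesis phone that exactly covers a matching reference phone scores $-1+2\cdot1=1$; a full-overlap substitution scores $-1+1=0$.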

\section{Large Margin}
All the discriminative criteria discussed above share one principle: they attempt to closely track the actual error rate, as opposed to maximum likelihood, which fits the data. However, these criteria suffer from complicated update rules and slow convergence. In view of this, a discriminative acoustic modelling approach called ``large margin GMMs'' for multiway classification was proposed in \citep{Sha2006}. This approach maximizes the distance between labelled examples and the decision boundaries of competing classes.
\subsection{Decision Rule}
Large margin GMMs are trained from a set of labelled samples $\{(x_n,y_n)\}$. Each class is first modelled as a single ellipsoid in the input space. The ellipsoid for class $c$ is parameterized by a centroid vector $\mu_c\in{}R^d$, a positive semi-definite matrix $\Psi_c\in{}R^{d\times{}d}$ which determines its orientation, and a non-negative scalar offset $\theta_c\geq{}0$. The decision rule is defined as:
\begin{eqnarray}
\label{deci_rule}
y=\argmin_c\{{(x-\mu_c)}^T\Psi_c(x-\mu_c)+\theta_c\}
\end{eqnarray}
This formula can be written more simply, following \citep{Sha2006}, by collecting all the parameters $\{\mu_c,\Psi_c,\theta_c\}$ of each class into a single enlarged matrix $\Phi_c\in{}R^{(d+1)\times(d+1)}$ which is positive semi-definite:
\begin{eqnarray}
\Phi_c=
\left[
\begin{array}{lr}
{\Psi_c}&{-\Psi_c\mu_c}\\
{-\mu_c^T\Psi_c}&{\mu_c^T\Psi_c\mu_c+\theta_c}
\end{array}
\right]
\end{eqnarray}
Based on formula~\ref{deci_rule}, we can rewrite the decision rule as:
\begin{eqnarray}
\label{redf}
y=\argmin_c{z^T\Phi_cz}, \quad\text{where}\quad z={[x\quad 1]}^T
\end{eqnarray}
With this reparameterization, the decision rule of formula~\ref{deci_rule}, which is non-linear in $(\mu_c,\Psi_c,\theta_c)$, becomes linear in the parameters $\Phi_c$. 
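The construction of $\Phi_c$ and the resulting linear decision rule can be sketched as follows (a toy numpy illustration; the centroids and shapes are made up):

```python
import numpy as np

def enlarged_matrix(mu, psi, theta):
    """Collect (mu_c, Psi_c, theta_c) into the (d+1)x(d+1) matrix Phi_c,
    so that z^T Phi_c z = (x-mu_c)^T Psi_c (x-mu_c) + theta_c for z=[x 1]."""
    d = mu.shape[0]
    phi = np.empty((d + 1, d + 1))
    phi[:d, :d] = psi
    phi[:d, d] = -psi @ mu
    phi[d, :d] = -mu @ psi
    phi[d, d] = mu @ psi @ mu + theta
    return phi

def classify(x, phis):
    """Decision rule: y = argmin_c z^T Phi_c z with z = [x 1]^T."""
    z = np.append(x, 1.0)
    return int(np.argmin([z @ phi @ z for phi in phis]))

# two toy classes with identity orientation and zero offsets
phis = [enlarged_matrix(np.array([0.0, 0.0]), np.eye(2), 0.0),
        enlarged_matrix(np.array([3.0, 3.0]), np.eye(2), 0.0)]
```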
\subsection{Margin Maximization}
As in SVMs, the margin of a sample is defined as its distance to the nearest decision boundary. A constraint is imposed that each sample lies at least one unit of distance from the decision boundary of each competing class:
\begin{eqnarray}
\label{con}
\forall{}c\neq{}y_n,\quad z_n^T(\Phi_c-\Phi_{y_n})z_n\geq{}1
\end{eqnarray}
The optimization then becomes an instance of semi-definite programming:
\begin{eqnarray}
\label{obj}
\min \sum_{nc}\xi_{nc}+\gamma\sum_c\mathrm{trace}(\Psi_c)\\
\text{s.t. } 1+z_n^T(\Phi_{y_n}-\Phi_c)z_n\leq{}\xi_{nc},\\
\xi_{nc}\geq{}0, \forall{}c\neq{}y_n, n=1,2,\ldots,N\\
\Phi_c\succeq0, c=1,2,\ldots,C
\end{eqnarray}
where $\xi_{nc}$ is a non-negative slack variable monitoring the amount of violation of the margin constraints in formula~\ref{con}, and $\gamma$ is a balancing hyper-parameter which can be set by cross-validation.
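As an illustration, the following sketch evaluates this objective for fixed parameters (solving the actual semi-definite program requires a dedicated solver, which is not shown; the `make_phi` helper is illustrative):

```python
import numpy as np

def make_phi(mu):
    """Hypothetical helper: enlarged matrix for Psi = I, theta = 0."""
    d = mu.shape[0]
    phi = np.empty((d + 1, d + 1))
    phi[:d, :d] = np.eye(d)
    phi[:d, d] = -mu
    phi[d, :d] = -mu
    phi[d, d] = mu @ mu
    return phi

def lm_objective(phis, samples, gamma=1e-3):
    """Value of the large-margin objective for fixed parameters.

    samples is a list of (x, y); the slack for pair (n, c) is the hinge
    max(0, 1 + z^T(Phi_y - Phi_c) z), and trace(Psi_c) is read off the
    top-left d x d block of each enlarged matrix Phi_c.
    """
    total = 0.0
    for x, y in samples:
        z = np.append(x, 1.0)
        for c, phi_c in enumerate(phis):
            if c != y:
                total += max(0.0, 1.0 + z @ phis[y] @ z - z @ phi_c @ z)
    return total + gamma * sum(np.trace(phi[:-1, :-1]) for phi in phis)

# two well-separated classes: the margin constraint holds, so no slack
phis = [make_phi(np.array([0.0, 0.0])), make_phi(np.array([4.0, 4.0]))]
obj = lm_objective(phis, [(np.array([0.0, 0.0]), 0)])
```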

This formulation can be extended to GMMs. Let $\Phi_{cm}$ denote the matrix for the $m$-th mixture component of class $c$. Each sample is now labelled as $(x_n,y_n,m_n)$, where $y_n$ is the class label and $m_n$ is the mixture component label. The constraint in formula~\ref{con} is replaced by $M$ constraints, where $M$ is the number of mixture components:
\begin{eqnarray}
\label{cons}
\forall{}c\neq{}y_n, \forall{}m,\quad z_n^T(\Phi_{cm}-\Phi_{y_nm_n})z_n\geq{}1
\end{eqnarray}
These constraints can be folded into a single one using the ``softmax'' inequality $\min_ma_m\geq{}-\log\sum_me^{-a_m}$:
\begin{eqnarray}
\forall{}c\neq{}y_n,\quad -\log\sum_me^{-z_n^T\Phi_{cm}z_n}-z_n^T\Phi_{y_nm_n}z_n\geq{}1
\end{eqnarray}
The objective function in formula~\ref{obj} can be extended similarly:
\begin{eqnarray}
\min \sum_{nc}\xi_{nc}+\gamma\sum_{cm}\mathrm{trace}(\Psi_{cm})\\
\text{s.t. } 1+z_n^T\Phi_{y_nm_n}z_n+\log\sum_me^{-z_n^T\Phi_{cm}z_n}\leq{}\xi_{nc},\\
\xi_{nc}\geq{}0, \forall{}c\neq{}y_n, n=1,2,\ldots,N\\
\Phi_{cm}\succeq0, c=1,2,\ldots,C, m=1,2,\ldots,M
\end{eqnarray}

\section{Optimization Methods}
In the previous sections, we briefly reviewed some popular discriminative training criteria for HMM-based speech recognition. In large vocabulary ASR, discriminative training must handle very large HMMs, which may involve millions of free variables in the optimization. An efficient optimization algorithm therefore plays a crucial role in discriminative training. Solving such large-scale optimization problems efficiently and effectively is a substantial challenge, with many important issues to be addressed, e.g., how to accelerate convergence and how to avoid getting stuck in local optima. In this section, we review some important optimization methods proposed for the objective functions of the discriminative training schemes discussed above.

\subsection{Extended Baum-Welch algorithm}
The Baum-Welch algorithm is extended in \citep{GopalakrishnanKNN91} to the optimization problems of MMI. Assume the objective function $F$ involves parameters of discrete statistical models, e.g. $\lambda_{ij}$ with the sum-to-one constraint $\sum_j\lambda_{ij}=1$ and $0<\lambda_{ij}<1$. It is shown in \citep{GopalakrishnanKNN91} that the following re-estimation formula for $\lambda_{ij}$:
\begin{align}
\lambda_{ij}^{n+1}=\frac{\lambda_{ij}^{n}(\frac{\partial F}{\partial\lambda_{ij}}|_{\lambda_{ij}=\lambda_{ij}^{n}}+D)}{\sum_k\lambda_{ik}^{n}(\frac{\partial F}{\partial\lambda_{ik}}|_{\lambda_{ik}=\lambda_{ik}^{n}}+D)},
\end{align}
will converge to a local optimum for a sufficiently large constant $D$, with the guarantee that $F(\lambda_{ij}^{n+1})\geq F(\lambda_{ij}^{n})$. For Gaussian distributions, the mean and variance of mixture component $m$ of state $j$, $\mu_{jm}$ and $\sigma_{jm}^2$, are re-estimated by:
%\begin{eqnarray}
\begin{align}
{\hat{\mu}}_{jm}=\frac{\{ \theta_{jm}^{num}(O)-\theta_{jm}^{den}(O)\}+D\mu_{jm}}{\{\gamma_{jm}^{num}-\gamma_{jm}^{den}+D\}}, \\
{\hat{\sigma}}_{jm}^2=\frac{\{ \theta_{jm}^{num}(O^2)-\theta_{jm}^{den}(O^2)\}+D(\sigma_{jm}^2+\mu_{jm}^2)}{\{\gamma_{jm}^{num}-\gamma_{jm}^{den}+D\}}-{\hat{\mu}}_{jm}^2,
\end{align}
%\end{eqnarray}
where ${\theta_{jm}(O)}$ and ${\theta_{jm}(O^2)}$ are sums over time of the observation data and the squared data, weighted by the posterior probability of Gaussian mixture component $m$ of state $j$:
%\begin{eqnarray}
\begin{align}
\theta_{jm}(O)=\sum_{r=1}^{R}{\sum_{t=1}^{T_r}O^r(t)\gamma_{jm}^r(t)} ,\\
\theta_{jm}(O^2)=\sum_{r=1}^{R}{\sum_{t=1}^{T_r}{O^r(t)}^2\gamma_{jm}^r(t)}.
\end{align}
%\end{eqnarray}
The sum over time of the Gaussian posterior probability is the Gaussian occupancy, $\gamma_{jm}$:
\begin{eqnarray}
\gamma_{jm}=\sum_{t=1}^T\gamma_{jm}(t).
\end{eqnarray}
$D$ is a smoothing constant and an important implementation issue in EBW: if set too large, training converges very slowly; if set too small, the updates may not increase the objective function on each iteration. A lower bound on $D$ is the value which ensures that all the variances remain positive. In \citep{WoodlandP02}, it is reported that a Gaussian-specific constant $D$ provides improved convergence speed over a state-specific $D$. The Gaussian-specific $D_{jm}$ is set to the maximum of (i) twice the value necessary to ensure positive variance updates for all dimensions of the Gaussian; or (ii) a global constant $E$ multiplied by the denominator occupancy $\gamma_{jm}^{den}$.
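The EBW mean and variance updates can be sketched for a single one-dimensional Gaussian as follows (variable names are illustrative; the doubling search for a variance-preserving $D$ is a simplification of the exact "twice the minimum value" rule):

```python
def ebw_update(theta_num, theta_den, theta2_num, theta2_den,
               gamma_num, gamma_den, mu, var, E=2.0):
    """EBW re-estimation of a 1-D Gaussian mean/variance.

    theta_*: occupancy-weighted sums of the data (and squared data) from
    the numerator and denominator lattices; gamma_*: occupancies.
    D starts at E * gamma_den and is doubled until the new variance is
    positive (a simplified stand-in for the Gaussian-specific rule).
    """
    def update(D):
        mu_new = (theta_num - theta_den + D * mu) / (gamma_num - gamma_den + D)
        var_new = ((theta2_num - theta2_den + D * (var + mu * mu))
                   / (gamma_num - gamma_den + D)) - mu_new * mu_new
        return mu_new, var_new

    D = E * gamma_den
    mu_new, var_new = update(D)
    while var_new <= 0.0:        # increase D until the variance stays positive
        D *= 2.0
        mu_new, var_new = update(D)
    return mu_new, var_new

# toy sufficient statistics for one Gaussian
mu_new, var_new = ebw_update(10.0, 2.0, 20.0, 1.0, 10.0, 4.0, 0.8, 1.0)
```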
\subsection{Gradient Descent}
This method was commonly adopted in early discriminative training work for ASR \citep{Juang1992,Juang1997}. Gradient descent is a simple and general scheme that can be flexibly applied to any differentiable objective function. Given a differentiable objective function $F(\lambda)$, the general form of gradient descent search can be written as an iterative update along the gradient direction:
\begin{align}
\lambda^{n+1}=\lambda^{n}-\epsilon_{n}\nabla_{\lambda}F(\lambda^{n}),
\end{align}
where $\lambda^{n}$ denotes the set of model parameters in iteration $n$. This optimization method computes the gradient of the loss function for each training utterance $O_n$ and updates parameters in the opposite direction. The learning process is controlled by a learning rate $\epsilon_{n}$ which decreases as the token presentation index $n$ increases.

The gradient descent algorithm can be implemented in either batch or online mode. In batch mode, the gradient at $\lambda^{n}$ is accumulated over all training samples in each iteration, and then the model parameters are updated once and only once; the advantage of batch mode is that the gradient accumulation can be parallelized over multiple processors. In the online mode, also known as probabilistic descent, the gradient is calculated for each training sentence and the model parameters are immediately updated. The online mode can automatically exploit correlation in the data, which allows the training procedure to proceed quickly; however, it is relatively slow on large amounts of data since it is hard to parallelize. A so-called ``semi-batch'' mode, where the model is updated every $n$ samples, has therefore been proposed as a compromise.
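The two modes can be sketched as follows (a generic Python illustration; `grad_fn` is an assumed callback returning the per-sample gradient):

```python
def batch_gd(params, grad_fn, data, lr):
    """One batch-mode iteration: accumulate the gradient over all samples,
    then update the parameters once."""
    g = [0.0] * len(params)
    for x in data:
        for i, gi in enumerate(grad_fn(params, x)):
            g[i] += gi
    return [p - lr * gi for p, gi in zip(params, g)]

def online_gd(params, grad_fn, data, lr):
    """Online mode (probabilistic descent): update immediately after
    each sample."""
    for x in data:
        g = grad_fn(params, x)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

# toy objective F(p) = sum_x (p - x)^2, per-sample gradient 2(p - x)
grad = lambda p, x: [2.0 * (p[0] - x)]
```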

The major drawback of the gradient descent method is its slow convergence, since it explores only the first-order derivative during optimization. A uniform learning rate $\epsilon_{n}$ may not be appropriate for all parameters: to ensure convergence for every parameter, an extremely small learning rate has to be used, which in turn leads to very slow overall convergence. The second-order derivatives of the objective function, i.e., the Hessian matrix $H=\nabla^2F(\lambda)$, can provide important information for setting different step sizes for different model parameters. In the next few sections, we briefly introduce some optimization methods which exploit the Hessian matrix during the search.
\subsection{Quickprop Algorithm}
In the traditional Newton's method, if the objective function can be approximated by a quadratic function whose Hessian matrix is positive definite, the optimum $\Lambda^{opt}$ can be reached from any starting point $\Lambda^{0}$ in a single step, computed from the gradient and the Hessian matrix:
\begin{align}
\label{newton}
\Lambda^{opt}=\Lambda^{0}-H^{-1}\nabla{}F(\Lambda^{0})
\end{align}
In practice, however, the Hessian matrix cannot be guaranteed to be positive definite, and it is usually very large: its size is the square of the number of model parameters. Therefore, a diagonal approximation of the true Hessian matrix is usually adopted. 

Quickprop\citep{Fahlman1988} is a batch-oriented second-order optimization method loosely based on Newton's method. For the objective function $F$ of interest, a quadratic approximation $M$ is built around the current point $\lambda$, e.g., from the first three terms of the Taylor series expansion. One can then solve for the step $s_{N}$ which leads to the point where the gradient of this model is zero. In Quickprop, a diagonal Hessian matrix is used which can be efficiently updated over iterations; the $i$-th diagonal element of the Hessian at the $n$-th iteration is approximated by:
\begin{align}
H_{ii}=\nabla_{ii}^2F(\Lambda)=\frac{\partial^2F(\Lambda^n)}{\partial\lambda_i^2}\approx\frac{\frac{\partial F(\Lambda^n)}{\partial\lambda_i}-\frac{\partial F(\Lambda^{n-1})}{\partial\lambda_i}}{\Delta\lambda_i^{n-1}},
\end{align}
where $\Delta\lambda_i^{n-1}$ denotes the update step of the $i$-th parameter, $\lambda_i$, at iteration $n-1$. Substituting this approximation into Newton's updating formula~\ref{newton}, we derive the Quickprop updating formula for the $i$-th parameter:
\begin{align}
\lambda_i^{n+1}=\lambda_i^{n}+\Delta\lambda_i^n,
\end{align}
where the step $\Delta\lambda_i^n$ is calculated from the current and previous gradients:
\begin{align}
\Delta\lambda_i^n=\frac{\frac{\partial F(\Lambda^n)}{\partial\lambda_i}}{\frac{\partial F(\Lambda^{n-1})}{\partial\lambda_i}-\frac{\partial F(\Lambda^n)}{\partial\lambda_i}}\,\Delta\lambda_i^{n-1}.
\end{align}

Meanwhile, the positive definiteness of the approximated Hessian matrix is addressed by examining the sign of the gradient w.r.t. each parameter for successive iterations\citep{McDermottHRNK07}.
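A single Quickprop step for one parameter can be sketched as follows (a generic illustration of Fahlman's update for minimization, including the usual cap on step growth, here called `mu`):

```python
def quickprop_step(lam, grad, prev_grad, prev_step, mu=1.75):
    """One Quickprop update for a single parameter (minimizing F).

    A parabola is fitted through the current and previous gradients and
    the step jumps to its stationary point; mu caps the growth of the
    step relative to the previous one.
    """
    denom = prev_grad - grad
    if denom == 0.0:                       # flat parabola: take a capped step
        step = mu * abs(prev_step) * (-1.0 if grad > 0 else 1.0)
    else:
        step = grad / denom * prev_step
        cap = mu * abs(prev_step)
        if abs(step) > cap:                # limit step growth
            step = cap if step > 0 else -cap
    return lam + step
```

For a true quadratic the step lands exactly on the minimum: with $F(\lambda)=(\lambda-3)^2$, gradients $-8$ at $\lambda=-1$ and $-6$ at $\lambda=0$ (previous step $1$) give the new point $\lambda=3$.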

\subsection{Rprop}
Rprop\citep{Riedmiller93adirect} also uses different step sizes for different model parameters. However, Rprop uses only the sign of the derivative to determine the update direction, not its magnitude. The aim is to eliminate possible negative effects of the magnitude of the partial derivative: each time the partial derivative with respect to a parameter changes sign, the last update is considered to have been too large, overshooting a local minimum, and the update value is reduced by a certain factor. If the sign stays the same, the parameter is in a shallow region of the error surface, and the update value is increased by another factor to speed up convergence:
\begin{eqnarray}
\Delta\lambda_i^n= \left\{
\begin{array}{lr}
{-\Delta_i^n}&\text{if}\quad \frac{\partial F(\Lambda^n)}{\partial\lambda_i}>0\\
{+\Delta_i^n}&\text{if}\quad \frac{\partial F(\Lambda^n)}{\partial\lambda_i}<0\\
{0}&\text{otherwise}
\end{array}
\right.
\end{eqnarray}
where the magnitude of step size, $\Delta_i^n$, is different for each parameter and evolves as follows:
\begin{eqnarray}
\Delta_i^n= \left\{
\begin{array}{lr}
{\eta^+\Delta_i^{n-1}}&\text{if}\quad \frac{\partial F(\Lambda^{n-1})}{\partial\lambda_i}\frac{\partial F(\Lambda^n)}{\partial\lambda_i}>0\\
{\eta^-\Delta_i^{n-1}}&\text{if}\quad \frac{\partial F(\Lambda^{n-1})}{\partial\lambda_i}\frac{\partial F(\Lambda^n)}{\partial\lambda_i}<0\\
{\Delta_i^{n-1}}&\text{otherwise}
\end{array}
\right.
\end{eqnarray}
where $0<\eta^-<1<\eta^+$.
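A single Rprop step for one parameter can be sketched as follows (illustrative default factors $\eta^+=1.2$, $\eta^-=0.5$ and step bounds, as commonly used):

```python
def rprop_step(lam, grad, prev_grad, delta,
               eta_plus=1.2, eta_minus=0.5,
               delta_min=1e-6, delta_max=50.0):
    """One Rprop update for a single parameter (minimizing F).

    Only the sign of the gradient is used; the step magnitude delta grows
    by eta_plus while the gradient sign is stable and shrinks by eta_minus
    when it flips (a minimum was overstepped).
    """
    if prev_grad * grad > 0:
        delta = min(delta * eta_plus, delta_max)
    elif prev_grad * grad < 0:
        delta = max(delta * eta_minus, delta_min)
    if grad > 0:
        lam -= delta
    elif grad < 0:
        lam += delta
    return lam, delta
```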
\section{Multiple Layer Perceptron}
Between the end of the 1980s and the beginning of the 1990s, a new NN/HMM hybrid structure was explored by several researchers for ASR. The goal is to improve flexibility and recognition performance by taking advantage of the properties of both HMMs and NNs. The NN is a widely studied discriminative training scheme for various pattern recognition applications. In this section, we mainly explore the discriminative nature of the Multiple Layer Perceptron (MLP), a widely used form of NN for speech recognition.
\subsection{Basic Structure}
MLPs have a layered feed-forward architecture with an input layer, zero or more hidden layers, and an output layer, as illustrated in Figure~\ref{fig:mlp}. 
\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=7.3cm]{mlparc}
    \caption{Architecture of a standard MLP.}
    \label{fig:mlp}
  \end{center}
\end{figure}
Each layer computes a set of linear \emph{discriminant} functions followed by a nonlinear function, which is often a sigmoid function:
\begin{eqnarray}
f(x)=\frac{1}{1+\exp(-x)}.
\end{eqnarray}
As discussed in \citep{BourlardM1993}, this nonlinear function performs a different role for the hidden and the output units:
\begin{itemize}
\item On the hidden units, it generates high order moments of the input which produce a nonlinear high-dimension class boundary; many nonlinear functions can be used to achieve this besides sigmoid such as radial basis functions;
\item On the output units, the non-linearity can be viewed as a differentiable approximation to the decision threshold of a threshold logic unit or perceptron \citep{Rosenblatt1961}. For this purpose, the output non-linearity should be a sigmoid or sigmoid-like function. Alternatively, a function called \emph{softmax} can be used, as it approximates a statistical sigmoid function. For an output layer of $K$ units, this function would be defined as:
\begin{eqnarray}
f(x_i)=\frac{\exp(x_i)}{\sum_{n=1}^K \exp(x_n)}.
\end{eqnarray}
\end{itemize}
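A forward pass through such an MLP, with sigmoid hidden units and a softmax output layer, can be sketched as follows (a toy illustration using plain Python lists; the weights are made up):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    """Stable softmax: outputs are positive and sum to one."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: sigmoid hidden units, softmax outputs."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    a = [sum(w * hi for w, hi in zip(row, h)) + b
         for row, b in zip(W2, b2)]
    return softmax(a)

# 2 inputs, 2 hidden units, 3 output classes
out = mlp_forward([0.5, -0.5], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0],
                  [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [0.0, 0.0, 0.0])
```

The softmax outputs form a proper distribution over the $K$ classes, matching the probabilistic interpretation discussed below.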

It has been proved in \citep{Cybenko89},\citep{Poggio89} that MLPs with one hidden layer and enough hidden units can approximate any continuous input/output mapping. The MLP parameters $\Theta$ are trained to associate a ``desired'' output vector with an input vector. This is achieved via the \emph{Error Back Propagation} (EBP) algorithm \citep{Parker1982,Parker1985,RumelhartHW86}, which uses a steepest descent procedure to iteratively minimise a cost function.

Commonly used cost functions are the \emph{Mean Square Error} (MSE) criterion:
\begin{eqnarray}
\label{mse}
E=\sum_{n=1}^N \| g(x_n,\Theta)-d(x_n) \|^2
\end{eqnarray}
or the relative entropy criterion :
\begin{eqnarray}
E_e=\sum_{n=1}^N \sum_{k=1}^K d_k(x_n) \ln \frac{d_k(x_n)}{g_k(x_n,\Theta)}
\end{eqnarray}
where $g(x_n,\Theta)=(g_1(x_n,\Theta),\ldots,g_k(x_n,\Theta),\ldots,g_K(x_n,\Theta))$ represents the actual MLP output vector and $d(x_n)=(d_1(x_n),\ldots,d_k(x_n),\ldots,d_K(x_n))$ represents the desired output vector as given by the labelled training data, $K$ is the total number of classes, and $N$ is the total number of training samples.

\subsection{Motivations}
NNs have several advantages which make them particularly attractive for ASR:
\begin{itemize}
\item They naturally accommodate discriminative training. When trained for classification using the MSE or relative entropy criterion, the parameters are trained to minimize the error rate while maximizing the discrimination between the correct class and the competing ones;
\item They can incorporate multiple constraints and automatically find the optimal constraint combination for classification, and, unlike HMMs, they make no independence assumptions about the features;
\item They can accommodate contextual information and feedback in its architecture to get a better performance.
\end{itemize}
\subsection{Discriminative nature of MLP inference}
It has been proved that the outputs of MLPs used in classification mode can be interpreted as estimates of the \emph{a posteriori} probabilities of the output classes conditioned on the input \citep{Bourlard1990},\citep{BourlardM1993},\citep{Gish1990}.

Let $q_k$, where $k=1,\ldots,K$, be the output units of an MLP and assume the training set consists of a labelled sequence of $N$ acoustic vectors $\{x_1,x_2,\ldots,x_N\}$. At time $n$, the input pattern is the acoustic vector $x_n$, associated with one of the classes $q_k$. In classification mode, the MSE criterion (formula~\ref{mse}) becomes:
\begin{eqnarray}
E=\frac{1}{2}\sum_{n=1}^N\sum_{k=1}^K{[g_k(x_n,\Theta)-\delta_{kl_n}]}^2
\end{eqnarray}
where $g_k(x_n,\Theta)$ represents the activation of the output unit associated with class $q_k$ given $x_n$, and $\delta_{kl_n}$ is the Kronecker delta with $l_n$ the index of the class of $x_n$.

It has been shown in \citep{BourlardM1993} that, if the MLP contains enough parameters and the training does not get stuck at a local minimum, the optimal outputs of the MLP estimate the probability distribution over classes conditioned on the input:
\begin{eqnarray}
\label{opt}
g_k(x_n,\Theta^{opt})=P(q_k|x_n)
\end{eqnarray}
% ------------------------------------------------------------------------
The estimate of formula~\ref{opt} is an \emph{a posteriori} probability, which is known to lead to the optimal classification and is discriminant by nature. It can therefore minimize the classification error rate, at least at the frame level.
\subsection{Estimating HMM Emission Probabilities with NN}
\label{sec:estnn}
In early attempts, NNs were applied to classify speech units such as phonemes or words, typically by mapping temporal representations into spatial ones or by using recurrences. However, this approach could only be adopted on simple speech recognition problems \citep{PeelingM88,watrous-shastri-87,WaibelHHSL88}: NNs classifying complete temporal sequences were not successful for continuous speech recognition because of their inability to deal with the time-sequential nature of speech. Moreover, there is no known principled way to translate an input sequence of acoustic vectors into an output sequence of speech units with NNs alone. On the other hand, HMMs provide a reasonable structure for representing sequences of speech sounds and words. A NN/HMM hybrid system can therefore be used, integrating the discriminative nature of NNs with the ability of HMMs to represent sequential data.

As discussed in Chapter 2, given the basic HMM equations, we would like to estimate the emission probability $p(x_n|s_k)$, that is, the probability of the observed data vector $x_n$ given an HMM state $s_k$. However, HMMs are based on a very strict formalism that is difficult to modify without losing the theoretical foundations or the efficiency of the training algorithms, for example, the independence assumption of the features and the maximum likelihood training scheme. Fortunately, NNs can be used to estimate probabilities that are related to these emission probabilities, and can thus be fairly easily integrated into an HMM-based approach. As discussed in the last subsection, NNs can be trained to produce the posterior probability $p(s_k|x_n)$ of an HMM state given the acoustic data, if each NN output is associated with a specific HMM state. However, HMMs require the likelihood of the data; the posteriors can be converted back to emission probabilities using Bayes' rule:
\begin{eqnarray}
\frac{p(x_n|s_k)}{p(x_n)} = \frac{p(s_k|x_n)}{p(s_k)},
\end{eqnarray}
where $p(s_k)$ is the class prior, i.e., the relative frequencies of each class determined from the class labels. The scaled likelihood of the left hand side can be used as an emission probability for HMM, since, during recognition, the scaling factor $p(x_n)$ is a constant for all classes and will not affect the classification.
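This conversion is a one-liner; the following sketch (illustrative names and toy values) divides each NN posterior by its class prior to obtain the scaled likelihoods used as emission scores:

```python
def scaled_likelihoods(posteriors, priors):
    """Turn NN state posteriors p(s_k|x_n) into scaled likelihoods
    p(x_n|s_k)/p(x_n) = p(s_k|x_n)/p(s_k), usable as HMM emission scores."""
    return [post / prior for post, prior in zip(posteriors, priors)]

# posteriors from the NN for three states; priors from label frequencies
scores = scaled_likelihoods([0.7, 0.2, 0.1], [0.5, 0.3, 0.2])
```

Note how a state with a high posterior but also a high prior can receive a lower scaled likelihood than a rarer state with the same posterior.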

\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=7.3cm]{nnhmm}
    \caption{Estimating HMM emission probabilities with NN in the hybrid NN/HMM ASR.}
    \label{fig:nnhmm}
  \end{center}
\end{figure}

Figure~\ref{fig:nnhmm} shows the basic hybrid scheme. The NN generates posterior estimates that are transformed into emission probabilities as described above, and then standard HMM-based algorithms such as Viterbi decoding are used either for forced alignment (when the word sequence is known) or for recognition (when word sequences are hypothesised). In this thesis, we mainly explore the discriminative nature of NNs through this hybrid approach.




%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "../thesis"
%%% End: 
