\chapter{Preliminary Results And Future Work}
\label{chap:exp}
\ifpdf
    \graphicspath{{Chapter5/Chapter5Figs/PNG/}{Chapter5/Chapter5Figs/PDF/}{Chapter5/Chapter5Figs/}}
\else
    \graphicspath{{Chapter5/Chapter5Figs/EPS/}{Chapter5/Chapter5Figs/}}
\fi

Much of the striking progress in state-of-the-art LVCSR systems over the past few years has been attributed to the successful development and application of discriminative training \citep{Dan2003,McDermottHRNK07,BourlardM1993}. We reviewed various discriminative training schemes in Chapter~\ref{chap:hmmdiscri}; all of these learning methods attempt to optimize a mapping from inputs to desired outputs based on criteria that are more relevant to the ultimate classification or regression task. As a discriminative learning method by nature, the Neural Network (NN) has been successfully applied to speech recognition and shows superior results compared to generative learning schemes such as the Maximum Likelihood (ML) trained HMM.

In this chapter, we first present some preliminary results on discriminative training using NNs under the NN/HMM framework for both phone and word recognition, and then give a brief proposal for our future work.
\section{Preliminary Experimental Results}
\subsection{Database Description}
All our experiments are conducted on the well-known Wall Street Journal (WSJ0) dataset. This dataset consists of speaker-independent read speech, split into training, development and evaluation sets. The training set contains 92 speakers, while the test set contains 20 speakers and is split into two parts, test\_A (dt5a) and test\_B (dt5b), each containing roughly half of the sentences of each speaker. In our experiments, we use all the training utterances to train the NN required for a specific task, and evaluate the trained NN on the test set dt5a. A summary of the WSJ0 dataset is given in Table~\ref{tbl:wsj0sum}.
\begin{table}
	\caption{Summary of the WSJ0 training and testing sets.}
	\label{tbl:wsj0sum}
	\begin{center}
		\begin{tabular}{|c||c|c|c|}	
			\hline
			Data Set & Speaker Number & Utterance Number & Data Length (hours) \\
			\hline
			train & 92 & 9889 & 18.84 \\
			\hline
			test\_A & 20 & 368 & 0.73\\
			test\_B & 20 & 374 & 0.67 \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}
\subsection{Phone Recognition Results on WSJ0}
The phoneme set of WSJ0 contains 40 monophones. The features used in our experiments are MFCCs with their first and second derivatives, concatenated into a 39-dimensional vector. We adopt an NN as our discriminative training scheme. In our case, a simple three-layer MLP (referred to as MLP-mono in the remainder of this chapter) is used:
\begin{description}
\item[Input layer] A window of 15 frames is concatenated ($39\times15$) to form a 585-dimensional input feature vector, corresponding to 585 input units. In this way the neighbouring frame context of each frame is considered during training, which benefits recognition performance.
\item[Hidden layer] A hidden layer of 2000 units is used.
\item[Output layer] The 120 output units correspond to the posterior probabilities of the 40 3-state monophones.
\end{description}
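As a concrete sketch of this architecture, the frame stacking and forward pass can be written in NumPy. The random weights, the edge-padding convention, and the sigmoid hidden nonlinearity are illustrative assumptions on our part, not details taken from the actual system:

```python
import numpy as np

def stack_frames(feats, context=7):
    """Concatenate each frame with its 7 left and 7 right neighbours
    (15 frames in total). Utterance edges are padded by repeating the
    first/last frame -- a common convention, assumed here."""
    T, d = feats.shape
    padded = np.concatenate([np.repeat(feats[:1], context, axis=0),
                             feats,
                             np.repeat(feats[-1:], context, axis=0)])
    return np.stack([padded[t:t + 2 * context + 1].ravel() for t in range(T)])

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mlp_mono_forward(x, W1, b1, W2, b2):
    """585 -> 2000 (sigmoid) -> 120 (softmax) forward pass."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))   # hidden activations
    return softmax(h @ W2 + b2)                # monophone-state posteriors

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 39))         # 100 frames of MFCC_D_A
X = stack_frames(feats)                        # (100, 585)
W1 = 0.01 * rng.standard_normal((585, 2000)); b1 = np.zeros(2000)
W2 = 0.01 * rng.standard_normal((2000, 120)); b2 = np.zeros(120)
post = mlp_mono_forward(X, W1, b1, W2, b2)     # (100, 120), rows sum to 1
```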
\subsubsection{Monophone Based Phone Recognition}
For the monophone system, we simply use MLP-mono as described above to produce the posterior probabilities of the 40 3-state monophones. We then convert these probabilities into a new 120-dimensional feature vector for the HMM system, treating them as if they were generated by HMM states with zero means and unit variances. Standard HMM decoding strategies offered in HTK \citep{HTK} can then be employed for decoding.
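One way such a conversion can be realised (an assumption on our part, not necessarily the exact recipe used here) is to map each posterior $p_i$ to $f_i=\sqrt{-2\ln p_i}$, so that the log-density of a zero-mean, unit-variance Gaussian evaluated at $f_i$ equals $\ln p_i$ up to a constant:

```python
import numpy as np

def posteriors_to_features(post, floor=1e-8):
    """Map posteriors p_i to f_i = sqrt(-2 ln p_i), so that the log-density
    of a zero-mean, unit-variance Gaussian at f_i recovers ln p_i up to a
    constant. A hypothetical realisation of the conversion in the text."""
    p = np.clip(post, floor, 1.0)
    return np.sqrt(-2.0 * np.log(p))

post = np.array([0.7, 0.2, 0.1])
f = posteriors_to_features(post)
# unit-Gaussian log-density: -0.5*f^2 - 0.5*ln(2*pi) = ln(p) - const
assert np.allclose(-0.5 * f**2, np.log(post))
```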

As a comparison, we also conduct the phone recognition task using a conventional maximum-likelihood trained HMM system in which each HMM state is modelled by a mixture of 32 Gaussians. In addition, we apply another discriminative training method, MMIE, to this task. The best results of these three systems after tuning are reported in Table~\ref{tbl:wsj0mono}.
\begin{table}
	\caption{PER (\%) for HMM and NN/HMM monophone ASRs on WSJ0.}
	\label{tbl:wsj0mono}
	\begin{center}
		\begin{tabular}{|c||c|}	
			\hline
			System & PER \\
			\hline
			HMM & 43.71 \\
			\hline
			NN/HMM & 35.97 \\
			\hline
			MMIE & 39.52 \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}
As shown in Table~\ref{tbl:wsj0mono}, the discriminative training schemes, i.e., MMIE and NN/HMM, outperform the generative learning approach significantly; e.g., there is a relative improvement of 17.7\% for NN/HMM. Table~\ref{tbl:wsj0mono} also shows that the NN system performs better than MMIE for this task, indicating that our NN/HMM has better frame-level discriminative power than the sequence-classification-based MMIE approach.

The generative approach attempts to estimate the joint distribution of the observation data and the class labels. However, maximizing this joint distribution does not necessarily improve the classification result unless the model assumption is correct, and in speech processing the HMM is not the correct model of speech production. Discriminative learning, on the other hand, aims at optimizing an objective function that reflects the classification task directly, and can therefore improve the classification result significantly.

\subsubsection{Triphone Based Phone Recognition}
We have reported the results of monophone based phone recognition in Table~\ref{tbl:wsj0mono}. Most state-of-the-art LVCSR systems adopt triphone acoustic models, since a triphone takes both its left and right neighbouring monophones as context and can thus model speech more accurately. To further improve performance, we adopt triphone models instead of monophones for phone recognition in this section.

Since thousands of triphones appear in the training set, modelling them individually would require a prohibitive amount of training data for robust model estimation. Therefore, based on acoustic expert knowledge, a decision tree is built to cluster the individual triphone states. Each triphone state is then assigned a cluster label of the form $ST\_centerPhone\_stateID\_clusterID$. Based on these clusters, we can build a second NN set to incorporate triphone context information: for each monophone state in MLP-mono, we build an NN to predict all the clusters of this state. For example, suppose the second state of the monophone $aa$, $aa[2]$, has 20 clusters ranging from $ST\_aa\_2\_1$ to $ST\_aa\_2\_20$. As in the construction of MLP-mono, we concatenate 15 consecutive frames to form a 585-dimensional input vector for the input layer, followed by a 2000-unit hidden layer and then a 20-unit output layer corresponding to these 20 clusters. Note that instead of using all the training samples, we use only the samples belonging to the clusters of $aa[2]$ to train this context-dependent NN. Since there are 120 monophone states, a second set of 120 NNs is built, each of which predicts the clusters of a specific monophone state.
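The cluster naming scheme is easy to mechanise; the helper below is hypothetical but mirrors the $ST\_centerPhone\_stateID\_clusterID$ format described above:

```python
def cluster_label(center_phone, state_id, cluster_id):
    """Build a tied-state cluster label of the form
    ST_centerPhone_stateID_clusterID (a hypothetical helper mirroring
    the naming scheme in the text)."""
    return f"ST_{center_phone}_{state_id}_{cluster_id}"

# the 20 cluster labels predicted by the NN of monophone state aa[2]
aa2_clusters = [cluster_label("aa", 2, c) for c in range(1, 21)]
# -> ["ST_aa_2_1", ..., "ST_aa_2_20"]
```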

For the triphone based system, we have two sets of NNs and thus two sets of posterior probabilities. We therefore need to interpolate the posteriors between these two sets to obtain the posterior probabilities of all the clusters. The interpolation strategy we adopt is:
\begin{itemize}
\item Each posterior probability from the second NN set is multiplied by the number of output units of the context-dependent NN it belongs to, forming a new second-set posterior. This step balances the second-set NNs, which have different numbers of output units: NNs with fewer output units tend to produce larger posterior probabilities, which would be unfair to those with more output units.
\item An interpolation factor $\alpha$ is introduced to combine the posterior probabilities of these two sets of NNs. The final posterior probability of a cluster is calculated as:
\begin{eqnarray}
P_{cluster}=\alpha P_{first\_set\_posterior}+(1-\alpha)P_{second\_set\_posterior}
\end{eqnarray}
\end{itemize}
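The two steps above can be sketched as follows; the function name and the value of $\alpha$ are illustrative, not tuned:

```python
def interpolate_cluster_posterior(p_state, p_cluster, n_outputs, alpha=0.5):
    """Combine the monophone-state posterior (first NN set) with the
    rescaled context-dependent cluster posterior (second NN set).

    p_cluster is first multiplied by the output-unit count of its NN to
    balance networks of different sizes (step 1), then the two posteriors
    are linearly interpolated with factor alpha (step 2)."""
    p_second = p_cluster * n_outputs
    return alpha * p_state + (1.0 - alpha) * p_second

# e.g. state aa[2] posterior 0.04; cluster ST_aa_2_7 posterior 0.003
# from a 20-output context-dependent NN
p = interpolate_cluster_posterior(0.04, 0.003, 20, alpha=0.6)
# -> 0.6*0.04 + 0.4*0.06 = 0.048
```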

In total, there are more than 6000 clusters, and therefore more than 6000 posterior probabilities. Unlike the MLP-mono case, we cannot simply convert all these posteriors into a feature vector of more than 6000 elements, because this would be intractable for decoding: during the recognition phase, we would have to compute more than 6000 posteriors for every frame. We therefore adopt WFSTs as our decoding scheme for this task. As discussed in Chapter~\ref{chap:wfst}, all the components of an SR system, including the HMM models, lexicon and language models, can be modelled as WFSTs. The WFST composition operation makes it convenient to integrate all these parts into a system ready for efficient decoding. In our case, we model the posteriors as the observation WFST $O$; the mapping from cluster names to triphone states as WFST $X$; the triphone HMM model topologies as WFST $H$; the mapping from physical triphones to logical triphones as WFST $Y$; and the mapping from triphone sequences to the corresponding monophone sequences, i.e., from context-dependent to context-independent phone model sequences, as WFST $C$. \emph{Note} that for the observation WFST $O$, we keep only the top 500 posterior probabilities of each frame to control the complexity.

To form the final decoding network $P$, a composition operation is applied on all these components: $P=O\circ X\circ H\circ Y\circ C$. For decoding, a shortest\_path operation is applied to the decoding network to find the most likely phone sequence.
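The shortest\_path step can be illustrated on a toy decoding graph. The sketch below uses Dijkstra's algorithm over non-negative $-\log$ weights; it is an illustration of the idea, not the OpenFst implementation used in practice:

```python
import heapq

def shortest_path(arcs, start, final):
    """Dijkstra over a toy decoding graph with non-negative -log weights.
    arcs: {state: [(next_state, label, weight), ...]}.
    Returns (total cost, label sequence) of the cheapest path."""
    heap = [(0.0, start, [])]
    seen = set()
    while heap:
        cost, s, labels = heapq.heappop(heap)
        if s == final:
            return cost, labels
        if s in seen:
            continue
        seen.add(s)
        for nxt, lab, w in arcs.get(s, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, labels + [lab]))
    return float("inf"), []

# toy network with two competing phone sequences
arcs = {0: [(1, "aa", 1.2), (2, "ae", 0.9)],
        1: [(3, "t", 0.5)],
        2: [(3, "t", 1.0)]}
cost, phones = shortest_path(arcs, 0, 3)   # -> cost 1.7, phones ["aa", "t"]
```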

Similar to the monophone based system, we also perform this task using Maximum likelihood trained triphone HMM system with 32 mixtures as a comparison. The results are reported in Table~\ref{tbl:wsj0tri}.
\begin{table}
	\caption{PER (\%) for HMM and NN/HMM triphone ASRs on WSJ0.}
	\label{tbl:wsj0tri}
	\begin{center}
		\begin{tabular}{|c||c|}	
			\hline
			System & PER \\
			\hline
			HMM & 27.47 \\
			\hline
			NN/HMM & 21.46 \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}
From Table~\ref{tbl:wsj0tri}, we see that the triphone based NNs again greatly outperform the conventional maximum-likelihood trained HMM triphone system. Comparing Table~\ref{tbl:wsj0mono} and Table~\ref{tbl:wsj0tri}, we can also conclude that the recognition system gains considerably from triphone acoustic modelling. Moreover, the HMM triphone system uses the standard decoding method offered in HTK, which takes much longer than the WFST decoding framework.

\subsection{Word Recognition Results on WSJ0}
We have shown that the NN approach outperforms maximum-likelihood trained HMM systems for the phone recognition task. In this section we explore the word recognition task.

From the phone recognition task, we already have the posterior probabilities of all clusters and a decoding network for phone recognition. To perform word recognition, we also need to convert the lexicon and language model into WFSTs. This gives us a lexicon WFST $L$ mapping phone sequences to words and a language model WFST $G$ encoding the $n$-gram probabilities. By composing $P$ with $L$ and then $G$, we build a decoding network ready for word recognition. As in the phone recognition task, we apply a shortest path operation to this network for decoding. In this experiment, we adopt two different language models, a bigram and a trigram. The performance is reported in Table~\ref{tbl:wsj0word} in terms of Word Error Rate (WER).
\begin{table}
	\caption{NN/HMM triphone system WER (\%) on WSJ0.}
	\label{tbl:wsj0word}
	\begin{center}
		\begin{tabular}{|c||c|}	
			\hline
			LM & WER \\
			\hline
			Bigram & 13.12 \\
			\hline
			Trigram & 11.85 \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}
However, the performance reported in Table~\ref{tbl:wsj0word} is worse than that of the standard ML trained HMM system in Table~\ref{tbl:hmmword}, even though the NN/HMM system has a better phone recognition result.
\begin{table}
	\caption{HMM triphone system WER (\%) on WSJ0.}
	\label{tbl:hmmword}
	\begin{center}
		\begin{tabular}{|c||c|}	
			\hline
			LM & WER \\
			\hline
			Bigram & 10.14 \\
			\hline
			Trigram & 7.40 \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table} 
\section{Discussions}
Although discriminative training using NNs yields a significant improvement over the conventional ML trained HMM system for the phone recognition task, it performs worse on the word recognition task. Lattice-based MMI and MPE training are typically performed in state-of-the-art HMM based systems for word recognition. The lattices are generated with language models, so MMI and MPE can take the language models into account during training. These discriminative training schemes thus enjoy an advantage over the traditional frame-based NN training method, because their criteria are based on sequence classification, which is more closely related to word error rate. Our NN/HMM model training does not use any LM information; this may be one of the reasons why discriminative training using NNs in the ordinary manner cannot compete with other discriminative training criteria such as MMIE and MPE on the word recognition task.
\subsection{Training Criteria for NNs}
Commonly used training criteria for a conventional NN are the \emph{Mean Square Error} (MSE) criterion:
\begin{eqnarray}
\label{mse}
E=\frac{1}{2}\sum_{n=1}^N \| g(x_n,\Theta)-d(x_n) \|^2
\end{eqnarray}
or the relative entropy criterion:
\begin{eqnarray}
E_e=\sum_{n=1}^N \sum_{k=1}^K d_k(x_n) \ln \frac{d_k(x_n)}{g_k(x_n,\Theta)}
\end{eqnarray}
where $g(x_n,\Theta)=(g_1(x_n,\Theta),\ldots,g_k(x_n,\Theta),\ldots,g_K(x_n,\Theta))$ is the actual MLP output vector and $d(x_n)=(d_1(x_n),\ldots,d_k(x_n),\ldots,d_K(x_n))$ is the desired output vector given by the labelled training data, $K$ is the total number of classes, and $N$ is the total number of training samples.
Back-propagation is used to update the parameters of the NN, i.e., the weight matrices. For MSE with sigmoid output units, the gradient resulting from formula~\ref{mse} is
\begin{eqnarray}
\frac{\partial E}{\partial \omega_{ij}}=Y_i\,g_j(x)(1-g_j(x))(g_j(x)-d_j(x))
\end{eqnarray}
in which $Y_i$ is the activation of hidden unit $i$, $\omega_{ij}$ is the weight between hidden unit $i$ and output unit $j$, and $d_j(x)$ is the target for class $j$ of training sample $x$.

For the relative entropy criterion with softmax output units, the partial derivative of the error function with respect to the weights is
\begin{eqnarray}
\frac{\partial E_e}{\partial \omega_{ij}}=Y_i(g_j(x)-d_j(x)).
\end{eqnarray}
These two criteria are designed for general-purpose NNs; in other words, they are not specially designed for the speech recognition task. Updating the parameters according to them tends to increase the frame accuracy until a local optimum is reached. For the phone recognition task, no language model or lexicon is used, i.e., the frame accuracy is directly related to the recognition performance. Therefore, updating the parameters with these criteria yields a performance improvement over the ML trained HMM system, and even over MMIE trained HMMs, because of the NN's discriminative nature at the frame level. This improvement can be seen in Table~\ref{tbl:wsj0tri}.

However, for the word recognition task, the objective functions of other discriminative approaches such as MMIE, MCE and MWE directly reflect word-level recognition errors because they adopt sequence classification training criteria. Proper optimization of these objective functions therefore benefits word recognition performance. Moreover, language models can easily be incorporated into these objective functions, which also benefits the word recognition system. MSE and relative entropy, on the other hand, concern only the frame accuracy and cannot reflect word recognition errors directly. As a result, the word recognition performance using NNs is not satisfactory; it is even worse than that of the standard ML trained HMM system (see Table~\ref{tbl:hmmword}).
\subsection{Limitations of HMM System Based Discriminative Training Criteria}
\label{lim}
Training criteria like MMIE, MCE and MWE are designed for standard HMM models and work well for speech recognition. However, they suffer from several problems:
\begin{itemize}
\item They result in large-scale non-convex optimization problems. These objective functions have no closed-form solution, so gradient-based optimization or approximation schemes are usually adopted. However, such schemes can easily be trapped in a shallow local optimum on the complicated surface of the objective function.
\item They usually converge slowly. Some optimization methods, e.g., Quickprop \citep{Fahlman1988} and Rprop \citep{Riedmiller93adirect}, exploit the Hessian matrix to speed up convergence. However, the Hessian matrix is usually too large, so a diagonal approximation of the true Hessian is adopted.
\item They all adopt GMMs as their acoustic models, in which feature independence is assumed to keep the learning problem tractable.
\end{itemize} 
\subsection{Advantages of NN Acoustic Modelling}
Neural Networks are a useful alternative to traditional GMM acoustic modelling in speech recognition in several aspects:
\begin{itemize}
\item They relax assumptions about the distribution of the input features, allowing flexible front-end feature extraction. NNs can automatically exploit correlations among the features, so no independence assumptions need to be imposed.
\item Because NNs can estimate posterior probabilities, evidence from multiple feature streams can easily be combined in a single NN/HMM system.
\item NNs can make good use of long-span feature vectors; e.g., several consecutive frames can be concatenated into a new feature vector to incorporate cross-frame context information.
\end{itemize}
\section{Future Work}
The NN, as an alternative to GMM acoustic modelling, enjoys the several advantages mentioned above. However, its training criteria are frame-based and thus not directly related to the word error rate. On the other hand, although discriminative training schemes for standard HMM systems such as MMIE and MPE suffer from several problems (see Section~\ref{lim}), they achieve better word recognition performance than the frame-based NN/HMM system because they adopt sequence classification based training criteria that are more closely related to word error rate. We would therefore like to extend the current work in two directions to improve the NN/HMM system's word recognition performance, as discussed in the next sections.
\subsection{NNs For Feature Transformation}
In our first approach, we treat the NN as a feature transformation. The NN is trained in the usual way and all the \emph{training data} is forwarded through the network to obtain its activations. These activations can then be transformed, e.g., by PCA, into a new set of features. Due to the discriminative nature of the NN, the generated features can have better discriminative power than the original ones. We can then build standard HMM systems on these features; moreover, sequence-classification-based discriminative training methods such as MMIE and MPE can be employed to train this system. This approach is illustrated in Figure~\ref{dia}.
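A minimal sketch of the PCA step of this pipeline, assuming the hidden-layer activations are already available; the retained-component count of 13 is an illustrative assumption:

```python
import numpy as np

def pca_transform(acts, n_components=13):
    """PCA on hidden-layer activations of a trained NN, producing a compact
    discriminative feature for a downstream GMM-HMM system. A sketch of the
    proposed pipeline, not a tuned configuration."""
    mean = acts.mean(axis=0)
    centered = acts - mean
    # principal directions from the SVD of the centered activation matrix
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    proj = Vt[:n_components].T          # (hidden_dim, n_components)
    return centered @ proj, mean, proj

rng = np.random.default_rng(2)
acts = rng.standard_normal((500, 2000))     # activations for 500 frames
feats, mean, proj = pca_transform(acts)     # (500, 13) transformed features
```

At test time, the stored `mean` and `proj` would be applied to the activations of unseen data before HMM decoding.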
\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=7.3cm]{feature_transform}
    \caption{Illustration of feature transform scheme.}
    \label{dia}
  \end{center}
\end{figure}
\subsection{Sequence Classification Based NN Training Criteria}
Thanks to the sequence classification criteria used in MMIE and MPE, these standard HMM discriminative training criteria enjoy better word recognition performance. It is also possible to train an NN/HMM hybrid system to discriminate between sequences instead of frames.

Consider the cross entropy criterion used in NN training:
\begin{eqnarray}
L_{XENT}(\theta)=\sum_{r=1}^R\sum_{t=1}^T\sum_{i=1}^N\hat{y}_{rt}(i)\log\frac{\hat{y}_{rt}(i)}{y_{rt}(i)},
\end{eqnarray}
where $\theta$ denotes the NN parameters, $y_{rt}(i)$ is the network output for physical state $i$ at time $t$ in sample $r$, and $\hat{y}_{rt}(i)$ is the hard label for state $i$. During training, the EBP algorithm adjusts $\theta$ to minimize $L_{XENT}(\theta)$. The softmax function is usually adopted as the output layer nonlinearity,
\begin{eqnarray}
y_{rt}(i)=\frac{e^{a_{rt}(i)}}{\sum_{j=1}^{N}e^{a_{rt}(j)}},
\end{eqnarray}
where $a_{rt}(i)$ is the input to the softmax for state $i$ at time $t$ for sample $r$. Gradient-descent training can be based on a convenient expression for the derivative of the loss with respect to the activations:
\begin{eqnarray}
\frac{\partial L_{XENT}}{\partial a_{rt}(i)}=y_{rt}(i)-\hat{y}_{rt}(i).
\end{eqnarray}

Let $L_{SEQ}(\theta)$ be any sequence classification criterion, e.g., MMIE, MPE or MWE. The expected occupancies $\gamma_{rt}^{NUM}(i)$ and $\gamma_{rt}^{DEN}(i)$ for each physical state required by the EBW updates are computed with forward-backward passes over numerator and denominator lattices, respectively. These expected occupancies are also related to the gradient of the loss with respect to state log-likelihoods\citep{Povey2002}:
\begin{eqnarray}
\label{cr}
\frac{\partial L_{SEQ}}{\partial l_{rt}(i)}=k(\gamma_{rt}^{DEN}(i)-\gamma_{rt}^{NUM}(i)),
\end{eqnarray}
where $l_{rt}(i)$ is the log-likelihood of physical state $i$ at time $t$ in sample $r$ and $k$ is the acoustic scaling factor. In the NN/HMM system, $l_{rt}(i)=\log y_{rt}(i)-\log p(i)$, where $p(i)$ is the prior of state $i$ and $y_{rt}(i)$ is the softmax output for state $i$ at time $t$ in sample $r$. By the chain rule,
\begin{eqnarray}
\frac{\partial L_{SEQ}}{\partial y_{rt}(i)}=k\frac{\gamma_{rt}^{DEN}(i)-\gamma_{rt}^{NUM}(i)}{y_{rt}(i)},
\end{eqnarray}
and the derivative of the sequence classification criterion with respect to the softmax activations can then be derived:
\begin{eqnarray}
\label{seq}
\frac{\partial L_{SEQ}}{\partial a_{rt}(i)}=k(\gamma_{rt}^{DEN}(i)-\gamma_{rt}^{NUM}(i)).
\end{eqnarray}
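One intermediate step is worth making explicit. With the softmax Jacobian $\partial y_{rt}(j)/\partial a_{rt}(i)=y_{rt}(j)(\delta_{ij}-y_{rt}(i))$, where $\delta_{ij}$ is the Kronecker delta, the chain rule gives
\begin{eqnarray}
\frac{\partial L_{SEQ}}{\partial a_{rt}(i)}
&=&\sum_{j=1}^N k\,\frac{\gamma_{rt}^{DEN}(j)-\gamma_{rt}^{NUM}(j)}{y_{rt}(j)}\,y_{rt}(j)\left(\delta_{ij}-y_{rt}(i)\right)\nonumber\\
&=&k\left(\gamma_{rt}^{DEN}(i)-\gamma_{rt}^{NUM}(i)\right)-k\,y_{rt}(i)\sum_{j=1}^N\left(\gamma_{rt}^{DEN}(j)-\gamma_{rt}^{NUM}(j)\right),
\end{eqnarray}
and the second term vanishes because the numerator and denominator occupancies each sum to one over the states at every frame.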
Thanks to formula~\ref{seq}, we have a simple recipe for training NN acoustic models with any of the sequence classification criteria developed for GMMs in the lattice-based EBW framework: the gradient with respect to the cross entropy criterion is replaced by the gradient with respect to the sequence classification criterion. Backpropagation then runs as usual; instead of a frame-based classification criterion, the NN now updates its parameters based on a sequence classification criterion, which benefits word-level recognition.
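The substitution amounts to swapping the error signal at the softmax activations while leaving the rest of backpropagation untouched. A toy NumPy sketch, with illustrative occupancies and an illustrative acoustic scale $k$:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def output_layer_grads(a, gamma_num, gamma_den, k=0.1):
    """Error signals at the softmax activations for one utterance.

    Frame-based cross entropy uses y - y_hat (gamma_num plays the role of
    the targets here); sequence training substitutes k*(gamma_den - gamma_num).
    Everything upstream of this signal -- backpropagation through the hidden
    layers -- is unchanged. k=0.1 is an illustrative value, not tuned."""
    y = softmax(a)
    grad_xent = y - gamma_num                  # frame-level criterion
    grad_seq = k * (gamma_den - gamma_num)     # sequence-level criterion
    return grad_xent, grad_seq

rng = np.random.default_rng(3)
T, N = 4, 6                                     # 4 frames, 6 states
a = rng.standard_normal((T, N))
gamma_num = softmax(rng.standard_normal((T, N)))  # toy occupancies, rows sum to 1
gamma_den = softmax(rng.standard_normal((T, N)))
gx, gs = output_layer_grads(a, gamma_num, gamma_den)
# both error signals sum to zero over the states at every frame
```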









