\chapter{Introduction}
\ifpdf
    \graphicspath{{Chapter1/Chapter1Figs/}}
\else
    \graphicspath{{Chapter1/Chapter1Figs/EPS/}}
\fi
Human beings have used speech as their primary means of information exchange and social communication probably since prehistory, and it will likely remain so even as new media emerge. Most computers currently employ a graphical user interface (GUI), built from graphically represented interface objects and functions such as windows, icons, menus, and pointers. We communicate with computers mostly via mouse and keyboard through the GUI; in other words, computers lack the human abilities to speak, listen, and understand. Speech, owing to its simplicity and convenience, can serve as one of the primary modalities for a modern Human Computer Interface (HCI). In fact, although speech-based interaction is still far from mature, spoken language technology is already incorporated in many applications at home, on mobile devices, and in offices; these applications have greatly changed the way we live and work.

To incorporate speech in modern human computer interfaces and apply it to various real-world tasks, researchers in the speech domain have, over the past few decades, devoted a great deal of effort to realizing human speech capabilities on computers, e.g., automatic speech recognition, speech synthesis, and speech-to-speech translation. Many theories and techniques have been developed since the emergence of speech research in the 1960s. In this paper, we focus on Automatic Speech Recognition (ASR).

\section{ASR System Architecture}
The aim of an Automatic Speech Recognition (ASR) system is to convert a speech waveform into textual form. This process is commonly known as Speech-To-Text (STT) or speech transcription. One fundamental requirement of such a system is that it accurately and efficiently convert a speech signal into a text transcription of the spoken words, independent of the speaker's accent, gender, the recording device, and the acoustic environment in which the speaker is located (e.g., quiet office, noisy factory, outdoors). The speech recognition problem is thus the task of taking an utterance containing speech data and transforming it into a text string that is as close as possible to the transcript a careful human would generate. We can cast the ASR problem as a noisy-channel model, shown in Figure~\ref{fig:noisy_channel}.
\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[height=5cm,width=10cm]{noisy_channel}
    \caption{The noisy channel model for speech recognition.}
    \label{fig:noisy_channel}
  \end{center}
\end{figure} 

The intuition of the noisy channel model is to treat the acoustic waveform as a ``noisy'' version of the string of words, i.e. a version that has been passed through a noisy communication channel (i.e. our vocal tract system). This channel introduces ``noise'' which makes it hard to recognize the ``true'' string of words. Our goal is then to build a model of the channel (i.e. the acoustic model) so that we can figure out how it modifies this ``true'' sentence and hence recover it. The insight of the noisy channel model is that if we know how the channel distorts the source, we can find the correct source sentence for a waveform by taking every possible sentence in the language (i.e. the language model), running each sentence through our noisy channel model, and seeing how well it matches the output. We then select the best-matching source sentence as our desired source sentence. To do this, we need models of the prior probability of a source sentence ($N$-gram model), the probability of words being realized as certain strings of phones (lexicons), and the probability of phones being realized as acoustic features (Gaussian mixture models).

The essential components of a standard speech recognition system are shown in Figure~\ref{fig:ess_compo_diagram}. 

In the feature extraction phase, a spoken utterance is converted to a sequence of feature vectors, aiming to retain useful information in the waveform while removing noise and other irrelevant information. Useful features include Linear Prediction Coefficients (LPC)~\citep{Atal71}, Mel Frequency Cepstral Coefficients (MFCC)~\citep{DavisMFCC}, and Perceptual Linear Prediction Coefficients (PLP)~\citep{HermanskyPLP}.
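To make the front end concrete, the following is a minimal pure-Python sketch of the shared first stage of these feature extractors: pre-emphasis, splitting the waveform into overlapping frames, windowing, and computing a per-frame log-energy. It is a toy stand-in, not a full MFCC/PLP pipeline (which would add a mel filterbank and cepstral transform); all parameter values are illustrative defaults.

```python
import math

def frame_features(signal, frame_len=400, frame_shift=160, preemph=0.97):
    """Pre-emphasise, split into overlapping frames, apply a Hamming
    window, and return the log-energy of each frame (a toy stand-in
    for a full MFCC/PLP front end)."""
    # Pre-emphasis boosts high frequencies: s'[n] = s[n] - a * s[n-1]
    emphasized = [signal[0]] + [signal[n] - preemph * signal[n - 1]
                                for n in range(1, len(signal))]
    features = []
    for start in range(0, len(emphasized) - frame_len + 1, frame_shift):
        frame = emphasized[start:start + frame_len]
        # Hamming window tapers the frame edges before spectral analysis
        windowed = [x * (0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_len - 1)))
                    for n, x in enumerate(frame)]
        energy = sum(x * x for x in windowed)
        features.append(math.log(energy + 1e-12))  # floor avoids log(0)
    return features
```

With a 16~kHz sampling rate, the defaults correspond to the common 25~ms frames advanced every 10~ms.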
\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[height=2cm,width=10cm]{ess_compo_diagram}
    \caption{Essential components of a standard speech recognition system.}
    \label{fig:ess_compo_diagram}
  \end{center}
\end{figure}

The recognition phase attempts to decode the input feature vector sequences into the corresponding word strings. This phase utilizes three other components, namely, the acoustic model, the language model, and the lexicon:
\begin{description}
\item[Acoustic Model]
An acoustic model captures the characteristics of a sound unit and it is the direct model a speech recognition engine uses to recognize speech. Typically, statistical acoustic models are adopted to model the sound units such as phonemes, syllables and words. The most widely adopted statistical acoustic model is the Hidden Markov Model (HMM).
\item[Language Model]
A statistical language model is used to assign a probability to a sequence of word tokens. It serves as a guide to the search algorithm by predicting the next word given the history and also disambiguates between phrases which are acoustically similar. $n$-gram statistical language models are typically adopted together with the HMM acoustic models for speech recognition due to the ease of integration.
\item[Lexicon]
A lexical model connects the acoustic and language models. If a speech recognition system uses a phone acoustic model and a word-level language model, then the lexical model defines the mapping between words and the phoneme set. In this case, the lexical model is simply a pronunciation dictionary. However, if word acoustic models are used, the lexical model reduces to a trivial one-to-one mapping.
\end{description}
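Since the HMM is the central acoustic model here, a small worked example may help. The sketch below is a pure-Python toy implementation of the forward algorithm, which computes the total probability that an HMM generates a given observation sequence by summing over all state paths; the transition and emission values are made up for illustration.

```python
def forward_likelihood(obs_probs, trans, init):
    """Forward algorithm: P(observations | HMM), summed over all
    state paths.
    obs_probs[t][s] = P(o_t | state s), trans[s][s2] = P(s2 | s),
    init[s] = P(start in s)."""
    n_states = len(init)
    # alpha[s] = probability of the prefix ending in state s at time t
    alpha = [init[s] * obs_probs[0][s] for s in range(n_states)]
    for t in range(1, len(obs_probs)):
        alpha = [sum(alpha[s] * trans[s][s2] for s in range(n_states))
                 * obs_probs[t][s2]
                 for s2 in range(n_states)]
    return sum(alpha)

# A hypothetical 2-state left-to-right HMM observed for 2 frames
init = [1.0, 0.0]
trans = [[0.5, 0.5],
         [0.0, 1.0]]
obs_probs = [[1.0, 0.0],   # frame 1 emission likelihoods per state
             [0.5, 0.5]]   # frame 2
```

For this toy model, `forward_likelihood(obs_probs, trans, init)` returns $0.5$: the two surviving paths ($1\!\to\!1$ and $1\!\to\!2$) each contribute $1.0 \times 0.5 \times 0.5 = 0.25$.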

The post-processing component evaluates system performance by comparing the set of hypotheses generated by the system with the reference transcriptions. Several error/distance metrics are commonly used:
\begin{itemize}
\item Sentence Error Rate (SER)
\begin{align}
SER = \frac{\text{\# sentences different from the references }}{\text{\# of reference sentences}}
\end{align}
\item Word Error Rate (WER)
\begin{align}
WER=\frac{\text{\# substituted words}+\text{\# deleted words}+\text{\# inserted words}}{\text{\# of reference words}}
\end{align}
\item Phone Error Rate (PER)
\begin{align}
PER=\frac{\text{\# substituted phones}+\text{\# deleted phones}+\text{\# inserted phones}}{\text{\# of reference phones}}
\end{align}
\end{itemize}
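The WER definition above requires aligning hypothesis and reference word sequences so that substitutions, deletions, and insertions are counted at minimum total cost. This is the standard Levenshtein dynamic program; a self-contained sketch:

```python
def word_error_rate(ref, hyp):
    """WER = (substitutions + deletions + insertions) / len(ref),
    using Levenshtein dynamic programming over word tokens."""
    # d[i][j] = minimum edit cost aligning ref[:i] with hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i               # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j               # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + sub,  # match/substitution
                          d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1)        # insertion
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `word_error_rate("the cat sat".split(), "a cat sat down".split())` gives $2/3$: one substitution and one insertion against three reference words. Note that WER can exceed 100\% when the hypothesis contains many insertions.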

\section{Formal Description of ASR}
Because speech is so variable, an acoustic input will never exactly match any model we already have for the corresponding sentence. We therefore formulate the ASR problem as a special case of Bayesian inference. The probabilistic formulation of this problem can be expressed as finding the most likely word sequence $\hat{W}$ that maximizes the \emph{a posteriori} probability $P(W|O)$ of the string $W$, given the feature vector sequence $O$:
\begin{eqnarray}
\hat{W}=\argmax_W P(W|O).
\label{eqn:map}
\end{eqnarray}
Applying Bayes' rule, we can rewrite (\ref{eqn:map}) as follows:
\begin{eqnarray}
\hat{W}=\argmax_W \frac{P(O|W)P(W)}{P(O)}.
\label{eqn:bayes}
\end{eqnarray}
The probabilities on the right-hand side of (\ref{eqn:bayes}) are easier to compute than $P(W|O)$. $P(W)$ is the \emph{a priori} probability of the word string, which can be estimated by the $N$-gram language model. We can ignore $P(O)$ since we are maximizing over all possible sentences and $P(O)$ does not change between them: for each potential sentence we examine the same observation $O$, which has the same probability $P(O)$. Thus, by ignoring $P(O)$, we have
\begin{eqnarray}
\hat{W}=\argmax_W \frac{P(O|W)P(W)}{P(O)}=\argmax_W P(O|W)P(W).
\label{eqn:alter}
\end{eqnarray}

To summarize, the most probable sentence $W$ given some observation sequence $O$ can be computed by taking the product of two probabilities for each sentence and choosing the sentence for which this product is the largest. These two terms can be computed by two components of a speech recognizer: $P(W)$, the \emph{a priori} probability is computed by the language model; $P(O|W)$, the observation likelihood, is computed by the acoustic model:
\begin{eqnarray}
\hat{W}=\argmax_W \underbrace{P(O|W)}_{\text{likelihood}}\underbrace{P(W)}_{\text{prior}}.
\end{eqnarray}
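Working in log probabilities, this decision rule is just an argmax of the summed acoustic and language model scores over candidate sentences. The sketch below makes that concrete with entirely hypothetical, precomputed scores for two acoustically confusable sentences; in a real system the scores would come from the acoustic and language models, and the candidate set would be explored by a search algorithm rather than enumerated.

```python
def map_decode(candidates, acoustic_logp, lm_logp):
    """Pick argmax_W [log P(O|W) + log P(W)] over a candidate list.
    acoustic_logp and lm_logp map each candidate sentence to its
    (hypothetical, precomputed) log probability."""
    return max(candidates,
               key=lambda w: acoustic_logp[w] + lm_logp[w])

# Made-up scores: the second sentence fits the acoustics slightly
# better, but the language model prior strongly prefers the first.
cands = ["recognize speech", "wreck a nice beach"]
ac = {"recognize speech": -10.2, "wreck a nice beach": -9.8}
lm = {"recognize speech": -4.0, "wreck a nice beach": -7.5}

print(map_decode(cands, ac, lm))  # prints "recognize speech"
```

Here the combined score $-10.2 - 4.0 = -14.2$ beats $-9.8 - 7.5 = -17.3$, illustrating how the prior can override a small acoustic advantage.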

\section{Challenges in ASR}
Speech recognition research has been going on since the 1960s. Although significant progress has been made, ASR systems still fall far short of human speech perception in all but the simplest, most constrained tasks. Before ASR systems become ubiquitous in society, many improvements are required in both system performance and operational performance. In the system area, we need large improvements in accuracy, efficiency, and robustness in order to apply speech technology to a wide range of tasks, on a wide range of processors, and under a wide range of operating conditions. Speech recognition is a difficult task because of the following factors:
\begin{itemize}
\item the dynamic aspect of the speech signal;
\item differences between speakers (inter-speaker variability), i.e., different speakers generate different waveforms when speaking the same sentence;
\item differences in how a given speaker utters the same word (intra-speaker variability);
\item acoustic channel variability: microphone differences, background noise, bandwidths;
\item difficulties in modelling the syntax and the semantics of languages;
\item difficulties in encapsulating domain information.
\end{itemize}
These variabilities in the speech signal are handled by a statistical pattern recognition approach in speech recognition research. Compared with many other pattern recognition problems, speech recognition is very large in scale (hundreds to thousands of hours of acoustic training data and billions of words of language model data), and the desire for real-time performance makes it a great challenge.

\section{Discriminative Training}
ASR has been a major goal for a large research community over the last few decades. The predominant approach to large vocabulary, speaker-independent, continuous speech recognition (LVCSR) has been based on Hidden Markov Models (HMMs). The traditional training scheme for HMMs is Maximum Likelihood (ML) estimation, which attempts to estimate the HMM parameters so that the models are most likely to generate the training data. In general, however, maximizing the likelihood does not necessarily minimize the phone or word error rates, which are the more relevant metrics for ASR.

%%%%Feisha Large Margin
Noting this weakness, many researchers in ASR have studied alternative frameworks for parameter estimation of HMMs, e.g., conditional maximum likelihood~\citep{Nadas1983}, minimum classification error~\citep{JK1992}, maximum mutual information~\citep{WoodlandP02}, and Minimum Phone Error (MPE)~\citep{Dan2003}. The learning algorithms in these frameworks optimize discriminative criteria that more closely track actual error rates, as opposed to the Expectation Maximization (EM) algorithm for maximum likelihood estimation.

Discriminative training means that HMM parameters are estimated by optimizing some measure of classification so that the hypotheses generated by the recognizer on the training data more closely ``match'' the correct word sequences, while still generalizing to unseen data. These algorithms do not enjoy the simple update rules and relatively fast convergence of EM, but carefully implemented, they can lead to lower error rates~\citep{RM2005,WoodlandP02}.

Inspired by support vector machines (SVMs), a large margin learning algorithm \cite{Fei2007} was proposed to discriminatively train the HMM parameters. This approach attempts to maximize the distance between labelled examples and the decision boundaries that separate different classes \citep{Vapnik1998}. Under mild assumptions, the required optimization is convex, without any spurious local maxima. In contrast to SVMs, however, this approach is naturally suited to problems in multiway (as opposed to binary) classification; another virtue is that it does not require the kernel trick for nonlinear decision boundaries.

Neural Network (NN) based models have been widely proposed as a potentially powerful approach to speech recognition, as NNs can be helpful for hard pattern recognition problems. NNs are learning machines that provide discriminant-based learning, i.e., the models are trained to suppress incorrect classifications as well as to accurately model each class separately. Despite the very good results achieved in static pattern classification, NNs by themselves have not been shown to be effective for large-scale recognition of continuous speech \citep{BourlardM1993}. There is at least one fundamental difficulty with supervised training of a connectionist network for continuous speech recognition: a target function must be defined, even though training is done on connected speech units whose segmentation is generally unknown. This is not a problem for HMM-based training, which requires only the sequence of speech units, not their temporal segmentations. Hybrid NN/HMM models have therefore been built. The hybrid scheme adopts an NN as the HMM state emission probability estimator within the standard HMM framework. This approach brings several benefits relative to standard HMM recognizers:
\begin{itemize}
\item They provide discriminant-based learning compared to the traditional ML-based training for HMM parameters.
\item When used in classification mode and trained with a Least Mean Square (LMS) criterion or an entropy criterion, the network outputs estimate posterior probabilities without requiring strong assumptions about the underlying probability density functions.
\item Because NNs are capable of incorporating multiple constraints and finding optimal combinations of constraints for classification, features do not need to be treated as independent. In other words, there is no need for strong assumptions about statistical distributions and independence of input features.
\end{itemize}
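Since the network estimates state posteriors $P(s|o)$ while the HMM framework expects emission likelihoods $P(o|s)$, the hybrid scheme commonly divides each posterior by the state prior $P(s)$: by Bayes' rule, $P(s|o)/P(s) = P(o|s)/P(o)$, and the common $P(o)$ term cancels in the search. A minimal sketch with made-up numbers (the posteriors stand in for one frame of softmax outputs, the priors for state frequencies estimated from forced-aligned training data):

```python
import math

def scaled_log_likelihoods(posteriors, priors):
    """Convert NN state posteriors P(s|o) into scaled log-likelihoods
    log P(o|s) - log P(o) = log P(s|o) - log P(s), usable as HMM
    emission scores (the shared log P(o) term cancels in decoding)."""
    return [math.log(p) - math.log(q) for p, q in zip(posteriors, priors)]

# Hypothetical softmax outputs for one frame over three HMM states,
# and hypothetical state priors
posteriors = [0.7, 0.2, 0.1]
priors     = [0.5, 0.3, 0.2]
scores = scaled_log_likelihoods(posteriors, priors)
```

Note that dividing by the prior can reorder states: a state with a high posterior but an even higher prior may end up with a lower scaled likelihood than its posterior rank suggests.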
 
Weighted Finite State Transducers (WFSTs) can serve as an integrated representation of the main components of current large-vocabulary speech recognition systems, such as HMMs, tree lexicons, and $n$-gram language models~\citep{WFST02,WFST08}. A finite-state transducer is a finite automaton whose state transitions are labelled with both input and output symbols. Therefore, a path through the transducer encodes a mapping from an input symbol sequence to an output symbol sequence. A weighted transducer puts weights on transitions in addition to the input and output symbols. Weights may encode probabilities, durations, penalties, or any other quantity that accumulates along paths to compute the overall weight of mapping an input sequence to an output sequence. Weighted transducers are thus a natural choice to represent the probabilistic finite-state models prevalent in speech processing. In this paper, we investigate WFSTs as our decoding approach for the NN/HMM system.
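The following is a toy pure-Python illustration of these ideas (it is not how a production decoder like those built on the cited WFST frameworks is implemented): a hand-built transducer maps phone sequences to words, and a best-first search finds the minimum-weight accepting path in the tropical semiring, where weights add along a path and we take the minimum over paths. All states, labels, and weights are invented for the example.

```python
import heapq

def best_path(transducer, start, finals, inputs):
    """Minimum-weight path through a weighted transducer that accepts
    `inputs`, via Dijkstra-style search in the tropical semiring.
    transducer[state] = list of (input, output, weight, next_state)."""
    # Queue entries: (accumulated weight, input position, state, outputs)
    queue = [(0.0, 0, start, ())]
    seen = set()
    while queue:
        weight, pos, state, outs = heapq.heappop(queue)
        if pos == len(inputs) and state in finals:
            return weight, list(outs)   # first accepting pop is optimal
        if (pos, state) in seen:
            continue
        seen.add((pos, state))
        for ilab, olab, w, nxt in transducer.get(state, []):
            if pos < len(inputs) and ilab == inputs[pos]:
                heapq.heappush(queue,
                               (weight + w, pos + 1, nxt, outs + (olab,)))
    return None  # no accepting path

# A toy phones-to-words transducer ("" plays the role of epsilon):
T = {
    0: [("k", "", 0.0, 1)],
    1: [("ae", "", 0.0, 2)],
    2: [("t", "cat", 0.5, 3), ("b", "cab", 1.0, 3)],
}
```

`best_path(T, 0, {3}, ["k", "ae", "t"])` returns `(0.5, ["", "", "cat"])`: the path consumes the three phones, accumulates weight $0.5$, and emits the word on its final arc.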

In this paper, we mainly address the problem of discriminative training of HMM parameters using an NN/HMM hybrid approach. In addition, we adopt WFSTs as the decoding scheme for our NN/HMM system. The remainder of the paper is organised as follows: we give a general description of HMM-based and hybrid NN/HMM-based ASR in Chapter~\ref{chap:hmmasr}. Various discriminative training schemes for HMM-based ASR are explained in Chapter~\ref{chap:hmmdiscri}, where the discriminative nature of NNs and how the hybrid system is formed are also discussed. We then give a general introduction to WFSTs in Chapter~\ref{chap:wfst}. Preliminary experimental results on NN/HMM discriminative training are presented in Chapter~\ref{chap:exp}, together with a brief introduction of our proposed method of discriminative training in the hybrid NN/HMM ASR system.

% ------------------------------------------------------------------------


%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "../thesis"
%%% End: 
