\chapter{Introduction}
\ifpdf
    \graphicspath{{Chapter1/Chapter1Figs/}}
\else
    \graphicspath{{Chapter1/Chapter1Figs/EPS/}}
\fi
Speech has been the dominant mode of human information exchange and social communication from prehistory onwards, and it will remain central to the new media of the future. Most computers currently employ a graphical user interface (GUI), built on graphically represented interface objects and functions such as windows, icons, menus, and pointers, and we communicate with the computer mostly by mouse and keyboard; in other words, computers lack the human ability to speak, listen, and understand. Speech, due to its simplicity and convenience, will serve as one of the primary schemes for the modern Human Computer Interface (HCI). In fact, even before speech-based interaction reaches full maturity, applications in the home, mobile, and office segments are incorporating spoken language technology to change the way we live and work.

Speech is the primary means of communication between people. For reasons ranging from technological curiosity about the mechanisms for mechanical realization of human speech capabilities, to the desire to automate simple tasks inherently requiring human-machine interactions, research in automatic speech recognition (and speech synthesis) by machine has attracted a great deal of attention over the past five decades.

\section{ASR System Architecture}
The aim of an Automatic Speech Recognition (ASR) system is to convert a speech waveform into textual form. This process is commonly known as Speech-To-Text (STT) or speech transcription. One fundamental requirement is that the system should accurately and efficiently convert a speech signal into a text transcription of the spoken words, independent of the speaker's accent, gender, the recording device, and the acoustic environment in which the speaker is located (e.g., quiet office, noisy room, outdoors). The speech recognition problem is thus the task of taking an utterance containing a certain length of speech data and transforming it into a text string that is as close as possible to the transcript a careful human would generate. We can cast the ASR problem as a noisy-channel model, shown in Fig.~\ref{fig:noisy_channel_model}. The intuition of the noisy channel model is to treat the acoustic waveform as a ``noisy'' version of the string of words, i.e.\ a version that has been passed through a noisy communications channel (our vocal tract system). This channel introduces ``noise'' which makes it hard to recognize the ``true'' string of words. Our goal is then to build a model of the channel (the acoustic model) so that we can figure out how it modified this ``true'' sentence and hence recover it. The insight of the noisy channel model is that if we know how the channel distorts the source, we can find the correct source sentence for a waveform by taking every possible sentence in the language (supplied by the language model), running each sentence through our noisy channel model, and seeing how well it matches the output. We then select the best-matching source sentence as our hypothesis.

Essential components of a basic speech recognition system are shown in Fig.~\ref{fig:ess_compo_diagram}. In the feature extraction phase, a spoken utterance is converted into a sequence of feature vectors, aiming to retain the useful information in the waveform while removing noise and other irrelevant information. Commonly used features include Linear Prediction Coefficients (LPC), Mel Frequency Cepstral Coefficients (MFCC), and Perceptual Linear Prediction (PLP) coefficients.
\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[height=4cm,width=12cm]{ess_compo_diagram}
    \caption{Essential Components of a Basic Speech Recognition System.}
    \label{fig:ess_compo_diagram}
  \end{center}
\end{figure}
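To make the feature extraction stage concrete, the following sketch (our own illustration in plain NumPy, not taken from any particular system; the 25\,ms frame and 10\,ms hop at 16\,kHz are merely typical values) computes the framed log power spectrum on which pipelines such as MFCC and PLP build:

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Split a waveform into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    return frames * np.hamming(frame_len)   # taper each frame

def log_power_spectrum(frames, n_fft=512):
    """Log power spectrum per frame -- the first stage of MFCC/PLP pipelines."""
    spec = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    return np.log(spec + 1e-10)             # floor avoids log(0)

# one second of a synthetic 440 Hz tone at 16 kHz
t = np.arange(16000) / 16000.0
feats = log_power_spectrum(frame_signal(np.sin(2 * np.pi * 440 * t)))
print(feats.shape)   # (n_frames, n_fft // 2 + 1)
```

A real MFCC front end would additionally apply a mel filterbank and a discrete cosine transform to these spectra; the framing and windowing shown here are common to all of the feature types listed above.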

The recognition phase then attempts to decode the input feature vector sequences into the corresponding word strings. This phase utilizes three further components, namely, the acoustic model, the language model and the lexicon:
\begin{description}
\item[Acoustic Model]
An acoustic model captures the characteristics of a sound unit; it is the model a speech recognition engine uses directly to recognise speech. Typically, statistical acoustic models are applied to model sound units such as phonemes, syllables and words. The most widely adopted statistical acoustic models are Hidden Markov Models (HMMs).
\item[Language Model]
A statistical language model is used to assign a probability to a sequence of word tokens. It serves as a guide to the search algorithm by predicting the next word given the history, and also disambiguates between phrases which are acoustically similar. $n$-gram statistical language models are typically used together with HMM acoustic models for speech recognition due to the ease of integration.
\item[Lexicon]
A lexical model connects the acoustic and language models. If a speech recognition system uses phone acoustic models and a word language model, then the lexical model defines the mapping between words and the phoneme set; in this case, the lexical model is simply a pronunciation dictionary. If word acoustic models are used instead, the lexical model reduces to a trivial one-to-one mapping.
\end{description}
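As a toy illustration of the language model component (our own minimal sketch; the two-sentence corpus is invented), a maximum likelihood bigram model can be estimated directly from counts:

```python
from collections import Counter

def bigram_lm(corpus):
    """MLE bigram probabilities P(w2 | w1) from a toy corpus of sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]   # sentence boundary markers
        unigrams.update(toks[:-1])                 # count bigram histories
        bigrams.update(zip(toks[:-1], toks[1:]))   # count adjacent word pairs
    return {bg: c / unigrams[bg[0]] for bg, c in bigrams.items()}

lm = bigram_lm(["the cat sat", "the cat ran"])
print(lm[("the", "cat")])  # 1.0 -- "cat" always follows "the" in this corpus
print(lm[("cat", "sat")])  # 0.5 -- "cat" is followed by "sat" half the time
```

Real systems smooth these estimates to assign non-zero probability to unseen word pairs; the unsmoothed MLE form above is only the starting point.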
The post-processing component is used to evaluate system performance by comparing the set of hypotheses generated by the system with the reference transcriptions. There are several forms of the error/distance metrics:
\begin{itemize}
\item Sentence Error Rate (SER)
\item Word Error Rate (WER)
\item Phone Error Rate (PER)
\end{itemize}
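Of these, WER is the standard metric: the word-level edit distance (substitutions, deletions and insertions) between hypothesis and reference, normalised by the reference length. A minimal sketch (our own illustration):

```python
def word_error_rate(ref, hyp):
    """WER = (substitutions + deletions + insertions) / len(ref)."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(word_error_rate("the cat sat on the mat",
                      "the cat sat on mat"))  # one deletion over six words
```

SER and PER follow the same pattern at the sentence and phone level respectively.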
In this paper, we mainly tackle the problem of how to train the acoustic model in a discriminative manner.

\section{Formal Description of the ASR Problem}
Because speech is so variable, an acoustic input sentence will never exactly match any model we have for that sentence; we therefore formulate the ASR problem as a special case of Bayesian inference. The probabilistic formulation of the problem is to find the most likely word sequence $\hat{W}$ that maximises the \emph{a posteriori} probability $P(W|O)$ of the string $W$, given the feature vector sequence $O$:
\begin{eqnarray}
\hat{W}=\argmax_W P(W|O).
\label{eqn:map}
\end{eqnarray}
Applying Bayes' rule, we can break (\ref{eqn:map}) down as follows:
\begin{eqnarray}
\hat{W}=\argmax_W \frac{P(O|W)P(W)}{P(O)}.
\label{eqn:bayes}
\end{eqnarray}
The probabilities on the right-hand side of~(\ref{eqn:bayes}) are easier to compute than $P(W|O)$. $P(W)$ is the \emph{a priori} probability of the word string, which can be estimated by an $N$-gram language model. We can ignore $P(O)$ since we are maximizing over all possible sentences and $P(O)$ does not change between them: for each potential sentence we are still examining the same observation $O$, which must have the same probability $P(O)$. Thus, we have
\begin{eqnarray}
\hat{W}=\argmax_W \frac{P(O|W)P(W)}{P(O)}=\argmax_W P(O|W)P(W).
\label{eqn:alter}
\end{eqnarray}
To summarize, the most probable sentence $W$ given some observation sequence $O$ can be computed by taking the product of two probabilities for each sentence and choosing the sentence for which this product is greatest. The components of the speech recognizer that compute these two terms have names: $P(W)$, the \emph{a priori} probability, is computed by the language model, while $P(O|W)$, the observation likelihood, is computed by the acoustic model:
\begin{eqnarray}
\hat{W}=\argmax_W \underbrace{P(O|W)}_{\text{likelihood}}\,\underbrace{P(W)}_{\text{prior}}.
\end{eqnarray}
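The decision rule above can be sketched with a toy example (the candidate strings and log scores below are invented for illustration; a real decoder searches a vast hypothesis space rather than a fixed list):

```python
# Toy MAP decoder: pick the word string W maximizing log P(O|W) + log P(W).
# Both score tables are made-up illustrative numbers, not real model outputs.
acoustic = {"recognize speech": -12.3, "wreck a nice beach": -11.8}
language = {"recognize speech": -2.1,  "wreck a nice beach": -6.9}

best = max(acoustic, key=lambda w: acoustic[w] + language[w])
print(best)
```

Here the acoustic scores are nearly tied, and the language model prior tips the decision toward the more plausible word string, which is exactly the role $P(W)$ plays in the product above.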

\section{Challenges in ASR}
So far, ASR systems fall far short of human speech perception in all but the simplest, most constrained tasks. Before ASR systems become ubiquitous in society, many improvements will be required in both system performance and operational performance. In the system area, we need large improvements in accuracy, efficiency, and robustness in order to utilize the technology for a wide range of tasks, on a wide range of processors, and under a wide range of operating conditions. Speech recognition research has been going on since the 1960s; although significant progress has been made, performance is still far from satisfactory. Speech recognition is a difficult task because of the following factors:
\begin{itemize}
\item the dynamic aspect of the speech signal
\item differences between speakers: inter-speaker variability, i.e., different speakers will generate different waveforms when speaking the same sentence
\item differences between how a given speaker utters the same word: intra-speaker variability
\item acoustic channel: microphone differences, background noise, bandwidths
\item difficulties in modelling the syntax and the semantics of languages
\item difficulties in encapsulating domain information
\end{itemize}
The variability in the speech signal is handled by statistical pattern recognition approaches. Moreover, compared to many other pattern recognition problems, speech recognition is a very large problem (hundreds or thousands of hours of acoustic training data and billions of words of language model data), with the additional desire for real-time performance.

\section{Discriminative Training}
ASR has been a major goal for a large research community in the last few years. The predominant approach to large vocabulary, speaker-independent, continuous speech recognition (LVCSR) has been based on Hidden Markov Models (HMMs). The traditional training scheme for HMMs is the Maximum Likelihood (ML) criterion, which attempts to estimate the HMM parameters that are most likely to generate the training data. In general, however, maximizing the likelihood does not minimize the phone or word error rates, which are the more relevant metrics for ASR.

%%%%Feisha Large Margin
Noting this weakness, many researchers in ASR have studied alternative frameworks for parameter estimation based on conditional maximum likelihood\citep{}, minimum classification error\citep{}, maximum mutual information\citep{} and Minimum Phone Error (MPE)\citep{}. The learning algorithms in these frameworks optimize discriminative criteria that more closely track actual error rates, as opposed to the EM algorithm for maximum likelihood estimation. Discriminative training adjusts the HMM parameters so that the hypotheses generated by the recognizer on the training data more closely match the correct word sequences, whilst generalizing to unseen data. These algorithms do not enjoy the simple update rules and relatively fast convergence of EM, but carefully and skillfully implemented, they lead to lower error rates\citep{}\citep{}.
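As an illustration, the Maximum Mutual Information (MMI) criterion can be written in the following standard form (the notation here is ours, chosen to match the preceding section): given training utterances $O_r$ with reference transcriptions $W_r$ and acoustic model parameters $\lambda$,
\begin{eqnarray}
\mathcal{F}_{\mathrm{MMI}}(\lambda)=\sum_r \log \frac{P_\lambda(O_r|W_r)\,P(W_r)}{\sum_{W'} P_\lambda(O_r|W')\,P(W')},
\end{eqnarray}
where the denominator sums over all competing word sequences $W'$. Maximising this objective raises the likelihood of the correct transcription relative to its competitors, rather than in isolation as ML training does.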

Inspired by support vector machines (SVMs), a large margin learning algorithm has been proposed to discriminatively train the HMM parameters. This approach attempts to maximize the distance between labeled examples and the decision boundaries that separate different classes\citep{}\citep{}. Under mild assumptions, the required optimization is convex, without any spurious local optima. In contrast to SVMs, however, this approach is naturally suited to problems in multiway (as opposed to binary) classification; also, it does not require the kernel trick for nonlinear decision boundaries.
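The flavour of the margin condition can be sketched in a generic multiclass form (this is our illustrative paraphrase, with $d_\lambda$ an arbitrary discriminant score and a unit margin chosen for simplicity): for each training observation sequence $O_n$ with correct label $y_n$, we require
\begin{eqnarray}
d_\lambda(O_n,y_n) \geq d_\lambda(O_n,y)+1, \quad \forall\, y\neq y_n,
\end{eqnarray}
i.e.\ the correct class should outscore every competitor by at least a fixed margin, with slack penalties incurred when a constraint cannot be satisfied.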

Neural Network (NN) based models have been widely proposed as a potentially powerful approach to speech recognition, since NNs can be helpful for hard pattern recognition problems. NNs are learning machines which provide discriminant-based learning, i.e., the models are trained to suppress incorrect classifications as well as to accurately model each class separately. Despite the very good results achieved in static pattern classification, NNs by themselves have not been shown to be effective for large scale recognition of continuous speech\citep{}. There is at least one fundamental difficulty with supervised training of a connectionist network for continuous speech recognition: a target function must be defined, even though the training is done for connected speech units whose segmentation is generally unknown. This is not a problem for HMM-based training, which only requires the sequence of speech units and not their temporal segmentations. Therefore, hybrid NN/HMM models have been built, in which the NN acts as an estimator of the HMM state emission probabilities and is used as the acoustic model within the HMM framework. This approach brings several benefits relative to HMM-only recognisers:
\begin{itemize}
\item They provide discriminant-based learning compared to the traditional ML-based training for HMM parameters.
\item When used in classification mode and trained with an LMS criterion or an entropy criterion, the network outputs will estimate posterior probabilities without requiring strong assumptions about the underlying probability density functions.
\item Because ANNs are capable of incorporating multiple constraints and finding optimal combinations of constraints for classification, features do not need to be treated as independent. In other words, there is no need for strong assumptions about statistical distributions and independence of input features.
\end{itemize}
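Since the HMM framework expects emission likelihoods rather than posteriors, the network outputs are typically converted to scaled likelihoods by dividing by the state priors (Bayes' rule, dropping the state-independent $P(o_t)$). A minimal sketch with invented numbers:

```python
import numpy as np

def scaled_likelihoods(posteriors, priors):
    """Convert NN posteriors P(state | o_t) into scaled likelihoods
    P(o_t | state) / P(o_t) by dividing out the state priors."""
    return posteriors / priors  # elementwise over states

posteriors = np.array([0.7, 0.2, 0.1])  # toy NN output over 3 HMM states
priors     = np.array([0.5, 0.3, 0.2])  # relative state frequencies in training
print(scaled_likelihoods(posteriors, priors))  # roughly [1.4, 0.667, 0.5]
```

These scaled likelihoods then take the place of the Gaussian emission densities during Viterbi decoding.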
 
Weighted Finite State Transducers (WFSTs) can serve as an integrated representation of the main components of a current large-vocabulary speech recognition system, such as hidden Markov models (HMMs), tree lexicons, and $n$-gram language models\citep{}. A finite-state transducer is a finite automaton whose state transitions are labeled with both input and output symbols. Therefore, a path through the transducer encodes a mapping from an input symbol sequence to an output symbol sequence. A weighted transducer puts weights on transitions in addition to the input and output symbols. Weights may encode probabilities, durations, penalties, or any other quantity that accumulates along paths to compute the overall weight of mapping an input sequence to an output sequence. Weighted transducers are thus a natural choice to represent the probabilistic finite-state models prevalent in speech processing. In this paper, we adopt this scheme as the decoding approach for our NN/HMM system.
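A minimal illustration of the idea (our own toy lexicon-style transducer; the phone labels and negative-log-probability weights are invented):

```python
# Each arc maps (state, input symbol) to (output symbol, weight, next state).
# Weights are negative log probabilities that accumulate along a path;
# empty output symbols play the role of epsilon labels.
arcs = {
    (0, "k"):  ("",    0.1, 1),
    (1, "ae"): ("",    0.2, 2),
    (2, "t"):  ("cat", 0.3, 3),  # emit the word once the phone string completes
}

def transduce(inputs):
    """Follow the unique path for an input sequence, collecting outputs and weight."""
    state, outputs, weight = 0, [], 0.0
    for sym in inputs:
        out, w, state = arcs[(state, sym)]
        if out:
            outputs.append(out)
        weight += w
    return outputs, weight

print(transduce(["k", "ae", "t"]))  # maps phones /k ae t/ to the word "cat"
```

In a full system, transducers for the HMM topology, context-dependency, lexicon, and language model are composed into a single search network, which is what makes the representation so convenient for decoding.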

In this paper, we address the problem of discriminative training of HMM parameters, adopting a hybrid NN/HMM approach and the WFST decoding scheme. The remainder of the paper is organised as follows: we give a general description of HMM based and hybrid NN/HMM based ASR in Chapter~\ref{chap:hmmasr} and Chapter~\ref{chap:nnasr}. Various discriminative training schemes are explained in Chapter~\ref{chap:hmmdiscri} for HMM based ASR, and in Chapter~\ref{chap:nndiscri} we discuss the discriminative nature of the hybrid NN/HMM based ASR. We then give a general introduction to WFSTs in Chapter~\ref{chap:wfst}. Preliminary experimental results on NN/HMM discriminative training are presented in Chapter~\ref{chap:exp}, together with a brief introduction of our proposed method of discriminative training in the hybrid NN/HMM ASR system.




%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "../thesis"
%%% End: 
