\chapter{Introduction}
\ifpdf
    \graphicspath{{Chapter1/Chapter1Figs/}}
\else
    \graphicspath{{Chapter1/Chapter1Figs/EPS/}}
\fi
Speech has been the dominant mode of human information exchange and social communication from prehistory onward, and it will remain so in the new media of the future. Most computers currently employ a graphical user interface (GUI), built on graphically represented interface objects and functions such as windows, icons, menus, and pointers, and we communicate with the computer mostly by mouse and keyboard; in other words, computers lack the human ability to speak, listen, and understand. Speech, due to its simplicity and convenience, will serve as one of the primary schemes for the modern Human Computer Interface (HCI). In fact, even before speech-based interaction reaches full maturity, applications in the home, mobile, and office segments are incorporating spoken language technology to change the way we live and work.

Speech is the primary means of communication between people. For reasons ranging from technological curiosity about the mechanisms for mechanical realization of human speech capabilities, to the desire to automate simple tasks inherently requiring human-machine interactions, research in automatic speech recognition (and speech synthesis) by machine has attracted a great deal of attention over the past five decades.

\section{ASR System Architecture}
The aim of an Automatic Speech Recognition (ASR) system is to convert a speech waveform into textual form. This process is commonly known as Speech-To-Text (STT) or speech transcription. One fundamental requirement of the system is that it should accurately and efficiently convert a speech signal to a text transcription of the spoken words, independent of the speaker's accent or gender, the recording device, and the acoustic environment in which the speaker is located (e.g., quiet office, noisy room, outdoors). The speech recognition problem is thus the task of taking an utterance containing a certain length of speech data and transforming it into a text string that is as close as possible to the transcript a careful human would generate. We can formulate the ASR problem as a noisy-channel model, shown in Fig.~\ref{fig:noisy_channel_model}. The intuition of the noisy-channel model is to treat the acoustic waveform as a ``noisy'' version of the string of words, i.e., a version that has been passed through a noisy communications channel (our vocal tract system). This channel introduces ``noise'' that makes it hard to recognize the ``true'' string of words. Our goal is then to build a model of the channel (the acoustic model) so that we can figure out how it modified the ``true'' sentence and hence recover it. The insight of the noisy-channel model is that if we know how the channel distorts the source, we can find the correct source sentence for a waveform by taking every possible sentence in the language (scored by the language model), running each sentence through our channel model, and seeing how well it matches the output. We then select the best-matching source sentence as our hypothesis.
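The noisy-channel search described above is conventionally written as a Bayes decomposition; here $A$ denotes the sequence of acoustic observations and $W$ a candidate word string:
\[
\hat{W} \;=\; \mathop{\mathrm{argmax}}_{W} \, P(W \mid A)
        \;=\; \mathop{\mathrm{argmax}}_{W} \, P(A \mid W)\, P(W),
\]
where $P(A \mid W)$ is the acoustic (channel) model and $P(W)$ is the language model; the denominator $P(A)$ is dropped from Bayes' rule because it does not depend on $W$ and therefore does not affect the maximization.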

Essential components of a basic speech recognition system are shown in Fig.~\ref{fig:ess_compo_diagram}. In the feature extraction phase, a spoken utterance is converted to a sequence of feature vectors, aiming to retain useful information in the waveform while removing noise and other irrelevant information. Useful features include Linear Prediction Coefficients (LPC), Mel Frequency Cepstral Coefficients (MFCC), and Perceptual Linear Prediction (PLP) coefficients.
\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[height=4cm,width=12cm]{ess_compo_diagram}
    \caption{Essential Components of a Basic Speech Recognition System.}
    \label{fig:ess_compo_diagram}
  \end{center}
\end{figure}
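As an illustration of the feature-extraction stage, the following is a minimal NumPy sketch of the MFCC pipeline (pre-emphasis, framing, windowing, power spectrum, mel filterbank, log compression, DCT). The function name and default parameters (16~kHz audio, 26 mel filters, 13 cepstral coefficients) are illustrative choices for this sketch, not taken from any particular toolkit.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    # Pre-emphasis: boost high frequencies to balance the spectrum
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Slice into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(sig) - n_fft) // hop
    frames = np.stack([sig[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames *= np.hamming(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank: filters spaced evenly on the mel scale
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then a DCT-II to decorrelate -> cepstral coefficients
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_mels)))
    return logmel @ dct.T  # shape: (n_frames, n_ceps)
```

For a one-second 16~kHz signal this yields one 13-dimensional feature vector per 10~ms frame shift, which is the kind of vector sequence passed on to the acoustic model.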


