%/* ----------------------------------------------------------- */
%/*                                                             */
%/*                          ___                                */
%/*                       |_| | |_/   SPEECH                    */
%/*                       | | | | \   RECOGNITION               */
%/*                       =========   SOFTWARE                  */ 
%/*                                                             */
%/*                                                             */
%/* ----------------------------------------------------------- */
%/* developed at:                                               */
%/*                                                             */
%/*      Speech Vision and Robotics group                       */
%/*      Cambridge University Engineering Department            */
%/*      http://svr-www.eng.cam.ac.uk/                          */
%/*                                                             */
%/*      Entropic Cambridge Research Laboratory                 */
%/*      (now part of Microsoft)                                */
%/*                                                             */
%/* ----------------------------------------------------------- */
%/*         Copyright: Microsoft Corporation                    */
%/*          1995-2000 Redmond, Washington USA                  */
%/*                    http://www.microsoft.com                 */
%/*                                                             */
%/*              2001  Cambridge University                     */
%/*                    Engineering Department                   */
%/*                                                             */
%/*   Use of this software is governed by a License Agreement   */
%/*    ** See the file License for the Conditions of Use  **    */
%/*    **     This banner notice must not be removed      **    */
%/*                                                             */
%/* ----------------------------------------------------------- */
%
% HTKBook - Steve Young 24/11/97
%
% revised by JBA and VV

\mychap{A Tutorial Example of Using HTK}{exampsys}

\sidepic{recipe}{80}{
This final chapter of the tutorial part of the book will describe the
construction of a recogniser for simple voice dialling applications.  This
recogniser will be designed to recognise continuously spoken digit strings and
a limited set of names.  It is sub-word based so that adding a new name to the
vocabulary involves only modification to the pronouncing dictionary and task
grammar.  The HMMs will be continuous density mixture Gaussian tied-state
triphones with clustering performed using phonetic decision trees.  Although
the voice dialling task itself is quite simple, the system design is
general-purpose and would be useful for a range of applications.  
}

In addition to the construction of a simple voice-dialling system, examples for
using more advanced options, such as decorrelating transforms, the large
vocabulary decoder and discriminative training, are also given. These sections
do not necessarily link fully with the other parts of the tutorial, but aim to
give the user an idea of the form of command-lines and steps that might be
involved in using these options.

The system will be built from scratch even to the extent of recording training
and test data using the \HTK\ tool \htool{HSLab}.  To make this tractable, the
system will be speaker dependent\footnote{A stage of the tutorial deals 
with adapting the speaker dependent models for new speakers}, but the same design 
would be followed to build a speaker independent system.  The only difference being 
that data would be required from a large number of speakers and there would 
be a consequential increase in model complexity. 

Building a speech recogniser from scratch involves a number of inter-related
subtasks and pedagogically it is not obvious what the best order is to present
them. In the presentation here, the ordering is chronological so that in effect
the text provides a recipe that could be followed to construct a similar
system.  The entire process is described in considerable detail in order give a
clear view of the range of functions that \HTK\ addresses and thereby to
motivate the rest of the book.

The \HTK\ software distribution also contains an example of constructing a
recognition system for the 1000 word ARPA Naval Resource Management Task. This
is contained in the directory \texttt{RMHTK} of the \HTK\ distribution.
Further demonstration of \HTK's capabilities can be found in the directory 
\texttt{HTKDemo}. Some example scripts that may be of assistance during the 
tutorial are available in the \texttt{HTKTutorial} directory.

At each step of the tutorial presented in this chapter, the user is advised to
thoroughly read the entire section before executing the commands, and also to
consult the reference section for each \HTK\ tool being introduced
(chapter~\ref{c:toolref}), so that all command line options and arguments are
clearly understood.

\mysect{Data Preparation}{egdataprep}

The first stage of any recogniser development project is data preparation.
\index{data preparation}  Speech data is needed both for training and for
testing.  In the system to be built here, all of this speech will be recorded
from scratch and to do this scripts are needed to prompt for each sentence.  In
the case of the test data, these prompt scripts will also provide the reference
transcriptions against which the recogniser's performance can be measured and a
convenient way to create them is to use the task grammar as a random generator.
In the case of the training data, the prompt scripts will be used in
conjunction with a pronunciation dictionary to provide the initial phone level
transcriptions needed to start the HMM training process.  Since the application
requires that arbitrary names can be added to the recogniser, training data
with good phonetic balance and coverage is needed.  Here for convenience the
prompt scripts needed for training are taken from the TIMIT acoustic-phonetic
database.

It follows from the above that before the data can be recorded, a phone set
must be defined, a dictionary must be constructed to cover both training and
testing and a task grammar must be defined.

\subsection{Step 1 - the Task Grammar}

The goal of the system to be built here is to provide a voice-operated
interface for phone dialling. Thus, the recogniser must handle digit strings
and also personal name lists. Examples of typical inputs might be
\begin{quote}
Dial three three two six five four

Dial nine zero four one oh nine

Phone Woodland

Call Steve Young
\end{quote}

\HTK\ provides a grammar definition language for
specifying simple task grammars\index{task grammar} such as this.  It consists
of a set of variable definitions followed by a regular 
expression describing the words to recognise.  For the
voice dialling application, a suitable grammar might be
\begin{verbatim}
    $digit = ONE | TWO | THREE | FOUR | FIVE |
             SIX | SEVEN | EIGHT | NINE | OH | ZERO;
    $name  = [ JOOP ] JANSEN |
             [ JULIAN ] ODELL |
             [ DAVE ] OLLASON |
             [ PHIL ] WOODLAND | 
             [ STEVE ] YOUNG;
    ( SENT-START ( DIAL <$digit> | (PHONE|CALL) $name) SENT-END )
\end{verbatim}
where the vertical bars denote alternatives, the square brackets denote
optional items and the angle braces denote one or more repetitions.  The
complete grammar can be depicted as a network as shown in
Fig.~\href{f:dialnet}.

\centrefig{dialnet}{110}{Grammar for Voice Dialling}

\sidefig{step1}{25}{Step 1}{-4}{
The above high level representation of a task grammar
is provided for user convenience.  The \HTK\ recogniser actually 
requires a
word network to be defined  using a low level notation
called \HTK\ Standard Lattice Format\index{standard lattice format} (SLF)
\index{SLF}
in which each word instance and each word-to-word transition
is listed explicitly.  This word network can be created 
automatically from the grammar above using 
the \htool{HParse}
tool, thus assuming that the file \texttt{gram} contains the
above grammar, executing
}\index{hparse@\htool{HParse}}
\begin{verbatim}
    HParse gram wdnet
\end{verbatim}
will create an equivalent word network in 
the file \texttt{wdnet} (see Fig~\href{f:step1}).


\subsection{Step 2 - the Dictionary}

The first step in building a dictionary is to create a sorted list of the
required words. 
In the telephone dialling task pursued here, it is quite easy to create a list
of required words by hand. However, if the task were more complex, it would be
necessary to build a word list from the sample sentences present in the training
data. Furthermore, to build robust acoustic models, it is necessary to train
them on a large set of sentences containing many words and preferably
phonetically balanced. For these reasons, the training data will consist of
English sentences unrelated to the phone recognition task. Below, a short
example of creating a word list from sentence prompts will be given. As noted
above the training sentences given here are extracted from some prompts used
with the TIMIT database\index{TIMIT database} and for convenience reasons they 
have been renumbered. For example, the first few items might be as follows
\vspace{1cm}
\begin{verbatim}
    S0001 ONE VALIDATED ACTS OF SCHOOL DISTRICTS
    S0002 TWO OTHER CASES ALSO WERE UNDER ADVISEMENT
    S0003 BOTH FIGURES WOULD GO HIGHER IN LATER YEARS
    S0004 THIS IS NOT A PROGRAM OF SOCIALIZED MEDICINE
    etc
\end{verbatim}
The desired training word list\index{word list} (\texttt{wlist}) could then be
extracted automatically from these.  Before using HTK, one would need to edit
the text into a suitable format.  For example, it would be necessary to change
all white space to newlines and then to use the UNIX utilities \texttt{sort}
and \texttt{uniq} to sort the words into a unique alphabetically ordered set,
with one word per line.  The script \texttt{prompts2wlist} from the
\texttt{HTKTutorial} directory can be used for this purpose.

The dictionary\index{dictionary!construction}\index{dictionary!format}  
itself can be built from a standard source 
using \htool{HDMan}\index{hdman@\htool{HDMan}}.
For this example, the British English BEEP pronouncing dictionary will be
used\footnote{Available by anonymous ftp from 
\texttt{svr-ftp.eng.cam.ac.uk/pub/comp.speech/dictionaries/beep.tar.gz}.
Note that items beginning with unmatched quotes, found at the start
of the dictionary, should be removed.}.  
Its phone set will be adopted without modification except that 
the stress marks will be removed and a short-pause (\texttt{sp}) will
be added to the end of every pronunciation. If the dictionary contains any
silence markers then the \texttt{MP} command will merge the \texttt{sil} and 
\texttt{sp} phones into a single \texttt{sil}. These changes can be applied 
using \htool{HDMan} and an edit script (stored in \texttt{global.ded})
containing the three commands
\begin{verbatim}
   AS sp
   RS cmu
   MP sil sil sp
\end{verbatim}
where \texttt{cmu} refers to a style of stress marking\index{stress marking} in which 
the lexical stress level is
marked by a single digit appended to the phone name (e.g.\ \texttt{eh2} means
the phone \texttt{eh} with level 2 stress). 

\centrefig{step2}{100}{Step 2}

\noindent
The command
\begin{verbatim}
    HDMan -m -w wlist -n monophones1 -l dlog dict beep names
\end{verbatim}
will create a new dictionary called \texttt{dict} by searching the source
dictionaries \texttt{beep} and \texttt{names} to find pronunciations for each
word in \texttt{wlist} (see Fig~\href{f:step2}). Here, the \texttt{wlist} in
question needs only to be a sorted list of the words appearing in the task
grammar given above.

Note that \texttt{names} is a manually constructed file containing
pronunciations for the proper names used in the task grammar. The option
\texttt{-l} instructs \htool{HDMan} to output a log file \texttt{dlog} which 
contains various statistics about the constructed dictionary. In particular,
it indicates if there are words missing. \htool{HDMan} can also output a list
of the phones used, here called \texttt{monophones1}. Once training and test
data has been recorded, an HMM will be estimated for each of these phones.

The general format of each dictionary entry\index{dictionary!entry} is
\begin{verbatim}
    WORD [outsym] p1 p2 p3 ....
\end{verbatim}
which means that the word \texttt{WORD} is pronounced as the sequence of phones
\texttt{p1 p2 p3 ...}.  The string in square brackets specifies the string to
output when that word is recognised.  If it is omitted then the word itself is
output.  If it is included but empty, then nothing is output.

To see what the dictionary is like, here are a few entries.
\begin{verbatim}
    A               ah sp
    A               ax sp
    A               ey sp
    CALL            k ao l sp
    DIAL            d ay ax l sp
    EIGHT           ey t sp
    PHONE           f ow n sp
    SENT-END    []  sil
    SENT-START  []  sil
    SEVEN           s eh v n sp
    TO              t ax sp
    TO              t uw sp
    ZERO            z ia r ow sp
\end{verbatim}
Notice that function words such as \texttt{A} and \texttt{TO}
have multiple pronunciations.
The entries for \texttt{SENT-START} and \texttt{SENT-END} have a silence
model \texttt{sil} as their pronunciations and null output symbols.  

\subsection{Step 3 - Recording the Data}

The\index{recording speech} training and test data will be recorded using the
\HTK\ tool \htool{HSLab}\index{hslab@\htool{HSLab}}. This is a combined 
waveform recording and labelling tool. In this example \htool{HSLab} will be
used just for recording, as labels already exist. However, if you do not have
pre-existing training sentences (such as those from the TIMIT database) you can
create them either from pre-existing text (as described above) or by labelling
your training utterances using \htool{HSLab}. \htool{HSLab} is invoked by typing
\begin{verbatim}
    HSLab noname
\end{verbatim}
This will cause a window to appear with a waveform display area in the upper
half and a row of buttons, including a record button in the lower half.  When
the name of a normal file is given as argument, \htool{HSLab} displays its
contents.  Here, the special file name \texttt{noname} indicates that new data
is to be recorded. \htool{HSLab} makes no special provision for prompting the
user.  However, each time the record button is pressed, it writes the
subsequent recording alternately to a file called \verb|noname_0.| and to a
file called \verb|noname_1.|.  Thus, it is simple to write a shell script
which for each successive line of a prompt file, outputs the prompt, waits for
either \verb|noname_0.| or \verb|noname_1.| to appear, and then renames
the file to the name prepending the prompt (see Fig.~\href{f:step3}).
\index{extensions!wav@\texttt{wav}}

While the prompts for training sentences already were provided for above, the
prompts for test sentences need to be generated before recording them. 
The tool\index{prompt script!generationof}\index{hsgen@\htool{HSGen}}
\htool{HSGen} can be used to do this by randomly traversing a word network and 
outputting each word encountered. For example, typing
\begin{verbatim}
    HSGen -l -n 200 wdnet dict > testprompts
\end{verbatim}
would generate 200 numbered test utterances, the first few of which would look something like:
\begin{verbatim}
    1.  PHONE YOUNG  
    2.  DIAL OH SIX SEVEN SEVEN OH ZERO
    3.  DIAL SEVEN NINE OH OH EIGHT SEVEN NINE NINE
    4.  DIAL SIX NINE SIX TWO NINE FOUR ZERO NINE EIGHT  
    5.  CALL JULIAN ODELL
    ... etc
\end{verbatim}
These can be piped to construct the prompt file \texttt{testprompts} for
the required test data.

\subsection{Step 4 - Creating the Transcription Files}

\sidefig{step3}{50}{Step 3}{-4}{}
To train a set of HMMs, every file of training data must have an associated
phone level transcription.  Since there is no hand labelled data to bootstrap a
set of models, a flat-start scheme will be used instead.  To do this, two sets
of phone transcriptions will be needed.  The set used initially will have no
short-pause (\texttt{sp}) models between words.  Then once reasonable phone
models have been generated, an \texttt{sp} model will be inserted between words
to take care of any pauses introduced by the speaker.\index{flat start}

The starting point for both sets of phone transcription is an
orthographic\index{transcription!orthographic} transcription in \HTK\ label
format.  This can be created fairly easily using a text editor or a scripting
language.
An example of this is found in the RM Demo at point 0.4. Alternatively, the
script \texttt{prompts2mlf} has been provided in the \texttt{HTKTutorial}
directory.
The effect should be to convert the prompt utterances exampled above into the
following form:
\begin{verbatim}
    #!MLF!#
    "*/S0001.lab"
    ONE 
    VALIDATED 
    ACTS 
    OF 
    SCHOOL 
    DISTRICTS
    .
    "*/S0002.lab"
    TWO 
    OTHER 
    CASES 
    ALSO 
    WERE 
    UNDER 
    ADVISEMENT
    .
    "*/S0003.lab" 
    BOTH 
    FIGURES 
    (etc.)
\end{verbatim}
As can be seen, the prompt labels need to be converted into path names, each
word should be written on a single line and each utterance should be terminated
by a single period on its own.  The first line of the file just identifies the
file as a \textit{Master Label File} (MLF).  This is a single file containing a
complete set of transcriptions.  \HTK\ allows each individual transcription to
be stored in its own file but it is more efficient to use an MLF.
\index{master label files}\index{MLF}

The form of the path name used in the MLF deserves some explanation since it is
really a \textit{pattern} and not a name.\index{master label files!patterns}
When \HTK\ processes speech files, it expects to find a transcription (or 
{\it label file}) with the same name but a different extension.  Thus, if the file
\texttt{/root/sjy/data/S0001.wav} was being processed, \HTK\ would look for a
label file called \texttt{/root/sjy/data/S0001.lab}.  When MLF files are used,
\HTK\ scans the file for a pattern which matches the required label file name.
However, an asterix will match any character string and hence the pattern used
in the example is in effect path independent.  It therefore allows the same
transcriptions to be used with different versions of the speech data to be
stored in different locations.

Once the word level MLF has been created, phone level MLFs can be generated
using the label editor \htool{HLEd}\index{hled@\htool{HLEd}}. For example,
assuming that the above word level MLF is stored in the file
\texttt{words.mlf}, the command
\begin{verbatim}
    HLEd -l '*' -d dict -i phones0.mlf mkphones0.led words.mlf
\end{verbatim}
will generate a phone level transcription of the following form
where the \texttt{-l} option is needed to generate the path '\verb+*+' in the 
output patterns.
\begin{verbatim}
    #!MLF!#
    "*/S0001.lab"
    sil
    w
    ah
    n
    v
    ae
    l
    ih
    d
    .. etc
\end{verbatim}
This process is illustrated in Fig.~\href{f:step4}.

The \htool{HLEd} edit script \texttt{mkphones0.led} 
contains the following commands
\begin{verbatim}
   EX
   IS sil sil
   DE sp
\end{verbatim}
The expand \texttt{EX} command replaces each word in \texttt{words.mlf} 
by the corresponding pronunciation in the dictionary file \texttt{dict}.  
The \texttt{IS}
command inserts a silence model \texttt{sil} at the start and end of
every utterance.  Finally, the delete \texttt{DE} command deletes all
short-pause \texttt{sp} labels, which are not wanted in the transcription
labels at this point.  

\centrefig{step4}{60}{Step 4}

\subsection{Step 5 - Coding the Data}

The final stage of data preparation is to parameterise the raw speech
waveforms into sequences of feature vectors.  \HTK\ support both 
FFT-based\index{analysis!FFT-based}
and LPC-based\index{analysis!LPC-based} analysis.  
Here Mel Frequency Cepstral Coefficients (MFCCs)\index{MFCC coefficients},
which are derived from FFT-based log spectra, will be used.

Coding can be performed using the tool \htool{HCopy}\index{hcopy@\htool{HCopy}} 
configured to\index{coding}
automatically convert its input into MFCC vectors.  To do this, a configuration
file (\texttt{config}) is needed which specifies all of the conversion 
parameters\index{parameterisation}. 
Reasonable settings for these are as follows
\begin{verbatim}
    # Coding parameters
    TARGETKIND = MFCC_0
    TARGETRATE = 100000.0
    SAVECOMPRESSED = T
    SAVEWITHCRC = T
    WINDOWSIZE = 250000.0
    USEHAMMING = T
    PREEMCOEF = 0.97
    NUMCHANS = 26
    CEPLIFTER = 22
    NUMCEPS = 12
    ENORMALISE = F
\end{verbatim}
Some of these settings are in fact the default setting, but they
are given explicitly here for completeness.  In brief, they specify
that the target parameters are to be MFCC using $C_0$ as the energy
component, the frame period is 10msec (\HTK\ uses units of 100ns),
the output should be saved in compressed format, and a crc checksum should
be added.  The FFT should use a Hamming window and the signal should
have first order preemphasis applied using a coefficient of 0.97.
The filterbank should have 26 channels and 12 MFCC coefficients should
be output. 
The variable \texttt{ENORMALISE} is by default true and performs energy
normalisation on recorded audio files. It cannot be used with live audio and
since the target system is for live audio, this variable should be set to
false.

Note that explicitly creating coded data files is not necessary, as coding can
be done "on-the-fly" from the original waveform files by specifying the
appropriate configuration file (as above) with the relevant HTK tools. However,
creating these files reduces the amount of preprocessing required during
training, which itself can be a time-consuming process.

To run \htool{HCopy},  a list of
each source file and its corresponding output file is needed.  For example,
the first few lines might look like\index{extensions!mfc@\texttt{mfc}}
\begin{verbatim}
    /root/sjy/waves/S0001.wav /root/sjy/train/S0001.mfc
    /root/sjy/waves/S0002.wav /root/sjy/train/S0002.mfc
    /root/sjy/waves/S0003.wav /root/sjy/train/S0003.mfc
    /root/sjy/waves/S0004.wav /root/sjy/train/S0004.mfc
    (etc.)
\end{verbatim}
Files containing lists of files are referred to as script files\footnote{
Not to be confused with files containing \textit{edit} scripts.
}
and\index{extensions!scp@\texttt{scp}}
by convention are given the extension \texttt{scp} (although 
\HTK\ does not demand this).  Script files are specified using the standard
\texttt{-S} option and their contents are read simply as extensions
to the command line.  Thus, they avoid the need for command lines with
several thousand arguments\footnote{
Most UNIX shells, especially the C shell, only allow a limited and
quite small number of arguments.}.
\index{command line!arguments}\index{command line!script files}

\centrefig{step5}{100}{Step 5}

\noindent
Assuming that the above script is stored in the file \texttt{codetr.scp},
the training data would be coded by executing
\begin{verbatim}
    HCopy -T 1 -C config -S codetr.scp
\end{verbatim}
This is illustrated in Fig.~\href{f:step5}. A similar procedure is
used to code the test data (using \verb|TARGETKIND = MFCC_0_D_A| in
config) after which all of the pieces are in place to start training
the HMMs.
 

\mysect{Creating Monophone HMMs}{egcreatmono}

In this section, the creation of a well-trained set of single-Gaussian
monophone HMMs will be described.  The starting point will be
a set of identical monophone HMMs in which every mean and variance is
identical.  These are then retrained, short-pause models are
added and the silence model is extended slightly.  The monophones
are then retrained.

Some of the dictionary entries have multiple pronunciations.  However,
when \htool{HLEd} was used to expand the word level MLF to create the
phone level MLFs, it arbitrarily selected the first pronunciation it found.
Once reasonable monophone HMMs have been created, the recogniser tool
\htool{HVite} can be used to perform a \textit{forced alignment} 
of\index{forced alignment}
the training data.  By this means, a new phone level MLF is created in which
the choice of pronunciations depends on the acoustic evidence.  This new
MLF can be used to perform a final re-estimation of the monophone HMMs.
\index{monophone HMM!construction of}

\subsection{Step 6 - Creating Flat Start Monophones}

The first step in HMM training is to define a prototype model.  The
parameters of this model are not important, its purpose is to
define the model topology.  For phone-based systems,  a good
topology to use is 3-state left-right with no skips such as the following
\begin{verbatim}
    ~o <VecSize> 39 <MFCC_0_D_A>
    ~h "proto"
    <BeginHMM>
     <NumStates> 5
     <State> 2
        <Mean> 39
          0.0 0.0 0.0 ...
        <Variance> 39
          1.0 1.0 1.0 ...
     <State> 3
        <Mean> 39
          0.0 0.0 0.0 ...
        <Variance> 39
          1.0 1.0 1.0 ...
     <State> 4
        <Mean> 39
          0.0 0.0 0.0 ...
        <Variance> 39
          1.0 1.0 1.0 ...
     <TransP> 5
      0.0 1.0 0.0 0.0 0.0
      0.0 0.6 0.4 0.0 0.0
      0.0 0.0 0.6 0.4 0.0
      0.0 0.0 0.0 0.7 0.3
      0.0 0.0 0.0 0.0 0.0
    <EndHMM>
\end{verbatim}
where each ellipsed vector is of length 39.  This number, 39, is computed from
the length of the parameterised static vector (\texttt{MFCC\_0} = 13) plus
the delta coefficients (+13) plus the acceleration coefficients (+13).

The \HTK\ tool \htool{HCompV}\index{hcompv@\htool{HCompV}} will scan a set of data files, compute
the global mean and variance and set all of the Gaussians in a given HMM
to have the same mean and variance.\index{flat start}
Hence, assuming that a list of all the training files is stored in
\texttt{train.scp}, the command
\begin{verbatim}
    HCompV -C config -f 0.01 -m -S train.scp -M hmm0 proto
\end{verbatim}
will create a new version of \texttt{proto} in the directory \texttt{hmm0}
in which the zero means and unit variances above have been replaced
by the global speech means and variances.
Note that the prototype HMM defines the parameter kind as \texttt{MFCC\_0\_D\_A} (Note: 'zero' not 'oh').
This means that delta and acceleration coefficients are to be computed and
appended to the static MFCC coefficients computed and stored during the
coding process described above.  To ensure that these are computed during loading,
the configuration file \texttt{config} should be modified
to change the target kind, i.e.\ the configuration file entry for
\texttt{TARGETKIND} should be changed to
\begin{verbatim}
   TARGETKIND = MFCC_0_D_A
\end{verbatim}
\htool{HCompV} has a number of options specified for it.  The 
\texttt{-f} option causes a variance floor 
macro\index{variance floor macros} (called \texttt{vFloors}) to be generated which
is equal to 0.01 times the global variance.  This is a vector
of values which will be used to set a floor on the variances estimated
in the subsequent steps.  The \texttt{-m} option asks for means to be computed
as well as variances.  Given this
new prototype model stored in the directory
\texttt{hmm0}, a \textit{Master Macro File}\index{master macro files} 
(MMF) called \texttt{hmmdefs} \index{MMF}
containing a copy for each of the required monophone HMMs is constructed 
by manually copying the prototype and relabelling it for each required 
monophone (including ``sil'').  
The format of an MMF is similar to that
of an MLF and it serves a similar purpose in that it avoids having
a large number of individual HMM definition files\index{HMM!definition files} 
(see Fig.~\href{f:MMFeg}).

\centrefig{MMFeg}{85}{Form of Master Macro Files}

The flat start monophones stored in the directory \texttt{hmm0} are
re-estimated using the embedded re-estimation\index{embedded re-estimation} 
tool \htool{HERest}\index{herest@\htool{HERest}}
invoked as follows
\begin{verbatim}
   HERest -C config -I phones0.mlf -t 250.0 150.0 1000.0 \
    -S train.scp -H hmm0/macros -H hmm0/hmmdefs -M hmm1 monophones0
\end{verbatim}
The effect of this is to load all the models in \texttt{hmm0} which are
listed in
the model list \texttt{monophones0} (\texttt{monophones1} less the short 
pause (\texttt{sp}) model). These are then re-estimated them using the data
listed in \texttt{train.scp} and the new model set is stored in the
directory \texttt{hmm1}.
Most of the files used in this invocation of \htool{HERest} have 
already been described.  The exception is the file \texttt{macros}.
This should contain a so-called \textit{global options} macro and
the variance floor macro \texttt{vFloors} generated earlier.  The global options macro
simply defines the HMM parameter kind and the vector size i.e.
\begin{verbatim}
   ~o <MFCC_0_D_A> <VecSize> 39
\end{verbatim}
See Fig.~\href{f:MMFeg}. This can be combined with \texttt{vFloors} into a text file
called \texttt{macros}.

\centrefig{step6}{85}{Step 6}

The \texttt{-t} option sets the pruning\index{pruning} thresholds to be used during
training.  Pruning limits the range of state alignments that the
forward-backward algorithm includes in its summation and it
can reduce the amount of computation required by an
order of magnitude.  For most training files, a very tight pruning threshold
can be set, however, some training files will provide poorer acoustic
matching and in consequence a wider pruning beam is needed.  \htool{HERest}
deals with this by having an auto-incrementing pruning threshold.  In the
above example, pruning is normally 250.0.  If re-estimation fails on any
particular file, the threshold is increased by 150.0 and the file is
reprocessed.  This is repeated until either the file is successfully
processed or the pruning limit of 1000.0 is exceeded.  At this point it 
is safe to assume that there
is a serious problem with the training file and hence the fault should be fixed
(typically it will be an incorrect transcription) or the training file should be discarded.
The process leading to the initial set of monophones in the directory
\texttt{hmm0} is illustrated in Fig.~\href{f:step6}.

Each time \htool{HERest} is run it performs a single re-estimation.  Each new
HMM set is stored in a new directory.  Execution of \htool{HERest} should be
repeated twice more, changing the name of the input and output directories (set
with the options \texttt{-H} and \texttt{-M}) each time, until the directory
\texttt{hmm3} contains the final set of initialised monophone HMMs.

\subsection{Step 7 - Fixing the Silence Models}

\sidefig{egsils}{55}{Silence Models}{-4}{
The previous step has generated a 3 state left-to-right HMM for each
phone and also a HMM for the silence model\index{silence model} \texttt{sil}.  The 
next step is to add extra transitions from states 2 to 4 and from
states 4 to 2\index{transitions!adding them}
in the silence model.  The idea here is to make the model more robust
by allowing individual states to absorb the various
impulsive noises in the training data.  The backward skip allows this to happen
without committing the model to transit to the following word.

Also, at this point, a 1 state
short pause\index{short pause} \texttt{sp} model should be created.  
This should be a so-called \textit{tee-model}\index{tee-models}
which has a direct transition from entry to exit node.
This \texttt{sp} has its emitting state tied to the centre state of the silence model.
The required topology of the two silence models is shown in Fig.~\href{f:egsils}.
}

These silence models can be created in two stages
\begin{itemize}
\item 
Use a text editor on the file \texttt{hmm3/hmmdefs} to copy the centre state of
the \texttt{sil} model to
make a new \texttt{sp} model and store the resulting MMF \texttt{hmmdefs}, which 
includes the new \texttt{sp} model, in the new directory \texttt{hmm4}. 

\item Run the HMM editor \htool{HHEd}\index{hhed@\htool{HHEd}} to add 
the extra transitions required
and tie the \texttt{sp} state to the centre \texttt{sil} state
\end{itemize}

\htool{HHEd} works in a similar way to \htool{HLEd}.  It applies a set of commands in
a script to modify a set of HMMs.  In this case, it is executed as follows
\begin{verbatim}
    HHEd -H hmm4/macros -H hmm4/hmmdefs -M hmm5 sil.hed monophones1
\end{verbatim}
where \texttt{sil.hed} contains the following commands
\begin{verbatim}
    AT 2 4 0.2 {sil.transP}
    AT 4 2 0.2 {sil.transP}
    AT 1 3 0.3 {sp.transP}
    TI silst {sil.state[3],sp.state[2]}
\end{verbatim}
The \texttt{AT}\index{at@\texttt{AT} command} commands add transitions to the
given transition matrices and the final \texttt{TI}\index{ti@\texttt{TI}
command} command creates a tied-state called \texttt{silst}.  The parameters of
this tied-state are stored in the \texttt{hmmdefs} file and within each silence
model, the original state parameters are replaced by the name of this
macro\index{macros}.  Macros are described in more detail below. For now it is
sufficient to regard them simply as the mechanism by which
\HTK\ implements parameter sharing. 
Note that the phone list used here has been changed, because the original list
\texttt{monophones0} has been extended by the new \texttt{sp} model. The new 
file is called \texttt{monophones1} and has been used in the above \htool{HHEd}
command.

\centrefig{step7}{110}{Step 7}

Finally, another two passes of \htool{HERest} are applied using the phone
transcriptions with \texttt{sp} models between words.  This leaves the
set of monophone HMMs created so far in the directory \texttt{hmm7}.
This step is illustrated in Fig.~\href{f:step7}

\subsection{Step 8 - Realigning the Training Data}

As noted earlier, the dictionary contains multiple pronunciations for 
some words, particularly function words.  The phone models created so
far can be used to \textit{realign} the training data and create new
transcriptions.  This can be done with a single invocation of the\index{realignment}
\HTK\ recognition tool \htool{HVite}\index{hvite@\htool{HVite}}, viz
\begin{verbatim}
    HVite -l '*' -o SWT -b silence -C config -a -H hmm7/macros \
          -H hmm7/hmmdefs -i aligned.mlf -m -t 250.0 -y lab \
          -I words.mlf -S train.scp  dict monophones1 
\end{verbatim}
This command uses the HMMs stored in \texttt{hmm7} to transform the input
word level transcription \texttt{words.mlf} to the new phone level transcription
\texttt{aligned.mlf} using the pronunciations stored in the dictionary
\texttt{dict} (see Fig~\href{f:step8}).   The key difference between this
operation and the original word-to-phone mapping performed by \htool{HLEd}
in step 4 is that the recogniser considers all pronunciations for each
word and outputs the pronunciation that best matches the acoustic data.
\index{phone alignment}\index{phone mapping}

In the above, the \texttt{-b} option is used to insert a silence model\index{silence model}
at the start and end of each utterance.  The name \texttt{silence} is used
on the assumption that the dictionary contains an entry
\begin{verbatim}
    silence sil
\end{verbatim}
Note that the dictionary should be sorted firstly by case (upper case first) and secondly 
alphabetically.  The \texttt{-t} option sets a pruning level of 250.0 and the \texttt{-o} 
option is used to suppress the printing of scores, word names and time
boundaries in the output MLF.

\centrefig{step8}{85}{Step 8}

Once the new phone alignments have been created, another  2 passes
of \htool{HERest} can be applied to reestimate the HMM set parameters
again.  Assuming that this is done, the final monophone HMM set will
be stored in directory \texttt{hmm9}.

When aligning the data, it is sometimes clear that there is significant
amounts of silence at the beginning and end of some utterances (to spot this the 
time-stamp information would need to be output during the alignment 
so the option {\tt -o SW} would need to be used). Rather than explicitly 
extracting the portion of the utterance with the appropriate amount of silence at the start and end, the script file specified using the {\tt -S} option can 
be used for this purpose. Suppose that \texttt{train.scp} contains the 
following lines
\begin{verbatim}
/root/sjy/data/S0001.mfc
/root/sjy/data/S0002.mfc
/root/sjy/data/S0003.mfc
...
\end{verbatim}
To specify a particular segmentation from this a new scp-file would be
generated containing
\begin{verbatim}
S0001.mfc=/root/sjy/data/S0001.mfc[20,297]
S0002.mfc=/root/sjy/data/S0002.mfc[25,496]
S0003.mfc=/root/sjy/data/S0003.mfc[22,308]
...
\end{verbatim}
Where for the first utterance only the segment from the 21st frame (the first frame is labelled as frame 0) to the 298th frame inclusive of the original MFCC file should be used. Note with this form of scp file the label file would also have to be modified to (here the word level MLF is considered, but both word and phone-level MLFs would need to be changed)
\begin{verbatim}
    #!MLF!#
    "S0001.lab"
    ONE 
    VALIDATED 
    ACTS 
    OF 
    SCHOOL 
    DISTRICTS
    .
    "S0002.lab"
    TWO 
    ....
\end{verbatim}
This is a general process for specifying segmentations of the training
and test data without explicitly extracting the data (and thus requiring a new
MFCC file to be stored). For tasks where data must be segmented as part of the
recognition task, for example Broadcast News transcription, this form of
approach is very useful.

Note for the rest of this tutorial it is assumed that this segmentation
process was {\em not} necessary.

\mysect{Creating Tied-State Triphones}{egcreattri}

Given a set of monophone HMMs, the final stage of model building is to create
context-dependent triphone\index{HMM!triphones} HMMs.  This is done in 
two steps.  Firstly, the
monophone transcriptions are converted to triphone transcriptions and a set
of triphone models are created by copying the monophones and re-estimating.
Secondly, similar acoustic states of these triphones are tied to ensure that
all state distributions can be robustly estimated.

\subsection{Step 9 - Making Triphones from Monophones}

Context-dependent triphones can be made by simply 
cloning\index{HMM!cloning}\index{cloning} monophones and then
re-estimating using triphone transcriptions.  The latter should be created
first using \htool{HLEd}\index{hled@\htool{HLEd}} because 
a side-effect is to generate a list of all
the triphones for which there is at least one example in the training data.
That is, executing
\begin{verbatim}
    HLEd -n triphones1 -l '*' -i wintri.mlf mktri.led aligned.mlf
\end{verbatim}
will convert the monophone transcriptions in \texttt{aligned.mlf} to
an equivalent set of triphone transcriptions in \texttt{wintri.mlf}.
At the same time, a list of triphones is written to the file \texttt{triphones1}.
The edit script \texttt{mktri.led}  contains the commands
\begin{verbatim}
    WB sp
    WB sil
    TC 
\end{verbatim}
The two \texttt{WB}\index{wb@\texttt{WB} command} commands define \texttt{sp} and \texttt{sil}
as \textit{word boundary symbols}.  These then block the addition of
context in the \texttt{TI} command, seen in the following script, which converts all phones
(except word boundary symbols) to triphones
\index{triphones!word internal}\index{triphones!from monophones}\index{triphones!by cloning}.  
For example,
\begin{verbatim}
    sil th ih s sp m ae n sp ...
\end{verbatim}
becomes
\begin{verbatim}
    sil th+ih th-ih+s ih-s sp m+ae m-ae+n ae-n sp ...
\end{verbatim}
This style of triphone transcription is referred to as \textit{word internal}.
\index{word internal}
Note that some biphones will also be generated as contexts at word boundaries
will sometimes only include two phones.

The cloning of models can be done efficiently using the HMM editor \htool{HHEd}:
\begin{verbatim}
    HHEd -B -H hmm9/macros -H hmm9/hmmdefs -M hmm10 
         mktri.hed monophones1
\end{verbatim}
where the edit script \texttt{mktri.hed}
contains a clone command \texttt{CL} followed by \texttt{TI} commands to tie all of
the transition matrices in each triphone\index{triphones!notation} set, that is:
\begin{verbatim}
    CL triphones1
    TI T_ah {(*-ah+*,ah+*,*-ah).transP}
    TI T_ax {(*-ax+*,ax+*,*-ax).transP}
    TI T_ey {(*-ey+*,ey+*,*-ey).transP}
    TI T_b {(*-b+*,b+*,*-b).transP}
    TI T_ay {(*-ay+*,ay+*,*-ay).transP}
    ...
\end{verbatim}  
The file \texttt{mktri.hed} can be generated using the {\em Perl} script
\texttt{maketrihed} included in the \texttt{HTKTutorial} directory.
When running the \htool{HHEd}\index{hled@\htool{HHEd}} command you
will get warnings about trying to tie transition matrices for the sil
and sp models. Since neither model is context-dependent there aren't
actually any matrices to tie.

The clone command \texttt{CL}\index{cl@\texttt{CL} command} takes as its
argument the name of the file containing the list of triphones (and
biphones)\index{cloning}\index{parameter tying}\index{item lists} generated
above.  For each model of the form \texttt{a-b+c} in this list, it looks for
the monophone \texttt{b} and makes a copy of it.\index{tying!transition
matrices} Each \texttt{TI} command takes as its argument the name of a macro
and a list of HMM components.  The latter uses a notation which attempts to
mimic the hierarchical structure of the HMM parameter set in which the
transition matrix \texttt{transP} can be regarded as a sub-component of each
HMM.  The list of items within brackets are patterns designed to match the set
of triphones, right biphones and left biphones for each phone.

\centrefig{egtranstie}{80}{Tying Transition Matrices}

Up to now macros and tying have only been mentioned in passing.  Although a
full explanation must wait until chapter~\ref{c:HMMDefs}, a brief explanation
is warranted here.  Tying means that one or more HMMs share the same set of
parameters.  On the left side of Fig.~\href{f:egtranstie}, two HMM definitions
are shown.  Each HMM has its own individual transition matrix.  On the right
side, the effect of the first \texttt{TI} command in the edit script
\texttt{mktri.hed} is shown.  The individual transition matrices have been
replaced by a reference to a \textit{macro} called \texttt{T\_ah} which
contains a matrix shared by both models.  When reestimating tied parameters,
the data which would have been used for each of the original untied parameters
is pooled so that a much more reliable estimate can be obtained.

Of course, tying could affect performance if performed indiscriminately.
Hence, it is important to only tie parameters which have little effect on
discrimination.  This is the case here where the transition parameters do not
vary significantly with acoustic context but nevertheless need to be estimated
accurately.  Some triphones will occur only once or twice and so very poor
estimates would be obtained if tying was not done.  These problems of data
insufficiency will affect the output distributions too, but this will be dealt
with in the next step.

Hitherto, all HMMs have been stored in text format and could be inspected like
any text file.  Now however, the model files will be getting larger and space
and load/store times become an issue.  For increased efficiency,
\HTK\ can store and load MMFs in binary\index{HMM!binary storage}
format.  Setting the standard \texttt{-B} option causes this to happen.

\sidefig{step9}{55}{Step 9}{-4}{
Once the context-dependent models have been cloned, the new triphone set can be
re-estimated using \htool{HERest}.  This is done as previously except that the
monophone model list is replaced by a triphone list and the triphone
transcriptions are used in place of the monophone transcriptions.  

For the final pass of \htool{HERest}, the \texttt{-s} option should be used to
generate a file of state occupation statistics called \texttt{stats}.  In
combination with the means and variances, these enable likelihoods to be
calculated for clusters of states and are needed during the state-clustering
process \index{statistics!state occupation} described below.
Fig.~\href{f:step9} illustrates this step of the HMM construction
procedure. Re-estimation should be again done twice, so that the resultant
model sets will ultimately be saved in \texttt{hmm12}.  
}
\begin{verbatim}
   HERest -B -C config -I wintri.mlf -t 250.0 150.0 1000.0 -s stats \
    -S train.scp -H hmm11/macros -H hmm11/hmmdefs -M hmm12 triphones1
\end{verbatim}


\subsection{Step 10 - Making Tied-State Triphones}

The outcome of the previous stage is a set of triphone HMMs with all triphones
in a phone set sharing the same transition matrix.  When estimating these
models, many of the variances in the output distributions
will have been floored since there will be\index{variance!flooring problems}\index{state tying}
\index{tying!states}\index{data insufficiency}
insufficient data associated with many of the states.  The last step in
the model building process is to tie states within triphone sets
in order to share data and thus be able to make robust parameter estimates.

In the previous step, the \texttt{TI} command was used to
explicitly tie all members of a set of transition matrices together. 
However,
the choice of which states to tie requires a bit more  subtlety since
the performance of the recogniser depends crucially on how accurate
the state output distributions capture the statistics of the speech data.

\htool{HHEd} provides two mechanisms which allow states to be clustered 
and\index{state clustering}
then each cluster tied.  The first is data-driven and uses a similarity
measure between states.  The second uses decision trees\index{decision trees}
and is based on asking questions about the left and right contexts of each
triphone.  The decision tree attempts to find those contexts which make the largest
difference to the acoustics and which should therefore distinguish clusters.

Decision tree state tying is performed by running \htool{HHEd} 
in the normal way, i.e.
\begin{verbatim}
   HHEd -B -H hmm12/macros -H hmm12/hmmdefs -M hmm13 \
        tree.hed triphones1 > log
\end{verbatim}
Notice that the output is saved in a log file.  This is important since
some tuning of thresholds is usually needed.

The edit script \texttt{tree.hed}, which contains the instructions regarding
which contexts to examine for possible clustering, can be rather long and
complex. A script for automatically generating this file, \texttt{mkclscript},
is found in the RM Demo. A version of the \texttt{tree.hed} script, which can
be used with this tutorial, is included in the \texttt{HTKTutorial} directory.
Note that this script is only capable of creating the TB commands (decision 
tree clustering of states).  The questions (QS) still need defining by
the user.  There is, however, an example list of questions which may be 
suitable to some tasks (or at least useful as an example) supplied with the 
RM demo (lib/quests.hed).  The entire script appropriate for clustering 
English phone models is too long to show here in the text, however, its main 
components are given by the following fragments:

\begin{verbatim}

    RO 100.0 stats
    TR 0
    QS "L_Class-Stop" {p-*,b-*,t-*,d-*,k-*,g-*} 
    QS "R_Class-Stop" {*+p,*+b,*+t,*+d,*+k,*+g} 
    QS "L_Nasal" {m-*,n-*,ng-*} 
    QS "R_Nasal" {*+m,*+n,*+ng}
    QS "L_Glide" {y-*,w-*} 
    QS "R_Glide" {*+y,*+w}
    ....
    QS "L_w" {w-*} 
    QS "R_w" {*+w} 
    QS "L_y" {y-*} 
    QS "R_y" {*+y} 
    QS "L_z" {z-*} 
    QS "R_z" {*+z} 
 
    TR 2

    TB 350.0 "aa_s2" {(aa, *-aa, *-aa+*, aa+*).state[2]}
    TB 350.0 "ae_s2" {(ae, *-ae, *-ae+*, ae+*).state[2]}
    TB 350.0 "ah_s2" {(ah, *-ah, *-ah+*, ah+*).state[2]}
    TB 350.0 "uh_s2" {(uh, *-uh, *-uh+*, uh+*).state[2]}
    ....
    TB 350.0 "y_s4" {(y, *-y, *-y+*, y+*).state[4]}
    TB 350.0 "z_s4" {(z, *-z, *-z+*, z+*).state[4]}
    TB 350.0 "zh_s4" {(zh, *-zh, *-zh+*, zh+*).state[4]}

    TR 1
    
    AU "fulllist"
    CO "tiedlist"

    ST "trees"
\end{verbatim}
Firstly, the \texttt{RO}\index{ro@\texttt{RO} command} command is used to set
the outlier threshold\index{outlier threshold} to 100.0 and load the statistics
file\index{statistics file} generated at the end of the previous step.  The
outlier threshold determines the minimum occupancy\index{minimum occupancy} of
any cluster and prevents a single outlier state forming a singleton cluster
just because it is acoustically very different to all the other states.  The
\texttt{TR}\index{tr@\texttt{TR} command} command sets the trace level to zero
in preparation for loading in the questions.  Each
\texttt{QS}\index{qs@\texttt{QS} command} command loads a single question and
each question is defined by a set of contexts.  For example, the first
\texttt{QS} command defines a question called \texttt{L\_Class-Stop} which is
true if the left context is either of the stops \texttt{p},
\texttt{b}, \texttt{t}, \texttt{d}, \texttt{k} or \texttt{g}.

\sidefig{step10}{50}{Step 10}{-4}{}
Notice that for a triphone system, it is necessary to include questions
referring to both the right and left contexts of a phone. The questions should
progress from wide, general classifications (such as consonant, vowel, nasal,
diphthong, etc.) to specific instances of each phone.
Ideally, the full set of questions loaded using the \texttt{QS} command would
include every possible context which can influence the acoustic realisation of
a phone, and can include any linguistic or phonetic classification which may be
relevant. There is no harm in creating extra unnecessary questions, because
those which are determined to be irrelevant to the data will be ignored.

The second \texttt{TR} command enables intermediate level progress reporting so
that each of the following \texttt{TB} commands\index{tb@\texttt{TB} command}
can\index{tree building} be monitored.  Each of these \texttt{TB} commands
clusters one specific set of states.  For example, the first \texttt{TB}
command applies to the first emitting state of all context-dependent models for
the phone \texttt{aa}.

Each \texttt{TB} command works as follows.  Firstly, each set of states defined
by the final argument is pooled to form a single cluster.  Each question in the
question set loaded by the \texttt{QS} commands is used to split the pool into
two sets.  The use of two sets rather than one, allows the log likelihood of
the training data to be increased and the question which maximises this
increase is selected for the first branch of the tree. The process is then
repeated until the increase in log likelihood achievable by any question at any
node is less than the threshold specified by the first argument (350.0 in this
case).

Note that the values given in the \texttt{RO} and \texttt{TB} commands affect
the degree of tying and therefore the number of states output in the clustered
system.  The values should be varied according to the amount of training data
available.
As a final step to the clustering, any pair of clusters which can be merged
\index{cluster merging} such that the decrease in log likelihood is below
the threshold is merged.  On completion, the states in each cluster $i$ are
tied to form a single shared state with macro name \texttt{xxx\_i} where
\texttt{xxx} is the name given by the second argument of the \texttt{TB}
command.

The set of triphones used so far only includes those needed to cover the
training data. The \texttt{AU} command takes as its argument a new list of
triphones expanded to include all those needed for recognition.  This list can
be generated, for example, by using \htool{HDMan} on the entire dictionary (not
just the training dictionary), converting it to triphones using the command
\texttt{TC} and outputting a list of the distinct triphones to a file using the
option \texttt{-n} 

\begin{verbatim}
    HDMan -b sp -n fulllist -g global.ded -l flog beep-tri beep
\end{verbatim}

\noindent
The -b sp option specifies that the sp phone is used as a word boundary, and so 
is excluded from triphones.  The effect of the \texttt{AU} command is to use the 
decision trees to synthesise all of the new previously unseen triphones in the new 
list.
\index{au@\texttt{AU} command}

Once all state-tying has been completed and new models synthesised, 
some models may  share exactly
the same 3 states and transition matrices and are thus identical.
The \texttt{CO} command\index{co@\texttt{CO} command}\index{model compaction} is used
to compact the model set by finding all identical models and tying them
together\footnote{
Note that if the transition matrices had not been tied, the \texttt{CO}
command would be ineffective since all models would be different by
virtue of their unique transition matrices.}, producing a new list of models
called \texttt{tiedlist}.

One of the advantages of using decision tree clustering is that it allows
previously\index{unseen triphones}
unseen triphones to be synthesised.  To do this, the trees must
be saved and this is done by the \texttt{ST} command\index{st@\texttt{ST} command}.
Later if new previously unseen triphones are required, for example in the
pronunciation of a new vocabulary item, the existing model set can be
reloaded into \htool{HHEd}, the trees reloaded using 
the \texttt{LT} command\index{lt@\texttt{LT} command}
and then a new extended list of triphones created using 
the \texttt{AU} command.\index{au@\texttt{AU} command}

After \htool{HHEd} has completed,  the effect of tying can be studied and
the thresholds adjusted if necessary.  The log file will
include summary statistics which give the total number of physical
states remaining and the number of models after compacting.

Finally, and for the last time, the models are re-estimated twice using
\htool{HERest}.  Fig.~\href{f:step10} illustrates this last step in the HMM
build process.  The trained models are then contained in the file
\texttt{hmm15/hmmdefs}.

\mysect{Recogniser Evaluation}{egrectest}

The recogniser is now complete and its performance can be evaluated.  
The recognition network and dictionary have already been constructed, 
and test data has been recorded.  
Thus, all that is necessary is to run the recogniser and 
then evaluate the results using the \HTK\ analysis tool \htool{HResults}\index{recogniser evaluation}

\subsection{Step 11 - Recognising the Test Data}

Assuming that \texttt{test.scp} holds a list of the coded test files,
then each test file will be recognised and its transcription output to
an MLF called \texttt{recout.mlf} by executing the following
\begin{verbatim}
    HVite -H hmm15/macros -H hmm15/hmmdefs -S test.scp \
          -l '*' -i recout.mlf -w wdnet \
          -p 0.0 -s 5.0 dict tiedlist
\end{verbatim}
The options \texttt{-p} and \texttt{-s} set the \textit{word insertion penalty}
\index{word insertion penalty}
and the \textit{grammar scale factor}, \index{grammar scale factor}
respectively.  The word insertion penalty
is a fixed value added to each token when it transits from the end of one word
to the start of the next.  The grammar scale factor is the amount by which
the language model probability is scaled before being 
added to each token  as it transits from the end of one word
to the start of the next.  These parameters can have a significant effect
on recognition performance and hence, some tuning on development test data
is well worthwhile.

The dictionary contains monophone transcriptions whereas the supplied HMM list
contains word internal triphones.  \htool{HVite}\index{hvite@\htool{HVite}} 
will make the necessary 
conversions when loading the word network \texttt{wdnet}.  However, 
if the HMM list contained both monophones and context-dependent phones
then \htool{HVite} would become confused.  The required form of 
word-internal network\index{networks!word-internal} 
expansion can be forced by setting the configuration variable
\texttt{FORCECXTEXP}\index{forcecxtexp@\texttt{FORCECXTEXP}} to true and 
\texttt{ALLOWXWRDEXP}\index{allowxwrdexp@\texttt{ALLOWXWRDEXP}} to 
false (see chapter~\ref{c:netdict} for details).\index{accuracy figure}

Assuming that the MLF \texttt{testref.mlf} contains word level transcriptions
for each test file\footnote{The \htool{HLEd} tool may have to be used to insert silences 
at the start and end of each transcription or alternatively
\htool{HResults} can be used to ignore silences (or any other symbols) using
the \texttt{-e} option.}, the actual
performance can be determined by running 
\htool{HResults} as follows
\begin{verbatim}
    HResults -I testref.mlf tiedlist recout.mlf
\end{verbatim}
the result would be a print-out of the form
\begin{verbatim}
    ====================== HTK Results Analysis ==============
      Date: Sun Oct 22 16:14:45 1995
      Ref : testrefs.mlf
      Rec : recout.mlf
    ------------------------ Overall Results -----------------
    SENT: %Correct=98.50 [H=197, S=3, N=200]
    WORD: %Corr=99.77, Acc=99.65 [H=853, D=1, S=1, I=1, N=855]
    ==========================================================
\end{verbatim}
The line starting with \texttt{SENT:} indicates that of the 200 test utterances,
197  (98.50\%) were correctly recognised.  The following line starting with \texttt{WORD:} 
gives the word level statistics and indicates that of the 855 words in total,
853 (99.77\%) were recognised correctly.  There was 1 deletion error (\texttt{D}), 
1 substitution\index{recognition!results analysis}
error (\texttt{S}) and 1 insertion error (\texttt{I}).  The accuracy figure (\texttt{Acc})
of 99.65\% is lower than the percentage correct (\texttt{Cor}) because it takes
account of the insertion errors which the latter ignores.

\centrefig{step11}{120}{Step 11}

\mysect{Running the Recogniser Live}{egreclive}

The recogniser can also be run with live input\index{live input}.  
\index{recognition!direct audio input}
To do this it is only
necessary to set the configuration variables needed to convert the input
audio to the correct form of  parameterisation.  Specifically, the following
needs to be appended to the configuration file \texttt{config} to
create a new configuration file \texttt{config2}
\begin{verbatim}
    # Waveform capture
    SOURCERATE=625.0
    SOURCEKIND=HAUDIO
    SOURCEFORMAT=HTK
    ENORMALISE=F
    USESILDET=T
    MEASURESIL=F
    OUTSILWARN=T
\end{verbatim}
These indicate that the source is direct audio with sample period 62.5
$\mu$secs.  The silence detector is enabled and a measurement of the background
speech/silence levels should be made at start-up.  The final line makes sure
that a warning is printed when this silence measurement is being made.

Once the configuration file has been set-up for direct audio input,
\htool{HVite} can be run as in the previous step except that no files need be
given as arguments

\begin{verbatim}
    HVite -H hmm15/macros -H hmm15/hmmdefs -C config2 \
          -w wdnet -p 0.0 -s 5.0 dict tiedlist
\end{verbatim}

On start-up, \htool{HVite} will prompt the user to speak an
arbitrary sentence (approx. 4 secs) in order to measure the speech and
background silence levels. It will then repeatedly recognise and, if trace
level bit 1 is set, it will output each utterance to the terminal. A typical
session is as follows\index{recognition!output}

\begin{verbatim}
   Read 1648 physical / 4131 logical HMMs
   Read lattice with 26 nodes / 52 arcs
   Created network with 123 nodes / 151 links

   READY[1]>
   Please speak sentence - measuring levels
   Level measurement completed
   DIAL FOUR SIX FOUR TWO FOUR OH  
        == [303 frames] -95.5773 [Ac=-28630.2 LM=-329.8] (Act=21.8)
   
   READY[2]>
    DIAL ZERO EIGHT SIX TWO 
        == [228 frames] -99.3758 [Ac=-22402.2 LM=-255.5] (Act=21.8)
   
   READY[3]>
    etc
\end{verbatim}
During loading, information will be printed out regarding the different
recogniser components. The physical models are the distinct HMMs used by 
the system, while the logical models include all model names. The number 
of logical models is higher than the number of physical models because many 
logically distinct models have been determined to be physically identical 
and have been merged during the previous model building steps. The lattice
information refers to the number of links and nodes in the recognition syntax.
The network information refers to actual recognition network built by
expanding the lattice using the current HMM set, dictionary and any context
expansion rules specified.
After each utterance, the numerical information gives the total number
of frames, the average log likelihood per frame, the total acoustic score,
the total language model score and the average number of models active.

Note that if it was required to recognise a new name, then the
following two changes would be needed
\begin{enumerate}
\item the grammar would be altered to include the new name
\item a pronunciation for the new name would be added to the dictionary
\end{enumerate}
If the new name required triphones which did not exist, then they could be
created by loading the existing triphone set into
\htool{HHEd}\index{hhed@\htool{HHEd}}, loading the decision trees using the
\texttt{LT} command\index{lt@\texttt{LT} command} and then using the
\texttt{AU} command\index{au@\texttt{AU} command} to generate a new complete
triphone set.\index{triphones!synthesising unseen}

\mysect{Adapting the HMMs}{exsysadapt}

The previous sections have described the stages required to build a simple 
voice dialling system. To simplify this process, speaker dependent models were 
developed using training data from a single user. Consequently, recognition 
accuracy for any other users would be poor.
To overcome this limitation, a set of speaker independent models could be 
constructed, but this would require large amounts of training data from a 
variety of speakers. An alternative is to adapt the current speaker dependent 
models to the characteristics of a new speaker using a small amount of 
training or adaptation data\index{adaptation}. In general, adaptation 
techniques are applied to well trained speaker independent model sets to 
enable them to better model the characteristics of particular speakers.

\HTK\ supports both supervised adaptation\index{adaptation!supervised adaptation}, 
where the true transcription of the data is known and unsupervised 
adaptation\index{adaptation!unsupervised adaptation} where the
transcription is hypothesised.
In \HTK\ supervised adaptation is performed offline by
\htool{HERest} using maximum likelihood linear transformations
(for example MLLR, CMLLR)\index{adaptation!MLLR} 
and/or maximum a-posteriori (MAP)\index{adaptation!MAP} techniques to 
estimate
a series of transforms or a transformed model set, that reduces the mismatch 
between the current model set and the adaptation data. Unsupervised 
adaptation is provided by \htool{HVite}, using just linear transformations.

The following sections describe offline supervised adaptation (using
MLLR) with the use of \htool{HERest}.

\subsection{Step 12 - Preparation of the Adaptation Data}

As in normal recogniser development, the first stage in adaptation involves 
data preparation. Speech data from the new user is required for both adapting 
the models and testing the adapted system. The data can be obtained in a 
similar fashion to that taken to prepare the original test data.
Initially, prompt lists for the adaptation and test data will be generated using 
\htool{HSGen}. For example, typing

\begin{verbatim}
    HSGen -l -n 20 wdnet dict > promptsAdapt
    HSGen -l -n 20 wdnet dict > promptsTest
\end{verbatim}

\noindent
would produce two prompt files for the adaptation and test data. The amount of 
adaptation data required will normally be found empirically, but a performance 
improvement should be observable after just 30 seconds of speech.
In this case, around 20 utterances should be sufficient.
\htool{HSLab} can be used to record the associated speech.

Assuming that the script files \texttt{codeAdapt.scp} and \texttt{codeTest.scp} 
list the source and output files for the adaptation and test data respectively 
then both sets of speech can then be coded using the \htool{HCopy} commands given 
below.

\begin{verbatim}
    HCopy -C config -S codeAdapt.scp
    HCopy -C config -S codeTest.scp
\end{verbatim}

\noindent
The final stage of preparation involves generating context dependent phone 
transcriptions of the adaptation data and word level transcriptions of the test 
data for use in adapting the models and evaluating their performance.
The transcriptions of the test data can be obtained using \texttt{prompts2mlf}.
To minimise the problem of multiple pronunciations the phone level 
transcriptions of the adaptation data can be obtained by using \htool{HVite}
to perform a \textit{forced alignment} of the adaptation data. Assuming 
that word level transcriptions are listed in \texttt{adaptWords.mlf}, then the
following command will place the phone transcriptions in 
\texttt{adaptPhones.mlf}.

\begin{verbatim}
    HVite -l '*' -o SWT -b silence -C config -a -H hmm15/macros \ 
          -H hmm15/hmmdefs -i adaptPhones.mlf -m -t 250.0 \ 
          -I adaptWords.mlf -y lab -S adapt.scp dict tiedlist
\end{verbatim}

\subsection{Step 13 - Generating the Transforms}
\index{adaptation!generating transforms} \htool{HERest} provides
support for a range of linear transformations and possible number of
transformations. Regression class trees\index{adaptation!regression
tree} can be used to dynamically specify the number of transformations
to be generated, or the number may be pre-determined using a set of
baseclasses. The \HTK\ tool \htool{HHEd} can be used to build a
regression class tree and store it along with a set of baseclasses. For
example,

\begin{verbatim}
    HHEd -B -H hmm15/macros -H hmm15/hmmdefs -M classes regtree.hed tiedlist
\end{verbatim}

\noindent
creates a regression class tree using the models stored in
\texttt{hmm15} and stores the regression class tree and base classes
in the \texttt{classes} directory.  The \htool{HHEd} edit script
\texttt{regtree.hed} contains the following commands

\begin{verbatim}
    LS "hmm15/stats"
    RC 32 "rtree"
\end{verbatim}


\noindent
%% The \texttt{RN}\index{rn@\texttt{RN} command} command assigns an
%% identifier to the HMM set.
The \texttt{LS}\index{ls@\texttt{LS} command} command loads the state 
occupation statistics file 
\texttt{stats} generated by the last application of \htool{HERest} which 
created the models in \texttt{hmm15}. 
The \texttt{RC}\index{rc@\texttt{RC} command} command then attempts to build 
a regression class tree with 32 terminal or leaf nodes using these statistics.
In addition a global transform is used as the default. This baseclass for this
must still be specified, using in the file ``global'' for example

\begin{verbatim}
  ~b ``global''
  <MMFIDMASK> *
  <PARAMETERS> MIXBASE
  <NUMCLASSES> 1
    <CLASS> 1  {*.state[2-4].mix[1-12]}      

\end{verbatim}
This file should be be added to the classes directory.
 
\htool{HERest} and \htool{HVite} can be used to perform static adaptation, where all the
adaptation data is processed in a single block. Note as with standard HMM
training \htool{HERest} will expect the list of model names. In contrast 
\htool{HVite} only needs to list of words. \htool{HVite} can also be used
for incremental adaptation. In this tutorial the use of static 
adaptation with \htool{HERest} will be described with MLLR as the form
of linear adaptation.

The example use of \htool{HERest} for adaptation involves two
passes. On the first pass a global adaptation is performed. The second
pass then uses the global transformation as an {\em input}
transformation, to transform the model set, producing better
frame/state alignments which are then used to estimate a set of more
specific transforms, using a regression class tree.  After estimating
the transforms, \htool{HERest} can output either the newly adapted
model set or, in the default setting, the transformations themselves
in either a transform model file (TMF)\index{adaptation!transform
model file} or as a set of distinct transformations .  The latter
forms can be advantageous if storage is an issue since the TMFs (or
transforms) are
significantly smaller than MMFs and the computational overhead
incurred when transforming a model set using a transform is negligible.
 
The two applications of \htool{HERest} below demonstrate a static two-pass 
adaptation approach where the global and regression class transformations are 
stored in the directory \texttt{xforms} with file extensions \texttt{mllr1}
for the global transform and \texttt{mllr2} for multiple regression class
system.
\begin{verbatim}
    HERest -C config -C config.global -S adapt.scp -I adaptPhones.mlf \
           -H hmm15/macros -u a -H hmm15/hmmdefs -z -K xforms mllr1 -J classes \
           -h '*/%%%%%%_*.mfc' tiedlist

    HERest -a -C config -C config.rc -S adapt.scp -I adaptPhones.mlf \
           -H hmm15/macros -u a -H hmm15/hmmdefs -J xforms mllr1 -K xforms mllr2 \
           -J classes -h '*/%%%%%%_*.mfc' tiedlist
\end{verbatim}
where config.global has the form
\begin{verbatim}
 HADAPT:TRANSKIND              = MLLRMEAN
 HADAPT:USEBIAS                = TRUE
 HADAPT:BASECLASS              = global
 HADAPT:ADAPTKIND              = BASE
 HADAPT:KEEPXFORMDISTINCT      = TRUE

 HADAPT:TRACE   = 61
 HMODEL:TRACE   = 512
\end{verbatim}
 config.rc has the form
\begin{verbatim}
 HADAPT:TRANSKIND              = MLLRMEAN
 HADAPT:USEBIAS                = TRUE
 HADAPT:REGTREE                = rtree.tree
 HADAPT:ADAPTKIND              = TREE
 HADAPT:SPLITTHRESH            = 1000.0
 HADAPT:KEEPXFORMDISTINCT      = TRUE

 HADAPT:TRACE   = 61
 HMODEL:TRACE   = 512
\end{verbatim}
The last two entries yield useful log information to do with which
transforms are being used and from where.  \texttt{-h} is a mask that
is used to detect when the speaker changes and also is used to
determine the name of the speaker transform. File Masks may also be 
separately specified specified using configuration variables:
\begin{verbatim}
  INXFORMMASK
  PAXFORMMASK
\end{verbatim}
The output transform mask is assumed to be specified using the {\tt -h} option
and by default the input and parent transforms are assumed to be the same.

The transformed models can also be stored. This is specified by adding
\begin{verbatim}
 HADAPT:SAVESPKRMODELS          = TRUE
\end{verbatim}
To the configuration. Note this option should NOT be used with
transforms that modify the feature-space, such as {\tt MLLRCOV} and
{\tt CMLLR}.

One important difference between the standard HMM macros and the
adaptation macros, is that for adaptation multiple directories may be
specified using the {\tt -J} option to search for the appropriate macro. 
This is useful when using multiple {\em parent} transforms. The set of adaptation
transforms are: {\tt a, b, r, f, g, x, y, j}. The {\tt -J} (along with the
{\tt -K} and {\tt -E} for output and parent transforms respectively) flag 
takes an optional argument that specifies the input transform transform
file extension. For the {\tt -J} flag this can {\em only} be specified on the
first time that a {\tt -J} flag is encountered in the command line. It
is strongly recommended that this option is used as it allows easy tracking
of transforms.

\subsection{Step 14 - Evaluation of the Adapted System}

To evaluate the performance of the adaptation, the test data previously recorded 
is recognised using \htool{HVite}. Assuming that \texttt{testAdapt.scp} contains a list 
of all of the coded test files, then \htool{HVite} can be invoked in much the same way 
as before but with the additional \texttt{-J} argument used to load the model 
transformation file and  baseclasses.

\begin{verbatim}

    HVite -H hmm15/macros -H hmm15/hmmdefs -S testAdapt.scp -l '*' \ 
          -J xforms mllr2   -h '*/%%%%%%_*.mfc' -k  -i recoutAdapt.mlf -w wdnet \ 
          -J classes -C config -p 0.0 -s 5.0 dict tiedlist

\end{verbatim}

\noindent
The results of the adapted model set can then be observed using \htool{HResults} 
in the usual manner.
 
The RM Demo contains a section on speaker adaptation (section 10). These describes
the various options available  in detail along with example configuration files that
may be used.

\mysect{Adaptive training}{exadaptive}
\htool{HERest} also supports adaptive training with {\tt CMLLR} transforms. In
adaptive training adaptation transforms are used to represent speaker
differences during training so that a ``neutral'' {\em canonical model}  can be
trained. 

The first stage is to generate a {\tt CMLLR} transform for each training
speaker. All the data for each speaker must be contiguous in the script file,
in this case \texttt{train.scp}. Using the regression tree from the previous
section, the following command can be run to generate the transforms
\begin{verbatim}
    HERest -C config -C config.cmllr -S train.scp -I wintri.mlf \
           -J classes -h '*/%%%%%%_*.mfc' -K hmm15/cmllr cmllr1 \
           -H hmm15/macros -u a -H hmm15/hmmdefs  -M hmm15/cmllr tiedlist
\end{verbatim}
where the configuration file \texttt{config.cmllr} contains
\begin{verbatim}
 HADAPT:TRANSKIND              = CMLLR
 HADAPT:USEBIAS                = TRUE
 HADAPT:REGTREE                = rtree.tree
 HADAPT:ADAPTKIND              = TREE
 HADAPT:SPLITTHRESH            = 1000.0
 HADAPT:KEEPXFORMDISTINCT      = TRUE

 HADAPT:TRACE   = 61
 HMODEL:TRACE   = 512
\end{verbatim}
The transforms will be stored in the directory \texttt{hmm15/cmllr} with a file
extension \texttt{cmllr1}. 
To update the canonical model the following command would be run
\begin{verbatim}
    HERest -C config -S train.scp -I wintri.mlf \
           -J hmm15/cmllr cmllr1 -J classes -h '*/%%%%%%_*.mfc' \
           -E hmm15/cmllr cmllr1 -a \
           -H hmm15/macros -H hmm15/hmmdefs  -M hmm15a tiedlist
\end{verbatim}
The estimated model-set is put in directory \texttt{hmm15a}. It is 
possible to interleave updates of the transforms with updates of the
canonical model, or simply use the same set of transforms with multiple
iterations of \htool{HERest}.
 
\mysect{Semi-Tied and HLDA transforms}{exsyslintran}
\htool{HERest} also supports estimation of a semi-tied transform. Here only a global
semi-tied transform is described, however multiple baseclasses can be used. A new
configuration file, \texttt{config.semi}, is required. This contains
\begin{verbatim}
 HADAPT:TRANSKIND              = SEMIT
 HADAPT:USEBIAS                = FALSE
 HADAPT:BASECLASS              = global
 HADAPT:SPLITTHRESH            = 0.0
 HADAPT:MAXXFORMITER           = 100
 HADAPT:MAXSEMITIEDITER        = 20

 HADAPT:TRACE   = 61
 HMODEL:TRACE   = 512
\end{verbatim}
The \texttt{global} macro in step 13 is required to have been generated. The example
command below can then be run. This generate a new model set stored in hmm16 and 
a semi-tied transform in \texttt{hmm16/SEMITIED}.
\begin{verbatim}

    HERest -C config -C config.semi -S train.scp -I wintri.mlf \
           -H hmm15/macros -u stw -H hmm15/hmmdefs  -K hmm16 -M hmm16 tiedlist

\end{verbatim}
An additional iteration of \htool{HERest} can then be run using
\begin{verbatim}

    HERest -C config -S train.scp -I wintri.mlf -H hmm16/macros -u tmvw\
           -J hmm16 -J classes -H hmm16/hmmdefs -M hmm17 tiedlist

\end{verbatim}
To evaluate the semi-tied estimated model the following command can be used
\begin{verbatim}

    HVite -H hmm17/macros -H hmm17/hmmdefs -S test.scp -l '*' \ 
          -J hmm16 -J classes -i recout_st.mlf -w wdnet \ 
          -C config -p 0.0 -s 5.0 dict tiedlist

\end{verbatim}
Note the {\tt -J} options must be included as the semi-tied transform is
stored in the same fashion as the adaptation transforms. Thus the transform
itself is stored in directory {\tt hmm16} and the global base class in {\tt classes}.

There are a number of useful other options that may be explored using, for example
HLDA. If \texttt{config.semi} is replaced by \texttt{config.hlda} containing
\begin{verbatim}
 HADAPT:TRANSKIND              = SEMIT

 HADAPT:USEBIAS                = FALSE
 HADAPT:BASECLASS              = global
 HADAPT:SPLITTHRESH            = 0.0
 HADAPT:MAXXFORMITER           = 100
 HADAPT:MAXSEMITIEDITER        = 20
 HADAPT:SEMITIED2INPUTXFORM    = TRUE
 HADAPT:NUMNUISANCEDIM         = 5
 HADAPT:SEMITIEDMACRO          = HLDA

 HADAPT:TRACE   = 61
 HMODEL:TRACE   = 512
\end{verbatim}
An HLDA \texttt{InputXForm} that reduces the number of dimensions by 5 is estimated
and stored with the model-set. A copy of the transform is stored also stored in 
a file called \texttt{hmm16/HLDA}. For input transforms (and global semi-tied transforms)
there are two forms in which the transform can be stored. First it may be stored as
an \texttt{AdaptXForm} of type \texttt{SEMIT}. The second form is as an input transform.
The latter is preferable if the feature-vector size is modified. The form of transform
is determined by how \texttt{HADAPT:SEMITIED2INPUTXFORM} is set. 

One of the advantages of storing a global transform as an input transform is that there
is no need to specify any {\tt -J} options as the {\tt INPUTXFORM} is by default
stored with the model set options. To prevent the {\tt INPUTXFORM} being stored
with the model set (for example to allow backward compatibility) set the 
following configuration option
\begin{verbatim}
 HMODEL:SAVEINPUTXFORM          = FALSE
\end{verbatim}

\newpage
\mysect{Using the HTK Large Vocabulary Decoder {\tt HDecode}}{eghdecode}

{\bf
WARNING: The HTK Large Vocabulary
Decoder \htool{HDecode} has
been specifically written for speech recognition tasks
using cross-word triphone models. Known restrictions are:
\begin{itemize}
\item only works for cross-word triphones;
\item supports N-gram language models up to tri-grams;
\item \texttt{sil} and \texttt{sp} models are reserved as silence
models and are, by default, automatically added to the end of all
pronunciation variants of each word in the recognition dictionary;
\item  \texttt{sil} must be used as the pronunciation for the sentence start
  and sentence end tokens;
\item \texttt{sil} and \texttt{sp} models cannot occur in the dictionary,
  except for the dictionary entry of the sentence start
  and sentence end tokens;
\item word lattices generated with \htool{HDecode} must be made {\em
deterministic} using \htool{HLRescore} to remove duplicate paths
prior to being used for acoustic model rescoring
with \htool{HDecode} or \htool{HVite}.
\end{itemize}
}

The decoder distributed with HTK, \htool{HVite}, is only suitable for small
and medium vocabulary systems\footnote{\htool{HVite} becomes progressively less
efficient as the vocabulary size is increased and cross-word triphones are used.} and systems using bigrams. For larger vocabulary systems, or
those requiring trigram language models to be used directly in the search, \htool{HDecode} is
available as an extension\footnote{An additional, more restrictive, licence
  must be agreed to in order to download \htool{HDecode}.} to HTK.
\htool{HDecode} has been specifically written for large vocabulary
speech recognition using cross-word triphone models.
Known restrictions are listed above. For detailed usage, see the
\htool{HDecode} reference page~\ref{s:HDecode} for more information. \htool{HDecode} will also
be used to generate lattices for discriminative training described in the next
section. 

In this section, examples are given for using \htool{HDecode} for large vocabulary
speech recognition. Due to the limitations described above, the word-internal
tripone systems generated in the previous stages {\em cannot} be used with
\htool{HDecode}. For this section it is assumed that there is a cross-word
triphone system in the directory \texttt{hmm20} along with a model-list in
{\tt xwrdtiedlist}. In contrast to the previous sections both the macros and
HMM definitions are stored in the same file \texttt{hmm20/models}. For an
example of how to build a cross-word state-clustered triphone system, see the
Resource Management (RM) example script step 9, in the RM \texttt{samples}
tar-ball.

Note: the grammar scale factors used in this section, and the next section on
discriminative training, are consistent with the values used in the previous
tutorial sections. However for large vocabulary speech recognition systems
grammar scale factors in the range 12-15 are commonly used.

\subsection{Dictionary and Language Model}

\htool{HDecode} automatically adds {\tt sp} or {\tt sil} to the end of each
pronunciation to represent an optionally deletable or non-deletable inter-word
silences. These silences are {\em not} allowed to be in the dictionary entries
other than for the tokens at the start and end of the sentences,
\texttt{SENT-START} and \texttt{SENT-END} in the previous sections. It may
therefore be necessary to modify the dictionary. For example, the dictionary used
for recognition in the previous section should be modified to look like

\begin{verbatim}
    A               ah
    A               ax
    A               ey
    CALL            k ao l
    DIAL            d ay ax l
    EIGHT           ey t
    PHONE           f ow n
    SENT-END    []  sil
    SENT-START  []  sil
    SEVEN           s eh v n
    TO              t ax
    TO              t uw
    ZERO            z ia r ow
\end{verbatim}
This recognition dictionary will be assumed to be scored in the file
\texttt{dict.hdecode}.

A range of bigram and trigram language models, which must match the dictionary
in \texttt{dict.hdecode}, can be used.  For example the first few entries of a
bigram language model are shown below.
\begin{verbatim}
\data\
ngram 1=994
ngram 2=1490
 
\1-grams:
-4.6305 !!UNK
-1.0296 SENT-END       -1.9574
-1.0295 SENT-START     -1.8367
-2.2940 A       -0.6935
... ... 
\end{verbatim}
where \texttt{!!UNK} is a symbol representing the out-of-vocabulary
word-class. For more details of the form of language models that can be used
see chapter~\ref{c:hlmtutor}. Note if  \texttt{!!UNK} (or \texttt{<unk>}) is not the symbol used for the
OOV class a large number of warnings will be printed in the log-file. To avoid
this \htool{HLMCopy} may be used to extract the word-list \texttt{excluding}
the unknown word symbol.

For large vocabulary speech recognition tasks these language model files may become
very large. It is therefore common to store them in a compressed format. For
this section the language model is assumed be compressed using \texttt{gzip}
and stored in a file \texttt{bg\_lm.gz} in the ARPA-MIT format.

\subsection{Option  1 - Recognition}

\htool{HDecode} can be used to generate 1-best output, or 
lattices. For both options the same configuration file, assumed to be stored
in  \texttt{config.hdecode}, may be used. This should
contain the following entries
\begin{verbatim}
TARGETKIND     = MFCC_0_D_A   
HLANGMODFILTER = 'gunzip -c $.gz'
HNETFILTER     = 'gunzip -c $.gz'
HNETOFILTER    = 'gzip -c > $.gz'
RAWMITFORMAT   = T
STARTWORD      = SENT-START
ENDWORD        = SENT-END

\end{verbatim}
This configuration file has specified the frontend, the filter for reading the
language model\footnote{\texttt{gzip} and \texttt{gunzip} are assumed to be in the current
  path.}, \texttt{HLANGMODFILTER} and filters for reading and writing
lattices, \texttt{HNETFILTER} and \texttt{HNETOFILTER} respectively.

Recognition can then be run on the files specified in \texttt{test.scp} using
the following command.
\begin{verbatim}
    HDecode -H hmm20/models -S test.scp \
          -t 220.0 220.0 \
          -C config.hdecode -i recout.mlf -w bg_lm \
          -p 0.0 -s 5.0 dict.hdecode xwrdtiedlist

\end{verbatim}
The output will be written to an MLF in
\texttt{recout.mlf}.  The \texttt{-w} option specifies the  $n$-gram model, in
this case a bigram to be used. The final recognition results may be analysed using
\htool{HResults} in the same way as \htool{HVite}. 

In common with \htool{HVite}, there are a number of options that need to be
set empirically to obtain good recognition performance and speed.  The options
\texttt{-p} and \texttt{-s} specify the \textit{word insertion penalty}
\index{word insertion penalty} and the \textit{grammar scale factor}
\index{grammar scale factor} respectively as in \htool{HVite}.  There are also
a number of pruning options that may be tuned to adjust the run time. These
include the main beam (see the \texttt{-t} option), word end beam (see the
\texttt{-v} option) and the maximum model pruning (see the \texttt{-u}
option).

\subsection{Option  2 - Speaker Adaptation}

\htool{HDecode} also supports the use of speaker
adaptation transforms, as described in the tutorial steps 12-14. Note incremental
adaptation and transform estimation are {\em not} currently supported for 
\htool{HDecode}.

Similar command line options are used for speaker adaptation with
\htool{HDecode} as \htool{HVite}. The main difference is that the use of an
input transform is specified using the \texttt{-m} option in \htool{HDecode}
rather than the \texttt{-k} option in \htool{HVite}. Assuming that a set of
MLLR transforms have been generated using \htool{HERest} as described in
section~\ref{s:exsysadapt} and are stored in directory \texttt{xforms} with a
transform extension \texttt{mllr2}.  Decoding can be run using
\begin{verbatim}
    HDecode -H hmm20/models -S testAdapt.scp  \ 
          -t 220.0 220.0 \
          -J xforms mllr2 -h '*/%%%%%%_*.mfc' -m -i recoutAdapt.mlf -w bg_lm \ 
          -J classes -C config.hdecode -p 0.0 -s 5.0 dict.hdecode xwrdtiedlist
\end{verbatim}
The recognition output is written to  \texttt{recoutAdapt.mlf}.

\subsection{Option 3 - Lattice Generation}

\htool{HDecode} also support lattice generation to allow more complex language
models to be applied, or for lattice-based discriminative training. The
\texttt{-z ext} option, where \texttt{ext} specifies the extension to be used
for the lattice, specifies that lattices should be generated.

The lattices are to stored in a directory (which must be generated)
\texttt{lat\_bg}. The following command will perform lattice generation
\begin{verbatim}
    HDecode -H hmm20/models -S test.scp \
          -t 220.0 220.0 \
          -C config.hdecode -i recout.mlf -w bg_lm \
          -o M -z lat -l lat_bg -X lat \
          -p 0.0 -s 5.0 dict.hdecode xwrdtiedlist
\end{verbatim}

In addition to the standard printing options, word insertion and grammar scale
factors, an option to specify the number of tokens used per state (see the
\texttt{-n} option) is available This can significantly affect the decoding time and the
size of lattices generated. Increasing the value (the default is 32) increases
the decoding time and size of the lattices. Note the lattice will be
compressed using \texttt{gzip} as specified with the \texttt{HNETOFILTER}.

Prior to rescoring the
lattices generated by \htool{HDecode} must be made deterministic using
\htool{HLRescore}. The first stage is to generate a list of the lattices that
need to be made deterministic. Let \texttt{test.lcp} hold this
list, for example a few possible  entries are given below
\begin{verbatim}
adg0_4_sr009.lat
adg0_4_sr049.lat
adg0_4_sr089.lat
adg0_4_sr129.lat
adg0_4_sr169.lat
... ...

\end{verbatim}
For the bigram lattices previously generated in \texttt{lat\_bg} the following
command needs to be run
\begin{verbatim}
    HLRescore -C config.hlrescore -S test.lcp \
          -t 200.0 1000.0 \
          -m f -L lat_bg -w -l lat_bg_det dict.hdecode 
\end{verbatim}
The resulting deterministic bigram lattices are now stored under directory
\texttt{lat\_bg\_det}. The configuration file
\texttt{config.hlrescore} should contain the following
settings,
\begin{verbatim}
HLANGMODFILTER = 'gunzip -c $.gz'
HNETFILTER     = 'gunzip -c $.gz'
HNETOFILTER    = 'gzip -c > $.gz'
RAWMITFORMAT   = T
STARTWORD      = SENT-START
ENDWORD        = SENT-END
FIXBADLATS     = T 
\end{verbatim}
The \texttt{FIXBADLATS} configuration option ensures that if the final word in the lattice is
\texttt{!NULL}, and the word specified in \texttt{ENDWORD} is missing, then
\texttt{!NULL} is replaced by the word specified in \texttt{ENDWORD}.  This is
found to make lattice generation more robust.


\subsection{Option 4 - Lattice Rescoring}

More complicated language models, for instance, higher order $n$-gram
models, may be applied to expand the initial lattices and
improve recognition performance. Assume that a compressed version of a
trigram language model with the same vocabulary as the bigram above is stored
in \texttt{tg\_lm.gz}. 

The 1-best path in the lattice after applying the trigram language
model may be obtained using the following command.
\begin{verbatim}
    HLRescore -C config.hlrescore -S test.lcp \
          -f -i recout_tg.mlf -n tg_lm -L lat_bg -w -l lat_tg \
          -p 0.0 -s 5.0 dict.hdecode
\end{verbatim}
The 1-best output is placed in \texttt{recout\_tg.mlf}. In addition,
compressed version of the lattices now with trigram language model are stored
in \texttt{lat\_tg}. 


It is then possible to rescore these trigram lattices using \htool{HDecode}
with either a different set of acoustic models, or a different grammar scale
factor. However, prior to this it is again necessary to ensure that
the lattices are deterministic. Thus the following command is required
\begin{verbatim}
    HLRescore -C config.hlrescore -S test.lcp \
          -t 200.0 1000.0 -m f -L lat_tg \
          -w -l lat_tg_det dict.hdecode 
\end{verbatim}
These lattices can then be rescored using 
\begin{verbatim}
    HDecode -H hmm21/models -S test.scp \
          -t 220.0 220.0 \
          -C config.hdecode -i recout_rescore.mlf -L lat_tg_det \
          -p 0.0 -s 5.0 dict.hdecode xwrdtiedlist2
\end{verbatim}
where the new set of acoustic models are assumed to be stored in
\texttt{hmm21/models} and  model-list in \texttt{xwrdtiedlist2}.

%% Lattices with HMM model alignment and time stamps may also be
%% obtained using 
%% \begin{verbatim}
%%     HDecode.mod -H hmm20/models -S test.scp \
%%           -t 220.0 220.0 -z lat -l lattices.align/ -q tvaldm \
%%           -C config.hdecode -l '*' -i recout.mlf -w -L ./ \
%%           -o M -p 0.0 -s 5.0 dict.hdecode xwrdtiedlist
%% \end{verbatim}
%% and the generated lattices with model alignment will be stored under
%% \texttt{lattices.align}. 

\newpage
\mysect{Discriminative Training}{egdiscriminative}

A further refinement to the acoustic models is to use a \textit{Discriminative Training}
approach to HMM parameter estimation. Discriminative training
can bring considerable improvements in recognition accuracy
and is increasingly being used in large vocabulary speech recognition
systems.

{\bf  Note that as \htool{HDecode} is run to create the lattices, a cross-word
  triphone model set must be used. The form of dictionary described in the
  HDecode section is also required.}

\centrefig{discriminative}{90}{Discriminative Training}

The implementation of discriminative training with HTK requires the following
steps summarised in Fig.~\ref{f:discriminative}. The individual steps and
related command-lines are given below.

\subsection{Step 1 - Generation of Initial Maximum Likelihood Models} 
A cross-word triphone set of HMMs must be initially trained
using standard maximum likelihood estimation (with \htool{HERest}). Since
\htool{HDecode} is used in this recipe for both word lattice generation
and phone-marking of the lattices, cross-word triphone models are assumed in
this section, as in the previous section. These models are again
stored in the MMF \texttt{hmm20/MODELS}.

\subsection{Step 2 - Training Data LM Creation}
A ``weak'' language model, i.e. a unigram or bigram,  must be created for use in discriminative training.
It is essential that the vocabulary includes (at least) the words in the
correct word-level transcripts.  Since a weak language model is required, it
is possible to use only the transcripts of the acoustic training data in LM
creation. If a bigram LM is used, typically the count cutoff is set so that
there are approximately the same number of bigrams as unigrams.  Details of
how this can be done can be found in the \htool{HLM} tutorial section~\ref{c:hlmtutor}, but a
brief outline is given below.

First of all the data in the training set MLF must be modified into a suitable
form for language model training with sentence start and sentence end symbols.
Traditionally in language modelling \texttt{<s>} and \texttt{</s>} are used
for these symbols. However in keeping with the \htool{HDecode} section above,
\texttt{SENT-START} and \texttt{SENT-END} will be used in this section. The form of one
sentence per line with word-start/end markers can be simply created from a
word-level MLF using e.g. \texttt{awk}.

It is assumed that the word-level training transcripts with one sentence per
lines is in the file \texttt{lmtrain.txt}, and a sub-directory \texttt{lmdb}
has been created to store the database files needed for LM creation. First
\htool{LNewMap} is used to 
create a new word-map, then \htool{LGPrep} run to create the database.

\begin{verbatim}
LNewMap empty.wmap
LGPrep -A -b 500000 -n 2 empty.wmap -d lmdb lmtrain.txt
\end{verbatim}

The data files in \texttt{lmdb} can then be used to create a language
model. The file \texttt{lmdb/wmap} has a short header, and then gives a list
of all the words encountered in training.
%This first
%field can be extracted and used as a training data wordlist. Assume
%this is \texttt{wordlist.train}. 
The \htool{LBuild} command can now be used to build the required bigram
model. If it is assumed that a suitable cut-off value for the bigram is known
in order to have a model with the desired number of bigrams (in the command
below it is set to 5) then the required \htool{LBuild} command would be
\begin{verbatim}
LBuild -A -C config.lbuild lmdb/wmap -c 5 -n 2 trainbg lmdb/gram.* 
\end{verbatim}
The cut-off parameter (value supplied to \texttt{-c}) should be varied until a
bigram of a suitable size is obtained (it is also possible to find this
information in advance with the aid of the \htool{LFoF} command). In order to
compress the resulting language model (using \texttt{gzip}) the file
\texttt{config.lbuild} should contain
\begin{verbatim}
HLANGMODOFILTER = 'gzip -c > $.gz'
\end{verbatim}
The result of the above command is that a bigram LM for the training data in file \texttt{trainbg.gz}.

\subsection{Step 3 - Word Lattice Creation} 
Two sets of ``phone-marked'' lattices, called the denominator and numerator lattices, are
required for discriminative training. The first stage in generating these
phone-marked lattices is to produce word lattices.

The denominator word lattices represent the set of most likeliy word sequences
for a particular training sentence.  These denominator word lattices are
created using \htool{HDecode} in a recognition mode (similar to the
\htool{HDecode} lattice generation section above) with the initial ML-trained
models, the training data language model and speech data. Numerator word-level
lattices are created using \htool{HLRescore} and include language model log
probabilities.

The denominator word lattices creation stage uses \htool{HDecode}. The
lattices will be placed in a subdirectory \texttt{wlat.den}. The dictionary
for the training data is assumed to be available in \texttt{dict.hdecode} and a list of
training files in \texttt{train.scp}. Lattices are created using
\begin{verbatim}
    HDecode -A -H hmm20/models -S train.scp -t 220.0 220.0 -C config.hdecode \ 
          -i wlat.den/recout.mlf -w trainbg -o M -z lat -l wlat.den -X lat \
          -p 0.0 -s 5.0 dict.hdecode xwrdtiedlist
\end{verbatim}
where it has been assumed that a suitable grammar scale factor is \texttt{-s
  5.0}, consistent with the previous section describing the use of
\htool{HDecode}.

Similarly numerator word lattices will be created in \texttt{wlat.num}. Assuming that the training word-level
MLF file is located in \texttt{trainwords.mlf} and a list of these labels is
contained in \texttt{train.labscp}, the following command can be run
\begin{verbatim}
    HLRescore -A -C config.hlrescore -S train.labscp -I words.mlf \
      -n trainbg -f -t tvalqr -w -s 5.0 -l wlat.num dict.hdecode 
\end{verbatim}
using the config file \texttt{config.hlrescore} defined in the section above
for \texttt{HDecode}. An example of the first few lines of \texttt{train.labscp} might be
\begin{verbatim}
*/adg0_4_sr009.lab
*/adg0_4_sr049.lab
*/adg0_4_sr089.lab
*/adg0_4_sr129.lab
*/adg0_4_sr169.lab
... ...

\end{verbatim}
Note that not all lattices may be generated successfully in the \htool{HDecode} step, and a check should be
made that all the lattices exist before continuing. If some have failed the pruning parameters can be 
altered or a new list of successful training files created for subsequent stages.

\subsection{Step 4 - Phone Marking of Numerator and Denominator
  Lattices}

The word-level lattices are further processed using \htool{HDecode.mod}, the
initial models and the speech data to produce the phone-marked lattices used
for discriminative training. Note lattices can also be phone-marked using
\htool{HVite}. Model-marking with \htool{HVite} does not have the same
restrictions, in terms of the nature of the acoustic models, as
\htool{HDecode} and \htool{HDecode.mod}.

Before the phone-marked denominator lattices can be created, the denominator word lattices must 
be made determininstic. These will be written into the directory \texttt{wlat.den.det} assuming a list of lattices
is in \texttt{denwordlat.lcp} with the command
\begin{verbatim}
    HLRescore -C config.hlrescore -S test.lcp \
          -t 220.0 1000.0 -s 5.0 \
          -m f -L wlat.den -w -l wlat.den.det dict.hdecode
\end{verbatim}
Lattices with HMM model alignment and time stamps for the numerator can then be obtained using the command
\begin{verbatim}
    HDecode.mod -H hmm20/models -S train2.scp -t 1000.0 1000.0 \
    -z lat -l plat.num -q tvaldm -C config.hdecode  \
     -i plat.num/rec.mlf -w -L wlat.num -o M -p 0.0 -s 5.0 \
    dict.hdecode xwrdtiedlist
\end{verbatim}
The generated numerator lattices with model alignment will be stored under the directory
\texttt{plat.num}, \texttt{train2.scp} should contain the list of training files for which word lattices could
be created. The same procedure can be used to create the denominator phone-marked lattices, stored in
\texttt{plat.den}, from the determinised word lattices:
\begin{verbatim}
    HDecode.mod -H hmm20/models -S train2.scp -t 220.0 220.0 \
    -z lat -l plat.den -q tvaldm -C config.hdecode  \
     -i plat.den/rec.mlf -w -L wlat.den.det -o M -p 0.0 -s 5.0 \
    dict.hdecode xwrdtiedlist
\end{verbatim}
Again a check should be made on the phone-marked lattices that are created and the set of training
files for which both the numerator and denominator lattices exist will be stored in \texttt{train3.scp}.

\subsection{Step 5 - Generating Discriminatively Trained Models} 

Having generated the required numerator and denominator phone-marked lattices,
\htool{HMMIRest} can be run to discriminatively train the HMMs.
A number (typically 4-8) of iterations of the Extended Baum-Welch (EBW)
algorithm are run, each invocation of \htool{HMMIRest} corresponds to one iteration of
EBW.  There are a number of configuration options to \htool{HMMIRest} that
allow the choice of objective function to be varied; the amount and type of
smoothing; learning rate in EBW updates etc. For large amounts of data, each
iteration of \htool{HMMIRest} can be run in two phases: the first phase in
which blocks of data are processed in parallel and accumulators are created
for each block of data; and the second estimation phase in which the sets of
accumulators are loaded and the HMM parameters re-estimated.  Here the
commands are given that assume that the data is not processed in a parallel
mode.

For basic MMI training the following form could be used for the first iteration creating a model set in directory \texttt{hmm21}
\begin{verbatim}
    HMMIRest -C config.mmirest -A -H hmm20/models -S train3.scp -q plat.num 
        -r plat.den -u tmvw -M hmm21  xwrdtiedlist
\end{verbatim}
There are a wide variety of options in MMI training, but normally I-smoothing 
 would be used (typically with a value of 100 for MMI) and the file \texttt{config.mmirest}
would contain:
\begin{verbatim}
TARGETKIND = MFCC_0_D_A
HNETFILTER = 'gunzip -c $.gz'
LATPROBSCALE = 0.2
ISMOOTHTAU =  100
E = 2
ARC: TRACE = 3
HMMIREST: TRACE = 3
MPE = FALSE

\end{verbatim}
The value \texttt{E=2} sets a learning rate parameter, and the trace options
allow some detail on the processing of \htool{HMMIRest} to be printed.  The
invocation of \htool{HMMIRest} above will perform a single iteration of the
EBW algorithm.  Note that the value of \texttt{LATPROBSCALE} has been set to
the reciprocal of the normal grammar scale factor.

For MPE training, the above configuration file can simply be altered to have the line 
\begin{verbatim}           
MPE = TRUE
\end{verbatim}
and the value of \texttt{ISMOOTHTAU} might be reduced by a factor of 2.

\mysect{Summary}{exsyssum}
This chapter has described the construction of a tied-state phone-based
continuous speech recogniser and in so doing, it has touched on most of the
main areas addressed by \HTK: recording, data preparation, HMM definitions,
training tools, adaptation tools, networks, decoding and evaluating.  The 
rest of this book discusses each of these topics in detail.



%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "htkbook"
%%% End: 
