Also, since these tools are implemented as scripts, they do not automatically read or write compressed model files, unlike the main SRILM tools. However, since most scripts read from standard input or write to standard output (by leaving out the file argument, or specifying it as ``-''), it is easy to combine them with gunzip(1) or gzip(1) on the command line.
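For example, a gzip-compressed backoff model (here in a hypothetical file ``lm.3bo.gz'') could be converted and the result recompressed with:
gunzip -c lm.3bo.gz | \
make-ngram-pfsg | \
gzip >lm.pfsg.gz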
make-ngram-pfsg encodes a backoff N-gram model in ngram-format(5) as a finite-state network in pfsg-format(5). check_bows=0 disables a check for conditional probabilities that are smaller than the corresponding backoff probabilities. Such N-grams should first be removed from the model with ngram -prune-lowprobs.
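For example, assuming a trigram backoff model in a hypothetical file ``lm.3bo'', the offending N-grams could be pruned and the result converted with:
ngram -order 3 -lm lm.3bo -prune-lowprobs -write-lm lm.pruned.3bo
make-ngram-pfsg lm.pruned.3bo >lm.pfsg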
add-pauses-to-pfsg replaces the word nodes in an input PFSG with sub-PFSGs that allow an optional pause before each word. It also inserts an optional pause following the last word in the sentence. A typical usage is
make-ngram-pfsg class-ngram | \
add-pauses-to-pfsg >final-pfsg
The result is a PFSG suitable for use in a speech recognizer. The option pauselast=1 switches the order of word and pause nodes in the sub-PFSGs; wordwrap=0 disables the insertion of sub-PFSGs altogether.
add-classes-to-pfsg extends an input PFSG with expansions for word classes, defined in classes. pfsg-file should contain a PFSG generated from the N-gram portion of a class N-gram model. A typical usage is thus
make-ngram-pfsg class-ngram | \
add-classes-to-pfsg classes=classes | \
add-pauses-to-pfsg >final-pfsg
pfsg-from-ngram is a wrapper script that combines removal of low-probability N-grams, conversion to PFSG, and addition of optional pauses to create a PFSG for recognition.
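Assuming again a backoff model in a hypothetical file ``lm.3bo'', the whole pipeline might thus reduce to:
pfsg-from-ngram lm.3bo >final.pfsg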
make-nbest-pfsg converts an N-best list in nbest-format(5) into a PFSG which, when used in recognition, allows exactly the hypotheses contained in the N-best list. notree=1 creates separate PFSG nodes for all word instances; the default is to construct a prefix-tree structured PFSG. scale=S multiplies the total hypothesis scores by S; the default is 0, meaning that all hypotheses have identical probability in the PFSG. Three options, amw=A, lmw=L, and wtw=W, control the score weighting in N-best lists that contain separate acoustic and language model scores, setting the acoustic model weight to A, the language model weight to L, and the word transition weight to W.
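For example, the following builds a prefix-tree structured PFSG whose transition probabilities reflect the combined hypothesis scores; the file names and weight values are only illustrative:
make-nbest-pfsg scale=1 lmw=8 wtw=0 some.nbest >some.pfsg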
pfsg-to-dot renders a PFSG in dot(1) format for subsequent layout, printing, etc. show_probs=1 includes transition probabilities in the output. show_logs=1 includes log (base 10) transition probabilities in the output. show_nums=1 includes node numbers in the output.
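For instance, a PFSG could be rendered to PostScript with the Graphviz dot(1) program (file names are placeholders):
pfsg-to-dot show_probs=1 final.pfsg | \
dot -Tps >final.ps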
pfsg-to-fsm converts a finite-state network in pfsg-format(5) into an equivalent network in AT&T fsm(5) format. This involves moving output actions from nodes to transitions. If symbolfile=symbols is specified, the mapping between words and FSM output symbols is written to symbols for later use with the -i or -o options of fsm(1) tools. symbolic=1 preserves the word strings in the resulting FSA. scale=S scales the transition weights by a factor S; the default is -1 (to conform to the default FSM semiring). final_output=E forces the final FSA node to have output label E; this also forces creation of a unique final FSA node, which is otherwise unnecessary if the final node has a null output.
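For example, a PFSG could be converted while saving the symbol table for later compilation with the fsm(1) tools (file names are placeholders):
pfsg-to-fsm symbolic=1 symbolfile=final.syms final.pfsg >final.fsm
fsmcompile -i final.syms final.fsm >final.fsmc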
fsm-to-pfsg conversely transforms fsm(5) format into pfsg-format(5). This involves moving output actions from transitions to nodes, and generally requires an increase in the number of nodes. (The conversion is done such that pfsg-to-fsm and fsm-to-pfsg are exact inverses of each other.) The name parameter sets the name field of the output PFSG. transducer=1 indicates that the input is a transducer and that input:output pairs should be preserved in the PFSG. scale=S scales the transition weights by a factor S; the default is -1 (to conform to the default FSM semiring).
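For example, a textual fsm(5) file could be brought back into PFSG form as follows; the file names and PFSG name are placeholders, and fsmprint(1) is the AT&T tool assumed here for printing a compiled FSA:
fsmprint final.fsmc | \
fsm-to-pfsg name=FINAL >final.pfsg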
classes-to-fsm converts a classes-format(5) file into a transducer in fsm(5) format, such that composing the transducer with an FSA encoding a class language model results in an FSA for the word language model. The word vocabulary needs to be given in file vocab. isymbolfile=isymbols and osymbolfile=osymbols allow saving the input and output symbol tables of the transducer for later use. symbolic=1 preserves the word strings in the resulting FSA.
The following commands show the creation of an FSA encoding the class N-gram grammar ``test.bo'' with vocabulary ``test.vocab'' and class expansions ``test.classes'':
classes-to-fsm vocab=test.vocab symbolic=1 \
isymbolfile=CLASSES.inputs \
osymbolfile=CLASSES.outputs \
test.classes >CLASSES.fsm
make-ngram-pfsg test.bo | \
pfsg-to-fsm symbolic=1 >test.fsm
fsmcompile -i CLASSES.inputs test.fsm >test.fsmc
fsmcompile -t -i CLASSES.inputs -o CLASSES.outputs \
CLASSES.fsm >CLASSES.fsmc
fsmcompose test.fsmc CLASSES.fsmc >result.fsmc
wlat-to-pfsg converts a word lattice in the format output by nbest-lattice(1) into pfsg-format(5).
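For example, a lattice built from an N-best list might be converted as follows (the file names are placeholders; see nbest-lattice(1) for its options):
nbest-lattice -nbest some.nbest -write some.wlat
wlat-to-pfsg some.wlat >some.pfsg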
wlat-to-dot renders an nbest-lattice(1) word lattice in dot(1) format for subsequent layout, printing, etc. show_probs=1 includes node posterior probabilities in the output.
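For instance, again with placeholder file names:
wlat-to-dot show_probs=1 some.wlat | \
dot -Tps >some.ps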