\newcommand{\MB}[1]{\mbox{\boldmath{$#1$}}}
\newcommand{\tp}[1]{\MB{#1}^\top} % transpose

\newcommand{\OBS}{\MB{o}_t}

\newcommand{\StateDesc}{\MB{D}_s}

\newcommand{\CanonStateVec}{\bar{\MB{b}}_t}
\newcommand{\CanonStateVecI}[1]{\MB{b}_{t,#1}}

\newcommand{\CDStateVec}[2]{\MB{V}(#1, #2)}

\newcommand{\SOutP}[1]{P(#1|\OBS)}

The previous two chapters investigated the context-dependent (CD) modelling of two major ASR architectures, namely the GMM/HMM system and the hybrid NN/HMM system.  The NNs used in the hybrid system had a simple structure with only one hidden layer.
Based on the insights obtained from these initial studies, the final part of the thesis is devoted to the CD modelling of the state-of-the-art hybrid deep neural network/hidden Markov model (DNN/HMM) systems.

%The reminder of the chapter is organised as follows: an background knowledge of the deep neural network training will be firstly given. The proposed regression-based context-dependent modelling framework will be detailed. The framework is then applied to the a random forest system to explore the complementariness o
%Section~\ref{sec:reg-training} derives an approximated cross-entropy objective function to train the regression parameters.  Section~\ref{sec:regression_versus_merger} provides some discussions about the proposed sparse regression model. Experimental results are presented in Section~\ref{sec:results}
%
% We then present some preliminary investigations of addressing the limitations existing in the current context dependent modelling DNN schemes. 
%
\section{Deep Neural Networks}
DNNs are multi-layer, feed-forward neural networks. They comprise an input layer for the observations, an output layer for the output classes, and multiple hidden layers of non-linear units, as shown in Figure~\ref{dnnr}. % DNNs have undirected connections between the top two layers and directed connections to all other layers from the layer above. 
In comparison with the standard shallow multi-layer perceptron (MLP) in Chapter~\ref{chapter:poe}, DNNs have several appealing advantages~\citep{Dahl12context-dependentpre-trained}: 1) the deep structure with multiple non-linear hidden layers (usually more than 5) makes DNNs potentially more representationally efficient for complex pattern recognition tasks such as speech recognition; 2) the unsupervised pre-training of DNNs provides a better weight initialisation than the random initialisation of the MLP, making DNNs less prone to getting stuck in a poor local optimum despite the deep layered structure.


The training of DNNs involves two distinct stages. The first stage is called ``pre-training'', where the weights are trained in an unsupervised fashion. Starting from the pre-trained weights, a subsequent fine-tuning stage trains the DNN as a discriminative classifier using the labelled data for the final pattern recognition task. The fine-tuning is usually performed with the standard error back-propagation algorithm, as in MLP training. It is the unsupervised pre-training that makes the training of deep networks feasible. The next section briefly introduces DNN pre-training.

 %pre-training phase with unlabelled training data in an unsupervised fashion based on the . The DNNs are then initialised with the pre-trained weights and fine-tuned by labeled training data subsequently.
\subsection{Restricted Boltzmann Machines}
DNN pre-training is closely related to Restricted Boltzmann Machines (RBMs)~\citep{Smolensky:1986}: a DNN can be viewed as a stack of RBMs.
An RBM is an undirected graphical model with a layer of stochastic hidden units and a layer of stochastic visible units, as shown in Figure~\ref{rbm}. A key property of the RBM is that there are no visible-visible or hidden-hidden unit connections. RBM training shapes an energy function so that desirable configurations, such as those resembling the training data, receive low energy. Each configuration of a visible unit vector $\MB{v}$ and a hidden unit vector $\MB{h}$ is assigned an energy by the RBM given as~\citep{Dahl12context-dependentpre-trained}:

\begin{figure}
\begin{center}

\end{center}
\caption{An RBM layer with hidden-visible unit connections}
\label{rbm}
\end{figure}
%For example, we would like plausible or desirable configurations to have low energy.
\begin{equation}
\begin{split}
E(\MB{v},\MB{h})&=-\tp{b}\MB{v}-\tp{c}\MB{h}-\tp{v}\MB{W}\MB{h}\\
&=-\sum_{i=1}^{\mathcal{V}}\sum_{j=1}^{\mathcal{H}}\omega_{ij}v_ih_j-\sum_{i=1}^{\mathcal{V}}b_iv_i-\sum_{j=1}^{\mathcal{H}}c_jh_j
\end{split}
\end{equation}
where $\MB{W}$ is the weight matrix  between the hidden and visible units, and $\MB{b}$ and $\MB{c}$ are the bias terms for the visible and hidden unit vectors respectively. The probability of a joint hidden-visible vector configuration can be expressed in terms of the energy:
\begin{equation}
P(\MB{v,h})=\frac{e^{-E(\MB{v,h})}}{\sum_{\MB{v}}\sum_{\MB{h}}e^{-E(\MB{v,h})}}
\end{equation}
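As a concrete illustration, the energy and the joint probability above can be evaluated directly for a toy RBM. All the weights and biases below are hypothetical, chosen only to make the computation explicit:

```python
import math
from itertools import product

# A hypothetical toy RBM: 3 visible and 2 hidden units.  W[i][j] is the
# weight between visible unit i and hidden unit j; b and c are the visible
# and hidden bias vectors.
W = [[0.5, -0.2],
     [0.1,  0.3],
     [-0.4, 0.7]]
b = [0.1, -0.1, 0.2]
c = [0.05, -0.3]

def energy(v, h):
    """E(v,h) = -b^T v - c^T h - v^T W h, written as the explicit double sum."""
    e = -sum(b[i] * v[i] for i in range(len(v)))
    e -= sum(c[j] * h[j] for j in range(len(h)))
    e -= sum(W[i][j] * v[i] * h[j]
             for i in range(len(v)) for j in range(len(h)))
    return e

def joint_probability(v, h):
    """P(v,h) = exp(-E(v,h)) / Z, normalising over all binary configurations."""
    Z = sum(math.exp(-energy(vv, hh))
            for vv in product([0, 1], repeat=len(b))
            for hh in product([0, 1], repeat=len(c)))
    return math.exp(-energy(v, h)) / Z
```

The double loop over `product([0, 1], ...)` enumerates every binary configuration, which is only feasible for a toy model; for realistic layer sizes the partition function is intractable, which is precisely why the sampling-based training discussed below is needed.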
Since there are no hidden-hidden or visible-visible unit connections, the visible units are conditionally independent given the hidden units and vice versa. The conditional probability of a hidden vector given a visible vector can thus be derived as follows~\citep{Dahl12context-dependentpre-trained}:
\begin{equation}
\begin{split}
P(\MB{h}|\MB{v})&=\frac{e^{-E(\MB{v},\MB{h})}}{\sum_{\hat{\MB{h}}}e^{-E(\MB{v},\hat{\MB{h}})}}=\frac{e^{\tp{b}\MB{v}+\tp{c}\MB{h}+\tp{v}\MB{W}\MB{h}}}{\sum_{\hat{\MB{h}}}e^{\tp{b}\MB{v}+\tp{c}\hat{\MB{h}}+\tp{v}\MB{W}\hat{\MB{h}}}}\\
&=\frac{e^{\tp{c}\MB{h}+\tp{v}\MB{W}\MB{h}}}{\sum_{\hat{\MB{h}}}e^{\tp{c}\hat{\MB{h}}+\tp{v}\MB{W}\hat{\MB{h}}}}=\prod_i P(h_i|\MB{v})
\end{split}
\end{equation}
%Let $\mathbf{W}_{*,i}$ denote the $i$-th column of $\mathbf{W}$. Introducing a term $\gamma_i(\mathbf{v},h_i)=-(c_i+\mathbf{v^TW}_{*,i})h_i$ which is the energy of the visible vector $\mathbf{v}$ depending on a particular hidden unit $h_i$ where $1\leq i \leq N$. The probability is further derived as:
%\begin{equation}
%\begin{split}
%P(\mathbf{h}|\mathbf{v})&=\frac {     \prod_i e^{c_ih_i+\mathbf{v^TW}_{*,i}h_i }    }   {    \sum_{\hat{h}_1} \cdots \sum_{\hat{h}_N}    \prod_i e^{c_i\hat{h}_i+\mathbf{v^TW}_{*,i}\hat{h}_i }       }\\
%&= \frac {       \prod_i e^{-\gamma_i(\mathbf{v,}h_i)}    }  {       \sum_{\hat{h}_1} \cdots \sum_{\hat{h}_N}      e^{-\gamma_i(\mathbf{v,}\hat{h}_i)}              }\\
%&= \frac {   \prod_i e^{-\gamma_i(\mathbf{v,}h_i)}    } {  \prod_i  \sum_{\hat{h}_i }  e^{-\gamma_i(\mathbf{v,}\hat{h}_i)}   }\\
%&= \prod_i \frac { e^{-\gamma_i(\mathbf{v,}h_i)}    } { \sum_{\hat{h}_i }  e^{-\gamma_i(\mathbf{v,}\hat{h}_i)}   }\\
%&=\prod_i P(h_i|\mathbf{v})
%\end{split}
%\end{equation}

Therefore, the probability of a hidden unit $h_i$ being turned on given a visible vector $\MB{v}$ is expressed as: 
\begin{equation}
\label{hv}
P(h_i=1|\MB{v}) %=\frac { e^{-\gamma_i(\MB{v,}1)}    } { e^{-\gamma_i(\MB{v,}1)} + e^{-\gamma_i(\MB{v,}0)}   }
=\sigma(c_i+\tp{v}\MB{W}_{*,i})
\end{equation}
where $\MB{W}_{*,i}$ denotes the $i$-th column of $\MB{W}$, and $\sigma(x)={(1+e^{-x})}^{-1}$ is the sigmoid function. Since $P(\MB{h}|\MB{v})$ can be factorised, we have:
\begin{equation}
\label{eqn:hid}
P(\MB{h}=\MB{1}|\MB{v})=\sigma(\tp{c}+\tp{v}\MB{W})
\end{equation}
The sigmoid function is also used as the hidden unit activation function in neural networks. Equation~\ref{eqn:hid} thus justifies using the RBM weights to initialise a neural network, since it has the same form as the forward propagation of a neural network layer. For binary visible units, a similar derivation yields:
\begin{equation}
\label{vh}
P(\MB{v}=\MB{1}|\MB{h})=\sigma(\tp{b}+\tp{h}\tp{W}).
\end{equation}
 %In fact $ P(\mathbf{h=1|v})$ is what allows up to use the weights of an RBM to initialize a MLP with sigmoid hidden units because we can equate the inference for RBM hidden units with forward propagation in a neural network.
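The factorised conditionals in Equations~\ref{hv} and~\ref{vh} reduce to a per-unit sigmoid, which can be sketched as follows (the RBM parameters below are hypothetical, for illustration only):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical parameters for a 3-visible, 2-hidden RBM.
W = [[0.5, -0.2],
     [0.1,  0.3],
     [-0.4, 0.7]]
b = [0.1, -0.1, 0.2]
c = [0.05, -0.3]

def p_hidden_given_visible(v):
    """P(h_j = 1 | v) = sigma(c_j + v^T W_{*,j}),
    computed independently for each hidden unit."""
    return [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(c))]

def p_visible_given_hidden(h):
    """P(v_i = 1 | h) = sigma(b_i + h^T W_{i,*}),
    computed independently for each visible unit."""
    return [sigmoid(b[i] + sum(h[j] * W[i][j] for j in range(len(h))))
            for i in range(len(b))]
```

Note that `p_hidden_given_visible` is exactly one forward-propagation step of a sigmoid network, which is what allows the RBM weights to initialise the corresponding DNN layer.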
The marginal probability that the RBM assigns to a visible vector $\MB{v}$ is then:
\begin{equation}
p(\MB{v}) = \frac{1}{Z}\sum_{\MB{h}}e^{-E(\MB{v},\MB{h})}
\end{equation}
where $Z=\sum_{\MB{v}}\sum_{\MB{h}}e^{-E(\MB{v},\MB{h})}$ is the partition function.
The training observations usually serve as the visible vectors during RBM training.
If the probability of a visible vector $\MB{v}$ is to be raised,  the RBM parameters should be adjusted to lower the energy of the particular configuration. Thus, the RBM parameters can be trained to maximise the probabilities of all the visible vectors. To do this,
a gradient-based optimisation method can be used.  The derivative of the log probability of a visible vector $\MB{v}$ with respect to an RBM weight is~\citep{hinton10}:
\begin{equation}
\frac{\partial\log p(\MB{v})}{\partial \omega_{ij}} = {\langle v_ih_j\rangle}_{\text{data}}-{\langle v_ih_j\rangle}_{\text{model}}
\end{equation}
where ${\langle v_ih_j\rangle}_{\text{data}}$ and ${\langle v_ih_j\rangle}_{\text{model}}$ are the expectations of $v_ih_j$ under the data and model distributions respectively, the two statistics needed to estimate the RBM parameters. Since there are no direct connections between hidden units, ${\langle v_ih_j\rangle}_{\text{data}}$ for a given training frame $\MB{v}$ can be obtained by setting the binary state $h_j$ to 1 with the probability given by Equation~\ref{hv}.

However, obtaining ${\langle v_ih_j\rangle}_{\text{model}}$ is much more difficult. A straightforward way of obtaining this statistic is through sampling. For example, the visible units can be initialised to random states, and alternating Gibbs sampling performed for a very long time until a steady state is reached. An example of Gibbs sampling is shown in Figure~\ref{gibbs}, where the direction of the arrows represents the sampling sequence. In one iteration of Gibbs sampling, the hidden units are updated in parallel according to the conditional probability in Equation~\ref{hv}; the visible units are then updated in parallel using Equation~\ref{vh}. This procedure is repeated until a steady state is reached. However, it is prohibitively expensive, as the Gibbs sampling may take a very long time to converge. Several algorithms have therefore been proposed in the literature to speed up the training process.
One widely used algorithm is ``Contrastive Divergence''~\citep{Hinton00trainingproducts,Hinton06afast}. Instead of starting from \emph{randomly} generated states, the visible units are first set to a training vector. The binary states of the hidden units are then all computed in parallel using Equation~\ref{hv}, given the visible units. Once the binary states have been chosen for the hidden units, a ``reconstruction'' is produced by setting each visible unit to 1 with the probability given by Equation~\ref{vh}, and the statistics collected from this reconstruction replace ${\langle v_ih_j\rangle}_{\text{model}}$ in the gradient. It has been shown that this single iteration of Gibbs sampling works surprisingly well in practice, saving a great amount of computation time.
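A minimal sketch of one CD-1 update for a single training vector, assuming plain Python lists and a hypothetical learning rate `lr`:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    """Set a binary unit to 1 with probability p."""
    return 1 if random.random() < p else 0

def cd1_update(v0, W, b, c, lr=0.1):
    """One contrastive-divergence (CD-1) step for a single training vector v0.
    The update is dW_ij = lr * (<v_i h_j>_data - <v_i h_j>_reconstruction)."""
    V, H = len(b), len(c)
    # 1) Hidden probabilities and sampled binary states given the data vector.
    ph0 = [sigmoid(c[j] + sum(v0[i] * W[i][j] for i in range(V))) for j in range(H)]
    h0 = [sample(p) for p in ph0]
    # 2) "Reconstruction": visible probabilities given the sampled hidden states.
    pv1 = [sigmoid(b[i] + sum(h0[j] * W[i][j] for j in range(H))) for i in range(V)]
    v1 = [sample(p) for p in pv1]
    # 3) Hidden probabilities recomputed from the reconstruction.
    ph1 = [sigmoid(c[j] + sum(v1[i] * W[i][j] for i in range(V))) for j in range(H)]
    # 4) Positive (data) statistics minus negative (reconstruction) statistics.
    for i in range(V):
        for j in range(H):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    return W
```

In a full trainer this update would be accumulated over mini-batches and the biases $\MB{b}$ and $\MB{c}$ updated analogously; this sketch only shows the weight statistics that CD-1 substitutes for the model expectation.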
\begin{figure}
\begin{center}

%\psbrace* {3,1} {3,3} {Gibbs}
\end{center}
\caption{Gibbs sampling for RBM training}
\label{gibbs}
\end{figure}

Up to this point, the visible units have been constrained to be binary. For speech recognition, however, the inputs to the ASR system are real-valued feature vectors. To accommodate this, the Gaussian-Bernoulli RBM (GRBM) is used for the DNN input layer. The energy function of the GRBM is defined as:
\begin{equation}
E(\MB{v},\MB{h})=\frac{1}{2}{(\MB{v}-\MB{b})}^\top(\MB{v}-\MB{b}) - \tp{c}\MB{h} - \tp{v}\MB{W}\MB{h}
\end{equation}
The energy function assumes that the visible units can be modelled by a diagonal-covariance Gaussian with unit variance on each dimension. The probability of a hidden vector given a visible vector, $P(\MB{h}|\MB{v})$, stays the same as in Equation~\ref{eqn:hid}. The probability of a visible vector given a hidden vector becomes:
\begin{equation}
P(\MB{v}|\MB{h})=\mathcal{N}(\MB{v};\MB{b}+\tp{h}\tp{W},\MB{I})
\end{equation}
where $\mathcal{N}(\cdot\,;\MB{\mu},\MB{I})$ denotes a Gaussian distribution with mean $\MB{\mu}$ and identity covariance matrix $\MB{I}$.
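The GRBM visible conditional can be sketched in the same style: with identity covariance, reconstructing a visible vector amounts to adding unit-variance Gaussian noise to the mean $\MB{b}+\tp{h}\tp{W}$. The parameters below are again hypothetical:

```python
import random

# Hypothetical GRBM parameters (3 real-valued visible units, 2 hidden units).
W = [[0.5, -0.2],
     [0.1,  0.3],
     [-0.4, 0.7]]
b = [0.1, -0.1, 0.2]

def visible_mean(h):
    """Mean of P(v|h) = N(v; b + h^T W^T, I) for binary hidden states h."""
    return [b[i] + sum(W[i][j] * h[j] for j in range(len(h)))
            for i in range(len(b))]

def sample_visible_given_hidden(h):
    """Reconstruct a real-valued (e.g. acoustic) vector: with identity
    covariance, sampling adds unit-variance Gaussian noise to the mean."""
    return [m + random.gauss(0.0, 1.0) for m in visible_mean(h)]
```

In practice the input features are usually normalised to zero mean and unit variance per dimension so that the unit-variance assumption of the GRBM holds approximately.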
\subsection{DNN Training}
Once an RBM (a visible layer and a hidden layer) has been trained using all the training data, it can generate a new representation of the training data by forwarding each training vector, as the visible units, through Equation~\ref{eqn:hid}. These activations then serve as the ``new'' training data for the next RBM layer. These steps are iterated until the target number of hidden layers is reached, yielding a DNN built as a stack of RBMs. This phase is called ``pre-training''.
Pre-training provides an initialisation of the DNN weights for the subsequent fine-tuning phase. Ideally, the whole training set should be ``reconstructed'' from the stack of RBMs, that is, $P(\MB{v})=P_{\text{train}}(\MB{v})$; in other words, the pre-trained DNN learns to probabilistically reconstruct the training set.
After pre-training, a randomly initialised softmax layer is added on top of the stack of hidden layers, and a fine-tuning phase using standard back-propagation produces the final DNN weights for the classification task.

The whole DNN training procedure, including both pre-training and fine-tuning, is summarised below and illustrated in Figure~\ref{dnnr}. The upward solid arrows indicate the direction of layer-wise pre-training; the fine-tuning direction is shown with dotted arrows:
\begin{enumerate}
\item Train the first RBM, whose visible units are the observations $\MB{x}$ and whose hidden units form the first hidden layer $h_1$.
\item Forward the training data through the RBM trained in step 1 to obtain a new representation of the input data, the activations $P(h_1|\MB{x})$.
\item Use the activations from step 2 as the visible units to train a second RBM, whose hidden units form the next hidden layer $h_2$.
\item Iterate steps 2 and 3 until the desired number of hidden layers is reached.
\item After the layer-wise pre-training in steps 1 to 4, add a new layer representing the training targets on top of the RBM stack, and fine-tune the whole network using back-propagation with the labelled training data.
\end{enumerate}
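The steps above can be sketched as a short greedy loop. The `train_rbm` argument below stands in for any single-RBM trainer (e.g. running contrastive divergence over the current inputs) returning a weight matrix and hidden biases; it is an assumed interface for illustration, not a fixed API:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(data, rbm):
    """Step 2: propagate data through a trained RBM, P(h=1|v) = sigma(c + v^T W)."""
    W, c = rbm
    return [[sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(len(v))))
             for j in range(len(c))]
            for v in data]

def pretrain_stack(data, layer_sizes, train_rbm):
    """Greedy layer-wise pre-training (steps 1-4).  `train_rbm(inputs, size)`
    is a hypothetical trainer returning a (W, c) pair for one RBM layer."""
    stack, inputs = [], data
    for size in layer_sizes:
        rbm = train_rbm(inputs, size)   # steps 1/3: train one RBM layer
        stack.append(rbm)
        inputs = forward(inputs, rbm)   # step 2: activations become new data
    return stack  # step 5 then adds a randomly initialised softmax layer on top
```

Each trained RBM is frozen before the next one is trained, so the loop never revisits earlier layers; only the final back-propagation pass (step 5) adjusts all layers jointly.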


