\chapter{Minimizing Speaker Variability}

\ifpdf
    \graphicspath{{Chapter3/Chapter3Figs/}}
\else
    \graphicspath{{Chapter3/Chapter3Figs/EPS/}}
\fi

For good recognition accuracy, an ASR system must minimize the variability due to speakers. Speaker variability takes two forms.

\begin{enumerate}
 \item Inter-speaker variability: caused by differences between speakers in gender, accent and speaking style.
 \item Intra-speaker variability: also known as session variability. The same speaker's speech varies due to factors such as emotion, health and age.
\end{enumerate}

These days it is necessary to build speech recognition systems that are independent of the speaker, known as Speaker Independent (SI) speech recognition. Therefore, in speech recognition research, 
minimizing inter-speaker variability is more prominent than handling intra-speaker variability. Our research focuses on minimizing inter-speaker variability to improve performance.
The techniques used to reduce speaker variability can be categorized into two types: model-based schemes and feature-based schemes.

Model-based techniques (also known as adaptation techniques) use a small amount of data from a test speaker, known as the adaptation data, to transform the parameters of the acoustic model to match the test speaker.
The adaptation can be performed in two ways: supervised, in which accurate transcriptions are available for the adaptation data, or unsupervised, in which an initial SI model is used to
generate the transcriptions.

Feature-based schemes are applied to the acoustic features. The new features generated by these normalization techniques are then used to train an acoustic model with less speaker variability.


In Section~\ref{sec:gmm}, techniques used in traditional GMM-HMM systems are discussed. Section~\ref{sec:dnn} discusses the techniques used to reduce speaker variability
in DNN-HMM systems.


\section{Speaker Variability in GMM-HMM Systems}
\label{sec:gmm}


\subsection{Model-based Schemes}

\subsubsection{Maximum a Posteriori (MAP) Adaptation}

In MAP adaptation, as the name suggests, instead of using standard Maximum Likelihood estimation, the model parameters are re-estimated for the new speaker by incorporating a prior distribution over the model parameters.
In other words, instead of maximizing $p(x|\lambda)$, $p(x|\lambda) p_{0}(\lambda)$ is maximized, where $x$ is the training data and $\lambda$ is the model \citep{MAP}.

For a particular Gaussian mean with prior mean $\mu_{0}$, the estimate is

\begin{equation}
 \hat{\mu} = \frac{\tau \mu_{0} + \sum_{t=1}^T \gamma(t) o_{t}}{\tau + \sum_{t=1}^T \gamma(t)}
\end{equation}

where $\tau$ is a meta-parameter determined empirically, which controls the relative contributions of the ML-estimated mean and the prior mean. $o_{t}$ is the adaptation vector at time $t$, $T$ is the total number of frames,
and $\gamma(t)$ is the occupation probability of this Gaussian at time $t$.
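The update above can be sketched in NumPy; this is an illustrative sketch (the function name and array shapes are assumptions, not taken from \citep{MAP}):

```python
import numpy as np

def map_adapt_mean(mu0, obs, gammas, tau=10.0):
    """MAP re-estimate of one Gaussian mean: interpolate the prior mean
    mu0 with the occupation-weighted average of the adaptation vectors.
    obs: (T, n) adaptation vectors; gammas[t]: occupation probability of
    this Gaussian at time t; tau: empirically chosen prior weight."""
    obs = np.asarray(obs, dtype=float)
    gammas = np.asarray(gammas, dtype=float)
    num = tau * mu0 + (gammas[:, None] * obs).sum(axis=0)
    den = tau + gammas.sum()
    return num / den
```

With $\tau = 0$ the estimate reduces to the ML mean of the adaptation data; as $\tau$ grows, the estimate stays close to the prior mean.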

One advantage of MAP adaptation is that there is no need to hypothesize a form of transformation to represent the difference between speakers. MAP also works better when a large amount of adaptation data is available.
However, the accuracy of the estimated prior is important for obtaining good performance improvements.



\subsubsection{Maximum Likelihood Linear Regression (MLLR)}

In MLLR, a linear transformation of the model parameters is estimated to construct the adapted model. This linear transformation maps the existing model to an adapted model such that the likelihood of the adaptation
data is maximized. The advantage of this approach is that the same transformation can be shared by a large number of Gaussians. When the transformation is shared by all the Gaussians of the system, it is called
a global transform. MLLR can also be used with a variable number of transforms: depending on the availability of adaptation data, Gaussians can be grouped into regression classes, each class having its own
transform. In general, the number of transformations is determined automatically using a regression class tree. This parameter sharing also makes rapid adaptation possible. Furthermore, MLLR can be used for
unsupervised adaptation with good performance gains.

There are two variants of MLLR adaptation: the first is known as standard or unconstrained MLLR, while the second is known as Constrained MLLR.

\paragraph{Standard MLLR}\mbox{}\\


In standard MLLR \citep{MLLR},  the Gaussian mean parameters are updated according to equation~\ref{eqn:mllr1}

\begin{equation}
\label{eqn:mllr1}
\hat{\mu} = A \mu + b
\end{equation}

where $A$ is an $n \times n$ matrix and $b$ is an $n$-dimensional vector. The matrix $A$ is estimated using the EM algorithm such that the likelihood of the adaptation data is maximized. In most situations a single
iteration of the EM algorithm is sufficient to estimate the transform matrix $A$; however, if the system is trained with severe speaker mismatch, multiple iterations can improve the performance.
It is important to note that MLLR adaptation can also be used to adapt the variances of the Gaussians. In the standard MLLR approach, separate transformation matrices are used for mean and variance adaptation.
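Applying a single global mean transform to all Gaussians at once can be sketched as follows (an illustrative NumPy sketch; the function name and shapes are assumptions):

```python
import numpy as np

def mllr_adapt_means(means, A, b):
    """Apply one shared (global) MLLR transform mu_hat = A mu + b to
    every Gaussian mean in the model. means: (num_gaussians, n);
    A: (n, n); b: (n,). Returns the adapted means, same shape."""
    return np.asarray(means) @ np.asarray(A).T + b
```

With $A$ the identity and $b$ zero, the adapted model is identical to the SI model, which is also the usual initialisation before EM estimation.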


\paragraph{Constrained MLLR (CMLLR)}\mbox{}\\


The standard MLLR formulation estimates two independent transforms for the means and the variances. In CMLLR, by contrast, a single transformation matrix is used to estimate the adaptation parameters \citep{CMLLR,CMLLR1}.
The equations are of the form

\begin{equation}
 \hat{\mu} = A_{c} \mu - b_{c}
\end{equation}

\begin{equation}
 \hat{\Sigma} = A_{c}^T \Sigma A_{c}
\end{equation}




\paragraph{Speaker Adaptive Training (SAT)}\mbox{}\\

In the ideal case, an acoustic model set should encode only those dimensions that allow the different acoustic classes to be discriminated. However, in speaker-independent ASR, some model parameters end up encoding the
speaker variability. In SAT, a CMLLR or MLLR transform is computed for each speaker using that speaker's training data. These transforms are then applied to the features, rather than to the model parameters.
In this case, the canonical model estimation formula is almost unchanged from \citep{CMLLR1}. Applying these transformations to the features is easier than adapting the model parameters. In addition, CMLLR/SAT training
can be followed by a discriminative training method, which helps increase the robustness of the model to variabilities.



\subsubsection{Clustering}

The previously mentioned adaptation techniques do not use explicit information about the characteristics of a particular speaker, whereas in clustering-based approaches speakers are grouped
into different clusters according to their acoustic characteristics. The simplest form of clustering is based on gender. Traditional speaker clustering goes a step further and estimates HMMs for a
number of speaker groups. A test speaker is then viewed as an interpolated model, a weighted sum of the speaker cluster HMMs. However, this approach requires hard decisions
about speaker types, and the training data needs to be clustered. These methods therefore depend heavily on the quality of the clusters.


\paragraph{Cluster Adaptive Training}\mbox{}\\

The idea of CAT \citep{CAT} is to represent a speaker as a weighted sum of individual speaker cluster models. It is assumed that all the speaker cluster models
share a common variance and mixture weights and that only the Gaussian mean values vary. Thus the mean of a particular Gaussian for a particular speaker is found
as


\begin{equation}
 \hat{\mu} = \sum_{c} \lambda_{c} \mu_{c}
\end{equation}


where $\mu_{c}$ is the mean of the corresponding Gaussian in cluster $c$ and the $\lambda_{c}$ are the cluster weights. Note that each cluster model has
the same number of parameters as the target model. In this approach only the cluster weights need to be estimated for a new speaker. Given a set of canonical speaker cluster models and some
adaptation data, maximum likelihood estimation formulae for the cluster weights can be derived. Furthermore, given the cluster weights for individual speakers, the canonical speaker cluster model means can
also be updated. The CAT canonical model estimation scheme therefore interleaves weight estimation and speaker cluster updates during training. During adaptation, the adaptation data is used to estimate the
weights for test speakers. This direct estimation of cluster models is known as ``model-based'' CAT.
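The weighted combination of cluster means can be sketched directly (an illustrative NumPy sketch; names and shapes are assumptions, and the ML weight estimation itself is omitted):

```python
import numpy as np

def cat_mean(cluster_means, weights):
    """CAT adapted mean for one Gaussian: the weighted sum of the
    corresponding means from each speaker cluster.
    cluster_means: (num_clusters, n); weights: (num_clusters,)."""
    return np.asarray(weights) @ np.asarray(cluster_means)
```

For a new speaker only `weights` is re-estimated; the cluster means stay fixed at their canonical values.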

An alternative form, also discussed in \citep{CAT}, is to use MLLR transforms to form a ``canonical transform'' representing each individual speaker cluster. This approach reduces the number of parameters to be estimated.
When there are distinct types of speakers (e.g. accent groups, male/female, or speakers in different noise conditions), the CAT cluster models can be initialised using models trained on the corresponding reduced data sets.
Another possibility is an initialisation based on eigenvoices.


\paragraph{Eigenvoices}\mbox{}\\

The eigenvoice technique \citep{EigenVoices} also performs speaker adaptation by forming models as a weighted sum of canonical speaker HMMs and adapts only the mean vectors. However, the eigenvoice method finds these
canonical speakers, known as eigenvoices, using principal component analysis (PCA) of sets of ``supervectors'' constructed from all the mean values in a set of speaker-dependent HMM systems. The eigenvoices with the largest
eigenvalues are chosen as a basis set. During adaptation, the maximum likelihood eigen-decomposition algorithm proposed in \citep{EigenVoices} is used to estimate the weighted combination of eigenvoices. This algorithm is
in fact identical to the CAT weight estimation algorithm. Moreover, the same weight estimation formulae can be derived on the basis of a weighted projection technique \citep{EigenVoices1}.
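The PCA step over supervectors can be sketched as follows (an illustrative NumPy sketch; function names are hypothetical, and the ML estimation of the weights is omitted):

```python
import numpy as np

def eigenvoices(supervectors, k):
    """PCA over speaker supervectors (each the concatenated Gaussian
    means of one speaker-dependent model). Returns the mean supervector
    and the k leading eigenvoices (principal directions)."""
    S = np.asarray(supervectors, dtype=float)
    mean = S.mean(axis=0)
    # SVD of the centred supervectors yields the principal components.
    _, _, Vt = np.linalg.svd(S - mean, full_matrices=False)
    return mean, Vt[:k]

def synthesize_speaker(mean, basis, weights):
    """A new speaker's supervector: the mean plus a weighted
    combination of eigenvoices (weights found by ML estimation)."""
    return mean + np.asarray(weights) @ basis
```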

\subsection{Feature-based Schemes}

\subsubsection{Cepstral Mean Normalization (CMN) and Cepstral Variance Normalization (CVN)}

CMN subtracts the long-term cepstral mean from each observation, computed per utterance or per speaker. In CVN, the cepstral feature values are divided by their standard deviation
to obtain unit variance. The intuition behind this normalization is to keep only the variations of the speech signal believed to be important for speech recognition \citep{CMN}. Since speaker effects do not
vary significantly with time or with the amplitude of the signal, this normalization technique helps reduce speaker variability.
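Per-utterance CMN and CVN amount to a per-coefficient standardisation (an illustrative NumPy sketch; the function name is an assumption):

```python
import numpy as np

def cmn_cvn(features):
    """Per-utterance cepstral mean and variance normalisation:
    subtract the long-term mean and scale to unit variance,
    independently for each cepstral coefficient.
    features: (num_frames, num_ceps)."""
    f = np.asarray(features, dtype=float)
    return (f - f.mean(axis=0)) / f.std(axis=0)
```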

\subsubsection{Vocal Tract Length Normalization (VTLN)}

Vocal tract length differs from speaker to speaker, which causes shifts in the formant frequencies. VTLN is applied to approximate a canonical scaling of the formants \citep{VTLN}. This is achieved by a grid
search over the possible frequency warpings to maximize the likelihood of the data. The estimated frequency warping function is then used to normalize the speech features across different speakers. Estimating the warping
function is computationally expensive because of the grid search and the multiple decoding passes it requires. Furthermore, the performance is limited because the form of the warping function is chosen manually. It is also
necessary to identify the speakers before performing VTLN.
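The grid search over warp factors can be sketched as follows; this is only an outline, where `score_fn` is a hypothetical stand-in for a full extract-features-and-score pass with a warped frequency axis:

```python
import numpy as np

def vtln_warp_factor(utterance, score_fn, warps=np.arange(0.88, 1.13, 0.02)):
    """Grid search for the VTLN warp factor: score_fn(utterance, alpha)
    should return the model likelihood of the utterance with its
    frequency axis warped by alpha; the best-scoring alpha is kept.
    The warp range around 1.0 is a typical but assumed choice."""
    scores = [score_fn(utterance, a) for a in warps]
    return warps[int(np.argmax(scores))]
```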


\subsubsection{Feature-space MLLR (FMLLR)}

In practice, CMLLR is also used as an adaptation technique for acoustic features. When the transformation is applied to the features, it is commonly known as Feature-space MLLR (FMLLR). In FMLLR, the concept of acoustic
classes is not easily applicable, as there is no direct relationship between a feature vector and the Gaussians. Therefore, a global transformation is applied to the acoustic features before the acoustic model is trained.

The FMLLR transformation of the observation vector at time $t$ is as follows:

\begin{equation}
 \hat{o}_{t} = A_{c}^{-1} o_{t} +   A_{c}^{-1} b_{c}
\end{equation}

Note that a factor of $|A_{c}|$ is also needed when calculating the Gaussian likelihood. The maximum likelihood solution for this form requires iterative optimisation given the sufficient statistics, but yields similar 
performance to unconstrained MLLR.
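Applying an estimated FMLLR transform to a sequence of observation vectors can be sketched as follows (an illustrative NumPy sketch; names are assumptions, and the estimation of $A_{c}$ and $b_{c}$ is not shown):

```python
import numpy as np

def fmllr_transform(obs, A_c, b_c):
    """Apply o_hat = A_c^{-1} o + A_c^{-1} b_c to every frame.
    obs: (num_frames, n). Also returns log|det A_c^{-1}|, the Jacobian
    term needed when computing Gaussian likelihoods."""
    A_inv = np.linalg.inv(A_c)
    obs_hat = np.asarray(obs) @ A_inv.T + A_inv @ b_c
    log_det = np.log(abs(np.linalg.det(A_inv)))
    return obs_hat, log_det
```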

\pagebreak
\section{Speaker Variability in DNN-HMM Systems}
\label{sec:dnn}

For GMMs, techniques like MLLR and MAP estimation work very well. These methods transform or re-estimate means and/or variances using adaptation data. Methods based on linear transformations can be
estimated and applied flexibly, either using a global transform for all the parameters or multiple transforms, one per cluster of parameters. However, it is not clear how to apply these robust
adaptation techniques directly to DNN-HMM systems, since the weights of a DNN lack the structure present in GMM parameters. In addition, DNNs have millions of parameters to be adapted
while the amount of adaptation data is small, so it is also necessary to avoid overfitting during adaptation.

In addition, the adaptation data may not include sufficient observations of all the acoustic units. This has a more severe effect on DNN models than on traditional GMM models. In GMM systems,
acoustic units unobserved in the adaptation data are normally left unadapted or share some of their parameters with others. However, the training criterion of a DNN is discriminative; during adaptation,
acoustic units with insufficient observations in the adaptation data will be penalized, and their posterior probabilities will consequently be reduced inappropriately.


\subsection{Feature-based Schemes}

Speaker-adapted features can also be used to train DNNs. As in GMM systems, FMLLR and VTLN can be applied to DNNs. Even though these FMLLR and VTLN transforms are estimated using a GMM-HMM model, DNN-HMMs trained
on the resulting features show clear improvements in terms of WER. The next section summarizes model-based schemes for reducing speaker variability in DNNs.


\subsection{Model-based Schemes}

Let us denote by $W_{l}$ the weights of layer $l$ of the SI-DNN. The DNN contains $n$ layers including the input and output layers. The calculation of the activations of layer $l$ is given in equation~\ref{eqn:si}

\begin{eqnarray}
y_{l}=logistic(W_{l} y_{l-1} + b_{l})
\label{eqn:si}
\end{eqnarray}
where $y_{l-1}$ denotes the activations of the layer below and $b_{l}$ is the bias vector for layer $l$.
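The forward computation of equation~\ref{eqn:si} can be sketched in NumPy as follows (an illustrative sketch; in practice the output layer would typically use a softmax rather than the logistic):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(weights, biases, x):
    """Feed-forward pass of the SI-DNN: every layer computes
    y_l = logistic(W_l y_{l-1} + b_l), starting from the input x."""
    y = x
    for W, b in zip(weights, biases):
        y = logistic(W @ y + b)
    return y
```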



\subsubsection{Retraining}

The most intuitive way of adapting the SI model to a new speaker is to retrain the model using the adaptation data. It is important to note that retraining methods do not change the model structure.


\paragraph{Retraining the Entire Network}\mbox{}\\

This approach starts with the SI model and retrains the entire model using the adaptation data, with the EBP algorithm used to update the weights. After adaptation, each test speaker therefore has its own model
trained on his or her adaptation data \citep{RTN}.

Since in practical applications the amount of available adaptation data is small and the number of parameters to be adapted is large, it is necessary to avoid overfitting the network. Therefore a method called early
stopping \citep{RTN1} is used during the adaptation process. In early stopping, training is halted before converging to a local optimum. The most widely used early stopping approach estimates
the generalisation error on a cross-validation set and stops training when that error reaches a minimum.
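The cross-validation based early stopping loop can be sketched as follows (illustrative; `step_fn` and `val_error_fn` are hypothetical stand-ins for one EBP epoch on the adaptation data and a validation-set error measurement):

```python
def train_with_early_stopping(step_fn, val_error_fn, max_epochs=100, patience=3):
    """Run training epochs, tracking the validation error; stop once it
    has not improved for `patience` consecutive epochs. Returns the
    best epoch and its validation error."""
    best_err, best_epoch, bad = float("inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        step_fn()                      # one epoch of EBP updates
        err = val_error_fn()           # error on the held-out set
        if err < best_err:
            best_err, best_epoch, bad = err, epoch, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_epoch, best_err
```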


\paragraph{Retraining the Last Layer}\mbox{}\\

In \citep{RLL1, RLL}, adaptation for all test speakers starts from the SI model, but the weight updates occur only between the last hidden layer and the output layer of the DNN, which greatly reduces the number of
parameters to estimate compared with adapting the entire network. In other words, only the weights $W_{n}$ of layer $n$ are updated in this method. The intuition behind these methods is that the input-to-hidden layers
provide an ``internal representation'' common to all tasks, while the hidden-to-output layer provides a task-dependent decision function constructed on top of this internal representation \citep{RLL1}.

Furthermore, the method of \citep{RLL} uses selective adaptation of the most active hidden neurons. First, the adaptation data is propagated through the original SI network and the hidden nodes with the highest variance
are selected, since hidden nodes with high variance transfer a larger amount of information to the output layer. A node is pruned if its variance is lower than a given percentage of the maximum value. The number of nodes
kept after pruning can be adjusted according to the amount of adaptation data available. Second, the selected nodes are retrained to minimise the cross-entropy between the NN outputs and the target values; only the weights
connecting the selected hidden neurons to the output nodes are updated, by a standard gradient descent procedure. The adaptation training stops after a fixed number of iterations to preserve the adapted model's generality,
and the model with the lowest word error rate (WER) is selected.
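The variance-based selection step can be sketched as follows (an illustrative NumPy sketch; the function name and threshold parameter are assumptions, not the exact formulation of \citep{RLL}):

```python
import numpy as np

def select_active_nodes(hidden_activations, keep_fraction=0.5):
    """Propagate the adaptation data, measure each hidden node's
    activation variance, and keep the nodes whose variance is at least
    keep_fraction of the maximum; only their output weights are then
    retrained. hidden_activations: (num_frames, num_nodes).
    Returns the indices of the selected nodes."""
    var = np.asarray(hidden_activations).var(axis=0)
    return np.flatnonzero(var >= keep_fraction * var.max())
```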


\paragraph{Conservative Training}\mbox{}\\

In conservative training, regularization terms are added to the adaptation criterion. As mentioned previously, the large number of parameters of a DNN must be adapted using a limited amount of adaptation data,
which can lead to overfitting. In \citep{KLDNN}, a regularization method based on the KL divergence is proposed. It forces the distribution of the adapted model to stay close to that of the speaker-independent
model; in other words, the adaptation is done conservatively. All the weights of the network are adapted directly using the adaptation data. The target distribution used for training is a linear interpolation
between the supervised target distribution (derived from the alignments) and the distribution of the speaker-independent model. This linear interpolation method is analogous to MAP adaptation for GMMs.
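Per frame, the interpolated target distribution can be sketched as follows (illustrative; `rho` denotes the regularisation weight, a name assumed here rather than taken from \citep{KLDNN}):

```python
import numpy as np

def kld_regularized_targets(hard_targets, si_posteriors, rho=0.5):
    """KL-divergence regularised adaptation target: interpolate the
    supervised (alignment-derived) target distribution with the SI
    model's own posteriors, pulling the adapted model back towards
    the SI model. rho = 0 recovers plain retraining; rho = 1 keeps
    the SI distribution unchanged."""
    return (1.0 - rho) * np.asarray(hard_targets) + rho * np.asarray(si_posteriors)
```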


\subsubsection{Speaker Input as a Speaker Dependent Bias}

The techniques described in this section augment the input features of the DNN with speaker information. This extra speaker information can be considered a bias to the first hidden layer.

\begin{eqnarray}
y_{1}=logistic(W_{1} V_{speech} + b_{speaker})
\label{eqn:bias}
\end{eqnarray}
where $V_{speech}$ denotes the speech input vector, $W_{1}$ is the corresponding weight matrix and $b_{speaker}$ is the speaker-dependent bias vector of the first hidden
layer. Furthermore, $b_{speaker} = W_{speaker} V_{speaker} + b_{1}$, where $V_{speaker}$ denotes the speaker input and $W_{speaker}$ is the corresponding weight matrix.
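Equation~\ref{eqn:bias} can be sketched directly (an illustrative NumPy sketch; the function and argument names are assumptions):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def first_layer_with_speaker_bias(W1, b1, W_spk, v_speech, v_speaker):
    """First hidden layer with the speaker input folded into a
    speaker-dependent bias: b_speaker = W_speaker v_speaker + b_1,
    so that y_1 = logistic(W_1 v_speech + b_speaker)."""
    b_speaker = W_spk @ v_speaker + b1
    return logistic(W1 @ v_speech + b_speaker)
```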

\paragraph{Augmenting Features With I-Vectors \citep{IVECT,IVECT1}}

In this approach, the acoustic feature vectors are augmented before being used as input to the DNN. The idea is to let the network figure out and perform speaker normalization during training. This is achieved by
providing features for phonetic discrimination together with features specific to the speaker. Given the speaker-specific features, the DNN can learn speaker-dependent transforms of the acoustic features
that create a canonical phone classification space in which inter-speaker variability is reduced.


The strengths of this method are listed below.

\begin{enumerate}
 \item i-vectors are a fixed-dimensional representation of speech segments.
 \item Only one representation (i-vector) per speaker is required.
 \item It is simple to implement, and the computational overhead is small compared with VTLN and fMLLR, where multiple decoding passes are needed.
 \item The dimensionality of the i-vector can be tuned for better performance.
 \item The method is complementary to feature normalization methods like fMLLR and VTLN.

However, this method offers little control, and the underlying changes in the model space are not entirely clear. Furthermore, techniques that modify the feature space can normally only remove average characteristics,
while techniques that modify the model space are believed to do a better job of handling variabilities.


\paragraph{Augmenting Features With MLP Factors \citep{MLP_FACTORS}}

The MLP factor based method is motivated by the success of factor analysis based methods in speaker and speech recognition. In this method, two bottleneck DNNs sharing the same input layer are trained:
one as a phone classifier and the other as a speaker classifier. The extracted features, or so-called factors, from both bottleneck DNNs are then used to train the final DNN for phone recognition.


\paragraph{Speaker Normalization Using Explicit Speaker Representations \citep{NORMDNN}}

In this approach, auto-encoder based bottleneck features are first extracted and used to train a GMM-HMM system. This GMM-based system can then be used to estimate
linear-transformation-based speaker adaptation transforms such as MLLR and CMLLR. For a particular speaker, a vectorized CMLLR transformation is estimated using the utterances of that speaker. The parameters of
this estimated transform are used as the speaker representation and are concatenated with the frame-based speech features to generate features containing both speech and speaker related information. These
features are then used to train a DNN for the final phone classification. The idea behind this approach is to incorporate prior knowledge of the speaker to reduce speaker variability, similar to the i-vector based approach.


\subsubsection{Speaker Code as a Speaker Dependent Bias}

The two methods described in this section are based on learning a speaker code for each speaker. These speaker codes act as speaker-dependent biases. In the first method, the speaker codes are used to transform the
features, and the transformed features are fed into the SI-DNN. In the second method, the adaptation is performed directly by providing the speaker code information to the original DNN.

\begin{eqnarray}
y_{l}=logistic(W_{l} y_{l-1} + B_{l} S_{c})
\label{eqn:scode}
\end{eqnarray}
where $y_{l-1}$ denotes the activations of the layer below, $W_{l}$ is the corresponding weight matrix and $S_{c}$ is the speaker code of speaker $c$. $B_{l}$ is the weight matrix that connects the speaker codes to
layer $l$.
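A forward pass with the speaker code acting as a per-layer bias, as in equation~\ref{eqn:scode}, can be sketched as follows (an illustrative NumPy sketch; names are assumptions):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_with_speaker_code(weights, code_weights, x, s_c):
    """Forward pass with a learned speaker code s_c injected as a
    speaker-dependent bias at every layer:
    y_l = logistic(W_l y_{l-1} + B_l s_c).
    weights: list of W_l; code_weights: list of B_l."""
    y = x
    for W, B in zip(weights, code_weights):
        y = logistic(W @ y + B @ s_c)
    return y
```

During adaptation only `s_c` (and, in the first method, the adaptation network) is updated; `weights` stay frozen.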

\paragraph{Speaker Code With the Adaptation Network \citep{SPEAKECODE1}}\mbox{}\\

\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=10cm]{speakercode}
    \caption{Speaker Adaptation of DNN based on speaker code feature transformations \citep{SPEAKECODE1}}
    \label{fig:speakercode}
  \end{center}
\end{figure}

In addition to learning a speaker-independent DNN for phone classification, this method relies on jointly learning a large generic adaptation NN shared by all speakers, as well as a speaker code for each speaker.
The speaker code is a very compact feature vector representing speaker-dependent information. The idea behind this approach is that the speaker codes and the adaptation NN are used to nonlinearly transform the acoustic
features into a speaker-independent feature space (Figure~\ref{fig:speakercode}).

During the training phase, the speaker-independent DNN is learned first. Second, the adaptation NN is inserted so that the activations of its top layer are fed forward to the input layer
of the original DNN. The activations of the top layer of the adaptation NN therefore represent the nonlinearly transformed features, and the number of hidden units of that layer equals the input layer size of the original
DNN. Each layer of the adaptation NN receives inputs from the speaker code as well as from the layer below. Third, the weights of the adaptation network and the speaker codes are learned jointly while keeping the weights
of the original DNN unchanged.

During the adaptation phase, the adaptation data is used to estimate speaker codes for test speakers. During this estimation, all the weights of the DNN and adaptation NN are frozen. These learned speaker codes are used for 
recognition. 

A major advantage of this method is that the number of speaker code parameters is small; it is therefore possible to perform fast adaptation using a small amount of adaptation data. In addition, the size of the speaker
code can be adjusted according to the availability of adaptation data. However, it is worth noting that this adaptation is supervised. Furthermore, the adaptation NN itself must be trained, which is time-consuming when
it is used for large-scale speech recognition tasks.


\paragraph{Speaker Code Based Approach for Direct Adaptation in the Model Space \citep{SPEAKECODE2}}\mbox{}\\

In this method, very compact feature vectors representing speaker-dependent information, called speaker codes, are connected directly to the speaker-independent DNN through a new set of weights. The speaker codes are
connected to every hidden layer of the network as well as to the output layer. The new weights are learned using additional information about the speaker labels. The intuition behind this approach is to provide
speaker-specific compensation, or normalization, at each level of the distributed representation of the DNN.

From a theoretical point of view, these speaker codes act as biases to the original speaker-independent DNN. During the learning phase, only the speaker codes and the weights connected to them are
estimated from the training data, keeping all the weights of the SI DNN unchanged. It is also worth noting that a speaker code is only updated when the data comes from the corresponding speaker. During the adaptation
phase, only the speaker codes are estimated, using the adaptation data, while all the weights of the model remain unchanged. These speaker codes are then used for recognition of the test utterances.

One major advantage of this direct adaptation is the reduced number of adaptation parameters. However, the adaptation is performed in a supervised fashion, which limits its use in practical applications.
It is still unclear what improvements this method can achieve under unsupervised adaptation.


\subsubsection{Augmenting the Model}

Another commonly used and successful adaptation technique reported in the literature augments the original SI model with a linear transformation network. Instead of using a speaker-dependent
bias, these methods estimate a set of weights from the adaptation data that rotates the weights above the linear layer, reducing the mismatch between training and test speakers.

Let us denote by $W_{l}$ the weights of layer $l$ of the SI-DNN and by $W_{LN}$ the weights of the linear network estimated from the adaptation data. The DNN contains $n$ layers including the input and output layers.
The calculation of the activations of layer $l$ is given in equation~\ref{eqn:ln}

\begin{eqnarray}
y_{l}=logistic( (W_{LN} W_{l})  y_{l-1} + b_{l})
\label{eqn:ln}
\end{eqnarray}
where $y_{l-1}$ denotes the activations of the layer below and $b_{l}$ is the bias vector for layer $l$.
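Equation~\ref{eqn:ln} can be sketched directly (an illustrative NumPy sketch with hypothetical names), with $W_{LN}$ initialised to the identity so that the augmented layer initially reproduces the SI layer:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_with_linear_network(W_l, b_l, W_LN, y_prev):
    """Augmented layer: a speaker-specific linear network W_LN is
    composed with the frozen SI weights,
    y_l = logistic((W_LN W_l) y_prev + b_l).
    Only W_LN is updated during adaptation."""
    return logistic((W_LN @ W_l) @ y_prev + b_l)
```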


\paragraph{Linear Input Network (LIN)}\mbox{}\\

A classical approach is to add a linear input transformation network that acts as a pre-processor to the main network \citep{LIN1, RTN}. The aim of the augmented transformation layer is to map the incoming speech into a
``better'' representation, one that enhances the ability of the main NN classifier to compute the posterior probabilities.

During the adaptation process, the LIN is first initialised to the identity matrix; only the augmented LIN is then updated using the enrollment data, while the weights of the original network remain unchanged. The
architecture of the transformation layer can be defined with different degrees of complexity, tying, and numbers of parameters, depending on the amount of adaptation data available.

The advantages of this architecture include: a compact representation of multiple speakers, since each speaker needs only a speaker-specific transformation layer instead of a whole SD NN model; rapid adaptation of the
transformation layer, rather than the whole network, with a small amount of data; separate sub-networks for modelling speaker-dependent and speaker-independent characteristics; and more robust training of a small number
of parameters with limited adaptation data.



\paragraph{Linear Hidden Network (LHN)}\mbox{}\\

The approach presented in \citep{LHN} explores the possibility of adapting NN models with transformations of the hidden layer representations. In a layered neural network, the activation values of each hidden layer
are nonlinear transformations of the layer below and become more and more abstract as further nonlinear transformations are applied. As in LIN adaptation, the weights of the LHN are initialised to an identity matrix
and estimated with the standard EBP algorithm while the weights of the original network are kept frozen. In \citep{LHN} the LHN was applied to the last hidden layer of an SI NN model with four layers in total, including
two hidden layers. However, since the outputs of an internal layer can be considered features more discriminative than the original ones, the LHN can be applied to any hidden layer of the network.


\paragraph{Linear Output Network (LON)}\mbox{}\\
 
This technique is similar to the LIN and LHN approaches. The model is augmented with a linear network inserted before the softmax component of the output layer, and this linear layer is also initialized to the identity
matrix. As in the LIN and LHN methods, during adaptation the weights of the SI DNN are frozen and the LON weights are updated using the EBP algorithm.

\paragraph{Subspace Method}\mbox{}\\

Subspace methods aim to find a speaker subspace and then construct the adaptation transformation as a point in that subspace. One such method \citep{LIBO} performs PCA on the adaptation matrices to find eigenvectors.
The transformation for a test speaker can then be estimated as a linear combination of these eigenvectors, with the coefficients for each speaker estimated using the backpropagation algorithm. This subspace method can be
used with the LIN, LHN and LON techniques.
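The PCA step over per-speaker adaptation matrices can be sketched as follows (an illustrative NumPy sketch; names are hypothetical, and the backpropagation estimation of the test-speaker coefficients is omitted):

```python
import numpy as np

def adaptation_subspace(adaptation_matrices, k):
    """Vectorise the per-speaker adaptation (e.g. LIN) matrices from
    training speakers and take the k leading eigenvectors by PCA.
    Returns the mean vectorised transform and the k-dim basis."""
    X = np.stack([np.asarray(M).ravel() for M in adaptation_matrices])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def speaker_transform(mean, basis, coeffs, shape):
    """Reconstruct a test speaker's transform from its subspace
    coefficients (which would be estimated by backpropagation)."""
    return (mean + np.asarray(coeffs) @ basis).reshape(shape)
```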

