\chapter{Preliminary Results And Future Work}

\label{chap:results}
\ifpdf
    \graphicspath{{Chapter4/Chapter4Figs/}}
\else
    \graphicspath{{Chapter4/Chapter4Figs/EPS/}}
\fi

In the previous chapter, we summarized the techniques that are used to handle speaker variability in ASR. We grouped these techniques into two categories, namely, feature-space compensation (normalization) and model-space
compensation (adaptation). As mentioned previously, our research focuses on handling speaker variability in DNN-HMM systems. One major trend that has achieved considerable performance gains
is transforming the features by concatenating speaker-representative information with the standard speech features. However, these techniques offer only limited control, and it is not entirely clear what underlying changes occur in
the model space when ASR systems are trained on such features. Furthermore, it is believed that techniques that modify the feature space can normally remove only the average speaker characteristics, while techniques that modify the
model space are believed to do a better job of handling variabilities. The performance improvements seen with GMM-HMM adaptation methods compared with GMM-HMM normalization methods support this belief.
Therefore, it is worthwhile to investigate ways of changing the model space of DNN acoustic models.

The speaker code based method \citep{SPEAKECODE2} changes the model space to adapt the SI model to new test speakers and gives promising results. However, these results are based on supervised adaptation, and it is still unclear
how the method performs under unsupervised adaptation. In supervised adaptation, a number of correctly annotated utterances must be provided for each new speaker to adapt the SI model. In practice, this enrollment data can be obtained by
asking the new speaker to read some pre-selected sentences. However, for many real applications, such as speech-based help-desk systems, prompting sentences for adaptation seems unrealistic. In these cases, unsupervised adaptation
would be a good choice: it is a non-intrusive process that could work in the background all the time. Meanwhile, the continuous modification of the model parameters could eliminate non-stationary mismatches. Thus the focus
of our research is to develop techniques that can adequately adapt DNN-HMM ASR systems in an unsupervised way while still improving recognition performance.

Furthermore, some techniques, such as LIN and LHN, have mainly been used with shallow NNs. In DNNs, however, knowledge is represented in a more distributed way. Therefore, it is still not clear at which level of the representation
the adaptation of the DNN should be carried out. It is interesting to investigate these methods in terms of performance, and it is also necessary to observe the changes in the model space after adaptation.

In this chapter, we will first present our preliminary experimental results using unsupervised adaptation and then give a brief explanation of our proposed method and future plans.


\section{LHN based Speaker Adaptation}

In our preliminary work, we applied Linear Hidden Network (LHN) based speaker adaptation to a DNN trained on the Wall Street Journal (WSJ) dataset. In addition to simple LHN adaptation, we also investigated an eigen-basis
based approach \citep{EigenVoices} for estimating the LHN transforms of test speakers.


\subsection{Database information} 

The information about the WSJ dataset is summarized in Table~\ref{tbl:wsjsum}.

\begin{table}
	\caption{Summary of the WSJ training and testing sets.}
	\label{tbl:wsjsum}
	\begin{center}
		\begin{tabular}{|c||c|c|}	
			\hline
			Data Set & Number of Speakers & Number of Utterances  \\
			\hline
			train & 283 & 37416 \\
			\hline
			dev & 10 & 503 \\
			\hline
			test & 8 & 333 \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}


\subsection{Speaker Independent Model information} 

Since DNNs do not require uncorrelated input features, our SI model was trained on 23-coefficient filterbank features along with their first and second temporal derivatives. The features were preprocessed with the cepstral mean
normalization (CMN) algorithm, and 11 consecutive frames were concatenated to form the input feature vectors for the DNN. The SI model has 6 hidden layers with 2048 nodes in each hidden layer, and 3428 target class
labels were used for the tied triphone states. The model has 33.75M parameters in total. It was first pre-trained generatively using RBMs and then finetuned with the cross-entropy criterion. The recognition
results of the SI model are given in Table~\ref{tbl:si}; the overall WER is 4.29.

\begin{table}
	\caption{WER of test speakers for SI model.}
	\label{tbl:si}
	\begin{center}
		\begin{tabular}{|c||c|c|}	
			\hline
			Speaker & WER & Details  \\
			\hline
			440 & 4.10 & 3 ins, 2 del, 23 sub  \\
			\hline
			441 & 5.54 & 4 ins, 4 del, 29 sub  \\
			\hline
			442 & 4.60 & 3 ins, 0 del, 30 sub \\
			\hline
			443 & 5.04 & 4 ins, 3 del, 29 sub \\
			\hline
			444 & 5.01 & 4 ins, 3 del, 28 sub  \\
			\hline
			445 & 2.15  & 3 ins, 0 del, 13 sub \\
			\hline
			446 & 4.25 & 5 ins, 3 del, 20 sub \\
			\hline
			447 & 3.82 & 6 ins, 1 del, 22 sub \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}



For the LHN adaptation, a linear transform was inserted between the first and second hidden layers, and these newly added weights were initialized to the identity matrix. During LHN adaptation, only these newly
added weights are updated using the adaptation data. The number of parameters to be adapted is around 4.2M. The adaptation was performed in an unsupervised fashion, where the transcriptions for the adaptation data were generated by
performing recognition with the SI model.
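The mechanics of this insertion can be illustrated with a minimal NumPy sketch. The layer size matches our SI model, but the function names are ours and the sketch ignores the nonlinearity between layers; it only shows why identity initialization leaves the SI model unchanged at the start of adaptation:

```python
import numpy as np

def make_lhn_transform(hidden_dim=2048):
    """Create an identity-initialized LHN transform for one hidden layer."""
    W_lhn = np.eye(hidden_dim)     # weights start as the identity matrix
    b_lhn = np.zeros(hidden_dim)   # bias starts at zero
    return W_lhn, b_lhn

def apply_lhn(h1, W_lhn, b_lhn):
    """Apply the LHN transform to the first hidden layer's activations
    before they are fed into the second hidden layer."""
    return h1 @ W_lhn + b_lhn

# With identity initialization the network output is unchanged, so
# adaptation starts exactly from the SI model; only W_lhn and b_lhn
# are then updated on the adaptation data.
h1 = np.random.randn(4, 2048)
W, b = make_lhn_transform()
assert np.allclose(apply_lhn(h1, W, b), h1)
```

Because only the inserted matrix and bias are trained, the number of adapted parameters is roughly $2048 \times 2048 \approx 4.2$M, matching the figure above.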

The LHN adaptation results are given in Table~\ref{tbl:lin1}; the overall WER after LHN adaptation is 4.13.

\begin{table}
	\caption{WER of test speakers after LHN adaptation}
	\label{tbl:lin1}
	\begin{center}
		\begin{tabular}{|c||c|c|}	
			\hline
			Speaker & WER & Details  \\
			\hline
			440 & 4.10 & 3 ins, 2 del, 23 sub  \\
			\hline
			441 & 4.94 & 5 ins, 4 del, 24 sub  \\
			\hline
			442 & 4.60 & 4 ins, 0 del, 29 sub \\
			\hline
			443 & 5.46 & 8 ins, 4 del, 27 sub \\
			\hline
			444 & 4.29 & 3 ins, 2 del, 25 sub  \\
			\hline
			445 & 2.15  & 3 ins, 0 del, 13 sub \\
			\hline
			446 & 4.10 &  5 ins, 3 del, 19 sub  \\
			\hline
			447 & 3.56 & 2 ins, 1 del, 24 sub \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}

We were able to improve the WER from 4.29 to 4.13 using LHN adaptation. Therefore, the LHN approach is effective even with DNNs.

\subsection{Eigenvoice Based Adaptation} 

In the LHN experiment, 4.2M parameters were adapted. This is still a large number of parameters to adapt. Therefore, we decided to use an eigenvoice-based approach \citep{EigenVoices} to reduce the number of parameters to
be adapted. In \citep{LIBO}, this method was used to adapt a shallow NN via a LIN and a LON, and it gained considerable improvements.

We selected 200 training speakers with equal numbers of female and male speakers and estimated the LHN transformation for each speaker. A set of eigen-bases was extracted from these training transformations using PCA.
The adaptation problem can then be formulated as in Equation~\ref{eqn:basis}. For our experiment we extracted 80 eigenvectors.

\begin{equation}
\label{eqn:basis}
 \hat{W} = W_{SI} + \sum_{k=1}^N \alpha_{k} B_{k}
\end{equation}

where $W_{SI}$ is the weight matrix of the SI model, $B_{k}$ is the basis with index $k$, and $\alpha_{k}$ is the corresponding basis weight. The basis weights are initialized to zero, which is equivalent to the unadapted model. Error back-propagation (EBP) is used to estimate the basis
weights. From each weight change, the change of $\alpha_{k}$ can be calculated using Equation~\ref{eqn:delta}:

\begin{equation}
\label{eqn:delta}
 \Delta\alpha_{k} =  -\eta\frac{\partial\epsilon}{\partial\alpha_{k}} = -\eta\frac{\partial\epsilon}{\partial W}\frac{\partial W}{\partial\alpha_{k}} = \Delta W . B_{k} = \sum_{i,j} \Delta w_{ij}b_{ij}^k
\end{equation}

Then the new basis weight $\hat{\alpha}_{k}$ is computed from the previous basis weight $\alpha_{k}$ as $\hat{\alpha}_{k}=\alpha_{k}+\Delta\alpha_{k}$.
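The basis-weight update can be sketched numerically as follows. This is an illustrative NumPy sketch on toy dimensions; in it the bases $B_k$ stand in for the PCA eigen-bases over the training-speaker transforms, and the weight change $\Delta W$ is assumed to already contain the learning-rate factor $-\eta$, as in the derivation above:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim = 80, 32                      # 80 bases over a toy 32x32 transform

# Assumed inputs: eigen-bases B_k (from PCA over training-speaker LHN
# transforms) and a weight change Delta_W from one EBP step, which
# already includes the -eta learning-rate factor.
B = rng.standard_normal((N, dim, dim))
Delta_W = rng.standard_normal((dim, dim))

# Delta_alpha_k = Delta_W . B_k = sum_ij Delta_w_ij * b_ij^k
delta_alpha = np.array([np.sum(Delta_W * B[k]) for k in range(N)])

# Basis weights start at zero (the unadapted SI model) and are
# updated as alpha_k <- alpha_k + Delta_alpha_k.
alpha = np.zeros(N)
alpha += delta_alpha

# Adapted transform: W_hat = W_SI + sum_k alpha_k * B_k
W_SI = np.eye(dim)
W_hat = W_SI + np.tensordot(alpha, B, axes=1)
```

Note that only the $N = 80$ coefficients $\alpha_k$ are estimated per speaker, instead of the full 4.2M transform parameters.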

We used two types of weight changes for basis-weight estimation. First, the weight change was calculated after each epoch (results in Table~\ref{tbl:eig1}); this recorded an overall WER of 4.16.
Second, the weight changes were recorded after convergence (results in Table~\ref{tbl:eig2}); with this, the overall WER was 4.09.

\begin{table}
	\caption{WER of test speakers after Eigenvoice Based LHN Adaptation (Coefficients were estimated after each epoch)}
	\label{tbl:eig1}
	\begin{center}
		\begin{tabular}{|c||c|c|}	
			\hline
			Speaker & WER & Details  \\
			\hline
			440 & 3.81 & 2 ins, 2 del, 22 sub  \\
			\hline
			441 & 5.39 & 4 ins, 4 del, 28 sub  \\
			\hline
			442 & 4.87 & 5 ins, 0 del, 30 sub \\
			\hline
			443 & 5.04 & 6 ins, 3 del, 27 sub \\
			\hline
			444 & 4.43 & 4 ins, 2 del, 25 sub  \\
			\hline
			445 & 2.15  & 3 ins, 0 del, 13 sub \\
			\hline
			446 & 4.10 &  5 ins, 3 del, 19 sub  \\
			\hline
			447 & 3.69 & 2 ins, 1 del, 25 sub \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}




\begin{table}
	\caption{WER of test speakers after Eigenvoice Based LHN Adaptation (Coefficients were estimated after convergence)}
	\label{tbl:eig2}
	\begin{center}
		\begin{tabular}{|c||c|c|}	
			\hline
			Speaker & WER & Details  \\
			\hline
			440 & 3.95 & 2 ins, 2 del, 23 sub  \\
			\hline
			441 & 5.09 & 4 ins, 4 del, 26 sub  \\
			\hline
			442 & 4.74 & 5 ins, 0 del, 29 sub \\
			\hline
			443 & 5.32 & 8 ins, 3 del, 27 sub \\
			\hline
			444 & 4.15 & 3 ins, 2 del, 24 sub  \\
			\hline
			445 & 2.15  & 3 ins, 0 del, 13 sub \\
			\hline
			446 & 4.10 &  5 ins, 3 del, 19 sub  \\
			\hline
			447 & 3.43 & 2 ins, 1 del, 23 sub \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}



The results are summarized in Table~\ref{tbl:sum}. Estimating the basis weights from the weight change after convergence recorded the best performance.

When the weight changes are calculated after reaching convergence, they better reflect the goal of adaptation; in other words, these weight changes point in the right direction.
This gives better updates for the basis weights, resulting in a well-estimated transformation that can then be finetuned further by EBP. In contrast, when the weight changes are calculated after a single epoch, some of the weights will
have changes that are undesirable for the final goal. We believe this can be addressed by adjusting the learning rate during adaptation.



\begin{table}
	\caption{Summary of Results}
	\label{tbl:sum}
	\begin{center}
		\begin{tabular}{|c||c|}	
			\hline
			Approach & WER  \\
			\hline
			SI Model & 4.29  \\
			\hline
			LHN & 4.13  \\
			\hline
			LHN + eigenvoice & 4.16 \\
			\hline
			LHN + eigenvoice to convergence & 4.09 \\
			\hline
		\end{tabular}		
	\end{center}	
\end{table}

\section{Measuring the Speaker Variability of a DNN}


In a DNN-HMM system, it is worthwhile to investigate the behaviour of speaker variability in each layer. Intuitively, the speaker variability should be lower in the upper hidden layers than in the lower hidden layers.
However, simple distance metrics such as the Euclidean distance or the KL divergence cannot be used to measure these changes in speaker variability. The reason is that, in the distributed representation of a DNN,
different hidden layers represent different model spaces and are therefore not comparable.

Therefore, instead of measuring distances, we propose to analyze the sensitivity of hidden units to speaker variability. For each hidden unit of the DNN, we record the average activation for each speaker in a set of selected speakers.
These activations can be represented as a vector; in this way, an activation vector is computed for every hidden unit. In this report we propose two mechanisms to analyze the sensitivity using the activation vector of a
given hidden unit: the first measure is based on entropy and the second on variance.


\subsection{Entropy Based Measure}

First, the activation vector is normalized to form a probability distribution, and the entropy of this distribution is calculated. The entropy is higher when the distribution is close to the uniform distribution.
Therefore, hidden units with a higher entropy can be considered less sensitive to speaker variability.
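This measure can be sketched as follows. The sketch assumes non-negative average activations (e.g. from sigmoid hidden units), and the helper name is ours, not from our experimental code:

```python
import numpy as np

def speaker_entropy(activations):
    """Entropy-based sensitivity of one hidden unit.

    `activations` holds the unit's average activation for each of the
    selected speakers.  The vector is normalized into a probability
    distribution and its entropy is returned: higher entropy (closer
    to uniform) means the unit is less sensitive to speaker variability.
    """
    a = np.asarray(activations, dtype=float)
    p = a / a.sum()              # normalize to a probability distribution
    p = p[p > 0]                 # drop zeros, using 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

# A unit that responds equally to all speakers has maximal entropy
# log(n_speakers), while a unit that fires for only one speaker has
# entropy 0 (maximally speaker-sensitive).
uniform = speaker_entropy(np.ones(20))
peaked = speaker_entropy([1.0] + [0.0] * 19)
```

With 20 selected speakers the maximal entropy is $\log 20 \approx 3.0$, which is consistent with the scale of the entropy values reported in the figures below.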

A hidden unit that is less sensitive to speaker variability is shown in Figure~\ref{fig:goodunit}.

\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=9cm]{goodunit}
    \caption{A less sensitive hidden unit to speaker variability; Entropy = 2.99}
    \label{fig:goodunit}
  \end{center}
\end{figure}


On the other hand, if the entropy of a hidden unit is smaller, it is considered more sensitive to speaker variability (Figure~\ref{fig:badunit}).

\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=9cm]{badunit}
    \caption{A more sensitive hidden unit to speaker variability; Entropy = 0.32}
    \label{fig:badunit}
  \end{center}
\end{figure}




Furthermore, it is worthwhile to investigate the changes of entropy of the hidden units due to adaptation. Interestingly, we observed entropy changes in both directions: some units became more sensitive to
speaker variability while other units became less sensitive. It is also worth mentioning that most of these changes are negligible. The changes of entropy in the second hidden layer for the adaptation techniques mentioned in
the previous section are given in Figure~\ref{fig:diff_layer2}.


\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=9cm]{diff_layer2}
    \caption{Entropy change for Hidden Layer 2}
    \label{fig:diff_layer2}
  \end{center}
\end{figure}




\subsection{Variance Based Measure}

In this measure, for each hidden unit, the variance of the values in the activation vector is calculated. For a given unit, if the variance is high compared to the other units, then that unit can be considered
highly sensitive to speaker variability. A hidden unit that is highly sensitive to speaker variability is shown in Figure~\ref{fig:more_variance}.


\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=9cm]{more_variance}
    \caption{A highly sensitive hidden unit to speaker variability}
    \label{fig:more_variance}
  \end{center}
\end{figure}

On the other hand, if the variance of a hidden unit is smaller, that unit is considered less sensitive to speaker variability (Figure~\ref{fig:less_variance}).

\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=9cm]{less_variance}
    \caption{A less sensitive hidden unit to speaker variability}
    \label{fig:less_variance}
  \end{center}
\end{figure}



The variance of the hidden unit shown in Figure~\ref{fig:inc_variance} increased after adaptation; therefore, that unit became more sensitive to speaker variability.

\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=9cm]{varianceBad}
    \caption{Unit becomes more sensitive}
    \label{fig:inc_variance}
  \end{center}
\end{figure}

The variance of the hidden unit shown in Figure~\ref{fig:dec_variance} decreased after adaptation.

\begin{figure}[!htbp]
  \begin{center}
    \leavevmode
      \includegraphics[width=9cm]{varianceGood}
    \caption{Unit becomes less sensitive}
    \label{fig:dec_variance}
  \end{center}
\end{figure}



\subsection{Discussion}

A hidden unit can be sensitive to speakers for two reasons:

\begin{enumerate}
 \item The hidden unit may encode information about speakers that is necessary for phone classification.
 \item The unit may encode information relevant to the mismatch between speakers; this information negatively affects phone classification.
\end{enumerate}

As noted above, during adaptation the sensitivity of the hidden units to speaker variability changes in both directions: some hidden units become more sensitive while others become less sensitive.

In the original SI DNN, a hidden unit that is less sensitive but still active can be considered a unit that encodes phonetic information. However, during adaptation the weights connected to these hidden units may change
to encode information related to the adaptation speakers. In other words, some important information that was used by the SI DNN for phone classification might be lost after adaptation. This issue is known as
``catastrophic forgetting''.


When adapting a DNN with a limited amount of data, some of the knowledge encoded to facilitate phone classification can be affected negatively. This problem is a particular concern for DNNs because the training procedure is discriminative.
Conservative training methods have been developed to mitigate this issue.

If we do not adapt the weights connected to nodes that are insensitive to speaker variability, this problem can also be mitigated indirectly.

We believe the speaker sensitivity measure can be used as a criterion to select weights that are suitable for adaptation. The benefits of this are twofold: 1) it reduces the number of parameters to be adapted, and 2) it mitigates
the issue of catastrophic forgetting. How we plan to utilize this information is discussed in detail in the future work section.
