Gaussian mixture models (GMMs) are currently the most popular approach for phoneme classification in automatic speech recognition \cite{lmgmm1}.
GMMs are usually trained with maximum likelihood (ML) methods, i.e.\ the parameters of the mixture components are chosen so that the likelihood of the training data is maximized.
The most common algorithm for finding maximum likelihood solutions is the expectation-maximization (EM) algorithm \cite{bishop}.

In tasks where GMMs are mainly used to predict class labels for new data points, the classification performance of GMMs is more important than the generative aspect of modelling the underlying distribution.
Recently there has been increased interest in combining discriminative training approaches with GMMs in order to improve the classification performance of GMMs \cite{lmgmm1, lmgmm2}.
One common approach is to maximize the classification margin instead of the likelihood of the training data.
This idea comes from the work on support vector machines (SVMs), which are currently one of the state-of-the-art techniques for binary classification.
For GMMs, the size of the margin is measured by the Mahalanobis distance.
In order to classify a new data point, a large margin GMM chooses the closest class center under the Mahalanobis distance (cf.\ the maximum posterior class probability for maximum likelihood GMMs).
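This decision rule can be illustrated with a small sketch. Our own implementation is in Matlab (see below); the following Python/NumPy fragment is only a hypothetical illustration of the rule, with the function name and toy parameters chosen for exposition.

```python
import numpy as np

def mahalanobis_predict(x, means, precisions):
    """Assign x to the class whose center is closest under the
    Mahalanobis distance induced by that class's precision matrix."""
    dists = [(x - mu) @ P @ (x - mu) for mu, P in zip(means, precisions)]
    return int(np.argmin(dists))

# Toy example: two classes with identity precision matrices,
# so the Mahalanobis distance reduces to the Euclidean distance.
means = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
precisions = [np.eye(2), np.eye(2)]
print(mahalanobis_predict(np.array([1.0, 0.0]), means, precisions))  # closer to class 0
```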

In this project, we implemented several classifiers based on maximum likelihood GMMs and the EM algorithm in order to achieve the following goals:
\begin{itemize}
\item Reproduce some of the results in the literature \cite{lmgmm1} in order to gain a better understanding of GMMs and the EM algorithm.
\item When comparing maximum likelihood with large margin GMMs, some experiments in the literature were only performed for the large margin approach \cite{lmgmm1}.
We wanted to compare the relative performance of the two techniques in more detail by repeating these experiments with maximum likelihood GMMs.
\item Compare the performance of GMMs with our other classifiers, SVMs (section \ref{section:svm}) and random forests (section \ref{section:random_forest}).
\item Explore the performance of classifiers combining our three individual classifiers.
We were particularly interested in combining GMMs with SVMs as another approach for introducing large margin methods into GMMs.
\end{itemize}

\subsection{Implementation}
In order to understand the implementation details of GMMs, we decided to write the GMM classifiers ourselves.
We chose Matlab as programming environment because it allowed us to write the core algorithms without much overhead.
The main part of the implementation was the EM algorithm.
The overall structure of the algorithm follows \cite{bishop}.
Relative to this high-level description, we adapted our implementation in various ways to run it on the relatively large TIMIT data set.

\begin{itemize}
\item In order to initialize the mixture components, we first run the K-means algorithm to find one cluster per mixture component.
We then use the sample mean and covariance of each cluster to initialize the corresponding mixture component.
This approach usually gives a relatively good start for the EM algorithm and consequently reduces the number of iterations required by EM.
Since K-means converges faster than EM, this is a sensible optimization.
We implemented the K-means algorithm ourselves.

\item Implementing the formulas for the EM algorithm directly can lead to floating point precision problem because the likelihood of a data point decays exponentially with its distance to the center of a mixture component.
Hence we performed all computations with probabilities in a logarithmic scale.

\item We added a regularization term to the covariance matrices in order to avoid cases where the covariance matrix was not positive definite.
The regularization term was an identity matrix scaled by $10^{-5}$.

\item We vectorized the code in order to avoid all loops over individual data points.
This was important because of the size of the TIMIT training set (about 140,000 examples per feature set).

\item We stopped the algorithm when the difference between the log-likelihoods of two consecutive iterations was smaller than 0.001 or after 100 iterations.
\end{itemize}
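The second and third points above can be combined in a single E-step. As our actual implementation is in Matlab, the following Python/NumPy sketch is only an illustrative reconstruction under the stated assumptions (log-scale responsibilities via log-sum-exp, covariances regularized by $10^{-5} I$); the function names are hypothetical.

```python
import numpy as np

def log_gaussian(X, mu, cov):
    """Log-density of a multivariate Gaussian N(mu, cov) at each row of X."""
    d = X.shape[1]
    chol = np.linalg.cholesky(cov)
    sol = np.linalg.solve(chol, (X - mu).T)      # whitened differences
    maha = np.sum(sol ** 2, axis=0)              # squared Mahalanobis distances
    logdet = 2.0 * np.sum(np.log(np.diag(chol)))
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + maha)

def e_step(X, weights, means, covs, reg=1e-5):
    """One E-step: responsibilities computed entirely on a log scale,
    with each covariance matrix regularized by reg * I."""
    d = X.shape[1]
    log_p = np.column_stack([
        np.log(w) + log_gaussian(X, mu, cov + reg * np.eye(d))
        for w, mu, cov in zip(weights, means, covs)
    ])
    # log-sum-exp over the components avoids underflow for distant points
    m = log_p.max(axis=1, keepdims=True)
    log_norm = m + np.log(np.exp(log_p - m).sum(axis=1, keepdims=True))
    return np.exp(log_p - log_norm), float(log_norm.sum())  # responsibilities, log-likelihood
```

Note that the loop runs over mixture components only; all operations over data points are vectorized, in the spirit of the fourth point above.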


\subsection{Experiments}
Here we present the results of the GMM classifiers we implemented.
The experiments of comparing and combining GMMs with our other classifiers are described in sections \ref{section:comparison} and \ref{section:combined}.

We performed all classification experiments on the TIMIT corpus described in the introduction.
The actual data set was provided by the course staff and is the same as in \cite{lmgmm1}, which also contains a description of the eight different feature sets.
The data set contains 48 different classes, each corresponding to a phoneme.

In order to compare our results with the literature, we followed the standard TIMIT phoneme classification benchmark.
All classifiers were trained on the training set only.
For tests and comparisons during the development of our classifiers we used the development set.
The classification performance on the test set was only measured after the code of the classifiers had been finalized.
Moreover, we mapped the 48 classes to a reduced set of 39 classes for calculating the error rates so that our results can be compared with the literature.

\subsubsection{Mixture Components}
First, we compared the performance of GMMs with different numbers of mixture components per class on the eight feature sets.
Tables \ref{table:gmm_mixture_dev_error} and \ref{table:gmm_mixture_test_error} contain the results for the development and test set respectively.

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Set & 1-mix & 2-mix & 3-mix & 4-mix & 5-mix \\
\hline
S1 & 24.9\% & 24.1\% & 24.0\% & 23.5\% & 23.9\%\\
\hline
S2 & 24.6\% & 23.7\% & 23.5\% & 23.3\% & 23.7\%\\
\hline
S3 & 24.7\% & 23.4\% & 22.6\% & 22.9\% & 22.7\%\\
\hline
S4 & 25.3\% & 23.4\% & 22.5\% & 22.9\% & 23.0\%\\
\hline
S5 & 26.3\% & 24.6\% & 24.1\% & 24.2\% & 24.1\%\\
\hline
S6 & 25.9\% & 24.2\% & 23.4\% & 23.2\% & 24.0\%\\
\hline
S7 & 26.2\% & 25.5\% & 24.8\% & 24.5\% & 24.8\%\\
\hline
S8 & 26.3\% & 24.7\% & 24.2\% & 24.2\% & 24.0\%\\
\hline
\hline
avg & 25.5\% & 24.2\% & 23.7\% & 23.6\% & 23.8\% \\
\hline
\end{tabular}

\caption{Error rates of the maximum likelihood GMMs on the development set.}
\label{table:gmm_mixture_dev_error}
\end{table}

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Set & 1-mix & 2-mix & 3-mix & 4-mix & 5-mix \\
\hline
S1 & 25.8\% & 24.7\% & 24.9\% & 24.4\% & 25.1\%\\
\hline
S2 & 25.7\% & 24.2\% & 24.0\% & 24.1\% & 25.0\%\\
\hline
S3 & 24.9\% & 24.0\% & 23.1\% & 23.1\% & 24.1\%\\
\hline
S4 & 25.5\% & 23.8\% & 23.5\% & 23.0\% & 23.3\%\\
\hline
S5 & 26.0\% & 25.0\% & 24.3\% & 24.5\% & 24.9\%\\
\hline
S6 & 25.7\% & 25.2\% & 24.1\% & 24.3\% & 24.6\%\\
\hline
S7 & 26.6\% & 25.9\% & 25.8\% & 25.3\% & 25.5\%\\
\hline
S8 & 26.8\% & 25.3\% & 24.8\% & 24.7\% & 24.9\%\\
\hline
\hline
avg & 25.9\% & 24.8\% & 24.3\% & 24.2\% & 24.7\% \\
\hline
\end{tabular}

\caption{Error rates of the maximum likelihood GMMs on the test set.}
\label{table:gmm_mixture_test_error}
\end{table}

The results show that GMMs with 3 or 4 mixture components per class achieve the best results.
A higher number of mixture components tends to overfit the training data and consequently shows worse generalization performance on the development and test sets.
On the other hand, one or two mixture components per class give worse results because they cannot fully capture the complexity of the data set.


Table \ref{table:gmm_mixture_paper_comparison} compares our results for 1-mix, 2-mix and 4-mix GMMs with those in \cite{lmgmm1}.
While our error rates are slightly higher, the difference is relatively small.
One reason for the discrepancy could be that the GMMs in \cite{lmgmm1} are trained with the cross-validation EM algorithm \cite{cvem}, while we used the standard EM algorithm.
Moreover, the authors of \cite{lmgmm1} trained two independent GMMs for each case and used the development set to select the better candidate.

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
Set & \multicolumn{2}{|c|}{1-mix} & \multicolumn{2}{|c|}{2-mix} & \multicolumn{2}{|c|}{4-mix} \\
\hline
 & \cite{lmgmm1} & us & \cite{lmgmm1} & us & \cite{lmgmm1} & us \\
\hline
dev & 24.8\% & 25.5\% & 23.8\% & 24.2\% & 23.5\% & 23.6\% \\
\hline
test & 25.2\% & 25.9\% & 24.4\% & 24.8\% & 24.1\% & 24.2\% \\
\hline
\end{tabular}

\caption{Comparison of our results with those in \cite{lmgmm1}.}
\label{table:gmm_mixture_paper_comparison}
\end{table}

\subsubsection{Committees}
The authors of \cite{lmgmm1} achieve a significant increase in the classification performance of their large margin GMM by combining the eight feature sets in a committee.
We performed similar experiments to find out whether maximum likelihood GMMs would also benefit from using all eight feature sets.

In the committee approach, we separately train one GMM per feature set.
For classification, we add the log posterior probabilities for each class from the eight GMMs and choose the class with the largest sum.
Table \ref{table:gmm_committee} summarizes the results for the development and test set.
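The combination rule is straightforward. The following Python/NumPy sketch illustrates it on toy numbers (our implementation is in Matlab; the function name and values here are purely illustrative):

```python
import numpy as np

def committee_predict(log_posteriors):
    """log_posteriors: list of (N, C) arrays, one per feature set,
    holding per-class log posterior probabilities for N data points.
    Sum across the feature sets and pick the class with the largest total."""
    total = np.sum(log_posteriors, axis=0)   # elementwise sum over the list
    return np.argmax(total, axis=1)

# Toy example: two "feature sets", three classes, two data points.
lp1 = np.log(np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]]))
lp2 = np.log(np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]))
print(committee_predict([lp1, lp2]))  # class 0 for the first point, class 1 for the second
```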

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|}
\hline
Set & \multicolumn{2}{|c|}{1-mix} & \multicolumn{2}{|c|}{2-mix} & \multicolumn{2}{|c|}{3-mix} & \multicolumn{2}{|c|}{4-mix} & \multicolumn{2}{|c|}{5-mix} \\
\hline
& avg & com & avg & com & avg & com & avg & com & avg & com \\
\hline
dev & 25.5\% & 22.5\% & 24.2\% & 19.9\% & 23.7\% & 19.7\% & 23.6\% & 19.6\% & 23.8\% & 19.9\% \\
\hline
test & 25.9\% & 22.6\% & 24.8 \% & 19.9\% & 24.3\% & 19.7\% & 24.2\% & 19.2\% & 24.7\% & 19.1\% \\
\hline
\end{tabular}
\caption{Error rates of the committee classifiers.}
\label{table:gmm_committee}
\end{table}

The results show that the committee classifiers achieve a lower error rate than the average over all feature sets.
Interestingly, the 5-mix committees show the largest improvements.
One reason could be that the 5-mix GMMs complement each other better because they are more diverse across the different feature sets.
It is worth noting that the error rates for the committee classifiers are comparable to the error rates of the hierarchical large margin GMMs in \cite{lmgmm1}.

\subsubsection{Hierarchies}
For large margin GMMs, recent research shows an improvement in the classification performance by using a hierarchical GMM structure \cite{lmgmm1}.
In a hierarchical GMM, the classes are divided into a set of disjoint clusters.
When training a hierarchical GMM, we separately fit GMMs to each cluster and each class.
The goal of this approach is to have a larger number of training points at the cluster level, which can lead to more robust cluster-level GMMs.
In a maximum likelihood hierarchical GMM, we then pick the class that maximizes the sum of the log posterior probability for the class and for the corresponding cluster.

We used a clustering based on manner classes and divided the phonemes into eight sets: short vowels, long vowels, semi-vowels, nasals, weak fricatives, strong fricatives, stops and closures (with silences) \cite{lmgmm1}.
In the following experiments, we tested three different hierarchies based on the number of mixture components on the cluster level and class level.
$H(x,y)$ denotes a hierarchy with $x$ mixture components per class and $y$ mixture components per cluster.
Moreover, we added a weight $w$ for the cluster log posterior probabilities.
So for a data point $x$, the prediction $\hat{y}$ is given by

\[
\hat{y} = \argmax_c \{ \log P(c \; | \; x) + w \log P(\textrm{cluster}(c) \; | \; x) \}
\]
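The prediction rule above translates directly into code. As before, our implementation is in Matlab; the following Python/NumPy fragment is only an illustrative sketch with hypothetical names, where \texttt{cluster\_of} encodes the mapping from classes to their manner-class clusters.

```python
import numpy as np

def hierarchical_predict(log_post_class, log_post_cluster, cluster_of, w):
    """Pick the class c maximizing log P(c|x) + w * log P(cluster(c)|x).

    log_post_class:   (C,) class log posteriors for one data point
    log_post_cluster: (M,) cluster log posteriors for the same point
    cluster_of:       (C,) index of the cluster containing each class
    w:                weight of the cluster term"""
    scores = log_post_class + w * log_post_cluster[cluster_of]
    return int(np.argmax(scores))

# Toy example: three classes, classes 0 and 1 in cluster 0, class 2 in cluster 1.
lp_class = np.log(np.array([0.4, 0.35, 0.25]))
lp_cluster = np.log(np.array([0.3, 0.7]))
cluster_of = np.array([0, 0, 1])
print(hierarchical_predict(lp_class, lp_cluster, cluster_of, 0.0))  # class term alone: class 0
print(hierarchical_predict(lp_class, lp_cluster, cluster_of, 2.0))  # cluster term dominates: class 2
```

With $w = 0$ the cluster term vanishes and the rule reduces to the ordinary per-class maximum, which matches the behaviour discussed below.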

Tables \ref{table:gmm_hierarchy_dev} and \ref{table:gmm_hierarchy_test} summarize the results of the experiments for feature set 1.

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|}
\hline
$w$ & 0.0 & 0.05 & 0.1 & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 & 1.5 & 2.0\\
\hline
$H(1,2)$ & 24.9\% & 26.5\% & 29.3\% & 36.7\% & 46.9\% & 52.6\% & 57.0\% & 60.0\% & 65.8\% & 69.6\%\\
\hline
$H(2,4)$ & 23.9\% & 24.8\% & 27.1\% & 33.7\% & 43.9\% & 49.7\% & 54.1\% & 57.6\% & 64.0\% & 67.9\%\\
\hline
$H(3,6)$ & 24.0\% & 24.6\% & 26.1\% & 31.1\% & 41.8\% & 48.0\% & 53.0\% & 57.8\% & 65.6\% & 68.8\%\\
\hline
\end{tabular}
\caption{Error rates of hierarchical GMMs on the development set.}
\label{table:gmm_hierarchy_dev}
\end{table}

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|}
\hline
$w$ & 0.0 & 0.05 & 0.1 & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 & 1.5 & 2.0\\
\hline
$H(1,2)$ & 25.8\% & 26.6\% & 29.7\% & 37.5\% & 47.1\% & 52.7\% & 56.9\% & 59.9\% & 65.7\% & 69.4\%\\
\hline
$H(2,4)$ & 24.5\% & 25.3\% & 27.6\% & 34.4\% & 44.5\% & 49.9\% & 54.1\% & 58.4\% & 64.3\% & 67.7\%\\
\hline
$H(3,6)$ & 24.4\% & 25.0\% & 26.7\% & 31.8\% & 42.5\% & 48.4\% & 52.8\% & 57.4\% & 65.6\% & 69.2\%\\
\hline
\end{tabular}
\caption{Error rates of hierarchical GMMs on the test set.}
\label{table:gmm_hierarchy_test}
\end{table}

The results show that we achieve the best classification performance for $w=0$.
In this case, $H(1,2)$, $H(2,4)$ and $H(3,6)$ essentially correspond to a normal GMM with one, two or three mixture components per class.
It is worth noting that in \cite{lmgmm1}, the reported error rates for $H(1,2)$ and $H(2,4)$ are also almost equal to those of the 1-mix and 2-mix GMMs, respectively.
The authors also weight the log posterior probabilities and use the development set to choose the weights.
While the authors do not give values for their weights, we guess that they used cross-validation and arrived at similarly small weights.
While a hierarchical approach might offer benefits for large margin GMMs, it does not improve the performance of GMMs trained for maximum likelihood.

For a better understanding of the results above, we performed further experiments to determine the classification performance at the cluster level.
Table \ref{table:gmm_hierarchy_cluster} shows the results.

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|}
\hline
Set & $H(1,2)$ & $H(2,4)$ & $H(3,6)$ \\
\hline
dev & 27.9\% & 26.8\% & 26.1\% \\
\hline
test & 27.9\% & 26.8\% & 26.4\% \\
\hline
\end{tabular}
\caption{Error rates of hierarchical GMMs at the cluster level.}
\label{table:gmm_hierarchy_cluster}
\end{table}

The error rates at the cluster level are higher than the error rates for per-phoneme classification.
One reason for this could be that the smaller number of mixture components is not able to capture the complexity of the phoneme clusters.
Moreover, the relatively high error rates at the cluster level explain why the performance of the hierarchical GMMs is better for small values of $w$.
