The first approach we used for the project is Support Vector Machines (SVMs) for phonetic classification. SVMs are general-purpose classifiers that can be applied to a very wide range of tasks. One of their most useful properties is that they perform well with minimal configuration and without any prior knowledge of the dataset. This made them a natural choice as a first step and allowed us to gain a better understanding of the data and quickly obtain a working classifier. However, since SVMs are inherently binary classifiers, we followed the work of Salomon, King and Osborne \cite{SVM1} and explored different ways to extend them to the multiclass case. Other previous work on phoneme classification using Support Vector Machines includes \cite{SVM2, SVM3}.

\subsection{Multiclass SVMs}

There are multiple ways to extend SVMs to do multiclass classification. Several of the approaches that we tried are the following:
\begin{enumerate}
\item One vs All Classifier

This is the most standard method for multiclass classification. For every class $c_i \in C$ we create a binary classifier in which all examples of class $c_i$ are labeled as $+1$ and all examples from the other classes are labeled as $-1$. We train all $|C|$ classifiers (one per class) using the SVM method. To classify an example, we pick the class whose SVM produces the highest output value. This approach requires relatively little training, since the number of classifiers is linear in the number of classes, but it has no bound on the generalization error, which is evident in practice when comparing its error rate with the other approaches.

\item One vs One - Voting scheme

For this approach we create and train a binary classifier for every pair of classes, considering only the examples of those two classes. To classify an example, each of the $|C|(|C|-1)/2$ classifiers votes for one of its two classes, and we output the class with the largest number of votes. This approach requires $O(|C|^2)$ time for both training and classification.

\item One vs One - DAGSVM

DAGSVM creates, as before, a binary classifier for every pair of classes but applies a different and faster classification procedure. It picks an arbitrary linear ordering of the classes and repeatedly compares the two classes at the extremes of the list, discarding the losing class each time. This process is repeated until only one class is left, which is the output of the classification. DAGSVM improves on the voting classifier in classification speed: it has good generalization bounds, and in most cases its performance is similar to the voting scheme, while classification requires only time linear in the number of classes.

\item One vs One - Hierarchical DAGSVM

Building on top of DAGSVM, we can define a two-layer classification method as follows: classes are grouped into categories, and the DAGSVM method is applied within each category to pick a winner. DAGSVM is then applied anew to the winners of all categories to pick the final winner, which is the output of the classification. Training is exactly the same as before.

\item Subset vs Subset

Subset vs Subset is a hybrid approach in which, instead of doing one-vs-all or one-vs-one classification, we group the classes into categories and use binary classifiers to decide between categories rather than individual classes. It is a hierarchical method as before, but uses additional classifiers so that it can potentially perform better.
\end{enumerate}
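To make the one-vs-one decision procedures above concrete, the following is a minimal sketch (not our implementation, which used C++ and Matlab) of the voting and DAGSVM decision rules. It assumes each trained pairwise classifier for classes $(a, b)$ is available as a hypothetical decision function \texttt{f[(a, b)]} that returns a positive score when it favors class $a$:

```python
from itertools import combinations

def vote_classify(x, classes, f):
    """One-vs-one voting: every pairwise classifier casts one vote;
    the class with the most votes wins."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        winner = a if f[(a, b)](x) > 0 else b
        votes[winner] += 1
    return max(votes, key=votes.get)

def dag_classify(x, classes, f):
    """DAGSVM: repeatedly compare the two classes at the extremes of the
    ordering and discard the loser; needs only |C|-1 evaluations."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        if f[(a, b)](x) > 0:
            remaining.pop()       # b loses and is discarded
        else:
            remaining.pop(0)      # a loses and is discarded
    return remaining[0]
```

Voting evaluates all $|C|(|C|-1)/2$ classifiers per sample, whereas the DAG evaluates only $|C|-1$ of them.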

\subsection{Our approach}

In order to get better classification results, we decided to work directly on the provided data set with all 48 classes and reduce to 39 classes only when comparing our results to the existing literature. This way we could better exploit the richer structure of the dataset. We tried implementing all of the methods discussed above, but we focused primarily on one-vs-one methods, since they generally performed much better, as shown in the following section.

At the beginning of our work we experimented with the built-in Matlab functions for SVMs. In order to make training run in reasonable time, we had to reduce the number of samples in each class to about 100 by random sampling from the data set. The resulting classifier had an error rate of about 50\%, which showed that the approach worked, but the number was disappointing compared to the literature. We tried different kernels (linear, polynomial, Gaussian) and different classification techniques, but the classifier improved only by negligible amounts.

Since the number of samples in every class was one to two orders of magnitude higher than what Matlab could handle, we searched for more efficient implementations that would let us train on the full dataset. We tried SVMlight, but the training time for all 1128 classifiers ($= 48 \cdot 47/2$ pairs of classes) would have been infeasible. We ended up writing our own SVM implementation in C++ using the optimization package CPLEX, which contains a very efficient Quadratic Programming solver. Training took about 8 hours for all 1128 classifiers (about 23 seconds per classifier). The resulting hyperplanes (linear kernel) were then imported into Matlab for further processing. This training time is a huge improvement over \cite{SVM1}, where training on only 40\% of the dataset required 10,000 hours of CPU time.
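For reference, each pairwise classifier requires solving the standard soft-margin SVM dual (the exact program is not spelled out here, but this is the quadratic program that a QP solver such as the one in CPLEX handles). For training pairs $(x_i, y_i)$ with $y_i \in \{+1, -1\}$ and slack cost $C$:
\begin{align*}
\max_{\alpha} \quad & \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i^\top x_j \\
\text{s.t.} \quad & 0 \le \alpha_i \le C, \qquad \sum_i \alpha_i y_i = 0,
\end{align*}
with the separating hyperplane recovered as $w = \sum_i \alpha_i y_i x_i$ in the linear-kernel case.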

The experiments on the data set were run on the first feature set using linear kernels. Only the training data were used during training, so the error rates on the development and test sets are prediction errors. Some adjustments to the SVM algorithm that we tried and evaluated using cross-validation are:
\begin{itemize}
\item Changing the cost of the slack variables uniformly
\item Changing the relative cost between the classes to better weight the error rate.
\end{itemize}

Changing the costs uniformly did not have a large impact on the outcome, so the value 1000 was chosen. On the other hand, changing the relative cost between the classes makes a substantial difference to the resulting hyperplane. When comparing class A to class B, if we penalize the slack of class A more than that of class B, the resulting hyperplane correctly classifies a higher percentage of samples of class A and fewer of class B. Through these changes and cross-validation we observed a slight improvement in overall classification accuracy. Since the improvement was not significant, we decided to assign equal weights to both classes in every pair.
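In the primal form, these two adjustments amount to scaling the slack penalties. With class-specific costs $C_A$ and $C_B$ (our notation), the objective for the pair $(A, B)$ becomes
\[
\min_{w,\,b,\,\xi} \ \frac{1}{2}\|w\|^2 + C_A \sum_{i \in A} \xi_i + C_B \sum_{j \in B} \xi_j
\quad \text{s.t.} \quad y_i(w^\top x_i + b) \ge 1 - \xi_i, \ \ \xi_i \ge 0,
\]
where the uniform change corresponds to $C_A = C_B$ and the relative change to $C_A \ne C_B$.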

For the DAGSVM, we used the original ordering of the classes, since this way the first comparisons are between the most dissimilar phonemes (vowels 1--16 and consonants 17--48).

For the hierarchical DAGSVM, we grouped the classes into categories based on their phonetic properties, applied the DAGSVM within each category to select a set of winners, and then applied one additional DAGSVM comparison on the set of winners to obtain the winning class.

As for the subset-vs-subset classifier, we split the classes into categories as before and applied the voting classifier on the categories to select a winning category, and then applied the voting classifier once more to select a class within the winning category. This approach, however, produced worse results than the plain voting classifier and was abandoned.

\subsection{Our results}

The results for the different classification approaches that were described above are given in table \ref{table:svm_error48} for 48 classes and in table \ref{table:svm_error39} for the reduced set of 39 classes.

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Method & Error Rate (Train) & Error Rate (Dev) & Error Rate (Test) \\
\hline
One vs All & 39.43\% & 40.04\% & 41.08\%  \\
\hline
Voting & 26.72\% & 28.82\% & 30.96\%  \\
\hline
DAGSVM & 27.09\% & 29.34\% & 31.12\%  \\
\hline
Hier. DAGSVM & 26.92\% & 29.19\% & 31.06\%  \\
\hline
\end{tabular}
\caption{Error rates for 48 classes}
\label{table:svm_error48}
\end{table}

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Method & Error Rate (Train) & Error Rate (Dev) & Error Rate (Test) \\
\hline
One vs All & 34.08\% & 34.60\% & 35.52\%  \\
\hline
Voting & 22.38\% & 24.49\% & 26.28\%  \\
\hline
DAGSVM & 22.78\% & 24.99\% & 26.46\%  \\
\hline
Hier. DAGSVM & 22.65\% & 24.95\% & 26.18\%  \\
\hline
\end{tabular}
\caption{Error rates for 39 classes}
\label{table:svm_error39}
\end{table}

From the tables above we see that all one-vs-one classifiers achieve similar results and clearly outperform the one-vs-all classifier. The Voting classifier performs best overall, while DAGSVM has comparable error rates and requires classification time only linear in the number of classes.

Next, we provide graphs of the true positive rates (figure \ref{figure:svm_tpr}) and the false positive rates (figure \ref{figure:svm_fpr}) for the Voting classifier. The true positive rate of a class A is the fraction of samples of class A that our classifier correctly predicts as class A. The false positive rate of class A is the number of times the classifier predicts class A when the sample actually belongs to some other class, divided by the total number of errors in the set.
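As an illustrative sketch (with the true positive rate read as the per-class recall, and the false positive rate as defined above), these rates can be computed from a confusion matrix as follows:

```python
# M[i][j] counts samples of true class i predicted as class j.
def true_positive_rates(M):
    """Per-class recall: correct predictions over all samples of the class."""
    return [M[i][i] / sum(M[i]) for i in range(len(M))]

def false_positive_rates(M):
    """Errors predicted as class j, divided by the total number of errors."""
    n = len(M)
    total_errors = sum(M[i][j] for i in range(n) for j in range(n) if i != j)
    return [sum(M[i][j] for i in range(n) if i != j) / total_errors
            for j in range(n)]
```

By this definition the false positive rates sum to 1 over all classes, since every error is counted exactly once under its predicted class.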


\begin{figure}[!htb]
	\centering
\subfloat[Training set]{
\includegraphics[width=0.3\textwidth]{figures/Ytrain1cm.pdf}
}
\subfloat[Development Set]{
\includegraphics[width=0.3\textwidth]{figures/Ydev1cm.pdf}
}
\subfloat[Testing set]{
\includegraphics[width=0.3\textwidth]{figures/Ytest1cm.pdf}
}	
	\caption{True positive rates (\%) for the voting one-vs-one classifier (39 classes)}
	\label{figure:svm_tpr}
\end{figure}


\begin{figure}[!htb]
	\centering
\subfloat[Training set]{
\includegraphics[width=0.3\textwidth]{figures/Ytrain1fp.pdf}
}
\subfloat[Development Set]{
\includegraphics[width=0.3\textwidth]{figures/Ydev1fp.pdf}
}
\subfloat[Testing set]{
\includegraphics[width=0.3\textwidth]{figures/Ytest1fp.pdf}
}	
	\caption{False positive rates (\%) for the voting one-vs-one classifier (39 classes)}
	\label{figure:svm_fpr}
\end{figure}

Our results for the classification task outperform those of \cite{SVM1}. This is because, with the technology of that time, they only managed to train using around 40\% of the TIMIT training set, while we were able to train on the full TIMIT dataset.

\subsection{Results for vowel classification}

In this section we compare results for classification on 4 vowels (/aa/, /ae/, /ey/, /ow/), as in \cite{SVM2}.
The results we got for the voting classifier and the DAGSVM are summarized in table \ref{table:svm_vowel_error}.

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Method & Error Rate (Train) & Error Rate (Dev) & Error Rate (Test) \\
\hline
Voting & 8.83\% & 9.76\% & 12.76\%  \\
\hline
DAGSVM & 8.75\% & 9.53\% & 12.76\%  \\
\hline
\end{tabular}
\caption{Error rates for the 4 vowels classes}
\label{table:svm_vowel_error}
\end{table}


We can see that in this case the DAGSVM slightly outperforms the voting method on the training and development sets, while the two are tied on the test set. Tables \ref{table:svm_confusion_voting} and \ref{table:svm_confusion_dag} summarize the confusion matrices on the development set for both classifiers.

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\# & /aa/ & /ae/ & /ey/ & /ow/ & precision (\%)\\
\hline
/aa/ & 217 & 12 & 0 & 7 & 91.95 \\
\hline
/ae/ & 17 & 206 & 7 & 7 & 86.92 \\
\hline
/ey/ & 1 & 10 & 216 & 3 & 93.91\\
\hline
/ow/ & 13 & 6 & 2 & 147  & 87.50\\
\hline
\end{tabular}
\caption{ Confusion matrix on dev set using voting}
\label{table:svm_confusion_voting}
\end{table}

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\# & /aa/ & /ae/ & /ey/ & /ow/ & precision (\%)\\
\hline
/aa/ & 217 & 12 & 0 & 7 & 91.95\\
\hline
/ae/ & 16 & 207 & 7 & 7 & 87.34\\
\hline
/ey/ & 1 & 9 & 217 & 3  & 94.35\\
\hline
/ow/ & 12 & 7 & 2 & 147  & 87.50\\
\hline
\end{tabular}
\caption{Confusion matrix on dev set using DAGSVM}
\label{table:svm_confusion_dag}
\end{table}

Our results are significantly better than those of \cite{SVM2}, who obtain 86.3\% accuracy (i.e., 13.7\% error rate) using SMO and a degree-10 polynomial kernel. The reason for this difference is that we were able to train our classifiers on a larger data set.
