In the past few years, there has been much research on combining generative and discriminative approaches in machine learning, e.g.\ \cite{discgen}.
The goal of this line of research is to combine the relative strengths of the two approaches.
While generative models can exploit more statistical properties of the input, discriminative models often perform better in classification tasks.
Following this idea, we experimented with several ways to combine the three classifiers from the previous sections.

\subsection{GMMs and SVMs}
\label{subsection:gmmsvm}

Since SVMs are trained to find an optimal decision boundary separating two classes, they perform well on the task of pairwise classification (see section \ref{section:svm}).
However, SVMs cannot incorporate information about the entire distribution, such as class priors.
GMMs, in contrast, are trained to model the underlying distribution by maximizing the likelihood of the given data.
Our first approach aimed to combine these strengths by first using the GMM to find a set of most likely candidate classes and then making a final decision among these candidates with the SVMs.

For a given data point $x$, the algorithm works in two stages:
\begin{enumerate}
\item Find the $k$ most likely classes $C$ by calculating the posterior probability of each class in the GMM and picking the $k$ highest values.
\item Use the SVMs to make a decision among the $k$ classes in $C$.
This is done with the same DAG-SVM approach for multiclass classification as in section \ref{section:svm}.
\end{enumerate}
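The two stages above can be sketched as follows. This is a minimal illustration rather than our implementation: \texttt{posteriors} stands for the per-class posterior probabilities obtained from the GMM, and \texttt{pairwise\_decide} is a placeholder for a trained pairwise SVM as used in the DAG.

```python
import numpy as np

def gmm_svm_predict(posteriors, pairwise_decide, k):
    """Two-stage GMM-SVM prediction (illustrative sketch).

    posteriors: per-class posterior probabilities from the GMM.
    pairwise_decide: callable (i, j) -> winning class index, standing in
                     for a trained pairwise SVM (hypothetical interface).
    k: number of candidate classes kept from the GMM stage.
    """
    # Stage 1: keep the k classes with the highest GMM posterior,
    # ordered from most to least likely.
    candidates = list(np.argsort(posteriors)[::-1][:k])
    # Stage 2: DAG-style elimination -- compare the two extreme
    # candidates and drop the loser until one class remains.
    while len(candidates) > 1:
        i, j = candidates[0], candidates[-1]
        if pairwise_decide(i, j) == i:
            candidates.pop()      # j is eliminated
        else:
            candidates.pop(0)     # i is eliminated
    return candidates[0]

# Toy example with 4 classes; the stand-in pairwise "SVM" simply
# prefers the smaller class index.
post = np.array([0.1, 0.5, 0.3, 0.1])
label = gmm_svm_predict(post, lambda i, j: min(i, j), k=2)
```

With $k=2$, stage 1 keeps classes 1 and 2, and the stand-in pairwise decision then selects class 1.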

For the following experiments, we used our SVM and the 2-component GMM from sections \ref{section:svm} and \ref{section:gmm} respectively.
Table \ref{table:gmmsvm_error} shows the error rates on the first set of features for both the development and the test set and $k \in \{2,3,4\}$.
Figures \ref{figure:gmmsvm_error} \subref{figure:gmmsvm_error_dev} and \subref{figure:gmmsvm_error_test} show the results for larger values of $k$.

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Set & SVM & 2-mix GMM & GMM-SVM $k=2$ & GMM-SVM $k=3$ & GMM-SVM $k=4$ \\
\hline
dev & 25.0\% & 23.8\% & 22.5\% & 23.27\% & 23.8\% \\
\hline
test & 26.4\%  & 24.9\% & 24.0\% & 24.45\% & 25.1\%\\
\hline
\end{tabular}
\caption{Error rates of the GMM-SVM classifiers.}
\label{table:gmmsvm_error}
\end{table}

\begin{figure}[!htb]
  \centering
  \subfloat[Development set]{\includegraphics[width=.45\textwidth]{figures/gmmsvm_deverror.pdf}
    \label{figure:gmmsvm_error_dev}}
  \quad
  \subfloat[Test set]{\includegraphics[width=.45\textwidth]{figures/gmmsvm_testerror.pdf}
    \label{figure:gmmsvm_error_test}}
  \caption{Error rates of the GMM-SVM classifiers.}
  \label{figure:gmmsvm_error}
\end{figure}

The results show that for $k=2$, the GMM-SVM approach outperforms both the SVM and the 2-mix GMM.
While the error rate is still better than both individual classifiers for $k=3$, the results for $k=4$ are already worse than that of the 2-mix GMM.
So the discriminative power of the SVMs is useful for making a final decision between the two most likely classes of the 2-mix GMM.
However, the worse multiclass performance of the SVMs for $k\geq 4$ leads to an error rate higher than that of the 2-mix GMM.
Note that for $k=48$, the error rate of the GMM-SVM is slightly worse than that of the DAG-SVM approach.
This is because we use a different order of comparisons in the GMM-SVM: the classes are sorted by their posterior probability according to the GMM and not by their class label.

\subsection{Committees}
While the approach in the previous section is suitable for combining GMMs and SVMs, the random forest classifier cannot easily be incorporated into this framework.
Hence we implemented a second type of committee classifier that requires only the individual predictions of two multiclass classifiers, so any pair of multiclass classifiers can be combined in a committee.

The committee keeps a score for each possible pair of class labels $(y_1, y_2)$ the two classifiers can produce.
When training the committee, the current training point $x$ is fed into both classifiers.
Let $y_1$ and $y_2$ be the predictions of the first and second classifier respectively.
Moreover, let $y$ be the true label of $x$.
Now we update the score for $(y_1, y_2)$ as follows:
\begin{itemize}
\item If $y_1 = y$, increase the score for $(y_1, y_2)$ by 1.
\item If $y_2 = y$, decrease the score for $(y_1, y_2)$ by 1.
\end{itemize}

For classification, the new test point $x$ is again fed into both classifiers to produce labels $y_1$ and $y_2$.
Now there are two cases:
\begin{itemize}
\item If the score for $(y_1, y_2)$ is non-negative, predict $y_1$.
\item Otherwise predict $y_2$.
\end{itemize}
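The training and classification rules above can be sketched as follows. The functional interface of the two classifiers and the toy example are illustrative assumptions, not our actual setup.

```python
from collections import defaultdict

class Committee:
    """Score-based committee of two multiclass classifiers (sketch)."""

    def __init__(self):
        # score[(y1, y2)] > 0 favours classifier 1, < 0 classifier 2.
        self.score = defaultdict(int)

    def train(self, xs, ys, clf1, clf2):
        for x, y in zip(xs, ys):
            y1, y2 = clf1(x), clf2(x)
            if y1 == y:
                self.score[(y1, y2)] += 1   # classifier 1 was right
            if y2 == y:
                self.score[(y1, y2)] -= 1   # classifier 2 was right

    def predict(self, x, clf1, clf2):
        y1, y2 = clf1(x), clf2(x)
        # Non-negative score: trust classifier 1, otherwise classifier 2.
        return y1 if self.score[(y1, y2)] >= 0 else y2

# Toy example: clf1 always predicts class 0, clf2 always class 1.
clf1 = lambda x: 0
clf2 = lambda x: 1
committee = Committee()
committee.train(xs=[0, 1, 2], ys=[0, 1, 0], clf1=clf1, clf2=clf2)
pred = committee.predict(5, clf1, clf2)
```

On this toy data, classifier 1 is correct twice and classifier 2 once, so the score for the pair $(0,1)$ ends up positive and the committee follows classifier 1.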

The idea behind this classifier is to use the training set to determine, for each possible pair of predictions, which of the two classifiers is more likely to be correct.

Table \ref{table:scommittee_error} shows the results of the committee classifier for all three pairs of our individual classifiers.
For GMMs, we again use the 2-mix version.
The experiments were run on the first feature set.

\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
Set & SVM & GMM & Random Forest & GMM-SVM & GMM-RF & SVM-RF\\
\hline
dev & 25.0\% & 23.8\% & 27.8\% & 23.1\% & 27.8\% & 27.7\% \\
\hline
test & 26.4\% & 24.9\% & 28.2\% & 24.2\% & 28.1\% & 28.2\%\\
\hline
\end{tabular}
\caption{Error rates of the committee classifiers.}
\label{table:scommittee_error}
\end{table}

The GMM-SVM committee has a small advantage over both the SVM and the GMM classifiers.
However, using the random forest classifier yields no improvement.
The reason is that the random forest has a very low training error due to overfitting.
So almost all entries in the score matrix are in favor of the random forest.
Consequently, the classification results of the GMM-RF and SVM-RF committees are essentially the same as for the random forest classifier.

The committee classifier can be useful if the individual classifiers have complementary strengths on the training set.
Otherwise, the prediction performance is dominated by one of the classifiers.
Note that the GMM-SVM with $k=2$ described in section \ref{subsection:gmmsvm} achieves better results than the GMM-SVM committee.
