\documentclass{article}
%\documentclass[journal]{IEEEtran}
%\documentclass{report}
%\documentclass{acta}

\usepackage{graphicx}

\begin{document}

\title{EFME LU Exercise 3\\Results and Discussion}

\author{Tuscher Michaela \and Geyer Lukas \and Winkler Gernot}

\maketitle

\begin{abstract}
In this exercise we used three different classification algorithms to classify strokes: a k-NN classifier, a Mahalanobis distance classifier and a perceptron.
In the following sections we compare the algorithms' performances and discuss the results of the classifications.
\end{abstract}

\begin{table}
	\centering
	\begin{tabular}{r c c}
		Stroke type & 6 class & 2 class\\
		\hline
		Black lead & 1 & dry \\
		Black chalk & 2 & dry \\
		Paint Brush & 3 & wet \\
		Reed pen & 4 & wet \\
		Goose quill & 5 & wet \\
		Silver point & 6 & dry \\
	\end{tabular}
	\caption{class labels of stroke types}
	\label{tab:strokeClass}
\end{table}

\section{Dataset}
The dataset for this exercise consists of 155 strokes with 20 features per stroke, obtained by measuring the infrared reflection at different wavelengths. The first 10 features are mean values; the last 10 contain the corresponding standard deviations. The strokes can be classified either into 2 classes, wet strokes and dry strokes, or into 6 classes according to the stroke type, e.g. black lead or reed pen (see Table~\ref{tab:strokeClass}).

\section{Classifiers}


\subsection{k-NN Classifier}
As in the first two exercises, one of our classifiers is the k-NN classifier. It computes the distances between the tested feature vector and the feature vectors of the training set and outputs the class that occurs most often among the $k$ nearest training vectors.

Our program returns the results for $k = 1, 2, 3, 5, 10$.
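Our actual implementation is in Matlab; the voting scheme can be illustrated with the following Python sketch (function and variable names are ours and hypothetical, Euclidean distance assumed):

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k):
    """Classify feature vector x by majority vote among its k nearest training vectors."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to every training vector
    nearest = np.argsort(dists)[:k]              # indices of the k closest training vectors
    # the most frequent class label among the k nearest neighbours wins
    return Counter(y_train[nearest]).most_common(1)[0][0]
```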


\subsection{Mahalanobis Distance Classifier}
The Mahalanobis distance classifier is also known from exercise 2. It computes the mean vector and covariance matrix of each class from the training set. The Mahalanobis distances between the tested feature vector and the classes are then calculated, and the test vector is assigned to the class with the shortest Mahalanobis distance.
For the classification we used three different estimates of the covariance matrices:
\begin{enumerate}
	\item all classes have the same covariance matrix
	\item the covariance matrices are diagonal
	\item all classes have the same diagonal covariance matrix
\end{enumerate}
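The three cases can be sketched in Python as follows (our implementation is in Matlab; names are hypothetical, and the pooled covariance is computed as an unweighted average over the classes, which matches a weighted estimate only for equal class sizes):

```python
import numpy as np

def train_mahalanobis(X, y, variant="full"):
    """Estimate per-class means and covariances; variant selects one of the three cases."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    covs = {c: np.cov(X[y == c], rowvar=False) for c in classes}
    if variant in ("pooled", "pooled_diag"):
        pooled = sum(covs.values()) / len(classes)       # same covariance matrix for all classes
        covs = {c: pooled for c in classes}
    if variant in ("diag", "pooled_diag"):
        covs = {c: np.diag(np.diag(S)) for c, S in covs.items()}  # keep only the diagonal
    return means, covs

def mahalanobis_classify(x, means, covs):
    """Assign x to the class with the smallest squared Mahalanobis distance."""
    d2 = {c: (x - m) @ np.linalg.inv(covs[c]) @ (x - m) for c, m in means.items()}
    return min(d2, key=d2.get)
```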


\subsection{Perceptron}
The perceptron is the only classifier that was newly implemented for this exercise. It computes a weight vector from the training set; the feature vectors from the test set are then passed to the perceptron, which can distinguish between two classes. Since there is also a 6-class problem, a single perceptron is not sufficient. To solve this problem we trained six perceptrons, each testing the membership of one class against the five remaining classes. The perceptron with the highest output signal determines the classification.
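This one-against-the-rest scheme can be illustrated with the following Python sketch (our implementation is in Matlab; names, the learning rate and the epoch count are hypothetical, and labels are assumed to be encoded as $+1$/$-1$ for the binary training step):

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Train one binary perceptron; y must contain +1/-1 labels."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant 1 as bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # misclassified: move w towards the sample
                w += lr * yi * xi
    return w

def one_vs_rest_classify(x, weights):
    """Run all per-class perceptrons; the one with the highest output wins."""
    xb = np.append(x, 1.0)
    scores = {c: w @ xb for c, w in weights.items()}
    return max(scores, key=scores.get)
```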

\begin{table}
	\centering
	\begin{tabular}{r l}
		Filter name & included features\\
		\hline
		AllSelection & all (1 - 20) \\
		BestSelection 2 Classes & 1, 3, 16 \\
		BestSelection 6 Classes & 3, 7, 15, 17 \\
		BestSelection & 1, 3, 7, 15, 16, 17 \\
		RandomSelection & 2, 5, 8, 12 \\
	\end{tabular}
	\caption{used feature filters}
	\label{tab:featureFilters}
\end{table}

\section{Features}
We have 5 different feature selections so that the impact of the choice of features can be seen clearly (see Table~\ref{tab:featureFilters}). One selection consists of all features of the dataset, three contain the best selections of features, and for the last selection we took 4 randomly chosen features: 2, 5, 8, 12. For the best-feature selections we used sequential forward selection: we tested the classification with each single feature, took the best one, then combined it with each of the remaining 19 features, took the best pair, and so on.
We found that the choice of features is very important for all three classification methods, but especially for the k-NN classifier. For k-NN we wanted as few features as possible, because its performance normally decreases with an increasing number of features. We also found that the k-NN classifier in particular works better with different features for the 2-class and the 6-class classification. So for the classification into 2 classes we chose 3 features: 1, 3, 16.
As shown in Figure~\ref{fig:featureComp}, using these 3 features mostly gives better results than using all features in the 2-class classification, but they are not such a good choice for the 6-class classification. Therefore we chose different features for the classification into 6 classes: 3, 7, 15, 17. Feature number 3 is part of both selections because it is the best single feature, and since we used sequential forward selection it was always the first feature to be chosen. For the classification into 6 classes we had to use at least 4 features, because 3 were not enough to beat all features together. These 4 features are usually better than all features together, at least in the 6-class classification, but not with 2 classes. The last feature selection is a combination of the best features for 2 and 6 classes: 1, 3, 7, 15, 16, 17. This selection proves to be quite good for all algorithms and both class separations.
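The greedy procedure described above can be sketched as follows (a Python illustration of our Matlab code; the \texttt{score} callback, which runs a classification with the given feature subset and returns its accuracy, is a hypothetical name):

```python
def sequential_forward_selection(n_features, n_select, score):
    """Greedily add the feature that most improves the score until n_select are chosen."""
    selected = []
    remaining = list(range(n_features))
    while len(selected) < n_select:
        # try extending the current subset with each remaining feature
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Note that this is a greedy heuristic: once a feature is chosen it is never dropped again, so the result is not guaranteed to be the globally best subset.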


\section{Test and Training Sets}
To separate training and test set we chose a percentage of strokes to be used for training: 70\% for the classification into 2 classes and also 70\% for 6 classes. Our function randomly chooses strokes from each class according to this percentage, puts them into the training set and then runs the classification. This is repeated 5 times and the results of the classifications are stored, so in the end we have the results for 5 randomly chosen test and training sets. The final output is the average of the results over these 5 different separations.
We use this method because we want the classification results to be as independent as possible from the choice of training data.
With an optional parameter of the \texttt{main()} method, the seed for the random separation can be specified so that the results in this report are reproducible. All the results depicted in this report were achieved with seed = 0.1, if not stated otherwise.
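The per-class random separation and the averaging over repeated runs can be sketched in Python as follows (our implementation is in Matlab; function names and the \texttt{classify\_and\_score} callback, which trains on the masked strokes and returns the classification rate, are hypothetical):

```python
import numpy as np

def split_per_class(X, y, train_fraction, rng):
    """Randomly draw train_fraction of the strokes of each class for training."""
    train = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        rng.shuffle(idx)
        n_train = int(round(train_fraction * len(idx)))
        train[idx[:n_train]] = True        # mark the chosen strokes as training data
    return train

def averaged_accuracy(X, y, classify_and_score, train_fraction=0.7, runs=5, seed=0):
    """Average the classification rate over several random train/test separations."""
    rng = np.random.default_rng(seed)
    scores = [classify_and_score(X, y, split_per_class(X, y, train_fraction, rng))
              for _ in range(runs)]
    return sum(scores) / runs
```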

\begin{figure}
    \centering
    \includegraphics[width=4.25in]{Bilder/FeatureComparison}
    \caption{impact of feature selection}
    \label{fig:featureComp}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=4.25in]{Bilder/MethodComparison}
    \caption{impact of chosen algorithm parameter}
    \label{fig:methodComp}
\end{figure}

\section{Classification Results}
Figures~\ref{fig:featureComp} and \ref{fig:methodComp} provide an overview of the results of the algorithms. Note that the y-axis starts at 40\%, not at 0\%.

Figure~\ref{fig:featureComp} depicts the influence of the different feature selections on the classification performance; since k-NN and the Mahalanobis classifier both use multiple methods (five different $k$ and three different covariance matrix assumptions), the results of the different methods are averaged for these two algorithms.

Figure~\ref{fig:methodComp} shows the influence of the chosen method on the classification performance of k-NN and the Mahalanobis classifier. For the 2-class problem the feature filter \texttt{BestSelection 2 Classes} was used, and for the 6-class problem the \texttt{BestSelection 6 Classes} filter.

\subsection{k-NN Classifier}

Our k-NN classification gives very good results (above 95\%) for the 2-class problem, but considerably worse results (around 70\% on average) when classifying into 6 classes. As the following figures show, neither $k$ nor the feature subselection has much impact on the results; surprisingly, selecting all features differs very little from our best subselections. k-NN is the most robust of the three algorithms against the feature selection.

\begin{figure}
    \centering
    \includegraphics[width=4.78in]{Bilder/kNN2class}
    \caption{k-NN results for 2 classes with different feature selections}
    \label{fig:kNN2class}
\end{figure}

As seen in Figure~\ref{fig:kNN2class}, the classification rate is similar for all combinations of $k$ and feature selection in the 2-class problem, and the results are good in this case.


\begin{figure}
    \centering
    \includegraphics[width=4.78in]{Bilder/kNN6class}
    \caption{k-NN results for 6 classes with different feature selections}
    \label{fig:kNN6class}
\end{figure}

In the second figure for k-NN (Figure~\ref{fig:kNN6class}) we see that the k-NN classifier gives worse results for the 6-class problem. As in the 2-class problem, the different $k$ values and feature selections do not lead to very different classification rates.

An interesting observation, noticeable especially in the 6-class problem (Figure~\ref{fig:kNN6class}), is that the results seem to converge towards a certain classification rate with increasing $k$. With \texttt{RandomSelection} this value is approached from below, while with the other (better) feature selections the results decrease, but again towards this value. The reason might be that $k = 10$ exceeds the size of some class training sets, leading to more equalized classifications.

\subsection{Mahalanobis Distance Classifier}
As described above, we used three different estimates of the covariance matrices for the classification, so we have three classification results per feature selection. The average results for each covariance matrix estimate are shown in the lower part of figure 1 of the Matlab program. Looking at the average results, there does not seem to be a big difference between the performances of the different covariance matrix cases. As shown in Figure~\ref{fig:mahalNoSeed} and Figure~\ref{fig:mahalSeed1}, no single covariance matrix case always leads to better results than the other two. Figure~\ref{fig:mahalNoSeed} shows a classification with seed 0.1, where the diagonal covariance matrices (in green) give worse results than the other two cases in the 6-class classification. Figure~\ref{fig:mahalSeed1} shows a classification with seed 1, where the results for 6 classes are exactly the opposite.

\begin{figure}
    \centering
    \includegraphics[width=4.78in]{Bilder/mahalNoSeed}
    \caption{Mahalanobis classification with seed 0.1}
    \label{fig:mahalNoSeed}
    \includegraphics[width=4.78in]{Bilder/mahalSeed1}
    \caption{Mahalanobis classification with seed 1}
    \label{fig:mahalSeed1}
\end{figure}

However, the cases in which the covariance matrices of all classes are the same (in blue) and the same and diagonal (in red) do seem to provide similar results, which can be seen especially in the 6-class classification; when classifying into 2 classes, all three cases lead to very similar results.
Concerning the different feature selections, when classifying into 2 classes \texttt{BestSelection} and \texttt{BestSelection 2 Classes} generally provide the best results. With 6 classes the best feature combinations are \texttt{BestSelection}, \texttt{BestSelection 6 Classes} and also \texttt{AllSelection}. A classification with the features from \texttt{RandomSelection} provides on average the worst results, but interestingly, with \texttt{RandomSelection} the results are mostly very good in the case where all covariance matrices are the same. This is shown in Figure~\ref{fig:randomSelectionSeed1}, where a seed of 1 was used.

\begin{figure}
    \centering
    \includegraphics[width=4.78in]{Bilder/randomSelectionSeed1}
    \caption{Mahalanobis classification for feature set RandomSelection with seed 1}
    \label{fig:randomSelectionSeed1}
\end{figure}

There it achieves over 94\% correct classifications, whereas the other covariance matrix cases only reach 67\%--70\%. For the other feature selections the different covariance matrix cases perform about equally well.
We conclude that in general it does not seem to matter which covariance matrix is chosen for the classification into 2 classes. For the classification into 6 classes, sometimes the diagonal covariance matrices are best and sometimes the other two cases. All in all, the classification results for 2 classes are always far better than for 6 classes: for 6 classes the best results are mostly around 70\%, and in the worst case even under 50\%, whereas for 2 classes the best results are always over 95\% (up to 99\%) and the worst results are mostly around 70\%. This means the best results of the 6-class classification are roughly on the level of the worst results of the 2-class classification.



\subsection{Perceptron}
The results for the perceptron shown in Figure~\ref{fig:featureComp} have a very characteristic pattern for the 2-class problem: with the feature selections \texttt{AllSelection}, \texttt{BestSelection 2 Classes} and \texttt{BestSelection}, the results are very good. In fact, with seed = 0.1 as used in the figure, the result with \texttt{BestSelection 2 Classes} and \texttt{BestSelection} is, at $99.58\%$, the best of the three algorithms (for details about the filters see Table~\ref{tab:featureFilters}).

On the contrary, \texttt{BestSelection 6 Classes} and \texttt{RandomSelection} deliver results of about $50\%$, which is much worse than the performance of the other algorithms. None of the other algorithms showed a comparable dependency on the selected features.

Regarding the 6-class problem, the perceptron shows its inability to find the correct model for the data, due to its restriction to linear decision boundaries. Because of this underfitting the results are only slightly above $50\%$; \texttt{RandomSelection} even leads to a result of $34.69\%$, the worst result of the three algorithms. (Even though only the values for seed = 0.1 are reported in this paper, we tested repeatedly with random training/test set separations to make sure that the results with seed = 0.1 are representative.)

\subsection{Comparison}
Figure~\ref{fig:total2class} and Figure~\ref{fig:total6class} show the total results of all classifiers and all feature selections.

\begin{figure}
    \centering
    \includegraphics[width=4.78in]{Bilder/total2class}
    \caption{Total results for classification in 2 classes}
    \label{fig:total2class}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=4.78in]{Bilder/total6class}
    \caption{Total results for classification in 6 classes}
    \label{fig:total6class}
\end{figure}

In general it becomes clear that all classification methods provide much better results for the classification into 2 classes than into 6 classes. The k-NN classifier seems to be a very good choice for both the 2-class and the 6-class classification, because it almost always leads to the best classification results compared to the other classifiers, no matter which features are chosen. However, with the right choice of features the perceptron gets even better results than the k-NN classifier, up to 99.6\% correct classifications, at least when classifying into 2 classes; this is the best classification result we got with any classifier. Unfortunately, the perceptron seems to be very dependent on the right choice of features: even with 2 classes its results for \texttt{RandomSelection} are very poor, especially compared to the other classifiers, and it performs much worse than the others when classifying into 6 classes. The Mahalanobis distance classifier seems to provide results of similar quality to the k-NN classifier, though even its best case always seems to be slightly worse than the best k-NN case. Also, the cases with diagonal covariance matrices (mahal2 in Figure~\ref{fig:total2class} and Figure~\ref{fig:total6class}) and with identical diagonal covariance matrices (mahal3) seem to have problems with some feature selections, such as \texttt{RandomSelection}, where they perform much worse than k-NN and the Mahalanobis case with identical covariance matrices for all classes (mahal1).

Concerning the feature selections, all classifiers provide worse results for \texttt{RandomSelection} than for all other selections, no matter whether the classification is into 2 or 6 classes. When classifying into 2 classes, the best feature selections for all classifiers are \texttt{BestSelection 2 Classes}, with results between 97.5\% and 99.6\%, and \texttt{BestSelection}, with results between 97.9\% and 99.6\%. The best results for the classification into 6 classes are generally obtained with \texttt{BestSelection}, with results between 55.5\% and 75.5\%.


\section{Conclusion}
The three algorithms that we implemented in the three exercises all have their strengths and weaknesses. While the 2-class problem can be solved with results of nearly equal quality by all three algorithms, the 6-class problem reveals that the perceptron is too simple for this sort of problem. On the other hand, its simplicity can also be seen as an advantage: perceptrons can be combined to produce better results.

The Mahalanobis classifier provides consistently acceptable results, but since it needs to calculate the covariance matrices, its training requires a bit more effort. The perceptron also needs training calculations, but unlike the Mahalanobis classifier, its existing weight vector can be reused and improved when a new training vector is added to the existing ones.

k-NN does not need any precomputation, and it is also easy to add a new training vector to the training dataset, but it has the disadvantage that the whole training dataset has to be stored for classification purposes (the perceptron only needs the weight vectors, and the Mahalanobis classifier the covariance matrices and the mean vectors).

Whereas the 2-class problem can be solved satisfactorily by all three classifiers, which might therefore also be used in real life (depending, of course, on the required correct classification rate), the 6-class problem is obviously too challenging for the algorithms; the results in this case are not good enough to fulfill the needs of most, if not all, real-life problems.


\end{document}
