\documentclass{article}
%\documentclass[journal]{IEEEtran}
%\documentclass{report}
%\documentclass{acta}

\usepackage{graphicx}

\begin{document}

\title{EFME LU Exercise 1\\Results and Discussion}

\author{Tuscher Michaela \and Geyer Lukas \and Winkler Gernot}

\maketitle

\begin{abstract}
For this exercise we implemented two image-classification algorithms in MATLAB. In the following we present and discuss the results.
\end{abstract}

\section{Feature Extraction}

We chose to use three features: aspect ratio, form factor, and compactness. We selected these three because they are invariant to scale, rotation, and translation. We also wanted to keep the feature vector as short as possible, particularly with regard to k-NN performance, so we restricted ourselves to the features we expected to be most significant for classifying the images.
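
The exact feature definitions used in our MATLAB code are not reproduced here, but the following Python sketch illustrates one common textbook variant of each of the three features, computed from a binary object mask. The formulas (aspect ratio as the bounding-box side ratio, form factor as $4\pi A/P^2$, compactness as bounding-box extent) are assumptions for illustration, not necessarily the definitions we implemented.

```python
import numpy as np

def shape_features(mask):
    """Three scale/rotation/translation-tolerant shape features from a
    binary mask. These are common textbook variants; the definitions in
    the actual MATLAB code may differ in detail."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1           # bounding-box height
    w = xs.max() - xs.min() + 1           # bounding-box width
    aspect_ratio = min(h, w) / max(h, w)  # in (0, 1], 1 for a square box

    area = mask.sum()
    # crude perimeter estimate: object pixels with at least one
    # 4-connected background neighbour
    padded = np.pad(mask, 1)
    interior = (mask
                & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = area - interior.sum()

    form_factor = 4 * np.pi * area / perimeter**2  # 1 for an ideal circle
    compactness = area / (h * w)                   # bounding-box extent
    return aspect_ratio, form_factor, compactness
```

For a filled square, this yields an aspect ratio and compactness of exactly 1.0, with the form factor slightly below 1 due to the discrete perimeter estimate.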

\section{Classification by Threshold}

This algorithm was created by inspecting the feature vectors of our images (\texttt{main.m} writes all computed features to a \texttt{features.csv} file). It was easy to see that pencils have a far lower aspect ratio than the other classes: if the aspect ratio is below 0.2, the image must be a pencil. Apples are easy to distinguish by their compactness, which is very high (above 0.9) compared to the other classes. With these two categories filtered out, only cellular phone, tree, and bat remain. The feature vectors show that trees and bats have a low form factor (below 0.4), while cellphones lie above that value. The decision between tree and bat is tricky with our chosen features, since the min--max spans of the two classes overlap for all three features, so a more complex comparison was necessary. The aspect-ratio span of trees extends beyond that of bats, so if the aspect ratio is above 0.58 or below 0.32, the image must be a tree. The maximum form factor of trees is higher than that of bats, so a form factor above 0.29 indicates a tree. The maximum compactness of trees is likewise higher, so a compactness above 0.59 also indicates a tree. With these four decisions we could classify all images except one tree correctly, resulting in an overall success rate of 99/100.
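
The decision cascade above can be transcribed directly; the following Python sketch uses exactly the thresholds stated in the text (the function name and Python form are illustrative, our actual implementation is in MATLAB, and the thresholds are tuned to this specific dataset).

```python
def classify_by_threshold(aspect_ratio, form_factor, compactness):
    """Threshold cascade with the hand-tuned values from the text."""
    if aspect_ratio < 0.2:        # pencils: far lower aspect ratio
        return "pencil"
    if compactness > 0.9:         # apples: very high compactness
        return "apple"
    if form_factor >= 0.4:        # trees and bats stay below 0.4
        return "cellphone"
    # Only tree vs. bat remain; their feature spans overlap,
    # so three looser tests are combined:
    if aspect_ratio > 0.58 or aspect_ratio < 0.32:
        return "tree"
    if form_factor > 0.29:        # trees reach higher form factors
        return "tree"
    if compactness > 0.59:        # trees reach higher compactness
        return "tree"
    return "bat"
```

Note that the order of the tests matters: each rule is only reached once the earlier classes have been ruled out.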

\section{k-NN Classifier}
As suggested, we used leave-one-out cross-validation to validate our k-NN classifier. This approach is appropriate even though the dataset is quite small (20 images per class), since it uses all images as the training set, except of course for the image being tested, whose distance to itself would be zero and would distort the result. The training set of each class should be large relative to the value of $k$; otherwise too many feature vectors of other classes fall within the $k$ nearest neighbours. A larger amount of training data also refines the implicit decision boundary.
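
The validation procedure can be sketched as follows; this is an illustrative Python version (with a hypothetical function name and Euclidean distance assumed), not our MATLAB code. Each sample is classified using all remaining samples as the training set, and its own distance is masked out as described above.

```python
import numpy as np
from collections import Counter

def knn_loocv_accuracy(features, labels, k):
    """Leave-one-out cross-validation of a k-NN classifier.
    features: n x d array-like of feature vectors, labels: n class names."""
    features = np.asarray(features, dtype=float)
    n = len(labels)
    correct = 0
    for i in range(n):
        # Euclidean distance from sample i to every sample
        dists = np.linalg.norm(features - features[i], axis=1)
        dists[i] = np.inf                   # leave sample i out
        neighbours = np.argsort(dists)[:k]  # indices of the k nearest
        votes = Counter(labels[j] for j in neighbours)
        if votes.most_common(1)[0][0] == labels[i]:
            correct += 1
    return correct / n
```

Running this for a range of $k$ values reproduces the kind of error-vs-$k$ curve shown in Figure~\ref{fig:error}.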

\begin{figure}
    \centering
    \includegraphics[width=4.0in]{graph2}
    \caption{Classification Error}
    \label{fig:error}
\end{figure}

\subsection{Results}
The validation showed that the threshold classifier, whose thresholds were tuned manually to exactly the given data, delivers better results than the k-NN classifier on the same features. While the threshold classifier answers correctly for 99 of the 100 test images, the best k-NN result is 96/100, achieved only for $k = 1$, i.e.\ with plain nearest-neighbour classification. Other values of $k$ often yield only 92/100. For $k > 12$ the classification error increases almost monotonically (see Figure~\ref{fig:error}), most likely because the number of considered neighbours approaches or even exceeds the number of training images per class, leading to wrong results.

An interesting aspect is that the k-NN algorithm correctly classifies the image that is misclassified by our threshold algorithm, but only for $k = 2$.

The \texttt{main()} function of our MATLAB project accepts an optional argument specifying the upper end of the range over which $k$ is tested. The default value is 15, which should give a good overview of the results.

\section{Conclusion}

The validation results show that our threshold classification is superior to the k-NN classification on this particular image dataset. However, considering its strong dependence on the specific images used and the effort required to adapt it to new images, it is not reasonably practicable for most application scenarios.


\end{document}
