\documentclass{article}
%\documentclass[journal]{IEEEtran}
%\documentclass{report}
%\documentclass{acta}

\usepackage{graphicx}

\begin{document}

\title{EFME LU Exercise 2\\Results and Discussion}

\author{Tuscher Michaela \and Geyer Lukas \and Winkler Gernot}

\maketitle

\begin{abstract}
For this exercise we implemented another feature-vector classification algorithm in MATLAB.
In the following we compare the new algorithm to the k-NN classifier from Exercise 1, present the results, and discuss them.
\end{abstract}

\section{Data Normalization}
The data set consists of 13 features whose value ranges differ widely: some lie below 1, while another spans values above 1000. This biases vector distances heavily towards features with larger ranges, so we decided that normalization was necessary. We mapped each feature into the interval $[0, 1]$ via $x' = (x - x_{\min}) / (x_{\max} - x_{\min})$.

The normalization has no noticeable influence on classification with the Mahalanobis distance, since that approach already accounts for the mean and variance of each feature.
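The min-max normalization described above can be sketched as follows. This is a Python/NumPy sketch rather than our actual MATLAB code, which is not reproduced here; the feature matrix \texttt{X} is an illustrative name.

```python
import numpy as np

def minmax_normalize(X):
    """Map each feature (column) of X into [0, 1] via (x - min) / (max - min)."""
    mins = X.min(axis=0)
    maxs = X.max(axis=0)
    return (X - mins) / (maxs - mins)

# Example: two features with very different value ranges.
X = np.array([[0.2, 1000.0],
              [0.8, 3000.0],
              [0.5, 2000.0]])
print(minmax_normalize(X))
```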

\section{k-NN classifier}
The algorithm had to be adapted to the new requirements: it now reads text files that define the separation into test and training data as well as feature selections specifying which features shall be used for classification.
It became apparent that the choice of features has an immense influence on the results, whereas the percentage of correct classifications changed only slightly between training set sizes of 10 and 20.
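The k-NN classifier itself can be summarized in a few lines. The following is a Python/NumPy sketch under the usual Euclidean-distance assumption; our actual implementation was in MATLAB, and all names here are illustrative.

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, x, k=3):
    """Classify x as the majority label among its k nearest training vectors
    (Euclidean distance on the selected features)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[nearest]).most_common(1)[0][0]

train_X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])
print(knn_classify(train_X, train_y, np.array([0.05, 0.0])))  # lies in the class-0 cluster
```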

For \texttt{BestSelection.txt} we made a rough approximation to the ideal feature selection: first we tried every feature by itself, took the one that delivered the best results, and then combined it with each of the remaining 12 features to estimate the best combination including the chosen feature. We decided that three features should be enough, since the rule of thumb from the lecture says that the number of features should not exceed a tenth of the number of training samples (which is in fact still a bit much, since we only used 10 or 20 feature vectors as training set).
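This rough approximation is a greedy forward selection. The following Python/NumPy sketch illustrates the idea under the simplifying assumption of a 1-NN classifier scored on a held-out test set; it is not our MATLAB code, and all names are illustrative.

```python
import numpy as np
from collections import Counter

def knn_accuracy(train_X, train_y, test_X, test_y, features, k=1):
    """Fraction of test vectors classified correctly using only the given features."""
    correct = 0
    for x, y in zip(test_X, test_y):
        d = np.linalg.norm(train_X[:, features] - x[features], axis=1)
        nearest = np.argsort(d)[:k]
        if Counter(train_y[nearest]).most_common(1)[0][0] == y:
            correct += 1
    return correct / len(test_y)

def greedy_selection(train_X, train_y, test_X, test_y, n_features=3):
    """Repeatedly add the single feature that most improves the accuracy,
    starting from the best feature on its own."""
    selected = []
    remaining = list(range(train_X.shape[1]))
    for _ in range(n_features):
        best = max(remaining, key=lambda f: knn_accuracy(
            train_X, train_y, test_X, test_y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 0 separates the classes, feature 1 is uninformative.
train_X = np.array([[0.0, 0.9], [0.1, 0.2], [1.0, 0.1], [0.9, 0.8]])
train_y = np.array([0, 0, 1, 1])
test_X = np.array([[0.05, 0.5], [0.95, 0.5]])
test_y = np.array([0, 1])
print(greedy_selection(train_X, train_y, test_X, test_y, n_features=2))
```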

We created two more feature selections: one by randomly selecting 4 features (\texttt{RandomSelection.txt}) and one that contains all features \linebreak (\texttt{AllSelection.txt}). The comparison with our \texttt{BestSelection.txt} showed that, surprisingly, using all features ($\approx 132/148$) is only marginally inferior to our selection ($\approx 139/148$), whereas the random selection ($\approx 98/148$) delivers considerably worse results.

Although the results are not that disappointing, with a misclassification probability of about 6 percent the algorithm is not suitable for real-life usage.


\section{Comparison to classification by Mahalanobis distance}
The classification utilizing the Mahalanobis distance is better than the k-NN classification in almost all cases, though not remarkably so. It is conspicuous that the result of our \texttt{BestSelection.txt} with a training set size of 20 is quite bad (110/118) relative to the k-NN results (between 110 and 114, depending on $k$), and it is even one classification worse than using all features (111/118).
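Classification by Mahalanobis distance assigns a vector to the class whose mean it is closest to under that class's covariance matrix. A minimal Python/NumPy sketch (again, not our MATLAB code; names are illustrative):

```python
import numpy as np

def mahalanobis_classify(x, means, covs):
    """Assign x to the class i with minimal squared Mahalanobis distance
    (x - mu_i)^T Sigma_i^{-1} (x - mu_i)."""
    d2 = [(x - mu) @ np.linalg.inv(S) @ (x - mu) for mu, S in zip(means, covs)]
    return int(np.argmin(d2))

means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2)]
print(mahalanobis_classify(np.array([0.5, 0.2]), means, covs))  # closer to class 0
```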


\section{Discriminant functions}
When we plotted the results of our calculation by hand against those of the MATLAB \texttt{classify} function, we discovered that they are very similar, especially for the quadratic discriminant functions, where each class has its own covariance matrix. The parameter we used for computing the quadratic function with \texttt{classify} is \texttt{'mahalanobis'}; however, we could also have chosen \texttt{'quadratic'}, because (at least in this case) the results are exactly the same. Compared to our calculation by hand, the only difference concerns the constants of the resulting function: \texttt{classify} yields 24.6250 for the constant, while we obtain 24.82773. The reason for this may be that in our calculation we assumed a prior probability of 0.5 for each class, while \texttt{classify} does not seem to include the priors in the calculation.
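For reference, the quadratic discriminant function with class-specific covariance matrices has the standard form
\[
g_i(\mathbf{x}) = -\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_i)^{T}\Sigma_i^{-1}(\mathbf{x} - \boldsymbol{\mu}_i) - \tfrac{1}{2}\ln\lvert\Sigma_i\rvert + \ln P(\omega_i).
\]
The prior $P(\omega_i)$ enters only through the constant term $\ln P(\omega_i)$, which is consistent with our observation that the hand calculation and \texttt{classify} differ only in their constants.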

The differences in the linear discriminant functions are bigger. We calculated the function using identity matrices as the covariance matrices of both classes; however, there does not seem to be a way to tell \texttt{classify} to do so. The result closest to ours was obtained with the parameter \texttt{'diagLinear'}, with which \texttt{classify} uses the diagonals of the covariance matrices to calculate the linear discriminant function. While the diagonal of the covariance matrix of class B would be the same as the identity matrix, this is not the case for class A. So the differences between the \texttt{classify} result and ours in the linear function are likely caused by this circumstance and, again, by the prior probabilities.
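Our identity-matrix assumption makes the discriminant linear: with $\Sigma_i = I$ for both classes the quadratic term $-\tfrac{1}{2}\mathbf{x}^{T}\mathbf{x}$ is the same for every class and cancels in the comparison, leaving
\[
g_i(\mathbf{x}) = \boldsymbol{\mu}_i^{T}\mathbf{x} - \tfrac{1}{2}\boldsymbol{\mu}_i^{T}\boldsymbol{\mu}_i + \ln P(\omega_i).
\]
For equal priors the resulting decision boundary is the perpendicular bisector of the line connecting the two class means.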

When plotting the test sets, the mean values, and the discriminant function, it is easy to see that the function cuts the connecting line between the two class means exactly in the middle.
If the identity matrix (or the diagonals of the covariance matrices) is used instead of the full individual covariance matrices, the result is a linear function; otherwise it is a quadratic function. This leads to the conclusion that with individual covariance matrices the classification is more precise.


\section{Conclusion}
Overall, classification by Mahalanobis distance performed slightly better than the k-NN classifier, although neither reached an error rate suitable for real-life usage. The choice of features proved far more important than the size of the training set: our rough selection of three features beat the random selection clearly and the use of all 13 features marginally. Finally, our discriminant functions calculated by hand agree with MATLAB's \texttt{classify} up to the constant terms, a difference we attribute to the treatment of the prior probabilities and of the covariance matrices.

\end{document}
