\section{Task 3}
\begin{enumerate}
\item When using cross-validation, the data is split into smaller subsets,
  which are used to create classifiers. In this case, we use several test
  sets to calculate an error rate for each of their classifiers. We can
  then calculate the mean of these error rates, which gives us a better
  representation of the error rate per $k$.

  So the use of cross-validation is to get a view of our classification
  algorithm that represents it better than if we had used the data only
  once, to classify once and draw our conclusions from that single result.

  In learning algorithms other than KNN, cross-validation can also be used
  to tune internal parameters, because the different classifiers that
  result from the different test sets can be combined into one better
  classifier.

  Bootstrapping could improve the quality of the classifier, because the
  differences between classes are usually clearer after bootstrapping.
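
  To illustrate what bootstrapping itself does (a minimal Python sketch of
  the general resampling idea, not the MATLAB code used in this report; the
  function name and toy data are ours):

```python
import random

def bootstrap_sample(data, rng=random.Random(0)):
    """Draw a resample of `data` of the same size, with replacement."""
    return [rng.choice(data) for _ in data]

# Toy labeled elements: (label, feature value).
training = [("a", 1.0), ("a", 1.2), ("b", 3.1), ("b", 2.9)]
resample = bootstrap_sample(training)
# The resample has the same size as the original, but some elements
# may appear more than once and others not at all.
```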

\item
For ten-fold cross-validation, we created three extra functions.

The first function, `\texttt{createSets}', takes a set of labeled elements
and splits it into $N$ equally sized subsets, $N-1$ of which are training
sets and one of which is the test set. This function can be seen in
listing \ref{lst:createSets}.
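
A minimal Python sketch of what such a function does (the report's actual
implementation is in MATLAB; the name \texttt{create\_sets} here is
illustrative):

```python
def create_sets(elements, n):
    """Split `elements` into n roughly equally sized subsets (folds)."""
    # Striding by n distributes the elements evenly over the folds.
    return [elements[i::n] for i in range(n)]

folds = create_sets(list(range(10)), 5)
# Each fold in turn plays the role of test set, while the
# remaining n-1 folds together form the training set.
```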

The second function, `\texttt{nFold}' (listing \ref{lst:nFold}), iterates
over these test sets and calculates an error rate for each test set in the
sets returned by \texttt{createSets}. In addition, it does this for
several values of $k$, to show the differences between values of $k$ in
the KNN algorithm.
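
The same loop structure can be sketched in Python (an illustrative
stand-alone version with a simple one-dimensional nearest-neighbour vote;
all names and the toy data are ours, not the report's MATLAB code):

```python
from collections import Counter

def knn_predict(train, point, k):
    """Classify `point` by majority vote among its k nearest training points."""
    neighbours = sorted(train, key=lambda t: abs(t[0] - point))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

def n_fold_error_rates(data, n, ks):
    """Mean error rate per k over n cross-validation folds."""
    folds = [data[i::n] for i in range(n)]
    rates = {}
    for k in ks:
        errors = []
        for i, test in enumerate(folds):
            # All folds except fold i together form the training set.
            train = [x for j, f in enumerate(folds) if j != i for x in f]
            wrong = sum(knn_predict(train, x, k) != y for x, y in test)
            errors.append(wrong / len(test))
        rates[k] = sum(errors) / n
    return rates

# Toy 1-D data: class 0 clustered around 0, class 1 around 10.
data = [(v, 0) for v in (0.0, 0.5, 1.0, 1.5)] + \
       [(v, 1) for v in (9.0, 9.5, 10.0, 10.5)]
rates = n_fold_error_rates(data, 4, ks=[1, 3])
# On this well-separated data every fold classifies perfectly,
# so the mean error rate is 0 for both values of k.
```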

The third function, `\texttt{tenFold}' (listing \ref{lst:tenFold}), wraps
these functions together to show a representation of the mean error rates
over the test sets.
\begin{figure}
  \centering
  \subfloat[until K=50]{\label{fig:k50}\includegraphics[width=0.5\textwidth]{errorRatesTenFoldKTo50.pdf}}                
  \subfloat[until K=200]{\label{fig:k200}\includegraphics[width=.5\textwidth]{errorRatesTenFoldKTo200.pdf}}
  \caption{Graphs representing the error rate for several values of $K$. The
  difference between these graphs is the range of $K$ shown.}
  \label{fig:graphsKFold}
\end{figure}
Using these functions, we can generate the graphs in figure
\ref{fig:graphsKFold}. From these graphs, we can conclude that \textbf{the best
value of $K$ is around 11, because the mean error is lowest there}. Something
interesting can also be seen in figure \ref{fig:k200}: the tenfold
cross-validation algorithm holds out one tenth of the total data for testing,
which means that when $K$ approaches the size of the training set, the chance
of misclassifying approaches $0.5$, because (almost) all of the elements of the
training set are used to draw a conclusion.
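To make this concrete (under the assumption that the two classes are
(nearly) equally represented in the training set): a vote over the entire
training set always returns the same majority label, so roughly half of the
test elements are misclassified,
\begin{equation*}
K \approx |\text{training set}| \implies \text{error rate} \approx
\frac{n_{\text{minority}}}{n_{\text{minority}} + n_{\text{majority}}}
\approx \frac{1}{2}.
\end{equation*}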
\item
It would be smart to use the best value of $K$ in our final classifier. There
could of course be a slight difference between the effect of $K$ in a training
set of 1500 elements instead of 200, but this would only be the kind of slight
difference that can also be seen between, for example, $k=11$ and $k=12$ when
using a training set of 200 elements.

The point of doing cross-validation, in this case, is to calculate a good
value for $k$ with as little noise in the results as possible.

\item 
The test error is calculated by the following three lines:
\begin{lstlisting}
mat = confmat(Results, TestSet(:,3:4));
correct(k) = (mat(1,1) + mat(2,2)) / length(Results);
ErrorRates(k, i) = 1-correct(k);
\end{lstlisting}
So the test error is $1-\mathit{correct}$, where $\mathit{correct}$ is
calculated by dividing the number of correctly classified instances by the
number of elements in the test set.

The test error, calculated this way, represents the fraction of elements in
the test set that were wrongly classified.
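
The same computation can be sketched in Python (an illustrative equivalent
of the MATLAB lines above, assuming a $2 \times 2$ confusion matrix for two
classes, as \texttt{confmat} produces here; variable names are ours):

```python
def error_rate(results, truth):
    """Test error: fraction of elements whose predicted label differs
    from the true label (equivalent to 1 - accuracy)."""
    # Build a 2x2 confusion matrix for labels 0 and 1.
    mat = [[0, 0], [0, 0]]
    for predicted, actual in zip(results, truth):
        mat[actual][predicted] += 1
    # The diagonal holds the correctly classified instances.
    correct = (mat[0][0] + mat[1][1]) / len(results)
    return 1 - correct

rate = error_rate([0, 1, 1, 0], [0, 1, 0, 0])
# One of four elements is misclassified, so the error rate is 0.25.
```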

A validation set is used in several learning algorithms to tune the classifier
after training it, using a set separate from the training set. In the case of
KNN, a validation set would not make much difference, since the parameters of
KNN are just the elements of the training set.
\end{enumerate}
