%%% lab02.tex --- 
%% Author: xtroce@bathhouse
%% Version: $Id: lab02.tex,v 0.0 2012/09/13 11:18:08 xtroce Exp$

\documentclass[12pt,a4paper]{article}
\special{papersize=210mm,297mm}

%% use a better geometry for A4 paper
\usepackage[a4paper,top=3cm,bottom=3cm,left=3cm,right=3cm]{geometry}

%% package to use graphics in latex
\usepackage{graphicx}

%% package to import other tex files into the current file
\usepackage{import}

%%package for drawing IPA sound font
%\usepackage{tipa}

%%package for multiple figures in one figure
\usepackage{subfig}

%% package for spanning multiple rows or columns in tables
%\usepackage{multirow}

%% package to include matlab code into latex
%% to include use \lstinputlisting{MATLABFILE}
%% to include a whole file
\usepackage[framed,numbered,autolinebreaks,useliterate]{mcode}

%% Package to colorize table output
%\usepackage{colortbl}

%% To write Umlaute in latex without encoding
%\usepackage[ansinew]{inputenc}

%% german language package
%\usepackage{ngerman}

%\usepackage[debugshow,final]{graphics}
%\usepackage{setspace}

%\usepackage{listings}

%% for new packages use the ~/texmf/tex folder
%% run texhash afterwards


\title{lab02}
\author{Sebastian Dr\"oppelmann 5783453\\ Maarten de Waard 5894883}
\date{\today}

%\doublespacing
\bibliographystyle{plain}%Choose a bibliographic style

\begin{document}

%%Titlepage
\begin{titlepage}
\maketitle
\begin{center}
%\includegraphics[scale=0.2]{logo_uva.ps}
\end{center}
\thispagestyle{empty}
\end{titlepage}

%%TOC
%\tableofcontents
%\newpage

%%%%##########################################################################

\section{Introduction}
The tasks are solved in separate functions that are combined by a
main function. The code listings can be found in Appendix
\ref{sec:mainfunc}.

\section{Task 1}
We implemented a function, `\texttt{sampleSet}', that splits a data
set into a training set and a test set, using a given percentage of
the data as the training set. Both sets are then returned by the
function. This function is called on both matrices included in the
\emph{twoclass.mat} file.
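
A minimal sketch of such a split function (the actual listing is in
the appendix; variable names here are illustrative):

\begin{lstlisting}
% Sketch of a percentage split: shuffle the rows, then cut the
% set in two. pct is the training fraction, e.g. 0.8 for 80%.
function [train, test] = sampleSet(data, pct)
    n = size(data, 1);
    idx = randperm(n);            % random permutation of row indices
    nTrain = round(pct * n);      % number of training samples
    train = data(idx(1:nTrain), :);
    test  = data(idx(nTrain+1:end), :);
end
\end{lstlisting}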

Then we combine these two data sets into one combined training set
and one test set and shuffle the newly created sets. For this we use
a function called `\texttt{combineSets}' that combines all the arrays
in a \emph{MATLAB} structure and adds a unique label to be used by the
\emph{Netlab Toolkit} to calculate the confusion matrix. This label is
a one-of-K binary vector for each data point: the entry of the class
the point belongs to is set to one and all other entries are zero.
For example, in a situation with two classes, class A would be
represented by (1 0), whereas class B's vector would be (0 1).
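
Such one-of-K labels can be built as follows (a sketch;
\texttt{classIdx} is an assumed vector of class indices from 1 to the
number of classes):

\begin{lstlisting}
% Sketch: build one-of-K target vectors from class indices.
nClasses = 2;
n = numel(classIdx);
labels = zeros(n, nClasses);
labels(sub2ind([n nClasses], (1:n)', classIdx(:))) = 1;
\end{lstlisting}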



The output plot of the two classes from the trainset can be seen in
Figure \ref{fig:trainset}.

\begin{figure}[h!t]
\centering
\includegraphics[scale=0.8]{trainingset.png}
\caption{The two classes in the training set}
\label{fig:trainset}
\end{figure}

\section{Task 2}

\begin{figure}[h!t]
\centering
\includegraphics[scale=0.8]{errorvalues.png}
\caption{The error values for k from 1 to 100}
\label{fig:errorvalue}
\end{figure}

For the second task we iterate over the values of k we want to test,
from 1 to 100. For each k we initialize the classifier with the
training set and the value of k, and then test it on the test
set. For each tested value of k the confusion matrix was calculated
using the `\texttt{confmat}' function provided by the \emph{Netlab
Toolkit}, from which we extract the error value for the given k. For
k=1 the error value is 14.4\%. Looking at the evaluation over
different values of k, the error value is quite high for very small
k. As k grows, the error value goes down until it reaches its minimum
around k=30. The error then oscillates while rising slightly towards
k=100. The plot of the error values for all 100 values of k can be
seen in Figure \ref{fig:errorvalue}.
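
The sweep over k can be sketched as follows, assuming the Netlab
functions \texttt{knn}, \texttt{knnfwd} and \texttt{confmat}
(variable names are illustrative; the actual code is listed in the
appendix):

\begin{lstlisting}
% Sketch of the sweep over k = 1..100 using Netlab.
errors = zeros(1, 100);
for k = 1:100
    net = knn(size(trainIn, 2), size(trainLab, 2), k, trainIn, trainLab);
    votes = knnfwd(net, testIn);          % class votes per test point
    [C, rate] = confmat(votes, testLab);  % rate(1) = percentage correct
    errors(k) = 100 - rate(1);            % error value in percent
end
plot(1:100, errors);
\end{lstlisting}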

The minimal error value of 9.6\% is reached at k=28 and holds until k
reaches 31, so these values are good candidates for k. Since we want
to choose k as small as possible (larger values of k go hand in hand
with longer calculation times), we would choose k=28, the smallest k
giving the best error value. It would also be possible to accept a
slightly higher error value with a much smaller k, which is
interesting in situations where the amount of data is much larger.
So, depending on the data size and the acceptable error, the chosen
value lies between k=9 and k=28, where k=28 gives the lowest error
value and k=9 is the first k with an error value below 11\%.\\
Furthermore, the plot shows that for small k the error value is high
and falls quite fast; it then flattens out, and for k close to 100
the error value goes up again because averaging over too many
neighbours over-smooths the decision boundary.

\input{maarten.tex}

\appendix
\section{Code listings}
\begin{figure}
\lstinputlisting{../sampleSet.m}
\caption{Split function}
\label{lst:splitfunc}
\end{figure}

\begin{figure}
\lstinputlisting{../combineSets.m}
\caption{Combine function}
\label{lst:combinefunc}
\end{figure}

\label{sec:mainfunc}
\begin{figure}
\lstinputlisting{../traintest.m}
\caption{The main function}
\label{lst:mainfunc}
\end{figure}

\begin{figure}
\lstinputlisting{../createSets.m}
\caption{createSets function}
\label{lst:createSets}
\end{figure}

\begin{figure}
\lstinputlisting{../nFold.m}
\caption{nFold function}
\label{lst:nFold}
\end{figure}

\begin{figure}
\lstinputlisting{../tenFold.m}
\caption{tenFold function}
\label{lst:tenFold}
\end{figure}

%%%%##########################################################################
\bibliography{lab02}
\end{document}
