\documentclass[12pt]{article} % Default font size is 12pt, it can be changed here

\usepackage{geometry} % Required to change the page size to A4
\geometry{a4paper} % Set the page size to be A4 as opposed to the default US Letter

\usepackage{graphicx} % Required for including pictures

\usepackage{float} % Allows putting an [H] in \begin{figure} to specify the exact location of the figure
\usepackage{wrapfig} % Allows in-line images such as the example fish picture

\usepackage{amsmath}
\usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template

\linespread{1.2} % Line spacing

%\setlength\parindent{0pt} % Uncomment to remove all indentation from paragraphs

\graphicspath{{Pictures/}} % Specifies the directory where pictures are stored

\begin{document}


\begin{titlepage}

\newcommand{\HRule}{\rule{\linewidth}{0.5mm}} % Defines a new command for the horizontal lines, change thickness here

\center % Center everything on the page

\textsc{\LARGE UNSW}\\[1.5cm] % Name of your university/college
% \textsc{\Large }\\[0.5cm] % Major heading such as course name
% \textsc{\large Minor Heading}\\[0.5cm] % Minor heading such as course title

\HRule \\[0.4cm]
{ \huge \bfseries k-Nearest Neighbour }\\[0.4cm] % Title of your document
\HRule \\[1.5cm]

\begin{minipage}{0.9\textwidth}
\begin{flushleft} \large
\emph{Authors:}
\linebreak
 Huy Nguyen  (3430069) \linebreak
Armin Chitizadeh  

\end{flushleft}
\end{minipage}


{\large \today}\\[3cm] % Date, change the \today to a set date if you want to be precise

%\includegraphics{Logo}\\[1cm] % Include a department/university logo - this will require the graphicx package

\vfill % Fill the rest of the page with whitespace

\end{titlepage}


%	TABLE OF CONTENTS


\tableofcontents 

\newpage 


\section{Introduction} 

Instance-based learning is a family of lazy learning algorithms in which all of the work is deferred until the time comes to classify an instance. When the prediction for each new instance is made from one or more of its nearest neighbours, the method is called k-nearest neighbour.

The task was to implement the k-Nearest Neighbour (kNN) algorithm for both classification and numeric prediction.

Our program handles both classification and numeric prediction, choosing between them automatically based on the dataset provided.
We also evaluated and tested the program against the two provided datasets, ionosphere and autos.
One of the extra features that distinguishes our program is its ability to automatically find the value of k that gives the least prediction error.
We also ran an extra experiment that tests the effect of the number of folds used in cross-validation on the final choice of the best number of neighbours.




\newpage  

\section{Implementation} 

We used Java as our programming language, relying only on the standard library. SVN was used for version control, and \LaTeX{} for documentation.

\subsection{Loading data} 

\begin{flushleft}

To keep our implementation simple, we chose to load only ARFF files. The program parses the ARFF file and keeps track of each attribute in the dataset. For each attribute, it records the minimum and maximum value. It also marks attributes with constant data and flags them for pruning.

Pruned attributes are excluded from the distance calculation, improving the performance of our program. A continuous attribute is a pruning candidate when its minimum and maximum are equal; a categorical attribute is pruned when it contains only one category.
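As a rough sketch of this pruning rule in Java (the class and method names here are illustrative, not taken from our actual implementation):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the pruning rule: an attribute whose observed
// values never vary carries no information for the distance calculation.
public class AttributePruning {

    // A continuous attribute is a pruning candidate when min == max.
    public static boolean pruneContinuous(double min, double max) {
        return min == max;
    }

    // A categorical attribute is pruned when only one category occurs.
    public static boolean pruneCategorical(List<String> values) {
        Set<String> distinct = new HashSet<>(values);
        return distinct.size() <= 1;
    }
}
```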

\end{flushleft}

\subsection{Prediction} 


\subsubsection{Nearest Neighbours}

\begin{flushleft}

The program uses the \textbf{Euclidean distance} to find the instances nearest to the target instance. We evaluate all the data points \textbf{linearly}, rank them from closest to furthest, and take the top k points. Although this naive method is slow, it was the most straightforward for us to implement. An alternative is a \textbf{kd-tree}, which would improve performance considerably; however, we assume that the datasets are not large (fewer than one million data points).

Since each attribute has its own scale, the program normalises all attributes: it finds the maximum and minimum values for each attribute and scales its values into the range $[0,1]$. A categorical attribute contributes a distance of 0 for a match and 1 for a mismatch. For datasets with missing attribute values (marked as '?'), such as autos.arff, our program simply assigns the maximum distance of 1.
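A minimal Java sketch of this per-attribute distance, assuming instances are stored as string arrays and the attribute metadata (numeric flag, minimum, maximum) has been precomputed during loading; the names are illustrative, not our actual code:

```java
// Normalised Euclidean distance between two instances. numeric[j] says
// whether attribute j is numeric; min/max hold the per-attribute range.
public class Knn {

    public static double distance(String[] x, String[] y, boolean[] numeric,
                                  double[] min, double[] max) {
        double sum = 0.0;
        for (int j = 0; j < x.length; j++) {
            double d;
            if (x[j].equals("?") || y[j].equals("?")) {
                d = 1.0;                                 // missing value: maximum distance
            } else if (!numeric[j]) {
                d = x[j].equals(y[j]) ? 0.0 : 1.0;       // categorical: match / mismatch
            } else {
                double range = max[j] - min[j];          // scale numeric values into [0, 1]
                double a = range == 0 ? 0 : (Double.parseDouble(x[j]) - min[j]) / range;
                double b = range == 0 ? 0 : (Double.parseDouble(y[j]) - min[j]) / range;
                d = a - b;
            }
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```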

\end{flushleft}

\subsubsection{Predicting Value}

\begin{flushleft}

We deliberately implemented two ways of predicting, weighted and non-weighted. Weighted prediction is biased towards closer example data points, while non-weighted prediction treats all k data points equally. The program lets the user choose which type of prediction suits their dataset.

For non-weighted prediction, categorical prediction simply picks the majority class among the k neighbours, and numeric prediction (regression) takes the mean of the k data points.
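These two non-weighted rules can be sketched as follows (illustrative names, not our actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Non-weighted prediction over the k nearest neighbours: majority vote
// for categorical targets, arithmetic mean for numeric targets.
public class UnweightedPrediction {

    // Categorical: return the most frequent label among the neighbours.
    public static String majority(String[] labels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String l : labels) counts.merge(l, 1, Integer::sum);
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }

    // Numeric: return the mean of the neighbours' target values.
    public static double mean(double[] values) {
        double sum = 0.0;
        for (double v : values) sum += v;
        return sum / values.length;
    }
}
```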

Numeric weighted prediction uses the weight $w_i = 1/d(x_q,x_i)^2$ to increase the influence of points closer to the target point. With distance weighting, even $k=n$ gives different values in different parts of the dataset; this is referred to as a 'global model', in contrast to the 'local model' obtained with smaller k (Machine Learning, Peter Flach)[2].
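A minimal sketch of this inverse-squared-distance weighting (illustrative names; treating an exact match, $d=0$, as dominating the prediction is an assumption on our part, since dividing by zero is otherwise undefined):

```java
// Distance-weighted numeric prediction: each of the k neighbours
// contributes its target value weighted by w_i = 1 / d(x_q, x_i)^2.
public class WeightedRegression {

    // values[i] is neighbour i's target value, dists[i] its distance to the query.
    public static double predict(double[] values, double[] dists) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < values.length; i++) {
            if (dists[i] == 0.0) return values[i];   // exact match dominates (assumption)
            double w = 1.0 / (dists[i] * dists[i]);  // w_i = 1 / d^2
            num += w * values[i];
            den += w;
        }
        return num / den;                            // weighted mean of the neighbours
    }
}
```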

Categorical weighted prediction uses a similar function proposed by Dudani[1]:

\[
    w'_i = 
\begin{cases}
    \frac{d(x_q,x_k^{NN}) - d(x_q,x_i^{NN})}{d(x_q,x_k^{NN}) - d(x_q,x_1^{NN})}, & \text{if } d(x_q,x_k^{NN}) \neq d(x_q,x_1^{NN}) \\
    1, & \text{if } d(x_q,x_k^{NN}) = d(x_q,x_1^{NN})
\end{cases}
\]

This means that the closer a neighbouring point is to the query point, the higher its weight. The winning category is the one with the highest total weight. We chose this function because it is the simplest weighting function for categorical data.
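Dudani's vote can be sketched as follows, assuming the k neighbours arrive already sorted by distance, so the first entry is the nearest and the last the furthest (illustrative names, not our actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Dudani's distance-weighted vote: each neighbour's weight falls
// linearly from 1 (at the nearest neighbour) to 0 (at the furthest).
public class DudaniVote {

    // labels[i] / dists[i]: label and query distance of the i-th nearest neighbour.
    public static String predict(String[] labels, double[] dists) {
        int k = dists.length;
        double dNearest = dists[0], dFurthest = dists[k - 1];
        Map<String, Double> totals = new HashMap<>();
        for (int i = 0; i < k; i++) {
            double w = (dFurthest == dNearest)
                    ? 1.0                                          // all distances equal: uniform vote
                    : (dFurthest - dists[i]) / (dFurthest - dNearest);
            totals.merge(labels[i], w, Double::sum);
        }
        // Winning category: highest total weight.
        return totals.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }
}
```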

\end{flushleft}

\subsection{Evaluation} 

\begin{flushleft}

Our program allows the user to evaluate their dataset to find the best k value to use in subsequent predictions. Each attribute has its own best k, so the program lets the user repeatedly change the target attribute of the calculation.

The best k is, in short, the k value that yields the smallest error. To find it, we use cross-validation: the dataset is divided into two subsets, a test set and a training set. For each point in the test set, we use the training set to predict its value for a given k and measure the error against the true test value. The program tests a range of k values (specified by the user) to find the k with the smallest error. It also allows the user to choose between weighted and non-weighted prediction.
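The search for the best k can be sketched as a simple loop over the candidate values; the evaluator interface below hides the cross-validation split and prediction and is illustrative, not our actual code. Breaking ties in favour of the smallest k is an assumption of this sketch:

```java
import java.util.function.IntToDoubleFunction;

// Best-k search: try every candidate k in [kMin, kMax] and keep the one
// with the smallest cross-validated error.
public class BestK {

    // errorForK maps a candidate k to its cross-validated error.
    public static int findBestK(int kMin, int kMax, IntToDoubleFunction errorForK) {
        int bestK = kMin;
        double bestErr = Double.POSITIVE_INFINITY;
        for (int k = kMin; k <= kMax; k++) {
            double err = errorForK.applyAsDouble(k);
            if (err < bestErr) {           // strict '<' keeps the smallest k on ties
                bestErr = err;
                bestK = k;
            }
        }
        return bestK;
    }
}
```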

The system uses one of two error measures depending on the type of the class attribute to be predicted, categorical (classification) or numeric (regression). Both are explained in detail below:

\end{flushleft}

\subsubsection{Classification}
Choosing how to evaluate the results is a function of cost: for each dataset and problem, we need to take into account the cost of making a wrong decision. For this project, however, we decided to use precision, which indicates the correctness of our predictions; more formally, $TP/(TP+FP)$.

\subsubsection{Numeric}
The same procedure is used for numeric prediction, but the evaluation is not as simple as the presence or absence of an error: errors come in different sizes.
For our system we used the relative squared error, which takes the total squared error and normalises it by the total squared error of the default predictor (Data Mining, Ian H. Witten)[3].
The main reason for this decision is that the normalisation makes the measure suitable for data with different ranges and scales.
We used the Euclidean distance to measure the difference between the target and predicted values, which makes sense for the provided data.
Considering all of the above, according to Eibe Frank, ``in most practical situations the best numerical prediction method is still the best no matter which error measure is used'' (p.~182)[3].
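A minimal sketch of the relative squared error, with the mean of the actual values standing in for the default predictor (illustrative names, not our actual code):

```java
// Relative squared error: the sum of squared prediction errors
// normalised by the squared error of the default predictor, i.e. a
// predictor that always outputs the mean of the actual values.
public class RelativeSquaredError {

    public static double rse(double[] actual, double[] predicted) {
        double mean = 0.0;
        for (double a : actual) mean += a;
        mean /= actual.length;                       // default predictor: the mean
        double errSum = 0.0, baseSum = 0.0;
        for (int i = 0; i < actual.length; i++) {
            errSum  += Math.pow(predicted[i] - actual[i], 2);
            baseSum += Math.pow(mean - actual[i], 2);
        }
        return errSum / baseSum;                     // < 1 means better than the default
    }
}
```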

\newpage  

\section{Experimentation} 

\subsection{ionosphere.arff}

The following are the results of evaluating the best k for the \textit{class} attribute in the dataset with distance-weighted prediction:

    \begin{tabular}{| l | l |}
    \hline
    k & Error \\ \hline
 2 & 0.151429 \\ \hline
 3 & 0.102857 \\ \hline
 4 & 0.097143 \\ \hline
 5 & 0.100000 \\ \hline
 6 & 0.097143 \\ \hline
 7 & 0.097143 \\ \hline
 8 & 0.097143 \\ \hline
 9 & 0.105714 \\ \hline
10 & 0.114286 \\ \hline
    \end{tabular}
    \\
    
    As the table shows, the best k here is 4 (the ties at $k=6,7,8$ are broken in favour of the smallest k).
    
\subsection{autos.arff}

The following are the results of evaluating the best k for the \textit{price} attribute in the dataset with distance-weighted prediction:

    \begin{tabular}{| l | l |}
    \hline
    k & Error \\ \hline
 2 & 0.097693 \\ \hline
 3 & 0.093173 \\ \hline
 4 & 0.089324 \\ \hline
 5 & 0.088250 \\ \hline
 6 & 0.086489 \\ \hline
 7 & 0.086728 \\ \hline
 8 & 0.086262 \\ \hline
 9 & 0.087071 \\ \hline
10 & 0.088484 \\ \hline
    \end{tabular}
    \\
    
    As the table shows, the best k here is 8.    


\subsection{Others}

As discussed above, the program can find the number of neighbours that gives the least error. However, this depends on some parameters, one of which is the number of folds used for evaluation.
An extra experiment was therefore conducted to examine the effect of different numbers of folds in cross-validation on the choice of the best number of neighbours.
Thirty-five different datasets, taken from the ``seasr'' website, were chosen for the experiment.
The results and graphs are attached in the appendix.
From the results it can be seen that changing the number of folds has some minor effect on the chosen best k, but in the majority of cases the choice remains the same. Since one purpose of choosing the best number of neighbours is to learn k for similar data, these minor changes have no effect on the majority class.




\newpage  
\section{Conclusion} 


In conclusion, kNN is a type of instance-based, or lazy, learning: no explicit model of the data is built, and each prediction is instead based on the nearest neighbours of the instance.
The program we implemented can load datasets in the ARFF format and prune their attributes for efficient prediction. It then predicts, and evaluates its predictions using cross-validation. It can use both distance-weighted and non-distance-weighted nearest-neighbour prediction.
Finally, it finds the number of neighbours that gives the least error.
Our experiments also show that the number of folds has no perceptible effect on this choice.




\newpage  

\section{Appendix} 

Text for appendix

\newpage  
\begin{thebibliography}{99} % Bibliography - this is intentionally simple in this template

\bibitem[1] {} Dudani, Sahibsingh A. (1976)
\newblock The Distance-Weighted k-Nearest-Neighbor Rule
\newblock {\em IEEE Transactions on Systems, Man and Cybernetics}, pp. 325--327.

\bibitem[2] {} Peter, Flach (2012) ,
\newblock {\em Machine Learning, The Art and Science of Algorithms that Make Sense of Data},
\newblock Cambridge University Press,
\newblock New York

\bibitem[3] {} Ian H. Witten \& Eibe Frank \& Mark A. Hall (2011)
\newblock {\em Data Mining, Practical Machine Learning Tools and Techniques}  Third Edition,
\newblock Morgan Kaufmann,
\newblock Burlington
 
\bibitem[4] {} Data Set resource  : {\em http://repository.seasr.org/Datasets/UCI/arff/ }
\end{thebibliography}


\end{document}