\phantomsection
\chapter{Evaluation of Machine Learning Algorithms} % Main chapter title
\label{c4_evaluation} % For referencing the chapter elsewhere, use \ref{Chapter1} 
\rhead[\emph{Evaluation of Algorithms}]{\thepage}
\lhead[\thepage]{\emph{Evaluation of Algorithms}}
\section{Settings}
\label{compate_settings}
In this chapter we evaluate the algorithms introduced in the previous chapter and discuss their advantages and disadvantages.\\
For the evaluation, the following settings are used:\\
\begin{description}
	\item[Environment] \hfill \\
	\begin{figure}
	\centering
		\scalebox{0.2}{\includegraphics[angle=90]{img/env}}
		\rule{35em}{0.5pt}
	\caption{Testing Environment}
	\label{fig:env}
	\end{figure}
Figure \ref{fig:env} on page \pageref{fig:env} shows the environment where our Wi-Fi fingerprint data were collected. It is the northern part of the 4th floor of Building E of the Technology University of Hamburg-Harburg. The floor is entirely covered by Wi-Fi. There are 10 labelled rooms in which we collected data; in each room, signals from 4-7 access points are available. The set of rooms contains rooms close to each other (4075-4078), rooms separated by a corridor (4091, 4088, 4086), rooms separated by other rooms (4091, 4088, 4086) and rooms far from the central part (4084, 4071, 4065).
	\item[Samples] \hfill \\
  		\begin{enumerate}
  		\item \textbf{Collecting:} While collecting, the device (an HTC Desire) is moved around in the room, and the door is opened and then closed. The device is held in the same positions as people use a phone: close to the body, at the height of the chest or ear. We try to simulate a general room condition and smartphone usage scenario to fulfil the condition that all fingerprint data are collected from a smartphone in a room.
  			\item \textbf{Format:} Here we define the structure used in the \textit{arff} file. Each attribute is the MAC address of an access point, and its value is the signal strength as a number from 0 to 100, where 0 means no signal and 100 is the strongest signal. This value is calculated from the strength value which the smartphone measures. The signal strength reported by the smartphone lies in $[-100,0]$ dB, where 0 is the strongest signal and $-100$ means no signal (absent in detection). We then apply the linear transform $\left| \left| m \right| -100 \right|$, where $m$ is the measured value, which maps the value to $[0,100]$. As we can see from the data section of the sample \textit{arff} file \ref{base_arff}, each row associates a room with a collection of numerical values; such a data structure can be processed by most machine learning algorithms. Each number represents the signal strength of a specific access point. For most access points, a room receives no signal, hence there are many 0s in each row. Such a row matches the definition of a Wi-Fi fingerprint.
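The linear transform above can be sketched in Java (a minimal illustration of the mapping; the class and method names are our own, not part of the data collection app):

```java
// Maps a measured signal strength m in [-100, 0] dB to the
// fingerprint range [0, 100] via | |m| - 100 |.
class SignalTransform {
    static int toFingerprintValue(int measuredDb) {
        return Math.abs(Math.abs(measuredDb) - 100);
    }

    public static void main(String[] args) {
        System.out.println(toFingerprintValue(-100)); // no signal -> 0
        System.out.println(toFingerprintValue(0));    // strongest -> 100
        System.out.println(toFingerprintValue(-55));  // mid-range -> 45
    }
}
```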
  			\item \textbf{Data:}\hfill \\
  			The \textit{baseset.arff} \ref{base_arff} contains 1400 records in its data section; we take them as the \textit{Base Set}. The number of records collected from each room is not identical, which reflects the real situation. Before evaluating the classifiers, we have to derive training and test data sets from the \textit{Base Set}. Roughly 70-90 records (each record has a 5\% probability of being selected into the test set within its class) are randomly extracted from the base set as test data, and the rest forms the training data set. These are called \textit{Derived Sets}. The extraction is performed 5 times, so that we have 5 different pairs of training and test data sets. The performance of each algorithm is evaluated based on the average performance over the 5 derived sets.
	  			\begin{lstlisting}[label=base_arff,caption=baseset.arff]
@relation _E4

//list of attributes, MAC addresses of access points
@attribute 1c:c6:3c:54:1a:e7 integer
@attribute 00:19:07:58:31:b0 integer
@attribute 00:19:07:58:31:b1 integer
@attribute 7c:4f:b5:41:5d:de integer
@attribute 00:12:43:f9:31:21 integer
@attribute 00:24:36:a9:24:3b integer
@attribute bc:05:43:f8:5d:e5 integer
@attribute 88:03:55:3e:c4:1f integer
@attribute 00:19:07:5a:4b:20 integer
@attribute 00:12:43:8a:d6:71 integer
@attribute 00:12:43:8a:d6:70 integer
@attribute 00:12:43:f9:31:20 integer
@attribute 00:12:43:8a:e1:f1 integer
@attribute 00:12:43:f9:3c:a0 integer
@attribute 00:12:43:f9:3c:a1 integer
@attribute 00:12:43:8a:e1:f0 integer
@attribute 00:19:07:5a:4b:21 integer

//list of room labels
@attribute ROOM {4071,4088,4077,unknown,4065,4078,4075,4084,4076,4086,4091,}

@data
//Wi-Fi fingerprints: a collection of mapping of signal strength and access point, followed by room label
0,0,0,0,0,0,0,0,0,21,20,0,9,16,29,7,0,4075
0,0,0,0,0,0,0,0,0,18,19,0,9,22,24,10,0,4075
0,0,0,0,0,0,0,0,0,19,19,0,10,31,31,10,0,4075
0,0,0,0,0,0,0,0,0,26,22,0,12,26,26,10,0,4075
0,0,0,0,0,0,0,0,0,23,24,0,13,39,42,14,0,4091
//Truncated
\end{lstlisting}
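The derived-set extraction described above can be sketched as follows (our own illustration of the 5\% random split; the class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Splits the base set into a training and a test set: each record
// goes to the test set with 5% probability, the rest is training data.
class DerivedSetSplit {
    static List<List<String>> split(List<String> baseSet, long seed) {
        Random rng = new Random(seed);
        List<String> train = new ArrayList<>();
        List<String> test = new ArrayList<>();
        for (String record : baseSet) {
            if (rng.nextDouble() < 0.05) {
                test.add(record);   // ~5% of records become test data
            } else {
                train.add(record);  // the rest forms the training set
            }
        }
        List<List<String>> pair = new ArrayList<>();
        pair.add(train);
        pair.add(test);
        return pair;
    }
}
```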
\item \textbf{Device:}\hfill \\
The device we used to collect data is also a smartphone: an HTC Desire. The application used to record the data will be demonstrated in chapter \ref{c5_implementation}. In brief, it is an application that scans the Wi-Fi signals, logs them and generates an \textit{arff} file in the format of \ref{base_arff}.
		\end{enumerate}
	\item[Training Time] \hfill \\
    The training time is the time used to train the model with respect to the training data set. Time is measured from Java code on a computer with the following settings:
    \begin{enumerate}
    \item \textbf{CPU}: Intel(R) Core(TM) i5-2430M @2.4GHz
    \item \textbf{OS}: Windows 7 Professional SP1 64 bit
    \item \textbf{Java}: Java 1.7
    \item \textbf{WEKA}: Weka 3.6.9, the stable version from the book \cite{wekaDataMining}
    \end{enumerate}
	\item[Evaluation] \hfill \\
  As we discussed in Chapter \ref{c1_introduction}, for our objective the crucial criteria are the classification precision in percent and the error in meters. The final result is based on the averaged result over the 5 pairs of derived sets. Precision is the percentage of correct classifications. The error in meters is calculated based on the confusion matrix $\textbf{C}$. The matrix \ref{eq:confution_matrix} is an example of a confusion matrix; it shows the correct classifications as well as the number of incorrect classifications for each class. Each row stands for the instances of a class, each column for the classification result. In the ideal situation, where all classifications are correct, the confusion matrix is a diagonal matrix. The element in row 1, column 2 (denoted $ c_{1,2} $, marked in square brackets), for example, means there are 2 test instances which are actually of class a but are classified as class b. Apparently, the total number of test data $ K $ satisfies $ K = \sum _{ i=1 }^{ n }{ \sum _{ j=1 }^{ n }{ { c }_{ i,j } }  }  $, where $ n $ is the number of classes.
\begin{equation}\label{eq:confution_matrix}
\begin{matrix}
a & b & c & class \\ 3 & \left[ 2 \right]  & 0 & a \\ 0 & 2 & 3 & b \\ 1 & 1 & 3 & c 
\end{matrix} 
\end{equation}

In order to calculate the error, we need to transform the confusion matrix $\textbf{C}$ \ref{eq:confution_matrix} into a matrix $C^{N}$ whose elements are the original ones normalized by $ K $, the total number of test data; see equation \ref{eq:confusion_norm}. Meanwhile, we need another matrix $\textbf{D}$ with the same structure as the confusion matrix $\textbf{C}$, whose elements are the geometric distances between the classes (rooms). We denote the elements of $\textbf{D}$ by $ d_{i,j} $ and the elements of $C^{N}$ by $ { c }_{ i,j }^{ N } $.
\begin{equation}\label{eq:confusion_norm}
{ c }_{ i,j }^{ N }=\frac { { c }_{ i,j } }{ \sum _{ p=1 }^{ n }{ \sum _{ q=1 }^{ n }{ { c }_{ p,q } }  }  } 
\end{equation}
The error in meter is calculated by equation
\begin{equation}\label{error}
E =\sum _{ i=1 }^{ n }{ \sum _{ j=1 }^{ n }{ { d }_{ i,j }{ c }_{ i,j }^{ N } }  } 
\end{equation}
where $ n $ is the number of classes.
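Equations \ref{eq:confusion_norm} and \ref{error} can be sketched in Java as follows (our own illustration; the distance matrix values used in the example are hypothetical, not the real room distances):

```java
// Computes precision and error in meters from a confusion matrix C and
// a distance matrix D, following the equations above:
//   E = sum_{i,j} d[i][j] * c[i][j] / K, with K the total test count.
class ErrorInMeters {
    static double error(int[][] c, double[][] d) {
        int k = 0;
        for (int[] row : c) for (int v : row) k += v;  // K = total test data
        double e = 0.0;
        for (int i = 0; i < c.length; i++)
            for (int j = 0; j < c.length; j++)
                e += d[i][j] * c[i][j] / (double) k;   // d_{i,j} * c^N_{i,j}
        return e;
    }

    // Precision: share of diagonal (correct) classifications in percent.
    static double precision(int[][] c) {
        int k = 0, correct = 0;
        for (int i = 0; i < c.length; i++)
            for (int j = 0; j < c.length; j++) {
                k += c[i][j];
                if (i == j) correct += c[i][j];
            }
        return 100.0 * correct / k;
    }
}
```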
\end{description}
The following sections present the performance of the algorithms against the 5 derived sets. Model build time, precision in percent and error in meters are evaluated.

\section{Naive Bayes}
\begin{lstlisting}[label=code_naive,caption=Naive Bayes in WEKA]
NaiveBayes nb = new NaiveBayes();
String option = "";
nb.setOptions(Utils.splitOptions(option));
...//more code on training classifier
\end{lstlisting}
Code clip \ref{code_naive} shows the invocation of the Naive Bayes classifier in the \textit{WEKA} API. This is a very simple and straightforward classifier, thus it needs no options.
\subsection{Evaluation}
The evaluation result is in table \ref{table:eva_naive}.
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
    \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 0.08                 & 80.7            & 1.49         \\
    2    & 0.09                 & 86.0           & 0.98         \\
    3    & 0.10                 & 84.1            & 1.21        \\
    4    & 0.09                 & 86.8           & 1.09         \\
    5    & 0.09                 & 90.6            & 0.84         \\\hline
    Average    & 0.09           & 85.6            & 1.12        \\
    Standard Deviation    & 0.01         & 3.63            & 0.25         \\ \hline
    \end{tabular}
    \caption {Evaluation of Naive Bayes Classifier}
    \label{table:eva_naive}
\end{table}
\subsection{Discussion}
The discussion covers the advantages and disadvantages of the Naive Bayes classifier.\\
Pros:
\begin{enumerate}
  \item \textit{Incrementality}: with each training example, the prior and the likelihood can be updated dynamically. This feature can be used to build an updateable classifier, such that the training data is not loaded into memory at once and future classification results can be used again to update the classifier.
  \item \textit{Probabilistic hypotheses}: outputs not only a classification, but a probability distribution over all classes, which can be taken as the confidence in the classification.
  \item \textit{Meta-classification}: the outputs of several classifiers can be combined, e.g., by multiplying the probabilities that all classifiers predict for a given class.
\end{enumerate}
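The incrementality property above can be illustrated with a minimal sketch (our own, not \textit{WEKA}'s \textit{NaiveBayesUpdateable}): per-class counts and running means of one attribute are updated with each new training example, so no batch reload of the training data is needed.

```java
import java.util.HashMap;
import java.util.Map;

// Incremental per-class statistics for a single attribute: the class
// count (prior) and the running mean (likelihood parameter) are updated
// one training example at a time.
class IncrementalStats {
    private final Map<String, Integer> counts = new HashMap<>();
    private final Map<String, Double> means = new HashMap<>();

    void update(String room, double signal) {
        int n = counts.getOrDefault(room, 0) + 1;
        double mean = means.getOrDefault(room, 0.0);
        mean += (signal - mean) / n;  // incremental mean update
        counts.put(room, n);
        means.put(room, mean);
    }

    double mean(String room) { return means.getOrDefault(room, 0.0); }
    int count(String room)   { return counts.getOrDefault(room, 0); }
}
```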
Cons:
\begin{enumerate}
  \item \textit{Large training data size}: as discussed in the concept of the Bayes classifier, it is not possible to find the joint probability of all possible combinations of attribute values. The expected sample data size would be $ M\cdot { S }^{ k }\cdot n $, where $ S $ is the number of possible signal strength values, in our case 100; $ k $ is the number of attributes, in our case the number of access points, 17; $ n $ is the number of classes, here 10 rooms; $ M $ is any reasonable number that makes it possible to estimate the probability of each attribute combination. We do not even need to compute the exact number: the idea is that we can never have enough training data for the current settings. The absence of training data surely leads to underfitting in classification, which reduces the precision of the classifier.
  \item \textit{Unsuitable assumption in Naive Bayes}: the assumption \ref{naive_assumption} that for any class the probability of each signal strength is independent. Obviously this is not true: the signal strengths in each room are correlated to some extent as a result of signal spreading. This makes the physical meaning of the classification unreliable.
\end{enumerate}

\section{Q Nearest Neighbour (Q-NN) Classifier}
\begin{lstlisting}[label=code_knn,caption=Q-NN in WEKA]
IBk ibk = new IBk();
String option = "-K 10 -I -W 0 -A \"weka.core.neighboursearch.LinearNNSearch -S -A \\\"weka.core.EuclideanDistance -R first-last\\\"\"";
ibk.setOptions(Utils.splitOptions(option));
...//more code on training classifier
\end{lstlisting}
Code clip \ref{code_knn} shows the invocation of Q-NN Classifier in \textit{WEKA} API.  This classifier has several important options.
\begin{description}
\item[-K]: Specifies the number Q; here we set it to 10.
\item[-I]: The distance is weighted with $ 1/distance $ so that nearer reference points among the Q points get more weight. This option can be omitted, in which case all Q reference points get the same weight, as specified in equation \ref{eq:knn-decision}.
\item[-A]: The algorithm for searching the nearest neighbours. Here we used \textit{LinearNNSearch}, which is the brute-force search, iterating over all points in the training set. There are several other algorithms to search for the Q nearest neighbours; we will discuss them later. Since brute force is sure to find the answer, it is only a question of efficiency. \textit{LinearNNSearch} also takes an option which indicates what distance is measured; here we specify \textit{EuclideanDistance}.
\end{description}
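The distance-weighted Q-NN decision described above can be sketched as follows (a minimal brute-force illustration, not \textit{WEKA}'s \textit{IBk}; the class and method names are our own):

```java
import java.util.*;

// Finds the Q nearest fingerprints by Euclidean distance and lets each
// vote for its room with weight 1/distance (the -I option's behaviour).
class QnnSketch {
    static String classify(double[][] train, String[] labels, double[] query, int q) {
        Integer[] idx = new Integer[train.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort training points by distance to the query fingerprint.
        Arrays.sort(idx, Comparator.comparingDouble(i -> dist(train[i], query)));
        Map<String, Double> votes = new HashMap<>();
        for (int r = 0; r < q && r < idx.length; r++) {
            double d = dist(train[idx[r]], query);
            double w = 1.0 / (d + 1e-9);  // 1/distance weight, guarded against d = 0
            votes.merge(labels[idx[r]], w, Double::sum);
        }
        // The room with the largest accumulated weight wins.
        return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }
}
```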
\subsection{Evaluation}
The evaluation result is in table \ref{table:eva_knn}.
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
       \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 0.01                 & 85.2            & 1.15         \\
    2    & 0.01                 & 89.5           & 0.88         \\
    3    & 0.01                 & 84.1            & 1.38         \\
    4    & 0.01                 & 85.5            & 1.14         \\
    5    & 0.02                 & 91.8            & 0.81         \\\hline
    Average    & 0.01           & 87.2            & 1.07         \\
    Standard Deviation    & 0.00                 & 3.26            & 0.23         \\ \hline
    \end{tabular}
    \caption {Evaluation of Q-NN Classifier}
    \label{table:eva_knn}
\end{table}
\subsection{Discussion}
Selection of options:
\begin{description}
\item[Weight]\hfill\\
Specified by -I: whether we should weight the nearest training data with respect to their distance so that nearer training data get more weight. We decided to use the weighted version, based on comparisons between weighted and unweighted runs over 5 pairs of randomly selected training and test sets from the settings in \ref{compate_settings}. Table \ref{table:eva_knn_w} shows that the weighted distance leads to slightly better classification accuracy.
\begin{table}
\centering
    \begin{tabular}{l|l|l}
    \hline
    \# & Weighted(\%) & Not Weighted(\%) \\ \hline
    1    & 85.2                 & 83.0       \\ 
    2    & 89.5                 & 87.7       \\ 
    3    & 84.1                 & 87.0       \\ 
    4    & 85.5                  & 86.7       \\ 
    5    & 91.8                 & 90.6       \\ \hline
    \end{tabular}
    \caption {Accuracy of Weighted \& Unweighted}
    \label{table:eva_knn_w}
\end{table}
\item[Search Algorithms]\hfill\\
Search algorithms are used to find the nearest neighbours. Besides \textit{LinearNNSearch}, which implements the brute-force search iterating over all training data, there are three other algorithms which try to optimize the search by constructing a tree structure, such that iterating over the training data in a certain tree branch is sufficient, instead of over all training data.
\begin{enumerate}
\item[\textit{KD-Tree}]: This algorithm builds a k-dimensional tree so that the attribute hyperspace is split into several parts, each tree branch being a part containing all training data from that part. \cite{kdTree}
\item[\textit{Ball Tree}]: This algorithm gives each point in the hyperspace an additional attribute, the radius to a center, which turns regions of the space into balls. A \textit{BallTree} is a complete binary tree in which a ball is associated with each node in such a way that an interior node's ball is the smallest which contains the balls of its children. \cite{ballTree}
\item[\textit{Cover Tree}]: This algorithm calculates distances to a root in advance; the tree has a hierarchy in which a superior node covers its inferiors, with the covering distance decreasing down the hierarchy. \cite{coverTree}
\end{enumerate}   
Details on each tree can be found in the references \cite{kdTree}, \cite{ballTree} and \cite{coverTree}; here we focus on the effect. Table \ref{table:eva_knn_s} shows the accuracy of each search algorithm over the 5 derived sets. Additionally, distance weighting has been enabled. We can conclude from the table that the KD tree yields the same accuracy and error in meters but enjoys a lower model build time and more efficient searching. Thus we use the KD tree in the final implementation for the sake of classification efficiency.
\begin{table}
\centering
    \begin{tabular}{l|lll|lll|lll}\hline
    \#          & \multicolumn{3}{l}{Cover Tree} & \multicolumn{3}{l}{KD Tree} & \multicolumn{3}{l}{Ball Tree} \\\hline
    Title      & T.(s)       & Acc.(\%) & E.(m) & T.(s)            & Acc.(\%) & E.(m) & T.(s)      & Acc.(\%) & E.(m) \\
    1          & 0.07          & 85.2     & 1.15      & 0.03           & 85.2     & 1.15     & 0.05       & 85.2     & 1.15     \\
    2          & 0.07          & 89.5     & 0.88     & 0.03            & 89.5     & 0.88      & 0.04      & 89.5     & 0.88     \\
    3          & 0.06          & 84.1     & 1.38  & 0.02               & 84.1     & 1.38   & 0.04         & 84.1     & 1.38  \\
    4          & 0.07          & 85.5     & 1.14      & 0.03           & 85.5     & 1.14     & 0.03       & 85.5     & 1.14    \\
    5          & 0.06          & 91.8     & 0.81     & 0.02            & 91.8     & 0.81     & 0.03       & 91.8     & 0.81   \\\hline
    Average    & 0.07          & 87.2     & 1.07     & 0.02            & 87.2     & 1.07     & 0.04      & 87.2     & 1.07     \\
    St. Dev. & 0.00          & 3.26       & 0.23     & 0.00           & 3.26       & 0.23      & 0.00     & 3.26       & 0.23     \\\hline
    \end{tabular}
    \caption {Accuracy of Searching Algorithms}
    \label{table:eva_knn_s}
\end{table}
\item[Q]\hfill\\
The number of nearest reference points that we are concerned with, specified by -K, so that we can change the number Q to see the effect on accuracy. We use the first derived set, enable distance weighting and use the \textit{KD Tree} algorithm. The result is shown in table \ref{table:eva_knn_q}. We can conclude that setting Q to 10 is a suitable choice.
\begin{table}
\centering
    \begin{tabular}{l|l|l}
    \hline
    Q & Accuracy(\%) & Error(m)  \\ \hline
    1    & 84.1     & 1.35	     \\ 
    5    & 85.2     & 1.18	      \\
    10    & 85.2     & 1.15	     \\
    15    & 79.5     & 1.70	      \\
    20    & 77.2     & 1.90      \\ \hline
    \end{tabular}
    \caption {Effect of Q}
    \label{table:eva_knn_q}
\end{table}
\end{description}
Pros:
\begin{enumerate}
\item \textit{Incrementality}: the classifier is updateable, since every new instance can be used to train the classifier again. From the concept of the Q-NN classifier, every new measurement searches the training data hyperspace for the nearest neighbours, so a new training instance can be directly added to the training data set.
\item \textit{Probabilistic hypotheses}: outputs not only a classification, but a probability distribution over all classes, which can be taken as the confidence in the classification.
\end{enumerate}
Cons:
\begin{enumerate}
\item \textit{Calculation}: although \textit{WEKA} provides several options to improve the efficiency of the nearest-neighbour search, the classification still involves iterating over training data for distance computation. Since the training data can be updated, which leads to unbounded data size and dimension (the attributes increase as more access points appear), the classification takes increasing computational effort.
\end{enumerate}


\section{Linear Logistic Regression}
The \textit{Linear Logistic Regression} is implemented by \textit{SimpleLogistic} in \textit{WEKA}.
\begin{lstlisting}[label=code_log,caption=SimpleLogistic in WEKA]
String optionString = "-I 0 -M 500 -H 50 -W 0.0";
SimpleLogistic sl = new SimpleLogistic();
sl.setOptions(Utils.splitOptions(optionString));
...//more code on training classifier
\end{lstlisting}
Code clip \ref{code_log} shows the invocation of SimpleLogistic Classifier in \textit{WEKA} API.
\subsection{Evaluation}
The evaluation result is in table \ref{table:eva_log}.
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
       \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 2.99                 & 82.9            & 1.5        \\
    2    & 3.18                & 93.0            & 0.47         \\
    3    & 2.85                 & 82.6            & 1.42        \\
    4    & 2.69                 & 88.0            & 1.06         \\
    5    & 3.57                 & 87.4            & 0.82         \\\hline
    Average    & 3.06                 & 87.4           & 1.05        \\
    Standard Deviation    & 0.34      & 4.60            & 0.43         \\ \hline
    \end{tabular}
    \caption {Evaluation of SimpleLogistic Classifier}
    \label{table:eva_log}
    \end{table}
\subsection{Discussion}
Selection of options:\\\\
We can see some options for the \textit{SimpleLogistic} classifier in \ref{code_log}, but none of them are topic-related options that make a significant accuracy change in room classification. Since they only relate to the computational performance of the algorithm implementation, we simply selected the default options. \\\\
Pros:
\begin{enumerate}
\item \textit{Classification Efficiency}: We mentioned at the beginning of this algorithm that it is designed for continuous classification and is very efficient in classification; the effort of classifying is trivial.
\end{enumerate}
Cons:
\begin{enumerate}
\item \textit{Long Training Time}: The time needed to train the model is extremely high due to the complicated mathematical operations behind equation \ref{eq:lr_weight}. The situation becomes worse when the number of attributes increases, meaning more access points are present.
\end{enumerate}
More on \textit{Regression}:\\\\
There are other regression algorithms provided by \textit{WEKA}, among which a typical one is \textit{Multilayer Perceptron Regression}. The multilayer perceptron uses \textit{back-propagation} to compute the weights, a more complex matrix computation. Code clip \ref{code_mlp} shows the invocation of the Multilayer Perceptron classifier for the evaluation. The options have no topic-related meaning and only affect the performance of the \textit{WEKA} implementation, thus we use the default settings. The evaluation result is shown in table \ref{table:eva_mlp}. The \textit{Regression} classifiers all share the advantage of low classification effort but high model building effort.
\begin{lstlisting}[label=code_mlp,caption=Multilayer Perceptron in WEKA]
String option = "-L 0.3 -M 0.2 -N 500 -V 0 -S 0 -E 20 -H a";
MultilayerPerceptron multilayerPerceptron = new MultilayerPerceptron();
multilayerPerceptron.setOptions(Utils.splitOptions(option));
...//more code on training classifier
\end{lstlisting}

\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
       \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 7.07                 & 82.9            & 1.56         \\
    2    & 8.52                 & 93.0            & 0.46         \\
    3    & 7.11                & 81.1           & 1.6         \\
    4    & 8.11                 & 84.3            & 1.29         \\
    5    & 7.69                 & 87.1            & 1.13         \\\hline
    Average    & 7.7                 & 85.7           & 1.21         \\
    Standard Deviation    & 0.63                 & 4.62            & 0.46         \\ \hline
    \end{tabular}
    \caption {Evaluation of Multilayer Perceptron Regression Classifier}
    \label{table:eva_mlp}
    \end{table}

\section{Support Vector Machine}
The \textit{Support Vector Machine} is implemented via \textit{Sequential Minimal Optimization}, or SMO for short, in \textit{WEKA}. SMO is an efficient algorithm implementing the concept of the SVM. SMO is fastest for linear SVMs (with the help of kernel functions if nonlinear separation is required) and sparse data sets. \cite{SMO}
\begin{lstlisting}[label=code_svm,caption=SVM in WEKA]
SMO smo = new SMO();
String option = "-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K \"weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0\"";
smo.setOptions(Utils.splitOptions(option));
...//more code on training classifier
\end{lstlisting}
Code clip \ref{code_svm} shows the invocation of the SMO classifier in the \textit{WEKA} API. We used the polynomial kernel function and otherwise the default settings. The kernel function makes little difference and the other options are only computational performance optimizations.
\subsection{Evaluation}
The evaluation result is in table \ref{table:eva_svm}.
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
       \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 0.50                 & 78.4            &1.94         \\
    2    & 0.42                 & 86.0            & 1.12        \\
    3    & 0.45                 & 79.7            & 1.66         \\
    4    & 0.45                 & 86.8           & 1.15         \\
    5    & 0.43                 & 90.6            & 0.89         \\\hline
    Average    & 0.45                 & 84.3            & 1.35         \\
    Standard Deviation    & 0.03                & 5.1            & 0.43         \\ \hline
    \end{tabular}
    \caption {Evaluation of SVM Classifier}
    \label{table:eva_svm}
    \end{table}
\subsection{Discussion}
Pros:
\begin{enumerate}
\item No distinctive advantage in our case.
\end{enumerate}
Cons:
\begin{enumerate}
\item No distinctive disadvantage in our case.
\end{enumerate}


\section{Random Forest}
\begin{lstlisting}[label=code_rf,caption=Random Forest in WEKA]
RandomForest randomForest = new RandomForest();
String option = "-I 50 -K 0 -S 1";
randomForest.setOptions(Utils.splitOptions(option));
...//more code on training classifier
\end{lstlisting}
Code clip \ref{code_rf} shows the invocation of the Random Forest classifier from the \textit{WEKA} API. We used 50 trees for the classification and $ \log _{ 2 }{ k } +1 $ attributes for each tree, where $ k $ is the number of attributes.
\subsection{Evaluation}
The evaluation result is in table \ref{table:eva_rf}.
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
        \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 0.95                 & 81.8            & 1.34        \\
    2    & 0.70                 & 91.2            & 0.63         \\
    3    & 0.73                & 82.6            & 1.43         \\
    4    & 0.82                 & 80.7            & 1.68         \\
    5    & 0.66                 & 89.4           & 0.93         \\\hline
    Average    & 0.77                 & 85.2            & 1.2         \\
    Standard Deviation    & 0.11                 & 4.8            & 0.42         \\ \hline
    \end{tabular}
    \caption {Evaluation of Random Forest Classifier}
    \label{table:eva_rf}
    \end{table}
\subsection{Discussion}
Options:\\
\begin{enumerate}
\item -K: specifies the number of attributes for each tree; set to 0, it uses $ \log _{ 2 }{ k } +1 $ attributes, where $ k $ is the number of attributes. This is an optimized choice according to \textit{WEKA}.
\item -I: specifies the number of trees used. The original paper on random forests \cite{randomForest} states that there is no overfitting\footnote{Overfitting means the classifier has too many parameters fitted to the training data, which leads to better performance when classifying the training data again, but also means the classifier loses generalization, i.e.\ it performs worse on new data.} for random forests, which means we can select as many trees as we want, and the more trees, the more precise the result. But Mark R. Segal states in his paper \cite{rf_overfit} that random forests also overfit on noisy data sets. Results of a comparison of different numbers of trees are shown in table \ref{table:eva_rf_q}, using the first derived data set. We found an overfitting problem as a result of noise in the Wi-Fi signal; in this case we tried all the derived data sets to find a suitable number of trees. Table \ref{table:eva_rf_overfit} shows the results on the different derived sets (DS; DS3 means the third derived set). Derived sets 2 and 5 show good behaviour: the accuracy increases with the number of trees and converges at a certain number of trees. Derived sets 1, 3 and 4 suffer from the overfitting problem: the accuracy has a peak when the number of trees is 10. As a trade-off between the original idea of random forests and the overfitting problem caused by noise, we decided to use 20 trees. Decreasing the number of trees not only yields better accuracy but also reduces the model building time and classification time.
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
    \hline
     I & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 0.13                 & 80.7            & 1.54        \\
    5    & 0.29                 & 83.0            & 1.42        \\
    10    & 0.38                 & 88.63           & 0.91        \\
    20    & 0.47                 & 86.3           & 1.09        \\
    50    & 0.97                & 81.8            & 1.34         \\
    100    & 1.58                 & 83.0            & 1.32         \\ \hline
    \end{tabular}
    \caption {Effect of Number of Trees in Random Forest}
    \label{table:eva_rf_q}
    \end{table}
\begin{table}
\centering
    \begin{tabular}{l|l|l|l|l|l}
    \hline
    I   & DS1(\%) & DS2(\%) & DS3(\%) & DS4(\%) & DS5(\%) \\\hline
    1   & 80.7   & 86.0   & 79.7   & 72.3   & 81.2   \\
    5   & 83.0   & 89.5   & 82.6   & 80.7   & 84.7   \\
    10  & 88.6   & 89.5   & 84.1   & 84.3   & 85.9   \\
    20  & 86.3   & 91.2   & 81.2   & 80.7   & 89.4   \\
    50  & 81.8   & 91.2   & 82.6   & 79.5   & 90.6   \\
    100 & 83     & 91.2   & 82.6   & 79.5   & 90.6   \\\hline
    \end{tabular}
    \caption {Overfitting effect in Random Forest}
    \label{table:eva_rf_overfit}
\end{table}
\end{enumerate}
Pros:
\begin{enumerate}
\item \textit{Efficiency}: short model build time and short classification time; this is the ideal model-based classifier.
\item \textit{Large Capacity}: random forests are able to handle data with a large attribute set due to the random attribute selection phase. For a large number of attributes, the algorithm still keeps its efficiency and does not need much training data compared to other machine learning algorithms, since for each tree the number of attributes is much smaller than the total number of attributes.
\end{enumerate}
Cons:
\begin{enumerate}
\item \textit{Unstable}: random forests are based on decisions made by trees with randomly selected attributes. The randomness in selection makes it hard to qualify whether each tree is even able to make a meaningful prediction. The bad situation for a tree is that the predicted class has no relevance to the selected attributes, which makes its classification unreliable. Increasing the number of trees in the forest decreases the impact of bad trees, thus more trees are better; however, the accuracy converges as the number of trees increases. Increasing the number of trees is supposed to compensate for the accuracy loss caused by randomness, but due to the overfitting problem we do not know the suitable number of trees. Thus we do not have a best setting for this algorithm, and the current setting may lead to unstable performance.
\item \textit{Tree number}: due to the overfitting problem, the number of trees should be carefully selected, and there is no easy formula for that yet. It is a trial-and-error process: for each training set, we need to find a suitable number of trees.
\end{enumerate}
Improvements:\\
There is a new implementation of \textit{Random Forest} called \textit{Fast Random Forest}\footnote{An efficient implementation of random forest: http://code.google.com/p/fast-random-forest/}. It is a separate project apart from \textit{WEKA}, an optimization of the Random Forest from \textit{WEKA}. The project claims that accuracy is preserved while the model building time is reduced. In code \ref{code_frf} we use the same settings as in random forest \ref{code_rf} (with 50 trees), and the result is shown in table \ref{table:eva_frf}. The average shows that the accuracy is preserved as declared and the model build time is largely decreased compared to the original random forest, see table \ref{table:eva_rf}. So when we later refer to the random forest algorithm, we refer to the fast random forest algorithm. Table \ref{table:eva_frf_20} shows the performance when we use 20 trees, to compensate for the overfitting problem. We use this result as the final performance of the random forest algorithm.
\begin{lstlisting}[label=code_frf,caption=Fast Random Forest in WEKA]
FastRandomForest fastRandomForest = new FastRandomForest();
String option = "-I 50 -K 0 -S 1";
fastRandomForest.setOptions(Utils.splitOptions(option));
...//more code on training classifier
\end{lstlisting}
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
        \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 0.32                 & 84.1           & 1.32         \\
    2    & 0.32                 & 89.5            & 0.75         \\
    3    & 0.33                & 84.1           & 1.2         \\
    4    & 0.33                 & 79.5            & 1.87         \\
    5    & 0.30                 & 89.4            & 0.93         \\\hline
    Average    & 0.32                 & 85.3           & 1.21         \\
    Standard Deviation    & 0.01                &4.2           & 0.43         \\ \hline
    \end{tabular}
    \caption {Evaluation of Fast Random Forest Classifier}
    \label{table:eva_frf}
    \end{table}
    \begin{table}
\centering
    \begin{tabular}{l|l|l|l}
        \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 0.27                 & 85.2           & 1.16         \\
    2    & 0.25                 & 87.7            & 0.91         \\
    3    & 0.26                & 85.5           & 1.14         \\
    4    & 0.27                 & 81.9            & 1.64         \\
    5    & 0.25                 & 89.4            & 0.89         \\\hline
    Average    & 0.26                 & 85.9           & 1.14         \\
    Standard Deviation    & 0.01                 &2.8            & 0.43         \\ \hline
    \end{tabular}
    \caption {Fast Random Forest with 20 Trees}
    \label{table:eva_frf_20}
    \end{table}

\section{Vote Classifier}
\begin{lstlisting}[label=code_vote,caption=Vote in WEKA]
Vote vote = new Vote();
String option = "-S 1 -R MAJ -B \"hr.irb.fastRandomForest.FastRandomForest -I 20 -K 0 -S 1\" -B \"weka.classifiers.lazy.IBk -K 10 -W 0 -A \\\"weka.core.neighboursearch.KDTree -A \\\\\\\"weka.core.EuclideanDistance -R first-last\\\\\\\"\\\"\" -B \"weka.classifiers.bayes.NaiveBayes \" -B \"weka.classifiers.functions.SimpleLogistic -I 0 -M 500 -H 50 -W 0.0\" -B \"weka.classifiers.functions.SMO -C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K \\\"weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0\\\"\"";
vote.setOptions(Utils.splitOptions(option));
...//more code on training classifier
\end{lstlisting}
Code clip \ref{code_vote} shows the invocation of the Vote classifier via the \textit{WEKA} API. It combines all the previously introduced classifiers (Naive Bayes, Q-NN, SimpleLogistic, SMO and Random Forest) and lets them vote for the result.
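The heavily escaped option string in code clip \ref{code_vote} can also be assembled programmatically, which is easier to read and to modify. The following sketch sets the same base classifiers via \texttt{setClassifiers}; for brevity the sub-classifier options are left at their defaults, so it illustrates the API rather than replicating the exact evaluated configuration.
\begin{lstlisting}[label=code_vote_api,caption=Sketch: Building the Vote Classifier Programmatically]
Vote vote = new Vote();
// majority voting with seed 1, as in the evaluation above
vote.setOptions(Utils.splitOptions("-S 1 -R MAJ"));
Classifier[] members = new Classifier[] {
    new FastRandomForest(),  // random forest
    new IBk(10),             // Q-NN with 10 neighbours
    new NaiveBayes(),
    new SimpleLogistic(),
    new SMO()
};
vote.setClassifiers(members);
...//more code on training classifier
\end{lstlisting}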
\subsection{Evaluation}
The evaluation result is shown in table \ref{table:eva_vote}.
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
       \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 4.01                 & 83.0            & 1.39         \\
    2    & 4.38                 & 91.2            & 0.72         \\
    3    & 4.00                 & 85.5           & 1.17        \\
    4    & 3.78                 & 89.2           & 0.89         \\
    5    & 4.86                 & 90.6            & 0.86         \\\hline
    Average    & 4.2                 & 87.9            & 1.01         \\
    Standard Deviation    & 0.42                 & 3.52            & 0.27         \\ \hline
    \end{tabular}
    \caption {Evaluation of Vote Classifier}
    \label{table:eva_vote}
    \end{table}
\subsection{Discussion}
Option:\\
\textbf{-R} specifies how the votes are combined; in the previous evaluation we used the majority rule. Alternatively, we can choose the class with the highest probability. The result is shown in table \ref{table:eva_vote_r}. With this setting we also do not need to worry about the number of classifiers, since an even number of classifiers can be problematic when a majority is required.
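Instead of re-encoding the whole option string, the combination rule can also be switched through the API. A minimal sketch, assuming the \texttt{setCombinationRule} method and rule constants of \textit{WEKA}'s \texttt{Vote} class:
\begin{lstlisting}[label=code_vote_max,caption=Sketch: Vote by Maximum Probability]
Vote vote = new Vote();
// ... set the base classifiers as for majority voting ...
// choose the class with the highest probability instead of the majority
vote.setCombinationRule(new SelectedTag(Vote.MAX_RULE, Vote.TAGS_RULES));
\end{lstlisting}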
\begin{table}
\centering
     \begin{tabular}{l|l|l|l}
       \hline
    \# & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    1    & 4.3                 & 84.0            & 1.14         \\
    2    & 4.3                & 93.0         & 0.56         \\
    3    & 3.94                 & 84.0            & 1.24         \\
    4    & 3.83                 & 87.9            & 1.07         \\
    5    & 4.8                 & 91.8            & 0.77         \\\hline
    Average    & 4.24                 & 88.1            & 0.96         \\
    Standard Deviation    & 0.38                 & 4.22            & 0.28         \\ \hline
    \end{tabular}
    \caption {Vote By Max. Probability}
    \label{table:eva_vote_r}
    \end{table}

\section{Summary}
The performance of all introduced algorithms is summarized in table \ref{table:eva_sum}; all entries are the averaged values. The value for the Naive Bayes classifier is taken from table \ref{table:eva_naive}. The value for the Q-NN classifier is taken from table \ref{table:eva_knn_s}, since the combination of weighted distance, 10 nearest neighbours and KD-tree has the best performance. The value for the Linear Logistic Regression classifier is from table \ref{table:eva_log}. The value for the Multilayer Perceptron classifier is from table \ref{table:eva_mlp}. The value for the Support Vector Machine classifier is from table \ref{table:eva_svm}. The value for the random forest classifier is from table \ref{table:eva_frf_20}, since we use fast random forest with 20 trees as its final setting. For the vote classifier we take the values from table \ref{table:eva_vote_r}, since voting by maximum probability over all those classifiers has the best performance overall.\\
\begin{table}
\centering
    \begin{tabular}{l|l|l|l}
    \hline
    Name & Model Build Time (s) & Precision (\%) & Error (m) \\ \hline
    Naive Bayes    & 0.09           & 85.6            & 1.12        \\
    Q-NN    & 0.02            & 87.2     & 1.07 \\
    Linear Logistic Regression      & 3.06                 & 87.4           & 1.05\\
    Multilayer Perceptron & 7.7                 & 85.7           & 1.21         \\
    Support Vector Machine    & 0.45                 & 84.3            & 1.35         \\
    Random Forest   & 0.26                 & 85.9           & 1.14         \\
    Vote     & 4.24                 & 88.1            & 0.96         \\  \hline
    \end{tabular}
    \caption {Summary of Algorithms}
    \label{table:eva_sum}
    \end{table}
We can see from table \ref{table:eva_sum} that, if the model build time is not considered, the vote classifier is the best classifier. However, it is not the most efficient one: we do roughly five times the work compared to a single classifier. We also have to consider the efficiency of model building and classification on a smartphone, as well as the growing number of attributes (more access points as more rooms are added). The Naive Bayes classifier suffers most from the error caused by the false independence assumption \ref{naive_assumption}. The Linear Logistic Regression and Multilayer Perceptron classifiers need a long model build time. The Q-NN classifier needs a long classification time. The Support Vector Machine classifier simply has the worst accuracy and error in meters in our case. Consequently, the random forest classifier is the best choice: it is efficient, with an acceptable model build time and fast classification. Moreover, it can handle large attribute sets, which is essential in our data setting, where the number of attributes (MAC addresses of access points) grows rapidly.

