\phantomsection
\chapter{Machine Learning algorithms} % Main chapter title
\label{c3_algorithms} % For referencing the chapter elsewhere, use \ref{Chapter1} 
\rhead[\emph{Introduction to Algorithms}]{\thepage}
\lhead[\thepage]{\emph{Introduction to Algorithms}}
The following symbols are used by the algorithms in this chapter.
\begin{description}
\item[$ v $]\hfill\\
A column vector representing the measured signal strengths of all access points. ${ v }_{ i } $ in $ v $ is the signal strength of the $i$th access point; there are $ k $ access points in total.
\item[$\omega  $]\hfill\\
The classification result, a room label in our case.
\item[$d(v) $]\hfill\\
The decision function, which maps a measurement $ v $ to a class label $\omega  $.
\item[$ y $]\hfill\\
The true class label, the actual room label in our case.
\item[$ k $]\hfill\\
The number of attributes; each attribute is the signal strength from an access point in our case.
\item[$ n $]\hfill\\
The number of classes, the number of rooms in our case.
\end{description}
The following sections elaborate on several machine learning algorithms.

\section{Naive Bayes}
The Naive Bayes classifier is based on the Bayes classifier. It is a very common classifier, often the first algorithm that comes to mind for a machine learning problem. Its basic idea is to find the class label with the maximum probability for a given measurement.
The principle is based on \textit{Bayes' rule} (equation \ref{eq:bayes_rule}):
\begin{equation}\label{eq:bayes_rule}
prob(y|v)=\frac { prob(v|y)\cdot prob(y) }{ prob(v) } 
\end{equation}
\begin{description}
\item[$ prob(y) $]: the \textit{prior probability}, the independent probability of each class. In our case it is the probability of being in each room, which ideally, with infinite samples, is the constant $\frac { 1 }{ n }   $ for each room, where $n  $ is the number of rooms. This setting means people are in each room with equal probability.\\
\item[$ prob(v) $]: the probability of a specific measurement.\\
\item[$ prob(v|y) $]: the \textit{likelihood} of $ v $ with respect to class $ y $: given a class, the probability of a specific measurement.\\
\item[$prob(y|v) $]: \label{def:posteriori} the \textit{posterior probability}, the probability of a class given a specific measurement.
\end{description}
The maximum posterior probability is exactly what we are looking for, so the \textit{Bayes decision rule} is:
\begin{equation}\label{eq:bayes_decision}
Decide\quad for\quad class\quad \hat { \omega  }\begin{cases} if\quad prob(\hat { \omega  } |v)=\max _{ \omega  }{ \left\{ prob(\omega |v) \right\}  } \\ \qquad and\quad prob(\hat { \omega  } |v)\quad >\quad \beta \\ otherwise\quad reject \end{cases}
\end{equation}
The rule states that we are looking for the class with the maximum posterior probability. $\beta  $ is the rejection threshold: if no class reaches it, no suitable class label exists in the given class set and the measurement is rejected. In our case we can ignore it by setting it to 0, so that a class is always found for the measurement.\\
In most situations, ours included, the posterior probability is not directly available. Combining equations \ref{eq:bayes_rule} and \ref{eq:bayes_decision}, we conclude that instead of the maximum posterior probability we can look for the maximum likelihood:
\begin{equation}\label{bayes_decision_induction}
Decide\quad for\quad class\quad \hat { \omega  } \begin{cases} if\quad prob(v|\hat { \omega  } )\cdot prob(\hat { \omega  } )=\max _{ \omega  }{ \left\{ prob(v|\omega )\cdot prob(\omega ) \right\}  } \\ \qquad and\quad prob(v|\hat { \omega  } )\cdot prob(\hat { \omega  } )\quad >\quad \beta \cdot prob(v)\\ otherwise\quad reject  \end{cases}
\end{equation}
The prior probabilities are easy to estimate from the samples, and ideally they are all equal, since people are assumed to be in each room with the same probability, so we can apply the following transformation to make this clearer.
\begin{equation}\label{bayes_decision_ml}
Decide\quad for\quad class\quad \hat { \omega  }\begin{cases} if\quad prob(v|\hat { \omega  } )=\max _{ \omega  }{ \left\{ prob(v|\omega ) \right\}  } \\ \qquad and\quad prob(v|\hat { \omega  } )\quad >\quad \beta \cdot prob(v)\cdot n\\ otherwise\quad reject  \end{cases}
\end{equation}
$ n $ stands for the number of classes. Since we do not need to care about rejection, we aim to find $  \max _{ \omega  }{ \left\{ prob(v|\omega ) \right\}  } $, the maximum likelihood.\\
However, estimating $ prob(v|y) $ is difficult in practice. Consider the more explicit form $ prob({ v }_{ 1 },{ v }_{ 2 },...,{ v }_{ k }|\omega )$, where $k $ is the number of attributes: it is not feasible to observe every attribute combination in the training set, let alone estimate their probabilities.\\
\textit{Naive Bayes} makes the assumption that attribute values are conditionally independent given the class, so:
\begin{equation}\label{naive_assumption}
prob({ v }_{ 1 },{ v }_{ 2 },...,{ v }_{ k }|\omega )=\prod _{ i=1 }^{ k }{ prob({ v }_{ i }|\omega ) } 
\end{equation}
Hence, for the Naive Bayes classifier, we are looking for
\begin{equation}\label{naive_decision}
\max _{ \omega  }{ \left\{ \prod _{ i=1 }^{ k }{ prob({ v }_{ i }|\omega ) } \cdot prob(\omega ) \right\}  } 
\end{equation}
of which all factors can be estimated from the training data. The classification result is the $ \omega $ that maximizes expression \ref{naive_decision}.
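To make expression \ref{naive_decision} concrete, the following Python sketch estimates each $ prob({ v }_{ i }|\omega ) $ with a per-attribute Gaussian (a modelling assumption of ours; this chapter does not fix a likelihood model) and picks the class with the largest log of the product. The names \texttt{fit} and \texttt{classify} are illustrative.

```python
import math
from collections import defaultdict

def fit(samples):
    """Estimate the prior and per-attribute Gaussian parameters per class.
    samples: list of (v, label), where v is a list of k signal strengths."""
    by_class = defaultdict(list)
    for v, label in samples:
        by_class[label].append(v)
    model = {}
    for label, vs in by_class.items():
        k = len(vs[0])
        means = [sum(v[i] for v in vs) / len(vs) for i in range(k)]
        # a variance floor avoids division by zero for constant attributes
        variances = [max(sum((v[i] - means[i]) ** 2 for v in vs) / len(vs), 1e-6)
                     for i in range(k)]
        model[label] = (len(vs) / len(samples), means, variances)
    return model

def classify(model, v):
    """Return the class maximizing log prior + sum of log likelihoods,
    i.e. the logarithm of the product in the Naive Bayes decision rule."""
    best_label, best_score = None, float("-inf")
    for label, (prior, means, variances) in model.items():
        score = math.log(prior)
        for vi, mu, var in zip(v, means, variances):
            score += -0.5 * math.log(2 * math.pi * var) - (vi - mu) ** 2 / (2 * var)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Working in log space keeps the product of many small probabilities from underflowing.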

\section{Q Nearest Neighbour (Q-NN) Classifier}
The vector representation of a measurement $ v $ suggests that classification can be treated as a purely geometric problem: finding the nearest point in the hyperspace. The idea is to find the nearest reference point, i.e. a training sample, and attach its label to the new measurement. If only the single nearest neighbour is used, the classifier is called a 1-NN classifier. Accordingly, the Q-NN classifier takes the Q nearest neighbours into account. The decision is made through equation \ref{eq:knn-decision}. The number Q needs to be chosen carefully: if it is too small, the result is unreliable because of poor tolerance to variance in the reference data; if it is too large, the locality of the estimate is destroyed.\footnote{In most literature, this method is referred to as K Nearest Neighbour. We use Q to distinguish it from the attribute number $ k $.}
\begin{equation}\label{eq:knn-decision}
d\left( v \right) =\frac { 1 }{ Q } \sum _{ q=1 }^{ Q }{ { y }_{ q } } ,\qquad { y }_{ q }\quad the\quad label\quad of\quad the\quad qth\quad nearest\quad reference
\end{equation}
\begin{figure}
	\centering
		\scalebox{1.0}{\includegraphics{img/knn}}
		\rule{35em}{0.5pt}
	\caption{Q-NN example\cite{pattern}}
	\label{fig:knn}
\end{figure}
Figure \ref{fig:knn} demonstrates how equation \ref{eq:knn-decision} works. In this case, $ Q = 10 $. The estimated vector of class probabilities is $ d\left( v \right) ={ \left( 0.2,0.5,0.3 \right)  }^{ T } $, which corresponds to the posterior probability (\ref{def:posteriori}). In this example, class 2 wins since it gets the highest posterior probability. For more complicated situations, the neighbours can be weighted with $  1 / distance  $, so that nearer references get more weight than farther ones. Generally speaking, for a new measurement we iterate over all training data, calculate the distance to the new measurement, find the Q nearest neighbours, and apply formula \ref{eq:knn-decision} to get the classification result.
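The iteration just described can be sketched in Python. The name \texttt{qnn\_classify} is illustrative, and an unweighted majority vote over the Q labels stands in for taking the argmax of the probability vector of equation \ref{eq:knn-decision}.

```python
import math
from collections import Counter

def qnn_classify(references, v, Q=3):
    """references: list of (vector, label) training pairs; v: new measurement.
    Returns the majority label among the Q nearest references."""
    # sort all references by Euclidean distance to the new measurement
    nearest = sorted(references, key=lambda ref: math.dist(ref[0], v))
    # count the labels of the Q closest references and take the plurality
    votes = Counter(label for _, label in nearest[:Q])
    return votes.most_common(1)[0][0]
```

Sorting all references makes the cost of every single classification grow with the training set, which is exactly the runtime burden discussed in the next section.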

\section{Linear Logistic Regression}
The long classification time of the Q-NN classifier is a burden. In some situations localization needs continuous prediction, which requires short classification time, so a faster classifier is preferable. The idea is to have a formula into which we put all attribute values and whose result is directly the class label. Linear Logistic Regression is such an option: it obtains the class label as a linear combination of the attributes with specific weights. The equation connecting class and attributes through weights is called the \textit{regression equation}, and the process of determining the weights is called \textit{regression}.\cite{wekaDataMining}
\\
\textit{Regression} is an ideal solution for classification problems whose attributes and class are numeric. (One may object that in our case the class is not numeric. Actually, it makes no difference if we relabel the classes with numbers and map them back in the end; that is also what the \textit{WEKA} implementation does: every class is relabelled numerically.) The simplest form of \textit{regression} is \textit{Linear Regression}. The class label is calculated according to:
\begin{equation}\label{eq:linear_regression}
\omega ={ w }_{ 1 }{ v }_{ 1 }+{ w }_{ 2 }{ v }_{ 2 }+...+{ w }_{ k }{ v }_{ k }
\end{equation}
where $ w_{i} $ is the weight of attribute $ i $.\\
For a specific training data set, the weights for each class are calculated by minimizing the squared error:
\begin{equation}\label{eq:least_square}
\min { \left( \sum _{ i=1 }^{ N }{ { \left( { y }_{ i }-{ \omega  }_{ i } \right)  }^{ 2 } }  \right)  } =\min { \left( \sum _{ i=1 }^{ N }{ { \left( { y }_{ i }-\sum _{ j=1 }^{ k }{ { w }_{ j }{ v }_{ ij } }  \right)  }^{ 2 } }  \right)  } 
\end{equation}
where $ N $ is the number of training samples and $ k $ is the number of attributes.\\
Once the mathematics is done, the result is a set of numeric weights, based on the training data, which can be used to predict the class of new instances: we evaluate equation \ref{eq:linear_regression} for each class. Then another problem arises: how to map the calculated value to a class label. Simply taking the largest one is not a very good idea, as it reduces classification to detection: one class gets a hard ``yes'' and all others ``no''. Meanwhile, the calculated values are not restricted to [0,1], so they cannot be treated as probabilities. Another implicit drawback is that least-squares regression (equation \ref{eq:least_square}) assumes that the errors are not only statistically independent but also normally distributed with the same standard deviation, an assumption that is violated when the method is applied to classification problems, because the observations are binary decisions. These problems are solved by \textit{Logistic Regression}.\\
The result of \textit{Linear Regression} is not confined to the interval between 0 and 1, so \textit{Logistic Regression} applies a transformation to the posterior probability (\ref{def:posteriori}) that maps it to the range from negative infinity to positive infinity. The transformation is defined in equation \ref{eq:lr_linear_transform}.
\begin{equation}\label{eq:lr_linear_transform}
prob(\omega |v)\rightarrow \log { \left( \frac { prob(\omega |v) }{ 1-prob(\omega |v) }  \right)  } 
\end{equation}
The transformed variable is approximated using a linear function just like the ones generated by linear regression. The resulting model is
\begin{equation}\label{eq:lr_approximation}
prob(\omega |v)=\frac { 1 }{ 1+\exp (-\sum _{ i=1 }^{ k }{ { w }_{ i }{ v }_{ i } } ) } 
\end{equation}
where $ w_{i} $ is the weight of each attribute. To find the weights, \textit{Linear Regression} seeks the minimum squared error; logistic regression instead seeks the maximum log-likelihood of the model. This is given by
\begin{equation}\label{eq:lr_weight}
\max { \left( \sum _{ i=1 }^{ N }{ \left( 1-{ x }_{ i } \right)  } \log { \left( 1-prob\left( { \omega  }_{ i }|v \right)  \right)  } +{ x }_{ i }\log { \left( prob\left( { \omega  }_{ i }|v \right)  \right)  }  \right)  } 
\end{equation}
where $ { x }_{ i } $ is either 0 or 1 and $ N $ is the number of training samples. With the fitted weights we can calculate the probability of each class.\cite{wekaDataMining} Calculating the probability of every class for the new measurement, the class with the highest probability is the classification result.
There is also a similar method called the Multilayer Perceptron, which uses backpropagation to find the weights. It is also a regression algorithm; details can be found in \cite{pattern}.
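For the binary case, the weight search of equation \ref{eq:lr_weight} can be sketched as plain gradient ascent on the log-likelihood. The learning rate, epoch count, and absence of a bias term are our illustrative choices; the actual WEKA implementation differs.

```python
import math

def sigmoid(z):
    """Numerically safe logistic function of equation (lr_approximation)."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logistic(data, lr=0.1, epochs=2000):
    """data: list of (v, x) with binary target x in {0, 1}.
    Gradient ascent on the log-likelihood; no bias term, matching the
    text's formulation of the regression equation."""
    k = len(data[0][0])
    w = [0.0] * k
    for _ in range(epochs):
        for v, x in data:
            p = sigmoid(sum(wi * vi for wi, vi in zip(w, v)))
            # the gradient of the log-likelihood w.r.t. w_i is (x - p) * v_i
            for i in range(k):
                w[i] += lr * (x - p) * v[i]
    return w

def prob(w, v):
    """Posterior probability of the positive class for measurement v."""
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)))
```

A multi-class problem would train one such model per class and pick the class with the highest probability, as described above.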


\section{Support Vector Machine}
Another classification method also starts in the hyperspace spanned by the training data. The idea is that we can cut the hyperspace with hyperplanes so that different classes lie in different parts of the space. We introduce the concept with the most straightforward case, linearly separable data.\\
Assume that the training data are separated by a hyperplane in the form of 
\begin{equation}
d\left( v \right) =\left( a\cdot v \right) +b
\end{equation}
where $ a $ is a vector normal to the hyperplane and $ \left| b \right| /\left\| a \right\|  $ is the perpendicular distance from the hyperplane to the origin. As shown in Figure \ref{fig:smo_linear}, the points $ v $ that lie on the hyperplane satisfy $ a\cdot v +b=0 $. The training samples lying on the hyperplanes $ H_{1} $ and $ H_{2} $ are called \textit{Support Vectors}. Only the support vectors are critical to the classification. The goal is to find $ a $ and $ b $ with minimum function complexity and maximum margin, which increases the generalization performance of the classifier. Such a method is called a \textit{Support Vector Machine}, SVM.
\begin{figure}
	\centering
		\scalebox{1.0}{\includegraphics{img/smo_linear}}
		\rule{35em}{0.5pt}
	\caption{Linear Separable Support Vector\cite{pattern}}
	\label{fig:smo_linear}
\end{figure}
In the more general case there is no linearly separating hyperplane between two classes; see for example sub-figure (a) in Figure \ref{fig:smo_nonlinear}. The elliptic boundary of class A is hard to separate from class B linearly. For that reason we introduce the \textit{kernel method}. The kernel method maps the data into a high-dimensional feature space, in which the nonlinear problem becomes linear. The dimensionality of the feature space can be arbitrarily large. The mapping is implemented by a \textit{kernel function}. Many kernel functions are available: polynomial, Gaussian RBF, or sigmoidal.\cite{SMO} There is no a-priori preference among these functions; the choice is based on the pattern in the data. In the example of Figure \ref{fig:smo_nonlinear}, the kernel function $ r\left( { v }_{ 1 },{ v }_{ 2 } \right) ={ \left( { v }_{ 1 }\cdot {v }_{ 2 } \right)  }^{ 2 } $ derives an extra dimension from $ v $. Thus the 2-D input data are mapped into 3-D space, where (sub-figure b) a linear hyperplane can separate the two classes.
\begin{figure}
	\centering
		\scalebox{1.0}{\includegraphics{img/smo_nonlinear}}
		\rule{35em}{0.5pt}
	\caption{Kernel Method for Nonlinear Separation\cite{pattern}}
	\label{fig:smo_nonlinear}
\end{figure}
For this classifier, a kernel function is applied to split the hyperspace, and a new measurement gets its class label from the part of the space it falls into.
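The kernel of Figure \ref{fig:smo_nonlinear} can be checked numerically: the explicit feature map $ \phi(v)=({ v }_{ 1 }^{ 2 },\sqrt{2}{ v }_{ 1 }{ v }_{ 2 },{ v }_{ 2 }^{ 2 }) $, a standard construction we supply here for illustration, reproduces $ r\left( { v }_{ 1 },{ v }_{ 2 } \right) ={ \left( { v }_{ 1 }\cdot { v }_{ 2 } \right) }^{ 2 } $ as an ordinary dot product in 3-D.

```python
import math

def kernel(u, v):
    """Polynomial kernel from the text: r(u, v) = (u . v)^2 for 2-D inputs."""
    return (u[0] * v[0] + u[1] * v[1]) ** 2

def phi(v):
    """Explicit 2-D -> 3-D feature map realizing that kernel."""
    return (v[0] ** 2, math.sqrt(2) * v[0] * v[1], v[1] ** 2)

def dot3(a, b):
    """Ordinary dot product in the 3-D feature space."""
    return sum(x * y for x, y in zip(a, b))
```

The identity $ r(u,v)=\phi(u)\cdot\phi(v) $ is the point of the kernel trick: the classifier never computes $ \phi $ explicitly, it only evaluates the kernel on pairs of inputs.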


\section{Random Forest}
In all the machine learning algorithms introduced so far, we spend effort on algebraic computation, in the learning phase or in the classification phase. That effort may be essential for those methods, but it is not essential for every machine learning algorithm. The algorithm C4.5 by Quinlan builds a decision tree from the training data and its attributes.\cite{decisionTree} Figure \ref{fig:decisionTree} shows an example of a decision tree. According to the figure, a room like (12,0,2), meaning $ v_{1}=12,v_{2}=0,v_{3}=2 $, is classified as \textit{Room4}. An implementation of C4.5 can be found in \cite{c45}. Once the decision tree is built, classification is easy: simply traverse the tree. The problem now lies in the choice of attributes. The simplest way is to take all attributes into account, which is always correct, but we never get enough training data to build a decision tree with unlimited attributes; it is computationally infeasible. \textit{Random Forest} gives us a solution. The random forest algorithm combines decision tree construction with random attribute selection. Each tree is constructed independently with randomly selected attributes and all training data. The selection of attributes is called attribute bagging, a draw with replacement, meaning that some attributes can be selected more than once and some may not be selected at all. The number of attributes for each tree is predefined and limited, mostly far smaller than the total number of attributes, which is more feasible computationally. One may doubt how a single tree is qualified to be a predictor, since it does not even consider the influence of the unselected attributes. The answer is that a whole forest makes the prediction. Each tree in the forest predicts based on some attributes, which makes it an expert for that subset of attributes.
By considering the opinions of all ``partial experts'', we can make a prediction as good as if we had a real expert on all attributes. The results from all trees are collected and the final decision is made by voting. Consequently, the whole forest is able to make predictions based on all attributes.\cite{randomForest}
\begin{figure}
\centering
\tikzstyle{elli} = [draw=red, shape=ellipse]
\tikzstyle{rec} = [draw=red, shape=rectangle]

\begin{tikzpicture}

\node[elli] (v1) at (0.5,0) {$v_1$};
\node [elli] (v2) at (-1,-2) {$v_2$};
\node [elli] (v3) at (2,-2) {$v_3$};
\node [rec] (r1) at (-2.5,-4) {Room1};
\node [rec] (r2) at (0,-4) {Room2};
\node [rec] (r3) at (1.5,-4) {Room3};
\node [rec] (r4) at (4,-4) {Room4};
\draw  (v1) edge node[left]{$>20$} (v2);
\draw  (v1) edge node[right]{$\le20$} (v3);
\draw  (v2) edge node[right]{$>10$} (r1);
\draw  (v2) edge node[right]{$\le10$} (r2);
\draw  (v3) edge node[right]{$>27$} (r3);
\draw  (v3) edge node[right]{$\le27$} (r4);
\end{tikzpicture}

\caption{\label{fig:decisionTree} Example of Decision Tree }
\end{figure}
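Traversing the tree of Figure \ref{fig:decisionTree} amounts to a few nested comparisons. The sketch below reproduces the example from the text, where (12,0,2) ends in \textit{Room4}; the function name is illustrative.

```python
def classify_room(v):
    """Traverse the decision tree of the figure: v = (v1, v2, v3),
    the signal strengths of three access points."""
    v1, v2, v3 = v
    if v1 > 20:
        # left branch: decide on v2
        return "Room1" if v2 > 10 else "Room2"
    else:
        # right branch: decide on v3
        return "Room3" if v3 > 27 else "Room4"
```

No arithmetic beyond threshold comparisons is needed at classification time, which is why tree traversal is so cheap compared with the previous methods.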

\section{Vote Classifier}
This classifier is not really a classifier on its own; it is more like a framework. It contains several known classifiers, such as the ones introduced in the previous sections, and makes its classification based on the results of those inner classifiers.
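A minimal sketch of such a framework, assuming plain majority voting among the inner classifiers (the combination rule is our illustrative choice; other schemes, such as averaging probabilities, are possible):

```python
from collections import Counter

def vote_classify(classifiers, v):
    """classifiers: list of functions mapping a measurement v to a label.
    Returns the plurality label; ties fall to the label counted first."""
    votes = Counter(classifier(v) for classifier in classifiers)
    return votes.most_common(1)[0][0]
```

Any of the classifiers from the previous sections can be wrapped as such a function and combined here.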


\section{Summary}
In the domain of machine learning there is no dominating algorithm, that is, no algorithm is inherently superior to the others for all problems.\cite{pattern} To solve our problem, we evaluate the algorithms by gathering Wi-Fi fingerprint samples and classifying them with each algorithm. The model build time, the classification accuracy in percent, and the geometric error in metres will be evaluated and compared. A decision is then made based on those results and on whether the characteristics of each classifier fit our data.