\documentclass{article}

\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amssymb,amsmath,graphicx}

\title{Pervasive Positioning: Mandatory Project 1\\Wi-Fi Positioning}
\author{Peter Høeg Steffensen, 20082410, Kristian Kongsted 20081434, \\ Jon Korsgaard Sorensen 20083030, Allan Stisen 20083311}
\date{\today}

\begin{document}
\maketitle
\section{How to run}
The project has an Ant build script; all available commands are listed by \texttt{ant help}. Otherwise, run \texttt{ant run}, which runs all the algorithms together with ScoreNN, which outputs the error distributions.

\section{Implementation details}
In both the fingerprinting-based and the model-based algorithm we first build a radio map: a list of positions, each associated with a map of signal-strength samples.

\subsection{Fingerprinting-based algorithm}
In the fingerprinting-based algorithm we sum up the offline signal-strength samples and take the average per AP, since there are multiple measurements for the same APs. These averages are then used to calculate the Euclidean distance:
\begin{equation*}
E = \sqrt{(S_{1} - R_{1}) ^{2}+ (S_{2} - R_{2}) ^{2} + \cdots + (S_{N} - R_{N}) ^{2}}
\end{equation*} 
This formula is taken from \emph{Location in Ubiquitous Computing}.
For each AP that appears in the online sample but not in the offline sample, the distance is increased by a constant penalty, and the opposite case receives the same penalty. This has lowered our distance-estimation error considerably.
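The distance computation with the penalty can be sketched as follows (a minimal Java sketch; the method names and the \texttt{PENALTY} constant are illustrative, not taken from the project code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fingerprinting distance with a penalty for unmatched APs.
public class FingerprintDistance {
    static final double PENALTY = 100.0; // hypothetical constant penalty

    // offline: averaged signal strength per AP; online: one scan's readings
    public static double distance(Map<String, Double> offline, Map<String, Double> online) {
        double sum = 0.0;
        for (Map.Entry<String, Double> e : online.entrySet()) {
            Double off = offline.get(e.getKey());
            if (off == null) {
                sum += PENALTY;          // AP heard online but missing offline
            } else {
                double d = e.getValue() - off;
                sum += d * d;            // squared per-AP difference
            }
        }
        for (String ap : offline.keySet()) {
            if (!online.containsKey(ap)) {
                sum += PENALTY;          // AP in the offline map but not heard online
            }
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        Map<String, Double> offline = new HashMap<>();
        offline.put("ap1", -50.0);
        offline.put("ap2", -70.0);
        Map<String, Double> online = new HashMap<>();
        online.put("ap1", -53.0);
        online.put("ap3", -80.0); // unmatched AP, triggers the penalty
        System.out.println(distance(offline, online));
    }
}
```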

\subsection{Model-based algorithm}
In the model-based algorithm we proceed much as in the fingerprinting algorithm. When an online sample needs to be estimated, we do the following: we iterate over all the signal samples in the online sample and check whether the offline sample it is compared with has a measurement for that specific AP. We then average the distances over all the APs for that particular online sample. Finally, we sort the list by distance and take the positions of the $k$ nearest neighbors.
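The $k$-nearest-neighbor step described above can be sketched like this (illustrative Java; the actual project classes may be structured differently):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Sketch of the kNN step: sort candidate positions by their averaged
// distance and average the positions of the k closest candidates.
public class NearestNeighbours {
    public static double[] estimate(List<double[]> positions, List<Double> distances, int k) {
        // pair up positions with distances by index, sort ascending by distance
        Integer[] idx = new Integer[distances.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(distances::get));

        double x = 0, y = 0;
        for (int i = 0; i < k; i++) {           // average the k closest positions
            x += positions.get(idx[i])[0];
            y += positions.get(idx[i])[1];
        }
        return new double[] { x / k, y / k };
    }

    public static void main(String[] args) {
        List<double[]> pos = Arrays.asList(
            new double[] {0, 0}, new double[] {2, 0}, new double[] {10, 10});
        List<Double> dist = Arrays.asList(1.0, 2.0, 50.0);
        double[] est = estimate(pos, dist, 2);  // averages (0,0) and (2,0)
        System.out.println(est[0] + ", " + est[1]);
    }
}
```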

We consider an AP unhearable when $P(d)$ is below a minimum Wi-Fi signal strength of around $-90$ dBm.

We have decided that our model-based radio map starts in the lower left corner of the building and extends to the right and upwards.
If a coordinate falls outside the building, for instance in the lower right or upper left corner, it is not added to the radio map.
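Grid generation could look roughly like this (the step size, the building dimensions, and the \texttt{isInsideBuilding} footprint check are all hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of radio-map grid generation: walk from the lower left corner
// rightwards and upwards in fixed steps, skipping coordinates that fall
// outside the building.
public class RadioMapGrid {
    static final double STEP = 1.0; // hypothetical grid spacing in metres

    // Hypothetical L-shaped footprint: the building lacks its upper-left block.
    static boolean isInsideBuilding(double x, double y) {
        return !(x < 10 && y > 20);
    }

    public static List<double[]> buildGrid(double width, double height) {
        List<double[]> grid = new ArrayList<>();
        for (double y = 0; y <= height; y += STEP) {       // upwards
            for (double x = 0; x <= width; x += STEP) {    // to the right
                if (isInsideBuilding(x, y)) grid.add(new double[] { x, y });
            }
        }
        return grid;
    }

    public static void main(String[] args) {
        // 31x31 grid points minus the 10x10 excluded upper-left block
        System.out.println(buildGrid(30, 30).size());
    }
}
```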

\subsubsection{Direction}
The direction is not taken into account in any of the algorithms, which may explain some of the high mean errors.

\section{Experiments}

\subsection{Model-based algorithm}
Introducing unhearable APs in the model-based algorithm initially produced results that were worse than or identical to those of the unrealistic implementation where APs were always hearable, just with low signal strengths. But after introducing the penalty, as in the fingerprinting-based algorithm, the results improved considerably, as can be seen in the figures.

\subsection{Plot of two $k$-based algorithms}
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{PlottingTwoBasedKManAlg.png}
\caption{A plot of the two $k$-based algorithms that relates different values of $k$ (1-5) to the median accuracy of 100 different runs.}
\end{center}
\end{figure}
The plot can be seen in figure 1. It is generated by the program \texttt{PlottingTwoBasedKManAlg}, which runs with $k$-values from 1 to 7, estimating all online positions with both $k$-based algorithms, and averages the results over 100 runs for each $k$.

It is clear that the model-based algorithm benefits from having a larger $k$, as it converges around 14 m.
The fingerprinting algorithm also seems to benefit a little from higher values of $k$, but these results can be misleading, because the averaging may flatten or raise them. Our results with higher $k$ (around 4--6) show a variance roughly a factor of 2 lower, and a lower standard deviation, than for $k$ around 1--3. So with a higher $k$ we generally get a better estimate, and when it misses, the miss is not as large. The best results seem to occur around $k = 5$, which corresponds well with the articles we have read, which recommend a $k$ around 4.

The story is much the same for the model-based algorithm: even though a higher $k$ generates a better result, it also gives a lower variance and standard deviation. So in this scenario the algorithm generally benefits from averaging the $k$ positions.

\subsection{Evaluation of radiomap}
The radiomap for the fingerprint-based algorithm was given as the offline trace. For the model-based solution we generated our own map in the local coordinate system from this equation:
\begin{equation}
	P(d)\,[\mathrm{dBm}] = P(d_0)\,[\mathrm{dBm}] - 10 \cdot n \cdot \log_{10}\!\left(\frac{d}{d_0}\right)
\end{equation}
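Equation (1) can be implemented directly, using our estimated parameters $n = 3.415$ and $P(d_0) = -33.77$ dBm (the reference distance $d_0 = 1$ m is an assumption made for this sketch):

```java
// Sketch of the log-distance path-loss model from equation (1).
public class PathLoss {
    static final double N = 3.415;     // estimated path-loss exponent
    static final double P_D0 = -33.77; // estimated reference power in dBm at d0
    static final double D0 = 1.0;      // assumed reference distance in metres

    // Predicted signal strength in dBm at distance d (metres) from the AP.
    public static double predictedDbm(double d) {
        return P_D0 - 10 * N * Math.log10(d / D0);
    }

    public static void main(String[] args) {
        System.out.println(predictedDbm(1.0));  // -33.77 at the reference distance
        System.out.println(predictedDbm(10.0)); // -33.77 - 34.15 = -67.92
    }
}
```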
As seen in figure 2, the online radiomap is a plot of the signal strengths at various distances from the AP. It makes sense that this curve is smooth, since each dBm value is calculated from (1). We have not taken the wall attenuation factor (WAF) into account, although there might be a few cases where the signal would have been weaker because the building blocks it.

The other line, the offline radiomap, shows the actual measurements made in the building. This curve is not smooth: a measurement made in a room with an open door, for instance, can give a stronger signal than a measurement made in the same room with the door closed.

The reason why the online radiomap shows lower dBm values at some distances could be that we have estimated $n = 3.415$ and $P(d_0) = -33.77$ dBm. If these are not the correct values, our curve may be somewhat off, since both parameters enter directly into (1).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=12cm]{radiomap.png}
\caption{A plot relating the signal strengths at various points to their distance from the measured AP.}
\end{center}
\end{figure}

\subsection{Error distribution for the four algorithms}
Figure 3 shows the error distributions for the four algorithms. Fingerprinting and fingerprinting3NN follow each other closely up to around the 78\% confidence level; after that, fingerprinting3NN outperforms the other. This is due to the averaging of coordinates in the 3NN version: it avoids some of the larger errors in the estimate, but it also averages away the direct hits, so it does not hit as precisely as the version without $k$.

The model-based algorithms show some of the same patterns as the fingerprinting-based ones. The NN version starts off better, but from around 10\% the 3NN version is generally better. The reason the model-based 3NN does not suffer from the same problem may be seen in the previous graph, which plots the theoretical signal strength slightly higher than the real signal strength; the averaging can sometimes hide these kinds of errors.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm]{errorDistributionPlots.png}
\caption{A plot with error distribution for each of the four algorithms with five online samples and $k=3$.}
\end{center}
\end{figure}

\subsection{Plot for the four algorithms that relates the number of online samples to the median accuracy}
This plot can be seen in figure 4. It is generated by running the four algorithms on 1 through 10 online samples. The distances between the real and the estimated geographical positions are calculated; when more than one sample is used, the average distance is calculated for each algorithm instead. In the figure, $k$ is set to 3.
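The error computation described above can be sketched as follows (illustrative Java; the class and method names are not from the project):

```java
// Sketch of the per-algorithm error: the Euclidean distance between each
// real and estimated position, averaged when several online samples are used.
public class PositionError {
    public static double error(double[] real, double[] estimated) {
        double dx = real[0] - estimated[0];
        double dy = real[1] - estimated[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    // Average error over several (real, estimated) position pairs.
    public static double averageError(double[][] real, double[][] estimated) {
        double sum = 0;
        for (int i = 0; i < real.length; i++) sum += error(real[i], estimated[i]);
        return sum / real.length;
    }

    public static void main(String[] args) {
        double[][] real = { {0, 0}, {3, 4} };
        double[][] est  = { {3, 4}, {3, 4} };
        System.out.println(averageError(real, est)); // (5 + 0) / 2 = 2.5
    }
}
```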

The precision of the estimates depends highly on which samples are chosen and on how many. In general, the more samples chosen (and the better their quality), the higher the probability that the estimates reflect reality.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm]{RelateOnlineSamplesToMedianAccuracy.png}
\caption{A plot with the four algorithms with 1-10 samples and $k=3$.}
\end{center}
\end{figure}

\end{document}
