%
% Results and discussion
%

\section{Evaluation of robot performance}
\subsection{Wall following}
Wall following worked well as a stand-alone component. In particular, the robot followed a straight wall in a straight manner (not too wobbly), which was important for our mapping to be accurate.
\\

In narrow areas, however, the robot had trouble getting through and turned away instead.
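The straight-line behaviour described above boils down to a distance-keeping control loop on the side sensor. As a rough illustration only (this is not our actual node; the gains, names, and update rule are all assumed), a proportional-derivative controller might look like:

```python
# Hypothetical sketch of a wall-following controller: steer based on the
# error between the measured side distance and a target distance.
# All gains and values here are illustrative assumptions.

TARGET_DIST = 0.15   # desired distance to the wall (m), assumed value
KP = 2.0             # proportional gain, assumed value
KD = 0.5             # derivative gain damps oscillation ("wobble")

def steering_command(side_dist, prev_error, dt=0.1):
    """Return (angular_velocity, error) from one side-sensor reading."""
    error = side_dist - TARGET_DIST
    derivative = (error - prev_error) / dt
    # Positive error (too far from the wall) -> steer toward the wall.
    angular = -(KP * error + KD * derivative)
    return angular, error

# Crude simulation: the robot starts too far out and drifts toward
# the target distance over repeated control ticks.
dist, err = 0.25, 0.0
for _ in range(50):
    ang, err = steering_command(dist, err)
    dist += 0.02 * ang  # toy kinematic update for illustration
```

The derivative term is what keeps the motion from being wobbly: reacting to the raw error alone tends to overshoot and oscillate around the target distance.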

\subsection{Mapping and localization}
The most thoroughly tested mapping scenario was driving laps around the inner part of the second maze; this case was therefore fairly stable.

Basing the map node's decisions purely on the current state has one big drawback, which was eventually identified as a real problem but never completely fixed: it assumes the robot is doing exactly what each state prescribes. If the robot's actual movement in a state differs from what that state dictates, the estimated position drifts. Concretely, the robot did not always stop after a stop message was sent to the motors. Had this been discovered earlier, the position estimate could definitely have been made more robust.
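The drift mechanism can be shown with a toy dead-reckoning loop (hypothetical state names and step size, not the real map node):

```python
# Minimal sketch of the failure mode: the map node advances the
# estimated position assuming the commanded state was executed
# perfectly. State names and the update rule are illustrative.

STEP = 0.05  # distance assumed travelled per tick in FORWARD state (m)

def update_position(pos, state):
    """Dead-reckoning update that trusts the commanded state."""
    x, y = pos
    if state == "FORWARD":
        return (x + STEP, y)
    return (x, y)  # e.g. STOPPED: assume the robot stands still

# If the motors ignore one stop command, the real robot keeps moving
# while the estimate stays put, and the two permanently disagree.
estimate = real = (0.0, 0.0)
commands = ["FORWARD", "FORWARD", "STOPPED", "STOPPED"]
executed = ["FORWARD", "FORWARD", "FORWARD", "STOPPED"]  # stop missed once
for cmd, actual in zip(commands, executed):
    estimate = update_position(estimate, cmd)
    real = update_position(real, actual)

offset = real[0] - estimate[0]  # one full STEP of unrecoverable drift
```

Feeding back actual motion (e.g. encoder readings) instead of the commanded state would bound this error; trusting the command makes every missed stop permanent.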
\\

After one lap, when the robot tried to localize and adjust its position against the mapped walls, this did not work in all special cases, for example when the estimated position had passed through a wall. Other cases of mapping while wall following were also largely untested.

\subsection{Image recognition}
Tag detection was fast for small images, and as long as no other heavy computations were running on the robot, it stopped close to the tag. For a more detailed image, the robot needed to back up to get a better picture; this only worked when the stop signals actually reached the motors, so the same tag was sometimes detected multiple times.
\\

The SURF algorithm copes well with different viewing angles, rotation, scale, and lighting, and reasonably well with affine transformations of the image. One problem with applying it to the given tags is that they are relatively simple and possess only a few interesting key points, which the algorithm needs to work well. Because SURF is designed to be robust to lighting differences, the colour recognition was not very reliable: we used the individual RGB channels by themselves as three greyscale images, and to the algorithm these look a little like different shadings of the same template. The program was also quite slow on the robot; it took almost 30 seconds to identify one tag.
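The channel-splitting step is simple to show in isolation. Below is a toy NumPy sketch (a made-up 2$\times$2 "tag", not our actual pipeline) of treating each RGB channel as its own greyscale image, plus the kind of direct colour cue that an intensity-invariant matcher like SURF cannot provide:

```python
# Toy illustration of using RGB channels as separate greyscale images.
# The tiny "tag" array and all names are illustrative assumptions.
import numpy as np

# A 2x2 mostly-red tag image (H x W x RGB).
tag = np.array([[[200, 10, 10]] * 2] * 2, dtype=np.uint8)

# Split into three single-channel greyscale images.
red, green, blue = tag[..., 0], tag[..., 1], tag[..., 2]

# To a lighting-invariant matcher these three look like shadings of the
# same shape; a per-channel mean is a more direct colour cue.
means = [red.mean(), green.mean(), blue.mean()]
dominant = ["red", "green", "blue"][int(np.argmax(means))]
```

A dedicated colour classifier along these lines, run alongside the shape matching, is the kind of combination suggested below.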
\\

There are probably better methods, or combinations of methods, for recognizing the tags than SURF, partly because the tags' low level of detail is not ideal for this method. The colour recognition suffered in particular, since SURF is better suited to spatial differences in the images than to colour differences. Because SURF worked fairly well for shape recognition but poorly for colour recognition, a good solution could be to combine it with another method that identifies just the colour.
A function that crops the scene image could have improved speed, since a much smaller area would need to be considered. To make it even more efficient, the SURF key-point descriptors for the templates could have been saved to disk instead of being recalculated every time the program runs.
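The caching idea can be sketched as a load-or-compute helper (hedged sketch: `compute_descriptors` is a stand-in for the real SURF extraction, and the file name is made up):

```python
# Sketch of caching template descriptors: compute once, save to disk,
# and reload on later runs. The descriptor function is a placeholder
# for the actual SURF extraction.
import os
import tempfile
import numpy as np

def compute_descriptors(image):
    # Stand-in for SURF detectAndCompute on the template image.
    return image.astype(np.float32).reshape(-1, 4)

def load_or_compute(image, cache_path):
    if os.path.exists(cache_path):
        return np.load(cache_path)       # fast path on later runs
    desc = compute_descriptors(image)    # slow path, first run only
    np.save(cache_path, desc)
    return desc

cache = os.path.join(tempfile.gettempdir(), "template_desc_demo.npy")
if os.path.exists(cache):
    os.remove(cache)  # make this demo deterministic

template = np.arange(16, dtype=np.uint8).reshape(4, 4)
d1 = load_or_compute(template, cache)  # computes and saves
d2 = load_or_compute(template, cache)  # loads from the cache
```

Since the templates never change at runtime, the cache never needs invalidating, which makes this one of the cheapest possible speedups.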
\\

We had several problems while developing the program, chief of which was a bug (http://code.opencv.org/issues/1911) in OpenCV 2.4 that forced us to spend a long time recompiling and reinstalling an older version of OpenCV.

\subsection{Path planning}
Unfortunately, path following never really worked on the robot. The map, the path/field, and the movement all existed individually, but the communication between the relevant nodes somehow did not get through.

\subsection{All in all}
The complete system did not work well. Part of the reason is probably the heavy message and computation load on the robot, which made it generally slower to react, among other things.


\section{Conclusions and things that could have been done differently}
We took too relaxed and too basic an approach in the beginning. Focusing only on the essentials of the current milestone led us to redoing large parts of wall following over and over without making progress on anything else in the meantime.
\\

In the end, every part was more or less finished, and the integration between components was well thought through and implemented. However, it would have taken more testing, debugging, and tweaking to get the whole system to work.
\\

The greatest bottleneck, which always appeared as soon as we started testing on the robot, was problems with the basic nodes. For example, we would get \emph{stop failed}, \emph{read adc failed}, and similar errors at critical points, possibly caused by overload of the processor or of the messages being passed around.
\\

Lessons learned: 
\begin{itemize}
	\item Face basic problems as soon as they occur
	\item Do not take it too easy, think further than the next milestone
	\item Try to run small code iterations as soon as possible on the robot
\end{itemize}