\section{Analysis}
\subsection{Strengths}
Several components worked well, or better than expected:
\begin{itemize}	
	\item IR sensors. These were more accurate than we initially assumed, and very fast, giving us a good picture of the distances around the robot. Because the distance measurements of all IR sensors were published together, every component that relied on distance information could retrieve it easily.
	\item Mapping. The rviz package, combined with position estimates and sensor readouts, allowed us not only to build a map of the environment as we drove through it, but also to visualize it in a relatively simple manner. The path-finding component could then use this map to provide waypoints for our navigation behavior, which integrated seamlessly into a well-functioning navigation controller.

In fact, due to problems with the exploration behavior, we had very little time to test the navigation node; but contrary to what one might expect of such a complex component, it worked almost flawlessly on the first try, requiring only minor adjustments to the waypoint check-off distances.
	\item Tag detection. While our tag identification is not perfect, simply finding the red squares was easy, fast and reliable. We did not encounter a single false positive inside the maze, and very few false negatives. Because the tag detection algorithm works with relative sizes and distances, it is independent of resolution, and we saw no significant performance degradation at lower resolutions. Since most of the work is performed on simple binary images, memory and CPU requirements stayed well within acceptable levels.
	\item Behavior selection. Because the behaviors were simple and selected automatically, we were able to use the wall-following and obstacle-avoidance behaviors from the first milestone throughout the project, with very few adjustments. Moreover, because these were well-tested components, we did not have to worry about bumping into walls while writing the more complex exploration and navigation behaviors. This greatly facilitated work on the more complex tasks.
\end{itemize}
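The waypoint check-off behavior described above can be sketched in a few lines. The following is a minimal Python sketch, assuming a 2D pose and a tunable check-off distance; the \texttt{0.15}\,m default and all names here are illustrative assumptions, not the values used on the robot:

```python
import math

def follow_waypoints(pose, waypoints, checkoff_dist=0.15):
    """Return the current target waypoint, checking off any that are reached.

    pose: (x, y) current position estimate; waypoints: list of (x, y) targets.
    A waypoint within checkoff_dist metres is considered reached and dropped.
    Returns None when the whole path has been checked off.
    (Illustrative sketch: names and the 0.15 m threshold are assumptions.)
    """
    while waypoints:
        tx, ty = waypoints[0]
        if math.hypot(tx - pose[0], ty - pose[1]) < checkoff_dist:
            waypoints.pop(0)  # close enough: check this waypoint off
        else:
            return waypoints[0]  # still en route to this waypoint
    return None
```

Tuning \texttt{checkoff\_dist} trades precision against robustness: too small and odometry noise prevents waypoints from ever being checked off, too large and corners get cut.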
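The resolution independence of the tag detector comes from reasoning in relative sizes on a binary image. The following is a minimal sketch of that idea, assuming a thresholded red mask as input; the acceptance thresholds and all names are illustrative assumptions, not the report's actual values:

```python
def find_square_tags(mask, min_fill=0.001, max_fill=0.5, squareness=0.5):
    """Scan a binary mask (list of rows of 0/1) for roughly square blobs.

    Only ratios are used (blob size relative to the image, width relative
    to height), so the behaviour is independent of image resolution.
    Returns bounding boxes (x0, y0, x1, y1) of accepted candidates.
    (Illustrative sketch: thresholds are assumptions.)
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one connected blob, tracking its bounding box
                stack, x0, y0, x1, y1 = [(x, y)], x, y, x, y
                seen[y][x] = True
                while stack:
                    cx, cy = stack.pop()
                    x0, y0 = min(x0, cx), min(y0, cy)
                    x1, y1 = max(x1, cx), max(y1, cy)
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                bw, bh = x1 - x0 + 1, y1 - y0 + 1
                fill = (bw * bh) / (w * h)         # size relative to the image
                ratio = min(bw, bh) / max(bw, bh)  # 1.0 for a perfect square
                if min_fill <= fill <= max_fill and ratio >= squareness:
                    boxes.append((x0, y0, x1, y1))
    return boxes
```

Because the filter accepts blobs by fill ratio and aspect ratio rather than absolute pixel counts, the same thresholds work at any camera resolution, which matches the lack of degradation we observed at lower resolutions.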

\subsection{Weaknesses}
Unfortunately, not everything works as well as one would like. Areas where we know our robot does not excel include:

\begin{itemize}
	\item Tag identification. As our performance analysis showed, the tag recognition rate was not bad overall, at about 80\% under good conditions. However, we know this number could be improved significantly. Worse, color detection yields even lower rates, and is less capable of determining when it should refrain from giving an opinion, resulting in more incorrect color classifications. Under contest conditions, this part would not have earned us many points.
	\item Exploration. The exploration behavior was initially designed to track where the robot was, where it had been, which spaces were open, and which directions it had faced. This worked very well in that it explored the maze thoroughly. Sadly, it also meant the robot would spend five minutes exploring a single corridor, repeatedly going back and forth between positions it had already seen. We suspect this behavior still contained bugs during phase 1 of the contest.

In the end, as a last-resort measure, we rewrote the exploration behavior to perform simple wall following. This proved more effective in that a much larger part of the maze was explored in the available time, although the strategy was not sophisticated enough to reach the inner disjoint regions.
	\item Localization. Our plans included a particle filter capable of augmenting the robot's odometry as we drove. In practice, the odometry proved to be more reliable and accurate than the particle filter's position estimate. Unfortunately, due to time constraints, we were never able to fully explain or remedy this.
\end{itemize}
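The simple wall-following rewrite mentioned above can be illustrated with a single control step. The following is a minimal sketch, assuming right-hand following on IR range readings in metres; the target distances and command names are illustrative assumptions, not the values from our controller:

```python
def wall_follow_step(front_dist, right_dist, target=0.25, margin=0.05):
    """One control step of right-hand wall following.

    front_dist / right_dist are IR range readings in metres.
    Keeps the right-hand wall within a band of target +/- margin.
    (Illustrative sketch: distances and command names are assumptions.)
    """
    if front_dist < target:           # wall ahead: turn away from it
        return "turn_left"
    if right_dist > target + margin:  # lost the wall: steer back toward it
        return "turn_right"
    if right_dist < target - margin:  # too close to the wall: ease away
        return "turn_left"
    return "forward"                  # inside the band: drive straight
```

Because each step depends only on the current readings, this behavior needs no map or position history, which is precisely why it was robust enough to serve as a fallback, and why it cannot reach regions disjoint from the outer wall.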


\subsection{Bottlenecks}\label{sec:analysis_bottlenecks}
Since the robot's performance during phase 2 relied heavily on an accurate and complete map, the main bottlenecks for the given task were the exploration behavior (which, in its original, complex form, never functioned as it should have, while its much simpler reduced form never covered the entire maze) and poor localization due to the defective particle filter. With better localization, the map used during phase 2 would have been more accurate, and the navigation node would have been better able to correct its position, preventing the robot from bumping into walls when its starting position did not exactly match the original one.

Overall, though, the main bottleneck can be summarized simply as a lack of development and testing time. Given another couple of weeks to thoroughly test and optimize the existing functionality, it is the firm belief of this author\footnote{Matthijs Dorst, at your service} that our robot would have performed very well in the competition. Of course, the same can almost certainly be said of all our competitors. The overall design of the robot, in both software and hardware, seems sound; only a few components require further tweaking to be up to par.











