\section{Performance}
\label{sec:performance}


\subsection{Camera Performance: Tag Recognition}

\subsubsection{Test Conditions}

The test used to evaluate tag recognition performance consisted of placing a tag on a white wall, made of the same material as the walls of the maze, at a distance of 23 cm from the camera. This is illustrated in figure \ref{fig:testcond}.

For each tag, five captures and recognition attempts were made.

			\begin{figure}[!ht]
				\centering
				\includegraphics[width=0.49\textwidth]{img/tag_performance_method.jpg}
				\caption{Tag performance measurement method}
				\label{fig:testcond}
			\end{figure}


\subsubsection{Results}

The results of these tests are condensed in tables \ref{tab:blacktags} through \ref{tab:greentags}.

\renewcommand{\arraystretch}{1.3}
\begin{table}[htbp]
  \centering 
  \begin{tabular}{l | c c c c c }
	\toprule
	\multicolumn{6}{c}{Color: Black}  \\ 
	\cmidrule{2-6}
    Tag & Test 1 & Test 2 & Test 3 & Test 4 & Test 5 \\ 
    \midrule
    Apple & Failed & Failed & Failed & Failed & Failed \\ 
    Cup & Red & Green & Black & Red & Red \\ 
    Bear & Green & Black & Black & Black & Red \\ 
    Glasses & Red & Red & Red & Red & Red \\
    Dryer & Failed & Red Laptop & Failed & Red Laptop & Red  Laptop\\ 
    Laptop & Black & Black & Green & Black Dryer & Green \\
    Glass & Red & Red & Insuf & Red & Red \\ 
    Hammer & Red & Red & Red & Red & Red \\ 
    Banana & Black & Black & Red & Black & Black \\ 
    Book & Red & Red & Failed & Black & Red \\ 
    Scissors & Red & Red & Red & Red & Red \\
    Camera & Red & Red & Red & Red & Red  \\ 
    \bottomrule
  \end{tabular}
  \caption{Results of the test for the 12 tags, of color black.}
  \label{tab:blacktags}
\end{table}
 
 
\begin{table}[htbp]
  \centering
  \begin{tabular}{@{} l |  c c c c c @{}}
    \toprule
	\multicolumn{6}{c}{Color: Blue}  \\ 
	\cmidrule{2-6}
    Tag & Test 1 & Test 2 & Test 3 & Test 4 & Test 5 \\ 
    \midrule
    Apple & Failed & Failed & Failed & Failed & Failed \\ 
    Cup & Blue & Blue & Blue & Blue & Blue \\ 
    Bear &Blue & Blue & Blue & Blue & Blue \\ 
    Glasses & Red & Red & Red & Red & Red \\
    Dryer & Failed & Black  & Black & Failed & Black\\ 
    Laptop & Failed & Blue & Blue & Blue Scissors & Failed \\
    Glass & Failed & Failed & Blue & Blue & Blue \\ 
    Hammer & Failed & Failed & Dryer & Red Dryer & Dryer \\ 
    Banana &Blue & Blue & Blue & Blue & Blue \\ 
    Book & Blue Laptop & Blue Cup & Blue Laptop & Blue Cup & Blue Cup \\ 
    Scissors & Blue & Blue & Blue & Blue & Blue \\
    Camera & Blue & Blue & Blue & Blue & Blue  \\ 
    \bottomrule
  \end{tabular}
  \caption{Results of the test for the 12 tags, of color blue.}
  \label{tab:bluetags}
\end{table}


\begin{table}[htbp]
  \centering
  \begin{tabular}{@{} l |  c c c c c @{}}
    \toprule
	\multicolumn{6}{c}{Color: Red}  \\ 
	\cmidrule{2-6}
    Tag & Test 1 & Test 2 & Test 3 & Test 4 & Test 5 \\ 
    \midrule
    Apple & Failed & Failed & Failed & Failed & Failed \\ 
    Cup &Red & Red & Red & Red & Red \\ 
    Bear & Failed & Red & Red & Failed & Red Cup \\ 
    Glasses & Red & Red & Red & Red & Red \\
    Dryer & Failed & Failed &Failed &Failed & Red\\ 
    Laptop & Red & Red & Red & Red & Red Scissors\\
    Glass & Red & Red & Red & Red & Red \\ 
    Hammer & Failed & Failed & Failed & Failed & Failed \\ 
    Banana & Red & Red & Red & Red & Red \\ 
    Book & Red & Red & Red & Red & Red\\ 
    Scissors &Red & Red & Red & Red & Red\\
    Camera & Red & Red & Red & Red & Red  \\ 
    \bottomrule
  \end{tabular}
  \caption{Results of the test for the 12 tags, of color red.}
  \label{tab:redtags}
\end{table}


\begin{table}[htbp]
  \centering
  \begin{tabular}{@{} l |  c c c c c @{}}
    \toprule
	\multicolumn{6}{c}{Color: Green}  \\ 
	\cmidrule{2-6}
    Tag & Test 1 & Test 2 & Test 3 & Test 4 & Test 5 \\ 
    \midrule
    Apple & Failed & Failed & Failed & Failed & Failed \\ 
    Cup & Green & Green & Green & Green & Green \\ 
    Bear & Green & Green & Green & Green & Green\\ 
    Glasses &Green & Green & Green & Green & Green \\
    Dryer &Failed & Failed & Failed & Failed & Failed\\ 
    Laptop & Green & Green & Green & Green & Green Cup \\
    Glass & Green & Green & Green & Green & Failed \\ 
    Hammer & Green & Green & Green & Green & Green \\ 
    Banana &Green & Green & Green & Green & Green\\ 
    Book &Green & Green & Green & Green & Green \\ 
    Scissors &Green & Green & Green & Green & Green\\
    Camera & Green & Green & Green & Green & Green  \\ 
    \bottomrule
  \end{tabular}
  \caption{Results of the test for the 12 tags, of color green.}
  \label{tab:greentags}
\end{table}
   
   \subsubsection{Analysis of Results}
   
   It should be noted that five recognition attempts per tag are too small a sample for a frequency-based probability analysis of each individual tag. Instead, each test is treated as an independent sample, so that effectively 240 tags were tested (5 attempts for each of the 12 differently shaped tags, in each of the 4 colors).
   
   From the above and tables \ref{tab:blacktags}, \ref{tab:bluetags}, \ref{tab:redtags} and \ref{tab:greentags}, the following analysis can be made:\\
   
   
   \textbf{For the black color}:
 	  \begin{itemize}

		\item  Percentage of tags totally correct: $17\%$

		\item  Percentage of shapes correctly identified: $77\%$

	  \end{itemize}

Whenever the shape was mismatched, the color was too, so the percentage of completely correct tags coincides with the percentage of correctly detected colors.\\

   
   
   \textbf{For the blue color:}
   	\begin{itemize}

		\item  Percentage of tags totally correct: $42\%$

		\item  Percentage of shapes correctly identified: $63\%$

		\item  Percentage of color correctly identified: $52\%$

	\end{itemize}
	

   
   \textbf{For the red/pink color}:
	
	 \begin{itemize}

		\item  Percentage of tags totally correct: $70\%$

		\item  Percentage of color correctly identified: $73\%$

	\end{itemize}
The number of correct shapes coincides in this case with the totally correct tags. Mismatching shapes occurred simultaneously with correct color.\\


   
   \textbf{For the green color:}
   \begin{itemize}

	\item  Percentage of tags totally correct: $80\%$

	\item  Percentage of color correctly identified: $82\%$

	\end{itemize}

   As in the red case, the correct shapes coincide with the totally correct tags.\\
   
   With this, under the testing conditions the best color for detection was green, followed by red, then blue, with black performing worst.
   
   It should also be noted that the tests hint at a systematic error of black tags being classified as red: of the black tags whose color was misclassified, $60\%$ were classified as red.\\
   
   Now, taking a look at the overall numbers:
   
   
\begin{itemize}

	\item  Percentage of tags totally correct: $52\%$

	\item  Percentage of shapes correctly identified: $71\%$

	\item  Percentage of color correctly identified: $56\%$

\end{itemize}
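Since the 240 pooled tests form a reasonable sample, a rough confidence interval can be attached to the overall success rate. The sketch below uses a normal (Wald) approximation; it is not part of the original analysis, only an illustration of the uncertainty involved:

```python
import math

def wald_interval(p, n, z=1.96):
    """Approximate 95% confidence interval for a proportion
    (normal approximation; adequate for n = 240 and p near 0.5)."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Overall "totally correct" rate: 52% over 240 pooled tests.
low, high = wald_interval(0.52, 240)
print(f"95% CI: [{low:.3f}, {high:.3f}]")
```

For the $52\%$ figure this gives roughly $[0.46, 0.58]$, i.e.\ about $\pm 6$ percentage points of uncertainty on the overall rate.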

From these numbers it can be argued that the robot has robust shape detection in general, but only average color recognition. The main problem was the black-to-red mismatch for the black tags.

Also, the Apple was never recognized (neither shape nor color --- the robot remains Linux all the way), followed by the Dryer and the Hammer, which had the lowest recognition rates among the remaining tags.


We assume there was a specific problem with the Apple tag, since its shape should be easier to recognize than the Glass or the Hammer. Removing it from the statistics therefore gives better results:


\begin{itemize}

	\item  Percentage of tags totally correct: $57\%$

	\item  Percentage of shapes correctly identified: $77\%$

	\item  Percentage of color correctly identified: $61\%$

\end{itemize}
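The adjusted figures follow directly from the counts behind the overall percentages: the Apple accounts for 20 of the 240 tests, all failed, so removing it rescales every count over a 220-test denominator. A minimal check (the counts below are reconstructed from the rounded overall percentages, so they are approximate):

```python
TOTAL, APPLE_TESTS = 240, 20  # the Apple tag failed all 20 of its tests

# Counts reconstructed from the rounded overall percentages of 240 tests:
# 52% totally correct, 71% shapes correct, 56% colors correct.
counts = {"totally correct": 125, "shapes": 170, "colors": 134}

for name, n in counts.items():
    adjusted = 100 * n / (TOTAL - APPLE_TESTS)
    print(f"{name}: {adjusted:.0f}%")
```

This reproduces the $57\%$, $77\%$ and $61\%$ quoted above.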

\subsection{Path Planning and SLAM}
  During testing of path planning and SLAM, we usually obtained maps sufficiently good for navigation. They do, however, sometimes become ``corrupt'' in the sense that a stray wall segment blocks part of a corridor, as shown in figure \ref{fig:blocking_wall_segments}. This could be solved in several different ways, but we currently have no workaround for maps like this --- the path finder will not force its way through the inflated area.

  \begin{figure}[!ht]
    \centering
    \includegraphics[width=0.49\textwidth]{img/blocking_wall_segments.png}
    \caption{Map with blocked corridor}
    \label{fig:blocking_wall_segments}
  \end{figure}
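The path finder's refusal to cross the inflated area can be illustrated with a toy grid search. The sketch below is not our actual planner (which runs on the real occupancy grid), only a minimal BFS showing how a single stray wall cell, once treated as impassable, seals off a corridor:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """4-connected BFS; cells with value 1 (occupied or inflated) are
    impassable, so a corridor sealed by a stray segment yields no path."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None  # no path: the planner will not force its way through

# A corridor sealed by a stray (inflated) wall cell at column 2:
corridor = [[1, 1, 1, 1, 1],
            [0, 0, 1, 0, 0],
            [1, 1, 1, 1, 1]]
print(bfs_path(corridor, (1, 0), (1, 4)))  # None: corridor is blocked
```

Clearing the stray cell (setting it back to 0) immediately restores a path, which matches the observation that only the corrupt maps are problematic.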

  However, when a map without issues is generated, as in figure \ref{fig:without_blocking_wall_segments}, the robot navigates through the corridors and follows the list of waypoints returned by the path planning service. We do not compute an optimal visiting order but simply grab the tags in the order they were detected. The robot goes from waypoint to waypoint, pausing to take a snapshot of the tag at the final location. This functionality is stable and works well, given a correct map.

  \begin{figure}[!ht]
    \centering
    \includegraphics[width=0.49\textwidth]{img/map_without_blocking_wall_segments.png}
    \caption{Clean map without blockages}
    \label{fig:without_blocking_wall_segments}
  \end{figure}
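The waypoint-following behaviour described above can be sketched as a simple loop over the planner's output. This is a simulated point robot, not the real controller, and the step size and 0.1\,m arrival tolerance are assumptions for illustration:

```python
import math

def follow_waypoints(start, waypoints, step=0.05, tol=0.1):
    """Visit waypoints in the order given (no optimal ordering), stepping
    a simulated point robot towards each until within `tol` metres."""
    x, y = start
    reached = []
    for wx, wy in waypoints:
        while math.hypot(wx - x, wy - y) > tol:
            heading = math.atan2(wy - y, wx - x)
            x += step * math.cos(heading)
            y += step * math.sin(heading)
        reached.append((wx, wy))  # here the real robot pauses for a snapshot
    return (x, y), reached

pose, reached = follow_waypoints((0.0, 0.0), [(1.0, 0.0), (1.0, 1.0)])
```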

  As evident in figures \ref{fig:blocking_wall_segments} and \ref{fig:without_blocking_wall_segments}, there is also a systematic angular drift in the odometry. We attempt to correct this drift by looking at the wall alignment, but the correction is insufficient (or faulty).
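The wall-alignment idea can be sketched as snapping the odometry heading to the nearest maze-wall direction. This is a simplified stand-in for our actual correction, and the 10-degree tolerance is an assumed parameter:

```python
import math

def snap_heading_to_walls(theta, tol_deg=10.0):
    """If the estimated heading is within `tol_deg` of a wall direction
    (a multiple of 90 degrees), snap it there to cancel angular drift."""
    quarter = math.pi / 2
    nearest = round(theta / quarter) * quarter
    if abs(theta - nearest) <= math.radians(tol_deg):
        return nearest
    return theta  # too far off any wall direction: leave it untouched

corrected = snap_heading_to_walls(math.radians(95.0))  # drifted past 90 deg
```

A correction of this kind only acts when the robot is already nearly wall-aligned, which may be one reason the residual drift visible in the figures is not fully removed.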