%
% Tag detection, image recognition and maybe dealing with the late images.
%

\section{Image recognition}
The image recognition was divided into two parts: first the tag was detected in the image, and in a second step it was classified.

\subsection{Tag detection}
Before a tag can be classified, it must first be detected.
The idea is to detect the tag and then stop in front of it. Stopping in front of the tag was needed to take a better picture and to free all computation power for the image recognition, avoiding interference with the basic computations (encoder reading, ADC reading).
\\

To detect a tag, each captured image is searched for red pixels.
The image is therefore converted to the HSV colour space and filtered for red. If the image contains more than $30$ red pixels, a tag is considered detected. The threshold of $30$ was chosen to avoid triggering on small patches of random red noise.
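The detection step above can be sketched as follows. This is a minimal illustration in pure Python/NumPy; the exact hue, saturation and value bounds for "red" are assumptions, since the report only states that the image is filtered for red in HSV (in practice a library routine such as OpenCV's \texttt{inRange} would be used).

```python
import colorsys
import numpy as np

RED_PIXEL_THRESHOLD = 30  # threshold from the report, chosen to suppress noise

def is_red(r, g, b):
    """Classify an 8-bit RGB pixel as 'red' in HSV space.
    The hue/saturation/value bounds here are illustrative assumptions."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # red hue wraps around 0 (and 1); require strong saturation and brightness
    return (h < 0.03 or h > 0.97) and s > 0.5 and v > 0.3

def tag_detected(image):
    """image: HxWx3 uint8 RGB array. A tag is detected when more than
    RED_PIXEL_THRESHOLD pixels are classified as red."""
    red_count = sum(
        is_red(*image[y, x])
        for y in range(image.shape[0])
        for x in range(image.shape[1])
    )
    return red_count > RED_PIXEL_THRESHOLD
```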

\subsection{The Image Recognition Problem}
The task of recognizing a tag consisted of two parts: identifying the object within the tag and identifying the object's colour. There were 12 different object shapes and 4 different colours, and in addition there was an empty tag containing no object.

\subsection{Our Image Recognition Method}
We assume that the robot is moving while taking pictures at a certain frequency and publishing them directly. From those images, the first step is to decide whether a tag appears in the picture and, if so, to determine its location. To complete the recognition, both the shape and the colour have to be identified.

\subsubsection{Idea}
The node that decides whether there is a tag in the picture constantly listens to the image publisher node. This node has to be fast enough not to disturb the other nodes and still be able to fulfil its task on all published images. When it identifies a tag, it sends a command that stops the robot so that a good picture can be taken. That picture is then processed in the image recognition node for analysis. Because the camera angle towards the tag and the apparent tag size may vary considerably, the idea is to use a feature-based method, which should be invariant to differences in angle and scale, as well as to differences in illumination. A set of template images is then compared with the tag image. This set consists of all possible tags that could appear in the maze, i.e. (number of shapes) $\times$ (number of colours).

\subsubsection{The Solution}
First, to make the robot stop, a dedicated node was created that detected whether there was a tag in the camera image. It was designed to react to the overall redness of the image, since the tag frame was red and quite noticeable. The node stopped the robot, but usually with a delay, so the robot ended up too far ahead to take a good picture of the tag. This was resolved by backing the robot up into a better position, where the picture was then taken.
\\

Now the image was sent to the main image recognition node. This node was designed to first recognize the shape on the tag, e.g.\ a dryer or a banana. The method used for this was a feature-based technique called SURF, a development of the SIFT method. SURF uses integral images, which is the main difference compared to SIFT and greatly reduces the amount of computation.
\\

First, the template images and the scene image containing the tag are converted to greyscale. To reduce high-frequency noise, Gaussian smoothing is applied to all images passed to the SURF program. The SURF key descriptors, which describe the features of interest, are then detected for both the template and the scene. The key descriptors from the template image are compared with those from the scene image using a nearest-neighbour technique. This yields a number of matches between the images, as well as a number of key points for each template image. The key point counts are used to compensate for the fact that a template with many details produces more matches with the scene than a template with fewer details, simply because it has more key points. The result is therefore normalized as the ratio between the number of matched key descriptors and the number of key points in the template image.
\\
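The preprocessing step (greyscale conversion and Gaussian smoothing) can be sketched as follows. This is a pure-NumPy illustration; in practice library routines such as OpenCV's \texttt{cvtColor} and \texttt{GaussianBlur} would be used, and the kernel radius of $3\sigma$ is a common convention rather than a value from this project.

```python
import numpy as np

def to_greyscale(rgb):
    """Luminance greyscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian smoothing with edge padding.
    The kernel radius of 3*sigma is an assumed convention."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()  # normalize so constant regions are preserved

    def blur_1d(line):
        padded = np.pad(line, radius, mode='edge')
        return np.convolve(padded, kernel, mode='valid')

    rows = np.apply_along_axis(blur_1d, 1, img)   # horizontal pass
    return np.apply_along_axis(blur_1d, 0, rows)  # vertical pass
```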

\[
\mathrm{result} = \frac{\text{number of key descriptors matched}}{\text{number of key points found in template}}
\]
\\
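The normalized score can be sketched as follows. In the actual system the descriptors come from SURF; here any $(N, D)$ float arrays work. The Lowe-style ratio test with threshold $0.7$ is an illustrative assumption, as the report only mentions nearest-neighbour matching normalized by the template's key point count.

```python
import numpy as np

def match_score(template_desc, scene_desc, ratio=0.7):
    """Normalized match score between two descriptor sets.
    template_desc: (Nt, D) array, scene_desc: (Ns, D) array."""
    if len(template_desc) == 0:
        return 0.0
    matches = 0
    for d in template_desc:
        dists = np.linalg.norm(scene_desc - d, axis=1)
        order = np.argsort(dists)
        best = dists[order[0]]
        second = dists[order[1]] if len(dists) > 1 else np.inf
        if best < ratio * second:  # unambiguous nearest neighbour only
            matches += 1
    # result = matched descriptors / key points in template, as in the formula
    return matches / len(template_desc)
```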

This is done for all templates with black figures, and the template that gives the best match score is chosen as the shape.
\\

Essentially the same method is used to find the colour. The first difference is that only the chosen shape is considered; that is, the templates to match against consist only of the colour variants of that shape, e.g.\ the magenta, blue, black and green teddy. In addition, the RGB colour channels of both the template images and the scene image are kept, since colour is much easier to identify in RGB than in greyscale. Each image is split into three single-channel images, one per colour channel, so that they can be used in the algorithm. The score is then computed as for the shape recognition, but the results of all three channel comparisons have to be taken into account. Green and magenta, for example, were often hard for the algorithm to distinguish, so an answer is only given if the ratio between the best and the second-best match score is large enough. This is because a wrong answer costs penalty points, so if the result is not convincing enough the algorithm should not simply take a chance.
\\
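The abstaining decision rule can be sketched as follows. The \texttt{min\_ratio} value of $1.5$ is an illustrative assumption; the report only requires the best/second-best ratio to be "large enough".

```python
def choose_colour(scores, min_ratio=1.5):
    """Pick a colour only when the decision is confident.
    scores: dict mapping colour name -> match score.
    Returns None (abstain) when the margin between the best and the
    second-best score is too small, since wrong answers cost penalty points.
    The min_ratio default of 1.5 is an illustrative assumption."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_colour, best_score = ranked[0]
    _, second_score = ranked[1]
    if second_score == 0 or best_score / second_score >= min_ratio:
        return best_colour
    return None
```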

In testing, the shape was correctly identified for about 90 \% of the tags. The colour was correctly classified for about 50 \% of the tags (not considering the ratio between the best and the second-best match).

% Problems part?



% Conclusion part
\subsection{Conclusions}
There are probably better methods, or combinations of methods, for recognizing the tags than SURF, partly because the tags contain little detail, which is not ideal for this method. The colour recognition suffered in particular, since SURF is better suited to spatial differences in the images than to colour differences. Because SURF worked quite well for the shape recognition but not for the colour recognition, a good solution could be to combine it with another method that identifies only the colour.
A function that crops the images could have improved the speed, as the scene image to consider would have a much smaller area. To make it even more efficient, the SURF key descriptors of the templates could have been saved instead of being recalculated every time the program runs.