\chapter{Implementation}
\label{sec:implementation}
The implementation is written in C++. The OpenCV library is used for image processing tasks, and the virtual animals controller additionally uses TinyXML and Boost Asio. An application was also implemented that records data from a Kinect device, so that the recorded data can be used to test the accuracy of the gesture recognition with different settings.

\section{OpenCV}
OpenCV, the Open Source Computer Vision library, provides a large number of functions for image processing and computer vision tasks \cite{web:opencv}.

\section{Kinect interface}
To retrieve the two image streams from the Kinect device, two callback functions are defined. While the program is running and a Kinect device is plugged in, these are called whenever a new image is available.
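The callback pattern can be sketched as follows. The type and function names below are illustrative stand-ins; in the actual implementation the handlers are registered through the freenect library's callback-setting functions.

```cpp
#include <cstdint>

// Illustrative stand-ins for the library's callback types: one handler
// for depth frames, one for RGB frames.
using DepthCallback = void (*)(const uint16_t *depth, uint32_t timestamp);
using VideoCallback = void (*)(const uint8_t *rgb, uint32_t timestamp);

static DepthCallback g_depth_cb = nullptr;
static VideoCallback g_video_cb = nullptr;

void set_depth_callback(DepthCallback cb) { g_depth_cb = cb; }
void set_video_callback(VideoCallback cb) { g_video_cb = cb; }

// The device event loop calls these whenever a new frame arrives;
// they forward the buffer to whichever handler was registered.
// Returns true if a handler was registered and invoked.
bool dispatch_depth(const uint16_t *depth, uint32_t timestamp) {
    if (!g_depth_cb) return false;
    g_depth_cb(depth, timestamp);
    return true;
}

bool dispatch_video(const uint8_t *rgb, uint32_t timestamp) {
    if (!g_video_cb) return false;
    g_video_cb(rgb, timestamp);
    return true;
}
```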

\subsection{Test application}
The test application uses pre-recorded data from a Kinect device, so that the algorithms can be tested on the same data with different settings. The recording is done using the \texttt{record} utility distributed with the freenect library, and a small program was written to read the recorded images.
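Assuming the recorded RGB frames are stored as binary PPM (P6) files, a minimal reader might look like the following sketch; the file format is an assumption here, not a statement about what the \texttt{record} utility actually writes.

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Minimal reader for binary PPM (P6) images, the format assumed here
// for the recorded RGB frames. Returns true on success and fills in
// the width, height and raw interleaved RGB pixel buffer.
bool read_ppm(const std::string &path, int &width, int &height,
              std::vector<uint8_t> &pixels) {
    std::ifstream in(path, std::ios::binary);
    std::string magic;
    int maxval = 0;
    if (!(in >> magic >> width >> height >> maxval) || magic != "P6")
        return false;
    in.get();  // consume the single whitespace after the header
    pixels.resize(static_cast<std::size_t>(width) * height * 3);
    in.read(reinterpret_cast<char *>(pixels.data()),
            static_cast<std::streamsize>(pixels.size()));
    return in.gcount() == static_cast<std::streamsize>(pixels.size());
}
```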

\section{Image preprocessing}
In order to extract useful information from the image sequence, the RGB and depth images are processed before they are used by the people detection and gesture recognition classes. The result is an image in which the depth image has been used to remove the background from the RGB image. First, a mask is created from the depth image by filtering out pixels whose depth value is above some threshold. An example of a mask image can be seen in Figure \ref{fig:mask}.
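The mask creation can be sketched as follows, with plain buffers standing in for the OpenCV images of the actual implementation. Treating a zero depth reading as invalid is an assumption made for this sketch.

```cpp
#include <cstdint>
#include <vector>

// Build the binary mask from a depth frame: pixels whose depth lies
// below the threshold are kept (255), everything else is discarded (0).
// A zero depth value is treated as "no reading" and discarded as well
// (an assumption of this sketch).
std::vector<uint8_t> depth_mask(const std::vector<uint16_t> &depth,
                                uint16_t threshold) {
    std::vector<uint8_t> mask(depth.size(), 0);
    for (std::size_t i = 0; i < depth.size(); ++i)
        if (depth[i] > 0 && depth[i] < threshold)
            mask[i] = 255;
    return mask;
}
```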

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig/mask.eps} 
\caption{Result of background removal}
\label{fig:mask}
\end{center}
\end{figure}
The mask is then combined with the RGB image, resulting in an image such as the one shown in Figure \ref{fig:person}. This image is then used for blob detection, making it easy to distinguish the person from the background.
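Combining the mask with the RGB image can be sketched as follows, again with plain buffers standing in for the OpenCV images of the actual implementation.

```cpp
#include <cstdint>
#include <vector>

// Combine the mask with the RGB frame: pixels where the mask is zero
// are set to black, leaving only the foreground person. The rgb buffer
// holds three interleaved bytes per pixel.
std::vector<uint8_t> apply_mask(const std::vector<uint8_t> &rgb,
                                const std::vector<uint8_t> &mask) {
    std::vector<uint8_t> out(rgb.size(), 0);
    for (std::size_t i = 0; i < mask.size(); ++i)
        if (mask[i] != 0)
            for (int c = 0; c < 3; ++c)
                out[3 * i + c] = rgb[3 * i + c];
    return out;
}
```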

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig/person.eps} 
\caption{Result after people detection}
\label{fig:person}
\end{center}
\end{figure}

\section{People detection}
For this implementation, several assumptions are made that simplify the detection of people. It is assumed that the people performing the gestures to be recognised are standing within a given distance from the device, and that no other objects of approximately human size are within this area. These assumptions allow the depth image to be used as a mask that removes objects further away from, or closer to, the device than some threshold.

\subsection{Blobs}
For detection of blobs, i.e. connected components in the image, the cvBlobsLib library is used \cite{web:blob}. It contains functions not only for finding labelled connected components, but also for filtering them and for computing their contours, areas, and various other features.
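The connected-component labelling that cvBlobsLib performs can be illustrated with a simple breadth-first flood fill over a binary mask. This is a sketch of the technique, not the library's actual algorithm.

```cpp
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// Label connected components in a binary mask using 4-connectivity.
// Foreground pixels (non-zero) that touch receive the same label;
// labels start at 1. Returns the number of blobs found and fills in
// the per-pixel label image.
int label_blobs(const std::vector<uint8_t> &mask, int width, int height,
                std::vector<int> &labels) {
    labels.assign(mask.size(), 0);
    int next_label = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            std::size_t idx = static_cast<std::size_t>(y) * width + x;
            if (mask[idx] == 0 || labels[idx] != 0) continue;
            // Unlabelled foreground pixel: start a new blob and flood fill.
            ++next_label;
            std::queue<std::pair<int, int>> frontier;
            frontier.push({x, y});
            labels[idx] = next_label;
            while (!frontier.empty()) {
                std::pair<int, int> p = frontier.front();
                frontier.pop();
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = p.first + dx[d], ny = p.second + dy[d];
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                        continue;
                    std::size_t n = static_cast<std::size_t>(ny) * width + nx;
                    if (mask[n] != 0 && labels[n] == 0) {
                        labels[n] = next_label;
                        frontier.push({nx, ny});
                    }
                }
            }
        }
    }
    return next_label;
}
```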

\subsection{Filtering}
Blobs that are considered too small to be humans are removed using filtering methods from cvBlobsLib. The minimum allowed blob size is set to approximately 20000 pixels, but this can be changed depending on how far away the system is expected to find people and on the smallest children it should be able to detect. After this filtering, the height-width ratio of each remaining blob is checked in order to determine whether it is likely to be a person. The allowed span can be set in a configuration file; if no values are specified, the default span is between 0.75 and 7.5. In Figure \ref{fig:person}, a white rectangle has been drawn around the blob that has been determined to be human.
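The filtering step can be sketched as follows, using the default values described above; the \texttt{Blob} structure is an illustrative stand-in for the cvBlobsLib blob type.

```cpp
#include <vector>

// Illustrative stand-in for a detected blob with its basic measurements.
struct Blob {
    int area;    // number of pixels in the blob
    int width;   // bounding-box width in pixels
    int height;  // bounding-box height in pixels
};

// Drop blobs below the minimum area, then keep only blobs whose
// height-width ratio falls inside the configured span. Defaults match
// the values in the text: 20000 pixels and a ratio between 0.75 and 7.5.
std::vector<Blob> filter_blobs(const std::vector<Blob> &blobs,
                               int min_area = 20000,
                               double min_ratio = 0.75,
                               double max_ratio = 7.5) {
    std::vector<Blob> kept;
    for (const Blob &b : blobs) {
        if (b.area < min_area || b.width <= 0) continue;
        double ratio = static_cast<double>(b.height) / b.width;
        if (ratio >= min_ratio && ratio <= max_ratio)
            kept.push_back(b);
    }
    return kept;
}
```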

\section{Gesture recognition}
The gesture recognition is implemented using the CCA method described in Section \ref{sec:theory}. To address the gesture spotting problem described in Section \ref{subsec:spotting}, a time window is used.

\subsection{Gesture representation}
In this implementation, a gesture is represented as a sequence of positions of selected features. The tracking of these features has a large impact on the quality of the result, which makes the selection of features and tracking methods important. The features selected for this implementation are the centre, top and leftmost position of a blob that has been labelled as human.

\subsection{Recording}
In order to detect gestures, they must first be captured and then compared to training data. In this implementation, the coordinates of the centre and top of the blob are stored, as well as the leftmost and rightmost positions. These coordinates are stored in a list whose size can be changed; the default size is 50 values.
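The coordinate list can be sketched as a sliding window over the tracked positions. A single feature is shown here for brevity; the actual implementation tracks several, and the class name is illustrative.

```cpp
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

// Sliding window of tracked feature positions. Each frame appends the
// current coordinates; once the window reaches its capacity (50 by
// default, as in the text) the oldest entry is dropped, so the window
// always holds the most recent part of the motion.
class FeatureWindow {
public:
    explicit FeatureWindow(std::size_t capacity = 50) : capacity_(capacity) {}

    void push(int x, int y) {
        if (points_.size() == capacity_) points_.pop_front();
        points_.push_back({x, y});
    }

    std::size_t size() const { return points_.size(); }

    // Snapshot of the window contents, e.g. for copying into a matrix
    // when a gesture is captured.
    std::vector<std::pair<int, int>> snapshot() const {
        return {points_.begin(), points_.end()};
    }

private:
    std::size_t capacity_;
    std::deque<std::pair<int, int>> points_;
};
```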

A gesture is captured by storing the values in the current list in an OpenCV matrix, which can then either be stored as a prototype or used for detecting gestures by comparing it to previously stored prototypes.

\subsection{Training}
The training process consists of storing prototype gestures. These are captured using the recording method described in the previous subsection and then placed in one of the prototype groups, which are represented by vectors of prototype matrices. 
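The prototype storage can be sketched as follows. A flat coordinate vector stands in for the OpenCV matrix of the actual implementation, and the class and group names are illustrative.

```cpp
#include <map>
#include <string>
#include <vector>

// A captured gesture, here simply a flattened coordinate sequence;
// the actual implementation stores an OpenCV matrix.
using Gesture = std::vector<double>;

// Prototype groups: each named group holds the prototype gestures
// recorded for one gesture class during training.
class PrototypeStore {
public:
    void add(const std::string &group, const Gesture &g) {
        groups_[group].push_back(g);
    }

    const std::map<std::string, std::vector<Gesture>> &groups() const {
        return groups_;
    }

private:
    std::map<std::string, std::vector<Gesture>> groups_;
};
```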

\subsection{Detecting}
Detection is carried out by first capturing the current gesture and then comparing it to the stored prototypes using CCA. Since the number of calculations and comparisons per gesture is relatively low, the gesture is compared to every prototype in the prototype groups. It is then labelled as belonging to the group with which it has the highest average correlation value.
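The classification step can be sketched as follows. To keep the example self-contained, a Pearson correlation between equal-length sequences stands in for the CCA-based score of the actual implementation; only the labelling-by-maximum-average-correlation logic is the point here.

```cpp
#include <cmath>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Pearson correlation between two equal-length sequences; a stand-in
// for the CCA score of the actual implementation.
double correlation(const std::vector<double> &a,
                   const std::vector<double> &b) {
    double ma = 0, mb = 0;
    for (std::size_t i = 0; i < a.size(); ++i) { ma += a[i]; mb += b[i]; }
    ma /= a.size();
    mb /= b.size();
    double num = 0, da = 0, db = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        num += (a[i] - ma) * (b[i] - mb);
        da += (a[i] - ma) * (a[i] - ma);
        db += (b[i] - mb) * (b[i] - mb);
    }
    return (da == 0 || db == 0) ? 0 : num / std::sqrt(da * db);
}

// Compare the captured gesture to every prototype in every group and
// return the name of the group with the highest average correlation.
std::string classify(
    const std::vector<double> &gesture,
    const std::map<std::string, std::vector<std::vector<double>>> &groups) {
    std::string best;
    double best_avg = -2.0;  // below any possible average correlation
    for (const auto &g : groups) {
        if (g.second.empty()) continue;
        double sum = 0;
        for (const auto &proto : g.second)
            sum += correlation(gesture, proto);
        double avg = sum / g.second.size();
        if (avg > best_avg) { best_avg = avg; best = g.first; }
    }
    return best;
}
```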

