\chapter{Introduction}
\label{sec:introduction}
This report describes the implementation of a visual gesture recognition method for interacting with a computer. The gestures are detected using both RGB images and depth data captured by a Microsoft Kinect device.

Gesture interaction is useful in several application areas, such as human-robot interaction and medical systems. A common application for gesture control is entertainment, including video games and interactive displays. Several methods for gesture control have been proposed, with different benefits and drawbacks with respect to factors such as accuracy, speed, and cost.

This implementation is designed for a scenario in which users interact with virtual animals on a screen in an exhibition environment. The animal application uses the recognised gestures as input: rather than controlling the animals directly, the gestures influence their behaviour.

\section{Overview}
The report begins with an overview of gesture recognition methods and of methods for acquiring depth images. Section \ref{sec:device} gives a brief description of the Kinect device.

Chapter \ref{sec:theory} describes the fundamental theory upon which this project is based. Chapter \ref{sec:implementation} explains the implementation of the gesture recognition, and Chapter \ref{sec:result} presents its results.

