\chapter{Background}
\label{sec:background}
Interacting with systems through gestures is an attractive approach for several different applications. Gesture recognition problems have been studied in robotics, for example controlling a robot arm by human gestures. Games and entertainment are another area where control by gesture has been investigated. Depending on the application, different methods for detection, tracking and recognition have been proposed.

\section{Gesture control}
A gesture is a movement used for non-verbal communication. It can involve movements of different body parts, such as the hands or the head. Gesture recognition is used for example in human-robot interaction and to control games and other interactive applications. The gestures can be recorded by different kinds of sensors that are either held by users or attached to them, such as gloves or remote controls. There are also systems based on computer vision.

\subsection{Vision-based recognition}
Wachs et al. \cite{Wachs:2011} provide an overview of various challenges and solutions in vision-based recognition of hand gestures. Several of the methods described can also be applied to the recognition of gestures performed with other parts of the body.

In a vision-based recognition system a set of features is detected, tracked and recognized. The recognition can concern either a static pose or a dynamic sequence. The features used can vary greatly depending on the application. Typical features for gesture recognition include the position and pose of the arms and hands, since these are often used by humans communicating with gestures.

\subsection{Recognition of sequences}
A dynamic gesture consists of a sequence of features, such as positions, contours or angles. Some gesture recognition applications make use of Hidden Markov Models (HMMs) to identify sequences, such as the Wii remote application by Schlomer et al. \cite{Schlomer}.
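To illustrate the idea, the sketch below scores a quantized feature sequence under a discrete HMM with the scaled forward algorithm and picks the gesture model with the highest likelihood. This is a minimal illustration, not the model of the cited work: the state counts, transition matrices and quantized observation symbols are assumptions.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm (scaling avoids underflow
    on long sequences)."""
    alpha = start * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]  # predict, then weight by emission
        c = alpha.sum()                       # scaling factor for this step
        log_lik += np.log(c)
        alpha = alpha / c
    return log_lik

def classify(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood.
    `models` maps a gesture name to a (start, trans, emit) triple."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```

In a real system each gesture class would get its own HMM trained on recorded examples; here the parameters would simply be given directly.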

\subsection{Gesture spotting}
\label{subsec:spotting}
One of the problems associated with gesture recognition is the task of spotting when a gesture is taking place and when the user's movements carry no meaning. There are different approaches to this problem, such as introducing ``start'' and ``stop'' gestures, or using an audio cue to determine when a gesture starts.

Another approach is to use a time window and have the system not only find the most probable gesture at that time, but also determine whether the likelihood of that gesture is above some threshold. If the likelihood of the most probable gesture falls below this threshold, the system reports that no meaningful gesture is taking place.
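The threshold idea can be sketched as follows. The gesture models are treated as opaque scoring functions (for example HMM log-likelihoods); the function name, the callable interface and the score values are illustrative assumptions, not a specific published design.

```python
def spot_gesture(window, models, threshold):
    """Score one time window under every gesture model and return the best
    matching gesture name, or None when even the best score is below the
    threshold (i.e. no meaningful gesture is taking place)."""
    name, score = max(((n, m(window)) for n, m in models.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None
```

The threshold itself is typically tuned on recordings that contain both gestures and unrelated movement, trading missed gestures against false detections.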

\section{Depth imaging}
Depth images are used in various computer vision tasks. Applications include industrial inspection as well as robot vision and entertainment. There are several different technologies for depth image retrieval, each with different benefits and drawbacks with respect to resolution, speed and cost.

\subsection{Time-of-flight}
Yahav et al. suggest a system based on a time-of-flight camera for gaming applications \cite{gamecamera}. Time-of-flight cameras are based on the principle that a pulse of light is emitted, and the depth is computed from the time it takes for the reflected light to return to the camera. These cameras can produce depth images in real time, but usually at low resolution, which makes them less suitable for industrial applications where image quality is crucial. There are, however, several ways of improving the resolution of a time-of-flight camera, such as the superresolution method described by Schuon et al. \cite{tofres}.
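The underlying geometry is simple: the pulse travels to the surface and back, so the depth is half the round-trip distance at the speed of light. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_seconds):
    """Depth from a time-of-flight measurement: the pulse covers the
    camera-to-surface distance twice, so the depth is half the path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a round trip of 10 ns corresponds to a depth of roughly 1.5 m, which shows why time-of-flight sensing at room scale requires timing electronics with sub-nanosecond precision.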

\subsection{Structured light}
Structured light is a technique where a pattern of light is projected onto a surface. The depth is then found by calculating the distortions of the pattern. The pattern can be, for example, a grid, a dot matrix or single lines, and different types of light may be used. A comparison of different structured light approaches can be found in Fofi et al. \cite{struclight}. A problem that has to be solved is to match each observed point to the corresponding projected point; this is usually done by encoding additional identifying information into each projected dot. An overview of different encoding strategies can be found in Batlle et al. \cite{Batlle98recentprogress}.
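Once a projected point has been matched to its observed image position, the depth follows from triangulation. Modelling the projector and camera like a rectified stereo pair gives the familiar relation $z = fb/d$; the focal length, baseline and disparity values below are illustrative assumptions, not the parameters of any particular device.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a projector/camera pair modelled as a
    rectified stereo rig: z = f * b / d, where f is the focal length in
    pixels, b the projector-camera baseline in metres, and d the observed
    shift of the pattern point in pixels."""
    return focal_px * baseline_m / disparity_px
```

The relation also explains the resolution trade-off: for a fixed pixel disparity step, the depth error grows quadratically with distance.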

\section{The Kinect device}
\label{sec:device}
The Kinect device was released by Microsoft in November 2010. It is intended as a controller for the Xbox 360 gaming console, where it is used to control games using body movements. The Kinect device has two cameras, one RGB camera and a depth camera. It also features four microphones for audio retrieval and a motor for tilting the device vertically.

\subsection{RGB data}
The RGB camera is a CMOS sensor that outputs an 8-bit image at a resolution of $640 \times 480$ pixels.

\subsection{Depth data}
The depth image is retrieved using a technique developed by the company PrimeSense. The device has an IR projector, which projects a dot pattern onto the scene. A CMOS sensor retrieves the positions of these dots, and from the distortions of the dot pattern a depth image is constructed by the PrimeSense chip. This proprietary PrimeSense technique, called LightCoding, is described as a coded structured light technique.
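The chip reports depth as raw 11-bit values rather than metric distances. A commonly used conversion is an empirical fit circulated by the OpenKinect community; the constants below come from that community calibration and are assumptions, not official PrimeSense figures.

```python
import math

def raw_to_meters(raw_depth):
    """Approximate metric depth from an 11-bit Kinect raw depth value,
    using an empirical tangent fit from the OpenKinect community
    (constants are assumed, not vendor-specified)."""
    return 0.1236 * math.tan(raw_depth / 2842.5 + 1.1863)
```

The fit is monotonic over the usable raw range, so larger raw values always map to larger distances; per-device calibration can refine the constants.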

\subsection{Kinect PC development}
In November 2010 the libfreenect library \cite{web:freenect} was released. It is an open-source library for accessing Kinect data on a PC.

In December 2010 PrimeSense decided to open-source the driver for their reference device, upon which the Kinect is based. They also released binaries for their NITE middleware, which features skeleton tracking.
