\documentclass[10pt]{IEEEconf}
\author{Tim Doolan \and Wouter Josemans}
\title{A Gesture-Based 3D Drawing Application for the Microsoft Kinect}
\usepackage{graphicx}
\usepackage[small]{caption}
\usepackage{subfig}
\usepackage{wrapfig}
\date{}
\begin{document}


\maketitle
\begin{abstract}
In this paper, we describe our implementation of a gesture-based 3D drawing
application with the Microsoft Kinect. We implemented a tracker, classifier and 3D drawing framework 
which work together to allow a user to create and manipulate 3D models using 
gestures made with two hands. We found that the current implementation, while 
providing most of the required functionality, is not as intuitive to work with
as we had hoped, though this could be improved by the ability to reliably classify 
more gestures.
\end{abstract}
\section{Introduction}

\subsection{Kinect} % Wouter
With its introduction in late 2010, the Microsoft Kinect raised a lot of
interest in the scientific community. The Kinect was originally designed as a
gaming device, where the player uses his body to control games. Open-source
drivers were soon released, and many interesting applications followed.
The Kinect combines the image of a regular camera with an infrared depth
sensor, giving the user both an RGB image of the scene in front of the camera
and a depth image of that scene. This makes it easy to do all kinds of things
that are very hard to do with just an RGB camera, such as segmenting objects.
The process of recognizing a human body and tracking the extremities becomes
feasible with this technology. The depth sensor works by projecting a known
pattern of infrared light onto the scene; an infrared camera observes how this
pattern deforms, and depth is recovered by triangulation (structured light).
There is a small horizontal offset between the emitter and the camera, which
means that some small parts of the scene seen by the camera are not directly
illuminated, and no depth can be measured there.
Various enthusiasts have used the Kinect for all kinds of interesting
applications, such as building a 3D reconstruction of a scene, tracking fingers
to control an image viewing application, or controlling a PowerPoint
presentation with hand movements.
In this project, we use the Kinect for something else entirely.

\subsection{Basic idea} % Wouter
%Explain intuition behind working with a 3D camera for 3D drawing
As the Kinect provides us with accurate depth data, we can find out where the
user (the ``controller") is in 3D space. This information can be very helpful
in controlling 3D applications, which are often hard to control using
conventional devices such as the keyboard or the mouse. 3D modeling software
especially suffers from this drawback, as points in 3D space need to be placed
very precisely. This is a challenge, because the mouse's planar $x,y$
coordinates have to be mapped in some way to volumetric $x,y,z$ coordinates.
This is usually done by lining up two different perspectives, where points are
first placed in one plane and then corrected in the second. With a 3D input
device, this issue disappears; points can be placed in a single action.
In this project, we attempt to implement a simple 3D drawing program that allows the 
controller to draw 3D shapes in the air with his hands, and display these shapes on 
the screen. After that, the shapes can be manipulated or examined from different
angles, by using different hand gestures. The space in front of the camera maps
in some way to the 3D drawing space on screen, meaning that if the controller
moves with respect to the camera, his actions will affect a different region of
the drawing space. 

\subsection{Pipeline} % New! Tim (stuff) + Wouter (tracker/classifier?)
\begin{figure}
\includegraphics[width=7cm]{pipeline}
\caption{The pipeline for the implemented system.}
\label{fig:pipeline}
\end{figure}
We first describe in broad terms how the system works. The main
pipeline consists of several steps, as shown in Figure \ref{fig:pipeline}. First,
we need to continuously track the controller's hands in the camera image and
segment each hand from the background. Then, we extract
descriptive features from the images of the hands and use a classifier to
determine which gesture is being made. The gestures, along with the position of
the hands, are then passed to the 3D drawing framework, which processes this
information and performs the corresponding operation. The interface needs
quite a few different commands in order to allow full manipulation of the
model. These commands consist of a combination of the gestures of both
hands, where the left hand acts only as a modifier and the right hand
additionally serves as the cursor (positioned at its center of mass). Using a
combination of two gestures per operation has the advantage of squaring the
number of available commands, meaning a few reliably distinguishable gestures
already provide enough variation to control the interface. We use 3 different
gestures, which yields up to 9 different operations. This choice is justified
in the experiments.
\begin{figure}
\includegraphics[width=7cm]{endproductcropped}
\caption{The end result of the system. On the right the tracker image is shown, 
on the left is the 3D drawing framework. The green dot represents the cursor.}
\label{fig:endresult}
\end{figure}
The final system interface is shown in Figure \ref{fig:endresult}. On the left,
the 3D drawing framework is shown, in which a model of a dodecahedron is
loaded. The green dot represents the cursor, and moves in correspondence to the
controller's right hand. The controller gets visual feedback about the tracking
and classification process, which will indicate if and where something goes
wrong.

\section{Approach}
\subsection{Tracker} % Wouter
For the tracking part, we decided to use only the depth image provided by the
Kinect and not look at the RGB image. The reason is simple: the depth image
alone is good enough for reliable tracking, and maintaining a high frame rate
is important; operations on the RGB image would be computationally more
expensive. Of course, combining the information in the RGB and depth images may
lead to even better results in cases where the depth image alone is not enough,
but we believe such cases are rare in our application.
\begin{figure}
\includegraphics[width=8cm]{window.png}
\caption{Tracker boundaries. The green box is the bounding box, the blue box is
the tracking window. The depth image shows depth in grey values; the closer to
the camera, the darker the pixel value. White pixels are points where no IR
response was measured.}
\label{fig:window}
\end{figure}
The tracker is based on depth-thresholding, and needs to be initialized first.
In the initialization step, the controller holds both hands out towards the
Kinect, making sure that they are the closest thing to the camera and at a
more or less equal distance to it. The tracker will then look at the
depth image and find the depth value of the closest point to the camera, the
\textit{iso value}. It will then do a thresholding operation on the image, where
all pixels that have a value ``similar" to the iso value are set to 1 and the
rest are set to 0. We define ``similar" as the difference between the pixel
value and the iso value being smaller than some constant, the \textit{tolerance}.
We now know which pixels we are interested in, but we want to track two hands.
To find out which pixels form a hand together, we do 2-means clustering on the
pixel indices. Now that two hands have been found, we define bounding boxes
around each hand, which are used in the tracker update step.
The basic tracker update consists of looking at the region (the \textit{tracking
window}, which is found by expanding the bounding box by some constant value)
where the hand was seen last, determining the new iso value by finding the new
smallest value, and doing depth thresholding on the tracking window.
In Figure \ref{fig:window}, the green box indicates the bounding box of the
pixels found in the previous tracker step. The blue box indicates the region in
which we look for the new position. The basic tracker step is done for both
hands. If the hands are too close together and the tracking windows overlap, we
do 2-means clustering again in order to separate them properly. 
We implemented some heuristics to detect when the tracker has most likely lost
the hand. For example, if the bounding box exceeds certain dimensions, it is
unlikely that the hand is still being tracked. A skewed ratio between width and
height is also an indicator of incorrect tracking.
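The initialization step described above can be sketched in a few lines of
NumPy. This is an illustrative reimplementation, not the project's actual code:
the function name, the tolerance value and the simple 2-means loop are
stand-ins for the real parameters.

```python
import numpy as np

def segment_hands(depth, tolerance=60, iters=10):
    """Threshold the depth image around the closest point (the iso value)
    and split the resulting pixels into two hands with 2-means clustering."""
    iso = depth[depth > 0].min()            # 0 encodes "no IR response"
    mask = (depth > 0) & (np.abs(depth.astype(int) - int(iso)) < tolerance)
    ys, xs = np.nonzero(mask)               # indices of candidate hand pixels
    points = np.column_stack((xs, ys)).astype(float)

    # 2-means on pixel coordinates: initialize the centers far apart on x.
    centers = points[np.argsort(points[:, 0])[[0, -1]]]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)

    # Bounding box (x0, y0, x1, y1) per hand, expanded later into the
    # tracking window for the next frame.
    boxes = []
    for k in range(2):
        p = points[labels == k]
        boxes.append((p[:, 0].min(), p[:, 1].min(), p[:, 0].max(), p[:, 1].max()))
    return mask, boxes
```

The per-frame update is the same thresholding restricted to the tracking
window, with the iso value refreshed from the window's smallest depth.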

\subsection{Feature extraction and Classification} 
Once the bounding box is drawn around a hand, we can extract the hand image.
This image is turned into a binary silhouette image, where 1 indicates presence
of the hand and 0 indicates absence. The presence is determined by the same
iso value and tolerance as used in the tracking phase. From the resulting binary
image, two features are extracted: Hu moments and a Histogram of Radial
Distances (HORD).
\subsubsection{Hu moments}
Hu moments are a specific type of image moment with properties that are useful
for classification. For a binary image, an image moment is nothing more than a
weighted sum of the indices of the pixels with value 1. These moments are
easily made translation and scale invariant by offsetting the indices with the
image's center of mass and dividing by a suitable power of the zeroth moment;
what the Hu moments add is rotational invariance. The 7 Hu moments are listed
in Figure \ref{fig:Hu}. The equations are combinations of the scale- and
translation-invariant image moments of several different degrees
($\eta_{ij}$). This particular set of combinations has been shown to be well
suited for classification \cite{hu2002visual}.
\begin{figure}
\includegraphics[width=8cm]{moments.png}
\caption{The seven Hu moments. They are combinations of the scale and translation
invariant image moments of several different degrees ($\eta_{ij}$).}
\label{fig:Hu}
\end{figure}
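To make the definitions concrete, the following from-scratch NumPy sketch
computes the first two Hu moments of a binary image; the remaining five are
built from the same $\eta_{ij}$, and in practice OpenCV's cv2.HuMoments
returns all seven. The function is illustrative, not taken from our
implementation.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moments of a binary image.
    eta(p, q) is the translation- and scale-invariant central moment."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)                      # area = raw moment M00
    x = xs - xs.mean()                 # offset by the center of mass
    y = ys - ys.mean()

    def eta(p, q):
        # normalized central moment: mu_pq / m00^(1 + (p+q)/2)
        return (x ** p * y ** q).sum() / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

On a discrete grid the invariances hold only approximately, but closely enough
for classification.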
\subsubsection{Histogram of radial distances}
A HORD is computed by applying edge detection to the binary image and computing
the distance from each edge pixel to the center of mass of the image. All these
distances can then be stored in a histogram to form the feature vector; for an
example, see Figure \ref{fig:Hord}. By construction, a HORD is already
translation and rotation invariant. If the distances are rescaled, it also
becomes scale invariant.
\begin{wrapfigure}{l}{0.2\textwidth}
\centering
\includegraphics[width=3cm]{HORD.png}
\caption{The general concept of HORD; showing the center of mass and several
examples of the radial distances.}
\label{fig:Hord}
\end{wrapfigure}
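The HORD computation can be sketched as follows. This is an illustrative
NumPy reimplementation; the contour test, the number of bins and the rescaling
by the maximum distance are our assumptions, not the project's exact choices.

```python
import numpy as np

def hord(silhouette, bins=16):
    """Histogram of Radial Distances: distances from each contour pixel to
    the silhouette's center of mass, rescaled for scale invariance."""
    binary = silhouette.astype(bool)
    ys, xs = np.nonzero(binary)
    cy, cx = ys.mean(), xs.mean()          # center of mass

    # Contour pixels: set pixels with at least one unset 4-neighbour.
    padded = np.pad(binary, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = binary & ~interior

    ey, ex = np.nonzero(edge)
    dist = np.hypot(ey - cy, ex - cx)
    dist = dist / dist.max()               # rescale -> scale invariance
    hist, _ = np.histogram(dist, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()               # normalize to a distribution
```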
\subsubsection{Classification} 
We have now defined two rotation, translation and scale invariant features,
which are important properties when classifying hand poses. The two feature
vectors are concatenated and the resulting vector is used for classification.
We used an SVM \cite{svms} for classification, trained on 900 example frames
for each gesture. The libsvm package \cite{libsvm} we used solves multi-class
problems using the one-versus-one approach, meaning a binary SVM is trained for
each possible pair of gestures and the gesture with the most positive outcomes
is returned as the final classification. The parameters $C$ and $\gamma$ of the
SVM were optimized using 5-fold cross validation on the training data.
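The one-versus-one voting scheme itself is simple and can be sketched
independently of libsvm. In this illustration the pairwise classifiers are
arbitrary callables standing in for the trained binary SVMs; the function name
and interface are ours.

```python
from itertools import combinations

def one_vs_one_predict(x, pairwise, classes):
    """libsvm-style one-versus-one voting: one binary classifier per pair
    of gestures; the gesture collecting the most votes wins.
    pairwise[(a, b)] is any callable that returns a or b for sample x."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[pairwise[(a, b)](x)] += 1
    return max(classes, key=lambda c: votes[c])
```

With $k$ gestures this requires $k(k-1)/2$ binary classifiers, which stays
small for the three gestures we use.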

\subsection{3D Rendering} % TIM
In this section we will describe the interface that the tracker-classifier
controls. The interface is used to create or modify 3-dimensional models
consisting of polygons, vertices and edges. It consists of a number of basic
operations, which together should allow full manipulation of the model. The
operations that can be performed are: moving the cursor in 3 dimensions; adding
polygons, vertices and edges; selecting a vertex; translating or removing a
vertex; and rotating the entire model. A number of other operations are
implemented in the interface, but currently cannot be controlled using
gestures. Using the implementation described in the section below, the total
number of different gesture combinations required to perform all these
operations is 7.

\subsubsection{Implementation of operations}
Since this is a gesture-driven interface, we want to minimize the total number
of gestures required for each operation. Most operations are mapped to a single
gesture combination; however, there are some exceptions. The selection process
has three stages: activating selection, in which the nearest neighbor of the
cursor is highlighted; confirming selection, in which the currently highlighted
vertex is selected; and exiting selection, after which no vertex is selected.
Switching between these stages is done by executing the select gesture, meaning
only one gesture is required to perform selection.


Adding any type of element is done with just 2 gestures, by adding ``regular"
or ``closing" vertices. Consider the example of drawing a square in Figure
\ref{fig:sq}, where 3 regular vertices and one closing vertex are added.
Vertices are added at the location of the cursor. The first addition just forms
a vertex, while the second also creates an edge from the first vertex to the
second. Adding a third vertex creates another vertex and an edge from the third
vertex to the second. Then the closing vertex is added, which again creates a
vertex and an edge, but additionally creates an edge from this closing vertex
to the very first vertex added and fills in the space formed by all the added
edges with a polygon. This way a polygon of any size, an edge (one regular and
one closing vertex) or a single vertex (just a closing vertex) can be created
with only 2 gestures. It is also possible to append to or connect existing
models by first selecting a vertex and then performing the add operation. In
that case the location of the selected vertex is used instead of that of the
cursor, and no new vertex is created; only the connecting edges are added.
\begin{figure}[!bt]
    \centering
    \subfloat[First vertex added\label{subfig:sq1}]{\includegraphics[width=3cm]{sq1}} \hspace{5 pt}
\subfloat[Second vertex added\label{subfig:sq2}]{\includegraphics[width=3cm]{sq2}}\\
\subfloat[Third vertex added\label{subfig:sq3}]{\includegraphics[width=3cm]{sq3}} \hspace{5 pt}
\subfloat[Closing vertex added\label{subfig:sq4}]{\includegraphics[width=3cm]{sq4}}
    \caption{Creating a polygon from regular and closing vertices}\label{fig:sq}
\end{figure}
\subsubsection{Internal representation}
Internally, the model is built up in several layers, which makes many of the
operations computationally less expensive. The first layer maps the raw
coordinates to vertex objects. Each edge consists of 2 vertices, and each
polygon is a combination of several edges. Polygons of more than 3 edges will
not always lie in a single plane; therefore each polygon is trivially split
into triangles and rendered as such. The advantage of this representation is
that we can translate a vertex by just changing the coordinates in the vertex
object, without changing anything in the edge or polygon representations.
Similarly, when removing a vertex, removing the connected edges and polygons
becomes trivial.
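A minimal Python sketch of this layered representation (class names are
illustrative, and the fan triangulation shown is one simple way to split an
n-gon into triangles; the real implementation may differ):

```python
class Vertex:
    """A vertex owns its coordinates; edges and polygons only hold
    references, so translating a vertex needs no further bookkeeping."""
    def __init__(self, x, y, z):
        self.pos = [x, y, z]

class Edge:
    def __init__(self, a, b):          # a, b: Vertex objects, shared
        self.a, self.b = a, b

class Polygon:
    def __init__(self, edges):
        self.edges = list(edges)

    def triangles(self):
        """Fan triangulation: split an n-gon into n-2 triangles, since a
        polygon with more than 3 edges need not be planar."""
        verts = []
        for e in self.edges:           # collect vertices in edge order
            if e.a not in verts: verts.append(e.a)
            if e.b not in verts: verts.append(e.b)
        return [(verts[0], verts[i], verts[i + 1])
                for i in range(1, len(verts) - 1)]
```

Moving a vertex only mutates its `pos`; every edge and polygon that references
it sees the change automatically.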
\begin{figure}[!bt]
    \centering
    \subfloat[Vertices\label{subfig:vert}]{\includegraphics[width=3cm]{vert}} \hspace{5 pt}
\subfloat[Edges\label{subfig:edge}]{\includegraphics[width=3cm]{edge}}\\
\subfloat[Polygons\label{subfig:poly}]{\includegraphics[width=3cm]{poly}} \hspace{5 pt}
\subfloat[Combined model\label{subfig:comb}]{\includegraphics[width=3cm]{comb}}
    \caption{Internal layers of the model}\label{fig:layers}
\end{figure}
\subsubsection{Additional features}
Drawing all polygons for a model requires a lot of redundant work. Consider for
example explicitly drawing all 6 faces of a cube. In that case you would have to
draw each of the 12 edges twice. To aid the user in preventing this redundancy, we
added an automatic polygon finding option. It tries to fill in any new polygons
that are formed by added edges, even if they are not explicitly indicated to be
new polygons. Returning to the cube example: With automatic polygon finding
enabled, the cube could be drawn by making two opposing faces of the cube and then
connecting each of the corners of the 2 faces. The polygon finder would then
automatically fill in the other four faces as the edges were added. Using this
method, none of the edges are drawn twice.

The polygon finding method works as follows: every time an edge is created, the
algorithm tries to find a path from one end of the edge to the other, using the
existing edges. Polygons that already exist are of course excluded. The problem
is non-trivial, however, because polygons can be of arbitrary size and not all
cycles of edges form surface polygons.
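The core of the search can be sketched as a bounded breadth-first search over
the edge graph. This is a simplified illustration: the real method must
additionally reject cycles that do not bound a surface, and the `max_len`
cut-off is our assumption, not a project parameter.

```python
from collections import deque

def find_new_polygon(adjacency, new_edge, max_len=6):
    """When edge (a, b) is added, search for a path a -> b over the
    existing edges; that path plus the new edge closes a candidate
    polygon. adjacency maps each vertex to its neighbours as they were
    before the new edge was added."""
    a, b = new_edge
    queue = deque([(a, [a])])
    while queue:
        node, path = queue.popleft()
        if node == b:
            return path            # cycle = path plus the new edge back to a
        if len(path) >= max_len:
            continue               # bound the polygon size
        for nxt in adjacency.get(node, ()):
            if nxt not in path:
                queue.append((nxt, path + [nxt]))
    return None                    # no cycle: the edge closes no polygon
```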

\subsubsection{OpenGL rendering}
Since a high frame rate needs to be maintained in order to have a workable
system, we implemented a number of speedups when calling OpenGL. The internal
representation is converted into simple lists of vertices to draw. Each
triangle (part of a polygon) also needs a surface normal, which is trivial to
compute from its 3 points. The normals are stored in the same lists. These
lists are updated when operations are performed and reused every time the
object is redrawn. The drawn object is also put into what is called a display
list, which is stored on the GPU for very fast rendering. As long as the model
is not modified, the same display list can be reused, resulting in a much
better frame rate.
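The surface normal mentioned above is a one-line cross product. A NumPy sketch
(the display-list bookkeeping itself is plain OpenGL glNewList/glCallList usage
and is omitted here; the function name is ours):

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit surface normal of a triangle: the normalized cross product of
    two edge vectors. One normal per triangle is stored next to the
    vertex list, so redrawing never recomputes geometry."""
    n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
    return n / np.linalg.norm(n)
```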

\subsubsection{Integration issues}
The translation of the classified gestures into operations in the interface is
non-trivial: even with the high accuracy of the classifier, some frames will
still be labeled incorrectly. Occasionally performing seemingly random
operations would make the system extremely frustrating to work with, so we
needed to make it more robust. For this we added a queue of the last 12
classified gestures for each hand and set a threshold, so that the current
gesture is only updated if at least 9 of the queued gestures agree. When the
hands return to the cursor state, the operation corresponding to the gesture
queue is performed and the queue is emptied, to prevent performing the same
operation multiple times in fast succession.
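The smoothing scheme can be sketched as follows, using the queue length of 12
and threshold of 9 from the text; the class name and interface are ours, not
the project's.

```python
from collections import deque, Counter

class GestureQueue:
    """Smooths per-frame classifications: a gesture is only accepted when
    at least `threshold` of the last `size` frames agree on it. The queue
    is emptied after firing so the operation is not repeated."""
    def __init__(self, size=12, threshold=9):
        self.frames = deque(maxlen=size)
        self.threshold = threshold

    def push(self, gesture):
        self.frames.append(gesture)
        label, count = Counter(self.frames).most_common(1)[0]
        if count >= self.threshold:
            self.frames.clear()
            return label           # stable gesture: trigger its operation
        return None                # not confident yet
```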

\section{Results}
\subsection{Tracker results} % Wouter Important!
\begin{figure*}[!bt]
\centering
\subfloat[Successful tracking\label{subfig:success}]{\includegraphics[width=8cm]{success}} \hspace{5 pt}
\subfloat[Hands segmented properly after clustering\label{subfig:clustersuccess}]{\includegraphics[width=8cm]{handsclosetogethersuccess}}\\
\subfloat[Tracker fails because the object on the right is closer to the camera than the hands\label{subfig:failclose}]{\includegraphics[width=8cm]{objectclosertocamera}} \hspace{5 pt}
\subfloat[Tracking fails because the hands are too close to the body\label{subfig:tooclose}]{\includegraphics[width=8cm]{tooclosetobody}}\\
\subfloat[Tracking fails because there is only one hand extended\label{subfig:onehandfail}]{\includegraphics[width=8cm]{onehandfail}}\\

\caption{Tracker success and fail cases}\label{fig:trackerresults}
\end{figure*}

After extensive experimentation with the tracker, we observed the following
results, which are characterized by Figure \ref{fig:trackerresults}. Figure
\ref{subfig:success} shows successful tracking: the bounding box, center of
mass and classification are displayed. If the classifier is sure about the
classification, the bounding box is colored green; otherwise it is red. The
number in the top-left corner shows the time in seconds since the tracker
started, and the frame rate is printed in the lower-left corner, though this
measure is inaccurate due to the overhead of saving the frame.
The tracker works well enough for our application, provided that the controller
keeps its limitations in mind. These are the following:
\begin{itemize}
\item Tracking will fail if the hand moves outside of the tracking window
between frames. However, this would require the hand to move extremely fast.
Drawing in 3D does not require such fast movements, so there is no real reason
for the hand to move that fast. If the tracker loses the hand in this way, it
will latch onto the closest thing found in the tracking window, which is often
either a part of the background or part of the controller's body. In both cases
tracking can be restored by moving the hand back to the region currently being
tracked.
\item Tracking will fail if the tracking windows overlap. Two hands are being
tracked, so there are two bounding boxes and two windows. If pixels of one hand
are included in the tracking window of the other, the tracker will get confused.
We address this problem by doing the same thing we do in the initialization
step; we do depth thresholding and 2-means clustering on the entire image again,
which will divide up the pixels between the hands nicely most of the time, as
shown in Figure \ref{subfig:clustersuccess}.
\item Tracking will fail if the hand is no longer the closest thing in the
tracking window. This is shown in Figure \ref{subfig:failclose}, where a computer monitor
is closer to the camera than the hands. Additionally, if an object is within the
tolerance area in the tracking window, it will be included in the tracking
process. This is demonstrated in Figure \ref{subfig:tooclose}, where the
controller's hands are too close to his body.
\item Tracking will fail if the user only has one hand extended. The algorithms
we experimented with to allow the user to use either one or two hands slowed
down the frame rate in such a way that we decided not to use them. The user is
required to extend both hands, or the tracker will track different parts of the
same hand, which will negatively impact classification. This situation is shown
in Figure \ref{subfig:onehandfail}.
\end{itemize}

\subsection{Classifier results} % Tim Important!
\begin{table}[!hbt]
\centering
  \begin{tabular}{ | l | r | }
    \hline
    Number of gestures & Accuracy\\ \hline
    2 & $99.57\%$ \\ \hline
    3 & $99.21\%$  \\ \hline
    4 & $96.47\%$  \\
    \hline
  \end{tabular}
\caption{Classification accuracy for increasing numbers of gestures.}
\label{tab:svm}
\end{table}
In Table \ref{tab:svm}, some results of the classifier are shown. When using two
gestures, ``spread fingers" versus ``fist", we observed an accuracy of 99.57\%.
This is a good result, but being able to distinguish only two gestures does not
give us the expressiveness that we need. Therefore, we expanded the number of
gestures. Performance was still high when we added a third gesture, ``one
finger", but adding yet another gesture proved to have a negative effect on
classification. While a classification accuracy of 96.47\% may seem reliable, in
practice the misclassifications proved to be too frequent. This is why we chose
to control the interface with three gestures.

\section{Conclusion \& Discussion} % Wouter
The overall result is a working prototype that does what it is supposed to, but
that could use a lot of improvement. We implemented operations for placing
vertices, removing vertices, creating polygons, rotating the camera, selecting
vertices and translating vertices. This interface is driven entirely by hand
poses, which is how we envisioned it, though interaction with the system is not
as intuitive as we had hoped. We do believe that this paradigm for drawing 
objects in 3D has merit, as moving the cursor through the drawing space by 
simply moving one's hand is very intuitive. A more natural mapping from gestures
to operations would make the system a lot more pleasant to work with, but in 
essence, this type of interface shows promise.

The tracker works reasonably well, aside from the drawbacks mentioned in the
previous section. We could improve the tracker by looking at the RGB image in
addition to the depth image. The RGB image could provide information about the
texture of a certain part of the image, which could be used to make tracking
more robust. We expect that using a different tracker architecture, such as a
Kalman or Particle Filter would also improve tracking, but at higher
computational costs. One of the reasons we kept the tracker simple was to make
sure it was not a bottleneck for the frame rate. With the current
implementation, the tracker is indeed fast enough not to be the bottleneck of
the system.

While the features we extract from the binary tracking image are informative
enough to reliably separate three different gestures, we will need to implement
more complex features if we want to distinguish more than three. Another way we
could improve classification is to not look at just the contour of the hand
obtained by binarization, but to also look at the different depth values of the
pixels. This would allow us to use three dimensions for distinguishing different
gestures instead of two.

The 3D drawing interface works, but it is not very intuitive. The combinations
of gestures linked to actions are arbitrary, so they require the user to
memorize them or use a cheat sheet. Being able to recognize more gestures would
help in making the user interface more intuitive, as we could use more
one-handed gestures. Another drawback of the interface is that it requires the
controller to extend his arms for a long period of time. After a few minutes,
the controller's arms grow tired. Better tracking might alleviate
this problem, as the user would be able to hold his hands closer to his body.
Another problem with the interface is that it is sometimes hard to see in which
part of the drawing space the cursor is. This is due to the fact that space is
projected on a 2D plane in order to display it on a computer monitor. If the
drawing space was instead rendered and displayed in 3D, by using e.g. a
head-mounted 3D display, it would be much easier to orient the cursor in the
drawing space.

Another welcome addition would be an ``undo" function, which rolls back the last
performed operation. While misclassifications are rare, they still sometimes
occur and they can have undesirable consequences for the model currently being
worked on. Being able to undo these kinds of accidents would help in making the
application more user-friendly. Implementing this functionality in an efficient
way is not trivial, however, because not all operations have an inverse
operation. When we remove a vertex, for example, we remove all polygons and
edges connected to that vertex, which cannot easily be restored by adding the
vertex again.

Finally, other approaches for representing 3D objects could be considered. For
instance, a volumetric representation could be used, where the drawing space is
subdivided into voxels, and the user activates or deactivates these voxels by
``touching" them with a certain gesture.

\bibliographystyle{abbrv}
\bibliography{report}
\newpage
\appendix
\section{Cheat sheet} % Tim
\begin{center}
  \begin{tabular}{ | l | l | l | }
    \hline
    Operation & Left hand & Right hand\\ \hline
    Move cursor & Any & Fist\\ \hline
    Closing vertex & One finger & Spread fingers\\ \hline
    Regular vertex & One finger & One finger\\ \hline
    Select vertex & Fist & Spread fingers\\ \hline
    Remove vertex & Fist & One finger\\ \hline
    Rotate model & Spread fingers & Spread fingers\\ \hline
    Translate model & Spread fingers & One finger\\ \hline
  \end{tabular}
\label{tab:cheatsheet}
\end{center}
\section{Technical manual} % Fascinating Wouter/Tim
All code used in this project can be downloaded from the Google Code repository
at http://code.google.com/p/kinect-3d-draw. The software requires the
installation of the following dependencies:
\begin{itemize}
\item libfreenect + Python wrappers
\item OpenGL/GLUT + Python wrappers
\item OpenCV + Python wrappers
\item the numpy Python package
\item libsvm (recompile and replace the corresponding binaries if the ones in
the repository fail)
\end{itemize}
Make sure that libfreenect is able to find the Kinect (run \verb?import libfreenect?
in a Python interpreter without errors) and that OpenCV is properly
linked (\verb?import cv? should work in a Python interpreter).
If everything is installed properly, the software can be run with the command
\verb?python kinectDraw2Hands.py?.


\end{document}

