\documentclass[11pt]{article}
\usepackage{fullpage}
\usepackage{listings}
\usepackage{color}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{subfig}

\graphicspath{{./images/}}

\hypersetup{
    colorlinks,
    citecolor=black,
    filecolor=black,
    linkcolor=blue,
    urlcolor=blue
}

\begin{document}

\title{\bf CPSC 589 - Project Report \\ \emph{3D Motion Capture and Modelling Using Kinect Sensor}}
\author{Romain Clement, Kyle Milz, Jeff Nicholson}
\date{December 19, 2011}
\maketitle
\thispagestyle{empty}

\lstset{language=C, basicstyle=\footnotesize, tabsize=2, frame=single}

\pagebreak
\tableofcontents
\pagebreak

\begin{abstract}
We have created an application that builds simple 3D meshes from data obtained from the Kinect sensor. These models can be exported as either a point cloud (.ply) or a mesh (.obj), which can then be loaded into other applications such as Blender or Maya for refinement. The application leverages techniques from computer graphics, computer vision, and 3D laser scanning. By using a wide array of open source libraries we have created the basis for a free, open application that can be used to quickly generate rough models of real-world objects.
\end{abstract}

\pagebreak

\section{Introduction}
Modelling and animation of lifelike 3D objects is hard. There are many methods for creating such objects, some more difficult than others. This project provides an easy way to create simple 3D meshes based on objects in your surrounding environment: point the Kinect at an object or person, press a few buttons, and you have a mesh. The meshes we create cover only the front half of the object, the surface that the Kinect can see; we have not implemented 3D scanning of objects to capture 360 degrees of information.\\
To pick objects from a scene, we used the Watershed algorithm. To create a mesh from a point cloud, we modified an algorithm commonly used in 3D laser scanning. We built the application on several libraries, including OpenGL, Qt, libfreenect and OpenCV.\\
What we have created is a proof of concept: using open source software, it is possible to build a simple pipeline for rendering 3D meshes. With more work, our application could be extended to create more detailed meshes with a full 360 degree view.

\section{Goals and Objectives}
The main goal of this project was to provide an easy way to model complex 3D objects, like the human body; we accomplished this goal. We had also hoped to create simple animations using the live data provided by the Kinect, so we could animate real-life phenomena like water or trees blowing in the wind, as well as do some simple motion capture. This proved to be beyond the scope of our project, given the restricted time frame and resources available.

\section{Challenges} \label{sec:challenges}
Over the course of this project we faced multiple challenges in attempting to realize our objectives. The end-to-end workflow of converting depth data from the Kinect into a reasonable mesh was more difficult to implement than we expected. The initial challenge was determining how to interpret the raw depth data obtained from the Kinect (Section \ref{subsec:depth_data}).\\
Once the data was being interpreted correctly, we were able to render a point cloud of everything in the current scene. The next challenge was to find a way to distinguish between different objects within the scene. The point cloud provides no information on the boundaries of objects, other than discrepancies in depth. We used an algorithm from computer vision, the Watershed algorithm, to solve this problem (Section \ref{subsec:object_detection}).\\
The next challenge was actually creating a mesh, given the point cloud for a specific object. The object detection gave us the boundaries of the object, but we still needed a way to create connectivity among the points. To create the mesh, we used a method common in 3D laser scanning to build a face list and vertex list describing the mesh of the object (Section \ref{subsec:mesh_creation}).\\
One last challenge we faced throughout the project was the general performance of the application. We wanted to show multiple views of the data, but a naive implementation runs very slowly. By using OpenGL textures, we were able to drastically improve the performance of the application (Section \ref{subsec:perf}).

\subsection{Interpreting Depth Data} \label{subsec:depth_data}
The libfreenect driver exposes low-level sensor information through its API. The Kinect has an infrared camera, a regular visible-light camera, and an infrared projector. The infrared projector casts a dot pattern away from the Kinect and into the frustum that the infrared camera can see.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{kinect_infrared_projection}
\caption{Kinect infrared pattern shown on a human face}
\label{fig:kinect_infrared_projection}
\end{figure}

The pattern of infrared dots is known as `structured light', and the deformation of this pattern allows the Kinect to estimate the distance from the device to the surface the dots fall on. The Kinect can reliably deliver 30 frames per second of 640x480 resolution depth approximations and regular RGB camera information. In the 640x480 mode the Kinect reports depth as an 11-bit unsigned integer, for a total of 2048 different values.
At first we interpreted the 11-bit depth value linearly; that is, we assumed the value returned by the Kinect was the actual distance from the camera. This is not the case: the 11-bit depth value is a highly nonlinear representation of the actual depth. While searching for others who had surely encountered this very same problem, we found a solution.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{kinect_depth_calc}
\caption{Kinect 11 bit depth value with respect to the actual distance}
\label{fig:kinect_depth_calc}
\end{figure}

Figure \ref{fig:kinect_depth_calc} is a graph generated by an experiment set up by Nathan Crock (\url{http://mathnathan.com/2011/02/03/depthvsdistance}). It shows that the relationship between the depth value returned by the Kinect and the actual distance is not linear, which raises the question of how to convert one into the other. Another developer, posting under the handle `marf' (\url{http://vvvv.org/forum/the-kinect-thread}), provides a regression of the graph above using the tangent function.

\begin{displaymath}
d = \tan\!\left(\frac{\mathrm{depth}}{1024} + 0.5\right) \times 33.825 + 5.7
\end{displaymath}

This equation converts the raw 11-bit depth values into a distance from the camera in centimetres. It is a crucial part of the application: without it, nothing else would be possible.
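As a concrete sketch of this conversion (the function name is our own; the constants are the regression values above, and raw values near the top of the 11-bit range should be treated as invalid by the caller):
\begin{lstlisting}[language=C++]
#include <cmath>

/* Convert a raw 11-bit Kinect depth value into an approximate
 * distance from the camera in centimetres, using the tangent
 * regression above. Values near 2047 approach the singularity
 * of tan() and are not meaningful. */
float raw_depth_to_cm(unsigned short raw)
{
	return std::tan(raw / 1024.0f + 0.5f) * 33.825f + 5.7f;
}
\end{lstlisting}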


\subsection{Object Detection} \label{subsec:object_detection}
In order to render a mesh of an arbitrary object in a 3D scene, a major step is being able to distinguish objects from one another, and therefore to select a single object in the scene.

\subsubsection{Image segmentation}

The whole process of identifying objects in the scene, and being able to select them, is called \emph{image segmentation}. To achieve this, it is necessary to employ computer vision algorithms. We chose the Watershed algorithm, which provides the following advantages:
\begin{itemize}
\item Sufficient accuracy for the object detection
\item Implemented by the OpenCV library
\item No need for image training stage
\item Only requirement is to roughly "mark" the regions of interest in the image to be segmented
\end{itemize}
This algorithm takes two inputs: the image to be segmented and the map of markers.\\
The first one has to be in a standard format (8-bit 3-channel image). We chose to use our coloured depth map because the colours between the different objects are highly distinct, and therefore easier for the algorithm to separate (Figure \ref{fig:coloured_depth_map}).
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{colouredDepthMap.png}
\caption{Coloured depth used as input for the Watershed algorithm}
\label{fig:coloured_depth_map}
\end{figure}
The second input argument, the map of markers, also needs to be in a specific format: 32-bit single-channel image. In other words, the markers map has to be filled with values identifying the different regions: zero when the pixel is not part of any marker, the number of the marked region otherwise (Figure \ref{fig:markers_map_structure}).
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{markerMapStructure.png}
\caption{Structure of the input markers map for the Watershed algorithm}
\label{fig:markers_map_structure}
\end{figure}
To construct this markers map, the first step is to create an 8-bit 1-channel array of the exact same dimensions as the original image, and fill it with zeros. The second step is to insert the actual markers: for each marker, given its pixel-wise position in the 2D image, set the corresponding value in the array to 255 (i.e. the maximum value). This process yields a primary map of markers as a binary image. Then, we process this binary image with an edge detector to retrieve the contours of the markers and draw them into the actual 32-bit single-channel markers map. This way we obtain a markers map containing the region numbers at the marker positions. The OpenCV library provides two functions for this task: \verb$cv::findContours$ and \verb$cv::drawContours$.\\
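Setting the OpenCV contour functions aside, the layout of the markers map itself can be sketched as follows (the \verb$Stroke$ type and the stroke-index-plus-one region numbering are our own illustration, not part of the OpenCV API):
\begin{lstlisting}[language=C++]
#include <cstdint>
#include <utility>
#include <vector>

/* One user stroke: the set of (x, y) pixels painted by a
 * click-and-drag. This type is illustrative only. */
struct Stroke { std::vector<std::pair<int,int> > pixels; };

/* Build a 32-bit single-channel markers map in the format the
 * Watershed algorithm expects: 0 for unmarked pixels, and the
 * region number (here, stroke index + 1) at marked pixels. */
std::vector<int32_t> build_marker_map(
	const std::vector<Stroke>& strokes, int width, int height)
{
	std::vector<int32_t> markers(width * height, 0);
	for (size_t k = 0; k < strokes.size(); ++k)
		for (size_t p = 0; p < strokes[k].pixels.size(); ++p) {
			int x = strokes[k].pixels[p].first;
			int y = strokes[k].pixels[p].second;
			markers[y * width + x] = (int32_t)k + 1;
		}
	return markers;
}
\end{lstlisting}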
Once the 32-bit single-channel markers map is built, we can run the Watershed algorithm using the corresponding OpenCV function: \verb$cv::watershed$. The output is written back into the markers map given as input: this array is now filled with the region number for each pixel in the image, representing the original image segmented according to the roughly marked regions (Figure \ref{fig:markers_map_structure_output}).
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{markerMapStructureOutput.png}
\caption{Structure of the output markers map of the Watershed algorithm}
\label{fig:markers_map_structure_output}
\end{figure}
From now on, this map can be used to select an object in the scene and focus the mesh processing only on the image elements that are part of this object.

\subsubsection{Interactive object detection and selection}

We needed to give the user an interactive way to dynamically mark regions of interest in the scene, so we implemented a simple line-drawing system that keeps track of each marked pixel in the image (Figure \ref{fig:interactive_region_marking}). The user can clear the current marking if a mistake was made before running the object detection. When the rough region marking is done, the object detection algorithm can be launched.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{interactiveRegionMarking.png}
\caption{Interactive region marking}
\label{fig:interactive_region_marking}
\end{figure}
Once the object detection step is done, the coloured depth map view is replaced by a segmented image view showing the different detected objects in the scene (Figure \ref{fig:detected_objects_view}). From this point, the user is now able to select one of the objects in the scene by clicking on it in the segmented image view (Figure \ref{fig:selected_objects_view}).
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{detectedObjectsView.png}
\caption{Segmented image view presenting the detected objects in the scene}
\label{fig:detected_objects_view}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{objectSelectionView.png}
\caption{Segmented image view presenting the selected objects in the scene}
\label{fig:selected_objects_view}
\end{figure}

\pagebreak

\subsection{Mesh Creation} \label{subsec:mesh_creation}
We have a point cloud of multiple objects, and want to render a mesh of one specific object. The difficulty arises in determining the connectivity of the points in order to create the faces of the mesh. We modified an algorithm commonly used to render meshes from 3D laser scans [1] to build a face list and vertex list data structure. This allows us to easily render the mesh using OpenGL, and also to export it to a .obj file.\\
Object detection (Section \ref{subsec:object_detection}) provides the bounding rectangle ($[i_{min},j_{min}],[i_{max},j_{max}]$) of the object we are interested in rendering (Figure \ref{fig:bounding_rect}). We create a mesh of the entire area within the bounding rectangle, to simplify the algorithm.\\
To begin with, we have a 640x480 array of depth values for each pixel (i,j); we refer to this as the depth array. The object detection also provides a 640x480 array, which we call the object array: each entry is 0 for pixels that are not within any object, -1 for pixels on object boundaries, or an integer greater than 0 for pixels within an object. Each object identified in the scene gets a unique integer greater than zero. When the user selects an object, we get the integer value for that object, say $k$, $k>0$. Every pixel (i,j) within the object of interest has the value $k$ in the object array.\\
The first step in creating the mesh is what we call flattening the area. For pixels within the bounding rectangle but not within the object, we want to reduce the depth to the greatest depth of any pixel that is within the object. We traverse the depth array and find the maximum depth among pixels within the object. Then, for any (i,j) whose value in the object array is not equal to $k$ and whose depth is greater than this maximum, we set its depth to the maximum, thus flattening everything within the bounding rectangle but not in the object.\\
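A minimal sketch of this flattening step (the function name and the row-major array layout are our own assumptions):
\begin{lstlisting}[language=C++]
#include <vector>

/* Flatten everything that is not part of object k: any such
 * pixel deeper than the object's farthest point is clamped to
 * that depth. The depth and object maps are passed as parallel
 * row-major arrays covering the bounding rectangle. */
void flatten_background(std::vector<float>& depth,
                        const std::vector<int>& object, int k)
{
	float max_depth = 0.0f;
	for (size_t i = 0; i < depth.size(); ++i)
		if (object[i] == k && depth[i] > max_depth)
			max_depth = depth[i];
	for (size_t i = 0; i < depth.size(); ++i)
		if (object[i] != k && depth[i] > max_depth)
			depth[i] = max_depth;
}
\end{lstlisting}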
Once the object is flattened, we proceed to creating vertices. We create a new 640x480 array and initialize all values to the maximum value for the data type we are using (i.e. the maximum short value). We call this the average depth array. Then we traverse the depth array, considering a 10x10 box of pixels at a time. We create vertices for each of the four corners of the 10x10 box (Figure \ref{fig:square}), and each box becomes two triangular faces.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{square}
\caption{Traversing 10x10 pixels at a time}
\label{fig:square}
\end{figure}
We compute the average depth of these 100 pixels from the depth array. Then, for each (i,j) of the four corners, we check whether the value in the average depth array has been changed from the maximum it was initialized to. If the value has not been set, we simply set it to the average we computed. If the value has been set previously, we are at a pixel where two 10x10 boxes overlap, so we average the previous depth of this pixel with the new average (summing them and dividing by two) and store the result. This way we create a continuous mesh. Once we have traversed the entire depth array in 10x10 sections, we can create vertices. Every tenth position in the average depth array is a vertex. We convert the i,j coordinates to x,y world coordinates (Equation \ref{sec:worldcoords})
\begin{eqnarray} \label{sec:worldcoords}
x=(i/640)*4-2,\;\;\;\;\;y=(j/480)*(-2)+1
\end{eqnarray}
and set z to the value in the average depth array. We now have a list of vertices, which we can traverse to create two triangular faces for each 10x10 box.
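The vertex construction from Equation \ref{sec:worldcoords}, together with the two triangles per box, can be sketched as follows (the names and the triangle winding order are our own assumptions):
\begin{lstlisting}[language=C++]
#include <vector>

struct Vertex { float x, y, z; };
struct Face   { int a, b, c; };

/* Map image coordinates (i, j) in a 640x480 frame to world
 * coordinates, x in [-2, 2] and y in [-1, 1] as in the equation
 * above, taking z from the average depth array. */
Vertex pixel_to_vertex(int i, int j, float avg_depth)
{
	Vertex v;
	v.x = (i / 640.0f) * 4.0f - 2.0f;
	v.y = (j / 480.0f) * -2.0f + 1.0f;
	v.z = avg_depth;
	return v;
}

/* Emit the two triangles for one 10x10 box, given the vertex
 * indices of its corners: v0 top-left, v1 top-right, v2
 * bottom-left, v3 bottom-right. The winding order shown here
 * is an assumption. */
void add_box_faces(std::vector<Face>& faces,
                   int v0, int v1, int v2, int v3)
{
	Face f1 = {v0, v2, v1};
	Face f2 = {v1, v2, v3};
	faces.push_back(f1);
	faces.push_back(f2);
}
\end{lstlisting}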
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{boundingRect}
\caption{Bounding rectangle of object to render}
\label{fig:bounding_rect}
\end{figure}

\pagebreak

\subsection{Performance} \label{subsec:perf}
Streaming data from the Kinect requires significant bandwidth and CPU processing power. Originally we developed a user interface that used 3 separate OpenGL contexts: one for the rendered view (3D trackball), and two smaller contexts for the depth and RGB camera views. This configuration's performance was poor, due to the contexts duplicating the same data and the cost of OpenGL context switches.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{old_gui}
\caption{Old user interface with 3 separate GL contexts}
\label{fig:old_gui}
\end{figure}

This code was branched to try out a new idea: merging the 3 OpenGL contexts into a single one. This turned out to be a good choice, because the information coming from the depth and RGB cameras can easily be texture mapped and then overlaid on top of the rendered view. These changes increased performance significantly; with the 3-context design the program was unresponsive and laggy, while the current version is quite responsive and snappy.

\pagebreak

\section{Results}
Our final application allows users to create simple meshes of objects within the Kinect's field of view. There are three different views displayed in the main window. The larger view is initially a point cloud, created using depth data from the Kinect, with colour added from the RGB data. This main view also displays the rendered mesh. The two windows on the left side of the application, which can be hidden, show two different interpretations of the Kinect data. The bottom left view is simply the RGB data; it is nothing more than a normal video stream. The top left view is created by mapping different depths to different colours, producing an image where everything in the scene at the same depth is the same colour.\\
The meshes we are able to create are not very high in detail. This could be improved with further refinement of the algorithms (see Future Work, Section \ref{sec:futurework}). The examples below demonstrate the current functionality of the application.
\begin{description}
\item[Example 1] Face\\
The first example is a close up of a human face. Figure \ref{fig:face_rgb} shows the image from the Kinect RGB camera. The face is marked and selected (Fig. \ref{fig:face_mark_select}). Figure \ref{fig:face_pointclouds} shows the face, exported as a .ply file, viewed in Blender. Finally, Figure \ref{fig:face_mesh} shows the resulting mesh.\\

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{face_rgb}
\caption{Kinect RGB camera view of face}
\label{fig:face_rgb}
\end{figure}

\begin{figure}[h!]
\centering
\subfloat[Marking the face as the object we desire to render]{\label{fig:face_marked}\includegraphics[scale=0.5]{face_marked}}\quad
\subfloat[Selecting the face object from the scene]{\label{fig:face_selected}\includegraphics[scale=0.5]{face_selected}}
\caption{Marking and selecting the face}
\label{fig:face_mark_select}
\end{figure}

\begin{figure}[h!]
\centering
\subfloat[Straight on view of point cloud of the face, in Blender]{\label{fig:face_pc1}\includegraphics[scale=0.3]{face_pc1}}\quad
\subfloat[Angled view of face point cloud in Blender, shows depth of point cloud]{\includegraphics[scale=0.3]{face_pc2}}
\caption{Point cloud of the face, in Blender}
\label{fig:face_pointclouds}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{face_mesh}
\caption{Rendered mesh of the face}
\label{fig:face_mesh}
\end{figure}

\item[Example 2] Body\\
The second example, is a full body. Figure \ref{fig:body_rgb} shows the rgb view of the person. Figure \ref{fig:body_selected} shows the body selected in the scene. Figure \ref{fig:render} shows the main view during this process. Figure \ref{fig:body_render} shows everything, before object selection. Figure \ref{fig:body_selectedRender1} and Figure \ref{fig:body_selectedRender2} show only the selected object. The rest of the scene is filtered out. The point cloud, as viewed in Blender, is seen in Figure \ref{fig:body_pointclouds}. The final mesh is displayed in Figure \ref{fig:body_mesh}.

\begin{figure}[h!]
\centering
\subfloat[Kinect RGB camera view of figure]{\label{fig:body_rgb}\includegraphics[scale=0.5]{body_rgb}}\quad
\subfloat[Body after marking and selection]{\label{fig:body_selected}\includegraphics[scale=0.5]{body_selected}}
\caption{The body to be rendered}
\label{fig:body}
\end{figure}


\begin{figure}[h!]
\centering
\subfloat[Main rendered view of scene, before object detection and selection]{\label{fig:body_render}\includegraphics[scale=0.2]{body_render}}\quad
\subfloat[Main rendered view, after selection of object. Only the body is visible now.]{\label{fig:body_selectedRender1}\includegraphics[scale=0.2]{body_selectedRender1}}\quad
\subfloat[Another angle of selected object]{\label{fig:body_selectedRender2}\includegraphics[scale=0.2]{body_selectedRender2}}
\caption{View of the scene in the main view}
\label{fig:render}
\end{figure}

\begin{figure}[h!]
\centering
\subfloat[Straight on view of point cloud of the body, in Blender]{\label{fig:body_pc1}\includegraphics[scale=0.2]{body_pc1}}\quad
\subfloat[Angled view of body point cloud in Blender, shows depth of point cloud]{\includegraphics[scale=0.2]{body_pc2}}
\caption{Point cloud of the body, in Blender}
\label{fig:body_pointclouds}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{body_mesh}
\caption{Rendered mesh of the body}
\label{fig:body_mesh}
\end{figure}

\item[Example 3] Bobble Head\\
The level of fine detail provided by the Kinect depth data is not very high, especially for objects that are very close to the Kinect. If you want to render a small object, like a bobble head doll, putting it close to the Kinect does not give you any depth detail, and if it is far away, there will also be no detail. You can see in Figure \ref{fig:bobble_marked} that there is shadowing, and no colour, because the bobble head is too close to the Kinect sensor. The resulting point cloud (Fig. \ref{fig:bobble_pc}) has only the general shape, but no depth.

\begin{figure}[h!]
\centering
\subfloat[RGB camera]{\label{fig:bobble_rgb}\includegraphics[scale=0.5]{bobble_rgb}}\quad
\subfloat[Marked, note the lack of color, the object is too close to Kinect]{\label{fig:bobble_marked}\includegraphics[scale=0.5]{bobble_mark}}\quad
\subfloat[Selected]{\label{fig:bobble_selected}\includegraphics[scale=0.5]{bobble_selected}}\quad
\subfloat[Point cloud, no depth detail, only general shape]{\label{fig:bobble_pc}\includegraphics[scale=0.3]{bobble_pc}}
\caption{Bobble Head, a small object}
\label{fig:bobble}
\end{figure}

\item[Example 4] Multiple object detection\\
The object detection feature is capable of detecting many objects in the scene. Figure \ref{fig:multi_detect} shows the marking and detection of many objects in a scene.
\begin{figure}[h!]
\centering
\subfloat[RGB]{\includegraphics[scale=0.5]{multi_detect_rgb}}\quad
\subfloat[Marking objects]{\includegraphics[scale=0.5]{multi_detect_mark}}\quad
\subfloat[Detected Objects]{\includegraphics[scale=0.5]{multi_detect_done}}
\caption{Detecting multiple objects in a scene}
\label{fig:multi_detect}
\end{figure}

\end{description}

\pagebreak

\section{Future Work} \label{sec:futurework}
What we have created is only a very basic version of what could be. There are many areas that could be improved or further explored, including \hyperref[subsec:mesh_render]{Mesh Rendering} in general, rendering a full \hyperref[subsec:360]{360 degree view} of the mesh, and \hyperref[subsec:obj_track]{Object Tracking}.

\subsection{Mesh Rendering} \label{subsec:mesh_render}

The first major evolution of the program should target obtaining a better 3D mesh of a selected object: either enhance the current algorithm by de-noising its output, or implement another algorithm to transform the point cloud into a 3D mesh.

\subsection{360 Degree View} \label{subsec:360}

To obtain a full digital representation of a captured scene, we would need a 360 degree view of it. A promising technique called \emph{image registration} is a good candidate for this kind of transformation. In a few words, it consists of capturing (or registering) several different views of the scene and combining them together to obtain a 360 degree view (a good analogy is the panoramic technique used in 2D photography). This way we should be able to render a mesh of a complete 3D object (not only the front half of it).

\subsection{Object Tracking} \label{subsec:obj_track}

All the current computations are performed on a static captured image of a 3D scene. The next step would be to perform these operations (from object detection to mesh rendering) on a live 3D video stream. This would result in a motion capture system able to model a 3D object in real time. One way to achieve this is to implement (or use) an object tracking system.

\section{Conclusion}
Overall this project was a success. We set out to determine if we could render simple meshes of real life objects, using data obtained from the Kinect. We were able to do this. The application we created is definitely v1.0 software. There are many improvements that could be made (Sec. \ref{sec:futurework}) to the application. Along the way we solved many challenges (Sec. \ref{sec:challenges}), and learned a great deal about modelling, computer vision, and the power of existing libraries.

\section{References}
[1] Christoph Hoppe and Dominik Ducati, Meshing Point Clouds From 3D Laser Scans, Dec 8, 2011, \url{http://pille.iwr.uni-heidelberg.de/~laserscan01/}\\

\pagebreak

\appendix
\section{Setting Up Your Machine} \label{sec:setup}

These instructions are also available online at \url{http://code.google.com/p/3d-kinect-modelling/wiki/GettingStarted}.\\
The source is available at the svn repository: \url{https://3d-kinect-modelling.googlecode.com/svn/trunk/}\\

Follow these instructions to get the project running on your machine. Required dependencies:
\begin{itemize}
\item Qt
\item libfreenect
\item OpenCV
\end{itemize}

\subsection{MacOSX / UNIX-based Systems}
\begin{enumerate}
\item Make sure you have Qt installed
\item Set up OpenKinect on your machine:

\begin{itemize}
\item Install libfreenect and libusb using the Homebrew package manager. Detailed instructions here: \url{http://openkinect.org/wiki/Getting_Started#Homebrew}
\end{itemize}

\item Set up OpenCV on your machine:

\begin{itemize}
\item Instructions here: 
\url{http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port}
\item It is easier to build from source. Note that their instructions do not specify a directory to put the binaries in; you should. When running cmake, use something like:
\begin{verbatim}
    cmake -F "Unix Makefiles" -D CMAKE_INSTALL_PREFIX=/usr/local ..
\end{verbatim}
\end{itemize}

\item You also need pkg-config on your system, to use our .pro file. Find it here: \url{http://www.freedesktop.org/wiki/Software/pkg-config}
\item Checkout the 3d-kinect-modelling project, \url{https://3d-kinect-modelling.googlecode.com/svn/trunk/}
\item Modify 3d-kinect-modelling.pro as follows:
add \verb$/usr/local/include/libfreenect$ to the \verb$INCLUDEPATH$, so that the line is as follows:
\begin{verbatim}
    INCLUDEPATH += . headers /usr/local/include/libfreenect
\end{verbatim}
\item run \verb$qmake$, to generate the Makefile, then \verb$make$
\end{enumerate}

\subsection{Windows 7}
\begin{enumerate}
\item Download and install Qt SDK, \url{http://qt.nokia.com/downloads/}
\item Download and install libfreenect
\item Follow instructions here \url{http://openkinect.org/wiki/Getting_Started#Windows}
\item Download and install OpenCV, \url{http://opencv.willowgarage.com/wiki/InstallGuide}
\item Check out source, \url{https://3d-kinect-modelling.googlecode.com/svn/trunk/}
\item Modify .pro file according to your system
\item Build using the IDE of your choice, Qt Creator or Visual Studio
\end{enumerate}

\pagebreak

\section{User Manual} \label{sec:usermanual}
First you have to successfully set up your machine (see Appendix \ref{sec:setup}), and start the application.\\
\\
\textbf{General Controls}\\
\emph{Rotate}: Left click and drag\\
\emph{Rotate around Z axis}: Right click and drag\\
\emph{Move center of view}: Scroll wheel\\
\emph{Hide/display left windows}: T\\
\emph{Hide/display texture in main view}: M\\
\emph{Mark regions}: Ctrl+left click\&drag\\
\emph{Clear marks}: C\\
\emph{Detect Objects}: D\\
\emph{Render Mesh}: R\\
\\
\textbf{Usage}
\begin{description}

\item[Step 1] Orient the Kinect so that the object you want to model is generally centred in the view

\item[Step 2] Press the Pause button to capture the current scene (Figure \ref{fig:usage_step1})\\
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{step1}
\caption{Pause}
\label{fig:usage_step1}
\end{figure}

\item[Step 3] Mark regions of the upper left image: hold Control and use the mouse to click and drag, marking the different objects. In Figure \ref{fig:usage_step2}, the main person has been marked, as well as the surrounding area.\\
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{step2}
\caption{Marked regions in upper left window}
\label{fig:usage_step2}
\end{figure}

\item[Step 4] Press D to detect objects in the scene. The upper left view will change, showing different shades of colour for different objects (Figure \ref{fig:usage_step3}).
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{step3}
\caption{After object detection}
\label{fig:usage_step3}
\end{figure}
\item[Step 5] Select a specific object in the scene, using control-left click. The object you select will be displayed in blue (Figure \ref{fig:usage_step4}), and the main view will now show only this object.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{step4}
\caption{Select an object}
\label{fig:usage_step4}
\end{figure}

\item[Step 6] You can now export the object as a point cloud (out.ply), or render it as a mesh. To render as a mesh, press R. The mesh will be displayed in the main view (Figure \ref{fig:usage_step5}). Press T to toggle the display of the left windows for easier viewing.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{step5}
\caption{After rendering}
\label{fig:usage_step5}
\end{figure}

\item[Step 7] You can now export the mesh as an obj file (out.obj) by clicking the export .obj button.

\end{description}



\end{document}
