% \pagebreak[4]
% \hspace*{1cm}
% \pagebreak[4]
% \hspace*{1cm}
% \pagebreak[4]

\chapter{The Design}
\ifpdf
    \graphicspath{{Chapter1/Chapter1Figs/PNG/}{Chapter1/Chapter1Figs/PDF/}{Chapter1/Chapter1Figs/}}
\else
    \graphicspath{{Chapter1/Chapter1Figs/EPS/}{Chapter1/Chapter1Figs/}}
\fi

\section{Overview of the design}
The design can be divided into a few main parts, each corresponding to a section in this chapter. A basic description is given here, with more details to follow in the respective sections. See figure \ref{fig:simpleflowchart} for a simple overview and figure \ref{fig:flowchart} for a more detailed flowchart.

\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{5steps.pdf}
\caption{Simple overview of this design; a more detailed flowchart is shown in figure \ref{fig:flowchart}.}
\label{fig:simpleflowchart}
\end{figure}

First, images of an object are captured from different viewpoints. Each sensor image goes through a few preprocessing steps to reduce the noise present in the raw sensor input. The denoised depth images then go through an alignment procedure that aligns them with the color images. Each aligned image is converted to a point cloud in world coordinates. Point clouds from adjacent viewpoints are combined pairwise, and all point clouds are eventually merged into a single point cloud model of the whole object. The point cloud is then converted to a mesh by sampling points and adding edges and faces. Lastly, the RepRap host software takes the mesh and prints the scanned object.

\begin{figure}[!ht]
\centering
\includegraphics[scale=0.7, angle=90]{flowchart.pdf}
\caption{Flowchart overview of this design}
\label{fig:flowchart}
\end{figure}
\clearpage

\section{Preprocessing steps}
Raw depth input from the sensor is very noisy (see figure \ref{fig:denoise}). However, because the structured light pattern used to obtain the depth image is randomly projected, the noise differs from frame to frame. It is therefore possible to take advantage of multiple captures and use all of that information to denoise the image.

We take the following approach, which we call weighted robust averaging. If the sensor returns black, meaning the depth is unknown, then the value is not averaged into the final image; otherwise, the final depth value is the average over several frames. More precisely, let $d^1, d^2, \ldots, d^9 \in \mathbb{R}^{640\times480}$ be 9 consecutive frames, and let $W^1, W^2, \ldots, W^9 \in \mathbb{R}^{640\times480}$ be the corresponding weight images, so that $W^k_{ij} = 0$ if $d^k_{ij} = \mathtt{No Reading}$ and $W^k_{ij} = 1$ otherwise. Define $\odot$ as the pixel-wise multiplication operation. Then the final output $D$ is:

$$D = \frac{\sum_{k=1}^{9} W^k \odot d^k}{\sum_{k=1}^{9} W^k}$$

where the division is also pixel-wise.
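The averaging above can be sketched in a few lines of NumPy. This is an illustrative sketch only: it assumes the sensor reports \texttt{No Reading} as the value 0, and that the frames arrive as 2D arrays.

```python
import numpy as np

NO_READING = 0  # assumed sentinel for "no reading" (black pixel)

def robust_average(frames):
    """Weighted robust average of consecutive depth frames.

    frames: list of 2-D depth arrays (e.g. nine 640x480 images).
    Pixels equal to NO_READING get weight 0 and are excluded from
    the average; all operations are pixel-wise.
    """
    d = np.stack(frames).astype(float)    # shape (K, H, W)
    w = (d != NO_READING).astype(float)   # weight images W^k
    total = w.sum(axis=0)
    # Divide pixel-wise; leave pixels with no readings at all as 0.
    return np.divide((w * d).sum(axis=0), total,
                     out=np.zeros_like(total), where=total > 0)
```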

\begin{figure}[!ht]
\centering
\includegraphics[scale=.3]{before.png}
\includegraphics[scale=.3]{after.png}
\caption{The sensor input. \textbf{Left}: before robust averaging. \textbf{Right}: after robust averaging.}
\label{fig:denoise}
\end{figure}

\section{Alignment}

\subsection{The problem}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{calib.png}
\caption{Misaligned color and depth images. The red arrows mark corresponding points that are misaligned by a large amount between the color and depth images \cite{kinectcalib}.}
\label{fig:missalign}
\end{figure}

This problem is due to the way the Kinect sensor works. The Kinect uses separate cameras at different locations for color and depth. Furthermore, the infrared laser projector is at yet another location, different from both the depth and color cameras, so some misalignment is to be expected. See figure \ref{fig:missalign} for a demonstration of this problem \cite{kinectcalib}.


\subsection{The solution}
Since we would like to build a colored 3D model, we have to solve this problem. The objectives of this component are twofold:

\begin{itemize}
\item{Take a depth image and reconstruct the corresponding 3D scene in a Cartesian coordinate system, with coordinates in meters.}
\item{Find the mapping between each pixel in the depth image and the corresponding pixel in the color image.}
\end{itemize}

Solutions to these two problems are well studied within the Kinect community; the solution used in this project is outlined in greater detail in \ref{chap:align}.
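For concreteness, the first objective (back-projecting a depth image into 3D) can be sketched with a standard pinhole-camera model. The intrinsics \texttt{FX}, \texttt{FY}, \texttt{CX}, \texttt{CY} below are placeholders, not the calibrated values used in this project; the actual procedure is given in \ref{chap:align}.

```python
import numpy as np

# Placeholder pinhole intrinsics (focal lengths and principal point,
# in pixels); the real values come from calibration.
FX, FY = 594.2, 591.0
CX, CY = 339.5, 242.7

def depth_to_points(depth):
    """Back-project a depth image (in meters) to a (H, W, 3) array
    of Cartesian coordinates, also in meters."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.dstack([x, y, depth])
```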

\section{The point cloud}

The input from the Kinect is a 2D array of depth values and color measurements, but the PLY output format, as well as the combination algorithms in the next section, operates on point clouds. It is therefore necessary to convert the 2D arrays into point cloud form. The conversion is done in the constructor of the point cloud class. In essence, it goes over the 2D array point by point and applies the alignment algorithm mentioned in the previous section. The code is optimized by using matrix operations instead of loops, so the conversion imposes negligible overhead. After conversion, the data are stored in two lists that share the same index: one for vertex coordinates and another for colors. The coordinates and colors are separated because the combination algorithms only need the coordinate values; this makes the lists easier to manipulate.


Other than the constructor, three more methods are defined for this class: addition, clipping, and PLY output. The addition operator is overloaded to concatenate two objects, which is done by concatenating all internal matrices. The clipping method cuts off extraneous points around the object: its input variables define the 6 planes that form a box, and the method deletes all points in the point cloud that fall outside the box. The code also allows clipping one side at a time. The PLY output method follows the PLY file format standard, first printing the headers and then a list of all points in the point cloud. The coordinates are scaled by 1000 on output, because the internal unit used in this project is the meter, while the RepRap software used for 3D printing assumes all coordinates are in millimeters \cite{reprapunit}. The PLY format specification is unit-less (any unit can be used for the coordinates), so this scaling does not affect the portability of the output PLY file.
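A minimal sketch of such a class, with the three methods described above, might look as follows. The class and method names here are our illustration, not the project's exact API.

```python
import numpy as np

class PointCloud:
    """Simplified sketch of the point cloud class described above.

    Coordinates are stored in meters; colors are stored separately so
    the combination algorithms can operate on coordinates alone."""

    def __init__(self, points, colors):
        self.points = np.asarray(points, dtype=float)     # (N, 3) xyz, meters
        self.colors = np.asarray(colors, dtype=np.uint8)  # (N, 3) rgb, same index

    def __add__(self, other):
        # Concatenation of two clouds: stack the internal matrices.
        return PointCloud(np.vstack([self.points, other.points]),
                          np.vstack([self.colors, other.colors]))

    def clip(self, lo, hi):
        """Keep only points inside the axis-aligned box [lo, hi] (6 planes).
        Use -inf/inf entries in lo/hi to clip one side at a time."""
        keep = np.all((self.points >= lo) & (self.points <= hi), axis=1)
        return PointCloud(self.points[keep], self.colors[keep])

    def save_ply(self, path):
        # PLY is unit-less; scale meters -> millimeters for RepRap.
        pts = self.points * 1000.0
        with open(path, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write(f"element vertex {len(pts)}\n")
            f.write("property float x\nproperty float y\nproperty float z\n")
            f.write("property uchar red\nproperty uchar green\n"
                    "property uchar blue\n")
            f.write("end_header\n")
            for p, c in zip(pts, self.colors):
                f.write(f"{p[0]} {p[1]} {p[2]} {c[0]} {c[1]} {c[2]}\n")
```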

\section{Combination step}
Once we have a collection of point clouds, we can compare them pairwise and extract the transformation between each pair of adjacent point clouds: a rotation matrix $R \in \mathbb{R}^{3\times3}$ and a translation vector $t \in \mathbb{R}^3$. The detailed algorithm, based on \cite{3drecon} and \cite{nonrigid}, is described in \ref{chap:comb}.

After all these transformations are extracted, each point cloud is transformed accordingly, and all point clouds are combined into a single point cloud in world coordinates.
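The chaining of pairwise transforms into world coordinates can be sketched as below. The composition convention (each cloud is moved by the accumulated transform of all pairs between it and the reference frame) is an assumption made for this illustration.

```python
import numpy as np

def apply_transform(points, R, t):
    """Apply the rigid transform x -> R x + t to an (N, 3) point array."""
    return points @ R.T + t

def to_world(clouds, Rs, ts):
    """Chain pairwise transforms (Rs[k], ts[k] assumed to map cloud k+1
    onto cloud k) and merge everything into the frame of the first cloud."""
    world = [clouds[0]]
    R_acc, t_acc = np.eye(3), np.zeros(3)
    for cloud, R, t in zip(clouds[1:], Rs, ts):
        R_acc, t_acc = R_acc @ R, R_acc @ t + t_acc  # accumulate composition
        world.append(apply_transform(cloud, R_acc, t_acc))
    return np.vstack(world)
```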

The combination algorithm works by simulated annealing on the variance $\sigma^2$ as EM steps are taken: with a larger variance the algorithm looks for rough matches, and with a smaller variance, finer matches. A uniform component is also added to the Gaussian mixture of \cite{3drecon} to deal with outliers. The process stops when 20\% of the data are explained by the uniform component. For more details on the algorithm we developed, see \ref{chap:comb}. We also tried a Bayesian model for this task, also described in \ref{chap:comb}; unfortunately, inference by Markov chain Monte Carlo did not meet our performance requirements.
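The stopping criterion can be illustrated by computing the posterior mass assigned to the uniform component. The mixture weight \texttt{w} and support \texttt{volume} below are illustrative values, not the ones used in the actual algorithm (see \ref{chap:comb}).

```python
import numpy as np

def uniform_fraction(sq_dists, sigma2, w=0.1, volume=1.0):
    """Fraction of points explained by the uniform (outlier) component.

    sq_dists: squared distances from each point to its current match;
    sigma2:   current (annealed) Gaussian variance;
    w, volume: illustrative mixture weight and uniform support.
    """
    # 3-D isotropic Gaussian density times its mixture weight.
    gauss = (1 - w) * np.exp(-sq_dists / (2 * sigma2)) \
            / (2 * np.pi * sigma2) ** 1.5
    unif = w / volume
    post = unif / (unif + gauss)  # responsibility of the outlier term
    return post.mean()            # anneal until this exceeds 0.2
```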

There is one final correction applied during combination. Because we combine pairwise, errors accumulate, so the last frame does not necessarily wrap back to the first. We therefore introduce an ad-hoc iterative algorithm that forces all the rotation matrices to multiply to the identity. Let

$$P = R_1 R_2 \cdots R_n$$

We would like to modify each $R_i$ by a small amount so that $P = I$. This can be done using the following iterative procedure, for each $i$ from 1 to $n$:

\begin{enumerate}
 \item Set $P_i = R'^{-1}_{i-1} R'^{-1}_{i-2} \cdots R'^{-1}_{1}$ and $Q_i = R_i R_{i+1} \cdots R_n$.
 \item Set the correction term $D_i = Q_i^{-1} P_i = I + \Delta$. $D_i$ would equal $I$ if there were no error accumulation, so $\Delta$ represents how far from $I$ we are.
 \item Take the $n$th root: $C_i = D_i^{1/n} \approx I + \Delta/n$.
 \item Set $R'_i = C_i R_i$.
 \item Multiply both sides of the equation by $R'^{-1}_i$ and repeat with the next $i$.

\end{enumerate}

With each iteration, the product $P$ becomes closer to the identity.
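The procedure above can be sketched as follows. We use the first-order approximation $D^{1/n} \approx I + \Delta/n$ for the $n$th root, and re-project each corrected matrix onto the rotation group with an SVD; that projection step is a numerical safeguard added for this sketch, not part of the procedure itself.

```python
import numpy as np

def project_to_rotation(M):
    """Nearest proper rotation to M (polar decomposition via SVD)."""
    U, _, Vt = np.linalg.svd(M)
    if np.linalg.det(U @ Vt) < 0:
        U[:, -1] *= -1
    return U @ Vt

def close_loop(Rs):
    """Spread the loop-closure error over all rotations in the chain."""
    n = len(Rs)
    Rs = [R.copy() for R in Rs]
    for i in range(n):
        P = np.eye(3)                 # P_i = R'_{i-1}^{-1} ... R'_1^{-1}
        for R in reversed(Rs[:i]):
            P = P @ R.T               # inverse of a rotation is its transpose
        Q = np.eye(3)                 # Q_i = R_i R_{i+1} ... R_n
        for R in Rs[i:]:
            Q = Q @ R
        D = Q.T @ P                   # D_i = Q_i^{-1} P_i = I + Delta
        C = np.eye(3) + (D - np.eye(3)) / n   # n-th root, first order
        Rs[i] = project_to_rotation(C @ Rs[i])
    return Rs
```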

\section{Mesh}
Existing 3D applications often expect models in mesh format. We use the third-party software MeshLab to convert the point cloud model into a triangular mesh model.

We first subsample the point cloud using the Poisson-disk sampling algorithm. This step forces the vertices to be uniformly distributed while also eliminating noisy data points. Next, we apply the Poisson surface reconstruction operation to construct the 3D mesh. Note that the resulting mesh does not yet include color information. In the final step, we run the vertex attribute transfer operation to transfer the color information from the original point cloud onto the new mesh model. The transfer algorithm uses a simple closest-point heuristic to match points between the two models.

Moreover, MeshLab allows users to export all the operations described above into a single script file (.mlx). The script can then be invoked from a shell script adhering to the MeshLabServer specifications \cite{meshlab13}.

See \ref{chap:meshsoft} for a detailed comparison of alternative meshing software packages.

\section{RepRap}

Although RepRap lists PLY as a recommended file format on its official site \cite{recformat}, the actual software can only read STL files and the RepRap-specific (non-portable) RFO format. The STL file must be in binary form (the ASCII format will not load properly) and must use millimeters as its unit \cite{reprapunit}. Our PLY file is already in millimeters, so the only remaining step is converting from PLY to STL (MeshLab is chosen for this job simply because the previous steps used it). Note that STL does not support color, so the color information is lost (the RepRap printer cannot print in color anyway).

After the file has been converted to STL format, it can be loaded into the RepRap host program and used to build the 3D object on the machine. See \cite{reprapprocedure} for how to use the RepRap host program to print parts. Printing is time-consuming: printing a toy bear, for example, takes about 11 hours.

\section{Accessing our code, replicating the results}
First, you need the Mercurial source control system to access the up-to-date code. The code is hosted at \url{http://code.google.com/p/kinnectproj/}. A few original datasets and results can be found in the Downloads section.

If you would like to become a contributor, email one of us. Our email addresses can be found on the project page.
 

% ------------------------------------------------------------------------


%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "../thesis"
%%% End: 