\chapter{Assessments}
\ifpdf
    \graphicspath{{Chapter3/Chapter3Figs/PNG/}{Chapter3/Chapter3Figs/PDF/}{Chapter3/Chapter3Figs/}}
\else
    \graphicspath{{Chapter3/Chapter3Figs/EPS/}{Chapter3/Chapter3Figs/}}
\fi

In this chapter, we use the set of requirements and criteria outlined in the Revised Requirements document as the metrics to evaluate the quality of our project from several aspects.
\section{Assessment against Requirements}

\subsection{The software must be compatible with any Kinect device}
The constants described in the Alignment section work well with all three Kinect devices that we purchased for this project. Moreover, those constants can also be derived for an individual Kinect to optimize its performance \cite{monocularcalib}.
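As a minimal sketch of how such per-device constants are used, the snippet below back-projects a depth pixel into metric camera space with a pinhole model. The focal lengths and principal point are illustrative placeholder values, not the constants derived in the Alignment section:

```python
# Sketch: converting a Kinect depth pixel to a 3-D point in camera space.
# FX, FY, CX, CY are illustrative placeholders; the thesis derives
# per-device values as described in the Alignment section.
FX, FY = 594.21, 591.04      # focal lengths in pixels (assumed values)
CX, CY = 339.5, 242.7        # principal point in pixels (assumed values)

def depth_to_point(u, v, depth_m):
    """Back-project pixel (u, v) with metric depth into camera space."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)
```

A per-Kinect calibration simply replaces the four constants with values fitted to that individual sensor.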


\subsection{Produce a 360-degree 3D model}
The final output 3D model delivers a full 360-degree view. Moreover, the merging heuristic makes no explicit assumptions about sequential frames; hence, we can also take two separate frames of the top and the bottom of the scanned object and merge them into the final model.


\subsection{Display only the scanning object}
In order to clip unwanted scenery around the object, we construct a ``magic cube'' based on a set of parameters; all points outside the cube are discarded. This means that the user has to estimate both the size of the object and its distance from the Kinect before performing the clipping operation. The parameters can be reused if all scanned objects are of similar size and the distance between the Kinect and the object is relatively constant.
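The clipping step can be sketched as a simple axis-aligned bounding-box filter over the point cloud. The parameter names (`center`, `size`) are illustrative, not the project's actual identifiers:

```python
# Sketch of the "magic cube" clipping step: keep only points inside an
# axis-aligned cube that the user estimates around the scanned object.
def clip_to_cube(points, center, size):
    """Return the points within a cube of edge length `size` around `center`."""
    half = size / 2.0
    return [
        (x, y, z) for (x, y, z) in points
        if abs(x - center[0]) <= half
        and abs(y - center[1]) <= half
        and abs(z - center[2]) <= half
    ]
```

Reusing the same `center` and `size` across scans corresponds to the parameter reuse described above.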


\subsection{Output to standard 3D format}
The final output file is in the PLY format, a standard 3D format designed to store 3D data from scanners \cite{ply}.
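For illustration, the sketch below writes a minimal ASCII PLY file with colored vertices; the exact properties emitted by the project's pipeline may differ:

```python
# Sketch of a minimal ASCII PLY writer for colored vertices.
# The header declares the vertex count and per-vertex properties,
# followed by one vertex per line.
def write_ply(path, vertices):
    """Write (x, y, z, r, g, b) vertices to an ASCII PLY file."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(vertices))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in vertices:
            f.write("%f %f %f %d %d %d\n" % (x, y, z, r, g, b))
```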


\section{Assessment against Criteria}

\subsection{Technical Criteria}

\subsubsection{Filter input noise}
To reduce the amount of noise, a number of measures have been taken. First, we use a robust weighted averaging method to obtain a more stable depth image. In addition, the clipping process removes the surroundings of the real object. The merging heuristics also tolerate a certain proportion of outliers (20\%). Finally, the Poisson-disk sampling algorithm enforces an even vertex distribution throughout the whole model. Overall, as illustrated in the figures shown previously, the final output model performs very well with respect to noise filtering.
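The depth-stabilization step can be sketched as a per-pixel average over several captured frames, skipping invalid (zero) readings; the project's actual weighting scheme may be more elaborate than this plain average:

```python
# Sketch of per-pixel averaging over several depth frames to suppress
# sensor noise; zero readings (the Kinect's "no data" value) are skipped.
# The project's real method is a weighted variant of this idea.
def average_depth(frames):
    """frames: list of equally sized 2-D lists of raw depth values."""
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            valid = [f[i][j] for f in frames if f[i][j] > 0]
            out[i][j] = sum(valid) / len(valid) if valid else 0
    return out
```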
\subsubsection{Color mapping}
In general, the color mapping error is relatively small, within a few pixels. The main bottleneck for this criterion is the Kinect itself, because its color images only have a resolution of $640\times480$ pixels. At the moment, the color texture can appear blurry on 3D models. However, we expect the color quality to improve if the Kinect provides more data points.
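Color mapping can be sketched as projecting each 3-D point into the $640\times480$ color image to look up its pixel. The intrinsics below are illustrative assumptions, and the real pipeline also applies a depth-to-color extrinsic transform first:

```python
# Sketch: find the color-image pixel for a 3-D point by perspective
# projection. Intrinsics are assumed placeholder values; the real
# pipeline also aligns the depth and color camera frames beforehand.
W, H = 640, 480                               # Kinect color resolution
FX, FY, CX, CY = 529.2, 525.6, 328.9, 267.5   # assumed color intrinsics

def color_pixel(x, y, z):
    """Return the (u, v) color-image pixel for camera-space point (x, y, z)."""
    u = int(round(x * FX / z + CX))
    v = int(round(y * FY / z + CY))
    # Clamp to the sensor resolution; out-of-view points get edge pixels.
    return min(max(u, 0), W - 1), min(max(v, 0), H - 1)
```

The fixed $640\times480$ grid is what bounds the achievable texture sharpness noted above.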
\subsubsection{Computation time}
The major bottleneck in terms of computation time is the merge component. The algorithm has a complexity of $O(N^2)$. With a sample size of 1000 points, this translates to a running time of roughly 10 minutes. Depending on the complexity of the object, 1000 sample points may be enough for simple objects such as a box, but insufficient for more sophisticated objects such as a real person. The computation time is therefore acceptable for the purpose of 3D printing; however, the speed is insufficient for fast-response applications such as 3D video streaming.
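The quadratic cost comes from comparing every sample point against every other. A brute-force nearest-neighbour search, sketched below as a stand-in for the project's merge inner loop, makes the $N^2$ behaviour explicit:

```python
# Illustrative brute-force nearest-neighbour search: N source points,
# each compared against all N destination points -> O(N^2) distances.
def nearest_neighbours(src, dst):
    """For each point in src, return the index of its closest point in dst."""
    result = []
    for (x1, y1, z1) in src:                    # N iterations ...
        best, best_d = -1, float("inf")
        for k, (x2, y2, z2) in enumerate(dst):  # ... times N comparisons
            d = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
            if d < best_d:
                best, best_d = k, d
        result.append(best)
    return result
```

With 1000 points per frame this is on the order of a million distance computations per pass, which is consistent with the multi-minute running times reported above.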

\subsubsection{Adhere to software design pattern}
The entire program comprises Python code, shell scripts, and MeshLab scripts. During the implementation stage, we commented all changes and avoided hard-coded values in order to deliver a generic implementation. Moreover, all team members share an online code repository \cite{kinnect}, which allows everyone to synchronize to the latest work and also serves as a backup of all code on a cloud server.


\subsubsection{The output format to be supported by other existing applications}
The output model is of type PLY, which is supported by a variety of 3D applications such as Maya and RepRap.
\subsubsection{Relative accuracy}
The relative accuracy between the real object and its 3D model is within a few centimeters. This variation is mainly due to rounding errors introduced during the point-cloud-to-mesh conversion.


\subsubsection{Handle complex objects}
The objects tested so far are a drawer, a soy sauce bottle, a box, a bear, and a real person. The outcome is relatively accurate for the first four objects, and the algorithm performs well against both convex and concave shapes. The soy sauce bottle had a hole around the handle; this void space was also reflected in the output model. In the case of a real person, although the general 3D shape resembled the actual person, the color mapping was poorly constructed. However, since the person had to rotate himself in order to capture image frames from all 360 degrees, the input data are less accurate than for the static objects. Overall, we believe the scanner is fairly accurate for household items.

\subsubsection{Scan range} 
The software does not impose further physical restrictions on the scanning range; the range depends entirely on the camera limitations of the Kinect.


\subsection{Application Criteria}
The software should not be restricted to a specific application. If a particular application is very appealing but requires a hardware upgrade from the current Kinect, we would still like to demonstrate its feasibility, even though the application may not be practical at the present moment. Below are the three possible applications that we proposed in the initial requirements document:


\subsubsection{3D printing}
3D printing is a time-consuming application, so the computation time is non-critical in this case. However, the printing job may require a high relative accuracy with respect to the real object. Depending on the size of the object, a precision on the order of centimeters may or may not fulfill the accuracy requirement.


\subsubsection{3D Animation}
This application requires only low relative accuracy and places no specific constraint on the computation time. It is a good match for the current state of the project.


\subsubsection{3D live stream}

While accuracy is not very important in live streaming, the output has to be delivered instantaneously. The current running time is over 10 minutes; however, it is possible to improve the performance dramatically with the help of GPUs with 100+ cores.

\subsection{Economic Criteria}

The major objective of this project is to build a 3D scanner using the Kinect as cheaply as possible. A brand new Kinect costs \$149.99 at any local BestBuy store \cite{bestbuy}. Initially, we planned to build a rotation apparatus on which the scanned object could be placed and rotated. However, the merging algorithm does not require all captured frames to share the same rotational axis; hence there is no need for such an apparatus. Moreover, both OpenKinect and MeshLab are open-source projects that are freely available to the general public. As a result, the total cost of building our Kinect 3D scanner is equivalent to the cost of the Kinect itself.

% ------------------------------------------------------------------------


%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "../thesis"
%%% End: 