\chapter{Alignment Methods}
\label{chap:align}
The major objectives of this component are twofold:

\begin{itemize}
\item{Take a depth image and construct the corresponding 3D scene in a Cartesian coordinate system, with coordinates expressed in meters.}
\item{Find the mapping between each pixel in the depth image and its corresponding pixel in the color image.}
\end{itemize}


\section{Depth coordinates to world coordinates}
By default, the depth images are $640 \times 480$ pixel arrays, with each pixel holding a depth value between 0 and 2047. It is easy to construct a greyscale image of such an array, as shown in figure \ref{fig:capture}. In the figure, a darker pixel represents a location nearer to the depth camera, while a brighter pixel represents a location farther from it. The black regions are areas that the camera cannot see from its shooting angle.
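As a minimal sketch (using NumPy, which is an assumption here rather than part of our implementation), the 11-bit depth array can be rescaled to an 8-bit greyscale image for display:

```python
import numpy as np

# Hypothetical 640x480 depth frame with 11-bit values (0-2047);
# a real frame would come from the Kinect driver instead.
depth = np.random.randint(0, 2048, size=(480, 640), dtype=np.uint16)

# Rescale the 11-bit range to 8-bit greyscale. Smaller raw values
# (nearer points) become darker pixels, matching the figure.
grey = (depth.astype(np.uint32) * 255 // 2047).astype(np.uint8)

print(grey.shape)  # (480, 640)
```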

\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth]{figs3/capture.png}
\caption{Greyscale image of a depth array}
\label{fig:capture}
\end{figure}

Given that in a greyscale image black is defined as the value 0 and white as the value 255 \cite{digitalphoto}, we can see that the depth value increases with the real distance. Indeed, there exists a linear relationship between the raw depth measurement and the inverse of the distance to the camera.
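This inverse-linear relationship can be sketched as follows. The two coefficients below are the commonly cited OpenKinect calibration values, used here as an assumption; they are not constants measured in this project.

```python
def raw_depth_to_meters(raw_depth):
    """Convert an 11-bit Kinect depth reading to a distance in meters.

    1/distance is linear in the raw reading; the coefficients are
    the commonly cited OpenKinect values (an assumption here).
    """
    if 0 < raw_depth < 2047:
        return 1.0 / (raw_depth * -0.0030711016 + 3.3309495161)
    return 0.0  # out-of-range readings carry no distance information

# Larger raw readings correspond to larger distances:
print(raw_depth_to_meters(600))   # roughly 0.67 m
print(raw_depth_to_meters(1000))  # roughly 3.85 m
```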

\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth]{figs3/table2.png}
\caption{Relationship between depth measurements and inverse distances}
\label{fig:table2}
\end{figure}

The data points were collected experimentally \cite{kinectnode} and are shown in figure \ref{fig:table2}. It is worth noting that the experimental data discussed in this section were not collected by us; however, we did run a series of sample tests, and our findings matched the claims closely. Now that we know the z-axis value of our world coordinates, finding the x-axis and y-axis values is a matter of inverting the image projection using the formulas listed below:

\begin{lstlisting}
P3D.z = depth(x_d, y_d)
P3D.x = (x_d - cx_d) * P3D.z / fx_d
P3D.y = (y_d - cy_d) * P3D.z / fy_d
\end{lstlisting}
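A direct translation of these formulas is sketched below. The intrinsic constants (focal lengths fx\_d, fy\_d and principal point cx\_d, cy\_d) are placeholder values in the spirit of the calibration constants in figure \ref{fig:table1}; the real values come from calibration and differ per device.

```python
# Hypothetical depth-camera intrinsics (assumed values, not the
# calibrated constants used in our implementation).
FX_D, FY_D = 594.21, 591.04
CX_D, CY_D = 339.31, 242.74

def depth_pixel_to_world(x_d, y_d, z_meters):
    """Back-project the depth pixel (x_d, y_d), whose depth is
    z_meters, into camera-frame world coordinates (in meters)."""
    x = (x_d - CX_D) * z_meters / FX_D
    y = (y_d - CY_D) * z_meters / FY_D
    return (x, y, z_meters)

# A pixel at the principal point lies on the optical axis:
print(depth_pixel_to_world(339.31, 242.74, 2.0))  # (0.0, 0.0, 2.0)
```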


\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth]{figs3/table1.png}
\caption{Constants used for the conversion to world coordinates}
\label{fig:table1}
\end{figure}

A point cloud is a set of vertices in a three-dimensional coordinate system. If we take each pixel of the depth image and convert it to its respective world coordinate, the point cloud is thus constructed.
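The whole-image conversion can be sketched in vectorized form with NumPy (the intrinsics below are again assumed placeholder values):

```python
import numpy as np

# Hypothetical depth-camera intrinsics (assumption, see above).
FX_D, FY_D, CX_D, CY_D = 594.21, 591.04, 339.31, 242.74

def depth_to_point_cloud(depth_m):
    """Convert an HxW array of depths in meters into an (H*W, 3)
    point cloud by back-projecting every pixel at once."""
    h, w = depth_m.shape
    x_d, y_d = np.meshgrid(np.arange(w), np.arange(h))
    x = (x_d - CX_D) * depth_m / FX_D
    y = (y_d - CY_D) * depth_m / FY_D
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

# A flat wall 2 m away yields one vertex per depth pixel:
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
print(cloud.shape)  # (307200, 3)
```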

\section{Color and Depth Mapping}
At this stage, the point cloud only contains vertices with no color. The next step is to add RGB values to each of those vertices. In order to do so, we must map each vertex to its corresponding pixel in the color image.

\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth]{figs3/calib.png}
\caption{Same checkerboard on both the depth image and the color image}
\label{fig:calib}
\end{figure}

As illustrated in figure \ref{fig:calib}, the color image and the depth image are taken simultaneously. We can choose the four corners of the checkerboard as feature points (marked with red arrows) to analyze the mapping relationship. Contrary to intuition, the mapping is non-linear: the displacement between the color camera and the depth camera implies a rigid transformation (a rotation and a translation) between the two views, followed by a perspective projection onto the color image plane. Here are the formulas that we used in our implementation \cite{kinectcalib}:

\begin{lstlisting}
P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb
\end{lstlisting}
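A sketch of this mapping follows. The extrinsics R and T and the RGB intrinsics below are placeholder assumptions (an identity rotation and a small horizontal baseline), not our calibrated values:

```python
import numpy as np

# Hypothetical extrinsics and RGB-camera intrinsics (assumptions;
# the real values come from per-device calibration).
R = np.eye(3)                    # rotation between the two cameras
T = np.array([0.025, 0.0, 0.0])  # ~2.5 cm horizontal baseline
FX_RGB, FY_RGB = 529.22, 525.56
CX_RGB, CY_RGB = 328.94, 267.48

def world_to_rgb_pixel(p3d):
    """Map a world point from the depth-camera frame to a pixel
    (u, v) in the color image, following the formulas above."""
    p = R @ np.asarray(p3d, dtype=float) + T
    u = p[0] * FX_RGB / p[2] + CX_RGB
    v = p[1] * FY_RGB / p[2] + CY_RGB
    return u, v

# A point on the depth camera's optical axis lands near the RGB
# principal point, shifted by the projected baseline:
print(world_to_rgb_pixel((0.0, 0.0, 2.0)))
```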

R and T represent the rotation matrix and the translation vector respectively, while fx\_rgb, fy\_rgb, cx\_rgb and cy\_rgb are intrinsic values associated with the Kinect device. Nicolas Burrus, a PhD student in computer vision, did significant work to derive those constants. We took the values that he proposed and ran a number of sample tests with different objects. The formulas work well, with only small deviations. Accordingly, we modified some of the values slightly to obtain a better fit for our own Kinect.

