\chapter{Combination Algorithm}
\label{chap:comb}
\section{Introduction}
Readers are assumed to understand basic mixture models and the EM algorithm in this appendix; see the relevant chapters of Bishop's text \cite{bishop2007}.

\subsection{The problem}
We would like to reconstruct 3D models of objects from images and depth maps captured from different viewpoints. The input data are shown in figure \ref{fig:input}. %These are actually denoised inputs of 9 raw camera frames.

These inputs are then converted to a point cloud. A point cloud $P$ is a set of points $P = \{p_i\}_{i=1}^M$, where $M$ is the number of points in the cloud and $p_i = (x, y, z, \textbf{a})$ contains the spatial coordinates of a point together with some attributes $\textbf{a}$ of that point, such as its color.

A point cloud corresponding to figure \ref{fig:input}, rendered with colors, is shown in figure \ref{fig:pconeface}.

We would like to build a full 3D reconstruction of an object from such point clouds taken from different viewpoints. The problem is then to combine these point clouds, each of which has considerable overlap with its neighbors. Each pair of adjacent point clouds is related by some unknown transformation, which we require to be a rotation followed by a translation. The goal is to find the rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and translation vector $t \in \mathbb{R}^3$.
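As a concrete illustration of the rigid model (a minimal sketch; the function name, the example rotation, and the translation values are ours, not from the data), the transformation $q' = Rq + t$ can be applied to a whole cloud as one matrix product, and rigidity means pairwise distances are preserved:

```python
import numpy as np

def apply_rigid(points, R, t):
    """Apply q' = R q + t to every row of an (N, 3) coordinate array."""
    return points @ R.T + t

# Illustrative transformation: rotate 30 degrees about z, then translate.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.5, 0.2])

Q = np.random.default_rng(0).normal(size=(5, 3))   # a toy point cloud
Qp = apply_rigid(Q, R, t)
```

Because $R$ is orthogonal with unit determinant, distances between points in `Qp` equal those in `Q`, which is exactly why only six degrees of freedom ($R$ and $t$) need to be recovered.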


\begin{figure}[!ht]
\centering
\includegraphics[scale=.5]{figs/depth.png}
\includegraphics[scale=.5]{figs/rgb.png}
\caption{The sensor input. \textbf{Top image}: depth map; whiter means farther away, and pure black means the value is unknown due to sensor limitations (the point lies in the shadow of the projected light; the Kinect sensor uses structured light for depth capture). \textbf{Bottom image}: the corresponding RGB map.}
\label{fig:input}
\end{figure}
\clearpage

\begin{figure}[!ht]
\centering
\includegraphics[scale=.43]{figs/pc1.png}
\includegraphics[scale=.43]{figs/pc2.png}
\caption{Point clouds with color, seen from two different viewpoints of the above capture. These are obtained from the data shown in figure \ref{fig:input} after alignment and a few other straightforward processing steps.}
\label{fig:pconeface}
\end{figure}
\clearpage



\subsection{The Methods}
We briefly outline our approaches here; the sections that follow give far more detail. The problem can be formulated probabilistically, although we realize the formulation is not the most natural one. Note also that colors and other attributes are completely ignored, so we deal only with shape. One approach is EM; the other is MCMC on a Bayesian model.

The basic idea is to treat each point in one point cloud as a mixture component of a Gaussian mixture model (GMM), with mean equal to that point's location and some variance $\sigma^2$. We then try to maximize the probability of this GMM generating an adjacent point cloud as data. Since each point cloud is rigid, the means and variances cannot be adjusted freely as in the regular EM method for GMMs. Instead, all component means are rotated and translated together by a single rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and translation vector $t \in \mathbb{R}^3$. To a first approximation, if we maximize the likelihood of generating the adjacent point cloud with respect to $R$ and $t$, the maximizing $R$ and $t$ can be treated as the solution. A vague prior can also be placed on $R$ and $t$: the object may be rotated in a somewhat regular way between captures, and we can incorporate this prior belief into the framework. Slightly more complexity arises from having to deal with systematic outliers, since adjacent point clouds are not identical but merely have considerable overlap (say 80\%). Note that outliers here include both measurement errors and the systematic outliers due to the non-overlapping parts.




\section{A probabilistic formulation}
\subsection{Basic formulation}
Given two point clouds $P = \{p_1, p_2, \ldots, p_M\}$ and $Q = \{q_1, q_2, \ldots, q_N\}$, each point is $p_i = (x_i, y_i, z_i, \textbf{a})$ or $q_i = (x'_i, y'_i, z'_i, \textbf{a})$, where the attributes $\textbf{a}$ are completely ignored in what follows except when rendering the final images (so the color information inside the attribute vector is \emph{not} used at all, even though color carries a lot of potential information). From now on we drop the attribute term and treat elements of a point cloud as 3D vectors.

We treat points in point cloud $P$ as data, and points in point cloud $Q$ as mixture components.

Let $Q' = \{q'_1, q'_2, \ldots, q'_N\}$, where $q'_i = Rq_i + t$. The task is now to find the rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and translation vector $t \in \mathbb{R}^3$ that maximize the likelihood of generating the data points $P$ under Gaussian mixture components with means $Q'$ and variance $\sigma^2$. For this task it is sensible to use the same variance in all directions and for all components (or at least the same prior on the variance for every component), and we also use equal mixing proportions. Think of it as a ``cloud'' of uniform thickness whose shape should match another cloud; the variance should not vary across components as it does in usual GMMs.
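To make the objective concrete, here is a minimal sketch of the log-likelihood being maximized, with equal mixing proportions $1/N$ and a shared isotropic variance (the function and variable names are ours, not from the cited papers):

```python
import numpy as np

def gmm_loglik(P, Q_prime, sigma):
    """Log-likelihood of data P (M, 3) under an isotropic GMM whose N
    components sit at the transformed points Q' (N, 3), all with shared
    variance sigma^2 and equal mixing proportions 1/N."""
    M, d = P.shape
    N = Q_prime.shape[0]
    # Squared distance from every data point to every component mean.
    d2 = ((P[:, None, :] - Q_prime[None, :, :]) ** 2).sum(axis=2)    # (M, N)
    log_comp = -d2 / (2 * sigma**2) - 0.5 * d * np.log(2 * np.pi * sigma**2)
    # log p(p_i) = logsumexp over components, minus log N for the mixing weights.
    return (np.logaddexp.reduce(log_comp, axis=1) - np.log(N)).sum()
```

Maximizing this quantity over $R$ and $t$, which enter only through $Q'$, is exactly the registration objective.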

The basic framework is presented in the coherent point drift paper \cite{nonrigid}, and an EM algorithm specific to this application is presented in \cite{3drecon}. The EM algorithm we use also performs annealing and accounts for outliers, neither of which appears in the second paper. We refer to these two papers rather than repeating the details of the EM approach, except for a few additions of ours that are not mentioned there.
 
\subsection{Our additions to EM}
GMMs are very sensitive to outliers, especially with small variance, so it is sensible to add a single component with large variance, or simply a uniform component, to absorb them.

Simulated annealing is also a sensible addition to the EM algorithm: $\sigma$ can be annealed so that the algorithm matches large features first and focuses on precision later. For the EM approach, we anneal $\sigma$ until 20\% of the data is explained by the outlier component and 80\% by the Gaussian components.


\section{The EM approach}
The EM approach can be summarized as follows. We refer readers to \cite{3drecon} and \cite{nonrigid} for more details.

Repeat until $\sigma < \sigma_{stop}$ or until 20\% of the data is explained by the uniform component:
\begin{itemize}
 \item{\textbf{E-step}: Evaluate the $M \times N$ responsibility matrix}
 \item{\textbf{M-step}: Use the responsibility matrix to evaluate $R$ and $t$; force $R$ to be a proper rotation (unit determinant) via singular value decomposition}
 \item{$\sigma = \sigma \times \alpha$}
\end{itemize}

Here $\alpha \approx 0.9$--$0.98$ is the annealing rate.
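The loop above can be sketched as follows. This is our reading of the procedure, not the implementation from the cited papers: it assumes a uniform outlier component of fixed weight $w$ over a box of volume $V$, and uses the closed-form weighted-Procrustes solution in the M-step; all parameter names and defaults are illustrative.

```python
import numpy as np

def em_register(P, Q, sigma=0.5, w=0.2, alpha=0.95, sigma_stop=1e-2, V=8.0):
    """Annealed EM sketch for rigid registration of data P (M, 3)
    against component means Q (N, 3)."""
    M, N = len(P), len(Q)
    R, t = np.eye(3), np.zeros(3)
    while sigma > sigma_stop:
        Qp = Q @ R.T + t
        # E-step: responsibility r[n, m] of component m for data point n,
        # competing against a uniform outlier component of density 1/V.
        d2 = ((P[:, None, :] - Qp[None, :, :]) ** 2).sum(axis=2)
        g = (1 - w) / N * np.exp(-d2 / (2 * sigma**2)) \
            / (2 * np.pi * sigma**2) ** 1.5
        r = g / (g.sum(axis=1, keepdims=True) + w / V)
        # M-step: weighted Procrustes. Weighted centroids first.
        s = r.sum()
        mu_p = r.sum(axis=1) @ P / s
        mu_q = r.sum(axis=0) @ Q / s
        A = (P - mu_p).T @ r @ (Q - mu_q)          # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(A)
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # force det(R) = +1
        R = U @ D @ Vt
        t = mu_p - R @ mu_q
        sigma *= alpha                              # anneal
    return R, t
```

The determinant correction in the M-step is the usual guard against the SVD returning a reflection rather than a rotation.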

\section{Some experiments}
\subsection{A 2D experiment first}
We first generate artificial data with a ground-truth $R$ and $t$ hidden from the algorithm. The algorithm was able to recover $R$ and $t$ to within 1\% error in component magnitudes: the true translation is $t = [1, 0.5, 0.2]$, and the recovered translation is $t = [1.00466532, 0.50431795, 0.20684235]$. Performance on other synthetic datasets is similar. Some samples are shown in figures \ref{fig:2dexample} and \ref{fig:2dexample2}, the latter with an entire ``arm'' of outliers.


\begin{figure}[h]
\centering
\includegraphics[scale=.4]{figs/cube1.png}
\includegraphics[scale=.4]{figs/cube2.png}
\includegraphics[scale=.4]{figs/cube3.png}
\caption{Matching cubic curves. Green points show the original mixture component locations, red points show the data locations, and blue points show the mixture component locations after alignment.}
\label{fig:2dexample}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[scale=.33]{figs/l0.png}
\includegraphics[scale=.33]{figs/l1.png}
\includegraphics[scale=.33]{figs/l2.png}
\includegraphics[scale=.33]{figs/l3.png}
\caption{Matching cubic curves, with an entire ``arm'' of outliers. Green points show the original mixture component locations, red points show the data locations, and blue points show the mixture component locations after alignment.}
\label{fig:2dexample2}
\end{figure}
\clearpage

\subsection{3D reconstruction}
We then sample points and apply the method to the 3D soysauce dataset. The combined result is shown in figure \ref{fig:soysauce}.

\begin{figure}[h]
\centering
\includegraphics[scale=.25]{figs/wholeobj.png}
\includegraphics[scale=.25]{figs/topview.png}
\includegraphics[scale=.25]{figs/meshed.png}
\caption{Matching real point clouds. Left: the combined point cloud, with many visible outliers. Middle: the top view. Right: the result after meshing the point cloud.}
\label{fig:soysauce}
\end{figure}

\section{The Bayesian approach}
Readers are assumed to have some background in Bayesian inference and MCMC methods to understand this section.

The application here may not be the most natural one for MCMC methods, since a point estimate is required in the end, and it is sensible for that estimate to be the mode rather than the posterior average. Simple MAP therefore seems somewhat more natural, but we would like to deal with outliers in a systematic way and use prior information effectively, and these complications make the model intractable, so we use MCMC. We are also interested in comparing the performance of MCMC and EM on this task: the EM algorithm presented in \cite{3drecon} did not work well at all, while our customized EM method with annealing seems rather inefficient.

\subsection{The model}
Recall we have two point clouds, $P = \{p_1, p_2, \ldots, p_M\}$ and $Q = \{q_1, q_2, \ldots, q_N\}$, and set $Q' = \{q'_1, q'_2, \ldots, q'_N\}$ where $q'_i = Rq_i + t$. We treat $P$ as the data point cloud and $Q$ as the mixture component point cloud. The mixing proportions of the Gaussian components are constants: they sum to a total of $\Pi = (m + a)/(M + a + b)$, with $\Pi/N$ each, where $m = \sum_i o_i$ and $a, b$ are the constants of the beta prior on the outlier indicators. Once a point is labeled an outlier, it comes from a uniform component with mixing proportion $1 - \Pi$. Alternatively, we may use a second GMM with larger variance to allow softer assignments.

So, the simpler model (model A) is as follows:
\begin{align*}
 p_i   &\sim \left\{
     \begin{array}{lr}
       \mathtt{GMM}(Q', \sigma_0^2) & : o_i = 0\\
       \mathtt{Uniform}(-l, l) & : o_i = 1
     \end{array}
   \right. \\
o_i \mid \theta &\sim \mathtt{Bernoulli}(\theta)\\
\theta &\sim \mathtt{Beta}(a = 4, b = 16) \\
\log(\sigma_0)  &\sim \mathcal{N}(-4, 0.1)\\
R  &\sim \mathtt{Uniform}(\text{rotation matrices})\\
t  &\sim \mathtt{Uniform}(-l, l).
\end{align*}
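Model A's unnormalized log posterior can be sketched as follows, with $\theta$ integrated out analytically (the Bernoulli--Beta pair yields a Beta-Binomial term over the outlier count $m$). This is our reading of the model; the helper names, argument layout, and default $l = 1$ are ours.

```python
import numpy as np
from math import lgamma, log

def log_beta_fn(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_post_A(P, Q, R, t, log_s0, o, a=4.0, b=16.0, l=1.0):
    """Unnormalized log posterior of model A with theta integrated out.
    o holds the 0/1 outlier indicators for the M data points."""
    s0 = np.exp(log_s0)
    Qp = Q @ R.T + t
    N, M = len(Q), len(P)
    lp = 0.0
    inl = P[o == 0]                       # points currently labeled inlier
    if len(inl):
        d2 = ((inl[:, None, :] - Qp[None, :, :]) ** 2).sum(axis=2)
        comp = -d2 / (2 * s0**2) - 1.5 * np.log(2 * np.pi * s0**2)
        lp += (np.logaddexp.reduce(comp, axis=1) - np.log(N)).sum()
    m = int(o.sum())                      # outliers: uniform on [-l, l]^3
    lp += -3.0 * m * log(2 * l)
    # Indicators with theta ~ Beta(a, b) integrated out (Beta-Binomial).
    lp += log_beta_fn(a + m, b + M - m) - log_beta_fn(a, b)
    # log N(log_s0; -4, 0.1^2); uniform priors on R and t are constants.
    lp += -0.5 * ((log_s0 + 4.0) / 0.1) ** 2 - log(0.1) - 0.5 * log(2 * np.pi)
    return lp
```

Every MCMC update below only needs this quantity up to a constant, so the normalizing terms dropped here never matter.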

The slightly more complicated version is:

\begin{align*}
 p_i   &\sim \left\{
     \begin{array}{lr}
       \mathtt{GMM}(Q', \sigma_0^2) & : o_i = 0\\
       \mathtt{GMM}(Q', \sigma_1^2) & : o_i = 1
     \end{array}
   \right. \\
o_i \mid \theta &\sim \mathtt{Bernoulli}(\theta)\\
\theta &\sim \mathtt{Beta}(a = 4, b = 16) \\
\log(\sigma_0)  &\sim \mathcal{N}(-4, 0.1)\\
\log(\sigma_1)  &\sim \mathcal{N}(-2, 1)\\
R  &\sim \mathcal{N}(R_0, \sigma_R^2) \text{ restricted to rotation matrices}\\
t  &\sim \mathcal{N}(t_0, \sigma_t^2).
\end{align*}

Problem-domain knowledge is built into the prior. The objects we are interested in have a radius of roughly a decimeter, hence $\mathbb{E}[\log(\sigma_1)] = -2$. The features that should be matched are roughly centimeter-scale or smaller, so $\mathbb{E}[\log(\sigma_0)] = -4$ with small variance. These values reflect our beliefs, but specifying the prior means of $\sigma_0, \sigma_1$ in the log domain is much better than arbitrarily specifying $\sigma_0, \sigma_1$ themselves. We write $\mathtt{Uniform}(-l, l)$ for a uniform distribution ranging from $-l$ to $l$ in every dimension.

In the simple case, $R$ and $t$ can be assumed to have uniform distributions over their domains. Since $R$ is a rotation matrix, it has only 3 degrees of freedom, rather than the 9 of a general $3 \times 3$ matrix. We first take a random-walk step to get $R' = R + E$, compute the SVD $R' = UCV^\top$, and take the new $R$ to be $UV^\top$. In the actual application we may have a rather good idea of what $R$ and $t$ should be without knowing them exactly, and this can be incorporated into their priors. This is another attraction of the Bayesian approach, as such information is important and cannot be naturally incorporated into the EM algorithm.
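The projection step just described can be sketched as follows. Names and the step size are illustrative; the determinant check is our addition, since $UV^\top$ can come out as a reflection when the perturbation is large.

```python
import numpy as np

def propose_rotation(R, step, rng):
    """Random-walk proposal on rotations: perturb entrywise, then project
    back onto the rotation group via the SVD R' = U C V^T -> U V^T."""
    R_pert = R + step * rng.normal(size=(3, 3))
    U, _, Vt = np.linalg.svd(R_pert)
    R_new = U @ Vt
    if np.linalg.det(R_new) < 0:           # guard against a reflection
        R_new = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R_new
```

The result is the closest rotation matrix to the perturbed matrix in Frobenius norm, so small perturbations give small rotations.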

\subsection{The updates}
The state space thus consists of $R, t, \sigma_0, \sigma_1, o_1, \ldots, o_M$, with $\theta$ integrated out.

We use Metropolis--Hastings updates for this task. Specifically, we use simple Metropolis updates with normally distributed steps for $R$, $t$, $\sigma_0$, and $\sigma_1$. The outlier indicator variables, on the other hand, could benefit from some heuristics: since the so-called outliers here are systematic, it is conceivable to propose according to neighboring indicators as well as fit. We start by simply proposing the opposite value and leave the more elaborate proposal for later; for instance, the probability of proposing that a data point is an outlier could be the fraction of its $r$ nearest neighbors that are currently outliers.
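The generic Metropolis update used for the continuous coordinates can be sketched as follows; `log_post` stands for the unnormalized log posterior as a function of the single coordinate being updated, with everything else held fixed (a sketch, not our exact implementation).

```python
import numpy as np

def metropolis_step(x, log_post, step, rng):
    """One Metropolis update of a scalar coordinate x with a symmetric
    normal proposal; accepts with probability min(1, pi(x')/pi(x))."""
    x_prop = x + step * rng.normal()
    if np.log(rng.uniform()) < log_post(x_prop) - log_post(x):
        return x_prop
    return x
```

Because the proposal is symmetric, the Hastings correction cancels and only the log-posterior difference appears in the acceptance test.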

Because this is a mixture model, there is also significant potential for computational savings when updating the outlier indicators: only the change in log-likelihood of element $i$ needs to be evaluated, since the terms for all other elements remain unchanged. The contributions of the inlier and outlier components to each point's density then need to be cached to obtain this saving.
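The caching idea can be sketched as follows (names are ours): the per-point log densities under the GMM and under the uniform component are precomputed once per $(R, t, \sigma)$ update, so flipping one indicator costs a single lookup. The Beta-Binomial term over $m$ changes too, but that is a cheap scalar update.

```python
import numpy as np

def delta_loglik_flip(i, o, log_gmm, log_unif):
    """Change in the data log-likelihood from flipping indicator o[i],
    given cached per-point log densities log_gmm and log_unif."""
    if o[i] == 0:                       # inlier -> outlier
        return log_unif[i] - log_gmm[i]
    return log_gmm[i] - log_unif[i]     # outlier -> inlier
```

This turns each indicator proposal from an $O(MN)$ likelihood evaluation into $O(1)$ work.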

In this project we use only the simple model above, proposing opposite values for the outlier indicators.
% ------------------------------------------------------------------------

%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "../thesis"
%%% End: 
