\documentclass[letter,12pt]{article}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{graphicx} % support the \includegraphics command and options
\usepackage[hmargin=3.5cm,vmargin=4cm]{geometry}
\usepackage{listings}
\pagestyle{myheadings}
%\markright{Sida Wang 995324037\hfill CSC2541 Project\hfill}
\begin{document}

\lstset{language=Python}


\title{A Bayesian approach to shape matching}
\author{Sida Wang (995324037)}
\date{April 20, 2011} % Activate to display a given date or no date (if empty),
         % otherwise the current date is printed 

\maketitle



\section{Introduction}
\subsection{The problem}
I would like to reconstruct 3D models of objects from images and depth maps of the object from different viewpoints. The input data is shown in figure \ref{fig:input}. %These are actually denoised inputs of 9 raw camera frames.

These inputs are then converted to a pointcloud. A pointcloud $P$ is a set of points $P = \{p_i\}_{i=1}^M$, where $M$ is the number of points in the cloud and $p_i = (x_i, y_i, z_i, \textbf{a}_i)$ contains the spatial coordinates of a point together with some attributes $\textbf{a}_i$ of that point, such as its color and other properties.

A pointcloud with colors, corresponding to figure \ref{fig:input}, is shown in figure \ref{fig:pconeface}.

I would like to build a full 3D reconstruction of an object from such pointclouds taken from different viewpoints. The problem is then to combine these pointclouds, where each pointcloud has considerable overlap with its neighbors. However, each pair of adjacent pointclouds is related by some unknown transformation, which we require to be a rotation plus a translation. The goal is to find the rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and translation vector $t \in \mathbb{R}^3$.
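As a small sketch of this setup (hypothetical NumPy code of mine, not from the actual pipeline), a pointcloud can be stored as an $M \times 3$ array and a candidate rigid transform applied to every point at once:

```python
import numpy as np

def apply_rigid_transform(points, R, t):
    """Apply the rigid transform p -> R p + t to an (M, 3) array of points."""
    return points @ R.T + t

# Example: rotate 90 degrees about the z-axis, then translate.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.5, 0.2])
P = np.array([[1.0, 0.0, 0.0]])
print(apply_rigid_transform(P, R, t))  # R p + t = [1, 1.5, 0.2]
```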


\begin{figure}[!ht]
\centering
\includegraphics[scale=.5]{figs/depth.png}
\includegraphics[scale=.5]{figs/rgb.png}
\caption{The sensor input. \textbf{Top image}: depth map; whiter means farther away, and pure black means the value is unknown due to sensor limitations (the shadow of the projected light used for depth detection; the Kinect sensor uses the structured light method for depth capture). \textbf{Bottom image}: the corresponding RGB map}
\label{fig:input}
\end{figure}
\clearpage

\begin{figure}[!ht]
\centering
\includegraphics[scale=.43]{figs/pc1.png}
\includegraphics[scale=.43]{figs/pc2.png}
\caption{Pointclouds with color, seen from two different viewpoints of the above capture. These are obtained from the data shown in figure \ref{fig:input} after some alignment and other processing steps; the process is straightforward but takes some work.}
\label{fig:pconeface}
\end{figure}
\clearpage



\subsection{The approaches}
I quickly outline my approaches here, with more details in the sections that follow. The problem can be formulated somewhat probabilistically, although I realize the formulation is not the most natural one. Note also that colors and other attributes are completely ignored, so I deal only with shapes here. One approach is EM; the other is MCMC on a Bayesian model.

Basically, I treat each point in one pointcloud as a mixture component in a Gaussian mixture model (GMM), with mean equal to the location of that point and some variance $\sigma^2$. I then try to maximize the probability of this GMM generating an adjacent pointcloud as data. Notice that each pointcloud is rigid, so the means and variances cannot be changed freely as in the regular EM method for GMMs. Instead, I rotate and translate the means of all mixture components by a single rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and translation vector $t \in \mathbb{R}^3$. To a first approximation, the $R$ and $t$ that maximize the likelihood of generating an adjacent pointcloud can be treated as the solution. Similarly, a vague prior can be placed on $R$ and $t$, since the object might be rotated in a somewhat regular way, and we may incorporate this prior belief into the framework. Slightly more complexity arises from having to deal with systematic outliers, since adjacent pointclouds are not identical but merely have considerable overlap (say 80\%). Note that outliers here include both measurement errors and the systematic outliers due to the non-overlapping parts.

I present two methods, focusing on the latter, and compare the results. The first is an EM algorithm with annealed variance, using Gaussian mixture components plus one uniform component to account for outliers; it stops when 80\% of the data is explained by the Gaussian mixture and 20\% by the outlier component. The second is MCMC with an outlier indicator on each point, where $R$ is sampled using the Metropolis algorithm. The state variables are $R$, $t$, the outlier indicators $\textbf{o} = \{o_1, o_2, \ldots, o_M\}$, and optionally $\sigma$.


\section{A probabilistic formulation}
\subsection{Basic formulation}
We are given two pointclouds $P$ and $Q$, where $P = \{p_1, p_2, \ldots, p_M\}$ and $Q = \{q_1, q_2, \ldots, q_N\}$. Each point, $p_i = (x_i, y_i, z_i, \textbf{a})$ or $q_i = (x'_i, y'_i, z'_i, \textbf{a})$, carries attributes $\textbf{a}$ that are completely ignored in what follows, except when rendering the final images (so the color information inside the attribute vector is NOT used at all, although color does contain a lot of potential information). From now on, we drop the attribute term and treat elements of the pointclouds as 3D vectors.

I treat points in pointcloud $P$ as data, and points in pointcloud $Q$ as mixture components.

Let $Q' = \{q'_1, q'_2, \ldots, q'_N\}$ where $q'_i = Rq_i + t$. The task is now to find the rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and translation vector $t \in \mathbb{R}^3$ that maximize the likelihood of generating the data points $P$ under Gaussian mixture components with means $Q'$ and variance $\sigma^2$. For this task it is sensible to use the same variance in all directions and for all components, or at least the same prior on the variance of every component. I also use equal mixing proportions. Think of it as a ``cloud'' of uniform thickness whose shape should match another cloud; the variance should not vary significantly as in usual GMMs.
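The objective just described can be sketched as follows (a hypothetical NumPy illustration of mine: equal mixing proportions $1/N$, shared isotropic variance $\sigma^2$, function name chosen for illustration):

```python
import numpy as np

def gmm_log_likelihood(P, Qp, sigma):
    """Log-likelihood of data points P (M, 3) under an equal-weight GMM
    whose components are centred at Qp (N, 3) with isotropic variance sigma^2."""
    N = len(Qp)
    # Squared distance between every data point and every component mean.
    d2 = ((P[:, None, :] - Qp[None, :, :]) ** 2).sum(-1)      # (M, N)
    log_norm = -1.5 * np.log(2 * np.pi * sigma ** 2)          # 3-d Gaussian
    comp = log_norm - d2 / (2 * sigma ** 2) - np.log(N)       # per-component
    # log-sum-exp over components, then sum over the data points.
    m = comp.max(axis=1, keepdims=True)
    return (m[:, 0] + np.log(np.exp(comp - m).sum(axis=1))).sum()
```

Registration then amounts to maximizing `gmm_log_likelihood(P, Q @ R.T + t, sigma)` over $R$ and $t$.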

The basic framework is presented in the coherent point drift paper \cite{nonrigid}, and an EM algorithm specific to this application is presented in \cite{3drecon}. The EM algorithm I use also does annealing and accounts for outliers, which are not in the second paper. I refer to these two papers rather than provide excessive detail on the EM approach, which is not the focus of this project anyway, except for a few additions of mine that are not mentioned in the papers.
 
\subsection{My additions to EM}
GMMs are very sensitive to outliers, especially with small variance, so it is sensible to add a single component with large variance, or just a uniform component, to absorb the outliers.

Simulated annealing is also a sensible addition to the EM algorithm: $\sigma$ can be annealed so that big features are matched first and precision becomes the focus later on. For the EM approach, I anneal $\sigma$ until 20\% of the data is explained by the outlier component and 80\% by the Gaussian components.


\section{The EM approach}
The EM approach can be summarized as follows. As EM is not the focus of this project, not much detail is given; I instead refer readers to \cite{3drecon} and \cite{nonrigid}.

Repeat until $\sigma < \sigma_{\mathrm{stop}}$ or until 20\% of the data is explained by the uniform component:
\begin{itemize}
 \item{\textbf{E-step}: Evaluate the $M \times N$ responsibility matrix}
 \item{\textbf{M-step}: Use the responsibility matrix to evaluate $R$ and $t$. Force $R$ to be a rotation (unit determinant) via singular value decomposition}
 \item{$\sigma = \sigma \times \alpha$}
\end{itemize}

where $\alpha \in [0.9, 0.98]$ is the annealing rate.
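The loop above can be sketched in NumPy as follows. This is a hypothetical reimplementation of mine for illustration, not the code behind the experiments: the outlier weight \lstinline{w_out}, the uniform support $[-l,l]^3$, and all names are my choices, and only the $\sigma < \sigma_{\mathrm{stop}}$ stopping rule is implemented (not the 20\%-outlier rule). The M-step solves a weighted Procrustes problem in closed form, forcing unit determinant via the SVD:

```python
import numpy as np

def em_register(P, Q, sigma=1.0, alpha=0.95, sigma_stop=1e-3,
                w_out=0.2, n_iter=200, l=10.0):
    """Annealed-EM sketch: fit R, t so the GMM centred at R Q + t explains P.
    w_out is the mixing proportion of the uniform outlier component on [-l, l]^3."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        if sigma < sigma_stop:
            break
        # E-step: responsibilities of the Gaussian components for each data point.
        Qp = Q @ R.T + t
        d2 = ((P[:, None, :] - Qp[None, :, :]) ** 2).sum(-1)    # (M, N)
        g = np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** 1.5
        g *= (1 - w_out) / len(Q)
        u = w_out / (2 * l) ** 3                                # uniform density
        resp = g / (g.sum(axis=1, keepdims=True) + u)
        # M-step: weighted Procrustes (Kabsch) solution for R and t.
        w = resp.sum()
        mu_p = (resp.sum(axis=1) @ P) / w
        mu_q = (resp.sum(axis=0) @ Q) / w
        H = (Q - mu_q).T @ resp.T @ (P - mu_p)                  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))]) # no reflections
        R = Vt.T @ D @ U.T
        t = mu_p - R @ mu_q
        sigma *= alpha                                          # anneal the variance
    return R, t
```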

\section{The MCMC approach}
The application here may not be the most natural one for MCMC methods, since a point estimate is required in the end, and it is sensible for this estimate to be the mode instead of the posterior average. So a simple MAP estimate actually seems somewhat more natural, but I would like to deal with outliers in a systematic way and use prior information effectively. These complications make the model intractable, so I use MCMC. I am also interested in comparing the performance of MCMC and EM on this task. The EM algorithm presented in \cite{3drecon} did not work well at all, while my customized EM method with annealing seems rather inefficient.

\subsection{The model}
Recall we have two pointclouds $P$ and $Q$, where $P = \{p_1, p_2, \ldots, p_M\}$ and $Q = \{q_1, q_2, \ldots, q_N\}$. Set $Q' = \{q'_1, q'_2, \ldots, q'_N\}$ where $q'_i = Rq_i + t$. We treat $P$ as the data pointcloud and $Q$ as the mixture-component pointcloud. The mixing proportions of the Gaussian components are constant, summing to a total of $\Pi = (m + a)/(M + a + b)$ with $\Pi/N$ each, where $m = \sum_i o_i$ and $a, b$ are the constants of the beta prior on the outlier indicators. Once a point is labeled an outlier, it comes from a uniform component with mixing proportion $1 - \Pi$. Alternatively, we may use a GMM with larger variance to allow for softer assignments.

So, the simpler model (model A) is as follows:
\begin{align*}
 p_i \mid o_i   &\sim \begin{cases}
       \mathtt{GMM}(Q', \sigma_0^2) & o_i = 0\\
       \mathtt{Uniform}(-l, l) & o_i = 1
     \end{cases}\\
o_i \mid \theta &\sim \mathtt{Bernoulli}(\theta)\\
\theta &\sim \mathtt{Beta}(a = 4, b = 16) \\
\log(\sigma_0)  &\sim \mathcal{N}(-4, 0.1)\\
R  &\sim \mathtt{Uniform}(\text{rotation matrices})\\
t  &\sim \mathtt{Uniform}(-l,l).
\end{align*}

The slightly more complicated version is:

\begin{align*}
 p_i \mid o_i   &\sim \begin{cases}
       \mathtt{GMM}(Q', \sigma_0^2) & o_i = 0\\
       \mathtt{GMM}(Q', \sigma_1^2) & o_i = 1
     \end{cases}\\
o_i \mid \theta &\sim \mathtt{Bernoulli}(\theta)\\
\theta &\sim \mathtt{Beta}(a = 4, b = 16) \\
\log(\sigma_0)  &\sim \mathcal{N}(-4, 0.1)\\
\log(\sigma_1)  &\sim \mathcal{N}(-2, 1)\\
R  &\sim \mathcal{N}(R_0, \sigma_R^2), \text{ restricted to rotation matrices}\\
t  &\sim \mathcal{N}(t_0, \sigma_t^2).
\end{align*}

Problem-domain knowledge is built into the prior. In this case, the object we are interested in has a radius of roughly decimeters, so $\mathbb{E}[\log(\sigma_1)] = -2$. The features that should be matched are roughly centimeters, or even sub-centimeter, in size, so $\mathbb{E}[\log(\sigma_0)] = -4$ with small variance. These values come from my beliefs, but specifying the prior means of $\sigma_0, \sigma_1$ in the log domain is much better than arbitrarily specifying $\sigma_0, \sigma_1$ themselves. I use $\mathtt{Uniform}(-l, l)$ to mean a uniform distribution with range $-l$ to $l$ in all dimensions.

In the simple case, $R$ and $t$ can be assumed to have uniform distributions over their domains. Since $R$ is a rotation matrix, it really has only 3 degrees of freedom, rather than the 9 entries of the matrix. I first do a random walk to get $R' = R + e$, then take the SVD $R' = UCV^\top$, and finally set the new $R$ to $UV^\top$. For the actual application, we might have a rather good idea of what $R$ and $t$ are, even though we do not know them exactly, and this can be incorporated into their priors. This is another attraction of the Bayesian approach, as such information is rather important and cannot be naturally incorporated into the EM algorithm.
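The random-walk-then-project proposal for $R$ can be sketched as follows (a hypothetical NumPy version of mine; the step size and function name are illustrative choices):

```python
import numpy as np

def propose_rotation(R, step=0.05, rng=None):
    """Random-walk proposal for a rotation matrix: perturb R entrywise by
    Gaussian noise e, then project back onto the rotation matrices via SVD
    (keep U V^T, discard the singular values)."""
    rng = rng or np.random.default_rng()
    Rp = R + step * rng.normal(size=(3, 3))   # R' = R + e
    U, _, Vt = np.linalg.svd(Rp)
    Rnew = U @ Vt
    if np.linalg.det(Rnew) < 0:               # guard against reflections
        U[:, -1] *= -1
        Rnew = U @ Vt
    return Rnew
```

Since the proposal is symmetric (the noise $e$ has mean zero), it can be used directly in a plain Metropolis acceptance step.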

\subsection{The updates}
So the state space consists of $R, t, \sigma_0, \sigma_1, o_1, \ldots, o_M$, with $\theta$ integrated out.

I use Metropolis-Hastings updates for this task. Specifically, I use plain Metropolis updates for $R$, $t$, $\sigma_0$, and $\sigma_1$ with normally distributed steps. The outlier indicator variables, on the other hand, could benefit from some heuristics. Since the so-called outliers here are actually systematic, it is conceivable to propose according to neighboring components as well as fit: the probability of proposing a data point to be an outlier can be the fraction of its $r$ nearest neighbors that are also outliers. I start by just proposing the opposite value and leave the more complicated proposal method for later.

Because this is a mixture model, there is also significant potential for computational savings when updating the outlier indicators: only the change in log-likelihood of point $i$ needs to be evaluated, without looking at all the others, which remain unchanged. The sums of the mixing proportions of the inlier and outlier components then need to be cached to realize this saving.
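The single-point update can be sketched as follows (hypothetical NumPy code of mine: the pointclouds are arrays, the uniform density is $(2l)^{-3}$, and the change in the integrated-out Bernoulli prior term is omitted here and would be added to the acceptance ratio separately):

```python
import numpy as np

def delta_loglik_flip(i, o, P, Qp, sigma, l):
    """Change in log-likelihood from flipping outlier indicator o[i].
    Only point i's term changes, so the other M-1 terms are never touched."""
    d2 = ((P[i] - Qp) ** 2).sum(-1)                               # (N,)
    comp = (-1.5 * np.log(2 * np.pi * sigma ** 2)
            - d2 / (2 * sigma ** 2) - np.log(len(Qp)))
    m = comp.max()
    log_gmm = m + np.log(np.exp(comp - m).sum())   # inlier log-density
    log_unif = -3 * np.log(2 * l)                  # uniform log-density
    # Flipping inlier -> outlier gains log_unif - log_gmm, and vice versa.
    return (log_unif - log_gmm) if o[i] == 0 else (log_gmm - log_unif)
```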

In this project I just use the simple model above and propose opposite values for the outlier indicators.

\subsection{A 2D experiment first}
I first generate artificial data transformed by an $R$ and $t$ that are withheld from the algorithm. The algorithm was able to recover $R$ and $t$ to within 1\% error in component magnitudes. The true translation is $t = [1, 0.5, 0.2]$, and the recovered translation is $t = [1.00466532, 0.50431795, 0.20684235]$. Performance on other synthetic datasets is similar. Some samples are shown in figures \ref{fig:2dexample} and \ref{fig:2dexample2}, the latter with an entire ``arm'' of outliers. Other experiments yielded similar results.


\begin{figure}[h]
\centering
\includegraphics[scale=.4]{figs/cube1.png}
\includegraphics[scale=.4]{figs/cube2.png}
\includegraphics[scale=.4]{figs/cube3.png}
\caption{Matching cubic curves. Green points show the original mixture component locations, red points show the data locations, and blue points show the new mixture component locations}
\label{fig:2dexample}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[scale=.33]{figs/l0.png}
\includegraphics[scale=.33]{figs/l1.png}
\includegraphics[scale=.33]{figs/l2.png}
\includegraphics[scale=.33]{figs/l3.png}
\caption{Matching cubic curves with an entire ``arm'' of outliers. Green points show the original mixture component locations, red points show the data locations, and blue points show the new mixture component locations}
\label{fig:2dexample2}
\end{figure}
\clearpage

\subsection{3D reconstruction}
Then I sample points and apply this method to the 3D dataset that I collected. The combined result is shown in figure \ref{fig:soysauce}.

\begin{figure}[h]
\centering
\includegraphics[scale=.25]{figs/wholeobj.png}
\includegraphics[scale=.25]{figs/topview.png}
\includegraphics[scale=.25]{figs/meshed.png}
\caption{Matching real pointclouds. Left: the combined pointcloud, with many visible outliers. Middle: the top view. Right: after meshing the pointclouds}
\label{fig:soysauce}
\end{figure}

\section{Discussions}
Performance is not great with the simplest MCMC, since each iteration costs $O(M^2 N)$ if the outlier indicators are updated every iteration. Some computational savings are possible, both by caching results and by updating the outlier indicators less often, reducing the complexity to $O(MN)$, the same as EM, although the constants are larger.

The MCMC algorithm works on model A, and it is in many ways more satisfying than EM with simulated annealing (plain EM did not work). Instead of arbitrarily specifying the annealing rate, the stopping criterion, and how much of the data should be outliers, as in the EM method, I can just give vague priors on these and MCMC will figure them out for me. The modes are rather sharp under this model, making the posterior mean essentially the same as the posterior mode.

However, there is a performance penalty compared to EM with reasonable annealing rates. The end results are also comparable to EM with annealing: both work, but not perfectly.

\begin{figure}[h]
\centering
\includegraphics[scale=.7]{figs/mcmc.png}
\caption{Trace plot of $\sigma_0$. Although the initial value is quite bad, MCMC was able to reach a good mode and stay there. The modes are rather sharp under this model.}
\label{fig:sigma}
\end{figure}


\section{Conclusions}
I applied MCMC inference on model A to the task of shape matching for 3D reconstruction, and compared it to an EM algorithm of mine. Model A is more satisfying than the rather arbitrary EM algorithm, and achieves performance similar to the hand-tuned EM algorithm while being slower.

\bibliographystyle{plain}
\bibliography{sm}

\end{document}
