\documentclass[a4paper]{article}
\usepackage[margin=3cm]{geometry}

\usepackage[T1]{fontenc}
\usepackage[math]{iwona}
\usepackage{graphicx}
\usepackage{amssymb}

\setlength{\parskip}{1.3ex plus 0.2ex minus 0.2ex}
\setlength{\parindent}{0pt}

\renewcommand{\labelitemi}{$\circ$}

\newcommand{\HRule}{\rule{\linewidth}{0.3mm}}

\begin{document}

\input{./titleProposal.tex}

\section{Motivation}
3D models have a vast range of applications, from interior design and crime
scene reconstruction to architecture and computer graphics for film and
animation. However, 3D models of any kind of environment are expensive to
create from scratch, and the process takes too much time to be used effectively
in any sort of batch process. 2D imagery, on the other hand, is easy to create
and quick to come by. Using multiple 2D views of an environment to create 3D
models thus avoids the difficulties posed by modeling these environments
directly in 3D.

In the past, much research has been done on generating 3D imagery from 2D data.
All this research, however, was based on classical techniques such as linear
algebra, in which the basic elements of computation are real numbers. When
dealing with an environment, plain numbers are not the most natural way to
represent the data. A more natural way would be to represent the environment
directly using geometric objects, such as planes, edges, lines and perhaps even
spheres and cylinders. \emph{Geometric algebra} provides for this.
In geometric algebra, geometric objects themselves are the basic elements of
computation. Because of this, non-trivial geometric relationships can be
specified with relatively simple equations. For example, in geometric algebra,
given a plane $P$ and a point $p$, the distance between these two elements
is specified as
\[ P \cdot p \]
and can be computed directly\footnote{I will not go into the mathematical
  details of the algebra here; these will be explained in the thesis.}. Fitting
a plane to points in geometric algebra reduces to a relatively simple least
squares problem, as the plane and points involved are direct elements of
computation.
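As an illustration of the distance formula above, in the conformal model
(following the conventions of Dorst et al.), a Euclidean point $\mathbf{x}$ and
a plane with unit normal $\mathbf{n}$ at offset $\delta$ from the origin are
represented as
\[
p = e_0 + \mathbf{x} + \tfrac{1}{2}\mathbf{x}^2 e_\infty,
\qquad
P = \mathbf{n} + \delta\, e_\infty,
\]
and, using $e_\infty \cdot e_0 = -1$ and
$e_\infty \cdot e_\infty = e_0 \cdot e_0 = 0$,
\[
P \cdot p = \mathbf{n} \cdot \mathbf{x} - \delta,
\]
which is exactly the signed Euclidean distance from $\mathbf{x}$ to the plane.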

An implementation of room model fitting using geometric algebra promises to be
easily extensible to other, more complex shapes, including spherical objects
and edges. Using regular forms of (linear) algebra makes this process difficult
and far from general.
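To make the least-squares plane fit mentioned above concrete, here is a sketch
in classical coordinates (a plain SVD formulation, not yet the
geometric-algebra implementation; the code and parameters are illustrative
only): minimizing $\sum_i (\mathbf{n} \cdot \mathbf{x}_i - \delta)^2$ over unit
normals amounts to taking the singular vector of the centered data with the
smallest singular value.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: minimize sum((n . x_i - delta)^2) over unit n.

    Returns (n, delta) with n a unit normal and delta the offset, so the
    fitted plane is {x : n . x = delta}.
    """
    centroid = points.mean(axis=0)
    # The optimal normal is the right singular vector of the centered data
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    delta = n @ centroid
    return n, delta

# Synthetic check: noisy samples from the plane z = 1 (normal (0,0,1), offset 1).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
pts = np.column_stack([xy, 1 + 0.01 * rng.standard_normal(200)])
n, delta = fit_plane(pts)
```

The normal is recovered up to sign, which is why a downstream consumer should
compare $|\mathbf{n} \cdot \mathbf{x} - \delta|$ rather than the raw signed
value.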

\section{Research Question}
When creating a 2D representation of a 3D world, one dimension is lost in the
process. This lost dimension can be recovered by combining multiple 2D views of
the 3D world. The 3D point clouds generated in this way from photographs of
rooms can then be fitted to a 3D model of the captured room.

Geometric algebra is an algebra in which geometric objects are the basic
elements of computation. In comparison, in regular algebra, real numbers are the
basic elements of computation and representing geometric objects is cumbersome.

How can one harness the power of geometric algebra to generate 3D room
models from noisy reconstructed 3D point clouds, gathered using multiple view
geometry from 2D imagery? And what are the advantages of using geometric
algebra instead of classical techniques?

\section{Expected result}
The first milestone of this project will be an implementation of a room fitter,
which takes a 3D point cloud and outputs a room fitted to that cloud. With
this working, the next step would be to generate the point clouds myself
using multiple view geometry. When these two subgoals have been reached,
adding furniture to the rooms and modeling it as well would be a significant
addition, useful in interior modeling. In the earlier steps, these
objects are treated as `noise', and it would be interesting to add
them to the end result.

Depending on the speed at which the project comes along, any of these milestones
could be the end result of the project. A demo of whichever milestone has been
reached will be part of the presentation.\footnote{A live demo will probably be too involved, but if the final
  implementation is stable enough, it would of course be nice to generate a 3D
  model of the presentation room on the fly.}

\section{Planning}

\subsection{Week 15}
  \begin{itemize}
    \item Contact Marcel \& Dani\"el about starting up the project (Dani\"el has
    been busy until the 12th of April)
    \item Talk with Dani\"el about getting the dataset of point clouds
    \item Create a framework for the report, which can then be filled in over
    the course of the project
  \end{itemize}

\subsection{Week 16 - 18}
For part of weeks 16 and 17 I will be in Berlin on a study trip, so I can fit
two actual weeks of work into these three weeks.

First week:
  \begin{itemize}
    \item Do literature research on how noise can be filtered from the point
    cloud
    \item Make a first attempt at filtering noise
    \item Show this first attempt at filtering to Dani\"el and get feedback
    \item Document results
  \end{itemize}
Second week:
  \begin{itemize}
    \item Improve noise filtering
    \item Do literature research on how to assign different data points to
    different planes, discuss this with Dani\"el
    \item Update report accordingly
  \end{itemize}
In week 18 specifically, contact Marcel about the report to get feedback on the
language use and the structure of the report so far.

\subsection{Week 18}
  \begin{itemize}
    \item Finish up noise filtering, finish report section on noise
    \item Discuss results with Dani\"el and Marcel, show report to Marcel for
    consideration
  \end{itemize}

\subsection{Week 19}
  \begin{itemize}
    \item Work out how a plane can be fitted to a set of points using geometric
    algebra
    \item Implement this and test rigorously, document this in the report
    \item Show to Dani\"el and get feedback on this
  \end{itemize}

\subsection{Week 20}
  \begin{itemize}
    \item Research how different points from the point cloud can be assigned to
    different planes, based on position (RANSAC?)
    \item Discuss methods with Dani\"el, decide on method
    \item Document the decision
    \item Meet with Marcel for report check-up
  \end{itemize}

\subsection{Week 21 and 22}
I foresee that this part of the project will take up the most time and will
therefore need two weeks (at least).
  \begin{itemize}
    \item Implement point-to-plane assignment
    \item Test on many different datasets
    \item Update report accordingly
    \item Discuss results with Dani\"el
  \end{itemize}

The next four weeks will be spent according to how far along the project is by
then. If everything goes as planned and I have four full weeks left, I will use
them to implement multiple view geometry in order to generate 3D point clouds
myself from 2D data. If not, the remaining weeks will be used to catch up. The
planning below covers the best-case scenario.

\subsection{Week 22}
\begin{itemize}
  \item Research SIFT for multiple view geometry (MVG)
  \item Implement SIFT for MVG
  \item Write a report section on SIFT
\end{itemize}

\subsection{Week 23}
\begin{itemize}
  \item Research RANSAC for multiple view geometry
  \item Implement RANSAC for multiple view geometry
  \item Write a report section on RANSAC
\end{itemize}

\subsection{Week 24}
\begin{itemize}
  \item Use the multiple view geometry implementation to generate my own
  datasets for fitting point clouds to room models
  \item Test rigorously and note what kinds of rooms work well
\end{itemize}

The last two weeks (weeks 25 and 26) will be used for testing and finishing up
the report.

The expected end date of the project is 30 June 2011.

\section{Literature}
In the past, much research has been done on \emph{fitting point clouds to 3D
  models}.
One of the most recent papers on the subject is by Esteban et al.
\cite{esteban2010automatic}, in which the urban landscape is modeled using
optical flow to generate the point clouds. Their approach is broader, in
the sense that they do not constrain the result to any particular set of
planes. First they estimate the camera pose using two consecutive images;
this, in essence, is the same method I will use for creating 3D point clouds
myself. They then fit a set of planar patches to this point cloud, after which
the texture for each plane is computed from the original photographs. My
project, although more constrained, is significantly similar to this paper,
the major difference being that I will use geometric algebra to
represent the objects in the scene (be it the points in the point cloud or
the planes to be fitted).

The fitting process will have to deal with \emph{noise}. This can range from
noise caused by objects in the room, which generate points not belonging to a
wall, to noise that arises because the cameras are not perfect
and algorithms such as SIFT can still match point pairs that are not in
fact the same point. To filter out this noise, multiple algorithms could
be used. One frequently used algorithm is \emph{RANSAC}, which can be used
to filter out noise but also to find which points belong to which planes.
Schnabel et al. \cite{schnabel2007efficient} have devised a modified
version of RANSAC which is tailored specifically to
detecting shapes within point clouds. They focus on classical
techniques, but their article could form an inspiration for doing the same
thing within geometric algebra.
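To sketch the basic idea behind RANSAC for a single plane (a plain classical
formulation in coordinates, not yet the geometric-algebra version; thresholds
and iteration counts are illustrative): repeatedly fit a plane to three random
points and keep the hypothesis with the largest consensus set.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, seed=None):
    """Find the plane supported by the most points within `threshold`.

    Classical RANSAC: sample 3 points, form their plane, count inliers,
    and keep the hypothesis with the largest consensus set.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - a) @ normal)   # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic check: a flat "wall" (z = 0) plus 20% scattered clutter points,
# playing the role of furniture noise.
rng = np.random.default_rng(1)
wall = np.column_stack([rng.uniform(0, 4, (400, 2)),
                        0.005 * rng.standard_normal(400)])
clutter = rng.uniform(-1, 1, (100, 3))
pts = np.vstack([wall, clutter])
mask = ransac_plane(pts)
```

The same consensus loop doubles as a point-to-plane assignment step: running it
repeatedly on the points not yet claimed by a plane segments the cloud into
planar patches, which is essentially what Schnabel et al.\ refine.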

\emph{Geometric algebra} itself dates back to the 19th century, when H. Grassmann
first introduced the idea of an algebra relating geometric elements
\cite{grassmann1844lineale}. It is a more or less direct descendant of
Grassmann algebra and Clifford algebra. In 2009, Dorst et al.
\cite{dorst2009geometric} published a book on geometric algebra for
computer science. The book covers all aspects of geometric algebra and requires
no prior knowledge. It covers the conformal model extensively, which
is the model I will be using to represent the geometric objects involved
(points and planes).

When generating point clouds from 3D environments, \emph{multiple view geometry}
is used: the dimension lost when taking a photograph of a 3D environment can be
retrieved using multiple images. Hartley and Zisserman \cite{Hartley2000} is the
standard work on this subject. Split into five parts, it covers every aspect
of multiple view geometry, from single-view geometry (in which the
camera-specific matrix can be modeled using the observed radial distortion) to
two-view geometry, which explains how 3D point clouds are generated from two
images (the most salient part for this thesis). Later on, N-view geometry is
discussed, generalizing the two-view case. This will probably not serve my
thesis directly, although it could be used in future work to create more
reliable point clouds from relatively little additional data.
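The two-view reconstruction step can be illustrated by linear (DLT)
triangulation: given two projection matrices and a matched pair of image
points, each observation contributes two rows to a homogeneous linear system
whose least-squares solution is the 3D point. The sketch below uses synthetic
cameras; the matrices and coordinates are illustrative only.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    Each observation x = (u, v) under a 3x4 projection matrix P contributes
    the rows u*P[2] - P[0] and v*P[2] - P[1] to a homogeneous system A X = 0;
    the 3D point is the right singular vector of A with the smallest singular
    value.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix (pinhole model)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: one at the origin, one translated along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free matches the point is recovered exactly; with real SIFT matches
the residual of this system is precisely the kind of noise the earlier
filtering stage has to absorb.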

\nocite{*} 
\bibliographystyle{plain-annote}
\bibliography{biblio}
\end{document}
