\documentclass[a4paper]{article} % (header)
\usepackage[margin=2.5cm]{geometry}
\usepackage{algorithm}
\usepackage{algorithmic}

\usepackage{url}

\usepackage[T1]{fontenc}
\usepackage[math]{iwona}
\usepackage{graphicx}
\usepackage{amsmath}

\setlength{\parskip}{1.3ex plus 0.2ex minus 0.2ex}
\setlength{\parindent}{0pt}

\newcommand{\HRule}{\rule{\linewidth}{0.3mm}}

% (header end)

\begin{document}

\input{./title.tex}

\tableofcontents\newpage

\section{Introduction} % (fold)
\label{sec:Introduction}
When tackling geometrical problems, one of the most common approaches is to create a model
of the problem in classical linear algebra, solving it in that formalism using numbers as
the basic elements of computation. This method has long been used and works
well. However, the use of numbers as the basic elements in a geometric problem generates
long-winded expressions which are in most cases far from intuitive. Here lies a
possibility for improvement: a cleaner representation of geometric problems would arise if
we could represent the objects we reason about in a more direct way.
Geometric algebra (GA) \cite{dorst2009geometric}, an alternative algebra for representing and computing with geometric objects and problems, fills that void.
In GA, complete subspaces such as planes and lines are the elements of
computation. As a result, computations can be done directly on these elements, without the
need of manually manipulating any of their coordinates. This creates a compactness of expression
which generates clean and compact code. The advantages of this algebra over classical methods are
easy to state but hard to substantiate with an in-depth theoretical analysis. After all, going deep into the nuts and bolts of the algebra will most likely not demonstrate the compactness of expression that is apparent when we work with it at the surface level. We have therefore chosen a case study to show the advantages of using GA over classical methods in geometric problems.

For the case study, we tackle the problem of creating 3D models from multiple 2D
representations. This is a prime example  of a geometrical problem which has been discussed using classical formalisms many times (see
section \ref{sec:Context}). As we want to show the advantages of using GA for problems with
a geometrical context, this problem makes for a good case study.

2D representations of the environment such as photographs have many uses, but are quite limited.
The world we live in is not two dimensional. By definition, when creating a 2D
representation of a 3D world\footnote{A 3D world in which the dimensions are independent,
which they are in our world.}, information is lost. Using 3D models of the environment
thus creates new possibilities.

Creating 3D models of any kind of environment from scratch using manual modeling software
is expensive, and the process takes too much time to be effectively used on a regular
basis.  2D imagery, on the other hand, is easy to create and quick to come by. Using
multiple 2D views of an environment to create 3D models, thus, removes the difficulties
posed by the direct way of generating these models.

This method of reconstruction is twofold. First, using multiple view geometry, a
3D point cloud is created from the 2D imagery. Then, using this 3D point cloud,
a surface 3D reconstruction is created from the captured scene. The generation of these point clouds is fairly straightforward and can be done in many
different ways. In the following section we will briefly discuss the research
done on this topic. After the point cloud has been generated, fitting it to a 3D surface is more challenging and has been the subject of many diverse approaches.
All past efforts have used classical, linear-algebra-based techniques. We will address the same problem, but using the formalism of GA instead.

In section \ref{sec:Context}, we will describe previous work done
on generating 3D surfaces from reconstructed point clouds, and briefly mention
the work done on generating the point clouds themselves. Afterwards, we will
detail the research question posed in this paper, presenting the added value
given by this work. In section \ref{sec:Method} we will describe the methods used
in this work, combining geometric algebra with classic techniques like RANSAC
and Hough transforms. The experiments done with these methods and the results generated are presented in
section \ref{sec:Experiments}. The conclusions that can be extrapolated from the
research will be discussed in section \ref{sec:Conclusions}. Finally, in section
\ref{sec:Discussion} we will discuss possible future work and relevant
improvements.

% section Introduction (end)

\newpage

\section{Context} % (fold)
\label{sec:Context}

The problem in our case study is not a new one. Much research has been done on how to most effectively
reconstruct 3D models from 2D imagery. This process is twofold. The first step deals with reconstructing a 3D point cloud from the captured 2D data (in most cases photography), after which the actual surface model of the environment is reconstructed from that point cloud in the second step.

The standard work on reconstructing an environment represented as multiple 2D images taken from
different viewpoints is that of Hartley \& Zisserman \cite{Hartley2000}, which describes the complete
process of generating 3D point clouds from 2D imagery.

One recent method for generating a point cloud from multiple 2D representations of an environment was presented by
Esteban \cite{esteban2010automatic}. It simulates stereo vision with a single camera: two photographs taken
shortly after one another act as two virtual cameras, whose images are fed into a stereo-vision model.
This removes the need for a dual-camera system when reconstructing 3D environments in real time.

When the point cloud has been constructed, it should be fitted to actual surface models to regenerate the environment from which
the original data was captured. This specific problem has been studied numerous times. One method for regenerating the actual surface model of the environment from the generated point cloud is based on RANSAC. It was presented by Schnabel et al. \cite{schnabel2007efficient}. They proposed
a modified version of RANSAC (which we will discuss in section \ref{sub:RANSAC}) specifically tailored to
finding a number of different shapes in a point cloud. 

Another method which is often used for finding specific shapes within a data set is the Hough transform, which we will discuss in section \ref{sub:Hough}. Although commonly used for detecting  lines and  circles in 2D datasets, it can also be used in a 3D environment or  point clouds derived from them. Recently, Borrmann et al. \cite{borrmann2011hough} devised a method of implementing the Hough transform in 3D environments, especially
designed for finding planes.

We have seen that many methods have been proposed for resolving the problem at hand, yet in all of this
research, geometric algebra is never mentioned. Its compactness of expression when dealing with geometric
problems should become apparent when applying it to our case.

Although geometric algebra was first mentioned and discussed by Grassmann \cite{grassmann1844lineale},
for the geometric algebra used in our research we have mostly relied on the recent book by Dorst et al. \cite{dorst2009geometric}. It gives
a complete overview of the algebra as well as how it can be applied in a computer driven setting.

% section Context (end)

\newpage

\section{Research question} % (fold)
\label{sec:Research question}
Much research has been done on reconstructing a 3D surface model from a
reconstructed 3D point cloud. However, in all research done, the basic
elements of computation are real numbers. This makes for tedious
equations and computations, as the surfaces and points in question are only
parametrized by their numerical constituents. It would be efficient if the
points and surfaces could be viewed as elements themselves. This is where
geometric algebra comes in.

Geometric algebra is an algebra in which geometrical objects such as lines and
planes are the basic elements of computation. Once such objects are created,
various operators are available that directly compute relations between them, such
as the distance between two points or the intersection of a plane and a line.
This algebra does not make new computations possible, but it does
simplify many computations significantly, which means that tasks
involving geometrical computations will often be more compactly
represented compared to using classical methods.

The question addressed in this paper arises from these points. Although the
problem at hand has been solved with classical techniques many times before,
does the use of geometric algebra result in more compact equations and code? 
Furthermore, does the use of geometric algebra present any other
advantages that one does not readily acknowledge? In
section \ref{sec:Method} we will give a comparison of the code when using geometric
algebra and the code that solves the same problem with classical methods, and
present any other findings regarding the difference between them. Section \ref{sec:Experiments}
will show some results we have gathered using an implementation we created using the methods
described in section \ref{sec:Method}.

% section Research question (end)

\newpage

\section{Method} % (fold)
\label{sec:Method}


% section Method (end)

\subsection{Geometric algebra} % (fold)
\label{sub:Geometric algebra}
When one needs to represent geometric objects, geometric algebra offers an alternative to
the classical algebraic approach. In geometric algebra, geometric entities are the basic elements of computation
and can be handled without working with their coordinate constituents.
Because of its geometric nature, many problems concerning the manipulation
of geometric space and objects yield more intuitive computations than when using a classical representation.

For reference, we discuss some (but not all) of the operators and objects available in geometric
algebra. For a more in depth resource on the workings of GA we refer to Dorst et al.
\cite{dorst2009geometric}.

\subsubsection{Overview of geometric algebra} % (fold)
\label{ssub:Overview of geometric algebra}

\paragraph{Basis vectors}
Similar to linear algebra, the most basic elements of GA are the basis vectors that span the
directional space. In 3D space, these are $e_1, e_2, e_3$, and each corresponds
to one of the $x, y, z$ directions\footnote{It is possible to use a representational space in which
the basis vectors correspond to completely different directions, but we will not cover that
possibility here.}.

The specific model  we deal with (Conformal Geometric Algebra or CGA) expands the directional space with 2 extra dimensions, which
correspond to the point in the origin $o$ (the origin can be chosen arbitrarily) and the point at
infinity $\infty$. This point at infinity is a point which all lines and planes have in common and
which does not change under Euclidean transformations. By making this point explicit, the algebraic
patterns in geometrical statements become more universal \cite{dorst2009geometric}.

\paragraph{Outer product}
The outer product, also called the wedge product, is denoted by $\wedge$ and spans the subspace
comprised of its constituents. For example, $e_1 \wedge e_2$ denotes the subspace spanned by
$e_1$ and $e_2$. Such an outer product is called a \emph{blade}, and its dimensionality is called its
\emph{grade}. This product is defined over all elements of GA and is purely algebraic.

\begin{figure}[h!t]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/e1e2}
\caption{Basis vectors $e_1$ and $e_2$}
\label{fig:e1e2}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/e1We2}
\caption[Outer product of $e_1$ and $e_2$]{The blade which results from the outer product between $e_1$ and $e_2$: $e_1 \wedge e_2$. Here, the blade is graphically represented as a filled circle, but it represents the complete subspace spanned by both vectors! In a 2D space, this is of course the complete space, in which case the blade is called the \emph{pseudoscalar}}
\label{fig:e1We2}
\end{minipage}
\end{figure}

\newpage

A blade with the same grade as the representational space is called a \emph{pseudoscalar} and is
denoted as $\mathbf{I}_n$ where $n$ is the grade of the pseudoscalar. All
pseudoscalars of the same space are scalar multiples of each other (see figure \ref{fig:e1We2}).

\paragraph{Contraction}
The contraction is a more abstract product and has  been expressed by Dorst et al. \cite{dorst2009geometric}
as:
\begin{quote}
The contraction $\mathbf{A}$ on $\mathbf{B}$ of a blade $\mathbf{A}$ of grade $a$ and a blade
$\mathbf{B}$ of grade $b$ is a specific sub-blade of $\mathbf{B}$
of grade $b - a$ perpendicular to $\mathbf{A}$, with a weight proportional to the norm of
$\mathbf{B}$ and to the norm of the projection of $\mathbf{A}$ onto $\mathbf{B}$.
\end{quote}

It can be used to `take a certain subspace out of another subspace'. For example, we can use it to retrieve one of the original vectors
from which a blade has been previously made up:

\[ A = e_1 \wedge e_2 \rightarrow e_1 \rfloor A = e_2 \]

For vectors, the contraction is quite similar to the more familiar \emph{dot product} from linear
algebra. In the model we use (conformal geometric algebra), though, the dot product has to be
extended to the added dimensions of $\vec{o}$ and $\infty$, which is where it differs from the
classical dot product. The result table is listed in figure \ref{fig:contracttable}.

\begin{figure}[h!t]
\centering
\begin{tabular}{r|c|c|c|c|c}
          & $\vec{o}$ & $e_1$ & $e_2$ & $e_3$ & $\infty$ \\ \hline
$\vec{o}$ & 0         & 0     & 0     & 0     & -1 \\ \hline
$e_1$     & 0         & 1     & 0     & 0     & 0 \\ \hline
$e_2$     & 0         & 0     & 1     & 0     & 0 \\ \hline
$e_3$     & 0         & 0     & 0     & 1     & 0 \\ \hline
$\infty$  & -1        & 0     & 0     & 0     & 0 \\
\end{tabular}
\caption{Table of outcomes for the contraction between basis vectors}
\label{fig:contracttable}
\end{figure}

Note that the rules for using the contraction are not as straightforward as this example may suggest. An in-depth discussion of these
algebraic rules is offered in \cite{dorst2009geometric}.

\paragraph{Geometric product}
The geometric product is the fundamental product of geometric algebra and all other products are
derived from it. It is simply denoted by juxtaposition: $\vec{a}\vec{b}$ means the geometric product
between $\vec{a}$ and $\vec{b}$. For vectors, it is simply defined as

\[ \vec{a}\vec{b} = \vec{a} \rfloor \vec{b} + \vec{a} \wedge \vec{b} \]

In more concrete terms, we can say that the geometric product between two elements contains every
relationship between those elements (for example the distance between them, the angle between them,
their containment relationship, etc.).

The definition of the geometric product for objects of higher grade (dimensionality) is too involved to list here; please refer to Dorst et
al. \cite{dorst2009geometric}.

\paragraph{Dual form}
Any expression in geometric algebra has a one-to-one mapping with its \emph{dual form}. This means
that any object in GA can be expressed in two ways: directly and dually. The dual form of an object is
denoted by the operator $*$ and can be computed from the direct form using a simple equation:

\[ P^* = P \rfloor \mathbf{I}_n^{-1} \]

Conversely, the direct form can be retrieved from the dual form using \emph{undualization}:

\[ P = P^* \rfloor \mathbf{I}_n \]

The geometric interpretation of the dual form is not always easily extrapolated, but for many
objects the two different representations both correspond to classical well-known representations.
For example, the direct form of a plane is the outer product (which spans a subspace between elements as we have just discussed) between three points on the plane and
the point at infinity, while its dual form uses a combination of the plane's normal vector and distance to the origin
to define it fully, a representation which users of linear algebra should be familiar with.

\paragraph{Conformal geometric algebra}
Geometric algebra is an algebra which can be implemented using many different models. The model we
use is called \emph{conformal geometric algebra} which is specifically designed for Euclidean
geometry and its transformations. All Euclidean transformations (those comprised of rotations,
reflections, translations and their compositions) can be expressed using the \emph{versor product}.

A versor is simply an object which represents some transformation. The creation of such a versor is
oftentimes quite simple, and we will encounter such a computation in section \ref{sub:3D to 2D}. The
transformation can be applied by computing the versor product. With a versor $V$ and a
to-be-transformed object $O$ this is done as follows:

\[ O_t = V O V^{-1} \]

Here $V^{-1}$ is the inverse of the versor. All orthogonal transformations (i.e. transformations that preserve angles and lengths of vectors) can be represented like
this, and in CGA all Euclidean transformations are orthogonal. This creates a very compact way of
expressing quite complex transformations in a universal way.

Moreover, in CGA, points can be expressed explicitly, as distinct from vectors. While a simple vector
comprised of multiples of the 3 basic direction vectors ($e_1, e_2, e_3$) denotes a \emph{direction} in space,
we would often like an explicit representation of an actual \emph{point in space}. In CGA, we represent such a point
with

\[ p = \vec{o} + \vec{v} + \tfrac{1}{2}(\vec{v} \rfloor \vec{v})\,\infty \]

This representation, combined with the contraction table listed before in figure \ref{fig:contracttable}, means
that the contraction of two points directly encodes their distance: for two points $p_1$ and $p_2$ a Euclidean
distance $d$ apart,

\[ p_1 \rfloor p_2 = -\tfrac{1}{2}d^2 \]

This measure also means that we can easily check whether two points are identical: if the contraction above
returns 0, the points in question are clearly in the same position and thus the same point.
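To make this concrete, the contraction table of figure \ref{fig:contracttable} can be encoded as a metric matrix and used to verify the distance property numerically. The sketch below (Python with numpy; illustrative only, not part of this work's implementation) embeds two Euclidean points as conformal points and contracts them, yielding $-\frac{1}{2}d^2$, which vanishes exactly when the points coincide.

```python
import numpy as np

# Basis order (o, e1, e2, e3, inf); this matrix is exactly the contraction
# table for the basis vectors: e_i ] e_i = 1 and o ] inf = inf ] o = -1.
M = np.array([
    [ 0., 0., 0., 0., -1.],
    [ 0., 1., 0., 0.,  0.],
    [ 0., 0., 1., 0.,  0.],
    [ 0., 0., 0., 1.,  0.],
    [-1., 0., 0., 0.,  0.],
])

def conformal_point(v):
    """Embed a Euclidean 3D point v as p = o + v + (1/2)(v ] v) inf."""
    v = np.asarray(v, dtype=float)
    return np.array([1.0, v[0], v[1], v[2], 0.5 * v.dot(v)])

def contract(p, q):
    """Contraction of two conformal vectors under the CGA metric."""
    return p @ M @ q

p1 = conformal_point([1.0, 2.0, 3.0])
p2 = conformal_point([4.0, 6.0, 3.0])
print(contract(p1, p2))   # -> -12.5, i.e. -(1/2) * 5**2
print(contract(p1, p1))   # -> 0.0: a conformal point is a null vector
```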

\subsubsection{On compactness of expression} % (fold)
\label{ssub:On compactness of expression}
The advantages of geometric algebra we will show are based on their \emph{compactness of expression}. However,
it must be noted that this does not simply constitute a shorter way of writing down the problem definition.
After all, using natural language we can easily define the complete problem at hand as ``fit a room model
to this given point cloud''. The difference lies in the fact that the representation given by geometric algebra
is completely deterministic: following the rules of the algebra (of which we have listed a selection
in section \ref{ssub:Overview of geometric algebra}), one can calculate the result of any single
expression. There are no external functions involved other than the basic rules for calculating the
different products in the algebra.

Solving the problem stated as ``fit a room model to this given point cloud'' is obviously not as straight-forward.
We are thus talking about compactness of expression while retaining the possibility of directly calculating the
result of the expression.
% subsubsection On compactness of expression (end)

% subsubsection Overviewof geometric algebra (end)

\subsubsection{Example: Plane through three  points, two methods}
\label{ssub:Plane through three points, two methods}

We will try to make the workings of geometric algebra more concrete using an example. Here we will list a geometric problem
together with its solution using classical methods. Then, for comparison, we solve the same problem using GA to show its compactness of expression.

Two geometrical operations we need for our case study are creating the plane $P$ through three given points $p_1, p_2, p_3$ and calculating the distance
between any arbitrary point and such a plane (see section \ref{sub:RANSAC}). In linear algebra, given these three points,
one first calculates the normal of the plane:
  \[ n = (p_1 - p_2) \times (p_1 - p_3) \]
where $\times$ denotes the cross product. This normal vector combined with any of the three points ($p_d$) defines the plane fully.
Calculating the distance between an arbitrary point $p_a$ and this plane is a relatively involved operation. One first calculates
the vector $w$ from $p_d$ to $p_a$, and then projects this vector onto the normal $n$. The length of the resulting vector is equal to the distance $D$
of the point to the plane:
  \[ D = |\mathrm{proj}_n w| = \frac{|n \cdot w|}{|n|} \]
Even though these computations are relatively cheap to perform, quite some mathematics are involved and the process is not intuitive.

In conformal geometric algebra, we can use the outer product ($\wedge$) to span a subspace $S$ using as many constituents as necessary.
A sphere, for example, is defined by any four points on its surface. Remembering that the point at infinity is common to all planes and lines (section \ref{sub:Geometric algebra}), with three given points $p_1, p_2, p_3$ the plane through these points
is thus created with the simple equation
\[ P = p_1 \wedge p_2 \wedge p_3 \wedge \infty \]
This fully defines the plane. Using this representation, one can easily calculate the distance between $P$ and an arbitrary point $p$.
To do so, the plane is converted to dual form, after which the distance is given using the contraction \cite{dorst2009geometric}:
\[ D = p \rfloor P^{*} \]
This method, although not necessarily\footnote{This depends on the efficiency of the implementation.} faster (see section \ref{sub:compeff}), is more intuitive and generates code that is clear and easy to maintain.
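The classical construction above can be sketched in a few lines of Python with numpy (an illustration of the linear-algebra recipe just described, not code from this project):

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Classical construction: n = (p1 - p2) x (p1 - p3), anchored at p1."""
    return np.cross(p1 - p2, p1 - p3), p1

def point_plane_distance(pa, n, pd):
    """D = |proj_n w| = |n . w| / |n|, with w the vector from pd to pa."""
    return abs(np.dot(n, pa - pd)) / np.linalg.norm(n)

# The plane z = 0 through three of its points; the distance of (0, 0, 5)
# to this plane is 5.
p1, p2, p3 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
n, pd = plane_from_points(p1, p2, p3)
print(point_plane_distance(np.array([0., 0., 5.]), n, pd))   # -> 5.0
```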

% subsubsection Plane through three points, two methods (end)

% subsection Geometric algebra (end)

\subsection{RANSAC} % (fold)
\label{sub:RANSAC}
RANSAC is an iterative method used to estimate parameters of some (mathematical) model making use of
a set of datapoints containing outliers. First published in 1981 by Fischler and Bolles \cite{bolles1981ransac},
  it has seen quite some variations, but the core has remained the same.

The assumption upon which RANSAC is based is that a dataset contains valid datapoints and outliers. In an
iterative manner, datapoints are randomly selected from the set and a model is fitted to those points.
An error measure is calculated from that model given the rest of the points, and noted. Then the process
starts over again. This process is repeated a number of times, after which the best model (with the lowest
 error measure) is returned as the right model.

RANSAC can be described textually as presented in algorithm \ref{alg:RANSAC}.

\begin{algorithm}
\caption{RANSAC}
\label{alg:RANSAC}
\begin{enumerate}
\item Select at random the minimum amount of points necessary to determine the model parameters
\item Create model from these points
\item Determine how many points in the total set of points lie within a predefined threshold $\theta$ of the model
\item If this model has a lower error measure than the current best model, save it
\item Repeat steps 1 through 4 for a predetermined amount of $N$ steps
\item Return best model
\end{enumerate}
\end{algorithm}

A plane is defined by just three points, and thus for the problem at hand we keep selecting 3 points at random from the dataset
and generate the plane through these points. In section \ref{sub:Geometric algebra} this was shown to be defined as
\begin{equation}
P = p_1 \wedge p_2 \wedge p_3 \wedge \infty
\end{equation}
This is a basic element of computation and the original elements are not needed for any further computation
involving the plane.\footnote{The original elements are conversely also impossible to recreate from the combined representation.} The distance $D$ between the plane $P$ and any arbitrary point $p$ was defined as
\begin{equation}
D = p\ \rfloor\ P^*
\end{equation}
where $P^*$ is the dual form of $P$. These two computations are performed iteratively as shown in algorithm \ref{alg:RANSAC}.

The results gathered using this method are presented in  section \ref{sub:ExpRANSAC}.

% subsection RANSAC (end)

\subsection{Hough transform} % (fold)
\label{sub:Hough}
The Hough transform is a technique used for feature extraction, mostly seen in image analysis. It provides a method
for finding imperfect instances of a certain  class of shapes within a dataset using a voting procedure. This voting procedure
is carried out in parameter space, whose dimensionality is equal to the number of unknown parameters of the shape class to
be detected. The idea behind the transform is relatively straightforward: for each point in the dataset, the shapes that can
be formed containing that point are generated. An accumulator array stores the occurrences of these shapes using their parameters. If a certain shape
is present in the dataset, all points in the set that lie on this shape should cluster around its parameters in the accumulator array.
In the end, the local maxima in the accumulator space correspond to shapes found in the dataset.

In its most basic form, the Hough transform can be described as in algorithm \ref{alg:Hough}.

\begin{algorithm}
  \caption{Hough Transform}
  \label{alg:Hough}
  \begin{algorithmic}
  \STATE $A \gets \{\}$
  \FORALL{$p \in \textrm{dataset}$}
    \STATE $\textrm{par} \gets \textrm{parameters of }p$
    \STATE $A[\textrm{par}] \gets A[\textrm{par}] + 1$
  \ENDFOR
  \RETURN local maxima in A
  \end{algorithmic}
\end{algorithm}
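For intuition, here is a minimal 2D line version of algorithm \ref{alg:Hough} in Python/numpy (an illustrative sketch, not code from this work), using the normal form $\rho = x\cos\theta + y\sin\theta$ so that every line through a point maps to one $(\theta, \rho)$ accumulator cell:

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=0.05):
    """Classic 2D Hough transform for lines in normal form
    rho = x*cos(theta) + y*sin(theta) (illustrative sketch)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = {}
    for x, y in points:
        # Every line through (x, y) corresponds to one (theta, rho) pair;
        # discretizing theta generates the candidate shapes for this point.
        for i, t in enumerate(thetas):
            cell = (i, int(np.round((x * np.cos(t) + y * np.sin(t)) / rho_res)))
            votes[cell] = votes.get(cell, 0) + 1
    best = max(votes, key=votes.get)   # the clearest maximum in the accumulator
    return best, thetas
```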

This process is computationally quite expensive. For each point in the dataset, a quite large number of parametrized shapes have to be
generated (the number of shapes generated for each point dictates the precision with which shapes can be detected). With the task at hand,
a dataset with 50000 points is not out of the ordinary. This yields a computation which is infeasibly expensive. Another version of the Hough transform called
the Randomized Hough Transform (RHT), presented in 1990 by Xu \cite{xu1990new}, removes this problem. For a shape class defined by $n$ parameters, instead of passing through each point, $n$ points are selected at random and mapped to 1 point in the accumulator array. This procedure is then
repeated. After some time, the accumulator array will show local maxima at the parameters corresponding to shapes in the dataset.

\begin{algorithm}
  \caption{Randomized Hough Transform for planes}
  \label{alg:RHT}
  \begin{algorithmic}
  \STATE $A \gets \{\}$
  \REPEAT
    \STATE $p_1,\ p_2,\ p_3\ \gets \textrm{random selection of three points from dataset}$
    \STATE $\textrm{par} \gets \textrm{parameters of plane (normal and distance to origin) defined by }p_1, p_2, p_3$
    \STATE $A[\textrm{par}] \gets A[\textrm{par}] + 1$
    \COMMENT{Here $A[\textrm{par}]$ can be a new cell or an already existing cell with a maximum distance to the current parameters of $\delta$}
  \UNTIL{accumulator array has clear maxima}
  \RETURN local maxima in A
  \end{algorithmic}
\end{algorithm}

The error threshold $\delta$ specified in algorithm \ref{alg:RHT} above is introduced because the dataset used could be quite noisy, making the parametrized
planes not identical. In our case, even though two parametrized planes could represent the same real plane in the room environment, they could
have parameters which are not identical because of measurement noise. This way these planes would still fall in the same accumulator cell.
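Algorithm \ref{alg:RHT} can be sketched as follows (Python/numpy, illustrative only). As a simplification, instead of the $\delta$-matching of existing accumulator cells described above, parameters are snapped to a grid with cell size $\delta$, which has a similar effect:

```python
import numpy as np
from collections import Counter

def rht_planes(points, n_iter=2000, delta=0.1, rng=None):
    """Randomized Hough transform for planes (illustrative sketch).
    Parameters are snapped to a grid of cell size delta instead of the
    delta-matching of accumulator cells described in the text."""
    rng = rng if rng is not None else np.random.default_rng(0)
    acc = Counter()
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p2, p1 - p3)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue
        n = n / norm
        # Canonical sign, so that opposite normals denote the same plane.
        if n[2] < 0 or (n[2] == 0 and (n[1] < 0 or (n[1] == 0 and n[0] < 0))):
            n = -n
        dist = n.dot(p1)             # signed distance of the plane to the origin
        cell = tuple(np.round(np.r_[n, dist] / delta).astype(int))
        acc[cell] += 1               # one vote per parametrized plane
    return acc.most_common(3)
```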

\subsubsection{Nearest Neighbour Hough Transform} % (fold)
\label{ssub:nnht}

To make the process even faster, instead of choosing the three points from the dataset at random,
we first create a table listing the 2 nearest neighbours of each datapoint.
Then, the accumulator array is filled by passing over each point in the dataset and creating the
plane through it and its two nearest neighbours.
This increases the speed significantly, as three points lying very close together are much more
likely to be part of the same plane than three points selected at random from the set.
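A brute-force version of this neighbour table might look as follows (a sketch; for large point clouds a k-d tree lookup would be the usual choice):

```python
import numpy as np

def two_nearest_neighbours(points):
    """Brute-force table of the two nearest neighbours of every point.
    O(n^2) in the number of points; fine as a sketch, while a k-d tree
    would be preferred for real point clouds."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :2]
```

Each accumulator vote is then cast by the plane through \texttt{points[i]}, \texttt{points[nn[i, 0]]} and \texttt{points[nn[i, 1]]}.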

% subsubsection Nearest Neighbour Hough Transform (end)

\subsubsection{Unique representation} % (fold)
\label{ssub:Unique representation}

For storing the generated planes in the accumulator array, we need to make sure that the planes
generated are unique: a plane generated from a set of three points $S$ should render the exact same
representation as a plane generated from any other set of three points $S'$ on that same plane,
otherwise a local maximum will never form in the parameter space. Such a unique representation is
easily extrapolated from the representation used in the previous section.

The dual form of a plane in CGA is a simple vector, in which the $e_1, e_2, e_3$ components (the Euclidean part) denote the normal direction and the $\infty$ component is proportional to the distance of the plane from the origin. If the plane is normalized, the $\infty$ component is equal to the distance to the origin.
When the plane is normalized, just 3 values need to be saved in the accumulator array in order to uniquely store the plane:

\[ P_n = \frac{P^*}{\sqrt{P^* \cdot P^*}} \]

where $\cdot$ denotes the dot product. Now just two Euclidean components and the $\infty$ component need to be saved; the third Euclidean component can be
retrieved, up to a sign that must be fixed by an orientation convention, by acknowledging that because the plane is normalized, the following equation must hold:

\[ \sqrt{e_1^2 + e_2^2 + e_3^2} = 1 \]

The fact that three components need to be saved is not surprising. In classical techniques, the most
commonly used unique representation of planes is one where the angle $\theta$ of the normal with the $(x,y)$ plane is saved
together with the angle $\phi$ of the normal with the $(x,z)$ plane and the distance to the origin.
These are exactly the degrees of freedom a plane in 3D has, so our unique representation in GA
cannot possibly get any more compact.
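The storage scheme can be sketched as follows (Python/numpy, illustrative). Note that recovering the third component from the unit-norm constraint leaves its sign undetermined; the flip-to-nonnegative convention used here is our assumption, as the text leaves the sign convention implicit:

```python
import numpy as np

def store_plane(n, dist):
    """Store a normalized plane as (e1, e2, inf-component). The plane is
    flipped so that e3 >= 0 -- an assumed convention, since the sign of
    the recovered component is otherwise ambiguous."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:
        n, dist = -n, -dist
    return n[0], n[1], dist

def restore_plane(e1, e2, dist):
    """Recover e3 from sqrt(e1^2 + e2^2 + e3^2) = 1 (nonnegative by convention)."""
    e3 = np.sqrt(max(0.0, 1.0 - e1 ** 2 - e2 ** 2))
    return np.array([e1, e2, e3]), dist
```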

% subsubsection Unique representation (end)

% subsection Hough transform (end)

\subsection{3D to 2D} % (fold)
\label{sub:3D to 2D}
Although the methods listed above are correct and should render good results (setting aside noise in
the data), when looking at the problem closely we should notice that we are essentially dealing with
just 2 dimensions: the walls of a room stand straight up and are (most of the time) not tilted.
Certainly we should use this information to our  advantage.

One method of using this extra piece of information  is by looking at the data from above and
treating that view as a 2D dataset, in which lines need to be found instead of planes. However, the
data generated from stereovision methods are often tilted: it is very likely that the pictures
taken were not completely level with the horizontal  axis, and thus looking directly from above does
not correspond with looking at the \emph{room} directly from above. This should be corrected first.

By first looking for the bottom or top plane (i.e., floor or ceiling) in the dataset, we can then rotate the complete dataset
so that this plane is level with the horizontal plane. Then, we can look from above as mentioned
before and will be left with a lower-dimensional problem.

Finding the bottom or top plane by means of RANSAC or the Hough transform can be done by
selecting the points used for generating candidate planes from a small portion of the set with the
lowest or highest vertical component ($e_2$ in our model). The detected plane $P$ should then be rotated to
be level with the horizontal plane. With $p_1$ a point corresponding to $e_1$ and $p_3$ a point
corresponding to $e_3$, the horizontal plane is defined as

\[ H = \vec{o} \wedge p_1 \wedge p_3 \wedge \infty \]

Now, as pointed out in section \ref{ssub:Overview of geometric algebra}, we can create a versor that
performs the rotation we want. In geometric algebra, the versor rotating one object $A$ onto another
object $B$ can be computed (up to scale) as

\[ V = 1 + BA \]

which maps to our problem as

\[ V = 1 + HP \]

This is the complete definition of the versor. Applying it to each point $p$ in the dataset as a
sandwich product $V p V^{-1}$ (which makes the overall scale of $V$ irrelevant) yields the dataset
rotated so that the detected plane is level with the horizontal plane.

Now that the point cloud has been rotated appropriately, we can project it onto the horizontal plane.
The horizontal plane $H$ is conveniently given in dual form by its normal: $H^* = e_2$. For each
point $p$ in the cloud, we span the line through $p$ perpendicular to the horizontal plane using the outer product:

\[ L = e_2 \wedge p \wedge \infty \]

The projection of $p$ onto the horizontal plane is then given by the \emph{meet} of this line with $H$:

\[ p_{proj} = (L^* \wedge H^*)^{-*} \]

where $-*$ denotes undualization (section \ref{ssub:Overview of geometric algebra}). Doing this operation for
all points in the point cloud projects the cloud onto the horizontal plane, which is
what we were after.

Here we see a striking difference between linear algebra and geometric algebra. The same problem
can be tackled in linear algebra, but is much more involved. We list the linear-algebra approach here for comparison.

In linear algebra, planes are not direct objects of computation; they are represented by a
combination of vectors describing their orientation and location. In our problem, we
thus want to rotate the normal vector of the found plane onto the normal vector of the
horizontal plane. The rotation of one vector $\vec{v_1}$ onto another vector $\vec{v_2}$ comprises two distinct
steps. First, the axis and angle of rotation need to be computed. Then, using this axis and angle,
a matrix can be computed which performs the desired rotation.

Given the two vectors, the axis of rotation is calculated using the cross product, which returns a
vector perpendicular to both constituents:

\[ \vec{a} = \vec{v_1} \times \vec{v_2} \]

We will need to normalize this axis vector before we can use it:

\[ \vec{a} = \frac{\vec{a}}{||\vec{a}||} \]

Then, we calculate the angle between these vectors using the dot product:

\[ \phi = \mathrm{acos}\left({\frac{\vec{v_1} \cdot \vec{v_2}}{||\vec{v_1}|| \, ||\vec{v_2}||}}
\right) \]

where acos is the arc cosine. If we denote the components of the normalized axis vector
as $x$, $y$ and $z$ respectively, the following matrix performs the rotation:

\[
R = 
\begin{bmatrix}
(1 - \cos(\phi))x^2 + \cos(\phi) & (1 - \cos(\phi))xy - \sin(\phi)z & (1 - \cos(\phi))xz +
\sin(\phi)y \\
(1 - \cos(\phi))xy + \sin(\phi)z & (1 - \cos(\phi))y^2 + \cos(\phi) & (1 - \cos(\phi))yz -
\sin(\phi)x \\
(1 - \cos(\phi))xz - \sin(\phi)y & (1 - \cos(\phi))yz + \sin(\phi)x & (1 - \cos(\phi))z^2
+ \cos(\phi)
\end{bmatrix}
\]
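The steps above translate directly into code; a minimal plain-Python sketch (an illustration, not our actual implementation):

```python
import math

def rotation_matrix_from_vectors(v1, v2):
    """3x3 matrix rotating v1 onto v2, via the axis-angle recipe above.
    Assumes v1 and v2 are nonzero and not (anti)parallel."""
    # Axis of rotation: normalized cross product of the two vectors.
    a = [v1[1]*v2[2] - v1[2]*v2[1],
         v1[2]*v2[0] - v1[0]*v2[2],
         v1[0]*v2[1] - v1[1]*v2[0]]
    norm = math.sqrt(sum(c*c for c in a))
    x, y, z = (c / norm for c in a)
    # Angle of rotation: arc cosine of the normalized dot product.
    dot = sum(p*q for p, q in zip(v1, v2))
    phi = math.acos(dot / (math.sqrt(sum(c*c for c in v1)) *
                           math.sqrt(sum(c*c for c in v2))))
    c, s, t = math.cos(phi), math.sin(phi), 1.0 - math.cos(phi)
    return [[t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
            [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
            [t*x*z - s*y, t*y*z + s*x, t*z*z + c]]

def apply_matrix(M, p):
    """Matrix-vector product M p."""
    return [sum(M[i][j] * p[j] for j in range(3)) for i in range(3)]
```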

This should then be applied to each point in the point cloud. Afterwards, the projection onto
the horizontal plane is done by ``throwing away'' the vertical component ($e_2$ in our model, i.e.,
the second coordinate), which can be achieved using the following matrix:

\[ P =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
\]
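In code the flattening step is trivial; a sketch (assuming, as in our model, that $e_2$, the second coordinate, is the vertical one):

```python
def flatten(points):
    """Drop the vertical (second, e2) coordinate of each rotated point,
    leaving a 2D dataset in the e1-e3 plane."""
    return [(x, z) for (x, y, z) in points]
```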


Compare this matrix and the step before it with the simple versor and versor product listed
above. Clearly, the problem is expressed much more compactly in geometric algebra, and this generates much cleaner code.

% subsection 3D to 2D (end)

\newpage

\section{Experiments} % (fold)
\label{sec:Experiments}
We implemented the methods presented in section \ref{sec:Method} and tested the resulting
implementations on different datasets. Here we present the results on two specific datasets,
one generated artificially and one reconstructed from pictures of an
actual room using a stereovision algorithm.

\subsection{Software}
\label{sub:Software}

\subsubsection{GA implementation} % (fold)
\label{ssub:GA implementation}
% subsubsection GA implementation (end)
For our geometric algebra expressions we used GAIGEN by Daniel Fontijne, a code generator
for geometric algebra. It is written in C and outputs C code. For ease of use, we
created a coupling between Python and the C code generated by GAIGEN. This coupling results in
a significantly slower implementation than pure C, but makes implementing the proposed
algorithms straightforward. Reimplementing the algorithms in C would significantly increase
their speed.

\subsubsection{Generating the datasets} % (fold)
\label{ssub:Generating the datasets}
For generating the datasets, a combination of Microsoft Photosynth \cite{photosynth} and PMVS \cite{PMVS} was
used. As input they take pictures of an environment, and they output a 3D reconstructed point cloud
of that environment. A discussion of these programs is outside the scope of this work.

% subsubsection Generating the datasets (end)

\subsection{Data} % (fold)
\label{sub:Data}

\subsubsection{Artificial set} % (fold)
\label{ssub:Artificial set}
The first dataset we used was created artificially: two 2D images of a
computer-generated room were given to a stereovision algorithm along with handcrafted feature pairs,
resulting in a very clean dataset with little noise. It consists of roughly 25000
datapoints in 3D space.

\begin{figure}[h!t]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/trivial1}
\caption[Artificial dataset, front view]{Artificial dataset\footnotemark, front view}
\label{fig:trivialfront}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/trivial2}
\caption{Artificial dataset, side view}
\label{fig:trivialside}
\end{minipage}
\end{figure}

As can be seen especially in the side view, the back plane consists of many points and is
expected to be found easily by any of the methods used.

% subsubsection Artificial set (end)

\subsubsection{Real set} % (fold)
\label{ssub:Real set}
The second dataset was generated from a number of pictures taken of a room filled with
furniture and other objects. As a result, many points inside the room are added to the point
cloud, which for our algorithms is simply a form of noise.

\begin{figure}[h!t]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/nedsense1}
\caption{Real dataset, front view}
\label{fig:realfront}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/nedsense2}
\caption{Real dataset, side view}
\label{fig:realside}
\end{minipage}
\end{figure}

% subsubsection Real set (end)

\footnotetext{\textbf{N.B.:} Only 1 in every 100 points shown}

\subsection{RANSAC} % (fold)
\label{sub:ExpRANSAC}
We implemented RANSAC as presented in section \ref{sub:RANSAC} using GA, and ran the resulting
implementation on our two different datasets.
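For reference, the core loop of RANSAC for planes can be sketched in plain Python (a deliberately minimal, coordinate-based illustration; our actual implementation uses the GA formulation of section \ref{sub:RANSAC}):

```python
import math
import random

def fit_plane(p1, p2, p3):
    """Plane through three points: unit normal n and offset d (n . x = d)."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = math.sqrt(sum(c*c for c in n))
    if norm < 1e-12:              # the three points were (nearly) collinear
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, n_iter=200, tol=0.05, rng=random):
    """Keep the plane through 3 random points that has the most inliers."""
    best, best_inliers = None, -1
    for _ in range(n_iter):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol)
        if inliers > best_inliers:
            best, best_inliers = plane, inliers
    return best
```

After a plane is accepted, its inliers are removed and the loop is repeated to find the next plane.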

As expected, the backplane of the artificial dataset was easily discovered. After the backplane,
the right wall was the next most likely plane to be detected, but this result varied: in some cases,
the bottom plane was found first (see figure \ref{fig:ransactrivial}). This is due to the randomized nature of the algorithm.

\begin{figure}[h!t]
\centering
\includegraphics[width=0.7\textwidth]{img/ransactrivial}
\caption{RANSAC run on the artificial dataset. In this particular instance, the bottom and top planes were found quite well}
\label{fig:ransactrivial}
\end{figure}

Unfortunately, RANSAC proved to be insufficiently powerful for the much noisier real dataset. As can be seen
in figure \ref{fig:ransacreal}, planes were found that do not correspond at all with the walls of the room. This
was to be expected, as the point cloud of the non-empty room contains many datapoints corresponding to furniture and other
objects present in the environment.

\begin{figure}[h!t]
\centering
\includegraphics[width=0.7\textwidth]{img/ransacreal}
\caption{RANSAC run on the real dataset. As can be seen, the dataset is far too noisy to be successfully processed}
\label{fig:ransacreal}
\end{figure}

% subsection RANSAC (end)

\subsection{Hough transform} % (fold)
\label{sub:ExpHough transform}
We followed the Hough transform algorithm as listed in section \ref{sub:Hough}, specifically the variant of section \ref{ssub:nnht},
which uses the nearest neighbours of points to increase the speed of the algorithm.

The results were quite promising. On the artificial dataset, 4 of the 5 expected planes were easily found, with the left wall sometimes being overlooked. As can be seen in the dataset (section \ref{sub:Data}), this is also the wall that is least represented in the point cloud.

\newpage

\begin{figure}[h!t]
\centering
\includegraphics[width=0.7\textwidth]{img/houghtrivial}
\caption{Hough run on the artificial dataset.}
\label{fig:houghtrivial}
\end{figure}

More importantly, the results on the real dataset are much better than with RANSAC. Although the side walls are still not found, the top and bottom planes are detected quite well.

\begin{figure}[h!t]
\centering
\includegraphics[width=0.7\textwidth]{img/houghreal}
\caption{Hough run on the real dataset. The ceiling and floor of the room are generated quite well.}
\label{fig:houghreal}
\end{figure}

% subsection Hough transform (end)

\subsection{Hough transform, 3D to 2D} % (fold)
\label{sub:Hough transform, 3D to 2D}

As described in section \ref{sub:3D to 2D}, we implemented a Hough transform which first finds the bottom or top plane and then rotates the dataset so that this plane is level with the horizontal plane. The whole point cloud is then projected onto the horizontal plane. The resulting flattened datasets are shown below in figures \ref{fig:flattriv} and \ref{fig:flatnedsense}.

\begin{figure}[h!t]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/flattriv}
\caption{Flattened artificial dataset}
\label{fig:flattriv}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/flatnedsense}
\caption{Flattened real dataset}
\label{fig:flatnedsense}
\end{minipage}
\end{figure}

Using these flattened datasets, we could run the Hough transform again, this time looking for lines corresponding to the walls of the room.
The results of this second Hough transform can be seen in figures \ref{fig:hough2dtriv} and \ref{fig:hough2dnedsense}. The green lines in both images correspond to actual walls in the dataset. The yellow line in the real set fits the data quite accurately but does not correspond to an actual wall. The other (blue) lines are caused by datapoints that do not correspond to any wall and would not appear in an ideal result.
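The line-finding step itself is the textbook 2D Hough transform; its core can be sketched as follows (a minimal illustration using the $(\theta, r)$ parametrization $r = x\cos\theta + y\sin\theta$, without the nearest-neighbour speed-up):

```python
import math
from collections import Counter

def hough_lines_2d(points, n_theta=180, r_step=0.05):
    """Each 2D point votes for every discretized line (theta, r) passing
    through it; peaks in the accumulator are candidate walls."""
    acc = Counter()
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            r = x * math.cos(theta) + y * math.sin(theta)
            acc[(i, round(r / r_step))] += 1
    # Most-voted (theta index, r bin) pairs first
    return acc.most_common()
```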

Interestingly, the right wall is found quite well in the artificial set, something that did not happen with the original Hough transform (see section \ref{sub:ExpHough transform}). However, the left wall is not among the top lines found in the 2D set.

The biggest improvement can be seen in the real set: whereas the original Hough transform found no walls at all (only the ceiling and floor), this 2D version finds the left wall perfectly.

\begin{figure}[h!t]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/hough2Dtrivialproc}
\caption{The 2D Hough transform run on the flattened artificial dataset}
\label{fig:hough2dtriv}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{img/hough2DNedSenseproc}
\caption{The 2D Hough transform run on the flattened real dataset}
\label{fig:hough2dnedsense}
\end{minipage}
\end{figure}

% subsection Hough transform, 3D to 2D (end)

\subsection{On computational complexity} % (fold)
\label{sub:compeff}
We have seen that the representational power of geometric algebra overshadows that of linear algebra when it comes to geometric problems. However, it must be noted that geometric algebra is not a set of algorithms but a \emph{formalism}. This means that in and of itself geometric algebra will not offer an increase in computational speed over methods incorporating linear algebra.

At the time of writing, computer hardware is optimized for computing linear algebra expressions\footnote{A graphics card is in essence nothing more than a very fast matrix multiplier}. No such hardware optimizations exist for geometric algebra, although research on them has been done (Mishra et al. \cite{mishra2006color,mishra2005hardware}).

% subsection compeff (end)

% section Results (end)

\newpage

\section{Conclusions} % (fold)
\label{sec:Conclusions}

The classic algorithms discussed in section \ref{sec:Context} work well as they stand. However, using geometric algebra
significantly increases the compactness of expression. This could already be seen in our implementations of RANSAC and the Hough transform, and
was most apparent when the full power of geometric algebra was used to convert the originally three-dimensional problem into a two-dimensional one.

Overall, the methods used were not powerful enough to offer a complete, foolproof, start-to-finish 3D reconstruction implementation. However, the results were promising (especially those of the Hough transform) and could be the starting point for more intricate algorithms, all the while using the compactness of expression of geometric algebra to keep the code clean.

It must be kept in mind that geometric algebra is only a formalism and thus methods incorporating it are not inherently quicker than those based on classical methods.

% section Conclusions (end)

\section{Discussion} % (fold)
\label{sec:Discussion}
The results were not unexpected. As a formalism tailored specifically to geometric problems, geometric algebra fit the problem at hand like a glove.
Our expectation of much more compact code was met, as seen in the previous sections. However, the full power of geometric algebra has not been revealed yet: although a significant improvement over classical methods in compactness of code was shown for the 3D-to-2D Hough transform, many of the more intricate features of geometric algebra, and the advantages they bring to geometric problems, were not touched upon because of the nature of this case. A case
study involving a higher-level geometric problem could be the basis of a more spectacular display of representational power.

Furthermore, the universality of geometric algebra means we could easily extend the algorithms discussed to rooms that are not strictly planar but have spherical components, without a great increase in
representational complexity. More research in this direction could demonstrate an even more pronounced difference in representational power between classical methods and geometric algebra on geometric problems.

The datasets used were very noisy and thus did not render the results we would have liked. Acquiring datasets with less noise, or researching methods of cleaning up the data, could resolve this issue and is a good starting point for future efforts.

When the planes generated are sufficiently accurate, the corners of the room still have to be extrapolated. Simply intersecting all the planes is not enough, as figure \ref{fig:intersect} demonstrates.

By computing which area of a plane is actually supported by the dataset, it could be possible to differentiate between actual corners and mere plane intersections. In figure \ref{fig:intersect}, for example,
plane P finds no support along the line between point A and point B, so the intersection at A can be reasoned to be just an intersection and not an actual corner. As all steps in this
procedure\footnote{Finding support by calculating distances between points and planes, and finding
the intersection lines between planes.} are easily represented in geometric algebra, implementing it
is a suggestion for future research.

Implementation-wise, the speed of the algorithms could be significantly increased by porting the implementation to a lower-level language like C. The translation steps currently necessary to switch between C and Python are an enormous speed bottleneck. Although this would not render novel results, it would make datasets available for processing that are currently too large to handle.

\newpage

\begin{figure}[h!t]
\centering
\includegraphics[width=0.7\textwidth]{img/intersect}
\caption{When simply calculating plane intersections, all the room corners are returned, but so are some intersections which are not actual corners}
\label{fig:intersect}
\end{figure}

% section Discussion (end)

\newpage

\listoffigures

\newpage

\nocite{*}

\bibliographystyle{unsrt}
\bibliography{biblio}

\end{document}
