%
%  untitled
%
%  Created by Thomas van den Berg on 2011-02-21.
%  Copyright (c) 2011 __MyCompanyName__. All rights reserved.
%
\documentclass[]{article}

% Use utf-8 encoding for foreign characters
\usepackage[utf8]{inputenc}

% Setup for fullpage use
\usepackage{fullpage}

% Use URLs
\usepackage[plainpages=false]{hyperref}


\usepackage{float}

\hypersetup{
    colorlinks,%
    citecolor=black,%
    filecolor=black,%
    linkcolor=black,%
    urlcolor=black
}

% Uncomment some of the following if you use the features
%

% Multipart figures
%\usepackage{subfigure}

% More symbols
%\usepackage{amsmath}
%\usepackage{amssymb}
%\usepackage{latexsym}

% Surround parts of graphics with box
\usepackage{boxedminipage}

% Package for including code in the document
\usepackage{listings}


\usepackage[pdftex]{graphicx}
%\usepackage[hyperfootnotes=false]{hyperref}

\title{Report Master Project\\
\textit{Controlling a Roomba Vacuum Cleaner in the OpenNERO Environment}}
\author{Gilles de Hollander, Niels Out \& Maarten van der Velden}

\date{\today}

\begin{document}

\DeclareGraphicsExtensions{.pdf, .jpg, .tif, .png}

\maketitle
\begin{figure}[!th]
\centerline{\includegraphics[width=0.5\textwidth]{roombas-title}}
\end{figure}

\section{Introduction}
Over the past five weeks we have developed an approach to the multi-agent Roomba
vacuum-cleaner problem and evaluated it. This paper discusses the approach we have taken
and its main design choices, and presents empirical data that shows its
strong and weak points.
\subsection{The problem}
The problem we addressed is inspired by autonomous robotic vacuum cleaners,
specifically those of the brand Roomba\footnote{\url{http://www.irobot.com/}}.
These robots can automatically
navigate a room and vacuum clean all reachable areas in it.\\
Using the \textit{OpenNero} framework\footnote{\url{http://code.google.com/p/opennero/}} and its
Roomba mod\footnote{\url{http://code.google.com/p/opennero/wiki/RoombaMod}},
we simulate a square office room with a predefined number of `crumbs', distributed
according to a varying mixture of Gaussians. One or more Roombas can move around in it
and pick up crumbs as they pass over them. Which crumbs a Roomba can observe
is dynamic and depends on the Roomba's position.
The goal of the Roombas is to \emph{clean up all the crumbs as fast as possible}.
The main challenges are thus to:
\begin{enumerate}
\item Find a balance between exploring the room to find out where the crumbs are and
cleaning them up as fast as possible.
\item Let \emph{multiple} Roombas behave in a coordinated manner, such that they supplement each other.
\end{enumerate}
 
\subsection{Our approach}
For this kind of complex task, a well-known effective approach is to subdivide the task into smaller
subproblems \cite{dayan1993feudal,mataric1997behaviour}. The simpler parts of the task can
then be performed by subsystems on which higher-order systems can build more advanced
strategies.\\
\subsubsection{A Learning Approach}
For the Roomba problem, we have chosen this approach too. Some lower-level control systems deal with the simpler
parts of the task, such as low-level navigation, and are simply hand-coded.
Using these subsystems we then try to find an efficient higher-level policy
using Reinforcement Learning techniques. We believe that such a learning approach has multiple advantages
over planning algorithms:
\begin{enumerate}
\item When using a good representation of states and actions, almost any policy can be
represented. In this way an effective policy can be found that we as designers might not have thought
of ourselves, but that is optimal or at least very efficient.
\item Reinforcement Learning offers a clear formalism to deal with partial observability: the fact that the Roombas
can only see part of the room makes controlling them efficiently hard. A learning approach at least gave us
some notion of how to approach this specific problem as a POMDP.
\item The learning system is able to learn \emph{online} about a large set of possible distributions of crumbs
in the room and take these all into account when applying its learned policy to new distributions.
\end{enumerate}
\subsubsection{Lower-level control systems}
The learning system made use of lower-order control systems that could
perform the following tasks:
\begin{enumerate}
\item Navigate around the room from one position to another
\item Make an evasive maneuver when the Roomba starts colliding with another Roomba
\item Clear a specific area of crumbs
\end{enumerate}
The decision agent then had to learn how to control these
subsystems to come up with an effective policy.

\section{Representation}
Central to an effective RL-solution is a good representation of the problem.
In this section we will discuss how we represented the room of crumbs, other
Roombas and the possible actions the Roombas could take and why we did so.
\subsection{The environment}
We decided to represent the environment as a grid, as is very common
in similar learning problems \cite{sutton1998reinforcement}. In this
representation the state space is composed of tuples containing for every
grid cell:
\begin{enumerate}
\item Whether other agents are present in it
\item Whether the agent itself is present in it
\item How many crumbs are present in it, discretized by being either 0, below average
or above average
\end{enumerate}
This representation seems very natural and intuitive and is relatively
easy to `read'. However, it turned out to have some problems.
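To make this concrete, the state tuple can be sketched as follows (a minimal sketch in Python; all names are our own and not part of OpenNERO):

```python
from collections import namedtuple

# Hypothetical encoding of the grid state described above. Crumb counts are
# discretized relative to the room average, as in our representation.
Cell = namedtuple("Cell", ["other_agent", "self_here", "crumb_level"])

def crumb_level(count, room_average):
    """Discretize a cell's crumb count: 'zero', 'below' or 'above' average."""
    if count == 0:
        return "zero"
    return "below" if count < room_average else "above"

def encode_state(grid_counts, self_pos, other_positions):
    """Build the state tuple for a k-cell grid.

    grid_counts: dict cell -> number of crumbs
    self_pos: cell index of this agent
    other_positions: set of cell indices occupied by other agents
    """
    avg = sum(grid_counts.values()) / len(grid_counts)
    return tuple(
        Cell(cell in other_positions, cell == self_pos,
             crumb_level(grid_counts[cell], avg))
        for cell in sorted(grid_counts)
    )
```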

\begin{figure}[!th]
\centerline{\includegraphics[width=0.5\textwidth]{representation2.png}}
\caption{Grid-representation of entire room.}
\label{fig:entire_representation}
\end{figure}

\subsection{Grid size and the curse of dimensionality}
What is a good size for the grid representation? It seems obvious that the higher the resolution
of the grid, the more precisely a policy can plan an effective assignment of grid cells
to agents. However, as often with tabular Q-learning, scalability
turned out to be quite a problem: the state-action
space grows exponentially with the size of the grid. To be more precise, the number of
possible states is at most:
\begin{equation}
N_{\mathrm{states}} = (3^k - 2^k) \cdot k \cdot {k \choose n-1}
\end{equation}
where $k$ is the number of grid cells and $n$ the number of agents:
the number of possible crumb configurations over the grid cells (each either 0, below average or above average) is $3^k$, minus
the $2^k$ configurations that are inconsistent (e.g.\ containing only zero and below-average cells).
The number of possible positions for the agent itself is $k$ and
the number of possible arrangements of the $n-1$ \emph{other} agents is 
 ${k \choose n-1}$. Some impossible states are probably still counted, but
even so the size of the state space is exponential ($O(3^k)$) in the number of
grid cells.\\
For a 3x3 grid and 3 agents there are already around
6 million possible states, and for a 5x5 grid there are around $10^{16}$ possible states.
With our generalization algorithm only one eighth of these states
actually has to be visited to cover all equivalent states, and, more importantly, probably
most of the states do not have to be visited at all to find an effective policy.
Still, there is clearly a real scalability problem here. We will return to
this issue in the discussion.
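As a sanity check on the numbers above, the bound can be computed directly (a small sketch; the function name is ours):

```python
from math import comb

def n_states(k, n):
    """Upper bound on the number of states for k grid cells and n agents:
    (3^k - 2^k) consistent crumb configurations, k positions for the agent
    itself, and C(k, n-1) arrangements of the other agents."""
    return (3**k - 2**k) * k * comb(k, n - 1)

print(n_states(9, 3))   # 3x3 grid, 3 agents: 6211404, around 6 million
print(n_states(25, 3))  # 5x5 grid, 3 agents: on the order of 10^16
```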

\section{Learning to clean a room}
\subsection{Actions and Options}
The number of decisions a decision agent can take should be as small as possible, as
otherwise learning becomes very expensive. We thus minimized the number of decisions
the agent has to take and designed the system such that
it only has to choose which grid cell `to go to next', going either `north, south, east
or west'. When the agent takes the action of moving to a specific grid cell,
it automatically moves to the crumb that is nearest to that
cell's center and then keeps moving to the nearest crumb in the cell until all crumbs are cleaned up.\\
This approach might be a bit naive, as there is no explicit planning of how to clean up the
crumbs \emph{within} the grid cell, and the agent cannot choose \emph{not} to clean up a grid cell,
but always just moves through it.
Our intuition, however, was that the naive `nearest-neighbor' cleanup strategy is effective enough for a single
grid cell, as the crumbs in a grid cell are small in number and close enough to each
other. Cleaning up all the crumbs in a cell should not
be a problem either: if there are many, the agent probably wants to clean them all anyway, and if there
are few, it can do so on the way to a more interesting grid cell while losing only a little time.
The big advantage of this naive approach is that it drastically reduces the size of the
state-action space.\\
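The within-cell cleanup routine can be sketched as a plain greedy loop (hypothetical code; the real subsystem runs inside the simulator):

```python
import math

def nearest_crumb(pos, crumbs):
    """Return the crumb closest to pos (Euclidean distance)."""
    return min(crumbs, key=lambda c: math.dist(pos, c))

def clean_cell(start_pos, crumbs_in_cell):
    """Greedy nearest-neighbor cleanup of one grid cell, as described above.
    Returns the order in which the crumbs are picked up."""
    pos, remaining, order = start_pos, set(crumbs_in_cell), []
    while remaining:
        target = nearest_crumb(pos, remaining)
        remaining.remove(target)
        order.append(target)
        pos = target  # the Roomba now stands on the crumb it just cleaned
    return order
```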
Our decision agent can take a decision only on a limited set of events:
\begin{enumerate}
\item When it has reached its target cell and that cell is completely cleaned
\item When it has been stuck against another Roomba for multiple timesteps
\end{enumerate}
This means that almost all decisions the agent takes endure \emph{multiple} timesteps. This has some implications
for how to formally describe our policy and how to write the update function. The formalism
of \emph{semi}-MDPs (SMDPs) provides a good framework for dealing with this issue \cite{sutton1999between}.
In an SMDP, a decision agent no longer takes \emph{actions}, but \emph{options}, according
to a semi-Markov policy $\mu: \mathcal{S} \times \mathcal{O} \rightarrow [0, 1]$. When the agent takes an option,
a \emph{flat} policy $\pi: \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]$ becomes effective that takes one-timestep actions according to its
state-action mapping. This policy terminates according to the (in principle stochastic) termination
condition $\beta: \mathcal{S}^+ \rightarrow [0, 1]$, after which a new option can be taken.\\
Applied to our approach: we learn a higher-order semi-Markov policy $\mu$ that determines
when to clean which grid cell and then starts a flat policy $\pi$ that is hand-crafted and can be described as
`clean up one entire grid cell $n$ by moving to the nearest crumb until all crumbs are gone'. This policy is
terminated by the termination condition $\beta(s)$, which covers all the states in which the target grid cell $n$
is completely empty, or in which the agent has not moved for $m$ steps.\\
An important consequence of using the SMDP-framework is that the action-value functions must be generalized
to option-value functions. The main difference is the way the discount factor has to be applied
over a number of timesteps. The Q-function $Q(s, o)$ now becomes:
\begin{equation}
Q(s, o) \leftarrow Q(s, o) + \alpha \left[ r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots + \gamma^{k-1} r_{t+k} + \gamma^k \max_{o' \in \mathcal{O}_{s'}} Q(s', o') - Q(s, o) \right]
\end{equation}
where $s$ is the original state, $o$ the option taken, $r_{t}$ the reward received at time $t$,
$k$ the number of timesteps the option took, $\alpha$ the learning rate and $\gamma$ the discount
factor.\\
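A minimal sketch of this update with a dictionary-based Q-table (our own naming; the reward list, $\alpha$ and $\gamma$ correspond to the symbols above):

```python
def smdp_q_update(Q, s, o, rewards, s_next, next_options, alpha, gamma):
    """One SMDP Q-learning update as in the equation above.

    rewards: [r_{t+1}, ..., r_{t+k}], collected while the option ran.
    Q is a dict mapping (state, option) -> value, defaulting to 0.
    """
    k = len(rewards)
    # Discounted return accumulated during the k steps of the option
    R = sum(gamma**i * r for i, r in enumerate(rewards))
    best_next = max(Q.get((s_next, o2), 0.0) for o2 in next_options)
    old = Q.get((s, o), 0.0)
    Q[(s, o)] = old + alpha * (R + gamma**k * best_next - old)
```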
\subsection{Multiagent Problem}
The problem we address here is clearly a multi-agent problem: multiple agents try to achieve
a shared goal. The agents get reward for the individual crumbs they clean up,
are all equally punished for the time it takes until the room is completely crumb-free,
and all get a large reward when the last crumb is collected.\\
The fact that the agents share a goal should make the multi-agent aspect
relatively easy. The main challenge is to make sure the
agents coordinate their actions. Many different solutions to this problem exist
\cite{busoniu2008comprehensive}, often quite complex and computationally intensive.
Because we assume that communication is cheap and easy (and partly due to time constraints),
we have chosen to use only some \emph{explicit coordination} of roles: the agents can assign themselves
a certain grid cell to clean, but they communicate to each other which grid cell
is their `current target' and can never
assign themselves the same grid cell as another agent. This encourages agents to clean different parts of the room
and keeps them from colliding. It also reduces the set of possible (joint) actions,
thereby making learning easier.\\
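This reservation mechanism can be sketched as a greedy, order-dependent assignment (hypothetical names; in the real system the preference order would follow from the learned option values):

```python
def assign_targets(agents, preferences):
    """Greedy, order-dependent target assignment: each agent takes its
    most-preferred grid cell that no earlier agent has already claimed.

    preferences: dict agent -> list of cells ordered from best to worst.
    """
    claimed, targets = set(), {}
    for agent in agents:  # arbitrary but fixed order, as in the text
        for cell in preferences[agent]:
            if cell not in claimed:
                claimed.add(cell)
                targets[agent] = cell
                break
    return targets
```

Note that because the assignment order is fixed, a later agent can lose its best cell to an earlier one even when the swapped assignment would have been better jointly.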
The order in which agents can assign themselves grid cells is arbitrary. This means that
certain sets of joint actions are ruled out, even if they would have been more effective
than the one that was actually taken.\\
In addition, agents represent each other's positions in their state representation and can
thus take these into account when they choose an option.
They can, however, \emph{not} take into account each other's current
option/target grid cell, as these are not explicitly represented.
It is very well possible that the agents would perform better if they did,
as they could coordinate more,
but this would heavily increase the state-action space, making learning computationally even
more expensive.
\subsection{Partial Observability}
We implemented the environment in such a way that it is only partially observable: the agents can only
observe the grid cells in their direct surroundings (see Figure \ref{fig:neighborcells}).
This makes it harder both to learn and to apply a policy, as the exact state of the world
is unknown. There are many advanced techniques to deal with partial observability,
formalized as a POMDP \cite{oliehoek2010decision}.\\
We chose a rather simple approach.
We again assumed cheap communication, so all the agents maintain a \emph{shared}
belief space. In this belief space they (naively) assume that all unvisited grid cells contain exactly
the average amount of crumbs of the visited grid cells. We take this to be `the most likely
state' and consider our approach a naive most-likely-state approach.
The agents update their shared
belief space with the latest information on the grid cells they observe. As the agents are the
only entities that can change the number of crumbs in a grid cell and always know their position
exactly, the information about
\emph{visited} grid cells in the belief space is always correct.\\
The agents update their Q-tables
and choose their actions as if the most-likely-state in their belief space is the actual state.\\
As in the final representation grid cells contain either zero, a below-average or
an above-average number of crumbs, and an exactly average number of crumbs is interpreted
as \emph{above} average, this way of updating the belief space should encourage exploration,
as an agent with an optimal policy is probably more likely to drive towards grid cells with an above-average
number of crumbs in them.
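The belief update described above amounts to the following (a sketch with our own names):

```python
def most_likely_levels(observed_counts, all_cells):
    """Naive most-likely-state belief: unvisited cells are assumed to hold
    exactly the average crumb count of the visited cells, and an exactly
    average count is discretized as 'above' average.

    observed_counts: dict cell -> last observed crumb count (visited cells)
    """
    avg = (sum(observed_counts.values()) / len(observed_counts)
           if observed_counts else 0)
    levels = {}
    for cell in all_cells:
        count = observed_counts.get(cell, avg)  # unvisited -> assume average
        if count == 0:
            levels[cell] = "zero"
        elif count < avg:
            levels[cell] = "below"
        else:
            levels[cell] = "above"  # ties go to 'above': encourages exploration
    return levels
```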
\begin{figure}[!th]
\centerline{\includegraphics[width=0.35\textwidth]{roomba_neighbors.jpg}}
\caption{Grid cells the agent can observe}
\label{fig:neighborcells}
\end{figure}
\subsection{Generalization}
Because we currently only learn in a square grid world, many states are equivalent and can
be generalized over: states can be mirrored in the x- or y-axis or in the two
diagonal axes, or rotated
by 90, 180 or 270 degrees, and be exactly equivalent. We thus implemented a simple
algorithm that applies these symmetries to every visited state: when updating the Q-table, every equivalent
state is also updated.
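Generating the equivalent states amounts to enumerating the dihedral group of the square (a sketch; note that in the actual update the chosen option has to be transformed along with the state, which is omitted here):

```python
def symmetric_variants(grid):
    """All 8 dihedral variants (rotations by 0/90/180/270 degrees plus their
    mirror images) of a square grid given as a tuple of row-tuples."""
    def rot90(g):
        return tuple(zip(*g[::-1]))
    def mirror(g):
        return tuple(row[::-1] for row in g)
    variants, g = set(), grid
    for _ in range(4):
        variants.add(g)
        variants.add(mirror(g))
        g = rot90(g)
    return variants
```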

\section{Results}
To assess the quality of the different aspects of our approach, we set up a number of experiments that compare these aspects; the results are discussed in this section.
The results are generated by training with an $\epsilon$-greedy policy and regularly evaluating with a greedy policy, without updating the Q-table.

Please note that most of the results were rather noisy. We therefore show running averages with a window of 33 episodes, to make the figures more readable while retaining most of the information. Furthermore, while learning, the agents sometimes get stuck in never-ending episodes, because in the given state staying in their own cell appears to be the best option. These runs are not taken into account, because no meaningful value could be assigned to them. In general this happens in about 1\% of the evaluation trials, without significant differences caused by settings.
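The smoothing used for the figures is a plain moving average (a sketch; the default window of 33 matches the episode window mentioned above):

```python
def running_average(values, window=33):
    """Smooth a noisy per-episode series with a simple moving average,
    as used for the figures in this section."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```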

\subsection{Baseline}
First of all, we tested a baseline implementation where all 3 agents always just pick up the nearest crumb. In this setting the agents clean the room in 1045 time steps on average. After online training, 3 agents using our learning approach manage to clean the same environment in about 800 time steps, as can be seen in Figure \ref{fig:comparison}. This shows that our approach gives a much better policy than the naive baseline policy.

\subsection{Online vs Offline learning}
In Section 2.2 we theoretically showed that the number of possible states grows exponentially in the number of grid cells. During all experiments we kept track of the number of states encountered by the Roombas.
If the agents learn in a new environment each episode (online learning), the state space grows towards the theoretical upper bound. The more agents there are, the more different states can appear. This is shown in Figure \ref{fig:states5x5}. The number of visited states grows nearly linearly with each episode; for four agents the state space contained almost 2 million states after only 1780 episodes.
\begin{figure}[!th]
\centerline{\includegraphics[width=0.8\textwidth]{states5x5.png}}
\caption{Comparison of state space size during online learning on a 5x5 grid.}
\label{fig:states5x5}
\end{figure}

During offline learning on the same environment, fewer states are visited each episode, because it is impossible to encounter all states in this setting. For the simple case of one agent on a 3x3 grid with offline learning, the state space grows quickly up to the first 250 states, see Figure \ref{fig:states3x3-1ag}. After 1300 visited states the number of states increases only occasionally.
\begin{figure}[!th]
\centerline{\includegraphics[width=0.8\textwidth]{states3x3-1ag.png}}
\caption{State space size during offline learning on a 3x3 grid for one agent}
\label{fig:states3x3-1ag}
\end{figure}
Figure \ref{fig:statesGilles} shows the state space size for various experiments. The number of states visited for the experiments with a fixed seed (offline learning) grows faster with more grid cells. The two offline learning runs (on different seeds) show a similar growth in the number of states visited.
\begin{figure}[!th]
\centerline{\includegraphics[width=0.8\textwidth]{statesGilles.png}}
\caption{State space size for various experiments.}
\label{fig:statesGilles}
\end{figure}

\begin{figure}[!th]
\centerline{\includegraphics[width=0.8\textwidth]{compared3x3.png}}
\caption{Comparison of several settings on a 3x3 grid.}
\label{fig:comparison}
\end{figure}

\subsection{The value of teamwork}

When more than one agent is learning to clean the room, the intuition is that it is profitable to explicitly coordinate the actions the agents take: in our approach they are not allowed to move to the same grid cell. Cleaning the same grid cell at the same time is usually a risk, because the Roombas might
collide and get stuck. Therefore the agents broadcast their current target, so other agents will not take the same one. To test whether the agents are capable of learning this kind of coordination themselves, we turned off this explicit coordination in one experiment; the results are shown in Figures \ref{fig:coord} and \ref{fig:nocoord}.
	

\begin{figure}[!th]
\centerline{\includegraphics[width=0.8\textwidth]{3_3x3_fixed_seed_coord.png}}
\caption{Offline learning process of 3 agents on a 3x3 grid cleaning 20 crumbs, with coordination of their moves}
\label{fig:coord}
\end{figure}

\begin{figure}[!th]
\centerline{\includegraphics[width=0.8\textwidth]{3_3x3_fixed_seed_no_coord.png}}
\caption{Offline learning process of 3 agents on a 3x3 grid cleaning 20 crumbs, without coordination of their moves}
\label{fig:nocoord}
\end{figure}

As can be seen from the results, an approximately equally effective policy is found in both situations. The main differences are that learning takes longer in the situation without coordination, and that the results fluctuate more than with coordination. The longer learning process is caused by the fact that less useful information is given to the agents in the non-coordination case. The fluctuation is caused by agents having to avoid each other more often.

\subsection{Grid Size}
We compared the effect of different grid sizes by training on both 3x3 and 5x5 grids. Figure \ref{fig:states5x5}, however, shows that the state space keeps increasing nearly linearly until the end of all learning processes. This means that not all states have been visited yet, let alone that the state-action values have converged, so caution is needed when interpreting the results. The 3x3 runs show more signs of convergence, because the increase of the state space is sublinear towards the end of the learning process, due to the much smaller state space.
These intuitions are confirmed by the results of the learning process: the 3x3 agents show a considerable increase in efficiency (Figure \ref{fig:3x3random}), whereas learning on a 5x5 grid did not show any improvement at all (not shown here).

\begin{figure}[!th]
\centerline{\includegraphics[width=0.8\textwidth]{exp3_3_nonfixed_big.png}}
\caption{Online learning process of 3 agents on a 3x3 grid, with a linear fit.}
\label{fig:3x3random}
\end{figure}

\subsection{Comparison to planning approach}
To make a rough comparison with a planning approach, we compared our agents to an approach in which a genetic algorithm was used to plan the optimal path for multiple agents \cite{evolutionairvoordeel}. For this comparison we used 3 agents with a clustered random distribution of crumbs and full observability.

The best result of Karavolos \emph{et al.} using their genetic algorithm under various settings was to clean the room in 1023 time steps. As we managed to do this in around 800 steps, this is another indication that the learning approach is quite capable of finding a decent policy to clean rooms online.

\section{Discussion \& Future Work}
\subsection{Scalability}
We have designed and implemented a system that does what it was designed to do:
clean up a room of crumbs. It does so better than the simplest greedy approach.
Of course, it still offers room for improvement.\\
First and foremost, our theoretical and empirical
results clearly show that our approach scales badly: it is possible to learn an effective
policy in the 3x3 representation, but with a 5x5 representation the state-action space
becomes so large that actual learning becomes problematic, especially when multiple
agents are involved.\\
It must be said that while a 3x3 grid may seem somewhat coarse, it may well
be specific enough to give a broad, very effective assignment of parts of the room
to agents, after which the low-level behavior of the agents takes
care of the rest.\\
Still, our representation generalizes very badly. The system can know precisely how to handle
a specific situation, but as soon as one of the agents moves just a bit to the east of the room,
the system is suddenly completely clueless.\\
It should learn to treat similar states similarly, even if they are not exactly the same. 
A possible way to achieve this
could be to use neural networks as function approximators, as in the famous
TD-Gammon paper by Tesauro \cite{tesauro1995temporal}.\\
Another, more fundamentally different family of approaches that suffers less from the huge state space
is policy search.
In that case the system
does not evaluate state-action pairs, but entire policies.
A neuroevolutionary approach like the one used in \cite{koppejan2009neuroevolutionary} could be a hopeful candidate here.\\
In the evaluation of RL methods, an important criterion could be how well they generalize to
different environments \cite{whiteson2011protecting}. Our
current representation clearly lacks this ability to generalize
to other, larger or structurally different environments. What if there is a door to
another room? Or a large closet in the room?
Even if that room is very similar to the one the system has trained on, it
is not (yet) trivial to transfer the planning knowledge the system has obtained in the first room
to the second. An interesting remedy to study in future work
might be to create some higher-order representation of `rooms' and `corridors' and
apply what the agent has learned here only on the smaller scale of individual (parts of) rooms.\\
On the sub-grid level, it could also be interesting to divide the large grid cells further
into smaller grid cells. The algorithm we use to let a single Roomba plan its way through a large room
full of crumbs could then perhaps be applied at this sub-grid level.\\
The decision agents currently do not represent each other's actions/options. This means that, even though they will
never choose the same action/grid cell to clean, they cannot fully coordinate their actions
\cite{busoniu2008comprehensive}. One relatively easy solution could be to assume perfect communication,
represent the \emph{joint action} as a single action, and learn the problem as if we were dealing with a single-agent MDP with
an action vector of $n$ dimensions. This does, however, significantly increase the state-action space, making scalability
an even larger issue.\\
As a last issue, the way the system currently deals with partial observability is very naive.
We found, however, that more advanced methods, like those mentioned in \cite{oliehoek2010decision},
require some notion of the probability distribution over possible states.
As the grid was very coarse and the distribution of crumbs relatively complex (a mixture
of 10 Gaussians with random variance), we found it hard to come up with
a meaningful distribution over states representing this.

\bibliographystyle{IEEEtran}
\bibliography{refs}

\end{document}
