\documentclass{article}

\usepackage[margin=1in]{geometry}
\usepackage{amsmath,amssymb}
\usepackage{graphicx}
\usepackage{float}
\usepackage{subfig}
\setcounter{tocdepth}{1}

\begin{document}

\title{Roomba Control: A role-assignment problem}
\author{Douwe Oosterhout, 0612677\\
Joris de Groot, 0518808\\
Dani\"el Karavolos, 6313086}
\maketitle
\tableofcontents
\newpage

\section{Introduction}
Roombas are small autonomous vacuum cleaners. When several Roombas are put in the same room, they should cooperatively clean the room for optimal effect. The simplest way to do this would be for each Roomba to always drive to the nearest crumb. However, this is probably not very efficient. The Roombas might clean the room more efficiently if they were trained on the task or were equipped with heuristics or a planner.

The problem of efficiently cleaning the room can be viewed in several ways. One way would be to view it as a route planning problem. The agents then have to find the shortest path that removes all the crumbs. 

However, we view this problem as a role-assignment problem, like the one described in \cite{ji2006role}. There, the problem the authors try to solve arises when ``a team of mobile robots must decide what role to take on in a given planar formation, where the parameters are the rotation and translation of the formation''. They thus want to learn both the roles and the rotation/translation of the robots and let them coordinate in a world. We, however, strip away the rotation/translation part and look only at the role assignment. If we partition the room, we can assign an area to each agent and have that agent clear its area using greedy selection, thus solving the problem of ``Who goes where?''.

This report describes several methods for partitioning the room and finding the optimal assignment.
Mainly, we will focus on genetic algorithms, as described in, for example, \cite{beasley1993overview}. Genetic algorithms are often applied to global optimization problems. The powerful combination of mutation and crossover attempts to avoid local maxima in the fitness landscape.
Genetic algorithms have been used before to solve combinatorial optimization problems, such as in \cite{potvin1996genetic} where genetic algorithms are used to solve the traveling salesman problem.

The remainder of this report is organized as follows. First we discuss the OpenNERO framework and describe some of the additions we made to it. Section \ref{sec:Approach} then describes the approach we took to search for optima in our problem space. Our planning implementations are discussed in Section \ref{sec:Implementation}, and our experiments are discussed in Section \ref{sec:Exp2}, after which we discuss the results.

\section{OpenNERO}

\subsection{Framework}
%\textit{describe OpenNERO, what we improved (collision detection) and how it held us back.}
To model the problem we use the OpenNERO framework. This is a C++ program that uses Python scripts to implement the AI. It offers multiple scenarios to create AI agents for, such as a maze and the Roomba control scenario that we use. Due to compilation errors we had to use a precompiled version of OpenNERO and could therefore not modify the C++ code.

The Roomba framework creates a 3-dimensional room containing a smaller room and some tables and chairs. However, these objects are purely aesthetic and the Roombas can move through them.
The room can be filled with pellets (or crumbs), which represent something a Roomba can and should clean up. The number of crumbs and the number of Roombas in the room can be defined by the user. The way pellets are distributed in the room can also be chosen from the built-in options or implemented by the user.

The framework also incorporates very crude collision detection between Roombas. When two Roombas crash into each other, they simply stop and stand still for the rest of the episode. For reasons still unknown to us, they sometimes even stop moving between episodes, forcing us to restart the program altogether.

Furthermore, the program is very slow and although running the program without a graphical user interface does improve the speed, the speed gain is pretty much negligible. 

In the next few sections we will elaborate on some additions or changes we made to the OpenNERO functions.

\subsection{Filling the room with pellets}
OpenNERO has different ways of filling the room with pellets. One was an overly complicated uniform random fill. The other randomly created 10 cluster centers inside the room and assigned each a spread. Each pellet was then randomly assigned to a center, and its location was drawn from a Gaussian distribution with that center as mean and the spread as variance. Although the centers lay inside the walls of the room, a pellet could still fall outside the walls and had to be removed afterwards. As a result, every run had a different number of pellets inside the room.
\subsubsection{Cluster fill}
To remedy this problem, whenever a pellet would be placed outside the walls we resample from the distribution until it lands inside the room. We additionally require pellets to be placed at least one Roomba radius away from the wall, ensuring that no pellet spawns in an unreachable corner. This way all pellets are reachable and the room always contains the requested number of pellets.
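A minimal sketch of this rejection-sampling fill. The room dimensions, spread, and margin here are hypothetical placeholders, not OpenNERO's actual constants:

```python
import random

def cluster_fill(n_pellets, n_centers=10, spread=30.0,
                 room_w=200.0, room_h=200.0, margin=5.0):
    """Place pellets around random cluster centers, resampling each
    pellet until it lands inside the walls, at least `margin`
    (one Roomba radius) away from them."""
    centers = [(random.uniform(margin, room_w - margin),
                random.uniform(margin, room_h - margin))
               for _ in range(n_centers)]
    sigma = spread ** 0.5  # the framework treats the spread as a variance
    pellets = []
    for _ in range(n_pellets):
        cx, cy = random.choice(centers)
        while True:  # rejection sampling: retry until inside the walls
            x = random.gauss(cx, sigma)
            y = random.gauss(cy, sigma)
            if margin <= x <= room_w - margin and margin <= y <= room_h - margin:
                pellets.append((x, y))
                break
    return pellets
```

Unlike the original fill, this always returns exactly `n_pellets` reachable pellets.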
\subsubsection{Uniform fill}
We implemented a shorter uniform fill for the room. Here, too, we ensured that pellets are placed at least one Roomba radius away from the wall to keep them reachable.
\subsection{Collision prediction}
As stated earlier, the built-in collision detection stops the agents completely, making experiments and their results unusable. As the collision detection is implemented in the C++ code, which was unreachable for us, we created our own simple collision predictor: once two agents come near each other, they both move back a bit and turn 90 degrees to the right. As this happens every time step, the result looks rather jittery, but it gets the job done.
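The predictor can be sketched as below. The position/heading representation and the `safe_dist` threshold are illustrative assumptions, not OpenNERO's actual API:

```python
import math

def avoid_collision(pos, heading, other_pos, safe_dist=10.0, backup=2.0):
    """If another Roomba is within safe_dist, back up a bit and turn
    90 degrees to the right; otherwise keep the current course."""
    dx, dy = other_pos[0] - pos[0], other_pos[1] - pos[1]
    if math.hypot(dx, dy) >= safe_dist:
        return pos, heading                       # no danger, keep going
    new_pos = (pos[0] - backup * math.cos(heading),
               pos[1] - backup * math.sin(heading))  # step backwards
    new_heading = (heading - math.pi / 2) % (2 * math.pi)  # turn right
    return new_pos, new_heading
```

Applying this every time step produces the jittery but collision-free behaviour described above.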

\section{Approach}
\label{sec:Approach}
At the start of this project our goal was to see how many pellets we could clean up in the default number of time steps, which is 500. With this in mind we set out to implement several algorithms to solve this problem.
Later we decided to look at clearing the room entirely instead. This should show different algorithms performing better than under our previous goal, because it is now inefficient to leave far-off pellets behind. Planning how to efficiently clean up 500 crumbs by brute force is not feasible, so to make the problem tractable we decided to cluster the pellets and create an assignment from each agent to a cluster of pellets.

We cluster the room in two different ways. The first is simply dividing the room into a grid of equally sized tiles. The second is K-Means clustering. When the world is filled with clustered pellets we expect the K-Means method to create more meaningful clusters, but we also expect that K-Means will not make any difference when the world is filled uniformly.
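As an illustration, a plain Lloyd's-algorithm K-Means over pellet positions might look like this. This is a pure-Python sketch, not the clustering code we actually ran:

```python
import math
import random

def kmeans(points, k=10, iters=20):
    """Lloyd's algorithm: assign each pellet to its nearest centre,
    then move each centre to the mean of its assigned pellets."""
    centres = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centres[c]))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:  # leave empty clusters' centres where they are
                centres[i] = (sum(x for x, _ in g) / len(g),
                              sum(y for _, y in g) / len(g))
    return centres, groups
```

The grid alternative simply replaces `centres` with fixed tile centres and the nearest-centre assignment with a tile lookup.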

To keep this a pure planning problem we decided against creating a partially observable environment. This may not be realistic, but the problem is hard enough as it is.

%Cluster the crumbs. Assign cluster to agent. agent collects crumbs greedily within cluster.
%pros: smaller state representation, discrete timesteps
%cons: not realistic

\section{Implementation}
\label{sec:Implementation}
We implemented several algorithms for assigning areas to Roombas, some more successful than others. In this section we give an overview of all the implementations we considered and describe how we implemented them. In addition, we walk through one cycle of our algorithm and describe all the steps necessary to assign the agents to clusters.

\subsection{Brute-force assignment}
The first algorithm we implemented was a brute-force assignment of the clusters. We went through every combination of agents and clusters, calculated the distance from each agent to its assigned cluster, and summed these distances. We then chose the assignment with the shortest summed distance.
This was still feasible with 5 clusters, but increasing the number of clusters explodes the number of combinations to consider: with $k$ agents and $n$ clusters there are on the order of $\frac{n!}{(n-k)!}$ distinct assignments. Although this method should find good solutions, the large computation times mean we were unable to use it in our experiments.
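The brute-force search can be sketched with `itertools.permutations`; the function and parameter names are ours, not OpenNERO's:

```python
import itertools
import math

def brute_force_assignment(agents, clusters):
    """Try every way of giving each agent a distinct cluster and keep
    the assignment with the smallest summed agent-to-cluster distance.
    Returns (assignment, cost); assignment[i] is agent i's cluster."""
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(clusters)), len(agents)):
        cost = sum(math.dist(agents[i], clusters[c])
                   for i, c in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost
```

The number of permutations grows factorially with the number of clusters, which is exactly the scaling problem described above.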

\subsection{Heuristics}
The second algorithm we implemented for role assignment is a heuristic approach. Here we try to incorporate our domain knowledge into the assignment and exploit strategies we expect to work well. An added advantage of a simple and perhaps somewhat naive heuristic assignment method is that it provides a baseline that all more complex assignment methods should be able to outperform.

In the heuristic method we take into account both the distance from an agent to a cluster and the number of pellets in the cluster to determine the value of each cluster. The formula we use is the distance from the agent to the cluster minus the number of pellets in the cluster. During the rest of the implementation we came up with different and possibly better heuristics, but decided to implement these in the genetic algorithm, which is described in Section~\ref{subsec:GA}.
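A sketch of this heuristic with hypothetical data structures (a cluster is a `(centre, n_pellets)` pair); lower values are better, since distance is a cost and pellets are a reward:

```python
import math

def heuristic_value(agent, centre, n_pellets):
    """Distance to the cluster minus its pellet count; lower is better."""
    return math.dist(agent, centre) - n_pellets

def pick_cluster(agent, clusters):
    """clusters: list of (centre, n_pellets); return the index of the
    cluster with the lowest heuristic value for this agent."""
    return min(range(len(clusters)),
               key=lambda i: heuristic_value(agent, *clusters[i]))
```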

\subsection{Q-Learning}
We implemented Q-learning to try to learn the best assignments. For our state space we discretized the distance from the agents to the cluster centers and the number of pellets in each cluster. We created ten bins for each, ranging from zero to the maximum distance an agent can travel (the diagonal of the room) and from zero to the maximum number of pellets in a cluster, respectively. Each cluster also had a boolean flag telling the agent whether the cluster was already assigned to another agent. We implemented this system in a centralized way: all agents shared the same Q-table so it would converge faster, and one agent serialized the table after each episode so that we would not have to restart learning after restarting the program.
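The discretization can be sketched as follows; the bin count comes from the text above, while the function names are our own illustration:

```python
def bin_index(value, max_value, n_bins=10):
    """Map a continuous value in [0, max_value] to one of n_bins bins."""
    i = int(n_bins * value / max_value)
    return min(i, n_bins - 1)  # value == max_value falls in the last bin

def cluster_state(distance, max_distance, n_pellets, max_pellets, assigned):
    """State features for one cluster: binned distance, binned pellet
    count, and whether another agent already claimed this cluster."""
    return (bin_index(distance, max_distance),
            bin_index(n_pellets, max_pellets),
            assigned)
```

These tuples would then index the shared Q-table.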

We never found out whether this state representation worked, because of limitations we discovered in OpenNERO. Every agent has its own step counter. Because of the time it took to interact with the Q-table or save it to disk, the agents drifted out of sync: after a couple of episodes, some agents would think they were in episode 5 while others thought they were in episode 6. This created uncertainty about which perceptions were used to update the Q-table and forced us to update the Q-table at the end of each agent's episode, which increased the runtime of an episode so much that performing the thousands of episodes needed for parameter tweaking and learning the correct state-action values was impossible within the time constraints of our project.

\subsection{Genetic Algorithms}
\label{subsec:GA}
The final assignment method we implemented uses genetic algorithms, which use an evolutionary process to arrive at a good solution to a problem. This method is our final choice for the assignment of clusters, and we ran the majority of our experiments with this algorithm. We used an implementation of a genetic algorithm from the python module 
\textit{Pybrain}\footnote{http://pybrain.org/}, which allows us to simply supply an initial population and a fitness function, after which the genetic algorithm evolves the best solution. With this implementation we also introduce the distinction between an ``on the fly'' assignment and a ``full plan'' assignment. In the case of ``on the fly'' assignment, we give each agent just one cluster to clean and leave all others for later. When an agent has cleaned all the pellets in its cluster, we reassign it to the cluster with the best value according to the current fitness function.
Here we have the restriction that the cluster cannot already be assigned to another agent and cannot be empty. If no clusters match these criteria, the agent greedily cleans the pellets closest to it. In the case of a ``full plan'' assignment we assign all agents to multiple clusters such that the total value of the assignment is maximized. After an agent has cleaned its initial cluster, it continues with the next cluster in its list of assignments instead of being reassigned on the fly.

To plug our problem into this genetic algorithm, we need a state representation that the algorithm can evolve and a fitness function with which it can evaluate each solution. The representation we chose is very simple: it contains tuples of agent numbers and cluster numbers in the format $[agent\_number, cluster\_number]$, representing the assignment of each agent to a cluster. The second thing we need is a fitness function that lets the genetic algorithm evaluate the evolved assignments and determine which is best. We made three different fitness functions for evaluating the assignments, each focusing on a different aspect of the problem.

Since we chose a fully observable state, we can use all the information in the world to evaluate each set of assignments: the cluster locations, the number of pellets in each cluster, the locations of all the pellets in a cluster, and the locations of all agents. We use this information to calculate the value of each assignment.

The first fitness function takes into account the distance of each agent to its cluster. Its goal is to maximize the sum over all agents of the maximal distance minus the actual distance to the assigned cluster (Equation~\ref{equation:fitnessfunction1}), which amounts to minimizing the total travel distance. We expect that this function yields the minimal travel distance from agents to clusters and will thus clean the room as fast as possible.

\begin{equation}
\label{equation:fitnessfunction1}
f(state) = \sum_{agent}^{all\_agents} \left( max\_distance - euclidean\_dist(agent, cluster) \right)
\end{equation}

In this equation $max\_distance$ stands for the maximal distance an agent can travel in the world, which is the diagonal of the room. We subtract the distance from an agent to a cluster from this maximal distance because we want to give a positive value to smaller distances while maximizing the fitness function. This equation shows the case where we do ``on the fly'' assignments. In the case of ``full plan'' assignment we keep applying this formula until all the clusters have been assigned.
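A sketch of this fitness function, assuming the assignment is given as a list of $(agent, cluster)$ index pairs as in the representation above:

```python
import math

def fitness_distance(assignment, agents, centres, max_distance):
    """Equation 1: reward assignments whose agents sit close to their
    clusters, so maximizing fitness minimizes travel distance."""
    return sum(max_distance - math.dist(agents[a], centres[c])
               for a, c in assignment)
```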

The second value function uses just the number of pellets in each cluster and tries to maximize the number of pellets cleaned by the agents (Equation~\ref{equation:fitnessfunction2}). Here we have two different functions for the ``on the fly'' and ``full plan'' assignment methods, since in the ``full plan'' case the total sum of pellets is always the same, namely the total number of pellets in the room. In that case we let each agent clean the same number of pellets on average, so that each agent does the same amount of work, which should result in fast cleaning times.

\begin{equation}
\label{equation:fitnessfunction2}
f(state) = \sum_{agent}^{all\_agents} \# crumbs\_in\_cluster
\end{equation}

The final value function takes the best of both worlds from fitness functions 1 and 2 in that it takes into account both the distance to a cluster and the number of pellets in the cluster (Equation~\ref{equation:fitnessfunction3}). In this value function we divide the pellet ratio by the normalized distance to the cluster.

\begin{equation}
\label{equation:fitnessfunction3}
f(state) = \sum_{agent}^{all\_agents} \frac{\# crumbs \,/\, max\_crumbs}{euclidean\_dist(agent, cluster) \,/\, max\_distance}
\end{equation}

This function thus gives higher values to clusters that are close to the agent and contain many pellets. Again, in the ``full plan'' case we keep adding the values of this function for the additional clusters assigned to an agent.
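Interpreting the ratio in Equation~\ref{equation:fitnessfunction3} as a quotient, the combined fitness can be sketched as follows; the epsilon guard against an agent standing exactly on a cluster centre is our addition:

```python
import math

def fitness_combined(assignment, agents, clusters, max_crumbs, max_distance):
    """Equation 3: pellet ratio divided by normalized distance, so
    near, well-filled clusters score highest. clusters[c] is a
    (centre, n_crumbs) pair."""
    total = 0.0
    for a, c in assignment:
        centre, n_crumbs = clusters[c]
        dist = max(math.dist(agents[a], centre), 1e-9)  # avoid div by zero
        total += (n_crumbs / max_crumbs) / (dist / max_distance)
    return total
```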

\subsection{A cycle of the program}
We run our implementation in episodes in which the goal is to clear the room as fast as possible, with an upper bound of 10000 time steps. We generate a room with 500 pellets distributed using one of our two distribution methods. We then cluster the room into 10 clusters using either K-Means or a grid, which gives us the locations of the cluster centers. Finally, we assign every pellet in the world to one of the clusters, resulting in a list of pellet locations per cluster.

In the second step we use our genetic algorithm to evolve the best assignments of agents to clusters. We initialize the algorithm with a random set of assignments and let it evolve using the settings from Table~\ref{tab:settings}. The genetic algorithm uses mutation and crossover to evolve each assignment. The maximum-evaluations parameter is the maximum number of evaluations the algorithm may perform in total; with an initial population of 10, it can thus evolve $\frac{25000}{10} = 2500$ generations. We use two different values here because the ``on the fly'' assignment needs far fewer evaluations to reach a good solution than the ``full plan'' assignment, since the latter has many more possible assignments.
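Since we cannot reproduce Pybrain's exact API here, the following is a hedged, self-contained sketch of the kind of evolutionary loop described above, with one-point crossover and per-gene mutation; the selection scheme and all names are our own illustration:

```python
import random

def evolve(fitness, n_agents, n_clusters, pop_size=10,
           mutation_p=0.1, max_evals=25000):
    """Evolve assignment genomes: genome[i] is the cluster given to
    agent i. Each generation keeps the better half of the population
    and refills it with mutated one-point-crossover children."""
    pop = [[random.randrange(n_clusters) for _ in range(n_agents)]
           for _ in range(pop_size)]
    evals = 0
    while evals + pop_size <= max_evals:
        pop.sort(key=fitness, reverse=True)   # one evaluation per genome
        evals += pop_size
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_agents) if n_agents > 1 else 0
            child = a[:cut] + b[cut:]         # one-point crossover
            child = [random.randrange(n_clusters)
                     if random.random() < mutation_p else g
                     for g in child]          # per-gene mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

The evaluation budget divided by the population size gives the number of generations, matching the $\frac{25000}{10} = 2500$ calculation above.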

\begin{table}[!bht]
\centering
\begin{tabular}{|l|l|}
  \hline
  \multicolumn{2}{|c|}{Genetic Algorithm settings} \\
  \hline
  Initial population size & 10 \\
  Mutation probability & 0.1 \\
  Maximum evaluations (1) & 25000 \\
  Maximum evaluations (2) & 250000 \\
  \hline
\end{tabular}
\caption{Settings for the GA}
\label{tab:settings}
\end{table}

After the assignments have been evolved, all agents carry out their assignments until the room is cleaned. In the case of ``on the fly'' assignment we reassign an agent to a new cluster when it has cleared its initial cluster. In the case of a ``full plan'' each agent clears all the clusters that were assigned to it. We record the number of time steps the agents needed for the episode, then reset the world and start a new one. For the experiments we run each setting 30 times and take the average and standard deviation of the number of time steps needed to clear the room. A comparison of the results can be found in Section~\ref{sec:Exp2}.


\section{Experimental Results I}
\label{sec:Exp2}
\subsection{Baseline}
We compare the results of our genetic algorithm with the most naive solution for the room-cleaning problem: a nearest neighbor policy for crumb selection. The results of this baseline can be found in Figure \ref{plot_greedy}. The average number of time steps it takes to clear 500 crumbs is 2376, 1044, and 645 for 1, 3, and 5 agents respectively.

	\begin{figure}[H]
		\centering
			\includegraphics[width = 0.6\textwidth]{greedy_agents.jpg}
		\caption{Results for nearest neighbor crumb selection}
		\label{plot_greedy}
	\end{figure}

\subsection{Effects of crumb distribution}	
By default the OpenNERO framework spawns the crumbs in clusters. This inspired us to use a world representation of clustered crumbs. We implemented two ways to cluster the crumbs: a grid and K-Means. In these experiments we use $K = 10$. To compare these representations we tested all our configurations with two spawning methods: clustered and uniformly distributed. The results of these crumb distributions on a three-agent system are shown in Figures \ref{plot_clustered} and \ref{plot_uniform} respectively.
	\begin{figure}[H]
		\centering
		\subfloat[Results for three agents in a clustered crumb environment]{\label{plot_clustered}\includegraphics[width = 0.4\textwidth]{three_agents_clustered.jpg}}
		\subfloat[Results for three agents in a uniform distributed crumb environment]{\label{plot_uniform}\includegraphics[width = 0.45\textwidth]{three_agents_uniform.jpg}}	
	\caption{Results of experiments with crumb distribution}
	\end{figure}
In Figure \ref{plot_clustered} we see that for clustered crumbs there is little difference between using a grid or K-Means, though K-Means seems to perform slightly better. This is supported by the averages: without lookahead, the GA using a grid representation needs 1240 time steps on average and the GA with K-Means 1195, whereas with lookahead the averages are 1072 and 1042 for the grid representation and K-Means respectively. However, the respective standard deviations are 234, 237, 179 and 155, so the difference does not seem significant. Therefore, no further tests were done.\\ %This was not expected

When we look at Figure \ref{plot_uniform}, the first thing we see is that, again, there is little difference between the two representations. With this distribution the difference seems even smaller, which conforms with our expectations. With uniformly distributed crumbs the GA without lookahead using a grid representation has an average of 1665 time steps and the GA with lookahead an average of 1470, while the two techniques using K-Means have averages of 1628 and 1480 time steps respectively. The respective standard deviations are 403, 161, 262 and 196, so again the difference between these representations does not seem significant. No further tests were done.

%Aside from the insignificant difference between representations, we do see that all GA settings perform worse with uniform distribution than with clustered distribution of crumbs. We attribute this to the fact that the crumbs within areas that are cleaned with a nearest neighbour policy, are simply further apart from eachother when the crumbs are uniformly distributed.

\subsection{Effects of fitness functions}
To see the effects of the above mentioned heuristics we ran the simulation for each heuristic, with five agents using a GA with and without lookahead policy. The time it took the agents to clear the room with each setting is shown in Figure \ref{plot_five}.
	\begin{figure}[H]
		\centering
			\includegraphics[width = 0.9\textwidth]{five_agent_clustered.jpg}
		\caption{Results for five agents in a clustered crumb environment}
		\label{plot_five}
	\end{figure}
	
%For five agents a complete plan is better than on-the-fly assignment
We see the same effect of the fitness functions for both the lookahead and the no-lookahead policy: the heuristic based on the number of crumbs gives a worse clearing time than the function based on distance and the combined heuristic. This conforms with our expectations. It seems intuitive that the distance to a cluster is important, since driving around efficiently matters more for fast cleaning than going to the biggest clusters first. The almost equal performance of the distance heuristic and the combined heuristic suggests that taking the size of the clusters into account hardly matters at all. Because of the large standard deviations, we did not test whether the differences we found are significant.

\subsection{Effects of role-assignment methods}
Perhaps the most important result of all is the comparison between the clearing times of our genetic algorithms and the baseline, the nearest neighbor policy. This shows whether the complex learning policies are useful at all. In Figure \ref{plot_base_compare} we plot the clearing times of the GA settings with the fastest results, i.e.\ K-Means and the combined heuristic, for both the lookahead and no-lookahead policy and for different numbers of agents.
	\begin{figure}[H]
		\centering
			\includegraphics[width = 0.6\textwidth]{greedy_GA_compared.jpg}
		\caption{Comparison of the best performing GA settings with the baseline}
		\label{plot_base_compare}
	\end{figure}
		
It seems that planning ahead is not beneficial for a single agent. This was unexpected: since one agent has to clear the room all by itself, one would expect the cost of skipping a crumb in a certain area to be large.

For more than one agent the lookahead policy seems to perform better than the no-lookahead policy, but not better than the baseline; obviously, five agents clear the room faster than three. The average clearing times of the no-lookahead, lookahead and greedy policies for five agents are 726, 652 and 646 respectively, with standard deviations of 111, 127 and 127. Since the mean of each policy lies within the standard deviations of the other policies, the differences in performance do not seem significant.

%\subsection{Effects of number of agents}
%There seems to be a significant speed up when increasing the number of agents. This was to be expected.
%	\begin{figure}[H!]
%		\centering
%			\includegraphics[width = 0.5\textwidth]{nr_of_agents.jpg}
%		\caption{Comparison of the number of agents with best performing GA settings}
%		\label{plot_agent_compare}
%	\end{figure}

\section{Discussion I}
From the above experiments we can conclude that using five agents to clean the room is faster than using three or one. This might seem trivial, since five agents can cover the area a lot faster than one agent. But in the original setting of the framework a collision caused the colliding agents to stop moving for the remaining part of the episode, making it infeasible to have many agents, since they would only be in each other's way. It seems that our collision detection is good enough to circumvent this problem.

We found no difference between world representations. One would expect that with a clustered distribution of crumbs K-Means would outperform grid clustering, because it can find more meaningful clusters, and that with a uniform distribution grid clustering would perform better, because K-Means would try to find groupings that are not there. We cannot explain why we did not find a difference between these representations.
	
We also found that for a single agent it seems unnecessary to plan ahead, whereas planning ahead seems beneficial for three and five agents. However, the most important result is that our genetic algorithm that generates a plan does not perform better than nearest neighbor crumb selection for any number of agents. Moreover, our genetic algorithm that assigns roles on the fly actually seems to clear the room more slowly than nearest neighbor crumb selection.

Our test results were disappointing and fell well below our expectations, so we set out to recheck the settings of our genetic algorithm. We found that the algorithm often did not find the optimal assignment, even though our earlier checks had suggested the settings were satisfactory. We therefore searched for better settings, varying the population size, the number of evaluations, and the mutation rate. We also found that we might have created a fitness landscape that is too difficult for the algorithm to work in. Initially we returned a reward of $0$ whenever the assignment contained duplicates. This was a real problem in the lookahead case, because every cluster had to be assigned exactly once: mutation would only help if, say, assignment $3$ mutated into $4$ while $4$ simultaneously mutated into $3$. If only assignment $3$ mutated into $4$, the full assignment would receive a fitness of $0$. The results can be found in the next section.

\section{Experimental Results II}

First, we changed the fitness of an assignment containing duplicates from $0$ to half of the fitness it would normally receive. Table \ref{tab:settings_new} shows the new settings we used. Figure \ref{new_results} shows the clearing times of the GA with the no-lookahead policy and the new settings for a team of five agents, next to the clearing times with the old settings.
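The changed duplicate handling can be sketched as a wrapper around any of the raw fitness functions; the names are ours:

```python
def penalised_fitness(assignment, raw_fitness):
    """Old scheme returned 0 for any duplicate cluster in the
    assignment; the new scheme halves the raw fitness instead,
    preserving a gradient for the genetic algorithm to follow."""
    clusters = [c for _, c in assignment]
    score = raw_fitness(assignment)
    if len(set(clusters)) < len(clusters):  # duplicate cluster assigned
        return score / 2
    return score
```

Halving rather than zeroing means an otherwise good assignment with one duplicate still outranks a poor duplicate-free one, so single mutations can move the population towards valid solutions.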

\begin{table}[!bht]
\centering
\begin{tabular}{|l|l|}
  \hline
  \multicolumn{2}{|c|}{New Genetic Algorithm settings} \\
  \hline
  Initial population size & 50 \\
  Mutation probability & 0.2 \\
  Maximum evaluations & 2500000 \\
  \hline
\end{tabular}
\caption{New Settings for GA}
\label{tab:settings_new}
\end{table}

\begin{figure}[H]
		\centering
		\includegraphics[width = 0.5\textwidth]{new_results.jpg}
		\caption{Comparison of the old settings vs. the new settings}
		\label{new_results}
\end{figure}

\section{Discussion II}

In Figure \ref{new_results} we see that the new settings improve the collection speed of our Roombas for the distance and combined fitness functions, while performance remains about the same for the pellet fitness function. Although we only ran these settings in the no-lookahead case, we are confident that, with some further tweaking, they will improve the speed in the lookahead case as well.

\section{Future Work}
As stated in the results sections, we were not completely satisfied with the initial results of our algorithm. Intuitively, we feel that the fitness functions that take into account the distance between agents and clusters (Equations~\ref{equation:fitnessfunction1} and~\ref{equation:fitnessfunction3}) should allow our genetic algorithm to evolve an efficient solution to the assignment problem, since they minimize the distance an agent has to travel in an episode. We therefore propose the following subjects for future work:

\subsection{Biased initialization of the Genetic Algorithm}
We currently initialize the genetic algorithm with a random assignment, which can thus be an extremely bad solution to the problem. Since we want to limit the number of evaluation steps for the sake of speed, it is possible that the algorithm has not converged to the best solution. This could be improved by initializing the algorithm with a bias towards a good solution, so that it needs fewer iterations to converge. An example of such a bias would be assigning each agent to its nearest cluster.

\subsection{Adaptations to the framework}
As stated before, the OpenNERO framework is a work in progress and some of its implementations are not optimal yet. For example, we encountered problems with the internal episode counters of the agents, which were unique to each agent. Because of this, some agents would sometimes already be in a new episode while others were still finishing theirs. This proved impractical for initializing new genetic algorithms and resetting the world. A centralized counter could provide a better overview of where all agents are in the episode and could make the experiments run faster.

A faster framework would also allow for a more learning-based approach. We could, for example, initialize a Q-learning algorithm with the assignments of a genetic algorithm, giving the learning algorithm a good bias. In this way we could learn a solution to the problem faster, and the solution would be more general.
\bibliographystyle{plain}
\bibliography{citations}

\end{document}