

\documentclass{article}

%% PACKAGES %%
\usepackage[margin=1in]{geometry}
\usepackage{amsmath,amssymb}
\usepackage{listings}
\usepackage{float}
\usepackage{cite}
\usepackage{subfig}
\usepackage[pdftex]{graphicx}
\usepackage{algorithmic}
\usepackage{multicol}
\usepackage{tikz}

%% COMMANDS %%

\newcommand*\circled[1]{\tikz[baseline=(char.base)]{
            \node[shape=circle,draw,inner sep=2pt] (char) {#1};}}

\newcommand{\tuple}[1]{\ensuremath{\left \langle #1 \right \rangle }}

\begin{document}

\title{Learnability of the Domination Game \\ \large{Profile Project MSc Artificial Intelligence}}
\author{Thomas van den Berg \& Tim Doolan \\ \small{\texttt{thomas.g.vandenberg@gmail.com, tim.doolan@gmail.com}} \\ \small{Supervised by: Shimon Whiteson \& Hamdi Dibeklioglu}}

\date{\today}

\maketitle

\section{Introduction}
This report describes our efforts towards creating a new simulation environment for multi-agent systems students. We set out to improve the previous version of this framework on a few key points. The game that it implements is an elementary competitive multi-agent game. Because the game is meant for students to learn with, rather than to simulate a real-world setting, it is important that the problem can be tackled effectively with reasonably elementary methods, without becoming unchallenging. Additionally, because the students' projects take place within the timespan of only a single course, it is important that they can get started with the framework quickly and easily.

This report deviates slightly from the structure of a typical scientific research paper, because this was partially a software engineering problem as well. In the first section, we describe the work we did to construct the new framework. In the second, we evaluate quantitatively whether we succeeded in creating a better environment. Finally, in the discussion section, we offer some core insights into how we think this framework should be used for developing MAS methods.

\subsection{The Domination Game}
The Domination Game\footnote{Originally developed by Okke Formsma} is a simulation environment used for studying learning and planning in multi-agent competitive games. The game is set up like a simple strategy or shooter computer game: two teams of agents compete for occupation of so-called control points. To score, they have to reach these control points and prevent the enemy from getting close to them. Ammo is scattered around the field that the agents can pick up and use to shoot the other team's agents; when an agent gets shot, it is reset to its starting position on the field. This framework has been used for the Autonomous Agents and Multi-Agent Systems course at the University of Amsterdam.

The game is divided into timesteps, but the simulation has a finer resolution. All agents are simultaneously asked what action they want to perform, and the result of these actions is computed by simulating each agent's movement in a number of substeps. The game was implemented in \emph{pygame}, a game development framework for the Python programming language.
 
\subsection{The Problem}
There were a number of fundamental issues with the game that we experienced when using it. First, the game was quite slow, mostly because the actual game logic was handled by the pygame package. For the physics this meant that collisions were checked between all possible object pairs, using the bitmap images. Aside from being very slow, this bitmap-based approach was also hard for students to model when trying to anticipate collisions between agents. The result of a collision was that agents stopped moving for that timestep, as if friction were infinite. This made navigation difficult, because each corner had to be maneuvered perfectly in order to avoid getting stuck.

In addition to being slow, the game's outcome was also highly stochastic. With two approximately equal agents, the variance in the scores was quite large, which makes it very hard to determine which of the two is actually better. Evolutionary algorithms especially rely on a good estimate of an agent's skill level. The stochasticity was mostly caused by the random distribution of ammo across the field, which could give a huge advantage to one team. On top of that, the way the game handled multiple agents attempting to capture a control point could give a large advantage to agents who arrive at a control point a fraction of a timestep sooner.

Due to all of these difficulties and the complexity of the state-action space, a lot of hard-coding was required just to get a properly working agent. A lot of benefit could also be gained from hard-coding things like choke points in the maps or selecting the shot that kills the most enemies. As noted in \cite{whiteson2011protecting}, reinforcement learning evaluation in fixed environments can be more indicative of an algorithm's ability to exploit the specific environment than of its ability to learn generic challenges. In summary, we wanted to address the following problems:

\begin{enumerate}
  \item The speed of the game
  \item The consistency of the outcome
  \item The general complexity of the game
  \item The need for and benefit gained from hard-coding 
\end{enumerate}

\section{Improving the Game}

\subsection{Physics Simulation}
By implementing a \emph{physics engine} like those often employed in games, we hoped to make the game faster, fairer, and easier. The goal of such a physics engine is to handle collisions between objects in a somewhat realistic manner. All kinds of physical phenomena are widely simulated in computer games (e.g. inertia, friction, elasticity). We chose only a small subset of these features, namely those that ensure objects keep some of their velocity after a collision. This makes it less likely for agents to suffer great penalties from small mistakes. For example, if an agent maneuvers around a corner and hits the wall, it is not stopped dead in its tracks, but instead `slides' around the obstacle. Additionally, by using efficient methods that are widely used for physics simulation, we can speed up the simulation.
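As an illustration (this is a sketch of the intended behavior, not the framework's actual code), the sliding response amounts to removing the velocity component along the collision normal:

```python
def slide_along_wall(vx, vy, nx, ny):
    """Project the velocity (vx, vy) onto the wall by removing its
    component along the unit collision normal (nx, ny). The agent
    then 'slides' along the obstacle instead of stopping dead."""
    dot = vx * nx + vy * ny
    return vx - dot * nx, vy - dot * ny
```

For example, an agent moving diagonally into a vertical wall keeps its full velocity parallel to that wall.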

\subsubsection{The World}
The game world is assumed to consist only of \emph{rectangular} and \emph{circular} objects. Additionally, all objects are always \emph{axis-aligned}. These assumptions allow for efficient implementations of collision detection and handling. Objects do not have mass, so while they can have a certain velocity, this does not lead to inertia. When the objects' movement leads to collisions, they are \emph{separated}. Objects can also be \emph{non-solid}, meaning they are allowed to move through each other but still register collisions. 

\subsubsection{Sort-and-Sweep}
In a naive implementation, each pair of objects would have to be checked for collision, leading to $O(n^2)$ checks, after which the resulting collisions have to be resolved. However, there are many clever methods to perform this `broad-phase' checking more efficiently. One method that is simple to implement is \emph{sort-and-sweep}~\cite{baraff1992dynamic}. This algorithm sorts the objects along an arbitrary axis. It then iterates over the objects in order and checks whether they overlap on this axis. Because the objects are sorted, once an object is found not to overlap with the next one, it does not have to be checked against any further objects in the sorted list. Additionally, because objects typically do not move much between two timesteps, re-sorting the list is usually fast.


\begin{table}
    \begin{center}
    \begin{minipage}{12cm}
    \begin{algorithmic}
      \STATE $O \gets$ list of objects \COMMENT{All objects}
      \STATE $P \gets $ empty list \COMMENT{Possibly colliding pairs}
      \STATE $sort(O)$ using $min\_x$
      \FOR{$i = 1 \to |O|$}
        \FOR{$j = i+1 \to |O|$}
          \IF{$min\_x(O_{j}) \le max\_x(O_i)$}
            \IF{$min\_y(O_{j}) < max\_y(O_i)$ and $min\_y(O_i) < max\_y(O_{j})$}
              \STATE Add $\tuple{O_i, O_j}$ to $P$
            \ENDIF
          \ELSE 
            \STATE break
          \ENDIF
        \ENDFOR
      \ENDFOR
      \STATE Handle colliding pairs of objects in $P$
    \end{algorithmic}
    \end{minipage}
    \end{center}
    \centering
    \caption{Simple sort-and-sweep. \label{tbl:sort_and_sweep}}
\end{table}
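A minimal Python sketch of the broad-phase pass in Table \ref{tbl:sort_and_sweep} could look as follows; the \texttt{Box} type and its field names are our own convention here, not the framework's API:

```python
from collections import namedtuple

# Axis-aligned bounding box; the field names are our own convention.
Box = namedtuple("Box", "min_x max_x min_y max_y")

def sort_and_sweep(objects):
    """Broad-phase collision detection: sort on min_x, then sweep
    and collect the pairs whose boxes overlap on both axes."""
    objects = sorted(objects, key=lambda o: o.min_x)
    pairs = []
    for i, a in enumerate(objects):
        for b in objects[i + 1:]:
            if b.min_x > a.max_x:
                break  # sorted on min_x: no later object can overlap a
            if b.min_y < a.max_y and a.min_y < b.max_y:
                pairs.append((a, b))
    return pairs
```

The `break` is what makes the sweep cheap: once an object starts past the end of the current box, so does every later object in the sorted list.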


\begin{figure}[H]
    \centering
        \subfloat[Bounding Boxes]{\includegraphics[width=0.22\textwidth]{illustrations/sort-and-sweep-01.png}} 
        \hspace{0.1\textwidth}
        \subfloat[Sort And Sweep]{\includegraphics[width=0.22\textwidth]{illustrations/sort-and-sweep-02.png}}\\
    \vspace{0.2cm}
    \begin{minipage}{0.9\textwidth}
        \caption{Illustrations for Table \ref{tbl:sort_and_sweep}. \label{fig:sort_and_sweep}}
    \end{minipage}
\end{figure}



\subsubsection{Separation}
When we know which pairs' bounding boxes overlap, we can compute whether we need to separate them. Rectangular objects always collide when their bounding boxes do, but circles might not. If two objects collide, we separate them with the smallest movement possible; usually this means moving them along the collision surface normal, as shown in Figure \ref{fig:separation}. If one of the objects is marked as not movable, the other object is moved the full distance. Otherwise, both objects are moved half of the separation vector, in opposite directions.

\begin{figure}[H]
  \begin{center}
  \begin{minipage}{0.95\textwidth}
    \vspace{0.5cm}
    \subfloat[Separating two rectangles.]{\includegraphics[width=0.2\textwidth]{illustrations/separation-01.png}} 
    \hspace{0.05\textwidth}
    \subfloat[Separating a circle and a rectangle.]{\includegraphics[width=0.2\textwidth]{illustrations/separation-02.png}} 
    \hspace{0.05\textwidth}
    \subfloat[Separating a circle and a rectangle.]{\includegraphics[width=0.2\textwidth]{illustrations/separation-03.png}} 
    \hspace{0.05\textwidth}
    \subfloat[Separating two circles.]{\includegraphics[width=0.2\textwidth]{illustrations/separation-04.png}} 
    \caption{Different kinds of separation that can be computed. In (a) the rectangles are separated along vector A or B, depending on which is shorter. In (b) the circle is treated like a rectangle. In (c) the rectangle's corner is treated like a zero-radius circle. \label{fig:separation}}
  \end{minipage}
  \end{center}
\end{figure}
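For the two-circle case, the separation computation is particularly simple. The sketch below (our illustration, not the framework's code) moves both circles half the overlap along the collision normal, or moves one circle the full distance if the other is immovable:

```python
import math

def separate_circles(p1, r1, p2, r2, movable1=True, movable2=True):
    """Minimal-displacement separation of two overlapping circles.
    Positions are (x, y) tuples; returns the corrected positions."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    overlap = r1 + r2 - dist
    if dist == 0.0 or overlap <= 0.0:
        return p1, p2  # coincident centers or no collision: do nothing
    nx, ny = dx / dist, dy / dist  # collision normal, from circle 1 to 2
    s1 = 0.0 if not movable1 else (overlap if not movable2 else overlap / 2)
    s2 = 0.0 if not movable2 else (overlap if not movable1 else overlap / 2)
    return ((p1[0] - s1 * nx, p1[1] - s1 * ny),
            (p2[0] + s2 * nx, p2[1] + s2 * ny))
```

After the correction, the circles touch exactly; the circle-rectangle cases reduce to this by treating the nearest corner as a zero-radius circle.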


\subsection{Mechanics}
While the physics engine makes navigating the field a little bit more forgiving, the rules or \emph{mechanics} of the game can still give an unfair advantage to a slightly better team. Better game mechanics can also ensure that learning agents get more feedback on how well they are doing. 

\subsubsection{Ammo Placement}
In the old game, ammo appeared at random places on the field, at random intervals. Especially in the early stages of the game, this could give a large advantage to one of the teams: a team with significantly more ammo at the start can easily conquer the field and harvest even more of the ammo that spawns. By creating symmetrical `ammo locations' on the field, each of which spawns a pellet of ammo at a regular interval, this problem is mitigated.

\subsubsection{Simultaneous captures}
When two agents both occupy a control point, a number of different things could happen. Previously, the agent who arrived first would keep control of the point, so the action of the joining agent had no effect. Because finding and reaching a control point without getting shot is an achievement that should not go unrewarded, we changed this behavior. Now, when two agents of different teams are on a control point, the control point becomes neutral and neither team scores. If there are more than two agents on the point, the team with the majority gains control. This way, agents are at least rewarded for preventing the other team from scoring, and sending additional agents to a control point is usually beneficial.
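The majority rule can be condensed into a few lines. The sketch below is our own formulation; in particular, the behavior of an empty point (it keeps its current owner) is our assumption about the rules rather than something verified against the framework:

```python
def control_point_owner(reds, blues, current_owner):
    """Majority capture rule: `reds` and `blues` count the agents of
    each team standing on the point. Ties between non-empty teams make
    the point neutral; an empty point keeps its owner (our assumption)."""
    if reds > blues:
        return "red"
    if blues > reds:
        return "blue"
    return "neutral" if reds > 0 else current_owner
```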

\subsubsection{Random Maps}
As mentioned before, we wanted to avoid the situation where the game is always played on the same field. It would be possible to manually create several different fields, but we thought it better to have a method that generates structurally similar fields automatically. The basic approach is to add a fixed number of randomly placed straight wall sections to the field. Each wall is only added if a path to all ammo and control points remains. We use this method on one half of the field, and then mirror it to create a fully symmetric playing field.
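The idea can be sketched on a grid as follows. This is a simplified illustration under our own assumptions (grid cells instead of the game's continuous geometry, BFS instead of the actual connectivity check); the function names are ours:

```python
import random
from collections import deque

def reachable(walls, width, height, start, goals):
    """BFS on a 4-connected grid: True if every goal cell is
    reachable from `start` without crossing a wall cell."""
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                queue.append(nxt)
    return all(g in seen for g in goals)

def generate_half(width, height, start, goals, n_walls=8, max_len=4, seed=None):
    """Drop random straight wall segments on one half of the field,
    keeping a segment only if all objectives stay reachable."""
    rng = random.Random(seed)
    walls = set()
    for _ in range(n_walls):
        x, y = rng.randrange(width), rng.randrange(height)
        horizontal = rng.random() < 0.5
        segment = {(x + i, y) if horizontal else (x, y + i)
                   for i in range(rng.randint(2, max_len))}
        segment = {c for c in segment
                   if 0 <= c[0] < width and 0 <= c[1] < height}
        if segment & ({start} | set(goals)):
            continue  # never build on top of spawns or objectives
        if reachable(walls | segment, width, height, start, goals):
            walls |= segment  # keep the wall only if paths survive
    return walls

def mirror(walls, width):
    """Reflect the half-field to obtain a fully symmetric field."""
    return walls | {(2 * width - 1 - x, y) for x, y in walls}
```

Because each segment is validated against the walls accepted so far, the final field is guaranteed to keep every ammo location and control point reachable.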

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.25\textwidth]{illustrations/field-01.png}} 
    \hspace{0.05\textwidth}
    \subfloat{\includegraphics[width=0.25\textwidth]{illustrations/field-02.png}} 
    \hspace{0.05\textwidth}
    \subfloat{\includegraphics[width=0.25\textwidth]{illustrations/field-03.png}} \\
  \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Examples of randomly generated fields. Black dots indicate ammo locations, green squares are the control points. \label{fig:random_maps}}
  \end{minipage}
  \end{center}
\end{figure}

\subsection{Tools}
In addition to changing the inner workings and mechanics of the game, we also added some out-of-the-box tools for students to use. The most important of these is the navigation graph that agents can utilize. This graph indicates connectivity between areas, allowing a standard pathfinding algorithm to be used. In addition, there are a number of functions for common geometrical operations, and a fast implementation of $A^*$. Because students do not need to write their own implementations, they can start working on the core tasks faster, and because the implementations are efficient, we cut back on simulation time.

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.4\textwidth]{illustrations/nav-graph.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{The navigation graph is drawn on this image. For every point on the field, there is at least one node that can be reached in a straight line, and the edges indicate which nodes have straight lines connecting them. \label{fig:nav_mesh}}
  \end{minipage}
  \end{center}
\end{figure}
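The navigation graph can be viewed as an adjacency mapping from nodes to weighted neighbors, over which textbook $A^*$ runs directly. The sketch below is a generic implementation for reference; the actual API of the framework's tools may differ:

```python
import heapq
from itertools import count

def astar(graph, start, goal, heuristic):
    """A* over an adjacency dict {node: [(neighbor, cost), ...]}.
    `heuristic(n)` must never overestimate the true cost to `goal`."""
    tie = count()  # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(tie), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:  # walk the parent chain back to start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nbr, cost in graph.get(node, ()):
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nbr),
                                          next(tie), new_g, nbr, node))
    return None  # goal unreachable
```

With a zero heuristic this degenerates to Dijkstra's algorithm; on the navigation graph, the straight-line distance to the goal is an admissible heuristic.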

\section{Evaluation}

\subsection{Speed}

A fast evaluation is critical when many trials need to be run. In this experiment we compared the speed of running a single game of 400 timesteps in the old and the new framework; Table \ref{tab_speed} shows the resulting statistics. The new game is much faster and more predictable in its completion time. These results were obtained using a random agent, meaning the computation time used by the agents is negligible. With more complex agents the running time will of course be higher, but this can be limited using the computational time limit set for each agent. Overall, this speed-up should provide a significant improvement. 

\begin{table}
  \centering
\begin{tabular}{l|r|r|}
 & New Game & Old Game \\\hline
Avg & 4.28 & 27.66 \\\hline
Min & 4.07 & 23.93 \\\hline
Max & 4.58 & 31.99 \\\hline
\end{tabular}
\caption{Running times for a single 400-timestep game}
\label{tab_speed}
\end{table}

\subsection{Stability}

One of the key features of the game should be that equally skilled agents score approximately equally. Figure \ref{fig_stability_old} shows histograms of scores, ranging from 0.0 (complete loss) to 1.0 (complete win), for one agent versus another equally skilled agent, taken over 200 games (on different random maps for the new game). Ideally the score would always be 0.5, but in the old game scores are spread across the entire range. In the long run the average will be around 0.5, but due to the very high variance it will take many games to determine where the average lies exactly. The score distribution for the new game has a bell curve shape and a much lower variance, meaning far fewer samples are required to make a reliable estimate of the mean.

\begin{figure}[H]
    \vspace{0.5cm}
    \centering
        \subfloat[Old game]{\includegraphics[width=0.45\textwidth]{results/fig_stability_old.png}} \hspace{0.04\textwidth}
        \subfloat[New game]{\includegraphics[width=0.45\textwidth]{results/fig_stability_new.png}}\\
    \begin{minipage}{0.9\textwidth}
        \caption{Histogram of scores (0.0-1.0) for two equally skilled agents measured over 200 games. \label{fig_stability_old}}
    \end{minipage}
\end{figure}

This property speeds up evaluation significantly, but only equally skilled agents should score equally: the score should reflect any skill difference between two agents, i.e. the mean should shift as the skill gap between the two widens. To test this, we introduced a handicap in one of the agents. A handicap of $p$ simply causes the agent to perform a random move with probability $p$. Figure \ref{fig_stability_set} shows how the score progresses as the handicap of the agent increases, where the different lines correspond to different game settings. The mean score is almost linearly dependent on the handicap for each of the settings, showing exactly the kind of decline intended for an increasing handicap.
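A handicap of this kind is easy to express as a wrapper around an arbitrary agent. The sketch below is our illustration; the agent interface (a single \texttt{act} method) is an assumption, not the framework's actual API:

```python
import random

class HandicappedAgent:
    """Wraps an agent so that, with probability p, a uniformly random
    action is returned instead of the agent's own choice."""
    def __init__(self, agent, p, actions, rng=None):
        self.agent, self.p, self.actions = agent, p, actions
        self.rng = rng or random.Random()

    def act(self, observation):
        if self.rng.random() < self.p:
            return self.rng.choice(self.actions)  # handicap triggered
        return self.agent.act(observation)
```

With $p = 0$ the wrapper is transparent, and with $p = 1$ the agent plays uniformly at random, so a sweep over $p$ interpolates between the two.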

\begin{figure}[H]
    \vspace{0.5cm}
    \centering
        \subfloat[Average]{\includegraphics[width=0.45\textwidth]{results/fig_stability_settings_avg.png}} \hspace{0.04\textwidth}
        \subfloat[Standard deviation]{\includegraphics[width=0.45\textwidth]{results/fig_stability_settings_std.png}}\\
    \begin{minipage}{0.9\textwidth}
        \caption{Score values of agents with different levels of handicap, while varying some game settings. \label{fig_stability_set}}
    \end{minipage}
\end{figure}

The second part of the figure shows the standard deviation, which does depend on the game settings. In accordance with our predictions, some settings increase the variance, while others decrease it. The biggest impact appears to come from the spawn time: when an agent dies, it is forced to wait at its spawn point for a certain number of timesteps, so this number directly specifies the penalty for dying. A high penalty for performing badly (e.g. making a mistake and getting shot) puts the agent at an even greater disadvantage. Whether such small mistakes have a bigger or smaller impact naturally affects the variance of the game.

Figure \ref{fig_stability_cp} shows the same kind of graph, but now varying the way multiple agents on a control point are handled. Our prediction was that this would also impact the variance, because originally the first agent to reach the control point would keep control of it until moving off, which can turn minor navigational advantages into big score differences. As can be seen from the standard deviation part of the figure, all lines have approximately the same value, meaning this was not a significant factor for variance. We theorize that this is because there is plenty of ammo with which to resolve such deadlocks, so there is no real impact on the game score. However, there might still be some advantage to this while learning, because it impacts the way the reward signal is distributed. See Section \ref{sect_learn} for further results.

\begin{figure}[H]
    \vspace{0.5cm}
    \centering
        \subfloat[Average]{\includegraphics[width=0.45\textwidth]{results/fig_stability_capmode_avg.png}} \hspace{0.04\textwidth}
        \subfloat[Standard deviation]{\includegraphics[width=0.45\textwidth]{results/fig_stability_capmode_std.png}}\\
    \begin{minipage}{0.9\textwidth}
        \caption{Score values of agents with different levels of handicap, while varying the control point capture method. \label{fig_stability_cp}}
    \end{minipage}
\end{figure}

Finally, Figure \ref{fig_stability_bars} shows how the histogram changes as the handicap increases. A slice for a single handicap value corresponds to the histograms in Figure \ref{fig_stability_old}, the change of the mean value across handicaps corresponds to Figure \ref{fig_stability_set}, and the difference between the left and the right image shows the effect of variance on the score.

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=\textwidth]{results/fig_stability_3dbars.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{The histogram of scores as affected by the handicap (z-axis). Left: Low variance, Right: High variance. \label{fig_stability_bars}}
  \end{minipage}
  \end{center}
\end{figure}

\subsection{Learnability}
\subsubsection{Maze}
In this section we attempt to establish the learnability of certain aspects of the game, using Q-learning on some toy problems. First we tried simply learning to navigate the maze in Figure \ref{fig_lvl_maze}. The agent's state was the grid cell it occupied, and the available actions were to move to the center of one of the eight neighboring grid cells. For this simple problem the agent learned a good policy very quickly, favoring diagonal moves because on average more distance is traveled per move. 

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.4\textwidth]{illustrations/learn-maze.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Level design for the maze problem. \label{fig_lvl_maze}}
  \end{minipage}
  \end{center}
\end{figure}
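For reference, tabular Q-learning as used in these toy problems can be sketched as follows. The environment interface (\texttt{reset()}, \texttt{step()}, and an \texttt{actions} list) is our own convention for the sketch, not the framework's API:

```python
import random
from collections import defaultdict

def q_learn(env, episodes, alpha=0.5, gamma=0.95, epsilon=0.3,
            max_steps=60, seed=0):
    """Tabular Q-learning sketch. `env.reset()` returns a state;
    `env.step(a)` returns (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(max_steps):
            # epsilon-greedy action selection with random tie-breaking
            if rng.random() < epsilon:
                a = rng.choice(env.actions)
            else:
                a = max(env.actions, key=lambda a: (Q[(s, a)], rng.random()))
            s2, r, done = env.step(a)
            target = r if done else \
                r + gamma * max(Q[(s2, a2)] for a2 in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            if done:
                break
    return Q
```

The greedy policy is then read off by taking the argmax of $Q$ over the actions in each state.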

We also used this maze to test the effect of our new physics versus the old `sticky' physics on learnability. We simulated the old game by setting the friction to infinite and reran the experiment. Figure \ref{fig_learn_maze} shows the learning curves for both physics styles, where the new physics clearly converges sooner. This is because it is more forgiving of small errors in direction, still allowing the agent to move approximately in the intended direction (albeit a little less far).

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.7\textwidth]{results/fig_mazelearn.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Learning curves for a maze, using new and old physics. \label{fig_learn_maze}}
  \end{minipage}
  \end{center}
\end{figure}

\subsubsection{One versus one}

Next we tried to learn some basic strategy elements against a fixed opponent, using the level in Figure \ref{fig_lvl_strat}. Here the agent is supposed to learn to first go for the ammo, then to shoot the other agent and take over the control point. Using the grid representation it was indeed possible to learn this strategic choice, despite the delayed reward.

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.4\textwidth]{illustrations/learn-1v1.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Level design for the strategy problem. \label{fig_lvl_strat}}
  \end{minipage}
  \end{center}
\end{figure}

We also used this scenario to evaluate some of the tools that can be provided to the students. Instead of actions corresponding to neighboring cells, the graph with points of interest for the level was used as an action space. This significantly simplifies the problem, because almost any move is useful, allowing learning to converge much faster. The learning curves for both action spaces can be seen in Figure \ref{fig_learn_strat}.

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.7\textwidth]{results/fig_learn_1v1_grid.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Learning curves for 1 vs 1, using a simple and a more advanced action space. \label{fig_learn_strat}}
  \end{minipage}
  \end{center}
\end{figure}

\subsubsection{Two versus two}\label{sect_learn}

The last experiment we ran was in a two versus two setting, to try and demonstrate the ability to learn simple cooperation. Figure \ref{fig_lvl_coop} shows the level used for the experiment, where the intended behavior is for one agent to get ammo and the other to capture a control point. The state space of each agent was expanded with the location of the other agent and the points of interest action space was used.

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.4\textwidth]{illustrations/learn-2v2.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Level design for the cooperation problem. \label{fig_lvl_coop}}
  \end{minipage}
  \end{center}
\end{figure}

This cooperation task could also be learned, as shown in Figure \ref{fig_learn_coop}. Learning converges faster against opponents with a larger handicap, and even against perfect agents (handicap 0.0) some progress can be made.

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.7\textwidth]{results/fig_learn_2v2.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Learning curves for two versus two against opponents with an increasing handicap. \label{fig_learn_coop}}
  \end{minipage}
  \end{center}
\end{figure}

We also used this two versus two scenario to evaluate the effect of the control point behavior on learning. Figure \ref{fig_learn_cp} shows the learning curves, where the majority approach converges much faster. This is because handling scores in this way incentivizes agents to move towards a control point, even if there is already an enemy there.

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.7\textwidth]{results/fig_learn_2v2_capturemode.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Learning curves for 2 vs 2, using different methods of handling multiple agents on a control point. \label{fig_learn_cp}}
  \end{minipage}
  \end{center}
\end{figure}

\section{Conclusion}
The experiments show that we have achieved what we set out to do. Agents for the game can be evaluated in less time, both because games are quicker to simulate and because their outcome is more reliable. Additionally, we have shown that changing the mechanics can make the game easier to learn. Learning low-level behavior is hard, and it is usually much easier to implement efficiently by hand. By providing tools for low-level behavior, students can focus on higher-level behavior. In the two versus two experiment, we see that learning this higher-level behavior is still enough of a challenge, even in small settings. 

\section{Discussion}

Based on the results of this project, we can recommend a good setup for the students' lab work. Many of the changes are inherent to the new framework, but the game settings are still very flexible. As our experiments show, the game is easier to learn when the `majority' capture mode causes rewards to be more abundant. Because good availability of ammo and fast respawn times make the game more stable, we recommend using those settings as well. Most importantly, however, we recommend to scale the game down to a lower number of agents. Instead of playing eight versus eight, perhaps five versus five is a better setting. We managed to effectively apply a naive learning method to a small game, and we think that students can use more advanced methods to effectively learn larger games. Although games with more than five agents per team are more complex, not much structural novelty is gained. Because the game's speed is mostly dependent on the agents' thinking time, reducing this complexity might allow the students to perform more training. This training will also be more effective because less experience is needed for the smaller game.

\begin{figure}[H]
  \begin{center}
    \subfloat{\includegraphics[width=0.7\textwidth]{illustrations/scenarios.png}} 
    \vspace{0.2cm}
  \begin{minipage}{0.9\textwidth}
    \caption{Different scenarios for working with the game. \label{fig:scenarios}}
  \end{minipage}
  \end{center}
\end{figure}

A second point of discussion is the use of random environments. It is undesirable to supply the students with a single field that is used for the entire lab session including the `finals' (scenario \circled{5} in Figure \ref{fig:scenarios}), because they will inevitably gain more by exploiting this particular map than by applying planning, learning, genetic methods, etc. What the right setup is, is not entirely clear. There are a number of gradations, each increasing the need for online learning or a map-independent representation of behavior, shown in Figure \ref{fig:scenarios}. 

To start at the top tier, \circled{1}, the game could be geared completely towards online learning by keeping the map completely unknown to the agents, even at the start of a game. While this introduces the interesting problem of doing exploration and exploitation online, we feel that it is not realistic to run the number of games needed to give the agents enough experience online. Scenario \circled{2} has the same problem, but it takes away the need for the agents to explore. 

Scenarios \circled{3} and \circled{4} are our preferred choices. Ultimately, we think that \circled{4} is the ideal situation. Because the students are given the map a while before their agents are evaluated, it can still be effective to train a policy that depends on that specific map. However, they will need to be able to retrain their agent within this period (e.g. a week). On top of that, any advantage gained from training on a specific map is lost the week after, so it might still be beneficial to use the experience to learn something about the game that does \emph{not} depend on the map.

By map-independent we mean a policy that is trained on one or more specific maps, but whose values can be transferred to other maps. As an example: instead of encoding the exact $(x,y)$ coordinates in the state space, one could use a representation based on the distance along the shortest path to each objective. Another example would be to divide the map into more abstract `rooms' and learn policies for different configurations of rooms. It is actually a realistic scenario for an agent to be trained in a number of environments, but be expected to perform well in new environments without first training in those as well. Another option is to optimize a policy without learning a state-action mapping. When a role-assignment method is used, the suitability functions for each role could be optimized using an evolutionary algorithm, which does not rely on the particular map either.




\bibliographystyle{apalike}
\bibliography{bibl.bib}
\end{document}

