\section{Methodology}
\label{sec:methodology}
% A correct methodology is one that permits a justification of any number of conclusions

While most research in multi-agent systems (MAS) has focused on extremely simple domains \cite{OliehoekRLSOTA}, the domination game is very complex.
This has important implications: as we will discuss shortly,
fully generic algorithms are unlikely to be tractable, and domain knowledge is required. It should be noted that this is a universal property of generic solution methods: while generic in theory, in practice they are only applicable to small problems, and are thus highly specialized to this class of problems. The complexity of the DG therefore directs research towards methods that can solve problems of real-world complexity, which necessarily leads to much more specific methods.

With this in mind, we sought a way to insert domain knowledge into our agent such that learning becomes feasible, while at the same time making it possible to represent truly interesting behavior. We required the latter because learning for its own sake is not very informative if we want to analyze the complexities of this problem.
To this end, we divided the problem into a representational and a strategic sub-problem, so that each could be hand-crafted or learned independently.
We created a representational module that performs sophisticated state analysis, yielding a compact representation of the current state of the game.
Furthermore, the representational module defines a set of high-level actions that jointly discretize the (turn, speed, shoot) action space, as described below.

Since the specifics of the representational module were decided by purely deductive reasoning, we also want to verify its efficacy empirically.
To that end, we created a simple rule-based system, described below. The result is a full-fledged agent that can be pitted against the agents of other teams, and against our own, to test their effectiveness.
These results are discussed in section~\ref{sec:experiments}.

The rule-based strategic module, albeit effective as we will see later, is an ad hoc implementation of what we thought was intuitive and useful. To improve on this, we wanted to learn the agents' strategy automatically, in such a way that it could outperform our rule-based strategy.
In theory this is possible, as our rule-based system is rather limited.
For example, an intuitive rule would be not to pick up ammo if you already have ammo. However, in some cases an ammo-less enemy might be close to an ammo spot, and it is better to pick the ammo up anyway, even if you already have some. Writing rules for every such case is infeasible, hence our decision to apply a learning algorithm to the strategy sub-problem.


We decided to use NEAT \cite{NEAT} to learn the strategy, as it can cope with continuous action spaces, works with our parametrized input space, and can be applied to our cooperative learning environment. It is also a well-known and established algorithm, which makes it easier to find pre-made and well-tested implementations. It does not, however, readily extend to cooperative learning; we describe our solution to this below.


\subsection{Action space representation}
The action space contains all actions that we deemed useful or necessary for agents to perform well in the DG. The first set of actions we discuss is movement to locations of direct interest: the control points and the ammo spawns. By keeping a list of all ammo spots found so far, agents can always move towards one of the 6 ammo spawns, whether or not there is currently ammo there.

The next important set of actions are those that interact with the enemy. Because shooting down opponents is so important in the DG, every variant of our algorithm shoots enemy agents whenever they can be shot by our agents, regardless of the rest of the state. In each step we build a list of the enemy agents that can be shot, taking into account obstruction by the scenery and by other agents. We determine whether an agent can shoot an enemy based on the enemy's actual hitbox, instead of only considering its middle. This increases the effective shooting range from 60 to 66 units, widens the shooting angle, and in some cases where the middle of an enemy is obstructed from view, allows its sides to be shot.
We apply the Hungarian algorithm \cite{hungarian} to assign shooters to opponents, preferring agents with more ammo over agents with less, while maximizing the number of enemies that are shot. We also take into account that enemies inside their spawn area cannot be shot.
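As an illustration, this assignment step can be sketched with SciPy's \texttt{linear\_sum\_assignment}; the cost shaping and the function names below are our own assumptions, not the exact implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BLOCKED = 1e6  # cost for (agent, enemy) pairs where no shot is possible

def assign_shooters(can_shoot, ammo):
    """Assign at most one friendly shooter per shootable enemy.

    can_shoot[i][j]: True when agent i has a clear (hitbox-based) shot
    at enemy j and enemy j is outside its spawn area.
    ammo[i]: agent i's ammo count; higher-ammo agents are preferred.
    """
    can_shoot = np.asarray(can_shoot, dtype=bool)
    ammo = np.asarray(ammo, dtype=float)
    # Every feasible shot has negative cost, so the optimum always
    # maximizes the number of enemies shot; the ammo-based bonus
    # (strictly below 1) only breaks ties between candidate shooters.
    bonus = ammo[:, None] / (ammo.max() + 1.0)
    cost = np.where(can_shoot, -1.0 - bonus, BLOCKED)
    rows, cols = linear_sum_assignment(cost)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if can_shoot[i, j]]
```

Because the solver must assign as many pairs as possible, any blocked pair left in the solution is filtered out afterwards.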

The last interaction action is the combat action, where we assign one (and only one) bot to go after an opponent. This action makes a bot move towards an enemy bot until it can shoot it. If the enemy bot is heading towards the bot itself, it moves towards it only a little, to avoid the possibility that they drive past each other.

We also defined a spawn camping action where the bot moves towards the enemy spawn and waits there (so it can shoot enemies that move away from the spawn), and an exploration action that makes the agent move to places on the map that have not been seen by any agent yet. This latter action is useful when the agents have not yet found all ammo spawns.
The last actions we defined make agents move to one of several ``camp'' locations.
To detect camp spots, we convolve the map with 8 kernels that each detect a safe spot right beside a corner.
Camp spots are then ranked by the maximum importance of any adjacent square, where the importance of a square is defined as the number of paths from the enemy spawn to a point of interest (control point or ammo spot) that pass through it.
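To make the kernel idea concrete, the sketch below marks free cells tucked behind a wall corner using \texttt{scipy.signal.convolve2d}; the specific kernels are illustrative stand-ins, not the eight we actually used:

```python
import numpy as np
from scipy.signal import convolve2d

def corner_kernels():
    """Eight 3x3 kernels; each fires on a free cell next to a wall
    corner. Illustrative stand-ins for the real kernel set."""
    base = np.array([[1, 1, 0],    # wall above-left, wall above,
                     [0, -2, 0],   # free centre (the candidate spot)
                     [0, 0, 0]])
    return ([np.rot90(base, r) for r in range(4)] +
            [np.rot90(base.T, r) for r in range(4)])  # rotations + reflections

def camp_spots(walls):
    """walls: 2-D array with 1 = wall, 0 = free.
    Returns a boolean mask of candidate camp cells."""
    walls = np.asarray(walls)
    hit = np.zeros(walls.shape, dtype=bool)
    for k in corner_kernels():
        score = convolve2d(walls, k, mode="same",
                           boundary="fill", fillvalue=0)  # outside map treated as free
        # A match needs every wall cell of the kernel present; the
        # negative centre weight rules out cells that are themselves walls.
        hit |= score == int(k[k > 0].sum())
    return hit & (walls == 0)
```

The ranking by path importance would then be applied to the cells this mask selects.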


\subsection{State space representation}
Here we explain what information we extracted from the state space of the DG. The first important thing to note is that the private observations of all agents are accessible to all other agents, so the joint observation is available to every agent. Several easily obtained variables in this observation are used frequently: the amount of ammo each agent has, the distance of each agent to each point of interest (control point or ammo spot), the state of each control point (captured by either team or neutral), and the visible enemies.

Besides these readily available state features, we also keep a list of the ammo spawns that have been found. Furthermore, we keep a timer that tracks when we expect an ammo spawn to produce ammo. We completely disregard the possibility that an enemy could pick up the ammo, and simply reset the counter whenever ammo is absent from a location where we predict it should be.
Note that without this information, the DG is not Markovian, even in a single-agent scenario.
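A sketch of such a tracker, with an assumed respawn delay and illustrative names:

```python
class AmmoTracker:
    """Countdown per ammo spawn until ammo is expected to reappear.

    RESPAWN_DELAY is an assumed constant; enemy pickups are ignored,
    and the timer is simply reset whenever ammo is absent from a spot
    where we predicted it should be present.
    """
    RESPAWN_DELAY = 20  # game steps; illustrative value

    def __init__(self):
        self.timers = {}  # spawn location -> steps until ammo expected

    def observe(self, location, ammo_present):
        if ammo_present:
            self.timers[location] = 0                   # ammo is there right now
        elif self.timers.get(location, 0) == 0:
            self.timers[location] = self.RESPAWN_DELAY  # prediction wrong: reset
        else:
            self.timers[location] -= 1                  # count down one step

    def time_to_spawn(self, location):
        return self.timers.get(location, 0)
```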

We also keep track of all points on the current map that have not yet been seen, so that these can be explored when not all ammo locations have been found. Furthermore, instead of the distance (path length) to each point of interest, we frequently use the number of turns it would take an agent to reach the specified location. This is composed of the path length and the number of turns lost to rotating towards the required walking directions.
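This estimate might be computed as follows, under assumed constants for turn rate and movement speed (both illustrative):

```python
import math

TURN_RATE = math.pi / 4  # assumed maximum rotation per step (radians)
SPEED = 1.0              # assumed distance covered per step

def turns_to_reach(path_length, heading, first_leg_angle):
    """Steps to reach a point of interest: walking steps along the A*
    path plus steps lost rotating onto the first leg of the path.
    (Rotation on later path legs is ignored in this simplification.)"""
    # Smallest signed angle between current heading and the first leg.
    delta = abs((first_leg_angle - heading + math.pi) % (2 * math.pi) - math.pi)
    rotation_steps = math.ceil(delta / TURN_RATE)
    walk_steps = math.ceil(path_length / SPEED)
    return rotation_steps + walk_steps
```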


\subsection{Rule-based action selection}
For our rule-based system we follow a very basic tactic. If an agent has no ammo, we assign an ammo location to it. If there is no known ammo location, or if reaching it takes fewer turns than the estimated time-to-spawn, the agent moves to a control point that does not yet belong to us. If we control all control points, it moves to the middle control point. If an agent has ammo and sees enemies locally, it chases them in order to kill them. As described above, the actual shooting assignment is always carried out for agents that can shoot an enemy. If an agent has ammo but no enemies are close, it goes after a random uncontrolled control point. If all control points are captured, agents with ammo move to the designated camping locations or camp at the control point closest to the enemy spawn.
For each agent that is looking for ammo, we use the Hungarian algorithm to determine where it should go, minimizing the combined distance. The same is done for the assignment of camp locations.
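The cascade above can be paraphrased as a sketch; the dictionary keys and action labels are illustrative, and the joint Hungarian assignments (which ammo spot, which camp spot) happen outside this per-agent function:

```python
def choose_action(agent, state):
    """Per-agent rule cascade, paraphrasing the tactic described above."""
    if agent["ammo"] == 0:
        # Fetch ammo unless none is known or we would arrive before it respawns.
        if state["ammo_spots"] and not state["arrives_before_spawn"]:
            return "get_ammo"
        return "capture" if state["uncontrolled_cps"] else "hold_middle"
    if agent["visible_enemies"]:
        return "chase"
    if state["uncontrolled_cps"]:
        return "capture"
    return "camp"
```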


\subsection{NEAT-based action selection}
One of the issues of using NEAT (or any other learning algorithm) to select the strategy of our agents is that evaluating a single game takes several seconds. Since NEAT has to evaluate the performance of every member of the population by having the agents play against multiple opponents, the number of generations that can be run in a few days' time is limited.
Furthermore, the more inputs and outputs there are, the longer it takes NEAT to find good network structures, so it is very important to keep the number of inputs and outputs small. However, we also have to take performance into account: the inputs and outputs must be descriptive and powerful enough for the network to represent good policies.

Another issue is that of coordination. If we define a separate NEAT network for every agent, the agents can only learn to coordinate if some information on the other friendly agents is included in the input. It would also be possible to use a single network for all agents, but this would effectively sextuple the number of inputs and outputs, while also introducing needless symmetries.
We therefore defined a set of 44 inputs which we reasoned are at the very least necessary to make an informed decision about the 17 actions. The full list is given in appendix A; here we state only the main intuitions. For the main interest locations and the enemies, we sort them by path-wise proximity to each agent and consider only the closest three. As there are 6 possible enemies and 6 ammo locations, this winnows down the input space, while remaining informative under the rough assumption that more distant enemies or ammo locations matter less. The second important idea in our state representation is that for each interest location we add the distance of the other friendly agent closest to it; for ammo locations, we also include that closest agent's ammo.
This representation should enable NEAT to learn rules such as ``move to that ammo location, unless an ally with less ammo than me is closer to it,'' and similar constructions.
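A sketch of how part of such an input vector could be assembled; the names, the exact layout, and the distance callback are illustrative assumptions:

```python
def neat_inputs(agent, allies, enemies, ammo_spots, dist):
    """Assemble part of the input vector for one agent's network.

    dist(a, b) returns the path-wise distance between two entities;
    only the three nearest enemies and ammo spots are kept.
    """
    feats = [agent["ammo"]]
    # Closest three enemies by path-wise distance.
    for enemy in sorted(enemies, key=lambda e: dist(agent, e))[:3]:
        feats.append(dist(agent, enemy))
    # Closest three ammo spots, each with the closest *other* friendly
    # agent's distance and ammo, enabling "leave it to a closer ally" rules.
    for spot in sorted(ammo_spots, key=lambda s: dist(agent, s))[:3]:
        closest_ally = min(allies, key=lambda a: dist(a, spot))
        feats += [dist(agent, spot), dist(closest_ally, spot), closest_ally["ammo"]]
    return feats
```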


