\section{Related Work}
\label{sec:related_work}

A fair amount of theoretical and empirical work has been done in the field of multi-agent planning and learning.
However, it turns out that the DG places such strong constraints on candidate solution methods that many well-known approaches are ruled out, or are at best impractical.
We will explore the space of formalisms to understand how to best model the DG, and we will relate this to the applicability of several solution methods.
%For this reason, we will explore the place of the DG in the space of formalisms that have been studied, and relate this to the applicability of solution methods.
In particular, we will discuss models of the strategic interactions between agents in multi-agent settings, learning in such settings, the all-important issue of coordination, and the extension of learning methods to continuous domains.

\subsection{Models of strategic interactions}
The field that has traditionally studied the strategic interactions between multiple agents is game theory \cite{neumann44a}.
In its simplest form, game theory tries to predict the outcome of a (zero-sum) ``normal-form game'' (NFG), in which two or more agents simultaneously decide what action to take.
Since the pay-off of each agent depends on the \emph{joint action}, agents must reason about the actions of other agents.
This basic framework is of little use to us, though, because it does not model multiple states and the transitions between them, and assumes the game is known and so does not model learning.

The NFG has been extended to incorporate these aspects, resulting in sequential and repeated games \cite{GTandMARL}.
In a \emph{sequential} game (often formalized as a stochastic game \cite{shapley53}), there are multiple states, and the transition probabilities depend on the joint action, as in the well-known single-agent MDP framework.
%This can be seen as a generalization of both NFGs and MDPs (which are well known from the planning literature, and will not be reviewed here).
In the form introduced by Shapley \cite{shapley53}, stochastic games do not model partial observability, but they are easily extended to partially observable stochastic games (POSGs).
The POSG is a good model of the domination game.
A similar framework is the decentralized POMDP (Dec-POMDP), but while decentralized systems often have desirable properties such as graceful degradation, this comes at a computational cost, and the DG does not require a decentralized policy.
Furthermore, Dec-POMDPs apply only to fully cooperative games, making them less useful as a model of the DG.
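To make the stochastic-game formalism concrete, its transition structure can be sketched as a mapping from a state and a \emph{joint} action to a distribution over next states. The states, actions, and probabilities below are purely illustrative and are not taken from the DG.

```python
import random

# Minimal stochastic-game transition model: two agents, two states.
# Transitions are indexed by the *joint* action, which is why each
# agent must reason about the other's choice.  All names and numbers
# here are hypothetical, not taken from the domination game.
P = {
    # (state, (action_1, action_2)) -> [(next_state, probability), ...]
    ("s0", ("attack", "attack")): [("s0", 0.5), ("s1", 0.5)],
    ("s0", ("attack", "defend")): [("s1", 1.0)],
    ("s0", ("defend", "attack")): [("s0", 1.0)],
    ("s0", ("defend", "defend")): [("s0", 0.9), ("s1", 0.1)],
}

def step(state, joint_action, rng=random.Random(0)):
    """Sample a next state given the current state and joint action."""
    outcomes = P[(state, joint_action)]
    states, probs = zip(*outcomes)
    return rng.choices(states, weights=probs)[0]

print(step("s0", ("attack", "defend")))  # always "s1" (probability 1.0)
```

Note that a single-agent MDP is recovered when the second agent's action is fixed; the difficulty is precisely that it is not.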


The DG is also \emph{repeated}, i.e. the entire game (consisting of multiple time-steps) is played repeatedly, each time on a new map.
This allows us to learn a model of the environment and opponent, a value function or a policy from experience.
We can see that learning is essential, because unlike in MDP-planning, no transition model is available (since it depends on the opponents), and so classic planning is impossible.
It should be noted though that many issues that make repeated games interesting from a multi-agent perspective are not present in the DG:
since no learning is possible \emph{during} a tournament, agents don't need to reason about the effects of their actions in subsequent games (e.g. ``reputation management'' \cite{reputations}).

\subsection{Learning in multi-agent systems}
In general, learning involves approximating either a model, a value function or a policy.
As explained by van Hasselt \cite{RLcontinuous}, model approximation in continuous spaces is not very useful: even if we learn a model, planning in a continuous MDP is intractable.
For similar reasons, value approximation is not practical: we could approximate the optimal Q-value, but choosing an action $a^* = \arg\max_{a} Q(s,a)$ in a continuous action space may itself involve a non-trivial optimization problem. We could discretize the whole space, but if the states are to be descriptive enough to capture the complexity of the game, the resulting state space would be enormous.
The most straightforward approach for a problem of this complexity is therefore policy approximation, also known as direct policy search or actor-only methods.
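As a minimal illustration of why greedy action selection is hard in continuous action spaces, consider a hypothetical learned Q-function for a fixed state: even a crude grid search over a 2-D action already requires thousands of evaluations, and the cost grows exponentially with the action dimensionality.

```python
import numpy as np

# Hypothetical learned Q-function for one fixed state: a smooth but
# multi-modal function of a continuous 2-D action (e.g. a move vector).
def q_value(action):
    x, y = action
    return np.sin(3 * x) * np.cos(2 * y) - 0.1 * (x ** 2 + y ** 2)

# Greedy action selection requires solving max_a Q(s, a).  With a
# continuous action space, a naive grid search over a 101 x 101 grid
# already costs 10,201 Q-evaluations for just two action dimensions.
grid = np.linspace(-2.0, 2.0, 101)
best = max(((x, y) for x in grid for y in grid), key=q_value)
print(best, q_value(best))
```

A gradient-based inner optimization would be faster per step but can get stuck in a local maximum of Q, which is exactly the non-trivial optimization problem alluded to above.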

Direct policy search methods work by updating some parametrized policy, either by gradient methods \cite{RLcontinuous} or through evolutionary algorithms \cite{coevolutionRosinBelew}.
One method that has attracted a lot of attention recently is NeuroEvolution of Augmenting Topologies (NEAT) \cite{NEAT}.
NEAT uses three techniques to evolve neural networks with increasingly complex topologies.
These techniques are speciation to protect innovation, a crossover operator based on ``innovation numbers'' that avoids catastrophic crossovers, and starting evolution from minimal structure.
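The core of direct policy search can be sketched as a (1+1) evolution strategy on the parameters of a fixed-topology policy; NEAT additionally evolves the topology itself and maintains a population with speciation, which this sketch omits for brevity. The fitness function below is a hypothetical stand-in for the average score over a set of games.

```python
import random

# Simplified direct policy search: a (1+1) evolution strategy on the
# parameters of a fixed-topology policy.  The fitness function is a
# hypothetical stand-in for "average score over played games".
def fitness(params):
    target = [0.5, -1.0, 2.0]                     # hypothetical optimum
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(generations=200, sigma=0.3, seed=0):
    rng = random.Random(seed)
    parent = [0.0, 0.0, 0.0]                      # start minimal, as NEAT does
    for _ in range(generations):
        # Mutate every parameter with Gaussian noise...
        child = [p + rng.gauss(0.0, sigma) for p in parent]
        # ...and keep the child only if it is at least as fit.
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

best = evolve()
print(best, fitness(best))
```

Gradient-based policy search replaces the mutate-and-select loop with an estimated policy gradient step, but the object being updated, a parametrized policy, is the same.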

\subsection{Coordination}
An important problem in cooperative games is that of coordination: how to agree on the joint action that should be taken, when agents decide independently.
There are several approaches to this problem, as reviewed by Vlassis et al. \cite{masintro}.
That work focuses on exploiting sparse interactions between agents to improve efficiency.
An ``optimal'' policy for the DG, however, involves very dense interactions, although it may still be possible to leverage such efficient algorithms at a minimal cost.

In (pseudo-) centralized, discrete action space settings, it is possible to cast the coordination problem as an assignment problem \cite{hungarian}.
This is the approach we initially took for our rule-based system; it is discussed in more detail in the next section.
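Concretely, assigning $n$ agents to $n$ control points so as to minimize total travel distance is a linear assignment problem, solvable in $O(n^3)$ time by the Hungarian algorithm. The sketch below uses SciPy's implementation with made-up positions; in the DG, positions would come from the game state and the cost could be path length rather than straight-line distance.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Coordination as an assignment problem: assign each agent to one
# control point so total travel distance is minimized.  The positions
# below are hypothetical, not taken from an actual DG map.
agents = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
points = np.array([[1.0, 0.0], [0.0, 3.0], [5.0, 1.0]])

# cost[i, j] = Euclidean distance from agent i to control point j
cost = np.linalg.norm(agents[:, None, :] - points[None, :, :], axis=2)

# The Hungarian algorithm finds the minimum-cost perfect matching.
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())
```

This yields a centralized joint decision in one solver call, which is exactly what the DG's unlimited communication makes possible.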

\subsection{Continuous state-action spaces}
The continuous nature of the state-action space of the domination game is a serious complicating factor.
Most work has focused on either optimizing the parameters of some family of continuous functions \cite{RLcontinuous} or discretizing the state space in an intelligent way.

