\documentclass[conference]{IEEEtran}

% graphics package
\usepackage[pdftex]{graphicx}
\usepackage{hyperref}
\usepackage{amsmath}
\graphicspath{{img/}}

\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{Solving the Domination Game with Q-learning, Eligibility Traces \& Monte Carlo}

% author names and affiliations
% use a multiple column layout for up to three different
% affiliations
\author{\IEEEauthorblockN{Toby Hijzen}
\IEEEauthorblockA{ 6332331\\
Universiteit van Amsterdam\\
toby.hijzen@gmail.com}
\and
\IEEEauthorblockN{Hessel van der Molen}
\IEEEauthorblockA{ 5619785\\
Universiteit van Amsterdam\\
hmolen.science@gmail.com}
\and
\IEEEauthorblockN{Kristin Rieping}
\IEEEauthorblockA{10252428\\
Universiteit van Amsterdam\\
10252428@uva.student.nl}
\and
\IEEEauthorblockN{Koen Klinkers}
\IEEEauthorblockA{10282815\\
Universiteit van Amsterdam\\
10282815@uva.student.nl}
}
\maketitle

\begin{abstract}
In this report we discuss the application of Q-learning, eligibility traces and Monte Carlo learning to the domination game. We describe the simplifications that were necessary to reduce the state space to tractable proportions. On this simplified representation, learning was performed under different settings: with eligibility traces, with dynamic task distribution and with an extended state space. All implementations are tested against an agent that selects actions randomly, and the results are compared both during learning and in a greedy action-selection phase.
\end{abstract}

\section{Introduction}
This report describes the reinforcement learning agent built for the domination game\footnote{\url{https://github.com/noio/Domination-Game}}.
In this game two teams battle to capture Control Points (CPs) in a 2D world. Each team has 6 agents, and the goal is to maximize the team score. Scoring works as follows: for every game step that a team holds a CP, that team's score increases by 1 and the opposing team's score decreases by 1. At the start of a game both teams have a score of 500 points. Captured CPs have to be defended, for which there are 6 locations where ammo is spawned. When an agent is shot it respawns at its base, where it has to wait until it is allowed to move again. A team wins if its score reaches 1000, or if it has more points than the other team after 600 steps.

This game poses a couple of challenges that make it difficult for Reinforcement Learning (RL). First, the state space is continuous, resulting in a huge number of states when discretized. For RL to be feasible we need to reduce the number of states in the state space, which is in general a very hard problem. Second, for an agent to perform well it needs to coordinate with the other agents. Combined with the continuous state space, this makes learning very difficult.

The setup is a zero-sum, multi-agent game that is cooperative within a team and competitive between teams. The state and action spaces are continuous, the game is finite, and the agents only have partial observability of the state space.

We decided to implement an agent using single-agent Q-learning, with and without eligibility traces (taken to the extreme, the latter yields Monte Carlo learning). In this setup coordination is not learned automatically. A joint-action space would have been a solution for this problem, but given the small amount of computation time available, that approach is infeasible. We therefore handle coordination separately from the Q-learning, using the Hungarian algorithm.

\section{The Model}

\subsection{Agent design}
The agent is designed in multiple levels. Each level is shared among the agents, enabling them to access global data (such as a shared Q-table). The top level describes the multi-agent aspect: dividing tasks among agents, where a task is defined as capturing and guarding a specific control point. The mid level describes single-agent behaviour: each agent should survive in its local environment, which includes decisions about ammo, shooting and task management. The lowest level handles all hard-coded manoeuvres such as path planning, rotating, exact position calculation and ammo-location calculation.
In our approach we solve the mid-level behaviour with Q-learning. All agents share the exact same Q-table, enabling the system to update the table with 6 values per game step\footnote{In one game step every agent executes a single action}. The task selection (top level) is done either with a static approach or with a dynamic approach that uses a built-in implementation of the Hungarian algorithm.
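The shared-table idea can be sketched as follows; this is a minimal illustration with hypothetical names, not the actual implementation:

```python
# Minimal sketch: all 6 agents write into one shared Q-table, so each
# game step contributes up to 6 updates. Names are illustrative.
from collections import defaultdict

class SharedQTable:
    def __init__(self):
        self.q = defaultdict(float)  # (state, action) -> value

    def update(self, s, a, delta, alpha=0.1):
        self.q[(s, a)] += alpha * delta

table = SharedQTable()          # one table for the whole team
for agent_id in range(6):       # each agent updates the same table
    s, a = ('some_state', 'goto_cp')
    table.update(s, a, delta=1.0)
```

Because updates from all agents land in the same table, a state-action pair visited by one agent immediately informs the others.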


\subsection{State Action Space}
\label{sec:stateactionspace}
In almost every learning domain, defining the state-action space is a hard problem. There are many variables or features that could be included in a precise representation, and this difficulty of feature selection also holds for the presented domination game. Locally (at the agent's mid level) it is possible for an agent to observe 11 other agents. An agent has a local view of 100 pixels in each direction, i.e. a 2D square of 200x200 pixels of observation points. In a naive design these positions alone would create a state space of $(200^2)^{11}$ values, excluding information such as distances to ammo locations or to the CPs. To tackle this problem we created a selection of highly discretized features and actions, which are explained in the following sections. In total our state-action space consists of 2592 state-action pairs.


\subsubsection{States}
\label{subsubsec:states}
The state space is based on five features, which are presented in Table \ref{tab:states}.

\begin{table}[h]
\begin{center}
    \begin{tabular}{l l l}
	State features	&Discretization					& Intervals\\
 	 \hline
	Ammo Distance 	& Steps towards best ammo location		& 0, 1, 2, 3+\\
	Task Distance	& Steps towards the assigned CP 		& 0, 1, 2, 3, 4, 5+\\
	Foe	 		& Threats and targets for shooting		& 0, 1, 2, 3, 4\\
	Ammo			& Number of bullets of Agent			& 0, 1-2, 3+\\
	Task	 		& State of the assigned CP 			& Red, Neutral, Blue
    \end{tabular}	
\end{center}
\caption{The state space used by the agents}
\label{tab:states}
\end{table}

Two features are defined by a distance: `Ammo Distance' and `Task Distance'. Distance is defined as the number of game steps it takes for an agent to reach its goal. To determine the best discretization steps, multiple games were run while recording, at every step, the distances of each agent to the ammo locations and the CPs. The resulting histograms are shown in Figure \ref{fig:ammodistance} and Figure \ref{fig:taskdistance}. The state feature `Foe' is an abstraction of the foes in the area. It distinguishes between: `no foes present' (0), `there are foes that can shoot me and foes I can shoot' (1), `there are foes that can shoot me but none I can shoot' (2), `there are no foes that can shoot me but there are foes I can shoot' (3), and finally `there are no foes that can shoot me and none I can shoot' (4). This representation is chosen because it abstracts away the number of foes present while still preserving the information needed to choose an action. If an agent picks up an ammo pack, its ammo count increases by 3. Since an ammo count of 1 or 2 makes little practical difference, an `Ammo' feature with 3 values is chosen. The definition of the `Task' feature is straightforward. In total, the features described above yield 1080 possible states.
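As an illustration, the discretization above could be implemented along these lines (a hypothetical helper, not the authors' code; the bin edges follow the state-feature table):

```python
def discretize_state(ammo_steps, task_steps, foe_code, ammo_count, cp_owner):
    """Map raw observations to one of 4*6*5*3*3 = 1080 discrete states."""
    ammo_dist = min(ammo_steps, 3)            # bins: 0, 1, 2, 3+
    task_dist = min(task_steps, 5)            # bins: 0, 1, 2, 3, 4, 5+
    assert foe_code in range(5)               # threat/target combinations 0..4
    # ammo bins: 0, 1-2, 3+
    ammo_bin = 0 if ammo_count == 0 else (1 if ammo_count <= 2 else 2)
    assert cp_owner in ('red', 'neutral', 'blue')
    return (ammo_dist, task_dist, foe_code, ammo_bin, cp_owner)
```

For example, an agent 7 steps from ammo, 9 steps from its CP, under fire without a target, carrying 4 bullets, assigned to a neutral CP maps to `(3, 5, 2, 2, 'neutral')`.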

\begin{figure}[h]
  \centering   
  \includegraphics[scale=0.50]{./stepsammo.png} 
  \caption{Distribution of the number of steps to the ammo locations} 
 \label{fig:ammodistance}
\end{figure}


\begin{figure}[h]
  \centering   
  \includegraphics[scale=0.50]{./stepstask.png} 
  \caption{Distribution of the number of steps to the assigned CP (task)} 
 \label{fig:taskdistance}
\end{figure}


\subsubsection{Actions}
The action space of the agents could be defined as the number of degrees the agent should turn, how many pixels it should drive and whether it should shoot. Since these values are continuous and can be computed exactly (and are therefore placed in the agent's low level), a more generalized action space is chosen. The generalized actions are defined as:
\begin{itemize}
\item Go to the assigned CP (task)
\item Pick up ammo
\item Shoot foe
\end{itemize}
Determining which ammo location the agent should go to is done by calculating the number of steps needed to go from its current position to an ammo location, wait there for ammo to respawn if necessary, and then move on to its assigned CP. The path with the lowest number of steps is chosen, and the corresponding ammo location is taken as the best ammo location.
Performing the `Shoot foe' action is only possible when the agent observes a foe and is able to shoot it. As described in Section \ref{subsubsec:states}, this is only the case in 2 of the 5 values of the `Foe' state.
Averaged over the states, our action space thus consists of 2.4 actions.
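The counts above can be double-checked with a little arithmetic (illustrative only; the numbers follow the state-feature table and the action definitions above):

```python
# States: 4 ammo-distance * 6 task-distance * 5 foe * 3 ammo * 3 task values
n_states = 4 * 6 * 5 * 3 * 3           # = 1080

# Actions: 'go to CP' and 'pick up ammo' are always available;
# 'shoot foe' only in 2 of the 5 foe values, hence 2.4 actions on average.
avg_actions = (2 / 5) * 3 + (3 / 5) * 2
n_pairs = n_states * avg_actions       # = 2592 state-action pairs
```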


\subsection{Q-learning}
The first implementation consists of a `plain' Q-learning approach as described in \cite{ref:sutonbarto}. The update rule is:
\begin{equation}
Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha \delta_t
\end{equation}
\begin{equation}
 \delta_t = R_{t+1} + \gamma \operatorname*{max}_{a'} Q(s_{t+1},a')-Q(s_t,a_t)
\end{equation}
Where $\alpha$ is the learning rate, $R_{t+1}$ is the reward received after time step $t$, $\gamma$ is the discount factor and $Q(s_t, a_t)$ is the Q-value.
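In code, a single application of this rule might look as follows (a generic sketch of standard Q-learning with illustrative state and action names, not the project's implementation):

```python
from collections import defaultdict

Q = defaultdict(float)                 # (state, action) -> value
ALPHA, GAMMA = 0.1, 0.9                # learning rate, discount factor
ACTIONS = ('goto_cp', 'pickup_ammo', 'shoot')

def q_update(s, a, reward, s_next):
    """One-step Q-learning: delta = R + gamma * max_a' Q(s', a') - Q(s, a)."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    delta = reward + GAMMA * best_next - Q[(s, a)]
    Q[(s, a)] += ALPHA * delta
    return delta
```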
A second implementation incorporates eligibility traces. These traces are maintained individually per agent and implemented as `replacing traces' \cite{ref:sutonbarto}. The adapted update equation is:
\begin{equation}
Q(s,a) \leftarrow Q(s,a) + \alpha \delta_t e_t(s,a) \qquad \forall s,a
\end{equation}
where the eligibility is
\begin{equation}
e_t(s,a) = \gamma \lambda \, e_{t-1}(s,a), \qquad e_t(s_t,a_t) = 1
\end{equation}
with $\lambda$ the trace-decay parameter.
Each time the agent takes an exploratory action, the eligibility traces are reset. Exploration is done using $\epsilon$-greedy action selection. Taking the trace decay to its extreme yields the Monte Carlo variant used in the experiments.
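A minimal sketch of the replacing-traces update (illustrative names; the real implementation maintains the traces per agent):

```python
# Q(lambda) with replacing traces: after each step the TD error is applied
# to every eligible pair, all traces decay by gamma*lambda, and the trace of
# the visited pair is reset to 1. Exploration cuts the trace.
ALPHA, GAMMA, LAMBDA = 0.1, 0.9, 0.8
Q, e = {}, {}

def trace_update(s, a, delta, exploratory=False):
    e[(s, a)] = 1.0                      # replacing trace for the visited pair
    for sa in list(e):
        Q[sa] = Q.get(sa, 0.0) + ALPHA * delta * e[sa]
        e[sa] *= GAMMA * LAMBDA          # decay all traces
    if exploratory:                      # reset after an exploratory action
        e.clear()
```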

\subsection{Reward}
For the agents to learn, they require feedback in the form of reward. An agent's main goal is to capture or hold the CP it is assigned to. Capturing the control point therefore gives a reward (+5 if it was neutral, +10 if it was held by the foes) to all agents assigned to that control point. A negative reward is given for losing the control point (-5 for losing it to neutral, -10 for losing it to the foes). To speed up learning, intermediate goals are also rewarded: collecting ammo (+3), killing enemies (+5), and a negative reward for being killed (-5).
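The reward scheme can be summarized in a small lookup (event names here are hypothetical; the values follow the paragraph above):

```python
# Reward values per game event, as described in the Reward subsection.
REWARDS = {
    'cp_captured_from_neutral':  +5,
    'cp_captured_from_foe':     +10,
    'cp_lost_to_neutral':        -5,
    'cp_lost_to_foe':           -10,
    'ammo_collected':            +3,
    'foe_killed':                +5,
    'killed':                    -5,
}

def step_reward(events):
    """Sum the rewards for all events an agent experienced this step."""
    return sum(REWARDS[ev] for ev in events)
```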


\section{Experimental Setup}
In the previous section the basic model was given. Here we fill in the details of the parameters and the discretization of the state space.
Training was done against an agent with the same action set that selects its actions randomly.

\subsection{Discretization of the Distance}
We cannot use a state for every possible distance, because that would create too many states. Therefore we initially used an arbitrary threshold, treating every distance above the threshold the same. Later we recorded how often certain distances occurred; the results can be seen in Figure \ref{fig:ammodistance} and Figure \ref{fig:taskdistance}. Based on these results we adjusted the state discretization to: Ammo Distance [0, 1-2, 3-8, more] and Task Distance [0, 1, 3-6, 7-10, more]. We ran a small experiment of 3000 games with identical setups, the only difference being that one set of agents used the old discretization and the other the new, data-based one. In the end the difference was negligible. Probably the best discretization is simply [0, 1, more]: due to the nature of the game, distance predictions beyond 1 step are not very reliable, so distinguishing between them is of little use, and any advantage is offset by the increase in the state space.

\subsection{Optimal Strategy}\label{hungarian}
In the game there are 3 CPs, and the initial strategy assigns 2 agents to each CP: agents 1 and 2 to the first CP, agents 3 and 4 to the second, and agents 5 and 6 to the third. With this setup a problem occurs when an agent that is far away from its base gets killed and has to travel back to its CP. This takes a lot of time, making it easier for the opponent to capture the CP. We therefore make the strategy dynamic, so that agents can switch between CPs. We use the step distance of every agent to the CPs and compute the assignment with the minimal total step distance. This is done with the Hungarian algorithm.
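The assignment step can be illustrated with a small sketch. The game uses a built-in Hungarian implementation; here, purely for illustration and with made-up distances, a brute-force search over the slot permutations finds the same minimum-cost assignment that the Hungarian algorithm computes in polynomial time:

```python
from itertools import permutations

# steps[i][j]: step distance from agent i to CP j (illustrative numbers).
steps = [[1, 4, 9],
         [2, 3, 8],
         [7, 2, 5],
         [8, 1, 4],
         [9, 6, 1],
         [3, 7, 2]]

# Two slots per CP, so the 6 agents fill the slots (0, 0, 1, 1, 2, 2).
# With only 6 agents, brute force over all slot orderings is feasible and
# yields the assignment with minimal total step distance.
slots = (0, 0, 1, 1, 2, 2)
best = min(permutations(slots),
           key=lambda p: sum(steps[i][cp] for i, cp in enumerate(p)))
total = sum(steps[i][cp] for i, cp in enumerate(best))
```

With these distances, agents 1 and 2 keep CP 1, agents 3 and 4 keep CP 2, and agents 5 and 6 keep CP 3, for a total of 9 steps.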
\subsection{Overview of the Tests}
To summarize, a number of tests were done to investigate the learning behaviour. The basis of the experiments is learning with eligibility traces. Different aspects were tested, as described above: the effect of using the Hungarian algorithm, expanding the state space, and looking at the extremes of the eligibility back-propagation. Table \ref{tab:short} summarizes the different methods with their abbreviations.
\begin{table}
\begin{center}
    \begin{tabular}{l l}
	Agent Type & Abbreviation\\
 	 \hline
	Eligibility traces  & E\\
	Eligibility traces with Hungarian& EH\\
	Monte Carlo & MC\\
	Q-learning with Hungarian& QH\\
	Monte Carlo with Hungarian& MCH\\
	Expanded State Space & ESS
    \end{tabular}	
\end{center}
\caption{A list of all the different implementations with their abbreviations}
\label{tab:short}
\end{table}

\section{The Results}
Initially, exploration is tested to see whether the agent visits at least most of the states. As previously calculated, the total number of state-action pairs is 2592. Figure \ref{fig:states} shows that after 3000 games the agent has visited 2376 of them, almost the total number, so exploration seems to work fine.\\
\begin{figure}[h]
  \centering   
  \includegraphics[scale=0.50]{./states.png} 
  \caption{Number of state-action pairs visited}
 \label{fig:states}
\end{figure}
We trained an agent with each method, with learning enabled for 3000 games of 600 steps each. Here we give the performance of the different methods against the random agent, i.e. an agent that selects its actions randomly from the available action set. We tested the variants of our agent with and without the Hungarian algorithm. A special case is the Expanded State Space agent: it is identical to eligibility traces with Hungarian, except that it uses a finer discretization of the state features.
Tables \ref{tab:scoresonline} and \ref{tab:scoresoffline} give the performance during learning (online) and after learning (offline), respectively. This distinction is important because the methods behave differently under the two circumstances, which affects the choice of algorithm; for example, if learning is acceptably fast, the agent could keep learning to adapt to its opponent. Looking at the average scores, Q-learning with Hungarian seems the best choice for online learning, as seen in Table \ref{tab:scoresonline}. Unfortunately, due to the stochastic behaviour of the agents the differences are not always significant. This is illustrated in Table \ref{tab:significantonline}, tested with an unpaired Student's t-test at the 0.05 significance level; there it can be seen, for example, that the score of QH is significantly higher than the score of MCH.
Q-learning and Monte Carlo work best in the offline case, as can be seen in Table \ref{tab:scoresoffline}. Overall, the best performance with significance, as shown in Table \ref{tab:significantoffline}, is still obtained with the expanded state space.
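The significance test can be sketched as follows (a generic pooled-variance Student's t statistic, not the authors' test script; the resulting $|t|$ is compared against the critical value for the 0.05 level at the appropriate degrees of freedom):

```python
from statistics import mean, variance

def t_statistic(a, b):
    """Unpaired (pooled-variance) Student's t statistic for two samples."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
```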

\begin{table}
\begin{center}
    \begin{tabular}{l c}
	Agent Type&Avg Score\\
 	 \hline
	Monte Carlo with Hungarian& 655\\
	Eligibility traces with Hungarian& 678\\
	Monte Carlo & 679\\
	Expanded State Space & 681\\
	Eligibility traces  & 686\\
	Q-learning with Hungarian& 709
    \end{tabular}	
\end{center}
\caption{The average score online}
\label{tab:scoresonline}
\end{table}

\begin{table}
\begin{center}
    \begin{tabular}{|p{0.7cm}||p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|}
\hline
&MCH&EH&MC&ESS&E&QH\\
\hline
MCH&x&-&-&-&-&+\\
\hline
EH&&x&-&-&-&+\\
\hline
MC&&&x&-&-&+\\
\hline
ESS&&&&x&-&+\\
\hline
E&&&&&x&-\\
\hline
QH&&&&&&x\\
\hline

    \end{tabular}	
\end{center}
\caption{Significance of improvement, online}
\label{tab:significantonline}
\end{table}


\begin{table}
\begin{center}
    \begin{tabular}{l c}
	Agent Type&Avg Score\\
 	 \hline
	Eligibility traces  & 571\\
	Eligibility traces with Hungarian& 619\\
	Monte Carlo & 666\\
	Q-learning with Hungarian& 669\\
	Monte Carlo with Hungarian& 675\\
	Expanded State Space & 716
    \end{tabular}	
\end{center}
\caption{The average score offline}
\label{tab:scoresoffline}
\end{table}

\begin{table}
\begin{center}
    \begin{tabular}{|p{0.7cm}||p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|}
\hline	
&E&EH&MC&QH&MCH&ESS\\
\hline
E&x&+&+&+&+&+\\
\hline
EH&&x&+&+&+&+\\
\hline
MC&&&x&-&-&+\\
\hline
QH&&&&x&-&+\\
\hline
MCH&&&&&x&+\\
\hline
ESS&&&&&&x\\
\hline

    \end{tabular}	
\end{center}
\caption{Significance of improvement, offline}
\label{tab:significantoffline}
\end{table}

\section{Conclusion}



Early on in the course we decided not to build a heuristic agent. We figured that a learning agent would make for a far better report, so why waste time on a heuristic one? After finishing the project we have to reconsider that decision, for two main reasons. First, building a heuristic agent and then extending it with learning is far easier than building a learning agent from the bottom up. When a learning agent is failing, it is not clear why: the problem could be an error in the implementation, or the representation might be inadequate. With a working agent as a basis it is much easier to isolate such problems. Second, the thought process involved in making a heuristic agent gives a better intuition for a good state representation later on, when implementing reinforcement learning.

The performance of our agent is rather poor, and the reason is clear: our agents do not cooperate. The main challenge of this game is resource coordination, which includes making sure that ammo is shared optimally between agents, but also coordination between agents when shooting multiple foes. The problem is that our agents cannot learn this coordination, because we only do independent Q-learning. So why did we choose Q-learning in the first place?

Coordination in Q-learning is always very difficult, and in the beginning we did not expect coordination to be this important in the game. We thought that there was a lot to gain in the behaviour of the individual agent. The reason individual behaviour turns out to matter less is that the game is not as dynamic as it first seems. We hoped that things like avoiding foes to prevent being shot would be possible, but in the game the possibilities for this are limited. As a result the individual behaviour is actually pretty straightforward and is not a real learning problem.

\section{Further Work}
An often-observed failure of the agents is the lack of communication about which agent takes which ammo pack. Because there is no communication about this, it happens quite often that multiple agents want to take the same ammo, while only one (or none) of them can have it. Determining which agent gets which ammo and communicating this information would result in better performance: agents would have more ammo and therefore a better chance of defeating the opposing team. To solve this problem, a Hungarian approach could be used for the ammo assignment as well.



\begin{thebibliography}{1}

\bibitem{ref:sutonbarto}
Richard S. Sutton and Andrew G. Barto, \emph{Reinforcement Learning: An Introduction}. \relax MIT Press, Cambridge, MA, 1998. A Bradford Book.

\end{thebibliography}

\end{document}
