\documentclass[11pt,a4paper,oneside]{article}
\usepackage{fullpage}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{subfigure}


\begin{document}
\title{Autonomous Agents \\ Assignment 3: Multi-Agent Planning and Learning}
\author{Robbert Iepsma 6139108 \and Chiel Kooistra 5743028  \and Boudewijn Bod\'ewes 6049028}
\date{Deadline: 21/10/2012, 23:59}
\maketitle

\section{Introduction} In this assignment we again build on our implementation of the Predator--Prey game. Our objective is to modify the environment so that more than one predator can hunt the prey. The predators operate as a team and share their reward: they all receive a positive reward when any of them catches the prey. If, however, two predators crash into each other, all predators receive a negative reward; this holds even when the prey is at the location of the crash, or when one predator catches the prey while two others crash. Another rule change is that all agents now move simultaneously, which enables two agents to, so to speak, switch places without crashing or the prey being caught! To make things more interesting, the predators no longer hunt a randomly moving prey but one with a mind of its own. This could make the prey impossible to catch, as it could always stay a tile ahead of the predators. Luckily for the predators, the prey trips with a probability of 0.2, which ensures that they are in principle always able to catch it. In Subsection \ref{ss:ImplementationOfChanges} we describe the choices we made and the steps we took to implement these changes to the environment. Subsection \ref{ss:IndependentQ-learning} presents an analysis of independent Q-learning for different numbers of predators and parameter settings. Lastly, we implement the minimax-Q algorithm and analyze its performance compared to independent Q-learning.
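The shared-reward rule above can be summarised in a few lines. The following is an illustrative sketch; the function name, position encoding and reward magnitudes are our own choices for this example, not necessarily those of the actual implementation:

```python
def joint_reward(predator_positions, prey_position,
                 catch_reward=10.0, crash_penalty=-10.0):
    """Shared reward for the whole team of predators.

    A crash (two predators ending up on the same tile) dominates every
    other outcome: even if the prey is on the crash tile, or another
    predator catches the prey in the same step, the team is penalised.
    Otherwise the team is rewarded if any predator reaches the prey.
    """
    # Crash: at least two predators occupy the same tile.
    if len(set(predator_positions)) < len(predator_positions):
        return crash_penalty
    # Catch: some predator occupies the prey's tile.
    if prey_position in predator_positions:
        return catch_reward
    return 0.0
```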


\section{Assignment}
\subsection{Implementation of changes}\label{ss:ImplementationOfChanges}
Implementing the new rules and conditions in our game was rather straightforward, and we did not encounter any problems worth mentioning. What is worth mentioning, though, is the growth of $Q$ as a consequence of the new rules. While our former state--action space for one predator was a manageable $5 \times 11^2$, every extra predator adds a factor $11^2$. The size of $Q$ is thus roughly $|Q| = 5 \times 11^{2n}$, where $n$ is the number of predators. With 4 predators this leads to a $Q$ of approximately one billion entries, of which every agent stores its own copy! The only reduction we made is that an agent considers two states equal if they differ only in two or more predators other than the agent itself swapping places; that is, an agent only discriminates between the prey, itself and the set of other predators. As can be expected, running many episodes with more than three agents becomes practically infeasible, which is why we only tested our algorithms with up to three predators.
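This state abstraction can be made concrete with a small sketch. Assuming states are stored as positions relative to the agent on an $11 \times 11$ toroidal grid (the helper names below are illustrative), sorting the other predators' relative positions maps states that differ only by a swap of two other predators onto the same key:

```python
def canonical_state(own_pos, prey_pos, other_predators, size=11):
    """State key that ignores the identity of the other predators."""
    def rel(p):
        # Position relative to this agent on a toroidal grid.
        return ((p[0] - own_pos[0]) % size, (p[1] - own_pos[1]) % size)
    # Sorting makes permutations of the other predators indistinguishable.
    return (rel(prey_pos),) + tuple(sorted(rel(p) for p in other_predators))

def q_size(n_predators, n_actions=5, size=11):
    # |Q| = 5 * 11^(2n): the prey and the n-1 other predators each
    # contribute 11^2 relative positions, for n factors of 11^2 in total.
    return n_actions * (size ** 2) ** n_predators
```

For 4 predators `q_size` indeed exceeds one billion entries.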
	

\subsection{Independent Q-learning}\label{ss:IndependentQ-learning}

In Figure~\ref{fig:alpha} (c and d) we plot the average length of a game against the number of episodes played, regardless of whether the game is won by the predators or by the prey. In Figure~\ref{fig:alpha} (a and b) we show the percentage of $Q$ that has been filled against the number of episodes played. We did this for 1 and 2 predators, running $10 \times 1000$ and $10 \times 90{,}000$ episodes respectively. The results were averaged over the 10 runs and smoothed by averaging over 1000 episodes for the 2-predator setting and over 10 episodes for the 1-predator setting. In both cases we see that the average length of a game only starts to decrease once almost every state in $Q$ has been visited more than once.
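For reference, in independent Q-learning each predator keeps its own Q-table and updates it with the standard rule, treating the other agents as part of the environment. A minimal sketch on a dict-backed Q-table (hyperparameter values are illustrative):

```python
import random

def q_update(Q, state, action, reward, next_state,
             alpha=0.1, gamma=0.9, n_actions=5):
    """Standard Q-learning step on a dict-backed Q-table."""
    best_next = max(Q.get((next_state, a), 0.0) for a in range(n_actions))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def epsilon_greedy(Q, state, epsilon=0.1, n_actions=5):
    """Explore with probability epsilon, otherwise act greedily."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q.get((state, a), 0.0))
```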

\begin{figure}[ht!]
     \begin{center}
        \subfigure[Percentage of $Q$ filled, 1 predator]{%
            \includegraphics[width=0.5\textwidth]{Q1}
        }%
        \subfigure[Percentage of $Q$ filled, 2 predators]{%
            \includegraphics[width=0.5\textwidth]{Q2}
        }
        \subfigure[Average length of a game, 1 predator]{%
           \includegraphics[width=0.5\textwidth]{runs1}
        }%
        \subfigure[Average length of a game, 2 predators]{%
           \includegraphics[width=0.5\textwidth]{runs2}
        }
    \end{center}
    \caption{Comparison of independent Q-learning for one and two predators.}
    \label{fig:alpha}
\end{figure}

	

\subsection{minimax-Q algorithm}\label{ss:minimax-Qalgorithm}
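At the heart of minimax-Q (Littman, 1994), the $\max$ over actions in the Q-learning update is replaced by the value of the zero-sum matrix game between a predator and the prey in the current state, which is normally computed with a linear program. As a dependency-free illustration of this step only, the sketch below approximates the value of a matrix game with fictitious play; this solver is a stand-in for the LP, not necessarily the one used in an actual implementation:

```python
def matrix_game_value(payoff, iterations=20000):
    """Approximate the maximin value of a zero-sum matrix game.

    payoff[i][j] is the row player's reward when row plays i and the
    (minimising) column player plays j. Both players repeatedly
    best-respond to the opponent's empirical action frequencies.
    """
    n_rows, n_cols = len(payoff), len(payoff[0])
    row_counts = [1] + [0] * (n_rows - 1)
    col_counts = [1] + [0] * (n_cols - 1)
    for _ in range(iterations):
        # Row best-responds to the column player's empirical mix.
        row = max(range(n_rows),
                  key=lambda i: sum(payoff[i][j] * col_counts[j]
                                    for j in range(n_cols)))
        # Column best-responds (minimises) to the row player's mix.
        col = min(range(n_cols),
                  key=lambda j: sum(payoff[i][j] * row_counts[i]
                                    for i in range(n_rows)))
        row_counts[row] += 1
        col_counts[col] += 1
    total = sum(row_counts)
    # Worst-case payoff of the row player's empirical mixed strategy.
    return min(sum(payoff[i][j] * row_counts[i] for i in range(n_rows)) / total
               for j in range(n_cols))
```

In the full algorithm, this game value replaces $\max_{a'} Q(s', a')$ in the update, and the corresponding maximin mixed strategy is used as the agent's policy.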


\section{Conclusion}


\end{document}
