\documentclass[11pt,a4paper,oneside]{article}
\usepackage{fullpage}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[pdftex]{graphicx}
\usepackage{graphicx, subfigure}


\begin{document}
\title{Autonomous Agents \\ Assignment 1: Single Agent Planning}
\author{Robbert Iepsma 6139108 \and Chiel Kooistra 5743028  \and Sebastian Dr\"oppelman 5783453 \and Boudewijn Bod{\'e}wes 6049028}
\date{Deadline: 21 September 2012, 23:59}
\maketitle

\section{Introduction}
In this report we present our answers to the questions of the first assignment of the `autonomous agents' course. Since this is the first of three consecutive assignments that build on one another, we have tried to reflect this in our code by emphasizing a generic design. Beyond the requirements of the exercises, we have implemented a GUI to display the status of the game as it proceeds. Our programming language of choice is Java, as it is among the recommended languages and one in which all team members are comfortable.

\section{Assignment}
\begin{enumerate}

\item[1.]
State: [ Predator (0,0), Prey (5,5) ] $\rightarrow$ Predator action (0,0) $\rightarrow$ Prey action (0,-1)\\
State: [ Predator (0,0), Prey (5,4) ] $\rightarrow$ Predator action (0,-1) $\rightarrow$ Prey action (1,0)\\
State: [ Predator  (0,10), Prey  (6,4) ] $\rightarrow$ Predator action  (0,1) $\rightarrow$ Prey action  (0,0)\\
State: [ Predator  (0,0), Prey  (6,4) ] $\rightarrow$ Predator action  (1,0) $\rightarrow$ Prey action  (0,0)\\
State: [ Predator  (1,0), Prey  (6,4) ] $\rightarrow$ Predator action  (0,1) $\rightarrow$ Prey action  (0,0)\\
State: [ Predator  (1,1), Prey  (6,4) ] $\rightarrow$ Predator action  (0,1) $\rightarrow$ Prey action  (0,0)\\
Average number of turns: 255.16; standard deviation: 196.85.
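The wrap-around behaviour visible in the trace above (the predator moving from (0,0) to (0,10) under action (0,-1)) can be sketched as follows. This is a minimal illustration, not our actual implementation; the class and method names are chosen for this example only.

```java
/** Sketch of one movement step on an 11x11 toroidal grid.
    Names (ToroidalStep, wrap, move) are illustrative only. */
public class ToroidalStep {
    static final int SIZE = 11;

    /** Wraps a coordinate into [0, SIZE): -1 becomes 10, 11 becomes 0. */
    static int wrap(int c) {
        return ((c % SIZE) + SIZE) % SIZE;
    }

    /** Applies an action (dx, dy) to a position (x, y) with wrap-around. */
    static int[] move(int x, int y, int dx, int dy) {
        return new int[] { wrap(x + dx), wrap(y + dy) };
    }

    public static void main(String[] args) {
        // The predator at (0,0) takes action (0,-1), as in the trace above.
        int[] p = move(0, 0, 0, -1);
        System.out.println(p[0] + "," + p[1]); // prints "0,10"
    }
}
```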

\item[2.]

\item[3.]
It became apparent immediately that representing each possible state of the game explicitly would result in a vast state space. Our world is an $11 \times 11$ grid containing two agents, i.e.\ predator and prey, each of which can occupy any of the $121$ locations, resulting in a state space of $11^4 = 14{,}641$ different states. We never used this naive representation; instead we immediately adopted a state space in which the prey is always at the centre of a field of $11 \times 11$ squares, reducing the state space by a factor of $11^2$. All world locations of the prey are treated as the same state, effectively making the location of the prey relative to that of the predator. For example, when the predator and prey are at locations (3,2) and (7,4) respectively, the state representation becomes (4,2).
(see Figure~1) \begin{figure}[ht!]
     \begin{center}
        \subfigure[real world]{%
            \includegraphics[width=0.35\textwidth]{oldstate}
        }%
        \subfigure[state space]{%
           \includegraphics[width=0.35\textwidth]{newstate}
        }
    \end{center}
    \caption{Difference between the real-world and state-space representation}
\end{figure}
In this way our state space is reduced to $11^2 = 121$ states, which improves the performance of our algorithms. Exploiting the symmetry of the world, an obvious further step would be to use horizontal, vertical and diagonal reflections to reduce the state space to 21 states. Keeping in mind our emphasis on a generic design, we did not pursue this reduction, since it would not be possible in a world containing more than two agents.
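The relative-state mapping described above can be sketched in a few lines. This is an illustrative sketch under the assumption that the state is the prey's position minus the predator's position, wrapped onto the torus; the names are ours for this example and do not come from our codebase.

```java
/** Sketch of the relative-state mapping: the prey's position as
    seen from the predator on an 11x11 torus. Names are illustrative. */
public class RelativeState {
    static final int SIZE = 11;

    /** Wraps a coordinate into [0, SIZE). */
    static int wrap(int c) {
        return ((c % SIZE) + SIZE) % SIZE;
    }

    /** Maps absolute positions to the relative state representation. */
    static int[] relativeState(int predX, int predY, int preyX, int preyY) {
        return new int[] { wrap(preyX - predX), wrap(preyY - predY) };
    }

    public static void main(String[] args) {
        // The worked example from the text: predator (3,2), prey (7,4).
        int[] s = relativeState(3, 2, 7, 4);
        System.out.println(s[0] + "," + s[1]); // prints "4,2"
    }
}
```

All $121$ absolute position pairs with the same offset map to the same state, which is exactly the factor-$11^2$ reduction described above.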


\item[4.]

\item[5.]

\end{enumerate}

\section{Conclusion}


\end{document}
