\documentclass[a4paper,12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{biblatex}
\usepackage{wrapfig}
\usepackage{pdflscape}
\usepackage{float}
\usepackage{changepage}
\usepackage[pdftex,colorlinks=true]{hyperref}


\addbibresource{references.bib}

%opening
\title{Games and A.I. Techniques\\
Assignment One\\Report}
\author{James Alford         9803143Q\\
Matthew Blair     s3251525\\
Russell Cowan     s3196589\\
Thomas Harris     s3236050\\}
\begin{document}

\maketitle
\vfill
\tableofcontents
\newpage

\section{Instructions}

To open the project in the editor, load the main.unity scene, which is located in the Assets folder.

Once the game starts, move the mouse to control the captain herder; the captain herder follows the mouse's movement.
To zoom in and out, scroll the mouse wheel.
To move about the field, drag the mouse while pressing and holding the middle mouse button.


Turn on Gizmos in the Unity Editor (top right of the play window) to see the direction and collision rays produced by the agents and herders.

\section{Agent State Machine}


\begin{figure}[H]
 \centering
  \includegraphics[scale=0.4]{./images/image00.png}
 \caption{State Machine}
\end{figure}


Agent behaviour is managed by a state machine.
Each state uses one or more steering behaviours to control an agent at any one time. 

This enables agents to:
\begin{itemize}
 \item Flee from approaching herder(s).
 \item “Flock” as they flee the herder(s).
 \item Attempt to “Break Out” from the herd.
 \item Wander as a loose flock.
\end{itemize}
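The transitions between these behaviours can be sketched as a minimal state machine. The following is an illustrative Python sketch, not the project's C\# code; the range and probability constants are assumed values:

```python
# Minimal sketch of the agent state machine. FLEE_RANGE and
# BREAKOUT_CHANCE are assumed values, not the project's tuning.
FLEE_RANGE = 5.0        # distance at which a herder is "too close"
BREAKOUT_CHANCE = 0.01  # per-update probability of attempting a break out

WANDER, FLEE, BREAK_OUT = "wander", "flee", "break_out"

def next_state(state, dist_to_herder, breakout_roll, breakout_done=False):
    """Return the agent's next state given its current state and inputs."""
    if state == WANDER:
        return FLEE if dist_to_herder < FLEE_RANGE else WANDER
    if state == FLEE:
        if dist_to_herder >= FLEE_RANGE:
            return WANDER
        return BREAK_OUT if breakout_roll < BREAKOUT_CHANCE else FLEE
    if state == BREAK_OUT:
        # Assumed: the agent resumes wandering once it has moved far
        # enough from its break-out start position.
        return WANDER if breakout_done else BREAK_OUT
    raise ValueError(state)
```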

  \subsection{Agent States}
Agents change colour based on their state.

    \subsubsection{Wander State - \emph{White}}
This state is entered when the agent is out of range of all herders.

While in the Wander state, an agent's movement is controlled by both the \textbf{Wander} and \textbf{Boid} steering behaviours.
The Wander steering behaviour adds randomness to an agent's motion, much as a relaxed, grazing animal might move.
A small amount of the Boid steering behaviour is blended in so that the agents, although moving randomly, tend to flock together.
For wandering, the Separation sub-behaviour of Boid is given a higher weight so that the agents don't get too close to each other.

This state is left when a herder gets too close for comfort. 

    \subsubsection{Flee State - \emph{Green}}

This state is entered when a herder gets too close.

While fleeing, an agent's movement is controlled by both the \textbf{Flee} and \textbf{Boid} steering behaviours. The Flee steering behaviour drives the agent away from the closest herder. The Boid steering behaviour is blended in with a high weight given to its Cohesion sub-behaviour so that agents flock closely together while fleeing, as scared animals might.

There are two events that cause the agent to leave this state: either the agent is no longer too close to a herder, or the agent attempts a “break out”.

The decision to attempt a break out is determined by a random number generator.

    \subsubsection{Break Out State - \emph{Black}}

The Break Out state is entered when a fleeing agent decides to try to evade both its flock and its herder.

When a break out starts, an agent notes the centre of its local flock.

While breaking out, an agent's movement is determined by blending the results of two Flee steering behaviours: the first steers the agent to \textbf{flee} its local flock, the second steers it to \textbf{flee} its herder. By using both, the agent moves away from its flock, but not towards its herder. This state is left when the agent has moved a given distance away from the position it started the break out from.
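Blending the two flee directions can be sketched as follows (illustrative Python, not the project's C\# code):

```python
def _normalise(v):
    """Scale a 2D vector to unit length (zero vector stays zero)."""
    mag = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return (v[0] / mag, v[1] / mag) if mag > 0 else (0.0, 0.0)

def flee_direction(agent_pos, threat_pos):
    """Unit vector pointing directly away from a threat."""
    return _normalise((agent_pos[0] - threat_pos[0],
                       agent_pos[1] - threat_pos[1]))

def break_out_direction(agent_pos, flock_centre, herder_pos):
    """Blend fleeing the noted flock centre with fleeing the herder, so
    the agent leaves the flock without running towards the herder."""
    a = flee_direction(agent_pos, flock_centre)
    b = flee_direction(agent_pos, herder_pos)
    return _normalise((a[0] + b[0], a[1] + b[1]))
```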
\section{Agent Steering Behaviours}

All of the agent steering behaviours are subclasses of the abstract class “Steering Behaviour”. This allows us to access them through a uniform interface: the method “GetSteering” returns a target position to steer the agent towards.
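A rough Python analogue of this interface follows. The project itself is in C\#; the class and method names below merely mirror the report's naming and the Flee target calculation is an assumption, not the actual code:

```python
from abc import ABC, abstractmethod

class SteeringBehaviour(ABC):
    """Uniform interface: every behaviour yields a target position."""

    @abstractmethod
    def get_steering(self, agent_pos):
        """Return a position to steer the agent towards."""

class Flee(SteeringBehaviour):
    def __init__(self, threat_pos):
        self.threat_pos = threat_pos

    def get_steering(self, agent_pos):
        # Target the point mirroring the threat through the agent,
        # i.e. directly away from the threat.
        return (2 * agent_pos[0] - self.threat_pos[0],
                2 * agent_pos[1] - self.threat_pos[1])
```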

\subsection{Flee}

%TODO A vector average of the herders
Uses the agent's position and the position of the closest herder to steer the agent away from that herder.

\subsection{Wander}

\begin{figure}[H]
 \centering
  \includegraphics[scale=0.8]{./images/image03.png}
 \caption{Wander}
\end{figure}





To give random but orderly motion, a point is chosen in front of the agent; then a random point on a circle around that point is chosen as the agent's target.

This is based on the Wander behaviour outlined in \emph{Artificial Intelligence for Games}\cite{ianmillington}.
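The target calculation can be sketched as follows (illustrative Python; the look-ahead distance and circle radius are assumed tuning values, not the project's numbers):

```python
import math
import random

def wander_target(pos, heading, look_ahead=3.0, radius=1.0, rng=random):
    """Random point on a circle placed in front of the agent.

    pos is (x, y) and heading is in radians; look_ahead and radius
    are assumed tuning values.
    """
    # Centre of the circle, projected ahead along the heading.
    cx = pos[0] + look_ahead * math.cos(heading)
    cy = pos[1] + look_ahead * math.sin(heading)
    # Random point on that circle becomes the steering target.
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return (cx + radius * math.cos(theta), cy + radius * math.sin(theta))
```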


\subsection{Boid}

Our agents use a simplified boid flocking behaviour based on the boids paper\cite{siggraph} and website\cite{craigreynolds}. Our boids implementation is broken down into three simple sub-behaviours, each of which every agent independently calculates and acts upon.

\begin{figure}[H]
 \centering
  \includegraphics[scale=0.8]{./images/image05.jpg}
 \caption{Boid vectors}
\end{figure}


The Cohesion behaviour, shown as a red ray, keeps the agents together: it finds the average position of the agent's nearest neighbours and moves the agent towards it.

The Alignment behaviour, shown as a light blue ray, takes the average direction of the agent's neighbours and steers the agent towards that average.

The Separation behaviour, shown as a green ray, takes the average position of the agent's nearest neighbours and moves the agent away from that position.
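The three sub-behaviours can be sketched as follows (illustrative Python, not the project's C\# code):

```python
def _average(points):
    """Component-wise average of a list of 2D vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def cohesion(agent_pos, neighbour_positions):
    """Vector towards the neighbours' average position (red ray)."""
    avg = _average(neighbour_positions)
    return (avg[0] - agent_pos[0], avg[1] - agent_pos[1])

def separation(agent_pos, neighbour_positions):
    """Vector away from the neighbours' average position (green ray)."""
    c = cohesion(agent_pos, neighbour_positions)
    return (-c[0], -c[1])

def alignment(neighbour_velocities):
    """The neighbours' average direction of travel (light blue ray)."""
    return _average(neighbour_velocities)
```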

\begin{figure}[H]
 \centering
  \includegraphics[scale=0.6]{./images/image09.jpg}
 \caption{Flocking agents}
\end{figure}


These three simple sub-behaviours are then blended with weights to produce a new direction, shown as a white ray, for the agent. This combination creates the simulated flocking behaviour seen in the figure above: the agents flock together, following a similar direction (light blue), keeping close together (red) while also keeping a little space between them (green).
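The weighted blend can be sketched as follows (illustrative Python; the actual weights used in the project are tuned per state and are not reproduced here):

```python
def blend(vectors, weights):
    """Weighted sum of steering vectors, normalised into a single unit
    direction (the white ray). Weights are state-dependent tuning."""
    x = sum(w * v[0] for v, w in zip(vectors, weights))
    y = sum(w * v[1] for v, w in zip(vectors, weights))
    mag = (x * x + y * y) ** 0.5
    return (x / mag, y / mag) if mag > 0 else (0.0, 0.0)
```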

\section{Agent Movement}
The motion of the active objects is controlled through a “PhysicsObject” class. Its interface is a single method, setDestination, which takes a position vector as a parameter. The physics object is then responsible for applying a force (in the forward direction only) and a torque to the object that will take it to its destination.

The physics object applies a forward force proportional to the object's distance from the target and inversely proportional to its angle away from the target.
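One plausible reading of that relationship is the following sketch (illustrative Python; the exact falloff and the gain constant are assumptions, not the project's tuning):

```python
def forward_force(distance, angle_off, k=1.0):
    """Forward thrust that grows with distance to the target and
    shrinks as the facing error grows.

    angle_off is the absolute angle (radians) between the object's
    facing and the target; the 1 / (1 + |angle|) falloff is one
    plausible reading of "inversely proportional".
    """
    return k * distance / (1.0 + abs(angle_off))
```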


\section{Creative Extensions}


\subsection{Collision Avoidance}
Both the agents and herders use our PhysicsObject class to move. This allows us to detect when a collision is imminent and temporarily adjust the object's destination point, without any of the steering behaviours having to concern themselves with collision avoidance.

Each object uses a number of ray casts (shown by the dark blue lines) to “feel” for objects in its path. The ray length is based on a minimum length and scales with the magnitude of the velocity. The ray casts are spread out in a fan shape, getting shorter the further their angle is from straight ahead. If any of the rays hit something, a temporary new destination is found by sweeping the rays back and forth to find the smallest angle change that would avoid the obstacle; in this way the objects can quite effectively navigate around convex objects.
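The fan of whisker rays can be sketched as follows (illustrative Python; the ray count, spread, and length constants are assumptions, not the project's values):

```python
import math

def ray_fan(heading, speed, n_rays=5, spread=math.pi / 2,
            base_len=1.0, speed_scale=0.5):
    """Fan of (angle, length) whisker rays around the facing direction.

    Lengths start from a minimum, scale with speed, and shrink the
    further a ray's angle is from straight ahead.
    """
    rays = []
    for i in range(n_rays):
        # Offsets spread evenly from -spread/2 to +spread/2.
        offset = -spread / 2 + i * spread / (n_rays - 1)
        length = (base_len + speed_scale * speed) * (1.0 - abs(offset) / spread)
        rays.append((heading + offset, length))
    return rays
```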

\subsection{Multiple Herders}

The herders are implemented in a one-captain-to-many-slave-herders style. The captain herder doesn't actively herd; instead, it follows the mouse pointer at all times. The slave herders use a simple state machine to either converge on the captain (the herder turns red) or herd the agents (the herder turns yellow).

The converge behaviour is triggered when a slave herder is too far from the captain, and makes the slave herder move towards the captain's position. Agents instinctively move away from the nearest herder, which is shown as a yellow ray from the agent.


The herding behaviour involves the captain selecting a target group of agents. This is done by first finding the agent closest to the captain, then recursively adding to the group any agent within a set range of any agent already in the group. Once the herding group has been established, a circle is defined using the centre of the group and a radius based on how spread out the target group is.
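The group selection and circle construction can be sketched as follows (illustrative Python; agents are represented as plain positions, and the radius definition is one plausible reading of "how spread out the group is"):

```python
def _dist(a, b):
    """Euclidean distance between two 2D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def build_target_group(captain_pos, agents, link_range):
    """Seed with the agent closest to the captain, then recursively add
    any agent within link_range of any agent already in the group."""
    seed = min(agents, key=lambda a: _dist(a, captain_pos))
    group, frontier = {seed}, [seed]
    while frontier:
        current = frontier.pop()
        for other in agents:
            if other not in group and _dist(current, other) <= link_range:
                group.add(other)
                frontier.append(other)
    return group

def herding_circle(group):
    """Centre of the group, with a radius reflecting its spread."""
    cx = sum(p[0] for p in group) / len(group)
    cy = sum(p[1] for p in group) / len(group)
    return (cx, cy), max(_dist((cx, cy), p) for p in group)
```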

\begin{figure}[H]
\begin{minipage}[b]{0.5\linewidth}
\centering
 \includegraphics[scale=0.235]{./images/image01.png}
 \caption{A small target group}
\end{minipage}
\hspace{0.2cm}
\begin{minipage}[b]{0.5\linewidth}
\centering
 \includegraphics[scale=0.22]{./images/image07.png}
 \caption{A larger target group}
\end{minipage}
\end{figure}

After the herding circle has been established, each herder is assigned a position (an angle) on the circle based upon the order in which they were created. If the position a herder is assigned lies on the opposite side of the circle to the herder's current location, the herder is first given a destination off to the side of the circle; in this way the herders navigate around the circle instead of through it, since travelling through the circle would cause the agents to scatter - the opposite of herding.

\begin{figure}[H]
 \centering
  \includegraphics[scale=0.4]{./images/image08.png}
 \caption{Herders about to circle the flock}
\end{figure}

%\begin{landscape}

\begin{adjustwidth}{-4em}{-4em}

\section{Class Diagram}

\begin{figure}[H]
 \centering
  \includegraphics[scale=0.45]{./images/image02.png}
 \caption{Class Diagram}
\end{figure}
\end{adjustwidth}
%\end{landscape}


\pagebreak

\printbibliography

\end{document}
