\documentclass[10pt]{article} 
\usepackage[a4paper]{geometry}
\geometry{verbose,tmargin=25mm,bmargin=25mm,lmargin=40mm,rmargin=10mm}
%\geometry{verbose,tmargin=10mm,bmargin=10mm,lmargin=10mm,rmargin=10mm}
\usepackage{color}
\usepackage{array}
\usepackage{multirow}
\usepackage{amsmath}
\usepackage{setspace}
\usepackage{graphicx}
\usepackage{subfig}
\usepackage{siunitx}
\doublespacing 
\usepackage[pdfusetitle,bookmarks=true,pdfborder={0 0
1},colorlinks=true]{hyperref}
\usepackage{listings}
\hypersetup{linkcolor=black,citecolor=black,filecolor=black,urlcolor=black}

\DeclareSIUnit\inch{in}

\begin{document}

\include{header}

\section{Introduction}

The aim of this project is to develop a set of suitable strategies to control a
team of five simulated robots to play a game of football against a team of five
opposing robots.

As discussed in the previously submitted project plan \cite{pplr}, the aim was
to produce a system that uses projections of the future state of the system to
plan trajectories for all of the robots, with the aim of scoring a goal.
The system would include a module that selected the best of the current options
based on the state of the game, and would include monitoring code to ensure that
the team abides by the rules of the game.

The following report begins by continuing and summarising the literature review
from the previous report.  It then goes on to describe the methods used to
develop the code that has been submitted, and the changes that were made as the
project continued. The final software produced is then discussed, along with its
limitations. Finally, the successes and limitations of the project as a whole
are discussed, as well as some of the work that could be done to extend this
project.


\section{Literature Review}

A literature review has already been completed in the previous report.  In
summary, a large amount of work has been done in developing robots for this
purpose.  Various different techniques have been examined to make the robots
work together, including flocking algorithms \cite{taskBasedFlocking},
artificial immune system algorithms \cite{artificialImmuneSystemCooperation} and
role based strategies \cite{taskRoleSelectionStrategy}
\cite{multiagentsDynamicBoxChange}. Decision making algorithms have included the
max-plus algorithm \cite{maxPlusAlgorithm}, simulated annealing
\cite{simulatedAnnealingDecisionMaking} and extended behaviour networks to model
human behaviour \cite{modellingHumanDecisionMaking}. Risk based algorithms have
also been tried \cite{balancingGainsRisksCostsPassing}. Route planning
techniques including A* \cite{aiModernApproach} and potential field navigation
\cite{intelligentAlgorithmPathPlanning} have been examined, and the latter has
been taken further in this project.

As well as the resources already considered, other resources were used to
support development of this project.  In particular, the OpenCL development
depended heavily on the introduction provided by AMD \cite{amdOpenCLTutorial}.
This was supported by the full and quick references provided by Khronos
(\cite{openCl11Spec} and \cite{openCl11QuickRef} respectively).
  
\section{Computational Methods}

\subsection{Motion Control\label{sub:motionControl}}

\subsubsection{Simulator Characteristics}

No documentation was provided for the simulator, and so all of the
characteristics of the system needed to be determined before a working system
could be produced.  This included the simulator timing system, the coordinate
system and the model used to represent the robots.

Two methods of timing could be used by the system:
\begin{itemize}
 \item Fixed time-step - Every simulator cycle is assumed to be a constant length, either running in real-time or freely running.
 \item Variable time-step - The time-step length is proportional to the wall-time elapsed since the last time-step.
\end{itemize}

In order to identify this, the real-time length of each time-step was recorded
over a period of time.  If a fixed time-step was used, and the simulator was
attempting to run in real-time, the results should show most of the readings at
this time-step level, with others longer due to other CPU use, but none
significantly shorter.  A free running real-time simulator would display a
varying time-step depending on the length of the instructions to be executed
each cycle, as well as any other activities currently taking place on the
computer.  This alone, however, would not be a conclusive result, as a fixed
time-step simulator that isn't running in real-time would display similar
results.
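The measurement itself is straightforward; a minimal sketch of the recording and
the summary statistics could look as follows (function names are illustrative,
and a plain loop stands in for the simulator cycle):

```cpp
#include <cassert>
#include <chrono>
#include <cmath>
#include <vector>

// Record the wall-clock length of each cycle.  The real strategy code
// would call this once per simulator time-step rather than in a loop.
std::vector<double> recordCycleLengthsMs(int cycles)
{
    std::vector<double> lengths;
    auto last = std::chrono::steady_clock::now();
    for (int i = 0; i < cycles; ++i)
    {
        // ... one simulator cycle would run here ...
        auto now = std::chrono::steady_clock::now();
        lengths.push_back(
            std::chrono::duration<double, std::milli>(now - last).count());
        last = now;
    }
    return lengths;
}

// Mean and standard deviation of the recorded lengths, as summarised
// beneath the distribution figure.
double mean(const std::vector<double>& v)
{
    double sum = 0.0;
    for (double x : v) sum += x;
    return sum / v.size();
}

double stddev(const std::vector<double>& v)
{
    double m = mean(v), sum = 0.0;
    for (double x : v) sum += (x - m) * (x - m);
    return std::sqrt(sum / v.size());
}
```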

The distribution produced is shown in Figure \ref{fig:timestepDistribution}.  As
can be seen, the real time-step length varies from
\SIrange{17.5}{26.0}{\milli\second}, distributed around the mean of
\SI{19.17}{\milli\second}.  As already discussed, this excludes the real-time
fixed time-step case, but it is not possible to use this result to determine the
simulator's time-step type.

\begin{figure}
 \includegraphics[width=\textwidth]{Images/time-step-length-distribution} \\
 Mean : \SI{19.17}{\milli\second} \\
 Standard Deviation : \SI{1.258}{\milli\second}

 \caption{Time-step Frequency Distribution}
 \label{fig:timestepDistribution}
\end{figure}

An initial run was set up to examine the behaviour of the system over time, and
is shown in Figure \ref{fig:initialSpeedTestRun}.  The velocity of the system
(determined by dividing the distance travelled by the time-step) has been
calculated with both the measured time-step, as well as a fixed time-step of
\SI{20}{\milli\second} (chosen as a round number approximating the values
measured). The actual value of the fixed time-step is not a concern, as it will
only have a scaling effect on the velocity, which can be accounted for later.

The calculation with a variable time-step (Figure
\ref{fig:initialSpeedTestRunVariable}) displays a significant oscillation of the
velocity around the expected behaviour.  In contrast, the fixed time-step values
almost exactly match the expected behaviour (Figure
\ref{fig:initialSpeedTestRunFixed}).  While it is possible that the simulator
could be emulating a non-steady output velocity from the motors, it is expected
that this would display in both graphs.  This makes it very likely that the
time-step used by the simulator has a fixed length.

The anomaly displayed at approximately \SI{0.6}{\second} in Figure
\ref{fig:initialSpeedTestRunFixed} is also seen in a large number of runs.  It
cannot be attributed to a single overly long simulator cycle length, as this
would be hidden by the fixed time-step length. The fact that it is present shows
that there were one or more simulator cycles where the system was not updated. 
This suggests that it is due to a bug in the simulator, and any controller will
need to be able to handle this sort of error with minimum disruption.

\begin{figure}
 \centering
 \subfloat[Variable Time-step Length Initial Run
 Velocity]{\label{fig:initialSpeedTestRunVariable}
 \includegraphics[width=0.8\textwidth]{Images/variable-time-step-initial-run}}
 \\
 \subfloat[Fixed Time-step Length Initial Run
 Velocity]{\label{fig:initialSpeedTestRunFixed}
 \includegraphics[width=0.8\textwidth]{Images/fixed-time-step-initial-run}}
 \caption{Initial Run Velocities}
 \label{fig:initialSpeedTestRun}
\end{figure}

Finally, it was observed that the simulation pauses when the simulator window
loses the user focus.  It then resumes without anomaly when the simulator
regains the focus.  As the simulator does not inform the strategies of this,
and does not provide the strategies with any timing data, it is unreasonable to
believe that real-time values are in use.  Were they used, the strategies would
experience control issues caused by the anomalously long time-step when the
simulator lost the focus (and so was suspended) for a significant period of
time.  For this reason, it was assumed that a fixed time-step length was in use,
and the length chosen was \SI{20}{\milli\second}.

The coordinate system in use by the simulator also needed to be verified.  The
conventional coordinate system used by computer programs is the same as the
system used by the computer's graphics systems. This defines $\left(0,0\right)$
as the top-left corner, with positive $x$ pointing to the right of the screen
and positive $y$ pointing down the screen. The trigonometric functions provided
by the CRT also operate in radians, and the specification for the playing field
is provided in metric units.  It was initially assumed that the simulator would
follow these definitions, but this was quickly disproved.

Examination of the header files provided to support the simulator revealed that
the basic constants describing the field were defined in inches. The constants
provided also defined the bottom of the field (as shown on the screen) at a
lower coordinate than the top.

The units of rotation were not defined in the header files, and so had to be
found by experimentation.  One of the robots was controlled to spin on the spot
by driving the motors in opposite directions.  The reported angle of rotation
was then monitored using a debugger as each cycle passed.  This showed that the
angle reported varied over the range \numrange{+180}{-180}, which implies that
the coordinate system is using degrees to represent angles, and means that a
conversion will need to be made to allow the CRT trigonometric functions to be
used.
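The conversion this requires is simple; a minimal sketch (helper names are
illustrative) could be:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Convert the simulator's degree-based angles to radians for the CRT
// trigonometric functions, and back again when issuing commands.
double degToRad(double degrees) { return degrees * PI / 180.0; }
double radToDeg(double radians) { return radians * 180.0 / PI; }

// Wrap an angle into the simulator's reported range of (-180, +180].
double wrapDegrees(double degrees)
{
    while (degrees > 180.0)   degrees -= 360.0;
    while (degrees <= -180.0) degrees += 360.0;
    return degrees;
}
```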

The final coordinate frame is shown in Figure \ref{fig:simCoordinateFrame}.

\begin{figure}
 \centering
 \includegraphics[trim=2cm 23.5cm 13cm 2cm, clip=true]{Images/illustrations}
 \caption{Simulation Coordinate Frame}
 \label{fig:simCoordinateFrame}
\end{figure}

\subsubsection{Uncontrolled Behaviour}

The model used for the robots in the simulator is also not documented. It is
therefore assumed that the robot's motion is modelled on that produced by an
electric motor. The basic behaviour of the system should therefore be described
by the transfer function in Equation \ref{eq:basicMotorModel}
\cite{basicControlNotes}.

\begin{equation}
 \label{eq:basicMotorModel}
 G\left(s\right) = A \cdot \frac{1}{s+B}
\end{equation}

In order to determine the values of the constants $A$ and $B$, a simple strategy
was set up that fed one of the robots a fixed control signal.  This caused it to
accelerate in a straight line until it reached a constant speed.  The position
at each time-step was recorded, to allow the velocity and acceleration to be
calculated, and this was repeated with a number of different input signals,
until the robot could no longer achieve a steady-state speed without colliding
with a wall.

If the model is correct, the steady-state velocity should be directly
proportional to the input signal, and so this serves as a good initial test of
the model. The steady-state velocity was calculated by averaging the final ten
samples of each run, and is plotted against input signal in Figure
\ref{fig:inputOutputVelocityGraph}.  A line of best fit was plotted to
demonstrate the linear relationship between input and output.

It is interesting to note that the best fit ($R^2 = 1$) is achieved if the
constraint that zero input produces zero output is removed.  This implies that
there is a systematic error that is being introduced by the simulation.  This
will be disregarded, as the error is small (\SI{-0.3405}{\inch\per\second}) and
should have minimal effect on the behaviour of the controller.

Given that the output velocity is 0.7564 times the input signal, the maximum
velocity of the robots can now be found. First, the maximum input signal was
found by passing the simulator a very large number to determine its response. It
was discovered that the simulator caps the input value at 125. This gives a
theoretical maximum speed of \SI{94.55}{\inch\per\second}.  It was noted during
this experimentation that the robot could not reach its theoretical full speed
without colliding with a wall, instead reaching a maximum speed of approximately
\SI{30.256}{\inch\per\second}.

In order to determine the equation controlling the system, the velocity data was
first normalised by dividing it by the input signal.  This allows the various
velocity profiles recorded to be compared.  The systematic error observed will
cause the data to become slightly distorted, but as previously mentioned, it is
believed the effect will be small.  The MATLAB Curve Fitting Toolbox was then
used to fit the data to a curve.

The best fit achieved is shown in Figure \ref{fig:bestFit}.  This is fitting the
data to an equation of the form $y = a \cdot e^{bx} - c \cdot e^{dx}$.  The
function was expected to be of the form $y = a(1-b \cdot e^{cx})$, as this is
the form of the time-domain response of the theoretical model in Equation
\ref{eq:basicMotorModel} to a step input.

It was noted that the $b$ term is very small, and the $a$ and $c$ terms are
approximately the same.  Therefore, it was expected that the model could be
approximated with an equation of the expected form.  This is illustrated in
Figure \ref{fig:approximateFit}, which demonstrates an acceptable fit to the
majority of the data.  While the steady-state velocity implied by this model is
different to that calculated earlier, this model gives a good approximation of
the dynamic behaviour shown by the system, which is believed to be more useful
for this project.

Some of the data in the plot appears to be shifted along in the time-axis, and
is likely interfering with the line fitting process. This was determined to be
due to an error in the original data, where the time had not been reset to zero
at an appropriate point on one of the test runs. When the data was shifted back
into a more expected location, the approximate fit was improved (shown in Figure
\ref{fig:approximateFitTimeShiftRemoved}).

Using this information, the system can be modelled with Equation
\ref{eq:finalMotorModel}, which can be used to design the control systems.

\begin{equation}
 \label{eq:finalMotorModel}
 G\left(s\right) = \frac{2.12778}{s+2.863}
\end{equation}
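The time-domain step response of this model can be written out directly; the
following sketch assumes the fitted constants above and a unit step input, so
the steady-state value is $A/B \approx 0.7432$ per unit of input signal:

```cpp
#include <cassert>
#include <cmath>

// Step response of G(s) = A / (s + B) to a unit step input:
//   v(t) = (A/B) * (1 - exp(-B t)),
// which matches the fitted curve y = 0.7432 (1 - exp(-2.863 x))
// with A = 2.12778 and B = 2.863.
double stepResponse(double t, double A = 2.12778, double B = 2.863)
{
    return (A / B) * (1.0 - std::exp(-B * t));
}
```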

\begin{figure}
 \centering
 \includegraphics[width=0.8\textwidth]{Images/input-signal-vs-output-speed}
 \caption{Input Control Signal vs Output Velocity}
 \label{fig:inputOutputVelocityGraph}

 Line of best fit: $y=0.7564x$ \\
 Quality of fit: $R^2 = 0.9998$
\end{figure}

\begin{figure}
 \centering
 \includegraphics[width=0.8\textwidth]{Images/best-fit-model}
 \caption{Curve Fitting Toolbox Best Fit}
 \label{fig:bestFit}

 Line of best fit : $y=0.7432 e^{-0.0001886x}-0.7248 e^{-2.863x}$ \\
 Quality of fit : $R^2 = 0.976$
\end{figure}

\begin{figure}
 \centering
 \includegraphics[width=0.8\textwidth]{Images/approximate-fit-model}
 \caption{Approximated Curve Fit}
 \label{fig:approximateFit}

 Line of best fit : $y=0.7432 \left(1-e^{-2.863x}\right)$
\end{figure}

\begin{figure}
 \centering
 \includegraphics[width=0.8\textwidth]{Images/approximate-fit-model-time-shift}
 \caption{Approximated Curve Fit with Corrected Data}
 \label{fig:approximateFitTimeShiftRemoved}

 Line of best fit : $y=0.7432 \left(1-e^{-2.863x}\right)$
\end{figure}

\subsubsection{Controllers}

In order to allow the control of either the position or velocity as desired, the
motion control was implemented using two layers.  The first layer controls the
velocity of the robot, while the second layer manipulates the first to control
position.

Both layers are simple proportional controllers, with the velocity control
receiving velocity feedback and the position control receiving position
feedback.  The overall control design is shown in Figure
\ref{fig:overalController}.
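The layered structure could be sketched as follows; the gains shown are
illustrative placeholders, not the tuned values used in the submitted
VelocityController and MotionController classes:

```cpp
#include <cassert>

// Two-layer proportional control: the outer layer turns position error
// into a demanded velocity, and the inner layer turns velocity error
// into a control signal for the motors.  Gains are illustrative only.
struct PositionController
{
    double kPos;  // outer (position) gain
    double kVel;  // inner (velocity) gain

    double update(double targetPos, double pos, double vel) const
    {
        double demandedVel = kPos * (targetPos - pos);  // outer layer
        return kVel * (demandedVel - vel);              // inner layer
    }
};
```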

\begin{figure}
 \includegraphics[width=0.8\textwidth]{Images/main-model}
 \caption{Overall Control System Model}
 \label{fig:overalController}
\end{figure}

The gains were tuned using the MATLAB Control System Toolbox. The velocity
control was handled first, to ensure that it would stand alone, and then motion
control was tuned to adequately manipulate the resulting velocity.  The gains
calculated were then transferred to the C++ classes VelocityController and
MotionController to be tested.

Unfortunately, it was found that the system was highly unstable, and the
specific reasons could not be quickly isolated.  As the aim of the project was
to investigate the high-level control of the robot, rather than the low-level
motor control, the problem was not closely examined.  Instead a set of gains
were produced by a trial-and-error method, which produced a sub-optimal but
stable system.  The final code used can be seen in Appendix \ref{app:velControl}
and \ref{app:motionControl}.

The control of the direction of motion was achieved by noting the similarity
between the robot footballers and the Mouse robot used in the second year
project \cite{mouseProjectReport}.  The direction was changed by altering the
relative velocities of the individual wheels, which causes the robot to move
through an arc.  Proportional control was again used for this purpose, with the
difference proportional to the angular error.

It is recognised that the controllers in use are not the optimal designs that
could be used for the system.  While this will have an effect on the performance
of the robot (and so the range of motions that can be performed), it was decided
that investigating the higher level control was more important for this specific
work.  The effect of a better control system can then be investigated in the
future.

\subsection{Motion Prediction}
It was planned that the robots would function by tracking the ball and plotting
intercept trajectories.  This would require the path of the ball to be predicted
to ensure that the robot correctly intercepts the ball.

The motion of the ball can be approximated with a quadratic equation
($x=at^2+ut+c$), which is sufficient to represent the motion of a body when
under a constant force (e.g. friction).  It does not include a term to represent
effects related to velocity (e.g. air resistance), but it is assumed that these
effects will be small (if present at all in the simulation), and so will not
cause significant error in the predicted motion.

Provided information about past motion is stored, the coefficients can be found
using the least squares method. This technique works by minimizing the distance
between the equation and the data-points provided, essentially minimizing
Equation \ref{eq:leastSquaresEq}.

\begin{equation}
  S = \sum_{i=1}^n (a t_i^2+ u t_i + c - x_i)^2 
  \label{eq:leastSquaresEq}
\end{equation}

Setting the partial derivatives of this equation with respect to $a$, $u$ and
$c$ to zero gives Equation \ref{eq:leastSquaresMatrix}.

\begin{equation}
  \left[
   \begin{matrix}
    \sum t_i^4 & \sum t_i^3 & \sum t_i^2 \\
    \sum t_i^3 & \sum t_i^2 & \sum t_i \\
    \sum t_i^2 & \sum t_i & n
   \end{matrix}
  \right]
  \left[
   \begin{matrix}
    a \\
    u \\
    c
   \end{matrix}
  \right]
  =
  \left[
   \begin{matrix}
    \sum t_i^2 x_i \\
    \sum t_i x_i \\
    \sum x_i
   \end{matrix}
  \right]
  \label{eq:leastSquaresMatrix}
\end{equation}

This can then be solved using matrix techniques, and the full derivation is
shown in Appendix \ref{app:leastSquaresDerivation}.
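For illustration, the normal equations can be assembled and solved directly.
This sketch uses Cramer's rule rather than the SIMD matrix inversion used in
the project, and the names are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Fit x = a t^2 + u t + c to recorded (t, x) samples by solving the
// normal equations of the least-squares matrix with Cramer's rule.
struct Quadratic { double a, u, c; };

static double det3(const double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

Quadratic fitQuadratic(const std::vector<double>& t,
                       const std::vector<double>& x)
{
    // Accumulate the sums of powers of t and the right-hand side.
    double s4 = 0, s3 = 0, s2 = 0, s1 = 0, b2 = 0, b1 = 0, b0 = 0;
    for (size_t i = 0; i < t.size(); ++i)
    {
        double ti = t[i];
        s4 += ti * ti * ti * ti; s3 += ti * ti * ti;
        s2 += ti * ti;           s1 += ti;
        b2 += ti * ti * x[i];    b1 += ti * x[i];    b0 += x[i];
    }
    double n = static_cast<double>(t.size());
    double M[3][3]  = {{s4, s3, s2}, {s3, s2, s1}, {s2, s1, n}};
    double D = det3(M);
    // Replace each column with the right-hand side in turn (Cramer's rule).
    double Ma[3][3] = {{b2, s3, s2}, {b1, s2, s1}, {b0, s1, n}};
    double Mu[3][3] = {{s4, b2, s2}, {s3, b1, s1}, {s2, b0, n}};
    double Mc[3][3] = {{s4, s3, b2}, {s3, s2, b1}, {s2, s1, b0}};
    return { det3(Ma) / D, det3(Mu) / D, det3(Mc) / D };
}
```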

In order to allow for sudden changes in motion, only a small number of past
positions would be recorded.  This would become a compromise between increasing
the number of samples to increase accuracy over a large continuous section, and
decreasing the number of samples to minimise the effect of sudden changes in
direction.

If the equation is solved by inverting the matrix of time-step sums, the amount
of computation can be reduced by always setting the first time-step to be at
$t=0$, and then adjusting the input time values appropriately whenever the
resulting coefficients are used.  This means the inverted matrix can be
calculated once and then stored.

The actual inversion can be quickly calculated using the inversion code provided
by Intel \cite{intelMatrixInverse}.  This takes advantage of SIMD instructions
to speed up the calculation.  This was initially implemented to handle the case
where the time values changed on every cycle, which would require the matrix to
be recalculated each time; it was kept even though the overall gains are
negligible if the matrix is only calculated once.

Similar techniques can then be used to perform the multiplications required to
solve the final equations.

\subsection{Route Finding}
In order to determine the best method of planning a route, some time was spent
investigating different route finding techniques to examine their behaviour. 
The two main techniques investigated were the A* algorithm and the potential
field force algorithm shown in \cite{intelligentAlgorithmPathPlanning}.

\subsubsection{A* Algorithm}

The A* algorithm is a discrete route finding algorithm, which requires the field
to be divided into a number of squares to function \cite{aiModernApproach}.  It
then represents the possible moves around the field as a tree, which it
traverses to find the shortest route.  The algorithm theoretically
operates faster than the similar Dijkstra's algorithm, by using a heuristic
algorithm to predict the best moves \cite{wikipediaAStar}.

The algorithm uses two sets, one to hold the squares to be examined (the open
set), and one to hold the squares that have been examined (the closed set).
Squares are added to the open set when one of their neighbours is examined, and
are then examined in an order defined by the heuristic algorithm.  Each square
is examined to determine its distance from the origin (based on the already
examined neighbouring squares' distances) and its predicted distance to the
destination. Provided the heuristic is good, this will result in a short route
being found between the start and end points.

The test version of the algorithm was implemented using the .Net Framework,
initially using the basic \texttt{List(Of T)} class to hold the sets.  This was
found to be very slow, and when profiled it was discovered that a lot of the
time was spent checking if a square was already in the closed set.  With the
\texttt{List(Of T)} class, the find algorithm is an $O(n)$ operation
\cite{msdnListOfT}, and so takes longer as the algorithm continues.  When this
was replaced with a \texttt{HashSet(Of T)}, which uses an $O(1)$ operation to
find elements \cite{msdnHashSetOfT}, a significant time saving was made (as can
be seen in Table \ref{tab:aStarTimingsTable}).

An attempt to partially parallelise the operation resulted in a slightly slower
run time (see Table \ref{tab:aStarTimingsTable}). This is probably because the
nature of the A* algorithm makes it rare to have enough concurrent work to
justify the increased overhead of parallel algorithms.
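The original test code was written for the .Net Framework; a compact C++
rendering of the same grid-based structure, using \texttt{std::unordered\_set}
for the $O(1)$ closed-set membership test, could look like this (a Manhattan
heuristic is assumed):

```cpp
#include <cassert>
#include <cstdlib>
#include <queue>
#include <unordered_set>
#include <vector>

// Minimal grid A* with a Manhattan heuristic.  The closed set is an
// unordered_set, giving the O(1) membership test that made the
// HashSet(Of T) version so much faster than List(Of T).
struct Node { int x, y, g, f; };
struct Worse
{
    bool operator()(const Node& a, const Node& b) const { return a.f > b.f; }
};

// Returns the path length in steps, or -1 if the target is unreachable.
// blocked[y][x] marks obstacle squares.
int aStar(const std::vector<std::vector<bool>>& blocked,
          int sx, int sy, int tx, int ty)
{
    int h = static_cast<int>(blocked.size());
    int w = static_cast<int>(blocked[0].size());
    auto key  = [w](int x, int y) { return y * w + x; };
    auto heur = [&](int x, int y) { return std::abs(x - tx) + std::abs(y - ty); };

    std::priority_queue<Node, std::vector<Node>, Worse> open;
    std::unordered_set<int> closed;  // O(1) "already examined" test
    open.push({sx, sy, 0, heur(sx, sy)});

    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!open.empty())
    {
        Node n = open.top(); open.pop();
        if (n.x == tx && n.y == ty) return n.g;
        if (!closed.insert(key(n.x, n.y)).second) continue;  // already closed
        for (int d = 0; d < 4; ++d)
        {
            int nx = n.x + dx[d], ny = n.y + dy[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (blocked[ny][nx] || closed.count(key(nx, ny))) continue;
            open.push({nx, ny, n.g + 1, n.g + 1 + heur(nx, ny)});
        }
    }
    return -1;
}
```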

\begin{singlespace}
\begin{table}
\centering%
\begin{tabular}{|c|m{3cm}|p{2cm}|}
\hline
\multirow{2}{*}{Task} & \multirow{2}{3cm}{Container Class} &
\multirow{2}{2cm}{Execution Time (\si{\second})} \\
 &  & \\
\hline
\multirow{2}{*}{Serial Algorithm} & \texttt{List(Of T)} & \num{25.37}
\\
\cline{2-3}
 & \texttt{HashSet(Of T)} & \num{5.159} \\
\hline
\multirow{2}{*}{Parallel Algorithm} & \texttt{List(Of T)} & \num{25.72}
\\
\cline{2-3}
 & \texttt{HashSet(Of T)} & \num{5.502} \\
\hline
\end{tabular}

\caption{A* Algorithm Execution Times\label{tab:aStarTimingsTable}}
\end{table}

\end{singlespace}

\subsubsection{Potential Field Force Algorithm}

The potential field force algorithm is described in detail later in this report
(see Section \ref{sub:Potential-Field-Force}).  This initial investigation used
it to plan a route in advance that the robot can follow (as opposed to a
real-time guidance routine as used later).  It proved troublesome in this
capacity, as a full implementation would require a full model of the robot's
behaviour to process.  When this was omitted, simplifying the program, the route
finder often ended with the path achieving an orbit around the target point
instead of approaching it directly.  This rendered the algorithm useless for
planning a route in advance.

\subsubsection{Route Planning Evaluation}

At this point, the practicality of using routes planned in advance was
reconsidered.  As shown previously, the full route planning has the potential to
take a significant amount of time compared to the ideal \SI{20}{\milli\second}
cycle length.  This would result in an algorithm that takes multiple cycles to
run (and is therefore always working on out-of-date information), or which only
runs over a very short distance in the future.

Given that the state of the game can change significantly over the
\SI{5}{\second} it currently took to calculate a route using the A* algorithm,
this would render a route calculated substantially in advance nearly useless. 
The short distance route may be slightly useful, but would be of little help in
planning an overall strategy compared to the investment in calculation time.

For this reason, the decision was made to shift from a pre-planned route to a
real-time guidance system, based on the potential field force navigation
algorithm. This removes the need for a route planning algorithm, and it was
hoped that the robot's strategy could then be embedded into the structure of the
field.

\subsection{Potential Field Force Navigation\label{sub:Potential-Field-Force}}

With this technique, a potential field is used to create a simulated force that
guides each robot around the field (as described in
\cite{intelligentAlgorithmPathPlanning}). The field is produced by taking a
collection of objects and projecting a field around them of a given shape. This
field can either add to or subtract from the total field potential surrounding
it, and the total field can then be calculated with Equation
\ref{eq:fieldSummation}.

\begin{equation}
P(x,y)=\sum_{i=0}^{N-1}p_{i}\left(x,y\right)\label{eq:fieldSummation}
\end{equation}

The direction that the robot will move in is then determined by the gradient of
the field at the current point, such that the robot moves down the field
potential. This force vector (which controls the velocity of the robot) is given
by Equation \ref{eq:forceSummation}.

\begin{equation}
\boldsymbol{F}(x,y)=-\left(\frac{{\partial P\left(x,y\right)}}{\partial x}+\frac{{\partial P\left(x,y\right)}}{\partial y}j\right)\label{eq:forceSummation}
\end{equation}

If the fields can be differentiated over the entire field, this can be
calculated using Equation \ref{eq:forceDifferentiation}.

\begin{equation}
\boldsymbol{F}(x,y)=-\left(\sum_{i=0}^{N-1}\frac{\partial p_{i}\left(x,y\right)}{\partial x}+\frac{\partial p_{i}\left(x,y\right)}{\partial y}j\right)
\label{eq:forceDifferentiation}
\end{equation}

However, if one or more of the fields cannot be differentiated, the force vector
must be calculated discretely. This is achieved using Equation
\ref{eq:forceDifferentiationDiscrete}.

\begin{equation}
\boldsymbol{F}(x,y)=\left(P\left(x-1,y\right)-P\left(x+1,y\right)\right)+\left(P\left(x,y-1\right)-P\left(x,y+1\right)\right)j
\label{eq:forceDifferentiationDiscrete}
\end{equation}
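A sketch of the discrete calculation, assuming the total potential $P$ is
available as a callable that can be sampled at the four neighbouring points:

```cpp
#include <cassert>
#include <functional>

// Discrete force vector: sample the total potential at the four
// neighbouring points and take the differences; the imaginary (j)
// term of the equation becomes the y component here.
struct Force { double x, y; };

Force discreteForce(const std::function<double(double, double)>& P,
                    double x, double y)
{
    return { P(x - 1, y) - P(x + 1, y),
             P(x, y - 1) - P(x, y + 1) };
}
```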

The negative gradient is used to ensure the robot moves towards lower field
potentials. The technique would work equally well if it was attracted to higher
field potentials, provided all the equations were suitably inverted.

For the initial control technique, the velocity of the robot is proportional to
the force vector. Other techniques are discussed by
\cite{intelligentAlgorithmPathPlanning}, and will be experimented with later.

Navigation of the game field can now be achieved by manipulating the attractive
and repulsive fields to guide the robot to a target. Computational power
allowing, each robot could have a separate field that guides it based on the
goals specific to its role.

\subsection{Potential Field Shapes}

Attractive fields are used to designate areas which the robot should head for.
In order to ensure that the robot always moves towards them, it is important
that their effect is felt across the entire field. However, they must not be so
strong that they overwhelm the short distance effects of the repulsive fields.

Repulsive fields are used to alter the route of the robot when moving towards
these targets. For example, such fields are used to get the robot to avoid
collisions, as well as to control where the robot intercepts the ball. These
fields typically only act over a short distance, as they should only have an
effect when they are required. They need to be strong enough to overwhelm the
attractive fields at short range.

The field shapes experimented with are described in the following sections, and
a selection of them was integrated into the final strategy, in order to produce
the best combination for each scenario.

As the basic fields in use are axisymmetric in shape (or begin that way), the
fields are best described using polar coordinates. The coordinates are defined
as in Figure \ref{fig:polarCoordinateFrame}, and are always centred on the field
producing object. This frame of reference can then be translated to the
Cartesian coordinates in use elsewhere to allow the total field at a point to be
calculated.

\begin{figure}
 \centering
 \includegraphics[trim=9cm 23.5cm 7cm 2cm, clip=true]{Images/illustrations}
 \caption{Polar Coordinate Frame}
 \label{fig:polarCoordinateFrame}
\end{figure}

\subsubsection{Basic Attractive Field\label{sub:Basic-Attractive-Field}}

The field suggested by \cite{intelligentAlgorithmPathPlanning} for the
attractive point is described with Equation \ref{eq:quadraticBallField}.

\begin{equation}
P(r,\theta)=k_{attr}r^{2}\label{eq:quadraticBallField}
\end{equation}

This produces an attractive force that is linearly proportional to the distance
to the object (in this case the ball). This is a very useful field for a route
finder (as described in the paper), where it is desirable for the robot to stop
when it reaches the destination, and so it needs to slow down as it approaches
the target. However, in this application it is more useful for the robot to make
a rapid approach and not slow down after arrival. This is because the ball is
not held to the robot in any way, so if the robot slows down after it arrives,
the ball will move away without it.

In order to overcome this problem, the field was simplified to Equation
\ref{eq:conicBallField}.


\begin{equation}
P(r,\theta)=k_{attr}r\label{eq:conicBallField}
\end{equation}

This field produces a constant force of $k_{attr}$ acting towards the centre of
the ball, which will hold the robot against the ball (as far as possible) as the
ball and robot move around. It does not provide any guidance once the ball has
been reached, and so this will need to be provided externally once the intercept
is complete.
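For illustration, the force exerted by this conic field can be computed
directly (names are illustrative): a constant magnitude of $k_{attr}$, directed
from the robot towards the ball.

```cpp
#include <cassert>
#include <cmath>

// Force produced by the conic field P(r) = k_attr * r: constant
// magnitude k_attr, directed from the robot towards the ball.
struct Vec2 { double x, y; };

Vec2 conicAttraction(double kAttr, Vec2 robot, Vec2 ball)
{
    double dx = ball.x - robot.x, dy = ball.y - robot.y;
    double r = std::sqrt(dx * dx + dy * dy);
    if (r == 0.0) return {0.0, 0.0};  // on the ball: no defined direction
    return { kAttr * dx / r, kAttr * dy / r };
}
```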

\subsubsection{Basic Repulsive Field\label{sub:Basic-Repulsive-Field}}

One of the rules of the game is that the robot cannot intentionally collide with
an opposing robot, unless it is in possession of the ball \cite{simurosotSim}.
This means that the robot must actively avoid the opposition. It is also
advantageous to be able to avoid the robot's own team, as this prevents them
getting into a position where they cannot move or are fighting against each
other.

An apparently simple solution would be to surround all the obstacles with a
region of potential that is substantially higher than the surrounding area, as
described by Equation \ref{eq:circleRepulseField}.

\begin{equation}
P\left(r,\theta\right)=\begin{cases}
k_{repulse} & r<R_{0}\\
0 & r\geq R_{0}
\end{cases}\label{eq:circleRepulseField}
\end{equation}

This produces a field as illustrated in Figure \ref{fig:circleRepulseField}.

\begin{figure}
 \centering
 \subfloat[Field
 Magnitude]{\label{fig:circleRepulseField}\includegraphics[width=0.3\textwidth]{Images/circle-field}}
 \subfloat[Field
 Gradient]{\label{fig:circleRepulseFieldGradient}\includegraphics[width=0.3\textwidth]{Images/circle-field-gradient}}
 \caption{Circular Repulsive Field}
\end{figure}

In theory, the robot would avoid this circle, as the algorithm should not have
the robot move into a region of high potential. However, this does not work as
the controller actually responds to the gradient of the field. As shown in
Figure \ref{fig:circleRepulseFieldGradient}, this type of field produces a very
high gradient, but only in a very small region. Once inside the repulsive
circle, the gradient returns again to zero, and so does not affect the motion of
the robot. As the robot has inertia, it is very likely that it will pass through
the region of high force and then leave it before its velocity has altered
sufficiently to avoid the obstacle.
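This failure mode can be sketched numerically: sampling the top-hat potential of Equation \ref{eq:circleRepulseField} with finite differences (using illustrative values for $k_{repulse}$ and $R_{0}$) gives a zero gradient everywhere except in the narrow band straddling $r=R_{0}$:

```python
K_REPULSE = 10.0  # illustrative field height
R0 = 2.0          # illustrative repulsive circle radius

def step_potential(r):
    """Top-hat repulsive potential: K_REPULSE inside r < R0, zero outside."""
    return K_REPULSE if r < R0 else 0.0

def radial_gradient(potential, r, h=0.1):
    """Finite-difference gradient at sample spacing h (cf. the grid evaluation)."""
    return (potential(r + h) - potential(r - h)) / (2 * h)

# Well outside (r = 5) and well inside (r = 1) the circle the sampled
# gradient is zero; only samples straddling r = R0 see a huge gradient.
```

Only the samples that happen to straddle the boundary register any force, which is why a fast-moving robot passes straight through.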

In order to maximise the repulsive effect of the field, the force must be
exerted over a large area. This can be achieved with a field that gradually
builds in strength as it nears the obstacle. The simplest example is defined by
Equation \ref{eq:simpleRepulse}.

\begin{equation}
P\left(r,\theta\right)=\begin{cases}
k_{repulse}\cdot\left(R_{0}-r\right) & r<R_{0}\\
0 & r\geq R_{0}
\end{cases}\label{eq:simpleRepulse}
\end{equation}

This field will create a constant force that acts radially away from the
obstacle, much like the attractive force described in Section
\ref{sub:Basic-Attractive-Field}, but over a restricted area (as illustrated in
Figures \ref{fig:conicRepelField} and \ref{fig:conicRepelFieldGradient}).

\begin{figure}
 \centering
 \subfloat[Field
 Magnitude]{\label{fig:conicRepelField}\includegraphics[width=0.3\textwidth]{Images/conic-repel-field}}
 \subfloat[Field
 Gradient]{\label{fig:conicRepelFieldGradient}\includegraphics[width=0.3\textwidth]{Images/conic-repel-field-gradient}}
 \caption{Conic Repulsive Field}
\end{figure}

If the target is behind the obstacle, and the force is exactly strong enough, it
will cause the robot to stop in the repulsive field. If the force is too small,
it will only slow the robot down, and if the force is too strong it will cause
the robot to move clear of the field. If the target hasn't moved significantly,
the robot will then approach the repulsive field again, only to be repelled once
more. This will result in the robot repeatedly 'bouncing' off the field. The
robot will become stuck if the target does not move, but it is thought that the
rapidly changing environment in which the controller will be working means that
this is not a significant issue.

While the constant force will work, provided it is strong enough, the case when
it is too strong and causes 'bouncing' is undesirable as the motion of the robot
may become erratic. A better solution would be one where the force gradually
increases as it approaches the target. It is also desirable to have a field that
naturally decays to a small level at a distance without relying on conditional
operations (which are expensive on GPU hardware). Two field shapes which meet
this specification are a direct inverse proportionality, or the Gaussian
function (described by Equations \ref{eq:inverseRepulseField} and
\ref{eq:gaussianRepulseField} respectively).

\begin{equation}
P\left(r,\theta\right)=\frac{k_{repulse}}{r}\label{eq:inverseRepulseField}
\end{equation}

\begin{equation}
P\left(r,\theta\right)=k_{repulse}\cdot
e^{\left(-\frac{r^{2}}{2\sigma}\right)}\label{eq:gaussianRepulseField}
\end{equation}

For this purpose, the Gaussian function was selected, because it was believed
that a flatter region in the centre of the field would be preferable over an
infinite field potential (see Section \ref{sub:fieldDesignConsiderations}). This
produces a field as shown in Figure \ref{fig:gaussianField}. As the gradient
image shows, this produces a region of rapidly increasing force around the
object, with no discontinuities that produce disruptive motion at the edge of
the field.

The OpenCL code used to calculate this field can be found in Appendix
\ref{app:gaussianRepulsive}.
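A minimal Python sketch of the Gaussian field (illustrative constants; the production version is the OpenCL kernel in the appendix) shows that the force is non-zero over a wide region, flat at the centre, and free of discontinuities:

```python
import math

K_REPULSE = 1.0   # illustrative gain
SIGMA = 2.0       # illustrative spread, using the report's 2*sigma convention

def gaussian_potential(r):
    """Gaussian repulsive potential: P = k * exp(-r^2 / (2*sigma))."""
    return K_REPULSE * math.exp(-r ** 2 / (2 * SIGMA))

def radial_force(r, h=1e-6):
    """Repulsive force = -dP/dr, approximated by a central difference."""
    return -(gaussian_potential(r + h) - gaussian_potential(r - h)) / (2 * h)

# Analytically the force is (k*r/sigma) * exp(-r^2/(2*sigma)): zero at the
# centre, rising to a peak around the obstacle, then decaying smoothly.
```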

\begin{figure}
 \centering
 \subfloat[Field
 Magnitude]{\includegraphics[width=0.3\textwidth]{Images/gauss-field}}
 \subfloat[Field
 Gradient]{\includegraphics[width=0.3\textwidth]{Images/gauss-field-gradient}}
 \caption{Gaussian Repulsive Field}
 \label{fig:gaussianField}
\end{figure}

\subsubsection{Shaped Approach Guiding Field\label{sub:Shaped-Approach-Guiding}}

As the robot has no means of holding on to the ball, it is not possible for the
robots to turn more than a small amount when in possession of the ball without
losing it. This makes it particularly important that the robot approaches the
ball from the correct side. This can be achieved by positioning a region around
the ball that guides the robot to the correct position.

The initial attempt at this placed a specially shaped field around the ball,
which was developed by rotating a Gaussian function along a circle around the
ball, with its height proportional to the angle from the desired approach
direction. This is described by Equation \ref{eq:wrappedGaussian} and shown
in Figure \ref{fig:wrappedGaussianField}.

\begin{equation}
P\left(r,\theta\right)=e^{\frac{-\left(r-r_{0}\right)^{2}}{2\sigma}}\cdot\left|\frac{\theta}{\pi}\right|\label{eq:wrappedGaussian}
\end{equation}

\[
-\pi\leq\theta\leq\pi
\]

\begin{figure}
 \centering
 \includegraphics[width=0.3\textwidth]{Images/wrapped-field}
 \caption{Shaped Gaussian Repulsive Field}
 \label{fig:wrappedGaussianField}
\end{figure}

This field initially looked promising, as it contains the desired slope towards
a specific approach angle and repels away from every other angle. On testing,
however, it was noticed that, as the field is at a higher potential than its
surroundings, the edges of the field force the robot radially away from the ball
instead of towards the target angle, and so the robot just stops at the edge of
the field.

The field was then modified to be at a lower potential than its surroundings,
removing the previous radial force. However, it was determined that a field wide
enough to be of any use in directing the robot had to have a radius that caused
the field to strongly interfere with motion elsewhere in the playing field. When
other robots were introduced into the field, even in their starting positions,
the robot was unable to approach the ball without a collision with another
player.

Other attempts were made with differently shaped fields around the ball (for
example, one field resembled a helter-skelter slide, which produced a constant
force to guide the robot in when within a certain radial region), but all
resulted in either a large field that caused too much long-distance
interference, or one that forced the robot radially away from the ball.

While a single field could probably be found that achieved what was desired, it
was determined that the search would be too time-consuming with the lack of
guiding information.

\subsubsection{Paired Source Approach Guidance Field\label{sub:paired-field}}

As discussed in Section \ref{sub:Shaped-Approach-Guiding}, a field was required
to guide the robot's approach to the ball. As a single field did not function as
desired, it was decided that a pair of fields would be tried.

A basic attractive field (see Section \ref{sub:Basic-Attractive-Field}) was
positioned on the desired approach side of the ball (offset by a number of
inches in the $x$ axis) and a basic repulsive field (see Section
\ref{sub:Basic-Repulsive-Field}) was placed in the symmetrically opposite
position. This produces the overall field shown in Figure
\ref{fig:pairedApproachField}.

\begin{figure}
 \centering
 \subfloat[Field
 Magnitude]{\includegraphics[width=0.3\textwidth]{Images/paired-field}}
 \subfloat[Field
 Gradient]{\includegraphics[width=0.3\textwidth]{Images/paired-field-gradient}}
 \caption{Paired Approach Guidance Field}
 \label{fig:pairedApproachField}
\end{figure}

This technique immediately showed positive results, with the robot moving onto
the attractive point while avoiding the ball if it approached from the wrong
side. This is achieved because the combination of the fields produces a region
that the robot will not pass through, because to do so would involve passing
through the repulsive region (as shown in Figure
\ref{fig:pairedSourceApproachFieldNoFlyZone}).

This field configuration breaks down if the repulsive point is too far from the
ball or if the robot ends up too near the ball on the wrong side (the region
shown in Figure \ref{fig:pairedSourceApproachFieldBreakDownZone}), as the
repulsive field acts to force the robot towards the ball instead of away from
it.  The strategy will need to be designed so that this occurrence is dealt
with, or prevented from happening altogether.

\begin{figure}
 \centering
 \subfloat[Field Exclusion Zone]{\includegraphics[trim=2cm 3.5cm 10cm 21cm,
 clip=true]{Images/illustrations}\label{fig:pairedSourceApproachFieldNoFlyZone}}
 \subfloat[Break-down Region]{\includegraphics[trim=2cm 22.5cm 10cm 2cm,
 clip=true,
 page=2]{Images/illustrations}\label{fig:pairedSourceApproachFieldBreakDownZone}}
 \caption{Paired Source Approach Field Regions}
\end{figure}

To ensure that the robot approaches from the correct location, the points must
be placed to position the robot along the line that joins the ball and the
desired destination (initially, the goal).  This is achieved by providing this
vector to the field calculation code, which then sets the point locations
appropriately.

When this field is in use, the robot is not attracted to the actual position of
the ball, and so this field alone will not allow the robot to guide the ball to
the goal. However, if a new field configuration is used once the robot is in
position, this limitation can be overcome.
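The paired-source construction can be sketched in Python as a conic attractor offset to one side of the ball and a Gaussian repulsor placed symmetrically opposite (the offsets and gains below are illustrative, not the project's tuned values):

```python
import math

D = 3.0        # illustrative offset of each source from the ball, along x
K_ATTR = 1.0   # illustrative attractive gain
K_REP = 5.0    # illustrative repulsive gain
SIGMA = 2.0    # illustrative spread of the repulsor

def paired_potential(x, y):
    """Conic attractor at (+D, 0) plus Gaussian repulsor at (-D, 0),
    with the ball at the origin."""
    r_attr = math.hypot(x - D, y)
    r_rep = math.hypot(x + D, y)
    return K_ATTR * r_attr + K_REP * math.exp(-r_rep ** 2 / (2 * SIGMA))

# The potential is lowest at the attractive point and is raised on the
# wrong side of the ball, steering the robot around rather than through.
```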

\subsubsection{Stretched Repulsive Field}

The main purpose of the repulsive fields around the robots is to avoid
collisions and to try to keep the ball away from the opposing team.  With a
robot that is not moving (which most of the initial testing was done with), it
is useful for the field to be circular as this keeps the robot away no matter
which direction it approaches from.

In the case of a moving opponent, or an opponent where collisions in one axis
are more of a concern than in the other, it can be useful for the repulsive
field to reflect this.  For example, a robot moving rapidly in one direction is
less of a concern if it is being approached from a vector normal to its line
of motion, as it is less likely to cause a collision (as the robot will likely
move out of the way).

This field shaping was achieved by slightly modifying the Gaussian function used
to produce the repulsive field, resulting in Equation
\ref{eq:stretchedGaussianField}.

\begin{equation}
P\left(r,\theta\right)=k_{repulse}\cdot
e^{\left(-\left(\frac{x^{2}}{2\sigma_x}+\frac{y^{2}}{2\sigma_y}\right)\right)}\label{eq:stretchedGaussianField}
\end{equation} 

By setting $\sigma_x$ to be different to $\sigma_y$, an elliptical field  is
produced, which has an effect further out in one axis than in the other. 
Manipulation of the $x$, $y$, $\sigma_x$ and $\sigma_y$ terms can produce a
field with any desired orientation.

The OpenCL code used to test this field can be found in Appendix
\ref{app:stretchedGaussianRepulsive}.
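A Python sketch of Equation \ref{eq:stretchedGaussianField} (with illustrative $\sigma_x$ and $\sigma_y$; the tested implementation is the OpenCL code in the appendix) shows the anisotropy directly:

```python
import math

K_REPULSE = 1.0   # illustrative gain
SIGMA_X = 8.0     # wider spread along x (illustrative)
SIGMA_Y = 2.0     # narrower spread along y (illustrative)

def stretched_potential(x, y):
    """Elliptical Gaussian: P = k * exp(-(x^2/(2*sx) + y^2/(2*sy)))."""
    return K_REPULSE * math.exp(
        -(x ** 2 / (2 * SIGMA_X) + y ** 2 / (2 * SIGMA_Y)))

# At equal distances from the centre, the field is stronger along the
# stretched (x) axis than along the y axis, giving the elliptical shape.
```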

Initial testing was done with stretching in the $x$ axis, which produces the
field shown in Figure \ref{fig:stretchedGaussianField}.  This field would be
useful for projecting around the goal-keeper, as the approaching robot does not
want to approach the goalkeeper head-on, but should easily be able to pass it
by.

\begin{figure}
 \centering
 \subfloat[Field
 Magnitude]{\includegraphics[width=0.3\textwidth]{Images/stretched-gauss-field}}
 \subfloat[Field
 Gradient]{\includegraphics[width=0.3\textwidth]{Images/stretched-gauss-field-gradient}}
 \caption{Stretched Gaussian Repulsive Field}
 \label{fig:stretchedGaussianField}
\end{figure}

The tests proved successful in controlling the robot's motion, with the robot
moving around the obstacle in the desired ellipse when previously approaching
for a head-on collision.  However, the test used, where the attacking robot that
was already in possession of the ball approached an opponent deliberately placed
in the way of the goal, highlighted an issue with the robot's control of the
ball.  This is further discussed in Section \ref{sub:simulatorIssues}.

\subsection{Field Calculations}

In order to determine the direction the robot should move in, the field needs to
be calculated at four points, as described in Section
\ref{sub:Potential-Field-Force}. Even for the most complex fields in use, this
is not particularly computationally challenging. However, the efficiency of the
code could be improved by vectorising the data and taking advantage of the
SIMD instructions available on most modern CPUs to calculate all four
points simultaneously. This would allow the field calculations to take less
time, resulting in more time available per time-step to perform other
operations.
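The four-point evaluation amounts to a central-difference estimate of the field gradient. A Python sketch (names are illustrative; the project's version is described in Section \ref{sub:Potential-Field-Force}):

```python
def field_gradient(potential, x, y, h=0.1):
    """Approximate the field gradient from four samples around (x, y),
    i.e. a central-difference scheme over the four-point stencil."""
    dpdx = (potential(x + h, y) - potential(x - h, y)) / (2 * h)
    dpdy = (potential(x, y + h) - potential(x, y - h)) / (2 * h)
    return (dpdx, dpdy)

# The robot is then driven against the gradient (downhill in potential).
```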

In addition, it is useful to render the entire field as an image, so that it can
be considered for debugging purposes. Given that the playing field is
approximately \SI{88}{\inch} by \SI{72}{\inch}, and is being considered at
\SI[quotient-mode = fraction]{1 / 10}{\inch} scale, this gives \num{633600} data
points to consider. This size of dataset will require a large number of CPU
cycles to calculate, even if each individual data point's requirements are
relatively modest. With further delays introduced by inter-process communication
(it is not possible to alter the simulator to produce the image locally), as
well as other delays and overheads introduced by the operating system, it
quickly becomes challenging to render the field in real-time.

The calculation times for both the entire field and a set of four points using
the OpenCL code on both the CPU and GPU are shown in Table
\ref{tab:Field-Strenth-Calculation}. These clearly show that the calculation of
the entire field is best done on the GPU, where the acceleration from the
massively parallel computation structure masks the additional overheads. The
four points, however, are best done on the CPU, where it does not suffer from
the significantly larger memory transfer times which affect GPU operations.

\begin{singlespace}
\begin{table}
\centering%
\begin{tabular}{|c|m{2cm}|p{2cm}|p{3cm}|m{2cm}|}
\hline
\multirow{2}{*}{Task} & \multirow{2}{3cm}{Computation Platform} &
\multirow{2}{2cm}{Execution Time (\si{\micro\second})} &
\multirow{2}{3cm}{Memory Transfer Time (\si{\micro\second})} &
\multirow{2}{1.8cm}{Total Time (\si{\micro\second})} \\
 &  &  &  & \\
\hline
\multirow{2}{*}{Entire field} & CPU & \num{23500} & \num{479} & \num{24100}
\\
\cline{2-5}
 & GPU & \num{2050} & \num{2600} & \num{8611} \\
\hline
\multirow{2}{*}{Four data points} & CPU & \num{14.4} & \num{0.342} &
\num{60.0}
\\
\cline{2-5}
 & GPU & \num{18.8} & \num{1040} & \num{4390} \\
\hline
\end{tabular}

In this test, overheads include transferring the initial data to the platform
and setting up the platform before the calculation, but not one-time-only
initialisation done by the platform.

\caption{Field Strength Calculation Times\label{tab:Field-Strenth-Calculation}}
\end{table}

\end{singlespace}

\subsubsection{Design Philosophy}

The code (shown in Appendix \ref{sub:OpenCL-Kernels}) is written in an attempt
to take advantage of the parallel nature of its execution. Each kernel (a
function run either on the CPU or GPU) is designed to work on an individual data
point, and the kernel is called multiple times by the platform, once for each
data point. As the order of execution cannot be guaranteed, each kernel is
written so that it doesn't depend on the values produced for other data points.

To simplify the code, the platform is configured to run the kernel over a 2D
space that represents the field. Each instance of the kernel is then given an ID
in each dimension by the computation platform, which is used to determine the
coordinates of the data point it should be working on. This means that each
instance can operate in isolation without any knowledge of what work has already
been done. The results are then stored in a shared array (at an index determined
by the coordinates) which is returned to the host program when the operation has
finished. Where consecutive functions are required, a series of kernels can be
queued in turn, and the data is only returned when the queue is complete (this
represents a significant saving with the GPU code, as GPU to host transfers are
relatively slow).
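The indexing scheme can be sketched in Python (a stand-in for the OpenCL kernels in the appendix; the placeholder field value and names are illustrative):

```python
# Each "kernel instance" maps its 2D ID to a grid coordinate and to a slot
# in the shared output array, and never reads another instance's result.
WIDTH, HEIGHT = 880, 720  # 88 in x 72 in field at 1/10 in per cell

def kernel(gid_x, gid_y, out):
    """Stand-in for one kernel instance, working on its own point only."""
    x = gid_x * 0.1  # grid coordinate in inches
    y = gid_y * 0.1
    out[gid_y * WIDTH + gid_x] = x + y  # placeholder field value

out = [0.0] * (WIDTH * HEIGHT)
for gy in range(HEIGHT):         # the OpenCL platform performs this
    for gx in range(WIDTH):      # dispatch, in no guaranteed order
        kernel(gx, gy, out)
```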

\subsubsection{GPU Optimisations}

The architecture of a GPU presents different optimisation challenges to that of
a CPU. In particular, the number of concurrent operations the GPU can process is
limited by the resource usage of the code.

In order to calculate the Gaussian function, a function to calculate the natural
exponential is needed. Initially, the built-in OpenCL function was used to
calculate this. This function is defined by the OpenCL standard to have a
specific accuracy, which is constant on any hardware. The implementation of this
on the hardware in use proved to use a large number of general purpose
registers (GPRs).

A GPU has a large but finite number of GPRs available, which are then shared
between all of the running kernel instances. If a kernel requires a larger
number of GPRs, then fewer kernel instances can run concurrently. This makes
limiting the GPRs in use very important.

The OpenCL standard provides for a set of so-called 'native' functions, which
are produced specifically for the hardware in use, and which are often more
efficient than the general implementation (for example, they sometimes map to
single instructions that perform the functions). The accuracy of the function,
however, is implementation-specific, and so could change from computer to
computer and platform to platform (e.g. CPU to GPU) \cite{openCl11Spec}. In this
case, the native exponential function uses far fewer GPRs, removing the resource
pressure encountered, and the undefined accuracy is not important.

The OpenCL specification also provides geometric functions such as vector
distance and vector length to complement its vector data types. These have been
used as far as possible, as they can also allow the compiler to perform hardware
specific optimisations (and again they can sometimes map directly to individual
instructions on the hardware) \cite{openCl11Spec}.

\subsubsection{Field Design Considerations\label{sub:fieldDesignConsiderations}}

As previously discussed (see Section \ref{sub:Basic-Repulsive-Field}), it is
important that a repulsive field changes gradually over a reasonably large
region, as this allows the field to overcome the effects of inertia before the
robot leaves its area of effect.  The methods used to evaluate the field
gradient also enforce this limitation, particularly on fields which exhibit step
changes.

As the robot approaches a field with a step-change of intensity (like that shown
in Figure \ref{fig:circleRepulseField}), the field initially evaluates as flat
with respect to the outside field (that is, it has no effect on its
surroundings), as shown in Figure \ref{fig:field-evaluation-pre-step}.  The simulator
then advances one time-step, using the speed instructions given.  Three things
could then happen:

\begin{itemize}
  \item The robot remains outside the field
  \item The robot ends on the boundary of the field
  \item The robot ends inside the field
\end{itemize}

The robot will only register as being on the boundary of the field if it is
positioned within \SI{0.2}{\inch} of the field, i.e. within two grid steps, as
illustrated in Figure \ref{fig:field-evaluation-boundary-condition}.  This
represents a very small area, which could easily be missed if the robot is
moving at greater than \SI{10}{\inch\per\second} (\SI{0.2}{\inch} per time-step)
and the time-steps happen to fall unfavourably.

\begin{figure}
  \centering
  \subfloat[No-Change Condition]{\includegraphics[trim=2cm 17.5cm 10cm 7cm,
  clip=true]{Images/illustrations}\label{fig:field-evaluation-pre-step}}
  \subfloat[Boundary Condition]{\includegraphics[trim=2cm 9cm 10cm 16.5cm,
  clip=true]{Images/illustrations}\label{fig:field-evaluation-boundary-condition}}
  \\
  \subfloat{\includegraphics[trim=2cm 14.75cm 11cm 13cm,
  clip=true]{Images/illustrations}}
  \caption{Evaluation of a Step-Changing Field}
\end{figure}

If the robot begins the next time-step already inside the field, the field will
again evaluate as flat with respect to the outside field, and so the field will
have no effect.

Given that the robot can move at over \SI{50}{\inch\per\second}, the likelihood
of the robot falling consistently within the \SI{0.2}{\inch} region when
required becomes vanishingly small, rendering these fields useless for the
purposes required.
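This can be sketched with a simple deterministic simulation (the step length assumes the \SI{50}{\inch\per\second} speed and an approximately \SI{20}{\milli\second} time-step; the band width is the \SI{0.2}{\inch} boundary region):

```python
STEP = 1.0   # inches covered per time-step at 50 in/s with a 20 ms step
BAND = 0.2   # width (in) of the region that registers as the boundary

def hits_band(offset, band_start=10.0, n_steps=30):
    """Does a robot starting at `offset` ever sample a point in the band?"""
    return any(band_start <= offset + k * STEP < band_start + BAND
               for k in range(n_steps))

# Over 1000 evenly spread starting offsets, only BAND/STEP = 20% of runs
# ever sample a point inside the band; the rest jump straight across it.
hits = sum(hits_band(i / 1000.0) for i in range(1000))
```

At higher speeds the ratio worsens further, matching the observation that step-changing fields are effectively invisible to a fast robot.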

Another limitation is introduced by the accuracy of the floating point values
used to calculate the field intensities.  It is simpler if all the robot
positions can be passed to the calculations without considering which one is the
robot whose actions are being analysed.  As well as allowing fields to be reused
for different robots without recalculating, this would also allow multiple
fields to be calculated concurrently by the GPU while reducing the slow memory
transfers between CPU and GPU by only sending the position data once. This means
that the robot being considered will also have a field projected around it. 
This is not a significant problem as long as the field is symmetrical and
centred on the robot, as this will cause the field to be equal at each point
considered, and so the effect will cancel out.

However, it is a limitation of floating point numbers that if a very large
number and a very small number are added together, the effect of the small
number may be lost.  This means, if the field immediately around the robot has a
very large intensity, the effect of a poorly scaled exterior field will be lost.
This was why a Gaussian function was chosen over an inverse proportionality to
generate the basic repulsive field (see Section
\ref{sub:Basic-Repulsive-Field}), as an inverse proportionality would become
very large very quickly as it approached the centre of the robot, and could make
the appropriate scaling of the fields more difficult to achieve.
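The absorption effect is easy to demonstrate in double-precision arithmetic, and a hypothetical inverse-proportional spike shows how a small exterior field term would be lost near the robot's centre:

```python
# Adding 1.0 to 1e16 changes nothing: the spacing between representable
# doubles at that magnitude is 2.0, so the small term is absorbed.
big = 1.0e16
lost = (big + 1.0) - big   # 0.0: the contribution of 1.0 has vanished

def inverse_potential(r, k=1.0):
    """Inverse-proportional repulsion, which blows up as r approaches 0."""
    return k / r

spike = inverse_potential(1e-9)        # enormous value near the centre
exterior = 1e-8                        # a small, sensibly scaled field term
swamped = (spike + exterior) == spike  # the exterior term is lost entirely
```

The bounded Gaussian never produces such a spike, which is why it scales more easily against the other fields.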

\section{Software Produced}

Two types of software have been produced:

\begin{itemize}
\item The game playing strategy files
\item The field renderer
\end{itemize}

Only the strategy files are required to play the game, as the field renderer is
only used to debug the potential fields.

\subsection{Strategy Files}

The strategy files are standard Microsoft Windows DLL files which implement
three functions defined by the simulator:

\begin{itemize}
\item Create - Performs the initial setup for the strategy
\item Strategy - Called on every simulator cycle to control the robots
\item Destroy - Intended to perform the clean-up required for the strategy. This does not appear to be called by the simulator at this time, and so has been left as an empty function.
\end{itemize}

A selection of these strategy files were produced, each designed to perform a
different function.  They are then made available to the simulator, and the
appropriate file can be loaded on demand.


\subsubsection{HoldStillStrategy}

If no strategy file is loaded, the simulator will load and run the standard
strategy.  During testing, it was useful to have all the robots hold still
unless something else was required.  In order to achieve this, HoldStillStrategy
was implemented with an empty Strategy function.  This causes all the robots to
be fed no input, and they remain where they are.

This strategy was also used to test the interface between the simulator and the
strategy files, to ensure that everything had been set up correctly.

\subsubsection{PhysicsStrategy}

In order to analyse the model used by the simulator, an action that produced
simple repeatable behaviour was needed.  This was achieved by having the
strategy feed a constant input into both motors, causing the robot to accelerate
in a straight line away from its starting location.  The implementation of this
can be seen in Appendix \ref{app:physicsStrategy}.

The position of the robot was then logged every cycle, allowing the velocity and
acceleration to be calculated by simple differentiation.  The time that each of
the samples was taken at was also recorded, which allowed the time-step
behaviour to be analysed.

The standard \texttt{clock} function provided by the Microsoft implementation of
the CRT operates on a \SI{1}{\milli\second} scale \cite{windowsSDK}.  The
time-step length seen from the simulator varied over the range
\SIrange{19}{20}{\milli\second}, and so it was decided that the accuracy of
\texttt{clock} was insufficient for the time-step distribution analysis. It was
replaced with the high-resolution performance counter provided by the Windows
API, which on the test computer, operated at approximately
\SI{2.9}{\mega\hertz}. This provides much greater accuracy than the CRT
function.

The collected data was then output into a CSV file, which recorded the date and
time of the run, the input velocity, and each time-step's values.  This could
then be analysed using Microsoft Excel or MATLAB.
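The differentiation step can be sketched in Python (the log below is hypothetical, generated from a constant acceleration of \SI{2}{\inch\per\second\squared} sampled every \SI{20}{\milli\second}):

```python
def differentiate(samples):
    """Finite-difference a series of (time, value) log entries, returning
    (mid-point time, derivative) pairs."""
    return [((t1 + t0) / 2, (v1 - v0) / (t1 - t0))
            for (t0, v0), (t1, v1) in zip(samples, samples[1:])]

# Hypothetical log: position under constant acceleration a = 2 in/s^2
# (x = a*t^2/2 = t^2), sampled every 20 ms.
log = [(k * 0.02, (k * 0.02) ** 2) for k in range(6)]
velocity = differentiate(log)          # recovers v = 2t
acceleration = differentiate(velocity) # recovers a = 2
```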

\subsubsection{SquareStrategy}

Once the motion controllers were designed, a simple test routine was required to
ensure that it was working properly.  SquareStrategy was designed to move a
single robot around a pre-planned path to ensure that the controller was working
as expected.

At the time of design, it was still anticipated that a trajectory planning
system would be used in the main strategy.  As a result, some time was spent
producing a reusable PathController.  This controller maintains a list of paths
for each robot, as well as the current place in each route, and manipulates the
motion control to move each robot along their respective path.

The motion test was then implemented by loading in a route for a single robot,
and allowing it to move around that route.  This was the testing that
highlighted the instability problem with the designed controller, and was used
to assist the development of a new set of gains by hand.

\subsubsection{InterceptStrategy}

The main strategy, named InterceptStrategy because it was initially used to have
the robot intercept the ball only, uses the potential field force navigation to
have a single robot intercept the ball and move it towards the blue goal.  This
is implemented by using a state-machine to control which field configuration is
used. The following states are defined:

\begin{itemize}
  \item State 0 - Approach the ball and position to direct it to the goal
  \item State 1 - Make contact with the ball
  \item State 2 - Push the ball to the goal, while maintaining contact with it
  as far as possible
\end{itemize}

State 0 is the default state on start-up, and is switched to if the ball is far
away from the robot.  It uses the normal repulsive fields (see Section
\ref{sub:Basic-Repulsive-Field}) to direct the robot away from obstacles, and a
paired source (see Section \ref{sub:paired-field}) to direct the robot to the
side of the ball furthest from the goal.  The paired source is aligned so that
the robot's final approach angle to the ball is along the vector between the
ball and the goal.

This field is calculated using the OpenCL code found in Appendix
\ref{app:initialApproachField}.

State 1 is triggered when the robot reaches the attractive point placed by state
0.  This switches the paired source for a basic attractive point placed on the
ball (see Section \ref{sub:Basic-Attractive-Field}) to position the robot next
to the ball and start it moving towards the goal.

This field is also calculated using the OpenCL code found in Appendix
\ref{app:initialApproachField}, but with the input ball velocity set to
$\left(0,0\right)$.

State 2 is triggered when the robot reaches the ball.  This then maintains the
attractive field for the ball, and adds on a field that attracts the robot
towards the goal.

This field is calculated using the OpenCL code found in Appendix
\ref{app:possessionField}.

Provided there are no obstacles between the point where the robot intercepts the
ball and the goal, this strategy will cause a robot to score a goal.  It cannot
currently cope with any opposing action, and any close encounter with an
opponent will cause the robot to move away from the opponent rather than
maintain contact with the ball.
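The switching logic between the three states can be sketched as follows (the state names, distance thresholds and function signature are illustrative, not the strategy's actual values):

```python
# Minimal sketch of the three-state switching logic.
APPROACH, CONTACT, PUSH = 0, 1, 2
NEAR = 1.0       # hypothetical "reached" threshold, in inches
BALL_FAR = 20.0  # hypothetical "ball far away" threshold, in inches

def next_state(state, dist_to_attract_point, dist_to_ball):
    if dist_to_ball > BALL_FAR:
        return APPROACH      # ball far away: fall back to the approach field
    if state == APPROACH and dist_to_attract_point < NEAR:
        return CONTACT       # reached the paired-source attractive point
    if state == CONTACT and dist_to_ball < NEAR:
        return PUSH          # touching the ball: push it towards the goal
    return state             # otherwise keep the current field configuration
```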

In order to allow the Field Renderer (discussed in Section
\ref{sub:field-renderer}) to function, the strategy needs to provide the
Renderer with data about the current state of the game.  It achieves this by
passing the current environment data, the current strategy state and the current
velocity of the ball through the named pipe set up by the Field Renderer.  The
software attempts to do this on every simulator cycle, but if it fails (e.g. if
the Renderer is busy handling the last received status report), it ignores the
error and carries on.  This also allows the system to continue if the Renderer
is busy, or even if it is not running at all.

\subsection{Field Renderer\label{sub:field-renderer}}

The field renderer provides a near real-time view of the potential field and the
field gradients at run time to allow easier debugging of the field calculations.
The program is implemented using C\# with the MS .Net Framework and the Windows
Presentation Foundation UI library. The OpenCL code is then executed using the
Cloo library, which allows the program's managed code to access the unmanaged
functions required to use OpenCL.

The program receives a status report from the simulator using a named pipe for
which it acts as the server. The binary data received is interpreted into a
structure to match the one used in the strategy file, and is stored as the
latest environment. This occurs at the end of every simulator cycle, and is run
in a separate execution thread. This allows the simulator to continue after
transmission is complete without waiting for the renderer to work with the
environment. Every time the status report is received, it triggers a rendering
cycle, unless a rendering cycle is already underway. If a report arrives while a
rendering cycle is unfinished, it is discarded. This ensures that the rendered
images are as up to date as possible.

The rendering process is also executed in its own thread, ensuring that it does
not interfere with the user interface. The process creates a copy of the latest
status report, which ensures that the process is not affected by the
concurrently running communication with the simulator. The code then performs
the same process as the strategy file, using the OpenCL field calculation code
(this time executed on the GPU) to calculate every data point in the field.
Further OpenCL code is then used to calculate the magnitude of the gradient
across the entire field, and to transform both sets of data into greyscale
bitmaps. The produced bitmaps are then passed to the user interface thread,
which then displays the images to the user.

\section{Discussion}

\subsection{Problems Encountered} 

\subsubsection{Simulator Issues\label{sub:simulatorIssues}}

The lack of documentation for the simulator has already been discussed in
Section \ref{sub:motionControl}.  As previously mentioned, the behaviour of the
system had to be reverse-engineered from the system's response to different
inputs, allowing a model to be produced.  Because the governing variables are
not fully known, the model is only an approximation, but it appears to be
sufficient to control the system to a basic degree.

Because the focus of this project is on the high level control, it was decided
that only a small amount of time would be spent on the low level control.  This
chosen focus resulted in a low-quality controller that has almost certainly had
a negative impact on the quality of the high level control.

It was noticed early on that the simulator provides no indication to the
strategy files that the game has been restarted (e.g. after a goal has been
scored and the player and ball positions have been reset).  It has been
necessary to make strategies resilient to sudden changes in the environment, and
during testing it was often necessary to restart the entire simulator (and
hence, the strategy code) when a new test run was required.

The most significant problem encountered with the simulator was with the robots.
Designed as simple cubes, they have no surface features to aid the manipulation
of the ball.  In order to change the velocity of the ball, a force must be
applied to it to increase the velocity in the desired direction, and decrease
the velocity perpendicular to that direction.  Because the robot is flat and the
ball is round, it is only possible to exert a force in the robot's direction of
motion. If a substantial velocity change is desired, it can be achieved by
having the robot collide with the ball with sufficient force to make any
perpendicular velocity negligible compared to the forwards velocity.  However,
it is not possible to create the small perpendicular forces required to make
the minor adjustments to the ball's velocity vector demanded by the potential
field navigation techniques used. Exerting such forces would require a change
to the design of the robot, but this change could not be made with the current
simulator, as its source code is not available.
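The limitation can be illustrated by decomposing a desired change in ball velocity into components parallel and perpendicular to the robot's heading: the flat face can only supply the parallel component. A minimal C sketch (the types and names are illustrative, not from the strategy code):

```c
/* Decompose a desired ball velocity change against the robot heading. */
typedef struct { double x, y; } vec2;

double dot(vec2 a, vec2 b) { return a.x * b.x + a.y * b.y; }

/* heading must be a unit vector.  Returns the component of the desired
 * velocity change that a flat-faced push can actually provide; anything
 * perpendicular to the heading is lost. */
vec2 applicable_component(vec2 desired_dv, vec2 heading)
{
    double along = dot(desired_dv, heading);
    vec2 r = { along * heading.x, along * heading.y };
    return r;
}
```

For a robot travelling along the x-axis, a desired change of (3, 4) reduces to (3, 0): the perpendicular part simply cannot be applied, which is why only near head-on collisions give usable control.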

\subsubsection{OpenCL Issues}

Development using OpenCL presented a number of issues not normally encountered
when developing conventional, CPU-run software.  This was a particular
problem because a conventional debugger cannot simply be attached to the kernel
instances, so normal debugging techniques (e.g. stepping through code and
examining variable values) could not easily be applied.

The most common and disruptive problems encountered related to the transfer of
data from the host computer to the GPU.  Where a large amount of data needs to
be transferred, it is done by passing memory addresses to the OpenCL engine. 
This allows the local memory to be used instead of allocating some of the highly
limited memory available on the graphics card.  In some cases, for reasons that
are still not entirely clear, invalid memory addresses were passed to the
runtime, and the graphics card attempted to access these invalid memory
addresses.  This appeared to cause a low-level fault in the graphics hardware,
which caused the computer to lock up, and required a system restart to resolve.

Another problem encountered with the data transfers was matching CPU and GPU
data types.  No type safety is provided by the OpenCL runtime when passing
function parameters, so the runtime simply reinterprets the binary data as the
expected data type.  Specific problems were encountered with floating point
numbers, as the simulator worked at double precision while the GPU only
supported single precision, and with integral types, as the CPU software
defaulted to 64-bit integers while the GPU defaulted to 32-bit integers.
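The narrowing that this made necessary can be sketched as an explicit conversion step performed before each transfer. The struct layouts below are illustrative, not the actual strategy data structures:

```c
#include <stdint.h>

/* What the host simulator produces (double precision, 64-bit ints). */
typedef struct {
    double  pos_x, pos_y;
    int64_t robot_id;
} host_state;

/* What the OpenCL kernel expects (single precision, 32-bit ints). */
typedef struct {
    float   pos_x, pos_y;
    int32_t robot_id;
} device_state;

/* Explicitly narrow every field before copying into the transfer
 * buffer; relying on the runtime to reinterpret the bytes would
 * silently corrupt the values. */
device_state narrow_for_gpu(const host_state *h)
{
    device_state d;
    d.pos_x    = (float)h->pos_x;      /* double -> float   */
    d.pos_y    = (float)h->pos_y;
    d.robot_id = (int32_t)h->robot_id; /* 64-bit -> 32-bit */
    return d;
}
```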

\subsubsection{Controller Issues}

As previously described, it was found that the controller designed for the
system did not function correctly when implemented in the strategy file. 
Shortly before the project deadline, a bug was discovered in the velocity
calculation code: the change in position was being divided by \SI{20}{\second}
instead of \SI{20}{\milli\second}, resulting in a recorded velocity 1000 times
smaller than expected.  This changed the apparent dynamic behaviour of the
system, causing the required gains to change.
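The effect of the bug can be demonstrated with a one-line calculation (the function names are illustrative):

```c
/* The unit bug: dividing the change in position by 20 (seconds)
 * instead of 0.020 (20 ms expressed in seconds) yields a velocity
 * 1000 times smaller than the true value. */
double velocity_buggy(double dp) { return dp / 20.0;  } /* treats the step as 20 s */
double velocity_fixed(double dp) { return dp / 0.020; } /* 20 ms simulator step    */
```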

An attempt was made to fix this issue, but doing so detuned the controllers,
and there was insufficient time to recalibrate them with the corrected
calculations.

\subsection{System Limitations}

The system was designed and tested on a PC with the following specification:

\begin{itemize}
 \item Microsoft Windows 7 (64-bit)
 \item Intel Core i5-2320 CPU
 \item AMD Radeon HD 6450 GPU
 \item \SI{4}{\giga\byte} RAM
\end{itemize}

The strategy file should run on any modern CPU, provided a compatible OpenCL
runtime library is available (the design process used the AMD OpenCL library). 
It requires Microsoft Windows, but only because the simulator requires it. With
the exception of the named pipe and high performance timing routines (neither
of which are required for the actual strategy), all of the code should be
cross-platform compatible.

The field renderer requires the .Net Framework 4, and so will only run on MS
Windows Vista or later.  Additionally, the field rendering requires a graphics
card which supports OpenCL.  Currently, both NVIDIA and AMD/ATI produce such
graphics chipsets.

\section{Conclusions}

In the Project Plan and Literature Review already submitted, objectives were
set that required the following to be created:

\begin{itemize}
\item A subsystem that will allow the robots to position themselves at an
arbitrary position with a given final speed and direction of motion (within the
limits of the robots).  This will allow the robots to interact with the ball to
produce the motion required by the higher-level planning algorithms.
\item A subsystem that can predict the location of the ball or an opposing robot
at a given time in the future.  The prediction will only need to work for the
next two to three seconds, allowing better decisions to be made.
\item A subsystem that will monitor the behaviour of each robot and ensure that
its actions will not cause a foul (as described in the FIRA rules of play
\cite{simurosotSim}).  For example, it should intervene if an action will cause
a collision with an opponent by altering a planned route appropriately.  This
will ensure that the team is compliant with the rules and does not incur any
corrective action from the referee.
\item A subsystem that will combine the previous two to produce a path for a
robot, allowing it to interact with the ball to change the ball's speed and
direction of motion.  This will allow the higher-level planning subsystems to
state a desired ball motion path and have a robot bring that motion about.
\item A subsystem that will take in the current state of the system and decide
on an appropriate course of action. When the ball is under the control of the
current player, it will have the robots attack the opposing goal and attempt to
score a goal.  When the ball is under the control of the opposition, it will
attempt to defend its own goal and disrupt the opponent's actions. When the ball
is not under any defined control, it will attempt to take control of the ball.
\end{itemize}

These objectives \cite{pplr} have been met with mixed results.

A set of controllers has been produced, which can be manipulated to position
the robot at a given position, or to make the robot move with a given velocity.
It is not currently possible to specify the velocity of the robot at a given
point in space and time, but it should be possible to achieve this by
manipulating the two controllers.  However, this objective is no longer as
important, since the emphasis has changed from a planned motion strategy to a
more dynamic one.

Some code has been produced that can begin to solve the equations required to
predict the velocity of an object at a given time in the future.  It is
currently based on a simplified equation of motion, which while applicable to
the ball when it is moving freely, is less accurate in the prediction of another
robot's position.  This code is currently incomplete, as it became less urgent
when the focus of the project changed.

While some effort has been made towards controlling the robot so that it avoids
collisions, the code does not currently take any of the other rules of the game
into consideration.  At present, the collision avoidance is also unreliable.

The current code successfully manipulates the robot to intercept the ball and
cause it to score a goal.  It is not yet robust enough to play a full game, as
it can only cope with a very small set of conditions, but, with proper
manipulation of the fields, sufficient functionality is available to have the
robot direct the ball to any useful location.

The decision making code is currently very simple, based around a state machine
which responds to a specific set of circumstances.  It is not sufficient to play
a game yet (for example, it cannot correctly take possession of the ball from an
opponent), but it is in a position to be expanded to do so, once new field
configurations are made available.

Overall, each of the requirements has been partially met, but at present the
system does not meet them in full.  Much of the work completed provides a
basic level of functionality, which can then be built on to achieve the more
complex goals.

\section{Future Work}

The code presented represents only part of the implementation required to
produce a competitive robot football team for the FIRA competition.  At present,
the system cannot cope with any opposing activity, and operates at very low
speeds.  A comparison with the default strategy has not been performed, but it
is anticipated that the produced strategy would lose, as the default strategy
operates the robots far faster, and can successfully compete with an opposing
team.

If the system is intended to be used with a set of real robots, it would be very
useful to reimplement the simulator to create a better representation of the
robots in use.  This would resolve the problems encountered with the design of
the robot (e.g. the lack of fine control), and would allow the information
provided to the robot controller to better match the information provided by the
real system.  If the simulator is not changed, the strategy will need to be
reconsidered to take into account this limitation.

The main controllers need to be improved to better take advantage of the
capabilities of the robots.  This should include dealing with the current time
division problem (perhaps by removing the division altogether) and retuning the
controllers to match.

Currently, for the paired source field, the position of the field is calculated
(to determine when states should change) locally as well as on the calculation
platform. It would make debugging and reuse easier if this calculation could be
performed in one place only, reducing the amount of duplicate code and ensuring
that the debugging code always matches the actual field calculations used.

Additional field configurations need to be designed to cope with different
situations.  These include configurations to implement defensive strategies
(perhaps using some of the techniques from \cite{neuroHassleDefence}), as well
as others to take possession of the ball from the opposition. The current
strategies also need to be improved to have the robots move faster and react
more quickly and accurately to the changing state of the game.  This will
likely involve developing some new field shapes, and modifying some of the
existing ones to better match the requirements. Once this is achieved, the
fields can be spread out to all the robots, controlling all of them around the
pitch instead of just one.

Once sophisticated motion can be completed, it would be interesting to see if
field shapes could be developed that cause the robot to pass the ball in a
certain direction, or perform other more complex behaviour.

The field selection routine is also currently very simplistic.  The current
options could be extended so that a field covers every possibility.
Alternatively, a fuzzy logic or artificial neural network system could be used
to select the fields.  Finally, it could be interesting to apply fuzzy logic
techniques to combine the fields instead.

Once more versatile field configurations (and configuration selection code)
have been completed, the system can then be tested against the default strategy,
as well as any other strategies that can be found, and the performance of the
system can be evaluated.

Overall, it is felt that a good start has been made on the project, but reduced
man-hours (compared to normal competitive teams) and the large change of plan
made early in the project have made it impossible to bring the project to its
conclusion.  However, a good foundation has been produced on which further work
can build to complete the objectives.

\cleardoublepage{}
\include{appendices} 

\end{document}
 