\chapter{Technical Content}

% (10 pages) 
%
% * Choose some of the more interesting/challenging/novel aspects of
%   the project to discuss in detail.
%
% * Explain why they are technically interesting or challenging and discuss how
%   the problems were overcome.

\section{Pod Server} % Ant

In order to understand how to control the pod, we initially looked at the
previous group's pod server code. One of the group members from the previous
year who had written most of the server code warned us of several critical
issues in their server. One of the main problems they pointed out was that the
Motionbase code was not thread safe, something which they had only noticed at
the last minute and fixed with a quick but nasty hack.

We decided to implement a new pod server from scratch, since we felt there were
too many problems with the previous year's code. To prevent this kind of
problem recurring, we designed our server to be as generic and standalone as
possible, so that future groups will be able to pick up our server and use it
immediately.

The pod server runs on a separate computer from the game. The pod is
connected directly to this machine. Since the Motionbase API only works with
Windows, the pod server was only developed for that platform. The server
receives commands over the network, either from the game or from a terminal
client program we wrote specifically for testing purposes, and translates them
into instructions for the pod.

Commands are available to start/stop the pod, poll the joystick inputs, move the pod
to a specific location and query the state of the pod. The pod server runs two
main threads, one to receive the commands from the network, the other to control
the pod. Two main classes control the execution of these threads. A \verb"Pod"
class is an object representation of the pod. It contains fields for the
pod's position along its axes and the current state of its inputs. When a
\verb"Pod" object is created, the pod's parameter files are read. When
\verb"Pod::start()" is called, the pod's state machine is initialised and the
pod is ready for the door to be shut and the other interlocks closed in order
to start running. The method also calls \verb"Pod::run()", which runs in its
own thread. At regular intervals, \verb"Pod::run()" services the state machine
and checks whether the state has changed. If the pod is `following', i.e.\ in a
state where it is able to move to a given position, the pod is instructed to
move to the position described by the \verb"Pod" class fields before polling
its inputs and storing the information in the corresponding fields. We
attempted to use separate threads to move the pod and to poll its inputs, but
found that this technique would not work.

The \verb"Pod" class is controlled by the server's main class \verb"PodServer".
\verb"PodServer::run()" is called by the \verb"main()" function of the pod
server. The method checks for commands received over the network and services
them. If the command was to start the pod then \verb"Pod::start()" is called.
The move command updates the \verb"Pod" class' position fields and the poll
command reads the fields describing the state of the inputs.
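To make the interaction between the two classes concrete, the sketch below
shows how a \verb"PodServer::run()"-style dispatch might update the shared
\verb"Pod" fields. The command codes, the \verb"dispatch()" function and the
axis names are illustrative stand-ins, not the actual server code:

\begin{lstlisting}
#include <cassert>
#include <cstdint>

// Hypothetical command codes -- the real protocol's values are not
// reproduced here.
enum class Command : uint8_t { Start, Stop, Move, Poll };

// Minimal stand-in for the Pod class fields described above.
struct Pod {
    double pitch = 0.0, roll = 0.0, heave = 0.0; // position on each axis
    bool running = false;
    void start() { running = true; }
    void stop()  { running = false; }
};

// One serviced command: the network thread decodes it and writes the
// shared Pod fields; the pod thread picks them up on its next pass.
void dispatch(Pod& pod, Command cmd,
              double pitch = 0, double roll = 0, double heave = 0) {
    switch (cmd) {
    case Command::Start: pod.start(); break;
    case Command::Stop:  pod.stop();  break;
    case Command::Move:
        pod.pitch = pitch; pod.roll = roll; pod.heave = heave; break;
    case Command::Poll:  break; // reads the input fields, writes nothing
    }
}
\end{lstlisting}

The pod thread then only reads the position fields during its next service
pass, so the network thread never has to touch the Motionbase API itself.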

%two pod controlling threads
%memory mapped values

% Jamie
\section{Network Communication}

% - Initial select() based server with polling based commands, each receieved
%   command run through process command. Also ascii messages! (rev. 29)
% - refactored messages to transmit binary data, created a new protocol
% - created threads for most frequent communications. also removed polling
% - Issues with buffer overruns related to rate at which client/server can
%   send/receive messages - added code to syncronise sending on both sides
% - UDP socket experiments, timing and countng etc

Once the pod server was at a stage where we could input simple commands to it
using the terminal, such as moving the pod to a particular position or to query
its state, the next step was to integrate networking code to enable such commands to
be passed over the network. Client and server classes had been developed
separately that provided functionality to enable a server program to accept any
number of different client connections. A demoable `echo' server had been set up
where each client could send a string message to the server and receive the same
string back as a response.

Integrating this networking code into the pod server and the game was
reasonably straightforward. A \verb"PodServerInterface" class was created for
the game to provide methods to connect to, move and stop the pod. It did not
take long
before we had a system where we could control the pitch and roll of the
pod by moving the mouse in the game. At this stage all communications were in
the form of ASCII strings which had to be constructed then read back either end.
The next step was a far greater challenge: controlling the pod using
its built-in joystick. The critical difference in this case was that the game
needed to receive the joystick values in order to process them before it could
send a command to move. The initial method to do this was for the game to send
a poll command to prompt the server to send back the joystick data. When this
was received the game would send the pod server a command to move.

We immediately found that the rate at which we were receiving the joystick data
was impacting heavily on the control of the game. This was particularly obvious
because the tilting of the environment is directly dependent on the input
device. For example, a slow rate of 10 updates per second would give the
impression of the game frame rate running at 10 per second. A rate of at least
50 per second would produce smooth control of the game. The initial response to
this problem was to look at the overheads involved in the communication, which
immediately highlighted the string processing. To address this issue we
refactored the network code to implement a binary protocol, but this had no effect
on the transmission rate. The next obvious issue was that of the essentially
redundant polling commands sent by the game as they were very frequent while
the game was played. It was clear that instead the game could just send `begin'
and `end' messages to the server to tell the server to start and stop sending
constant joystick data.
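To illustrate the change from ASCII strings to a binary protocol, a joystick
update can be packed into a handful of bytes rather than built and parsed as
formatted text. The layout below is a sketch of the idea, not our actual wire
format:

\begin{lstlisting}
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative binary joystick message: one type tag plus two axes.
struct JoystickMsg {
    uint8_t type;   // hypothetical message-type tag
    float x, y;     // joystick axes
};

std::vector<uint8_t> encode(const JoystickMsg& m) {
    std::vector<uint8_t> buf(1 + 2 * sizeof(float));
    buf[0] = m.type;
    std::memcpy(&buf[1], &m.x, sizeof(float));
    std::memcpy(&buf[1 + sizeof(float)], &m.y, sizeof(float));
    return buf;
}

JoystickMsg decode(const std::vector<uint8_t>& buf) {
    JoystickMsg m{};
    m.type = buf[0];
    std::memcpy(&m.x, &buf[1], sizeof(float));
    std::memcpy(&m.y, &buf[1 + sizeof(float)], sizeof(float));
    return m;
}
\end{lstlisting}

Encoding and decoding become fixed-size memory copies, with no string
processing at either end.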

As we were using blocking sockets, implementing this involved creating a new
thread of execution in the \verb"PodServerInterface" class so that the
\verb"receive()" function would not interrupt execution of the rest of the game.
Again we found that this did not provide any significant increase in
performance. After some more thought, we suspected that the cause was the high
frequency communication over just a single socket connection to the server,
which was reading the sockets using the \verb"select()" function. Further
modifications were performed so that three dedicated socket connections were
setup with the server; one for move commands, one for joystick data and the
other for all other commands. Both the move and joystick connections were given
their own threads in the game and on the server. This vastly improved the
situation, but was not without its own problems.

Removing the polling commands meant that the server would send joystick data
messages as fast as it could. The receiving thread in the game could not keep up
with the rate ($\sim$800 per second), and would be processing old messages at the
end of its buffer. To limit the server's send rate, a wait function was added
to the loop so each iteration would take a minimum specified time. The same was
added to the move loop in the game, as it was only necessary to send move
commands at about 50 per second. After a suitable interval was found for the joystick
polling, we found that the networking worked brilliantly.
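The wait function described above can be sketched as a loop whose iterations
are padded out to a minimum period. The function and parameter names here are
ours, for illustration:

\begin{lstlisting}
#include <cassert>
#include <chrono>
#include <functional>
#include <thread>

// Sketch of the rate limiter: each iteration of a send loop is padded
// with a wait so that it takes at least `min_period`. `send_once`
// stands in for the real joystick or move transmission.
void rate_limited_loop(const std::function<void()>& send_once,
                       std::chrono::milliseconds min_period,
                       int iterations) {
    using clock = std::chrono::steady_clock;
    for (int i = 0; i < iterations; ++i) {
        auto start = clock::now();
        send_once();                      // transmit one message
        auto elapsed = clock::now() - start;
        if (elapsed < min_period)         // pad the iteration out
            std::this_thread::sleep_for(min_period - elapsed);
    }
}
\end{lstlisting}

With a 20\,ms period this caps the send rate at about 50 messages per second,
the rate we found sufficient for smooth control.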

On reflection, many of the problems we encountered stemmed from our
inexperience with the sockets library and with using it to develop real-time
applications. The problems often manifested themselves in unobvious ways that
took time to understand.

\section{Pod Movement} % Tom

In the game, we wanted the player to have an experience that would feel like
they were actually inside the orb. We decided to go about this in two ways,
firstly by simulating the tilt of the world and secondly by simulating
collisions with other objects.

\subsection{Simulating a Tilting Level in the Pod}

The easiest way to simulate the level tilting (pitch and roll) was to pitch and
roll the pod by the same amounts. We had previously limited the amount the level
could tilt by to prevent the camera going `under' the floor, which we found was
very suitable for the pod's limited range of movement. This part of the pod
simulation was fairly easy: it just involved passing a pitch and roll value and
clamping them to the pod's movement range.

However, after some testing we found that quick movements between the extremes
of the user's input made the pod have very jerky movements and so we worked on
making the transitions between different amounts of pitch and roll smoother. The
jerkiness in the pod movement was due to large accelerations in the pitch and
the roll caused by sudden changes in direction. For the sake of simplicity,
the method we undertook to smooth the pitching and rolling will be described for
pitch only but applies to roll in exactly the same way.

The smoothing was achieved in a similar way to how we implemented the camera
tracking. In each call of the pod's function, we calculated the difference
between the current pitch of the pod and the pitch that was needed to simulate
the tilt of the world. We also maintained a pitch speed in radians per second
which allowed us to work out how long it would take the pod to achieve the
desired pitch at its current pitch speed. If this time was larger than a set
duration we increased the speed by some predefined amount (an acceleration),
otherwise we decreased it. The pod was then pitched by the pitch speed which was
multiplied by the time for that frame.
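A single-axis sketch of this smoothing step is given below. The constants and
names are illustrative; the actual values were tuned by testing, as described
next:

\begin{lstlisting}
#include <cassert>
#include <cmath>

// Pitch-only sketch of the smoothing (roll is identical). The constants
// here are illustrative, not the tuned values used in the game.
struct PitchSmoother {
    double speed = 0.0;                     // current pitch speed, rad/s
    static constexpr double ACCEL = 0.5;        // rad/s^2
    static constexpr double TARGET_TIME = 0.25; // desired seconds-to-target

    // Advance `current` toward `target` over a frame of `dt` seconds.
    double update(double current, double target, double dt) {
        double diff = target - current;
        // Signed time to reach the target at the current speed; a
        // negative or very large value means we are moving the wrong
        // way or too slowly, so we accelerate toward the target.
        double eta = (speed != 0.0) ? diff / speed : INFINITY;
        double dir = (diff >= 0.0) ? 1.0 : -1.0;
        if (eta < 0.0 || eta > TARGET_TIME)
            speed += dir * ACCEL * dt;  // speed up toward the target
        else
            speed -= dir * ACCEL * dt;  // ease off as we get close
        return current + speed * dt;
    }
};
\end{lstlisting}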

We discovered that having a small acceleration value gives smoother movement.
However, having a value that is too small would prevent the pod from reacting
quickly enough to a change in the direction of the pitch movement required.
Therefore a balance was required, and this was initially found by observing the
pod's movements. We then rode in the pod to experience them first-hand and
finally altered the acceleration value based on feedback from some of our
testers. In our implementation of the smoothing we used a couple of constants
that allowed us to easily change the sensitivity and smoothness of the pod.

The only downside of this method is that the pod does not react as quickly as
it does without smoothing. However, the lag is fairly insignificant and the
experience for the player is much better.

\subsection{Simulating Collisions in the Pod}

\subsubsection{Initial Method}

The first way we implemented the simulation of collisions in the pod was by
getting the current acceleration on the orb, calculating the effect of gravity
on the orb and taking the difference between them. In essence, this would give
us any acceleration on the orb that wasn't due to gravity, for example a
collision. The main challenge in this was to work out the effect of gravity.

To calculate the effect of gravity on the orb, we added a contact callback to
the physics engine. This meant that every time the orb was in contact with
another object, e.g.\ when rolling along the floor, a function was called in
which we could obtain the normal of the object that the orb was in contact
with. For each physics frame there may be many contacts (e.g.\ a wall and the
floor at the same time) and we had to work out the collective effect of all of
these. To do this we iterated over an array of contact normals: the initial
gravity is the current force acting on the orb and, on each iteration of
the loop, its direction and strength are altered based on a normal. Figure
\ref{gravityalgo} shows the pseudo-code for this algorithm.

\begin{figure}[ht]
\begin{lstlisting}
    gravity = orb->getGravity();
    for i = 0 to normals.size() do
    begin
        Quaternion rotation = gravity.getRotationTo(normals[i]);
        Vector3 gravity_direction = gravity;
        gravity_direction.normalise();
        double scale = sin(acos(gravity_direction.dotProduct(normals[i])));
        gravity = rotation * gravity * scale;
    end
\end{lstlisting}
\caption{Algorithm for calculating the effect of gravity on the orb given an array of contact normals}
\label{gravityalgo}
\end{figure}

After the loop, \verb"gravity" holds the resultant force we would expect to act
on the orb, given the contact normals in the array, if no collision had
occurred. To calculate the actual acceleration on the orb, we took the
difference between the current velocity and previous velocity and divided it by
the duration of the frame. The two quantities (the expected gravity and the
actual acceleration) can then be compared to see if there is a difference
caused by other external forces such as a collision. If there was a large
enough difference then we simulated a collision.
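The comparison just described can be sketched as follows. The threshold and
struct are illustrative; the real code works with the physics engine's vector
types:

\begin{lstlisting}
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Actual acceleration is the per-frame velocity difference over the
// frame time; a collision is flagged when it differs from the expected
// gravity by more than a threshold (an illustrative value here).
bool detect_collision(Vec3 prev_vel, Vec3 cur_vel, double dt,
                      Vec3 expected_gravity, double threshold,
                      Vec3& out_force) {
    Vec3 accel = { (cur_vel.x - prev_vel.x) / dt,
                   (cur_vel.y - prev_vel.y) / dt,
                   (cur_vel.z - prev_vel.z) / dt };
    out_force = { accel.x - expected_gravity.x,
                  accel.y - expected_gravity.y,
                  accel.z - expected_gravity.z };
    double mag = std::sqrt(out_force.x * out_force.x +
                           out_force.y * out_force.y +
                           out_force.z * out_force.z);
    return mag > threshold;   // large residual => simulate a collision
}
\end{lstlisting}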

The difference calculated is in real-world coordinates, not relative to the
direction the camera is looking. Therefore, the direction of the force was
rotated so that the simulation is relative to the way the camera is looking.
This also includes adjusting for any pitch on the camera in the third-person
perspective.

Once the final force has been calculated, it is added to an array of forces to be
simulated along with a time value of 0. Each time a movement command needs to be
sent to the pod, the algorithm in figure \ref{PodAlgorithm} works over the
array.

\begin{figure}[ht]
\begin{lstlisting}
    x = y = z = 0;
    for i = 0 to forces.size() do
    begin
        double scale = getScale(forces[i].time_ago);
        if ( scale < 0.05 )
            removeForce(i);
        else
        begin
            x += forces[i].force.x * scale;
            y += forces[i].force.y * scale;
            z += forces[i].force.z * scale;
        end
        forces[i].time_ago += time_since_last_command_sent;
    end
    x *= POD_SENSITIVITY;
    y *= POD_SENSITIVITY;
    z *= POD_SENSITIVITY;
    sendMove(x, y, z);
\end{lstlisting}
\caption{Algorithm for determining the position of the pod}
\label{PodAlgorithm}
\end{figure}

The \verb"getScale()" function takes the time in seconds since the force was
added and returns a scale to multiply the force by to give a position for the
pod. Equation \ref{eq1} gives this scale, where $s$ is the scale, $t$ is the
elapsed time and $d$ is the pod movement decay constant
(\verb"POD_MOVEMENT_DECAY").

\begin{equation}
s = \frac{1}{2} + \frac{\sin\left(\frac{t}{d} + 1\right)}{2}
\label{eq1}
\end{equation}

It can be seen from the graph in Figure \ref{graphscale} that the pod moves
very quickly in the direction of the acceleration being simulated, but that
the return of the pod to the origin is slow and gentle. This means that the
acceleration of the pod is large in the
first section of the simulation but very small afterwards. The player should
therefore only feel the initial large acceleration which is the only
acceleration that would be felt if they were in the orb. This is ideal for our
game where we want the user to experience a sharp movement right at the
beginning to simulate the collision but then nothing afterwards, whilst still
allowing the pod to return to its central position and simulate further
collisions.
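A direct implementation of Equation \ref{eq1} is small; the decay constant
below is an arbitrary illustrative value rather than the one we settled on:

\begin{lstlisting}
#include <cassert>
#include <cmath>

// d in Equation eq1; illustrative value, not the tuned constant.
constexpr double POD_MOVEMENT_DECAY = 0.5;

// s = 1/2 + sin(t/d + 1)/2, as in Equation eq1.
double getScale(double time_ago) {
    return 0.5 + std::sin(time_ago / POD_MOVEMENT_DECAY + 1.0) / 2.0;
}
\end{lstlisting}

Forces whose scale has dropped below 0.05 are removed from the array, as shown
in Figure \ref{PodAlgorithm}.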

\begin{figure}[h]
    %\begin{minipage}[t]{.3\textwidth}
        \begin{center}
            \includegraphics[scale = 0.5]{./images/pod_movement_decay.png}
        \end{center}
    %\end{minipage}
    %\hfill
    %\begin{minipage}[t]{.3\textwidth}
        %\begin{center}  
            %%\includegraphics[scale = ??]{./first_derivative.png}
        %\end{center}
    %\end{minipage}
    %\hfill
    %\begin{minipage}[t]{.3\textwidth}
        %\begin{center}  
            %%\includegraphics[scale = ??]{./second_derivative.png}
        %\end{center}
    %\end{minipage}
    \caption{Graph showing scale against time\_ago}
\label{graphscale}
\end{figure}

We have again implemented this method with constants that we can alter to change
the simulation. For example, increasing the \verb"POD_MOVEMENT_DECAY" variable
would increase the amount of time that the pod takes to return to the centre.
Doing this would decrease the amount that the player could feel the pod moving
after the simulation. However, increasing it too much is not desirable because
it also increases the amount of time before the pod returns to its origin. This
could mean that another collision in a similar direction occurring soon after
the current one would not be simulated properly.

\subsubsection{Second Method}

Having successfully implemented the first method, we discovered that there was
a lag of around 0.2 to 0.3 seconds between the pod receiving a move command and
actually performing the movement. This meant that the collision simulation did
not feel realistic, and there was no way this problem could be overcome with
the initial method. Therefore, we developed a second method of collision
simulation based on predicting what the orb was going to collide with.

This new method involved shooting rays from the orb's current position in the
direction that it was travelling and testing for objects that would be hit
within the delay time of the pod. The OgreNewt library provides a class for
casting rays, and the results it returns contain the body that was hit as well
as that body's normal.

The direction of the acceleration on the orb as a result of a collision is in
the direction of the normal and so by using ray casting we could quite easily
get the direction needed for the simulation of a collision. The strength of the
acceleration is based on the orb's velocity as well as the angle between the
orb's direction and the contact normal. An angle of close to $180^\circ$ (a head-on
collision) gives a large acceleration, whereas an angle close to $90^\circ$ (a glancing
hit) gives a smaller acceleration. Angles less than $90^\circ$ cannot be achieved
because this would imply that the orb is travelling away from the contacted
surface. This acceleration has to again be rotated so it is relative to the
player and the resulting position of the pod is scaled over time in the same way
as previously described.
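This angle rule can be sketched with a dot product: for valid hits the cosine
of the angle between the (unit) travel direction and the contact normal lies
in $[-1, 0]$, so its negation weights a head-on collision fully and a glancing
one not at all. The linear-in-cosine weighting here is our illustrative
choice, not necessarily the exact curve used in the game:

\begin{lstlisting}
#include <cassert>
#include <cmath>
#include <algorithm>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Strength of the simulated collision: full speed for a head-on hit
// (direction opposite the normal), nothing for a glancing hit.
double collision_strength(Vec3 unit_direction, Vec3 unit_normal,
                          double speed) {
    double c = dot(unit_direction, unit_normal); // in [-1, 0] for hits
    return speed * std::max(0.0, -c);
}
\end{lstlisting}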

Although this new method doesn't always correctly predict the collisions, we
feel that it provides a very realistic simulation in almost all cases. When the
orb is moving quickly it cannot deviate from the ray's direction very much and
so this method works well in these cases. When the orb is moving slowly it can
deviate more. However, the resulting acceleration is very small and so the pod
movements are also very small. Therefore, in the event of an incorrect collision
prediction, it is unlikely that the movement will be large and hence felt, and so
this new method is generally much better than the first method that we used.

To improve the prediction, we also cast rays in directions slightly offset
from the orb's direction of travel. In all we cast five rays, one in the
exact direction of the orb, one slightly up, one slightly down, one slightly
left and one slightly right. This allows us to predict collisions even if the
orb changes direction slightly without affecting the overall performance of the
game too much. The final collision that we simulate, if any, is the one that is
closest out of all five rays.

To prevent multiple simulations of the same collision, we only add a simulation
if a set amount of time has elapsed since the last simulation. It is not
possible to do this by testing if two physics bodies are the same because all
the static geometry in the level is grouped under one instance of a body.

If there is no collision, we also fire another ray in a slightly more downwards
direction to check if the orb will be in contact with the floor. If it is not
then we simulate a falling orb by moving the pod downwards. Because it is hard
to tell how far the orb will fall and how far to move the pod, each downward
movement that we send is very small. While the orb is not predicted to be in
contact with the floor, several downward movements are added, at a slower rate
as time progresses, so that cumulatively they produce a large movement. If the
orb is predicted to come back into contact with the floor soon, then downward
movements are no longer sent.

For example, when the orb hits a wall it is momentarily airborne. A few
downward movements would be sent, but not many, which provides a good
simulation of this action. When the orb falls off the level, a great many
downward movements are sent.

\section{Camera Tracking} % Tom

As part of our research into how to make our game stand out from those already
available, we noticed that in several games the direction the camera looked
appeared to be rigidly fixed to the direction the orb was moving. This is
something we really wanted to avoid because, after bouncing off a wall for
example and suddenly changing direction, the player can become disorientated
and lost within the level.

On each OGRE frame update, we adjusted the direction of the camera. This was
done by calculating the amount of yaw needed to get from the camera's direction
to the orb's direction while maintaining a yaw speed. If the camera could not
achieve the yaw needed at the current yaw speed quickly enough, then the yaw
speed was increased by a predefined constant, otherwise it was decreased by the
same amount.

Overall, this meant that the camera could not change direction quickly but could
still follow the orb. We felt that this was a much needed feature in our game
after playing similar ones.

\section{Moving Objects} % Kenny

A key part of the plans we had for our levels relied on us being able to have
computer-controlled moving objects in our game. The main requirements for these
objects were as follows:

\bit

\item To move along a pre-determined path, with variable speed.

\item To be defined when creating a level from within Maya, without the use of
additional scripting tools.

\item To have associated physics bodies, so that collisions with the orb would
be possible.

\item To have the option of their movement being triggered when the orb hits
them, allowing us to create moving platforms for our levels.

\eit

As the objects had to have a component in the physics world as well as a
graphical OGRE component, we could not use OGRE's Animation framework to move
the objects. This was because OGRE's Animation framework would only move the
graphical part of the object, and the physics representation would be left
behind. However, if we created our own system that moved the physics body,
OgreNewt would automatically update the position of the graphical mesh in the
OGRE world.

Ultimately this meant that we would not be able to define the movement path of
the object using Maya's key frame animation, because we could only export that
into an OGRE animation, which was unsuitable for our needs. Instead we decided
to simply define the nodes of the movement path in Maya and then use the
position of these nodes to dictate the movement. To associate these nodes with
the moving object, we parented them to the object in Maya. Then in our code we
could simply loop over all the child nodes of the moving object, and use their
positions and IDs as the parameters for our movement path.
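In outline, the path-building loop looks like the following. We use plain
structs here as a stand-in for the OGRE scene nodes the real code walks, and
the name-parsing convention is illustrative:

\begin{lstlisting}
#include <cassert>
#include <algorithm>
#include <string>
#include <vector>

// Illustrative stand-in for a scene node exported from Maya.
struct Node {
    std::string name;            // e.g. "path3" -- ID encoded in the name
    double x, y, z;              // position
    std::vector<Node*> children;
};

struct Waypoint { int id; double x, y, z; };

// Collect a moving object's child nodes into an ordered movement path,
// using each child's numeric suffix as its position in the path.
std::vector<Waypoint> build_path(const Node& moving_object) {
    std::vector<Waypoint> path;
    for (const Node* child : moving_object.children) {
        // Parse the trailing number out of names like "path12" (the
        // real naming convention may differ).
        size_t i = child->name.find_first_of("0123456789");
        int id = (i == std::string::npos)
                     ? 0 : std::stoi(child->name.substr(i));
        path.push_back({id, child->x, child->y, child->z});
    }
    std::sort(path.begin(), path.end(),
              [](const Waypoint& a, const Waypoint& b) { return a.id < b.id; });
    return path;
}
\end{lstlisting}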

\subsection{The First Attempt}

After we had worked out how to define the moving objects in Maya, we could start
work on implementing the movement. We decided to create a \verb'MovingObject'
class that would encapsulate all this code. Its main functions include

\bit

\item Storing the movement path and the current state of the moving object.

\item Providing a force callback function, allowing us to set the forces that
would make the platform move.

\item Providing a contact callback function so that movement can be triggered
by the orb.

\eit

In order to move the object the physics engine allowed us to simply set the 
velocity of the object whenever it called our force callback function. The 
algorithm we used to move the object is summarised in Figure
\ref{moving_object_code}.

\begin{figure}[ht]
\begin{lstlisting}
    START:
    while (position != target_position) do
    begin
        direction = normalise(target_position - position)
        setVelocity(direction * PLATFORM_SPEED)
    end
    if (target_delay > 0)
        wait for target_delay seconds
    if (target.wait_for_orb)
        wait until orb hits platform
    advance to the next target
    goto START
\end{lstlisting}
\caption{Pseudo-code for movement}
\label{moving_object_code}
\end{figure}

This technique worked successfully for the majority of non-interactive moving
objects, but when we made a moving platform to carry the orb, it had serious
problems. As the orb had a mass and a gravitational force acting on it, this
caused the moving platform to twist and fall when the orb was on top of it.


% Maybe a picture here to demonstrate

\subsection{Improved Method}

In order to tackle this problem we looked into one of the more advanced features
of our physics engine: joints. In the context of a physics engine, joints allow
constraints to be put upon objects to restrict their movement. To solve our
problem we used two different types of joint:

\bit

\item An up vector joint to create a fixed up vector on our object meaning that 
it could not rotate around the x-axis or z-axis.

\item A slider joint to fix the possible directions of movement to a single
direction vector.

\eit

Every time the direction of movement changed we would have to destroy and
recreate the slider joint, so that the previous constraint would not remain.
During the research on joints we also discovered that generally it was very bad
practice to set the velocity of an object from within a physics engine, as it
gave very unrealistic results. Now that we were using a slider joint we could
instead use the joint callback function to set acceleration along the joint.
This had the following advantages:

\bit

\item Realistic acceleration and deceleration of the object.

\item Other external forces were cancelled out: we were now explicitly setting
the acceleration on every joint callback, so any other forces were overridden.

\item Simplified code, mainly because we were dealing with the object's
position as a scalar value of progress along the slider joint's axis, rather
than as a 3-dimensional vector in global space.

\eit
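The scalar-progress simplification amounts to one line of vector arithmetic:
with a fixed unit axis, the platform's position reduces to a signed distance
along that axis. A sketch, with illustrative names:

\begin{lstlisting}
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Progress of a platform along its slider joint's axis: project the
// offset from the segment start onto the (unit) axis direction.
double progress_along_axis(Vec3 position, Vec3 segment_start,
                           Vec3 unit_axis) {
    Vec3 d = { position.x - segment_start.x,
               position.y - segment_start.y,
               position.z - segment_start.z };
    return d.x * unit_axis.x + d.y * unit_axis.y + d.z * unit_axis.z;
}
\end{lstlisting}

The joint callback can then accelerate the platform based on this single
scalar, rather than on a position in global space.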

\section{Static Geometry and Level Detail} % Kenny and Jamie

When we started making larger, more complex levels the frame rate of our game
started to drop. Our first thought was that we needed to reduce the number of
polygons in our levels, but as we were only modelling simple geometric levels
the polygon count could not really be reduced much further. After reading up on
the issue we found out that modern graphics cards are optimised to render large
batches of triangles in a single operation. For example, 1,000,000 triangles
might render at 300 frames per second in a single batch, whereas 1,000 batches
of 1,000 triangles each (which is still 1,000,000 triangles) might render at 30
frames per second, if the batching is organized poorly.

After we discovered this we went about researching how OGRE could help us batch
our polygons together more efficiently. As most of the geometry in our levels
was static it seemed that we should be taking advantage of OGRE's static
geometry feature. In OGRE, a \verb_StaticGeometry_ object can group together
many separate meshes into a single object. The advantage of this is that all
the polygons in this object that use the same material will be batched together
and rendered in a single operation by the GPU.

So, we went about changing our \verb"DotSceneLoader" class to add all the static
objects in the level to one \verb_StaticGeometry_ object. After doing this we
found that the average frame rate on the smaller levels increased by
approximately 20\%. However, when we tested the larger levels we found that
the already poor frame rate had dropped even further.

To understand what was happening we researched further about the properties of
static geometry. We discovered that it was not a sure-fire way to increase the
frame rate of an OGRE application, and that a number of caveats existed when
using the feature. The most relevant was that typically a static geometry object
will render all of the objects in the group if any part of it is visible. This
meant that every single mesh would always be rendered in our levels, even if 
only a small part of it was actually visible on screen.

We were sure that this was why large levels were having performance issues, so
we worked towards a solution. The solution we came up with was to divide the
level into regions of a fixed size, so that all the objects in a region would
be built into their own \verb_StaticGeometry_ object. The whole process is
summarised in Figure \ref{static_geom}.

\begin{figure}[ht]
\begin{lstlisting}
    function create static_geoms:

    y_factor = world_size.z / STATIC_REGION_SIZE + 1;
    x_factor = y_factor * (world_size.y / STATIC_REGION_SIZE + 1);
    for each node in static_scene_nodes do
    begin
      pos = node.position - world_min
      index = pos.x / STATIC_REGION_SIZE * x_factor
              + pos.y / STATIC_REGION_SIZE * y_factor
              + pos.z / STATIC_REGION_SIZE
      static_geoms[index].add(node)
    end
\end{lstlisting}
\caption{Static geometry construction}
\label{static_geom}
\end{figure}

This meant that when a region of a level was not visible, OGRE would not need
to render it. After some tweaking of the \verb'STATIC_REGION_SIZE' constant, we
found that frame rates for the larger levels experienced a 20 to 100\% increase,
depending on how much of the level was in view. 
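A runnable version of the index computation from Figure \ref{static_geom} is
given below, with an illustrative region size:

\begin{lstlisting}
#include <cassert>

constexpr double STATIC_REGION_SIZE = 100.0; // illustrative size

struct Vec3 { double x, y, z; };

// Assign a static node to the fixed-size region containing its
// position, flattening the 3D region grid into a single array index.
int region_index(Vec3 position, Vec3 world_min, Vec3 world_size) {
    int y_factor = static_cast<int>(world_size.z / STATIC_REGION_SIZE) + 1;
    int x_factor =
        y_factor * (static_cast<int>(world_size.y / STATIC_REGION_SIZE) + 1);
    Vec3 pos = { position.x - world_min.x,
                 position.y - world_min.y,
                 position.z - world_min.z };
    return static_cast<int>(pos.x / STATIC_REGION_SIZE) * x_factor
         + static_cast<int>(pos.y / STATIC_REGION_SIZE) * y_factor
         + static_cast<int>(pos.z / STATIC_REGION_SIZE);
}
\end{lstlisting}

Nodes that map to the same index end up in the same
\verb_StaticGeometry_ object, so only visible regions need to be rendered.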

\section{Replays and Ghosts} % Kenny and Tom

After we had most of the core functionality of the game completed, we thought
it would be nice to add a replay feature to the game. When discussing this idea
we also decided it would be even better to extend this to allow the player to
compete against a replay of a successful run. When we started thinking about
the implications of these extended features, we soon realised a common problem:
a large proportion of our source code was very inflexible. The problems
included:

\bit

\item No abstraction of input devices, which meant adding a replay playback
device would make the source code even more long-winded.

\item The calculation of the direction of the force on the orb and the
direction of the camera were very interlinked, and having an orb without a
camera would cause problems.

\item No encapsulation of the dynamic parts of the level, which meant creating
ghost copies of all these objects would be a complicated process.

\item No base \verb"PlayState" class meant that implementing different play modes
would be cumbersome.

\eit

\subsection{Refactoring of Code}

Before commencing any of the replay and ghost implementation, we decided that a
lot of preparation was needed. So far in the game's development we had only
considered there being a single player, and this assumption was very apparent
in a lot of places. Therefore, a large amount of the project was refactored to
allow for multiple players, multiple ways of controlling players and multiple
ways of controlling the gameplay. Upon completion, we could then control
players using some sort of recording, allow replays to be watched and allow a
user to play against a ghost.

We identified three main elements to work on to allow the above functionality:

\bit

\item Create an interface for an input device.

\item Create a base class for the \verb"PlayStates" that would provide basic playing functionality.

\item Create a new \verb"Player" class that would handle everything to do with an individual player.

\eit

\subsubsection{Input Device Interface}

This provides a very generic framework that is implemented by any input device
(Figure \ref{inputdevices}). There is little common functionality between
physical input devices, such as a mouse or joystick, and the way we decided to
record replays. However, we did provide a partial implementation of
\verb"InputDevice", called \verb"UserControlledInputDevice", which translates
any physical input device into pitch and roll values and calculates a force to
set on the orb, because a large amount of this work is generic to any such
device.

Finally, we also had to create wrapper classes for the OIS input devices so that
they could be used within this framework. We also added some extra functionality
into these, such as controlling the menu via a joystick.
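
As a rough illustration, the interface might be sketched as follows (the exact
method names here are hypothetical and may differ from our code):

\begin{lstlisting}
    // Sketch of the input device interface
    interface InputDevice
    begin
      capture()              // poll the underlying device or recording
      get_force() : Vector3  // force to set on the orb this frame
    end

    // Partial implementation shared by all physical devices
    class UserControlledInputDevice implements InputDevice
    begin
      capture():
        pitch, roll = read_device_axes()
      get_force():
        return force_from_pitch_and_roll(pitch, roll)
    end
\end{lstlisting}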

\subsection{Replays}

There were two ways we could record replays: either record the position of the
orb at all times, along with the positions of moving objects and every other
interactive element of the game, or record only the forces that were set on the
orb. We chose the second option. The first could become too large and
complicated when a level contains many interactive objects, whereas the second
is also able to play back all elements controlled by physics callbacks, such as
teleport animations and the particle effects shown when collecting score
pick-ups.

The main issue with recording replays and playing them back was controlling the
update rate of the physics engine. OgreNewt provides an OGRE
\verb"FrameListener" implementation to control the rate at which the physics
world is updated, which we used in the initial stages of our project. However,
after a lot of consideration we realised that this was at the core of most of
our problems.

The first way we recorded replays was to record the force on the orb along with
the time that the force was first applied. This meant that if the force was
constant throughout the level then the replay file size would be very small.
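
Under this scheme a recording is simply a list of (time, force) pairs, appended
to only when the force changes; the recording step is roughly (a sketch, with
hypothetical names):

\begin{lstlisting}
    on_orb_force_set(force):
      if force != last_recorded_force then
      begin
        replay.append(current_time, force)
        last_recorded_force = force
      end
\end{lstlisting}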

In a replay, we update the force on the orb based on the frame time obtained
from an OGRE \verb"FrameEvent". However, it soon became apparent that this would not
work because OGRE and OgreNewt do not update at exactly the same times between
the recording of the replay and the playback of it, causing some OgreNewt
updates to have different forces on the orb than they would have done in the
initial recording. This caused the orb to deviate from its original path, the
problem becoming worse as time went on because of earlier mistakes.

Our first solution was to base the updates of the force exerted on the orb on
OgreNewt instead of OGRE, and to use a frame count instead of a time. We
believed this would work because OgreNewt updates at a constant rate, so we
could ensure that at each frame the force on the orb was the same in the replay
as it was in the original recording. However, this did not completely fix the
problem. After reading through the OgreNewt implementation of OGRE's
\verb"FrameListener", we realised that if OgreNewt falls more than ten physics
frames behind the desired update rate, the world is instead updated once with
the whole duration that has elapsed since the last update. We could not
guarantee when this would happen, nor the duration with which the physics world
would be updated.

To overcome this new problem, we effectively reimplemented OgreNewt's
\verb"FrameListener" within our own \verb"PlayState". We now had to manually
control when the physics world was updated, but could guarantee that the
updates would be the same between the recording of a replay and its playback.
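
The manual control amounts to a standard fixed-timestep loop; roughly (a
sketch, with hypothetical names):

\begin{lstlisting}
    // Called once per OGRE frame from our PlayState
    accumulator = accumulator + frame_time
    while accumulator >= PHYSICS_TIMESTEP do
    begin
      orb.set_force(replay.force_at(frame_count))
      world.update(PHYSICS_TIMESTEP)   // always a fixed step
      frame_count = frame_count + 1
      accumulator = accumulator - PHYSICS_TIMESTEP
    end
\end{lstlisting}

Because the world is only ever stepped by the fixed \verb"PHYSICS_TIMESTEP",
the force applied at a given frame number is identical between recording and
playback.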

Our next hurdle was that the replays did not work with anything that could move
or that had special interaction with the orb, such as teleports and moving
objects. After much research we could not come up with a solution, and accepted
that although replays were not exact, there were more important things to do.
Recently, though, we found that OGRE cannot guarantee the order in which its
registered \verb"FrameListeners" are called, and most of our specially
controlled objects were based around these. We have tried to rectify the
situation by having only one \verb"FrameListener" registered at any point in
time and by manually calling any others from the registered one. We have not
had time to fully test this solution.

\subsection{Ghost Mode}

The main issue with having a ghost player to play against was that it must not
interfere with the main player in any way. This raised a number of issues:

\bit

\item Any object in the level that could be moved by the orb would need to be
cloned so that the ghost had its own set of these objects.

\item Collisions between objects that belonged to different players should be
ignored by the physics engine.

\eit 

In order to clone all the player-specific objects, each object was given a
clone method. This method copies all the information associated with the object
and also creates copies of any graphical meshes and physics bodies, where
appropriate.
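
A typical clone method follows the same pattern for every object type (a
sketch; the actual member names vary):

\begin{lstlisting}
    clone():
      copy = shallow_copy(this)
      copy.mesh = clone_mesh(this.mesh)   // new OGRE entity
      copy.body = clone_body(this.body)   // new physics body
      copy.body.type = this.body.type + BT_COUNT
      return copy
\end{lstlisting}

Offsetting the cloned body's type by \verb"BT_COUNT" gives the ghost's bodies
their own range of IDs, which the collision filtering described below relies
on.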

In order to filter out inappropriate collisions, we decided to give the physics
bodies of each player unique IDs. To do this we simply added a base value to
each body ID. For example, with five different types of body
(\verb"BT_COUNT" = 5), the main player's bodies would have IDs 0-4, while the
ghost player's would use 5-9. Then in every collision callback we could filter
out invalid collisions as shown in Figure \ref{collision_callback}.

\begin{figure}[ht]
\begin{lstlisting}
   // bodies divide into players by their ID range
   if (body_1.type / BT_COUNT != body_2.type / BT_COUNT)
      return false
      
\end{lstlisting}
\caption{Collision callback filter}
\label{collision_callback}
\end{figure}

The final issue was making all the objects associated with a ghost appear
semi-transparent, so that it was clear which objects the main player could
interact with. This was done as a sub-process of cloning the OGRE meshes, and
consisted of making a copy of the material each mesh used. The copied material
was given a lower alpha value, causing it to appear ghost-like. By copying the
material before changing it, the main player's meshes remained unaffected.
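
The transparency step itself is only a few lines (sketched here with
hypothetical names; \verb"GHOST_ALPHA" is an illustrative constant):

\begin{lstlisting}
    ghost_material = material.clone(material.name + "_ghost")
    ghost_material.enable_alpha_blending()
    ghost_material.set_alpha(GHOST_ALPHA)   // e.g. 0.4
    ghost_mesh.set_material(ghost_material)
\end{lstlisting}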

